mercredi 28 janvier 2015

REAL APPLICATION CLUSTER : spfile


  • Parameters That Must Have Unique Settings on All Instances

# INSTANCE_NUMBER
You must set a unique value for INSTANCE_NUMBER for each instance; you cannot use a default value.
# UNDO_TABLESPACE
When you specify UNDO_TABLESPACE with automatic undo management enabled,
set this parameter to a unique undo tablespace name for each instance.

To simplify administration, use the same number for both the THREAD and INSTANCE_NUMBER parameters
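For example, on a two-node cluster the unique parameters can be set per instance with sid-qualified entries in the shared spfile (instance names MADB1/MADB2 and tablespace names are illustrative):

```sql
alter system set instance_number = 1 scope=spfile sid='MADB1';
alter system set instance_number = 2 scope=spfile sid='MADB2';
alter system set thread = 1 scope=spfile sid='MADB1';
alter system set thread = 2 scope=spfile sid='MADB2';
alter system set undo_tablespace = 'UNDOTBS1' scope=spfile sid='MADB1';
alter system set undo_tablespace = 'UNDOTBS2' scope=spfile sid='MADB2';
```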


  • Parameters that Must Have Identical Settings on All Instances

# ACTIVE_INSTANCE_COUNT
# ARCHIVE_LAG_TARGET
# CLUSTER_DATABASE
# CLUSTER_DATABASE_INSTANCES
# CONTROL_FILES
# DB_BLOCK_SIZE
# DB_DOMAIN
# DB_FILES
# DB_NAME
# DB_RECOVERY_FILE_DEST
# DB_RECOVERY_FILE_DEST_SIZE
# DB_UNIQUE_NAME
# UNDO_MANAGEMENT

  • Parameters that Should Have Identical Settings on All Instances
# LOG_ARCHIVE_FORMAT
# SPFILE
# TRACE_ENABLED
# UNDO_RETENTION

  • Specific parameter for RAC
# CLUSTER_DATABASE
Enables a database to be started in cluster mode. Set this parameter to TRUE.
# CLUSTER_DATABASE_INSTANCES
Sets the number of instances in your Oracle RAC environment.
# CLUSTER_INTERCONNECTS
Specifies the cluster interconnect when there is more than one interconnect


alter system set SGA_TARGET = 2G scope=memory sid='MADB1';
alter system set SGA_TARGET = 2G scope=memory sid='*';
alter system reset open_cursors scope=spfile sid='prod1';

REAL APPLICATION CLUSTER : INSIDE




  • Global Resource Directory (GRD)
RAC uses the Cache Fusion architecture to simulate a global SGA across all cluster nodes.
Access to blocks in the buffer cache needs to be coordinated for read consistency and write access, and enqueues to shared resources are now global across the cluster.

These two main concepts are implemented in the Global Cache Service (GCS), for accessing the common buffer cache; and the Global Enqueue Service (GES), for managing enqueues in the cluster.

Both GCS and GES work transparently to applications. The metadata structure used internally is the previously mentioned Global Resource Directory (GRD),
which is maintained by the GCS and GES processes. The GRD is distributed across all nodes in the cluster and is part of the SGA,
which is why the SGA of a RAC database is larger than a single instance's equivalent. 

The GRD is like an in-memory database containing details about all the blocks present in the caches. It is stored in the SGA. Typically the GRD contains the following information, among others: 
# Data Block Address: the address of the data block being modified
# Location of the most current version of the data block
# Mode of the data block
# Role of the data block
# SCN of the data block
# Image of the data block: could be a current image or a past image





    • GLOBAL CACHE SERVICE (GCS)
Global Cache Service (GCS) is the main component of Oracle Cache Fusion technology.
It is implemented by the LMSn background processes; an instance can have at most 10 LMS processes.
These are the main functions of GCS:
# Track the status and location of data blocks (the status of a data block means its mode and role)
# Handle block transfers between instances
# Act as the main mechanism by which cache coherency among the multiple caches is maintained
# GCS used to be called the Distributed Lock Manager (DLM) in previous versions of the database


    • Global Enqueue Service GES


Global Enqueue Service (GES) tracks the status of all Oracle enqueuing mechanisms.
This covers all non-Cache Fusion inter-instance operations. GES performs concurrency control on dictionary cache locks,
library cache locks and transactions. It performs this operation for resources that are accessed by more than one instance.
Enqueue services are also present in a single-instance database,
where they are responsible for locking rows in a table using different locking modes.

LMON : Global Enqueue Service Monitor
LMD : Global Enqueue Service Daemon
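The lock-related background processes (LMSn, LMON, LMD) of a running instance can be listed from the standard V$BGPROCESS view; a sketch:

```sql
select name, description
from v$bgprocess
where paddr <> '00'      -- only processes that are actually running
and name like 'LM%';
```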

      • Buffer states in RAC


A block is the smallest addressable unit of a database and it resides on disk. There is only one copy of a block in the database.
When a user requests a row in the block, the server process loads the entire block from disk into the buffer cache and returns it to the user.
In the buffer cache, the buffer can be in various states, depending on how it was retrieved. When a user requests a buffer, it can be retrieved in one of two modes:

# Consistent Read (CR) mode: the buffer is requested simply to be read, not updated
# Current mode: the buffer is requested with the intention of modifying it, even if it is not actually updated, e.g. in the case of SELECT FOR UPDATE

In RAC a buffer can also be in a third state:
# Past Image (PI): the block has been modified (and therefore is not CR) but then modified again in another instance (and therefore is no longer the exclusive current copy)
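These states can be observed cluster-wide in GV$BH, whose STATUS column shows values such as xcur, scur, cr and pi. A sketch (the SCOTT.EMP object is illustrative):

```sql
select inst_id, status, count(*)
from gv$bh
where objd = (select data_object_id from dba_objects
              where owner = 'SCOTT' and object_name = 'EMP')
group by inst_id, status;
```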

      • Lock Queuing RAC
Each block in a RAC instance has two queues:

# Grant Queue - the queue where requesters wait for locks to be granted in a certain mode
# Convert Queue - the queue where granted requests wait for the requesters to be notified

When a process requests a lock on a buffer to be downgraded or upgraded, it must put a request in the grant queue.
When the request is filled, the process is moved to the convert queue to let it know that its request has been satisfied.

The queues for a particular block are kept in only one instance, known as the Master Instance of the block. A block has only one master instance.
When any instance requests a lock on a specific buffer, it contacts the LMS process of the master instance via its own LMS process.

The Global Resource Directory (GRD) is distributed across all instances and holds the information on
which instance is the master of a specific block; not which instance holds the lock.

-- Finding the master node of a block (replace DataObjectId with the segment's DATA_OBJECT_ID)
select b.dbablk, r.kjblmaster master_node
from x$le l, x$kjbl r, x$bh b
where b.obj = DataObjectId
and b.le_addr = l.le_addr
and l.le_kjbl = r.kjbllockp;

mardi 13 janvier 2015

AWR features

MMON transfers cumulative statistics from memory to repository tables on disk; these are used by ADDM.
AWR repository: a set of workload tables (WR* tables in the SYSAUX tablespace) where performance data can be stored.

  • AWR : Snapshot

An AWR snapshot pulls information from fixed tables in memory: not resource-intensive.
Each snapshot is identified by a unique snapshot ID, unique within the repository.
RAC : all nodes share the same snapshot ID.
 


SQL commands
  
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot; -- Creating an AWR snapshot

-- Modifying snapshot settings
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200, -- Minutes (= 30 days). Current value retained if NULL.
    interval  => 30);   -- Minutes. Current value retained if NULL.
END;
/

-- Dropping a snapshot range
BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
    low_snap_id  => 22,
    high_snap_id => 32);
END;
/

-- Retrieving snapshot IDs from dates
SELECT *
FROM dba_hist_snapshot
WHERE begin_interval_time > TO_DATE ('13/10/2010', 'dd/mm/yyyy')
AND end_interval_time < TO_DATE ('14/10/2010', 'dd/mm/yyyy');

PARAMETERS : STATISTICS_LEVEL
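AWR snapshots are only captured automatically when STATISTICS_LEVEL is TYPICAL or ALL (BASIC disables them); to check and set it:

```sql
show parameter statistics_level
alter system set statistics_level = TYPICAL scope=both;
```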

  • Transporting AWR

Oracle Database enables you to transport AWR data between systems. This is useful in cases where you want to use a separate system to perform analysis of the AWR data.
To transport AWR data, you must first extract the AWR snapshot data from the database on the source system,
then load the data into the database on the target system.

1. ORACLE_HOME/rdbms/admin/awrextr.sql -- Extracting AWR data
2. ORACLE_HOME/rdbms/admin/awrload.sql -- Loading AWR data

vendredi 9 janvier 2015

Using transportable tablespaces

  • Limitations
    • same character set and national character set on source and target
    • tablespace must not already exist in the target database
    • out of scope: the SYSTEM tablespace and SYS-owned objects

  • Steps

            Step 1:  Determine if Platforms are Supported and Endianness
                    select d.platform_name, tp.endian_format from v$transportable_platform tp, v$database d where tp.platform_name = d.platform_name;
                    If the endian formats differ, convert the tablespace (cf. RMAN page for conversion)


            Step 2: Pick a Self-Contained Set of Tablespaces
                execute dbms_tts.transport_set_check('sales_1,sales_2', true); -- EXECUTE_CATALOG_ROLE is required
                select * from transport_set_violations; 

            Step 3: Generate a Transportable Tablespace Set
                ALTER TABLESPACE sales_1 READ ONLY;
                 ALTER TABLESPACE sales_2 READ ONLY;

                 EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir                     TRANSPORT_TABLESPACES = sales_1,sales_2
 or
                 EXPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=sales_1,sales_2 TRANSPORT_FULL_CHECK=Y # fails if objects in the set depend on objects outside it

                 Step 4: Transport the Tablespace Set
                     host copy datafiles and dump files
                 Step 5: Plug In the Tablespace Set
                     IMPDP system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/salesdb/sales_101.dbf','/salesdb/sales_201.dbf' REMAP_SCHEMA=(dcranney:smith) REMAP_SCHEMA=(jfee:williams)
                     ALTER TABLESPACE sales_1 READ WRITE;
                     ALTER TABLESPACE sales_2 READ WRITE;
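If Step 1 showed a different endianness, the datafiles must be converted before transport; a minimal RMAN sketch run on the source (platform name and output format are illustrative):

```
RMAN> CONVERT TABLESPACE sales_1, sales_2
      TO PLATFORM 'Linux IA (32-bit)'
      FORMAT '/tmp/transport/%U';
```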

Various types of exp/imp expdp/impdp invocations

Use sysdba to exp/imp

  • imp '"sys/password as sysdba"' file=file.dmp ignore=yes full=y commit=y
  • exp '"sys/password as sysdba"' file=file.dmp  full=y log=file.lst

Export/import over the network through a named pipe (no intermediate dump file)

  • export FIC=testimp.$1.$$
  • export FIC_PIPE=$FIC.pipe
  • export FIC_LOG=$FIC.log
  • export FIC_LOG_EXP=$FIC.log
  • /usr/sbin/mknod  $FIC_PIPE p
  • imp system/xxx@tttt fromuser=$1 touser=$1 log=$FIC_LOG file=$FIC_PIPE &
  • exp system/xxxxx file=$FIC_PIPE log=${FIC_LOG_EXP} owner=$1 
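The named pipe lets imp consume the dump while exp produces it, so no full dump file ever lands on disk. The same producer/consumer pattern in generic shell (mkfifo used instead of mknod; file names are illustrative):

```shell
#!/bin/sh
PIPE=/tmp/demo.$$.pipe
GZ=/tmp/demo.$$.gz
mkfifo "$PIPE"                         # create the FIFO (equivalent to: mknod "$PIPE" p)
gzip -c < "$PIPE" > "$GZ" &            # consumer reads the pipe in the background
echo "hello through a pipe" > "$PIPE"  # producer writes into the pipe
wait                                   # wait for the background consumer to finish
gzip -dc "$GZ"                         # prints: hello through a pipe
rm -f "$PIPE" "$GZ"
```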


DataPump PL/SQL sample


declare
  h1   NUMBER;
begin
     h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'IMPORT004306', version => 'COMPATIBLE'); 
    dbms_datapump.set_parallel(handle => h1, degree => 3); 
    dbms_datapump.add_file(handle => h1, filename => 'imp_infocom.LOG', directory => 'MYN_DIR', filetype => 3); 
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0); 
    dbms_datapump.add_file(handle => h1, filename => 'export_%U.dmp', directory => 'MYN_DIR', filetype => 1); 
    dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'USER1', value => 'USER2'); 
    dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_TABLESPACE', old_value => 'TBS', value => 'TBS2'); 
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''USER1'')'); 
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC'); 
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1); 
    dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0); 
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0); 
    dbms_datapump.detach(handle => h1); 
end;
/
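While a job like the one above is running, it can be monitored from another session via the standard dictionary view:

```sql
select owner_name, job_name, operation, job_mode, state, degree
from dba_datapump_jobs;
```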

Debugging a hung database

sqlplus -prelim
connect / as sysdba # at this point you can use the oradebug utilities to investigate
  1. SQL> oradebug hanganalyze 3
  2. Wait at least 2 minutes to give time to identify process state changes.
  3. SQL> oradebug hanganalyze 3
  4. Open a separate SQL session and immediately generate a system state dump:
  5. SQL> alter session set events 'immediate trace name SYSTEMSTATE level 10';
 
 
# sqlplus /nolog

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Sep 26 04:32:18 2012

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

SQL> set _prelim on
SQL> conn / as sysdba
Prelim connection established
SQL>  oradebug setmypid
Statement processed.
SQL>  oradebug unlimit
Statement processed.
SQL>  oradebug call ksmcpg
Function returned 10525E
SQL>  oradebug dump hanganalyze 3
Statement processed.
SQL> oradebug dump systemstate 266
Statement processed.
SQL>  oradebug dump hanganalyze 3
Statement processed.
SQL>  oradebug dump systemstate 266
Statement processed.
SQL>  oradebug tracefile_name /u1/tot1_ora_22439.trc
SQL> exit