Thursday, May 21, 2015

SQL Tuning : TABLE FULL SCAN / WITH INDEXES


  • Full-scan I/O is cheaper than index I/O
95% of disk latency is the time required for the read/write head to move to the proper cylinder, so sequential multiblock reads cost far less per block than scattered single-block reads


  • Blevel
    • number of levels in a B-tree index; commonly 3, but can be 4 in a data warehouse
    • column on dba_indexes
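A quick way to check blevel (a sketch; the schema name is illustrative):

```sql
-- Deeper B-trees cost more logical reads per index probe
select index_name, blevel, leaf_blocks
from   dba_indexes
where  owner = 'SCOTT'
order  by blevel desc;
```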

Sunday, May 10, 2015

SQL Tuning : Dynamic sampling


  • Optimizer_dynamic_sampling parameter
Default value 2, range 0 to 10

  • Goal
Create more accurate selectivity and cardinality estimates
  • Level
    • 1 : samples tables that appear in join or subquery conditions and have no indexes
    • 2 : samples all unanalyzed tables that have more than 32 blocks
    • 3 : also samples tables for which standard selectivity estimation may be a guess (e.g. a column used inside an expression)
    • 4 : also samples tables whose predicates reference 2 or more columns
    • 5 : same as 4, but 64 blocks are sampled
    • 6 : same as 4, but 128 blocks are sampled
These levels sample tables with more than 32 blocks

  • dynamic_sampling hint
select /*+ dynamic_sampling(customer 4) */
  customer_name ...

Useful for tables that are created dynamically, such as global temporary tables (GTTs)

  • Scope
Ideal for a query that is going to be executed many times
When there is a complex where clause

SQL Tuning : Join order


  • Table join optimization
    • Histograms
Gathering histograms on columns referenced in where-clause joins can help the optimizer choose the optimal join order


    • Transitive closure
if a=b and b=c, then a=c

select * from a,b,c where
a.id=b.id and b.id=c.id
-- can be transformed by the optimizer into :
select * from a,b,c where
a.id=b.id and b.id=c.id and a.id=c.id


    • Ordered hint
The first table that appears in the from clause is the driving table. Often used when 4 or more tables are involved in the join; it can save a huge amount of parse time.

select /*+ ordered use_hash(emp, dept) use_nl (sal, bonus) */
from
  emp,
  dept,
  sal,
  bonnus
where ....


    • ordered_predicates hint
The CBO typically evaluates SQL predicates following 3 rules :
a) Subqueries are evaluated before outer Boolean conditions
b) Boolean conditions without subqueries or built-in functions are evaluated in their order in the where clause
c) Boolean conditions using built-in functions are evaluated according to their costs

The ordered_predicates hint overrides these rules, allowing the where clause items to be evaluated in the order in which they appear in the query.

Friday, May 8, 2015

SQL Tuning : SQL Join Internal


  • Introduction
Joining 2 tables on indexed columns will typically result in a nested loops join, while a missing index may result in a sort merge join

A nested loops join may also serve an outer join
  • Logical joins
    • equi-join
select emp.name, bonnus.com from emp, bonnus where emp.name=bonnus.name
select emp.name, bonnus.com from emp inner join bonnus on emp.name=bonnus.name
    • outer join
select emp.name, bonnus.com from emp, bonnus where emp.name=bonnus.name(+) : includes emp rows even when there is no matching row in the bonnus table
select emp.name, bonnus.com from emp LEFT OUTER JOIN bonnus  ON emp.name=bonnus.name


    • full outer join
select
   author_last_name C1,
   book_title   C2
from
  author full outer join book_author using (author_key)
             full outer join book using (book_key)
order by author_last_name;  : includes books without an author and authors without books


    • Union and Union all
With union, duplicate rows are removed : internally the union all plan is executed, followed by a sort unique operation. So union all is better than union from a performance standpoint


    • minus
select deptno from dept
minus
select deptno from emp

Use minus instead of not in operation
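For example, the anti-join on deptno above can be rewritten from NOT IN to MINUS (a sketch; it assumes deptno is not null in both tables):

```sql
-- NOT IN form : departments with no employees
select deptno from dept
where deptno not in (select deptno from emp);

-- equivalent MINUS form
select deptno from dept
minus
select deptno from emp;
```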


    • intersect

select deptno from dept
intersect
select deptno from emp


    • anti-join ( not in, not exists)
select author_last_name from author 
where
  author_key not in ( select author_key from book_author);

select author_last_name from author a
where not exists
  ( select 1   from book_author b where b.author_key=a.author_key)

    • cross join
select last_name, dept_id
from
emp cross join dept;

Results in a Cartesian product
    • natural join
select stuff from emp natural join dept;
Requires that the tables have columns with the same name

    • using clause
select depart_name, city
from
departments join locations using (location_id)

  • SORT MERGE JOIN
    • Usually used when no index exists to join the table columns, or hash_area_size / pga_aggregate_target does not offer enough RAM for a hash join
    • Or when optimizer cardinality estimates indicate that the SQL will require visiting the majority of data blocks of both tables
    • Or when the all_rows optimizer_mode determines that a full-table scan is less resource-intensive than index access

    • find_merge_joins.sql
col c1 heading 'Date'              format a20
col c2 heading 'Merge|Join|Count'  format 99,999,999
col c3 heading 'Rows|Processed'    format 99,999,999
col c4 heading 'Disk|Reads'        format 99,999,999
col c5 heading 'CPU|Time'          format 99,999,999
ttitle 'Merge Joins over time'
select
  to_char(
    sn.begin_interval_time,
    'yy-mm-dd hh24'
  )                             snap_time,
  count(*)                      ct,
  sum(st.rows_processed_delta)  row_ct,
  sum(st.disk_reads_delta)      disk,
  sum(st.cpu_time_delta)        cpu
from
   dba_hist_snapshot   sn,
   dba_hist_sqlstat    st,
   dba_hist_sql_plan   sp
where
-- Additional cost licenses are required to access the dba_hist tables.
   st.snap_id = sn.snap_id
and
   st.dbid = sn.dbid
and
   st.instance_number = sn.instance_number
and
   sp.sql_id = st.sql_id
and
   sp.dbid = st.dbid
and
   sp.plan_hash_value = st.plan_hash_value
and
   sp.operation = 'MERGE JOIN'
group by
   to_char(sn.begin_interval_time,'yy-mm-dd hh24');

    • use_merge hint
select /*+ use_merge(e,b) parallel(e, 63) parallel(b, 63) */
e.ename, hiredate, b.com
from emp e, bonnus b
where e.ename=b.ename

A parallel query run against a 64-CPU server
  • NESTED LOOPS JOIN
    • sample
select employee_name, department_name 
from employees e, departments d
where e.department_id=d.department_id

suppose we have a standard primary-key / foreign-key relationship on department_id (i.e. indexes on these columns) : Oracle may use the NESTED LOOPS method, performing a full table scan on the departments table (the driving table) and looping through the emp.idpt_ix index to access the matching rows in the emp table


    • The driving table
The optimizer will choose the smallest table, or the table without an index, as the driving table

    • no parallelism
Nested loops joins always invoke an index and hence cannot be parallelized
    • hints
Use the use_nl_with_index hint instead of use_nl : the optimizer will then use a nested loops join only if a suitable index exists
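A sketch of the safer hint form (the index name emp_dept_ix is an assumption, not from the original notes):

```sql
-- Nested loops into emp only if the named index is usable;
-- otherwise the optimizer is free to pick another join method
select /*+ use_nl_with_index(e emp_dept_ix) */
       e.employee_name, d.department_name
from   departments d, employees e
where  e.department_id = d.department_id;
```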


    • awr_nested_join_alert.sql

col c1 heading 'Date'                format a20
col c2 heading 'Nested|Loops|Count'  format 99,999,999
col c3 heading 'Rows|Processed'      format 99,999,999
col c4 heading 'Disk|Reads'          format 99,999,999
col c5 heading 'CPU|Time'            format 99,999,999
accept nested_thr char prompt 'Enter Nested Join Threshold: '
ttitle 'Nested Join Threshold|&nested_thr'
select
  to_char(
    sn.begin_interval_time,
    'yy-mm-dd hh24'
  )                             snap_time,
  count(*)                      ct,
  sum(st.rows_processed_delta)  row_ct,
  sum(st.disk_reads_delta)      disk,
  sum(st.cpu_time_delta)        cpu
from
   dba_hist_snapshot   sn,
   dba_hist_sqlstat    st,
   dba_hist_sql_plan   sp
-- make sure that you are licensed to read the AWR tables
where
   st.snap_id = sn.snap_id
and
   st.dbid = sn.dbid
and
   st.instance_number = sn.instance_number
and
   sp.sql_id = st.sql_id
and
   sp.dbid = st.dbid
and
   sp.plan_hash_value = st.plan_hash_value
and
   sp.operation = 'NESTED LOOPS'
group by
   to_char(sn.begin_interval_time,'yy-mm-dd hh24')
having
   count(*) > &nested_thr;


  • HASH JOIN
    • how it works

Uses PGA RAM to build an in-memory hash table from the smaller, driving table. The larger table is then scanned, and each of its rows probes the RAM hash table for matching join keys.

    • hash_area_size or pga_aggregate_target must be set to a high value
    • temp usage
If there is not enough room in RAM, Oracle can overflow to the TEMP tablespace
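One way to see whether running hash joins are spilling to TEMP (a sketch; v$sql_workarea_active column availability can vary by version):

```sql
-- NUMBER_PASSES > 0 or a non-null TEMPSEG_SIZE means the
-- work area did not fit in RAM and overflowed to TEMP
select sid, operation_type, actual_mem_used,
       number_passes, tempseg_size
from   v$sql_workarea_active
where  operation_type like 'HASH%';
```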

    • vs nested loop
Whenever the driving table fits into the hash_area_size (pga_aggregate_target) RAM, a hash join may run faster than a nested loops join.

    • Sizing hash area in RAM
The hash area should be about 1.6 times as large as the driving table :
set heading off;
set feedback off;
set verify off;
set pages 999;
spool run_hash.sql
select
'alter session set hash_area_size='||trunc(sum(bytes)*1.6)||';'
from
   dba_segments
where
   segment_name = upper('&1');
spool off;

    • Hint
Use the parallel hint instead of setting parallelism at the table level
select /*+ use_hash(e,b) parallel(e, 63) parallel(b, 63) */
e.ename, hiredate, b.com
from emp e, bonnus b
where e.ename=b.ename

    • hash_join_alert.sql

col c1 heading 'Date'             format a20
col c2 heading 'Hash|Join|Count'  format 99,999,999
col c3 heading 'Rows|Processed'   format 99,999,999
col c4 heading 'Disk|Reads'       format 99,999,999
col c5 heading 'CPU|Time'         format 99,999,999
accept hash_thr char prompt 'Enter Hash Join Threshold: '
ttitle 'Hash Join Threshold|&hash_thr'
select
   to_char(sn.begin_interval_time,'yy-mm-dd hh24')  c1,
   count(*)                                         c2,
   sum(st.rows_processed_delta)                     c3,
   sum(st.disk_reads_delta)                         c4,
   sum(st.cpu_time_delta)                           c5
from
   dba_hist_snapshot sn,
   dba_hist_sql_plan  p,
   dba_hist_sqlstat  st
where
   st.sql_id = p.sql_id
and
   sn.snap_id = st.snap_id
and
   p.operation = 'HASH JOIN'
group by
   to_char(sn.begin_interval_time,'yy-mm-dd hh24')
having
   count(*) > &hash_thr;

SQL Tuning : Using Hints


  • Good hints vs Bad hints
    • good hint : ordered
  • Automatic Query Rewrite with Hint
    • use_concat
select ename from emp
where
    deptno=10
or
   sal< 5000
or
   job='CLERK'

The first option is to create a bitmap index on each of the 3 columns; the second option is to use the use_concat hint :

select /*+ use_concat */ ename from emp
where
    deptno=10
or
   sal< 5000
or
   job='CLERK'

SQL Tuning : INDEX

- Bitmap index and clause or : Oracle automatically uses bitmap indexes for queries with multiple or conditions on bitmap columns
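A sketch of the pattern (the index names are illustrative):

```sql
-- One bitmap index per low-cardinality column; Oracle can then
-- combine them with a BITMAP OR instead of a full table scan
create bitmap index emp_deptno_bix on emp(deptno);
create bitmap index emp_job_bix    on emp(job);

select ename
from   emp
where  deptno = 10
or     job = 'CLERK';
```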

Monday, May 4, 2015

Execution Plan


SET AUTOTRACE TRACEONLY STATISTICS EXPLAIN
select * from module where module_id=125;
==> compare rows processed with consistent gets

Costs and the Cost-based Optimizer
The cost column in an execution plan is not a reliable way to judge the real response time of a SQL statement. It is supposed to be an estimate of the number of single-block reads required

The most common causes of inaccurate CBO cost estimates include :

Bad table join order : a sub-optimal table join order can be caused by missing histograms
Inaccurate I/O estimates : Oracle uses an estimate of single-block read time, which can be inaccurate if system statistics are missing
Buffer caching estimates : if optimizer_index_caching is not set, or if the contents of the data buffers vary greatly, the optimizer can mis-estimate the I/O cost of a query

Reading Execution Plan

  • Explain plan supplemental information
    • Card : the number of rows the optimizer guesses the step will return; used to determine the optimal table join order
    • Bytes : the number of bytes that might be returned; a rough idea of the total baggage that must be passed to any subsequent steps
  • The Nested Loops Join
    • Driving table : usually uses the table access full or index range scan access method; it is the table specified first after the nested loops operation
    • Second table : normally uses the index unique scan method
    • More : only one of the indexes may be accessed as a range
  • Physical disk reads : aw_top_tables_phyrd.sql
col c0 heading 'Begin|Interval|time' format a8
col c1 heading 'Table|Name'          format a20
col c2 heading 'Disk|Reads'          format 99,999,999
col c3 heading 'Rows|Processed'      format 99,999,999
select
*
from (
select
     to_char(s.begin_interval_time,'mm-dd hh24') c0,
     p.object_name c1,
     sum(t.disk_reads_total) c2,
     sum(t.rows_processed_total) c3,
     DENSE_RANK() OVER (PARTITION BY to_char(s.begin_interval_time,'mm-dd hh24') ORDER BY SUM(t.disk_reads_total) desc) AS rnk
from
  dba_hist_sql_plan   p,
  dba_hist_sqlstat    t,
  dba_hist_snapshot   s
where
   p.sql_id = t.sql_id
and
   t.snap_id = s.snap_id
and
   p.object_type like '%TABLE%'
group by
   to_char(s.begin_interval_time,'mm-dd hh24'),
   p.object_name
order by
c0 desc, rnk
)
where rnk <= 5;
  • Index range scan : awr_sql_index_freq.sql
col c1 heading 'Object|Name'         format a30
col c2 heading 'Option'              format a15
col c3 heading 'Index|Usage|Count'   format 999,999
select
  p.object_name c1,
  p.options     c2,
  count(1)      c3
from
   dba_hist_sql_plan   p,
   dba_hist_sqlstat    s
where
   p.object_owner <> 'SYS'
and
   p.options like '%RANGE SCAN%'
and
   p.operation like '%INDEX%'
and
   p.sql_id = s.sql_id
group by
   p.object_name,
   p.operation,
   p.options
order by
  1,2,3;

  • Total counts for each object and table access : aw_sql_object_freq.sql
col c1 heading 'Object|Name'  format a30
col c2 heading 'Operation'    format a15
col c3 heading 'Option'       format a15
col c4 heading 'Object|Count' format 999,999
break on c1 skip 2
break on c2 skip 2
select
  p.object_name c1,
  p.operation   c2,
  p.options     c3,
  count(1)      c4
from
   dba_hist_sql_plan p,
   dba_hist_sqlstat  s
where
   p.object_owner <> 'SYS'
and
   p.sql_id = s.sql_id
group by
   p.object_name,
   p.operation,
   p.options
order by
  1,2,3;
  •  Count the number of access types : TFS (table full scan), index range scan, index unique scan, index full scan
    • plan9i.sql : v$sql_plan
    • plan10g.sql : dba_hist_sql_plan
    • Use case 1 : candidates for the keep pool (small tables with TFS, <2% of db_cache_size) and for the recycle pool (large tables with TFS)
    • Use case 2 : indexes that are accessed via ROWID ==> non-range-scan access
  • Tracing SQL execution history : dba_hist_sql_plan : dbms_xplan.display_awr
  • Using SQL profiles : stored persistently in the data dictionary : can generate a better execution plan than the normal optimization because it is tested against a real-world workload in the SQL Tuning Set (STS)

Sunday, May 3, 2015

SQL Tuning : Gather statistics

export_fixed_objects_stats
import_fixed_objects_stats

Workload Statistics
multiblock read I/O time (mreadtim)
sequential read I/O time (sreadtim)
Both are updated by the dbms_stats.gather_system_stats procedure

Noworkload Statistics : cpuspeednw, ioseektim, and iotfrspeed
Workload Statistics : sreadtim, mreadtim, cpuspeed, mbrc, maxthr and slavethr, updated by the dbms_stats.gather_system_stats procedure
If both are available, the CBO will use the workload statistics

select
 sname,
 pname,
 pval1
from
 sys.aux_stats$;

 sreadtim : single-block read time in milliseconds
 mreadtim : multiblock read time in milliseconds
 cpuspeed : CPU speed
 mbrc : average number of blocks read per multiblock read
 maxthr : maximum I/O throughput
 slavethr : slave throughput


The dbms_stats.gather_system_stats procedure is important for the choice of TFS vs index scan

execute dbms_stats.gather_system_stats('START');
-- one hour delay during high workload
execute dbms_stats.gather_system_stats('STOP');

or
execute dbms_stats.gather_system_stats('INTERVAL', interval => 60);



SQL Tuning : Histograms

The skewonly option is time-consuming
Histograms are only useful for columns used in the where clause
Histograms are used only in 2 cases :
- Table join order
- Table access : TFS vs index scan

3 options
method_opt=> 'for all columns size skewonly'
method_opt=> 'for all columns  size repeat'
method_opt=> 'for all columns size auto'
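For example, skewonly can be invoked like this (schema and table names are illustrative):

```sql
-- Builds histograms only on columns whose values are actually skewed
begin
  dbms_stats.gather_table_stats(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'for all columns size skewonly');
end;
/
```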


1) for all columns size skewonly
2) Monitoring ( alter table toto monitoring ) : no default?
3) for all columns size auto
4) for all columns size repeat
5) periodically skewonly & auto, when a new column is added


A low cardinality (< 254 distinct values) will create a frequency histogram
A high cardinality (> 254 distinct values) will create a height-balanced histogram

Use histograms on foreign key columns; a lack of histograms can lead to a bad join order. Set histograms instead of using the ORDERED hint




Friday, April 17, 2015

SQL Tuning Introduction


  • NOT IN vs NOT EXISTS

NOT IN subqueries behave unexpectedly when the subquery can return null values
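The trap can be shown with the author/book_author tables used earlier (a sketch; it assumes book_author.author_key is nullable):

```sql
-- If any book_author.author_key is NULL, NOT IN returns no rows at all
select author_last_name from author
where  author_key not in (select author_key from book_author);

-- NOT EXISTS is null-safe and gives the intended anti-join
select author_last_name from author a
where  not exists (select 1 from book_author b
                   where b.author_key = a.author_key);
```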

  • Boolean predicates in the WHERE clause
Placing the most restrictive Boolean predicate first after the WHERE clause is good practice
  • Tables & Index Statistics vs SQL Profiles & Optimizer Plan Stability
The dynamic shop does not use plan stability because it wants its SQL plans to change whenever there is a major change inside the tables
  •  Adaptive Cursor Sharing
Shops that are plagued with non-reusable SQL can adopt either the persistent or the dynamic philosophy. To go persistent with non-reusable SQL, the DBA will set CURSOR_SHARING=force
Persistent shop : only one optimal SQL plan for any SQL; SQL plans rarely change regardless of how often statistics are computed


  • PL/SQL procedures
They are loaded once into the shared pool and remain there unless they are paged out; subsequent executions are faster than plain SQL
  • Function-Based Index Statistics
dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'SALES',
method_opt=>'FOR ALL COLUMNS FOR COLUMNS (compute_total(cust_sales))') : compute_total is a function

  • DBMS_SQLTUNE.import_sql_profile
Use of sqlprof_attr( init parameter can be changed)

  • dbms_stats.gather_system_stats
CPU speed, single disk read speed, multi-block reads

  • Reducing hard parses
Place SQL inside stored procedures
Use bind variables
A high value for "parse calls" (>10/sec) ==> a high rate of incoming unique SQL statements, or SQL that is not reentrant
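A minimal SQL*Plus sketch of the bind-variable advice:

```sql
-- A literal forces a distinct cursor for every value :
select ename from emp where empno = 7369;

-- A bind variable lets all executions share one cursor :
variable v_empno number
exec :v_empno := 7369
select ename from emp where empno = :v_empno;
```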
  • Optimizer_mode
opt_param hint
select /*+ opt_param('optimizer_mode','first_rows_10') */  col1 ...
all_rows : minimizes computing resources and favors full-table scans
For batch jobs, use all_rows
first_rows_1, first_rows_10, first_rows_100, first_rows_1000 : minimize response time and favor index access
Index access involves additional I/O vs a full scan's multiblock reads
choose : all_rows or rule depending on the presence of statistics

  • Optimizer_index_cost_adj
Alters the costing algorithm for access paths involving indexes; the smaller the value, the lower the cost of index access.
A small value (10 to 30) makes index access less costly ==> OLTP systems
default value 100
Range 0 to 10 000
Mostly useful <10g; dbms_stats.gather_system_stats makes this parameter less useful
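The parameter can be tried at session or statement scope (the value 20 is illustrative):

```sql
-- Session scope : make index access look 5x cheaper
alter session set optimizer_index_cost_adj = 20;

-- Statement scope via opt_param
select /*+ opt_param('optimizer_index_cost_adj','20') */ ename
from   emp
where  deptno = 10;
```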

-- Measure I/O costs on the database and get a recommended starting value for optimizer_index_cost_adj
col c1 heading 'Average Waits|for Full|Scan Read I/O'        format 9999.999
col c2 heading 'Average Waits|for Index|Read I/O'            format 9999.999
col c3 heading 'Percent of|I/O Waits|for Full Scans'         format 9.99
col c4 heading 'Percent of|I/O Waits|for Index Scans'        format 9.99
col c5 heading 'Starting|Value|for|optimizer|index|cost|adj' format 999
select
   a.average_wait                                  c1,
   b.average_wait                                  c2,
   a.total_waits / (a.total_waits + b.total_waits) c3,
   b.total_waits / (a.total_waits + b.total_waits) c4,
   (b.average_wait / a.average_wait) * 100         c5
from
   v$system_event a,
   v$system_event b
where
   a.event = 'db file scattered read'
and
   b.event = 'db file sequential read';
  • Optimizer_index_caching
How much of the index is likely to be in RAM
Affects the decision to use an index for a table join (nested loops) or to favor a full-table scan

  • db_file_multiblock_read_count
Auto-tuned since 10gR2

Use plan10g.sql

  • Setting the SQL Optimizer Cost Model
_optimizer_cost_model : choose, io, cpu (default)

Thursday, February 19, 2015

Using RMAN


  • Some useful commands
backup incremental level 0 database ;

restore database datafile 9 from tag='TAG20101201T170101';


crosscheck archivelog all;
backup archivelog all not backed up 1 times;
delete noprompt archivelog until time 'sysdate - 1' backed up 1 times to device type sbt;
backup archivelog all not backed up delete input;

restore archivelog sequence between 932 and 935 thread=1;
restore archivelog SCN BETWEEN 6055449295951 AND 6055451858904;

---- controlfile

CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/10.2.0.4/dbs/snapcf_cis.f';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u02mars/ORADATA/cis/autobackup/%F';

restore controlfile to '/oradata/ctvfile.bak' from autobackup;
restore controlfile to '/oradata/ctvfile.bak' from 'c-4119285822-20100302-0a'
restore spfile from autobackup
create spfile from MEMORY


CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 4;


--- duplicate --
run {
allocate channel nbu_sbt0 type 'SBT_TAPE' PARMS 'SBT_LIBRARY=/usr/openv/netbackup/bin
/libobk.so, ENV=(NB_ORA_SERV=mynetback,NB_ORA_CLIENT=myserv)';
allocate channel nbu_sbt1 type 'SBT_TAPE' PARMS 'SBT_LIBRARY=/usr/openv/netbackup/bin
/libobk.so, ENV=(NB_ORA_SERV=mynetback,NB_ORA_CLIENT=myserv)';
allocate auxiliary channel nbu_sbt2 type 'SBT_TAPE' PARMS 'SBT_LIBRARY=/usr/openv/netbackup
/bin/libobk.so, ENV=(NB_ORA_SERV=mynetback,NB_ORA_CLIENT=myserv)';
allocate auxiliary channel nbu_sbt3 type 'SBT_TAPE' PARMS 'SBT_LIBRARY=/usr/openv/netbackup
/bin/libobk.so, ENV=(NB_ORA_SERV=mynetback,NB_ORA_CLIENT=myserv)';
DUPLICATE TARGET DATABASE TO MYDBDEV until sequence=27757 thread=1 NOFILENAMECHECK;
}
DUPLICATE TARGET DATABASE TO MYDBDEV until time="to_date('30/06/2011 20:00:00', 'DD/MM/YYYY HH24:MI:SS')" NOFILENAMECHECK;

export NLS_DATE_FORMAT='dd/mm/yyyy hh24:mi:ss'
DUPLICATE TARGET DATABASE TO MYDBDEV until time '04/03/2010 04:00:00' NOFILENAMECHECK;

-- delete
delete noprompt obsolete;
delete noprompt archivelog until time 'sysdate - 8/24' backed up 1 times to device type sbt;
DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 7 DAYS;

crosscheck archivelog all;
DELETE NOPROMPT EXPIRED archivelog all completed before 'sysdate -40' ;

crosscheck backup;
DELETE NOPROMPT EXPIRED backup completed before 'sysdate -40';

-- list
list copy of archivelog from time='sysdate - 2';
        list backupset by backup summary; Listing backupset by summary
spool log to list.txt
        list backupset by file;
list backup of database;
list backup of tablespace tbs_tab_users;
list incarnation of database;
-- report
       report schema; Displays all datafiles in the database, including temporary datafiles
       report need backup days 1;
       report obsolete;
       report need backup;
       report unrecoverable;
REPORT OBSOLETE REDUNDANCY = 2 DEVICE TYPE sbt;

 validate database;
 select * from V$DATABASE_BLOCK_CORRUPTION

-- catalog
create user catrman identified by catrman
default tablespace tbs_tab_repos temporary tablespace temp;
grant CONNECT, RESOURCE, RECOVERY_CATALOG_OWNER to catrman;
RMAN>CONNECT CATALOG catrman/catrman ;
RMAN>CREATE CATALOG TABLESPACE tbs_tab_repos;
rman target / catalog catrman/catrman@catrman;
register database;
unregister database;
  • Dealing with controlfile enqueue issues
SELECT s.sid, username AS "User", program, module, action, logon_time "Logon", l.*
FROM v$session s, v$enqueue_lock l
WHERE l.sid = s.sid and l.type = 'CF' AND l.id1 = 0 and l.id2 = 2;

  • Restore database skipping tablespaces or renaming datafiles

col redondance format 999,999
col fichier format a50
set pages 800
set lines 100
set feedback off
set heading off
spool switch.txt
define REP_CIBLE=/u01/ORADATA/myndb/
select 'switch datafile '||file_id||' to datafilecopy ''&&REP_CIBLE'||file_id||fichier||''';'
from
(
select substr(fichier,instr(fichier,'/',-1)+1) fichier,file_id
from
( select file_name fichier,file_id from dba_data_files where tablespace_name not in
('TBS1','TBS2')
) FIC) T
/
Generates the switch datafile script
for restoring a database with skipped tablespaces

col redondance format 999,999
col fichier format a50
set pages 800
set lines 100
set feedback off
set heading off
spool newname.txt
define REP_CIBLE=/u01/ORADATA/myndb/
select 'set newname for datafile '||file_id||' to ''&&REP_CIBLE'||file_id||fichier||
''';'
from
(
select substr(fichier,instr(fichier,'/',-1)+1) fichier,file_id
from
( select file_name fichier,file_id from dba_data_files where tablespace_name not in
('TBS1','TBS2')
) FIC) T
/

Generates the set newname script for
restoring a database with skipped tablespaces

run {
allocate channel nbu_sbt0 type 'SBT_TAPE' PARMS
'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so, ENV=(NB_ORA_SERV=frnbumaster,NB_ORA_CLIENT=mynclient)';
allocate channel nbu_sbt1 type 'SBT_TAPE' PARMS
'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so, ENV=(NB_ORA_SERV=frnbumaster,NB_ORA_CLIENT=mynclient)';
allocate channel nbu_sbt2 type 'SBT_TAPE' PARMS
'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so, ENV=(NB_ORA_SERV=frnbumaster,NB_ORA_CLIENT=mynclient)';
allocate channel nbu_sbt3 type 'SBT_TAPE' PARMS
'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so, ENV=(NB_ORA_SERV=frnbumaster,NB_ORA_CLIENT=mynclient)';
set newname for datafile 1 to '/u01/ORADATA/myndb/1system01.dbf';
set newname for datafile 2 to '/u01/ORADATA/myndb/2undotbs01.dbf';
set newname for datafile 3 to '/u01/ORADATA/myndb/3sysaux01.dbf';
set newname for datafile 10 to '/u01/ORADATA/myndb/10tbs_tab_iris_01.dbf';
set newname for datafile 11 to '/u01/ORADATA/myndb/11tbs_ind_iris_01.dbf';
restore database skip forever tablespace TBS1,TBS2;
switch datafile 1 to datafilecopy '/u01/ORADATA/myndb/1system01.dbf';
switch datafile 2 to datafilecopy '/u01/ORADATA/myndb/2undotbs01.dbf';
switch datafile 3 to datafilecopy '/u01/ORADATA/myndb/3sysaux01.dbf';
switch datafile 10 to datafilecopy '/u01/ORADATA/myndb/10tbs_tab_iris_01.dbf';
switch datafile 11 to datafilecopy '/u01/ORADATA/myndb/11tbs_ind_iris_01.dbf';


ASM


  • ASM instance

  • ASM Disk group



  • Convert a tablespace to ASM
        SQL>alter tablespace tbs1 offline;
RMAN>backup as copy tablespace tbs1 format '+DATA';
        RMAN>switch tablespace tbs1 to copy; 
        SQL>alter tablespace tbs1 online;

Sunday, February 15, 2015

Dataguard : SETUP


  • Creating A DataGuard Broker Configuration Using DGMGRL


  1. Configure the primary tnsnames with the db_unique_name of the standby
  2. ALTER SYSTEM SET dg_broker_start=TRUE; # on both sites, starts the DMON process
  3. dgmgrl> connect sys/xxxxx
CREATE CONFIGURATION 'myndb_config' AS
PRIMARY DATABASE IS 'myndb'
CONNECT IDENTIFIER IS myndb;

show configuration;

ADD DATABASE 'myndb_pra' AS
CONNECT IDENTIFIER IS myndb_pra
MAINTAINED AS PHYSICAL;
show configuration; 

SHOW DATABASE VERBOSE gdddev1;
SHOW DATABASE VERBOSE gdddev1_terre; 

ENABLE CONFIGURATION; 

EDIT DATABASE 'gdddev1_terre' SET PROPERTY LogXptMode='ASYNC';



  • Activating Standby and going back to Standby
alter system set log_archive_dest_state_3='defer'; # primary site
alter database recover managed standby database cancel;

alter system set DG_BROKER_START=FALSE;
CREATE RESTORE POINT before_pra_myn GUARANTEE FLASHBACK DATABASE;
select * from v$restore_point;
alter database activate standby database;
startup mount force;
alter database open;

create table test ( id number);

shutdown immediate
STARTUP MOUNT FORCE;
flashback database to restore point before_pra_myn;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
ALTER SYSTEM SET DG_BROKER_START=TRUE;
STARTUP MOUNT FORCE;

shutdown immediate;
startup nomount;
alter database mount standby database;
alter database recover managed standby database disconnect from session;
drop restore point before_pra_myn ;
alter system set log_archive_dest_state_3='enable';  # primary site


  • Using RMAN Incremental Backups to Refresh a Standby Database
BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE FORMAT '/tmp/incr_standby_%U'; # primary site
CATALOG START WITH '/standbydisk1/incrback/';
RECOVER DATABASE NOREDO;

  • Renaming a bad datafile name

ALTER SYSTEM SET standby_file_management='MANUAL';
alter database create datafile '/u01/app/oracle/product/10.2.0.4/dbs/UNNAMED00422' as '/u01/ORADATA/mydb/myntbs_06.dbf';
ALTER SYSTEM SET standby_file_management=AUTO;


  • Finding duplicate archivelog names on standby

set heading off
set feedback off
set linesize 200
set pagesize 0
spool catalog.txt
select al.recid, al.name from v$archived_log al
where al.status = 'A' and al.standby_dest= 'NO'
minus
select al.recid, al.name from v$archived_log al,
(select max(recid) recid
from v$archived_log
where status = 'A'
group by name) keepal
where al.status = 'A'
and al.standby_dest = 'NO'
and keepal.recid = al.recid;

RMAN> catalog archivelog '/u01/ORADATA/myndb/arch/arch_35_1_716749251';


  • Creating Standby with RMAN 11G

run {
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK ;
}
rman target / AUXILIARY "sys/xxxxx@myndb" cmdfile=$1 log=log/$1.log 

Dataguard : Architecture


  • Overview


  • Broker

Monday, February 9, 2015

Real Application Cluster : Performance

  • Generic approach
Most global cache wait events that show a high total time, as reported in the AWR and Statspack reports or in the dynamic performance views, are normal and may present themselves as the top database time consumers without actually indicating a problem. If user response times increase and a high proportion of the time waited is for global cache, then you should determine the cause.
Most reports include a breakdown of events sorted by percentage of the total time.
It is useful to start with an ADDM report, which analyzes the routinely collected performance statistics with respect to their impact and points to the objects and SQL contributing most to the time waited, and then move on to the more detailed reports produced by AWR and Statspack.
  • Block-related wait events
The main wait events for block-related waits are:
    • gc current block 2-way
    • gc current block 3-way
    • gc cr block 2-way
    • gc cr block 3-way
These statistics indicate that a block was received as the result of either a 2-way or a 3-way message:
    • 2-way : the block was sent from the resource master, requiring 1 message and 1 block transfer.
    • 3-way : the block was forwarded to a third node from which it was sent, requiring 2 messages and 1 block transfer.
  • Contention-related wait events
The main wait events for contention-related waits are:
    • gc current block busy
    • gc cr block busy
    • gc buffer busy acquire/release
These statistics indicate that a block was received that was pinned by a session on another node, or whose shipping was deferred because a change had not yet been flushed to disk or because of high concurrency, and therefore could not be shipped immediately.
The gc current block busy and gc cr block busy wait events indicate that the local instance making the request did not immediately receive a current or consistent-read block. The term busy in these events' names indicates that the sending of the block was delayed on a remote instance; for example, a block cannot be shipped immediately if Oracle Database has not yet written the redo for the block's changes to a log file.
In comparison to block busy wait events, a gc buffer busy event indicates that Oracle Database cannot immediately grant access to data stored in the local buffer cache, because a global operation on the buffer is pending and has not yet completed. In other words, the buffer is busy, and all other processes attempting to access the local buffer must wait.
  • Load-related wait events
The main wait events for load-related waits are:
    • gc current block congested
    • gc cr block congested
These events indicate that a delay in processing has occurred in the GCS, usually caused by high load or CPU saturation; it can be addressed with additional CPUs, load balancing, offloading processing to different times, or a new cluster node. For these events, the wait time encompasses the entire round trip, from the time a session starts to wait after initiating a block request until the block arrives.
  • Message-related wait events
The main wait events for message-related waits are:
    • gc current grant 2-way
    • gc cr grant 2-way
These statistics indicate that no block was received because it was not cached in any instance. Instead a global grant was given, enabling the requesting instance to read the block from disk (cr grant) or modify it (current grant). If the time consumed by these events is high, then the frequently used SQL probably causes a lot of disk I/O (in the case of the cr grant), or the workload inserts a lot of data and frequently needs to find and format new blocks (in the case of the current grant).
  • RAC : Hot objects
Hot tables and hot sequences
  • RAC : Library Cache
    • In RAC, the library cache is global
    • So parsing cost is worse than in non-RAC
    • Solutions :
      1. Reduce parsing
      2. Minimize table alters, drops, creates, truncates
      3. Use PL/SQL stored programs, not anonymous blocks
  • RAC : Wait events
Various wait events
  • Lost/corrupt blocks
The statistics "gc blocks lost" and "gc blocks corrupt" should always be as near to zero as possible :

SELECT
  A.VALUE "GC BLOCKS LOST 1",
  B.VALUE "GC BLOCKS CORRUPT 1",
  C.VALUE "GC BLOCKS LOST 2",
  D.VALUE "GC BLOCKS CORRUPT 2"
FROM GV$SYSSTAT A, GV$SYSSTAT B, GV$SYSSTAT C, GV$SYSSTAT D
WHERE A.INST_ID=1 AND A.NAME='gc blocks lost'
AND B.INST_ID=1 AND B.NAME='gc blocks corrupt'
AND C.INST_ID=2 AND C.NAME='gc blocks lost'
AND D.INST_ID=2 AND D.NAME='gc blocks corrupt';
  • Measuring block transfer time
Contention on blocks can be measured using the block transfer time. To determine it, examine the statistics "global cache cr block receive time" and "global cache cr blocks received".
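The average receive time can be computed directly (a sketch; statistic names vary by version — 10g+ uses the shorter "gc" names — and the time statistic is reported in centiseconds):

```sql
-- Average CR block receive time in ms, per instance
select a.inst_id,
       (a.value / nullif(b.value, 0)) * 10 avg_cr_receive_ms
from   gv$sysstat a, gv$sysstat b
where  a.name = 'gc cr block receive time'
and    b.name = 'gc cr blocks received'
and    a.inst_id = b.inst_id;
```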
  • Related scripts
    • racdiag