Tuesday, September 19, 2017

Wait Events


Whatever changes you make in the database are recorded in the online redo log files. These files are critical for recovery in the event of a crash. There must be at least two redo log groups in the database. The online redo log files are used in a circular fashion: when the current redo log file is full, the changes are recorded in the next available online redo log file. When the last redo log file is full, the changes are recorded in the first redo log file again, overwriting the redo already there. If you have enabled archiving, the online redo log file must be archived before it can be overwritten.

A log switch occurs when the current online redo log file is full. The LGWR process closes the current redo log file, opens the next available redo log file, and starts writing changes to that file. A checkpoint occurs during the log switch, which signals DBWR to flush the dirty buffers to the data files.

When the redo log files are small, they fill up quickly and log switches occur more frequently. When a user session waits on the log file switch completion wait event, it means LGWR has not yet completed the log switch.

Tuning Option

Increase the size of the online redo log files. Check the v$log_history view to see how often log switches take place, and size your log files so that a log switch occurs roughly every 30 minutes. For example, if your current log file size is 50MB and a log switch occurs every 5 minutes, increase the file size to 300MB.
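As a rough way to measure the switch frequency, a query along these lines (a sketch using the standard v$log_history view; the hourly grouping is just one choice) counts log switches per hour:

SQL> select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) switches
     from v$log_history
     group by to_char(first_time, 'YYYY-MM-DD HH24')
     order by 1;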


The cause of the log file switch (checkpoint incomplete) wait event is the same as described above. When you see the log file switch completion wait event you will most likely also see the checkpoint incomplete wait event. During the log switch a checkpoint occurs, and the checkpoint signals DBWR to write the dirty buffers to the data files.

The difference between the log file switch completion and log file switch completion (checkpoint incomplete) wait events is that in the former, users wait for the log writer background process (LGWR) to complete its work (the log switch); in the latter, users wait for the database writer background process (DBWR) to complete its work (the checkpoint).

Tuning option

Increase the size of redo log files. Increase the number of redo log groups.
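Online redo log files cannot be resized in place, so growing them means adding new, larger groups and dropping the old ones once they become inactive. A sketch, with illustrative paths and sizes:

SQL> alter database add logfile group 5 ('/u01/oradata/redo05.log') size 300m;
SQL> alter database add logfile group 6 ('/u01/oradata/redo06.log') size 300m;
SQL> alter system switch logfile;
SQL> alter database drop logfile group 1;  -- only when v$log shows the group INACTIVE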


When a user session requires free buffers, the server process scans the LRU list to get free buffer space. If, after scanning the LRU list up to a threshold, the server process cannot find free space, it requests DBWR to write the dirty buffers from the LRU list to disk. While DBWR writes the dirty buffers, the session waits on 'free buffer waits'.

Tuning Options

Poor SQL Statements--

Query the V$SQL view for statements that have high DISK_READS and tune them to reduce physical reads. Poorly written SQL statements are the main cause of this wait event.
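A sketch of such a check against V$SQL (the ROWNUM wrapper keeps the output to the top ten offenders):

SQL> select * from (
       select sql_id, disk_reads, executions, substr(sql_text, 1, 60) sql_text
       from v$sql
       order by disk_reads desc)
     where rownum <= 10;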

DBWr Processes--

Increase the DBWr processes (or)
Decrease the Buffer Cache (or)
Decrease the FAST_START_MTTR_TARGET parameter.
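A sketch of the first and third options (DB_WRITER_PROCESSES is a static parameter and needs an instance restart, while FAST_START_MTTR_TARGET is dynamic; the values shown are illustrative):

SQL> alter system set db_writer_processes = 4 scope = spfile;  -- takes effect after restart
SQL> alter system set fast_start_mttr_target = 300;            -- lower values make DBWR write more aggressively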

Delayed Block Cleanout--
Delayed block cleanout can cause free buffer waits. To avoid delayed block cleanout, perform a full table scan on a table that has been loaded with a large number of rows before it is released to the application.

Small Buffer Cache--
Increase the size of the buffer cache if you believe it is undersized, and then check for the wait event again.
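When judging whether the cache is undersized, v$db_cache_advice (populated when the advisory is on, the default with STATISTICS_LEVEL=TYPICAL) estimates physical reads at alternative cache sizes; a sketch for an 8K block size:

SQL> select size_for_estimate, size_factor, estd_physical_reads
     from v$db_cache_advice
     where name = 'DEFAULT' and block_size = 8192
     order by size_for_estimate;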

Slow IO--
A slow IO subsystem delays DBWR writes; check the disk write performance with OS tools (sar -d, iostat).

Log File Sync Wait
When a user issues a commit or rollback, the redo data in the redo buffer is written to the online redo log file. The user session waits for this write to finish before continuing with other processing. This wait time is reported as the log file sync wait event.

A number of people have asked what the difference is between log file parallel write and log file sync.

The difference is....

log file parallel write occurs when LGWR writes redo records from the redo buffer to the online redo log files. This happens whenever any one of the following conditions is met:

1. Once in every three seconds.
2. _LOG_IO_SIZE threshold is met.
3. 1MB worth of redo entries are buffered.
4. Commit.
5. Rollback.
6. When DBWr requests.

User sessions never experience the log file parallel write wait event directly.

When a user session issues a commit or rollback, it leads to the log file sync wait event, which the user experiences as response time.
The log file sync event occurs when a user issues a commit or rollback; the difference between log file sync and log file parallel write is explained above.

When a user issues a commit or rollback command, the redo data in the redo buffer is written to the online redo log file. This write is known as a sync write. During this synchronization the user process waits on the log file sync event, while LGWR waits on the log file parallel write event.

The log file sync event is usually very fast and goes unnoticed by end users. However, you may notice very high time waited for this event in certain cases. The main causes of high waits on this event are as follows:

Too many commits

If you notice high waits at the session level, it may be due to batch processes that commit inside a loop. If that is the case, change the application logic to eliminate unnecessary commits and reduce the commit frequency, as in the sketch below.
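As an illustration only (the table t and its processed column are hypothetical, not from this post), compare a per-row commit with a single batch commit:

-- Commit inside the loop: one sync write (log file sync) per row
begin
  for r in (select id from t) loop
    update t set processed = 'Y' where id = r.id;
    commit;
  end loop;
end;
/

-- Commit after the loop: one sync write for the whole batch
begin
  for r in (select id from t) loop
    update t set processed = 'Y' where id = r.id;
  end loop;
  commit;
end;
/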

If you notice high waits at the system level, it may be due to short transactions. OLTP databases usually have short transactions and high log file sync waits. About the only thing you can do to improve performance in this case is to use a faster IO subsystem, such as raw devices.

Large Log buffer

Redo entries move from the buffer to the log files either through sync writes, as explained earlier, or through background writes (once every three seconds, when the buffer is 1/3 full, at 1MB of redo, and so on). When the redo log buffer is large, more redo accumulates in the buffer and the background writes (i.e., when the buffer becomes 1/3 full) are fewer or delayed. When a user then issues a commit or rollback, the sync write takes more time.

The 'buffer busy waits' event occurs for the following reasons:

1. A user wants to access a data block for a read or write operation. The block is present in the buffer cache but is locked by another session. The user has to wait until the other session releases the lock on that block.
2. A user wants to access a data block for a read or write operation. The block is not present in the buffer cache and has to be read from the data files into the buffer cache, but the same block is already being read by another session. The user therefore waits for the other session's IO to complete. Prior to Oracle 10g this wait was reported as buffer busy waits; from Oracle 10g onwards it is reported as the 'read by other session' wait.
Tuning Options

Run the following query to find whether any block or range of blocks is consistently responsible for buffer busy waits:

SQL> select p1 "File #", p2 "Block #", p3 "Reason Code"
     from v$session_wait
     where event = 'buffer busy waits';
Use the following query to find the segment the block belongs to:

SQL> select owner, segment_name, segment_type
     from dba_extents
     where file_id = &file#
     and &block# between block_id and block_id + blocks - 1;

Once the segment name is identified, use the V$Segment_Statistics view to monitor the statistics of the segment.

SQL>select * from v$segment_statistics
where owner like 'RACFIN'
and statistic_name like 'buffer busy waits'
and object_name like 'IBM_PARTY_BRANCH' ;

Use the following query to find what kind of contention is causing the buffer busy waits.

SQL> Select * from v$waitstat;

The output shows the sum and total time of all waits for particular class of block such as data block, segment header, undo header block etc.

To avoid buffer busy waits (a sketch of these changes follows this list):

1. Increase the PCTFREE value (and adjust PCTUSED accordingly) to reduce the number of rows per block and avoid data block contention.
2. Increase the INITRANS value to avoid data block contention.
3. Increase the FREELISTS and FREELIST GROUPS values to avoid freelist block and segment header block contention.
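A sketch of these changes on a hypothetical table hot_table; note that a new PCTFREE applies only to newly formatted blocks, so existing blocks need a reorganization (the MOVE below), after which indexes must be rebuilt:

SQL> alter table hot_table pctfree 20 initrans 4;
SQL> alter table hot_table move;           -- rebuild existing blocks with the new settings
SQL> alter index hot_table_pk rebuild;     -- indexes are left UNUSABLE by the move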


Enqueues are locking mechanisms that control access to shared resources; enqueue waits occur while sessions wait to acquire them. There are various enqueue modes.

The following query shows the sessions holding and waiting for a lock, along with the lock type and mode:

SQL> select decode(request, 0, 'Holder: ', 'Waiter: ') || sid sess,
            id1, id2, lmode, request, type
     from v$lock
     where (id1, id2, type) in (select id1, id2, type from v$lock where request > 0)
     order by id1, request;

The most common enqueue waits are discussed below,

TYPE: TM (Table Lock)
LMODE: 3
CAUSE: Unindexed Foreign Key
SOLUTION: The holder has to issue a commit or rollback. To avoid this kind of lock in the first place, create indexes on the foreign key columns. Take the ID1 column value from v$lock; this ID1 value is the object ID of the child table. Look up the object name in the dba_objects dictionary view and create the index on the foreign key column, as sketched below.
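A sketch of that lookup and fix (child_table, parent_id, and the index name are illustrative):

SQL> select owner, object_name from dba_objects where object_id = &id1;
SQL> create index child_fk_idx on child_table (parent_id);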


TYPE: TX (Row level lock)
LMODE: 6
CAUSE: Updating or deleting rows that are currently locked by another transaction.
SOLUTION: Application issue. The lock is released when the holding session issues a commit or rollback. Killing the holding session will roll back the transaction.
RESOURCE LOCKED: Issue the following query to find the resource that is locked.

SQL> select c.sid waiter_sid, a.object_name, a.object_type
     from dba_objects a, v$session b, v$session_wait c
     where (a.object_id = b.row_wait_obj# or a.data_object_id = b.row_wait_obj#)
     and b.sid = c.sid
     and chr(bitand(c.p1, -16777216)/16777215) || chr(bitand(c.p1, 16711680)/65535) = 'TX'
     and c.event = 'enqueue';


TYPE: TX (ITL Shortage)
LMODE: 4
CAUSE: i) ITL (Interested Transaction List) Shortage. ii) Unique Key Enforcement. iii) Bitmap index Entry.
SOLUTION: To see whether the wait is due to an ITL shortage, dump the data block and check how many ITL slots are in use.

SQL> alter system dump datafile <file#> block <block#>;

If it is indeed due to an ITL shortage, increase the INITRANS value of the object. Also consider increasing the PCTFREE value.

ii) Unique key enforcement: if more than one session inserts the same value into a column that has a unique or primary key constraint, the later inserts cannot succeed immediately. If the first session that inserted the value commits, the waiting session receives a unique constraint violation error; if the first session rolls back, the second session succeeds.

iii) Bitmap Index Entry: A bitmap entry covers a range of ROWIDs. When a bitmap entry is locked all the ROWIDs that correspond to the bitmap entry are locked. When multiple users attempt to delete or update different rows that have the same bitmap entry then a wait for TX in mode 4 will occur.

It is difficult to tell whether the lock was due to unique key enforcement or a bitmap index entry merely by looking at the V$LOCK view. You have to capture the SQL statements that the holder and waiter issued: if the statements are inserts, the wait is due to unique key enforcement; if they are updates or deletes, the wait is due to a bitmap index entry.


The control file parallel write wait event occurs due to operations that cause the control file to be updated, such as:

1. log switches by the LGWR process.
2. adding a datafile.
3. removing a datafile.
4. checkpoint information written by the CKPT process.
5. archive log information written by the ARCH process.

To find which sessions cause writes to the control file, issue the following statement.

SQL> select a.sid,decode(a.type, 'BACKGROUND', 'BACKGROUND-' || substr
(a.program,instr(a.program,'(',1,1)), 'FOREGROUND') type, b.time_waited,
round(b.time_waited/b.total_waits,4) average_wait, round((sysdate - a.logon_time)*24) hours_connected
from v$session_event b, v$session a
where a.sid = b.sid
and b.event = 'control file parallel write'
order by type, time_waited;

The output of the above statement shows which background process is writing to the control file frequently. For example, if LGWR has more time_waited, it implies frequent log switches. If the foreground processes have more time_waited, it implies many database changes that require the SCN in the control file to be updated.




'control file sequential read' occurs while reading the control file (backups, sharing information from the control file between instances, etc.). The parameters in V$session_wait are as follows:

P1 - The file# of the control file from which the session is reading.
P2 - The block# from which the session starts reading.
P3 - The number of blocks the session is trying to read.

'control file parallel write' occurs while writing to all of the control files. The parameters in V$session_wait are as follows:

P1 - Number of control files being updated.
P2 - Number of blocks being updated.
P3 - Number of IO requests.

Tuning Options: Use asynchronous IO if possible. Move the control files to a different disk or use faster disks.


'db file parallel read' occurs during recovery. The data blocks that need to be changed are read from the various datafiles and placed in non-contiguous buffers. The server process waits until all the blocks are read into the buffer.

Tuning options - same as db file sequential read.

'db file parallel write' occurs when the database writer (DBWR) performs a parallel write to files and blocks. Check the average_wait in V$SYSTEM_EVENT; if it is greater than 10 milliseconds, it signals slow IO throughput.

Tuning options - The main blocker for this wait event is the OS IO subsystem, so use OS monitoring tools (sar -d, iostat) to check write performance. To improve the average_wait time, consider the following:

If the data files reside on raw devices, use asynchronous writes. If the data files reside on cooked file systems, use synchronous writes with direct IO.

Note: If the average_wait time for db file parallel write is high, you may also see the system waiting on the free buffer waits event.


'db file sequential read' occurs during single block read (Reading index blocks, row fetch by row id).

Tuning Options

1. Find the top SQL with high physical reads (AWR or Statspack).
Analyze the objects for better Execution plans.
Use more selective index.
Rebuild the indexes if it is fragmented.
Use Partition if possible.

2. Find the I/O Statistics
Check hot disks using V$filestat.
Move datafiles to avoid contention on a single disk.

3. Try to increase the Buffer Cache
In 9i, use the buffer cache advisory, and in 10g use ASMM (Automatic Shared Memory Management) to determine the optimal size for the buffer cache.
Check for hot segments and place them in the keep pool, as sketched below.
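A sketch of placing a hot segment in the keep pool (the segment name is illustrative; DB_KEEP_CACHE_SIZE must be nonzero first):

SQL> alter system set db_keep_cache_size = 256m;
SQL> alter table hot_lookup storage (buffer_pool keep);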





'db file scattered read' occurs during multiblock read (Full table Scan, Index Fast Full Scans).

Tuning Options

1. Check for SQL that performs Full scans. Tune for optimal plans.
2. If the multiblock scans come from optimal plans, increase the init parameter DB_FILE_MULTIBLOCK_READ_COUNT (up to 9i). In 10g, leave this parameter unset so that Oracle tunes it automatically.
3. Use Partitions if possible.


When you find a session waiting for either sequential read or scattered read, it might be useful to find which object is being accessed for further tuning.

To find the object and the block number the session is accessing,

SQL> select sid, event, p1 file#, p2 block#, p3 "Blocks Fetched",
            wait_time, seconds_in_wait, state
     from v$session_wait
     where sid in (select sid from v$session
                   where osuser != 'oracle' and status = 'ACTIVE');


From the above query get the file# and the block#.

To find the name of the file, issue the following query.

SQL> SELECT tablespace_name, file_name FROM dba_data_files
WHERE file_id = &File#;


To find the object, issue the following query.

SQL> SELECT owner, segment_name, segment_type, partition_name
     FROM dba_extents
     WHERE file_id = &File#
     AND &Block# BETWEEN block_id AND block_id + blocks - 1;


The 'log file parallel write' event is posted by the log writer (LGWR) process. LGWR writes the redo buffer to the online redo log files by issuing a series of write calls to the system IO, and waits on log file parallel write for the writes to complete. A slow LGWR process can introduce log file sync waits, which users experience as wait time during commit or rollback. The log file parallel write and log file sync wait events are interrelated and must be dealt with together.

If the average_wait time is high (above 10 milliseconds), it indicates that the system IO throughput is slow. To improve the average_wait time, follow the same techniques used for the db file parallel write wait event.

Tuning options:

1. Avoid running hot backups during peak hours.
2. Check for sessions that commit frequently and try to change the application logic to commit less often. Use the following query to find high-commit sessions:

SQL> select sid, value from v$sesstat
     where statistic# = (select statistic# from v$statname where name = 'user commits')
     order by 2 desc;


High redo wastage also indicates high-frequency commits:

SQL> select b.name, a.value, round(sysdate - c.startup_time) days_old
from v$sysstat a, v$statname b, v$instance c
where a.statistic# = b.statistic#
and b.name in ('redo wastage','redo size');



Thursday, September 14, 2017

Oracle 12c Data Guard Active Standby Database (Multitenant Database – 4 Node RAC)

Step by Step approach to create an Oracle 12c Data Guard Active Standby Database (Multitenant Database – 4 Node RAC)


Environment Setup details
  • Primary database up and running on 4 node RAC.
  • Data files, OCR and voting disks (in separate disk groups) are in ASM storage.
  • Binaries are on the file system.
  • DB password files and init<sid>.ora files are in $ORACLE_HOME/dbs.
  • Grid Infrastructure installed and configured on the standby machines, 4 node RAC.
  • OCR and voting disks (in separate disk groups) are in ASM storage.
  • Required disk groups created on the standby site for data files.
  • RDBMS software installed on the standby machines.
  • Network connectivity ensured between PRIMARY and STANDBY servers (TCP/IP, ports 1521 and 1525).
Environment details – Primary Databases

OS Version: Linux 6.2
Oracle Version: 12.1.0.2
Host Server: LABPRD1, LABPRD2, LABPRD3, LABPRD4
Primary DB : PRASPRD Instances : PRASPRD1,PRASPRD2,PRASPRD3,PRASPRD4
Storage System: ASM for Data Storage and Filesystem for Binaries
Grid Home(Binaries): /prasprd/gridinfra/grid1
RDBMS Home(Binaries) :/prasprd/db/db1
Disk Groups: CRS1 (High Redundancy, for OCR and Voting Disk), LABDATA2 (Data files, Redo logs and Control files)

Environment details – Standby Databases

OS Version: Linux 6.2
Oracle Version: 12.1.0.2
Host Server : LABSBY1, LABSBY2, LABSBY3, LABSBY4
Standby DB : PRASSBY   Instances : PRASPRD1,PRASPRD2,PRASPRD3,PRASPRD4
Storage System: ASM for Data Storage and Filesystem for Binaries
Grid Home(Binaries): /prasprd/gridinfra/grid1
RDBMS Home(Binaries) :/prasprd/db/db1
Disk Groups: CRS1 (High Redundancy, for OCR and Voting Disk), LABDATA3 (Data files, Redo logs and Control files)

Procedure
  1. Enable force logging in PRIMARY.
  2. Create SRL (standby redo logs) in PRIMARY.
  3. Copy init and password file from primary to STANDBY.
  4. Create directory structure in STANDBY.
  5. Make Standby parameter changes in the parameter file of PRIMARY.
  6. Make Standby parameter changes in the parameter file of STANDBY.
  7. Add an entry in the oratab file
  8. Establish the connectivity between PRIMARY and STANDBY.
  9. Start the STANDBY instance
  10. Use RMAN duplicate to create standby database from PRIMARY database.
  11. Start the MRP process in STANDBY.
  12. Verify whether the logs are shipped and applied properly at the STANDBY.
  13. Start other standby RAC nodes.
  14. Verify Pluggable database’s status.

Step by Step Active Standby Build

1.  Enable force logging on PRIMARY
     SQL> alter database force logging;
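To confirm, check the force_logging column of v$database (it should report YES):

     SQL> select force_logging from v$database;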

2.  Configure Standby Redo Log on PRIMARY
 a. Check the log files and sizes.
     SQL>SELECT GROUP#, BYTES FROM V$LOG;
b. Create Standby Redo Log
     SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD <TN> GROUP <GN>
          ('/path/<File Name>', '/path/<File Name>') SIZE <S>;

NOTE: THE SIZE OF A STANDBY LOG FILE SHOULD BE THE SAME AS THE ONLINE LOG FILE

 c. Verify that the standby redo log file groups were created. Verify this on the standby after the standby build.
     SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

(Primary)

GROUP#   THREAD#   SEQUENCE#   ARC   STATUS
9        1         0           YES   UNASSIGNED
10       1         0           YES   UNASSIGNED
11       2         0           YES   UNASSIGNED
12       2         0           YES   UNASSIGNED
13       3         0           YES   UNASSIGNED
14       3         0           YES   UNASSIGNED
15       4         0           YES   UNASSIGNED
16       4         0           YES   UNASSIGNED

(Standby - output below is after the standby is created)

GROUP#   THREAD#   SEQUENCE#   ARC   STATUS
9        1         0           YES   UNASSIGNED
10       1         509         YES   ACTIVE
11       2         0           YES   UNASSIGNED
12       2         298         YES   ACTIVE
13       3         0           YES   UNASSIGNED
14       3         500         YES   ACTIVE
15       4         0           YES   UNASSIGNED
16       4         287         YES   ACTIVE

**Here I have 4 nodes with 2 redo log groups per node (8 groups in total), so I created 8 standby redo log groups, 2 per node.

3.       Copy init and password file from primary to standby

Copy initPRASPRD1.ora to the standby database (all 4 nodes); the name of the init<sid>.ora file needs to be changed depending on the instance name.

Copy the password file orapwPRASPRD1 from the primary to the standby database (all 4 nodes); again, the file name needs to be changed depending on the instance name.

4.       Create directory structure in standby
              $ mkdir -p /u01/app/oracle/admin/prasprd/adump (on all 4 nodes)

5.       Make Standby parameter changes in the parameter file of PRIMARY. 

Add the below entries to the init<sid>.ora. (Make the entry in one node's pfile and copy it to the other nodes.)

$ cat initprasprd1.ora
##########DG Entry ################
*.log_archive_max_processes='8'
*.db_unique_name='PRASPRD'
*.log_archive_config='dg_config=(PRASSBY,PRASPRD)'
*.log_archive_dest_1='location=+LABDATA2 valid_for=(all_logfiles,all_roles)'
*.log_archive_dest_2='service=PRASSBY_DR async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=PRASSBY'
*.log_file_name_convert='+LABDATA3','+LABDATA2'
*.db_file_name_convert='+LABDATA3','+LABDATA2'
*.fal_server=PRASSBY_DR
*.fal_client=PRASPRD
#############DG Entry ############################

 6.       Make Standby parameter changes in the parameter file of STANDBY.
Add the below entries to the init<sid>.ora. (Make the entry in one node's pfile and copy it to the other nodes.)

$ cat initprasprd1.ora
##########DG Entry ################
*.log_archive_max_processes='8'
*.db_unique_name='PRASSBY'
*.standby_file_management='AUTO'
*.log_archive_config='dg_config=(PRASPRD,PRASSBY)'
*.log_archive_dest_1='location=+LABDATA3 valid_for=(all_logfiles,all_roles) db_unique_name=PRASSBY'
*.log_archive_dest_2='service=PRASPRD async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=PRASPRD'
*.log_file_name_convert='+LABDATA2','+LABDATA3'
*.db_file_name_convert='+LABDATA2','+LABDATA3'
*.fal_server=PRASPRD
*.fal_client=PRASSBY_DR
#############DG Entry ############################

7.       Add an entry in the oratab file
PRASPRD1:/prasprd/db/db1:N  (Node1)
PRASPRD2:/prasprd/db/db1:N  (Node2)
PRASPRD3:/prasprd/db/db1:N  (Node3)
PRASPRD4:/prasprd/db/db1:N  (Node4)

8.       Establish the connectivity between PRIMARY and STANDBY.   

On Standby (Add a static entry in the listener.ora for PRASSBY)
LISTENER12C =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST =LABSBY1.me.domain)(PORT = 1525))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1525))
    )
  )
SID_LIST_LISTENER12C =
  (SID_LIST =
 (SID_DESC =
      (GLOBAL_DBNAME = PRASSBY)
      (ORACLE_HOME = /prasprd/db/db1)
      (SID_NAME = PRASPRD1)
    )
 )
Reload the listener
$ lsnrctl reload listener12c
 LSNRCTL for Linux: Version 12.1.0.2.0 - Production on Tue Jan 6 12:11:31 2015
 Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST= LABPRD1.me.domain) (PORT=1525)))
The command completed successfully

On Primary (Add the below entries in tnsnames.ora)
PRASSBY_DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = LABSBY1.me.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME =PRASSBY)
    )
  )
 PRASPRD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =LABPRD1.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRASPRD)
    )
  )

On Standby (Add the below entries in tnsnames.ora)
PRASSBY_DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = LABSBY1.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME =PRASSBY)
    )
  )
PRASPRD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST =LABPRD1.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PRASPRD)
    )
  )


9.       Start the STANDBY database instance.

$ . oraenv
ORACLE_SID = [prasprd1] ? prasprd1
The Oracle base has been set to /u01/app/orabase
 oradb@PRASSBY1.ME.DOMAIN:/home/oradb ~]sqlplus / as sysdba
 SQL*Plus: Release 12.1.0.2.0 Production on Tue Jan 6 12:11:31 2015
 Copyright (c) 1982, 2014, Oracle.  All rights reserved.
 Connected to an idle instance.
 SQL> startup nomount;
 ORACLE instance started.
Total System Global Area  XXXXXXXXX bytes
Fixed Size                  XXXXXXX bytes
Variable Size             XXXXXXXXX bytes
Database Buffers           XXXXXXXX bytes
Redo Buffers                XXXXXXX bytes

10.       Use RMAN duplicate to create the standby database from the PRIMARY database.

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jan 6 14:40:12 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/*****@PRASPRD
connected to target database: PRASPRD (DBID=2845711114)

RMAN> connect auxiliary sys/*****@PRASSBY_DR
connected to auxiliary database: PRASPRD (not mounted)

RMAN> run {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  allocate auxiliary channel aux type disk;
  duplicate target database for standby from active database
    spfile
    set log_archive_max_processes='8'
    set db_unique_name='PRASSBY'
    set standby_file_management='AUTO'
    set log_archive_config='dg_config=(PRASPRD,PRASSBY)'
    set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(all_logfiles,all_roles) db_unique_name=PRASSBY'
    set log_archive_dest_2='service=PRASPRD async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=PRASPRD'
    set log_file_name_convert='+LABDATA2','+LABDATA3'
    set db_file_name_convert='+LABDATA2','+LABDATA3';
}

11.   Start the MRP process on the standby database

SQL> startup ;
ORACLE instance started.
Total System Global Area  XXXXXXXXX bytes
Fixed Size                  XXXXXXX bytes
Variable Size              XXXXXXXXX bytes
Database Buffers            XXXXXXXXX bytes
Redo Buffers                 XXXXXXX bytes
Database mounted.
Database opened.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
  Media recovery complete……..
12.       Verify the MRP process on the standby database
oradb@LABPRD.CORPORATE.DOMAIN:/home/oradb ~]ps -ef | grep mrp
oradb    23678     1  0 Jan05 ?        00:00:05 ora_mrp0_prasprd1
 SQL> select process, status, thread#, sequence#, blocks
      from v$managed_standby where process like '%MRP%';

PROCESS   STATUS         THREAD#   SEQUENCE#   BLOCKS
MRP0      APPLYING_LOG   1         498         102400


13.     Verify whether the logs are shipped and applied properly at the STANDBY.
On Primary
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> select thread#, max(sequence#) "Last Primary Seq Generated"
     from v$archived_log val, v$database vdb
     where val.resetlogs_change# = vdb.resetlogs_change#
     group by thread# order by 1;

THREAD#   LAST PRIMARY SEQ GENERATED
1         497
2         283
3         484
4         277

On Standby
SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN
            (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN
            (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;
THREAD#   LAST SEQUENCE RECEIVED   LAST SEQUENCE APPLIED   DIFFERENCE
1         497                      497                     0
2         283                      283                     0
3         484                      484                     0
4         277                      277                     0
14.        Start up the other RAC standby nodes.

SQL> startup open;
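Alternatively, assuming the standby database and its instances have been registered with Oracle Clusterware (srvctl add database / srvctl add instance, not shown in this post), all remaining instances can be started in one step:

$ srvctl start database -d PRASSBY -o open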
15.      Check the pluggable databases and their status in PRIMARY and STANDBY.

On Primary

 SQL> select name, open_mode from v$pdbs;   (or: show pdbs)

NAME       OPEN_MODE
PDB$SEED   READ ONLY
PDB1       MOUNTED
PDB2       MOUNTED
 On Standby
 SQL> select name, open_mode from v$pdbs;   (or: show pdbs)

NAME       OPEN_MODE
PDB$SEED   READ ONLY
PDB1       MOUNTED
PDB2       MOUNTED


16.       Check database ROLE
 On Primary
 SQL> select NAME, DATABASE_ROLE, SWITCHOVER_STATUS,DB_UNIQUE_NAME,CDB,CON_ID from v$database;

NAME      DATABASE_ROLE      SWITCHOVER_STATUS   DB_UNIQUE_NAME   CDB   CON_ID
CDBORCL   PRIMARY            TO STANDBY          cdborcl          YES   0

On Standby
             SQL> select NAME, DATABASE_ROLE, SWITCHOVER_STATUS,DB_UNIQUE_NAME,CDB,CON_ID from v$database;
NAME      DATABASE_ROLE      SWITCHOVER_STATUS   DB_UNIQUE_NAME   CDB   CON_ID
CDBORCL   PHYSICAL STANDBY   NOT ALLOWED         cdborcdr         YES   0


17.       Confirm the primary database protection mode

SQL> SELECT PROTECTION_MODE FROM V$DATABASE;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

At this point the physical standby creation is complete.


Test SCENARIOS
=============
 You can run several test scenarios to make sure changes are being shipped to the standby location.
  • Test 1 - Create a table in any PDB schema and verify the change takes effect on the standby site (a concrete sketch follows these steps).
Primary
Create a table and insert some values.
Standby
1. alter pluggable database all open read only;
2. Connect to the PDB.
3. select count(*) from schema.tablename;
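A concrete sketch of Test 1 (the schema, table, and PDB names are illustrative):

On Primary
SQL> alter session set container = PDB1;
SQL> create table scott.dg_test (id number);
SQL> insert into scott.dg_test values (1);
SQL> commit;
SQL> alter system switch logfile;

On Standby
SQL> alter pluggable database all open read only;
SQL> alter session set container = PDB1;
SQL> select count(*) from scott.dg_test;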
  • Test 2 - Clone a PDB and verify the change takes effect on the standby site.
Primary
CREATE PLUGGABLE DATABASE pdb2 ADMIN USER pdb2adm IDENTIFIED BY meadmin;
SQL> select name, open_mode from v$pdbs;   (or: show pdbs)
Standby
1.       Verify the alert log.
2.       SQL> select name, open_mode from v$pdbs;   (or: show pdbs)

The Oracle row type concept can be accomplished with the record type in PostgreSQL. Example: Oracle: CURSOR travel_cur IS  SELECT  c.travel_...