Sunday 6 November 2016

Data Guard Physical Standby Setup in Oracle Database 11g Release 2

This article describes how to set up a Data Guard physical standby database in Oracle Database 11g Release 2.
Data Guard is the name for Oracle's standby database solution, used for disaster recovery and high availability.

Architecture
The Oracle Data Guard architecture incorporates the following items:

Primary Database - A production database that is used to create standby databases. The archive logs from the primary database are transferred and applied to standby databases. Each standby can only be associated with a single primary database, but a single primary database can be associated with multiple standby databases.

Standby Database - A replica of the primary database.

Log Transport Services - Control the automatic transfer of archive redo log files from the primary database to one or more standby destinations.

Network Configuration - The primary database is connected to one or more standby databases using Oracle Net.

Log Apply Services - Apply the archived redo logs to the standby database. The Managed Recovery Process (MRP) actually does the work of maintaining and applying the archived redo logs.

Role Management Services - Control the changing of database roles from primary to standby. The services include switchover, switchback and failover.

Data Guard Broker - Controls the creation and monitoring of Data Guard configurations. It comes with a GUI and a command-line interface.
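
The broker is not used in the manual configuration described in this article, but for reference its command-line interface (DGMGRL) is invoked as shown below; the connect string and password are placeholders.

$ dgmgrl sys/password@db11g

DGMGRL> SHOW CONFIGURATION;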

The services required on the primary database are:

Log Writer Process (LGWR) - Collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to standby databases.

Archiver Process (ARCn) - One or more archiver processes make copies of online redo logs either locally or remotely for standby databases.

Fetch Archive Log (FAL) Server - Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.

The services required on the standby database are:

Fetch Archive Log (FAL) Client - Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence.

Remote File Server (RFS) - Receives archived and/or standby redo logs from the primary database.

Archiver (ARCn) Processes - Archives the standby redo logs applied by the managed recovery process (MRP).

Managed Recovery Process (MRP) - Applies archive redo log information to the standby database.

Database Synchronization Options

Data Guard can be configured to run in one of several synchronization modes, each indicating the potential for data loss.

No-Data-Loss mode: This simply means that the log transport services will not acknowledge modifications to the primary database until they are available to the standby database. This doesn't mean that the modifications have been applied to the standby database, merely that the log information is available to the log apply services should a failover occur. This mode is implemented using standby redo logs on the standby server.

No-Data-Divergence mode: This is an extension of the no-data-loss mode whereby modifications to the primary database are prevented if connectivity between the primary and at least one standby database is unavailable.

Minimal-Data-Loss mode: When the performance requirements of the primary database are the top priority, this mode provides the optimum balance of data protection and performance.

Primary Server Setup
Logging
Check that the primary database is in archivelog mode.

SQL>> SELECT log_mode FROM v$database;
LOG_MODE
------------
NOARCHIVELOG

If it is in NOARCHIVELOG mode, switch it to ARCHIVELOG mode.

SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP MOUNT;
SQL>> ALTER DATABASE ARCHIVELOG;
SQL>> ALTER DATABASE OPEN;

Enable forced logging by issuing the following command.

SQL>> ALTER DATABASE FORCE LOGGING;

Initialization Parameters
Check the setting for the DB_NAME and DB_UNIQUE_NAME parameters. In this case they are both set to "DB11G" on the primary database.

SQL>> show parameter db_name
NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name     string DB11G

SQL>> show parameter db_unique_name
NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name     string DB11G

The DB_NAME of the standby database will be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. 

The DB_UNIQUE_NAME values of the primary and standby database should be used in the DG_CONFIG setting of the LOG_ARCHIVE_CONFIG parameter. 

For this example, the standby database will have the value "DB11G_STBY".

SQL>> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB11G,DB11G_STBY)';

Set suitable remote archive log destinations. In this case I'm using the fast recovery area for the local location, but you could specify a location explicitly if you prefer. 

Notice the SERVICE and the DB_UNIQUE_NAME for the remote location reference the standby location.

SQL>> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';

SQL>> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

The LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_MAX_PROCESSES parameters must be set to appropriate values and the REMOTE_LOGIN_PASSWORDFILE must be set to exclusive.

SQL>> ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;
SQL>> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30;
SQL>> ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;

In addition to the previous settings, it is recommended to make sure the primary is ready to switch roles and become a standby. For that to work properly we need to set the following parameters. Adjust the *_CONVERT parameters to account for any filename and path differences between the servers.

SQL>> ALTER SYSTEM SET FAL_SERVER=DB11G_STBY;

--ALTER SYSTEM SET DB_FILE_NAME_CONVERT='DB11G_STBY','DB11G' SCOPE=SPFILE;

--ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='DB11G_STBY','DB11G'  SCOPE=SPFILE;

SQL>> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

Remember, some of the parameters are not modifiable, so the database will need to be restarted before they take effect.
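
For example, a simple bounce of the primary is enough to pick up the SCOPE=SPFILE settings above:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;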

Service Setup
Entries for the primary and standby databases are needed in the "$ORACLE_HOME/network/admin/tnsnames.ora" files on both servers. You can create these using the Network Configuration Utility (netca) or manually. The following entries were used during this setup.

DB11G =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol5-112-dga1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB11G.WORLD)
    )
  )

DB11G_STBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol5-112-dga2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DB11G.WORLD)
    )
  )

Backup Primary Database

If you are planning to use an active duplicate to create the standby database, then this step is unnecessary. For a backup-based duplicate, or a manual restore, take a backup of the primary database.

$ rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Create Standby Controlfile and PFILE

Create a controlfile for the standby database by issuing the following command on the primary database.

SQL>> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/db11g_stby.ctl';

Create a parameter file for the standby database.

SQL>> CREATE PFILE='/tmp/initDB11G_stby.ora' FROM SPFILE;

Amend the PFILE making the entries relevant for the standby database. I'm making a replica of the original server, so in my case I only had to amend the following parameters.

*.db_unique_name='DB11G_STBY'
*.fal_server='DB11G'
*.log_archive_dest_2='SERVICE=db11g ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G'

Standby Server Setup (Manual)

Copy Files
Create the necessary directories on the standby server.

$ mkdir -p /u01/app/oracle/oradata/DB11G
$ mkdir -p /u01/app/oracle/fast_recovery_area/DB11G
$ mkdir -p /u01/app/oracle/admin/DB11G/adump

Copy the files from the primary to the standby server.

$ # Standby controlfile to all locations.
$ scp oracle@ol5-112-dga1:/tmp/db11g_stby.ctl /u01/app/oracle/oradata/DB11G/control01.ctl
$ cp /u01/app/oracle/oradata/DB11G/control01.ctl /u01/app/oracle/fast_recovery_area/DB11G/control02.ctl

$ # Archivelogs and backups
$ scp -r oracle@ol5-112-dga1:/u01/app/oracle/fast_recovery_area/DB11G/archivelog /u01/app/oracle/fast_recovery_area/DB11G
$ scp -r oracle@ol5-112-dga1:/u01/app/oracle/fast_recovery_area/DB11G/backupset /u01/app/oracle/fast_recovery_area/DB11G

$ # Parameter file.
$ scp oracle@ol5-112-dga1:/tmp/initDB11G_stby.ora /tmp/initDB11G_stby.ora

$ # Remote login password file.
$ scp oracle@ol5-112-dga1:$ORACLE_HOME/dbs/orapwDB11G $ORACLE_HOME/dbs

Notice that the backups were copied across to the standby server as part of the FRA copy. 
If your backups are not held within the FRA, you must make sure you copy them to the standby server and make them available from the same path as used on the primary server.
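
For example, if the backups lived in a directory such as /u01/backup (a hypothetical path, not used in this setup), they could be copied like this:

$ scp -r oracle@ol5-112-dga1:/u01/backup /u01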

Start Listener

Make sure the listener is started on the standby server.

$ lsnrctl start

Restore Backup

Create the SPFILE from the amended PFILE.

$ export ORACLE_SID=DB11G
$ sqlplus / as sysdba

SQL> CREATE SPFILE FROM PFILE='/tmp/initDB11G_stby.ora';

Restore the backup files.

$ export ORACLE_SID=DB11G
$ rman target /

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;

Create Redo Logs

Create online redo logs for the standby. It's a good idea to match the configuration of the primary server.

SQL>> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;

SQL>> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo01.log') SIZE 50M;

SQL>> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo02.log') SIZE 50M;

SQL>> ALTER DATABASE ADD LOGFILE ('/u01/app/oracle/oradata/DB11G/online_redo03.log') SIZE 50M;

SQL>> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

In addition to the online redo logs, you should create standby redo logs on both the standby and the primary database (in case of switchovers).

The standby redo logs should be at least as big as the largest online redo log and there should be one extra group per thread compared with the online redo logs.
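
If you want to check the size and number of the existing online redo logs before creating the standby redo logs, a quick query against V$LOG on the primary will show them, for example:

SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb FROM v$log ORDER BY group#;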

In my case, the following standby redo logs must be created on both servers.

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo01.log') SIZE 50M;

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo02.log') SIZE 50M;

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo03.log') SIZE 50M;

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo04.log') SIZE 50M;

Once this is complete, we can start the apply process.

Standby Server Setup (DUPLICATE)

Copy Files
Create the necessary directories on the standby server.

$ mkdir -p /u01/app/oracle/oradata/DB11G
$ mkdir -p /u01/app/oracle/fast_recovery_area/DB11G
$ mkdir -p /u01/app/oracle/admin/DB11G/adump

Copy the files from the primary to the standby server.

$ # Standby controlfile to all locations.
$ scp oracle@ol5-112-dga1:/tmp/db11g_stby.ctl /u01/app/oracle/oradata/DB11G/control01.ctl
$ cp /u01/app/oracle/oradata/DB11G/control01.ctl /u01/app/oracle/fast_recovery_area/DB11G/control02.ctl

$ # Parameter file.
$ scp oracle@ol5-112-dga1:/tmp/initDB11G_stby.ora /tmp/initDB11G_stby.ora

$ # Remote login password file.
$ scp oracle@ol5-112-dga1:$ORACLE_HOME/dbs/orapwDB11G $ORACLE_HOME/dbs

Start Listener

When using active duplicate, the standby server requires static listener configuration in a "listener.ora" file. In this case I used the following configuration.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DB11G.WORLD)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = DB11G)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ol5-112-dga2.localdomain)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

Make sure the listener is started on the standby server.

$ lsnrctl start

Create Standby Redo Logs on Primary Server

The DUPLICATE command automatically creates the standby redo logs on the standby. To make sure the primary database is configured for switchover, we must create the standby redo logs on the primary server.

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo01.log') SIZE 50M;

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo02.log') SIZE 50M;

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo03.log') SIZE 50M;

SQL>> ALTER DATABASE ADD STANDBY LOGFILE ('/u01/app/oracle/oradata/DB11G/standby_redo04.log') SIZE 50M;

Create Standby Using DUPLICATE

Start the auxiliary instance on the standby server using the temporary "init.ora" file.

$ export ORACLE_SID=DB11G
$ sqlplus / as sysdba

SQL> STARTUP NOMOUNT PFILE='/tmp/initDB11G_stby.ora';

Connect to RMAN, specifying a full connect string for both the TARGET and AUXILIARY instances. Do not attempt to use OS authentication.

$ rman TARGET sys/password@DB11G AUXILIARY sys/password@DB11G_STBY

Now issue the following DUPLICATE command.

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET db_unique_name='DB11G_STBY' COMMENT 'Is standby'
    SET LOG_ARCHIVE_DEST_2='SERVICE=db11g ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G'
    SET FAL_SERVER='DB11G' COMMENT 'Is primary'
  NOFILENAMECHECK;

A brief explanation of the individual clauses is shown below.

FOR STANDBY: This tells the DUPLICATE command that the copy is to be used as a standby, so it will not force a DBID change.

FROM ACTIVE DATABASE: The DUPLICATE will be created directly from the source datafiles, without an additional backup step.

DORECOVER: The DUPLICATE will include the recovery step, bringing the standby up to the current point in time.

SPFILE: Allows us to reset values in the spfile when it is copied from the source server.

NOFILENAMECHECK: Destination file locations are not checked.

Once the command is complete, we can start the apply process.

Start Apply Process

Start the apply process on the standby server.

# Foreground redo apply. Session never returns until cancel. 
SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

# Background redo apply. Control is returned to the session once the apply process is started.
SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

If you need to cancel the apply process, issue the following command.

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

If you prefer, you can set a delay between the arrival of the archived redo log and it being applied on the standby server using the following commands.

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30 DISCONNECT FROM SESSION;

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;

Provided you have configured standby redo logs, you can start real-time apply using the following command.

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

Test Log Transport

On the primary server, check the latest archived redo log and force a log switch.

SQL>> ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SQL>> SELECT sequence#, first_time, next_time FROM   v$archived_log ORDER BY sequence#;

SQL>> ALTER SYSTEM SWITCH LOGFILE;

Check the new archived redo log has arrived at the standby server and been applied.

SQL>> ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';

SQL>> SELECT sequence#, first_time, next_time, applied FROM   v$archived_log ORDER BY sequence#;
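
As an extra check, not part of the original steps, you can also confirm the relevant Data Guard processes are active on the standby by querying V$MANAGED_STANDBY; the MRP0 process typically shows a status of APPLYING_LOG while managed recovery is running.

SQL> SELECT process, status, thread#, sequence# FROM v$managed_standby;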

Protection Mode
There are three protection modes for the primary database:
Maximum Availability: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no standby location is available, it acts in the same manner as maximum performance mode until a standby becomes available again.

Maximum Performance: Transactions on the primary commit as soon as redo information has been written to the online redo log. Transfer of redo information to the standby server is asynchronous, so it does not impact on performance of the primary.

Maximum Protection: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no suitable standby location is available, the primary database shuts down.

By default, for a newly created standby database, the primary database is in maximum performance mode.

SQL>> SELECT protection_mode FROM v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

The mode can be switched using the following commands. Note the alterations in the redo transport attributes.

-- Maximum Availability.
SQL>> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';

SQL>> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- Maximum Performance.
SQL>> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';

SQL>> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

-- Maximum Protection.
SQL>> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';

SQL>> SHUTDOWN IMMEDIATE;

SQL>> STARTUP MOUNT;

SQL>> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;

SQL>> ALTER DATABASE OPEN;

Database Switchover

A database can operate in one of two mutually exclusive roles (primary or standby). These roles can be switched at runtime without loss of data or resetting of redo logs.

This process is known as a switchover and can be performed using the following statements.

-- Convert primary database to standby
SQL>> CONNECT / AS SYSDBA
SQL>> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;

-- Shutdown primary database
SQL>> SHUTDOWN IMMEDIATE;

-- Mount old primary database as standby database
SQL>> STARTUP NOMOUNT;
SQL>> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On the original standby database issue the following commands.

-- Convert standby database to primary
SQL>> CONNECT / AS SYSDBA
SQL>> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

-- Shutdown standby database
SQL>> SHUTDOWN IMMEDIATE;

-- Open old standby database as primary
SQL>> STARTUP;

Once this is complete, test the log transport as before. If everything is working fine, switch the primary database back to the original server by doing another switchover. This is known as a switchback.

Failover

If the primary database is not available, the standby database can be activated as a primary database using the following statements.

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
SQL>> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Since the standby database is now the primary database it should be backed up immediately.
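
For example, the same RMAN backup used earlier will do:

$ rman target /

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;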

The original primary database can now be configured as a standby. If Flashback Database was enabled on the original primary, this can be done relatively easily (see the Flashback Database section below).
If not, the whole setup process must be followed, but this time using the original primary server as the standby.

Flashback Database
It was already mentioned in the previous section, but it is worth drawing your attention to Flashback Database once more. Although a switchover/switchback is safe for both the primary and standby
database, a failover renders the original primary database useless for converting to a standby database. If flashback database is not enabled, the original primary must be scrapped and recreated as a standby database.

An alternative is to enable flashback database on the primary (and the standby if desired) so that, in the event of a failover, the primary can be flashed back to the time
before the failover and quickly converted to a standby database.
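
As a rough sketch, and assuming the fast recovery area is already configured, flashback database could be enabled on the primary as follows. The retention target of 1440 minutes (one day) is just an example value.

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;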

Read-Only Standby and Active Data Guard
Once a standby database is configured, it can be opened in read-only mode to allow query access. This is often used to offload reporting to the standby server, thereby freeing up resources
on the primary server. When open in read-only mode, archive log shipping continues, but managed recovery is stopped, so the standby database becomes increasingly out of date until
managed recovery is resumed.

To switch the standby database into read-only mode, do the following.

SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP MOUNT;
SQL>> ALTER DATABASE OPEN READ ONLY;

To resume managed recovery, do the following.

SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP MOUNT;
SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

In 11g, Oracle introduced the Active Data Guard feature. This allows the standby database to be open in read-only mode, but still apply redo information.

This means a standby can be available for querying, yet still be up to date. There are licensing implications for this feature, but the following commands show how active data guard can be enabled.

SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP MOUNT;
SQL>> ALTER DATABASE OPEN READ ONLY;
SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Since managed recovery continues with active data guard, there is no need to switch back to managed recovery from read-only mode in this case.
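
To confirm the standby is open read-only with redo apply still running, check the OPEN_MODE in V$DATABASE; in 11gR2 it should report "READ ONLY WITH APPLY".

SQL> SELECT open_mode FROM v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY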

Snapshot Standby

Introduced in 11g, snapshot standby allows the standby database to be opened in read-write mode. When switched back into standby mode, all changes made whilst in read-write mode are lost.
This is achieved using flashback database, but the standby database does not need to have flashback database explicitly enabled to take advantage of this feature, though it works just the same if it is.

If you are using RAC, turn off all but one of the RAC instances. Make sure the instance is in MOUNT mode.

SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP MOUNT;

Make sure managed recovery is disabled.

SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Convert the standby to a snapshot standby. The following example queries the V$DATABASE view to show that flashback database is not enabled prior to the conversion operation.

SQL>> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

SQL>> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL>> ALTER DATABASE OPEN;
SQL>> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
RESTORE POINT ONLY

You can now treat the standby like any other read-write database.

To convert it back to the physical standby, losing all the changes made since the conversion to snapshot standby, issue the following commands.

SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP MOUNT;
SQL>> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL>> SHUTDOWN IMMEDIATE;
SQL>> STARTUP NOMOUNT;
SQL>> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL>> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
SQL>> SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

The standby is once again in managed recovery and archivelog shipping is resumed. Notice that flashback database is still not enabled.


Thanks & Comments Please..................!



Sunday 2 October 2016

What is a Deadlock and how Deadlock situation occurs?


What is a Deadlock?
A deadlock occurs when a session (A) wants a resource held by another session (B), but that session also wants a resource held by the first session (A). There can be more than two sessions involved, but the idea is the same.

Example of Deadlock
The following example demonstrates a deadlock scenario.

Setup
create table eg_60 ( num number, txt varchar2(10) );
insert into eg_60 values ( 1, 'First' );
insert into eg_60 values ( 2, 'Second' );
commit;

select rowid, num, txt from eg_60;

ROWID                     NUM TXT
------------------ ---------- ----------
AAASuCAAEAAAAinAAA          1 First
AAASuCAAEAAAAinAAB          2 Second

Session #1:
update eg_60 set txt='ses1' where num=1;

Session #2:
update eg_60 set txt='ses2' where num=2;
update eg_60 set txt='ses2' where num=1;
Session #2 is now waiting for the TX lock held by Session #1

Session #1:
update eg_60 set txt='ses1' where num=2;
Session #1 is now waiting  on the TX lock for this row.
The lock is held by Session #2.
However Session #2 is already waiting on Session #1
This causes a deadlock scenario so deadlock detection kicks in and one of the sessions signals an ORA-60.

Session #2:
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource

Session #1 is still blocked until Session #2 commits or rolls back, as ORA-60 only rolls back the current statement and not the entire transaction.

Diagnostic information produced by an ORA-60
An ORA-60 error normally writes the error message to the alert.log together with the name of the trace file created. The exact format of this varies between Oracle releases. The trace
file will be written to the directory indicated by USER_DUMP_DEST or BACKGROUND_DUMP_DEST, depending on the type of process that creates the trace file.

The trace file will contain a deadlock graph and additional information similar to that shown below. This is the trace output from the above example which signaled an ORA-60 to Session #2:

DEADLOCK DETECTED ( ORA-00060 )

[Transaction Deadlock]

The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:

Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-00050018-000004fa        22     132     X             19     191           X
TX-00070008-00000461        19     191     X             22     132           X

session 132: DID 0001-0016-00000005     session 191: DID 0001-0013-0000000C
session 191: DID 0001-0013-0000000C     session 132: DID 0001-0016-00000005

Rows waited on:
  Session 132: obj - rowid = 00012B82 - AAASuCAAEAAAAinAAA
  (dictionary objn - 76674, file - 4, block - 2215, slot - 0)
  Session 191: obj - rowid = 00012B82 - AAASuCAAEAAAAinAAB
  (dictionary objn - 76674, file - 4, block - 2215, slot - 1)

----- Information for the OTHER waiting sessions -----
Session 191:
  sid: 191 ser: 5 audsid: 340002 user: 88/DAVID flags: 0x45
  pid: 19 O/S info: user: davidx, term: UNKNOWN, ospid: 3163
    image: oracle@quad.xx (TNS V1-V3)
  client details:
    O/S info: user: davidx, term: pts/3, ospid: 3097
    machine: quad.xx program: sqlplus@quad.xx (TNS V1-V3)
    application name: SQL*Plus, hash value=3669949024
  current SQL:
  update eg_60 set txt='ses1' where num=2

Information for THIS session:

----- Current SQL Statement for this session (sql_id=13b96yk6y5zny) -----
update eg_60 set txt='ses2' where num=1
===================================================
PROCESS STATE
-------------
.....

How does a deadlock situation occur?

Deadlock is a situation that occurs when two or more users are waiting for data locked by each other. Oracle automatically detects a deadlock and resolves them by rolling back one of the statements involved in the deadlock. The example below demonstrates how deadlock occurs.

Suppose there is a table test with two rows.

create table test (
  row_num number,
  txt varchar2(10) );

insert into test values ( 1, 'abcd' );
insert into test values ( 2, 'xyz' );
commit;

SQL> Select * from test ;

ROW_NUM    TXT
1          abcd
2          xyz


Ses#1:  Issue the following command:
SQL> update test set txt='ses1' where row_num=1;

Ses#2:  Issue the following command:

SQL> update test set txt='ses2' where row_num=2;

SQL> update test set txt='ses2' where row_num=1;

Ses#2 is now waiting for the lock held by Ses#1

Ses#1: Issue the following command:

SQL> update test set txt='ses1' where row_num=2;

This update would cause Ses#1 to wait on the lock held by Ses#2, but Ses#2 is already waiting on this session. This causes a deadlock.

Deadlock Detection

Transaction deadlocks occur when two or more transactions attempt to access an object with incompatible lock modes. A lock monitoring script can be used to identify blocking and waiting sessions in the database (a sketch is shown below). The query depends upon objects that are created by the script $ORACLE_HOME/rdbms/admin/dbmslock.sql. Log on as SYS or with SYSDBA authority and run the script in each database. You may have to run the monitoring query numerous times before you identify the transaction that is causing the problem.
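
The original monitoring script is not reproduced in this post. As a minimal substitute sketch, the following query joins DBA_WAITERS (created by $ORACLE_HOME/rdbms/admin/catblock.sql, described later in this post) to V$SESSION to show who is waiting on whom; the column choice here is illustrative rather than definitive.

SELECT w.waiting_session,
       w.holding_session,
       w.lock_type,
       w.mode_held,
       w.mode_requested,
       s.sql_id
FROM   dba_waiters w
       JOIN v$session s ON s.sid = w.waiting_session;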

What does the trace information mean?
In this section we explain each part of the above trace.
Note that not all of this information is produced in all Oracle releases.

Section 1: Deadlock Graph
Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-00050018-000004fa        22     132     X             19     191           X
TX-00070008-00000461        19     191     X             22     132           X

session 132: DID 0001-0016-00000005     session 191: DID 0001-0013-0000000C
session 191: DID 0001-0013-0000000C     session 132: DID 0001-0016-00000005

This shows which process was holding each lock, and which process was waiting for each lock.
For each resource there are two parts, each giving information on the relevant process:
  • Blocker(s)
  • Waiter(s)
The columns in the graph indicate:
  • Resource Name: the lock name being held / waited for
  • process: V$PROCESS.PID of the blocking / waiting session
  • session: V$SESSION.SID of the blocking / waiting session
  • holds: the mode the lock is held in
  • waits: the mode the lock is requested in (waiting for)
So in this example:
SID 132 (Process 22) is holding TX-00050018-000004fa in eXclusive mode and is requesting TX-00070008-00000461 in eXclusive mode.

SID 191 (Process 19) is holding TX-00070008-00000461  in eXclusive mode and is requesting  TX-00050018-000004fa in eXclusive mode.

The important things to note here are the LOCK TYPE, the MODE HELD and the MODE REQUESTED for each resource as these give a clue as to the reason for the deadlock.

Section 2: Rows waited on
Rows waited on:
  Session 132: obj - rowid = 00012B82 - AAASuCAAEAAAAinAAA
  (dictionary objn - 76674, file - 4, block - 2215, slot - 0)
  Session 191: obj - rowid = 00012B82 - AAASuCAAEAAAAinAAB
  (dictionary objn - 76674, file - 4, block - 2215, slot - 1)

If the deadlock is due to row-level locks being obtained in different orders, then this section of the trace file indicates the exact rows that each session is waiting to lock for itself. I.e. if the lock requests are TX mode X waits, then the 'Rows waited on' may show useful information.
For any other lock type / mode the 'Rows waited on' is not relevant and usually shows as "no row".

In the above example:

SID 132 was waiting for ROWID 'AAASuCAAEAAAAinAAA' of object 76674
SID 191 was waiting for ROWID 'AAASuCAAEAAAAinAAB' of object 76674

This can be decoded to show the exact row/s.
Eg: SID 132 can be shown to be waiting thus:

SELECT owner, object_name, object_type FROM dba_objects WHERE object_id = 76674;

OWNER      OBJECT_NAM OBJECT_TYP
---------- ---------- ----------
DAVID       EG_60      TABLE

SELECT * FROM david.eg_60 WHERE ROWID='AAASuCAAEAAAAinAAA';

       NUM TXT
---------- ----------
         1 ses1

Section 3: Information on OTHER waiting session(s)
----- Information for the OTHER waiting sessions -----
Session 191:
  sid: 191 ser: 5 audsid: 340002 user: 88/DAVID flags: 0x45
  pid: 19 O/S info: user: davidx, term: UNKNOWN, ospid: 3163
    image: oracle@quad.xx (TNS V1-V3)
  client details:
    O/S info: user: davidx, term: pts/3, ospid: 3097
    machine: quad.xx program: sqlplus@quad.xx (TNS V1-V3)
    application name: SQL*Plus, hash value=3669949024
  current SQL:
  update eg_60 set txt='ses1' where num=2

This section displays information regarding the other sessions (apart from the session that produced the ORA-60 deadlock trace) that are involved in the deadlock. The information includes:
  • session details
  • client details
  • Current SQL
    In this case: update eg_60 set txt='ses1' where num=2
Section 4: Information for this session
Information for THIS session:

----- Current SQL Statement for this session (sql_id=13b96yk6y5zny) -----
update eg_60 set txt='ses2' where num=1
===================================================
PROCESS STATE
-------------
.....

Displays the current SQL for the session that created the ORA-60 trace, as well as a complete PROCESS STATE dump for the session.

Avoiding Deadlock
The above deadlock example occurs because the application which issues the update statements has no strict ordering of the rows it updates. Applications can avoid row-level lock deadlocks by enforcing some ordering of row updates. This is purely an application design issue.
Eg: If the above statements had been forced to update rows in ascending 'num' order then:

Session #1:          update eg_60 set txt='ses1' where num=1;
Session #2:          update eg_60 set txt='ses2' where num=1;
                         > Session #2 is now waiting for the TX lock held by Ses#1
Session #1:          update eg_60 set txt='ses1' where num=2;
                          > Succeeds as no-one is locking this row
                         commit;
                           > Session #2 is released as it is no longer waiting for this TX
Session #2:           update eg_60 set txt='ses2' where num=2;
                           commit;

The strict ordering of the updates ensures that a deadly embrace cannot occur. This is the simplest deadlock scenario to identify and resolve. Note that the deadlock need not be between rows of the same table - it could be between rows in different tables. Hence it is important to place rules on the order in which tables are updated as well as the order of the rows within each table.
Other deadlock scenarios are discussed below.
Different Lock Types and Modes
The most common lock types seen in deadlock graphs are TX and TM locks. These may appear held / requested in a number of modes. It is the lock type and modes which help determine what situation has caused the deadlock.
Lock Type  Mode Requested      Probable Cause
---------  ------------------  ----------------------------------------------------------
TX         X (mode 6)          Application row-level conflict. Avoid by recoding the
                               application to ensure rows are always locked in a
                               particular order.
TX         S (mode 4)          There are a number of reasons that a TX lock may be
                               requested in S mode.
TM         SSX (mode 5) or     This is usually related to the existence of foreign key
           S (mode 4)          constraints where the columns are not indexed on the
                               child table.
Although other deadlock scenarios can happen, the above are the most common.

TM locks - which object?
ID1 of a TM lock indicates which object is being locked. This makes it very simple to isolate the object involved in a deadlock when a TM lock is involved.

The TM lock id is in the form TM-00012B85-00000000 where 00012B85 is the object number in hexadecimal format.
  1. Convert 00012B85 from hexadecimal to a decimal number
    Hexadecimal 00012B85 is  Decimal 76677
  2. Locate the object using DBA_OBJECTS
 SELECT owner,object_name,object_type FROM dba_objects WHERE object_id= 76677;

OWNER      OBJECT_NAM OBJECT_TYP
---------- ---------- ----------
DAVID      EMP        TABLE

This is the object id that the TM lock covers.
Note that with TM locks it is possible that the lock is already held in some mode, in which case the REQUEST is to escalate the lock mode.

How to obtain Additional Information
If you are still having problems identifying the cause of a deadlock, Oracle Support may be able to help. Additional information can be collected by adding the following to the init.ora parameters:
event="60 trace name errorstack level 3;name systemstate level 266"
or by setting the event using ALTER SYSTEM, in which case the event is set for the life of the Oracle instance and only for new sessions:
ALTER SYSTEM SET EVENTS '60 trace name errorstack level 3;name systemstate level 266';

Note that this can generate a very large trace file which may get
truncated unless MAX_DUMP_FILE_SIZE is large enough to accommodate the output.

When this is set any session encountering an ORA-60 error will write information about all processes on the database at the time of the error. This may help show the cause of the deadlock as it can show information about both users involved in the deadlock. Oracle Support will need
all the information you have collected in addition to the new trace file to help identify where in the application you should look for problems.

It may be necessary to run the offending jobs with SQL_TRACE or the 10046 event enabled to show the order in which each session issues its commands in order to get into a deadlock scenario.
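
For example, tracing could be enabled in the suspect sessions with either of the following commands (level 12 includes bind and wait information):

SQL> ALTER SESSION SET SQL_TRACE = TRUE;

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
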
References
Note:62354.1 TX lock waits and why they occur
Note:33453.1 TM locks and Foreign Key Constraints
Note:38373.1 Example TM locks During Referential Integrity Enforcement

Which views can be used to detect locking problems?
A number of Oracle views can be used to detect locking problems.

V$SESSION_WAIT
When a session is waiting on a resource, it can be found waiting on the enqueue wait event.
Example:
SELECT * FROM V$SESSION_WAIT WHERE EVENT = 'enqueue';
  • SID identifier of the waiting session
  • P1, P2, P3 determine the resource when event = 'enqueue'
  • SECONDS_IN_WAIT gives how long the session has been waiting
V$SESSION
Session information and row locking information
  • SID, SERIAL# identifier of the session
  • EVENT event waited on
  • P1, P2, P3 determine the resource when event = 'enqueue'
  • SECONDS_IN_WAIT gives how long the session has been waiting
  • LOCKWAIT address of the lock being waited for, otherwise null
  • ROW_WAIT_OBJ# object identifier of the object we are waiting on (object_id of dba_objects)
  • ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# file_id, block_id and row location within the block of the locked row
V$LOCK
List of all the locks in the system
  • SID identifier of session holding the lock
  • TYPE, ID1 and ID2 determine the resource
  • LMODE and REQUEST indicate which queue the session is waiting on, as follows:
    LMODE > 0, REQUEST = 0 owner
    LMODE = 0, REQUEST > 0 acquirer
    LMODE > 0, REQUEST > 0 converter
  • CTIME time since current mode was converted
  • BLOCK are we blocking another lock
    BLOCK = 0 non blocking
    BLOCK = 1 blocking others
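
For example, a commonly used self-join on V$LOCK lists each blocking session alongside the sessions waiting on the same resource (a generic sketch, not specific to the deadlock example above):

SELECT blocker.sid AS blocking_sid,
       waiter.sid  AS waiting_sid,
       blocker.type,
       blocker.id1,
       blocker.id2,
       waiter.request
FROM   v$lock blocker
       JOIN v$lock waiter
         ON  blocker.id1 = waiter.id1
         AND blocker.id2 = waiter.id2
WHERE  blocker.block = 1
AND    waiter.request > 0;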

DBA_LOCK or DBA_LOCKS
Formatted view on V$LOCK (created via $ORACLE_HOME/rdbms/admin/catblock.sql)
  • SESSION_ID == SID in V$LOCK
  • LOCK_TYPE, LOCK_ID1, LOCK_ID2 formatted value of TYPE, ID1, ID2 from V$LOCK
  • MODE_HELD and MODE_REQUESTED formatted value of LMODE and REQUEST from V$LOCK
  • LAST_CONVERT == CTIME of V$LOCK
  • BLOCKING_OTHERS formatted value of BLOCK from V$LOCK

V$TRANSACTION_ENQUEUE
Subset of V$LOCK for the blocking TX resources only
(same description as for the V$LOCK view)

V$ENQUEUE_LOCK
Subset of V$LOCK for the system resources only and
blocked TX resources only. (same description as for the V$LOCK view)

DBA_DML_LOCKS
Subset of the V$LOCK for the DML (TM) locks only
Created via $ORACLE_HOME/rdbms/admin/catblock.sql
Same description as the DBA_LOCK view

V$LOCKED_OBJECT

Same info as DBA_DML_LOCKS, but linked with the rollback and session information
  • XIDUSN, XIDSLOT and XIDSQN rollback information to be linked with V$TRANSACTION
  • OBJECT_ID object being locked
  • SESSION_ID session id
  • ORACLE_USERNAME oracle user name
  • OS_USER_NAME OS user name
  • PROCESS OS process id
  • LOCKED_MODE lock mode
V$RESOURCE
List of all the currently locked resources in the system.
Each row can be associated with one or more rows in V$LOCK
  • TYPE, ID1 and ID2 determine the resource
DBA_DDL_LOCKS 
Has a row for each DDL lock that is being held, and one row for each outstanding request for a DDL lock.
It is a subset of DBA_LOCKS.
Same description as the DBA_LOCK view

DBA_WAITERS
View that retrieves information for each session waiting on a
lock (created via $ORACLE_HOME/rdbms/admin/catblock.sql)
  • WAITING_SESSION waiting session
  • HOLDING_SESSION holding session
  • LOCK_TYPE, LOCK_ID1, LOCK_ID2 resource locked
  • MODE_HELD lock type held
  • MODE_REQUESTED lock type requested
DBA_BLOCKERS
View that gives the blocking sessions (created via  $ORACLE_HOME/rdbms/admin/catblock.sql)
  • HOLDING_SESSION holding session

Thanks Comments Please..................!