Oracle® Data Guard Concepts and Administration
10g Release 1 (10.1)

Part Number B10823-01

10 Data Guard Scenarios

This chapter provides a collection of scenarios you might encounter while administering your Data Guard configuration. Each scenario can be adapted to your specific environment. Table 10-1 lists the scenarios presented in this chapter.

Table 10-1  Data Guard Scenarios

Reference     Scenario
Section 10.1  Setting Up and Verifying Archival Destinations
Section 10.2  Choosing the Best Available Standby Database for a Role Transition
Section 10.3  Using Flashback Database After a Failover
Section 10.4  Using Flashback Database After Issuing an Open Resetlogs Statement
Section 10.5  Using a Physical Standby Database with a Time Lag
Section 10.6  Recovering from a Network Failure
Section 10.7  Recovering After the NOLOGGING Clause Is Specified
Section 10.8  Resolving Archive Gaps Manually
Section 10.9  Creating a Standby Database That Uses OMF or ASM

10.1 Setting Up and Verifying Archival Destinations

The following sections show how to set up the LOG_ARCHIVE_DEST_n initialization parameter and other related parameters to enable and disable role-specific archiving.

10.1.1 Configuring a Primary Database and a Physical Standby Database

Figure 10-1 shows the chicago primary database, the boston physical standby database, and the initialization parameters for each system.

Figure 10-1 Primary and Physical Standby Databases Before a Role Transition


DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'SERVICE=boston
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
STANDBY_ARCHIVE_DEST=/arch1/chicago/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/boston/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
'SERVICE=chicago
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
STANDBY_ARCHIVE_DEST=/arch1/boston/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

The following table describes the archival processing shown in Figure 10-1:

LOG_ARCHIVE_DEST_1
  Chicago (primary role): Directs archiving of redo data to the local archived redo log files in /arch1/chicago/.
  Boston (physical standby role): Directs archiving of redo data to the local archived redo log files in /arch1/boston/.

LOG_ARCHIVE_DEST_2
  Chicago (primary role): Directs transmission of the redo data to the remote physical standby database boston.
  Boston (physical standby role): Is ignored; valid only when boston is running in the primary role.

STANDBY_ARCHIVE_DEST
  Chicago (primary role): Is ignored; valid only when chicago is running in the standby role.
  Boston (physical standby role): Directs archiving of redo data to the archived redo log files in the local /arch1/boston/ directory.

Figure 10-2 shows the same configuration after a switchover.

Figure 10-2 Primary and Physical Standby Databases After a Role Transition


The following table describes the archival processing shown in Figure 10-2:

LOG_ARCHIVE_DEST_1
  Chicago (physical standby role): Directs archiving of redo data to the local /arch1/chicago/ directory.
  Boston (primary role): Directs archiving of redo data to the local archived redo log files in /arch1/boston/.

LOG_ARCHIVE_DEST_2
  Chicago (physical standby role): Is ignored; valid only when chicago is running in the primary role.
  Boston (primary role): Directs transmission of redo data to the remote physical standby destination chicago.

STANDBY_ARCHIVE_DEST
  Chicago (physical standby role): Directs archiving of redo data to the archived redo log files in the local /arch1/chicago/ directory.
  Boston (primary role): Is ignored; valid only when boston is running in the standby role.

10.1.2 Configuring a Primary Database and a Logical Standby Database

Figure 10-3 shows the chicago database running in the primary role, the denver database running in the logical standby role, and the initialization parameters for each system. Inactive components are grayed out.

Figure 10-3 Configuring Destinations for a Primary Database and a Logical Standby Database


DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,denver)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'LOCATION=
/arch2/chicago/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
'SERVICE=denver
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=/arch2/chicago/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

DB_UNIQUE_NAME=denver
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,denver)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/denver/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_2=
'LOCATION=
/arch2/denver/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3=
'SERVICE=chicago
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=/arch2/denver
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

The following table describes the archival processing shown in Figure 10-3:

LOG_ARCHIVE_DEST_1
  Chicago (primary role): Directs archiving of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/.
  Denver (logical standby role): Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
  Chicago (primary role): Is ignored; valid only when chicago is running in the standby role. (You must configure a standby redo log on this site to perform switchovers.)
  Denver (logical standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/denver/.

LOG_ARCHIVE_DEST_3
  Chicago (primary role): Directs transmission of redo data to the remote logical standby destination denver.
  Denver (logical standby role): Is ignored; valid only when denver is running in the primary role.

STANDBY_ARCHIVE_DEST
  Chicago (primary role): Is ignored; valid only when chicago is running in the standby role.
  Denver (logical standby role): Directs archiving of redo data received from the primary database directly to archived redo log files in /arch2/denver/.

Unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:

  - Archived redo log files that store redo data generated by the logical standby database. In Figure 10-3, this is the destination LOG_ARCHIVE_DEST_1=LOCATION=/arch1/denver/.
  - Archived redo log files that store redo data received from the primary database. In Figure 10-3, this is the destination LOG_ARCHIVE_DEST_2=LOCATION=/arch2/denver/.

Because the example configurations shown in Figure 10-3 (and Figure 10-4) do not include a physical standby database, the configuration sets up the LOG_ARCHIVE_DEST_3 destination for switchover with the logical standby database. Figure 10-4 shows the same configuration after a switchover.

Figure 10-4 Primary and Logical Standby Databases After a Role Transition


The following table describes the archival processing shown in Figure 10-4:

LOG_ARCHIVE_DEST_1
  Chicago (logical standby role): Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/chicago/.
  Denver (primary role): Directs archiving of redo data from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
  Chicago (logical standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/chicago/.
  Denver (primary role): Is ignored; valid only when denver is running in the standby role.

LOG_ARCHIVE_DEST_3
  Chicago (logical standby role): Is ignored; valid only when chicago is running in the primary role.
  Denver (primary role): Directs transmission of redo data to the remote logical standby destination chicago.

STANDBY_ARCHIVE_DEST
  Chicago (logical standby role): Directs archiving of redo data received from the primary database directly to the archived redo log files in /arch2/chicago/.
  Denver (primary role): Is ignored; valid only when denver is running in the standby role.

10.1.3 Configuring Both Physical and Logical Standby Databases

Figure 10-5 shows the chicago database running in the primary role, the boston database running in the physical standby role, and the denver database running in the logical standby database role. The initialization parameters are shown under each system. Components that are grayed out are inactive for the database's current role. This example assumes that a switchover would occur only between chicago and boston. In this configuration, the denver logical standby database is intended to be a reporting database only; denver will never be the target of a switchover or run in the primary database role.

Figure 10-5 Configuring a Primary Database with Physical and Logical Standby Databases


DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/boston/
VALID_FOR=
(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
'SERVICE=denver
VALID_FOR=
(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3=
'SERVICE=chicago
VALID_FOR=
(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=
/arch1/boston/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=
(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'SERVICE=denver
VALID_FOR=
(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3=
'SERVICE=boston
VALID_FOR=
(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=
/arch1/chicago/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

DB_UNIQUE_NAME=denver
LOG_ARCHIVE_CONFIG=
'DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/denver/
VALID_FOR=
(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_2=
'LOCATION=
/arch2/denver/
VALID_FOR=
(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
STANDBY_ARCHIVE_DEST=/arch2/denver
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

The following table describes the archival processing shown in Figure 10-5:

LOG_ARCHIVE_DEST_1
  Chicago (primary role): Directs archiving of redo data from the online redo log files to the local archived redo log files in /arch1/chicago/.
  Boston (standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch1/boston/.
  Denver (standby role): Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
  Chicago (primary role): Directs transmission of redo data to the remote logical standby destination denver.
  Boston (standby role): Is ignored; valid only when boston is running in the primary role.
  Denver (standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/denver/.

LOG_ARCHIVE_DEST_3
  Chicago (primary role): Directs transmission of redo data to the remote physical standby destination boston.
  Boston (standby role): Is ignored; valid only when boston is running in the primary role.
  Denver (standby role): Is not defined for this database.

STANDBY_ARCHIVE_DEST
  Chicago (primary role): Is ignored; valid only for the standby role.
  Boston (standby role): Directs archiving of redo data received from the primary database directly to archived redo log files in /arch1/boston/.
  Denver (standby role): Directs archiving of redo data received from the primary database directly to archived redo log files in /arch2/denver/.

Figure 10-6 shows the same configuration after a switchover changes the chicago database to the standby role and the boston database to the primary role.

Figure 10-6 Primary, Physical, and Logical Standby Databases After a Role Transition


The following table describes the archival processing shown in Figure 10-6:

LOG_ARCHIVE_DEST_1
  Chicago (standby role): Directs archival of redo data from the standby redo log files to the local archived redo log files in /arch1/chicago/.
  Boston (primary role): Directs archival of redo data from the online redo log files to the local archived redo log files in /arch1/boston/.
  Denver (standby role): Directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
  Chicago (standby role): Is ignored; valid only when chicago is running in the primary role.
  Boston (primary role): Directs transmission of redo data to the remote logical standby destination denver.
  Denver (standby role): Directs archival of redo data from the standby redo log files to the local archived redo log files in /arch2/denver/.

LOG_ARCHIVE_DEST_3
  Chicago (standby role): Is ignored; valid only when chicago is running in the primary role.
  Boston (primary role): Directs transmission of redo data to the remote physical standby destination chicago.
  Denver (standby role): Is not defined for this database.

STANDBY_ARCHIVE_DEST
  Chicago (standby role): Directs archival of redo data received from the primary database directly to the archived redo log files in /arch1/chicago/.
  Boston (primary role): Is ignored; valid only for the standby role.
  Denver (standby role): Directs archival of redo data received from the primary database directly to archived redo log files in /arch2/denver/.

10.1.4 Verifying the Current VALID_FOR Attribute Settings for Each Destination

To see whether the current VALID_FOR attribute settings are valid right now for each destination in the Data Guard configuration, query the V$ARCHIVE_DEST view, as shown in Example 10-1.

Example 10-1 Finding VALID_FOR Information in the V$ARCHIVE_DEST View

SQL> SELECT DEST_ID,VALID_TYPE,VALID_ROLE,VALID_NOW FROM V$ARCHIVE_DEST;
DEST_ID  VALID_TYPE      VALID_ROLE   VALID_NOW
-------  --------------- ------------ ----------------
1        ALL_LOGFILES    ALL_ROLES    YES
2        STANDBY_LOGFILE STANDBY_ROLE WRONG VALID_TYPE
3        ONLINE_LOGFILE  STANDBY_ROLE WRONG VALID_ROLE
4        ALL_LOGFILES    ALL_ROLES    UNKNOWN
5        ALL_LOGFILES    ALL_ROLES    UNKNOWN
6        ALL_LOGFILES    ALL_ROLES    UNKNOWN
7        ALL_LOGFILES    ALL_ROLES    UNKNOWN
8        ALL_LOGFILES    ALL_ROLES    UNKNOWN
9        ALL_LOGFILES    ALL_ROLES    UNKNOWN
10       ALL_LOGFILES    ALL_ROLES    UNKNOWN
 10 rows selected.

In Example 10-1, each line represents one of the ten destinations in the Data Guard configuration. The first line indicates that the VALID_FOR attribute for LOG_ARCHIVE_DEST_1 is set to (ALL_LOGFILES,ALL_ROLES), which is the only keyword pair that is valid at all times.

More interesting are the second and third lines in the view, which are both currently invalid, but for different reasons:

  - The second line shows WRONG VALID_TYPE for LOG_ARCHIVE_DEST_2: the type of log file this destination is configured to archive (standby redo log files) does not match the type of log file the database is currently archiving.
  - The third line shows WRONG VALID_ROLE for LOG_ARCHIVE_DEST_3: this destination is configured to be valid only in the standby role, which does not match the role in which the database is currently running.
All of the other destinations are shown as UNKNOWN, which indicates the destinations are either undefined or the database is started and mounted but archiving is not currently taking place. See the V$ARCHIVE_DEST view in the Oracle Database Reference for information about these and other columns.
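
If the output is dominated by undefined placeholder rows, you can narrow the query to destinations that are actually in use. The following is a minimal variant that assumes the standard STATUS column of V$ARCHIVE_DEST, where INACTIVE indicates an undefined destination:

SQL> SELECT DEST_ID,VALID_TYPE,VALID_ROLE,VALID_NOW
  2> FROM V$ARCHIVE_DEST WHERE STATUS <> 'INACTIVE';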

10.2 Choosing the Best Available Standby Database for a Role Transition

Every standby database is associated with only one primary database. A single primary database can, however, support multiple physical or logical standby databases. This scenario illustrates how to determine the information you need to choose the best available standby database for a failover or switchover.

If the configuration contains both physical and logical standby databases, Oracle recommends that you perform the role transition using the best available physical standby database. This is recommended because:

  - A logical standby database might contain only a subset of the data present in the primary database.
  - A role transition involving a logical standby database requires that any existing physical standby databases be re-created from a copy of the new primary database before they can continue to participate in the Data Guard configuration.

Because of these limitations, a logical standby database should be considered as the target for a role transition only in the following special situations:

  - The configuration contains only logical standby databases.
  - It is critical to fail over a standby database to the primary role as quickly as possible, and the most current logical standby database in the configuration is significantly more current than the most current physical standby database.

Once you determine whether to use a physical or a logical standby database, the specific standby database you select as the target for the role transition is determined by how much of the redo data generated by the primary database is available at the standby location and by how much of it has been applied to the standby database. Because the primary database remains accessible during switchovers, there will be no loss of data, and the choice of the standby database used during a switchover will only affect the time required to complete the switchover. For failovers, however, the choice of standby database might involve a tradeoff between the additional risk of data loss and the time required to transition a standby database to the primary role.

10.2.1 Example: Best Physical Standby Database for a Failover

In a disaster, the most critical task for the DBA is to determine whether it is quicker and safer to repair the primary database or to fail over to a standby database. After deciding that a failover is necessary, when multiple physical standby databases are configured, the DBA must choose which physical standby database is the best target for the failover. Although many environmental factors can affect which standby database represents the best choice, this scenario assumes all such factors are equal in order to emphasize the assessment of data loss.

This scenario begins with a Data Guard configuration consisting of the HQ primary database and two physical standby databases, SAT and NYC. The HQ database is operating in maximum availability protection mode, and the standby databases are each configured with three standby redo log files. See Section 1.4 for more information about the maximum availability protection mode for physical standby databases.

Table 10-2 provides information about the databases used in this scenario.

Table 10-2  Identifiers for the Physical Standby Database Example

Identifier                             HQ Database           SAT Database                        NYC Database
Location                               San Francisco         Seattle                             New York City
Database name                          HQ                    HQ                                  HQ
Instance name                          HQ                    SAT                                 NYC
Initialization parameter file          hq_init.ora           sat_init.ora                        nyc_init.ora
Control file                           hq_cf1.f              sat_cf1.f                           nyc_cf1.f
Datafile                               hq_db1.f              sat_db1.f                           nyc_db1.f
Redo log file 1                        hq_log1.f             sat_log1.f                          nyc_log1.f
Redo log file 2                        hq_log2.f             sat_log2.f                          nyc_log2.f
Standby redo log file 1                hq_srl1.f             sat_srl1.f                          nyc_srl1.f
Standby redo log file 2                hq_srl2.f             sat_srl2.f                          nyc_srl2.f
Standby redo log file 3                hq_srl3.f             sat_srl3.f                          nyc_srl3.f
Primary protection mode                Maximum availability  Not applicable                      Not applicable
Standby protection mode                Not applicable        Maximum availability (synchronous)  Maximum performance (asynchronous)
Network service name (client-defined)  hq_net                sat_net                             nyc_net
Listener                               hq_listener           sat_listener                        nyc_listener


Note:

The New York City database is operating in maximum performance mode because sending redo data synchronously from HQ to NYC might affect primary database performance during peak workload periods. However, the New York City standby database is still considered a viable candidate for failovers because it uses a standby redo log.


Assume that an event occurs in San Francisco where the primary site is located, and the primary site is damaged in such a way that it cannot be repaired in a timely manner. You must fail over to one of the standby databases. You cannot assume that the DBA who set up the multiple standby database configuration is available to decide to which standby database to fail over. Therefore, it is imperative to have a disaster recovery plan at each standby site, as well as at the primary site. Each member of the disaster recovery team needs to know about the disaster recovery plan and be aware of the procedures to follow. This scenario identifies the information you need when deciding which standby database should be the target of the failover.

One method of conveying information to the disaster recovery team is to include a ReadMe file at each standby site. This ReadMe file is created and maintained by the DBA and should describe how to perform the failover at that site. See Appendix F for a sample ReadMe file.

When choosing a standby database, there are two critical considerations: which standby database has received the most recent redo data and which standby database has applied the most redo.

Follow these steps to determine which standby database is the best candidate for failover when only physical standby databases are in the configuration. Always start with the standby database providing the highest protection level. In this scenario, the Seattle standby database provides the highest protection level because it is operating in maximum availability protection mode.

Step 1 Connect to the SAT physical standby database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 2 Determine how much current redo data is available in the archived redo log file.

Query the columns in the V$MANAGED_STANDBY view, as shown:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK#, BLOCKS
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';
   THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- ---------- ---------- ----------
         1         14        234         16

This standby database received 249 blocks of redo data from the primary database. To compute the number of blocks received, add the BLOCKS column value to the BLOCK# column value, and subtract 1 (because block number 234 is included in the 16 blocks received).
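
Equivalently, you can let the database do this arithmetic. The following variant combines the same V$MANAGED_STANDBY columns in a single expression to return the highest block received:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK# + BLOCKS - 1 AS LAST_BLOCK
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';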


Note:

Depending on how long the primary database has been unavailable, the previous query might not return any selected rows because the RFS process might detect the network disconnection and terminate itself. If this occurs, it is always best to select a standby database that is configured to receive the redo data in a synchronous manner.


Step 3 Obtain a list of the archived redo log files that were applied or are currently pending application to the SAT database.

Query the V$ARCHIVED_LOG view:

SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_sat_2.log           2 YES 
/oracle/dbs/hq_sat_3.log           3 YES 
/oracle/dbs/hq_sat_4.log           4 YES
/oracle/dbs/hq_sat_5.log           5 YES 
/oracle/dbs/hq_sat_6.log           6 YES 
/oracle/dbs/hq_sat_7.log           7 YES 
/oracle/dbs/hq_sat_8.log           8 YES 
/oracle/dbs/hq_sat_9.log           9 YES
/oracle/dbs/hq_sat_10.log         10 YES
/oracle/dbs/hq_sat_11.log         11 YES
/oracle/dbs/hq_sat_13.log         13  NO

This output indicates that archived redo log file 11 was the most recent archived redo log file completely applied to the standby database.

Also, notice the gap in the sequence numbers in the SEQUENCE# column. In the example, the gap indicates the SAT standby database is missing archived redo log file number 12.
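
Rather than scanning the sequence numbers by eye, you can also ask a physical standby database to report a gap directly by querying the V$ARCHIVE_GAP view (see Section 10.8 for more about resolving gaps):

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;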

Step 4 Connect to the NYC database to determine if it is more recent than the SAT standby database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 5 Determine how much current redo data is available in the archived redo log file.

Query the columns in the V$MANAGED_STANDBY view as shown:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK#, BLOCKS
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';
   THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- ---------- ---------- ----------
         1         14        157         93

This standby database has also received 249 blocks of redo information from the primary database. To compute the number of blocks received, add the BLOCKS column value to the BLOCK# column value, and subtract 1 (because block number 157 is included in the 93 blocks received).

Step 6 Obtain a list of the archived redo log files that were applied or are currently pending application to the NYC database.

Query the V$ARCHIVED_LOG view:

SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_nyc_2.log           2 YES 
/oracle/dbs/hq_nyc_3.log           3 YES 
/oracle/dbs/hq_nyc_4.log           4 YES
/oracle/dbs/hq_nyc_5.log           5 YES 
/oracle/dbs/hq_nyc_6.log           6 YES 
/oracle/dbs/hq_nyc_7.log           7 YES 
/oracle/dbs/hq_nyc_8.log           8  NO 
/oracle/dbs/hq_nyc_9.log           9  NO
/oracle/dbs/hq_nyc_10.log         10  NO
/oracle/dbs/hq_nyc_11.log         11  NO
/oracle/dbs/hq_nyc_12.log         12  NO
/oracle/dbs/hq_nyc_13.log         13  NO

This output indicates that archived redo log file 7 was the most recent archived redo log file completely applied to the standby database.

More redo data was received at this location, but less was applied to the standby database.

Step 7 Choose the best target standby database.

In most cases, the physical standby database you choose as a failover target should provide a balance between the risk of data loss and the time required to perform the role transition. As you analyze this information to make a decision about the best failover candidate in this scenario, consider the following:

  - Both standby databases received the same redo data from the primary database (through block 249 of sequence 14), so the risk of data loss is the same for either choice.
  - The SAT database applied redo through log file 11 but is missing log file 12, which must be copied from the NYC database before the failover.
  - The NYC database has no gaps in its received log files, but it applied redo only through log file 7, so more redo must be applied before the failover can complete.
Based on your business requirements, choose the best target standby database.

Step 8 Bring the selected standby database to its most current state.

If you chose the SAT database as the best target based on your business requirements, perform the following steps:
  1. Retrieve any missing archived redo log files using an operating system copy utility. (This example uses the UNIX cp command.) In this case, the SAT database is missing archived redo log file 12. Because the NYC database received this archived redo log file, you can copy it from the NYC database to the SAT database, as follows:
    % cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log
    
    
  2. Determine if a partial archived redo log file exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the SAT database for the presence of an archived redo log file named hq_sat_14.log:
    % ls -l /net/sat/oracle/dbs/hq_sat_14.log 
    /net/sat/oracle/dbs/hq_sat_14.log: No such file or directory
    
    

    Because the SAT standby database is using standby redo log files, there should not be any partial archived redo log files.

  3. Register the retrieved archived redo log file. (There is no need to stop log apply services).
    SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
    
    
  4. Query the V$ARCHIVED_LOG view again to make sure the archived redo log files were successfully applied:
    SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
      2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    
    FILE_NAME                  SEQUENCE# APP
    ------------------------- ---------- ---
    /oracle/dbs/hq_sat_2.log           2 YES 
    /oracle/dbs/hq_sat_3.log           3 YES 
    /oracle/dbs/hq_sat_4.log           4 YES
    /oracle/dbs/hq_sat_5.log           5 YES 
    /oracle/dbs/hq_sat_6.log           6 YES 
    /oracle/dbs/hq_sat_7.log           7 YES 
    /oracle/dbs/hq_sat_8.log           8 YES 
    /oracle/dbs/hq_sat_9.log           9 YES
    /oracle/dbs/hq_sat_10.log         10 YES
    /oracle/dbs/hq_sat_11.log         11 YES
    /oracle/dbs/hq_sat_12.log         12 YES
    /oracle/dbs/hq_sat_13.log         13 YES
    
    

If you chose the NYC database as the best target based on your business requirements, perform the following steps:

  1. Determine if a partial archived redo log file exists for the next sequence number. The following UNIX command searches the directory on the NYC database for the presence of an archived redo log file named with the next sequence (hq_nyc_14):
    % ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
    /net/nyc/oracle/dbs/hq_nyc_14.log: No such file or directory
    
    

    Because the NYC standby database is using standby redo log files, there should not be any partial archived redo log files.

  2. Start log apply services to apply the most current log file:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
      2> DISCONNECT FROM SESSION;
    
    
  3. Query the V$ARCHIVED_LOG view again to make sure the archived redo log files were successfully applied:
    SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
      2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    FILE_NAME                  SEQUENCE# APP
    ------------------------- ---------- ---
    /oracle/dbs/hq_nyc_2.log           2 YES 
    /oracle/dbs/hq_nyc_3.log           3 YES 
    /oracle/dbs/hq_nyc_4.log           4 YES
    /oracle/dbs/hq_nyc_5.log           5 YES 
    /oracle/dbs/hq_nyc_6.log           6 YES 
    /oracle/dbs/hq_nyc_7.log           7 YES 
    /oracle/dbs/hq_nyc_8.log           8 YES 
    /oracle/dbs/hq_nyc_9.log           9 YES
    /oracle/dbs/hq_nyc_10.log         10 YES
    /oracle/dbs/hq_nyc_11.log         11 YES
    /oracle/dbs/hq_nyc_12.log         12  NO
    /oracle/dbs/hq_nyc_13.log         13  NO
    
    

    Applying the archived redo log files might take some time to complete. Therefore, you must wait until all archived redo log files are designated as applied, as shown:

    SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
      2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    
    FILE_NAME                  SEQUENCE# APP
    ------------------------- ---------- ---
    /oracle/dbs/hq_nyc_2.log           2 YES 
    /oracle/dbs/hq_nyc_3.log           3 YES 
    /oracle/dbs/hq_nyc_4.log           4 YES
    /oracle/dbs/hq_nyc_5.log           5 YES 
    /oracle/dbs/hq_nyc_6.log           6 YES 
    /oracle/dbs/hq_nyc_7.log           7 YES 
    /oracle/dbs/hq_nyc_8.log           8 YES 
    /oracle/dbs/hq_nyc_9.log           9 YES
    /oracle/dbs/hq_nyc_10.log         10 YES
    /oracle/dbs/hq_nyc_11.log         11 YES
    /oracle/dbs/hq_nyc_12.log         12 YES
    /oracle/dbs/hq_nyc_13.log         13 YES
    
Step 9 Perform the failover.

You are now ready to stop log apply services and fail over the selected physical standby database to the primary role.

See Section 7.2.2 for additional information about how to fail over to a physical standby database.

10.2.2 Example: Best Logical Standby Database for a Failover Operation

In a disaster in which only logical standby databases are available, the critical task is to determine which logical standby database is the best target for the failover. Although many environmental factors can affect which is the best target standby database, this scenario assumes all such factors are equal in order to emphasize the assessment of data loss. See Section 1.4 for more information about the maximum availability protection mode for logical standby databases.

This scenario starts out with a Data Guard configuration consisting of the HQ primary database and two logical standby databases, SAT and NYC. Table 10-3 provides information about each of these databases.

Table 10-3  Identifiers for Logical Standby Database Example

Identifier                             HQ Database    SAT Database   NYC Database
Location                               San Francisco  Seattle        New York City
Database name                          HQ             SAT            NYC
Instance name                          HQ             SAT            NYC
Initialization parameter file          hq_init.ora    sat_init.ora   nyc_init.ora
Control file                           hq_cf1.f       sat_cf1.f      nyc_cf1.f
Datafile                               hq_db1.f       sat_db1.f      nyc_db1.f
Redo log file 1                        hq_log1.f      sat_log1.f     nyc_log1.f
Redo log file 2                        hq_log2.f      sat_log2.f     nyc_log2.f
Database link (client-defined)         hq_link        sat_link       nyc_link
Network service name (client-defined)  hq_net         sat_net        nyc_net
Listener                               hq_listener    sat_listener   nyc_listener

Follow these steps to determine which standby database is the best candidate for failover when only logical standby databases are in the configuration:

Step 1 Connect to the SAT logical standby database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 2 Determine the highest applied SCN and highest (newest) applicable SCN on the SAT database.

Query the following columns in the DBA_LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     144059     144059
Step 3 Obtain a list of the archived redo log files that were applied or are currently pending application to the SAT database.

Query the DBA_LOGSTDBY_LOG view:

SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",
  2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
  3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"
  4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_sat_2.log  2           101579       101588 11:02:57 NO  NO  1
/oracle/dbs/hq_sat_3.log  3           101588       142065 11:02:01 NO  NO  1
/oracle/dbs/hq_sat_4.log  4           142065       142307 11:02:09 NO  NO  1
/oracle/dbs/hq_sat_5.log  5           142307       142739 11:02:47 YES YES 1
/oracle/dbs/hq_sat_6.log  6           142739       143973 12:02:09 NO  NO  1
/oracle/dbs/hq_sat_7.log  7           143973       144042 01:02:00 NO  NO  1
/oracle/dbs/hq_sat_8.log  8           144042       144051 01:02:00 NO  NO  1
/oracle/dbs/hq_sat_9.log  9           144051       144054 01:02:15 NO  NO  1
/oracle/dbs/hq_sat_10.log 10          144054       144057 01:02:20 NO  NO  1
/oracle/dbs/hq_sat_11.log 11          144057       144060 01:02:25 NO  NO  1
/oracle/dbs/hq_sat_13.log 13          144089       144147 01:02:40 NO  NO  1

Notice that for log file 11, the SCN of 144059 (recorded in Step 2) is between the FIRST_CHANGE# column value of 144057 and the NEXT_CHANGE# column value of 144060. This indicates that log file 11 is currently being applied. Also, notice the gap in the sequence numbers in the SEQ# column; in the example, the gap indicates that the SAT database is missing archived redo log file 12.
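
Gaps can also be detected with a query instead of visual inspection. The following statement is the standard gap-detection technique for a logical standby database; note that the highest sequence number returned corresponds to the last registered log file and does not, by itself, indicate a gap:

SQL> SELECT THREAD#, SEQUENCE#+1 FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;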

Step 4 Connect to the NYC database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 5 Determine the highest applied SCN and highest applicable SCN on the NYC database.

Query the following columns in the DBA_LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN NEWEST_SCN
----------- ----------
     143970     144146
Step 6 Obtain a list of the log files that were processed or are currently pending processing on the NYC database.

Issue a SQL statement such as the following:

SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#", 
  2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS')  TIMESTAMP, 
  3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#" 
  4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_nyc_2.log  2           101579       101588 11:02:58 NO  NO  1
/oracle/dbs/hq_nyc_3.log  3           101588       142065 11:02:02 NO  NO  1
/oracle/dbs/hq_nyc_4.log  4           142065       142307 11:02:10 NO  NO  1
/oracle/dbs/hq_nyc_5.log  5           142307       142739 11:02:48 YES YES 1
/oracle/dbs/hq_nyc_6.log  6           142739       143973 12:02:10 NO  NO  1
/oracle/dbs/hq_nyc_7.log  7           143973       144042 01:02:11 NO  NO  1
/oracle/dbs/hq_nyc_8.log  8           144042       144051 01:02:01 NO  NO  1
/oracle/dbs/hq_nyc_9.log  9           144051       144054 01:02:16 NO  NO  1
/oracle/dbs/hq_nyc_10.log 10          144054       144057 01:02:21 NO  NO  1
/oracle/dbs/hq_nyc_11.log 11          144057       144060 01:02:26 NO  NO  1
/oracle/dbs/hq_nyc_12.log 12          144060       144089 01:02:30 NO  NO  1
/oracle/dbs/hq_nyc_13.log 13          144089       144147 01:02:41 NO  NO  1

Notice that for log file 6, the SCN of 143970 (recorded in Step 5) is between the FIRST_CHANGE# column value of 142739 and the NEXT_CHANGE# column value of 143973. This indicates that log file 6 is currently being applied. Also, notice that there are no gaps in the sequence of log files that remain to be processed.

Step 7 Choose the best target standby database.

In most cases, the logical standby database you choose as a failover target should provide a balance between the risk of data loss and the time required to perform the role transition. As you analyze this information to make a decision about the best failover candidate in this scenario, consider the following:

  - The SAT database has applied more redo (through SCN 144059) but is missing archived redo log file 12, which must be retrieved from the NYC database before the remaining redo can be applied.
  - The NYC database has received all of the redo data (through SCN 144146) with no gaps, but it has applied redo only through SCN 143970, so more redo must be applied before the failover can complete.
Based on your business requirements, choose the best target standby database.

Step 8 Bring the selected standby database to its most current state.

If you chose the SAT database as the best target based on your business requirements, perform the following steps:

  1. Manually retrieve any missing archived redo log files using an operating system utility. (This example uses the UNIX cp command.) In this case, the SAT database is missing archived redo log file 12. Because the NYC database received this archived redo log file, you can copy it from the NYC database to the SAT database, as follows:
    % cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log
    
    
  2. Determine if a partial archived redo log file exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the SAT database for the presence of an archived redo log file named hq_sat_14.log:
    % ls -l /net/sat/oracle/dbs/hq_sat_14.log
    -rw-rw----   1 oracle    dbs  333280 Feb 12  1:03 hq_sat_14.log
    
    
  3. Stop log apply services and register both the retrieved archived redo log file and the partial archived redo log file:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_14.log';
    
    
    
  4. Start log apply services to apply the most current log file:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    
    
  5. Determine the highest applied SCN on the SAT database by querying the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144205     144205
    
    

    Because the SCN values match, you can be assured that there is no longer a delay (lag) between the current log file on the primary database and the last log file applied to the SAT database.

If you chose the NYC database as the best target based on your business requirements, perform the following steps:

  1. Determine if a partial archived redo log file exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the NYC database for the presence of an archived redo log file named hq_nyc_14.log:
    % ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
    -rw-rw----   1 oracle    dbs  333330 Feb 12  1:03 hq_nyc_14.log
    
    
  2. Register the partial archived redo log file on the NYC database:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_nyc_14.log';
    
    
  3. Start log apply services to apply the most current log file:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
    
    
  4. Determine the highest applied SCN on the NYC database by querying the DBA_LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the NEWEST_SCN column:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    
    APPLIED_SCN NEWEST_SCN
    ----------- ----------
         144205     144205
    
    

Because the SCN values match, you can be sure there is no longer a delay (lag) between the current log file on the primary database and the last log file received and applied by the NYC database.

Step 9 Perform the failover.

You are now ready to stop log apply services and fail over the selected logical standby database to the primary role.

See Section 7.3.2 for additional information on how to perform the failover.

10.3 Using Flashback Database After a Failover

After a failover occurs, the original primary database can no longer participate in the Data Guard configuration until it is repaired and established as a standby database in the new configuration. To do this, you can use the Flashback Database feature to recover the failed primary database to a point in time before the failover occurred and then convert it into a physical or logical standby database in the new configuration. The following sections describe how to convert a failed primary database into a physical standby database (Section 10.3.1) or into a logical standby database (Section 10.3.2).

10.3.1 Converting a Failed Primary Database into a Physical Standby Database

The following steps assume the user has already performed a failover involving a physical standby database and Flashback Database has been enabled on the old primary database. This procedure brings the old primary database back into the Data Guard configuration as a new physical standby database.

Step 1 Determine the SCN at which the old standby database became the primary database.

On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
Step 2 Flash back the failed primary database.

To create a new physical standby database, shut down the database (if necessary), mount the old primary database, and flash it back to the value for STANDBY_BECAME_PRIMARY_SCN that was determined in Step 1:

SQL> SHUTDOWN IMMEDIATE;                  
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;

The old primary database is now a new physical standby database and is referred to as such in the following steps.

Step 3 Mount the new physical standby database.

Perform the following steps on the new physical standby database:

  1. Disable the Flashback Database feature. This deletes the flashback logs, which are obsolete after the standby control file is restored:
    SQL> ALTER DATABASE FLASHBACK OFF;
    
    
  2. Create the standby control file and shut down the database:
    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS control_file_name;
    SQL> SHUTDOWN IMMEDIATE;
    
    
  3. Issue operating system copy commands to replace the current control files with the new standby control file.
  4. Mount the new physical standby database using the new standby control file.
    SQL> STARTUP MOUNT;
    
    
  5. Ensure the listener is running:
    LSNRCTL STAT listener_name
    
    
  6. Enable Flashback Database:
    SQL> ALTER DATABASE FLASHBACK ON;
    
Step 4 Restart log transport services to the new physical standby database.

Before the new standby database was created, the new primary database probably stopped transmitting redo to the remote destination. To restart log transport services, perform the following steps on the new primary database:

  1. Issue the following query to see the current state of the archive destinations:
    SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, ERROR, SRL
      2> FROM V$ARCHIVE_DEST_STATUS;
    
    
    
  2. If necessary, enable the destination:
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;
    
    
  3. Perform a log switch to ensure the standby database begins receiving redo data from the new primary database, and verify it was sent successfully:
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, ERROR, SRL
      2> FROM V$ARCHIVE_DEST_STATUS;
    
    

On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that log transport services do not transmit redo data to other databases. You can skip this step if both the primary and standby database roles were set up with the VALID_FOR attribute in one server parameter file (SPFILE); in that case, the Data Guard configuration operates properly after a role transition.
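
For example, if a destination on the new standby database must be disabled manually, you can defer it (destination number 2 here is illustrative):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;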

Step 5 Start Redo Apply.

Start Redo Apply or real-time apply on the new physical standby database:

  - To start Redo Apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

  - To start real-time apply:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
      2> USING CURRENT LOGFILE DISCONNECT;

Once the failed primary database is restored and is running in the standby role, you can optionally perform a switchover to transition the databases to their original (pre-failure) roles. See Section 7.2.1, "Switchovers Involving a Physical Standby Database" for more information.

10.3.2 Converting a Failed Primary Database into a Logical Standby Database

The following steps assume that the Data Guard configuration has already completed a failover involving a logical standby database and Flashback Database has been enabled on the old primary database. This procedure brings the old primary database back into the Data Guard configuration as a new standby database, without re-creating the old primary database.

Step 1 Determine the SCN at which the old standby database became the primary database.

On the new primary database, determine the SCN at which the old standby database became the new primary database using the following query:

SQL> SELECT VALUE AS BECAME_PRIMARY_SCN FROM DBA_LOGSTDBY_PARAMETERS
  2> WHERE NAME = 'END_PRIMARY_SCN';
Step 2 Flash back the failed primary database.

To create a new logical standby database, shut down the database (if necessary), mount the old primary database, flash it back to the value for BECAME_PRIMARY_SCN that was determined in Step 1, and enable the database guard.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN <became_primary_scn>;
SQL> ALTER DATABASE GUARD ALL;
Step 3 Open the database with the RESETLOGS option.
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 4 Create a database link to the new primary database and start SQL Apply.
SQL> CREATE PUBLIC DATABASE LINK mylink
  2> CONNECT TO system IDENTIFIED BY password 
  3> USING 'service_name_of_new_primary_database';
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY mylink;

The role reversal is now complete.

Once the failed primary database has been restored and is running in the standby role, you can optionally perform a switchover to transition the databases to their original (pre-failure) roles. See Section 7.3.1, "Switchovers Involving a Logical Standby Database" for more information.

10.4 Using Flashback Database After Issuing an Open Resetlogs Statement

Suppose an error has occurred on the primary database in a Data Guard configuration in which the standby database is using real-time apply. In this situation, the same error will be applied on the standby database.

However, if Flashback Database is enabled, you can revert the primary and standby databases back to their pre-error condition by issuing the FLASHBACK DATABASE and OPEN RESETLOGS statements on the primary database and then issuing a similar FLASHBACK STANDBY DATABASE statement on the standby database before restarting log apply services. (If Flashback Database is not enabled, you need to re-create the standby database, as described in Chapter 3 and Chapter 4, after the point-in-time recovery is performed on the primary database.)

10.4.1 Flashing Back a Physical Standby Database

The following steps describe how to avoid re-creating a physical standby database after you issued the OPEN RESETLOGS statement on the primary database.

Step 1 Determine the SCN before the RESETLOGS operation occurred.

On the primary database, use the following query to obtain the value of the system change number (SCN) that is 2 SCNs before the RESETLOGS operation occurred on the primary database:

SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) FROM V$DATABASE;
Step 2 Obtain the current SCN on the standby database.

On the standby database, obtain the current SCN with the following query:

SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE;
Step 3 Determine if it is necessary to flash back the database.

If the value of CURRENT_SCN is larger than the value of resetlogs_change# - 2, issue the following statement to flash back the standby database:

SQL> FLASHBACK STANDBY DATABASE TO SCN resetlogs_change# - 2;

If the value of CURRENT_SCN is less than the value of resetlogs_change# - 2, skip to Step 4.
Step 4 Restart log apply services.
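
For example, Redo Apply can be restarted on the physical standby database with the same statement used elsewhere in this chapter:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;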

The standby database is now ready to receive and apply redo from the primary database.

10.4.2 Flashing Back a Logical Standby Database

The following steps describe how to avoid re-creating a logical standby database after you have issued the OPEN RESETLOGS statement on the primary database.

Step 1 Flash back the primary database.

On the primary database, execute the following SQL statements to flash back and then open the database with the RESETLOGS option:

SQL> FLASHBACK DATABASE TO TIMESTAMP <timestamp you want to flash back to>;
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 2 Determine the SCN at the primary database.

On the primary database, use the following query to obtain the value of the system change number (SCN) that is 2 SCNs before the RESETLOGS operation occurred on the primary database:

SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) FROM V$DATABASE;
Step 3 Stop SQL Apply.

On the logical standby database, stop SQL Apply:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> SELECT APPLIED_SCN FROM DBA_LOGSTDBY_PROGRESS;

If the value of APPLIED_SCN is less than the value of resetlogs_change# - 2, you do not need to flash back the standby database and can proceed to Step 6. This can happen if SQL Apply is running with a delay. Otherwise, flash back the standby database as described in Step 4.

Step 4 Flash back the logical standby database.

Issue the following SQL statements to flash back the logical standby database to the same time used to flash back the primary database:

SQL> SHUTDOWN;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> FLASHBACK DATABASE TO TIMESTAMP <time of primary database flashback>;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> SELECT APPLIED_SCN FROM DBA_LOGSTDBY_PROGRESS;
Step 5 Open the logical standby database with the RESETLOGS option.

Open the logical standby database with the RESETLOGS option:

SQL> SHUTDOWN;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> ALTER DATABASE OPEN RESETLOGS;
Step 6 Archive the current log on the primary database.

Perform a log switch:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Step 7 Start SQL Apply.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

10.5 Using a Physical Standby Database with a Time Lag

By default, when log apply services are running on the standby database, redo data is applied from archived redo log files or, when real-time apply is enabled, applied as it is received from the primary database. However, in some cases you may want to create a time lag between the archiving of an online redo log file at the primary site and the application of that archived redo log file at the standby site. A time lag can protect against the transfer of corrupted or erroneous data from the primary site to the standby site.

For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the batch job completed for the second time. Ideally, you need to roll back the database to the point in time before the batch job began. A primary database that has a standby database with a time lag can help you recover: you can fail over to the time-delayed standby database and use it as the new primary database.

To create a standby database with a time lag, use the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameter in the primary database initialization parameter file.


Note:

If you define a delay for a destination that has real-time apply enabled, the delay is ignored.


Although the redo data is still automatically transmitted from the primary database to the standby database and written to archived redo log files (and standby redo log files, if implemented), the log files are not immediately applied to the standby database. The log files are applied when the specified time interval expires.

This scenario uses a 4-hour time lag and covers the following topics: establishing a time lag on a physical standby database (Section 10.5.1), failing over to a physical standby database with a time lag (Section 10.5.2), and switching over to a physical standby database with a time lag (Section 10.5.3).

This scenario assumes you are familiar with the procedures for creating a typical standby database; those details are omitted from the steps that follow. See Chapter 3 for details about creating physical standby databases.

10.5.1 Establishing a Time Lag on a Physical Standby Database

To create a physical standby database with a time lag, modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a delay for the standby database. The following is an example of how to add a 4-hour delay to the LOG_ARCHIVE_DEST_n initialization parameter:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby DELAY=240';

The DELAY attribute indicates that the archived redo log files at the standby site are not available for recovery until the 4-hour time interval has expired. The time interval (expressed in minutes) starts when the archived redo log files are successfully transmitted to the standby site. The redo information is still sent to the standby database and written to the disk as normal.
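
To confirm that the delay is in effect on the primary database, you can query the destination settings; DELAY_MINS is the V$ARCHIVE_DEST column that records the DELAY attribute:

SQL> SELECT DEST_ID, DELAY_MINS FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;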

See Section 6.2.2 for more information about establishing a time lag on physical and logical standby databases.

10.5.2 Failing Over to a Physical Standby Database with a Time Lag

A standby database configured to delay application of archived redo log files can be used to recover from user errors or data corruptions on the primary database. In most cases, you can query the time-delayed standby database to retrieve the data needed to repair the primary database (for example, to recover the contents of a mistakenly dropped table). In cases where the damage to the primary database is unknown or when the time required to repair the primary database is prohibitive, you can also consider failing over to a time-delayed standby database.

Assume that a backup file was inadvertently applied twice to the primary database and that the time required to repair the primary database is prohibitive. You choose to fail over to a physical standby database for which the application of archived redo log files is delayed. By doing so, you transition the standby database to the primary role at a point before the problem occurred, but you will likely incur some data loss. The following steps illustrate the process:

  1. Initiate the failover by issuing the appropriate SQL statements on the time-delayed physical standby database:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE SKIP STANDBY LOGFILE;
    SQL> SHUTDOWN IMMEDIATE;
    SQL> STARTUP
    
    

    The ACTIVATE statement immediately transitions the standby database to the primary role and makes no attempt to apply any additional redo data that might exist at the standby location. When using this statement, you must carefully balance the cost of data loss at the standby location against the potentially extended period of downtime required to fully repair the primary database.

  2. Re-create all other standby databases in the configuration from a copy of this new primary database.

10.5.3 Switching Over to a Physical Standby Database with a Time Lag

All of the redo data is transmitted to the standby site as it becomes available. Therefore, even when a time delay is specified for a standby database, you can make the standby database current by overriding the delay using the SQL ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement with the NODELAY keyword.


Note:

To recover from a logical error, you must perform a failover instead of a switchover.


The following steps demonstrate how to perform a switchover to a time-delayed physical standby database that bypasses a time lag. For the purposes of this example, assume that the primary database is located in New York, and the standby database is located in Boston.

Step 1 Apply all of the archived redo log files to the original (time-delayed) standby database, bypassing the lag.

Switchover will not begin until the standby database applies all of the archived redo log files. By lifting the delay, you allow the standby database to proceed without waiting for the specified time interval to pass before applying the archived redo log files.

Issue the following SQL statement to lift the delay:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY 
  2> DISCONNECT FROM SESSION THROUGH LAST SWITCHOVER;
Step 2 Stop read or update activity on the primary and standby databases.

You must have exclusive database access before beginning a switchover. Ask users to log off the primary and standby databases, or query the V$SESSION view to identify users that are connected to the databases and close all open sessions except the SQL*Plus session from which you are going to execute the switchover statement. See Oracle Database Administrator's Guide for more information about managing users.
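For example, the following query lists the user sessions that are still connected; the column list is illustrative, and you can extend it as needed:

SQL> SELECT SID, SERIAL#, USERNAME, PROGRAM FROM V$SESSION
  2> WHERE USERNAME IS NOT NULL;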

Step 3 Switch the primary database to the physical standby role.

On the primary database (in New York), execute the following statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY 
  2> WITH SESSION SHUTDOWN;

This statement converts the primary database to run in the physical standby role and terminates any active user sessions.

Step 4 Shut down and start up the former primary instance, and mount the database.

Execute the following statement on the former primary database (in New York):

SQL> SHUTDOWN NORMAL;
SQL> STARTUP MOUNT;
Step 5 Switch the original standby database to the primary role.

Issue the following SQL statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY DATABASE;

Step 6 Shut down and restart the new primary database instance.

Issue the following SQL statements:

SQL> SHUTDOWN;
SQL> STARTUP PFILE=Failover.ora;

10.6 Recovering from a Network Failure

The following steps describe how to recover after a network failure.

Step 1 Identify the network failure.

The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, execute the following SQL statement for the destination that experienced the network failure. For example:

SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;

DEST_ID    STATUS    ERROR
---------- --------- --------------------------------------------------------
        2  ERROR     ORA-12224: TNS:no listener

The query results show there are errors archiving to the standby database, and the cause of the error is TNS:no listener. Check whether the listener on the standby site is started; if it is stopped, start it.
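For example, on the standby host you can check the listener and, if necessary, start it with the Listener Control utility; a minimal sketch that assumes the default listener name:

% lsnrctl status
% lsnrctl start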

Step 2 Prevent the primary database from stalling.

If you cannot solve the network problem quickly, and if the standby database is specified as a mandatory destination, try to prevent the primary database from stalling by doing one of the following (example statements follow this list):

  Defer archiving to the mandatory destination until the network is repaired.
  Change the destination from MANDATORY to OPTIONAL, so that failure to archive to it no longer prevents the online redo log files from being reused.
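The following statements sketch both options; the destination number (2) and service name (stdby) are carried over from earlier examples and must be adapted to your configuration:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby OPTIONAL REOPEN=60';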

Step 3 Archive the current online redo log file.

On the primary database, archive the current online redo log file:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

When the network is back up again, log apply services can detect and resolve the archive gaps automatically when the physical standby database resumes Redo Apply.
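After Redo Apply resumes, you can verify that any gap was detected and resolved by querying the V$ARCHIVE_GAP view on the physical standby database. A minimal sketch; if the query returns no rows, no gap remains:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;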

10.7 Recovering After the NOLOGGING Clause Is Specified

In some SQL statements, the user has the option of specifying the NOLOGGING clause, which indicates that the database operation is not logged in the online redo log file. Even though the user specifies the clause, a redo record is still written to the online redo log file; however, the record has no data associated with it. This can result in log application or data access errors at the standby site, and manual recovery might be required to resume applying log files.
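For example, you can check whether the database already forces the logging of all changes and, if it does not, enable forced logging; this is a minimal sketch using standard statements (see also the note that follows):

SQL> SELECT FORCE_LOGGING FROM V$DATABASE;
SQL> ALTER DATABASE FORCE LOGGING;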


Note:

To avoid these problems, Oracle recommends that you always specify the FORCE LOGGING clause in the CREATE DATABASE or ALTER DATABASE statements. See the Oracle Database Administrator's Guide.


10.7.1 Recovery Steps for Logical Standby Databases

For logical standby databases, when SQL Apply encounters a redo record for an operation performed with the NOLOGGING clause, it skips over the record and continues applying changes from later records. Later, if an attempt is made to access one of the records that was updated with NOLOGGING in effect, the following error is returned:

ORA-01403: no data found

To recover after the NOLOGGING clause is specified, re-create one or more tables from the primary database, as described in Section 9.1.7.


Note:

In general, use of the NOLOGGING clause is not recommended. Optionally, if you know in advance that operations using the NOLOGGING clause will be performed on certain tables in the primary database, you might want to prevent the application of SQL statements associated with these tables to the logical standby database by using the DBMS_LOGSTDBY.SKIP procedure.
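For example, the following sketch registers a skip rule for DML on a hypothetical SCOTT.EMP table; SQL Apply must be stopped while the rule is added:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP('DML', 'SCOTT', 'EMP');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;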


10.7.2 Recovery Steps for Physical Standby Databases

When the archived redo log file is copied to the standby site and applied to the physical standby database, a portion of the datafile is unusable and is marked as being unrecoverable. When you fail over to the physical standby database, or open the standby database for read-only access, and then attempt to read the range of blocks that are marked as UNRECOVERABLE, you will see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the unjournaled data from the primary site to the physical standby site. Perform the following steps:

Step 1 Determine which datafiles should be copied.

Follow these steps:

  1. Query the primary database:
    SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
    NAME                                                  UNRECOVERABLE
    ----------------------------------------------------- -------------
    /oracle/dbs/tbs_1.dbf                                       5216
    /oracle/dbs/tbs_2.dbf                                          0
    /oracle/dbs/tbs_3.dbf                                          0
    /oracle/dbs/tbs_4.dbf                                          0
    4 rows selected.
    
  2. Query the standby database:
    SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
    NAME                                                  UNRECOVERABLE
    ----------------------------------------------------- -------------
    /oracle/dbs/stdby/tbs_1.dbf                                 5186
    /oracle/dbs/stdby/tbs_2.dbf                                    0
    /oracle/dbs/stdby/tbs_3.dbf                                    0
    /oracle/dbs/stdby/tbs_4.dbf                                    0
    4 rows selected.
    
  3. Compare the query results of the primary and standby databases.

    Compare the value of the UNRECOVERABLE_CHANGE# column in both query results. If the value of the UNRECOVERABLE_CHANGE# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

    In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.

Step 2 On the primary site, back up the datafile you need to copy to the standby site.

Issue the following SQL statements:

SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;
% cp tbs_1.dbf /backup
% sqlplus '/ AS SYSDBA'
SQL> ALTER TABLESPACE system END BACKUP;
Step 3 On the standby database, restart Redo Apply.

Issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM
  2> SESSION;

You might get the following error messages (possibly in the alert log) when you try to restart Redo Apply:

ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'

If you get the ORA-00308 error and Redo Apply does not terminate automatically, you can cancel recovery by issuing the following statement from another terminal window:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

These error messages are returned when one or more log files in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps, and repeat Step 3. See Section 5.8.4 for information about manually resolving an archive gap.

10.7.3 Determining If a Backup Is Required After Unrecoverable Operations

If you performed unrecoverable operations on your primary database, determine if a new backup operation is required by following these steps:

  1. Query the V$DATAFILE view on the primary database to determine the system change number (SCN) or the time at which the Oracle database generated the most recent invalidated redo data.
  2. Issue the following SQL statement on the primary database to determine if you need to perform another backup:
    SELECT UNRECOVERABLE_CHANGE#, 
           TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss') 
    FROM   V$DATAFILE;
    
    
  3. If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question.
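As an alternative check, RMAN can report the datafiles that need a new backup because unrecoverable operations were performed against them; this is a hedged aside rather than part of the documented procedure:

RMAN> REPORT UNRECOVERABLE;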

See Oracle Database Reference for more information about the V$DATAFILE view.

10.8 Resolving Archive Gaps Manually

An archive gap is a range of archived redo log files created whenever you are unable to apply the next archived redo log file generated by the primary database to the standby database. This section contains the following topics:

  Section 10.8.1, What Causes Archive Gaps?
  Section 10.8.2, Determining If an Archive Gap Exists
  Section 10.8.3, Manually Transmitting Log Files in the Archive Gap to the Standby Site
  Section 10.8.4, Manually Applying Log Files in the Archive Gap to the Standby Database

10.8.1 What Causes Archive Gaps?

An archive gap can occur whenever the primary database archives the current online redo log file locally, but the redo data is not archived at the standby site. Because the standby database requires the sequential application of log files, media recovery stops at the first missing log file encountered.

Archive gaps can occur in the following situations:

  Creation of the standby database (Section 10.8.1.1)
  Shutdown of the standby database when the primary database is open (Section 10.8.1.2)
  A network failure that prevents transmission of archived log files (Section 10.8.1.3)

10.8.1.1 Creation of the Standby Database

One example of an archive gap occurs when you create the standby database from an old backup. For example, if the standby database is made from a backup that contains changes through log file 100, and the primary database currently contains changes through log file 150, then the standby database requires that you apply log files 101 to 150. Another typical example of an archive gap occurs when you generate the standby database from a hot backup of an open database.

For example, assume the scenario illustrated in Figure 10-7.

Figure 10-7 Manual Recovery of Archived Redo Log Files in an Archive Gap


The following steps occur:

  1. You take a hot backup of the primary database.
  2. At time t, while you are busy configuring the network files, the primary database archives log files with sequences 4 and 5.
  3. At time t + 1, you start the standby instance.
  4. The primary database archives redo log files with sequences 6, 7, and 8 on the primary site and transmits the redo to the standby site.

Archived redo log file sequences 4 and 5 are now part of an archive gap, and these log files must be applied to the standby database.

10.8.1.2 Shutdown of the Standby Database When the Primary Database Is Open

You might be required to shut down the standby database to resolve maintenance issues. For example, you must shut down the standby database when you change a control file parameter, such as MAXDATAFILES, in the primary database.

To avoid creating archive gaps, follow these rules:

  Start the standby database and its listener before starting the primary database.
  Shut down the primary database before shutting down the standby database.

If you violate either of these two rules, then the standby database is down while the primary database is open and archiving. Consequently, the Oracle database can create an archive gap.


Note:

If the standby site is specified as MANDATORY in one of the LOG_ARCHIVE_DEST_n parameters of the primary initialization parameter file, dynamically change it to OPTIONAL before shutting down the standby database. Otherwise, the primary database eventually stalls because it cannot archive its online redo log files.


10.8.1.3 Network Failure Prevents Transmission of Archived Log Files

If you maintain a Data Guard environment, and the network goes down, the primary database might continue to archive to disk but be unable to archive to the standby site. In this situation, archived redo log files accumulate as usual on the primary site, but the standby instance is unaware of them.

See Section 10.6 for a description of how to recover after a network failure.

10.8.2 Determining If an Archive Gap Exists

To determine if there is an archive gap, query the V$ARCHIVED_LOG and V$LOG views on the primary database, or query the V$ARCHIVE_GAP view on the physical standby database. The V$ARCHIVE_GAP view specifies the thread number and the low and high log sequence numbers of the log files in the archive gap; if there is no archive gap for a given thread, the query returns no rows.

Identify the log files in the archive gap

Query the V$ARCHIVED_LOG and V$LOG views on the primary database. For example, the following query shows there is a difference in the RECD and SENT sequence numbers for the destination specified by DEST_ID=2, indicating that there is a gap:

SQL> SELECT MAX(R.SEQUENCE#) LAST_SEQ_RECD, MAX(L.SEQUENCE#) LAST_SEQ_SENT FROM
  2> V$ARCHIVED_LOG R, V$LOG L WHERE
  3> R.DEST_ID=2 AND L.ARCHIVED='YES';
LAST_SEQ_RECD LAST_SEQ_SENT
------------- -------------
            7            10

Use the following query to determine the names of the archived redo log files on the local system that must be copied to the standby system that has the gap:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND
  2> SEQUENCE# BETWEEN 7 AND 10;
NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
/primary/thread1_dest/arcr_1_10.arc

10.8.3 Manually Transmitting Log Files in the Archive Gap to the Standby Site

After you have obtained the sequence numbers of the log files in the archive gap, you can obtain their filenames by querying the V$ARCHIVED_LOG view on the primary site. The archived redo log path names on the standby site are generated by the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT parameters in the standby initialization parameter file.

If the standby database is on the same site as the primary database, or the standby database is on a remote site with a different directory structure than the primary database, the path names for the log files on the standby site cannot be the same as the path names of the log files archived by the primary database. Before transmitting the redo data to the standby site, determine the correct path names for the archived redo log files at the standby site.

To copy log files in an archive gap to the standby site

  1. Review the list of archive gap log files that you obtained earlier. For example, assume you have the following archive gap:
    THREAD#    LOW_SEQUENCE#   HIGH_SEQUENCE#
    ---------- -------------   --------------
           1             460              463
           2             202              204
           3             100              100
    

    If a thread appears in the V$ARCHIVE_GAP view, then it contains an archive gap. You need to copy log files from threads 1, 2, and 3.

  2. Determine the path names of the log files in the archive gap that were transmitted by the primary database. After connecting to the primary database, issue a SQL query to obtain the name of a log file in each thread. For example, use the following SQL statement to obtain filenames of log files for thread 1:
    SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1
      2> AND SEQUENCE# > 459 AND SEQUENCE# < 464;
    
    NAME
    ---------------------------------------------------------------------
    /primary/thread1_dest/arcr_1_460.arc
    /primary/thread1_dest/arcr_1_461.arc
    /primary/thread1_dest/arcr_1_462.arc
    /primary/thread1_dest/arcr_1_463.arc
    4 rows selected
    
    
  3. On the standby site, review the settings for STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT in the standby initialization parameter file. For example, you discover the following:
    STANDBY_ARCHIVE_DEST = /standby/arc_dest/
    LOG_ARCHIVE_FORMAT = log_%t_%s.arc
    

    These parameter settings determine the filenames of the archived redo log files at the standby site.

  4. On the primary site, copy the log files in the archive gap from the primary site to the standby site, renaming them according to values for STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT. For example, enter the following copy commands to copy the archive gap log files required by thread 1:
    % cp /primary/thread1_dest/arcr_1_460.arc /standby/arc_dest/log_1_460.arc
    % cp /primary/thread1_dest/arcr_1_461.arc /standby/arc_dest/log_1_461.arc
    % cp /primary/thread1_dest/arcr_1_462.arc /standby/arc_dest/log_1_462.arc
    % cp /primary/thread1_dest/arcr_1_463.arc /standby/arc_dest/log_1_463.arc
    
    

    Perform similar commands to copy archive gap log files for threads 2 and 3.

  5. On the standby site, if the LOG_ARCHIVE_DEST and STANDBY_ARCHIVE_DEST parameter values are not the same, then copy the archive gap log files from the STANDBY_ARCHIVE_DEST directory to the LOG_ARCHIVE_DEST directory. If these parameter values are the same, then you do not need to perform this step.

    For example, assume the following standby initialization parameter settings:

    STANDBY_ARCHIVE_DEST = /standby/arc_dest/
    LOG_ARCHIVE_DEST = /log_dest/
    

    Because the parameter values are different, copy the archived redo log files to the LOG_ARCHIVE_DEST location:

    % cp /standby/arc_dest/* /log_dest/
    

    When you initiate manual recovery, the Oracle database looks at the LOG_ARCHIVE_DEST value to determine the location of the log files.

Now that all required log files are in place on the standby site, you can proceed to Section 10.8.4 to apply the archive gap log files to the standby database. See also Section 6.3.4.3 and the V$ARCHIVED_LOG view in Chapter 14.

10.8.4 Manually Applying Log Files in the Archive Gap to the Standby Database

After you have copied the log files in the archive gap to the standby site, you can apply them using the RECOVER AUTOMATIC statement.

To apply the archived redo log files in the archive gap

  1. Start up and mount the standby database (if it is not already mounted). For example, enter:
    SQL> STARTUP MOUNT PFILE=/oracle/admin/pfile/initSTBY.ora
    
  2. Recover the database using the AUTOMATIC option:
    SQL> ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE;
    

    The AUTOMATIC option automatically generates the name of the next archived redo log file needed to continue the recovery operation.

    After recovering the available log files, the Oracle database prompts for the name of a log file that does not exist. For example, you might see:

    ORA-00308: cannot open archived log '/oracle/standby/standby_logs/arcr_1_
    540.arc'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    
  3. Cancel recovery after the Oracle database applies the available log files by typing CTRL/C:
    SQL> <CTRL/C>
    Media recovery cancelled.
    

    The following error messages are acceptable after recovery cancellation and do not indicate a problem:

    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error 
    below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'some_filename'
    ORA-01112: media recovery not started
    
  4. After you finish manually applying the missing log file, you can restart log apply services on the standby database, as follows:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    

10.9 Creating a Standby Database That Uses OMF or ASM

Chapters 3 and 4 described how to create physical and logical standby databases. This section augments the discussions in those chapters with additional steps that must be performed if the primary database uses Oracle Managed Files (OMF) or Automatic Storage Management (ASM).


Note:

The discussion in this section is presented at a level of detail that assumes the reader already knows how to create a physical standby database and is an experienced user of the RMAN, OMF, and ASM features. For more information, see the appropriate Oracle documentation for those features.


Perform the following tasks to prepare for standby database creation:

  1. Enable forced logging on the primary database.
  2. Enable archiving on the primary database.
  3. Set all necessary initialization parameters on the primary database.
  4. Create an initialization parameter file for the standby database.
  5. If the primary database is configured to use OMF, then Oracle recommends that the standby database be configured to use OMF, too. To do this, set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n initialization parameters to appropriate values. Maintenance and future role transitions are simplified if the same disk group names are used for both the primary and standby databases.
  6. Set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO.
  7. Configure Oracle Net, as required, to allow connections to the standby database.
  8. Create a remote login password file for the standby database. Use the same password for the SYS account as on the primary database.
  9. Start the standby database instance without mounting the control file.
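As a hedged illustration of tasks 3 through 6, the standby server parameter file might include settings such as the following; the disk group name +DGROUP1 is an assumption, not a requirement:

DB_CREATE_FILE_DEST=+DGROUP1
DB_CREATE_ONLINE_LOG_DEST_1=+DGROUP1
STANDBY_FILE_MANAGEMENT=AUTO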

Perform the following tasks to create the standby database:

  1. If the standby database is going to use ASM, create an ASM instance if one does not already exist on the standby database system.
  2. Use the RMAN BACKUP command to create a backup set that contains a copy of the primary database's datafiles, archived log files, and a standby control file.
  3. Use the RMAN DUPLICATE ... FOR STANDBY command to copy the datafiles, archived redo log files, and standby control file in the backup set to the standby database's storage area.

    The DUPLICATE ... FOR STANDBY command performs the actual data movement at the standby instance. If the backup set is on tape, the media manager must be configured so that the standby instance can read the backup set. If the backup set is on disk, the backup pieces must be readable by the standby instance, either by making their primary path names available through NFS, or by copying them to the standby system and using the RMAN CATALOG BACKUPPIECE command to catalog the backup pieces before restoring them.
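As a hedged illustration of tasks 2 and 3, the following sketch shows the general shape of the RMAN session; the connect strings are hypothetical, and channel and media manager configuration are omitted:

% rman TARGET sys/oracle@primary_db AUXILIARY sys/oracle@standby_db
RMAN> BACKUP DATABASE INCLUDE CURRENT CONTROLFILE FOR STANDBY PLUS ARCHIVELOG;
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;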

After you successfully complete these steps, continue with the steps in Section 3.2.7 to verify the configuration of the physical standby database.

To create a logical standby database, continue with the standby database creation process described in Chapter 4, but with the following modifications:

  1. For a logical standby database, setting the DB_CREATE_FILE_DEST parameter does not force the creation of OMF filenames. However, if this parameter was set on the primary database, it must also be set on the standby database.
  2. After creating a logical standby control file on the primary system, do not use an operating system command to copy this file to the standby system. Instead, use the RMAN RESTORE CONTROLFILE command to restore a copy of the logical standby control file to the standby system.
  3. If the primary database uses OMF files, use RMAN to update the standby database control file to use the new OMF files created on the standby database. To perform this operation, connect only to the standby database, as shown in the following example:
    % rman TARGET sys/oracle@lstdby
    RMAN> CATALOG START WITH '+stby_diskgroup';
    RMAN> SWITCH DATABASE TO COPY;
    
    

After you successfully complete these steps, continue with the steps in Section 4.2.4 to start, recover, and verify the logical standby database.