[Data Guard] Impact of Oracle Multitenant Environments on Data Guard

Posted by xysoul_雲龍 on 2021-04-30

This document describes the management impacts and considerations of using Data Guard in a multitenant environment.


SOLUTION

 ========

Disclaimer:

NOTE: In the images and/or the document content below, the user information and environment data used represents
fictitious data from the Oracle sample schema(s), Public Documentation delivered with an Oracle database product or
other training material. Any similarity to actual environments, actual persons, living or dead, is purely coincidental and
not intended in any manner.

For the purposes of this document, the following fictitious environment is used as an example to describe the procedure:

Primary Site:
CDB Name: cdb100,cdbprim  -  PDB Name: srcpdb,testpdb  -  ASM Diskgroup: +data

Standby Site:
CDB Name: cdbstby  -  PDB Name: testpdb  -  Instance Name: cdbstby1  -  ASM Diskgroup: +data  - Net Connection Alias: stbyhost

========

Data Guard Impact on Oracle Multitenant Environments

Data Guard fully supports Oracle Multitenant environments. Note that all Data Guard role transitions occur at the container database level, i.e. you cannot failover or switchover an individual pluggable database.  There are some nuances when dealing with pluggable database creation, clone or plugin when done in a Data Guard configuration, and some background information is necessary.  

Oracle Managed Files (OMF) is a feature where the RDBMS is given control of the naming and location of files.  Files are named using system generated names and are placed in the location specified in the DB_CREATE_FILE_DEST initialization parameter.  When you use Oracle Automatic Storage Management (ASM) to manage and store your files, OMF is also used to simplify file management.  If a Data Guard physical standby database is also in use, there are initialization parameters (DB_FILE_NAME_CONVERT, LOG_FILE_NAME_CONVERT, STANDBY_FILE_MANAGEMENT) to assist with converting names used by the primary to those used by the standby for existing and newly created datafiles when OMF is in use.
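As an illustration of how these parameters fit together, the following is a minimal sketch for the standby CDB using the fictitious environment above (the diskgroup and name values are illustrative only; DB_FILE_NAME_CONVERT is a static parameter, so it must be set in the spfile and the standby restarted):

```sql
-- Enable OMF by directing new files to the ASM diskgroup
ALTER SYSTEM SET db_create_file_dest = '+data';

-- Have the standby automatically add files created on the primary
ALTER SYSTEM SET standby_file_management = AUTO;

-- Static parameter: convert primary path elements to standby path elements
ALTER SYSTEM SET db_file_name_convert = '/cdb100/','/cdbstby/' SCOPE=SPFILE;
```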

When ASM is used, the directory structure and naming convention is as follows.  A traditional file name in ASM consists of 3 parts, <tablespace_name>.<ASM file number>.<incarnation>.  You are not allowed to directly create files with this naming structure.  The directory structure for datafiles traditionally consists of <DB_CREATE_FILE_DEST>/<DB_UNIQUE_NAME>/DATAFILE/<files>.  With Oracle Multitenant, the directory structure changes for each pluggable database (PDB) that exists in a container database (CDB).  Each database (non-CDB, CDB, PDB) as of 12gR1 has a globally unique identifier (GUID) associated with it.  For pre-12c non-CDB databases, the GUID will be created when the database is upgraded to 12c.  This GUID will not change throughout the life of the container/non-CDB and is one of the main reasons a PDB should be cloned (cloning generates a new GUID) rather than copying the same PDB to multiple locations and plugging it in to multiple CDBs.  The GUID for a particular container/non-CDB can be found by querying V$CONTAINERS in any 12c database. To assist with identifying which files belong to which PDB, an ASM directory structure of <DB_CREATE_FILE_DEST>/<DB_UNIQUE_NAME>/<GUID>/DATAFILE/<files> is used for PDBs.
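For example, the GUID for each container can be listed with a query like the following, run from CDB$ROOT (CON_ID, NAME and GUID are documented V$CONTAINERS columns):

```sql
-- List every container in the CDB with its container ID and GUID
SELECT con_id, name, guid
  FROM v$containers
 ORDER BY con_id;
```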

The physical standby database and redo apply will normally expect a new PDB’s datafiles to have been pre-copied to the standby site and be in such a state that redo received from the primary database can be immediately applied.  The standby database ignores any file name conversion specification on the CREATE PLUGGABLE DATABASE statement and relies solely on the standby database’s initialization parameter settings for DB_CREATE_FILE_DEST and DB_FILE_NAME_CONVERT for locations and file naming.

 

Additional My Oracle Support Notes

A number of My Oracle Support Notes have been created to help reduce downtime and any potential impact to the disaster recovery protection of new PDBs.

  • Step by Step Examples of Migrating non-CDBs and PDBs Using ASM for File Storage  Document 1576755.1
    This note provides methods for migrating non-CDBs to multitenant with minimal actual downtime.  In addition, the note details how to ensure that as soon as the new PDB is created on a primary CDB in a Data Guard environment, any physical standby databases will also fully create the PDB and provide immediate disaster recovery capabilities.

    The methods described in this note are best used when immediate disaster recovery protection is desired AND one of the following:
    • The source database is a non-CDB, whether a 12c database or a release prior to 12c.
    • The source database is a PDB that is being unplugged from one container database and plugged into another.
  • Making Use of the STANDBYS=NONE Feature with Oracle Multitenant  Document 1916648.1
    This note describes usage of the STANDBYS=NONE clause on a create pluggable database statement, also known as deferring recovery of a PDB.  When the STANDBYS=NONE clause is specified on the CREATE PLUGGABLE DATABASE statement, the PDB is created at all standbys, but only to the extent needed to allow redo apply to proceed.  The files for the new PDB are marked as OFFLINE/RECOVER and any additional redo for that PDB will be ignored.  At some point in the future, the expectation is that the PDB's files will be copied to any/all physical standby databases and recovery for the PDB enabled.  Note that at PDB creation time it is not possible to specify that the PDB is created on only a subset of standby databases.  

    If STANDBYS=NONE is included on the CREATE PLUGGABLE DATABASE statement, recovery of the new PDB will be deferred at all physical standbys and each standby on which you wish to enable recovery will require that the enable recovery process is performed.  All existing PDBs on the standby databases will retain their current recovery status (DEFERRED or ENABLED).

    The note describes the steps for enabling recovery of the PDB and how Data Guard role transitions will behave when having PDBs with deferred recovery.  

    The methods described in this note are best used when immediate disaster recovery protection is not required OR one of the following:
    • The source PDB is being cloned from another container database either by remote cloning (create pluggable database pdbclone from pdb@db_link) or using a manifest file (create pluggable database pdbclone as clone using ‘manifest.xml’).
    • The source PDB and cloned PDB will reside in the same container database but you do not have an Active Data Guard license.
  • Reusing the Source Standby Database Files When Plugging a PDB into the Primary Database of a Data Guard Configuration  Document 2273829.1
    This note provides a method for reusing the source PDB's standby datafiles when a plugin operation is performed.  When performing a PDB plugin, a standby database expects files to be pre-staged in a particular location; the steps in this note show you how to create ASM aliases on the standby environment that point to the files used by the source database's standby database prior to the plugin operation.  This allows redo apply to locate the files and use them directly without having to defer recovery and without causing redo apply to fail.  

    The method described in this note is best used under the following conditions:
    • The source CDB standby and the destination CDB standby share database file storage
    • The destination CDB is at the same or later version as the source CDB.
    • You do not have enough storage for two copies of the PDB on both the primary and standby environments (having an additional copy provides quick and easy fallback)
    • You require the PDB to have immediate HA and DR.
  • Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration  Document 2273304.1
    This note provides a method for using the source non-CDB's standby datafiles when a plugin and conversion to PDB operation is performed.  When performing this type of non-CDB plugin, a standby database expects files to be pre-staged in a particular location; the steps in this note show you how to create ASM aliases on the standby environment that point to the files used by the source database's standby database prior to the plugin operation.  This allows redo apply to locate the files and use them directly without having to defer recovery and without causing redo apply to fail.

    The methods described in this note are best used under the following conditions:
    • The source non-CDB standby and the destination CDB standby share database file storage
    • The destination CDB is at the same version as the source non-CDB
    • You do not have enough storage for two copies of the non-PDB on both the primary and standby environments (having an additional copy provides quick and easy fallback)
    • You require the new PDB to have immediate HA and DR
  • Using standby_pdb_source_file_dblink and standby_pdb_source_file_directory to Maintain Standby Databases when Performing PDB Remote Clones or Plugins  Document 2274735.1
    The note provides the steps for either cloning a PDB from another container and having the standby database use the same database link to retrieve the blocks from the source PDB, or providing backups of the source PDB datafiles in a location accessible to a standby database so that, on plugin, it can copy the files to the standby known location.
    • The steps for cloning in this note are best used under the following conditions:
      • Your standby databases are in Active Data Guard mode.  If the standby is not in Active Data Guard mode, the option cannot be used.
      • You can leave your source PDB in Read Only state for the duration of the copy to the primary and all standbys involved in using this method.  This option cannot be used with any of the 12.2 hot cloning functionality.
    • The steps for plugin in this note are best used under the following conditions:
      • You have a PDB that is used as a gold image to seed new PDBs multiple times.
      • You have multiple standby databases that need to be maintained and only wish to copy the files one time to make them accessible to all standbys.
      • You have space available to create backup image copies of the source PDB and can make the backup location available to the standby database.
      • You are using the AS CLONE clause when plugging in the PDB at the primary and therefore cannot determine the known location at the standby in advance.
  • Parameter enabled_pdbs_on_standby and STANDBYS Option With Data Guard Subset Standby  Document 2417018.1
    The 12.2 new feature Subset Standby allows you to control at PDB creation time which standby databases are expected to be maintained as part of the PDB creation. This feature does not limit the redo being shipped to standby databases, nor does it provide automation of file migration to individual standby databases. Any standby that is included as part of the criteria for inclusion in the Subset must still follow the same restrictions and usage notes listed below, e.g. files must still be available at the standby prior to plugin redo being applied, the standby must be in Active Data Guard mode for local cloning to continue automatically, etc.

    Subset Standby configuration is only followed and maintained at PDB creation time. It is possible to disable or enable recovery of a PDB at some point in the future regardless of the settings.

    This note provides examples of Subset Standby usage.
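As a concrete illustration of the STANDBYS=NONE clause discussed in the notes above, the following sketch uses the placeholder names from those notes (pdbclone and db_link are illustrative; see Document 1916648.1 for the full enable-recovery procedure):

```sql
-- On the primary: remote clone with recovery deferred at all standbys
CREATE PLUGGABLE DATABASE pdbclone FROM pdb@db_link STANDBYS=NONE;

-- Later, on a mounted standby, once the PDB's files have been copied there:
ALTER SESSION SET CONTAINER = pdbclone;
ALTER PLUGGABLE DATABASE ENABLE RECOVERY;
```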

   


   

PDB Creation at Standby Databases and Tempfiles

When a PDB is created at the standby database, regardless of how the creation is done, tempfiles for the TEMP tablespace will not be automatically created.  If you are running in Active Data Guard mode, you will see messages in the log file similar to the following when opening the PDB:

diag_adl:*********************************************************************
diag_adl:WARNING: The following temporary tablespaces in container(PDB1001)
diag_adl: contain no files.
diag_adl: This condition can occur when a backup controlfile has
diag_adl: been restored. It may be necessary to add files to these
diag_adl: tablespaces. That can be done using the SQL statement:
diag_adl:
diag_adl: ALTER TABLESPACE <tablespace_name> ADD TEMPFILE
diag_adl:
diag_adl: Alternatively, if these temporary tablespaces are no longer
diag_adl: needed, then they can be dropped.
diag_adl: Empty temporary tablespace: TEMP
diag_adl:*********************************************************************


This is expected behavior.  You should follow the recommendation in the alert messaging and manually create a tempfile with a SIZE setting of your choice, for the following reason:

  • If no tempfile is created, queries on the standby database that require temp space can fail once the session's PGA is exhausted.

Note that the database must be open to be able to add the tempfile.
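Assuming OMF is in use (so no file specification is required), adding the tempfile might look like the following sketch, with the PDB name taken from the warning above and a SIZE of your choice:

```sql
-- Run on the open (Active Data Guard) standby
ALTER SESSION SET CONTAINER = pdb1001;
ALTER TABLESPACE temp ADD TEMPFILE SIZE 1G;
```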

Standby Database Behavior when Creating, Cloning or Plugging in a PDB

The following items describe the various behaviors that will occur at the standby database when a PDB is created, either as a new PDB, a plugged in PDB or a cloned PDB.

NOTE: When using the 12.2 hot clone, PDB refresh and PDB relocate functionality, the new PDBs should always be created with the STANDBYS=NONE clause. Currently there is no way for the standby to exactly repeat the same operation as the primary (blocks at the source have changed by the time redo is received at the standby, it is not possible to run multiple media recovery sessions on the standby to apply redo from the source generated during the hot clone, etc.), so you must defer recovery of the PDB and enable recovery at some point in the future. This applies to both remote (source PDB is in another container database) and local (source PDB is in the same container database) clones. PDBs created as refreshable should never have recovery enabled as long as there may be a refresh operation to be performed. If the refreshable PDB has been transitioned into a standalone PDB via the alter pluggable database ... refresh none operation, at that point it can have recovery enabled at any corresponding standby.

Starting in 18c, the database will automatically create the PDBs as if STANDBYS=NONE had been specified. You will not need to use the clause for hot clones, PDB refresh and PDB relocate in this version and later. You will see an entry in the standby database alert log similar to the following:

<PDB>(28):Tablespace-SYSTEM during PDB create skipped since source is in r/w mode or this is a refresh clone
<PDB>(28):File #<FILENUMBER> added to control file as 'UNNAMED00221'. Originally created as:
<PDB>(28):'<FILENAME>'
<PDB>(28):because the pluggable database was created with nostandby
<PDB>(28):or the tablespace belonging to the pluggable database is
<PDB>(28):offline.

   


  • Creating a new PDB from SEED
    Creation of a brand new empty pluggable database will automatically replicate to each physical standby database where redo apply will copy the PDB$SEED files at the standby site to create the new PDB.  

    For these cases Oracle recommends that recovery not be deferred.
  • Cloning an existing PDB to a new PDB in the same container database
    Behavior on a standby database differs if the standby is running Active Data Guard (the standby database is open read only with apply running) versus when it is not.      

       
    NOTE:  All snapshot clones should be created with the STANDBYS=NONE clause.  At this time, replication of snapshot clones to a Data Guard standby database is not supported.
       

    • Cloning a PDB in the same container at the primary:
      • On any physical standby database running Active Data Guard the standby will copy the source files located at the standby site and automatically create the PDB on the standby.  
        NOTE: The source PDB on the primary database must be in READ ONLY mode for the duration of the clone creation process.  If the source PDB is in READ WRITE, redo apply at the standby will fail when attempting to create the cloned PDB.
           

        For these cases Oracle recommends that recovery not be deferred.
      • On any physical standby database not running Active Data Guard, the redo apply process currently cannot copy the source PDB files directly as it cannot guarantee file consistency.  

        For these cases, Oracle recommends deferring recovery of the PDB using the STANDBYS=NONE clause on the CREATE PLUGGABLE DATABASE statement.  Recovery of the new PDB can be enabled at some point in the future, once the datafiles of the new PDB have been copied from the primary CDB to the standby database in a manner similar to that documented in  Document 1916648.1.
  • Plugging in a PDB (not cloning) from another CDB
    Redo apply has the ability to search in “known locations” (those derived from either DB_CREATE_FILE_DEST or DB_FILE_NAME_CONVERT), but only for the resulting files that it will apply redo to, not the source files.  These files must be in the exact same state as the primary sees them when they are plugged in, which means they must be copied from the source database to the standby database known location prior to plugin at the primary.  Redo apply will look in the known location for the datafiles and validate file header information to ensure the files are correct.  For Oracle Managed Files, redo apply has been coded in such a way that it will search the known location directory and ignore file names, looking at file header information for matches.  The DB_FILE_NAME_CONVERT parameter must be specified with values for the search algorithm to be enabled.  Even though OMF environments will not directly use the convert parameter, you must set it to something; a recommended value is '/<primary db_unique_name>/,/<standby db_unique_name>/'.  

    For these cases Oracle recommends that recovery not be deferred.  You should pre-copy the new PDB's files to the known location before executing the plugin statement at the primary CDB so that redo apply can ingest them into the standby database in a manner similar to the process documented in  Document 1576755.1.  

    If you do not have the space to pre-copy files to the standby and your source database has a standby and that standby shares storage with the destination standby, you also have the option of using the process defined in  Document 2273829.1 if the source is a PDB or  Document 2273304.1 if the source is a non-CDB.  This method works best when both the source primary and destination primary databases share storage and also the source standby and destination standby databases share storage.  This would most commonly be the case when you are using unplug/plug to upgrade individual PDBs and have created a new CDB at a later version in the same environment as the source.

    If the source PDB is a Gold Image type PDB that you use to frequently seed new PDBs, you have the option of using the process defined in  Document 2274735.1.  Although this method works for a one time copy, if you are pre-copying to the standby it is better to copy the files to the standby known location.  See the description of the note in the Additional Notes section for more possibilities of using this method.
     
  • Cloning a PDB from another CDB via a Manifest File 
    Cloning a PDB using a manifest file is similar to plugging in a PDB (previous item) but there's a difference.  The process creates a new PDB which generates a new GUID for the PDB.  There is no way of determining what that GUID will be prior to the new PDB being created.  In an OMF/ASM configuration, this means you will not be able to pre-copy the files to the standby “known location” mentioned earlier. 

    For these cases, Oracle recommends deferring recovery of the PDB using the STANDBYS=NONE clause on the CREATE PLUGGABLE DATABASE statement.  Recovery of the PDB can be enabled at some point in the future once the PDB's data files have been copied from the primary database to the standby database in a manner similar to that documented in  Document 1916648.1.


    If the source PDB is a Gold Image type PDB that you use to frequently seed new PDBs, you have the option of using the process defined in  Document 2274735.1 without specifying STANDBYS=NONE.  Since you are using the AS CLONE clause for OMF environments you will not be able to determine the standby known location in advance.  Using this method redo apply will copy the datafiles where they need to go.
  • Performing a remote PDB clone via Database Link

       
    NOTE: All snapshot clones should be created with the STANDBYS=NONE clause. At this time, replication of snapshot clones to a Data Guard standby database is not supported.
       

    Performing a remote PDB clone via a database link is similar to cloning via a manifest.  For OMF/ASM configurations, the GUID of the new PDB cannot be pre-determined so it's not possible to pre-copy the datafiles to the “known location”.

    If you can endure having the source PDB in Read Only mode for the duration of the copies and your standby databases are in Active Data Guard mode, Oracle recommends using the process defined in  Document 2274735.1.  This allows redo apply to copy the source PDB across the database link to the standby and maintain HA/DR in real time.  If only some of your standby databases are in Active Data Guard mode, you can use the new 12.2 Subset Standby feature to specify which standby databases to include in the automated copy.

    If the clone is being done with the source PDB open Read Write or your standby databases are not in Active Data Guard mode, Oracle recommends deferring recovery of the PDB using the STANDBYS=NONE clause on the CREATE PLUGGABLE DATABASE statement.  Recovery of the PDB can be enabled at some point in the future once the PDB's data files have been copied from the primary database to the standby database in a manner similar to that documented in  Document 1916648.1.
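After any of the clone or plugin scenarios above, the recovery state of each PDB at a standby can be confirmed with a query like the following (RECOVERY_STATUS is a documented V$PDBS column; run it on the standby from CDB$ROOT):

```sql
-- On the standby: which PDBs currently have recovery enabled or disabled
SELECT name, recovery_status
  FROM v$pdbs;
```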

Considerations for Application Root/PDBs

Starting in 12.2, you can create an Application Root and Application PDBs inside that container in a primary database. These are created as hot clones and, after applying  , with the STANDBYS=NONE clause. If you have an application container/pdb:

  1. Install   to avoid interrupted recovery on the standby (fixed in 18.1).
  2. See the notes above about the 12.2 hot clone feature and standbys.
  3. Wherever possible, deploy application changes by means of Application Patches rather than upgrades.
  4. Where Application upgrades are necessary, Oracle recommends doing one or more of the following:
    1. Mitigate the exposure to DR vulnerabilities caused by STANDBYS=NONE in the interval between begin upgrade and the last tenant sync by taking applications offline during this interval.
    2. Immediately after creating the Application Root Clone, follow the steps in  Document 1916648.1 to enable recovery of the Application Root Clone.

Resuming Redo Apply in Failure Scenarios

If redo apply has failed because it cannot find the data files, you must manually resolve the problem on the standby so redo apply can continue. 

The following is an example of the error redo apply will encounter.

Recovery interrupted!

Recovery stopped due to failure in applying recovery marker (opcode 17.34).

Datafiles are recovered to a consistent state at change 63451382 but controlfile could be ahead of datafiles.

Wed Aug 26 11:08:32 2015

Errors in file < Oracle base >/diag/rdbms/cdbstby/cdbstby1/trace/cdbstby1_pr00_20293.trc:

ORA-01274: cannot add data file that was originally created as '+DATA/CDB100/1E390C96F9DB840AE0537A28890A285B/DATAFILE/system.489.888749763'

 

There are four possible resolutions short of rebuilding the entire standby database.  The methods are described in the following sections.

Disabling recovery of the New PDB at the Standby Database

This is by far the simplest method.  With this method you will immediately be able to restart redo apply to maintain DR for all the other PDBs in the container and position yourself for a zero downtime method of establishing DR for the specific PDB as described in Making Use of the STANDBYS=NONE Feature with Oracle Multitenant  Document 1916648.1.

  1. If you manage your Data Guard configuration with Data Guard Broker, disable redo apply on the standby via the Broker command line
    DGMGRL> edit database cdbstby set state='apply-off';
       
  2. If the database was running in Active Data Guard mode, shut it down and restart in mount mode
    For RAC database use srvctl to shutdown all instances
    $srvctl stop db -d cdbstby

    For Single Instance databases, shutdown using SQL*Plus
    SQL> shutdown immediate
       
  3. Regardless of RAC or Single Instance, use SQL*Plus to mount one instance of the standby database.
    SQL> startup mount
       
  4. In the SQL*Plus session connect to the PDB that is causing the failure:
    SQL> alter session set container=testpdb;
       
  5. Disable recovery of the PDB causing the failure
    SQL> alter pluggable database disable recovery;
       
  6. Restart redo apply
    If you manage your Data Guard configuration with Data Guard Broker, exit SQL*Plus and enable redo apply on the standby via the Broker command line
    SQL> exit

    DGMGRL> edit database cdbstby set state='apply-on';

    If you do not manage your Data Guard Configuration with Data Guard Broker, restart redo apply via SQL*Plus

    SQL> alter session set container=cdb$root;

    SQL> recover managed standby database disconnect;
       
  7. Once redo apply has had a chance to synchronize the files, Active Data Guard mode can be re-enabled
  8. When ready, use the instructions found in Making Use of the STANDBYS=NONE Feature with Oracle Multitenant  Document 1916648.1 to enable recovery of the failed PDB.
     

Restoring Original Data Files for the New PDB to the Standby Database

If you still have the data files from the source PDB available and the source PDB has not been re-opened since the plugin operation, you can copy the source datafiles to the known location directory on the standby; or, if the new PDB has not yet been opened on the primary CDB, you can copy the new PDB's data files to the standby database.  This process is best used when there is a small number (fewer than 10) of datafiles in the source PDB.

   

  1. Get the GUID and CON_ID for the source PDB
    SQL> select name, con_id, guid from v$pdbs where name='SRCPDB';
    NAME                               CON_ID GUID
    ------------------------------ ---------- --------------------------------
    SRCPDB                                  8 1E15D0913C3F7AECE0537A28890AB256
       

  2. Get the GUID and CON_ID for the newly created PDB from the primary
    SQL> select name, con_id, guid from v$pdbs where name='TESTPDB';

    NAME                               CON_ID GUID
    ------------------------------ ---------- --------------------------------
    TESTPDB                                 9 1E390C96F9DB840AE0537A28890A285B
       

  3. In ASM on the standby site, create directory structures for the GUID of the newly created PDB
    ASMCMD [+] > cd +data/cdbstby
    ASMCMD [+data/cdbstby] > mkdir 1E390C96F9DB840AE0537A28890A285B
    ASMCMD [+data/cdbstby] > cd 1E390C96F9DB840AE0537A28890A285B
    ASMCMD [+data/cdbstby/1E390C96F9DB840AE0537A28890A285B] > mkdir DATAFILE
       

  4. In ASM, copy the datafiles from the source PDB to the standby database directory structure just created.  This can be done via the ASMCMD cp command.  For example, if the new PDB was cloned from another PDB in the same container database, you can copy the files from the source PDB on the standby.  Note that the new file names must not be ASM generated file names.  You can use a variation of the following script to generate ASMCMD commands to copy the files to the standby ASM environment, replacing the data in <> with values appropriate for your environment.  The REPLACE clauses will change the disk group name, the database unique name and the source PDB GUID, and replace the periods in the file name with underscores.  

    The script should be executed in the CDB$ROOT of the CDB that contained the source PDB.
    set newpage 0
    set linesize 999
    set pagesize 0
    set feedback off
    set heading off
    set echo off
    set space 0
    set tab off
    set trimspool on
    spool asm_cp_files.sh
    select 'asmcmd cp '||name||' sys/<sys password>@<standby host>.<standby ASM instance>:'||replace(replace(replace(replace(name,'.','_'),'<source CDB name>','<standby CDB name>'),'<source disk group>','<standby disk group>'),'<source PDB guid>','<standby PDB guid>') from v$datafile where con_id=<source PDB con_id>;
    exit
       

    Using the values from this example, the query would look like:
    select 'asmcmd cp '||name||' sys/<sys password>@stbyhost.+ASM1:'||replace(replace(replace(replace(name,'.','_'),'CDB100','CDBSTBY'),'DATA','DATA'),'1E15D0913C3F7AECE0537A28890AB256','1E390C96F9DB840AE0537A28890A285B') from v$datafile where con_id=8;
       
    Set your environment to point to the local ASM instance for the source PDB and execute the generated script to copy the files to the remote ASM environment of the standby database.

  5. Set STANDBY_FILE_MANAGEMENT=MANUAL in the standby database
    SQL> alter system set standby_file_management=manual;
       

  6. Repeat the following for each datafile.
    1. Retrieve the file names for the new PDB from v$datafile using the Container ID from  Step 2 above.
      SQL> select con_id, name from v$datafile where con_id=9;
         
      The first time you run this query, if STANDBY_FILE_MANAGEMENT was set to AUTO the file name will be an OMF generated file name.  After STANDBY_FILE_MANAGEMENT is set to MANUAL, all subsequent names will default to a form of $ORACLE_HOME/dbs/UNNAMEDxxx.
         

    2. Rename the incorrectly named datafile(s) to the appropriate filename that was just copied
      SQL> alter database rename file '<filename in v$datafile>' to '<new file name for file built in  Step 4>';
         
      Redo apply adds datafiles by tablespace.  If a tablespace has multiple datafiles, you will need to rename all the datafiles in the tablespace before moving on to the next step.
         
    3. Restart redo apply
      SQL> alter database recover managed standby database disconnect;
         
      • If additional tablespaces for this PDB are encountered, redo apply will fail and add the datafile names for the tablespace to v$datafile.
      • When a failure is encountered, loop back to  Step 6.1 and repeat the process of renaming the datafiles.
         
  7. After all files have been successfully added to the standby, set STANDBY_FILE_MANAGEMENT=AUTO in the standby database
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
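    Taken together, steps 5 through 7 might look like the following sketch for a single datafile, using this note's example names (both file paths are illustrative; take the real placeholder name from v$datafile and the real target name from the files copied in Step 4):

    ```sql
    -- Step 5: stop automatic file management so renames are allowed
    ALTER SYSTEM SET standby_file_management = MANUAL;

    -- Step 6: point the placeholder entry at the file copied into ASM
    ALTER DATABASE RENAME FILE
      '/u01/app/oracle/product/12.1.0/dbs/UNNAMED00221'
      TO '+data/cdbstby/1E390C96F9DB840AE0537A28890A285B/DATAFILE/system_489_888749763';

    -- Restart redo apply; loop back on failure for each remaining tablespace
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

    -- Step 7: once all files are added, restore automatic file management
    ALTER SYSTEM SET standby_file_management = AUTO;
    ```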
       

 

Refreshing the Controlfile on the Standby Database and Copying the New PDB's Datafiles to the Standby Database

If you do not have all of the source PDB datafiles in the correct state or there is a large number (more than 10) of datafiles in the PDB, use RMAN to refresh the controlfile on the standby, copy the datafiles for the new PDB from the primary database to the standby database and restart media recovery.

  1. Connect via SQL*Plus to the standby database and save the CURRENT_SCN for later. This will be used to determine whether any files created after redo apply stopped need to be restored from the source database.
    SQL> select current_scn from v$database;

    CURRENT_SCN
    -----------
       63611987
       
  2. Note the names of the online redo log files and standby redo log files of the standby database. You may require these names in a later step.
    SQL> select member from v$logfile order by group#;

    MEMBER
    --------------------------------------------------------------------------------
    +DATA/CDBSTBY/ONLINELOG/group_1.340.885050893
    +DATA/cdbstby/onlinelog/group_2.327.879790205
    +DATA/cdbstby/onlinelog/group_3.317.879790263
    +DATA/cdbstby/onlinelog/group_4.353.879790319
    +DATA/cdbstby/onlinelog/group_5.270.879790373
    +DATA/cdbstby/onlinelog/group_6.351.879790431
    +DATA/cdbstby/onlinelog/group_7.308.879790487
    +DATA/cdbstby/onlinelog/group_8.388.879790543
    +DATA/cdbstby/onlinelog/group_9.330.879790599
    +DATA/cdbstby/onlinelog/group_10.398.879790655
    +DATA/cdbstby/onlinelog/group_11.314.879790711
    +DATA/cdbstby/onlinelog/group_12.312.879790767
    +DATA/cdbstby/onlinelog/group_13.397.879790823
    +DATA/cdbstby/onlinelog/group_14.261.879790881
    +DATA/cdbstby/onlinelog/group_15.278.879790937
    +DATA/cdbstby/onlinelog/group_16.355.879790993
    +DATA/cdbstby/onlinelog/group_17.391.879791049
    +DATA/cdbstby/onlinelog/group_18.295.879791107
    +DATA/cdbstby/onlinelog/group_19.410.879791161
    +DATA/cdbstby/onlinelog/group_20.263.879791219
    +DATA/cdbstby/onlinelog/group_21.380.879791275
    +DATA/cdbstby/onlinelog/group_22.383.879791331
    +DATA/cdbstby/onlinelog/group_23.386.879791389
    +DATA/cdbstby/onlinelog/group_24.392.879791445

    24 rows selected.
       
  3. Disable apply on the standby database if using Data Guard Broker
    DGMGRL> edit database cdbstby set state='apply-off';
       

  4. Refresh the control file of the standby database from the primary database to make the control file current.
    1. Shutdown the standby database and restart one instance in NOMOUNT mode

       
      $ srvctl stop database -d cdbstby -o abort

      SQL> startup nomount
       
    2. Connect to the standby database instance as the RMAN target.
    3. Restore the standby control file by using the control file on the primary database.
       
      RMAN> RESTORE STANDBY CONTROLFILE FROM SERVICE ;
       
    4. Mount the standby database
       
      RMAN> alter database mount;
  5. If you are not using an RMAN catalog, the names of files in the standby control file are the names that were used in the primary database. You must update the names of the data files and the temp files in the standby control file. Use the CATALOG command and the SWITCH command to update all the data file names. The SWITCH command is used after restoring any newly created files from the source database in Step 7.  The CATALOG command searches sub-directories, so its START WITH clause should specify the highest-level directory in ASM for the database, to ensure all datafiles, including those for PDBs, are found.
    RMAN> CATALOG START WITH '+DATA/CDBSTBY/';
       

    In the example above, "+DATA/CDBSTBY/" is the location where the data files and PDB subdirectories reside on the standby database. The catalog operation will search all subdirectories for datafiles.  All data files must be stored in this location.

  6. Use the CURRENT_SCN from  Step 1 to determine the new files for the PDB that were added and need to be restored from the primary.  At a minimum this will include all the datafiles for the just created PDB plus any additional datafiles that may have been created since redo apply failed.
    RMAN> select file#, con_id from v$datafile where creation_change# >= 63611987 order by 2,1;
         FILE#     CON_ID
    ---------- ----------
           188          9
           189          9
           190          9
           191          9
           192          9
           193          9
           194          9
           195          9
           196          9
           197          9
           198          9
           199          9
           200          9
           201          9
    <snip>
           234          9
           235          9
           236          9
           237          9
           238          9
           239          9
           240          9

    53 rows selected
       
    Ensure all of the datafiles in the previous list are for the just created PDB.
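The logic of the query above can be illustrated with a small sketch: given (file#, con_id, creation_change#) rows in the style of v$datafile (all values hypothetical), select the files created at or after the saved CURRENT_SCN, ordered by container and then file number, matching the query's ORDER BY 2,1.

```python
# Illustrative sketch: replicate the Step 6 query logic to identify
# which files must be restored from the primary. The rows mimic
# (file#, con_id, creation_change#) tuples from v$datafile; all values
# here are hypothetical.
def files_created_since(rows, scn):
    """Return (file#, con_id) pairs created at or after the saved SCN,
    ordered by container then file number (matching 'order by 2,1')."""
    hits = [(f, c) for f, c, created in rows if created >= scn]
    return sorted(hits, key=lambda t: (t[1], t[0]))

rows = [
    (1,   1, 100),        # pre-existing CDB$ROOT file, not restored
    (188, 9, 63612100),   # new PDB file, created after the saved SCN
    (189, 9, 63612100),
    (241, 5, 63650000),   # later file in another PDB, also needs restore
]
print(files_created_since(rows, 63611987))
# [(241, 5), (188, 9), (189, 9)]
```

Files belonging to containers other than the new PDB (like file 241 above) are the ones handled separately in Step 8.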

  7. Restore the PDB's datafiles from the primary database. Execute an RMAN RUN block similar to the following to restore all of the files for the pluggable database.
    RMAN> run{ 
    2> set newname for pluggable database TESTPDB to '+DATA';
    3> restore pluggable database TESTPDB from service CDBPRIM;
    4> }
       
  8. If additional files for other pluggable databases need to be restored, execute an RMAN RUN block similar to the following specifying the file numbers of the additional files to be restored.  The file numbers can be determined from the results of  Step 6.
    RMAN> run{ 
    2> set newname for database to '+DATA';
    3> restore datafile 241,242 from service CDBPRIM;
    4> }
     
  9. If not using an RMAN catalog, rename the datafiles in the standby control file by switching to the copies cataloged in  Step 5.
    RMAN> SWITCH DATABASE TO COPY;
       
  10. Update the names of the online redo logs and standby redo logs in the standby control file using one of the following methods:
    1. Use the ALTER DATABASE CLEAR command to clear the log files in all redo log groups of the standby database. RMAN then recreates all the standby redo logs and the online redo log files.
      Clearing log files is recommended only if the Data Guard replica does not have access to the online redo log files and standby redo log files of the primary database. If the standby database has access to the primary's redo log files and those file names are OMF names, then the ALTER DATABASE CLEAR command will delete log files from the primary database. Also, clearing the log files creates new log files; it will not reuse the existing ones, because the new standby control file is not aware of them. To conserve space, delete the existing log files from ASM before running the ALTER DATABASE CLEAR commands.
         

      The GROUP# column of the V$LOGFILE view queried in Step 2 provides the redo log group identifiers of the log groups that must be cleared. Use separate ALTER DATABASE CLEAR commands to clear each redo log group.

      The following command clears the redo log group with identifier 2.

       
      SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
       
    2. Use the ALTER DATABASE RENAME FILE command to rename the redo log files in the controlfile of the standby database. You can use a single command to rename the redo log files listed in Step 2.

      NOTE: Renaming log files is recommended when the number of online redo logs files and standby redo log files is the same in the source database and the Data Guard replica.
      1. Ensure STANDBY_FILE_MANAGEMENT is set to MANUAL to allow renaming of the redo log files
        SQL> alter system set standby_file_management=manual;

        If using Data Guard Broker

        DGMGRL> edit database cdbstby set property standbyfilemanagement=manual;
           

      2. Rename the log files
        SQL> alter database rename file '+DATA/CDBPRIM/ONLINELOG/group_1.290.878572819', …,
        '+DATA/CDB100/ONLINELOG/group_24.402.879704457'
        to
        '+DATA/CDBSTBY/ONLINELOG/group_1.340.885050893', …,
        '+data/cdbstby/onlinelog/group_24.392.879791445';
           
      3. Set STANDBY_FILE_MANAGEMENT to AUTO
        SQL> alter system set standby_file_management=auto;

        If using Data Guard Broker

        DGMGRL> edit database cdbstby set property standbyfilemanagement=AUTO;
           
  11. Restart redo apply on the standby database   
    SQL> alter database recover managed standby database disconnect;

    If using Data Guard Broker

    DGMGRL> edit database cdbstby set state=apply-on;
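As a convenience, the RMAN RUN blocks used in Steps 7 and 8 follow a fixed shape and can be generated from the PDB name, target diskgroup, and primary service name. The sketch below is illustrative only, using the document's fictitious names; it is not an Oracle-supplied interface.

```python
# Illustrative helper: build the RMAN RUN block used in Step 7 from a
# PDB name, target diskgroup, and primary service name. Names follow
# the document's fictitious environment; this is a sketch, not an
# Oracle API.
def restore_pdb_run_block(pdb, diskgroup, service):
    return (
        "run{\n"
        f"set newname for pluggable database {pdb} to '{diskgroup}';\n"
        f"restore pluggable database {pdb} from service {service};\n"
        "}"
    )

print(restore_pdb_run_block("TESTPDB", "+DATA", "CDBPRIM"))
```

The resulting text can be pasted into an RMAN session connected to the mounted standby.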
       
       

Using Flashback Database and Guaranteed Restore Points (GRPs)

It is possible to roll back a plugin operation on a standby, allowing you to repair issues such as missing or incorrect files and quickly re-establish HA and DR capabilities. Prior to plugging in a PDB on the primary database, create a GRP on the standby database. This allows you to replay the plugin redo, so that redo apply can again search for and add the datafiles for the PDB being plugged in. The patches listed in the prerequisites allow reuse of existing files already in place, in addition to copying files directly from the primary database that have already been plugged in but are at an SCN ahead of the rest of the standby database.


Prerequisites:

  • Patch 31765246 is required to provide additional options for more easily and quickly repairing the standby should issues arise.
  • A second patch is required to allow use of Flashback Database as an option to rerun from failures.
  • A Guaranteed Restore Point (GRP) must be created on the standby database prior to plugging in a PDB on the primary.
  1. Just prior to plugging in the PDB on the primary, create a GRP on the standby. This does not require flashback database to be enabled.
       
    SQL> create restore point pre_plugin_standby guarantee flashback database;
       
       
    NOTE: All work to copy files and set up the standby should be completed prior to creating the GRP, to avoid maintaining large flashback logs.
       
  2. Perform the plugin operation on the primary.
    1. If the plugin redo is applied without issue on the standby, then once all files have been added, drop the GRP as it is no longer required.
         
      SQL> drop restore point pre_plugin_standby;
         
    2. If issues arose while applying the redo on the standby, do the following:
         
      NOTE: While these steps are being executed, the entire standby CDB will be impacted; no recovery will be performed. Once the steps are complete and redo apply is restarted, it will apply all of the redo that has been shipped from the primary since the creation of the GRP.

      Prior to using this method ensure all patches from the prerequisites section have been applied.
         

      We must disable recovery of the PDB for Flashback Database to work correctly:
      1. Turn off redo apply in Data Guard Broker so it does not automatically start on restart of the standby
           
        DGMGRL> edit database cdbstby set state='APPLY-OFF';
           
      2. Restart the CDB standby in mount mode, ensuring in Real Application Cluster environments only one instance is running.
        1. For RAC databases
             
          $ srvctl stop database -d cdbstby -o immediate
          $ srvctl start instance -d cdbstby -i cdbstby1 -o mount
             
        2. For Single Instance databases (SIDB)
             
          SQL> shutdown immediate
          SQL> startup mount
             
      3. Connect to the PDB and disable recovery of the PDB on the CDB standby.
           
        SQL> alter session set container=TESTPDB;
        SQL> alter pluggable database disable recovery;
           
      4. Connect to the root and flashback the standby database
           
        SQL> alter session set container=cdb$root;
        SQL> flashback database to restore point pre_plugin_standby;
           
      5. Repair any issues that caused redo apply to fail (e.g. missing ASM aliases or datafiles)
      6. Staying in mount mode on the CDB standby, start redo apply
           
        SQL> recover managed standby database disconnect;
           

        Redo apply should now start applying all redo from the GRP forward, including rescanning for all the files for the newly plugged in PDB.

Steps 1-6 can be repeated as many times as required until all issues are resolved, all files for the PDB have been added to the standby, and additional redo is being applied. To return to normal operations, do the following:

  1. Stop recovery on the CDB standby database
       
    SQL> recover managed standby database cancel;
       
  2. Drop the GRP in the CDB standby database
       
    SQL> drop restore point pre_plugin_standby;
       
  3. Restart redo apply in Data Guard Broker
       
    DGMGRL> edit database cdbstby set state='APPLY-ON';
       
  4. For RAC databases, restart additional database instances
       
    $ srvctl start instance -d cdbstby -i cdbstby2 -o mount
       
  5. If you have the Active Data Guard license, connect to your CDB standby database via Data Guard Broker and open the database in read only mode.
       
    DGMGRL> sql "alter database open read only"
       

If you continue to have issues and require that your CDB standby database maintain protection of additional PDBs in the standby during problem resolution:

  1. Disable recovery of the failed PDB as noted above. 
  2. Restart redo apply so that the other PDBs in the CDB standby are protected.
  3. Follow the Enable Recovery steps in  Document 1916648.1 to enable recovery of the failed PDB.
  4. Drop the GRP in the CDB standby

REFERENCES

NOTE:1576755.1  - Step by Step Process of Migrating non-CDBs and PDBs Using ASM for File Storage
NOTE:2274735.1  - Using standby_pdb_source_file_dblink and standby_pdb_source_file_directory to Maintain Standby Databases when Performing PDB Remote Clones or Plugins
NOTE:2042607.1  - Cloning a Pluggable Database from an RMAN Container Database Backup
NOTE:1916648.1  - Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant
NOTE:2273829.1  - Reusing the Source Standby Database Files When Plugging a PDB into the Primary Database of a Data Guard Configuration
NOTE:2273304.1  - Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration


From the "ITPUB Blog". Link: http://blog.itpub.net/29487349/viewspace-2770697/. Please cite the source when reprinting; otherwise legal liability may be pursued.
