RMAN Cross-Platform Transportable Tablespaces (Different Endian)

Posted by renjixinchina on 2013-04-10
How to Migrate to different Endian Platform. Using Transportable Tablespaces With RMAN [ID 371556.1]

Goal

Starting with Oracle Database 10g, you can transport tablespaces across platforms. This note provides a step-by-step guide on how to do it with ASM datafiles and with OS filesystem datafiles.

Fix

Supported platforms

You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported and to determine each platform's endian format (byte ordering).

SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         13 Linux 64-bit for AMD             Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows 64-bit for AMD Little
         17 Solaris Operating System (x86)   Little


If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
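The conversion can be run on whichever side is more convenient. Below is a minimal sketch (not part of the original note) using platform names from the table above and hypothetical file names and paths; on the source you name the target platform with TO PLATFORM, while on the target you name the source platform with FROM PLATFORM:

  Run on the source database (produces converted copies in /tmp):
  RMAN> CONVERT TABLESPACE TBS1
        TO PLATFORM 'Linux IA (64-bit)' FORMAT '/tmp/%U';

  Or run on the target database, against datafiles already copied over from the source:
  RMAN> CONVERT DATAFILE '/tmp/tbs1_01.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)' FORMAT '/oradata/%U';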

Transporting the tablespace

  1. Prepare for export of the tablespace.
    • Check that the tablespace will be self contained:
      SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
      SQL> select * from sys.transport_set_violations;

      Note: these violations must be resolved before the tablespaces can be transported.
    • The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:
      SQL> ALTER TABLESPACE TBS1 READ ONLY;
      SQL> ALTER TABLESPACE TBS2 READ ONLY;
  2. Export the metadata.
    • Using the original export utility:
      exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
    • Using Datapump export:
      First create the directory object to be used for Datapump, like in:
      CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
      GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

      Then initiate Datapump Export:
      expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = TBS1,TBS2

      If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:
      expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

      If the tablespace set being transported is not self-contained then the export will fail.
  3. Use V$TRANSPORTABLE_PLATFORM to determine the endianness of each platform. You can execute the following query on each platform's instance:
    SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
    FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
    WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

    If the endian formats are different, then a conversion is necessary for transporting the tablespace set:
    RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

    RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';

    Then copy the datafiles as well as the export dump file to the target environment.
  4. Import the transportable tablespace.
    • Using the original import utility:
      imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
    • Using Datapump:
      CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
      GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

      Followed by:
      impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

      You can use REMAP_SCHEMA if you want to change the ownership of the transported database objects.
  5. Put the tablespaces in read/write mode:
    SQL> ALTER TABLESPACE TBS1 READ WRITE;
    SQL> ALTER TABLESPACE TBS2 READ WRITE;
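    Optionally (not part of the original note), you can confirm on the target that the tablespaces were plugged in and that their datafiles are available:
    SQL> SELECT tablespace_name, status, plugged_in
         FROM dba_tablespaces
         WHERE tablespace_name IN ('TBS1','TBS2');
    SQL> SELECT file_name, status
         FROM dba_data_files
         WHERE tablespace_name IN ('TBS1','TBS2');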

Using DBMS_FILE_TRANSFER

You can also use DBMS_FILE_TRANSFER to copy datafiles to another host. You need to follow the same steps specified above for ASM files, but if the endian formats are different, then you must run the RMAN CONVERT after transferring the files.

This is an example of usage:

RMAN> CONVERT DATAFILE
      '/hq/finance/work/tru/tbs_31.f',
      '/hq/finance/work/tru/tbs_32.f',
      '/hq/finance/work/tru/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT= "/hq/finance/work/tru/", "/hq/finance/dbs/tru"
      PARALLELISM=5;


The same example, but here showing the destination being an +ASM diskgroup:

RMAN> CONVERT DATAFILE
      '/hq/finance/work/tru/tbs_31.f',
      '/hq/finance/work/tru/tbs_32.f',
      '/hq/finance/work/tru/tbs_41.f'
      TO PLATFORM="Solaris[tm] OE (32-bit)"
      FROM PLATFORM="HP TRu64 UNIX"
      DB_FILE_NAME_CONVERT="/hq/finance/work/tru/", "+diskgroup"
      PARALLELISM=5;

 

Note: If the source files are on an ASM diskgroup and the target platform has a different endianness, the CONVERT command must be executed. The files cannot be copied directly between two ASM instances on platforms of different endianness.
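When the source datafiles are in ASM, a minimal source-side sketch (not part of the original note; the tablespace name and staging path are illustrative) is to convert them out of the diskgroup into an OS staging directory in the target's format, then copy the staged files and the dump file to the target, where they can be placed into the target diskgroup (for example with RMAN BACKUP AS COPY, as shown in the ASM procedure later in this note):

RMAN> CONVERT TABLESPACE TBS1
      TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/stage/%U';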

 

*** WARNING ***

  • Index Organized Tables (IOT) can become corrupt when using Transportable Tablespace (TTS) from Solaris, Linux or AIX to HP/UX.
    This is a restriction caused by BUG:9816640.
    Currently there is no patch for this issue, the Index Organized Tables (IOT) need to be recreated after the TTS.

    See Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS.
  • When tables contain dropped columns, Datapump transport_tablespaces can produce wrong dictionary metadata for some of those tables. See Document 1440203.1 for details on this alert.


Limitations on Transportable Tablespace Use

  1. The source and target database must use the same character set and national character set (a comparison query is shown in the sketch after this list).
  2. You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
  3. Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
    • Review the table "Objects Exported and Imported in Each Mode" in the Oracle Database Utilities documentation; several object types are not exported in tablespace mode.
  4. If the owners of the tablespace objects do not exist in the target database, the usernames must be created manually before starting the transportable tablespace import (see the sketch after this list).

    • If you use spatial indexes, then:
      • be aware that TTS across different-endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation was lifted in 11g
      • specific Spatial packages must be run before exporting and after transporting; please see the Oracle Spatial documentation.
  5. Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes, but you must use the IMP and EXP utilities, not Data Pump. When using EXP, ensure that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default).

    The following query returns a list of tablespaces that contain XMLTypes:
    select distinct p.tablespace_name
    from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
    where t.table_name=x.table_name and
          t.tablespace_name=p.tablespace_name and
          x.owner=u.username;

    Transporting tablespaces with XMLTypes has the following limitations:
    1. The target database must have XML DB installed.
    2. Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
    3. Schemas referenced by XMLType tables cannot have cyclic dependencies.
    4. Any row level security on XMLType tables is lost upon import.
    5. If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database, an error is returned unless the ignore=y option is set.
  6. Advanced Queues: Transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients.
  7. You cannot transport the SYSTEM tablespace or objects owned by the user SYS.
  8. Opaque Types: Types (such as RAW, BFILE, and the AnyTypes) can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform.
  9. Floating-Point Numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump but not the original export utility, EXP.
  10. Please also check Document 1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
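To illustrate limitations 1 and 4 above, here is a minimal sketch (not part of the original note; the user name and its attributes are hypothetical):

  Compare the character sets (run on both source and target; the values must match):
  SQL> SELECT parameter, value
       FROM nls_database_parameters
       WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

  Pre-create a missing object owner on the target before the import:
  SQL> CREATE USER app_owner IDENTIFIED BY app_owner_pwd
       DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;
  SQL> GRANT CREATE SESSION TO app_owner;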

Transportable tablespace EXP/IMP of ASM files

  • Using RMAN CONVERT

    There is no direct way to exp/imp ASM files as a transportable tablespace. However, this functionality can be achieved via RMAN.

    You must follow these steps:
    1. Prepare for exporting the tablespace.

      • Check that the tablespace will be self contained:
        SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
        SQL> select * from sys.transport_set_violations;

        Note: these violations must be resolved before the tablespaces can be transported.
      • The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export:
        SQL> ALTER TABLESPACE TBS1 READ ONLY;
        SQL> ALTER TABLESPACE TBS2 READ ONLY;
    2. Export the metadata.
      • Using the original export utility:
        exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=TBS1,TBS2
      • Using Datapump Export:
        CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
        GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

        followed by:
        expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES = TBS1,TBS2

        If you want to perform a transport tablespace operation with a strict containment check, use the TRANSPORT_FULL_CHECK parameter:
        expdp system/password DUMPFILE=expdat.dmp DIRECTORY = dpump_dir TRANSPORT_TABLESPACES= TBS1,TBS2 TRANSPORT_FULL_CHECK=Y

      If the tablespace set being transported is not self-contained, then the export will fail.
    3. Use V$TRANSPORTABLE_PLATFORM to find the exact platform name of the target database. You can execute the following query on the target instance:
      SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
      FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
      WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
    4. Generate an OS file from the ASM file, in target platform. format:
      RMAN> CONVERT TABLESPACE TBS1
            TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
      RMAN> CONVERT TABLESPACE TBS2
            TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
    5. Copy the generated files to the target server if it is different from the source.
    6. Import the transportable tablespace
      • Using the original import utility:
        imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
      • Using Datapump Import:
        CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
        GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;

        followed by:
        impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target) REMAP_SCHEMA=(source_sch2:target_schema_sch2)

        You can use REMAP_SCHEMA if you want to change the ownership of the transported database objects.
    7. Put the tablespaces in read/write mode:
      SQL> ALTER TABLESPACE TBS1 READ WRITE;
      SQL> ALTER TABLESPACE TBS2 READ WRITE;

      If you want to transport the datafiles from the ASM area to a filesystem, you are finished after the above steps. But if you want to transport tablespaces between two ASM areas, you must continue.
    8. Copy the datafile '/tmp/....dbf' into the ASM area using rman:
      rman nocatalog target /
      RMAN> backup as copy datafile '/tmp/....dbf' format '+DGROUPA';

      where +DGROUPA is the name of the ASM diskgroup.
    9. Switch the datafile to the copy.
      If the 10g database is open, you need to take the datafile offline first:
      SQL> alter database datafile '/tmp/....dbf' offline;

      Switch to the copy:
      rman nocatalog target /
      RMAN> switch datafile '/tmp/....dbf' to copy;

      Note down the name of the copy created in the +DGROUPA diskgroup, ex. '+DGROUPA/s101/datafile/tts.270.5'.
    10. Put the datafile online again; it needs to be recovered first:
      SQL> recover datafile '+DGROUPA/s101/datafile/tts.270.5';
      SQL> alter database datafile '+DGROUPA/s101/datafile/tts.270.5' online;
    11. Check that the datafile is indeed part of the ASM area and online:
      SQL> select name, status from v$datafile;

      The output should be:
      +DGROUPA/s101/datafile/tts.270.5 ONLINE
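      Optionally, once the datafile is confirmed to be in the diskgroup and online, the obsolete filesystem copy can be removed through RMAN (a sketch, not part of the original note, using the same placeholder path as above):
      rman nocatalog target /
      RMAN> DELETE DATAFILECOPY '/tmp/....dbf';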
  • Using DBMS_FILE_TRANSFER

    You can also use DBMS_FILE_TRANSFER to copy datafiles from one ASM disk group to another, even on another host. Starting with 10g Release 2, you can also use DBMS_FILE_TRANSFER to copy datafiles from ASM to a filesystem and from a filesystem to ASM.

    The PUT_FILE procedure reads a local file or ASM file and contacts a remote database to create a copy of the file in the remote file system. The file that is copied is the source file, and the new file that results from the copy is the destination file. The destination file is not closed until the procedure completes successfully.

    Syntax:
    DBMS_FILE_TRANSFER.PUT_FILE(
       source_directory_object       IN  VARCHAR2,
       source_file_name              IN  VARCHAR2,
       destination_directory_object  IN  VARCHAR2,
       destination_file_name         IN  VARCHAR2,
       destination_database          IN  VARCHAR2);

    Where:
    • source_directory_object: The directory object from which the file is copied at the local source site. This directory object must exist at the source site.
    • source_file_name: The name of the file that is copied from the local file system. This file must exist in the local file system in the directory associated with the source directory object.
    • destination_directory_object: The directory object into which the file is placed at the destination site. This directory object must exist in the remote file system.
    • destination_file_name: The name of the file placed in the remote file system. A file with the same name must not exist in the destination directory in the remote file system.
    • destination_database: The name of a database link to the remote database to which the file is copied.

    If we want to use DBMS_FILE_TRANSFER.PUT_FILE to transfer the file from the source to the destination host, steps 3, 4, and 5 above should be replaced by the following:
    1. Create a directory at target database host, and give permissions to local user. This is the directory object into which the file is placed at the destination site, it must exist in the remote file system:
      CREATE OR REPLACE DIRECTORY target_dir AS '+DGROUPA';
      GRANT WRITE ON DIRECTORY target_dir TO "USER";
    2. Create a directory at source database host. The directory object from which the file is copied at the local source site. This directory object must exist at the source site:
      CREATE OR REPLACE DIRECTORY source_dir AS '+DGROUPS/subdir';
      GRANT READ,WRITE ON DIRECTORY source_dir TO "USER";
      CREATE OR REPLACE DIRECTORY source_dir_1 AS '+DGROUPS/subdir/subdir_2';
    3. Create a dblink to connect to target database host:
      CREATE DATABASE LINK DBS2 CONNECT TO user IDENTIFIED BY password USING 'target_connect';

      where target_connect is the connect string for target database and USER is the user that we are going to use to transfer the datafiles.
    4. Connect to source instance. The following items are used:
      • dbs1: Connect string to source database
      • dbs2: dblink to target database
      • a1.dat: Filename at source database
      • a4.dat: Filename at target database
      CONNECT user/password@dbs1

      -- put a1.dat to a4.dat (using the dbs2 dblink)
      -- level 2 sub dir to parent dir
      -- user has read privs on source_dir_1 at dbs1 and write on target_dir in dbs2
      BEGIN
          DBMS_FILE_TRANSFER.PUT_FILE('source_dir_1', 'a1.dat',
                                      'target_dir', 'a4.dat', 'dbs2' );
      END;
      /
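      DBMS_FILE_TRANSFER also provides a GET_FILE procedure for pulling a file from a remote database. A minimal sketch (not part of the original note), assuming a database link called source_link has been created at the target database pointing back to the source, and that the directory objects above exist with the privileges already granted:
      CONNECT user/password@target_connect

      -- pull a1.dat from source_dir_1 at the source (via source_link)
      -- into target_dir at the local target database as a4.dat
      BEGIN
          DBMS_FILE_TRANSFER.GET_FILE('source_dir_1', 'a1.dat', 'source_link',
                                      'target_dir', 'a4.dat');
      END;
      /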

From the ITPUB blog. Link: http://blog.itpub.net/15747463/viewspace-758164/. If reprinting, please cite the source; otherwise legal responsibility may be pursued.
