How expdp/impdp Works and How to Use It (Repost)
Original article:
http://blog.csdn.net/tianlesoftware/archive/2009/10/16/4674224.aspx
A copy is kept here in case the original page goes offline:
1. What the Official Documentation Says
1.1 Oracle 10g documentation:
Data Pump Components
Oracle Data Pump is made up of three distinct parts:
(1)The command-line clients, expdp and impdp
(2)The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
(3)The DBMS_METADATA PL/SQL package (also known as the Metadata API)
The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively. They provide a user interface that closely resembles the original export (exp) and import (imp) utilities.
The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command-line. These parameters enable the exporting and importing of data and metadata for a complete database or subsets of a database.
Note:
All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that, for nonprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. For privileged users, a default directory object is available. See Default Locations for Dump, Log, and SQL Files for more information about directory objects.
When data is moved, Data Pump automatically uses either direct path load (or unload) or the external tables mechanism, or a combination of both. When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and resubmission of dictionary metadata.
The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
See Also:
Oracle Database PL/SQL Packages and Types Reference for descriptions of the DBMS_DATAPUMP and DBMS_METADATA packages
What New Features Do Data Pump Export and Import Provide?
The new Data Pump Export and Import utilities (invoked with the expdp and impdp commands, respectively) have a similar look and feel to the original Export (exp) and Import (imp) utilities, but they are completely separate. Dump files generated by the new Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.
Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features, except for XML schemas and XML schema-based tables. Original Export and Import support the full set of Oracle database release 9.2 features. Also, the design of Data Pump Export and Import results in greatly enhanced data movement performance over the original Export and Import utilities.
Note:
See Chapter 19, "Original Export and Import" for information about situations in which you should still use the original Export and Import utilities.
The following are the major new features that provide this increased performance, as well as enhanced ease of use:
(1)The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See PARALLEL for information about using this parameter in export. See PARALLEL for information about using this parameter in import. (This feature is available only in the Enterprise Edition of Oracle Database 10g.)
(2)The ability to restart Data Pump jobs. See START_JOB for information about restarting export jobs. See START_JOB for information about restarting import jobs.
(3)The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job. See ATTACH for information about using this parameter in export. See ATTACH for information about using this parameter in import.
(4)Support for export and import operations over the network, in which the source of each operation is a remote instance. See NETWORK_LINK for information about using this parameter in export. See NETWORK_LINK for information about using this parameter in import.
(5)The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced. See REMAP_DATAFILE.
(6)Enhanced support for remapping tablespaces during an import operation. See REMAP_TABLESPACE.
(7)Support for filtering the metadata that is exported and imported, based upon objects and object types. For information about filtering metadata during an export operation, see INCLUDE and EXCLUDE. For information about filtering metadata during an import operation, see INCLUDE and EXCLUDE.
(8)Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available in Import's Interactive-Command Mode.
(9)The ability to estimate how much space an export job would consume, without actually performing the export. See ESTIMATE_ONLY.
(10)The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. See VERSION for more information about using this parameter in export.
In import jobs, VERSION applies only to operations over the network. This means that VERSION applies to the version of database objects to be extracted from the source database. See VERSION for more information about using this parameter in import.
For additional information about using different versions, see Moving Data Between Different Database Versions.
(11)Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.) See Default Locations for Dump, Log, and SQL Files for information about some of the implications of server-based operations.
The remainder of this chapter discusses Data Pump technology as it is implemented in the Data Pump Export and Import utilities. To make full use of Data Pump technology, you must be a privileged user. Privileged users have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. Nonprivileged users have neither.
Privileged users can do the following:
(1)Export and import database objects owned by others
(2)Export and import nonschema-based objects such as tablespace and schema definitions, system privilege grants, resource plans, and so forth
(3)Attach to, monitor, and control Data Pump jobs initiated by others
(4)Perform remapping operations on database datafiles
(5)Perform remapping operations on schemas other than their own
1.2 Oracle 11gR2 documentation:
Oracle Data Pump Export and Import
Oracle Data Pump enables high-speed movement of data and metadata from one database to another. This technology is the basis for the following Oracle Database data movement utilities:
(1)Data Pump Export (Export)
Export is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set is made up of one or more binary files that contain table data, database object metadata, and control information.
(2)Data Pump Import (Import)
Import is a utility for loading an export dump file set into a database. You can also use Import to load a destination database directly from a source database with no intervening files, which allows export and import operations to run concurrently, minimizing total elapsed time.
Oracle Data Pump is made up of the following distinct parts:
(1)The command-line clients expdp and impdp
These clients make calls to the DBMS_DATAPUMP package to perform Oracle Data Pump operations (see "PL/SQL Packages").
(2)The DBMS_DATAPUMP PL/SQL package, also known as the Data Pump API. This API provides high-speed import and export functionality.
(3)The DBMS_METADATA PL/SQL package, also known as the Metadata API. This API, which stores object definitions in XML, is used by all processes that load and unload metadata.
Figure 18-2 shows how Oracle Data Pump integrates with SQL*Loader and external tables. As shown, SQL*Loader is integrated with the External Table API and the Data Pump API to load data into external tables (see "External Tables"). Clients such as Database Control and transportable tablespaces can use the Oracle Data Pump infrastructure.
Figure 18-2 Oracle Data Pump Architecture
2. Introduction to Data Pump
From the two passages of official documentation quoted in Part 1, the Data Pump workflow can be summarized as follows:
(1) A command is issued at the command line.
(2) The expdp/impdp client calls the DBMS_DATAPUMP PL/SQL package. This API provides the high-speed export and import functionality.
(3) When data is moved, Data Pump automatically chooses direct path, the external tables mechanism, or a combination of the two. When metadata (object definitions) is moved, Data Pump uses the DBMS_METADATA PL/SQL package; the Metadata API stores object definitions as XML, and every process that loads or unloads metadata goes through it.
Because Data Pump works through server-side APIs, once a job has been scheduled or started the client can disconnect: the job keeps running on the server, and you can later check its status or modify it from a client anywhere.
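To make step (2) concrete, the same kind of export can be driven straight from PL/SQL through the DBMS_DATAPUMP API, with no expdp client at all. The following is only a minimal sketch: the directory object (DUMP), the file names, the job name, and the SCOTT schema are assumptions for illustration.
DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job (the same kind of job expdp creates)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_API_EXP');
  -- dump file and log file are written through the DUMP directory object
  DBMS_DATAPUMP.ADD_FILE(h, 'scott_api.dmp', 'DUMP',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(h, 'scott_api.log', 'DUMP',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- restrict the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  -- start the job; it keeps running on the server even after this session detaches
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/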
The article linked below explains how expdp/impdp works in its different modes:
exp/imp vs. expdp/impdp: a comparison and some tuning notes
http://blog.csdn.net/tianlesoftware/archive/2010/12/23/6093973.aspx
As noted above, an expdp/impdp run is a job, which we can stop and modify. So let's do a simple test:
Export command:
expdp system/oracle full=y directory=dump dumpfile=orcl_%U.dmp parallel=2 job_name=davedump
job_name: specifies a name for the export job; the default is SYS_XXX. As noted earlier, the clients drive a server-side job through the API, so we give this job a name here because we will need it again shortly.
C:\Users\Administrator.DavidDai>expdp system/oracle full=y directory=dump dumpfile=orcl_%U.dmp job_name=davedump
Export: Release 11.2.0.1.0 - Production on Mon Dec 27 15:24:38 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."DAVEDUMP": system/******** full=y directory=dump dumpfile=orcl_%U.dmp job_name=davedump
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 132.6 MB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
-- Press CTRL+C to switch from logging mode to interactive-command mode
Export>
Export> status
Job: DAVEDUMP
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\BACKUP\ORCL_01.DMP
Bytes Written: 4,096
Dump File: d:\Backup\orcl_%u.dmp
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Name: STORAGE_CONTEXT
Object Type: DATABASE_EXPORT/CONTEXT
Completed Objects: 7
Total Objects: 7
Worker Parallelism: 1
-- Stop the job
Export> stop_job
Are you sure you wish to stop this job ([yes]/no): yes
-- Reattach to the job using its job_name
C:\Users\Administrator.DavidDai>expdp system/oracle attach=davedump
-- ATTACH associates the client session with an existing export job. When ATTACH is used, no options other than the connect string and ATTACH may be specified on the command line.
Export: Release 11.2.0.1.0 - Production on Mon Dec 27 15:26:14 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: DAVEDUMP
Owner: SYSTEM
Operation: EXPORT
Creator Privs: TRUE
GUID: 454A188F62AA4D578AA0DA4C35259CD8
Start Time: Monday, 27 December, 2010 15:26:16
Mode: FULL
Instance: orcl
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name      Parameter Value:
CLIENT_COMMAND system/******** full=y directory=dump dumpfile=orcl_%U.dmp job_name=davedump
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: d:\Backup\orcl_01.dmp
Bytes Written: 950,272
Dump File: d:\Backup\orcl_%u.dmp
Worker 1 Status:
Process Name: DW00
State: UNDEFINED
-- Restart the job
Export> start_job
-- Check the status
Export> status
Job: DAVEDUMP
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: d:\Backup\orcl_01.dmp
Bytes Written: 954,368
Dump File: d:\Backup\orcl_%u.dmp
Worker 1 Status:
Process Name: DW00
State: EXECUTING
While the backup is running, you can check its progress with the status command at any time:
Export> status
Job: DAVEDUMP
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: d:\Backup\orcl_01.dmp
Bytes Written: 954,368
Dump File: d:\Backup\orcl_%u.dmp
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Schema: SYSMAN
Object Name: AQ$_MGMT_NOTIFY_QTABLE_T
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Completed Objects: 59
Worker Parallelism: 1
Export> help
------------------------------------------------------------------------------
The following commands are valid while in interactive mode.
Note: abbreviations are allowed.
ADD_FILE
Add dumpfile to dump file set.
CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT
Quit client session and leave job running.
FILESIZE
Default filesize (bytes) for subsequent ADD_FILE commands.
HELP
Summarize interactive commands.
KILL_JOB
Detach and delete job.
PARALLEL
Change the number of active workers for current job.
REUSE_DUMPFILES
Overwrite destination dump file if it exists [N].
START_JOB
Start or resume current job.
Valid keyword values are: SKIP_CURRENT.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keyword values are: IMMEDIATE.
Export>
Note: when you intend to switch the expdp session into interactive mode, do not use the PARALLEL parameter. When I specified it at the start of my test, the restart after stop_job failed with an error saying the specified job_name could not be found.
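The state of Data Pump jobs is also visible from any SQL session, independently of the expdp client. A minimal query against the DBA_DATAPUMP_JOBS view (requires access to the DBA views):
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM dba_datapump_jobs;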
3. EXPDP/IMPDP Commands in Detail
Data Pump supports four export modes: table, schema, tablespace, and full database.
3.1 EXPDP parameters
(1). ATTACH
This option associates the client session with an existing export job. The syntax is:
ATTACH=[schema_name.]job_name
Schema_name specifies the schema and job_name the name of the export job. Note that when ATTACH is used, no options other than the connect string and ATTACH may be specified on the command line. Example:
Expdp scott/tiger ATTACH=scott.export_job
(2). CONTENT
Specifies what to export. The default is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
With ALL, object definitions and all of their data are exported; with DATA_ONLY, only object data is exported; with METADATA_ONLY, only object definitions are exported.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump CONTENT=METADATA_ONLY
(3). DIRECTORY
Specifies the directory in which dump files and log files are written. DIRECTORY=directory_object
Directory_object is the name of a directory object. Note that a directory object is created with the CREATE DIRECTORY statement; it is not an OS directory.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump
First create the physical folder in the corresponding location, e.g. D:\backup.
Create the directory object:
create or replace directory backup as '/opt/oracle/utl_file';
SQL> CREATE DIRECTORY backup as 'd:\backup';
SQL> grant read,write on directory backup to SYSTEM;
To list the directory objects that have been created:
SELECT * FROM dba_directories;
(4). DUMPFILE
Specifies the names of the dump files. The default name is expdat.dmp.
DUMPFILE=[directory_object:]file_name [,...]
Directory_object specifies a directory object name and file_name the dump file name. Note that if directory_object is omitted, the export utility automatically uses the directory object given by the DIRECTORY option: Expdp scott/tiger DIRECTORY=dump1 DUMPFILE=dump2:a.dmp
(5). ESTIMATE
Specifies how the disk space occupied by the exported tables is estimated. The default is BLOCKS.
ESTIMATE={BLOCKS | STATISTICS}
With BLOCKS, Oracle estimates the space as the number of data blocks occupied by the objects multiplied by the block size; with STATISTICS, the space is estimated from the most recent statistics: Expdp scott/tiger TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE=a.dump
(6). ESTIMATE_ONLY
Specifies whether to only estimate the disk space the export job would consume. The default is N.
ESTIMATE_ONLY={Y | N}
With Y, the job only estimates the space the objects would occupy and does not perform the export; with N, it performs the export in addition to the estimate.
Expdp scott/tiger ESTIMATE_ONLY=y NOLOGFILE=y
(7). EXCLUDE
Specifies object types, or specific objects, to exclude from the operation.
EXCLUDE=object_type[:name_clause] [,...]
Object_type specifies the object type to exclude and name_clause the specific objects. EXCLUDE and INCLUDE cannot be used at the same time.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dup EXCLUDE=VIEW
(8). FILESIZE
Specifies the maximum size of each dump file. The default is 0, which means there is no limit on file size.
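For example (file name and size are illustrative), combining FILESIZE with the %U substitution variable makes Data Pump roll over to a new dump file each time the limit is reached:
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a_%U.dmp FILESIZE=100M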
(9). FLASHBACK_SCN
Exports the table data as of a specific SCN. FLASHBACK_SCN=scn_value
Scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used together: Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523
(10). FLASHBACK_TIME
Exports the table data as of a specific point in time.
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"
(11). FULL
Specifies a full-database-mode export. The default is N. FULL={Y | N}. With Y, a full database export is performed.
(12). HELP
Specifies whether to display help for the EXPDP command-line options. The default is N. With Y, the help for the export options is displayed: Expdp help=y
(13). INCLUDE
Specifies the object types, and related objects, to include in the export. INCLUDE=object_type[:name_clause] [,...]
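For example (table names are illustrative), a parameter-file entry that restricts the export to two tables; keeping the clause in a parameter file avoids OS-specific quoting problems:
INCLUDE=TABLE:"IN ('EMP','DEPT')"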
(14). JOB_NAME
Specifies a name for the export job. The default is SYS_XXX. JOB_NAME=jobname_string
(15). LOGFILE
Specifies the name of the export log file. The default name is export.log.
LOGFILE=[directory_object:]file_name
Directory_object specifies a directory object name and file_name the log file name. If directory_object is omitted, the export job automatically uses the value of the DIRECTORY option.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp logfile=a.log
(16). NETWORK_LINK
Specifies a database link name. This option must be set when exporting objects from a remote database into a dump file on the local instance.
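A minimal sketch (the link name, TNS alias, and credentials are illustrative): create a database link that points at the remote source database, then reference it with NETWORK_LINK:
SQL> CREATE DATABASE LINK remote_src CONNECT TO scott IDENTIFIED BY tiger USING 'remote_tns';
Expdp system/oracle DIRECTORY=dump DUMPFILE=remote_scott.dmp SCHEMAS=scott NETWORK_LINK=remote_src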
(17). NOLOGFILE
Specifies that no export log file is generated. The default is N.
(18). PARALLEL
Specifies the number of parallel processes performing the export. The default is 1.
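For example (degree and file names are illustrative), the number of dump files should generally match the parallel degree, which the %U template takes care of:
Expdp system/oracle DIRECTORY=dump DUMPFILE=orcl_%U.dmp FULL=y PARALLEL=2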
(19). PARFILE
Specifies the name of an export parameter file. PARFILE=[directory_path]file_name
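A minimal sketch (the file name and its contents are illustrative): put the options in a plain-text file and pass it with PARFILE, which also sidesteps shell quoting issues. Contents of the hypothetical exp_scott.par:
DIRECTORY=dump
DUMPFILE=scott.dmp
SCHEMAS=scott
LOGFILE=scott.log
Then run: Expdp scott/tiger PARFILE=exp_scott.par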
(20). QUERY
Specifies a WHERE condition that filters the exported data.
QUERY=[schema.][table_name:]query_clause
Schema specifies the schema name, table_name the table name, and query_clause the restricting condition. QUERY cannot be used together with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, or TRANSPORT_TABLESPACES.
Expdp scott/tiger directory=dump dumpfile=a.dmp Tables=emp query='WHERE deptno=20'
(21). SCHEMAS
Specifies a schema-mode export. The default is the current user's schema.
(22). STATUS
Specifies how often the detailed status of the export job is displayed. The default is 0.
(23). TABLES
Specifies a table-mode export.
TABLES=[schema_name.]table_name[:partition_name][,…]
Schema_name specifies the schema name, table_name the table to export, and partition_name the partition to export.
(24). TABLESPACES
Specifies the list of tablespaces to export.
(25). TRANSPORT_FULL_CHECK
This option controls how the dependencies between the tablespaces being transported and those not being transported are checked. The default is N. With Y, the export job checks for full two-way dependencies: if the tablespace holding a table or the tablespace holding its indexes is transported without the other, an error is reported. With N, only one-way dependencies are checked: transporting the tablespace that holds an index without the tablespace that holds its table raises an error, while transporting the table's tablespace without the index's tablespace does not.
(26). TRANSPORT_TABLESPACES
Specifies a transportable-tablespace-mode export.
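A minimal sketch (tablespace, directory, and file names are illustrative): the tablespaces must be made read only before the metadata export, and the datafiles are then copied to the target system separately:
SQL> ALTER TABLESPACE tbs1 READ ONLY;
Expdp system/oracle DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log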
(27). VERSION
Specifies the database version of the objects to export. The default is COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
With COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; with LATEST, it is generated according to the actual database version. version_string specifies an explicit database version string.
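For example (the version string is illustrative), to create a dump file that an older release can import:
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp VERSION=10.2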
3.2 EXPDP examples
When using EXPDP, the dump files can only be written to the OS directory mapped by a DIRECTORY object; you cannot point directly at an OS path. So before using EXPDP you must first create a DIRECTORY object and grant the database user privileges on it.
CREATE DIRECTORY dump_dir AS 'D:\DUMP';
GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;
(1) Export tables
Expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=tab.dmp TABLES=dept,emp logfile=exp.log;
(2) Export schemas (a schema corresponds to a user)
Expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=system,scott logfile=/exp.log;
(3) Export tablespaces
Expdp system/manager DIRECTORY=dump_dir DUMPFILE=tablespace.dmp TABLESPACES=user01,user02 logfile=/exp.log;
(4) Export the full database
Expdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=Y logfile=/exp.log;
3.3 IMPDP parameters
Most IMPDP command-line options are the same as EXPDP's; the ones that differ are:
(1)REMAP_DATAFILE
Renames source datafiles to target datafile names; this may be needed when transporting tablespaces between platforms.
REMAP_DATAFILE=source_datafile:target_datafile
(2)REMAP_SCHEMA
Loads all objects of the source schema into the target schema.
REMAP_SCHEMA=source_schema:target_schema
(3)REMAP_TABLESPACE
Imports all objects of the source tablespace into the target tablespace.
REMAP_TABLESPACE=source_tablespace:target_tablespace
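For example (names are illustrative), load objects that were in the USERS tablespace at the source into USER01 at the target:
Impdp system/manager DIRECTORY=dump_dir DUMPFILE=tab.dmp REMAP_TABLESPACE=users:user01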
(4)REUSE_DATAFILES
Specifies whether datafiles that already exist are overwritten when tablespaces are created. The default is N.
REUSE_DATAFILES={Y | N}
(5)SKIP_UNUSABLE_INDEXES
Specifies whether the import skips indexes that are unusable. The default is N.
(6)SQLFILE
Writes the DDL that the import job would execute to a SQL script instead of performing the import.
SQLFILE=[directory_object:]file_name
Impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp SQLFILE=a.sql
(7)STREAMS_CONFIGURATION
Specifies whether Streams metadata is imported. The default is Y.
(8)TABLE_EXISTS_ACTION
Specifies the action the import job takes when a table already exists. The default is SKIP.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
With SKIP, the import job skips the existing table and moves on to the next object; with APPEND, rows are appended to the existing data; with TRUNCATE, the table is truncated and the new rows are then loaded; with REPLACE, the existing table is dropped, re-created, and the rows loaded. Note that TRUNCATE is not valid for clustered tables or with the NETWORK_LINK option.
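For example (file and table names are illustrative), append rows to a table that already exists in the target:
Impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=tab.dmp TABLES=emp TABLE_EXISTS_ACTION=APPEND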
(9)TRANSFORM
Specifies whether the DDL used to create objects is modified.
TRANSFORM=transform_name:value[:object_type]
Transform_name is the transform name: SEGMENT_ATTRIBUTES covers segment attributes (physical attributes, storage attributes, tablespace, logging), while STORAGE covers only the segment storage attributes. Value specifies whether those attributes are included, and object_type specifies the object type.
Impdp scott/tiger directory=dump dumpfile=tab.dmp Transform=segment_attributes:n:table
(10)TRANSPORT_DATAFILES
Specifies the datafiles to be imported into the target database during a transportable-tablespace import.
TRANSPORT_DATAFILES=datafile_name
Datafile_name specifies the datafile that has been copied to the target database.
Impdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f'
3.4 IMPDP examples
(1) Import tables
Impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=tab.dmp TABLES=dept,emp logfile=/exp.log;
-- Import the DEPT and EMP tables into the SCOTT schema
Impdp system/manage DIRECTORY=dump_dir DUMPFILE=tab.dmp
TABLES=scott.dept,scott.emp REMAP_SCHEMA=SCOTT:SYSTEM logfile=/exp.log;
-- Import the DEPT and EMP tables into the SYSTEM schema.
(2) Import schemas
Impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=schema.dmp SCHEMAS=scott logfile=/exp.log;
Impdp system/manager DIRECTORY=dump_dir DUMPFILE=schema.dmp
SCHEMAS=scott REMAP_SCHEMA=scott:system logfile=/exp.log;
(3) Import tablespaces
Impdp system/manager DIRECTORY=dump_dir DUMPFILE=tablespace.dmp TABLESPACES=user01 logfile=/exp.log;
(4) Import the full database
Impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y logfile=/exp.log;
This article comes from a CSDN blog; please credit the source when reposting: http://blog.csdn.net/tianlesoftware/archive/2009/10/16/4674224.aspx