About Data Pump PARALLEL
For Data Pump Export, the value specified for the PARALLEL parameter should be less than or equal to the number of files in the dump file set. Each worker or Parallel Execution (PX) process requires exclusive access to a dump file, so having fewer dump files than the degree of parallelism means that some workers or PX processes will be unable to write the information they are exporting. If this occurs, those worker processes go into an idle state and do no work until more files are added to the job. See the explanation of the DUMPFILE parameter in the Database Utilities guide for details on how to specify multiple dump files for a Data Pump export job.
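As an illustration (the credentials, directory object, and file names below are placeholders, not from the original post), an export can be given one dump file per worker by using the %U substitution variable in DUMPFILE:

expdp system/manager DIRECTORY=dpump_dir1 DUMPFILE=expfull%U.dmp PARALLEL=4 FULL=y LOGFILE=expfull.log

With %U, Data Pump generates expfull01.dmp, expfull02.dmp, and so on as they are needed, so each worker or PX process can obtain exclusive access to its own file.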
For Data Pump Import, the workers and PX processes can all read from the same files. However, if there are not enough dump files, the performance may not be optimal because multiple threads of execution will be trying to access the same dump file. The performance impact of multiple processes sharing the dump files depends on the I/O subsystem containing the dump files. For this reason, Data Pump Import should not have a value for the PARALLEL parameter that is significantly larger than the number of files in the dump file set.
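A matching import sketch (again with placeholder names) reads the same file set with a comparable degree of parallelism:

impdp system/manager DIRECTORY=dpump_dir1 DUMPFILE=expfull%U.dmp PARALLEL=4 FULL=y LOGFILE=impfull.log

Keeping PARALLEL close to the number of files in the dump file set avoids having many threads of execution contend for the same dump file.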
In a typical export that includes both data and metadata, the first worker process unloads the metadata: tablespaces, schemas, grants, roles, tables, indexes, and so on. This single worker handles the metadata while all the remaining workers unload data at the same time. If the metadata worker finishes and there are still data objects to unload, it starts unloading data as well. The examples in this document assume that there is always one worker busy unloading metadata while the rest of the workers are busy unloading table data objects.
If the external tables method is chosen, Data Pump determines the maximum number of PX processes that can work on a table data object. It does this by dividing the estimated size of the table data object by 250 MB and rounding the result down. If the result is zero or one, then PX processes are not used to unload the table.
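For example, under this rule a table data object estimated at 1 GB would be unloaded by floor(1024 MB / 250 MB) = 4 PX processes, while a table estimated at 300 MB gives floor(300 / 250) = 1 and is therefore unloaded without PX processes.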
The PARALLEL parameter works somewhat differently in Import than in Export. Because of the dependencies between objects created during import, everything must be done in order. For Import, no data loading can occur until the tables are created, because data cannot be loaded into tables that do not yet exist.
Source: ITPUB blog, http://blog.itpub.net/22990797/viewspace-2143371/. Please credit the source when reposting.