Local Capture and Downstream Capture

Posted by zhouxianwang on 2012-06-15

The Source Database Performs All Change Capture Actions

If you configure local capture, then the following actions are performed at the source database:

  • The DBMS_CAPTURE_ADM.BUILD procedure is run to extract (or build) the data dictionary to the redo log.

  • Supplemental logging at the source database places additional information in the redo log. This information might be needed when captured changes are applied by an apply process.

  • The first time a capture process is started at the database, Oracle uses the extracted data dictionary information in the redo log to create a LogMiner data dictionary, which is separate from the primary data dictionary for the source database. Additional capture processes can use this existing LogMiner data dictionary, or they can create new LogMiner data dictionaries.

  • A capture process scans the redo log for changes using LogMiner.

  • The capture process evaluates changes based on the rules in one or more of the capture process rule sets.

  • The capture process enqueues changes that satisfy the rules in its rule sets into a local ANYDATA queue.

  • If the captured changes are shared with one or more other databases, then one or more propagations propagate these changes from the source database to the other databases.

  • If database objects at the source database must be instantiated at a destination database, then the objects must be prepared for instantiation, and a mechanism such as an Export utility must be used to make a copy of the database objects. (A configuration sketch covering these steps follows this list.)
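The steps above can be scripted with the Oracle-supplied Streams packages. The following is a minimal local capture sketch for the hr schema; the administrator user (strmadmin), queue names, and capture name are illustrative assumptions, not required values.

  -- Run as the Streams administrator (illustrative user: strmadmin).
  -- Create the ANYDATA queue that the capture process enqueues LCRs into.
  BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
      queue_table => 'strmadmin.streams_queue_table',
      queue_name  => 'strmadmin.streams_queue');
  END;
  /

  -- Add schema-level capture rules. This creates the capture process if it does
  -- not exist; for a local capture process created without a first SCN, the
  -- DBMS_CAPTURE_ADM.BUILD dictionary extraction is performed automatically.
  BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
      schema_name  => 'hr',
      streams_type => 'capture',
      streams_name => 'capture_hr',
      queue_name   => 'strmadmin.streams_queue',
      include_dml  => TRUE,
      include_ddl  => FALSE);
  END;
  /

  -- Prepare the schema for instantiation at destination databases
  -- (the DBMS_STREAMS_ADM rule procedures typically do this automatically).
  BEGIN
    DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'hr');
  END;
  /

  -- Start the capture process, which begins scanning the redo log with LogMiner.
  BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture_hr');
  END;
  /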

Advantages of Local Capture

The following are the advantages of using local capture:

  • Configuration and administration of the capture process is simpler than when downstream capture is used. When you use local capture, you do not need to configure redo log file copying to a downstream database, and you administer the capture process locally at the database where the captured changes originated.

  • A local capture process can scan changes in the online redo log before the database writes these changes to an archived redo log file. When you use downstream capture, archived redo log files are copied to the downstream database after the source database has finished writing changes to them, and some time is required to copy the redo log files to the downstream database.

  • The amount of data being sent over the network is reduced, because the entire redo log file is not copied to the downstream database. Even if LCRs are propagated to other databases, the captured messages can be a subset of the total changes made to the database, and only the LCRs that satisfy the rules in the rule sets for a propagation are propagated.

  • Security might be improved because only the source (local) database can access the redo log files. For example, if you want to capture changes in the hr schema only, then, when you use local capture, only the source database can access the redo log to enqueue changes to the hr schema into the capture process queue. However, when you use downstream capture, the redo log files are copied to the downstream database, and these redo log files contain all of the changes made to the database, not just the changes made to the hr schema.

  • Some types of custom rule-based transformations are simpler to configure if the capture process is running at the local source database. For example, if you use local capture, then a custom rule-based transformation can use cached information in a PL/SQL session variable that is populated with data stored at the source database (see the sketch after this list).

  • In a Streams environment where messages are captured and applied in the same database, it might be simpler, and use fewer resources, to configure local queries and computations that require information about captured changes and the local data.
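To illustrate the transformation point above: a custom rule-based transformation is a PL/SQL function that accepts and returns an ANYDATA-wrapped LCR and is attached to a rule with DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION. The sketch below rewrites the object owner in row LCRs; the function name, schema owners, and rule name are hypothetical.

  -- Hypothetical custom rule-based transformation: rewrite the owner of row LCRs
  -- from HR to HR_REPORTING. Any caching of source-database lookups would live in
  -- package-level (session) variables referenced by this function.
  CREATE OR REPLACE FUNCTION strmadmin.hr_owner_transform(in_any IN ANYDATA)
  RETURN ANYDATA
  IS
    lcr SYS.LCR$_ROW_RECORD;
    rc  PLS_INTEGER;
  BEGIN
    IF in_any.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
      rc := in_any.GETOBJECT(lcr);
      IF lcr.GET_OBJECT_OWNER() = 'HR' THEN
        lcr.SET_OBJECT_OWNER('HR_REPORTING');
      END IF;
      RETURN ANYDATA.CONVERTOBJECT(lcr);
    END IF;
    RETURN in_any;   -- pass through anything that is not a row LCR
  END;
  /

  -- Attach the transformation to an existing capture rule (rule name is illustrative).
  BEGIN
    DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
      rule_name          => 'strmadmin.hr_dml_rule',
      transform_function => 'strmadmin.hr_owner_transform');
  END;
  /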

Real-Time Downstream Capture

A real-time downstream capture configuration works in the following way (a configuration sketch follows the list):

  • Redo transport services use the log writer process (LGWR) at the source database to send redo data to the downstream database either synchronously or asynchronously. At the same time, the LGWR records redo data in the online redo log at the source database.

  • A remote file server process (RFS) at the downstream database receives the redo data over the network and stores the redo data in the standby redo log.

  • A log switch at the source database causes a log switch at the downstream database, and the ARCn process at the downstream database archives the current standby redo log file.

  • The real-time downstream capture process captures changes from the standby redo log whenever possible and from the archived standby redo log files whenever necessary. A capture process can capture changes in the archived standby redo log files if it falls behind. When it catches up, it resumes capturing changes from the standby redo log.
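A minimal sketch of the pieces described above, assuming a source database dbs1.example.com and a downstream database dbs2.example.com (service names, file paths, sizes, and the capture name are illustrative assumptions):

  -- At the source database (dbs1): have redo transport services ship redo data
  -- from the online redo log to the downstream database.
  ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
    'SERVICE=dbs2 ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
  ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

  -- At the downstream database (dbs2): standby redo logs receive the redo data.
  ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/oracle/dbs2/slog4.rdo') SIZE 500M;
  ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/oracle/dbs2/slog5.rdo') SIZE 500M;

  -- At the downstream database: create the downstream capture process.
  -- use_database_link => TRUE lets it obtain data dictionary information
  -- from the source database over a database link.
  BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE(
      queue_name        => 'strmadmin.streams_queue',
      capture_name      => 'real_time_capture',
      source_database   => 'dbs1.example.com',
      use_database_link => TRUE);
  END;
  /

  -- Switch the capture process to mining the standby redo log in real time, then
  -- add rules (for example, with DBMS_STREAMS_ADM.ADD_SCHEMA_RULES) and start it.
  BEGIN
    DBMS_CAPTURE_ADM.SET_PARAMETER(
      capture_name => 'real_time_capture',
      parameter    => 'downstream_real_time_mine',
      value        => 'Y');
  END;
  /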


Archived-Log Downstream Capture

An archived-log downstream capture configuration means that archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these archived redo log files. You can copy the archived redo log files to the downstream database using redo transport services, the DBMS_FILE_TRANSFER package, file transfer protocol (FTP), or some other mechanism.
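For example, an archived redo log file can be pulled to the downstream database with DBMS_FILE_TRANSFER and then explicitly registered for the capture process; the directory objects, file name, path, and capture name below are illustrative assumptions.

  -- At the downstream database: copy an archived redo log file from the source
  -- database over a database link (directory objects and names are illustrative).
  BEGIN
    DBMS_FILE_TRANSFER.GET_FILE(
      source_directory_object      => 'SOURCE_ARCH_DIR',
      source_file_name             => '1_105_784034246.arc',
      source_database              => 'dbs1.example.com',
      destination_directory_object => 'DOWNSTREAM_ARCH_DIR',
      destination_file_name        => '1_105_784034246.arc');
  END;
  /

  -- Register the copied file for the archived-log downstream capture process.
  ALTER DATABASE REGISTER LOGICAL LOGFILE
    '/oracle/arch/dbs1/1_105_784034246.arc' FOR 'arch_log_capture';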

The source database for a change captured by a downstream capture process is the database where the change was recorded in the redo log, not the database running the downstream capture process.
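At the database running the capture process, this distinction is visible in the data dictionary; for example, DBA_CAPTURE reports the capture type and the source database for each capture process:

  -- SOURCE_DATABASE shows where the captured changes were recorded in the redo log,
  -- which is not necessarily the database running the capture process.
  SELECT capture_name, capture_type, source_database
    FROM dba_capture;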

From the ITPUB blog, link: http://blog.itpub.net/27036311/viewspace-732975/. Please credit the source when reposting; otherwise legal liability may be pursued.
