Master Note for Streams Recommended Configuration [ID 418755.1]
In this Document
Purpose
Scope
Details
Configuration
1.0 Software Version
2.0 Database Parameters
2.1 Significance of AQ_TM_PROCESSES with respect to Oracle Streams
3.0 Database Storage
3.1. Tablespace for Streams Administrator queues
3.2. Separate queues for capture and apply
4.0 Privileges
5.0 Source Site Configuration
5.1. Streams and Flash Recovery Area (FRA)
5.2. Archive Logging must be enabled
5.3. Supplemental logging
5.4. Implement a Heartbeat Table
5.5. Flow Control
5.6. Perform periodic maintenance
Database Version 9iR2 and 10gR1
Database Version 10gR2 and above
5.7. Capture Process Configuration
5.8. Propagation Configuration
5.9. Additional Configuration for RAC Environments for a Source Database
6.0 Target Site Configuration
6.1. Privileges
6.2. Instantiation
6.3. Conflict Resolution
6.4. Apply Process Configuration
6.5. Additional Configuration for RAC Environments for an Apply Database
OPERATION
Global Name
Certification/compatibility/interoperability between different database versions
Apply Error Management
Backup Considerations
NLS and Characterset considerations
Batch Processing
Source Queue Growth
Streams Cleanup/Removal
Automatic Optimizer Statistics Collection
MONITORING
Streams Healthcheck Scripts
Alert Log
Monitoring Utility STRMMON
References
Applies to:
Oracle Database - Enterprise Edition - Version 9.2.0.1 to 11.2.0.3 [Release 9.2 to 11.2]
Information in this document applies to any platform.
Purpose
Oracle Streams enables the sharing of data and events in a data stream either within a database or from one database to another. This Note describes best practices for Oracle Streams configurations for both downstream capture and upstream (local) capture in version 9.2 and above.
Scope
The information in this note is intended for replication administrators implementing Streams replication in Oracle 9.2 and higher, and contains key recommendations for a successful implementation of Streams in Oracle database release 9.2 and above.
Details
Configuration
To ensure a successful Streams implementation, use the following recommendations when setting up a Streams environment:
- Software Version
- Database Settings: Parameters, Storage, and Privileges
- Source Site Configuration
- Target Site Configuration
1.0 Software Version
Oracle recommends to run streams with the latest available patchset, and the
list of recommended patches from Document
437838.1.
Please assess if any recommended patch conflicts with
existing patches on your system.
There is Streams support in both
DbControl and GridControl. GridControl should be used to manage multiple
databases in a Streams environment.
2.0 Database Parameters
For best results in a Streams environment, set the following initialization
parameters, as necessary, at each participating instance: global_names,
_job_queue_interval, sga_target, streams_pool_size:
Some
initialization parameters are important for the configuration, operation,
reliability, and performance of an Oracle Streams environment. Set these
parameters appropriately for your Oracle
Streams
environment.
Setting Initialization Parameters Relevant to Oracle
Streams
For
Oracle version 11.2
For
Oracle version 11.1
For
Oracle version 10.2
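As an illustration only (the values below are placeholders, not sizing recommendations for a specific workload), these parameters can be set dynamically on each participating instance, for example:
-- global_names ensures database link names match the target global name
ALTER SYSTEM SET global_names = TRUE SCOPE=BOTH;
-- streams_pool_size sets a minimum for the Streams pool when sga_target is in use
ALTER SYSTEM SET streams_pool_size = 256M SCOPE=BOTH;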
2.1 Significance of AQ_TM_PROCESSES with respect to Oracle Streams
The AQ_TM_PROCESSES parameter controls the queue monitor (QMON) processes. QMON performs queue maintenance operations such as message expiration, retry, delay, maintaining queue statistics, removing PROCESSED messages from queue tables, and updating the dequeue IOT as necessary. QMON monitors and maintains both system- and user-owned AQ persistent and buffered objects. For example, the Oracle job scheduler uses AQ as a client of various database components to coordinate operations at scheduled times and intervals. Similarly, Oracle Grid Control relies on AQ for its Alerts and Service Metrics, and database server utilities such as Data Pump now use AQ. Furthermore, Oracle Applications has been using AQ for a significant period of time and will continue to do so.
In 10.2 and above, it is recommended to leave the parameter aq_tm_processes unset and let the database auto-tune the parameter.
Refer to the following notes for more about QMON and AQ_TM_PROCESSES:
Note 305662.1 Master Note for AQ Queue Monitor Process (QMON)
Note 428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3
3.0 Database Storage
3.1. Tablespace for Streams Administrator queues
Create a separate tablespace for the streams administrator schema (STRMADMIN)
at each participating Streams database. This tablespace will be used for any
objects created in the streams administrator schema, including any spillover of
messages from the in-memory queue.
For example:
CREATE TABLESPACE &streams_tbs_name DATAFILE '&db_file_directory/&db_file_name'
  SIZE 25M REUSE AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;
ALTER USER strmadmin DEFAULT TABLESPACE &streams_tbs_name
  QUOTA UNLIMITED ON &streams_tbs_name;
3.2. Separate queues for capture and apply
Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when configuring bi-directional replication between multiple databases. For example, consider the situation where database db1.net replicates its changes to database db2.net, and db2.net replicates to db1.net. Each database maintains two queues: one for capturing the changes made locally and another for receiving changes from the other database.
Similarly, for three databases (db1.net, db2.net, db3.net) replicating their local changes directly to each other, there will be three queues at each database. For example, at db1.net, queue1 is used by the capture process, and queue2 and queue3 receive changes from each of the other databases. The two apply processes on db1.net (apply_from_db2, apply_from_db3) apply the changes, each associated with a specific queue (queue2 or queue3).
Queue names and queue table names should not exceed 24 characters in length. To pre-create a queue for Streams, use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. If you use the MAINTAIN_TABLES, MAINTAIN_SCHEMAS, or MAINTAIN_GLOBAL procedures to configure Streams and do not identify specific queue names, individual queues are created automatically.
Example: To configure a site (SITEA) that is capturing changes for distribution to another site, as well as receiving changes from that other site (SITEB), configure each queue at SITEA with a separate queue_table as follows:
exec dbms_streams_adm.set_up_queue(queue_table_name=>'QT_CAP_SITE_A', queue_name=>'CAP_SITEA');
exec dbms_streams_adm.set_up_queue(queue_table_name=>'QT_APP_FROM_SITEB', queue_name=>'APP_FROM_SITEB');
If desired, the above SET_UP_QUEUE procedure calls can include a storage_clause parameter to configure separate tablespace and storage specifications for each queue table. Typically, Logical Change Records (LCRs) are queued to an in-memory buffer and processed from memory. However, they can spill to disk if they remain in memory too long due to an unavailable destination, or under memory pressure (the Streams pool memory is too low). The storage_clause parameter can be used to preallocate space for the queue table or to specify an alternative tablespace for the queue table without changing the default tablespace for the Streams administrator.
4.0 Privileges
The Streams administrator (strmadmin) must be granted the following on each participating Streams database:
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/
In order to create capture and apply processes, the Streams administrator must have the DBA privilege. This privilege must be granted explicitly to the Streams administrator:
GRANT DBA TO strmadmin;
In addition, the other required privileges can be granted to the Streams administrator schema (strmadmin) on each participating Streams database with the GRANT_ADMIN_PRIVILEGE procedure. In Oracle 10g and above, all of the above grants (except DBA) can be made with:
exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
5.0 Source Site Configuration
The following recommendations apply to source databases, ie, databases in which Streams capture is configured.
5.1. Streams and Flash Recovery Area (FRA)
In Oracle 10g and above, configure a separate log archive destination independent of the Flash Recovery Area for the Streams capture process for the database. Archive logs in the FRA can be removed automatically on space pressure, even if the Streams capture process still requires them. Do not allow the archive logs for Streams capture to reside solely in the FRA.
5.2. Archive Logging must be enabled
Verify that each source database is running in ARCHIVELOG mode. For downstream capture sites (ie, databases in which the Streams capture process is configured for another database), the database at which the source redo logs are created must have archive logging enabled.
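For example, the current log mode can be confirmed with a simple query:
SELECT log_mode FROM v$database;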
5.3. Supplemental logging
Oracle redo log files contain the redo information needed for instance and media recovery. However, some redo-based applications such as Streams, Logical Standby, and ad hoc LogMiner need additional information logged into the redo log files. The process of logging this additional information into the redo files is called supplemental logging. Confirm that supplemental logging is enabled at each source site.
SUPPLEMENTAL LOGGING LEVELS
There are two levels of supplemental logging.
1. Database-level supplemental logging - There are two types of database-level logging.
Minimal supplemental logging - This places the information needed to identify rows in the redo logs. It can be enabled using the following command:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Identification key logging - This places the before and after images of the specified type of columns in the redo log files. This type of logging can be specified for ALL, PRIMARY KEY, UNIQUE, and FOREIGN KEY columns. It can be enabled using a command of the following form (specify one or more of the keywords as needed):
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY, ALL) COLUMNS;
You can check the database-level supplemental logging using the following query:
SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui,
       supplemental_log_data_fk, supplemental_log_data_all
FROM v$database;
2. Table-level supplemental logging - Creates individual log groups for each table. Logging can be unconditional or conditional.
Unconditional logging means the before images of the columns are logged regardless of whether the columns are updated. Conditional logging means the before images of the columns are logged only when the corresponding columns are updated. After images are always captured for the columns specified in the log group.
The following queries can be used to check the supplemental logging and the table-level log groups defined in the database:
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_schemas
UNION
SELECT supplemental_log_data_pk, supplemental_log_data_ui, supplemental_log_data_fk FROM dba_capture_prepared_database;
SELECT log_group_name, table_name, DECODE(always, 'ALWAYS', 'Unconditional', NULL, 'Conditional') always FROM dba_log_groups;
SELECT owner, log_group_name, table_name, column_name, logging_property FROM dba_log_group_columns;
SUPPLEMENTAL LOGGING REQUIREMENTS FOR STREAMS REPLICATION
Supplemental logging should always be enabled on the source database. The following types of columns used at the APPLY site are candidates for supplemental logging on the source:
1. All columns that are used in primary keys at the source site, for which changes are applied on the target, must be unconditionally logged at the table level or at the database level.
2. All columns that are used as substitute key columns at the APPLY site must be unconditionally logged.
3. All columns that are used in DML handlers, error handlers, rules, rule-based transformations, virtual dependency definitions, or subset rules must be unconditionally logged.
4. All columns that are used in a column list for conflict resolution methods must be conditionally logged if more than one column from the source is part of the column list.
5. If apply PARALLELISM is > 1, then all columns that are part of FOREIGN KEY or UNIQUE KEY constraints defined on more than one column, and of bitmap indexes defined on more than one column at the source, must be conditionally logged.
You can enable table-level supplemental logging for a table using a command of the following form:
ALTER TABLE <owner>.<table_name> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY, ALL) COLUMNS;
Alternately, you can create user-defined log groups. Omitting the ALWAYS clause creates a conditional log group:
ALTER TABLE <owner>.<table_name> ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID) ALWAYS;
To omit logging of a particular column's values in a user-defined log group, use the NO LOG option. Such columns are ignored when an ALL clause is specified in the command:
ALTER TABLE <owner>.<table_name> ADD SUPPLEMENTAL LOG GROUP emp_parttime (DEPARTMENT_ID NO LOG, EMPLOYEE_ID);
In 9iR2, Streams apply requires unconditional logging of unique index and foreign key constraint columns, even if those columns are not modified. This is due to unpublished Bug 4198593 (Apply incorrectly requires unconditional logging of Unique and FK constraints), fixed in 9.2.0.8.
In versions 10g and above, the PREPARE_*_INSTANTIATION procedures implicitly create supplemental log groups. For downstream capture sites (ie, databases in which the Streams capture is configured for another database), the database at which the source redo logs are created must have supplemental logging enabled for the database objects of interest to the downstream capture process. The type of supplemental logging that is enabled implicitly by these procedures can be checked using the SQL in the following link to the documentation. However, additional supplemental logging might need to be enabled depending on the requirements mentioned above.
http://docs.oracle.com/cd/B19306_01/server.102/b14228/mon_rep.htm#BABGFFDC
5.4. Implement a Heartbeat Table
To ensure that the applied_scn of the DBA_CAPTURE
view is updated periodically, implement a "heart beat" table. A "heart beat"
table is especially useful for databases that have a low activity rate. The
streams capture process requests a checkpoint after every 10Mb of generated
redo. During the checkpoint, the metadata for streams is maintained if there are
active transactions. Implementing a heartbeat table ensures that there are open
transactions occurring regularly within the source database enabling additional
opportunities for the metadata to be updated frequently. Additionally, the
heartbeat table provides quick feedback to the database administrator as to the
health of the streams replication.
To implement a heartbeat table:
- Create a table at the source site that includes a date or timestamp column and the global name of the database.
- Add a rule to capture changes to this table and propagate the changes to each target destination.
- Make sure that the target destination will apply changes to this table as well.
- Set up an automated job to update this table at the source site periodically, for example every minute.
Refer to Document 461278.1 Example of a Streams Heartbeat Table.
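A minimal sketch of the source-side pieces is shown below; the table, column, and job names are hypothetical, and DBMS_SCHEDULER assumes 10g or later (on 9iR2, DBMS_JOB can be used instead):
CREATE TABLE strmadmin.heartbeat (
  site_name VARCHAR2(128),  -- global name of the source database
  update_ts TIMESTAMP);
INSERT INTO strmadmin.heartbeat SELECT global_name, SYSTIMESTAMP FROM global_name;
COMMIT;
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'STRMADMIN.HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'UPDATE strmadmin.heartbeat SET update_ts = SYSTIMESTAMP; COMMIT;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE);
END;
/
The table must also be added to the capture, propagation, and apply rule sets as described above.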
5.5. Flow Control
In Oracle 9iR2, when the memory threshold for the buffer queue is exceeded, Streams writes the messages to disk. This is sometimes referred to as "spillover". When spillover occurs, Streams can no longer take advantage of the in-memory queue optimization. One technique to minimize this spillover is to implement a form of flow control. See the following note for the scripts and pre-requisites:
Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk (Doc ID 259609.1)
In Oracle 10g and above, flow control is handled automatically by the database, so there is no need to implement it manually.
5.6. Perform periodic maintenance
Database Version 9iR2 and 10gR1
Periodically force capture to checkpoint. This checkpoint is not the same as a database checkpoint. To force capture to checkpoint, use the capture parameter _CHECKPOINT_FORCE and set the value to YES. Forcing a checkpoint ensures that the DBA_CAPTURE view columns CAPTURED_SCN and APPLIED_SCN are maintained.
Database Version 10gR2 and above
A. Confirm checkpoint retention. In Oracle 10gR2 and above, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
B. Dump fresh copy of Dictionary to redo. Issue a DBMS_CAPTURE_ADM.BUILD command to dump a current copy of the data dictionary to the redo logs. Doing this will reduce the amount of logs to be processed in case of additional capture process creation or process rebuild.
C. Prepare database objects for instantiation Issue a DBMS_CAPTURE_ADM.PREPARE_*_INSTANTIATION where * indicates the level (TABLE, SCHEMA, GLOBAL) for the database objects captured by Streams. This is used in conjunction with the BUILD in B above for new capture creation or rebuild purposes.
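For illustration, assuming a capture process named CAPTURE_EX and a replicated schema HR (both hypothetical names), these maintenance steps might look like:
-- A. Reduce checkpoint retention from the 60-day default (value is in days)
exec DBMS_CAPTURE_ADM.ALTER_CAPTURE(capture_name => 'CAPTURE_EX', checkpoint_retention_time => 7);
-- B. Dump a fresh copy of the data dictionary to the redo logs
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('Dictionary build FIRST_SCN: ' || scn);
END;
/
-- C. Prepare the replicated objects for instantiation (schema level shown)
exec DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'HR');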
5.7. Capture Process Configuration
A. Configuring Capture
Use the DBMS_STREAMS_ADM.ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES for DML and DDL, ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.
CAPTURE requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.
A single Streams capture can process rules for multiple tables or schemas. For best performance, rules should be simple. Rules that include NOT or LIKE clauses are not simple and will impact the performance of Streams.
Minimize the number of rules added to the process rule set. A good rule of thumb is to keep the number of rules in the rule set to fewer than 100. If more objects need to be included in the rule set, consider constructing rules using the IN clause. For example, a rule for the six TB_M21* tables in the MYACCT schema would look like the following:
(:dml.get_object_owner() = 'MYACCT' and :dml.is_null_tag() = 'Y' and
 :dml.get_object_name() IN
 ('TB_M21_1','TB_M21_2','TB_M21_3','TB_M21_40','TB_M21_10','TB_M211B010'))
In version 10.2 and above, use the DBMS_STREAMS_ADM.MAINTAIN_* procedures (where * = TABLE, SCHEMA, GLOBAL, TTS) to configure Streams. These procedures automate the entire configuration of the Streams processes between databases, following the Streams best practices. For local capture, the default behavior of these procedures is to implement a separate queue for capture and apply. If you are configuring a downstream capture and applying the changes within the same database, override this behavior by specifying the same queue for both the capture_queue_name and apply_queue_name.
If the MAINTAIN_* procedures are not suitable for your environment, use the ADD_*_RULES procedures (ADD_TABLE_RULES and ADD_SCHEMA_RULES for DML and DDL, ADD_SUBSET_RULES for DML only, and ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of steps required to configure Streams processes. It is also possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.
The Streams capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure can be used to capture DML changes for the entire database as long as a negative rule set is created for the capture process that includes rules for objects with unsupported datatypes. ADD_GLOBAL_RULES can be used to capture all DDL changes for the database.
A single Streams capture can process changes for multiple tables or schemas. For best performance, rules for these multiple tables or schemas should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate ALL changes, tagged or not, for the table or object.
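A sketch of adding such a rule to the negative rule set (the capture, queue, and table names here are hypothetical):
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.job_history',
    streams_type       => 'capture',
    streams_name       => 'capture_ex',
    queue_name         => 'strmadmin.cap_sitea',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => TRUE,   -- eliminate ALL changes, tagged or not
    inclusion_rule     => FALSE); -- FALSE adds the rules to the negative rule set
END;
/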
B. Capture Parameters
Set the following parameters
after a capture process is created:
PARALLELISM=1 (Default: 1)
  Number of parallel execution servers used to configure one or more preparer processes that prefilter changes for the capture process. The recommended value is 1.
_CHECKPOINT_FREQUENCY=500 (Default: 10 before 10.2.0.4; 1000 in 10.2.0.4)
  Modifies the frequency of LogMiner checkpoints, which matters especially in a database with significant LOB or DDL activity. Larger values decrease the frequency of LogMiner checkpoints; smaller values increase it. LogMiner checkpoints are not the same as database checkpoints. The availability of LogMiner checkpoints impacts the time required to recover/restart the capture after a database restart. In a low-activity database (ie, small amounts of data or infrequently changed data), use a lower value, such as 100. A LogMiner checkpoint is requested by default every 10Mb of redo mined. If the value is set to 500, a LogMiner checkpoint is requested after every 500Mb of redo mined. Increasing the value of this parameter is recommended for active databases with significant redo generated per hour. It should not be necessary to configure _CHECKPOINT_FREQUENCY in 10.2.0.4 or higher.
_SGA_SIZE (Default: 10)
  Amount of memory available from the Streams pool for LogMiner processing. The default amount of streams_pool memory allocated to LogMiner is 10Mb. Increase this value especially in environments where large LOBs are processed. This parameter should not be increased unless the LogMiner error ORA-1341 is encountered. Streams pool memory allocated to LogMiner is unavailable for other usage.
Capture parameters can be set using the SET_PARAMETER procedure from the DBMS_CAPTURE_ADM package. For example, to set the checkpoint frequency of the Streams capture process named CAPTURE_EX, use the following syntax while logged in as the Streams administrator to request a LogMiner checkpoint after processing every gigabyte (1000Mb) of redo:
exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
5.8. Propagation Configuration
A. Configuring Propagation
If the MAINTAIN_* procedures are not suitable for your environment (Oracle 9iR2 and 10gR1), use the ADD_*_PROPAGATION_RULES procedures (ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, and ADD_GLOBAL_PROPAGATION_RULES for both DML and DDL; ADD_SUBSET_PROPAGATION_RULES for DML only). These procedures minimize the number of steps required to configure Streams processes. Also, it is possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.
The rules in the rule set for propagation can differ from the rules specified for the capture process. For example, to configure that all captured changes be propagated to a target site, a single ADD_GLOBAL_PROPAGATION_RULES procedure can be specified for the propagation even though multiple ADD_TABLE_RULES might have been configured for the capture process.
B. Propagation mode
For new propagation processes configured in 10.2 and above, set the queue_to_queue propagation parameter to TRUE. If the database is RAC enabled, an additional service is created, typically named in the format sys$schema.queue_name.global_name, when the Streams subscribers are initially created. A Streams subscriber is a defined propagation between two Streams queues or an apply process with the apply_captured parameter set to TRUE. This service automatically follows the ownership of the queue on queue ownership switches (ie, instance startup, shutdown, etc). The service name can be found in the NETWORK_NAME column of the DBA_SERVICES view.
If the maintain_* (TABLE,SCHEMA,GLOBAL) procedures are used to configure Streams, queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue propagation must use a TNS servicename (or connect name) that specifies the GLOBAL_NAME in the CONNECT_DATA clause of the descriptor. See section 6 on Additional Considerations for RAC below.
Propagation processes configured prior to 10.2 continue to use the dblink mode of propagation. In this situation, if the database link no longer connects to the owning instance of the queue, propagation will not succeed. You can continue to use the 10.1 best practices for this propagation, or recreate the propagation during a maintenance window. Make sure that the queue is empty with no unapplied spilled messages before you drop the propagation. Then, recreate the propagation with the queue_to_queue parameter set to TRUE.
Queues created prior to 10.2 on RAC instances should be dropped and recreated in order to take advantage of the automatic service generation and queue_to_queue propagation. Be sure to perform this activity when the queue is empty and no new LCRs are being enqueued into the queue.
C. Propagation Parameters
LATENCY=5 (Default: 60)
  Maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. The default value is 60. Caution: if latency is not specified for this call, then any existing value will be overwritten with the default value (60).
  For example, if the latency is 60 seconds and there are no messages to be propagated during the propagation window, messages from that queue for the destination will not be propagated for at least 60 more seconds; it will be at least 60 seconds before the queue is checked again for messages to propagate to the specified destination. If the latency is 600, the queue will not be checked for 10 minutes. If the latency is 0, a job queue process will wait for messages to be enqueued for the destination and propagate them as soon as they are enqueued.
Propagation parameters can be set using the ALTER_PROPAGATION_SCHEDULE procedure from the DBMS_AQADM package. For example, to set the latency parameter of the streams propagation from the STREAMS_QUEUE owned by STRMADMIN to the target database whose global_name is DEST_DB for the queue Q1, use the following syntax while logged in as the Streams Administrator:
dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue','DEST_DB',destination_queue=>'Q1',latency=>5);
D. Network Connectivity
When using Streams propagation across a Wide Area Network (WAN), increase the session data unit (SDU) to improve the propagation performance. The maximum value for SDU is 32K (32767). The SDU value for network transmission is negotiated between the sender and receiver sides of the connection: the minimum SDU value of the two endpoints is used for any individual connection. In order to take advantage of an increased SDU for Streams propagation, the receiving side sqlnet.ora file must include the default_sdu_size parameter. The receiving side listener.ora must indicate the SDU change for the SID. The sending side tnsnames.ora connect string must also include the SDU modification for the particular service.
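As an illustrative sketch only (the host, SID, and service names are placeholders), the SDU settings described above might look like:
# sqlnet.ora on the receiving side
DEFAULT_SDU_SIZE=32767
# listener.ora on the receiving side
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (SDU=32767)
      (SID_NAME=orcl)))
# tnsnames.ora on the sending side
DEST_DB=
  (DESCRIPTION=
    (SDU=32767)
    (ADDRESS=(PROTOCOL=tcp)(HOST=dest-host)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dest_db.mycompany.com)))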
Tuning the TCP/IP networking parameters can significantly improve performance across the WAN. Here are some example tuning parameters for Linux. These parameters can be set in the /etc/sysctl.conf file, followed by running sysctl -p. When using RAC, be sure to configure this at each instance.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
In addition, the SEND_BUF_SIZE and RECV_BUF_SIZE sqlnet.ora parameters increase the performance of propagation on your system. These parameters increase the size of the buffer used to send or receive the propagated messages. These parameters should only be increased after careful analysis on their overall impact on system performance.
For further information, please review the Oracle Net Services Guide
5.9. Additional Configuration for RAC Environments for a Source Database
Archive Logs
The archive log threads from all
instances must be available to any instance running a capture process. This is
true for both local and downstream capture.
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:
- a capture process runs at the owning instance of the source queue
- a propagation job runs at the owning instance of the queue
- a propagation job must connect to the owning instance of the target queue.
Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If PRIMARY_INSTANCE is set to a specific instance (ie, not 0), queue ownership will return to the specified instance whenever the instance is up.
Capture will automatically follow the ownership of the queue. If the ownership changes while capture is running, capture will stop on the current instance and restart at the new owner instance.
For queues created with Oracle Database 10g Release 2, a service is created with the service name schema.queue and the network name SYS$schema.queue.global_name for that queue. If the global_name of the database does not match the db_name.db_domain name of the database, be sure to include the global_name as a service name in the init.ora.
For propagations created with Oracle Database 10g Release 2 with the queue_to_queue parameter set to TRUE, the propagation job will deliver only to the specific queue identified. Also, the source database link for the target database connect descriptor must specify the correct service (the global name of the target database) to connect to the target database. For example, the tnsnames.ora entry for the target database should include the CONNECT_DATA clause in the connect descriptor, specifying (CONNECT_DATA=(SERVICE_NAME='global_name of target database')). Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
For example, consider the tnsnames.ora file for a database with the global name db.mycompany.com. Assume that the alias name for the first instance is db1 and that the alias for the second instance is db2. The tnsnames.ora file for this database might include the following entries:
db.mycompany.com=
 (description=
  (load_balance=on)
  (address=(protocol=tcp)(host=node1-vip)(port=1521))
  (address=(protocol=tcp)(host=node2-vip)(port=1521))
  (connect_data=
   (service_name=db.mycompany.com)))
db1.mycompany.com=
 (description=
  (address=(protocol=tcp)(host=node1-vip)(port=1521))
  (connect_data=
   (service_name=db.mycompany.com)
   (instance_name=db1)))
db2.mycompany.com=
 (description=
  (address=(protocol=tcp)(host=node2-vip)(port=1521))
  (connect_data=
   (service_name=db.mycompany.com)
   (instance_name=db2)))
Use the first alias (db.mycompany.com), which does not name a specific instance, in the target database link USING clause.
DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies all active services for the database. In non-RAC configurations, the service name will typically be the global_name. However, it is possible for users to manually create alternative services and use them in the TNS connect_data specification. For RAC configurations, the service will appear in these views as SYS$schema.queue.global_name.
Propagation Restart
Use the procedures START_PROPAGATION and STOP_PROPAGATION from DBMS_PROPAGATION_ADM to enable and disable the propagation schedule. These procedures automatically handle queue_to_queue propagation.
Example:
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');
or
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation', force=>true);
exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');
6.0 Target Site Configuration
The following recommendations apply to target databases, ie, databases in which Streams apply is configured.
6.1. Privileges
Grant explicit privileges to the APPLY_USER for the user tables. Examples:
Privileges for table-level DML: INSERT, UPDATE, DELETE
Privileges for table-level DDL: CREATE (ANY) TABLE, CREATE (ANY) INDEX, CREATE (ANY) PROCEDURE
6.2. Instantiation
Set instantiation SCNs manually if not using export/import. If manually configuring the instantiation SCN for each table within a schema, use the RECURSIVE=>TRUE option of the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN procedure.
For DDL, set the instantiation SCN at the next higher level (ie, SCHEMA or GLOBAL level).
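A minimal sketch of a manual schema-level call, run at the destination as the Streams administrator (the schema name, source global name, and the source_db database link are hypothetical):
DECLARE
  iscn NUMBER;
BEGIN
  -- obtain the current SCN from the source database over a database link
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@source_db;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'HR',
    source_database_name => 'SOURCE_DB.MYCOMPANY.COM',
    instantiation_scn    => iscn,
    recursive            => TRUE);  -- also sets the SCN for each table in the schema
END;
/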
6.3. Conflict Resolution
If updates will be performed in multiple databases for the same shared object, be sure to configure conflict resolution. Refer to Document 779801.1 Streams Conflict Resolution for more detail.
To simplify conflict resolution on tables with LOB columns, create an error handler to handle errors for the table. When registering the handler using the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the ASSEMBLE_LOBS parameter as TRUE.
See also Document 265201.1 Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786, Conflict Resolution.
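A sketch of registering such an error handler (the table, operation, and handler procedure names are hypothetical; the handler procedure itself must be created separately):
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'HR.EMPLOYEES',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => TRUE,                         -- register as an error handler
    user_procedure => 'STRMADMIN.EMP_ERROR_HANDLER',
    assemble_lobs  => TRUE);                        -- assemble LOB LCRs before calling the handler
END;
/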
6.4. Apply Process Configuration
A. Rules
If the MAINTAIN_* procedures are not suitable for your environment, use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES, and ADD_GLOBAL_RULES for DML and DDL; ADD_SUBSET_RULES for DML only).
APPLY can be configured with or without a ruleset. The ADD_GLOBAL_RULES can be used to apply all changes in the queue for the database. If no ruleset is specified for the apply process, all changes in the queue are processed by the apply process.
A single Streams apply can process rules for multiple tables or schemas located in a single queue that are received from a single source database . For best performance, rules should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the table or object name in the negative rule set for the Streams process. Setting this clause will eliminate all changes, tagged or not, for the table or object.
B. Parameters
DISABLE_ON_ERROR=N (Default: Y)
  If Y, the apply process is disabled on the first unresolved error, even if the error is not fatal. If N, the apply process continues regardless of unresolved errors.
PARALLELISM=4 (Default: 1)
  Parallelism configures the number of apply servers available to the apply process for applying user transactions from the source database. Choose a value such as 4, 8, 12, or 16 based on the concurrent replicated workload generated at the source AND the number of CPUs available at the target.
TXN_LCR_SPILL_THRESHOLD (Default: 10,000)
  New in 10.2. Leave this parameter at its default initially. It enables you to specify that an apply process begins to spill messages for a transaction from memory to disk when the number of messages in memory for a particular transaction exceeds the specified number. Setting this parameter to a value higher than the default, to try to stage everything in memory, must be done carefully so that queue spilling is not increased. Setting TXN_LCR_SPILL_THRESHOLD to 'infinite' is not recommended because this reverts Streams to the old pre-10.2 behaviour.
  The DBA_APPLY_SPILL_TXN and V$STREAMS_APPLY_READER views enable you to monitor the number of transactions and messages spilled by an apply process. Refer to Document 365648.1 Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams.
Apply parameters can be set using the SET_PARAMETER procedure from the DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter of the Streams apply process named APPLY_EX, use the following syntax while logged in as the Streams administrator:
exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
In some cases, performance can be improved by setting the following hidden parameters. These parameters should be set when the major workload is UPDATEs and the updates are performed on just a few columns of a many-column table.
_DYNAMIC_STMTS=Y (Default: N)
  If Y, then for UPDATE statements the apply process will optimize the generation of SQL statements based on required columns.
_HASH_TABLE_SIZE=1000000 (Default: 80*parallelism)
  Sets the size of the hash table used to calculate transaction dependencies to 1 million.
6.5. Additional Configuration for RAC Environments for an Apply Database
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance. The Streams components (capture/propagation/apply) all use that same owning instance to perform their work. This means that:
- the database link specified in the propagation must connect to the owning instance of the target queue
- the apply process runs at the owning instance of the target queue.
Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If PRIMARY_INSTANCE is set to a specific instance (ie, not 0), queue ownership will return to the specified instance whenever the instance is up.
Apply will automatically follow the ownership of the queue. If the ownership changes while apply is running, apply will stop on the current instance and restart at the new owner instance.
Changing the GLOBAL_NAME of the Database
See the OPERATION section on Global Name below. The following are some additional considerations when running in a RAC environment. If the GLOBAL_NAME of the database is changed, ensure that the queue is empty before changing the name and that the apply process is dropped and recreated with the apply_captured parameter set to TRUE. In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database initialization parameter file.
OPERATION
A Streams process will automatically restart after a database startup,
assuming that the process was in a running state before the database shut down.
No special startup or shutdown procedures are required in the normal
case.
Global Name
Streams uses the GLOBAL_NAME of the database to identify changes from or to a
particular database. Do not modify the GLOBAL NAME of a Streams database after
capture has been created. Changes captured by the Streams capture process
automatically include the current global name of the source database. This means
that if the global name is modified after a capture process has been configured,
the capture process will need to be dropped and recreated following the
GLOBAL_NAME modification. In addition, the system-generated rules for capture,
propagation, and apply typically specify the global name of the source database.
These rules will need to be modified or recreated to adjust the source_database_name. Finally, if the GLOBAL_NAME does not match the db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database in the database initialization parameter file.
If the global name must be modified on the database, do it at a
time when NO user changes are possible on the database and the Streams queues
are empty with no outstanding changes to be applied, so that the Streams
configuration can be recreated. Keep in mind that all subscribers (propagations
to target databases and the target apply processes) must also be recreated if
the source database global_name is changed. Follow the directions in the Streams
Replication Administrator's Guide for Changing
the DBID or GLOBAL NAME of a source database.
It is also strongly
recommended that the database init.ora parameter global_names be set to TRUE to
guarantee that database link names match the global name of the target
database.
Certification/compatibility/interoperability between different database versions
Streams has two types of capture: local capture and downstream capture.
Local capture:
Local capture means that a capture process runs on the source database. Local capture can be used across different hardware, different operating systems, and different Oracle versions.
Downstream capture:
Downstream capture means that a capture process runs on a database other than the source database. The redo log files from the source database are copied to the other database, called the downstream database, and the capture process captures changes in these redo log files at the downstream database.
Operational Requirements for Downstream Capture
The following are operational requirements for using downstream capture:
* The source database must be running at least Oracle Database 10g, and the downstream capture database must be running the same version of Oracle as the source database or higher.
* The operating system on the source and downstream capture sites must be the same, but the operating system release does not need to be the same. In addition, the downstream sites can use a different directory structure from the source site.
* The hardware architecture on the source and downstream capture sites must be the same. For example, a downstream capture configuration with a source database on a 32-bit Sun system must have a downstream database that is configured on a 32-bit Sun system. Other hardware elements, such as the number of CPUs, memory size, and storage configuration, can be different between the source and downstream sites.
In a downstream capture environment, the
source database can be a single instance database or a multi-instance Real
Application Clusters (RAC) database. The downstream database can be a single
instance database or a multi-instance RAC database, regardless of whether the
source database is single instance or multi-instance.
Apply Error Management
The view DBA_APPLY_ERROR includes the message_number within the transaction on which the reported error occurred. Use this message number in conjunction with the procedures from the Streams Concepts and Administration manual (Chapter 22, Monitoring Streams Apply Processes, "Displaying Detailed Information About Apply Errors") to print the column values of each logical change record within the failed transaction.
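For a first look at the error queue before printing individual LCRs, a query such as the following can be used (the column selection is illustrative):
SELECT apply_name, source_database, local_transaction_id, message_number,
       message_count, error_number, error_message
FROM dba_apply_error
ORDER BY error_creation_time;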
Backup Considerations
1. Ensure that any manual backup procedures that include any of the following statements also set a non-null Streams tag:
ALTER TABLESPACE ... BEGIN BACKUP
ALTER TABLESPACE ... END BACKUP
The tag should be chosen such that these DDL commands will be ignored by the capture rule set.
To set a streams tag, use the DBMS_STREAMS.SET_TAG procedure. A non-null tag should be specified to avoid capturing these commands.
Backups performed using RMAN do not need to set a Streams tag.
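For example (the tag value shown is arbitrary; choose one that your capture rule set ignores):
exec DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
-- run the ALTER TABLESPACE ... BEGIN BACKUP / END BACKUP statements in this session
exec DBMS_STREAMS.SET_TAG(tag => NULL);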
2. Do not allow any automated backup of the archived logs to remove necessary archive logs. It is especially important in a Streams environment that all necessary archived logs remain available online and in the expected location until the capture process has finished processing them. If a log required by the capture process is unavailable, the capture process will abort. Force a checkpoint (capture/logminer) before beginning the manual backup procedures. To force a checkpoint, explicitly reset the hidden capture parameter _CHECKPOINT_FORCE to 'Y'. The REQUIRED_CHECKPOINT_SCN column of the DBA_CAPTURE view specifies the lowest required SCN to restart capture. A procedure to determine the minimum archive log necessary for successful capture restart is available in the Streams health check script.
3. Ensure that all archive logs (from all threads) are available. Database recovery depends on the availability of these logs, and a missing log will result in incomplete recovery.
4. Ensure that the APPLY process parameter, COMMIT_SERIALIZATION, is set to the default value, FULL, so that the apply commits all transactions, regardless of whether they contain dependent row LCRs, in the same order as the corresponding transactions on the source database.
5. Implement a "heartbeat" table. To ensure that the applied_scn of the DBA_CAPTURE view is updated periodically, implement a "heart beat" table. Implementing a heartbeat table ensures that the metadata is updated frequently. Additionally, the heartbeat table provides quick feedback as to the health of streams replication. Refer to the Source Site Configuration Section: Implement a Hearbeat Table for more details.
6. In situations that result in incomplete recovery (Point-in-Time recovery) at the source site, follow the instructions in Chapter 9 of the Streams Replication Administrators Guide
Performing Point-in-Time Recovery on the Source in a Single-Source Environment
Performing Point-in-Time Recovery in a Multiple-Source Environment
7. In situations that result in incomplete recovery at the destination site, follow the instructions in Chapter 9 of the Streams Replication Administrator's Guide
Performing Point-in-Time Recovery on a Destination Database
8. In situations where combined capture and apply (CCA) is implemented in a single-source replication environment :
Combined Capture and Apply and Point-in-Time Recovery
NLS and Characterset considerations
Ensure that all the databases in the streams configuration and corresponding NLS settings are configured appropriately for the successful replication of linguistic data.
For more information, refer to the Database Globalization Support Guide
Batch Processing
For best performance, the commit point for batch processing should be kept low. It is preferable that excessively large batch processing be run independently at each site. If this technique is used, be sure to implement DBMS_STREAMS.SET_TAG to skip the capture of the batch processing session. Setting this tag is valid only in the connected session issuing the SET_TAG command and will not impact the capture of changes from any other database sessions.
DDL Replication
When replicating DDL, keep in mind the effect the DDL statement will have on the replicated sites. In particular, do not allow system-generated naming for constraints or indexes, as modifications to these will most likely fail at the replicated site. Also, storage clauses may cause some issues if the target sites are not identical.
If you decide NOT to replicate DDL in your Streams environment, any table structure change must be performed manually. Refer to Document 313478.1 Performing Manual DDL in a Streams Environment.
Propagation
At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database restart. The typical solution is to disable the propagation and then re-enable it:
- exec dbms_propagation_adm.stop_propagation('propagation_name');
- exec dbms_propagation_adm.start_propagation('propagation_name');
If the above does not fix the problem, stop the propagation with the force parameter and then start propagation again:
- exec dbms_propagation_adm.stop_propagation('propagation_name',force=>true);
- exec dbms_propagation_adm.start_propagation('propagation_name');
An additional side effect of stopping the propagation with the force parameter is that the statistics for the propagation are cleared.
The above is documented in the Streams Replication Administrator's Guide: Restart Broken Propagations.
Source Queue Growth
The source queue may grow if one of the target sites is down for an extended period, or if propagation is unable to deliver messages to a particular target site (subscriber) for an extended period due to network problems.
- Automatic flow control minimizes the impact of this queue growth. Queued messages (LCRs) for unavailable target sites will spill to disk storage while messages for available sites are processed normally.
- Propagation is implemented using the DBMS_JOB subsystem. If a job fails 16 successive times, it will be marked as "broken" and become disabled. Be sure to check periodically that the job is running successfully to minimize source queue growth due to this problem.
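A simple check for such broken or failing propagation jobs (this queries the DBMS_JOB catalog used by propagation):
SELECT job, what, broken, failures FROM dba_jobs WHERE broken = 'Y' OR failures > 0;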
Streams Cleanup/Removal
Removing the Streams administrator schema with DROP USER ... CASCADE can be used to remove the entire Streams configuration.
Automatic Optimizer Statistics Collection
Oracle Database 10g has the Automatic Optimizer Statistics Collection feature, which runs every night and gathers optimizer statistics for tables whose statistics have become stale. The problem with volatile tables, such as the Streams queue tables, is that when the statistics collection job runs, these tables may not contain data that is representative of their full-load period. For this reason, Oracle recommends that for volatile tables you run the dbms_stats gather job manually when the tables are at their fullest and then immediately lock the statistics using the PL/SQL APIs provided (dbms_stats.lock ...). This ensures that when the nightly Automatic Optimizer Statistics Collection job runs, these volatile tables are skipped and therefore not analyzed.
These volatile AQ/Streams tables are created through a call to dbms_aqadm.create_queue_table (qtable_name, etc.) or dbms_streams_adm.set_up_queue() with a user-defined queue table (qtable_name). In addition to the queue table, the call internally creates the following tables, which also tend to be volatile:
aq$_{qtable_name}_i
aq$_{qtable_name}_h
aq$_{qtable_name}_t
aq$_{qtable_name}_p
aq$_{qtable_name}_d
aq$_{qtable_name}_c
Oracle has the ability to restore old statistics on tables, including data dictionary tables, using the dbms_stats restore APIs. This feature can be used for short-term relief, but the real solution is the first one, where you lock the optimizer statistics of volatile tables.
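A sketch of this approach for one queue table (the owner and table name are illustrative):
-- gather statistics while the queue table is at a representative (full) load
exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'STRMADMIN', tabname => 'QT_CAP_SITEA');
-- lock the statistics so the nightly automatic job skips this table
exec DBMS_STATS.LOCK_TABLE_STATS(ownname => 'STRMADMIN', tabname => 'QT_CAP_SITEA');
-- previously gathered statistics can later be restored with DBMS_STATS.RESTORE_TABLE_STATS if needed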
MONITORING
All Streams processing is done at the "owning instance" of the queue. To determine the owning instance, use a query like the following:
SELECT q.owner, q.name, t.queue_table, t.owner_instance
FROM dba_queues q, dba_queue_tables t
WHERE t.object_type = 'SYS.ANYDATA'
AND q.queue_table = t.queue_table
AND q.owner = t.owner;
To display the monitoring view information, either query the monitoring views from the owning instance or use the GV$ views for the dynamic Streams views. Refer to Note 735976.1 for Streams-related views.
Streams Healthcheck Scripts
The Streams health check script is a collection of queries to determine the configuration of the Streams environment. This script should be run at each participating database in a Streams configuration. In addition to configuration information, analysis of the rules specified for Streams is included to enable quicker diagnosis of problems. A guide to interpreting the output is provided. The health check script is an invaluable tool for diagnosing problems. The Streams health check script is available from Document 273674.1 Streams Configuration Report and Health Check Script.
Alert Log
Streams capture and apply processes report long-running and large transactions in the alert log.
Long-running transactions are open transactions with no activity (ie, no new change records, rollback, or commit) for an extended period (20 minutes). Large transactions are open transactions with a large number of change records. The alert log will report that a long-running or large transaction has been seen every 20 minutes. Not all such transactions will be reported - only one per 10-minute period. When the commit or rollback is received, this fact will also be reported in the alert log.
Refer to Document 783927.1 Streams Long-Running Transactions: Explanation, Diagnosis, and Troubleshooting.
Monitoring Utility STRMMON
STRMMON is a monitoring tool focused on Oracle Streams. Using this tool, database administrators get a quick overview of the Streams activity occurring within a database. strmmon reports the information in a single-line display. The reporting interval and the number of iterations to display are configurable. STRMMON is available in the rdbms/demo directory in $ORACLE_HOME. The most recent version of the tool is available from Document 290605.1 Oracle Streams STRMMON Monitoring Utility.
References
NOTE:730036.1 - Troubleshooting Oracle Streams Performance Issues
NOTE:735976.1 - All Replication Configuration Views For Streams, AQ, CDC and Advanced Replication
NOTE:1264598.1 - Master Note for Streams Downstream Capture - 10g and 11g [Video]
NOTE:238455.1 - Streams DML Types Supported and Supported Datatypes
NOTE:461278.1 - Example of a Streams Heartbeat Table
NOTE:259609.1 - Script. to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or ORA-26786,Conflict Resolution
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:305662.1 - Master Note for AQ Queue Monitor Process (QMON)
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:313279.1 - Master Note for Troubleshooting Streams capture 'WAITING For REDO' or INITIALIZING
NOTE:313478.1 - Performing Manual DDL in a Streams Environment
NOTE:779801.1 - Streams Conflict Resolution
NOTE:783927.1 - Troubleshooting Long-Running Transactions in Oracle Streams
NOTE:789445.1 - Master Note for Streams Setup and Administration
NOTE:365648.1 - Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams
NOTE:428441.1 - "Warning: Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3 or Higher
NOTE:437838.1 - Streams Recommended Patches