ORACLE 10G OCA 042 Notes (Part 2)

Posted by kunlunzhiying on 2017-04-13

************$8 Managing Consistency and Concurrency*********************************
Oracle 10g retains the undo data needed for rows that have been modified since a transaction began. Oracle's GUI tool, EM Database Control, makes undo management and sizing easy. The Undo Advisor gathers statistics to help the administrator size the undo tablespace so that a transaction's DML statements can run to completion while SELECT statements do not return "snapshot too old" errors.
Purposes of undo: rolling back changes to the database, supporting read consistency, supporting Flashback Query, and database recovery (rolling back uncommitted data).
A database can have more than one undo tablespace, but only one can be active at a time.
SQL> select * from v$rollname;  -- lists the names of the currently active undo segments
The V$TRANSACTION dynamic performance view shows the relationship between a transaction and its undo segment.
for example:
sql>set transaction name 'update clerk salaries';
sql>update hr.employees set salary = salary * 1.25 where job_id like '%CLERK';
SQL>select xid,status,start_time,xidusn seg_num,r.name seg_name from v$transaction t join v$rollname r on t.xidusn=r.usn where t.name='update clerk salaries';
Note: an INSERT uses almost no space in the undo tablespace; only pointers to the new table rows are stored there.
Read consistency can also be applied to an entire transaction, as shown here:
SQL> set transaction read only;
Until the transaction is rolled back or committed, every query in it sees only changes that other sessions committed before the transaction began. In other words, a read-only transaction cannot contain any statement that modifies table data, wherever the table resides; an ALTER USER statement, for example, is not allowed.
SQL> alter system set undo_management = auto scope=spfile;  -- or manual
SQL> alter system set undo_tablespace = undo_atch;
SQL> alter system set undo_retention = 43200;  -- 12 hours, in seconds
The UNDO_RETENTION parameter:
Undo information falls into three categories: uncommitted undo, committed undo, and expired undo.
 
 
Monitoring the undo tablespace
In OEM (in the Oracle 10g screens the longest query is shown in minutes; possibly a display error), use the Undo Advisor.
SQL>SELECT to_char(begin_time,'yyyy-mm-dd hh24:mi:ss') starttime,to_char(end_time,'yyyy-mm-dd hh24:mi:ss') endtime,undoblks,maxquerylen maxqrylen from v$undostat;
 
 
Sizing the undo tablespace manually with the V$UNDOSTAT view
undo_tablespace_size = UR * UPS * blocksize  (UR = UNDO_RETENTION in seconds, UPS = undo blocks written per second, from UNDOBLKS)
For example: undo_tablespace_size = 43200 * 626 * 8192 = about 206 GB, then add 10-20% headroom.
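The arithmetic above can be double-checked with a short script (a minimal sketch; the 626 undo blocks per second is just the figure from the example, not a measured value):

```python
# Undo tablespace sizing: UNDO_RETENTION (seconds) * peak undo blocks
# written per second (from V$UNDOSTAT.UNDOBLKS) * database block size.
undo_retention_sec = 43200      # 12 hours
undo_blocks_per_sec = 626       # sample peak rate from V$UNDOSTAT
block_size = 8192               # bytes

size_bytes = undo_retention_sec * undo_blocks_per_sec * block_size
size_gb = size_bytes / 1024**3
print(f"base size: {size_gb:.0f} GB")      # about 206 GB
print(f"with 10-20% headroom: {size_gb*1.1:.0f}-{size_gb*1.2:.0f} GB")
```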
 
 
Guaranteeing undo retention
sql>alter tablespace undotbs1 retention guarantee;
sql>alter tablespace undotbs1 retention noguarantee;
sql>select tablespace_name,contents,retention from dba_tablespaces;
 
 
Monitoring Locks and Resolving Lock Conflicts
Lock modes (the book's text here is hard to follow; revisit later):
ROW SHARE
ROW EXCLUSIVE
SHARE
SHARE ROW EXCLUSIVE
EXCLUSIVE
 
 
SQL>LOCK table hr.employees,hr.departments in EXCLUSIVE MODE;
SQL> select * from hr.employees where manager_id = 100 for update;
sql>lock table hr.employees in share row exclusive mode nowait;
 
 
Detecting lock conflicts: OEM -> Performance -> Instance Locks is very intuitive; explore it yourself. Kill Session handles blocking sessions (and deadlock cleanup).
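What that OEM screen shows can be approximated from the data dictionary; a hedged sketch (the SID and serial number in the KILL SESSION command are placeholders):

```sql
-- Find sessions that are blocked and who is blocking them (10g V$SESSION).
SELECT sid, serial#, username, blocking_session
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- Terminate the blocking session; substitute the real values.
ALTER SYSTEM KILL SESSION 'sid,serial#';
```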
 
 

Deadlock example

Time   Session 1                                  Session 2
11:29  update employees set salary=salary*1.2     update employees set manager=100
       where employee_id=102;                     where employee_id=190;
11:44  update employees set salary=salary*1.2     update employees set manager=100
       where employee_id=190;                     where employee_id=102;
11:45  ORA-00060: Deadlock detected while         keeps waiting; finishes only after
       waiting for resource                       session 1 rolls back or commits
At this point session 1 should commit or roll back.
 
 
************$9 Proactive Database Maintenance and Performance Monitoring**************************
Features new in Oracle 10g:
Automatic Workload Repository (AWR)
Automatic Database Diagnostic Monitor (ADDM)
Oracle 10g Tuning and Diagnostic advisors
DBAs monitor a system's management and performance problems in two ways:
reactive monitoring
proactive monitoring
Data is collected from many sources (the data dictionary, dynamic performance views, the operating system, and so on), and cost-based optimizer statistics are used extensively for proactive monitoring.
AWR: two processes, MMON (memory monitor) and MMNL (memory monitor light), cooperate to collect the data. MMON wakes up every 60 minutes, gathers data from data dictionary views, dynamic performance views, and the optimizer, and stores it in the AWR.
The tables that hold this data (the AWR) are owned by SYSMAN and live in the SYSAUX tablespace.
Enabling the AWR feature: set the STATISTICS_LEVEL parameter to BASIC, TYPICAL, or ALL.
Changing the snapshot interval:
SQL> execute dbms_workload_repository.modify_snapshot_settings(interval=>60, retention=>43200);  -- both in minutes: a 1-hour interval, 30-day retention
SQL> begin DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(14400, 60); end;  -- positional form: retention 14400 minutes, interval 60 minutes
 
 
After each AWR collection completes, ADDM automatically analyzes the gathered statistics and compares them with those from the two previous AWR snapshots. The analysis results are themselves stored in the AWR.
ADDM recommends possible corrective actions; its goal is to minimize DB time. DB time = CPU time plus wait time, accumulated across all non-idle database user sessions.
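DB time can be read directly from the time-model statistics introduced in 10g; a small sketch (values are reported in microseconds):

```sql
-- Instance-wide DB time and its CPU component.
SELECT stat_name, value
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```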
 
 

Setting the OEM interface language:
set NLS_LANG = american_america.ZHS16GBK
SET nls_lang =SIMPLIFIED CHINESE_CHINA.ZHS16GBK
 
 
Observe OEM regularly; I don't yet know how to interpret some of the charts.
 
 
Viewing the ADDM Analysis with Data Dictionary Views
ADDM Data Dictionary Views
View Name :Description
DBA_ADVISOR_FINDINGS :Describes the findings identified by the ADDM analysis
DBA_ADVISOR_OBJECTS : Describes the objects that are referenced in the ADDM findings and recommendations
DBA_ADVISOR_RECOMMENDATIONS:Describes the recommendations made based on ADDM findings
DBA_ADVISOR_RATIONALE :Describes the rationale behind each ADDM finding
 
 
The following SQL statement shows a sample query on the DBA_ADVISOR_FINDINGS data dictionary view that identifies the type of performance problem causing the most impact on the database:
SQL> SELECT task_id, type, message FROM dba_advisor_findings WHERE impact = (SELECT MAX(impact) FROM dba_advisor_findings);
 
 
Drill down further using the TASK_ID value shown in the query on DBA_ADVISOR_FINDINGS. A query on the DBA_ADVISOR_OBJECTS view, using the TASK_ID of 164 returned by the ADDM session that had the potential for the greatest database impact, returns the SQL statements shown here:
sql> SELECT attr4 FROM dba_advisor_objects WHERE task_id = 164;
 
 
The following query shows the recommendations for correcting the performance issues associated with TASK_ID 164, which was identified earlier as being the costliest database activity:
SQL> SELECT TRIM(attr1) ATTR1, TRIM(attr2) ATTR2, TRIM(attr3) ATTR3 FROM dba_advisor_actions WHERE task_id = 164;
 
 
The following example shows a sample query on the DBA_ADVISOR_RATIONALE view using the TASK_ID of 164 identified earlier:
SQL> SELECT message FROM dba_advisor_rationale WHERE task_id = 164;
 
 
ADDM advisors (to test):
- SQL Tuning Advisor
- SQL Access Advisor
- Memory Advisor
- Mean Time To Recover Advisor
- Segment Advisor
- Undo Management Advisor
 
 
ADDM Alerts
To test: sending alert e-mails...
To test: collecting baseline metrics...
How ADDM computes thresholds: the AWR stores a high and a low value for each metric. The computation depends on the comparison operator specified for the threshold and on the associated high and low values.
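Thresholds can also be set programmatically rather than through OEM; a hedged sketch using the DBMS_SERVER_ALERT package (the 80/95 percent values and the USERS tablespace are illustrative choices, not recommendations):

```sql
-- Warn at 80% tablespace usage, raise a critical alert at 95%.
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '95',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/
```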
 
 
Performance monitoring:
The DBA should measure exactly how the system is currently performing before beginning any tuning effort (baseline metrics and so on).
Sources of Tuning Information
- The alert log
- Background and user trace files
- Dynamic performance views
- Data dictionary views
The alert log: the Oracle alert log records informational and error messages for a variety of activities that have occurred against the database during its operation.
Background and user trace files: Oracle trace files are text files that contain session information for the process that created them.
Dynamic performance views: V$SGASTAT V$EVENT_NAME V$SYSTEM_EVENT V$SESSION_EVENT V$SESSION_WAIT V$STATNAME V$SYSSTAT V$SESSTAT V$SESSION V$WAITSTAT
OEM makes extensive use of these views, so I won't examine them one by one.
Data dictionary views: DBA_TABLES DBA_INDEXES INDEX_STATS DBA_DATA_FILES DBA_SEGMENTS DBA_HISTOGRAMS DBA_OBJECTS
Identifying Unusable Objects Using the Data Dictionary
SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';
ALTER PACKAGE my_package COMPILE;
ALTER PACKAGE my_package COMPILE BODY;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER VIEW my_view COMPILE;
 
 
SQL> SELECT owner, index_name, index_type FROM dba_indexes WHERE status = 'UNUSABLE';
SQL> ALTER INDEX hr.job_id_pk REBUILD;
OEM makes querying, recompiling, and rebuilding indexes easy. An index rebuild can run offline or online, and you can also specify a scratch tablespace to hold intermediate results.
 
 
Storing Database Statistics in the Data Dictionary
Some columns in the DBA views are not populated with data until the table or index referenced
by the view is analyzed.
The cost-based optimizer (CBO) uses these statistics to formulate efficient execution plans
for each SQL statement that is issued by application users.
Because of this, the statistics gathered and stored in the data dictionary views are sometimes
called optimizer statistics.
Automatic collection of statistics: for a database created by DBCA this is the default, running as a scheduled job.
Manual collection of statistics (not recommended; the scheduled job already covers it).
OEM: omitted.
Manually gathering statistics with DBMS_STATS: you can back up old statistics so they can be restored later, run parallel analysis to gather statistics faster, and so on.
GATHER_TABLE_STATS GATHER_INDEX_STATS GATHER_SCHEMA_STATS GATHER_DATABASE_STATS
For example, the following shows how the DBMS_STATS package can be used to gather statistics on the PRODUCT_HISTORY table in the SH schema:
SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SH','PRODUCT_HISTORY');
The presence of accurate optimizer statistics has a big impact on two important measures of overall system performance: throughput and response time.
 
 
Important Performance Metrics
Throughput is another example of a statistical performance metric. Throughput is the amount of processing that a computer or system can perform in a given amount of time.
Throughput is an important measure when considering the scalability of the system. Scalability refers to the degree to which additional users can be added to the system without system performance declining significantly.
Another important metric related to performance is response time: the amount of time that it takes for a single user's request to return the desired result when using an application.
Telling ADDM the server's I/O capability:
Run an I/O test; the default value assumed for our disk subsystem is 10 milliseconds.
 
 
Summary
Oracle 10g provides many tools for proactively identifying and fixing potential performance
and management problems in the database. At the core of the monitoring system is the Automatic
Workload Repository (AWR), which uses the MMON background process to gather statistics
from the SGA and store them in a collection of tables owned by the user SYSMAN.
Following each AWR statistics collection interval, the Automatic Database Diagnostic Monitoring
(ADDM) feature examines the newly gathered statistics and compares them with the
two previous AWR statistics to establish baselines in an attempt to identify poorly performing
components of the database. The ADDM then summarizes these findings on the EM Database
Control main and Performance screens. Using these screens, you can identify and examine the
SQL statements that are contributing the most to DB Time. You can further explore the opportunities
for improving the performance or manageability of your database using the EM Database
Control advisors, which include the SQL Tuning Advisor, SQL Access Advisor, Memory
Advisor, Mean Time To Recover Advisor, Segment Advisor, and Undo Management Advisor.
Using the SQL Tuning Advisor, you can identify the SQL statements that have had the greatest
performance impact on the database. You can then examine these statements using the SQL
Access Advisor to determine if adjustments can be made to improve the execution paths for
these statements and therefore minimize their impact on total DB Time.
The Memory Advisor suggests changes that can potentially improve Oracle’s use of memory
within the SGA and PGA.
The Mean Time To Recover Advisor helps you determine if your database is properly configured
to meet service-level agreements for instance recovery in the event of a server failure or
an instance crash.
The Segment Advisor helps you determine which segments are using excess storage space and
which might benefit from a shrink operation. Shrinking these segments not only frees storage
space for use by other segments, but also minimizes the number of physical I/Os required to
access the segments.
Using the Undo Management Advisor, you can monitor and manage undo segments to minimize
the likelihood of ORA-01555, Snapshot Too Old error messages, and improve the application’s
overall read consistency.
You can also configure ADDM alerts to notify you via the EM Database Control or e-mail
whenever the performance of the database varies from established baselines or target levels.
Available storage space, excessive wait times, and high I/O activity are all examples of events
that you can monitor using alerts.
In addition to EM Database Control, you can find indicators of database performance in the
database Alert log, user and background trace files, data dictionary views, and dynamic performance
views. Some data dictionary views do not contain accurate information about the
segments in the database until after statistics are collected on those objects. Therefore, you can
automatically collect segment statistics through the use of EM Database Control jobs.
Invalid and unusable database objects also have a negative impact on performance and manageability.
You can monitor and repair invalid and unusable objects using the data dictionary
and the EM Database Control Administration screen.
EM Database Control summarizes several important performance metrics on the EM Database
Control main screen. These metrics include performance statistics for the host server, user
sessions, and instance throughput.
 
 
************$10 Implementing Database Backups************************************
 
 
Implementing Database Backups
As a database administrator, your primary goal is to keep the database open and available
for users, usually 24 hours a day, 7 days a week. Your partnership with the server’s system
administrator includes the following tasks:
- Proactively solving common causes of failures
- Increasing the mean time between failures (MTBF)
- Ensuring a high level of hardware redundancy
- Increasing availability by using Oracle options such as Real Application Clusters (RAC) and Oracle Streams (an advanced replication technology)
- Decreasing the mean time to recover (MTTR) by setting the appropriate Oracle initialization parameters and ensuring that backups are readily available in a recovery scenario
- Minimizing or eliminating loss of committed transactions by using archived redo logs, standby databases, and Oracle Data Guard
RAC, Streams, Data Guard, and standby databases are beyond the scope of this
book, but are covered in more detail in advanced Oracle courseware.
 
 
Understanding and Configuring Recovery Components
The control files maintain the list of database files in the database, along with a record of the most recent database backups (if you are using RMAN for your backups). The checkpoint (CKPT) background process works in concert with the database writer (DBWn) process to manage the amount of time required for instance recovery; during instance recovery, the redo log files are used to synchronize the datafiles. For more serious types of failures such as media failures, archived redo log files are applied to a restored backup copy of a datafile to synchronize the datafiles and ensure that no committed transactions are lost. Finally, the Flash Recovery area, new to Oracle 10g , is a common area for all recovery-related files that makes your job much easier when backing up or recovering your database.
 
 
Control Files
Contents include the current log sequence number (a unique identifier that is incremented and recorded when an online redo log file is switched), tablespace names, the most recent checkpoint information, and so on.
When you add a new file to the database or relocate a file, an Oracle server process immediately updates the information in the control file. Back up the control file after any structural
changes. The log writer (LGWR) process updates the control file with the current log sequence number. CKPT updates the control file with the recent checkpoint information. When the database is in ARCHIVELOG mode, the archiver (ARCn) processes update the control file with information such as the archive log filename and log sequence number.
Multiplexing Control Files Using an SPFILE
1. Alter the SPFILE while the database is still open:
SQL> ALTER SYSTEM SET CONTROL_FILES =
'/ora01/oradata/MYDB/ctrlMYDB01.ctl',
'/ora02/oradata/MYDB/ctrlMYDB02.ctl',
'/ora03/oradata/MYDB/ctrlMYDB03.ctl',
'/ora04/oradata/MYDB/ctrlMYDB04.ctl' SCOPE=SPFILE;
This parameter change takes effect only after the next instance restart, because of the SCOPE=SPFILE qualifier. The contents of the binary SPFILE are changed immediately, but the old specification of CONTROL_FILES is used until the instance is restarted.
2. Shut down the database:
SQL> SHUTDOWN NORMAL
3. Copy an existing control file to the new location:
$ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora04/oradata/MYDB/ctrlMYDB04.ctl
4. Start the instance:
SQL> STARTUP
 
 
Checkpoints
During a checkpoint, CKPT updates the control file and the header of the datafiles to reflect the last successful transaction by recording the last system change number (SCN). The SCN, which is a number sequentially assigned to each transaction in the database, is also recorded in the control file against the datafile name that is taken offline or made read-only.
A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and datafiles.
Moving the checkpoint position forward more often reduces the MTTR.
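A checkpoint can also be forced manually, which makes the CKPT/DBWn interaction easy to observe:

```sql
-- Force a full checkpoint: DBWn flushes all dirty buffers to disk and
-- CKPT records the checkpoint position in the control file and datafile headers.
ALTER SYSTEM CHECKPOINT;
```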
 
 
Redo Log Files
The LGWR process writes redo information from the redo log buffer to the online redo log files under a variety of circumstances:
- When a user commits a transaction, even if this is the only transaction in the log buffer.
- When the redo log buffer becomes one-third full.
- When the buffer contains approximately 1MB of changed records. This total does not include deleted or inserted records.
If you have a RAC configuration (multiple instances for a single database), each instance has its own online redo log thread, so Oracle must track which instance a given database change came from.
Log switches: The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as the active log files. The other log files are known as inactive.
Tip: Keep the redo log files on a separate disk for better performance. If you have to store a datafile on the same disk as a redo log file, do not put the SYSTEM, UNDOTBS, SYSAUX, or any very active data or index tablespace file on this disk. A commit cannot complete until the transaction's information has been written to the redo logs, so maximizing the throughput of the redo log files is a top priority.
The size of the redo log affects checkpoint performance. If the redo log is small relative to the number of transactions, log switches occur often, and so do checkpoints.
You can adjust checkpoints primarily by using the initialization parameter FAST_START_MTTR_TARGET.
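A short sketch of using that parameter and checking the result (the 60-second target is an arbitrary example):

```sql
-- Ask Oracle to keep the estimated instance recovery time near 60 seconds.
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60 SCOPE = BOTH;

-- Compare the target against the current estimate.
SELECT target_mttr, estimated_mttr, recovery_estimated_ios
FROM   v$instance_recovery;
```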
 
 
CREATE DATABASE “MYDB01”
. . .
LOGFILE
GROUP 1 ('/ora02/oradata/MYDB01/redo0101.log',
'/ora03/oradata/MYDB01/redo0102.log') SIZE 10M,
GROUP 2 ('/ora02/oradata/MYDB01/redo0201.log',
'/ora03/oradata/MYDB01/redo0202.log') SIZE 10M;
 
 
ALTER DATABASE ADD LOGFILE
GROUP 3 ('/ora02/oradata/MYDB01/redo0301.log',
'/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
 
 
ALTER DATABASE ADD LOGFILE
('/ora02/oradata/MYDB01/redo0301.log',
'/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
 
 
ALTER DATABASE ADD LOGFILE
'/ora02/oradata/MYDB01/redo0301.log' REUSE;
 
 
ALTER DATABASE ADD LOGFILE MEMBER
'/ora04/oradata/MYDB01/redo0203.log' TO GROUP 2;
 
 
ALTER DATABASE DROP LOGFILE GROUP 3;
 
 
ALTER DATABASE DROP LOGFILE MEMBER
'/ora04/oradata/MYDB01/redo0203.log';
 
 
ALTER DATABASE CLEAR LOGFILE GROUP 3;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;  -- best to take a whole-database backup immediately afterward, because an unarchived redo log file is no longer usable for database recovery
 
 
Archived Redo Log Files
Setting the Archive Destination
LOG_ARCHIVE_DEST_n = "null_string" | ((SERVICE = tnsnames_name | LOCATION = 'directory_name') [MANDATORY | OPTIONAL] [REOPEN [= integer]])
For example: LOG_ARCHIVE_DEST_1 = ((LOCATION='/archive/MYDB01') MANDATORY REOPEN = 60)
LOG_ARCHIVE_DEST_2 = (SERVICE=STDBY01) OPTIONAL REOPEN;
LOG_ARCHIVE_MIN_SUCCEED_DEST: the minimum number of destinations that must be written successfully
LOG_ARCHIVE_FORMAT:
%s Log sequence number
%t Thread number
%r Resetlogs ID: ensures uniqueness even after advanced recovery techniques that reset the log sequence numbers
%d Database ID
Setting ARCHIVELOG mode (the database must be mounted but not open):
SQL> ALTER DATABASE ARCHIVELOG;
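That statement only works from the MOUNT state, so the full sequence looks like this:

```sql
SHUTDOWN IMMEDIATE          -- close the database cleanly
STARTUP MOUNT               -- mount without opening the datafiles
ALTER DATABASE ARCHIVELOG;  -- switch the log mode
ALTER DATABASE OPEN;
ARCHIVE LOG LIST            -- SQL*Plus: verify that Archive Mode is reported
```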
 
 
The Flash Recovery Area
Flash Recovery Area Occupants
Control files, archived log files, flashback logs, control file and SPFILE autobackups, datafile copies, and RMAN backup sets.
You must define two initialization parameters to set up the Flash Recovery area: DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST.
SQL> alter system set db_recovery_file_dest_size = 8g scope=both;
SQL> alter system set db_recovery_file_dest = '/OraFlash' scope=both;
 
 

Performing Backups
Your backup strategy depends on the activity of your database, the level of availability required by your service-level agreements (SLAs), and how much downtime you can tolerate during a recovery effort.
Understanding Backup Terminology (some important concepts)
Whole backups and partial backups are known as Oracle backup strategies. The backup type can be divided into two general categories: full backups and incremental backups. Depending on whether you make your database backups when the database is open or closed, backups can be further categorized into the backup modes known as consistent and inconsistent backups.
Strategies: whole backups and partial backups
Types: full backups and incremental backups
Modes: consistent and inconsistent backups
Whole database: A whole database backup includes all datafiles and at least one control file. Online redo log files are never backed up.
Partial database: A partial database backup includes zero or more tablespaces, which in turn include zero or more datafiles; a control file is optional in a partial database backup.
Full: A full backup includes all blocks of every datafile backed up in a whole or partial database backup.
Incremental: An incremental backup makes a copy of all data blocks that have changed since a previous backup. Oracle 10g supports five levels of incremental backups, from 0 to 4. (Some books seem to say only levels 0 and 1 are used.)
Consistent: A consistent backup, also known as an offline backup, is performed while the database is not open.
Inconsistent: Although the term inconsistent backup may sound like something you might avoid in a database, it is a way to maintain availability of the database while performing backups. The backup is inconsistent because the SCN in the control file is most likely out of synch with the SCN in the headers of the datafiles.
 
 
Backing Up the Control File
The trace backup is created in the directory specified by the initialization parameter USER_DUMP_DEST, and its filename has the format sid_ora_pid.trc.
SQL> alter database backup controlfile to trace;
 
 
Using RMAN to Create Backups
RMAN is not used to back up the online redo log files, the password file, or the text-based INIT.ORA initialization file.
 
 
Configuring RMAN Backup Settings (via OEM; omitted)
Creating Full and Incremental Backups
The Oracle recommended backup strategy uses RMAN to make a one-time whole-database,baseline incremental level 0 online backup weekly, and then a level 1 incremental backup for the other days of the week. You can easily fine-tune this strategy for your own needs by making, for example, a level 2 incremental backup at noon during the weekdays if heavy DML (Data Manipulation Language) is occurring in the database.
RMAN> backup incremental level 0 as compressed backupset database;
RMAN> backup incremental level 1 as compressed backupset database;
To test.
 
 
Another variation on the strategy is to create an incrementally updated backup.
Script from the book:
run
{
recover copy of database with tag 'inc_upd_img';
backup incremental level 1 for
recover of copy with tag 'inc_upd_img' database;
}
Terminology: COPY OF
Applies incremental backups to the specified image copy to roll it forward to any time equal to or before the most recent incremental backup of the file ... used in conjunction with the BACKUP ... FOR RECOVER OF COPY syntax.
 
 
An incrementally updated backup uses an incremental backup and updates the changed blocks in an existing image copy as if the entire image copy were backed up.
Trying this feature in OEM:
1. Take a level 0 full backup (as a backup set).
2. Back up the whole database as image copies (as copy):
RMAN> backup device type disk tag '%TAG' database include current controlfile;
(OEM had already been set to default to "as copy"; I forgot to set the tag.) At this point a full image copy of the database exists.
3. Change the default back to backup set, take a level 1 backup, and tick the option to update the most recent copy at the same time...
4. Done!
The script OEM generated for step 3:
backup incremental level 1 cumulative device type disk tag '%TAG' database include current controlfile;
recover copy of database;
backup device type disk tag '%TAG' archivelog all not backed up;
 
 

Testing an incrementally updated backup of a single datafile (it keeps failing; to be fixed later):
RMAN> backup as copy datafile 4 tag 'users';
------- update some rows, then commit -------
RMAN> run {
recover copy of datafile 4 with tag 'users';
backup incremental level 1 for
recover of copy with tag 'users' datafile 4;
}
?? Copy not found? An error?
 
 

Managing Backups--Catalog Maintenance
Catalog Additional Files Adds any image copy backups made outside RMAN to the RMAN catalog.
Crosscheck All Double-checks the backup files listed in the catalog against the actual files on disk to make sure that they are all available.
Delete All Obsolete Deletes all backup files not needed to satisfy the existing retention policy.
Delete All Expired Deletes the catalog entry for any backups not found when a crosscheck was performed.
 
 
From eygle's blog: how to free flash_recovery_area space in Oracle 10g and resolve the ORA-19815 error.
To free flash_recovery_area space, use the RMAN CROSSCHECK and DELETE EXPIRED commands.
sys as sysdba>SELECT substr(name, 1, 30) name, space_limit AS quota,
space_used AS used,
space_reclaimable AS reclaimable,
number_of_files AS files
FROM v$recovery_file_dest ;
Log in to the database with RMAN and run a crosscheck:
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
Oracle points out that running:
RMAN> backup recovery area;
to back up the contents of the flash recovery area to separate backup media also resolves the problem.
Noted for the record.
 
 
Freeing space (files that no longer satisfy the retention policy):
RMAN> delete noprompt obsolete;  -- obsolete files
SQL> select name, space_limit max_size,
space_used used, space_reclaimable obsolete,
number_of_files num_files
from v$recovery_file_dest;
************$11 Implementing Database Recovery*************************************
 
 
Implementing Database Recovery
In a nutshell, your job is to increase the mean time between failures (MTBF) by providing redundant components where possible and leveraging other Oracle high-availability features such as Real Application Clusters (RAC) and Streams (an advanced replication technology).
 
 
The six general categories of failures are as follows:
Statement: A single database operation fails, such as a DML (Data Manipulation Language) statement (INSERT, UPDATE, and so on).
User process: A single database connection fails.
Network: A network component between the client and the database server fails, and the session is disconnected from the database.
User error: An error message is not generated, but the operation's result, such as dropping a table, is not what the user intended.
Instance: The database instance fails unexpectedly.
Media: One or more of the database files is lost, deleted, or corrupted.
 
 
Statement Failures
Although granting user privileges or additional quotas within a tablespace solves many of
these problems, also consider whether there are any gaps in the user education process that
might lead to some of these problems in the first place.
 
 
User Process Failures
The PMON (process monitor) background process periodically checks all user processes to ensure that the session is still connected.
... train users to terminate the application gracefully before shutting down their workstations.
 
 
Network Failures
To guard against these kinds of failures, you can provide redundant network paths from your clients to the server, as well as additional listener connections on the Oracle server and redundant network cards on the server.
 
 
User Error Failures
If a COMMIT has already been performed, you have a number of options at your disposal, such as using data in the undo tablespace for a Flashback Query or using data in the archived and online redo logs with the LogMiner utility, available as a command-line or GUI interface.
To test: the GUI version of LogMiner.
 
 
Instance Failures
An instance failure occurs when the instance shuts down without synchronizing all the database files to the same system change number (SCN), requiring a recovery operation the next time the instance is started.
If the cause of the instance failure is related to an Oracle background process failure, you can use the alert log and process-specific trace files to debug the problem.
The EM Database Control makes it easy to review the contents of the alert log and anyother alerts generated right before the point of failure.
 
 
Media Failures
 
 
Understanding Instance Startup
SHUTDOWN
NOMOUNT
MOUNT
OPEN
You will have to use the dynamic performance view V$RECOVER_FILE to display a list of all files that need attention.
SQL> select file#, error from v$recover_file;
SQL> select file#, name from
v$datafile join v$recover_file using (file#);
 
 
Recovering from Instance Failure
Instance recovery occurs in two distinct phases: the first phase uses the online redo log files to restore the datafiles to the state before instance failure in a roll forward operation; after this step is completed, Oracle uses the undo tablespace to roll back any uncommitted transactions. The roll forward operation includes data in the undo tablespace; without a consistent undo tablespace, the roll back operation cannot succeed.
The amount of time required for instance recovery depends on how long it takes to bring the datafiles up to date from the last checkpoint position to the latest SCN in the control file.
To prevent performance problems, the distance between the checkpoint position and the end of the redo log group cannot be more than 90 percent of the size of the redo log group.
You can tune instance recovery by setting an MTTR target, in seconds, using the initialization parameter FAST_START_MTTR_TARGET.
 
 
Recovering from User Errors
Using Flashback Query
Make sure that there is an undo tablespace in the database that is large enough to retain changes made by all users for a specified period of time.
Specify how long the undo information will be retained for use by flashback queries by using the initialization parameter UNDO_RETENTION.
SQL> select employee_id, last_name, email
from hr.employees
as of timestamp (systimestamp - interval '15' minute)
where employee_id = 101;
SQL> select systimestamp from dual;  -- check the current system time
You can just as easily specify an absolute time of day to retrieve the contents of the row at that time, as in this example:
SQL> select employee_id, last_name, email
from hr.employees
as of timestamp
(to_timestamp ('01-Sep-04 16:18:57.845993','DD-Mon-RR HH24:MI:SS.FF'))
where employee_id = 101;
 
 

Using Flashback Drop and the Recycle Bin
The recycle bin, new to Oracle 10g, is implemented as a data dictionary table.
The dropped object still belongs to its owner and still counts against the owner's quota in the tablespace; in fact, the table itself is still directly accessible from the recycle bin, as you will see in subsequent examples.
Retrieving Dropped Tables from the Recycle Bin
SQL> flashback table order_items to before drop;
SQL> drop table order_items;
SQL> flashback table order_items to before drop rename to order_items_old_version;
Restrictions:
Only non-SYSTEM locally managed tablespaces can have a recycle bin.
A table’s dependent objects are saved in the recycle bin when the table is dropped, except for bitmap join indexes, referential integrity constraints (foreign key constraints), and
materialized view logs.
Indexes are protected only if the table is dropped first; explicitly dropping an index does not
place the index into the recycle bin.
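The recycle bin itself can be inspected before flashing anything back; a quick sketch:

```sql
-- SQL*Plus shortcut:
SHOW RECYCLEBIN

-- Or query the underlying view directly:
SELECT object_name, original_name, type, droptime
FROM   user_recyclebin;

-- To bypass the recycle bin entirely when dropping a table:
DROP TABLE order_items PURGE;
```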
 
 
Using Flashback Table
Flashback Table is different from Flashback Drop; Flashback Table undoes recent transactions to an existing table, whereas Flashback Drop recovers a dropped table.
Flashback Table uses data in the undo tablespace, whereas Flashback Drop uses the recycle bin.
To be able to flash back a table, you must enable row movement for it. Because DML operations are used to bring the table back to its former state, the ROWIDs in the table change.
For example, after rows were deleted from two tables:
SQL> alter table hr.employees enable row movement;
SQL> alter table hr.departments enable row movement;
SQL> flashback table hr.employees, hr.departments
to timestamp systimestamp - interval '15' minute;
Integrity constraints are not violated when one or more tables are flashed back; this is why you typically group tables related by integrity constraints or parent-child relationships in the
FLASHBACK TABLE command
The SQL displayed by OEM:
FLASHBACK TABLE HR.EMPLOYEES, HR.JOBS, HR.DEPARTMENTS, HR.LOCATIONS TO TIMESTAMP TO_TIMESTAMP('2004-09-12 01:15:25 PM','YYYY-MM-DD HH:MI:SS AM');
 
 
Using LogMiner
LogMiner extracts all DDL and DML activity from the redo log files for viewing via the dynamic performance view V$LOGMNR_CONTENTS.
LogMiner works differently from Oracle’s Flashback Query feature. The Flashback Query feature allows a user to see the contents of a table at a specified time in the past; LogMiner can
search a time period for all DDL against the table. A Flashback Query uses the undo information stored in the undo tablespace; LogMiner uses redo logs, both online and archived. Both
tools can be useful for tracking down how and when changes to database objects took place.
 
 
Command-line configuration and use of LogMiner: omitted here; I have earlier notes on it.
Where is the graphical GUI tool? I couldn't find it. Other users say it requires Grid Control; the standalone console doesn't include it.
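For reference, a minimal command-line LogMiner session looks roughly like this (hedged sketch; the redo log path is a placeholder, and the online catalog is used as the dictionary):

```sql
-- Register a redo log file and start the miner.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u02/oradata/ord/redo01.log', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Review the captured DML/DDL.
SELECT username, sql_redo, sql_undo FROM v$logmnr_contents;

-- Release the resources.
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```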
 
 
SQL> alter system set control_files = '/u02/oradata/ord/control01.ctl','/u06/oradata/ord/control02.ctl' scope = spfile;
Recovering from Loss of a Redo Log File
The dynamic performance view V$LOGFILE provides the status of each member of each redo log group; the STATUS column is defined as follows:
INVALID The file is corrupted or missing.
STALE This redo log file member is new and has never been used.
DELETED The file is no longer being used.
(blank) The redo log file is in use and is not corrupted.
 
 

1. Verify which redo log file group member is missing.
2. Archive the log file group’s contents; if you clear this log file group before archiving it, you must back up the full database to ensure maximum recoverability of the database in the case
of the loss of a datafile. Use the command ALTER SYSTEM ARCHIVE LOG GROUP groupnum; to force the archive operation.
3. Clear the log group to re-create the missing redo log file members using the command ALTER DATABASE CLEAR LOGFILE GROUP groupnum;.
SQL> select * from v$logfile order by group#;
SQL> ! rm /u08/oradata/ord/redo01.log
SQL> select * from v$logfile order by group#;
SQL> alter system archive log group 1;
SQL> alter database clear logfile group 1;  -- re-creates the missing member
SQL> select * from v$logfile order by group#;
 
 
Recovering from Loss of a System-Critical Datafile
The loss of a system-critical datafile in NOARCHIVELOG mode requires complete restoration of the database, including the control files and all datafiles, not just the missing datafiles.
The recovery of a system-critical datafile in ARCHIVELOG mode cannot proceed while the database is open; recovery must be performed while the database is in the MOUNT state.
 
 
Recovering from Loss of a Non–System-Critical Datafile
As with a system-critical datafile, the loss of a non–system-critical datafile in NOARCHIVELOG mode requires complete restoration of the database, including the control files and all datafiles, not just the missing datafiles.
The loss of a non–system-critical datafile in ARCHIVELOG mode affects only objects that are in the missing file, and recovery can proceed while the rest of the database is online.

From the ITPUB blog. Link: http://blog.itpub.net/28211342/viewspace-2137212/. If republishing, please credit the source; otherwise legal liability will be pursued.
