Analysis of a Standby Database Crash Caused by Kernel Parameter Settings

Posted by jeanron100 on 2015-11-19
A few days ago I set up a standby database. Since synchronizing the files took quite some time, I only made sure the standby could receive archived logs normally; the kernel parameters were left largely unconfigured. I simply used a script to work out a HugePages value and applied it directly. The database alert log showed that HugePages had not actually taken effect, but since it did not noticeably hurt the standby's performance, I did not pay much attention.
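For reference, such a value is typically estimated by summing the shared memory segments of the running instances and dividing by the huge page size, in the spirit of Oracle's hugepages_settings.sh script (MOS note 401749.1). The following is only a minimal sketch of that idea, not the exact script I used:

# Run as root while all instances are up; prints a candidate for vm.nr_hugepages
HPG_SZ=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)   # huge page size in kB
NUM_PG=0
for SEG_BYTES in $(ipcs -m | awk '/0x/ {print $5}'); do
    # round every shared memory segment up to a whole number of huge pages
    NUM_PG=$((NUM_PG + (SEG_BYTES + HPG_SZ * 1024 - 1) / (HPG_SZ * 1024)))
done
echo "Recommended: vm.nr_hugepages = $NUM_PG"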
Then, this morning, the first thing I received was an alert email.

ZABBIX monitoring system:
------------------------------------
Alert: DG_issue
------------------------------------
Severity: PROBLEM
------------------------------------
Item: dg_issue
2015-11-19 10:28:05.0  Log Transport Services  Error  Error 1034 received logging on to the standby
2015-11-19 10:28:05.0  Log Transport Services  Error  PING[ARC0]: Heartbeat failed to connect to standby 'sstatdb2'. Error is 1034.
------------------------------------
Alert time: 2015.11.19-10:23:39

At first I did not think much of it, but before long the alerts started arriving one after another. Clearly something was wrong, and to my surprise the standby instance had gone down on its own.
That was genuinely unexpected. High swap contention would have been acceptable, since the performance impact is usually not that severe, but why would the instance crash outright?
I checked the alert log and found that warnings about an insufficient memlock setting had already been appearing since the day before (a quick way to check the effective limit follows the excerpt below).

Stopping background process VKTM
Wed Nov 18 11:38:00 2015
Instance shutdown complete
Wed Nov 18 11:38:03 2015
Starting ORACLE instance (normal)
Memlock limit too small: 65536 to accommodate segment size: 134217728
Wed Nov 18 11:42:17 2015
Starting ORACLE instance (normal)
Memlock limit too small: 65536 to accommodate segment size: 134217728
Wed Nov 18 11:50:11 2015

Looking at the log around the time of the crash, the instance was terminated because of error 471.
Archived Log entry 344 added for thread 1 sequence 149889 ID 0x9bd5d62a dest 1:
Thu Nov 19 10:17:58 2015
PMON (ospid: 28529): terminating the instance due to error 471
Thu Nov 19 10:17:58 2015
System state dump requested by (instance=1, osid=28547 (DIAG)), summary=[abnormal instance termination].
System State dumped to trace file /U01/app/oracle/diag/rdbms/sstatdb2/statdb2/trace/statdb2_diag_28547.trc
Dumping diagnostic data in directory=[cdmp_20151119101758], requested by (instance=1, osid=28547 (DIAG)), summary=[abnormal instance termination].
Instance terminated by PMON, pid = 28529
Thu Nov 19 10:26:33 2015
Starting ORACLE instance (normal)
Memlock limit too small: 65536 to accommodate segment size: 134217728
So what exactly is error 00471? It turns out, surprisingly, to be related to DBWR.
$ oerr ora 00471
00471, 00000, "DBWR process terminated with error"
// *Cause:  The database writer process died
// *Action: Warm start instance
In fact this instance runs only two DBWR processes (see the OS-level check after the query output).
SQL> show parameter db_writer_proce
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_writer_processes                  integer     2
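As a quick cross-check at the OS level (assuming the default ora_dbwN_<SID> background process naming), the two writer processes can be listed like this:

$ ps -ef | grep ora_dbw | grep -v grep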
Checking the instance startup log confirms that 28529 was indeed PMON's OS process ID, so if DBWR was the process that ran into trouble, it was most likely PMON that then took the instance down.
Wed Nov 18 11:53:11 2015
PMON started with pid=2, OS id=28529
Wed Nov 18 11:53:11 2015
PSP0 started with pid=3, OS id=28533
Wed Nov 18 11:53:12 2015
And once PMON goes down, the whole instance goes down with it.
The operating system log fills in the rest of the picture. Right at the time of the problem, the oom-killer was invoked. On Linux this mechanism kicks in when the system runs out of memory and uses its own scoring heuristics to select processes to kill. That is most likely what happened here: the OOM killer took out a DBWR process, PMON detected its death (ORA-00471), and the two went down together, taking the instance with them. (A small sketch for inspecting the kernel's OOM scores follows the stack trace below.)
Nov 19 10:14:12 tc_154_11 SAFE-BASEINFO[2169]: [INFO] 2015/11/19 10:14:12 processAbnormal: [N/A]
Nov 19 10:17:55 tc_154_11 kernel: oracle invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0, oom_score_adj=0
Nov 19 10:17:55 tc_154_11 kernel: oracle cpuset=/ mems_allowed=0-1
Nov 19 10:17:55 tc_154_11 kernel: Pid: 28833, comm: oracle Not tainted 2.6.32-431.el6.x86_64 #1
Nov 19 10:17:55 tc_154_11 kernel: Call Trace:
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff810d05b1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122960>] ? dump_header+0x90/0x1b0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8122798c>] ? security_real_capable_noaudit+0x3c/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122de2>] ? oom_kill_process+0x82/0x2a0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122d21>] ? select_bad_process+0xe1/0x120
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81123220>] ? out_of_memory+0x220/0x3c0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8112fb3c>] ? __alloc_pages_nodemask+0x8ac/0x8d0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811651c9>] ? alloc_page_interleave+0x39/0x90
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81167c46>] ? alloc_pages_vma+0x146/0x150
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8113d14f>] ? shmem_alloc_page+0x5f/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8117c805>] ? mem_cgroup_charge_common+0xa5/0xd0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8111f73e>] ? find_get_page+0x1e/0xa0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8112098a>] ? find_lock_page+0x2a/0x80
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8113ef67>] ? shmem_getpage_gfp+0x2a7/0x610
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8113f3ab>] ? shmem_fault+0x4b/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8121777e>] ? shm_fault+0x1e/0x20
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114a084>] ? __do_fault+0x54/0x530
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114a657>] ? handle_pte_fault+0xf7/0xb00
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8127fe09>] ? cfq_set_request+0x329/0x560
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811220e5>] ? mempool_alloc_slab+0x15/0x20
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81122283>] ? mempool_alloc+0x63/0x140
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8137ffd0>] ? scsi_done+0x0/0x60
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114b28a>] ? handle_mm_fault+0x22a/0x300
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114b48a>] ? __get_user_pages+0x12a/0x430
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8114b829>] ? get_user_pages+0x49/0x50
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8104f717>] ? get_user_pages_fast+0x157/0x1c0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811c78a5>] ? do_direct_IO+0x9f5/0xfa0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811c8196>] ? __blockdev_direct_IO_newtrunc+0x346/0x1270
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff810a70a1>] ? ktime_get_ts+0xb1/0xf0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008e360>] ? ext4_get_block+0x0/0x120 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8115e91b>] ? swap_free+0x2b/0x40
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811c9137>] ? __blockdev_direct_IO+0x77/0xe0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008e360>] ? ext4_get_block+0x0/0x120 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8111f73e>] ? find_get_page+0x1e/0xa0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008d10a>] ? ext4_ind_direct_IO+0xba/0x250 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008e360>] ? ext4_get_block+0x0/0x120 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffffa008d2f6>] ? ext4_direct_IO+0x56/0x270 [ext4]
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811a50f1>] ? touch_atime+0x71/0x1a0
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811217cb>] ? generic_file_aio_read+0x6bb/0x700
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8108412c>] ? lock_timer_base+0x3c/0x70
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff81121110>] ? generic_file_aio_read+0x0/0x700
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d4f64>] ? aio_rw_vect_retry+0x84/0x200
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d6924>] ? aio_run_iocb+0x64/0x170
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d7d51>] ? do_io_submit+0x291/0x920
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff811d83f0>] ? sys_io_submit+0x10/0x20
Nov 19 10:17:55 tc_154_11 kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
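When the oom-killer runs, it picks its victim by each process's badness score, which can be inspected under /proc. A small sketch for looking at this instance's background processes; the ora_*_statdb2 pattern is an assumption based on the instance name visible in the trace file path above:

# Show the kernel's OOM badness score for each Oracle background process;
# the higher the score, the more likely the process is to be chosen as the victim
for PID in $(pgrep -f 'ora_.*_statdb2'); do
    printf '%-8s %-20s oom_score=%-6s oom_score_adj=%s\n' "$PID" \
        "$(ps -o comm= -p "$PID")" \
        "$(cat /proc/$PID/oom_score)" \
        "$(cat /proc/$PID/oom_score_adj)"
done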

The call stack confirms that the failure was indeed caused by memory pressure. But why was memory so tight? The kernel dump shows that swap had been completely exhausted, with nothing left at all (a couple of commands for cross-checking this state follow the dump).
Nov 19 10:17:55 tc_154_11 kernel: Node 1 Normal free:36600kB min:36636kB low:45792kB high:54952kB active_anon:3824492kB inactive_anon:637348kB active_file:1576kB inactive_file:4012kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13445120kB mlocked:0kB dirty:0kB writeback:0kB mapped:1922560kB shmem:4162084kB slab_reclaimable:17076kB slab_unreclaimable:33408kB kernel_stack:848kB pagetables:68944kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5460 all_unreclaimable? yes
Nov 19 10:17:55 tc_154_11 kernel: lowmem_reserve[]: 0 0 0 0
Nov 19 10:17:55 tc_154_11 kernel: Node 0 Normal: 409*4kB 346*8kB 248*16kB 202*32kB 141*64kB 64*128kB 19*256kB 1*512kB 7*1024kB 0*2048kB 0*4096kB = 44596kB
Nov 19 10:17:55 tc_154_11 kernel: Node 1 DMA: 2*4kB 2*8kB 1*16kB 1*32kB 1*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15624kB
Nov 19 10:17:55 tc_154_11 kernel: Node 1 DMA32: 486*4kB 375*8kB 338*16kB 304*32kB 203*64kB 116*128kB 46*256kB 2*512kB 0*1024kB 0*2048kB 0*4096kB = 60720kB
Nov 19 10:17:55 tc_154_11 kernel: Node 1 Normal: 172*4kB 252*8kB 224*16kB 189*32kB 97*64kB 48*128kB 45*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 36208kB
Nov 19 10:17:55 tc_154_11 kernel: 2697804 total pagecache pages
Nov 19 10:17:55 tc_154_11 kernel: 1620 pages in swap cache
Nov 19 10:17:55 tc_154_11 kernel: Swap cache stats: add 7640245, delete 7638625, find 2745828/3185908
Nov 19 10:17:55 tc_154_11 kernel: Free swap  = 0kB
Nov 19 10:17:55 tc_154_11 kernel: Total swap = 4194296kB
Nov 19 10:17:55 tc_154_11 kernel: 8388607 pages RAM
Nov 19 10:17:55 tc_154_11 kernel: 172490 pages reserved
Nov 19 10:17:55 tc_154_11 kernel: 4183499 pages shared
Nov 19 10:17:55 tc_154_11 kernel: 1183690 pages non-shared
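A few ordinary commands are enough to cross-check the state the kernel reported here, that is, whether swap really is exhausted and where the SGA memory is sitting (nothing instance-specific is assumed):

free -m        # the swap line should not show 0 free
swapon -s      # per-device swap usage
ipcs -m        # the SGA shared memory segments, which were sitting in regular (swappable) pages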
This indirectly confirms the chain of events: the HugePages setting never took effect because the memlock limit was too small, so the whole SGA sat in regular, swappable pages.
Swap contention kept climbing until swap was exhausted, the OOM killer's victim-selection logic kicked in, and the database instance went down as a direct result.
The fix is correspondingly simple: set memlock to the recommended value, which the alert log itself points to.
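A minimal sketch of that change, assuming the software owner is the oracle user and sizing the limit with some headroom above the roughly 20 GB SGA shown in the restart log below (the exact number here is my assumption, not a value taken from the alert log):

# /etc/security/limits.conf  -- values are in kB
# allow the oracle user to lock the entire SGA in memory (24 GB here)
oracle   soft   memlock   25165824
oracle   hard   memlock   25165824

The limit only applies to new sessions, so log in again as oracle (and verify with ulimit -l) before restarting the instance.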
Then, after restarting the database, the following shows up in the alert log (an OS-level verification sketch follows the excerpt).
Instance shutdown complete
Thu Nov 19 10:30:40 2015
Starting ORACLE instance (normal)
****************** Large Pages Information *****************
 
Total Shared Global Region in Large Pages = 20 GB (100%)
 
Large Pages used by this instance: 10241 (20 GB)
Large Pages unused system wide = 4 (8192 KB) (alloc incr 64 MB)
Large Pages configured system wide = 10245 (20 GB)
Large Page size = 2048 KB
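The same thing can be confirmed at the OS level using only standard interfaces (a sketch):

grep -i hugepages /proc/meminfo        # HugePages_Total / Free / Rsvd counters
su - oracle -c 'ulimit -l'             # should now be at least the SGA size in kB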
So these seemingly small issues must not be taken lightly; they can end up having a very large impact.

Source: ITPUB blog, http://blog.itpub.net/23718752/viewspace-1841567/ (please credit the source when reposting).