A severe problem caused by enabling asynchronous IO on VxFS
On the affected production server, what is presumably the same 214 MB dd copy quoted further below now takes over five seconds — roughly 40 MB/s, versus 148 MB/s a month earlier:
real 0m5.284s
user 0m0.003s
sys 0m0.031s
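The dd invocation itself is not quoted in the post, but the outputs below (204 records, 213909504 bytes) are consistent with a timed sequential copy in 1 MiB blocks; a minimal reconstruction, with the source path and the direct-I/O flag as assumptions:

# Presumed benchmark: time a 214 MB sequential read in 1 MiB blocks.
# /path/to/testfile and iflag=direct are assumptions; the byte count
# matches the quoted output (204 x 1 MiB = 213909504 bytes).
time dd if=/path/to/testfile of=/dev/null bs=1M count=204 iflag=direct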
Below is the sar record taken at the time.
07:00:01 AM CPU %user %nice %system %iowait %steal %idle
09:20:01 AM all 10.48 0.11 1.76 2.89 0.00 84.76
09:30:01 AM all 10.59 0.10 1.81 2.45 0.00 85.04
09:40:01 AM all 7.91 0.18 1.61 3.20 0.00 87.10
09:50:01 AM all 7.26 0.07 1.66 3.23 0.00 87.78
10:00:01 AM all 7.54 0.13 1.53 3.67 0.00 87.13
10:10:01 AM all 7.78 0.09 1.76 3.92 0.00 86.45
10:20:01 AM all 8.24 0.09 2.27 3.98 0.00 85.43
10:30:01 AM all 7.38 0.08 1.79 5.18 0.00 85.57
10:40:01 AM all 8.14 0.16 2.01 6.36 0.00 83.33
10:50:02 AM all 7.05 0.10 1.74 4.83 0.00 86.29
11:00:01 AM all 7.61 0.09 2.04 5.43 0.00 84.83
11:10:01 AM all 7.22 0.09 1.70 6.22 0.00 84.76
11:20:01 AM all 6.71 0.12 2.10 7.35 0.00 83.72
11:30:01 AM all 9.36 0.10 2.87 5.03 0.00 82.63
11:40:01 AM all 7.26 0.25 1.76 6.08 0.00 84.65
11:50:01 AM all 7.17 0.12 2.40 5.24 0.00 85.07
12:00:01 PM all 6.30 0.10 2.64 5.27 0.00 85.69
Average: all 10.36 0.26 1.14 3.40 0.00 84.83
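The sar invocation is not shown either; on a sysstat-based system, a window like the one above could be pulled from the daily binary file with something like the following (the sa20 file name is a hypothetical example):

# -u reports CPU utilization; -f reads the binary daily file; -s/-e
# bound the reporting window to the interval shown above.
sar -u -f /var/log/sa/sa20 -s 09:20:00 -e 12:00:00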
The figures from one month earlier:
Production statistics 20-June-14:
204+0 records in
204+0 records out
213909504 bytes (214 MB) copied, 1.44182 seconds, 148 MB/s
real 0m1.445s
user 0m0.001s
sys 0m0.039s
Test environment:
TEST machine statistics:
204+0 records in
204+0 records out
213909504 bytes (214 MB) copied, 0.550607 seconds, 388 MB/s
real 0m0.595s
user 0m0.001s
sys 0m0.072s
Another server, used for data migration:
TEST2 machine statistics:
213909504 bytes (214 MB) copied, 0.320128 seconds, 668 MB/s
real 0m0.43s
user 0m0.01s
sys 0m0.42s
Of the findings below, the first two are the major ones.
VXFS version
We had IO performance issues with the very same version of VxFS (6.0.100.000) installed at TRUE.
Eventually we found we were hitting the following bug, which is fixed in version 6.0.3.
This happened at that site even though it was a fresh install and NOT an upgrade, as indicated below.
We also saw the very same performance degradation when removing the direct mount option.
Hence we recommend installing this patch.
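For context, the "direct mount option" on VxFS is set through its caching advisories at mount time; a minimal sketch, with the volume and mount-point names as placeholder assumptions:

# Mount VxFS with direct I/O semantics: mincache=direct bypasses the
# page cache for regular reads/writes, and convosync=direct converts
# O_SYNC writes to direct writes. Device path and mount point are
# placeholders.
mount -t vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/datadg/datavol /oradata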
SYMPTOM:
Performance degradation is seen after upgrading from SF 5.1SP1RP3 to SF 6.0.1 on Linux.
DESCRIPTION:
The degradation in performance is seen because I/Os are not unplugged before being delivered to the lower layers in the IO path. These I/Os are instead unplugged by the OS after a default timeout of 3 milliseconds, which adds overhead to I/O completion.
RESOLUTION:
Code changes were made to explicitly unplug the I/Os before sending them to the lower layer.
* 3254132 (Tracking ID: 3186971)
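Before and after patching, the installed VxFS level can be confirmed from the OS; a sketch assuming an RPM-based Veritas install:

# Query the installed VxFS package and the loaded kernel module.
rpm -q VRTSvxfs
modinfo vxfs | grep -i version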
Power management
Servers should have power-management savings disabled and be set to high performance.
Make sure C-states are disabled (locked to C0).
This is configured at the BIOS level and requires a reboot.
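Although the setting lives in the BIOS, the effective C-state configuration can be sanity-checked from Linux; a sketch assuming the cpuidle sysfs interface (or the cpupower tool) is available:

# List the C-states the kernel's cpuidle driver exposes; with C-states
# disabled in the BIOS, only POLL/C0 should remain.
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
# Alternative summary, if the cpupower utility is installed:
cpupower idle-info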
Source: ITPUB blog, http://blog.itpub.net/8494287/viewspace-1347068/ (please credit the source when reprinting).