Ceph MDS States Explained
MDS States
The Metadata Server (MDS) goes through several states during normal operation in CephFS. For example, some states indicate that the MDS is recovering from a failover of a previous MDS instance. Here we document all of these states and include a state diagram to visualize the transitions.
State Descriptions
Common states
| State | Description |
|---|---|
| up:active | This is the normal operating state of the MDS. It indicates that the MDS and its rank in the file system are available. |
| up:standby | The MDS is available to take over for a failed rank (see also :ref:`mds-standby`). The monitor will automatically assign an MDS in this state to a failed rank once one is available. |
| up:standby_replay | The MDS is following the journal of another up:active MDS. Should the active MDS fail, having a standby MDS in replay mode is desirable: because it is continuously replaying the live journal, it has a hot metadata cache and can take over more quickly. A rank may have at most one standby-replay daemon; if two daemons are both configured for standby-replay, one of them wins and the other becomes a normal, non-replay standby. A downside is that a standby-replay MDS can only take over for the rank it follows: if any other rank fails, this daemon will not replace it, even when no other standby is available. |
| up:boot | This state is broadcast to the Ceph monitors during startup. It is never visible, as the monitor immediately assigns the MDS to an available rank or commands it to operate as a standby. The state is documented here for completeness. |
| up:creating | The MDS is creating a new rank (perhaps rank 0) by constructing some per-rank metadata (like the journal) and entering the MDS cluster. |
| up:starting | The MDS is restarting a stopped rank. It opens the associated per-rank metadata and enters the MDS cluster. |
| up:stopping | When a rank is stopped, the monitors command an active MDS to enter the up:stopping state. In this state, the MDS accepts no new client connections, migrates all subtrees to other ranks in the file system, flushes its metadata journal and, if it holds the last rank (0), evicts all clients and shuts down (see also :ref:`cephfs-administration`). |
| up:replay | The MDS is taking over a failed rank. This state represents the journal recovery phase: the MDS reads its journal and other metadata into memory and replays it there. |
| up:resolve | The MDS enters this state from up:replay if the Ceph file system has multiple ranks (including this one), i.e. it is not a single-active-MDS cluster. The MDS is resolving any uncommitted inter-MDS operations; this covers cases where authoritative metadata diverged across MDSs, such as subtree migrations and anchor-table updates on the server side, and rename/unlink operations from clients. All ranks in the file system must be in this state or later for progress to be made, i.e. no rank may be failed/damaged or still in up:replay. |
| up:reconnect | An MDS enters this state from up:replay or up:resolve. This state is to solicit reconnections from clients: any client that had a session with this rank must reconnect during this window, configurable via mds_reconnect_timeout. The recovering MDS also queries clients for the file handles they hold and rebuilds the corresponding capability and lock state in its cache. The MDS does not synchronously journal file-open information, both to avoid extra latency on every access and because most files are opened read-only. |
| up:rejoin | The MDS enters this state from up:reconnect. In this state, the MDS is rejoining the MDS cluster cache: the clients' inodes are loaded into the MDS cache and all inter-MDS locks on metadata are reestablished. If there are no known client requests to be replayed, the MDS becomes up:active directly from this state. |
| up:clientreplay | The MDS may enter this state from up:rejoin. The MDS is replaying any client requests which were replied to but are not yet durable (not journaled). Clients resend these requests during up:reconnect and the requests are replayed once again. The MDS enters up:active after completing the replay. |
| down:failed | No MDS actually holds this state. Instead, it is applied to the rank in the file system. |
| down:damaged | No MDS actually holds this state. Instead, it is applied to the rank in the file system. |
| down:stopped | No MDS actually holds this state. Instead, it is applied to the rank in the file system. |
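In practice you would observe these states with `ceph fs status` or `ceph fs dump`. As a rough illustration (not the exact CLI output format; the sample entries below are hypothetical), the following sketch groups MDS daemons by the state they report:

```python
# Illustrative sketch: group MDS daemons by their reported state.
# The sample data is hypothetical; it loosely mirrors the per-daemon
# information (name, rank, state) shown by `ceph fs status`.
from collections import defaultdict

sample_mdsmap = [
    {"name": "mds.a", "rank": 0, "state": "up:active"},
    {"name": "mds.b", "rank": 0, "state": "up:standby_replay"},
    {"name": "mds.c", "rank": None, "state": "up:standby"},
]

def summarize(mdsmap):
    """Return {state: [daemon names]} for a list of MDS entries."""
    by_state = defaultdict(list)
    for entry in mdsmap:
        by_state[entry["state"]].append(entry["name"])
    return dict(by_state)

print(summarize(sample_mdsmap))
```

A summary like this makes it easy to spot, for example, a rank that has a standby-replay follower versus one that would need a cold standby on failover.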
Failover state-change sequence (as seen in the MDS log):
- handle_mds_map state change up:boot --> up:replay
- handle_mds_map state change up:replay --> up:reconnect
- handle_mds_map state change up:reconnect --> up:rejoin
- handle_mds_map state change up:rejoin --> up:active
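The log lines above trace one healthy path through the MDS state machine. A minimal sketch of the transitions named in this document (only those; the full state machine lives in the Ceph source tree) can validate such a path:

```python
# Minimal sketch of the MDS rank state transitions described in this
# document. This is NOT the complete state machine from the Ceph source;
# it covers only the transitions the state descriptions above mention.
TRANSITIONS = {
    "up:boot":         {"up:replay", "up:creating", "up:starting", "up:standby"},
    "up:replay":       {"up:resolve", "up:reconnect"},
    "up:resolve":      {"up:reconnect"},
    "up:reconnect":    {"up:rejoin"},
    "up:rejoin":       {"up:clientreplay", "up:active"},
    "up:clientreplay": {"up:active"},
    "up:active":       {"up:stopping"},
}

def is_valid_path(states):
    """Check that every consecutive pair is a documented transition."""
    return all(b in TRANSITIONS.get(a, set())
               for a, b in zip(states, states[1:]))

# The single-active-MDS failover sequence from the log lines above
# (up:resolve is skipped because there is only one rank):
failover = ["up:boot", "up:replay", "up:reconnect", "up:rejoin", "up:active"]
print(is_valid_path(failover))
```

Note that up:resolve appears in the path only on multi-rank file systems, which is why the logged sequence above jumps straight from up:replay to up:reconnect.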
State Diagram
This state diagram shows the possible state transitions for the MDS/rank. The legend is as follows:
Color
- Green: the MDS is active.
- Orange: the MDS is in a transient state, trying to become active.
- Red: the MDS indicates a state that causes the rank to be marked failed.
- Purple: the MDS and rank are stopped.
- Black: the MDS indicates a state that causes the rank to be marked damaged.
Shape
- Circle: an MDS may hold this state.
- Hexagon: no MDS holds this state (it applies only to the rank).
Lines
- A double-lined shape indicates the rank is "in".
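The color legend amounts to a classification of states; as a small sketch, it can be written as a lookup table (the exact grouping of each state is our reading of the legend above, so verify it against the upstream diagram):

```python
# Map each documented MDS/rank state to its legend color.
# The grouping is our interpretation of the legend in this document,
# not an authoritative copy of the upstream state diagram.
LEGEND = {
    "green":  {"up:active"},
    "orange": {"up:creating", "up:starting", "up:replay", "up:resolve",
               "up:reconnect", "up:rejoin", "up:clientreplay"},
    "red":    {"down:failed"},
    "purple": {"up:stopping", "down:stopped"},
    "black":  {"down:damaged"},
}

def color_of(state):
    """Return the legend color for a state, or None if unknown."""
    for color, states in LEGEND.items():
        if state in states:
            return color
    return None

print(color_of("up:replay"))
```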
Reference:
https://github.com/ceph/ceph/blob/master/doc/cephfs/mds-states.rst