Ethereum Source Code Analysis (52): The Ethereum Fast Sync Algorithm
This PR aggregates a lot of small modifications to core, trie, eth and other packages to collectively implement the eth/63 fast synchronization algorithm. In short, geth --fast.
## Algorithm
The goal of the fast sync algorithm is to exchange processing power for bandwidth usage. Instead of processing the entire blockchain one link at a time, and replaying all transactions that ever happened in history, fast syncing downloads the transaction receipts along with the blocks, and pulls an entire recent state database. This allows a fast-synced node to still retain its status as an archive node containing all historical data for user queries (and thus not influence the network's health in general), but at the same time to reassemble a recent network state at a fraction of the time it would take full block processing.
An outline of the fast sync algorithm would be:
- Similarly to classical sync, download the block headers and bodies that make up the blockchain
- Similarly to classical sync, verify the header chain's consistency (PoW, total difficulty, etc.)
- Instead of processing the blocks, download the transaction receipts as defined by the header
- Store the downloaded blockchain, along with the receipt chain, enabling all historical queries
- When the chain reaches a recent enough state (head - 1024 blocks), pause for state sync:
- Retrieve the entire Merkle Patricia state trie defined by the root hash of the pivot point
- For every account found in the trie, retrieve its contract code and internal storage state trie
- Upon successful trie download, mark the pivot point (head - 1024 blocks) as the current head
- Import all remaining blocks (1024) by fully processing them as in the classical sync
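To make these phases concrete, below is a minimal Go sketch of the outline above. Every function here is a hypothetical stub named only for illustration; this is not geth's actual downloader API, just the shape of the algorithm under the head - 1024 pivot rule:

```go
package main

import "fmt"

// pivotOffset mirrors the "head - 1024 blocks" pivot rule from the outline.
const pivotOffset = 1024

// fastSync sketches the phases of the algorithm; all called functions are
// hypothetical stubs, not geth's real downloader API.
func fastSync(head uint64) {
	pivot := head - pivotOffset

	downloadHeadersAndBodies(0, head) // like classical sync: headers + bodies
	downloadReceipts(0, pivot)        // receipts instead of re-executing transactions
	syncStateTrie(pivot)              // pull the entire Merkle Patricia state trie at the pivot
	fullyProcessBlocks(pivot+1, head) // classically process the remaining ~1024 blocks

	fmt.Printf("pivot %d accepted, new head %d\n", pivot, head)
}

// Stubs standing in for the real retrieval and processing machinery.
func downloadHeadersAndBodies(from, to uint64) {}
func downloadReceipts(from, to uint64)         {}
func syncStateTrie(pivot uint64)               {}
func fullyProcessBlocks(from, to uint64)       {}

func main() {
	fastSync(357677) // e.g. a Frontier-era head height
}
```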
## Analysis
By downloading and verifying the entire header chain, we can guarantee, with all the security of the classical sync, that the hashes (receipts, state tries, etc.) contained within the headers are valid. Based on those hashes, we can confidently download transaction receipts and the entire state trie afterwards. Additionally, by placing the pivot point (where fast sync switches to block processing) a bit below the current head (1024 blocks), we can ensure that even larger chain reorganizations can be handled without the need for a new sync (as we have all the state going that many blocks back).
## Caveats
The historical block-processing based synchronization mechanism has two (approximately equally costly) bottlenecks: transaction processing and PoW verification. The baseline fast sync algorithm successfully circumvents the transaction processing, skipping the need to iterate over every single state the system was ever in. However, verifying the proof of work associated with each header is still a notably CPU intensive operation.
However, we can notice an interesting phenomenon during header verification. With a negligible probability of error, we can still guarantee the validity of the chain by verifying only every K-th header, instead of each and every one. By selecting a single header at random out of every K headers to verify, we guarantee the validity of an N-length chain with a probability of (1/K)^(N/K) of a forgery going unnoticed (i.e. we have a 1/K chance to spot a forgery in K blocks, a verification that's repeated N/K times).
Let's define the negligible probability Pn as the probability of obtaining a 256 bit SHA3 collision (i.e. the hash Ethereum is built upon): 1/2^128. To honor the Ethereum security requirements, we need to choose the minimum chain length N (below which we verify every header) and the maximum verification batch size K such that (1/K)^(N/K) <= Pn holds. Calculating this for various {N, K} pairs is pretty straightforward, a simple and lenient solution being http://play.golang.org/p/B-8sX_6Dq0.
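In the same spirit as the linked playground snippet (whose exact contents are not reproduced here), the search can be sketched in a few lines of Go: for each N, find the largest K with (N/K)*log2(K) >= 128, which is just the condition (1/K)^(N/K) <= 1/2^128 after taking logarithms:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// (1/K)^(N/K) <= 1/2^128  <=>  (N/K) * log2(K) >= 128
	for N := 1024.0; N <= 3968; N += 128 {
		K := 1.0
		// Grow K while the next value still satisfies the security bound.
		for (N/(K+1))*math.Log2(K+1) >= 128 {
			K++
		}
		fmt.Printf("{N: %4.0f, K: %3.0f}\n", N, K)
	}
}
```

Under this criterion the sketch reproduces the table below (e.g. N=2048 yields K=108); the pair actually chosen later, N=2048 with K=100, is stricter than the bound requires.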
|N    |K   |N    |K   |N    |K   |N    |K   |
|-----|----|-----|----|-----|----|-----|----|
|1024 |43  |1792 |91  |2560 |143 |3328 |198 |
|1152 |51  |1920 |99  |2688 |152 |3456 |207 |
|1280 |58  |2048 |108 |2816 |161 |3584 |217 |
|1408 |66  |2176 |116 |2944 |170 |3712 |226 |
|1536 |74  |2304 |125 |3072 |179 |3840 |236 |
|1664 |82  |2432 |134 |3200 |189 |3968 |246 |
The above table should be interpreted in such a way that if we verify every K-th header, after N headers the probability of a forgery is smaller than the probability of an attacker producing a SHA3 collision. It also means that if a forgery is indeed detected, the last N headers should be discarded as not safe enough. Any {N, K} pair may be chosen from the above table, and to keep the numbers looking reasonable, we chose N=2048, K=100 (for this pair, (1/100)^(2048/100) ≈ 10^-41, comfortably below Pn ≈ 3·10^-39). This will be fine tuned later after being able to observe network bandwidth/latency effects and possibly behavior on more CPU limited devices.
Using this caveat however would mean that the pivot point can be considered secure only after N headers have been imported after the pivot itself. To prove the pivot safe faster, we stop the gapped verifications X headers before the pivot point, and verify every single header onward, including an additional X headers post-pivot before accepting the pivot's state. Given the above N and K numbers, we chose X=24 as a safe number.
With this caveat calculated, the fast sync should be modified so that up to the pivot point - X, only every K=100-th header is verified (chosen at random within each batch), after which all headers up to pivot point + X are fully verified before starting the state database download. Note: if a sync fails due to header verification, the last N headers must be discarded as they cannot be trusted enough. A sketch of this per-header decision follows below.
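As a rough illustration of that rule, here is a sketch under the N=2048, K=100, X=24 parameters above. It is not geth's actual code; needsFullVerification is a hypothetical helper, and the per-header random draw is a simplification of picking one header per batch of K:

```go
package main

import (
	"fmt"
	"math/rand"
)

const (
	gapK    = 100 // verify one header out of every K before the safety window
	safetyX = 24  // fully verify every header from pivot-X onwards
)

// needsFullVerification decides whether the header at 'height' gets its PoW
// checked during fast sync with the given pivot. Headers at and past the
// pivot are fully processed anyway, so they are always verified.
func needsFullVerification(height, pivot uint64) bool {
	if height+safetyX >= pivot {
		return true // inside [pivot-X, head]: verify every single header
	}
	// Below the window: a 1/K random draw approximates picking one header
	// at random out of each batch of K.
	return rand.Intn(gapK) == 0
}

func main() {
	pivot := uint64(2_000_000)
	for _, h := range []uint64{1_000_000, pivot - 30, pivot - 10, pivot + 5} {
		fmt.Printf("header %d: full verification = %v\n", h, needsFullVerification(h, pivot))
	}
}
```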
## Weakness
Blockchain protocols in general (i.e. Bitcoin, Ethereum, and the others) are susceptible to Sybil attacks, where an attacker tries to completely isolate a node from the rest of the network, making it believe a false truth as to what the state of the real network is. This permits the attacker to spend certain funds in both the real network and this "fake bubble". However, the attacker can only maintain this state as long as it keeps feeding new valid blocks that it itself is forging; and to successfully shadow the real network, it needs to do this with a chain height and difficulty close to the real network's. In short, to pull off a successful Sybil attack, the attacker needs to match the network's hash rate, so it's a very expensive attack.
Compared to the classical Sybil attack, fast sync provides such an attacker with an extra ability: that of feeding a node a view of the network that's not only different from the real network, but one that might also go around the EVM mechanics. The Ethereum protocol only validates state root hashes by processing all the transactions against the previous state root. But by skipping the transaction processing, we cannot prove whether the state root contained within the fast sync pivot point is valid, so as long as an attacker can maintain a fake blockchain that's on par with the real network, it could create an invalid view of the network's state.
To avoid opening up nodes to this extra attacker ability, fast sync (besides being solely opt-in) will only ever run during an initial sync (i.e. when the node's own blockchain is empty). After a node has managed to successfully sync with the network, fast sync is forever disabled. This way anybody can quickly catch up with the network, but after the node has caught up, the extra attack vector is plugged. This feature permits users to safely use the fast sync flag (--fast), without having to worry about potential state root attacks happening to them in the future. As an additional safety feature, if a fast sync fails close to or after the random pivot point, fast sync is disabled as a safety precaution and the node reverts to full, block-processing based synchronization.
## Performance
To benchmark the performance of the new algorithm, four separate tests were run: full syncing from scratch on Frontier and Olympic, using both the classical sync as well as the new sync mechanism. In all scenarios there were two nodes running on a single machine: a seed node featuring a fully synced database, and a leech node with only the genesis block, pulling the data. In all test scenarios the seed node had a fast-synced database (smaller, less disk contention) and both nodes were given a 1GB database cache (--cache=1024).
The machine running the tests was a Zenbook Pro, Core i7 4720HQ, 12GB RAM, 256GB m.2 SSD, Ubuntu 15.04.
| Dataset (blocks, states)              | Normal sync (time, db) | Fast sync (time, db) |
|---------------------------------------|:----------------------:|:--------------------:|
| Frontier, 357677 blocks, 42.4K states | 12:21 mins, 1.6 GB     | 2:49 mins, 235.2 MB  |
| Olympic, 837869 blocks, 10.2M states  | 4:07:55 hours, 21 GB   | 31:32 mins, 3.8 GB   |
The resulting databases contain the entire blockchain (all blocks, all uncles, all transactions), every transaction receipt and generated log, and the entire state trie of the head 1024 blocks. This allows a fast-synced node to act as a full archive node for all intents and purposes.
## Closing remarks
The fast sync algorithm requires the functionality defined by eth/63. Because of this, testing in the live network requires at least a handful of discoverable peers to update their nodes to eth/63. On the same note, verifying that the implementation is truly correct will also entail waiting for the wider deployment of eth/63.