Ethereum Source Code Analysis (11): An Overview of eth's Current PoW Consensus Algorithm

Yin Cheng (尹成), published 2018-05-13
### An overview of eth's current PoW consensus algorithm

##### The code sub-packages involved are mainly consensus, miner, core, and geth

```
/consensus  the consensus algorithms
   consensus.go
1. Prepare: initializes the consensus fields (e.g. the difficulty) of a block header
2. CalcDifficulty: computes the proof-of-work difficulty
3. AccumulateRewards: computes the mining reward for each block
4. VerifySeal: verifies that the PoW difficulty of a sealed header meets the requirement; returns nil on success
5. verifyHeader: verifies that a block header conforms to the consensus rules
```
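All of these hang off the `consensus.Engine` interface, which ethash implements. For orientation, here is an abridged sketch of that interface's shape in this era of the codebase (the method set is trimmed; see /consensus/consensus.go for the authoritative definition):

```
// Abridged sketch of consensus.Engine (Author, VerifyHeaders, VerifyUncles,
// APIs etc. are omitted here; see /consensus/consensus.go for the full set).
type Engine interface {
    // VerifyHeader checks whether a header conforms to the consensus rules.
    VerifyHeader(chain ChainReader, header *types.Header, seal bool) error

    // VerifySeal checks that the header satisfies the PoW difficulty.
    VerifySeal(chain ChainReader, header *types.Header) error

    // Prepare initializes the consensus fields of a header (e.g. difficulty).
    Prepare(chain ChainReader, header *types.Header) error

    // Finalize applies the block rewards (AccumulateRewards) and assembles
    // the final block.
    Finalize(chain ChainReader, header *types.Header, state *state.StateDB,
        txs []*types.Transaction, uncles []*types.Header,
        receipts []*types.Receipt) (*types.Block, error)

    // Seal searches for a nonce that satisfies the block's difficulty.
    Seal(chain ChainReader, block *types.Block, stop <-chan struct{}) (*types.Block, error)
}
```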



/miner  mining
work.go
commitNewWork(): commits new work: it fetches pending (not yet packaged) transactions from the transaction pool, commits them, and packages them into a new block
__Core code__:
```
// Create the current work task and check any fork transitions needed
    work := self.current
    if self.config.DAOForkSupport && self.config.DAOForkBlock != nil && self.config.DAOForkBlock.Cmp(header.Number) == 0 {
        misc.ApplyDAOHardFork(work.state)
    }
    pending, err := self.eth.TxPool().Pending()
    if err != nil {
        log.Error("Failed to fetch pending transactions", "err", err)
        return
    }
    txs := types.NewTransactionsByPriceAndNonce(self.current.signer, pending)
    work.commitTransactions(self.mux, txs, self.chain, self.coinbase)

```



```
eth/handler.go
    NewProtocolManager --> verifyHeader --> VerifySeal

```

__The flow of running the whole chain, packaging transactions, and producing blocks__
```
/cmd/geth/main.go/main
    makeFullNode-->RegisterEthService-->eth.New-->NewProtocolManager --> verifyHeader --> VerifySeal

```

##### A walkthrough of the call stack of eth's PoW consensus algorithm


###### The core logic starts from the NewProtocolManager method in /eth/handler.go; the key code:
```
manager.downloader = downloader.New(mode, chaindb, manager.eventMux, blockchain, nil, manager.removePeer)

    validator := func(header *types.Header) error {
        return engine.VerifyHeader(blockchain, header, true)
    }
    heighter := func() uint64 {
        return blockchain.CurrentBlock().NumberU64()
    }
    inserter := func(blocks types.Blocks) (int, error) {
        // If fast sync is running, deny importing weird blocks
        if atomic.LoadUint32(&manager.fastSync) == 1 {
            log.Warn("Discarded bad propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash())
            return 0, nil
        }
        atomic.StoreUint32(&manager.acceptTxs, 1) // Mark initial sync done on any fetcher import
        return manager.blockchain.InsertChain(blocks)
    }
    manager.fetcher = fetcher.New(blockchain.GetBlockByHash, validator, manager.BroadcastBlock, heighter, inserter, manager.removePeer)

    return manager, nil
```

###### This method creates a block-header validation function, `validator`
###### Let's follow the `engine.VerifyHeader(blockchain, header, true)` method:

```
// VerifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
func (ethash *Ethash) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error {
    // If we're running a full engine faking, accept any input as valid
    if ethash.fakeFull {
        return nil
    }
    // Short circuit if the header is known, or it's parent not
    number := header.Number.Uint64()
    if chain.GetHeader(header.Hash(), number) != nil {
        return nil
    }
    parent := chain.GetHeader(header.ParentHash, number-1)
    if parent == nil {
        return consensus.ErrUnknownAncestor
    }
    // Sanity checks passed, do a proper verification
    return ethash.verifyHeader(chain, header, parent, false, seal)
}

```

###### First look at the first if: when it is true, the engine is running in fake-full mode and all consensus rule checks are disabled, so any input is accepted as valid. The next two ifs short-circuit on chain state: if this header (looked up by its hash and number) is already known on the chain, its consensus verification passes immediately; if its parent header cannot be found, verification fails and consensus.ErrUnknownAncestor is returned. Only when we fall through to the final return does the real verification work, with additional checks, happen in verifyHeader

```
// verifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
// See YP section 4.3.4. "Block Header Validity"
func (ethash *Ethash) verifyHeader(chain consensus.ChainReader, header, parent *types.Header, uncle bool, seal bool) error {
    // Ensure that the header's extra-data section is of a reasonable size
    if uint64(len(header.Extra)) > params.MaximumExtraDataSize {
        return fmt.Errorf("extra-data too long: %d > %d", len(header.Extra), params.MaximumExtraDataSize)
    }
    // Verify the header's timestamp
    if uncle {
        if header.Time.Cmp(math.MaxBig256) > 0 {
            return errLargeBlockTime
        }
    } else {
        if header.Time.Cmp(big.NewInt(time.Now().Unix())) > 0 {
            return consensus.ErrFutureBlock
        }
    }
    if header.Time.Cmp(parent.Time) <= 0 {
        return errZeroBlockTime
    }
    // Verify the block's difficulty based in it's timestamp and parent's difficulty
    expected := CalcDifficulty(chain.Config(), header.Time.Uint64(), parent)
    if expected.Cmp(header.Difficulty) != 0 {
        return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected)
    }
    // Verify that the gas limit is <= 2^63-1
    if header.GasLimit.Cmp(math.MaxBig63) > 0 {
        return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, math.MaxBig63)
    }
    // Verify that the gasUsed is <= gasLimit
    if header.GasUsed.Cmp(header.GasLimit) > 0 {
        return fmt.Errorf("invalid gasUsed: have %v, gasLimit %v", header.GasUsed, header.GasLimit)
    }

    // Verify that the gas limit remains within allowed bounds
    diff := new(big.Int).Set(parent.GasLimit)
    diff = diff.Sub(diff, header.GasLimit)
    diff.Abs(diff)

    limit := new(big.Int).Set(parent.GasLimit)
    limit = limit.Div(limit, params.GasLimitBoundDivisor)

    if diff.Cmp(limit) >= 0 || header.GasLimit.Cmp(params.MinGasLimit) < 0 {
        return fmt.Errorf("invalid gas limit: have %v, want %v += %v", header.GasLimit, parent.GasLimit, limit)
    }
    // Verify that the block number is parent's +1
    if diff := new(big.Int).Sub(header.Number, parent.Number); diff.Cmp(big.NewInt(1)) != 0 {
        return consensus.ErrInvalidNumber
    }
    // Verify the engine specific seal securing the block
    if seal {
        if err := ethash.VerifySeal(chain, header); err != nil {
            return err
        }
    }
    // If all checks passed, validate any special fields for hard forks
    if err := misc.VerifyDAOHeaderExtraData(chain.Config(), header); err != nil {
        return err
    }
    if err := misc.VerifyForkHashes(chain.Config(), header, uncle); err != nil {
        return err
    }
    return nil
}

```
###### This method first checks that the header's extra-data is no larger than the protocol maximum (params.MaximumExtraDataSize), returning an error if it is. It then validates the timestamp: an uncle header's timestamp merely has to fit in 256 bits, while an ordinary header must not lie in the future, and in either case the timestamp must be strictly greater than the parent's. Next it recomputes the expected difficulty from the chain config, the header's timestamp, and the parent block; an error is returned if the header's difficulty differs from the expectation, or its gasLimit exceeds 2^63-1, or gasUsed exceeds gasLimit. The gas limit must also stay within the allowed bounds relative to the parent (and above params.MinGasLimit), and the block number must be exactly the parent's number plus one. If `seal` is true, VerifySeal is invoked to check that the block satisfies the PoW difficulty requirement (a private chain generally does not need PoW). The last two ifs guard the hard forks: one validates the header's extra-data field against the DAO fork rules, and the other pins known fork block hashes so that nodes on different chains reject each other's headers

###### uncle block:
__eth allows a miner to include a list of uncle blocks alongside the block it mines. This serves two purposes (see the reward sketch below):__
1. It reduces the incentive to centralize: miners whose blocks went stale or were orphaned because of network propagation delay, and so never became part of the canonical chain, still receive a reward
2. It increases the security of the chain by increasing the amount of work counted on the main chain (less work is wasted on stale blocks)
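The reward arithmetic behind these incentives lives in AccumulateRewards (listed in the package outline above). As a rough guide: an uncle's reward decays with its distance from the including block, and the including miner earns an extra 1/32 of the block reward per uncle. A minimal sketch of that arithmetic, assuming the Frontier/Homestead block reward of 5 ether (Byzantium lowered it to 3); an illustration, not the verbatim implementation:

```
package main

import (
    "fmt"
    "math/big"
)

// blockReward is 5 ether under Frontier/Homestead (assumption for this
// sketch; Byzantium reduced it to 3 ether).
var blockReward = new(big.Int).Mul(big.NewInt(5), big.NewInt(1e18))

// uncleReward = blockReward * (uncleNumber + 8 - blockNumber) / 8,
// i.e. an uncle k blocks behind its includer keeps (8-k)/8 of the reward.
func uncleReward(blockNumber, uncleNumber uint64) *big.Int {
    r := new(big.Int).SetUint64(uncleNumber + 8)
    r.Sub(r, new(big.Int).SetUint64(blockNumber))
    r.Mul(r, blockReward)
    return r.Div(r, big.NewInt(8))
}

func main() {
    // An uncle included one block later keeps 7/8 of the full reward.
    fmt.Println(uncleReward(100, 99)) // 4375000000000000000 wei = 4.375 ether
    // The including miner additionally earns blockReward/32 per uncle.
    fmt.Println(new(big.Int).Div(blockReward, big.NewInt(32))) // 0.15625 ether
}
```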

The core code of eth's PoW:

```
// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
// TODO (karalabe): Move the chain maker into this package and make this private!
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
    next := new(big.Int).Add(parent.Number, big1)
    switch {
    case config.IsByzantium(next):
        return calcDifficultyByzantium(time, parent)
    case config.IsHomestead(next):
        return calcDifficultyHomestead(time, parent)
    default:
        return calcDifficultyFrontier(time, parent)
    }
}
```
###### The switch selects the difficulty algorithm by fork: the first `case` computes the difficulty a new block should have under the Byzantium rules, the second `case` under the Homestead rules, and the default case falls back to the original Frontier rules
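For concreteness, the Homestead variant adjusts the parent difficulty by parent_diff/2048 times a factor derived from the block-time delta (clamped at -99), then adds the exponential "difficulty bomb". A simplified sketch of that arithmetic (the real calcDifficultyHomestead additionally enforces params.MinimumDifficulty, omitted here):

```
package main

import (
    "fmt"
    "math/big"
)

// Simplified Homestead difficulty adjustment:
//   diff = parent_diff
//        + parent_diff/2048 * max(1 - (time - parent_time)/10, -99)
//        + 2^(block_number/100000 - 2)
func calcDifficultySketch(parentDiff *big.Int, parentTime, time, number uint64) *big.Int {
    adjust := new(big.Int).Div(parentDiff, big.NewInt(2048))

    factor := 1 - int64(time-parentTime)/10
    if factor < -99 {
        factor = -99
    }
    diff := new(big.Int).Add(parentDiff, new(big.Int).Mul(adjust, big.NewInt(factor)))

    // The difficulty bomb kicks in once block_number/100000 exceeds 1.
    if period := number / 100000; period > 1 {
        diff.Add(diff, new(big.Int).Lsh(big.NewInt(1), uint(period-2)))
    }
    return diff
}

func main() {
    parent := big.NewInt(17000000)
    fmt.Println(calcDifficultySketch(parent, 100, 109, 1500000)) // 9s block: difficulty rises
    fmt.Println(calcDifficultySketch(parent, 100, 125, 1500000)) // 25s block: difficulty falls
}
```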


#### Parsing the code that performs the PoW computation and produces a new block
__/consensus/seal.go/seal__

```
// Seal implements consensus.Engine, attempting to find a nonce that satisfies
// the block's difficulty requirements.
func (ethash *Ethash) Seal(chain consensus.ChainReader, block *types.Block, stop <-chan struct{}) (*types.Block, error) {
   // If we're running a fake PoW, simply return a 0 nonce immediately
   if ethash.fakeMode {
       header := block.Header()
       header.Nonce, header.MixDigest = types.BlockNonce{}, common.Hash{}
       return block.WithSeal(header), nil
   }
   // If we're running a shared PoW, delegate sealing to it
   if ethash.shared != nil {
       return ethash.shared.Seal(chain, block, stop)
   }
   // Create a runner and the multiple search threads it directs
   abort := make(chan struct{})
   found := make(chan *types.Block)

   ethash.lock.Lock()
   threads := ethash.threads
   if ethash.rand == nil {
       seed, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
       if err != nil {
           ethash.lock.Unlock()
           return nil, err
       }
       ethash.rand = rand.New(rand.NewSource(seed.Int64()))
   }
   ethash.lock.Unlock()
   if threads == 0 {
       threads = runtime.NumCPU()
   }
   if threads < 0 {
       threads = 0 // Allows disabling local mining without extra logic around local/remote
   }
   var pend sync.WaitGroup
   for i := 0; i < threads; i++ {
       pend.Add(1)
       go func(id int, nonce uint64) {
           defer pend.Done()
           ethash.mine(block, id, nonce, abort, found)
       }(i, uint64(ethash.rand.Int63()))
   }
   // Wait until sealing is terminated or a nonce is found
   var result *types.Block
   select {
   case <-stop:
       // Outside abort, stop all miner threads
       close(abort)
   case result = <-found:
       // One of the threads found a block, abort all others
       close(abort)
   case <-ethash.update:
       // Thread count was changed on user request, restart
       close(abort)
       pend.Wait()
       return ethash.Seal(chain, block, stop)
   }
   // Wait for all miners to terminate and return the block
   pend.Wait()
   return result, nil
}

```

###### Inside this method's for loop, mining for the new block runs across parallel threads; once sealing is stopped or one thread finds a block, all the others are aborted, and if the thread count is changed on user request the method restarts itself

###### Here we reach the core PoW mining method, `ethash.mine`: when a new block is sealed, it is written into the `found channel`

```
// mine is the actual proof-of-work miner that searches for a nonce starting from
// seed that results in correct final block difficulty.
func (ethash *Ethash) mine(block *types.Block, id int, seed uint64, abort chan struct{}, found chan *types.Block) {
    // Extract some data from the header
    var (
        header = block.Header()
        hash = header.HashNoNonce().Bytes()
        target = new(big.Int).Div(maxUint256, header.Difficulty)

        number = header.Number.Uint64()
        dataset = ethash.dataset(number)
    )
    // Start generating random nonces until we abort or find a good one
    var (
        attempts = int64(0)
        nonce = seed
    )
    logger := log.New("miner", id)
    logger.Trace("Started ethash search for new nonces", "seed", seed)
    for {
        select {
        case <-abort:
            // Mining terminated, update stats and abort
            logger.Trace("Ethash nonce search aborted", "attempts", nonce-seed)
            ethash.hashrate.Mark(attempts)
            return

        default:
            // We don't have to update hash rate on every nonce, so update after after 2^X nonces
            attempts++
            if (attempts % (1 << 15)) == 0 {
                ethash.hashrate.Mark(attempts)
                attempts = 0
            }
            // Compute the PoW value of this nonce
            digest, result := hashimotoFull(dataset, hash, nonce)
            if new(big.Int).SetBytes(result).Cmp(target) <= 0 {
                // Correct nonce found, create a new header with it
                header = types.CopyHeader(header)
                header.Nonce = types.EncodeNonce(nonce)
                header.MixDigest = common.BytesToHash(digest)

                // Seal and return a block (if still needed)
                select {
                case found <- block.WithSeal(header):
                    logger.Trace("Ethash nonce found and reported", "attempts", nonce-seed, "nonce", nonce)
                case <-abort:
                    logger.Trace("Ethash nonce found but discarded", "attempts", nonce-seed, "nonce", nonce)
                }
                return
            }
            nonce++
        }
    }
}


```

###### `target` is the difficulty target of the new block; the `hashimotoFull` method computes a hash value, and if the produced `hash` is less than or equal to `target`, the hash meets the difficulty requirement: the `header` satisfying it is sealed, written into the `found channel`, and the function returns, otherwise the loop keeps searching

```
// hashimotoFull aggregates data from the full dataset (using the full in-memory
// dataset) in order to produce our final value for a particular header hash and
// nonce.
func hashimotoFull(dataset []uint32, hash []byte, nonce uint64) ([]byte, []byte) {
    lookup := func(index uint32) []uint32 {
        offset := index * hashWords
        return dataset[offset : offset+hashWords]
    }
    return hashimoto(hash, nonce, uint64(len(dataset))*4, lookup)
}

```

```
// hashimoto aggregates data from the full dataset in order to produce our final
// value for a particular header hash and nonce.
func hashimoto(hash []byte, nonce uint64, size uint64, lookup func(index uint32) []uint32) ([]byte, []byte) {
    // Calculate the number of theoretical rows (we use one buffer nonetheless)
    rows := uint32(size / mixBytes)

    // Combine header+nonce into a 64 byte seed
    seed := make([]byte, 40)
    copy(seed, hash)
    binary.LittleEndian.PutUint64(seed[32:], nonce)

    seed = crypto.Keccak512(seed)
    seedHead := binary.LittleEndian.Uint32(seed)

    // Start the mix with replicated seed
    mix := make([]uint32, mixBytes/4)
    for i := 0; i < len(mix); i++ {
        mix[i] = binary.LittleEndian.Uint32(seed[i%16*4:])
    }
    // Mix in random dataset nodes
    temp := make([]uint32, len(mix))

    for i := 0; i < loopAccesses; i++ {
        parent := fnv(uint32(i)^seedHead, mix[i%len(mix)]) % rows
        for j := uint32(0); j < mixBytes/hashBytes; j++ {
            copy(temp[j*hashWords:], lookup(2*parent+j))
        }
        fnvHash(mix, temp)
    }
    // Compress mix
    for i := 0; i < len(mix); i += 4 {
        mix[i/4] = fnv(fnv(fnv(mix[i], mix[i+1]), mix[i+2]), mix[i+3])
    }
    mix = mix[:len(mix)/4]

    digest := make([]byte, common.HashLength)
    for i, val := range mix {
        binary.LittleEndian.PutUint32(digest[i*4:], val)
    }
    return digest, crypto.Keccak256(append(seed, digest...))
}

```

###### As you can see, this method repeatedly performs Keccak hashing (not SHA-256: hashimoto uses Keccak-512 and Keccak-256) and then compares difficulties: if the hash value, read as a 256-bit integer, is less than or equal to the predefined target, the hash qualifies to produce the block

In short: a random seed is generated and assigned to the nonce; each iteration runs the Keccak-based hashimoto, and if the computed hash misses the target difficulty, the nonce is incremented by 1 and the search continues
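To make this loop concrete, here is a self-contained toy miner following the same pattern: target = 2^256 / difficulty, hash the header data together with a nonce, and keep incrementing the nonce until the hash, read as a 256-bit integer, is at or below the target. Plain Keccak-256 stands in for the real dataset-backed hashimotoFull, so this is a sketch of the search logic only:

```
package main

import (
    "encoding/binary"
    "fmt"
    "math/big"

    "golang.org/x/crypto/sha3"
)

// Toy nonce search mirroring ethash.mine: a hash qualifies iff, read as a
// 256-bit integer, it is <= 2^256/difficulty.
func search(headerHash []byte, difficulty *big.Int, seed uint64) (uint64, []byte) {
    max256 := new(big.Int).Lsh(big.NewInt(1), 256)
    target := new(big.Int).Div(max256, difficulty)

    buf := make([]byte, len(headerHash)+8)
    copy(buf, headerHash)

    for nonce := seed; ; nonce++ {
        binary.LittleEndian.PutUint64(buf[len(headerHash):], nonce)
        h := sha3.NewLegacyKeccak256()
        h.Write(buf)
        result := h.Sum(nil)
        if new(big.Int).SetBytes(result).Cmp(target) <= 0 {
            return nonce, result
        }
    }
}

func main() {
    header := []byte("example header hash")
    // Difficulty 1<<20: roughly one nonce in a million qualifies.
    nonce, result := search(header, big.NewInt(1<<20), 0)
    fmt.Printf("nonce=%d hash=%x\n", nonce, result)
}
```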



worker.go/wait() (abridged; the full method appears later in this article):

```
func (self *worker) wait() {
    for {
        mustCommitNewWork := true
        for result := range self.recv {
            atomic.AddInt32(&self.atWork, -1)

            if result == nil {
                continue
            }
            ...
```


The agent.go/mine() method:

```
func (self *CpuAgent) mine(work *Work, stop <-chan struct{}) {
    if result, err := self.engine.Seal(self.chain, work.Block, stop); result != nil {
        log.Info("Successfully sealed new block", "number", result.Number(), "hash", result.Hash())
        self.returnCh <- &Result{work, result}
    } else {
        if err != nil {
            log.Warn("Block sealing failed", "err", err)
        }
        self.returnCh <- nil
    }
}
```

If a new block is mined, the result is written into the agent's return channel


The method that writes the block is WriteBlockAndState.

The wait method receives results from the self.recv channel; a result means a new block was mined locally, so the block is persisted
and placed into the pending-confirmation block area.

The update method in miner.go stops mining and switches to downloading/synchronising whenever a new block arrives from the network; on a download-done or download-failed event it resumes mining.



```
/geth/main.go/geth --> makeFullNode --> utils.RegisterEthService
--> eth.New(ctx, cfg) --> miner.New(eth, eth.chainConfig, eth.EventMux(), eth.engine)
```

This is the whole call stack from starting the chain through to mining and the consensus code; now let's analyze the core methods.


```
func New(eth Backend, config *params.ChainConfig, mux *event.TypeMux, engine consensus.Engine) *Miner {
    miner := &Miner{
        eth: eth,
        mux: mux,
        engine: engine,
        worker: newWorker(config, engine, common.Address{}, eth, mux),
        canStart: 1,
    }
    miner.Register(NewCpuAgent(eth.BlockChain(), engine))
    go miner.update()

    return miner
}
```

From the logic of miner.update() we can see that, for any node in the Ethereum network, mining a new block and downloading/synchronising a new block from other nodes are fundamentally mutually exclusive. This rule guarantees that on a given node a new block can only have a single source at a time, which greatly reduces potential block conflicts and avoids wasting computing resources across the network.

First:

```
func (self *Miner) Register(agent Agent) {
    if self.Mining() {
        agent.Start()
    }
    self.worker.register(agent)
}

func (self *worker) register(agent Agent) {
    self.mu.Lock()
    defer self.mu.Unlock()
    self.agents[agent] = struct{}{}
    agent.SetReturnCh(self.recv)
}
```

Here the worker's recv channel is bound as the agent's return channel, as sketched below
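This is the classic Go fan-in pattern: every registered agent writes into the one result channel the worker consumes. A stripped-down, self-contained sketch of the same wiring (the types here are hypothetical stand-ins for Agent and worker):

```
package main

import "fmt"

// Hypothetical stand-ins for miner.Agent and worker: each agent is handed
// the worker's recv channel as its return channel, so all mining results
// fan in to one consumer.
type result struct{ id, nonce int }

type agent struct{ returnCh chan<- *result }

func (a *agent) setReturnCh(ch chan<- *result) { a.returnCh = ch }

func (a *agent) mine(id int) {
    a.returnCh <- &result{id: id, nonce: id * 42} // pretend we sealed a block
}

func main() {
    recv := make(chan *result, 4) // plays the role of worker.recv
    for i := 0; i < 3; i++ {
        a := &agent{}
        a.setReturnCh(recv) // worker.register: agent.SetReturnCh(self.recv)
        go a.mine(i)
    }
    for i := 0; i < 3; i++ {
        r := <-recv // worker.wait consumes results from all agents
        fmt.Printf("agent %d produced nonce %d\n", r.id, r.nonce)
    }
}
```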


Then the newWorker method:

```
func newWorker(config *params.ChainConfig, engine consensus.Engine, coinbase common.Address, eth Backend, mux *event.TypeMux) *worker {
    worker := &worker{
        config: config,
        engine: engine,
        eth: eth,
        mux: mux,
        txCh: make(chan core.TxPreEvent, txChanSize),
        chainHeadCh: make(chan core.ChainHeadEvent, chainHeadChanSize),
        chainSideCh: make(chan core.ChainSideEvent, chainSideChanSize),
        chainDb: eth.ChainDb(),
        recv: make(chan *Result, resultQueueSize),
        chain: eth.BlockChain(),
        proc: eth.BlockChain().Validator(),
        possibleUncles: make(map[common.Hash]*types.Block),
        coinbase: coinbase,
        agents: make(map[Agent]struct{}),
        unconfirmed: newUnconfirmedBlocks(eth.BlockChain(), miningLogAtDepth),
    }
    // Subscribe TxPreEvent for tx pool
    worker.txSub = eth.TxPool().SubscribeTxPreEvent(worker.txCh)
    // Subscribe events for blockchain
    worker.chainHeadSub = eth.BlockChain().SubscribeChainHeadEvent(worker.chainHeadCh)
    worker.chainSideSub = eth.BlockChain().SubscribeChainSideEvent(worker.chainSideCh)
    go worker.update()

    go worker.wait()
    worker.commitNewWork()

    return worker
}
```


This method binds three channels and additionally starts two goroutines running the update and wait methods:

```
func (self *worker) update() {
    defer self.txSub.Unsubscribe()
    defer self.chainHeadSub.Unsubscribe()
    defer self.chainSideSub.Unsubscribe()

    for {
        // A real event arrived, process interesting content
        select {
        // Handle ChainHeadEvent
        case <-self.chainHeadCh:
            self.commitNewWork()

        // Handle ChainSideEvent
        case ev := <-self.chainSideCh:
            self.uncleMu.Lock()
            self.possibleUncles[ev.Block.Hash()] = ev.Block
            self.uncleMu.Unlock()

        // Handle TxPreEvent
        case ev := <-self.txCh:
            // Apply transaction to the pending state if we're not mining
            if atomic.LoadInt32(&self.mining) == 0 {
                self.currentMu.Lock()
                acc, _ := types.Sender(self.current.signer, ev.Tx)
                txs := map[common.Address]types.Transactions{acc: {ev.Tx}}
                txset := types.NewTransactionsByPriceAndNonce(self.current.signer, txs)

                self.current.commitTransactions(self.mux, txset, self.chain, self.coinbase)
                self.currentMu.Unlock()
            } else {
                // If we're mining, but nothing is being processed, wake on new transactions
                if self.config.Clique != nil && self.config.Clique.Period == 0 {
                    self.commitNewWork()
                }
            }

        // System stopped
        case <-self.txSub.Err():
            return
        case <-self.chainHeadSub.Err():
            return
        case <-self.chainSideSub.Err():
            return
        }
    }
}
```



worker.update receives the various events in an endless loop; if an error event occurs it terminates. On a new transaction event it runs commitTransactions to validate the transaction and commit it to the local pending-transaction set, ready for the next block.
worker.wait receives results from the self.recv channel, i.e. locally mined blocks. When one arrives it writes the block data, broadcasts a ChainHeadEvent, adds the block to the pending-confirmation list, and commits a new round of work. commitNewWork is effectively the first stage of consensus: it assembles a standard block, including the difficulty required to produce it,
and then broadcasts the work for producing that block, together with the block information, to all agents.
The update method in agent.go then picks up the broadcast work task and starts mining, racing for the right to produce this block

```
func (self *CpuAgent) mine(work *Work, stop <-chan struct{}) {
    if result, err := self.engine.Seal(self.chain, work.Block, stop); result != nil {
        log.Info("Successfully sealed new block", "number", result.Number(), "hash", result.Hash())
        self.returnCh <- &Result{work, result}
    } else {
        if err != nil {
            log.Warn("Block sealing failed", "err", err)
        }
        self.returnCh <- nil
    }
}
```

In this method the block's hash-difficulty computation runs; a non-nil result means mining succeeded, and the result is written into the returnCh channel

The wait method in worker.go then receives this message and starts processing it.


If the block was not assembled with a valid nonce/hash, writing it will return an error:

```
stat, err := self.chain.WriteBlockAndState(block, work.receipts, work.state)
if err != nil {
    log.Error("Failed writing block to chain", "err", err)
    continue
}
```
            
            
remote_agent provides a set of RPC interfaces that let remote miners mine. For example, I might have a mining rig that does not itself run an Ethereum node: the rig first fetches the current task from remote_agent, performs the mining computation, and once mining completes it submits the result, finishing the job.
### Analysis of the eth consensus algorithm, starting from a block mined by the local node


##### In today's production environments, mining is certainly not done on CPUs, so the `remoteAgent` path is what is used: the mining rig requests the current block-production task from an Ethereum node over the network,
computes a hash satisfying the block's difficulty, and submits it to the node, which corresponds to the following method.

```
func (a *RemoteAgent) SubmitWork(nonce types.BlockNonce, mixDigest, hash common.Hash) bool {
    a.mu.Lock()
    defer a.mu.Unlock()

    // Make sure the work submitted is present
    work := a.work[hash]
    if work == nil {
        log.Info("Work submitted but none pending", "hash", hash)
        return false
    }
    // Make sure the Engine solutions is indeed valid
    result := work.Block.Header()
    result.Nonce = nonce
    result.MixDigest = mixDigest

    if err := a.engine.VerifySeal(a.chain, result); err != nil {
        log.Warn("Invalid proof-of-work submitted", "hash", hash, "err", err)
        return false
    }
    block := work.Block.WithSeal(result)

    // Solutions seems to be valid, return to the miner and notify acceptance
    a.returnCh <- &Result{work, block}
    delete(a.work, hash)

    return true
}

```
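A remote rig typically drives this through the standard JSON-RPC mining endpoints: eth_getWork returns [headerHash, seedHash, target], and eth_submitWork posts back (nonce, headerHash, mixDigest), which is what lands in SubmitWork above. A sketch of the exchange in Go (the endpoint URL and the hex values are assumptions for illustration):

```
package main

import (
    "bytes"
    "fmt"
    "io/ioutil"
    "net/http"
)

// Sketch of the remote-miner handshake against a node's JSON-RPC endpoint
// (http://localhost:8545 is an assumption; the node must have RPC enabled).
func call(payload string) string {
    resp, err := http.Post("http://localhost:8545", "application/json",
        bytes.NewBufferString(payload))
    if err != nil {
        return err.Error()
    }
    defer resp.Body.Close()
    body, _ := ioutil.ReadAll(resp.Body)
    return string(body)
}

func main() {
    // Fetch the current work package: [headerHash, seedHash, target].
    fmt.Println(call(`{"jsonrpc":"2.0","method":"eth_getWork","params":[],"id":1}`))
    // Submit a solution: (nonce, headerHash, mixDigest); hex values are dummies.
    fmt.Println(call(`{"jsonrpc":"2.0","method":"eth_submitWork","params":["0x0000000000000001","0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef","0xabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcd"],"id":2}`))
}
```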

SubmitWork verifies the hash difficulty of the submitted block; if it checks out, the sealed block is written into the channel, and the receiving end of that channel is the wait method in /miner/worker.go

```
func (self *worker) wait() {
    for {
        mustCommitNewWork := true
        for result := range self.recv {
            atomic.AddInt32(&self.atWork, -1)

            if result == nil {
                continue
            }
            block := result.Block
            work := result.Work

            // Update the block hash in all logs since it is now available and not when the
            // receipt/log of individual transactions were created.
            for _, r := range work.receipts {
                for _, l := range r.Logs {
                    l.BlockHash = block.Hash()
                }
            }
            for _, log := range work.state.Logs() {
                log.BlockHash = block.Hash()
            }
            stat, err := self.chain.WriteBlockAndState(block, work.receipts, work.state)
            if err != nil {
                log.Error("Failed writing block to chain", "err", err)
                continue
            }
            // check if canon block and write transactions
            if stat == core.CanonStatTy {
                // implicit by posting ChainHeadEvent
                mustCommitNewWork = false
            }
            // Broadcast the block and announce chain insertion event
            // Propagate the block to connected peers over p2p; this too goes through a channel
            self.mux.Post(core.NewMinedBlockEvent{Block: block})
            var (
                events []interface{}
                logs = work.state.Logs()
            )
            events = append(events, core.ChainEvent{Block: block, Hash: block.Hash(), Logs: logs})
            if stat == core.CanonStatTy {
                events = append(events, core.ChainHeadEvent{Block: block})
            }
            self.chain.PostChainEvents(events, logs)

            // Insert the block into the set of pending ones to wait for confirmations
            self.unconfirmed.Insert(block.NumberU64(), block.Hash())

            if mustCommitNewWork {
                self.commitNewWork()
            }
        }
    }
}

```

Here an event for the newly mined block is posted. Following it further, the call stack is:
```
/geth/main.go/geth --> startNode --> utils.StartNode(stack)
--> stack.Start() --> /node/node.go/Start() --> service.Start(running)
--> /eth/backend.go/Start() --> /eth/handler.go/Start()

```
The core logic lives in handler.go/Start():

```
func (pm *ProtocolManager) Start(maxPeers int) {
    pm.maxPeers = maxPeers

    // broadcast transactions
    // Channel for broadcasting transactions. txCh is subscribed to the txpool's
    // TxPreEvent: whenever the txpool produces such an event it notifies txCh,
    // and the transaction-broadcast goroutine fans the message out to the network.
    pm.txCh = make(chan core.TxPreEvent, txChanSize)
    // the subscription handle
    pm.txSub = pm.txpool.SubscribeTxPreEvent(pm.txCh)
    go pm.txBroadcastLoop()

    // Subscribe to mining events: a message is produced whenever a new block is
    // mined. This subscription uses a different mechanism from the one above;
    // this older style is the one marked Deprecated.
    // broadcast mined blocks
    pm.minedBlockSub = pm.eventMux.Subscribe(core.NewMinedBlockEvent{})
    // The mined-block broadcast goroutine: a locally mined block must be
    // propagated to the network as quickly as possible, and it goes out here.
    go pm.minedBroadcastLoop()
    // The syncer periodically synchronises with the network, downloading hashes
    // and blocks as well as handling announcements.
    // start sync handlers
    go pm.syncer()
    // txsyncLoop takes care of the initial transaction sync for each new
    // connection. When a new peer appears, we relay all currently pending
    // transactions; to minimise egress bandwidth usage, we send one small
    // packet at a time.
    go pm.txsyncLoop()
}

```
Inside `pm.minedBroadcastLoop()` a channel receives the new-block message posted above; stepping in, you will find the logic that sends it to other nodes over the p2p network

```
// BroadcastBlock will either propagate a block to a subset of it's peers, or
// will only announce it's availability (depending what's requested).
func (pm *ProtocolManager) BroadcastBlock(block *types.Block, propagate bool) {
    hash := block.Hash()
    peers := pm.peers.PeersWithoutBlock(hash)

    // If propagation is requested, send to a subset of the peer
    if propagate {
        // Calculate the TD of the block (it's not imported yet, so block.Td is not valid)
        var td *big.Int
        if parent := pm.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1); parent != nil {
            td = new(big.Int).Add(block.Difficulty(), pm.blockchain.GetTd(block.ParentHash(), block.NumberU64()-1))
        } else {
            log.Error("Propagating dangling block", "number", block.Number(), "hash", hash)
            return
        }
        // Send the block to a subset of our peers
        transfer := peers[:int(math.Sqrt(float64(len(peers))))]
        for _, peer := range transfer {
            peer.SendNewBlock(block, td)
        }
        log.Trace("Propagated block", "hash", hash, "recipients", len(transfer), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
        return
    }
    // Otherwise if the block is indeed in out own chain, announce it
    if pm.blockchain.HasBlock(hash, block.NumberU64()) {
        for _, peer := range peers {
            peer.SendNewBlockHashes([]common.Hash{hash}, []uint64{block.NumberU64()})
        }
        log.Trace("Announced block", "hash", hash, "recipients", len(peers), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
    }
}
```

Two kinds of messages are sent here: `NewBlockMsg`, carrying the full block to a sqrt-sized subset of peers, and `NewBlockHashesMsg`, announcing just the hash to the rest. At this point the locally mined block starts to spread across the p2p network
Next, let's look at another important method

```
// NewProtocolManager returns a new ethereum sub protocol manager. The Ethereum sub protocol manages peers capable
// with the ethereum network.
func NewProtocolManager(config *params.ChainConfig, mode downloader.SyncMode, networkId uint64, mux *event.TypeMux, txpool txPool, engine consensus.Engine, blockchain *core.BlockChain, chaindb ethdb.Database) (*ProtocolManager, error) {
    // Create the protocol manager with the base fields
    manager := &ProtocolManager{
        networkId: networkId,
        eventMux: mux,
        txpool: txpool,
        blockchain: blockchain,
        chaindb: chaindb,
        chainconfig: config,
        peers: newPeerSet(),
        newPeerCh: make(chan *peer),
        noMorePeers: make(chan struct{}),
        txsyncCh: make(chan *txsync),
        quitSync: make(chan struct{}),
    }
    // Figure out whether to allow fast sync or not
    if mode == downloader.FastSync && blockchain.CurrentBlock().NumberU64() > 0 {
        log.Warn("Blockchain not empty, fast sync disabled")
        mode = downloader.FullSync
    }
    if mode == downloader.FastSync {
        manager.fastSync = uint32(1)
    }
    // Initiate a sub-protocol for every implemented version we can handle
    manager.SubProtocols = make([]p2p.Protocol, 0, len(ProtocolVersions))
    for i, version := range ProtocolVersions {
        // Skip protocol version if incompatible with the mode of operation
        if mode == downloader.FastSync && version < eth63 {
            continue
        }
        // Compatible; initialise the sub-protocol
        version := version // Closure for the run
        manager.SubProtocols = append(manager.SubProtocols, p2p.Protocol{
            Name: ProtocolName,
            Version: version,
            Length: ProtocolLengths[i],
            Run: func(p *p2p.Peer, rw p2p.MsgReadWriter) error {
                peer := manager.newPeer(int(version), p, rw)
                select {
                case manager.newPeerCh <- peer:
                    manager.wg.Add(1)
                    defer manager.wg.Done()
                    return manager.handle(peer)
                case <-manager.quitSync:
                    return p2p.DiscQuitting
                }
            },
            NodeInfo: func() interface{} {
                return manager.NodeInfo()
            },
            PeerInfo: func(id discover.NodeID) interface{} {
                if p := manager.peers.Peer(fmt.Sprintf("%x", id[:8])); p != nil {
                    return p.Info()
                }
                return nil
            },
        })
    }
    if len(manager.SubProtocols) == 0 {
        return nil, errIncompatibleConfig
    }
    // The downloader is responsible for synchronising our own data from other peers;
    // it is the full-chain synchronisation tool.
    // Construct the different synchronisation mechanisms
    manager.downloader = downloader.New(mode, chaindb, manager.eventMux, blockchain, nil, manager.removePeer)

    validator := func(header *types.Header) error {
        return engine.VerifyHeader(blockchain, header, true)
    }
    heighter := func() uint64 {
        return blockchain.CurrentBlock().NumberU64()
    }
    inserter := func(blocks types.Blocks) (int, error) {
        // If fast sync is running, deny importing weird blocks
        if atomic.LoadUint32(&manager.fastSync) == 1 {
            log.Warn("Discarded bad propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash())
            return 0, nil
        }
        // mark that we can start accepting transactions
        atomic.StoreUint32(&manager.acceptTxs, 1) // Mark initial sync done on any fetcher import
        return manager.blockchain.InsertChain(blocks)
    }
    // Create a fetcher.
    // The fetcher accumulates block announcements from various peers and schedules their retrieval.
    manager.fetcher = fetcher.New(blockchain.GetBlockByHash, validator, manager.BroadcastBlock, heighter, inserter, manager.removePeer)

    return manager, nil
}

```
This method manages the multiple sub-protocols under the Ethereum protocol. The `Run` function is invoked for every connecting peer while the node runs, and as you can see it blocks; stepping into the `handle` method reveals this key piece of code

```
for {
        if err := pm.handleMsg(p); err != nil {
            p.Log().Debug("Ethereum message handling failed", "err", err)
            return err
        }
    }

```

An endless loop processing messages arriving over the p2p network; next, look at the `handleMsg` method

```
func (pm *ProtocolManager) handleMsg(p *peer) error {
    // Read the next message from the remote peer, and ensure it's fully consumed
    msg, err := p.rw.ReadMsg()
    if err != nil {
        return err
    }
    if msg.Size > ProtocolMaxMsgSize {
        return errResp(ErrMsgTooLarge, "%v > %v", msg.Size, ProtocolMaxMsgSize)
    }
    defer msg.Discard()

    // Handle the message depending on its contents
    switch {
    case msg.Code == StatusMsg:
        // Status messages should never arrive after the handshake
        return errResp(ErrExtraStatusMsg, "uncontrolled status message")

    // Block header query, collect the requested headers and reply
    case msg.Code == GetBlockHeadersMsg:
        // Decode the complex header query
        var query getBlockHeadersData
        if err := msg.Decode(&query); err != nil {
            return errResp(ErrDecode, "%v: %v", msg, err)
        }
        hashMode := query.Origin.Hash != (common.Hash{})

        // Gather headers until the fetch or network limits is reached
        var (
            bytes common.StorageSize
            headers []*types.Header
            unknown bool
        )
        for !unknown && len(headers) < int(query.Amount) && bytes < softResponseLimit && len(headers) < downloader.MaxHeaderFetch {
            // Retrieve the next header satisfying the query
            var origin *types.Header
            if hashMode {
                origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
            } else {
                origin = pm.blockchain.GetHeaderByNumber(query.Origin.Number)
            }
            if origin == nil {
                break
            }
            number := origin.Number.Uint64()
            headers = append(headers, origin)
            bytes += estHeaderRlpSize

            // Advance to the next header of the query
            switch {
            case query.Origin.Hash != (common.Hash{}) && query.Reverse:
                // Hash based traversal towards the genesis block
                for i := 0; i < int(query.Skip)+1; i++ {
                    if header := pm.blockchain.GetHeader(query.Origin.Hash, number); header != nil {
                        query.Origin.Hash = header.ParentHash
                        number--
                    } else {
                        unknown = true
                        break
                    }
                }
            case query.Origin.Hash != (common.Hash{}) && !query.Reverse:
                // Hash based traversal towards the leaf block
                var (
                    current = origin.Number.Uint64()
                    next = current + query.Skip + 1
                )
                if next <= current {
                    infos, _ := json.MarshalIndent(p.Peer.Info(), "", " ")
                    p.Log().Warn("GetBlockHeaders skip overflow attack", "current", current, "skip", query.Skip, "next", next, "attacker", infos)
                    unknown = true
                } else {
                    if header := pm.blockchain.GetHeaderByNumber(next); header != nil {
                        if pm.blockchain.GetBlockHashesFromHash(header.Hash(), query.Skip+1)[query.Skip] == query.Origin.Hash {
                            query.Origin.Hash = header.Hash()
                        } else {
                            unknown = true
                        }
                    } else {
                        unknown = true
                    }
                }
            case query.Reverse:
                // Number based traversal towards the genesis block
                if query.Origin.Number >= query.Skip+1 {
                    query.Origin.Number -= (query.Skip + 1)
                } else {
                    unknown = true
                }

            case !query.Reverse:
                // Number based traversal towards the leaf block
                query.Origin.Number += (query.Skip + 1)
            }
        }
        return p.SendBlockHeaders(headers)

    case msg.Code == BlockHeadersMsg:
        // A batch of headers arrived to one of our previous requests
        var headers []*types.Header
        if err := msg.Decode(&headers); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // If no headers were received, but we're expending a DAO fork check, maybe it's that
        if len(headers) == 0 && p.forkDrop != nil {
            // Possibly an empty reply to the fork header checks, sanity check TDs
            verifyDAO := true

            // If we already have a DAO header, we can check the peer's TD against it. If
            // the peer's ahead of this, it too must have a reply to the DAO check
            if daoHeader := pm.blockchain.GetHeaderByNumber(pm.chainconfig.DAOForkBlock.Uint64()); daoHeader != nil {
                if _, td := p.Head(); td.Cmp(pm.blockchain.GetTd(daoHeader.Hash(), daoHeader.Number.Uint64())) >= 0 {
                    verifyDAO = false
                }
            }
            // If we're seemingly on the same chain, disable the drop timer
            if verifyDAO {
                p.Log().Debug("Seems to be on the same side of the DAO fork")
                p.forkDrop.Stop()
                p.forkDrop = nil
                return nil
            }
        }
        // Filter out any explicitly requested headers, deliver the rest to the downloader
        filter := len(headers) == 1
        if filter {
            // If it's a potential DAO fork check, validate against the rules
            if p.forkDrop != nil && pm.chainconfig.DAOForkBlock.Cmp(headers[0].Number) == 0 {
                // Disable the fork drop timer
                p.forkDrop.Stop()
                p.forkDrop = nil

                // Validate the header and either drop the peer or continue
                if err := misc.VerifyDAOHeaderExtraData(pm.chainconfig, headers[0]); err != nil {
                    p.Log().Debug("Verified to be on the other side of the DAO fork, dropping")
                    return err
                }
                p.Log().Debug("Verified to be on the same side of the DAO fork")
                return nil
            }
            // Irrelevant of the fork checks, send the header to the fetcher just in case
            headers = pm.fetcher.FilterHeaders(p.id, headers, time.Now())
        }
        if len(headers) > 0 || !filter {
            err := pm.downloader.DeliverHeaders(p.id, headers)
            if err != nil {
                log.Debug("Failed to deliver headers", "err", err)
            }
        }

    case msg.Code == GetBlockBodiesMsg:
        // Decode the retrieval message
        msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
        if _, err := msgStream.List(); err != nil {
            return err
        }
        // Gather blocks until the fetch or network limits is reached
        var (
            hash common.Hash
            bytes int
            bodies []rlp.RawValue
        )
        for bytes < softResponseLimit && len(bodies) < downloader.MaxBlockFetch {
            // Retrieve the hash of the next block
            if err := msgStream.Decode(&hash); err == rlp.EOL {
                break
            } else if err != nil {
                return errResp(ErrDecode, "msg %v: %v", msg, err)
            }
            // Retrieve the requested block body, stopping if enough was found
            if data := pm.blockchain.GetBodyRLP(hash); len(data) != 0 {
                bodies = append(bodies, data)
                bytes += len(data)
            }
        }
        return p.SendBlockBodiesRLP(bodies)

    case msg.Code == BlockBodiesMsg:
        // A batch of block bodies arrived to one of our previous requests
        var request blockBodiesData
        if err := msg.Decode(&request); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // Deliver them all to the downloader for queuing
        trasactions := make([][]*types.Transaction, len(request))
        uncles := make([][]*types.Header, len(request))

        for i, body := range request {
            trasactions[i] = body.Transactions
            uncles[i] = body.Uncles
        }
        // Filter out any explicitly requested bodies, deliver the rest to the downloader
        filter := len(trasactions) > 0 || len(uncles) > 0
        if filter {
            trasactions, uncles = pm.fetcher.FilterBodies(p.id, trasactions, uncles, time.Now())
        }
        if len(trasactions) > 0 || len(uncles) > 0 || !filter {
            err := pm.downloader.DeliverBodies(p.id, trasactions, uncles)
            if err != nil {
                log.Debug("Failed to deliver bodies", "err", err)
            }
        }

    case p.version >= eth63 && msg.Code == GetNodeDataMsg:
        // Decode the retrieval message
        msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
        if _, err := msgStream.List(); err != nil {
            return err
        }
        // Gather state data until the fetch or network limits is reached
        var (
            hash common.Hash
            bytes int
            data [][]byte
        )
        for bytes < softResponseLimit && len(data) < downloader.MaxStateFetch {
            // Retrieve the hash of the next state entry
            if err := msgStream.Decode(&hash); err == rlp.EOL {
                break
            } else if err != nil {
                return errResp(ErrDecode, "msg %v: %v", msg, err)
            }
            // Retrieve the requested state entry, stopping if enough was found
            if entry, err := pm.chaindb.Get(hash.Bytes()); err == nil {
                data = append(data, entry)
                bytes += len(entry)
            }
        }
        return p.SendNodeData(data)

    case p.version >= eth63 && msg.Code == NodeDataMsg:
        // A batch of node state data arrived to one of our previous requests
        var data [][]byte
        if err := msg.Decode(&data); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // Deliver all to the downloader
        if err := pm.downloader.DeliverNodeData(p.id, data); err != nil {
            log.Debug("Failed to deliver node state data", "err", err)
        }

    case p.version >= eth63 && msg.Code == GetReceiptsMsg:
        // Decode the retrieval message
        msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
        if _, err := msgStream.List(); err != nil {
            return err
        }
        // Gather state data until the fetch or network limits is reached
        var (
            hash common.Hash
            bytes int
            receipts []rlp.RawValue
        )
        for bytes < softResponseLimit && len(receipts) < downloader.MaxReceiptFetch {
            // Retrieve the hash of the next block
            if err := msgStream.Decode(&hash); err == rlp.EOL {
                break
            } else if err != nil {
                return errResp(ErrDecode, "msg %v: %v", msg, err)
            }
            // Retrieve the requested block's receipts, skipping if unknown to us
            results := core.GetBlockReceipts(pm.chaindb, hash, core.GetBlockNumber(pm.chaindb, hash))
            if results == nil {
                if header := pm.blockchain.GetHeaderByHash(hash); header == nil || header.ReceiptHash != types.EmptyRootHash {
                    continue
                }
            }
            // If known, encode and queue for response packet
            if encoded, err := rlp.EncodeToBytes(results); err != nil {
                log.Error("Failed to encode receipt", "err", err)
            } else {
                receipts = append(receipts, encoded)
                bytes += len(encoded)
            }
        }
        return p.SendReceiptsRLP(receipts)

    case p.version >= eth63 && msg.Code == ReceiptsMsg:
        // A batch of receipts arrived to one of our previous requests
        var receipts [][]*types.Receipt
        if err := msg.Decode(&receipts); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // Deliver all to the downloader
        if err := pm.downloader.DeliverReceipts(p.id, receipts); err != nil {
            log.Debug("Failed to deliver receipts", "err", err)
        }

    case msg.Code == NewBlockHashesMsg:
        var announces newBlockHashesData
        if err := msg.Decode(&announces); err != nil {
            return errResp(ErrDecode, "%v: %v", msg, err)
        }
        // Mark the hashes as present at the remote node
        for _, block := range announces {
            p.MarkBlock(block.Hash)
        }
        // Schedule all the unknown hashes for retrieval
        unknown := make(newBlockHashesData, 0, len(announces))
        for _, block := range announces {
            if !pm.blockchain.HasBlock(block.Hash, block.Number) {
                unknown = append(unknown, block)
            }
        }
        for _, block := range unknown {
            pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)
        }

    case msg.Code == NewBlockMsg:
        // Retrieve and decode the propagated block
        var request newBlockData
        if err := msg.Decode(&request); err != nil {
            return errResp(ErrDecode, "%v: %v", msg, err)
        }
        request.Block.ReceivedAt = msg.ReceivedAt
        request.Block.ReceivedFrom = p

        // Mark the peer as owning the block and schedule it for import
        p.MarkBlock(request.Block.Hash())
        pm.fetcher.Enqueue(p.id, request.Block)

        // Assuming the block is importable by the peer, but possibly not yet done so,
        // calculate the head hash and TD that the peer truly must have.
        var (
            trueHead = request.Block.ParentHash()
            trueTD = new(big.Int).Sub(request.TD, request.Block.Difficulty())
        )
        // Update the peers total difficulty if better than the previous
        if _, td := p.Head(); trueTD.Cmp(td) > 0 {
            p.SetHead(trueHead, trueTD)

            // Schedule a sync if above ours. Note, this will not fire a sync for a gap of
            // a singe block (as the true TD is below the propagated block), however this
            // scenario should easily be covered by the fetcher.
            currentBlock := pm.blockchain.CurrentBlock()
            if trueTD.Cmp(pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64())) > 0 {
                go pm.synchronise(p)
            }
        }

    case msg.Code == TxMsg:
        // Transactions arrived, make sure we have a valid and fresh chain to handle them
        if atomic.LoadUint32(&pm.acceptTxs) == 0 {
            break
        }
        // Transactions can be processed, parse all of them and deliver to the pool
        var txs []*types.Transaction
        if err := msg.Decode(&txs); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        for i, tx := range txs {
            // Validate and mark the remote transaction
            if tx == nil {
                return errResp(ErrDecode, "transaction %d is nil", i)
            }
            p.MarkTransaction(tx.Hash())
        }
        pm.txpool.AddRemotes(txs)

    default:
        return errResp(ErrInvalidMsgCode, "%v", msg.Code)
    }
    return nil
}
```

This method decodes the messages arriving over the p2p network and handles, among others, the two message types `NewBlockMsg` and `NewBlockHashesMsg`. For `NewBlockMsg` the handling logic hands the block straight to the local fetcher via `pm.fetcher.Enqueue(p.id, request.Block)`; the corresponding channel is `f.inject`, behind which sits a queue: the enqueue method in /fetcher.go pushes the block into a priority queue ordered by block number (not a plain FIFO; see below)

```
func (f *Fetcher) enqueue(peer string, block *types.Block) {
    hash := block.Hash()

    // Ensure the peer isn't DOSing us
    count := f.queues[peer] + 1
    if count > blockLimit {
        log.Debug("Discarded propagated block, exceeded allowance", "peer", peer, "number", block.Number(), "hash", hash, "limit", blockLimit)
        propBroadcastDOSMeter.Mark(1)
        f.forgetHash(hash)
        return
    }
    // Discard any past or too distant blocks
    if dist := int64(block.NumberU64()) - int64(f.chainHeight()); dist < -maxUncleDist || dist > maxQueueDist {
        log.Debug("Discarded propagated block, too far away", "peer", peer, "number", block.Number(), "hash", hash, "distance", dist)
        propBroadcastDropMeter.Mark(1)
        f.forgetHash(hash)
        return
    }
    // Schedule the block for future importing
    if _, ok := f.queued[hash]; !ok {
        op := &inject{
            origin: peer,
            block: block,
        }
        f.queues[peer] = count
        f.queued[hash] = op
        f.queue.Push(op, -float32(block.NumberU64()))
        if f.queueChangeHook != nil {
            f.queueChangeHook(op.block.Hash(), true)
        }
        log.Debug("Queued propagated block", "peer", peer, "number", block.Number(), "hash", hash, "queued", f.queue.Size())
    }
}

```
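Note the trick in `f.queue.Push(op, -float32(block.NumberU64()))`: the fetcher's queue is a priority queue, and negating the block number makes the lowest-numbered block pop first, so propagated blocks are imported in chain order. The same idea demonstrated with the standard library's container/heap:

```
package main

import (
    "container/heap"
    "fmt"
)

// The fetcher stores blocks in a max-priority queue but pushes them with
// priority -blockNumber, so the lowest-numbered block surfaces first.
type item struct {
    number   uint64
    priority float32 // -float32(number), as in fetcher.enqueue
}

type pq []item

func (q pq) Len() int            { return len(q) }
func (q pq) Less(i, j int) bool  { return q[i].priority > q[j].priority } // max-heap
func (q pq) Swap(i, j int)       { q[i], q[j] = q[j], q[i] }
func (q *pq) Push(x interface{}) { *q = append(*q, x.(item)) }
func (q *pq) Pop() interface{} {
    old := *q
    it := old[len(old)-1]
    *q = old[:len(old)-1]
    return it
}

func main() {
    q := &pq{}
    heap.Init(q)
    for _, n := range []uint64{105, 101, 103} {
        heap.Push(q, item{number: n, priority: -float32(n)})
    }
    for q.Len() > 0 {
        fmt.Println(heap.Pop(q).(item).number) // prints 101, 103, 105
    }
}
```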

The consumer of this queue is in /fetcher.go/loop, an endless loop; the core code:

```
for !f.queue.Empty() {
            op := f.queue.PopItem().(*inject)
            if f.queueChangeHook != nil {
                f.queueChangeHook(op.block.Hash(), false)
            }
            // If too high up the chain or phase, continue later
            number := op.block.NumberU64()
            if number > height+1 {
                f.queue.Push(op, -float32(op.block.NumberU64()))
                if f.queueChangeHook != nil {
                    f.queueChangeHook(op.block.Hash(), true)
                }
                break
            }
            // Otherwise if fresh and still unknown, try and import
            hash := op.block.Hash()
            if number+maxUncleDist < height || f.getBlock(hash) != nil {
                f.forgetBlock(hash)
                continue
            }
            f.insert(op.origin, op.block)
        }

```

Blocks are popped off the queue; next, look at the `insert` method

```
func (f *Fetcher) insert(peer string, block *types.Block) {
    hash := block.Hash()

    // Run the import on a new thread
    log.Debug("Importing propagated block", "peer", peer, "number", block.Number(), "hash", hash)
    go func() {
        defer func() { f.done <- hash }()

        // If the parent's unknown, abort insertion
        parent := f.getBlock(block.ParentHash())
        if parent == nil {
            log.Debug("Unknown parent of propagated block", "peer", peer, "number", block.Number(), "hash", hash, "parent", block.ParentHash())
            return
        }
        // Quickly validate the header and propagate the block if it passes
        switch err := f.verifyHeader(block.Header()); err {
        case nil:
            // All ok, quickly propagate to our peers
            propBroadcastOutTimer.UpdateSince(block.ReceivedAt)
            go f.broadcastBlock(block, true)

        case consensus.ErrFutureBlock:
            // Weird future block, don't fail, but neither propagate

        default:
            // Something went very wrong, drop the peer
            log.Debug("Propagated block verification failed", "peer", peer, "number", block.Number(), "hash", hash, "err", err)
            f.dropPeer(peer)
            return
        }
        // Run the actual import and log any issues
        if _, err := f.insertChain(types.Blocks{block}); err != nil {
            log.Debug("Propagated block import failed", "peer", peer, "number", block.Number(), "hash", hash, "err", err)
            return
        }
        // If import succeeded, broadcast the block
        propAnnounceOutTimer.UpdateSince(block.ReceivedAt)
        go f.broadcastBlock(block, false)

        // Invoke the testing hook if needed
        if f.importedHook != nil {
            f.importedHook(block)
        }
    }()
}


```

As you can see, this method calls `verifyHeader` to validate the block and, if it passes, propagates the block to peers over p2p; it then calls `insertChain` to insert the block into the local leveldb, and if the insert succeeds it broadcasts once more, this time announcing only the block's hash.
In this way, through the peer-to-peer network, any valid block is eventually adopted by the whole network. The `verifyHeader` and `insertChain` functions used here are the ones defined and passed in from `/handler.go/NewProtocolManager`, and all the start-up logic lives in `handler.go/Start`.
The start method of fetcher.go is triggered on a separate goroutine inside the `syncer` method.

`/handler.go/handleMsg --> go pm.synchronise(p) --> pm.downloader.Synchronise(peer.id, pHead, pTd, mode) --> d.synchronise(id, head, td, mode)
--> d.syncWithPeer(p, hash, td)`; let's look at the core method

```
func (d *Downloader) syncWithPeer(p *peerConnection, hash common.Hash, td *big.Int) (err error) {
    d.mux.Post(StartEvent{})
    defer func() {
        // reset on error
        if err != nil {
            d.mux.Post(FailedEvent{err})
        } else {
            d.mux.Post(DoneEvent{})
        }
    }()
    if p.version < 62 {
        return errTooOld
    }

    log.Debug("Synchronising with the network", "peer", p.id, "eth", p.version, "head", hash, "td", td, "mode", d.mode)
    defer func(start time.Time) {
        log.Debug("Synchronisation terminated", "elapsed", time.Since(start))
    }(time.Now())

    // Look up the sync boundaries: the common ancestor and the target block
    latest, err := d.fetchHeight(p)
    if err != nil {
        return err
    }
    height := latest.Number.Uint64()

    origin, err := d.findAncestor(p, height)
    if err != nil {
        return err
    }
    d.syncStatsLock.Lock()
    if d.syncStatsChainHeight <= origin || d.syncStatsChainOrigin > origin {
        d.syncStatsChainOrigin = origin
    }
    d.syncStatsChainHeight = height
    d.syncStatsLock.Unlock()

    // Initiate the sync using a concurrent header and content retrieval algorithm
    pivot := uint64(0)
    switch d.mode {
    case LightSync:
        pivot = height
    case FastSync:
        // Calculate the new fast/slow sync pivot point
        if d.fsPivotLock == nil {
            pivotOffset, err := rand.Int(rand.Reader, big.NewInt(int64(fsPivotInterval)))
            if err != nil {
                panic(fmt.Sprintf("Failed to access crypto random source: %v", err))
            }
            if height > uint64(fsMinFullBlocks)+pivotOffset.Uint64() {
                pivot = height - uint64(fsMinFullBlocks) - pivotOffset.Uint64()
            }
        } else {
            // Pivot point locked in, use this and do not pick a new one!
            pivot = d.fsPivotLock.Number.Uint64()
        }
        // If the point is below the origin, move origin back to ensure state download
        if pivot < origin {
            if pivot > 0 {
                origin = pivot - 1
            } else {
                origin = 0
            }
        }
        log.Debug("Fast syncing until pivot block", "pivot", pivot)
    }
    d.queue.Prepare(origin+1, d.mode, pivot, latest)
    if d.syncInitHook != nil {
        d.syncInitHook(origin, height)
    }

    fetchers := []func() error{
        func() error { return d.fetchHeaders(p, origin+1) }, // Headers are always retrieved
        func() error { return d.fetchBodies(origin + 1) }, // Bodies are retrieved during normal and fast sync
        func() error { return d.fetchReceipts(origin + 1) }, // Receipts are retrieved during fast sync
        func() error { return d.processHeaders(origin+1, td) },
    }
    if d.mode == FastSync {
        fetchers = append(fetchers, func() error { return d.processFastSyncContent(latest) })
    } else if d.mode == FullSync {
        fetchers = append(fetchers, d.processFullSyncContent)
    }
    err = d.spawnSync(fetchers)
    if err != nil && d.mode == FastSync && d.fsPivotLock != nil {
        // If sync failed in the critical section, bump the fail counter.
        atomic.AddUint32(&d.fsPivotFails, 1)
    }
    return err
}
```

Since the whole call stack above is triggered from the `NewBlockMsg` case, the `StartEvent` here is delivered through a channel to miner.go/update
```
func (self *Miner) update() {
    events := self.mux.Subscribe(downloader.StartEvent{}, downloader.DoneEvent{}, downloader.FailedEvent{})
out:
    for ev := range events.Chan() {
        switch ev.Data.(type) {
        case downloader.StartEvent:
            atomic.StoreInt32(&self.canStart, 0)
            if self.Mining() {
                self.Stop()
                atomic.StoreInt32(&self.shouldStart, 1)
                log.Info("Mining aborted due to sync")
            }
        case downloader.DoneEvent, downloader.FailedEvent:
            shouldStart := atomic.LoadInt32(&self.shouldStart) == 1

            atomic.StoreInt32(&self.canStart, 1)
            atomic.StoreInt32(&self.shouldStart, 0)
            if shouldStart {
                self.Start(self.coinbase)
            }
            // unsubscribe. we're only interested in this event once
            events.Unsubscribe()
            // stop immediately and ignore all further pending events
            break out
        }
    }
}

```

As you can see, on receiving this `StartEvent` all agents are notified and `stop` is called to abort mining on the now-duplicate block; see the `stop` method in `remote_agent`


Finally, let's look at how the new block is broadcast to other nodes; it is handled in `/eth/handler.go/BroadcastBlock`

```
func (pm *ProtocolManager) BroadcastBlock(block *types.Block, propagate bool) {
    hash := block.Hash()
    peers := pm.peers.PeersWithoutBlock(hash)

    // If propagation is requested, send to a subset of the peer
    if propagate {
        // Calculate the TD of the block (it's not imported yet, so block.Td is not valid)
        var td *big.Int
        if parent := pm.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1); parent != nil {
            td = new(big.Int).Add(block.Difficulty(), pm.blockchain.GetTd(block.ParentHash(), block.NumberU64()-1))
        } else {
            log.Error("Propagating dangling block", "number", block.Number(), "hash", hash)
            return
        }
        // Send the block to a subset of our peers
        transfer := peers[:int(math.Sqrt(float64(len(peers))))]
        for _, peer := range transfer {
            peer.SendNewBlock(block, td)
        }
        log.Trace("Propagated block", "hash", hash, "recipients", len(transfer), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
        return
    }
    // Otherwise if the block is indeed in out own chain, announce it
    if pm.blockchain.HasBlock(hash, block.NumberU64()) {
        for _, peer := range peers {
            peer.SendNewBlockHashes([]common.Hash{hash}, []uint64{block.NumberU64()})
        }
        log.Trace("Announced block", "hash", hash, "recipients", len(peers), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
    }
}

```


As you can see, this method loops over the connected peers (a sqrt-sized subset for full propagation) and calls `peer.SendNewBlock` on each to send the new-block message

```
func (p *peer) SendNewBlock(block *types.Block, td *big.Int) error {
    p.knownBlocks.Add(block.Hash())
    return p2p.Send(p.rw, NewBlockMsg, []interface{}{block, td})
}

```

```
func Send(w MsgWriter, msgcode uint64, data interface{}) error {
    size, r, err := rlp.EncodeToReader(data)
    if err != nil {
        return err
    }
    return w.WriteMsg(Msg{Code: msgcode, Size: uint32(size), Payload: r})
}

```
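To make the wire payload concrete, here is a minimal standalone sketch using go-ethereum's rlp package (assuming the go-ethereum module is available); it encodes a small list the same way `p2p.Send` encodes `[]interface{}{block, td}`:

```
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// p2p.Send RLP-encodes its payload before framing it with a message code.
	enc, err := rlp.EncodeToBytes([]interface{}{uint64(42), "hello"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("rlp: %x\n", enc) // a two-item RLP list
}
```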

`p2p.Send` hands the framed message to `WriteMsg`, which writes it out to the peer; the implementation here is `(rw *netWrapper) WriteMsg(msg Msg)`:

```
func (rw *netWrapper) WriteMsg(msg Msg) error {
    rw.wmu.Lock()
    defer rw.wmu.Unlock()
    rw.conn.SetWriteDeadline(time.Now().Add(rw.wtimeout))
    return rw.wrapped.WriteMsg(msg)
}

```

This method takes the write lock, arms a write deadline, and then calls down into net.go's `Write(b []byte) (n int, err error)`, shipping the bytes over the network to the peer (a standalone illustration of the deadline pattern follows). On the receiving end the message is picked up via `ReadMsg`, shown in the block after the sketch.
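A minimal, self-contained demonstration of the deadline pattern (using `net.Pipe` purely for illustration; this is not geth code):

```
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	a, b := net.Pipe() // synchronous in-memory connection
	defer a.Close()
	defer b.Close()

	// Same pattern as netWrapper.WriteMsg: arm a deadline, then write.
	// Nobody reads from b, so the write blocks until the deadline fires.
	a.SetWriteDeadline(time.Now().Add(50 * time.Millisecond))
	if _, err := a.Write([]byte("msg")); err != nil {
		fmt.Println("write failed:", err) // i/o timeout
	}
}
```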

```
func (pm *ProtocolManager) handleMsg(p *peer) error {
    // Read the next message from the remote peer, and ensure it's fully consumed
    msg, err := p.rw.ReadMsg()
    if err != nil {
        return err
    }
    if msg.Size > ProtocolMaxMsgSize {
        return errResp(ErrMsgTooLarge, "%v > %v", msg.Size, ProtocolMaxMsgSize)
    }
    defer msg.Discard()


```

Here the message coming in off the network is read and then dispatched on its `msg.Code`. Because `handleMsg` is called inside an infinite loop, the node keeps receiving whatever its peers broadcast:

```
//eth/handler.go
func (pm *ProtocolManager) handle(p *peer) error {
    td, head, genesis := pm.blockchain.Status()
    p.Handshake(pm.networkId, td, head, genesis)

    if rw, ok := p.rw.(*meteredMsgReadWriter); ok {
        rw.Init(p.version)
    }

    pm.peers.Register(p)
    defer pm.removePeer(p.id)

    pm.downloader.RegisterPeer(p.id, p.version, p)

    pm.syncTransactions(p)
    ...
    for {
        if err := pm.handleMsg(p); err != nil {
            return err
        }
    }
}


```

handle() does the following for each new peer:
- Handshake: exchange our blockchain status (td, head, genesis) with the remote peer.
- Initialize a read/write channel used to exchange data with the peer.
- Register the peer into our peer set; it is only removed from the set when handle() returns.
- Register the new peer with the Downloader, which maintains its own list of neighbouring peers.
- Call syncTransactions() to bundle the transactions currently pending in the txpool into a txsync{} object and push it onto the internal channel txsyncCh. Remember the four goroutines launched from Start()? The fourth, txsyncLoop(), is exactly the consumer waiting on txsyncCh (see the sketch after this list).
- Run handleMsg() in an infinite loop, so that any msg the remote peer sends is caught and processed according to its type.
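A sketch of that txsync plumbing (shape per the eth/sync.go of this go-ethereum generation; treat the details as an assumption):

```
// eth/sync.go (sketch): bundle one peer's view of our pending txpool and hand
// it to txsyncLoop via the txsyncCh channel.
type txsync struct {
	p   *peer
	txs types.Transactions
}

func (pm *ProtocolManager) syncTransactions(p *peer) {
	var txs types.Transactions
	pending, _ := pm.txpool.Pending()
	for _, batch := range pending {
		txs = append(txs, batch...)
	}
	if len(txs) == 0 {
		return
	}
	select {
	case pm.txsyncCh <- &txsync{p, txs}:
	case <-pm.quitSync:
	}
}
```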
### Analysis of eth's consensus algorithm, starting from a block mined by the local node


##### In today's production environment nobody mines with the CPU, so mining runs through the `RemoteAgent` path: mining rigs request the current block-sealing task from an Ethereum node over the network, search for a hash that satisfies the block's difficulty, and submit the solution back to the node, which is handled by the following method.

```
func (a *RemoteAgent) SubmitWork(nonce types.BlockNonce, mixDigest, hash common.Hash) bool {
    a.mu.Lock()
    defer a.mu.Unlock()

    // Make sure the work submitted is present
    work := a.work[hash]
    if work == nil {
        log.Info("Work submitted but none pending", "hash", hash)
        return false
    }
    // Make sure the Engine solutions is indeed valid
    result := work.Block.Header()
    result.Nonce = nonce
    result.MixDigest = mixDigest

    if err := a.engine.VerifySeal(a.chain, result); err != nil {
        log.Warn("Invalid proof-of-work submitted", "hash", hash, "err", err)
        return false
    }
    block := work.Block.WithSeal(result)

    // Solutions seems to be valid, return to the miner and notify acceptance
    a.returnCh <- &Result{work, block}
    delete(a.work, hash)

    return true
}

```
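The other half of this exchange is `RemoteAgent.GetWork`, which backs the `eth_getWork` RPC. The work package handed to the rig consists of three 32-byte values: the pow-hash of the candidate header, the seed hash identifying the current DAG epoch, and the boundary target. The target is derived from the difficulty, and it is the same bound `VerifySeal` checks the ethash output against; a standalone sketch of that relation (the difficulty value is purely illustrative):

```
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// Boundary target = 2^256 / difficulty. A submitted nonce is valid iff the
	// ethash result, read as a 256-bit big-endian integer, is <= this target.
	difficulty := big.NewInt(3000000000)
	maxUint256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
	target := new(big.Int).Div(maxUint256, difficulty)
	fmt.Printf("target = %064x\n", target)
}
```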

This method verifies the PoW difficulty of the submitted solution; if it checks out, the sealed block is written to a channel. The receiving side of that channel is the `wait` method in /miner/worker.go, shown after the sketch below.
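The payload flowing through `returnCh`/`recv` is roughly this pair (field names per the miner/worker.go of this generation; treat as an assumption):

```
// miner/worker.go (sketch): what SubmitWork pushes into returnCh and what
// wait() receives on self.recv.
type Result struct {
	Work  *Work        // the work package the solution belongs to
	Block *types.Block // the block re-sealed with the submitted nonce/mixDigest
}
```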

```
func (self *worker) wait() {
    for {
        mustCommitNewWork := true
        for result := range self.recv {
            atomic.AddInt32(&self.atWork, -1)

            if result == nil {
                continue
            }
            block := result.Block
            work := result.Work

            // Update the block hash in all logs since it is now available and not when the
            // receipt/log of individual transactions were created.
            for _, r := range work.receipts {
                for _, l := range r.Logs {
                    l.BlockHash = block.Hash()
                }
            }
            for _, log := range work.state.Logs() {
                log.BlockHash = block.Hash()
            }
            stat, err := self.chain.WriteBlockAndState(block, work.receipts, work.state)
            if err != nil {
                log.Error("Failed writing block to chain", "err", err)
                continue
            }
            // check if canon block and write transactions
            if stat == core.CanonStatTy {
                // implicit by posting ChainHeadEvent
                mustCommitNewWork = false
            }
            // Broadcast the block and announce chain insertion event
            // (the block is broadcast to connected peers over p2p; this, too, goes through a channel)
            self.mux.Post(core.NewMinedBlockEvent{Block: block})
            var (
                events []interface{}
                logs = work.state.Logs()
            )
            events = append(events, core.ChainEvent{Block: block, Hash: block.Hash(), Logs: logs})
            if stat == core.CanonStatTy {
                events = append(events, core.ChainHeadEvent{Block: block})
            }
            self.chain.PostChainEvents(events, logs)

            // Insert the block into the set of pending ones to wait for confirmations
            self.unconfirmed.Insert(block.NumberU64(), block.Hash())

            if mustCommitNewWork {
                self.commitNewWork()
            }
        }
    }
}

```

A new-mined-block event is posted here. Following the thread, the call stack is:
```
/geth/main.go/geth --> startNode --> utils.StartNode(stack)
--> stack.Start() --> /node/node.go/Start() --> service.Start(running)
--> /eth/backend.go/Start() --> /eth/handler.go/Start()

```
The core logic lives in handler.go/Start():

```
func (pm *ProtocolManager) Start(maxPeers int) {
    pm.maxPeers = maxPeers

    // broadcast transactions
    // Transaction broadcast channel: txCh is subscribed to the txpool's
    // TxPreEvent, so the txpool notifies this channel and the broadcast
    // goroutine relays each transaction to the network.
    pm.txCh = make(chan core.TxPreEvent, txChanSize)
    // subscription handle
    pm.txSub = pm.txpool.SubscribeTxPreEvent(pm.txCh)
    go pm.txBroadcastLoop()

    // Subscribe to mined-block events, fired whenever a new block is sealed.
    // Unlike the subscription above, this one uses the older TypeMux style,
    // which is marked as deprecated.
    // broadcast mined blocks
    pm.minedBlockSub = pm.eventMux.Subscribe(core.NewMinedBlockEvent{})
    // Mined-block broadcast goroutine: locally mined blocks must reach the
    // network as fast as possible, and this is the path they take.
    go pm.minedBroadcastLoop()
    // The syncer periodically synchronises with the network, downloading
    // hashes and blocks and driving the announcement handler.
    // start sync handlers
    go pm.syncer()
    // txsyncLoop performs the initial transaction sync for each new peer:
    // when a peer connects we forward all currently pending transactions,
    // sending small packets to minimise egress bandwidth.
    go pm.txsyncLoop()
}

```
`pm.minedBroadcastLoop()` is where the mined-block event posted above is received from the channel; a sketch of it follows, and stepping into it leads to the logic that sends the block to peers over the p2p network.
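Roughly (reconstructed from the eth/handler.go of this generation; hedged):

```
// eth/handler.go (sketch): drain the mined-block subscription and broadcast.
func (pm *ProtocolManager) minedBroadcastLoop() {
	// automatically stops when the subscription is unsubscribed
	for obj := range pm.minedBlockSub.Chan() {
		switch ev := obj.Data.(type) {
		case core.NewMinedBlockEvent:
			pm.BroadcastBlock(ev.Block, true)  // propagate the full block to a subset of peers first
			pm.BroadcastBlock(ev.Block, false) // then announce only the hash to the rest
		}
	}
}
```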

```
// BroadcastBlock will either propagate a block to a subset of it's peers, or
// will only announce it's availability (depending what's requested).
func (pm *ProtocolManager) BroadcastBlock(block *types.Block, propagate bool) {
    hash := block.Hash()
    peers := pm.peers.PeersWithoutBlock(hash)

    // If propagation is requested, send to a subset of the peers
    if propagate {
        // Calculate the TD of the block (it's not imported yet, so block.Td is not valid)
        var td *big.Int
        if parent := pm.blockchain.GetBlock(block.ParentHash(), block.NumberU64()-1); parent != nil {
            td = new(big.Int).Add(block.Difficulty(), pm.blockchain.GetTd(block.ParentHash(), block.NumberU64()-1))
        } else {
            log.Error("Propagating dangling block", "number", block.Number(), "hash", hash)
            return
        }
        // Send the block to a subset of our peers
        transfer := peers[:int(math.Sqrt(float64(len(peers))))]
        for _, peer := range transfer {
            peer.SendNewBlock(block, td)
        }
        log.Trace("Propagated block", "hash", hash, "recipients", len(transfer), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
        return
    }
    // Otherwise if the block is indeed in our own chain, announce it
    if pm.blockchain.HasBlock(hash, block.NumberU64()) {
        for _, peer := range peers {
            peer.SendNewBlockHashes([]common.Hash{hash}, []uint64{block.NumberU64()})
        }
        log.Trace("Announced block", "hash", hash, "recipients", len(peers), "duration", common.PrettyDuration(time.Since(block.ReceivedAt)))
    }
}
```

Two kinds of messages are sent here, `NewBlockMsg` and `NewBlockHashesMsg`; at this point a block mined by the local node starts spreading out across the p2p network.
Next, another important method:

```
// NewProtocolManager returns a new ethereum sub protocol manager. The Ethereum sub protocol manages peers capable
// with the ethereum network.
func NewProtocolManager(config *params.ChainConfig, mode downloader.SyncMode, networkId uint64, mux *event.TypeMux, txpool txPool, engine consensus.Engine, blockchain *core.BlockChain, chaindb ethdb.Database) (*ProtocolManager, error) {
    // Create the protocol manager with the base fields
    manager := &ProtocolManager{
        networkId: networkId,
        eventMux: mux,
        txpool: txpool,
        blockchain: blockchain,
        chaindb: chaindb,
        chainconfig: config,
        peers: newPeerSet(),
        newPeerCh: make(chan *peer),
        noMorePeers: make(chan struct{}),
        txsyncCh: make(chan *txsync),
        quitSync: make(chan struct{}),
    }
    // Figure out whether to allow fast sync or not
    if mode == downloader.FastSync && blockchain.CurrentBlock().NumberU64() > 0 {
        log.Warn("Blockchain not empty, fast sync disabled")
        mode = downloader.FullSync
    }
    if mode == downloader.FastSync {
        manager.fastSync = uint32(1)
    }
    // Initiate a sub-protocol for every implemented version we can handle
    manager.SubProtocols = make([]p2p.Protocol, 0, len(ProtocolVersions))
    for i, version := range ProtocolVersions {
        // Skip protocol version if incompatible with the mode of operation
        if mode == downloader.FastSync && version < eth63 {
            continue
        }
        // Compatible; initialise the sub-protocol
        version := version // Closure for the run
        manager.SubProtocols = append(manager.SubProtocols, p2p.Protocol{
            Name: ProtocolName,
            Version: version,
            Length: ProtocolLengths[i],
            Run: func(p *p2p.Peer, rw p2p.MsgReadWriter) error {
                peer := manager.newPeer(int(version), p, rw)
                select {
                case manager.newPeerCh <- peer:
                    manager.wg.Add(1)
                    defer manager.wg.Done()
                    return manager.handle(peer)
                case <-manager.quitSync:
                    return p2p.DiscQuitting
                }
            },
            NodeInfo: func() interface{} {
                return manager.NodeInfo()
            },
            PeerInfo: func(id discover.NodeID) interface{} {
                if p := manager.peers.Peer(fmt.Sprintf("%x", id[:8])); p != nil {
                    return p.Info()
                }
                return nil
            },
        })
    }
    if len(manager.SubProtocols) == 0 {
        return nil, errIncompatibleConfig
    }
    // The downloader is responsible for synchronising our own data from other
    // peers; it is the full-chain synchronisation tool.
    // Construct the different synchronisation mechanisms
    manager.downloader = downloader.New(mode, chaindb, manager.eventMux, blockchain, nil, manager.removePeer)

    validator := func(header *types.Header) error {
        return engine.VerifyHeader(blockchain, header, true)
    }
    heighter := func() uint64 {
        return blockchain.CurrentBlock().NumberU64()
    }
    inserter := func(blocks types.Blocks) (int, error) {
        // If fast sync is running, deny importing weird blocks
        if atomic.LoadUint32(&manager.fastSync) == 1 {
            log.Warn("Discarded bad propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash())
            return 0, nil
        }
        // flag that we can start accepting transactions
        atomic.StoreUint32(&manager.acceptTxs, 1) // Mark initial sync done on any fetcher import
        return manager.blockchain.InsertChain(blocks)
    }
    // Construct a fetcher: it accumulates block announcements from peers
    // and schedules them for retrieval.
    manager.fetcher = fetcher.New(blockchain.GetBlockByHash, validator, manager.BroadcastBlock, heighter, inserter, manager.removePeer)

    return manager, nil
}

```
This method manages the several sub-protocol versions that make up the eth protocol. The `Run` callback is invoked for every connecting peer and, as you can see, it blocks. Stepping into the `handle` method you find this key piece of code:

```
for {
        if err := pm.handleMsg(p); err != nil {
            p.Log().Debug("Ethereum message handling failed", "err", err)
            return err
        }
    }

```

An infinite loop that processes messages arriving over the p2p network. Next, the `handleMsg` method:

```
func (pm *ProtocolManager) handleMsg(p *peer) error {
    // Read the next message from the remote peer, and ensure it's fully consumed
    msg, err := p.rw.ReadMsg()
    if err != nil {
        return err
    }
    if msg.Size > ProtocolMaxMsgSize {
        return errResp(ErrMsgTooLarge, "%v > %v", msg.Size, ProtocolMaxMsgSize)
    }
    defer msg.Discard()

    // Handle the message depending on its contents
    switch {
    case msg.Code == StatusMsg:
        // Status messages should never arrive after the handshake
        return errResp(ErrExtraStatusMsg, "uncontrolled status message")

    // Block header query, collect the requested headers and reply
    case msg.Code == GetBlockHeadersMsg:
        // Decode the complex header query
        var query getBlockHeadersData
        if err := msg.Decode(&query); err != nil {
            return errResp(ErrDecode, "%v: %v", msg, err)
        }
        hashMode := query.Origin.Hash != (common.Hash{})

        // Gather headers until the fetch or network limits is reached
        var (
            bytes common.StorageSize
            headers []*types.Header
            unknown bool
        )
        for !unknown && len(headers) < int(query.Amount) && bytes < softResponseLimit && len(headers) < downloader.MaxHeaderFetch {
            // Retrieve the next header satisfying the query
            var origin *types.Header
            if hashMode {
                origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
            } else {
                origin = pm.blockchain.GetHeaderByNumber(query.Origin.Number)
            }
            if origin == nil {
                break
            }
            number := origin.Number.Uint64()
            headers = append(headers, origin)
            bytes += estHeaderRlpSize

            // Advance to the next header of the query
            switch {
            case query.Origin.Hash != (common.Hash{}) && query.Reverse:
                // Hash based traversal towards the genesis block
                for i := 0; i < int(query.Skip)+1; i++ {
                    if header := pm.blockchain.GetHeader(query.Origin.Hash, number); header != nil {
                        query.Origin.Hash = header.ParentHash
                        number--
                    } else {
                        unknown = true
                        break
                    }
                }
            case query.Origin.Hash != (common.Hash{}) && !query.Reverse:
                // Hash based traversal towards the leaf block
                var (
                    current = origin.Number.Uint64()
                    next = current + query.Skip + 1
                )
                if next <= current {
                    infos, _ := json.MarshalIndent(p.Peer.Info(), "", " ")
                    p.Log().Warn("GetBlockHeaders skip overflow attack", "current", current, "skip", query.Skip, "next", next, "attacker", infos)
                    unknown = true
                } else {
                    if header := pm.blockchain.GetHeaderByNumber(next); header != nil {
                        if pm.blockchain.GetBlockHashesFromHash(header.Hash(), query.Skip+1)[query.Skip] == query.Origin.Hash {
                            query.Origin.Hash = header.Hash()
                        } else {
                            unknown = true
                        }
                    } else {
                        unknown = true
                    }
                }
            case query.Reverse:
                // Number based traversal towards the genesis block
                if query.Origin.Number >= query.Skip+1 {
                    query.Origin.Number -= (query.Skip + 1)
                } else {
                    unknown = true
                }

            case !query.Reverse:
                // Number based traversal towards the leaf block
                query.Origin.Number += (query.Skip + 1)
            }
        }
        return p.SendBlockHeaders(headers)

    case msg.Code == BlockHeadersMsg:
        // A batch of headers arrived to one of our previous requests
        var headers []*types.Header
        if err := msg.Decode(&headers); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // If no headers were received, but we're expecting a DAO fork check, maybe it's that
        if len(headers) == 0 && p.forkDrop != nil {
            // Possibly an empty reply to the fork header checks, sanity check TDs
            verifyDAO := true

            // If we already have a DAO header, we can check the peer's TD against it. If
            // the peer's ahead of this, it too must have a reply to the DAO check
            if daoHeader := pm.blockchain.GetHeaderByNumber(pm.chainconfig.DAOForkBlock.Uint64()); daoHeader != nil {
                if _, td := p.Head(); td.Cmp(pm.blockchain.GetTd(daoHeader.Hash(), daoHeader.Number.Uint64())) >= 0 {
                    verifyDAO = false
                }
            }
            // If we're seemingly on the same chain, disable the drop timer
            if verifyDAO {
                p.Log().Debug("Seems to be on the same side of the DAO fork")
                p.forkDrop.Stop()
                p.forkDrop = nil
                return nil
            }
        }
        // Filter out any explicitly requested headers, deliver the rest to the downloader
        filter := len(headers) == 1
        if filter {
            // If it's a potential DAO fork check, validate against the rules
            if p.forkDrop != nil && pm.chainconfig.DAOForkBlock.Cmp(headers[0].Number) == 0 {
                // Disable the fork drop timer
                p.forkDrop.Stop()
                p.forkDrop = nil

                // Validate the header and either drop the peer or continue
                if err := misc.VerifyDAOHeaderExtraData(pm.chainconfig, headers[0]); err != nil {
                    p.Log().Debug("Verified to be on the other side of the DAO fork, dropping")
                    return err
                }
                p.Log().Debug("Verified to be on the same side of the DAO fork")
                return nil
            }
            // Irrelevant of the fork checks, send the header to the fetcher just in case
            headers = pm.fetcher.FilterHeaders(p.id, headers, time.Now())
        }
        if len(headers) > 0 || !filter {
            err := pm.downloader.DeliverHeaders(p.id, headers)
            if err != nil {
                log.Debug("Failed to deliver headers", "err", err)
            }
        }

    case msg.Code == GetBlockBodiesMsg:
        // Decode the retrieval message
        msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
        if _, err := msgStream.List(); err != nil {
            return err
        }
        // Gather blocks until the fetch or network limits is reached
        var (
            hash common.Hash
            bytes int
            bodies []rlp.RawValue
        )
        for bytes < softResponseLimit && len(bodies) < downloader.MaxBlockFetch {
            // Retrieve the hash of the next block
            if err := msgStream.Decode(&hash); err == rlp.EOL {
                break
            } else if err != nil {
                return errResp(ErrDecode, "msg %v: %v", msg, err)
            }
            // Retrieve the requested block body, stopping if enough was found
            if data := pm.blockchain.GetBodyRLP(hash); len(data) != 0 {
                bodies = append(bodies, data)
                bytes += len(data)
            }
        }
        return p.SendBlockBodiesRLP(bodies)

    case msg.Code == BlockBodiesMsg:
        // A batch of block bodies arrived to one of our previous requests
        var request blockBodiesData
        if err := msg.Decode(&request); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // Deliver them all to the downloader for queuing
        transactions := make([][]*types.Transaction, len(request))
        uncles := make([][]*types.Header, len(request))

        for i, body := range request {
            transactions[i] = body.Transactions
            uncles[i] = body.Uncles
        }
        // Filter out any explicitly requested bodies, deliver the rest to the downloader
        filter := len(transactions) > 0 || len(uncles) > 0
        if filter {
            transactions, uncles = pm.fetcher.FilterBodies(p.id, transactions, uncles, time.Now())
        }
        if len(transactions) > 0 || len(uncles) > 0 || !filter {
            err := pm.downloader.DeliverBodies(p.id, transactions, uncles)
            if err != nil {
                log.Debug("Failed to deliver bodies", "err", err)
            }
        }

    case p.version >= eth63 && msg.Code == GetNodeDataMsg:
        // Decode the retrieval message
        msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
        if _, err := msgStream.List(); err != nil {
            return err
        }
        // Gather state data until the fetch or network limits is reached
        var (
            hash common.Hash
            bytes int
            data [][]byte
        )
        for bytes < softResponseLimit && len(data) < downloader.MaxStateFetch {
            // Retrieve the hash of the next state entry
            if err := msgStream.Decode(&hash); err == rlp.EOL {
                break
            } else if err != nil {
                return errResp(ErrDecode, "msg %v: %v", msg, err)
            }
            // Retrieve the requested state entry, stopping if enough was found
            if entry, err := pm.chaindb.Get(hash.Bytes()); err == nil {
                data = append(data, entry)
                bytes += len(entry)
            }
        }
        return p.SendNodeData(data)

    case p.version >= eth63 && msg.Code == NodeDataMsg:
        // A batch of node state data arrived to one of our previous requests
        var data [][]byte
        if err := msg.Decode(&data); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // Deliver all to the downloader
        if err := pm.downloader.DeliverNodeData(p.id, data); err != nil {
            log.Debug("Failed to deliver node state data", "err", err)
        }

    case p.version >= eth63 && msg.Code == GetReceiptsMsg:
        // Decode the retrieval message
        msgStream := rlp.NewStream(msg.Payload, uint64(msg.Size))
        if _, err := msgStream.List(); err != nil {
            return err
        }
        // Gather state data until the fetch or network limits is reached
        var (
            hash common.Hash
            bytes int
            receipts []rlp.RawValue
        )
        for bytes < softResponseLimit && len(receipts) < downloader.MaxReceiptFetch {
            // Retrieve the hash of the next block
            if err := msgStream.Decode(&hash); err == rlp.EOL {
                break
            } else if err != nil {
                return errResp(ErrDecode, "msg %v: %v", msg, err)
            }
            // Retrieve the requested block's receipts, skipping if unknown to us
            results := core.GetBlockReceipts(pm.chaindb, hash, core.GetBlockNumber(pm.chaindb, hash))
            if results == nil {
                if header := pm.blockchain.GetHeaderByHash(hash); header == nil || header.ReceiptHash != types.EmptyRootHash {
                    continue
                }
            }
            // If known, encode and queue for response packet
            if encoded, err := rlp.EncodeToBytes(results); err != nil {
                log.Error("Failed to encode receipt", "err", err)
            } else {
                receipts = append(receipts, encoded)
                bytes += len(encoded)
            }
        }
        return p.SendReceiptsRLP(receipts)

    case p.version >= eth63 && msg.Code == ReceiptsMsg:
        // A batch of receipts arrived to one of our previous requests
        var receipts [][]*types.Receipt
        if err := msg.Decode(&receipts); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        // Deliver all to the downloader
        if err := pm.downloader.DeliverReceipts(p.id, receipts); err != nil {
            log.Debug("Failed to deliver receipts", "err", err)
        }

    case msg.Code == NewBlockHashesMsg:
        var announces newBlockHashesData
        if err := msg.Decode(&announces); err != nil {
            return errResp(ErrDecode, "%v: %v", msg, err)
        }
        // Mark the hashes as present at the remote node
        for _, block := range announces {
            p.MarkBlock(block.Hash)
        }
        // Schedule all the unknown hashes for retrieval
        unknown := make(newBlockHashesData, 0, len(announces))
        for _, block := range announces {
            if !pm.blockchain.HasBlock(block.Hash, block.Number) {
                unknown = append(unknown, block)
            }
        }
        for _, block := range unknown {
            pm.fetcher.Notify(p.id, block.Hash, block.Number, time.Now(), p.RequestOneHeader, p.RequestBodies)
        }

    case msg.Code == NewBlockMsg:
        // Retrieve and decode the propagated block
        var request newBlockData
        if err := msg.Decode(&request); err != nil {
            return errResp(ErrDecode, "%v: %v", msg, err)
        }
        request.Block.ReceivedAt = msg.ReceivedAt
        request.Block.ReceivedFrom = p

        // Mark the peer as owning the block and schedule it for import
        p.MarkBlock(request.Block.Hash())
        pm.fetcher.Enqueue(p.id, request.Block)

        // Assuming the block is importable by the peer, but possibly not yet done so,
        // calculate the head hash and TD that the peer truly must have.
        var (
            trueHead = request.Block.ParentHash()
            trueTD = new(big.Int).Sub(request.TD, request.Block.Difficulty())
        )
        // Update the peers total difficulty if better than the previous
        if _, td := p.Head(); trueTD.Cmp(td) > 0 {
            p.SetHead(trueHead, trueTD)

            // Schedule a sync if above ours. Note, this will not fire a sync for a gap of
            // a singe block (as the true TD is below the propagated block), however this
            // scenario should easily be covered by the fetcher.
            currentBlock := pm.blockchain.CurrentBlock()
            if trueTD.Cmp(pm.blockchain.GetTd(currentBlock.Hash(), currentBlock.NumberU64())) > 0 {
                go pm.synchronise(p)
            }
        }

    case msg.Code == TxMsg:
        // Transactions arrived, make sure we have a valid and fresh chain to handle them
        if atomic.LoadUint32(&pm.acceptTxs) == 0 {
            break
        }
        // Transactions can be processed, parse all of them and deliver to the pool
        var txs []*types.Transaction
        if err := msg.Decode(&txs); err != nil {
            return errResp(ErrDecode, "msg %v: %v", msg, err)
        }
        for i, tx := range txs {
            // Validate and mark the remote transaction
            if tx == nil {
                return errResp(ErrDecode, "transaction %d is nil", i)
            }
            p.MarkTransaction(tx.Hash())
        }
        pm.txpool.AddRemotes(txs)

    default:
        return errResp(ErrInvalidMsgCode, "%v", msg.Code)
    }
    return nil
}
```

This method decodes the messages coming off the p2p network and handles, among others, the `NewBlockMsg` and `NewBlockHashesMsg` cases. For `NewBlockMsg`, the block is handed straight to the local fetcher via `pm.fetcher.Enqueue(p.id, request.Block)`; the channel behind it is `f.inject`, which feeds a FIFO (priority) queue. The `enqueue` method in fetcher.go does the actual queueing, shown after the sketch below.
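The exported `Enqueue` that handleMsg calls is just a channel send into `f.inject` (sketch per the fetcher of this go-ethereum generation; hedged):

```
// eth/fetcher/fetcher.go (sketch): hand a propagated block to the fetcher loop.
func (f *Fetcher) Enqueue(peer string, block *types.Block) error {
	op := &inject{origin: peer, block: block}
	select {
	case f.inject <- op:
		return nil
	case <-f.quit:
		return errTerminated
	}
}
```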

```
func (f *Fetcher) enqueue(peer string, block *types.Block) {
    hash := block.Hash()

    // Ensure the peer isn't DOSing us
    count := f.queues[peer] + 1
    if count > blockLimit {
        log.Debug("Discarded propagated block, exceeded allowance", "peer", peer, "number", block.Number(), "hash", hash, "limit", blockLimit)
        propBroadcastDOSMeter.Mark(1)
        f.forgetHash(hash)
        return
    }
    // Discard any past or too distant blocks
    if dist := int64(block.NumberU64()) - int64(f.chainHeight()); dist < -maxUncleDist || dist > maxQueueDist {
        log.Debug("Discarded propagated block, too far away", "peer", peer, "number", block.Number(), "hash", hash, "distance", dist)
        propBroadcastDropMeter.Mark(1)
        f.forgetHash(hash)
        return
    }
    // Schedule the block for future importing
    if _, ok := f.queued[hash]; !ok {
        op := &inject{
            origin: peer,
            block: block,
        }
        f.queues[peer] = count
        f.queued[hash] = op
        f.queue.Push(op, -float32(block.NumberU64()))
        if f.queueChangeHook != nil {
            f.queueChangeHook(op.block.Hash(), true)
        }
        log.Debug("Queued propagated block", "peer", peer, "number", block.Number(), "hash", hash, "queued", f.queue.Size())
    }
}

```

The consumer of this queue lives in the `loop` method of fetcher.go, an infinite loop. The core code:

```
for !f.queue.Empty() {
            op := f.queue.PopItem().(*inject)
            if f.queueChangeHook != nil {
                f.queueChangeHook(op.block.Hash(), false)
            }
            // If too high up the chain or phase, continue later
            number := op.block.NumberU64()
            if number > height+1 {
                f.queue.Push(op, -float32(op.block.NumberU64()))
                if f.queueChangeHook != nil {
                    f.queueChangeHook(op.block.Hash(), true)
                }
                break
            }
            // Otherwise if fresh and still unknown, try and import
            hash := op.block.Hash()
            if number+maxUncleDist < height || f.getBlock(hash) != nil {
                f.forgetBlock(hash)
                continue
            }
            f.insert(op.origin, op.block)
        }

```

Blocks are popped off the queue lowest number first (the push priority is `-blockNumber`); next, look at the `insert` method:

```
func (f *Fetcher) insert(peer string, block *types.Block) {
    hash := block.Hash()

    // Run the import on a new thread
    log.Debug("Importing propagated block", "peer", peer, "number", block.Number(), "hash", hash)
    go func() {
        defer func() { f.done <- hash }()

        // If the parent's unknown, abort insertion
        parent := f.getBlock(block.ParentHash())
        if parent == nil {
            log.Debug("Unknown parent of propagated block", "peer", peer, "number", block.Number(), "hash", hash, "parent", block.ParentHash())
            return
        }
        // Quickly validate the header and propagate the block if it passes
        switch err := f.verifyHeader(block.Header()); err {
        case nil:
            // All ok, quickly propagate to our peers
            propBroadcastOutTimer.UpdateSince(block.ReceivedAt)
            go f.broadcastBlock(block, true)

        case consensus.ErrFutureBlock:
            // Weird future block, don't fail, but neither propagate

        default:
            // Something went very wrong, drop the peer
            log.Debug("Propagated block verification failed", "peer", peer, "number", block.Number(), "hash", hash, "err", err)
            f.dropPeer(peer)
            return
        }
        // Run the actual import and log any issues
        if _, err := f.insertChain(types.Blocks{block}); err != nil {
            log.Debug("Propagated block import failed", "peer", peer, "number", block.Number(), "hash", hash, "err", err)
            return
        }
        // If import succeeded, broadcast the block
        propAnnounceOutTimer.UpdateSince(block.ReceivedAt)
        go f.broadcastBlock(block, false)

        // Invoke the testing hook if needed
        if f.importedHook != nil {
            f.importedHook(block)
        }
    }()
}


```

This method calls `verifyHeader` to validate the block; if it passes, the block is immediately propagated to peers over p2p, and `insertChain` then writes it into the local leveldb. If the import also succeeds, the block is broadcast once more, this time announcing only its hash. In this way, across the peer-to-peer network, any valid block ends up adopted by the whole network. Both `verifyHeader` and `insertChain` are the callbacks defined and passed in at `/handler.go/NewProtocolManager`, and all of this is started from the `Start` method in handler.go. The fetcher's `Start` is invoked from a dedicated goroutine inside the `syncer` method, sketched below.
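A sketch of `syncer` (shape per the eth/sync.go of this generation; identifiers such as `forceSyncCycle` and `minDesiredPeerCount` are quoted from memory, so treat them as assumptions):

```
// eth/sync.go (sketch): start the fetcher, then trigger synchronisation either
// when a new peer connects or on a periodic force-sync tick.
func (pm *ProtocolManager) syncer() {
	pm.fetcher.Start()
	defer pm.fetcher.Stop()
	defer pm.downloader.Terminate()

	forceSync := time.NewTicker(forceSyncCycle)
	defer forceSync.Stop()

	for {
		select {
		case <-pm.newPeerCh:
			// Make sure we have peers to select from, then sync
			if pm.peers.Len() < minDesiredPeerCount {
				break
			}
			go pm.synchronise(pm.peers.BestPeer())
		case <-forceSync.C:
			// Force a sync even if not enough peers are present
			go pm.synchronise(pm.peers.BestPeer())
		case <-pm.noMorePeers:
			return
		}
	}
}
```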

`/handler.go/handleMsg --> go pm.synchronise(p) --> pm.downloader.Synchronise(peer.id, pHead, pTd, mode) --> d.synchronise(id, head, td, mode)
--> d.syncWithPeer(p, hash, td)`. Let's look at the core method:

```
func (d *Downloader) syncWithPeer(p *peerConnection, hash common.Hash, td *big.Int) (err error) {
    d.mux.Post(StartEvent{})
    defer func() {
        // reset on error
        if err != nil {
            d.mux.Post(FailedEvent{err})
        } else {
            d.mux.Post(DoneEvent{})
        }
    }()
    if p.version < 62 {
        return errTooOld
    }

    log.Debug("Synchronising with the network", "peer", p.id, "eth", p.version, "head", hash, "td", td, "mode", d.mode)
    defer func(start time.Time) {
        log.Debug("Synchronisation terminated", "elapsed", time.Since(start))
    }(time.Now())

    // Look up the sync boundaries: the common ancestor and the target block
    latest, err := d.fetchHeight(p)
    if err != nil {
        return err
    }
    height := latest.Number.Uint64()

    origin, err := d.findAncestor(p, height)
    if err != nil {
        return err
    }
    d.syncStatsLock.Lock()
    if d.syncStatsChainHeight <= origin || d.syncStatsChainOrigin > origin {
        d.syncStatsChainOrigin = origin
    }
    d.syncStatsChainHeight = height
    d.syncStatsLock.Unlock()

    // Initiate the sync using a concurrent header and content retrieval algorithm
    pivot := uint64(0)
    switch d.mode {
    case LightSync:
        pivot = height
    case FastSync:
        // Calculate the new fast/slow sync pivot point
        if d.fsPivotLock == nil {
            pivotOffset, err := rand.Int(rand.Reader, big.NewInt(int64(fsPivotInterval)))
            if err != nil {
                panic(fmt.Sprintf("Failed to access crypto random source: %v", err))
            }
            if height > uint64(fsMinFullBlocks)+pivotOffset.Uint64() {
                pivot = height - uint64(fsMinFullBlocks) - pivotOffset.Uint64()
            }
        } else {
            // Pivot point locked in, use this and do not pick a new one!
            pivot = d.fsPivotLock.Number.Uint64()
        }
        // If the point is below the origin, move origin back to ensure state download
        if pivot < origin {
            if pivot > 0 {
                origin = pivot - 1
            } else {
                origin = 0
            }
        }
        log.Debug("Fast syncing until pivot block", "pivot", pivot)
    }
    d.queue.Prepare(origin+1, d.mode, pivot, latest)
    if d.syncInitHook != nil {
        d.syncInitHook(origin, height)
    }

    fetchers := []func() error{
        func() error { return d.fetchHeaders(p, origin+1) }, // Headers are always retrieved
        func() error { return d.fetchBodies(origin + 1) }, // Bodies are retrieved during normal and fast sync
        func() error { return d.fetchReceipts(origin + 1) }, // Receipts are retrieved during fast sync
        func() error { return d.processHeaders(origin+1, td) },
    }
    if d.mode == FastSync {
        fetchers = append(fetchers, func() error { return d.processFastSyncContent(latest) })
    } else if d.mode == FullSync {
        fetchers = append(fetchers, d.processFullSyncContent)
    }
    err = d.spawnSync(fetchers)
    if err != nil && d.mode == FastSync && d.fsPivotLock != nil {
        // If sync failed in the critical section, bump the fail counter.
        atomic.AddUint32(&d.fsPivotFails, 1)
    }
    return err
}
```

From here the flow repeats what we saw at the end of the previous section: `syncWithPeer` posts a `StartEvent` that reaches `miner.update` through the event mux, mining on the current block is paused while the sync runs, and blocks travel to peers via `BroadcastBlock`, `peer.SendNewBlock`, and `p2p.Send`, with the receiving side looping in `handleMsg` inside `handle`.






