Elasticsearch NettyTransport Communication Mechanism

Posted by lotus_ruan on 2021-09-09

When ES issues an Action-style request, in a cluster environment the request usually has to be forwarded to other nodes, which means RPC communication. ES implements this forwarding and receiving through the NettyTransport class, which is essentially a communication module built on Netty.
A NettyTransport is created both when a Node starts up and when a TransportClient is initialized.

When a Node starts

When a node starts, it obtains the TransportService instance via dependency injection and calls its start method. This eventually lands in NettyTransport's doStart method, which creates the Netty RPC client and server objects; requests and responses are then handled by MessageChannelHandler.

public Node start() {
    // ... omitted
    injector.getInstance(MappingUpdatedAction.class).setClient(client);
    injector.getInstance(IndicesService.class).start();
    injector.getInstance(IndexingMemoryController.class).start();
    injector.getInstance(IndicesClusterStateService.class).start();
    injector.getInstance(IndicesTTLService.class).start();
    injector.getInstance(SnapshotsService.class).start();
    injector.getInstance(SnapshotShardsService.class).start();
    injector.getInstance(RoutingService.class).start();
    injector.getInstance(SearchService.class).start();
    injector.getInstance(MonitorService.class).start();
    injector.getInstance(RestController.class).start();
    // TODO hack around circular dependencies problems
    injector.getInstance(GatewayAllocator.class).setReallocation(injector.getInstance(ClusterService.class), injector.getInstance(RoutingService.class));

    injector.getInstance(ResourceWatcherService.class).start();
    injector.getInstance(GatewayService.class).start();
    // Start the transport service now so the publish address will be added to the local disco node in ClusterService
    TransportService transportService = injector.getInstance(TransportService.class);
    transportService.start();
    injector.getInstance(ClusterService.class).start();
    // ... omitted
    return this;
}

From there, execution goes through AbstractLifecycleComponent's start method:

public T start() {
    if (!lifecycle.canMoveToStarted()) {
        return (T) this;
    }
    for (LifecycleListener listener : listeners) {
        listener.beforeStart();
    }
    doStart();
    lifecycle.moveToStarted();
    for (LifecycleListener listener : listeners) {
        listener.afterStart();
    }
    return (T) this;
}
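
start() is a template method: the lifecycle bookkeeping makes it idempotent (a second call fails the canMoveToStarted() check and returns immediately), while the component-specific work lives in doStart(). For TransportService, doStart essentially wires a callback adapter into the underlying Transport and starts it, which is how execution reaches NettyTransport.doStart. A simplified sketch based on the 1.x-era source (details omitted):

// Simplified sketch of TransportService.doStart() (1.x era).
// The adapter gives the transport a way to dispatch received
// messages back to the handlers registered on TransportService.
@Override
protected void doStart() throws ElasticsearchException {
    transport.transportServiceAdapter(adapter);
    transport.start();   // AbstractLifecycleComponent.start() again -> NettyTransport.doStart()
    if (transport.boundAddress() != null && logger.isInfoEnabled()) {
        logger.info("{}", transport.boundAddress());
    }
}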

As for doStart, we only analyze NettyTransport's implementation here (not LocalTransport's).
Two calls in that method are key:

clientBootstrap = createClientBootstrap();    // start the client side
createServerBootstrap(name, mergedSettings);  // start the server side
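
For orientation, these two calls sit inside doStart roughly as follows (abridged sketch of the 1.x-era method; profile handling is collapsed to the single "default" profile here):

// Abridged sketch of NettyTransport.doStart(); the real method
// iterates the configured transport profiles and merges
// per-profile settings before creating each server bootstrap.
@Override
protected void doStart() throws ElasticsearchException {
    clientBootstrap = createClientBootstrap();         // outgoing connections
    Settings mergedSettings = settings;                // real code merges profile settings
    createServerBootstrap("default", mergedSettings);  // incoming connections
    bindServerBootstrap("default", mergedSettings);    // bind the socket, publish the address
}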

Let's look at how the client side is constructed:

private ClientBootstrap createClientBootstrap() {
    if (blockingClient) {
        clientBootstrap = new ClientBootstrap(new OioClientSocketChannelFactory(Executors.newCachedThreadPool(daemonThreadFactory(settings, TRANSPORT_CLIENT_WORKER_THREAD_NAME_PREFIX))));
    } else {
        int bossCount = settings.getAsInt("transport.netty.boss_count", 1);
        clientBootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(daemonThreadFactory(settings, TRANSPORT_CLIENT_BOSS_THREAD_NAME_PREFIX)),
                bossCount,
                new NioWorkerPool(Executors.newCachedThreadPool(daemonThreadFactory(settings, TRANSPORT_CLIENT_WORKER_THREAD_NAME_PREFIX)), workerCount),
                new HashedWheelTimer(daemonThreadFactory(settings, "transport_client_timer"))));
    }
    clientBootstrap.setPipelineFactory(configureClientChannelPipelineFactory());
    // ... omitted

    return clientBootstrap;
}
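
The blockingClient branch selects old blocking I/O (OioClientSocketChannelFactory) versus NIO (NioClientSocketChannelFactory); NIO is the default. Once built, this bootstrap is what opens connections to other nodes. A Netty 3 usage sketch (NettyTransport does the equivalent inside connectToNode; the address is illustrative, not from the article):

// How a Netty 3 ClientBootstrap opens a channel; host and port
// are illustrative.
ChannelFuture connect = clientBootstrap.connect(new InetSocketAddress("10.0.0.2", 9300));
connect.awaitUninterruptibly();                 // block until connected or failed
if (!connect.isSuccess()) {
    throw new RuntimeException("connect failed", connect.getCause());
}
Channel channel = connect.getChannel();         // requests are written to this channel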

Back in createClientBootstrap, the call to note is clientBootstrap.setPipelineFactory(configureClientChannelPipelineFactory()):

public ChannelPipelineFactory configureClientChannelPipelineFactory() {
    return new ClientChannelPipelineFactory(this);
}

protected static class ClientChannelPipelineFactory implements ChannelPipelineFactory {

    protected final NettyTransport nettyTransport;

    public ClientChannelPipelineFactory(NettyTransport nettyTransport) {
        this.nettyTransport = nettyTransport;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline channelPipeline = Channels.pipeline();
        SizeHeaderFrameDecoder sizeHeader = new SizeHeaderFrameDecoder();
        if (nettyTransport.maxCumulationBufferCapacity != null) {
            if (nettyTransport.maxCumulationBufferCapacity.bytes() > Integer.MAX_VALUE) {
                sizeHeader.setMaxCumulationBufferCapacity(Integer.MAX_VALUE);
            } else {
                sizeHeader.setMaxCumulationBufferCapacity((int) nettyTransport.maxCumulationBufferCapacity.bytes());
            }
        }
        if (nettyTransport.maxCompositeBufferComponents != -1) {
            sizeHeader.setMaxCumulationBufferComponents(nettyTransport.maxCompositeBufferComponents);
        }
        channelPipeline.addLast("size", sizeHeader);
        // using a dot as a prefix means, this cannot come from any settings parsed
        channelPipeline.addLast("dispatcher", new MessageChannelHandler(nettyTransport, nettyTransport.logger, ".client"));
        return channelPipeline;
    }
}
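
The "size" handler added first (SizeHeaderFrameDecoder) re-assembles TCP fragments into complete messages before the dispatcher ever sees them. Conceptually it is a length-prefixed frame decoder: each transport message starts with the marker bytes 'E' 'S' followed by a 4-byte payload length. A simplified, self-contained sketch of that idea on Netty 3's FrameDecoder API (the real class adds buffer-capacity tuning and friendlier errors):

import java.io.StreamCorruptedException;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

// Simplified stand-in for SizeHeaderFrameDecoder: split the byte
// stream into frames of "'E' 'S' + int length + payload".
public class SimpleSizeHeaderFrameDecoder extends FrameDecoder {

    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buffer) throws Exception {
        if (buffer.readableBytes() < 6) {
            return null;                          // header not complete yet, keep buffering
        }
        int index = buffer.readerIndex();
        if (buffer.getByte(index) != 'E' || buffer.getByte(index + 1) != 'S') {
            throw new StreamCorruptedException("invalid internal transport message format");
        }
        int dataLength = buffer.getInt(index + 2);
        if (buffer.readableBytes() < 6 + dataLength) {
            return null;                          // body not fully arrived yet
        }
        buffer.skipBytes(6);                      // consume the header
        return buffer.readBytes(dataLength);      // one complete message for the dispatcher
    }
}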

In the code above, channelPipeline.addLast("dispatcher", new MessageChannelHandler(nettyTransport, nettyTransport.logger, ".client")) registers MessageChannelHandler as the handler that receives and processes requests and responses.
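
What the dispatcher does with a decoded frame: every transport message carries a request id and a status byte, and the status byte decides whether the frame is dispatched as a request (look up the handler registered for the action name) or as a response (look up the pending response handler by request id). A conceptual sketch, with helper names only approximating the 1.x source:

// Conceptual sketch of MessageChannelHandler.messageReceived();
// dispatchRequest/dispatchResponse are illustrative helpers, not
// the exact methods in the source.
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    ChannelBuffer buffer = (ChannelBuffer) e.getMessage(); // one whole frame, thanks to the "size" handler
    long requestId = buffer.readLong();                    // correlates a response with its request
    byte status = buffer.readByte();
    if (TransportStatus.isRequest(status)) {
        dispatchRequest(ctx.getChannel(), buffer, requestId);   // run the action's registered handler, reply on the channel
    } else {
        dispatchResponse(buffer, requestId);                    // complete the pending TransportResponseHandler
    }
}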

createServerBootstrap(name, mergedSettings), which starts the server side, follows essentially the same flow as createClientBootstrap(): its pipeline likewise ends in a MessageChannelHandler that receives and processes requests and responses.
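
For reference, the server side on Netty 3 looks roughly like this (abridged sketch; the real createServerBootstrap also supports blocking I/O and per-profile settings, and binding happens in a separate step over a configured port range):

// Abridged sketch of the server-side bootstrap (NIO path); the
// thread-name prefixes and port are illustrative.
serverBootstrap = new ServerBootstrap(new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(daemonThreadFactory(settings, "transport_server_boss")),
        Executors.newCachedThreadPool(daemonThreadFactory(settings, "transport_server_worker")),
        workerCount));
// same shape as the client pipeline: SizeHeaderFrameDecoder ("size")
// followed by MessageChannelHandler ("dispatcher")
serverBootstrap.setPipelineFactory(configureServerChannelPipelineFactory(name, settings));
serverBootstrap.bind(new InetSocketAddress(9300));   // real code resolves transport.tcp.port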

When a TransportClient is created

When the build method is called, the TransportService instance is likewise obtained via dependency injection and its start method is called, eventually reaching NettyTransport's doStart method, which creates the Netty RPC client and server objects; requests and responses are handled by MessageChannelHandler.
The concrete flow is essentially the same as above.
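
For context, this is the 2.x-era client construction that triggers that build path (cluster name and address are illustrative):

// 2.x-era TransportClient construction; build() creates the
// injector, which starts TransportService and thus NettyTransport.
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "my-cluster")          // illustrative cluster name
        .build();
TransportClient client = TransportClient.builder().settings(settings).build();
client.addTransportAddress(
        new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
// ... issue requests over the transport layer ...
client.close();                                     // releases the Netty resources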
The overall structure is shown in the figure below.


[Figure: overall structure diagram from the original post (image not reproduced)]

Put together: when a client issues a request, it first reaches a coordinating node, which may forward it through NettyTransport (or LocalTransport, depending on the setup) to other nodes; on each hop the Request and the eventual Response are processed by MessageChannelHandler.
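
At the API level, that forwarding is a transportService.sendRequest call paired with a response handler; the handler's callbacks are what MessageChannelHandler ultimately invokes when the reply frame arrives. A hedged sketch ("internal:example/ping", PingRequest and PingResponse are hypothetical names, not real ES actions):

// Hypothetical node-to-node call over the transport layer
// (1.x/2.x-era API); PingRequest/PingResponse do not exist in ES.
transportService.sendRequest(targetNode, "internal:example/ping", new PingRequest(),
        new BaseTransportResponseHandler<PingResponse>() {
            @Override
            public PingResponse newInstance() {
                return new PingResponse();            // object the response bytes are read into
            }
            @Override
            public void handleResponse(PingResponse response) {
                // called after MessageChannelHandler decodes the reply
            }
            @Override
            public void handleException(TransportException exp) {
                // network failures and remote exceptions land here
            }
            @Override
            public String executor() {
                return ThreadPool.Names.SAME;         // thread pool that runs these callbacks
            }
        });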



Author: kason_zhang
From "ITPUB Blog": http://blog.itpub.net/1762/viewspace-2816613/
