This series:
- "Netty In Depth (1)"
- "Netty In Depth: Server Startup"
- "Netty In Depth: NioEventLoop"
- "Netty In Depth: ChannelPipeline"
- "Netty In Depth: accept"
The boss thread listens for and handles accept events and registers the resulting socketChannel with a worker thread's selector; the worker thread then listens for and handles read events. This article analyzes how Netty handles the read event.
accept->read
When the worker thread's selector detects an OP_READ event, the read operation is triggered.
```java
// NioEventLoop
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
    if (!ch.isOpen()) {
        // Connection already closed - no need to handle write.
        return;
    }
}
```
The read method is defined in the NioByteUnsafe class.
```java
// AbstractNioByteChannel.NioByteUnsafe
@Override
public final void read() {
    final ChannelConfig config = config();
    if (!config.isAutoRead() && !isReadPending()) {
        // ChannelConfig.setAutoRead(false) was called in the meantime
        removeReadOp();
        return;
    }

    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final int maxMessagesPerRead = config.getMaxMessagesPerRead();
    RecvByteBufAllocator.Handle allocHandle = this.allocHandle;
    if (allocHandle == null) {
        this.allocHandle = allocHandle = config.getRecvByteBufAllocator().newHandle();
    }

    ByteBuf byteBuf = null;
    int messages = 0;
    boolean close = false;
    try {
        int totalReadAmount = 0;
        boolean readPendingReset = false;
        do {
            byteBuf = allocHandle.allocate(allocator);
            int writable = byteBuf.writableBytes();
            int localReadAmount = doReadBytes(byteBuf);
            if (localReadAmount <= 0) {
                // not was read release the buffer
                byteBuf.release();
                byteBuf = null;
                close = localReadAmount < 0;
                break;
            }
            if (!readPendingReset) {
                readPendingReset = true;
                setReadPending(false);
            }
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;

            if (totalReadAmount >= Integer.MAX_VALUE - localReadAmount) {
                // Avoid overflow.
                totalReadAmount = Integer.MAX_VALUE;
                break;
            }

            totalReadAmount += localReadAmount;

            // stop reading
            if (!config.isAutoRead()) {
                break;
            }

            if (localReadAmount < writable) {
                // Read less than what the buffer can hold,
                // which might mean we drained the recv buffer completely.
                break;
            }
        } while (++ messages < maxMessagesPerRead);

        pipeline.fireChannelReadComplete();
        allocHandle.record(totalReadAmount);

        if (close) {
            closeOnRead(pipeline);
            close = false;
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close);
    } finally {
        // Check if there is a readPending which was not processed yet.
        // This could be for two reasons:
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
        // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
        //
        // See https://github.com/netty/netty/issues/2254
        if (!config.isAutoRead() && !isReadPending()) {
            removeReadOp();
        }
    }
}
```
1. allocHandle adaptively adjusts the size of the buffer allocated for each read, so that allocations are neither too large nor too small. First, look at the internals of AdaptiveRecvByteBufAllocator:
```java
public class AdaptiveRecvByteBufAllocator implements RecvByteBufAllocator {

    static final int DEFAULT_MINIMUM = 64;
    static final int DEFAULT_INITIAL = 1024;
    static final int DEFAULT_MAXIMUM = 65536;

    private static final int INDEX_INCREMENT = 4;
    private static final int INDEX_DECREMENT = 1;

    private static final int[] SIZE_TABLE;
}
```
- SIZE_TABLE: pre-computed table of allocatable buffer sizes, in ascending order. It starts at 16 and increases in steps of 16 up to 496; from 512 onward each entry doubles, until the value overflows.
- DEFAULT_MINIMUM: the minimum buffer size (64), at index 3 in SIZE_TABLE.
- DEFAULT_MAXIMUM: the maximum buffer size (65536), at index 38 in SIZE_TABLE.
- DEFAULT_INITIAL: the initial buffer size. On the first allocation there is no previous read size to use as a reference, so a default initial value is needed.
- INDEX_INCREMENT: how much the index grows for the next read when the previous estimate was too small.
- INDEX_DECREMENT: how much the index shrinks for the next read when the previous estimate was too large.
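The table layout described above can be made concrete with a small, framework-free sketch. The construction mirrors what AdaptiveRecvByteBufAllocator's static initializer does; the class name and the `indexOf` helper are illustrative, not Netty API:

```java
import java.util.ArrayList;
import java.util.List;

public class SizeTableSketch {
    static final int[] SIZE_TABLE;

    static {
        List<Integer> sizes = new ArrayList<>();
        // 16, 32, 48, ..., 496 (indices 0..30)
        for (int i = 16; i < 512; i += 16) {
            sizes.add(i);
        }
        // 512, 1024, 2048, ... doubling until int overflow makes i negative
        for (int i = 512; i > 0; i <<= 1) {
            sizes.add(i);
        }
        SIZE_TABLE = new int[sizes.size()];
        for (int i = 0; i < SIZE_TABLE.length; i++) {
            SIZE_TABLE[i] = sizes.get(i);
        }
    }

    // Index of the first table entry >= size (linear scan, for clarity only).
    static int indexOf(int size) {
        for (int i = 0; i < SIZE_TABLE.length; i++) {
            if (SIZE_TABLE[i] >= size) {
                return i;
            }
        }
        return SIZE_TABLE.length - 1;
    }

    public static void main(String[] args) {
        System.out.println("64    -> index " + indexOf(64));     // 3
        System.out.println("65536 -> index " + indexOf(65536));  // 38
    }
}
```

Running this confirms the indices quoted above: 64 lands at index 3 and 65536 at index 38.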
2. allocHandle.allocate(allocator) requests a buffer of the predicted size.
```java
// AdaptiveRecvByteBufAllocator.HandleImpl
@Override
public ByteBuf allocate(ByteBufAllocator alloc) {
    return alloc.ioBuffer(nextReceiveBufferSize);
}
```
The buffer is requested through the ByteBufAllocator's ioBuffer method.
```java
// AbstractByteBufAllocator
@Override
public ByteBuf ioBuffer(int initialCapacity) {
    if (PlatformDependent.hasUnsafe()) {
        return directBuffer(initialCapacity);
    }
    return heapBuffer(initialCapacity);
}
```
Depending on whether the platform supports Unsafe, either direct (off-heap) memory or heap memory is used.
The direct buffer path:
```java
// AbstractByteBufAllocator
@Override
public ByteBuf directBuffer(int initialCapacity) {
    return directBuffer(initialCapacity, Integer.MAX_VALUE);
}

@Override
public ByteBuf directBuffer(int initialCapacity, int maxCapacity) {
    if (initialCapacity == 0 && maxCapacity == 0) {
        return emptyBuf;
    }
    validate(initialCapacity, maxCapacity);
    return newDirectBuffer(initialCapacity, maxCapacity);
}

// UnpooledByteBufAllocator
@Override
protected ByteBuf newDirectBuffer(int initialCapacity, int maxCapacity) {
    ByteBuf buf;
    if (PlatformDependent.hasUnsafe()) {
        buf = new UnpooledUnsafeDirectByteBuf(this, initialCapacity, maxCapacity);
    } else {
        buf = new UnpooledDirectByteBuf(this, initialCapacity, maxCapacity);
    }
    return toLeakAwareBuffer(buf);
}
```
How does UnpooledUnsafeDirectByteBuf manage its buffer? It wraps NIO's ByteBuffer and allocates the memory via ByteBuffer.allocateDirect.
```java
protected UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
    // validation logic omitted
    this.alloc = alloc;
    setByteBuffer(allocateDirect(initialCapacity));
}

protected ByteBuffer allocateDirect(int initialCapacity) {
    return ByteBuffer.allocateDirect(initialCapacity);
}

private void setByteBuffer(ByteBuffer buffer) {
    ByteBuffer oldBuffer = this.buffer;
    if (oldBuffer != null) {
        if (doNotFree) {
            doNotFree = false;
        } else {
            freeDirect(oldBuffer);
        }
    }
    this.buffer = buffer;
    memoryAddress = PlatformDependent.directBufferAddress(buffer);
    tmpNioBuf = null;
    capacity = buffer.remaining();
}
```
memoryAddress = PlatformDependent.directBufferAddress(buffer) reads the buffer's address field, which points to the start of the off-heap memory.

capacity = buffer.remaining() records the buffer's capacity.
The toLeakAwareBuffer(buf) method wraps the requested buffer once more:
```java
protected static ByteBuf toLeakAwareBuffer(ByteBuf buf) {
    ResourceLeak leak;
    switch (ResourceLeakDetector.getLevel()) {
        case SIMPLE:
            leak = AbstractByteBuf.leakDetector.open(buf);
            if (leak != null) {
                buf = new SimpleLeakAwareByteBuf(buf, leak);
            }
            break;
        case ADVANCED:
        case PARANOID:
            leak = AbstractByteBuf.leakDetector.open(buf);
            if (leak != null) {
                buf = new AdvancedLeakAwareByteBuf(buf, leak);
            }
            break;
    }
    return buf;
}
```
Netty manages resources with reference counting. ByteBuf implements the ReferenceCounted interface: when a ByteBuf is instantiated its reference count is 1. Code that keeps a reference to the object calls retain() to increment the count, and calls release() to decrement it when done. When the count drops to 0, the object frees the underlying resources it holds, or returns them to the pool.
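The retain/release contract can be illustrated with a minimal, framework-free sketch. This is not Netty's actual AbstractReferenceCounted implementation, just the idea of a counter that starts at 1 and frees the resource when it reaches 0:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedResource {
    private final AtomicInteger refCnt = new AtomicInteger(1); // starts at 1 on creation
    private boolean deallocated;

    public RefCountedResource retain() {
        int cnt = refCnt.incrementAndGet();
        if (cnt <= 1) { // resurrection attempt: the object was already released
            refCnt.decrementAndGet();
            throw new IllegalStateException("resource already released");
        }
        return this;
    }

    // Returns true if this call dropped the count to 0 and freed the resource.
    public boolean release() {
        int cnt = refCnt.decrementAndGet();
        if (cnt == 0) {
            deallocate();
            return true;
        } else if (cnt < 0) {
            throw new IllegalStateException("release() called too many times");
        }
        return false;
    }

    public int refCnt() {
        return refCnt.get();
    }

    public boolean isDeallocated() {
        return deallocated;
    }

    protected void deallocate() {
        // Free the underlying memory or return it to the pool here.
        deallocated = true;
    }
}
```

Usage mirrors ByteBuf: every retain() must be balanced by a release(), and the final release() triggers deallocate().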
3. doReadBytes(byteBuf) reads data from the socketChannel into the buffer.
```java
// NioSocketChannel
@Override
protected int doReadBytes(ByteBuf byteBuf) throws Exception {
    return byteBuf.writeBytes(javaChannel(), byteBuf.writableBytes());
}

// WrappedByteBuf
@Override
public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
    return buf.writeBytes(in, length);
}

// AbstractByteBuf
@Override
public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
    ensureAccessible();
    ensureWritable(length);
    int writtenBytes = setBytes(writerIndex, in, length);
    if (writtenBytes > 0) {
        writerIndex += writtenBytes;
    }
    return writtenBytes;
}

// UnpooledUnsafeDirectByteBuf
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
    ensureAccessible();
    ByteBuffer tmpBuf = internalNioBuffer();
    tmpBuf.clear().position(index).limit(index + length);
    try {
        return in.read(tmpBuf);
    } catch (ClosedChannelException ignored) {
        return -1;
    }
}

private ByteBuffer internalNioBuffer() {
    ByteBuffer tmpNioBuf = this.tmpNioBuf;
    if (tmpNioBuf == null) {
        this.tmpNioBuf = tmpNioBuf = buffer.duplicate();
    }
    return tmpNioBuf;
}
```
Ultimately the read is performed through a plain ByteBuffer. As for why tmpNioBuf is used: duplicate() creates a view that shares the underlying memory but has its own position and limit, so setBytes can move them freely without disturbing the original buffer's state, and caching the duplicate in tmpNioBuf avoids allocating a fresh view on every read.
int localReadAmount = doReadBytes(byteBuf);
1. If it returns 0, no data was read; exit the loop.
2. If it returns -1, the peer has closed the connection; exit the loop.
3. Otherwise, data was read. Once the data is in the buffer, the pipeline's ChannelRead event fires with byteBuf as the argument, and user-defined inbound handlers can now do their business processing.
```java
static class DiscardServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf in = (ByteBuf) msg;
        try {
            while (in.isReadable()) { // (1)
                System.out.print((char) in.readByte());
                System.out.flush();
            }
        } finally {
            ReferenceCountUtil.release(msg); // (2)
        }
    }
}
```
The msg parameter is that byteBuf. When a request carries a lot of data, the channelRead event fires multiple times within one read loop, by default at most 16 times, configurable via the maxMessagesPerRead field.
If the client sends a large payload, it may arrive in several pieces, since TCP delivers data in bounded chunks; the same SelectionKey then triggers further read events, and the remaining data is read in the next select pass.
In practice, you should buffer the complete request before starting business processing.
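In Netty itself that is typically done by extending ByteToMessageDecoder or using a framing decoder such as LengthFieldBasedFrameDecoder. As a framework-free illustration of the idea, here is a sketch that accumulates incoming chunks until a complete length-prefixed frame (4-byte length followed by the body) has arrived; all names are made up for the example:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class FrameAccumulator {
    private final ByteArrayOutputStream acc = new ByteArrayOutputStream();

    // Feed one chunk, as each channelRead invocation would deliver it.
    public void feed(byte[] chunk) {
        acc.write(chunk, 0, chunk.length);
    }

    // Returns the frame body once it has fully arrived, otherwise null.
    public byte[] tryDecode() {
        byte[] data = acc.toByteArray();
        if (data.length < 4) {
            return null; // length prefix not complete yet
        }
        int bodyLen = ByteBuffer.wrap(data, 0, 4).getInt();
        if (data.length < 4 + bodyLen) {
            return null; // body not complete yet, keep accumulating
        }
        byte[] body = new byte[bodyLen];
        System.arraycopy(data, 4, body, 0, bodyLen);
        return body;
    }
}
```

A real decoder would also discard consumed bytes and loop for multiple frames per read; ByteToMessageDecoder handles that bookkeeping for you.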
Once all data has been processed, the pipeline's ChannelReadComplete event fires, and allocHandle records the number of bytes read in this pass so the buffer size can be adapted for the next read.
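The feedback loop behind that adjustment can be sketched as follows: grow the estimate aggressively (by INDEX_INCREMENT) when a read fills the predicted buffer, but shrink it cautiously (by INDEX_DECREMENT) only after two small reads in a row. The shortened SIZE_TABLE and the class name here are illustrative; the logic mirrors the behavior described above rather than quoting Netty's exact code:

```java
public class AdaptiveGuessSketch {
    // Abbreviated stand-in for the real SIZE_TABLE.
    static final int[] SIZE_TABLE = {
        64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536
    };
    static final int INDEX_INCREMENT = 4;
    static final int INDEX_DECREMENT = 1;

    private int index = 4; // start at 1024, the default initial size
    private int nextReceiveBufferSize = SIZE_TABLE[index];
    private boolean decreaseNow;

    public int guess() {
        return nextReceiveBufferSize;
    }

    public void record(int actualReadBytes) {
        if (actualReadBytes <= SIZE_TABLE[Math.max(0, index - INDEX_DECREMENT - 1)]) {
            if (decreaseNow) {
                // Second small read in a row: shrink the estimate.
                index = Math.max(index - INDEX_DECREMENT, 0);
                nextReceiveBufferSize = SIZE_TABLE[index];
                decreaseNow = false;
            } else {
                decreaseNow = true; // remember, but don't shrink yet
            }
        } else if (actualReadBytes >= nextReceiveBufferSize) {
            // The read filled the whole buffer: grow eagerly.
            index = Math.min(index + INDEX_INCREMENT, SIZE_TABLE.length - 1);
            nextReceiveBufferSize = SIZE_TABLE[index];
            decreaseNow = false;
        }
    }
}
```

With this shape, one full 1024-byte read jumps the estimate four table slots up, while two consecutive tiny reads step it down a single slot.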
At this point, the entire NioSocketChannel read event has been processed.