Java IO Study Notes II: DirectByteBuffer and HeapByteBuffer

Posted by Grey Zeng on 2021-06-12
Java

Author: Grey

Original article: Java IO Study Notes II: DirectByteBuffer and HeapByteBuffer

Basic usage of ByteBuffer.allocate() and ByteBuffer.allocateDirect()

Both APIs return the same ByteBuffer abstraction, so at the call site the two kinds of buffer are used identically.

import java.nio.ByteBuffer;

public class TestByteBuffer {
    public static void main(String[] args) {
        // Allocate a 1024-byte direct buffer; ByteBuffer.allocate(1024) is used identically.
        ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
        System.out.println("position: " + buffer.position());
        System.out.println("limit: " + buffer.limit());
        System.out.println("capacity: " + buffer.capacity());
        System.out.println("mark: " + buffer);

        // Write three bytes; position advances to 3.
        buffer.put("123".getBytes());

        System.out.println("-------------put:123......");
        System.out.println("mark: " + buffer);

        // Switch to read mode: limit = old position, position = 0.
        buffer.flip();

        System.out.println("-------------flip......");
        System.out.println("mark: " + buffer);

        // Read one byte; position advances to 1.
        buffer.get();

        System.out.println("-------------get......");
        System.out.println("mark: " + buffer);

        // Move the unread bytes to the front and switch back to write mode.
        buffer.compact();

        System.out.println("-------------compact......");
        System.out.println("mark: " + buffer);

        // Reset position/limit to the initial state (the data itself is not erased).
        buffer.clear();

        System.out.println("-------------clear......");
        System.out.println("mark: " + buffer);
    }
}

The output is:

mark: java.nio.DirectByteBuffer[pos=0 lim=1024 cap=1024]
-------------put:123......
mark: java.nio.DirectByteBuffer[pos=3 lim=1024 cap=1024]
-------------flip......
mark: java.nio.DirectByteBuffer[pos=0 lim=3 cap=1024]
-------------get......
mark: java.nio.DirectByteBuffer[pos=1 lim=3 cap=1024]
-------------compact......
mark: java.nio.DirectByteBuffer[pos=2 lim=1024 cap=1024]
-------------clear......
mark: java.nio.DirectByteBuffer[pos=0 lim=1024 cap=1024]

Right after the 1024-byte buffer is allocated, before any operation, position starts at 0 while limit and capacity both sit at 1024, as shown:

(figure: freshly allocated buffer, pos=0, lim=cap=1024)

After put() writes the three bytes of "123":

(figure: after put, pos=3, lim=cap=1024)

After flip(), position returns to 0 and limit moves to where writing stopped; this method prepares the buffer for reading:

(figure: after flip, pos=0, lim=3)

Calling get() reads out one byte, as shown below:

(figure: after get, pos=1, lim=3)

Calling compact() moves the unread bytes to the front of the buffer, filling the slot of the byte that was just consumed, and switches back to write mode:

(figure: after compact, pos=2, lim=1024)

Calling clear() resets the buffer to its freshly allocated state (only position and limit are reset; the data itself is not erased):

(figure: after clear, pos=0, lim=cap=1024)
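The state transitions walked through above can be verified programmatically. The sketch below replays the same sequence of operations and checks position/limit after each step (the helper name `assertState` is just for this example):

```java
import java.nio.ByteBuffer;

public class BufferStateCheck {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
        assertState(buffer, 0, 1024);      // freshly allocated: pos=0, lim=1024

        buffer.put("123".getBytes());
        assertState(buffer, 3, 1024);      // after put: pos=3

        buffer.flip();
        assertState(buffer, 0, 3);         // read mode: pos=0, lim=3

        buffer.get();
        assertState(buffer, 1, 3);         // one byte consumed: pos=1

        buffer.compact();
        assertState(buffer, 2, 1024);      // 2 unread bytes moved to front: pos=2

        buffer.clear();
        assertState(buffer, 0, 1024);      // back to the initial state

        System.out.println("all states match");
    }

    private static void assertState(ByteBuffer b, int pos, int lim) {
        if (b.position() != pos || b.limit() != lim) {
            throw new AssertionError("expected pos=" + pos + " lim=" + lim + ", got " + b);
        }
    }
}
```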

The difference between ByteBuffer.allocate() and ByteBuffer.allocateDirect()

See:

https://stackoverflow.com/questions/5670862/bytebuffer-allocate-vs-bytebuffer-allocatedirect/5671880#5671880

Ron Hitchens in his excellent book Java NIO seems to offer what I thought could be a good answer to your question:
Operating systems perform I/O operations on memory areas. These memory areas, as far as the operating system is concerned, are contiguous sequences of bytes. It's no surprise then that only byte buffers are eligible to participate in I/O operations. Also recall that the operating system will directly access the address space of the process, in this case the JVM process, to transfer the data. This means that memory areas that are targets of I/O operations must be contiguous sequences of bytes. In the JVM, an array of bytes may not be stored contiguously in memory, or the Garbage Collector could move it at any time. Arrays are objects in Java, and the way data is stored inside that object could vary from one JVM implementation to another.
For this reason, the notion of a direct buffer was introduced. Direct buffers are intended for interaction with channels and native I/O routines. They make a best effort to store the byte elements in a memory area that a channel can use for direct, or raw, access by using native code to tell the operating system to drain or fill the memory area directly.
Direct byte buffers are usually the best choice for I/O operations. By design, they support the most efficient I/O mechanism available to the JVM. Nondirect byte buffers can be passed to channels, but doing so may incur a performance penalty. It's usually not possible for a nondirect buffer to be the target of a native I/O operation. If you pass a nondirect ByteBuffer object to a channel for write, the channel may implicitly do the following on each call:
Create a temporary direct ByteBuffer object.
Copy the content of the nondirect buffer to the temporary buffer.
Perform the low-level I/O operation using the temporary buffer.
The temporary buffer object goes out of scope and is eventually garbage collected.
This can potentially result in buffer copying and object churn on every I/O, which are exactly the sorts of things we'd like to avoid. However, depending on the implementation, things may not be this bad. The runtime will likely cache and reuse direct buffers or perform other clever tricks to boost throughput. If you're simply creating a buffer for one-time use, the difference is not significant. On the other hand, if you will be using the buffer repeatedly in a high-performance scenario, you're better off allocating direct buffers and reusing them.
Direct buffers are optimal for I/O, but they may be more expensive to create than nondirect byte buffers. The memory used by direct buffers is allocated by calling through to native, operating system-specific code, bypassing the standard JVM heap. Setting up and tearing down direct buffers could be significantly more expensive than heap-resident buffers, depending on the host operating system and JVM implementation. The memory-storage areas of direct buffers are not subject to garbage collection because they are outside the standard JVM heap.
The performance tradeoffs of using direct versus nondirect buffers can vary widely by JVM, operating system, and code design. By allocating memory outside the heap, you may subject your application to additional forces of which the JVM is unaware. When bringing additional moving parts into play, make sure that you're achieving the desired effect. I recommend the old software maxim: first make it work, then make it fast. Don't worry too much about optimization up front; concentrate first on correctness. The JVM implementation may be able to perform buffer caching or other optimizations that will give you the performance you need without a lot of unnecessary effort on your part.
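At the API level the two buffer kinds are interchangeable for channel I/O; the heap-to-direct copy described above happens inside the JDK (in OpenJDK's internal I/O utilities) and is not directly observable from user code. A minimal sketch of the two call paths, which look identical at the call site:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelWriteDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("nio-demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            // Heap buffer: the JDK may copy its contents into a cached
            // temporary direct buffer before the actual OS-level write.
            ByteBuffer heap = ByteBuffer.allocate(16);
            heap.put("heap".getBytes());
            heap.flip();
            ch.write(heap);

            // Direct buffer: its native memory can be handed to the OS as-is.
            ByteBuffer direct = ByteBuffer.allocateDirect(16);
            direct.put("direct".getBytes());
            direct.flip();
            ch.write(direct);
        }
        System.out.println("bytes written: " + Files.size(tmp)); // 4 + 6 bytes
        Files.deleteIfExists(tmp);
    }
}
```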

Memory allocated with allocate() lives inside the JVM heap, while allocateDirect() allocates outside the JVM, i.e. at the operating-system level. OS-level allocation takes longer than JVM heap allocation, so allocateDirect is not automatically the more efficient choice in every situation.

So when should heap memory be used, and when direct memory?

Reference (in Chinese): NIO ByteBuffer 的 allocate 和 allocateDirect 的區別

When to use DirectByteBuffer (ByteBuffer.allocateDirect(int))?

1. Frequent native I/O: the buffer relays file data obtained from the operating system, or relays network data.

2. DirectByteBuffer objects are not created and destroyed frequently.

3. The DirectByteBuffer object is reused heavily: write data into it, flip(), read it back out, clear(), and repeat with the same object.

Moreover, a DirectByteBuffer's backing memory does not occupy the heap, so it is not constrained by the heap size; that native buffer is released only after the DirectByteBuffer object itself is garbage collected.
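The reuse pattern from point 3 (write, flip, read, clear, repeat) can be sketched as follows. This is a simplified stand-in for real channel I/O: it writes and reads strings rather than network data, but the buffer lifecycle is the same:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DirectBufferReuse {
    public static void main(String[] args) {
        // One direct buffer, allocated once and reused for every round.
        ByteBuffer buffer = ByteBuffer.allocateDirect(64);
        for (int i = 0; i < 3; i++) {
            buffer.put(("message-" + i).getBytes(StandardCharsets.UTF_8)); // write
            buffer.flip();                                                 // switch to read mode
            byte[] out = new byte[buffer.remaining()];
            buffer.get(out);                                               // read everything back
            System.out.println(new String(out, StandardCharsets.UTF_8));
            buffer.clear();                                                // reset for the next round
        }
    }
}
```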

When to use HeapByteBuffer (ByteBuffer.allocate(int))?

1. When the same HeapByteBuffer object is rarely reused, typically used once and then discarded, HeapByteBuffer is appropriate, because creating one is cheaper than creating a DirectByteBuffer.

(That said, the creation-time gap is less than a factor of two, and in practice you usually create a single DirectByteBuffer and reuse it rather than creating hundreds of them, so HeapByteBuffer offers little advantage in the single-object case. When you need a ByteBuffer in everyday development, DirectByteBuffer is generally a fine default.)
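A rough sketch of measuring the creation-cost gap is shown below. This is not a rigorous benchmark (no warmup, no JIT isolation; a real comparison should use a harness such as JMH), and the absolute numbers vary widely by JVM and OS, so no expected values are given:

```java
import java.nio.ByteBuffer;

public class AllocationCost {
    static ByteBuffer sink; // keep a reference so allocations aren't optimized away

    public static void main(String[] args) {
        int rounds = 2_000;

        long t0 = System.nanoTime();
        for (int i = 0; i < rounds; i++) sink = ByteBuffer.allocate(4096);
        long heapNs = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < rounds; i++) sink = ByteBuffer.allocateDirect(4096);
        long directNs = System.nanoTime() - t1;

        System.out.println("heap   total: " + heapNs / 1_000 + " us");
        System.out.println("direct total: " + directNs / 1_000 + " us");
    }
}
```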

Source code

Github
