Kafka is a great messaging system; you can read my earlier post 後端好書閱讀與推薦 to get a feel for its overall design. Today we will dig into its implementation details (I forked a copy of the code), starting with the Producer side.
To use Kafka you first instantiate a KafkaProducer. It needs required Properties such as the broker address and the serializers, plus optional ones such as acks (0, 1, all/-1), compression, retries, and batch.size. This small interface controls most of the Producer's behavior; once instantiated, you call send to produce messages.
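For concreteness, here is a minimal instantiation sketch (the broker address and the tuning values are placeholders, not recommendations):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSetup {
    public static void main(String[] args) {
        Properties props = new Properties();
        // required: broker address and serializers
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // optional: acknowledgement level, compression, retries, batch size
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}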
The core implementation is this method:
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback) {
    // intercept the record, which can be potentially modified; this method does not throw exceptions
    ProducerRecord<K, V> interceptedRecord = this.interceptors.onSend(record); // ①
    return doSend(interceptedRecord, callback); // ②
}
Depending on how you invoke it, you get three messaging modes: fire-and-forget (ignore the result), synchronous (block on the returned Future, leaving the callback null), and asynchronous (register a callback).
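As a sketch, the three modes look like this (assuming a producer configured as above and a hypothetical topic "demo"):

import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendModes {
    static void demo(KafkaProducer<String, String> producer)
            throws ExecutionException, InterruptedException {
        ProducerRecord<String, String> record = new ProducerRecord<>("demo", "key", "value");

        // fire-and-forget: ignore the returned Future
        producer.send(record);

        // synchronous: block on the Future, callback left null
        RecordMetadata meta = producer.send(record).get();
        System.out.println("offset: " + meta.offset());

        // asynchronous: the callback is invoked when the send completes
        producer.send(record, (metadata, exception) -> {
            if (exception != null)
                exception.printStackTrace();
        });
    }
}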
Let's look at the fields of the message class ProducerRecord:
private final String topic;      // topic
private final Integer partition; // partition
private final Headers headers;   // headers
private final K key;             // key
private final V value;           // value
private final Long timestamp;    // timestamp
It has several constructors to fit different kinds of messages: with or without a partition, with or without a key, and so on, as the sketch below shows.
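For instance (topic "demo" and the values are hypothetical):

import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordVariants {
    public static void main(String[] args) {
        // value only: no key, partition left to the partitioner
        ProducerRecord<String, String> r1 = new ProducerRecord<>("demo", "v");
        // key + value: partition derived from the key's hash
        ProducerRecord<String, String> r2 = new ProducerRecord<>("demo", "k", "v");
        // explicit partition, optionally with an explicit timestamp
        ProducerRecord<String, String> r3 = new ProducerRecord<>("demo", 0, "k", "v");
        ProducerRecord<String, String> r4 = new ProducerRecord<>("demo", 0, System.currentTimeMillis(), "k", "v");
    }
}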
In ①, the ProducerInterceptors (zero to arbitrarily many, forming an interceptor chain) process the ProducerRecord (for example stamping it with a timestamp, or doing auditing and statistics):
public ProducerRecord<K, V> onSend(ProducerRecord<K, V> record) {
    ProducerRecord<K, V> interceptRecord = record;
    for (ProducerInterceptor<K, V> interceptor : this.interceptors) {
        try {
            interceptRecord = interceptor.onSend(interceptRecord);
        } catch (Exception e) {
            // do not propagate the exception; continue with the next interceptor
            if (record != null)
                log.warn("Error executing interceptor onSend callback for topic: {}, partition: {}", record.topic(), record.partition(), e);
            else
                log.warn("Error executing interceptor onSend callback", e);
        }
    }
    return interceptRecord;
}
If the user configured interceptors, each one processes the record and returns the (possibly modified) ProducerRecord; otherwise the record is returned untouched. A custom interceptor might look like the sketch below.
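An interceptor is just an implementation of ProducerInterceptor registered under the interceptor.classes property; this hypothetical audit interceptor (not part of Kafka) restamps each record and counts acknowledgements:

import java.util.Map;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AuditInterceptor implements ProducerInterceptor<String, String> {
    private long acked = 0;

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // called on the send path, before serialization and partitioning
        return new ProducerRecord<>(record.topic(), record.partition(),
                System.currentTimeMillis(), record.key(), record.value(), record.headers());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // called from the IO thread when the broker responds or the send fails
        if (exception == null)
            acked++;
    }

    @Override
    public void close() {
        System.out.println("records acknowledged: " + acked);
    }

    @Override
    public void configure(Map<String, ?> configs) { }
}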
Then in ②, doSend actually sends the message, asynchronously (the source is long, so only the key parts are kept):
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
    TopicPartition tp = null;
    try {
        // serialize key and value
        byte[] serializedKey;
        try {
            serializedKey = keySerializer.serialize(record.topic(), record.headers(), record.key());
        } catch (ClassCastException cce) {
            // (the full source rethrows this as a SerializationException; elided here)
        }
        byte[] serializedValue;
        try {
            serializedValue = valueSerializer.serialize(record.topic(), record.headers(), record.value());
        } catch (ClassCastException cce) {
            // (same as above; elided)
        }
        // compute the partition and build the topic-partition pair
        int partition = partition(record, serializedKey, serializedValue, cluster);
        tp = new TopicPartition(record.topic(), partition);
        // callback and transaction handling omitted
        Header[] headers = record.headers().toArray();
        // append the message to the RecordAccumulator
        RecordAccumulator.RecordAppendResult result = accumulator.append(tp, timestamp, serializedKey,
                serializedValue, headers, interceptCallback, remainingWaitMs);
        // if the batch is full or a new batch was created, wake the IO thread
        // to send it, i.e. the sender's wakeup method
        if (result.batchIsFull || result.newBatchCreated) {
            log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
            this.sender.wakeup();
        }
        return result.future;
    } catch (Exception e) {
        // let the interceptors see the error, then rethrow
        this.interceptors.onSendError(record, tp, e);
        throw e;
    }
}
Here is the method that computes the partition:
private int partition(ProducerRecord<K, V> record,
                      byte[] serializedKey, byte[] serializedValue, Cluster cluster) {
    Integer partition = record.partition();
    // if the record carries a partition, use it; otherwise ask the partitioner
    return partition != null ?
            partition :
            partitioner.partition(
                    record.topic(), record.key(), serializedKey,
                    record.value(), serializedValue, cluster);
}
The default partitioner, DefaultPartitioner, works as follows: if a partition is present, use it directly; otherwise derive one from the key; if there is no key either, assign partitions round-robin.
/**
 * The default partitioning strategy:
 * <ul>
 * <li>If a partition is specified in the record, use it
 * <li>If no partition is specified but a key is present choose a partition based on a hash of the key
 * <li>If no partition or key is present choose a partition in a round-robin fashion
 * </ul>
 */
public class DefaultPartitioner implements Partitioner {
    private final ConcurrentMap<String, AtomicInteger> topicCounterMap = new ConcurrentHashMap<>();

    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int numPartitions = partitions.size();
        if (keyBytes == null) { // no key
            int nextValue = nextValue(topic);
            List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic); // available partitions
            if (availablePartitions.size() > 0) { // some are available: counter modulo their number
                int part = Utils.toPositive(nextValue) % availablePartitions.size();
                return availablePartitions.get(part).partition();
            } else { // none available: fall back to all partitions
                return Utils.toPositive(nextValue) % numPartitions;
            }
        } else { // key present: hash it and take the result modulo the partition count
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }
    }

    private int nextValue(String topic) {
        AtomicInteger counter = topicCounterMap.get(topic);
        if (null == counter) {
            counter = new AtomicInteger(ThreadLocalRandom.current().nextInt());
            AtomicInteger currentCounter = topicCounterMap.putIfAbsent(topic, counter);
            if (currentCounter != null) {
                counter = currentCounter;
            }
        }
        return counter.getAndIncrement(); // get then increment; combined with the modulo this yields round robin
    }
}
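Since Partitioner is a public interface, the strategy is pluggable: a class like the hypothetical sketch below can be registered via the partitioner.class property:

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// pins keyless records to partition 0 and hashes everything else
public class PinnedPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null)
            return 0;
        return (key.hashCode() & 0x7fffffff) % numPartitions; // mask keeps the hash non-negative
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}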
That covers the logical side of sending a message; next let's look at the physical side, where the bytes actually go out.
Sender is a Runnable that lives inside an IO thread, ioThread; it keeps reading messages from the RecordAccumulator queue and sends them to the Broker through a Selector. Its wakeup method is really the wakeup method of the KafkaClient interface, implemented by NetworkClient, which uses NIO: it ultimately comes down to java.nio.channels.Selector.wakeup(), illustrated in the sketch below.
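To see that primitive in isolation, here is a bare JDK sketch, nothing Kafka-specific: select() blocks until an event arrives or another thread calls wakeup():

import java.io.IOException;
import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        Selector selector = Selector.open();

        Thread ioThread = new Thread(() -> {
            try {
                // blocks until an event arrives or another thread calls wakeup()
                int ready = selector.select();
                System.out.println("unblocked, ready keys: " + ready);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        ioThread.start();

        Thread.sleep(100);
        selector.wakeup(); // unblocks select(), just as sender.wakeup() ultimately does
        ioThread.join();
        selector.close();
    }
}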
The main loop in Sender's run keeps alternating between preparing messages and waiting for them:
long pollTimeout = sendProducerData(now);//③
client.poll(pollTimeout, now);//④
③ finishes setting up the message, stores it in the channel, and then listens for the selection key of interest; this is implemented by KafkaChannel:
public void setSend(Send send) {
    if (this.send != null)
        throw new IllegalStateException("Attempt to begin a send operation with prior send operation still in progress, connection id is " + id);
    this.send = send;
    this.transportLayer.addInterestOps(SelectionKey.OP_WRITE);
}

// the corresponding method in one implementation of transportLayer
public void addInterestOps(int ops) {
    key.interestOps(key.interestOps() | ops);
}
④ is mainly the Selector's poll, whose select call is unblocked by wakeup:
public void poll(long timeout) throws IOException {
    /* check ready keys */
    long startSelect = time.nanoseconds();
    int numReadyKeys = select(timeout); // wakeup stops this select from blocking
    long endSelect = time.nanoseconds();
    this.sensors.selectTime.record(endSelect - startSelect, time.milliseconds());

    if (numReadyKeys > 0 || !immediatelyConnectedKeys.isEmpty() || dataInBuffers) {
        Set<SelectionKey> readyKeys = this.nioSelector.selectedKeys();

        // Poll from channels that have buffered data (but nothing more from the underlying socket)
        if (dataInBuffers) {
            keysWithBufferedRead.removeAll(readyKeys); // so no channel gets polled twice
            Set<SelectionKey> toPoll = keysWithBufferedRead;
            keysWithBufferedRead = new HashSet<>(); // poll() calls will repopulate if needed
            pollSelectionKeys(toPoll, false, endSelect);
        }

        // Poll from channels where the underlying socket has more data
        pollSelectionKeys(readyKeys, false, endSelect);
        // Clear all selected keys so that they are included in the ready count for the next select
        readyKeys.clear();

        pollSelectionKeys(immediatelyConnectedKeys, true, endSelect);
        immediatelyConnectedKeys.clear();
    } else {
        madeReadProgressLastPoll = true; // no work is also "progress"
    }

    long endIo = time.nanoseconds();
    this.sensors.ioTime.record(endIo - endSelect, time.milliseconds());
}
Inside it, the pollSelectionKeys method calls the following to complete the send:
public Send write() throws IOException {
    Send result = null;
    if (send != null && send(send)) {
        result = send;
        send = null;
    }
    return result;
}

private boolean send(Send send) throws IOException {
    send.writeTo(transportLayer);
    if (send.completed())
        transportLayer.removeInterestOps(SelectionKey.OP_WRITE);
    return send.completed();
}
A Send is one outgoing data packet, usually implemented by ByteBufferSend or MultiRecordsSend. Its writeTo calls the transportLayer's write method, typically implemented by PlaintextTransportLayer or SslTransportLayer depending on whether SSL is in use:
public long writeTo(GatheringByteChannel channel) throws IOException {
    long written = channel.write(buffers);
    if (written < 0)
        throw new EOFException("Wrote negative bytes to channel. This shouldn't happen.");
    remaining -= written;
    pending = TransportLayers.hasPendingWrites(channel);
    return written;
}

public int write(ByteBuffer src) throws IOException {
    return socketChannel.write(src);
}
That wraps up the two main flows on the Producer side: the business-level logic and the non-business network handling. The remaining features are provided through configuration.
Take ordering, for example, which hinges on max.in.flight.requests.per.connection: InFlightRequests performs the check on the send path (invoked via NetworkClient's canSendRequest). With the parameter set to 1, the next packet cannot go out while the current one is unacknowledged, which guarantees ordering:
public boolean canSendMore(String node) {
    Deque<NetworkClient.InFlightRequest> queue = requests.get(node);
    return queue == null || queue.isEmpty() ||
            (queue.peekFirst().send.completed() && queue.size() < this.maxInFlightRequestsPerConnection);
}
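In configuration terms this is a single line, extending the Properties from the first sketch:

// at most one unacknowledged request per connection, so retries cannot reorder batches
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);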
Another example is reliability, achieved by setting acks: in Sender, sendProduceRequest attaches a completion callback to the clientRequest:
RequestCompletionHandler callback = new RequestCompletionHandler() {
    public void onComplete(ClientResponse response) {
        handleProduceResponse(response, recordsByPartition, time.milliseconds()); // calls completeBatch
    }
};

/**
 * Complete or retry the delivery of the batch; if the acks are wrong, this retries.
 *
 * @param batch The record batch
 * @param response The produce response
 * @param correlationId The correlation id for the request
 * @param now The current POSIX timestamp in milliseconds
 */
private void completeBatch(ProducerBatch batch, ProduceResponse.PartitionResponse response, long correlationId,
                           long now, long throttleUntilTimeMs) {
    // body elided
}

public class ProduceResponse extends AbstractResponse {
    /**
     * Possible error code:
     * INVALID_REQUIRED_ACKS (21)
     */
}
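On the configuration side, again extending the earlier Properties sketch:

// "0": don't wait for the broker; "1": wait for the leader; "all" (or "-1"): wait for all in-sync replicas
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.RETRIES_CONFIG, 3); // retriable failures are retried up to this many times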
The Kafka source is wrapped layer upon layer and quite intricate; if you spot any mistakes, please don't hesitate to point them out.