Source: shenyifengtk.github.io. Please credit the source when reposting.
This article started from a small requirement: type a Kafka topic and consumer group into the browser, submit, and consumption starts automatically. That is easy enough, and a colleague knocked it out quickly with the raw Kafka client. It made me wonder whether the same thing could be done with Spring Kafka itself, without dropping down to the underlying driver. That question led to this walkthrough of how Spring Kafka turns an annotation into message consumption and method invocation, and at the end a few lines of code implement the small requirement above.
Source code analysis
The @EnableKafka entry point
The Kafka module starts with @EnableKafka, which carries @Import(KafkaListenerConfigurationSelector.class):
@Override
public String[] selectImports(AnnotationMetadata importingClassMetadata) {
    return new String[] { KafkaBootstrapConfiguration.class.getName() };
}
Next, look at the KafkaBootstrapConfiguration class:
public class KafkaBootstrapConfiguration implements ImportBeanDefinitionRegistrar {

    @Override
    public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) {
        if (!registry.containsBeanDefinition(
                KafkaListenerConfigUtils.KAFKA_LISTENER_ANNOTATION_PROCESSOR_BEAN_NAME)) {
            registry.registerBeanDefinition(KafkaListenerConfigUtils.KAFKA_LISTENER_ANNOTATION_PROCESSOR_BEAN_NAME,
                    new RootBeanDefinition(KafkaListenerAnnotationBeanPostProcessor.class));
        }
        if (!registry.containsBeanDefinition(KafkaListenerConfigUtils.KAFKA_LISTENER_ENDPOINT_REGISTRY_BEAN_NAME)) {
            registry.registerBeanDefinition(KafkaListenerConfigUtils.KAFKA_LISTENER_ENDPOINT_REGISTRY_BEAN_NAME,
                    new RootBeanDefinition(KafkaListenerEndpointRegistry.class));
        }
    }
}
It uses BeanDefinitionRegistry to turn the classes into BeanDefinitions and register them in the beanDefinitionMap; the container later instantiates everything in that map in one pass. In effect, these two infrastructure beans are simply handed over to Spring for initialization.
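To make the mechanism concrete, here is a minimal registrar of the same shape; MyService is a hypothetical class invented for this sketch:

import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.context.annotation.ImportBeanDefinitionRegistrar;
import org.springframework.core.type.AnnotationMetadata;

// hypothetical service class used only for this sketch
class MyService {
}

public class MyServiceRegistrar implements ImportBeanDefinitionRegistrar {

    @Override
    public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata,
            BeanDefinitionRegistry registry) {
        if (!registry.containsBeanDefinition("myService")) {
            // a RootBeanDefinition is only a recipe; Spring instantiates it later
            registry.registerBeanDefinition("myService", new RootBeanDefinition(MyService.class));
        }
    }
}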
Inside KafkaListenerAnnotationBeanPostProcessor
Now for the core processing class: how does KafkaListenerAnnotationBeanPostProcessor parse @KafkaListener? postProcessAfterInitialization is called after each bean is instantiated, giving the processor a chance to enhance it.
public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
    if (!this.nonAnnotatedClasses.contains(bean.getClass())) {
        // the bean may be a proxy at this point; get the original class, otherwise the class itself
        Class<?> targetClass = AopUtils.getTargetClass(bean);
        // look for @KafkaListener on the class; this helper wraps a series of lookups so the
        // annotation is found even in tricky cases (meta-annotations, class hierarchies);
        // both parent and child classes may carry the annotation, and every match is handled
        Collection<KafkaListener> classLevelListeners = findListenerAnnotations(targetClass);
        final boolean hasClassLevelListeners = classLevelListeners.size() > 0;
        final List<Method> multiMethods = new ArrayList<>();
        // look for the annotation on methods; matches go into a map keyed by Method
        Map<Method, Set<KafkaListener>> annotatedMethods = MethodIntrospector.selectMethods(targetClass,
                (MethodIntrospector.MetadataLookup<Set<KafkaListener>>) method -> {
                    Set<KafkaListener> listenerMethods = findListenerAnnotations(method);
                    return (!listenerMethods.isEmpty() ? listenerMethods : null);
                });
        if (hasClassLevelListeners) { // a class-level annotation works together with @KafkaHandler on methods
            Set<Method> methodsWithHandler = MethodIntrospector.selectMethods(targetClass,
                    (ReflectionUtils.MethodFilter) method ->
                            AnnotationUtils.findAnnotation(method, KafkaHandler.class) != null);
            multiMethods.addAll(methodsWithHandler);
        }
        if (annotatedMethods.isEmpty()) { // cache classes that have already been scanned
            this.nonAnnotatedClasses.add(bean.getClass());
        }
        else {
            // Non-empty set of methods
            for (Map.Entry<Method, Set<KafkaListener>> entry : annotatedMethods.entrySet()) {
                Method method = entry.getKey();
                for (KafkaListener listener : entry.getValue()) {
                    processKafkaListener(listener, method, bean, beanName); // per-method listener handling
                }
            }
            this.logger.debug(() -> annotatedMethods.size() + " @KafkaListener methods processed on bean '"
                    + beanName + "': " + annotatedMethods);
        }
        if (hasClassLevelListeners) {
            processMultiMethodListeners(classLevelListeners, multiMethods, bean, beanName); // @KafkaHandler handling
        }
    }
    return bean;
}
@KafkaListener can in fact be placed on a class, paired with @KafkaHandler on its methods. How is that used? A quick example:
@KafkaListener(topics = "${topic-name.lists}", groupId = "${group}", concurrency = "4")
public class Kddk {

    @KafkaHandler
    public void user(User user) {
    }

    @KafkaHandler
    public void std(Dog dog) {
    }
}
Messages are dispatched to different handler methods by payload type, saving the trouble of converting objects by hand. Those are the only scenarios I can think of for this, and they rarely come up day to day, so I won't dig deeper into how it is implemented.
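One detail worth knowing: Spring Kafka also lets one @KafkaHandler act as a fallback for payloads that match no other handler signature, via isDefault = true. A minimal sketch reusing the hypothetical types from the example above:

import org.springframework.kafka.annotation.KafkaHandler;
import org.springframework.kafka.annotation.KafkaListener;

@KafkaListener(topics = "${topic-name.lists}", groupId = "${group}")
public class KddkWithFallback {

    @KafkaHandler
    public void user(User user) {
        // handles records whose payload converts to User
    }

    @KafkaHandler(isDefault = true)
    public void fallback(Object payload) {
        // handles any payload that matches no other @KafkaHandler
    }
}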
protected void processKafkaListener(KafkaListener kafkaListener, Method method, Object bean, String beanName) {
    // if the method happens to be proxied, return the method of the original class
    Method methodToUse = checkProxy(method, bean);
    MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
    endpoint.setMethod(methodToUse);
    String beanRef = kafkaListener.beanRef();
    this.listenerScope.addListener(beanRef, bean);
    String[] topics = resolveTopics(kafkaListener);
    TopicPartitionOffset[] tps = resolveTopicPartitions(kafkaListener);
    // checks whether the method carries @RetryableTopic; if so it returns true and
    // registers the retry endpoints with the KafkaListenerEndpointRegistry
    if (!processMainAndRetryListeners(kafkaListener, bean, beanName, methodToUse, endpoint, topics, tps)) {
        // parse the @KafkaListener attributes into the endpoint and register it
        processListener(endpoint, kafkaListener, bean, beanName, topics, tps);
    }
    this.listenerScope.removeListener(beanRef);
}
protected void processListener(MethodKafkaListenerEndpoint<?, ?> endpoint, KafkaListener kafkaListener,
        Object bean, String beanName, String[] topics, TopicPartitionOffset[] tps) {

    processKafkaListenerAnnotationBeforeRegistration(endpoint, kafkaListener, bean, topics, tps);
    String containerFactory = resolve(kafkaListener.containerFactory());
    KafkaListenerContainerFactory<?> listenerContainerFactory = resolveContainerFactory(kafkaListener, containerFactory, beanName);
    // the core step: once parsing is done, register the endpoint with the registrar
    // and wait for the next phase
    this.registrar.registerEndpoint(endpoint, listenerContainerFactory);
    processKafkaListenerEndpointAfterRegistration(endpoint, kafkaListener);
}
The class name MethodKafkaListenerEndpoint reads naturally as an endpoint object. Simply put, an endpoint is one end of a communication channel; here it is the endpoint connecting the business method to the Kafka messages.
@RetryableTopic is an annotation introduced in Spring Kafka 2.7. Its job is to handle consumption failures: retrying, and dealing with the dead-letter records that result, since Kafka itself has no dead-letter queue or dead-letter message concept. Spring invented DLT topics (Dead-Letter Topic) for this: when consumption of a record has failed a certain number of times, the record is sent to the designated DLT topic. The annotation lets you configure retry attempts, backoff timing, which exceptions count as failures, the failure strategy and so on.
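A typical usage looks like this (a sketch: topic names, attempt count and backoff values are illustrative, attribute details vary slightly across Spring Kafka versions, and process is a hypothetical helper):

import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

@Component
public class RetryableOrderListener {

    @RetryableTopic(attempts = "4", backoff = @Backoff(delay = 1000, multiplier = 2.0))
    @KafkaListener(topics = "orders", groupId = "order-group")
    public void listen(String message) {
        process(message); // throwing here sends the record through the auto-created retry topics
    }

    @DltHandler
    public void handleDlt(String message) {
        // invoked for records that exhausted all retries and landed in the DLT
    }

    private void process(String message) {
        // hypothetical business logic
    }
}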
processMainAndRetryListeners does much the same work as processListener below: it parses the annotation content and ends up calling the registrar's registerEndpoint method.
KafkaListenerEndpointRegistry is created by the Spring container and is responsible for instantiating the MessageListenerContainers.
KafkaListenerEndpointRegistrar is created with a plain new and is not managed by the Spring container; it helps register endpoints into the KafkaListenerEndpointRegistry.
The two class names differ by only a few letters and are easy to mix up while reading the source; once you keep them apart, the rest is actually quite simple, so don't waste time being confused by the naming.
Registering endpoints
public void registerEndpoint(KafkaListenerEndpoint endpoint, @Nullable KafkaListenerContainerFactory<?> factory) {
    // Factory may be null, we defer the resolution right before actually creating the container
    // the descriptor is just an inner holder class pairing the two objects; since the factory
    // may still be null here, its resolution is deferred until the container is created
    KafkaListenerEndpointDescriptor descriptor = new KafkaListenerEndpointDescriptor(endpoint, factory);
    synchronized (this.endpointDescriptors) {
        // startImmediately is never set during bootstrap, so it is false here; once it is
        // true the listener container is created right away, which implies the Spring
        // context has already finished initializing
        if (this.startImmediately) { // Register and start immediately
            this.endpointRegistry.registerListenerContainer(descriptor.endpoint,
                    resolveContainerFactory(descriptor), true);
        }
        else {
            this.endpointDescriptors.add(descriptor);
        }
    }
}
Why the startImmediately switch? During bootstrap the endpoints are only collected into the list; once all of them have been added, the registrar's InitializingBean.afterPropertiesSet method performs the actual registration and startup, piggybacking on the Spring bean lifecycle as the trigger. Endpoints registered after the context has fully started would otherwise never be started, so the flag acts as a threshold switch: once flipped, newly registered endpoints start immediately.
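The pattern is ordinary Spring lifecycle machinery. A stripped-down, hypothetical analogue of the registrar (not the real class) makes the flag's role obvious:

import java.util.ArrayList;
import java.util.List;
import org.springframework.beans.factory.InitializingBean;

public class DeferredStartRegistrar implements InitializingBean {

    private final List<String> endpointDescriptors = new ArrayList<>();
    private boolean startImmediately;

    public void registerEndpoint(String endpoint) {
        synchronized (this.endpointDescriptors) {
            if (this.startImmediately) {
                start(endpoint); // context already refreshed: start right away
            }
            else {
                this.endpointDescriptors.add(endpoint); // still bootstrapping: defer
            }
        }
    }

    @Override
    public void afterPropertiesSet() { // invoked by Spring once this bean's properties are set
        synchronized (this.endpointDescriptors) {
            this.endpointDescriptors.forEach(this::start); // flush everything collected so far
            this.startImmediately = true; // from now on, registrations start immediately
        }
    }

    private void start(String endpoint) {
        System.out.println("starting " + endpoint);
    }
}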
Now let's see how KafkaListenerEndpointRegistrar.afterPropertiesSet sets each endpoint running:
@Override
public void afterPropertiesSet() {
    registerAllEndpoints();
}

protected void registerAllEndpoints() {
    synchronized (this.endpointDescriptors) {
        for (KafkaListenerEndpointDescriptor descriptor : this.endpointDescriptors) {
            if (descriptor.endpoint instanceof MultiMethodKafkaListenerEndpoint // only @KafkaHandler produces this type
                    && this.validator != null) {
                ((MultiMethodKafkaListenerEndpoint) descriptor.endpoint).setValidator(this.validator);
            }
            // create the MessageListenerContainer from the endpoint and the containerFactory
            this.endpointRegistry.registerListenerContainer(
                    descriptor.endpoint, resolveContainerFactory(descriptor));
        }
        // everything processed; flip the switch so later registrations start immediately
        this.startImmediately = true; // trigger immediate startup
    }
}
// resolve the concrete KafkaListenerContainerFactory from the descriptor; with deferred
// startup the descriptor's factory may be null, in which case Spring's default is used:
// if the annotation names a containerFactory, that custom factory wins; otherwise the
// default ConcurrentKafkaListenerContainerFactory is looked up by bean name
private KafkaListenerContainerFactory<?> resolveContainerFactory(KafkaListenerEndpointDescriptor descriptor) {
    if (descriptor.containerFactory != null) {
        return descriptor.containerFactory;
    }
    else if (this.containerFactory != null) {
        return this.containerFactory;
    }
    else if (this.containerFactoryBeanName != null) {
        Assert.state(this.beanFactory != null, "BeanFactory must be set to obtain container factory by bean name");
        this.containerFactory = this.beanFactory.getBean(
                this.containerFactoryBeanName, KafkaListenerContainerFactory.class);
        return this.containerFactory; // Consider changing this if live change of the factory is required
    }
    else {
        //.....
    }
}
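The default factory is the bean named kafkaListenerContainerFactory. Outside Spring Boot you would typically declare it yourself; a minimal sketch, with the broker address and deserializers being illustrative values:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        return factory;
    }
}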
MessageListenerContainer
Let's see how KafkaListenerEndpointRegistry.registerListenerContainer produces the message listener container.
public void registerListenerContainer(KafkaListenerEndpoint endpoint, KafkaListenerContainerFactory<?> factory) {
    registerListenerContainer(endpoint, factory, false);
}

public void registerListenerContainer(KafkaListenerEndpoint endpoint, KafkaListenerContainerFactory<?> factory,
        boolean startImmediately) {

    String id = endpoint.getId();
    Assert.hasText(id, "Endpoint id must not be empty");
    synchronized (this.listenerContainers) {
        Assert.state(!this.listenerContainers.containsKey(id),
                "Another endpoint is already registered with id '" + id + "'");
        // create the listener container
        MessageListenerContainer container = createListenerContainer(endpoint, factory);
        // store the container in a map keyed by the @KafkaListener id (effectively its bean name)
        this.listenerContainers.put(id, container);
        ConfigurableApplicationContext appContext = this.applicationContext;
        String groupName = endpoint.getGroup();
        // if the annotation names a custom listener group, look the group instance up
        // and add the container to it
        if (StringUtils.hasText(groupName) && appContext != null) {
            // part omitted
        }
        if (startImmediately) { // when starting immediately, call the container's start method by hand
            startIfNecessary(container);
        }
    }
}

protected MessageListenerContainer createListenerContainer(KafkaListenerEndpoint endpoint,
        KafkaListenerContainerFactory<?> factory) {

    // here the listener container is created
    MessageListenerContainer listenerContainer = factory.createListenerContainer(endpoint);
    if (listenerContainer instanceof InitializingBean) { // the Spring context has finished initializing,
        try {                                            // so lifecycle callbacks no longer fire; invoke explicitly
            ((InitializingBean) listenerContainer).afterPropertiesSet();
        }
        catch (Exception ex) {
            throw new BeanInitializationException("Failed to initialize message listener container", ex);
        }
    }
    int containerPhase = listenerContainer.getPhase();
    if (listenerContainer.isAutoStartup() &&
            containerPhase != AbstractMessageListenerContainer.DEFAULT_PHASE) { // a custom phase value
        if (this.phase != AbstractMessageListenerContainer.DEFAULT_PHASE && this.phase != containerPhase) {
            throw new IllegalStateException("Encountered phase mismatch between container "
                    + "factory definitions: " + this.phase + " vs " + containerPhase);
        }
        this.phase = listenerContainer.getPhase();
    }
    return listenerContainer;
}

private void startIfNecessary(MessageListenerContainer listenerContainer) {
    // contextRefreshed is true once the Spring context has fully started
    if (this.contextRefreshed || listenerContainer.isAutoStartup()) {
        listenerContainer.start();
    }
}
In short, the KafkaListenerContainerFactory creates the MessageListenerContainer, and the container plugs into the Spring lifecycle by implementing SmartLifecycle: when the context finishes refreshing, Spring calls start() on every SmartLifecycle bean whose isAutoStartup() returns true. After the context is fully started, that callback never fires again, so containers registered later must be started by hand; that is exactly the case startIfNecessary checks for.
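For reference, this is roughly what the SmartLifecycle contract looks like when implemented by hand (a sketch, not the container's actual code):

import org.springframework.context.SmartLifecycle;

public class MyLifecycleBean implements SmartLifecycle {

    private volatile boolean running;

    @Override
    public void start() {
        // called by Spring on context refresh, because isAutoStartup() returns true
        this.running = true;
    }

    @Override
    public void stop() {
        this.running = false;
    }

    @Override
    public boolean isRunning() {
        return this.running;
    }

    @Override
    public boolean isAutoStartup() {
        return true; // beans created after the refresh never receive this callback
    }
}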
createListenerContainer
public C createListenerContainer(KafkaListenerEndpoint endpoint) {
    C instance = createContainerInstance(endpoint);
    JavaUtils.INSTANCE
            .acceptIfNotNull(endpoint.getId(), instance::setBeanName);
    if (endpoint instanceof AbstractKafkaListenerEndpoint) {
        // push configuration into the endpoint: ack mode, batch consumption and the like
        // are all configuration-driven, and the factory holds that configuration
        configureEndpoint((AbstractKafkaListenerEndpoint<K, V>) endpoint);
    }
    // the core step: wrap the annotated bean method in a MessagingMessageListenerAdapter,
    // then use the adapter's parameters to create the message listener handed to the instance
    endpoint.setupListenerContainer(instance, this.messageConverter);
    // apply the concurrency setting
    initializeContainer(instance, endpoint);
    // apply custom configuration
    customizeContainer(instance);
    return instance;
}
At this point the Kafka configuration, the @KafkaListener attributes, the consuming method and the bean have all been wired in by createListenerContainer; the listener container can now start pulling messages from Kafka and invoking the method to process them.
Start straight from the listener container ConcurrentMessageListenerContainer's start() method:
public final void start() {
    checkGroupId();
    synchronized (this.lifecycleMonitor) {
        if (!isRunning()) { // not yet listening at this point, so the running flag is still false
            Assert.state(this.containerProperties.getMessageListener() instanceof GenericMessageListener,
                    () -> "A " + GenericMessageListener.class.getName() + " implementation must be provided");
            // abstract method implemented by the subclasses
            doStart();
        }
    }
}
@Override
protected void doStart() {
    if (!isRunning()) {
        // topic pattern matching: resolve the pattern against the broker's topics, throw if none match
        checkTopics();
        ContainerProperties containerProperties = getContainerProperties();
        // the explicitly assigned partitions and offsets for the consumer group, if any
        TopicPartitionOffset[] topicPartitions = containerProperties.getTopicPartitions();
        if (topicPartitions != null && this.concurrency > topicPartitions.length) {
            // warn when the concurrency exceeds the number of assigned partitions
            this.logger.warn(() -> "When specific partitions are provided, the concurrency must be less than or "
                    + "equal to the number of partitions; reduced from " + this.concurrency + " to "
                    + topicPartitions.length);
            // note: concurrency is forcibly capped at the partition count, so an over-sized
            // concurrency setting can never outnumber the partitions
            this.concurrency = topicPartitions.length;
        }
        setRunning(true); // start listening
        // concurrency is the value parsed from @KafkaListener when the container was created;
        // it controls how many KafkaMessageListenerContainer instances are spawned
        for (int i = 0; i < this.concurrency; i++) {
            // create a KafkaMessageListenerContainer
            KafkaMessageListenerContainer<K, V> container =
                    constructContainer(containerProperties, topicPartitions, i);
            // wire up interceptors, advice and so on; all null unless configured
            configureChildContainer(i, container);
            if (isPaused()) {
                container.pause();
            }
            container.start(); // start the task
            // all consumer threads are created by this same parent container; keeping the
            // children in `containers` allows stopping consumption of a topic later
            this.containers.add(container);
        }
    }
}

private KafkaMessageListenerContainer<K, V> constructContainer(ContainerProperties containerProperties,
        @Nullable TopicPartitionOffset[] topicPartitions, int i) {

    KafkaMessageListenerContainer<K, V> container;
    if (topicPartitions == null) {
        container = new KafkaMessageListenerContainer<>(this, this.consumerFactory, containerProperties); // NOSONAR
    }
    else { // with explicit partitions, each consumer gets its share of them
        container = new KafkaMessageListenerContainer<>(this, this.consumerFactory, // NOSONAR
                containerProperties, partitionSubset(containerProperties, i));
    }
    return container;
}
So this is how the @KafkaListener concurrency is implemented, and why the concurrency cannot exceed the partition count: when the concurrency is lower than the number of partitions, the partitions are divided among the consumers, so one consumer may end up owning several partitions. Each KafkaMessageListenerContainer created here then consumes the Kafka topic.
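As a worked example (a sketch; the topic name is illustrative and "orders" is assumed to have 6 partitions):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // 4 child KafkaMessageListenerContainers are created. With 6 partitions and the
    // default range assignor they own 2, 2, 1 and 1 partitions respectively.
    // With explicitly assigned partitions, concurrency = "8" would be capped at 6;
    // with plain topic subscription the two surplus consumers would simply sit idle.
    @KafkaListener(topics = "orders", groupId = "order-group", concurrency = "4")
    public void onOrder(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}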
KafkaMessageListenerContainer
Both KafkaMessageListenerContainer and ConcurrentMessageListenerContainer extend AbstractMessageListenerContainer and override doStart() to begin their work, so doStart is once again the entry point.
protected void doStart() {
    if (isRunning()) {
        return;
    }
    if (this.clientIdSuffix == null) { // stand-alone container
        checkTopics();
    }
    ContainerProperties containerProperties = getContainerProperties();
    // validate the ack mode; see the options in org.springframework.kafka.listener.ContainerProperties.AckMode
    checkAckMode(containerProperties);

    Object messageListener = containerProperties.getMessageListener();
    // the task executor looks like a thread-pool Executor, but under the hood it
    // simply starts a plain Thread per task
    AsyncListenableTaskExecutor consumerExecutor = containerProperties.getConsumerTaskExecutor();
    if (consumerExecutor == null) {
        consumerExecutor = new SimpleAsyncTaskExecutor(
                (getBeanName() == null ? "" : getBeanName()) + "-C-");
        containerProperties.setConsumerTaskExecutor(consumerExecutor);
    }
    GenericMessageListener<?> listener = (GenericMessageListener<?>) messageListener;
    // an enum derived from the listener type; it marks how records are to be handled:
    // batch or single record, manual or auto commit
    ListenerType listenerType = determineListenerType(listener);
    // ListenerConsumer is an inner class with direct access to everything Kafka-related
    this.listenerConsumer = new ListenerConsumer(listener, listenerType);
    setRunning(true); // set the running flag
    this.startLatch = new CountDownLatch(1);
    this.listenerConsumerFuture = consumerExecutor
            .submitListenable(this.listenerConsumer); // start the thread
    try {
        if (!this.startLatch.await(containerProperties.getConsumerStartTimeout().toMillis(), TimeUnit.MILLISECONDS)) {
            this.logger.error("Consumer thread failed to start - does the configured task executor "
                    + "have enough threads to support all containers and concurrency?");
            publishConsumerFailedToStart();
        }
    }
    catch (@SuppressWarnings(UNUSED) InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
The main logic here is starting a thread to pull Kafka messages. Go straight to ListenerConsumer's run():
@Override // NOSONAR complexity
public void run() {
    ListenerUtils.setLogOnlyMetadata(this.containerProperties.isOnlyLogRecordMetadata());
    // publish an event to the Spring container
    publishConsumerStartingEvent();
    this.consumerThread = Thread.currentThread();
    setupSeeks();
    KafkaUtils.setConsumerGroupId(this.consumerGroupId);
    this.count = 0;
    this.last = System.currentTimeMillis();
    // fetch the consumer group's partitions and offsets from Kafka and store them
    initAssignedPartitions();
    // publish another event
    publishConsumerStartedEvent();
    Throwable exitThrowable = null;
    while (isRunning()) {
        try {
            // the core: poll records and invoke the handler method
            pollAndInvoke();
        }
        // ... omitted
pollAndInvoke covers the whole poll-and-process cycle. The method itself is long-winded, but it boils down to invoking the message handler generated from the endpoint and injecting the arguments into the target method.
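Conceptually it reduces to the classic consumer loop. A heavily simplified sketch against the raw client, not the actual Spring code (broker address, group and topic are illustrative):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoopSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                // 1. pull a batch of records
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    // 2. this is where Spring converts the record and invokes the
                    //    @KafkaListener method through the listener adapter
                    System.out.println(record.value());
                }
                // 3. commit offsets, which Spring does according to the configured AckMode
                consumer.commitSync();
            }
        }
    }
}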
Summary
To sum up how Spring Kafka turns one simple annotation into message-consuming methods: through Spring's post-processor mechanism, KafkaListenerAnnotationBeanPostProcessor scans every instantiated bean and finds the beans and methods carrying @KafkaListener; the annotation's attributes are parsed into a MethodKafkaListenerEndpoint, which is registered with the KafkaListenerEndpointRegistrar for safekeeping. When the registrar's afterPropertiesSet runs, every stored endpoint is registered with the KafkaListenerEndpointRegistry, which uses the container factory to build a ConcurrentMessageListenerContainer per endpoint; that container in turn spawns as many KafkaMessageListenerContainers as the concurrency dictates, and finally each of those uses a Thread to pull Kafka messages asynchronously and invoke the bean method to process them.
Along the way we saw how topic partitions relate to the concurrency, learned that Kafka consumption can be controlled at runtime, that a listener method's return value can be forwarded to another topic, and discovered the @RetryableTopic retry mechanism and the DLT dead-letter topics. Little of this would come up in everyday work without reading the source. The more source code I read, the more I find it deepens my understanding of a framework.
Dynamic subscription
Having read all that code, a simple version of dynamic listening is just a copy-and-paste away from the processor:
@Component
public class ListenerMessageCommand<K, V> implements CommandLineRunner {

    @Autowired
    private Cusmotd cusmotd;

    @Autowired
    private KafkaListenerEndpointRegistry endpointRegistry;

    @Autowired
    private KafkaListenerContainerFactory<?> kafkaListenerContainerFactory;

    private Logger logger = LoggerFactory.getLogger(ListenerMessageCommand.class);

    @Override
    public void run(String... args) throws Exception {
        MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(cusmotd);
        Method method = ReflectionUtils.findMethod(cusmotd.getClass(), "dis", ConsumerRecord.class);
        endpoint.setMethod(method);
        endpoint.setMessageHandlerMethodFactory(new DefaultMessageHandlerMethodFactory());
        endpoint.setId("tk.shengyifeng.custom#1");
        endpoint.setGroupId("test");
        endpoint.setTopicPartitions(new TopicPartitionOffset[0]);
        endpoint.setTopics("skdsk");
        endpoint.setClientIdPrefix("comuserd_");
        endpoint.setConcurrency(1);
        endpointRegistry.registerListenerContainer(endpoint, kafkaListenerContainerFactory, true);
        logger.info("register...............");
    }
}
Having walked through the full source, we know that listening begins when the KafkaListenerContainerFactory creates the container and its start method is invoked. Since we hold a reference to the listener container, we can call its full API, including dynamically stopping consumption of a topic.
@RestController
@RequestMapping("kafka")
public class KafkaController<K, V> {

    @Autowired
    private Cusmotd cusmotd;

    @Autowired
    private KafkaListenerContainerFactory<?> kafkaListenerContainerFactory;

    private Map<String, MessageListenerContainer> containerMap = new ConcurrentReferenceHashMap<>();

    @GetMapping("start/topic")
    public void startTopic(String topicName, String groupName) {
        MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBean(cusmotd);
        Method method = ReflectionUtils.findMethod(cusmotd.getClass(), "dis", ConsumerRecord.class);
        endpoint.setMethod(method);
        endpoint.setMessageHandlerMethodFactory(new DefaultMessageHandlerMethodFactory());
        endpoint.setId("tk.shengyifeng.custom#1");
        endpoint.setGroupId(groupName);
        endpoint.setTopicPartitions(new TopicPartitionOffset[0]);
        endpoint.setTopics(topicName);
        endpoint.setClientIdPrefix("comuserd_");
        endpoint.setConcurrency(1);
        MessageListenerContainer listenerContainer = kafkaListenerContainerFactory.createListenerContainer(endpoint);
        listenerContainer.start();
        containerMap.put(topicName, listenerContainer);
    }

    @GetMapping("stop/topic")
    public void stopTopic(String topicName) {
        if (containerMap.containsKey(topicName)) {
            containerMap.get(topicName).stop();
        }
    }
}
This simple HTTP interface supports dynamically subscribing to topics from the outside, and stopping consumption of topics that were subscribed this way.
Those who declare their consumers with @KafkaListener need not be envious: Spring provides a way to obtain the MessageListenerContainer too. As the analysis above showed, KafkaListenerEndpointRegistry keeps every container instance in its internal listenerContainers map and exposes a public method to fetch one by id, and since KafkaListenerEndpointRegistry itself is instantiated by Spring, you can simply inject it and look the container up.
To make the id easy to know, set it explicitly on the annotation; without one, ids follow the default generation rule of org.springframework.kafka.KafkaListenerEndpointContainer# plus an increasing index.
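A sketch of controlling such a listener through the registry (the id "myListener" is illustrative and assumed to be declared via @KafkaListener(id = "myListener", ...)):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Service;

@Service
public class ListenerControlService {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    public void pauseListener() {
        MessageListenerContainer container = this.registry.getListenerContainer("myListener");
        if (container != null) {
            container.pause(); // stop polling once in-flight records are processed
        }
    }

    public void resumeListener() {
        MessageListenerContainer container = this.registry.getListenerContainer("myListener");
        if (container != null) {
            container.resume();
        }
    }
}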
Spring Boot auto-configuration
You may wonder how Spring Boot's Kafka configuration reaches kafkaListenerContainerFactory, since the factory is initialized by the Spring container and no constructor parameter injection appeared in the source above. For the details, look at KafkaAnnotationDrivenConfiguration and ConcurrentKafkaListenerContainerFactoryConfigurer:
@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(EnableKafka.class)
class KafkaAnnotationDrivenConfiguration {

    private final KafkaProperties properties;
    private final RecordMessageConverter messageConverter;
    private final RecordFilterStrategy<Object, Object> recordFilterStrategy;
    private final BatchMessageConverter batchMessageConverter;
    private final KafkaTemplate<Object, Object> kafkaTemplate;
    private final KafkaAwareTransactionManager<Object, Object> transactionManager;
    private final ConsumerAwareRebalanceListener rebalanceListener;
    private final ErrorHandler errorHandler;
    private final BatchErrorHandler batchErrorHandler;
    private final AfterRollbackProcessor<Object, Object> afterRollbackProcessor;
    private final RecordInterceptor<Object, Object> recordInterceptor;

    KafkaAnnotationDrivenConfiguration(KafkaProperties properties,
            ObjectProvider<RecordMessageConverter> messageConverter,
            ObjectProvider<RecordFilterStrategy<Object, Object>> recordFilterStrategy,
            ObjectProvider<BatchMessageConverter> batchMessageConverter,
            ObjectProvider<KafkaTemplate<Object, Object>> kafkaTemplate,
            ObjectProvider<KafkaAwareTransactionManager<Object, Object>> kafkaTransactionManager,
            ObjectProvider<ConsumerAwareRebalanceListener> rebalanceListener,
            ObjectProvider<ErrorHandler> errorHandler,
            ObjectProvider<BatchErrorHandler> batchErrorHandler,
            ObjectProvider<AfterRollbackProcessor<Object, Object>> afterRollbackProcessor,
            ObjectProvider<RecordInterceptor<Object, Object>> recordInterceptor) {
        this.properties = properties;
        this.messageConverter = messageConverter.getIfUnique();
        this.recordFilterStrategy = recordFilterStrategy.getIfUnique();
        this.batchMessageConverter = batchMessageConverter
                .getIfUnique(() -> new BatchMessagingMessageConverter(this.messageConverter));
        this.kafkaTemplate = kafkaTemplate.getIfUnique();
        this.transactionManager = kafkaTransactionManager.getIfUnique();
        this.rebalanceListener = rebalanceListener.getIfUnique();
        this.errorHandler = errorHandler.getIfUnique();
        this.batchErrorHandler = batchErrorHandler.getIfUnique();
        this.afterRollbackProcessor = afterRollbackProcessor.getIfUnique();
        this.recordInterceptor = recordInterceptor.getIfUnique();
    }
This is the essence of Spring Boot auto-configuration, implemented in the spring-boot-autoconfigure module: @ConditionalOnClass decides whether a configuration class activates, so importing the corresponding POM dependency switches it on. The configuration values are bound into the KafkaProperties object, the properties are pushed into the factory object, and the instantiated object is handed to the Spring container. You will find that most auto-configurations follow exactly this pattern.
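The same pattern in miniature: a hypothetical auto-configuration, with SomeClient and SomeProperties invented purely for illustration:

import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// hypothetical client, defined only to make the sketch self-contained
class SomeClient {
    private final String endpoint;
    SomeClient(String endpoint) {
        this.endpoint = endpoint;
    }
}

// bound from application.properties/yml under the "some" prefix, like KafkaProperties
@ConfigurationProperties(prefix = "some")
class SomeProperties {
    private String endpoint = "http://localhost:8080"; // illustrative default
    public String getEndpoint() {
        return this.endpoint;
    }
    public void setEndpoint(String endpoint) {
        this.endpoint = endpoint;
    }
}

@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(SomeClient.class) // activates only when SomeClient is on the classpath
@EnableConfigurationProperties(SomeProperties.class)
class SomeAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean // backs off if the user already declared their own SomeClient
    SomeClient someClient(SomeProperties properties) {
        return new SomeClient(properties.getEndpoint());
    }
}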