Welcome to my GitHub
https://github.com/zq2599/blog_demos
Contents: a categorized index of all my original articles plus the companion source code, covering Java, Docker, Kubernetes, DevOps, and more;
Overview
This is the third article in the "Flink sink in action" series. It is a hands-on look at Flink's official cassandra connector. The whole exercise is shown in the figure below: we first read strings from kafka, run a wordcount on them, and then both print the results and write them to cassandra:
Links to the whole series
Software versions
The software versions used in this exercise are as follows:
- cassandra: 3.11.6
- kafka: 2.4.0 (scala: 2.12)
- jdk: 1.8.0_191
- flink: 1.9.2
- maven: 3.6.0
- OS running flink: CentOS Linux release 7.7.1908
- OS running cassandra: CentOS Linux release 7.7.1908
- IDEA: 2018.3.5 (Ultimate Edition)
About cassandra
The cassandra used here is a three-node cluster; for how to set it up, see "Quickly deploying a cassandra3 cluster with ansible".
Preparing the cassandra keyspace and table
First create the keyspace and table:
- Log in to cassandra with cqlsh:
cqlsh 192.168.133.168
- Create the keyspace (3 replicas):
CREATE KEYSPACE IF NOT EXISTS example
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'};
- Create the table:
CREATE TABLE IF NOT EXISTS example.wordcount (
word text,
count bigint,
PRIMARY KEY(word)
);
Preparing the kafka topic
- Start the kafka service;
- Create a topic named test001; a reference command:
./kafka-topics.sh \
--create \
--bootstrap-server 127.0.0.1:9092 \
--replication-factor 1 \
--partitions 1 \
--topic test001
- Start an interactive console-producer session for sending messages; a reference command:
./kafka-console-producer.sh \
--broker-list kafka:9092 \
--topic test001
- In this session, any string you type followed by Enter is sent to the broker as a message;
Source code download
If you would rather not write the code yourself, the source for the whole series can be downloaded from GitHub; the addresses and links are listed in the table below (https://github.com/zq2599/blog_demos):
Name | Link | Remarks |
---|---|---|
Project home page | https://github.com/zq2599/blog_demos | the project's home page on GitHub |
git repository (https) | https://github.com/zq2599/blog_demos.git | repository address of the project source, https protocol |
git repository (ssh) | git@github.com:zq2599/blog_demos.git | repository address of the project source, ssh protocol |
This git project contains multiple folders; the application for this chapter is in the flinksinkdemo folder, as shown in the red box below:
Two ways to write to cassandra
The official flink connector supports two ways of writing to cassandra:
- Tuple write: the fields of a Tuple object are mapped, in order, to the parameters of a specified SQL (CQL) statement;
- POJO write: via the DataStax object mapper, POJO objects are mapped to the table and columns configured by annotations;
Next we will use each of these two approaches; a condensed sketch contrasting the two builder calls follows below.
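As a quick preview (the host address, the INSERT statement, and the WordCount POJO are all taken from the complete examples later in this article; the class and method names here are only illustrative, not part of the project), the two sink builder calls differ roughly like this:

```java
package com.bolingcavalry.addsink;

import com.datastax.driver.mapping.Mapper;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class SinkStyleSketch {

    // Tuple write: the Tuple fields are bound, in order, to the '?' placeholders of the statement
    static void tupleStyle(DataStream<Tuple2<String, Long>> tuples) throws Exception {
        CassandraSink.addSink(tuples)
                .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
                .setHost("192.168.133.168")
                .build();
    }

    // POJO write: no statement is given, the DataStax mapper reads the @Table/@Column annotations on WordCount
    static void pojoStyle(DataStream<WordCount> pojos) throws Exception {
        CassandraSink.addSink(pojos)
                .setHost("192.168.133.168")
                .setMapperOptions(() -> new Mapper.Option[]{Mapper.Option.saveNullFields(true)})
                .build();
    }
}
```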
Development (Tuple write)
- The flinksinkdemo project created in "Flink sink in action, part 2: kafka" is used here as well;
- Add the cassandra connector dependency to pom.xml:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-cassandra_2.11</artifactId>
<version>1.10.0</version>
</dependency>
- Also add the flink-streaming-scala dependency, otherwise the code calling CassandraSink.addSink will fail to compile:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
- Add CassandraTuple2Sink.java, the job class: it reads string messages from kafka, turns them into a Tuple2 data stream, and writes that stream to cassandra. The key point of the write is matching the Tuple fields to the parameters of the specified statement:
package com.bolingcavalry.addsink;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import java.util.Properties;
public class CassandraTuple2Sink {
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//set the parallelism
env.setParallelism(1);
//properties used to connect to kafka
Properties properties = new Properties();
//broker address
properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
//zookeeper address
properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
//consumer group id
properties.setProperty("group.id", "flink-connector");
//instantiate the consumer
FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
"test001",
new SimpleStringSchema(),
properties
);
//start consuming from the latest offset, i.e. discard historical messages
flinkKafkaConsumer.setStartFromLatest();
//obtain a DataStream via the addSource method
DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);
DataStream<Tuple2<String, Long>> result = dataStream
.flatMap(new FlatMapFunction<String, Tuple2<String, Long>>() {
@Override
public void flatMap(String value, Collector<Tuple2<String, Long>> out) {
String[] words = value.toLowerCase().split("\\s");
for (String word : words) {
//in the cassandra table, word is the primary key, so it must not be empty
if (!word.isEmpty()) {
out.collect(new Tuple2<String, Long>(word, 1L));
}
}
}
}
)
.keyBy(0)
.timeWindow(Time.seconds(5))
.sum(1);
result.addSink(new PrintSinkFunction<>())
.name("print Sink")
.disableChaining();
CassandraSink.addSink(result)
.setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
.setHost("192.168.133.168")
.build()
.name("cassandra Sink")
.disableChaining();
env.execute("kafka-2.4 source, cassandra-3.11.6 sink, tuple2");
}
}
- In the code above, data is read from kafka, a word count is performed, and the results are written to cassandra. Note the chain of calls after the addSink method (it carries the database connection parameters); this builder style is the operation recommended by the flink documentation. Also, to make the DAG easier to inspect in the Flink web UI, disableChaining is called to turn off operator chaining; this line can be removed in production;
- After coding, build with mvn clean package -U -DskipTests; the file flinksinkdemo-1.0-SNAPSHOT.jar is produced in the target directory;
- In the Flink web UI, upload flinksinkdemo-1.0-SNAPSHOT.jar and specify the entry class, as shown in the red box below:
- After the job starts, the DAG looks like this:
- Go back to the console-producer session opened earlier and send the string "aaa bbb ccc aaa aaa aaa";
- Check the data in cassandra: three records have been added and their contents match expectations:
- Check the TaskManager console output: it contains the printed Tuple2 results, consistent with what is in cassandra:
- The record counts of all SubTasks in the DAG also match expectations:
Development (POJO write)
Next, let's try the POJO write, in which instances of the business data structure are written to cassandra directly, without specifying any statement:
- Writing POJOs to the database requires the datastax library; add the following dependency to pom.xml:
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>3.1.4</version>
<classifier>shaded</classifier>
<!-- Because the shaded JAR uses the original POM, you still need
to exclude this dependency explicitly: -->
<exclusions>
<exclusion>
<groupId>io.netty</groupId>
<artifactId>*</artifactId>
</exclusion>
</exclusions>
</dependency>
- Note the exclusions node configured above: when depending on datastax, the netty transitive dependencies are excluded following the official guidance at https://docs.datastax.com/en/developer/java-driver/3.1/manual/shaded_jar/
- Create the entity class WordCount carrying the database mapping annotations:
package com.bolingcavalry.addsink;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Table;
@Table(keyspace = "example", name = "wordcount")
public class WordCount {
@Column(name = "word")
private String word = "";
@Column(name = "count")
private long count = 0;
public WordCount() {
}
public WordCount(String word, long count) {
this.setWord(word);
this.setCount(count);
}
public String getWord() {
return word;
}
public void setWord(String word) {
this.word = word;
}
public long getCount() {
return count;
}
public void setCount(long count) {
this.count = count;
}
@Override
public String toString() {
return getWord() + " : " + getCount();
}
}
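As the article noted earlier, the POJO sink relies on the DataStax object mapper to interpret these annotations. The following standalone sketch (not part of the Flink job; WordCountMapperDemo and the sample word "demo" are only illustrative, and it assumes the cassandra-driver-core dependency added above plus the cluster at 192.168.133.168) shows roughly what the annotations mean to the mapper:

```java
package com.bolingcavalry.addsink;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.mapping.Mapper;
import com.datastax.driver.mapping.MappingManager;

public class WordCountMapperDemo {
    public static void main(String[] args) {
        // connect to the cassandra cluster used in this article
        try (Cluster cluster = Cluster.builder().addContactPoint("192.168.133.168").build();
             Session session = cluster.connect()) {
            // the mapper reads the @Table/@Column annotations on WordCount
            Mapper<WordCount> mapper = new MappingManager(session).mapper(WordCount.class);
            // roughly equivalent to: INSERT INTO example.wordcount(word, count) VALUES ('demo', 1)
            mapper.save(new WordCount("demo", 1L));
            // read the row back by its primary key
            System.out.println(mapper.get("demo"));
        }
    }
}
```

In the Flink job below there is no need to do any of this by hand: CassandraSink creates and manages the session and the mapper itself.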
- Then create the job class CassandraPojoSink:
package com.bolingcavalry.addsink;
import com.datastax.driver.mapping.Mapper;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import java.util.Properties;
public class CassandraPojoSink {
public static void main(String[] args) throws Exception {
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//set the parallelism
env.setParallelism(1);
//properties used to connect to kafka
Properties properties = new Properties();
//broker address
properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
//zookeeper address
properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
//consumer group id
properties.setProperty("group.id", "flink-connector");
//instantiate the consumer
FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
"test001",
new SimpleStringSchema(),
properties
);
//start consuming from the latest offset, i.e. discard historical messages
flinkKafkaConsumer.setStartFromLatest();
//obtain a DataStream via the addSource method
DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);
DataStream<WordCount> result = dataStream
.flatMap(new FlatMapFunction<String, WordCount>() {
@Override
public void flatMap(String s, Collector<WordCount> collector) throws Exception {
String[] words = s.toLowerCase().split("\\s");
for (String word : words) {
if (!word.isEmpty()) {
//in the cassandra table, word is the primary key, so it must not be empty
collector.collect(new WordCount(word, 1L));
}
}
}
})
.keyBy("word")
.timeWindow(Time.seconds(5))
.reduce(new ReduceFunction<WordCount>() {
@Override
public WordCount reduce(WordCount wordCount, WordCount t1) throws Exception {
return new WordCount(wordCount.getWord(), wordCount.getCount() + t1.getCount());
}
});
result.addSink(new PrintSinkFunction<>())
.name("print Sink")
.disableChaining();
CassandraSink.addSink(result)
.setHost("192.168.133.168")
.setMapperOptions(() -> new Mapper.Option[] { Mapper.Option.saveNullFields(true) })
.build()
.name("cassandra Sink")
.disableChaining();
env.execute("kafka-2.4 source, cassandra-3.11.6 sink, pojo");
}
}
- As the code above shows, this differs considerably from the earlier Tuple write: to prepare a POJO data stream, besides rewriting the anonymous FlatMapFunction, you also have to write the anonymous ReduceFunction for the window aggregation, and setMapperOptions must be called to configure the mapping behavior;
- After building, upload the jar to flink and specify CassandraPojoSink as the entry class:
- Clear the previous data by running TRUNCATE example.wordcount; in cassandra's cqlsh;
- Send string messages to kafka as before:
- Check the database: the results match expectations:
- The DAG and SubTask details are as follows:
With that, the hands-on exercise of writing flink results to cassandra is complete; I hope it gives you a useful reference;
Welcome to follow my WeChat official account: 程式設計師欣宸
Search WeChat for 「程式設計師欣宸」; I'm Xinchen, looking forward to exploring the Java world with you...
https://github.com/zq2599/blog_demos