The log data we collected earlier has already been written to Kafka, which serves as the ODS layer for log data. The logs read from the Kafka ODS layer fall into three types: page logs, start-up logs, and exposure (display) logs. Although all three are user-behavior data, their structures are completely different, so they have to be split and handled separately. The split logs are written back to different Kafka topics, which together form the log DWD layer.
Page logs go to the main stream, start-up logs to the start-up side-output stream, and exposure logs to the exposure side-output stream.
2. Identifying new vs. returning visitors
The client already tags users as new or returning, but the tag is not reliable, so we confirm it again with real-time computation (no business operation is involved; it is purely a state check).
Splitting the data with side-output streams
Based on the log content, the logs are divided into three types: page logs, start-up logs, and exposure logs, and each stream is pushed to a different downstream Kafka topic.
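For reference, the code in the following sections assumes that each ODS record is a JSON object shaped roughly like the sketch below. This shape is only inferred from the fields the code reads; the values are placeholders, and a real record carries either a start object (start-up log) or a page object with an optional displays array (page/exposure log), not all of them at once.
{
  "common": { "mid": "<device id>", "is_new": "1" },
  "start": { ... },
  "page": { "page_id": "<page id>" },
  "displays": [ { ... } ],
  "ts": <epoch millis>
}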
3. Code implementation
Flink consumes the data from Kafka and persists its checkpoints to HDFS. Remember to create the checkpoint directory on HDFS manually and grant the job user write permission on it.
Checkpointing is optional; it can be turned off during testing.
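One simple way to keep checkpointing optional is to guard the checkpoint setup with a flag inside main(); the enableCk flag below is a hypothetical addition for illustration and is not part of the original job, which always enables checkpointing.
//Hypothetical toggle: only configure checkpointing when running against the cluster
boolean enableCk = false; //set to true for production runs
if (enableCk) {
    //Checkpoint every 5 seconds with exactly-once semantics, time out after 60 seconds
    env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);
    env.getCheckpointConfig().setCheckpointTimeout(60000);
    //Checkpoints go to HDFS; the directory must already exist and be writable by the user below
    env.setStateBackend(new FsStateBackend("hdfs://hadoop101:9000/gmall/flink/checkpoint/baseLogAll"));
    System.setProperty("HADOOP_USER_NAME","zhangbao");
}
This snippet would replace the fixed checkpoint setup in the main() method shown below.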
package com.zhangbao.gmall.realtime.app;
import com.alibaba.fastjson.JSONObject;
import com.zhangbao.gmall.realtime.utils.MyKafkaUtil;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
/**
* @author: zhangbao
* @date: 2021/6/18 23:29
* @desc:
**/
public class BaseLogTask {
public static void main(String[] args) {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//Set the parallelism to match the number of Kafka partitions
env.setParallelism(4);
//Enable checkpointing, triggered every 5 seconds
env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setCheckpointTimeout(60000);
env.setStateBackend(new FsStateBackend("hdfs://hadoop101:9000/gmall/flink/checkpoint/baseLogAll"));
//Specify which user accesses the HDFS files
System.setProperty("HADOOP_USER_NAME","zhangbao");
//Add the data source
String topic = "ods_base_log";
String group = "base_log_app_group";
FlinkKafkaConsumer<String> kafkaSource = MyKafkaUtil.getKafkaSource(topic, group);
DataStreamSource<String> kafkaDs = env.addSource(kafkaSource);
//Convert each record from String to JSONObject
SingleOutputStreamOperator<JSONObject> jsonDs = kafkaDs.map(new MapFunction<String, JSONObject>() {
@Override
public JSONObject map(String s) throws Exception {
return JSONObject.parseObject(s);
}
});
jsonDs.print("json >>> --- ");
try {
//Execute the job
env.execute();
} catch (Exception e) {
e.printStackTrace();
}
}
}
MyKafkaUtil.java utility class
package com.zhangbao.gmall.realtime.utils;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import java.util.Properties;
/**
* @author: zhangbao
* @date: 2021/6/18 23:41
* @desc:
**/
public class MyKafkaUtil {
private static String kafka_host = "hadoop101:9092,hadoop102:9092,hadoop103:9092";
/**
* Kafka consumer
*/
public static FlinkKafkaConsumer<String> getKafkaSource(String topic,String group){
Properties props = new Properties();
props.setProperty(ConsumerConfig.GROUP_ID_CONFIG,group);
props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,kafka_host);
return new FlinkKafkaConsumer<String>(topic, new SimpleStringSchema(),props);
}
}
4. Fixing the new/returning visitor flag
Rule for identifying new vs. returning visitors
The front end records whether a visitor is new or returning, but the flag may be inaccurate, so we confirm it again here. For each mid, the first visit date is kept as state; when later logs arrive from the same device, the date in state is compared with the date the log was produced. If the state is non-empty and differs from the current log's date, this is a returning visitor, and if its is_new flag is still 1, the flag is fixed.
package com.zhangbao.gmall.realtime.app;
import com.alibaba.fastjson.JSONObject;
import com.zhangbao.gmall.realtime.utils.MyKafkaUtil;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import java.text.SimpleDateFormat;
import java.util.Date;
/**
* @author: zhangbao
* @date: 2021/6/18 23:29
* @desc:
**/
public class BaseLogTask {
public static void main(String[] args) {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//Set the parallelism to match the number of Kafka partitions
env.setParallelism(4);
//Enable checkpointing, triggered every 5 seconds
env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setCheckpointTimeout(60000);
env.setStateBackend(new FsStateBackend("hdfs://hadoop101:9000/gmall/flink/checkpoint/baseLogAll"));
//Specify which user accesses the HDFS files
System.setProperty("HADOOP_USER_NAME","zhangbao");
//Add the data source: records coming from Kafka
String topic = "ods_base_log";
String group = "base_log_app_group";
FlinkKafkaConsumer<String> kafkaSource = MyKafkaUtil.getKafkaSource(topic, group);
DataStreamSource<String> kafkaDs = env.addSource(kafkaSource);
//Convert each record from String to JSONObject
SingleOutputStreamOperator<JSONObject> jsonDs = kafkaDs.map(new MapFunction<String, JSONObject>() {
@Override
public JSONObject map(String s) throws Exception {
return JSONObject.parseObject(s);
}
});
jsonDs.print("json >>> --- ");
/**
* Identify new vs. returning visitors. The front end records the flag, but it may be inaccurate, so we confirm it here.
* The first visit date of each mid is kept as state; when later logs arrive from the same device, the date in state is
* compared with the date the log was produced. If the state is non-empty and differs from the current date, this is a
* returning visitor, and if is_new is still 1 the flag is fixed.
*/
//Key the logs by device id (mid)
KeyedStream<JSONObject, String> midKeyedDs = jsonDs.keyBy(data -> data.getJSONObject("common").getString("mid"));
//Fix the new/returning visitor flag. Flink state is either operator state or keyed state; since we track per-device state here, keyed state is the better fit
SingleOutputStreamOperator<JSONObject> midWithNewFlagDs = midKeyedDs.map(new RichMapFunction<JSONObject, JSONObject>() {
//First-visit-date state for each mid
private ValueState<String> firstVisitDateState;
//Date formatter
private SimpleDateFormat sdf;
//Initialization
@Override
public void open(Configuration parameters) throws Exception {
firstVisitDateState = getRuntimeContext().getState(new ValueStateDescriptor<String>("newMidDateState", String.class));
sdf = new SimpleDateFormat("yyyyMMdd");
}
@Override
public JSONObject map(JSONObject jsonObject) throws Exception {
//Get the is_new flag from the common section
String is_new = jsonObject.getJSONObject("common").getString("is_new");
//Get the timestamp of the current log record
Long ts = jsonObject.getLong("ts");
if ("1".equals(is_new)) {
//First-visit date kept in state
String stateDate = firstVisitDateState.value();
//Date this log was produced, derived from its ts field
String nowDate = sdf.format(new Date(ts));
if (stateDate != null && stateDate.length() != 0 && !stateDate.equals(nowDate)) {
//Returning visitor: fix the flag
is_new = "0";
jsonObject.getJSONObject("common").put("is_new", is_new);
} else {
//New visitor: record the first-visit date
firstVisitDateState.update(nowDate);
}
}
return jsonObject;
}
});
midWithNewFlagDs.print();
try {
//Execute the job
env.execute();
} catch (Exception e) {
e.printStackTrace();
}
}
}
5. Splitting the data with side-output streams
After the new/returning visitor fix above, the logs are again split into three types.
Start-up log tag definition: OutputTag<String> startTag = new OutputTag<String>("start"){};
Exposure log tag definition: OutputTag<String> displayTag = new OutputTag<String>("display"){};
Page logs go to the main stream, start-up logs to the start-up side-output stream, and exposure logs to the exposure side-output stream.
After splitting, the data is sent to the following Kafka topics:
- dwd_start_log: start-up logs
- dwd_display_log: exposure logs
- dwd_page_log: page logs
package com.zhangbao.gmall.realtime.app;
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;
import com.zhangbao.gmall.realtime.utils.MyKafkaUtil;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;
import java.text.SimpleDateFormat;
import java.util.Date;
/**
* @author: zhangbao
* @date: 2021/6/18 23:29
* @desc:
**/
public class BaseLogTask {
private static final String TOPIC_START = "dwd_start_log";
private static final String TOPIC_DISPLAY = "dwd_display_log";
private static final String TOPIC_PAGE = "dwd_page_log";
public static void main(String[] args) {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
//Set the parallelism to match the number of Kafka partitions
env.setParallelism(4);
//Enable checkpointing, triggered every 5 seconds
env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setCheckpointTimeout(60000);
env.setStateBackend(new FsStateBackend("hdfs://hadoop101:9000/gmall/flink/checkpoint/baseLogAll"));
//Specify which user accesses the HDFS files
System.setProperty("HADOOP_USER_NAME","zhangbao");
//Add the data source: records coming from Kafka
String topic = "ods_base_log";
String group = "base_log_app_group";
FlinkKafkaConsumer<String> kafkaSource = MyKafkaUtil.getKafkaSource(topic, group);
DataStreamSource<String> kafkaDs = env.addSource(kafkaSource);
//Convert each record from String to JSONObject
SingleOutputStreamOperator<JSONObject> jsonDs = kafkaDs.map(new MapFunction<String, JSONObject>() {
@Override
public JSONObject map(String s) throws Exception {
return JSONObject.parseObject(s);
}
});
jsonDs.print("json >>> --- ");
/**
* Identify new vs. returning visitors. The front end records the flag, but it may be inaccurate, so we confirm it here.
* The first visit date of each mid is kept as state; when later logs arrive from the same device, the date in state is
* compared with the date the log was produced. If the state is non-empty and differs from the current date, this is a
* returning visitor, and if is_new is still 1 the flag is fixed.
*/
//Key the logs by device id (mid)
KeyedStream<JSONObject, String> midKeyedDs = jsonDs.keyBy(data -> data.getJSONObject("common").getString("mid"));
//Fix the new/returning visitor flag. Flink state is either operator state or keyed state; since we track per-device state here, keyed state is the better fit
SingleOutputStreamOperator<JSONObject> midWithNewFlagDs = midKeyedDs.map(new RichMapFunction<JSONObject, JSONObject>() {
//First-visit-date state for each mid
private ValueState<String> firstVisitDateState;
//Date formatter
private SimpleDateFormat sdf;
//Initialization
@Override
public void open(Configuration parameters) throws Exception {
firstVisitDateState = getRuntimeContext().getState(new ValueStateDescriptor<String>("newMidDateState", String.class));
sdf = new SimpleDateFormat("yyyyMMdd");
}
@Override
public JSONObject map(JSONObject jsonObject) throws Exception {
//Get the is_new flag from the common section
String is_new = jsonObject.getJSONObject("common").getString("is_new");
//Get the timestamp of the current log record
Long ts = jsonObject.getLong("ts");
if ("1".equals(is_new)) {
//First-visit date kept in state
String stateDate = firstVisitDateState.value();
//Date this log was produced, derived from its ts field
String nowDate = sdf.format(new Date(ts));
if (stateDate != null && stateDate.length() != 0 && !stateDate.equals(nowDate)) {
//Returning visitor: fix the flag
is_new = "0";
jsonObject.getJSONObject("common").put("is_new", is_new);
} else {
//New visitor: record the first-visit date
firstVisitDateState.update(nowDate);
}
}
return jsonObject;
}
});
// midWithNewFlagDs.print();
/**
* Split the logs into three types based on their content: page logs, start-up logs, and exposure logs.
* Page logs go to the main stream, start-up logs to the start-up side-output stream, and exposure logs to the exposure side-output stream.
* Side-output streams are used to 1) collect late data and 2) split a stream.
*/
//Define the start-up side-output tag; the trailing braces create an anonymous subclass so the type information is retained
OutputTag<String> startTag = new OutputTag<String>("start"){};
//Define the exposure side-output tag
OutputTag<String> displayTag = new OutputTag<String>("display"){};
SingleOutputStreamOperator<String> pageDs = midWithNewFlagDs.process(
new ProcessFunction<JSONObject, String>() {
@Override
public void processElement(JSONObject jsonObject, Context context, Collector<String> collector) throws Exception {
String dataStr = jsonObject.toString();
JSONObject startJson = jsonObject.getJSONObject("start");
//Is this a start-up log?
if (startJson != null && startJson.size() > 0) {
context.output(startTag, dataStr);
} else {
//Is this an exposure log?
JSONArray jsonArray = jsonObject.getJSONArray("displays");
if (jsonArray != null && jsonArray.size() > 0) {
//Attach the page_id to every exposure event
String pageId = jsonObject.getJSONObject("page").getString("page_id");
//Emit each exposure record to the exposure side output
for (int i = 0; i < jsonArray.size(); i++) {
JSONObject disPlayObj = jsonArray.getJSONObject(i);
disPlayObj.put("page_id", pageId);
context.output(displayTag, disPlayObj.toString());
}
} else {
//Neither start-up nor exposure, so it is a page log: emit it to the main stream
collector.collect(dataStr);
}
}
}
}
);
//Get the side-output streams
DataStream<String> startDs = pageDs.getSideOutput(startTag);
DataStream<String> disPlayDs = pageDs.getSideOutput(displayTag);
//Print for debugging
startDs.print("start>>>");
disPlayDs.print("display>>>");
pageDs.print("page>>>");
/**
* Send each stream's log data to its dedicated Kafka topic
*/
startDs.addSink(MyKafkaUtil.getKafkaSink(TOPIC_START));
disPlayDs.addSink(MyKafkaUtil.getKafkaSink(TOPIC_DISPLAY));
pageDs.addSink(MyKafkaUtil.getKafkaSink(TOPIC_PAGE));
try {
//Execute the job
env.execute();
} catch (Exception e) {
e.printStackTrace();
}
}
}
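The job above sinks each stream with MyKafkaUtil.getKafkaSink, which is not part of the utility class shown earlier. A minimal sketch of such a method, assuming a plain string-serialized FlinkKafkaProducer (the original implementation may differ):
//Add to MyKafkaUtil; requires the org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer import.
//Sketch only: a Kafka producer that writes plain strings to the given topic using the shared broker list.
public static FlinkKafkaProducer<String> getKafkaSink(String topic) {
    return new FlinkKafkaProducer<String>(kafka_host, topic, new SimpleStringSchema());
}
This convenience constructor gives at-least-once delivery; exactly-once would require the constructor that takes a KafkaSerializationSchema together with FlinkKafkaProducer.Semantic.EXACTLY_ONCE.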