Big Data Development - Flink - DataStream and DataSet

Posted by Hoult丶吳邪 on 2021-05-12

Flink is mainly used to process data streams, so at an abstract level a Flink program is about processing a stream of data. As mentioned in the earlier post "Big Data Development - Flink - Architecture && Execution Architecture", writing a Flink program essentially means writing a DataSource, Transformations, and a Sink.

  • DataSource is the data input of the program; a data source can be added to the program via StreamExecutionEnvironment.addSource(sourceFunction)

  • Transformation is the concrete processing step; it computes over one or more input data streams, with operations such as Map, FlatMap, and Filter

  • Sink is the output of the program; it writes the data produced by the Transformations to a specified storage medium (a minimal skeleton follows this list)
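As a rough sketch of how these three parts fit together (not from the original article; the socket host and port are hypothetical):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SimpleJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // DataSource: read lines from a socket (hypothetical host and port).
        DataStream<String> source = env.socketTextStream("localhost", 9999);

        // Transformation: drop empty lines and upper-case the rest.
        DataStream<String> transformed = source
                .filter(line -> !line.isEmpty())
                .map(String::toUpperCase);

        // Sink: print to standard output.
        transformed.print();

        env.execute("source-transformation-sink skeleton");
    }
}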

The Three DataStream Stream-Processing APIs

DataSource

For DataStream, Flink provides built-in data sources that can be grouped into the following four categories:

  • File-based

    readTextFile(path) reads a text file line by line following the TextInputFormat rules and returns each line as a stream element

  • Socket-based

    socketTextStream reads data from a socket; elements can be split by a delimiter

  • Collection-based

    fromCollection(Collection) creates a data stream from a Java Collection. All elements in the collection must be of the same type. Note that if the elements are to be recognized as POJOs, the following conditions must be met:

    • The class has a public no-argument constructor

    • The class is public and standalone (it is not a non-static inner class)

    • Every field of the class (and its superclasses) that is not modified by static or transient is either public (and not final), or has public getter and setter methods that follow the JavaBean naming convention

    Summary: these requirements exist so that Flink can conveniently serialize and deserialize such objects in the data stream

  • Custom Source

    Use StreamExecutionEnvironment.addSource(sourceFunction) to add a streaming data source to the program. Concretely, the sourceFunction implements the SourceFunction interface for a non-parallel source, or implements the ParallelSourceFunction interface or extends RichParallelSourceFunction for a parallel source (a minimal custom-source sketch follows the table below). For custom Sources and Sinks, Flink ships the following built-in connectors:

Connector               Source support    Sink support
Apache Kafka            Yes               Yes
ElasticSearch           No                Yes
HDFS                    No                Yes
Twitter Streaming API   Yes               No
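Before the Kafka example below, here is a minimal sketch (not from the original article) of a non-parallel custom source that implements SourceFunction; the class name and emission logic are hypothetical:

import org.apache.flink.streaming.api.functions.source.SourceFunction;

// A minimal non-parallel custom source: emits an increasing counter once per second
// until cancel() is called. It would be registered via env.addSource(new CounterSource()).
public class CounterSource implements SourceFunction<Long> {
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long counter = 0L;
        while (running) {
            ctx.collect(counter++);
            Thread.sleep(1000);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}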

Using a Source is actually quite simple. Here is an example of one of the more commonly used sources, a Kafka source. More of the related source code can be found at:

package com.hoult.stream;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

import java.util.Properties;

public class SourceFromKafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        String topic = "animalN";
        Properties props = new Properties();
        props.put("bootstrap.servers", "linux121:9092");

        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(topic, new SimpleStringSchema(), props);

        DataStreamSource<String> data = env.addSource(consumer);

        // Parse each "key,value" line into a Tuple2<Long, Long>.
        SingleOutputStreamOperator<Tuple2<Long, Long>> maped = data.map(new MapFunction<String, Tuple2<Long, Long>>() {
            @Override
            public Tuple2<Long, Long> map(String value) throws Exception {
                System.out.println(value);

                Tuple2<Long, Long> t = new Tuple2<>(0L, 0L);
                String[] split = value.split(",");

                try {
                    t = new Tuple2<>(Long.valueOf(split[0]), Long.valueOf(split[1]));
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return t;
            }
        });

        KeyedStream<Tuple2<Long, Long>, Long> keyed = maped.keyBy(value -> value.f0);

        // Group by key, then apply stateful processing to the keyed stream.
        SingleOutputStreamOperator<Tuple2<Long, Long>> flatMaped = keyed.flatMap(new RichFlatMapFunction<Tuple2<Long, Long>, Tuple2<Long, Long>>() {
            ValueState<Tuple2<Long, Long>> sumState;

            @Override
            public void open(Configuration parameters) throws Exception {
                // Create the state descriptor in open().
                ValueStateDescriptor<Tuple2<Long, Long>> descriptor = new ValueStateDescriptor<>(
                        "average",
                        TypeInformation.of(new TypeHint<Tuple2<Long, Long>>() {
                        }),
                        Tuple2.of(0L, 0L)
                );

                sumState = getRuntimeContext().getState(descriptor);
            }

            @Override
            public void flatMap(Tuple2<Long, Long> value, Collector<Tuple2<Long, Long>> out) throws Exception {
                // Update the state in flatMap(): count in f0, running sum in f1.
                Tuple2<Long, Long> currentSum = sumState.value();

                currentSum.f0 += 1;
                currentSum.f1 += value.f1;

                sumState.update(currentSum);
                out.collect(currentSum);

                /*if (currentSum.f0 == 2) {
                    long average = currentSum.f1 / currentSum.f0;
                    out.collect(new Tuple2<>(value.f0, average));
                    sumState.clear();
                }*/
            }
        });

        flatMaped.print();

        env.execute();
    }
}

Transformation

For Transformation, Flink provides a large number of operators, for example:

  • map

    DataStream → DataStream Takes one element and produces one element. A map function that doubles the values of the input stream:

DataStream<Integer> dataStream = //...
dataStream.map(new MapFunction<Integer, Integer>() {
    @Override
    public Integer map(Integer value) throws Exception {
      return 2 * value;
    }
});
  • flatMap

    DataStream → DataStream Takes one element and produces zero, one, or more elements. A flatmap function that splits sentences to words:

dataStream.flatMap(new FlatMapFunction<String, String>() {
  @Override
  public void flatMap(String value, Collector<String> out) throws Exception {
    for(String word: value.split(" ")){
      out.collect(word);
    }
  }
});
  • filter

    DataStream → DataStream Evaluates a boolean function for each element and retains those for which the function returns true. A filter that filters out zero values:

dataStream.filter(new FilterFunction<Integer>() {
  @Override
  public boolean filter(Integer value) throws Exception {
    return value != 0;
  }
});
  • keyBy

    DataStream → KeyedStream Logically partitions a stream into disjoint partitions. All records with the same key are assigned to the same partition. Internally, keyBy() is implemented with hash partitioning. There are different ways to specify keys.
    This transformation returns a KeyedStream, which is, among other things, required to use keyed state.

    Attention: a type cannot be a key if it is a POJO type that does not override the hashCode() method (and relies on Object.hashCode()), or if it is an array of any type.

dataStream.keyBy(value -> value.getSomeKey()) // Key by field "someKey"
dataStream.keyBy(value -> value.f0) // Key by the first element of a Tuple

  • fold

  • aggregation

  • window/windowAll/window.apply/window.reduce/window.fold/window.aggregation (a short sketch of aggregation and windowing on a keyed stream follows below)
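The aggregation and window operators listed above work on a KeyedStream. A rough sketch (not from the original article), assuming a hypothetical DataStream<Tuple2<String, Integer>> named tupleStream and imports of TumblingProcessingTimeWindows and Time from the windowing API:

KeyedStream<Tuple2<String, Integer>, String> keyed = tupleStream.keyBy(value -> value.f0);

// Rolling aggregation: emits the running sum of field f1 for each key.
keyed.sum(1);

// Tumbling 5-second processing-time window, reduced per key and window.
keyed.window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
     .reduce((a, b) -> new Tuple2<>(a.f0, a.f1 + b.f1));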

For more operator operations, see the official documentation, which is very well written: https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/operators/overview/

Sink

For DataStream, Flink provides a large number of ready-made data destinations (Sinks), including the following:

  • writeAsText(): writes the elements line by line as strings, obtained by calling each element's toString() method

  • print()/printToErr(): prints the toString() value of each element to the standard output or standard error stream

  • Custom output: addSink can write the data to a third-party storage medium; Flink provides a set of built-in Connectors, some of which offer corresponding Sink support

Here is a common example: sinking the data to Kafka.

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class StreamToKafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Read text lines from a socket source.
        DataStreamSource<String> data = env.socketTextStream("teacher2", 7777);

        String brokerList = "teacher2:9092";
        String topic = "mytopic2";
        // Write each line to the Kafka topic as a plain string.
        FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(brokerList, topic, new SimpleStringSchema());
        data.addSink(producer);
        env.execute();
    }
}

Common DataSet APIs

DataSource

For DataSet batch processing, the most frequent operation is reading file data from HDFS, so here we mainly introduce two DataSource components (a short sketch follows the list below):

  • Collection-based: used for testing, similar to DataStream

  • File-based: readTextFile....
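A rough sketch (not from the original article) of the two DataSet sources; the class name and the HDFS path are hypothetical:

import java.util.Arrays;

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class DataSetSourceDemo {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Collection-based source: convenient for local tests.
        DataSet<String> fromCollection = env.fromCollection(Arrays.asList("a", "b", "c"));

        // File-based source: reads the file line by line (hypothetical HDFS path).
        DataSet<String> fromFile = env.readTextFile("hdfs://linux121:9000/input/wc.txt");

        // print() triggers execution in the batch API and prints the elements.
        fromCollection.print();
    }
}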

Transformation
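The DataSet operators are similar in spirit to their DataStream counterparts. A rough word-count-style sketch (not from the original article), assuming a DataSet<String> named text such as the file-based source above, with Tuple2, Types, and Collector imported:

DataSet<Tuple2<String, Integer>> counts = text
        .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
            for (String word : line.split(" ")) {
                out.collect(new Tuple2<>(word, 1));
            }
        })
        .returns(Types.TUPLE(Types.STRING, Types.INT)) // lambdas need an explicit result type
        .groupBy(0)   // group by the word (field 0)
        .sum(1);      // sum the counts (field 1)

counts.print();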


For more operators, see the official documentation: https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/dataset/overview/

Sink

For DataSet, Flink likewise provides a large number of ready-made data destinations (Sinks), including the following (a short sketch follows the list):

  • writeAsText(): writes the elements line by line as strings, obtained by calling each element's toString() method

  • writeAsCsv(): writes tuples to a file as comma-separated values; the row and field delimiters are configurable, and the value of each field comes from the object's toString() method

  • print()/printToErr(): prints the toString() value of each element to the standard output or standard error stream

Flink provides a set of built-in Connectors, some of which offer corresponding Sink support, as shown in the connector table earlier.
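A rough sketch of the file sinks (hypothetical output paths), continuing from the counts DataSet in the Transformation sketch above:

// Write each Tuple2 as one line of text (via toString()) to a hypothetical HDFS path.
counts.writeAsText("hdfs://linux121:9000/output/wc-text");

// Write as CSV; the row delimiter ("\n") and field delimiter (",") are configurable.
counts.writeAsCsv("hdfs://linux121:9000/output/wc-csv", "\n", ",");

// In the DataSet API, file sinks are lazy; execute() triggers the job.
env.execute("dataset sink sketch");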
吳邪 (小三爺), a rookie wandering around the backend, big data, and AI fields.
