
Flink's sink: Cassandra 3

2020-11-10 17:30:43 Irving the procedural ape

Welcome to visit my GitHub:

https://github.com/zq2599/blog_demos

Content: a categorized index of all my original articles and their companion source code, covering Java, Docker, Kubernetes, DevOPS, and more;

An overview of this article

This article is the third in the 《Flink sink in action》 series. Its goal is to try out Flink's official cassandra connector: the job reads strings from kafka, performs a wordcount, then prints the results and writes them to cassandra at the same time, as illustrated below:
(figure: overall flow — kafka source → wordcount → print sink + cassandra sink)

Links to the full series

  1. 《Flink sink in action, part 1: a first look》
  2. 《Flink sink in action, part 2: kafka》
  3. 《Flink sink in action, part 3: cassandra3》
  4. 《Flink sink in action, part 4: custom sink》

Software version

The software versions used in this article are as follows:

  1. cassandra:3.11.6
  2. kafka:2.4.0(scala:2.12)
  3. jdk:1.8.0_191
  4. flink:1.9.2
  5. maven:3.6.0
  6. OS running flink: CentOS Linux release 7.7.1908
  7. OS running cassandra: CentOS Linux release 7.7.1908
  8. IDEA:2018.3.5 (Ultimate Edition)

About cassandra

The cassandra used this time is a cluster of three nodes; to build one, please refer to 《Rapid deployment of a cassandra3 cluster with ansible》.

Prepare the cassandra keyspace and table

First, create the keyspace and table:

  1. Log in to cassandra with cqlsh:
cqlsh 192.168.133.168
  2. Create the keyspace (3 replicas):
CREATE KEYSPACE IF NOT EXISTS example WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'};
  3. Create the table (an optional verification is sketched right after this list):
CREATE TABLE IF NOT EXISTS example.wordcount ( word text, count bigint, PRIMARY KEY(word) );
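The check below is optional and not part of the original steps; it simply confirms in cqlsh that the keyspace and table exist (the table will of course still be empty at this point):
DESCRIBE KEYSPACE example;
SELECT * FROM example.wordcount;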

Prepare the kafka topic

  1. Start the kafka service;
  2. Create a topic named test001; a reference command:
./kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic test001
  3. Start a console producer session; a reference command:
./kafka-console-producer.sh --broker-list kafka:9092 --topic test001
  4. In this session, type any string and press Enter; it is sent as a message to the broker (a command for checking the topic is sketched below);
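As an optional sanity check (not among the original steps), the same kafka scripts can be used to confirm that the topic exists and to inspect its partitions:
./kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic test001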

Source code download

If you would rather not write the code yourself, the source for the whole series can be downloaded from GitHub; the addresses are listed in the table below (https://github.com/zq2599/blog_demos), and a clone command follows the table:

Name | Link | Remarks
Project home page | https://github.com/zq2599/blog_demos | the project's home page on GitHub
git repository (https) | https://github.com/zq2599/blog_demos.git | repository address of the project source, https protocol
git repository (ssh) | git@github.com:zq2599/blog_demos.git | repository address of the project source, ssh protocol
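For example, to fetch the source over https (the ssh address works the same way for users with keys configured):
git clone https://github.com/zq2599/blog_demos.git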

There are several folders in this git project; the application for this chapter is in the flinksinkdemo folder;

Two ways to write to cassandra

flink's official connector supports two ways of writing to cassandra:

  1. Tuple write: map the fields of a Tuple object to the parameters of a specified CQL statement;
  2. POJO write: via DataStax, map POJO objects to tables and fields through annotations;

Both approaches are used in what follows; a sketch of the POJO mapping behind the second approach is shown right below.
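As a preview of the second approach, here is a minimal sketch of what such a POJO could look like for the example.wordcount table. The class name and package are illustrative (they are not taken from this article's source code); the annotations are the DataStax object-mapper annotations that the connector's POJO sink builds on:
package com.bolingcavalry.addsink;

import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.Table;

// illustrative POJO mapped to the example.wordcount table via DataStax annotations
@Table(keyspace = "example", name = "wordcount")
public class WordCount {

    @Column(name = "word")
    private String word = "";

    @Column(name = "count")
    private long count = 0L;

    public String getWord() { return word; }

    public void setWord(String word) { this.word = word; }

    public long getCount() { return count; }

    public void setCount(long count) { this.count = count; }
}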

Development (Tuple write)

  1. Continue to use the flinksinkdemo project created in 《Flink sink in action, part 2: kafka》;
  2. In pom.xml, add the dependency of the cassandra connector:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-cassandra_2.11</artifactId>
    <version>1.10.0</version>
</dependency>
  3. Also add the flink-streaming-scala dependency, otherwise code such as CassandraSink.addSink will not compile (the ${scala.binary.version} and ${flink.version} placeholders are Maven properties that should already be defined in the pom):
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>
  4. Add a new class, CassandraTuple2Sink.java; this is the Job class: it reads string messages from kafka, turns them into a Tuple2 data set and writes that to cassandra. The key point is matching the Tuple contents to the parameters of the specified CQL statement:
package com.bolingcavalry.addsink;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

import java.util.Properties;

public class CassandraTuple2Sink {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // set the parallelism
        env.setParallelism(1);

        // properties for connecting to kafka
        Properties properties = new Properties();
        // broker address
        properties.setProperty("bootstrap.servers", "192.168.50.43:9092");
        // zookeeper address
        properties.setProperty("zookeeper.connect", "192.168.50.43:2181");
        // consumer group id
        properties.setProperty("group.id", "flink-connector");

        // instantiate the kafka consumer
        FlinkKafkaConsumer<String> flinkKafkaConsumer = new FlinkKafkaConsumer<>(
                "test001",
                new SimpleStringSchema(),
                properties
        );
        // start consuming from the latest offset, i.e. skip historical messages
        flinkKafkaConsumer.setStartFromLatest();

        // get the DataStream via addSource
        DataStream<String> dataStream = env.addSource(flinkKafkaConsumer);

        DataStream<Tuple2<String, Long>> result = dataStream
                .flatMap(new FlatMapFunction<String, Tuple2<String, Long>>() {
                    @Override
                    public void flatMap(String value, Collector<Tuple2<String, Long>> out) {
                        String[] words = value.toLowerCase().split("\\s");
                        for (String word : words) {
                            // in the cassandra table every word is a primary key, so it must not be empty
                            if (!word.isEmpty()) {
                                out.collect(new Tuple2<String, Long>(word, 1L));
                            }
                        }
                    }
                })
                .keyBy(0)
                .timeWindow(Time.seconds(5))
                .sum(1);

        result.addSink(new PrintSinkFunction<>())
                .name("print Sink")
                .disableChaining();

        CassandraSink.addSink(result)
                .setQuery("INSERT INTO example.wordcount(word, count) values (?, ?);")
                .setHost("192.168.133.168")
                .build()
                .name("cassandra Sink")
                .disableChaining();

        env.execute("kafka-2.4 source, cassandra-3.11.6 sink, tuple2");
    }
}
  5. In the code above, data is fetched from kafka, processed into word counts, and written to cassandra. Note the chain of methods after addSink (it carries the database connection parameters); this is the officially recommended way of using the connector. In addition, to make the DAG easy to read in the Flink web UI, disableChaining is called to break the operator chain; that call can be removed in production;
  6. After coding, build with mvn clean package -U -DskipTests; the file flinksinkdemo-1.0-SNAPSHOT.jar is produced in the target directory (a reference command for submitting it is sketched below);
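Once the jar is built, one way to run the job is to submit it to the flink cluster from the command line; the command below is a reference only (it assumes you run it from flink's home directory and have copied the jar there, so adjust the paths to your environment):
./bin/flink run -c com.bolingcavalry.addsink.CassandraTuple2Sink flinksinkdemo-1.0-SNAPSHOT.jar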

Copyright notice
This article was created by [Irving the procedural ape]. Please include a link to the original when reposting. Thank you.