Consuming Kafka data with Spark 2.3

2019-12-13 14:02:45


Official documentation

http://spark.apache.org/docs/2.3.0/streaming-kafka-0-10-integration.html#creating-a-direct-stream

 

Example pom.xml dependencies

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>2.3.0</version>
      <!--      <scope>provided</scope>   -->
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
      <version>2.3.0</version>
    </dependency>
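
The example below also creates a SparkSession, which lives in the spark-sql module rather than in spark-streaming, so the pom almost certainly needs that artifact as well:

    <!-- Needed for the SparkSession used in the example code -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.3.0</version>
    </dependency>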

Example code

package SpartStreamingaiqiyi
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe


object test {
  def main(args: Array[String]): Unit = {
    // Local SparkSession; Kryo serialization reduces shuffle/cache overhead.
    val spark = SparkSession.builder()
      .appName("aiqiyi")
      .master("local[*]")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    val sc = spark.sparkContext
    // 5-second batch interval; the checkpoint directory stores recovery metadata.
    val checkpointDir = "F:\\IdeaWorkspace\\aiqiyi\\ck"
    val ssc: StreamingContext = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint(checkpointDir)
    val topics = Array("aiqiyi")

    // Kafka consumer settings: read from the latest offset and disable
    // auto-commit, so the application controls when offsets are committed.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "dip005:9092,dip006:9092,dip007:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "use_a_separate_group_id_for_each_stream",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // Direct (receiver-less) stream; PreferConsistent distributes Kafka
    // partitions evenly across the available executors.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    // Each element is a ConsumerRecord; keep only the message value and
    // print a sample of every batch.
    val resultDStream = stream.map(x => x.value())
    resultDStream.print()
    ssc.start()
    ssc.awaitTermination()
  }

}
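
Because enable.auto.commit is false, nothing in the example ever commits the consumed offsets back to Kafka. The 0-10 integration exposes each batch's offset ranges through HasOffsetRanges and an asynchronous commit through CanCommitOffsets (both already covered by the kafka010 wildcard import above). A minimal sketch, replacing the map/print above with hypothetical per-batch processing:

    stream.foreachRDD { rdd =>
      // Offset ranges are only available on the RDDs produced directly by the stream
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd.map(_.value()).foreach(println)  // stand-in for real processing
      // commitAsync must be called on the stream returned by createDirectStream
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }

commitAsync gives at-least-once delivery: if the job fails after processing a batch but before the commit, that batch is reprocessed on restart. For exactly-once results, store the offsets transactionally alongside the output instead.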

Source: https://www.cnblogs.com/students/p/12034744.html
