
[Big Data Spain] Apache Spark Streaming + Kafka 0.10: an Integration Story

Transcript:
  1. 1. Apache Spark Streaming + Kafka 0.10: An Integration Story. Joan Viladrosa, Billy Mobile
  2. 2. About me: Joan Viladrosa Riera (@joanvr, joanviladrosa, [email protected]). Degree in Computer Science, Advanced Programming Techniques & System Interfaces and Integration. Co-founder of Educabits: educational big data solutions using the AWS cloud. Big Data Developer at Trovit: Hadoop and MapReduce framework, SEM keyword optimization. Big Data Architect & Tech Lead at Billy Mobile: full architecture with Hadoop: Kafka, Storm, Hive, HBase, Spark, Druid.
  3. 3. Apache Kafka
  4. 4. What is Apache Kafka? - Publish-subscribe message system
  5. 5. What is Apache Kafka? What makes it great? - Publish-subscribe message system - Fast - Scalable - Durable - Fault-tolerant
  6. 6. What is Apache Kafka? Kafka as a central point between many producers and many consumers. [diagram]
  7. 7. What is Apache Kafka? A lot of different connectors: Apache Storm, Apache Spark, custom Java apps, loggers, monitoring tools. [diagram]
  8. 8. Kafka Terminology. Topic: a feed of messages. Producer: a process that publishes messages to a topic. Consumer: a process that subscribes to topics and processes the feed of published messages. Broker: each server of a Kafka cluster, which holds, receives and sends the actual data.
  9. 9. Kafka Topic Partitions. [diagram: a topic split into partitions, each an ordered, append-only log; new messages are written at the end]
  10. 10. Kafka Topic Partitions. [diagram: the producer writes to the end of a partition while Consumer A (offset=6) and Consumer B (offset=12) read independently at their own offsets]
  11. 11. Kafka Topic Partitions. [diagram: partitions P0-P8 spread across brokers 1-3, with producers and consumers on both sides]
  12. 12. Kafka Topic Partitions. [diagram: the same layout; more partitions across more brokers means more storage and more parallelism]
  13. 13. Kafka Semantics. In short: consumer delivery semantics are up to you, not Kafka. - Kafka doesn't store the state of the consumers* - It just sends you what you ask for (topic, partition, offset, length) - You have to take care of your own state
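     To make the "your state, not Kafka's" point concrete, here is a minimal sketch (not from the deck) using the plain Kafka 0.10 consumer: assign a partition, seek to an offset you stored somewhere yourself, and poll. loadOffsetFromMyStore, saveOffsetToMyStore and process are hypothetical helpers.
        import java.util.{Collections, Properties}
        import org.apache.kafka.clients.consumer.KafkaConsumer
        import org.apache.kafka.common.TopicPartition
        import scala.collection.JavaConverters._

        val props = new Properties()
        props.put("bootstrap.servers", "broker01:9092")
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        props.put("enable.auto.commit", "false")             // we decide when and where offsets are stored

        val consumer = new KafkaConsumer[String, String](props)
        val partition = new TopicPartition("topicA", 0)      // topic and partition are placeholders
        consumer.assign(Collections.singletonList(partition))
        consumer.seek(partition, loadOffsetFromMyStore())     // your state, not Kafka's

        val records = consumer.poll(1000)                     // Kafka just returns what we asked for
        records.asScala.foreach { r =>
          process(r)
          saveOffsetToMyStore(r.offset + 1)                   // hypothetical: persist progress yourself
        }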
  14. 14. Apache Kafka Timeline: 0.7 (nov-2012, Apache Incubator project), 0.8 (nov-2013, new producer), 0.9 (nov-2015, new consumer, security), 0.10 (may-2016, Kafka Streams).
  15. 15. Apache Spark Streaming
  16. 16. What is Apache Spark Streaming? - Process streams of data - Micro-batching approach
  17. 17. What is Apache Spark Streaming? What makes it great? - Process streams of data - Micro-batching approach - Same API as Spark - Same integrations as Spark - Same guarantees & semantics as Spark
  18. 18. What is Apache Spark Streaming Relying on the same Spark Engine: same syntax as batch jobs https://spark.apache.org/docs/latest/streaming-programming-guide.html
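     As a rough illustration of "same syntax as batch jobs", here is a minimal word-count stream in the style of the official guide; the socket source, local[2] master and 10-second batch are just for illustration.
        import org.apache.spark.SparkConf
        import org.apache.spark.streaming.{Seconds, StreamingContext}

        val conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount")
        val ssc = new StreamingContext(conf, Seconds(10))      // micro-batches of 10 seconds

        val lines = ssc.socketTextStream("localhost", 9999)    // any input DStream works the same way
        val counts = lines.flatMap(_.split(" "))               // the same transformations as a batch RDD job
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()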
  19. 19. How does it work? - Discretized Streams https://spark.apache.org/docs/latest/streaming-programming-guide.html
  20. 20. How does it work? - Discretized Streams https://spark.apache.org/docs/latest/streaming-programming-guide.html
  21. 21. How does it work? https://databricks.com/blog/2015/07/30/diving-into-apache-spark-streamings-execution-model.html
  22. 22. How does it work? https://databricks.com/blog/2015/07/30/diving-into-apache-spark-streamings-execution-model.html
  23. 23. Spark Streaming Semantics: Side effects. As in Spark: - No guarantee of exactly-once semantics for output actions - Any side-effecting output operation may be repeated - Because of node failures, process failures, etc. So, be careful when writing to external systems.
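     A sketch of how output is usually written to guard against repeated side effects: do the write inside foreachRDD/foreachPartition and make it idempotent, e.g. an upsert keyed on something stable. MySink and keyOf are hypothetical placeholders, not a real API.
        dstream.foreachRDD { rdd =>
          rdd.foreachPartition { records =>
            // Runs on the executor and may be re-executed after a failure,
            // so the write itself should be idempotent (e.g. an upsert by a stable key).
            val sink = MySink.connect("sink-host:1234")        // hypothetical client, opened per partition
            records.foreach(r => sink.upsert(keyOf(r), r))     // keyOf() is a hypothetical stable-key function
            sink.close()
          }
        }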
  24. 24. Spark Streaming Kafka Integration
  25. 25. Spark Streaming Kafka Integration Timeline: 1.1 (sep-2014), 1.2 (dec-2014, fault-tolerant WAL + Python API), 1.3 (mar-2015, direct streams + Python API), 1.4 (jun-2015, improved streaming UI), 1.5 (sep-2015) and 1.6 (jan-2016, metadata in UI (offsets) + graduated direct receivers), 2.0 (jul-2016, native Kafka 0.10, experimental), 2.1 (dec-2016).
  26. 26. Kafka Receiver (Spark 1.1). [diagram: a receiver on the executor continuously receives data using the high-level API and updates offsets in ZooKeeper; the driver launches jobs on the received data]
  27. 27. Kafka Receiver with WAL (Spark 1.2). [diagram: same as before, but the executor also writes received data to a write-ahead log on HDFS]
  28. 28. Kafka Receiver with WAL (Spark 1.2). [diagram: the receiver writes block data to both memory and the log, block metadata is written to the log, and the streaming computation is checkpointed by the driver]
  29. 29. Kafka Receiver with WAL (Spark 1.2). [diagram: on failure, the restarted driver relaunches jobs and restarts the computation from checkpoint info, unacked data is resent to the restarted receiver, and block metadata and block data are recovered from the log]
  30. 30. Kafka Receiver with WAL (Spark 1.2). [diagram: the full picture again: receiver, high-level API, offsets in ZooKeeper, write-ahead log on HDFS]
  31. 31. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). [diagram: just a driver and executors, no receiver]
  32. 32. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). 1. The driver queries the latest offsets and decides offset ranges for the batch.
  33. 33. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). 2. The driver launches jobs using those offset ranges, e.g. topic1, p1, (2000, 2100); topic1, p2, (2010, 2110); topic1, p3, (2002, 2102).
  34. 34. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). 3. Executors read the data for their offset ranges using the Simple API.
  35. 35. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). [diagram: each executor reads its own offset range of topic1, p1/p2/p3, in parallel]
  36. 36. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). [diagram: same picture, reads in progress]
  37. 37. Direct Kafka Integration w/o Receiver or WAL (Spark 1.3). Summary: 1. query the latest offsets and decide offset ranges for the batch; 2. launch jobs using those offset ranges; 3. read the data for those offset ranges in the jobs using the Simple API.
  38. 38. Direct Kafka API benefits: - No WALs or receivers - Allows end-to-end exactly-once pipelines* (*updates to downstream systems should be idempotent or transactional) - More fault-tolerant - More efficient - Easier to use
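     For contrast with the 0.10 integration shown later, the Spark 1.3-era direct stream (spark-streaming-kafka, Kafka 0.8 Simple API) looked roughly like this; the broker list and topics are placeholders.
        import kafka.serializer.StringDecoder
        import org.apache.spark.streaming.kafka.KafkaUtils

        // Direct stream: the driver computes offset ranges per batch, executors read with the Simple API
        val kafkaParams08 = Map[String, String]("metadata.broker.list" -> "broker01:9092,broker02:9092")
        val topics08 = Set("topicA", "topicB")

        val stream08 = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          streamingContext, kafkaParams08, topics08)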
  39. 39. Spark Streaming UI improvements (Spark 1.4)
  40. 40. Kafka Metadata (offsets) in UI (Spark 1.5)
  41. 41. What about Spark 2.0+ and new Kafka Integration? This is why we are here, right?
  42. 42. Spark 2.0+ new Kafka Integration: spark-streaming-kafka-0-8 vs spark-streaming-kafka-0-10. Broker version: 0.8.2.1 or higher vs 0.10.0 or higher. API stability: stable vs experimental. Language support: Scala, Java, Python vs Scala, Java. Receiver DStream: yes vs no. Direct DStream: yes vs yes. SSL/TLS support: no vs yes. Offset commit API: no vs yes. Dynamic topic subscription: no vs yes.
  43. 43. What's really new with this new Kafka integration? - New consumer API* (*instead of the Simple API) - Location strategies - Consumer strategies - SSL/TLS - No Python API :(
  44. 44. Location Strategies - The new consumer API will pre-fetch messages into buffers - So, keep cached consumers on the executors - It's better to schedule partitions on the host that already has the appropriate consumer
  45. 45. Location Strategies - PreferConsistent: distribute partitions evenly across available executors - PreferBrokers: if your executors are on the same hosts as your Kafka brokers - PreferFixed: specify an explicit mapping of partitions to hosts
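     A small sketch of how these strategies are picked in code; the host names are made up.
        import org.apache.kafka.common.TopicPartition
        import org.apache.spark.streaming.kafka010.LocationStrategies

        // The usual choice: spread partitions evenly over the available executors
        val consistent = LocationStrategies.PreferConsistent

        // Only if your executors run on the same hosts as the Kafka brokers
        val brokers = LocationStrategies.PreferBrokers

        // Pin specific partitions to specific hosts
        val fixed = LocationStrategies.PreferFixed(Map(
          new TopicPartition("topicA", 0) -> "executor-host-1",
          new TopicPartition("topicA", 1) -> "executor-host-2"))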
  46. 46. Consumer Strategies - New consumer API has a number of different ways to specify topics, some of which require considerable post-object-instantiation setup. - ConsumerStrategies provides an abstraction that allows Spark to obtain properly configured consumers even after restart from checkpoint.
  47. 47. Consumer Strategies - Subscribe: subscribe to a fixed collection of topics - SubscribePattern: use a regex to specify topics of interest - Assign: specify a fixed collection of partitions. Overloaded constructors let you specify the starting offset for a particular partition. ConsumerStrategy is a public class that you can extend.
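     A sketch of the three strategies in code, reusing the kafkaParams map defined in the basic-usage example two slides later; the partitions and starting offsets are made up.
        import java.util.regex.Pattern
        import org.apache.kafka.common.TopicPartition
        import org.apache.spark.streaming.kafka010.ConsumerStrategies

        // A fixed collection of topics
        val byTopics = ConsumerStrategies.Subscribe[String, String](Array("topicA", "topicB"), kafkaParams)

        // Every topic matching a regex
        val byPattern = ConsumerStrategies.SubscribePattern[String, String](Pattern.compile("topic.*"), kafkaParams)

        // Explicit partitions, with a starting offset per partition
        val parts = Array(new TopicPartition("topicA", 0), new TopicPartition("topicA", 1))
        val startOffsets = Map(parts(0) -> 2000L, parts(1) -> 2010L)
        val byAssign = ConsumerStrategies.Assign[String, String](parts, kafkaParams, startOffsets)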
  48. 48. SSL/TLS encryption - The new consumer API supports SSL - It only applies to communication between Spark and the Kafka brokers - You are still responsible for separately securing Spark inter-node communication
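     A sketch of what the extra Kafka client settings look like; the paths and passwords are placeholders, and kafkaParams is the map defined on the next slide.
        // Standard Kafka client SSL settings, added on top of the usual consumer params
        val sslParams = Map[String, Object](
          "security.protocol" -> "SSL",
          "ssl.truststore.location" -> "/path/to/kafka.client.truststore.jks",
          "ssl.truststore.password" -> "truststore-password",
          "ssl.keystore.location" -> "/path/to/kafka.client.keystore.jks",   // keystore only needed for client auth
          "ssl.keystore.password" -> "keystore-password",
          "ssl.key.password" -> "key-password"
        )

        val secureKafkaParams = kafkaParams ++ sslParams      // pass this to the ConsumerStrategy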
  49. 49. How to use the new Kafka integration on Spark 2.0+. Scala example code: basic usage.
        import org.apache.kafka.common.serialization.StringDeserializer
        import org.apache.spark.streaming.kafka010.KafkaUtils
        import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
        import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

        val kafkaParams = Map[String, Object](
          "bootstrap.servers" -> "broker01:9092,broker02:9092",
          "key.deserializer" -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id" -> "stream_group_id",
          "auto.offset.reset" -> "latest",
          "enable.auto.commit" -> (false: java.lang.Boolean)
        )

        val topics = Array("topicA", "topicB")

        val stream = KafkaUtils.createDirectStream[String, String](
          streamingContext,
          PreferConsistent,
          Subscribe[String, String](topics, kafkaParams)
        )

        stream.map(record => (record.key, record.value))
  50. 50. How to use the new Kafka integration on Spark 2.0+. Scala example code: getting metadata.
        import org.apache.spark.TaskContext
        import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}

        stream.foreachRDD { rdd =>
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
          rdd.foreachPartition { iter =>
            val osr: OffsetRange = offsetRanges(TaskContext.get.partitionId)
            // get any needed data from the offset range
            val topic = osr.topic
            val kafkaPartitionId = osr.partition
            val begin = osr.fromOffset
            val end = osr.untilOffset
          }
        }
  51. 51. Kafka or Spark RDD partitions? [diagram: a Kafka topic with partitions 1-4 mapping one-to-one to the partitions of the Spark RDD]
  52. 52. Kafka or Spark RDD partitions? [diagram: the same one-to-one mapping; each RDD partition corresponds to exactly one Kafka topic partition]
  53. 53. How to use the new Kafka integration on Spark 2.0+. Scala example code: getting metadata (the same code as slide 50, shown again).
  54. 54. How to use the new Kafka integration on Spark 2.0+. Scala example code: store offsets in Kafka itself with the commit API.
        import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

        stream.foreachRDD { rdd =>
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

          // DO YOUR STUFF with DATA

          stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
        }
  55. 55. Kafka + Spark Semantics: - At most once - At least once - Exactly once
  56. 56. Kafka + Spark Semantics: At most once - We don't want duplicates - Not worth the hassle of ensuring that messages don't get lost - Example: sending statistics over UDP 1. Set spark.task.maxFailures to 1 2. Make sure spark.speculation is false (the default) 3. Set the Kafka param auto.offset.reset to latest 4. Set the Kafka param enable.auto.commit to true
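     A configuration sketch for this at-most-once setup, reusing the kafkaParams map from slide 49; note that with the 0.10 consumer the value is latest rather than the old largest.
        import org.apache.spark.SparkConf

        // At most once: prefer dropping data over reprocessing it
        val conf = new SparkConf()
          .set("spark.task.maxFailures", "1")                  // fail the job instead of retrying the task
          .set("spark.speculation", "false")                   // the default; no speculative duplicate tasks

        val atMostOnceKafkaParams = kafkaParams ++ Map[String, Object](
          "auto.offset.reset" -> "latest",                     // on restart, jump to the newest data
          "enable.auto.commit" -> (true: java.lang.Boolean)    // let the consumer commit as it fetches
        )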
  57. 57. Kafka + Spark Semantics: At most once - This means you will lose messages on restart - But at least they shouldn't get replayed - Test this carefully if it's actually important to you that a message never gets repeated, because it's not a common use case.
  58. 58. Kafka + Spark Semantics: At least once - We don't want to lose any record - We don't care about duplicates - Example: sending internal alerts on relatively rare occurrences in the stream 1. Set spark.task.maxFailures > 1000 2. Set the Kafka param auto.offset.reset to earliest 3. Set the Kafka param enable.auto.commit to false
  59. 59. Kafka + Spark Semantics: At least once - Don't be silly! Do NOT replay your whole log on every restart - Manually commit the offsets when you are 100% sure records are processed - If this is too hard, you'd better have a relatively short retention log - Or be REALLY ok with duplicates. For example, you are outputting to an external system that handles duplicates for you (HBase)
  60. 60. Kafka + Spark Semantics: Exactly once - We don't want to lose any record - We don't want duplicates either - Example: storing the stream in a data warehouse 1. We need some kind of idempotent writes, or all-or-nothing writes (transactions) 2. Only store offsets EXACTLY after writing the data 3. Same parameters as at least once
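     A sketch of the "store offsets exactly after writing data" idea, with a hypothetical transactional store (myStore, transform, writeResults and writeOffsets are placeholders); collecting to the driver is a simplification that only works for small batches.
        import org.apache.spark.streaming.kafka010.HasOffsetRanges

        stream.foreachRDD { rdd =>
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
          val results = rdd.map(record => transform(record.value)).collect()   // transform() is hypothetical

          // Results and offsets commit or roll back together, so a crash can never
          // leave data written without its offsets (or offsets without the data).
          myStore.inTransaction { tx =>
            tx.writeResults(results)
            tx.writeOffsets(offsetRanges)
          }
        }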
  61. 61. Kafka + Spark Semantics: Exactly once - Probably the hardest to get right - There is still a small chance of failure if your app fails just between writing the data and committing the offsets (but it is REALLY small)
  62. 62. Spark Streaming + Kafka at Billy Mobile: a story of love and fury
  63. 63. Some Billy insights (we rock it!): 15B records monthly, 35 TB weekly retention log, 6K events/second, x4 growth per year
  64. 64. Our use cases: ETL to Data Warehouse - Input events from Kafka - Enrich events with some external data sources - Finally store it to Hive - We do NOT want duplicates - We do NOT want to lose events
  65. 65. Our use cases: ETL to Data Warehouse - Hive is not transactional - Writes are not idempotent either - Writing files to HDFS is atomic (all or nothing) - A 1:1 relation from each partition-batch to a file in HDFS - Store the current state of the batch in ZK - Store the offsets of the last finished batch in ZK
  66. 66. Our use cases: ETL to Data Warehouse. On failure: - If an executor fails, just keep going (reschedule the task) > spark.task.maxFailures = 1000 - If the driver fails (or restarts): - Load offsets and state of the current batch if it exists and finish it (KafkaUtils.createRDD) - Continue the stream from the last saved offsets
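     A sketch of the recovery path on driver restart, assuming hypothetical loadPendingBatchFromZk and finishBatch helpers; KafkaUtils.createRDD re-reads exactly the offset ranges of the unfinished batch before the stream is started again.
        import org.apache.spark.streaming.kafka010.{KafkaUtils, LocationStrategies, OffsetRange}
        import scala.collection.JavaConverters._

        val pending: Array[OffsetRange] = loadPendingBatchFromZk()   // offsets of the unfinished batch, from ZK

        if (pending.nonEmpty) {
          val rdd = KafkaUtils.createRDD[String, String](
            streamingContext.sparkContext,
            kafkaParams.asJava,                                      // createRDD takes a java.util.Map
            pending,
            LocationStrategies.PreferConsistent)
          finishBatch(rdd)                                           // hypothetical: enrich, write to Hive, mark batch done in ZK
        }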
  67. 67. Our use cases: Anomaly Detection - Input events from Kafka - Periodically load a batch-computed model - Detect when an offer stops converting (or converts too much) - We do not care about losing some events (on restart) - We always need to process the real-time stream
  68. 68. Our use cases: Anomaly Detection - It's useless to detect anomalies on a lagged stream! - Actually, it could be very bad - Always restart the stream at the latest offsets - Restart with fresh state
  69. 69. Our use cases: Store it to Entity Cache - Input events from Kafka - Almost no processing - Store it to HBase (has idempotent writes) - We do not care about duplicates - We can NOT lose a single event
  70. 70. Our use cases: Store it to Entity Cache - Since HBase has idempotent writes, we can write events multiple times without hassle - But, we do NOT start with earliest offsets - That would be 7 days of redundant writes!!! - We store offsets of last finished batch - But obviously we might re-write some events on restart or failure
  71. 71. Lessons Learned - Do NOT use checkpointing! - Not recoverable across upgrades - Do your own checkpointing - Track offsets yourself - ZK, HDFS, DB - Memory might be an issue - You do not want to waste it... - Adjust batchDuration - Adjust maxRatePerPartition
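     A sketch of "do your own checkpointing": resume the direct stream from offsets you stored yourself, and persist new offsets only after each batch is fully processed. loadOffsets and saveOffsets are hypothetical (ZK, HDFS, a DB...).
        import org.apache.kafka.common.TopicPartition
        import org.apache.spark.streaming.kafka010.{HasOffsetRanges, KafkaUtils, LocationStrategies}
        import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

        // On startup: resume from the offsets you stored yourself
        val fromOffsets: Map[TopicPartition, Long] = loadOffsets()
        val stream = KafkaUtils.createDirectStream[String, String](
          streamingContext,
          LocationStrategies.PreferConsistent,
          Subscribe[String, String](topics, kafkaParams, fromOffsets))

        // After each successfully processed batch: persist the offsets yourself
        stream.foreachRDD { rdd =>
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
          // ... process the batch, write the output ...
          saveOffsets(offsetRanges)
        }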
  72. 72. Future Research - Dynamic allocation: spark.dynamicAllocation.enabled vs spark.streaming.dynamicAllocation.enabled (https://issues.apache.org/jira/browse/SPARK-12133), but no reference in the docs... - Graceful shutdown - Structured Streaming
  73. 73. Thank you very much! Questions? @joanvr joanviladrosa [email protected]
