
Stream Processing using Apache Spark and Apache Kafka

Transcript
Page 1: Stream Processing using Apache Spark and Apache Kafka

Apache Spark: Stream Processing with Kafka

Please introduce yourselves using the Q&A window that appears on the right while others join us.

Page 2: Stream Processing using Apache Spark and Apache Kafka

● Session: 3 hours duration (increased from 2:30 hours due to high demand)
● First half: Apache Spark introduction & streaming basics
● 10-minute break
● Second half: hands-on demo using CloudxLab

● Session is being recorded. Recording & presentation will be shared after the session

● Asking questions
● Everyone except the instructor is muted
● Please ask questions by typing in the Q&A window (requires logging in to Google+)
● The instructor will read out the question before answering
● To get better answers, keep your messages short and avoid chat language

WELCOME TO THE SESSION

Page 3: Stream Processing using Apache Spark and Apache Kafka

WELCOME TO CLOUDxLAB SESSION

A cloud-based lab for students to gain hands-on experience in Big Data technologies such as Hadoop and Spark

● Learn Through Practice

● Real Environment

● Connect From Anywhere

● Connect From Any Device

● Centralized Datasets

● No Installation

● No Compatibility Issues

● 24x7 Support

Page 4: Stream Processing using Apache Spark and Apache Kafka

TODAY’S AGENDA

I. Introduction to Apache Spark
II. Introduction to stream processing
III. Understanding RDDs (Resilient Distributed Datasets)
IV. Understanding DStreams
V. Kafka introduction
VI. Understanding the stream processing flow
VII. Real-time hands-on using CloudxLab
VIII. Questions and answers

Page 5: Stream Processing using Apache Spark and Apache Kafka

About the Instructor

2015 - CloudxLab: founded CloudxLab
2014 - KnowBigData: founded KnowBigData
2012-2014 - Amazon: built high-throughput systems for the Amazon.com site using in-house NoSQL
2011-2012 - InMobi: built a recommender that churns 200 TB
2006-2011 - tBits Global: founded tBits Global; built an enterprise-grade Document Management System
2002-2006 - D.E. Shaw: built big data systems before the term was coined
2002 - IIT Roorkee: finished B.Tech.

Page 6: Stream Processing using Apache Spark and Apache Kafka

APACHE SPARK

A fast and general engine for large-scale data processing.

● Really fast MapReduce:
○ 100x faster than Hadoop MapReduce in memory
○ 10x faster on disk
● Builds on a similar paradigm to MapReduce

● Integrated with Hadoop

Page 7: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING

An extension of the core Spark API for high-throughput, fault-tolerant processing of live data streams

[Diagram: input sources → Spark Streaming → output]

Page 8: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING

Workflow

● Spark Streaming receives live input data streams
● Divides the data into batches
● The Spark engine processes each batch to generate the final stream of results in batches

Spark Streaming provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data.

Page 9: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - DSTREAM
Internally represented using RDDs

Each RDD in a DStream contains data from a certain interval.
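To make this concrete, here is a minimal sketch (not from the original deck) that uses foreachRDD to work with the underlying RDD of each batch; the socket source on localhost:9999 and the one-second batch interval simply mirror the word-count example that follows, so treat the details as illustrative:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "DStreamAsRDDs")
ssc = StreamingContext(sc, 1)  # one RDD per 1-second batch interval

lines = ssc.socketTextStream("localhost", 9999)

# Each micro-batch arrives as an ordinary RDD, so the full RDD API applies
def show_batch(time, rdd):
    print("Batch at %s contains %d records" % (time, rdd.count()))

lines.foreachRDD(show_batch)

ssc.start()
ssc.awaitTermination()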

Page 10: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE

Problem: do the word count every second.
Step 1: Create a connection to the service

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Create a local StreamingContext with two working threads
# and a batch interval of 1 second
sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 1)

# Create a DStream that will connect to hostname:port, like localhost:9999
lines = ssc.socketTextStream("localhost", 9999)

Page 11: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE

Step 2: Split each line into words, convert the words to tuples, and then count.

# Split each line into words
words = lines.flatMap(lambda line: line.split(" "))

# Count each word in each batch
pairs = words.map(lambda word: (word, 1))

# Do the count
wordCounts = pairs.reduceByKey(lambda x, y: x + y)

Problem: do the word count every second.

Page 12: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE

Step 3: Print the stream. This is a periodic action that runs once per batch.

# Print the first ten elements of each RDD generated
# in this DStream to the console
wordCounts.pprint()

Problem: do the word count every second.

Page 13: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE

Step 4: Everything is set up. Let's start.

# Start the computation
ssc.start()

# Wait for the computation to terminate
ssc.awaitTermination()

Problem: do the word count every second.

Page 14: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE
Problem: do the word count every second.

# Run the streaming job (the script is also available in HDFS at /data/spark)
spark-submit spark_streaming_ex.py 2>/dev/null

# In another terminal, start a text source listening on port 9999
nc -l 9999

Page 15: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE
Problem: do the word count every second.

Page 16: Stream Processing using Apache Spark and Apache Kafka

SPARK STREAMING - EXAMPLE
Problem: do the word count every second.

spark-submit spark_streaming_ex.py 2>/dev/null

# Feed the stream with an endless supply of input lines
yes | nc -l 9999

Page 17: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Apache Kafka
● Publish-subscribe messaging
● A distributed, partitioned, replicated commit log service
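To illustrate the publish-subscribe model in code (this sketch is not part of the original deck, which uses the console tools instead), here is a minimal producer and consumer using the third-party kafka-python package; the broker address localhost:9092 and the topic name are assumptions about your setup:

# Minimal pub-sub sketch, assuming `pip install kafka-python`
# and a broker listening on localhost:9092 (illustrative address)
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("demo-topic", b"hello kafka")  # publish to the topic's log
producer.flush()

consumer = KafkaConsumer("demo-topic",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for message in consumer:                     # subscribers replay the log
    print(message.value)
    break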

Page 18: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Prerequisites
● Zookeeper
● Kafka
● Spark
● All of the above are installed by Ambari with HDP (CloudxLab)
● Kafka library - you need to download it from Maven (or fetch it at submit time, as sketched below)
○ also available in /data/spark
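As an alternative to downloading the jar by hand, spark-submit can usually pull the integration library straight from Maven with the --packages flag; the coordinates below assume the Spark 1.6.0 / Scala 2.10 build used later in this deck:

# Fetch the Kafka integration from Maven at submit time (assumed coordinates)
spark-submit --packages org.apache.spark:spark-streaming-kafka-assembly_2.10:1.6.0 kafka_wordcount.py localhost:2181 session2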

Page 19: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 1: Download the Spark Kafka assembly jar (see the prerequisites) and include the essential imports.

from __future__ import print_function
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
import sys

Problem: do the word count every second from Kafka

Page 20: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 2: Create the streaming objects

Problem: do the word count every second from Kafka

sc = SparkContext(appName="KafkaWordCount")
ssc = StreamingContext(sc, 1)

# Read the ZooKeeper quorum and topic name from the arguments
zkQuorum, topic = sys.argv[1:]

# Listen to the topic
kvs = KafkaUtils.createStream(ssc, zkQuorum, "spark-streaming-consumer", {topic: 1})

Page 21: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 3: Create the RDDs using transformations & actions

Problem: do the word count every second from Kafka

# Read lines from the stream; Kafka messages arrive as (key, value) pairs
lines = kvs.map(lambda x: x[1])

# Split lines into words, map words to tuples, reduce
counts = lines.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)

# Print the counts
counts.pprint()

Page 22: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 4: Start the process

Problem: do the word count every second from Kafka

ssc.start()
ssc.awaitTermination()

Page 23: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 5: Create the topic

Problem: do the word count every second from Kafka

# Log in via ssh or the web console
ssh [email protected]

# Add the Kafka binaries to the PATH
export PATH=$PATH:/usr/hdp/current/kafka-broker/bin

# Create the topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic session2

# Check that it was created
kafka-topics.sh --list --zookeeper localhost:2181

Page 24: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 6: Create the producer

# Find the address of any broker from zookeeper-client
# using the command: get /brokers/ids/0
kafka-console-producer.sh --broker-list ip-172-31-13-154.ec2.internal:6667 --topic session2

# Test that it is producing by consuming in another terminal
kafka-console-consumer.sh --zookeeper localhost:2181 --topic session2 --from-beginning

# Produce a lot
yes | kafka-console-producer.sh --broker-list ip-172-31-13-154.ec2.internal:6667 --topic session2

Problem: do the word count every second from Kafka

Page 25: Stream Processing using Apache Spark and Apache Kafka

Spark Streaming + Kafka Integration

Step 7: Do the stream processing. Check the graphs in the Spark UI on port 4040.

Problem: do the word count every second from Kafka

(spark-submit --jars spark-streaming-kafka-assembly_2.10-1.6.0.jar kafka_wordcount.py localhost:2181 session2) 2>/dev/null

Page 26: Stream Processing using Apache Spark and Apache Kafka

UpdateStateByKey Operation
Compute an aggregation across the whole day

● The updateStateByKey operation allows you to maintain arbitrary state while continuously updating it with new information.
● To use it, you have to do two things:
○ Define the state - the state can be an arbitrary data type.
○ Define the state update function - specify with a function how to update the state using the previous state and the new values from the input stream.
● In every batch, Spark will apply the state update function for all existing keys, regardless of whether they have new data in the batch or not.
● If the update function returns None, the key-value pair will be eliminated.

Page 27: Stream Processing using Apache Spark and Apache Kafka

UpdateStateByKey Operation

def updateFunction(newValues, runningCount):
    if runningCount is None:
        runningCount = 0
    # Add the new values to the previous running count to get the new count
    return sum(newValues, runningCount)

runningCounts = pairs.updateStateByKey(updateFunction)

Objective: maintain a running count of each word seen in a text data stream. The running count is the state, and it is an integer.
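One caveat the slide leaves out: stateful operations like updateStateByKey require checkpointing to be enabled, otherwise the job fails at startup. A minimal sketch, where the checkpoint path is illustrative (a local or HDFS directory both work):

# updateStateByKey keeps running state across batches, so Spark
# requires a checkpoint directory; "checkpoint" is an illustrative path
ssc.checkpoint("checkpoint")

runningCounts = pairs.updateStateByKey(updateFunction)
runningCounts.pprint()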

Page 28: Stream Processing using Apache Spark and Apache Kafka

Feedback

http://bit.ly/1ZQwUAn

Help us improve

