
Introduction to Structured Data Processing with Spark SQL

Page 1: Introduction to Structured Data Processing with Spark SQL

Structured data processing with Spark SQL

https://github.com/phatak-dev/structured_data_processing_spark_sql

Page 2: Introduction to Structured Data Processing with Spark SQL

● Madhukara Phatak

● Big data consultant and trainer at datamantra.io

● Consults on Hadoop, Spark and Scala

● www.madhukaraphatak.com

Page 3: Introduction to Structured Data Processing with Spark SQL

Agenda

● Variety in Big data
● Structured data
● Structured data analysis in M/R
● Data source API
● DataFrame
● SQL in Spark
● Smart sources

Page 4: Introduction to Structured Data Processing with Spark SQL

3 V’s of Big data

● Volume
  ○ TBs and PBs of files
  ○ Driving need for batch processing systems
● Velocity
  ○ TBs of stream data
  ○ Driving need for stream processing systems
● Variety
  ○ Structured, semi-structured and unstructured
  ○ Driving need for SQL and graph processing systems

Page 5: Introduction to Structured Data Processing with Spark SQL

Why care about structured data?

● Isn’t big data all about unstructured data?
● Most real world problems work with structured / semi-structured data 80% of the time
● Sources
  ○ JSON from API data
  ○ RDBMS input
  ○ NoSQL db inputs
● ETL processes convert unstructured data into structured data

Page 6: Introduction to Structured Data Processing with Spark SQL

Structured data in the M/R world

● Both structured and unstructured data are treated as the same text file
● Even higher level frameworks like Pig/Hive interpret the structured data using a user provided schema
● Let’s take an example of processing CSV data in Spark in Map/Reduce style
● Ex: CsvInRDD (sketched below)
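A minimal sketch of what CsvInRDD-style code could look like: the CSV is handled as plain text and the schema is hand-coded in the job. The Sales case class, file path and column layout are assumptions, since the slides don't show the actual schema.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical record type; the real example's schema isn't shown in the slides.
case class Sales(transactionId: String, customerId: String, itemId: String, amountPaid: Double)

object CsvInRDD {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CsvInRDD").setMaster("local[2]"))

    // Load as plain text: Spark has no idea this is structured data.
    val lines = sc.textFile("sales.csv")
    val header = lines.first()

    // Drop the header and split each line on commas by hand.
    val records = lines
      .filter(_ != header)
      .map(_.split(","))
      .map(cols => Sales(cols(0), cols(1), cols(2), cols(3).toDouble))

    // A Map/Reduce style aggregation: total amount paid per customer.
    records
      .map(s => (s.customerId, s.amountPaid))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```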

Page 7: Introduction to Structured Data Processing with Spark SQL

Challenges of structured data in M/R

● No uniform way of loading structured data; we just piggyback on the input format
● No automatic schema discovery
● Adding a new field or changing the field sequence is not that easy
● Even Pig’s JSON input format is limited to record separation
● No high level representation of structured data even in Spark, as there is always RDD[T]
● No way to query an RDD using SQL once we have constructed structured output

Page 8: Introduction to Structured Data Processing with Spark SQL

Spark SQL library

● Data source API
  Universal API for loading/saving structured data
● DataFrame API
  Higher level representation for structured data
● SQL interpreter and optimizer
  Express data transformation in SQL
● SQL service
  Hive Thrift server

Page 9: Introduction to Structured Data Processing with Spark SQL

Spark SQL                        | Apache Hive
---------------------------------|---------------------------------------
Library                          | Framework
Optional metastore               | Mandatory metastore
Automatic schema inference       | Explicit schema declaration using DDL
API - DataFrame DSL and SQL      | HQL
Supports both Spark SQL and HQL  | Only HQL
Hive Thrift server               | Hive Thrift server

Page 10: Introduction to Structured Data Processing with Spark SQL

Loading structured data

● Why not InputFormat?
  ○ InputFormat always needs a key, value pair, which is not an efficient way to represent a schema
  ○ Schema discovery is not built in
  ○ No direct support for smart sources, aka server side filtering
    ■ Hacks using configuration passing

Page 11: Introduction to Structured Data Processing with Spark SQL

Data source API

Page 12: Introduction to Structured Data Processing with Spark SQL

Data source API

● Universal API for loading/saving structured data
● Built in support for Hive, Avro, JSON, JDBC, Parquet
● Third party integration through spark-packages
● Support for smart sources
● Third parties already supporting
  ○ CSV
  ○ MongoDB
  ○ Cassandra (in the works), etc.

Page 13: Introduction to Structured Data Processing with Spark SQL

Data source API examples

● SQLContext is the entry point for accessing the data source APIs
● sqlContext.load is the way to load from a given source
● Examples
  ○ Loading a CSV file - CsvInDataSource.scala
  ○ Loading a JSON file - JsonInDataSource.scala
● Can we mix and match sources having the same schema?
  ○ Example: MixSources.scala (see the sketch below)
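A hedged sketch of what the CsvInDataSource / JsonInDataSource / MixSources examples could look like with the Spark 1.3-era sqlContext.load API. The file names, column layout and the use of the third-party spark-csv package are assumptions.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object DataSourceExamples {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DataSourceExamples").setMaster("local[2]"))
    val sqlContext = new SQLContext(sc)

    // JSON support is built in: name the path and the source.
    val jsonDf = sqlContext.load("sales.json", "json")

    // CSV comes from the third-party spark-csv package on spark-packages.
    val csvDf = sqlContext.load(
      "com.databricks.spark.csv",
      Map("path" -> "sales.csv", "header" -> "true", "inferSchema" -> "true"))

    // Both calls return a DataFrame with a discovered schema.
    jsonDf.printSchema()
    csvDf.printSchema()

    // Mixing sources (cf. MixSources.scala): assuming both files yield the
    // same schema (names, order and types), they can simply be unioned.
    val combined = jsonDf.unionAll(csvDf)
    println(combined.count())
  }
}
```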

Page 14: Introduction to Structured Data Processing with Spark SQL

DataFrame

● Single abstraction for representing structured data in Spark
● DataFrame = RDD + Schema (aka SchemaRDD)
● All data source APIs return a DataFrame
● Introduced in 1.3
● Inspired by R and Python pandas
● .rdd converts back to the RDD representation, resulting in RDD[Row] (see the sketch below)
● Support for a DataFrame DSL in Spark
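A small sketch of the "RDD + schema" idea, assuming a hypothetical sales.json input and a spark-shell style session where sqlContext already exists: the schema travels with the DataFrame, and .rdd drops back to a plain RDD[Row].

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// Assumes sqlContext is in scope (e.g. as provided by spark-shell).
val df = sqlContext.load("sales.json", "json")
df.printSchema()                // the inferred schema is carried along with the data

val rows: RDD[Row] = df.rdd     // back to a plain RDD of generic Rows
rows.take(5).foreach(println)
```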

Page 15: Introduction to Structured Data Processing with Spark SQL

Querying data frames using SQL

● Spark SQL has a built in SQL interpreter and optimizer, similar to Hive
● Supports both the Spark SQL and Hive dialects
● Support for both temporary tables and the Hive metastore
● Hive ideas like UDFs, UDAFs and partitioning are supported
● Example
  ○ QueryCsv.scala (sketched below)
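A sketch of what QueryCsv.scala might do, assuming the same hypothetical sales.csv columns as in the earlier sketches: register the DataFrame as a temporary table and query it with SQL.

```scala
// Load the CSV through the data source API (spark-csv package assumed).
val salesDf = sqlContext.load(
  "com.databricks.spark.csv",
  Map("path" -> "sales.csv", "header" -> "true", "inferSchema" -> "true"))

// Register it as a temporary table so SQL can reference it by name.
salesDf.registerTempTable("sales")

// The SQL goes through the same interpreter and optimizer as the DataFrame DSL.
val topCustomers = sqlContext.sql(
  """SELECT customerId, SUM(amountPaid) AS total
    |FROM sales
    |GROUP BY customerId
    |ORDER BY total DESC""".stripMargin)

topCustomers.show()
```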

Page 16: Introduction to Structured Data Processing with Spark SQL

DataFrame DSL

● A DSL to express SQL transformations in a map/reduce style
● Geared towards data scientists coming from R/Python
● Both SQL and the DataFrame DSL use exactly the same interpreter and optimizer
● SQL or DataFrame is up to you to decide
● Ex: AggDataFrame (sketched below)
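The same aggregation expressed in the DataFrame DSL, roughly what AggDataFrame might look like (column names are assumptions, and salesDf is the DataFrame from the previous sketch); it compiles to the same optimized plan as the SQL version.

```scala
import org.apache.spark.sql.functions.{sum, desc}

// Equivalent of the SQL query above, written as DataFrame operations.
val totals = salesDf
  .groupBy("customerId")
  .agg(sum("amountPaid").as("total"))
  .orderBy(desc("total"))

totals.show()
```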

Page 17: Introduction to Structured Data Processing with Spark SQL

RDD transformations                                | DataFrame transformations
---------------------------------------------------|---------------------------------------------------------------
Actual transformation is shipped to the cluster    | Optimized, generated transformation is shipped to the cluster
No schema needs to be specified                    | Mandatory schema
No parser or optimizer                             | SQL parser and optimizer
Lowest level API on the platform                   | API built on SQL, which is in turn built on RDDs
Does not use smart source capabilities             | Makes effective use of smart sources
Different performance in different language APIs   | Same performance across all languages

Page 18: Introduction to Structured Data Processing with Spark SQL

Performance

Page 20: Introduction to Structured Data Processing with Spark SQL

Smart sources

● The data source API integrates richly with data sources, allowing better performance from smart sources
● Smart data sources are the ones which support server side filtering, column pruning etc.
● Ex: Parquet, HBase, Cassandra, RDBMS
● Whenever the optimizer determines it needs only a few columns, it passes that info to the data source
● The data source can then optimize to read only those columns (see the sketch below)
● More optimizations, like sharing of logical plans, are coming in the future
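A small sketch of column pruning with a smart source such as Parquet (file and column names are assumptions, sqlContext as before); explain(true) prints the optimized plan so you can see that only the referenced columns are requested from the source.

```scala
// Read a Parquet file through the data source API.
val events = sqlContext.load("events.parquet", "parquet")

// Only two columns are referenced, so only those need to be read;
// the filter is also a candidate for push-down into the source.
val pruned = events
  .filter(events("amountPaid") > 100)
  .select("customerId", "amountPaid")

// Inspect the optimized plan to see the pruned read.
pruned.explain(true)
```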

Page 21: Introduction to Structured Data Processing with Spark SQL

Apache Parquet

● Apache Parquet is a columnar storage format for Hadoop
● Supported by M/R, Spark, Hive, Pig etc.
● Supports compression out of the box
● Optimized for column oriented processing, aka analytics
● Supported out of the box in Spark via the data source API
● One of the smart sources, which supports column pruning
● Ex: CsvToParquet.scala and AggParquet.scala (sketched below)
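A sketch of what CsvToParquet.scala and AggParquet.scala might do, using the Spark 1.3-era Parquet helpers; paths and column names are assumptions.

```scala
// Same CSV load as in the earlier sketches (spark-csv package assumed).
val salesDf = sqlContext.load(
  "com.databricks.spark.csv",
  Map("path" -> "sales.csv", "header" -> "true", "inferSchema" -> "true"))

// saveAsParquetFile writes the DataFrame out in Parquet's columnar format.
salesDf.saveAsParquetFile("sales.parquet")

// Reading Parquet goes back through the data source API; thanks to column
// pruning, only the columns the query touches are read from disk.
val parquetDf = sqlContext.parquetFile("sales.parquet")
parquetDf.groupBy("itemId").count().show()
```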

