Ingestion Platform
Pramod Immaneni, Aug 18th 2016
Big Data Ingestion
• Ingestion at scale
• Support for a variety of sources
• Ability to process data during ingestion
  ᵒ Cleansing, enrichment and transformations
• Recovery from failures without having to re-run the entire ingestion
  ᵒ No data loss or duplicates when failures occur
• Flexibility to create different types of ingestion pipelines
Our solution
• Ingestion configurable to the user's needs
  ᵒ Users can put together the building blocks of ingestion any way they want
• Supports a variety of sources and sinks out of the box
  ᵒ Kafka, databases, files, HDFS
• Supports filters and transformations out of the box
• Extensible
  ᵒ Users can add their own processing elements and sources
• Scales with data
  ᵒ Uses the Apache Apex distributed platform as the processing engine
  ᵒ Each stage of the pipeline can scale independently and even dynamically
• Data integrity and no loss in the face of failures
  ᵒ Fault tolerant and end-to-end exactly-once, built on the fault tolerance support Apache Apex provides out of the box
Application Stack
[Diagram: application stack — an ingestion pipeline of operators running on Apache Apex, which runs on Hadoop]
Application Designer
Apache Apex
• In-memory, distributed stream processing
• Application logic is broken into components called operators that run in a distributed fashion across your cluster
• Natural programming model
  ᵒ Unobtrusive Java API that blends with your custom business logic
  ᵒ Maintain state and metrics in your member variables
• Scalable, high throughput, low latency
  ᵒ Operators can be scaled up or down at runtime according to load and SLA
  ᵒ Dynamic scaling (elasticity), compute locality
• Fault tolerance & correctness
  ᵒ Automatically recover from node outages without having to reprocess from the beginning
  ᵒ State is preserved: checkpointing, incremental recovery
  ᵒ End-to-end exactly-once
• Operability
  ᵒ System and application metrics, record/visualize data
  ᵒ Dynamic changes
Platform Overview
Application Development Model
A Stream is a sequence of data tuples. A typical Operator takes one or more input streams, performs computations & emits one or more output streams
• Each Operator is YOUR custom business logic in Java, or a built-in operator from our open source library
• An Operator has many instances that run in parallel, and each instance is single-threaded
A Directed Acyclic Graph (DAG) is made up of operators and streams
[Diagram: a Directed Acyclic Graph (DAG) of operators connected by streams — tuples flow through an output stream and are successively filtered and enriched, producing filtered and enriched streams]
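The operator/stream model described above can be sketched in a few lines. This is an illustrative Python sketch only — real Apex operators are Java classes, and the function names here (`filter_op`, `enrich_op`, `run_dag`) are made up for this example, not part of any Apex API.

```python
# Illustrative sketch of operators connected by streams in a simple DAG chain.
def filter_op(tuples):
    # Keep only tuples that pass a predicate (the "Filtered Stream").
    return [t for t in tuples if t["value"] > 0]

def enrich_op(tuples):
    # Add a derived field to each tuple (the "Enriched Stream").
    return [dict(t, doubled=t["value"] * 2) for t in tuples]

def run_dag(source, operators):
    # A DAG with a single chain: each operator consumes the upstream
    # stream and emits a new stream for the next operator.
    stream = source
    for op in operators:
        stream = op(stream)
    return stream

source = [{"value": 3}, {"value": -1}, {"value": 5}]
result = run_dag(source, [filter_op, enrich_op])
print(result)  # [{'value': 3, 'doubled': 6}, {'value': 5, 'doubled': 10}]
```

In the real platform each operator in such a chain runs as one or more distributed, single-threaded instances rather than as in-process function calls.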
Scalability
[Diagram: scalability through partitioning — a logical operator is split into multiple physical partitions (1a, 1b, 1c, …), unifiers merge the partitioned streams for downstream operators, and partitions are deployed in separate containers across the cluster]
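The partition-then-unify idea in the diagram can be sketched with a simple word-count-style aggregation. This is a hypothetical illustration, assuming hash partitioning and a merging unifier; it is not the Apex partitioning API.

```python
# Sketch: a logical counting operator split into n partitions, merged by a unifier.
from collections import Counter

def partition(tuples, n):
    # Hash-partition the stream across n physical operator instances.
    parts = [[] for _ in range(n)]
    for t in tuples:
        parts[hash(t) % n].append(t)
    return parts

def count_op(part):
    # Each partition computes its partial result independently.
    return Counter(part)

def unifier(partials):
    # The unifier merges partial results into a single downstream stream.
    total = Counter()
    for p in partials:
        total.update(p)
    return total

words = ["a", "b", "a", "c", "b", "a"]
merged = unifier(count_op(p) for p in partition(words, 3))
print(merged["a"])  # 3
```

Because partitions are independent, the number of physical instances can change without affecting the merged result, which is what allows scaling up or down at runtime.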
Fault Tolerance
• Operator state is checkpointed to a persistent store
  ᵒ Automatically performed by the engine, no additional coding needed
  ᵒ Asynchronous and distributed
  ᵒ In case of failure, operators are restarted from the checkpointed state
• Automatic detection and recovery of failed containers
  ᵒ Heartbeat mechanism
  ᵒ YARN process status notification
• Buffering to enable replay of data from the recovery point
  ᵒ Fast, incremental recovery, spike handling
• Application master state is checkpointed
  ᵒ Snapshot of the physical (and logical) plan
  ᵒ Execution layer change log
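The checkpoint/restore cycle can be sketched as follows. This is a minimal illustration of the idea — operator state living in ordinary member variables, serialized to a persistent store, and restored into a fresh instance after failure — not the Apex engine's implementation, which does this automatically and asynchronously.

```python
# Sketch: checkpointing operator state kept in member variables.
import os
import pickle
import tempfile

class SumOperator:
    def __init__(self):
        self.total = 0  # state kept in an ordinary member variable

    def process(self, value):
        self.total += value

    def checkpoint(self, path):
        # The engine would serialize operator state to a persistent store.
        with open(path, "wb") as f:
            pickle.dump(self.__dict__, f)

    @classmethod
    def restore(cls, path):
        # After a failure, a fresh instance is restarted from checkpointed state.
        op = cls()
        with open(path, "rb") as f:
            op.__dict__.update(pickle.load(f))
        return op

path = os.path.join(tempfile.mkdtemp(), "chk")
op = SumOperator()
for t in [1, 2, 3]:
    op.process(t)
op.checkpoint(path)

recovered = SumOperator.restore(path)  # simulate recovery after a failure
print(recovered.total)  # 6
```

Combined with buffered replay from the recovery point, restoring state this way means processing resumes without reprocessing from the beginning.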
Exactly Once - Files
[Diagram: a file being written, with a checkpointed offset marker (Chk)]
• The operator saves the file offset during checkpoint
• File contents are flushed before the checkpoint to ensure there is no pending data in the buffer
• On recovery, the platform restores the file offset value from the checkpoint
• The operator truncates the file to that offset
• It then starts writing data again, ensuring no data is duplicated or lost
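The truncate-on-recovery steps above can be sketched directly. This is an illustrative sketch under the stated assumptions — the offset saved at checkpoint marks the last fully flushed write, and the same windows are replayed after recovery.

```python
# Sketch: exactly-once file output via checkpointed offset + truncate.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.txt")

with open(path, "w") as f:
    f.write("window-1 data\n")
    f.flush()                      # flush before checkpoint: no pending buffer
    checkpoint_offset = f.tell()   # offset saved as part of operator state
    f.write("partial window-2")    # written after the checkpoint, then: failure

# Recovery: restore the offset, truncate the file back to it, then replay.
with open(path, "r+") as f:
    f.truncate(checkpoint_offset)
with open(path, "a") as f:
    f.write("window-2 data\n")

print(open(path).read())  # window-1 data\nwindow-2 data\n
```

Truncating discards the partially written window, so replaying it produces the file exactly once with no duplicated or lost records.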
Exactly Once - Databases
[Diagram: per-window rows in a data table and an (operator id, window id) entry in a meta table, committed together; rows from the uncommitted window never appear]
• Data in a window is written out in a single transaction
• The window id is also written to a meta table as part of the same transaction
• On recovery, the operator reads the window id from the meta table
• It ignores data for windows up to the recovered window id and writes new data
• Partial window data written before the failure will not appear in the data table because the transaction was not committed
• Assumes idempotency of replay (the same windows are replayed with the same data)
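The meta-table pattern can be sketched with `sqlite3`. Table and column names (`data`, `meta`, `op_id`, `window_id`) are made up for this illustration; the point is that the data rows and the window id commit in one transaction, so replayed windows are detected and skipped.

```python
# Sketch: exactly-once database output via a window-id meta table.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE data (window_id INTEGER, payload TEXT);
    CREATE TABLE meta (op_id TEXT PRIMARY KEY, window_id INTEGER);
""")

def committed_window(op_id):
    # On recovery, read the last committed window id from the meta table.
    row = db.execute("SELECT window_id FROM meta WHERE op_id = ?", (op_id,)).fetchone()
    return row[0] if row else -1

def write_window(op_id, window_id, tuples):
    # Ignore windows at or below the recovered window id (replay after failure).
    if window_id <= committed_window(op_id):
        return
    with db:  # data rows and the meta update commit in a single transaction
        db.executemany("INSERT INTO data VALUES (?, ?)",
                       [(window_id, t) for t in tuples])
        db.execute("INSERT OR REPLACE INTO meta VALUES (?, ?)", (op_id, window_id))

write_window("op1", 0, ["a", "b"])
write_window("op1", 1, ["c"])
write_window("op1", 1, ["c"])  # replayed window: skipped, no duplicate rows
print(db.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # 3
```

If a failure interrupts `write_window`, the transaction rolls back, so neither the partial data rows nor the window id survive — replaying the window then writes it exactly once.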
Ingestion Solution
• Application package with operators ready to use for ingestion
• Input and output connectors
  ᵒ Kafka – dynamically scalable with Kafka scale
  ᵒ HDFS block and file
  ᵒ S3
  ᵒ Databases with JDBC – Postgres, MySQL
• Processing
  ᵒ Deduper
  ᵒ Parsing, filtering & transform
• Comes with pre-built pipelines
  ᵒ Kafka to HDFS with Deduper
  ᵒ HDFS sync between two clusters or S3 to HDFS
• Currently in beta
  ᵒ If interested, please contact [email protected]
Resources
• http://apex.apache.org/
• Learn more: http://apex.apache.org/docs.html
• Subscribe: http://apex.apache.org/community.html
• Download: http://apex.apache.org/downloads.html
• Follow @ApacheApex: https://twitter.com/apacheapex
• Meetups: http://www.meetup.com/pro/apacheapex/
• More examples: https://github.com/DataTorrent/examples
• Slideshare: http://www.slideshare.net/ApacheApex/presentations
• YouTube: https://www.youtube.com/results?search_query=apache+apex
• Free Enterprise License for Startups: https://www.datatorrent.com/product/startup-accelerator/
Q&A