Brief Overview of Big Data, Hadoop, and MapReduce
Jianer Chen, CSCE-629, Fall 2015
A Lot of Data
• Google processes 20 PB a day (2008)
• The Wayback Machine holds 3 PB + 100 TB/month (03/2009); 9.6 PB more recently
• Facebook processes 500 TB/day (08/2012)
• eBay has > 10 PB of user data + 50 TB/day (01/2012)
• The CERN Data Centre has over 100 PB of physics data
KB (kilobyte) = 10^3 bytes; MB (megabyte) = 10^6 bytes; GB (gigabyte) = 10^9 bytes; TB (terabyte) = 10^12 bytes; PB (petabyte) = 10^15 bytes
• 20+ billion web pages × 20 KB each = 400+ TB
  - one computer reads 30-35 MB/sec from disk, so it would take more than 4 months just to read the web pages (see the sketch below)
  - ~1,000 hard drives just to store the web pages
• Not scalable: it takes even longer to do something useful with the data!
• A standard architecture for such problems has emerged:
  - a cluster of commodity Linux nodes
  - a commodity network (Ethernet) to connect them
Google Example
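For concreteness, here is a quick back-of-envelope check of these figures in Python. All numbers are the slide's own assumptions (the drive size is implied by the slide's 1,000-drive figure), not measurements:

# Back-of-envelope check of the slide's figures.
pages = 20e9            # 20+ billion web pages
page_size = 20e3        # ~20 KB per page
disk_rate = 30e6        # ~30 MB/sec sequential read from one disk
drive_size = 400e9      # ~400 GB per drive, implied by "1,000 hard drives"

total_bytes = pages * page_size              # 4.0e14 bytes = 400 TB
read_days = total_bytes / disk_rate / 86400  # ~154 days, i.e., 4+ months
drives = total_bytes / drive_size            # ~1,000 drives just to store it

print(f"{total_bytes / 1e12:.0f} TB, {read_days:.0f} days, {drives:.0f} drives")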
Cluster Architecture: Many Machines
[Figure: racks of commodity nodes (each with CPU and memory), one switch per rack, rack switches linked by a backbone switch. 1 Gbps between nodes in a rack; 2-10 Gbps backbone between racks. Each rack has 16-64 nodes. Google had 1 million machines in 2011.]
Hadoop Cluster (DN: data node; TT: task tracker; NN: name node)
From: http://bradhedlund.com/2011/09/10/understanding-hadoop-clusters-and-the-network/
Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs.
Cluster Computing: A Classical Algorithmic Idea: Divide-and-Conquer
[Figure: divide-and-conquer on a cluster. A work partition splits the work into work 1-4; four "workers" solve the pieces in parallel and produce results 1-4; a result combine step merges them to solve the original problem. A local sketch of this pattern follows.]
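A minimal single-machine sketch of the partition/work/combine pattern, with a process pool standing in for the cluster's workers (illustrative only; the names are hypothetical):

# Partition -> workers -> combine, simulated with multiprocessing.
from multiprocessing import Pool

def worker(chunk):
    # "work i" -> "result i": here the work is just summing numbers.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]    # work partition into work 1-4
    with Pool(4) as pool:
        results = pool.map(worker, chunks)     # four "workers" in parallel
    print(sum(results))                        # result combine -> solve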
Challenges in Cluster Computing
• How do we assign work units to workers?
• What if we have more work units than workers?
• What if workers need to share partial results?
• How do we aggregate partial results?
• How do we know all the workers have finished?
• What if workers die?
What is the common theme of all of these problems?
• Parallelization problems arise from:
  - communication between workers (e.g., to exchange state)
  - access to shared resources (e.g., data)
• We need a synchronization mechanism.
Challenges in Cluster Computing
• We need the right level of abstraction
  - a new model more appropriate for the multicore/cluster environment
• Hide system-level details from the developers
  - no more race conditions, lock contention, etc.
• Separate the what from the how
  - the developer specifies the computation that needs to be performed
  - the execution framework handles the actual execution
Therefore,
This motivated MapReduce
MapReduce: Big Ideas
• Failures are common in cluster systems ⇒ the MapReduce implementation copes with failures (automatic task restart)
• Data movement is expensive in supercomputers ⇒ MapReduce moves processing to the data (leveraging locality)
• Disk I/O is time-consuming ⇒ MapReduce organizes computation into long streaming operations
• Developing distributed software is difficult ⇒ MapReduce isolates developers from implementation details
Typical Large-Data Problem
• Iterate over a large number of records
• Extract something of interest from each (map)
• Shuffle and sort intermediate results
• Aggregate intermediate results (reduce)
• Generate final output

Key idea of MapReduce: provide a functional abstraction for these two operations. [Dean and Ghemawat, OSDI 2004]
MapReduce: General Framework
[Figure: the input is divided into InputSplits; user-specified map tasks process the splits in parallel; a system-provided shuffle-and-sort stage groups the intermediate pairs by key; user-specified reduce tasks aggregate each group; the output is written to the DFS.]
MapReduce
• Programmers specify two functions:
  map (k1, v1) → (k2, v2)*
  reduce (k2, v2*) → (k3, v3)*
  - all values with the same key are sent to the same reducer
• The execution framework handles everything else.
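As a toy illustration of this contract, here is a single-process simulation of the map → shuffle → reduce pipeline. This is not Hadoop's actual API; run_mapreduce and its signature are hypothetical:

# Toy single-process simulation of the contract:
#   map(k1, v1) -> (k2, v2)*   and   reduce(k2, v2*) -> (k3, v3)*
from collections import defaultdict

def run_mapreduce(records, mapper, reducer):
    groups = defaultdict(list)
    for k1, v1 in records:
        for k2, v2 in mapper(k1, v1):       # map phase
            groups[k2].append(v2)           # shuffle/sort: group values by key
    output = []
    for k2, values in sorted(groups.items()):
        output.extend(reducer(k2, values))  # reduce phase
    return output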
Example: Word Count
// map(docID, text) → (word, 1)*
Map(String docID, String text):
    for each word w in text:
        Emit(w, 1);

// reduce(word, [1, …, 1]) → (word, sum)
Reduce(String word, Iterator<int> values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(word, sum);
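With the toy run_mapreduce harness sketched above, this pseudocode translates directly (illustrative only):

def wc_map(doc_id, text):
    for w in text.split():
        yield (w, 1)                 # Emit(w, 1)

def wc_reduce(word, values):
    yield (word, sum(values))        # Emit(word, sum)

docs = [("d1", "a b a c"), ("d2", "b c c a")]
print(run_mapreduce(docs, wc_map, wc_reduce))
# [('a', 3), ('b', 2), ('c', 3)]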
[Figure: map turns (k1, v1) pairs into (k2, v2)* pairs; "Shuffle and Sort" aggregates values by key; reduce turns (k2, v2*) into (k3, v3)*.]
MapReduce: Word Count
[Figure: word-count dataflow. Four map tasks read (docID, text) input splits and emit (word, 1) pairs such as (a, 1), (b, 1), (c, 1); shuffle and sort groups the pairs by key; three reduce tasks sum the groups and write the final counts a 5, b 3, c 4 to the DFS.]
MapReduce: Framework
• Handles scheduling
  - assigns workers to map and reduce tasks
• Handles "data distribution"
  - moves processes to data
• Handles synchronization
  - gathers, sorts, and shuffles intermediate data
• Handles errors and faults
  - detects worker failures and restarts
• Everything happens on top of a distributed file system
MapReduce: User Specification
• Programmers specify two functions:
  map (k1, v1) → (k2, v2)*
  reduce (k2, v2*) → (k3, v3)*
  - all values with the same key are sent to the same reducer
• Mappers & reducers can specify any computation
  - be careful with access to external resources!
• The execution framework handles everything else
  - not quite... often, programmers also specify:
  partition (k2, number of partitions) → partition for k2
  - often a simple hash of the key, e.g., hash(k2) mod n
  - divides up the key space for parallel reduce operations
  combine (k2, v2) → (k2’, v2’)
  - mini-reducers that run in memory after the map phase
  - used as an optimization to reduce network traffic
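A sketch of what these two hooks might look like, for word count (function names are hypothetical; Hadoop's real partitioner and combiner hooks are Java classes):

# Sketches of the two optional hooks (names are hypothetical).

def partition(k2, num_partitions):
    # Simple hash partitioning: divides the key space across reducers.
    # (A real framework uses a deterministic hash; Python's built-in
    # hash() is randomized across runs for strings.)
    return hash(k2) % num_partitions

def combine(k2, values):
    # A word-count combiner: a mini-reduce applied to one mapper's local
    # output, shrinking (a, 1), (a, 1) to (a, 2) before the shuffle.
    yield (k2, sum(values))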
[Figure: word count with local aggregation. Map tasks emit locally pre-summed pairs; shuffle and sort groups the combined values by key (a: 2, 1, 2; b: 1, 1, 1; c: 1, 1, 2); the reduce tasks still produce a 5, b 3, c 4.]
Example: Word Count (with local aggregation)
Map(String docID, String text):
    for each word w in text:
        H[w] = H[w] + 1;      // aggregate counts in a local hash H
    for each word w in H:
        Emit(w, H[w]);

Reduce(String word, Iterator<int> values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(word, sum);

MapReduce: Word Count
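The same in-mapper combining idea as a Python sketch (illustrative; Counter plays the role of the hash H):

from collections import Counter

def wc_map_local(doc_id, text):
    H = Counter()                # the local hash H from the pseudocode
    for w in text.split():
        H[w] += 1                # aggregate inside the mapper
    for w, n in H.items():
        yield (w, n)             # emit one (word, count) pair per word

# The same reducer as before still works: it just sums fewer, larger values.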
[Figure: combine and partition stages. Each of the four mappers' (word, 1) pairs passes through a per-mapper combiner that sums duplicates locally (e.g., a 1, a 1 → a 2), then a partitioner routes each key to its reducer.]
Example: Shortest-Path
Data structure:
• the adjacency list (with edge weights) for the graph
• each vertex v has a node ID
• Av is the set of neighbors of v
• dv is the current distance from the source to v

Basic ideas:
• the original input is (s, [0, As]);
• on an input (v, [dv, Av]), the Mapper emits pairs whose key (i.e., vertex) is in Av, each with a distance derived from dv (namely dv + the edge weight);
• on an input (v, [dv, Av]*), the Reducer emits the pair (v, [dv, Av]) with the minimum distance dv.
Example: Shortest-Path
Map(v, [dv, Av]):
    Emit(v, [dv, Av]);                  // pass the graph structure along
    for each w in Av do
        Emit(w, [dv + wt(v, w), Aw]);   // relax edge (v, w)

Reduce(v, [dv, Av]*):
    dmin = +∞;
    for each [dv, Av] in [dv, Av]*:
        if dmin > dv then dmin = dv;
    Emit(v, [dmin, Av]);
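A toy single-process driver for this iterative algorithm might look as follows (the graph and all names are hypothetical; a real Hadoop driver would launch one job per iteration and inspect counters):

# Toy driver for iterative MapReduce shortest paths (single process).
INF = float("inf")

def sssp_round(graph, dist):
    msgs = {}
    for v, d in dist.items():                # "map" phase
        msgs.setdefault(v, []).append(d)     # pass current distance along
        if d < INF:
            for w, wt in graph[v]:
                msgs.setdefault(w, []).append(d + wt)  # relax edge (v, w)
    new_dist = {v: min(ds) for v, ds in msgs.items()}  # "reduce": take min
    return new_dist, new_dist != dist        # did any distance change?

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
dist = {v: (0 if v == "s" else INF) for v in graph}
changed = True
while changed:                               # one MapReduce job per hop
    dist, changed = sssp_round(graph, dist)
print(dist)                                  # {'s': 0, 'a': 2, 'b': 3}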
Example: Shortest-Path
• MapReduce iterations
  - the first iteration discovers all neighbors of the source s
  - the second iteration discovers all "2nd-level" neighbors of s
  - each iteration expands the "search frontier" by one hop
• The approach is suitable for graphs with small diameter (e.g., "small-world" graphs)
• A "driver" algorithm is needed to check termination, as in the toy loop above (in practice: Hadoop counters)
• The algorithm can be extended to include the actual path.
Summary: MapReduce Graph Algorithms
• Store graphs as adjacency lists
• Graph algorithms with MapReduce:
  - each map task receives a vertex and its outlinks
  - the map task computes some function of the link structure and emits a value with the target vertex as the key
  - each reduce task collects these keys (target vertices) and aggregates
• Iterate multiple MapReduce cycles until some termination condition holds
  - the graph structure is passed from one iteration to the next
• The idea can be used to solve other graph problems
CSCE-629 Course Summary
• Basic notations, concepts, and techniques
  - pseudo-code for algorithms, Big-Oh notation, divide-and-conquer, dynamic programming, solving recurrence relations
• Data manipulation (data structures, algorithms, complexity)
  - heaps, 2-3 trees, hashing, Union-Find, finding the median
• Graph algorithms and applications
  - DFS and BFS and simple applications, connected components, topological sorting, strongly connected components, longest path in a DAG
• Computational optimization
  - maximum bandwidth paths, Dijkstra's algorithm (shortest path), Kruskal's algorithm (MST), the Bellman-Ford algorithm (shortest path), matching in bipartite graphs, sequence alignment
• NP-completeness theory
  - P and polynomial-time computation, the definition of NP and membership in NP, polynomial-time reducibility, NP-hardness and NP-completeness, proving NP-hardness and NP-completeness, NP-complete problems: SAT, IS, VC, Partition