Page 1: MapReduce and the New Software Stack

1

MapReduce and the New Software Stack

CHAPTER 2

Page 2: MapReduce and the New Software Stack

2

Single Node Architecture

J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, http://www.mmds.org

[Diagram: a single machine with CPU, Memory, and Disk; machine learning, statistics, and "classical" data mining all run on this one node.]

Page 3: MapReduce and the New Software Stack

3

Motivation: Google Example

20+ billion web pages x 20 KB = 400+ TB
One computer reads 30-35 MB/sec from disk, so it would take ~4 months just to read the web
~1,000 hard drives just to store the web
It takes even more to do something useful with the data!
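The slide's figures check out as back-of-envelope arithmetic; a minimal sketch in Python, using the assumed values from the slide:

    # Back-of-envelope check of the numbers on this slide (assumed values).
    pages = 20e9            # 20+ billion web pages
    page_size = 20e3        # 20 KB per page
    read_rate = 35e6        # ~35 MB/sec sequential read from one disk

    total_bytes = pages * page_size        # 4e14 bytes
    print(total_bytes / 1e12, "TB")        # 400.0 TB
    seconds = total_bytes / read_rate      # one disk reading everything
    print(seconds / 86400 / 30, "months")  # ~4.4 months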


Page 4: MapReduce and the New Software Stack

4

Cluster Architecture


[Diagram: racks of commodity nodes, each node with its own CPU, memory, and disk, connected by a rack switch; a higher-level switch connects the racks. Each rack contains 16-64 nodes; there is 1 Gbps bandwidth between any pair of nodes in a rack and a 2-10 Gbps backbone between racks.]

Page 5: MapReduce and the New Software Stack

5

Page 6: MapReduce and the New Software Stack

6

Large-scale Computing

The more components a system has, the more frequently something in the system will not be working at any given time. With a large cluster, the principal failure modes are the loss of a single node and the loss of an entire rack.

If we had to abort and restart the computation every time one component failed, then the computation might never complete successfully.

Page 7: MapReduce and the New Software Stack

7

Idea and Solution

Idea:
◦ Files must be stored redundantly
◦ Computations must be divided into tasks

Solution: To exploit cluster computing, files must look and behave somewhat differently from the conventional file systems found on single computers. This new file system is often called a distributed file system (DFS).

Page 8: MapReduce and the New Software Stack

8

Typical usage pattern:
◦ Provides a global file namespace
◦ Huge files (100s of GB to TB)
◦ Data is rarely updated in place
◦ Reads and appends are common

Page 9: MapReduce and the New Software Stack

9

Distributed File System

Chunk servers
◦ File is split into contiguous chunks
◦ Typically each chunk is 16-64 MB
◦ Each chunk is replicated (usually 2x or 3x)
◦ Try to keep replicas in different racks

Master node
◦ a.k.a. Name Node in Hadoop's HDFS
◦ Stores metadata about where files are stored
◦ Might be replicated

Client library for file access
◦ Talks to the master to find chunk servers
◦ Connects directly to chunk servers to access data
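The read path this describes is easy to model in miniature. Below is a toy, in-memory sketch (all names and structures are illustrative, not a real DFS API): the client consults the master's metadata for chunk locations, then fetches the data directly from the chunk servers, falling back to another replica if one is missing.

    # Master metadata: file -> ordered list of (chunk_id, replica servers).
    metadata = {"web.log": [("c0", ["srv1", "srv2"]), ("c1", ["srv2", "srv3"])]}

    # Chunk servers: server -> {chunk_id: bytes held by that server}.
    chunk_servers = {
        "srv1": {"c0": b"GET /a\n"},
        "srv2": {"c0": b"GET /a\n", "c1": b"GET /b\n"},
        "srv3": {"c1": b"GET /b\n"},
    }

    def dfs_read(filename):
        data = b""
        for chunk_id, replicas in metadata[filename]:   # 1. ask the master
            for server in replicas:                     # 2. go straight to a chunk server
                if chunk_id in chunk_servers.get(server, {}):
                    data += chunk_servers[server][chunk_id]
                    break                               # first live replica wins
        return data

    print(dfs_read("web.log"))   # b'GET /a\nGET /b\n'

Note that the master only serves metadata; the bytes themselves never pass through it.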


Page 10: MapReduce and the New Software Stack

10

Distributed File System


Reliable distributed file system:
◦ Data kept in "chunks" spread across machines
◦ Each chunk replicated on different machines
◦ Seamless recovery from disk or machine failure

[Diagram: chunks C0, C1, C2, C3, C5 and D0, D1 spread across chunk servers 1 through N, with each chunk stored on multiple servers.]

Bring computation directly to the data! Chunk servers also serve as compute servers.

Page 11: MapReduce and the New Software Stack

11

MapReduce

MapReduce is a style of computing. You can use an implementation of MapReduce to manage many large-scale computations in a way that is tolerant of hardware faults.

All you need to write are two functions, called Map and Reduce.

Page 12: MapReduce and the New Software Stack

12

MapReduce

Page 13: MapReduce and the New Software Stack

13

The Map Step

Some number of Map tasks are each given one or more chunks from a distributed file system. Each Map task turns its chunks into a sequence of key-value pairs.

The types of keys and values are arbitrary. Further, keys are not "keys" in the usual sense; they do not have to be unique.
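As a concrete (hypothetical) instance, here is a sketch of a Map function over web-server log lines that emits one (URL, 1) pair per request; the same key appears once per request, illustrating that keys need not be unique. The log format and names are assumptions for illustration:

    # Sketch: a Map function over log lines of the form "host METHOD url".
    def map_fn(line_no, line):
        # key: line number (ignored here), value: one log line
        url = line.split()[2]
        yield (url, 1)       # same URL -> same key, emitted many times

    log = ["h1 GET /index.html", "h2 GET /index.html", "h1 GET /about.html"]
    pairs = [kv for i, line in enumerate(log) for kv in map_fn(i, line)]
    print(pairs)  # [('/index.html', 1), ('/index.html', 1), ('/about.html', 1)]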

Page 14: MapReduce and the New Software Stack

14

MapReduce: The Map Step


[Diagram: a map function is applied to each input key-value pair, producing intermediate key-value pairs; several intermediate pairs may share the same key.]

Page 15: MapReduce and the New Software Stack

15

MapReduce: The Reduce Step The Reduce tasks work on one key at a time, and combine all the values associated with that key in some way.

The output key-value pairs can be of a type different from those sent from Map tasks to Reduce tasks, but often they are the same type.
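A matching Reduce sketch for the (URL, 1) pairs above, combining all values for one key by summing them (illustrative names, as before):

    # Sketch: a Reduce function that sums the counts for one key.
    def reduce_fn(key, values):
        yield (key, sum(values))

    print(list(reduce_fn("/index.html", [1, 1])))  # [('/index.html', 2)]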

Page 16: MapReduce and the New Software Stack

16

MapReduce: The Reduce Step


[Diagram: intermediate key-value pairs are grouped by key into key-value groups (each key paired with the list of all its values), and reduce turns each group into output key-value pairs.]

Page 17: MapReduce and the New Software Stack

17

MapReduce: The Reduce Step

[Diagram: group by key, then reduce.]

Page 18: MapReduce and the New Software Stack

18

More Specifically

Input: a set of key-value pairs
Programmer specifies two methods:

◦ Map(k, v) → <k', v'>*
◦ Takes a key-value pair and outputs a set of key-value pairs
◦ E.g., key is the filename, value is a single line in the file
◦ There is one Map call for every (k, v) pair

◦ Reduce(k', <v'>*) → <k', v''>*
◦ All values v' with the same key k' are reduced together and processed in v' order
◦ There is one Reduce function call per unique key k'
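These two signatures can be seen in action with a tiny single-machine simulation of the framework. This is a sketch of the programming model only, not a real distributed runtime; run_mapreduce is an illustrative name:

    from itertools import groupby

    def run_mapreduce(inputs, map_fn, reduce_fn):
        # Map: one call per input (k, v) pair.
        intermediate = [kv for k, v in inputs for kv in map_fn(k, v)]
        # Group by key: the framework's shuffle/sort step.
        intermediate.sort(key=lambda kv: kv[0])
        output = []
        for key, group in groupby(intermediate, key=lambda kv: kv[0]):
            # Reduce: one call per unique key k', with all its values v'.
            output.extend(reduce_fn(key, [v for _, v in group]))
        return output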

Page 19: MapReduce and the New Software Stack

19

Word Counting

Warm-up task:

We have a huge text document

Count the number of times each distinct word appears in the file

Sample application: ◦ Analyze web server logs to find popular URLs

Page 20: MapReduce and the New Software Stack

20

MapReduce: Word Counting

Big document (input):
"The crew of the space shuttle Endeavor recently returned to Earth as ambassadors, harbingers of a new era of space exploration. Scientists at NASA are saying that the recent assembly of the Dextre bot is the first step in a long-term space-based man/machine partnership. 'The work we're doing now -- the robotics we're doing -- is what we're going to need ...'"

MAP (provided by the programmer): read the input and produce a set of (key, value) pairs:
(The, 1) (crew, 1) (of, 1) (the, 1) (space, 1) (shuttle, 1) (Endeavor, 1) (recently, 1) ...

Group by key: collect all pairs with the same key:
(crew, 1) (crew, 1) (space, 1) (the, 1) (the, 1) (the, 1) (shuttle, 1) (recently, 1) ...

Reduce (provided by the programmer): collect all values belonging to each key and output the result:
(crew, 2) (space, 1) (the, 3) (shuttle, 1) (recently, 1) ...
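Word count needs only the two programmer-provided functions; a sketch that plugs them into the run_mapreduce simulation from the "More Specifically" section above (wc_map and wc_reduce are illustrative names; key = document name, value = one line):

    def wc_map(filename, line):
        for word in line.split():
            yield (word, 1)          # one pair per word occurrence

    def wc_reduce(word, counts):
        yield (word, sum(counts))    # total occurrences of this word

    doc = [("doc", "the crew of the space shuttle"), ("doc", "the crew returned")]
    print(run_mapreduce(doc, wc_map, wc_reduce))
    # [('crew', 2), ('of', 1), ('returned', 1), ('shuttle', 1), ('space', 1), ('the', 3)]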

Page 21: MapReduce and the New Software Stack


21

Map-Reduce: Environment

The Map-Reduce environment takes care of:
◦ Partitioning the input data
◦ Scheduling the program's execution across a set of machines
◦ Performing the group-by-key step
◦ Handling machine failures
◦ Managing required inter-machine communication

Page 22: MapReduce and the New Software Stack


22

Map-Reduce: A Diagram

[Diagram: Big document → MAP (read input and produce a set of key-value pairs) → Group by key (collect all pairs with the same key; hash merge, shuffle, sort, partition) → Reduce (collect all values belonging to the key and output).]

Page 23: MapReduce and the New Software Stack


23

Map-Reduce: In Parallel

All phases are distributed with many tasks doing the work

Page 24: MapReduce and the New Software Stack


24

Map-Reduce

Programmer specifies:
◦ Map and Reduce and input files

Workflow:
◦ Read inputs as a set of key-value pairs
◦ Map transforms input (k, v) pairs into a new set of (k', v') pairs
◦ Sort and shuffle the (k', v') pairs to output nodes
◦ All (k', v') pairs with a given k' are sent to the same Reduce
◦ Reduce processes all (k', v') pairs grouped by key into new (k'', v'') pairs
◦ Write the resulting pairs to files

All phases are distributed, with many tasks doing the work.

[Diagram: Input 0 → Map 0, Input 1 → Map 1, Input 2 → Map 2; a shuffle routes all intermediate pairs to Reduce 0 and Reduce 1, which write Out 0 and Out 1.]

Page 25: MapReduce and the New Software Stack


25

Data Flow

Input and final output are stored on a distributed file system (DFS):
◦ The scheduler tries to schedule map tasks "close" to the physical storage location of the input data

Intermediate results are stored on local FS of Map and Reduce workers

Output is often input to another MapReduce task

Page 26: MapReduce and the New Software Stack


26

Coordination: Master

The master node takes care of coordination:
◦ Task status: (idle, in-progress, completed)
◦ Idle tasks get scheduled as workers become available
◦ When a map task completes, it sends the master the location and sizes of its R intermediate files, one for each reducer
◦ The master pushes this info to the reducers

The master pings workers periodically to detect failures.
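A minimal sketch of this bookkeeping (illustrative only; tasks, schedule, and on_map_complete are hypothetical names, not the real implementation):

    # Master-side state: one status per task.
    tasks = {f"map-{i}": "idle" for i in range(4)}

    def schedule(worker):
        # Hand the worker the first idle task, if any.
        for task, status in tasks.items():
            if status == "idle":
                tasks[task] = "in-progress"
                return task
        return None

    def on_map_complete(task, r_file_locations):
        # A finished map task reports the locations/sizes of its R
        # intermediate files (one per reducer); pushing that info to the
        # reducers is omitted here.
        tasks[task] = "completed"

    print(schedule("worker-1"))                   # map-0
    on_map_complete("map-0", ["r0.dat", "r1.dat"])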

Page 27: MapReduce and the New Software Stack


27

Dealing with Failures

Map worker failure:
◦ Map tasks completed or in-progress at the worker are reset to idle
◦ Reduce workers are notified when a task is rescheduled on another worker

Reduce worker failure:
◦ Only in-progress tasks are reset to idle
◦ The reduce task is restarted

Master failure:
◦ The MapReduce task is aborted and the client is notified

Page 28: MapReduce and the New Software Stack


28

How Many Map and Reduce Jobs?

M map tasks, R reduce tasks

Rule of thumb:
◦ Make M much larger than the number of nodes in the cluster
◦ One DFS chunk per map task is common
◦ Improves dynamic load balancing and speeds up recovery from worker failures

Usually R is smaller than M:
◦ Because output is spread across R files
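As a worked instance of the rule of thumb (assumed example numbers, one map task per 64 MB chunk):

    import math

    input_bytes = 400e12   # 400 TB, as in the Google example
    chunk_bytes = 64e6     # 64 MB DFS chunks
    nodes = 1000           # assumed cluster size

    M = math.ceil(input_bytes / chunk_bytes)  # one map task per chunk
    print(M)   # 6250000 map tasks: M >> nodes, as recommended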

Page 29: MapReduce and the New Software Stack


29

Task Granularity & Pipelining

Fine-granularity tasks: map tasks >> machines
◦ Minimizes time for fault recovery
◦ Can pipeline shuffling with map execution
◦ Better dynamic load balancing

Page 30: MapReduce and the New Software Stack


30

Refinement: Backup Tasks

Problem:
◦ Slow workers significantly lengthen the job completion time:
◦ Other jobs on the machine
◦ Bad disks
◦ Weird things

Solution:
◦ Near the end of a phase, spawn backup copies of tasks
◦ Whichever copy finishes first "wins"

Effect:
◦ Dramatically shortens job completion time

Page 31: MapReduce and the New Software Stack


31

Refinement: Combiners

Often a Map task will produce many pairs of the form (k, v1), (k, v2), ... for the same key k
◦ E.g., popular words in the word count example

Can save network time by pre-aggregating values in the mapper:
◦ combine(k, list(v1)) → v2
◦ The combiner is usually the same as the reduce function

This works only if the reduce function is commutative and associative.
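For word count the combiner can be the reduce function itself, since addition is commutative and associative. A minimal sketch of what runs on a single mapper's machine, over that mapper's output only:

    from collections import Counter

    def combine(pairs):
        # Pre-aggregate (word, count) pairs locally, before the shuffle.
        counts = Counter()
        for word, n in pairs:
            counts[word] += n
        return list(counts.items())

    mapper_output = [("the", 1), ("crew", 1), ("the", 1), ("the", 1)]
    print(combine(mapper_output))  # [('the', 3), ('crew', 1)]: 2 pairs cross the network, not 4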

Page 32: MapReduce and the New Software Stack


32

Refinement: Combiners

Back to our word counting example:
◦ The combiner combines the values of all keys of a single mapper (single machine)
◦ Much less data needs to be copied and shuffled!

Page 33: MapReduce and the New Software Stack


33

Refinement: Partition Function

Want to control how keys get partitioned:
◦ Inputs to map tasks are created by contiguous splits of the input file
◦ For reduce, we need to ensure that records with the same intermediate key end up at the same worker

System uses a default partition function:
◦ hash(key) mod R

Sometimes it is useful to override the hash function:
◦ E.g., hash(hostname(URL)) mod R ensures URLs from a host end up in the same output file
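Both partitioners are one-liners; a sketch (Python's built-in hash stands in for the framework's hash function; note it is randomized across runs but stable within one):

    from urllib.parse import urlparse

    R = 4  # number of reduce tasks

    def default_partition(key):
        return hash(key) % R

    def url_partition(url):
        # hash(hostname(URL)) mod R: all URLs from one host go to the
        # same reducer, hence into the same output file.
        return hash(urlparse(url).hostname) % R

    a = url_partition("http://example.com/a")
    b = url_partition("http://example.com/b")
    print(a == b)  # True: same host, same reducer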

