Page 1

PARALLEL & SCALABLE MACHINE LEARNING & DEEP LEARNING

Prof. Dr. – Ing. Morris Riedel
Adjunct Associated Professor
School of Engineering and Natural Sciences, University of Iceland
Research Group Leader, Juelich Supercomputing Centre, Germany

Map Reduce Computing Paradigm
September 27th, 2018
Room Stapi 108

Cloud Computing & Big Data

LECTURE 5

Page 2

Review of Lecture 4 – Virtualization & Data Center Design

Virtualization drives Clouds

Lecture 5 – Map Reduce Computing Paradigm

Data Centers use Virtualization

[1] Distributed & Cloud Computing Book

(flexible resource sharing)

(with virtualization, more ‘logical‘ servers)

(benefits for data centers, e.g. live migration in case of failures)

2 / 60

Page 3

Outline of the Course

1. Cloud Computing & Big Data

2. Machine Learning Models in Clouds

3. Apache Spark for Cloud Applications

4. Virtualization & Data Center Design

5. Map-Reduce Computing Paradigm

6. Deep Learning driven by Big Data

7. Deep Learning Applications in Clouds

8. Infrastructure-As-A-Service (IAAS)

9. Platform-As-A-Service (PAAS)

10. Software-As-A-Service (SAAS)

Lecture 5 – Map Reduce Computing Paradigm

11. Data Analytics & Cloud Data Mining

12. Docker & Container Management

13. OpenStack Cloud Operating System

14. Online Social Networking & Graphs

15. Data Streaming Tools & Applications

16. Epilogue

+ additional practical lectures for our hands-on exercises in context

Practical Topics

Theoretical / Conceptual Topics

3 / 60

Page 4

Promises from previous lecture(s):

Lecture 1 & 3: Lecture 5 will give in-depth details on Map-Reduce & its implementation Hadoop & its filesystem

Lecture 1: Lecture 5 will give in-depth details on using the map-reduce paradigm for big data analytics

Lecture 3: Lecture 5 offers more insights into the map-reduce computing paradigm including applications

Lecture 3: Lecture 5 provides more information on the Hadoop Distributed File System (HDFS) used by Spark

Lecture 3: Lecture 5 provides more detailed information on the scheduler Hadoop YARN and its usage

Lecture 4: Lecture 5 provides insights into the Hadoop Distributed File System, which assumes failure by default

Lecture 5 – Map Reduce Computing Paradigm

Outline

Map Reduce Approach
Big Data Requirements & Apache Hadoop
Hiding Complexity & Increasing Reliability
Three ‘phases’ & Communication & YARN
Key-Value Data Structures & Applications
Application Examples in Context

Map-Reduce Ecosystem & Applications
Hadoop Distributed File System & Reliability
OpenStack Cloud Operating System
Amazon Web Services & Elastic MapReduce
Google Cloud & Machine Learning
Docker & Map-Reduce Limits

4 / 60

Page 5

Map Reduce Approach

Lecture 5 – Map Reduce Computing Paradigm 5 / 60

Page 6

Lecture 5 – Map Reduce Computing Paradigm

Requirements for ‘Big Data’ using Cloud Computing

Taking into account the ever-increasing amounts of ‘big data’
Think: ‘Big Data’ is not only denoted by volume; there are also velocity, variety, …

Fault-tolerant and scalable data analytics processing approach
Data analytics computations must be divided into small tasks for easy restart
Restart of one data analytics task has no effect on other active tasks

Reliable and scalable ‘big data’ storage method
Data is (almost always) accessible even if failures in nodes/disks occur
Enables access to large quantities of data with good performance

A specialized distributed file system is required that assumes failures as default

A data analytics processing programming model is required that is easy to use and simple to program with fault-tolerance already within its design

6 / 60

Page 7

HDInsight

Lecture 5 – Map Reduce Computing Paradigm

Networking & Big Data Impacts on Cloud Computing

Requirements for scalable programming models and tools
CPU speed has surpassed I/O capabilities of existing cloud resources
Data-intensive clouds with advanced analytics and analysis capabilities
Considering moving compute tasks to data vs. moving data to compute
MS Azure is only one concrete example of many cloud solutions

Apache Hadoop Map-Reduce Implementation

Requirements for Reliable Filesystems
Traditional parallel filesystems need to prove their ‘big data’ feasibility
New forms of filesystems are emerging that assume hardware errors happen constantly
E.g. Hadoop Distributed File System (HDFS) [2] HDFS Architecture Guide

[4] Apache Hadoop Web page

[3] Azure HDInsight Web page
[6] Apache Spark

7 / 60

Page 8

Lecture 5 – Map Reduce Computing Paradigm

Big Data Analytics Frameworks – Revisited

Distributed Processing
‘Map-reduce via files‘: Tackle large problems with many small tasks
Advantage of ‘data replication‘ via a specialized distributed file system
E.g. Apache Hadoop

In-Memory Processing (cf. Lecture 3)
Perform many operations fast via ‘in-memory‘
Enable tasks such as ‘map-reduce in-memory‘
E.g. Apache Spark, Apache Flink

Big Data analytics frameworks shift the approach from ‘bring data to compute resources‘ to ‘bring compute tasks close to data‘

[6] Apache Spark

[5] Map-Reduce

focus in this lecture

8 / 60

Page 9

Lecture 5 – Map Reduce Computing Paradigm

Machine Learning & Data Mining Applications

Properties offering an opportunity to exploit ‘parallelism’
Require handling of immense amounts of data quickly
Provide data that is extremely regular and can be independently processed

Examples from the Web
Ranking of Web pages by importance (includes iterated matrix-vector multiplication)
Searches in online social networking sites (includes search in graphs with hundreds of millions of nodes / billions of edges)

Many modern data mining applications require computing on compute nodes (i.e. processors/cores) that operate independently from each other (i.e. HTC)

Independent means there is little or even no communication between tasks

Lecture 11 will provide more details on using parallel computing for data mining applications
9 / 60

Page 10

Lecture 5 – Map Reduce Computing Paradigm

Requirement for a ‘Data Processing Machinery’

Specialized Parallel Computers (aka Supercomputers)
Interconnect between nodes is expensive (i.e. Infiniband/Myrinet)
Interconnect not always needed in data analysis (independence in datasets)
Programming is relatively difficult/specific, parallel programming is a profession of its own

Large ‘compute clusters’ dominate data-intensive computing
Offer ‘cheaper’ parallelism (no expensive interconnect & switches between nodes)
Compute nodes (i.e. processors/cores) interconnected with usual Ethernet cables
Provide large collections of commodity hardware (e.g. normal processors/cores)

10 / 60

Page 11

HTC

network interconnection less important!

Lecture 5 – Map Reduce Computing Paradigm

HPC vs. HTC Systems (cf. Lecture 4)

High Performance Computing (HPC) is based on computing resources that enable the efficient use of parallel computing techniques through specific support with dedicated hardware such as high performance cpu/core interconnections. These are compute-oriented systems.

High Throughput Computing (HTC) is based on commonly available computing resources such as commodity PCs and small clusters that enable the execution of ‘farming jobs’ without providing a high performance interconnection between the cpu/cores. These are data-oriented systems.

HPC

network interconnection important

Complementary HPC-A course offers more insights into HPC massively parallel application domains
11 / 60

Page 12

Lecture 5 – Map Reduce Computing Paradigm

Map-Reduce Motivation: Avoid Increased HPC Complexity

Different HPC programming elements (barriers, mutexes, shared-/distributed memory, MPI, etc.)
Task distribution issues (scheduling, synchronization, inter-process communication, etc.)
Complex heterogeneous architectures (UMA, NUMA, hybrid, various network topologies, etc.)
Data/functional parallelism approaches (SPMD, MPMD, domain decomposition, ghost/halo regions, etc.)

[7] Parallel Computing Tutorial
[8] Introduction to High Performance Computing for Scientists and Engineers

Complementary HPC-A course offers parallel programming with UMA, NUMA, SPMD, MPMD, etc.

12 / 60

Page 13

Lecture 5 – Map Reduce Computing Paradigm

Approach: Hiding the Complexity of Computing Technologies

Many users of parallel data processing machines are not technically savvy and don’t need to know system details
Scientific domain scientists (e.g. biology) need to focus on their science & data
Data scientists from statistics/machine learning need to focus on their algorithms & data

Non-technical users raise the following requirements for a ‘data processing machinery’:

The ‘data processing machinery’ needs to be easy to program
The machinery must hide the complexities of computing (e.g. different networks, various operating systems, etc.)
It needs to take care of the complexities of parallelization (e.g. scheduling, task distribution, synchronization, etc.)

[9] Science Progress

13 / 60

Page 14

Lecture 5 – Map Reduce Computing Paradigm

Faults: Physical Organization of a Compute Node

Compute nodes are part of racks (perhaps 8-64 per rack)
Single compute nodes in a rack are interconnected with GBit Ethernet
N different racks are interconnected via switch/network
Big machines: a ‘supercomputer’ / ‘cluster’ then consists of many nodes
But as with any ‘big machine’ – parts can fail anytime…

[10] Mining of Massive Datasets (cf. Lecture 4)

Cloud users are interested in using reliable ‘compute nodes’ w/o the need to understand all details

14 / 60

Page 15

Lecture 5 – Map Reduce Computing Paradigm

Need for Reliability: Possible Failures during Operation

Reasons for failures (cf. Lecture 4)
Loss of a single node within a rack (e.g. hard disk crash, memory errors)
Loss of an entire rack (e.g. network issues)
Operating software ‘errors/bugs’

Consequences of failures
Long-running compute tasks (e.g. hours/days) need to be restarted
Already written data may become inconsistent and needs to be removed
Access to unique datasets may be hindered or even unavailable

Example: Juelich Supercomputing Centre
Disks fail ~2-3 times per week, yet use of the machine needs to go on

Rule of thumb: the bigger the cluster, the more frequently failures happen
Cloud users should not be exposed to faults of compute nodes & their consequences

15 / 60

Page 16

Lecture 5 – Map Reduce Computing Paradigm

Origins of the Map-Reduce Programming Model

Origin: Invented as proprietary Google technology by Google technologists
Drivers: Applications and ‘data mining approaches’ around the Web
Foundations go back to functional programming (e.g. LISP)

Large ‘open source community’: Apache Hadoop, versions 1/2/3
Open source implementation of the ‘map-reduce’ programming model
Based on the Java programming language, e.g. map & reduce tasks as objects
Broadly used – also by commercial vendors within added-value software
Foundation for many higher-level algorithms, frameworks, and approaches

[11] MapReduce: Simplified Data Processing on Large Clusters, 2004

[4] Apache Hadoop

16 / 60

Page 17

Lecture 5 – Map Reduce Computing Paradigm

Inspired by Traditional Model in Computer Science

Approach: Break ‘big tasks’ into many sub-tasks and aggregate/combine results
E.g. word counts in large text document collections, movie ratings, etc.

Divide & Conquer (figure): (1) partition the whole problem space – the Problem (e.g. all ratings of movies) is split into partitions P1, P2, P3, each handled by an independent Worker (e.g. work to compute the similarity of users & analyze ratings; an HTC workload) – and (2) combine the partial solutions of each partition into a whole solution of the problem (e.g. combine recommendations for a user).

18 / 60

Page 18

Lecture 5 – Map Reduce Computing Paradigm

Map-Reduce Programming Model

Enables many ‘common calculations easily’ on large-scale data
Efficiently performed on computing clusters (but security criticisms exist)
Offers a system that is tolerant of hardware failures during computation

Simple Programming Model
Users need to provide two functions, Map & Reduce, with key/value pairs
Tunings are possible with numerous configurations & a combine operation

Key to the understanding: the Map-Reduce Run-Time
Three phases – not just ‘map-reduce’
Takes care of the partitioning of input data and the communication
Manages parallel execution and performs sort/shuffle/grouping
Coordinates/schedules all tasks that run Map and Reduce tasks
Handles faults/errors in execution and even re-submits tasks if necessary

[11] MapReduce: Simplified Data Processing on Large Clusters, 2004

17 / 60

Page 19

Lecture 5 – Map Reduce Computing Paradigm

Understanding Map-[Sort/Shuffle/Group]-Reduce

Modified from [10] Mining of Massive Datasets

(done by the framework)

19 / 60

Page 20

Lecture 5 – Map Reduce Computing Paradigm

Key-Value Data Structures

Programming Model
Two key functions to be programmed by the user: map and reduce
The third phase ‘sort/shuffle’ works with keys and sorts/groups them
Input keys and values (k1,v1) are drawn from a different domain than the output keys and values (k2,v2)
Intermediate keys and values (k2,v2) are from the same domain as the output keys and values

[11] MapReduce: Simplified Data Processing on Large Clusters, 2004

map (k1,v1) → list(k2,v2)
reduce (k2,list(v2)) → list(v2)

20 / 60

Page 21

Key-Value Data Structure – Simple Example Wordcount

// counting words example

map(String key, String value):

// key: document name

// value: document contents

for each word w in value:

EmitIntermediate(w, "1");

// the framework performs sort/shuffle
// with the specified keys

reduce(String key, Iterator values):

// key: a word

// values: a list of counts

int result = 0;

for each v in values:

result += ParseInt(v);

Emit(AsString(result));

Key-value pairs are implemented as Strings in this text-processing example for each function, and as an ‘Iterator’ over a list

Goal: Counting the number of times each word appears in a document (or, more generally, a text stream)

Map (docname, doctext) → list (wordkey, 1), …

Reduce (wordkey, list (1, …)) → list (numbercounted)

[11] MapReduce: Simplified Data Processing on Large Clusters, 2004

Lecture 5 – Map Reduce Computing Paradigm 21 / 60

map (k1,v1) → list(k2,v2)
reduce (k2,list(v2)) → list(v2)
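The pseudocode above maps almost one-to-one onto Hadoop’s Java API. The following is a minimal, self-contained sketch modeled on the well-known WordCount example from the Apache Hadoop map-reduce tutorial (the class layout follows that tutorial, not this lecture; input/output paths are supplied as program arguments):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // map (k1,v1) -> list(k2,v2): emit (word, 1) for every word in the input split
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);          // EmitIntermediate(w, "1")
      }
    }
  }

  // reduce (k2,list(v2)) -> list(v2): sum the grouped counts per word
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);          // Emit(AsString(result))
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // optional local pre-aggregation (a ‘combine’ tuning)
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);  // sort/shuffle/grouping is done by the framework
  }
}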

Page 22

Map-Reduce Practical Example: Text(stream) Processing

Text-stream processing example (figure):
Problem (key, value pairs): input files (f) with the word lines ‘foo bar foo’, ‘foo car bar’, and ‘car car car’ are distributed by the framework to Map tasks (configuration of size 3).
(1) map: each Map task emits a (word, 1) pair per word – (foo 1, bar 1, foo 1), (foo 1, car 1, bar 1), (car 1, car 1, car 1).
(2) sort/shuffle/group: aggregate values by keys (done by the framework) – (foo: 1, 1, 1), (bar: 1, 1), (car: 1, 1, 1, 1).
(3) reduce: Reduce tasks sum the values per key and emit the counts – foo 3, bar 2, car 4 (the figure also shows that pairs like foo 0, bar 0, car 0 lead to an empty list).

Page 23

Hadoop Resource Manager Yarn & Wordcount Example

Lecture 5 – Map Reduce Computing Paradigm

Apache Hadoop offers the Yet Another Resource Negotiator (YARN) scheduler for map-reduce jobs
The idea of YARN is to split up the functionalities of resource management & job scheduling/monitoring
The ResourceManager is a scheduler that controls resources among all applications in the system
The NodeManager is the per-machine system that is responsible for containers, monitoring of their resource usage (cpu, memory, disk, network) and reporting to the ResourceManager

[12] Introduction to Yarn
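As a minimal sketch (assuming the WordCount class above is packaged into a jar; the jar name and HDFS paths below are hypothetical), a map-reduce job is handed to a YARN-managed Hadoop cluster and monitored like this:

hadoop jar wc.jar WordCount /user/alice/input /user/alice/output
yarn application -list    # shows running applications and their state as tracked by the ResourceManager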

23 / 60

Page 24

Lecture 5 – Map Reduce Computing Paradigm

Scheduling & Communication in Map-Reduce Algorithms

The greatest cost of a map-reduce algorithm lies in its communication
Algorithms often trade communication cost against degree of parallelism and use the principle of ‘data locality’ (use less network)

Modified from [11] MapReduce: Simplified Data Processing on Large Clusters

Data locality means that network bandwidth is conserved by taking advantage of the approach that the input data (managed by DFS) is stored on (or very near, e.g. same network switch) the local disks of the machines that make up the computing clusters

(taking data locality into account)

24 / 60

Page 25

OSN Architecture – Facebook Example Map-Reduce

HBase for Messaging & Indexing
NoSQL key-value store
Scales to Facebook’s 600 million monthly visitors
Used for messages (integrates SMS, chat, email & Facebook Messages all into one inbox) and used for search functionality (post indexing)

Lecture 5 – Map Reduce Computing Paradigm

[21] Facebook Code, HBase at Facebook

[20] Apache HBase

One billion new posts added per day
Search index contains more than one trillion total posts (i.e. ~700 TB)
MySQL DB data is harvested into an HBase cluster; Hadoop map-reduce jobs are executed to build the index in parallel

[22] Facebook Engineering, Building Post Search

(Facebook data physically sharded in regions) (Writes: in-memory write-ahead log (WAL) → HFile)

Facebook is using Hadoop map-reduce jobs to build the post index in parallel with ~700 TB of data

25 / 60

Page 26

Lecture 5 – Map Reduce Computing Paradigm

[Video] Map-Reduce

[13] YouTube, Intro to Map-Reduce

26 / 60

Page 27

Map-Reduce Ecosystem & Applications

Lecture 5 – Map Reduce Computing Paradigm 27 / 60

Page 28

Assume Failure in Data Center Design – Revisited

Example: Google Data Center, Council Bluffs, Iowa, USA
Scale of thousands of servers means there is always a concurrent failure
Hardware failure or software failure (1 percent of nodes is common)
CPU failure, disk I/O failure
Network failure, switch error
Software failure, scheduler error
Whole data center does not work, e.g. in the case of a power crash?!

Lecture 5 – Map Reduce Computing Paradigm

Cloud data centres need to operate with failures: Computing service and user data should not be lost in a failure situation and reliability can be achieved by redundant hardware in centres

Software keeps multiple copies of data in different locations and keeps data accessible during errors

[1] Distributed & Cloud Computing Book
[16] Google Data Centers, wired.com

28 / 60

Page 29

Lecture 5 – Map Reduce Computing Paradigm

About Faults, Errors, and Failures?

Dependability
Quality of delivered service such that reliance can justifiably be placed on this service

‘System failure’ occurs when actual behavior deviates from specified behavior

System failure occurs because of an ‘error’
The cause of an error is a ‘fault’
A fault creates a ‘latent error’, which becomes effective when it is activated
Time between error and failure is the ‘error latency’

Basic assumption: Priority for storage is to remember information

[26] Laprie et al., 1986

29 / 60

Page 30

Lecture 5 – Map Reduce Computing Paradigm

Quantifying Dependability: Reliability & Availability

‘Module reliability’
Measure of continuous service accomplishment
Quantified in terms of ‘mean time to failure (MTTF)’
Reciprocal: ‘failures in time (FIT)’
Service interruption quantified as ‘mean time to repair (MTTR)’
‘Mean time between failures (MTBF) = MTTF + MTTR’

‘Module availability’
Measure of service accomplishment with respect to the alternation between accomplishment and interruption
For non-redundant systems with repair: Module availability = MTTF / (MTTF + MTTR)

30 / 60
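As an illustrative calculation (the numbers are assumed, not from the lecture): a module with MTTF = 1,000,000 hours that takes MTTR = 24 hours to repair yields MTBF = 1,000,024 hours and Module availability = 1,000,000 / (1,000,000 + 24) ≈ 0.999976, i.e. about 99.998%.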

Page 31

Lecture 5 – Map Reduce Computing Paradigm

Faults by Cause

Hardware faults
Devices that fail (e.g. due to an alpha particle hitting a memory cell)

Design faults
Faults in software (usually) and hardware design (occasionally)

Operation faults
Mistakes by operations and maintenance personnel

Environmental faults
Fire, flood, earthquake, power failure, and sabotage

One approach to be prepared for faults: RAID levels
Another approach: ‘specialized filesystems’

Primary cause of failures in large big data systems today is faults by human operators

31 / 60

Page 32

Lecture 5 – Map Reduce Computing Paradigm

Hadoop Distributed File System (HDFS)

A ‘specialized filesystem’ designed for reliability and scalability
Designed to run on ‘commodity hardware’
Many similarities with existing ‘distributed file systems’
Takes advantage of the ‘file and data replication concept’

Differences to traditional distributed file systems are significant
Designed to be ‘highly fault-tolerant’ – improves dependability!
Enables use with applications that have ‘extremely large data sets’

Provides ‘high throughput access’ to application data
HDFS relaxes a few POSIX requirements
Enables ‘streaming access’ to file system data

Origin: HDFS was originally built as infrastructure for the ‘Apache Nutch web search engine project’
[27] The Hadoop Distributed File System

32 / 60

Page 33

OSN Facebook Database Example HBase – Revisited

HBase for Messaging & Indexing
NoSQL key-value store
Scales to Facebook’s 600 million monthly visitors
Used for messages (integrates SMS, chat, email & Facebook Messages all into one inbox) and used for search functionality (post indexing)

Lecture 5 – Map Reduce Computing Paradigm

[21] Facebook Code, HBase at Facebook

[20] Apache HBase

One billion new posts added per day
Search index contains more than one trillion total posts (i.e. ~700 TB)
MySQL DB data is harvested into an HBase cluster; Hadoop map-reduce jobs are executed to build the index in parallel

[22] Facebook Engineering, Building Post Search

(Facebook data physically sharded in regions) (Writes: in-memory write-ahead log (WAL) → HFile)

Facebook takes advantage of the Hadoop Distributed File System (HDFS) for specific applications

33 / 60

Page 34

Lecture 5 – Map Reduce Computing Paradigm

File Replication Concept

Ideal: Replicas reside on ‘failure-independent machines’
Availability of one replica should not depend on the availability of others
Requires the ability to place a replica on a particular machine
‘Failure-independent machines’ are hard to find, but easier in ‘system design’

Replication should be hidden from users
But replicas must be distinguishable at a lower level
Different DataNodes are not visible to end-users

Replication control at a higher level
Degree of replication must be adjustable (e.g. via Hadoop configuration files, as sketched below)
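A minimal sketch of such a configuration: the default replication degree of HDFS is set via the dfs.replication property in hdfs-site.xml (the value 3 is the usual Hadoop default and can be adjusted per cluster):

<property>
  <name>dfs.replication</name>
  <value>3</value>   <!-- each block is stored on three DataNodes -->
</property>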

File replication is a useful redundancy for improving availability and performance

Modified from [28] Stampede Virtual Workshop
34 / 60

Page 35

Lecture 5 – Map Reduce Computing Paradigm

Benefits of File Replication

Tradeoff between ‘update consistency’ and performance
HDFS is completely optimized towards aggregated data throughput (data/time), poor at updates!
Good performance when ‘data reads occur more often than data writes’

Fault Tolerance
HDFS provides high ‘reliability and dependability’ via replication of data

Easy Management
HDFS is scalable towards data, but also towards ‘operating the system’
No immediate action required if a disk fails (can be different in RAIDs)
No immediate repair action required if a node fails (other replicas exist)

The benefits of file (and data) replication are performance, fault tolerance, and (easy) management

Easy management means that no immediate action is required to keep the system operating
Repairs of large-scale systems are typically done periodically for a whole collection of failures

35 / 60

Page 36

Lecture 5 – Map Reduce Computing Paradigm

HDFS Design (1)

Hardware Failure
‘Hardware failure is the norm rather than the exception’
Detection of faults and ‘quick, automatic recovery’ is an architectural goal

Streaming Data Access
HDFS is designed more for batch processing, not interactive use by users
High throughput of data access is important, not low latency of data access
POSIX semantics in a few key areas have been traded to increase throughput

(Extremely) Large Datasets
A typical file in HDFS is gigabytes (GBs) to terabytes (TBs) in volume/size
Provides high aggregate data bandwidth
Scales to hundreds of nodes in a single cluster
Supports tens of millions of files in a single HDFS instance

[27] The Hadoop Distributed File System
36 / 60

Page 37

Lecture 5 – Map Reduce Computing Paradigm

HDFS Design (2)

Simple Coherency Model
Applications optimized for a ‘write-once-read-many’ access model for files
Limits the application areas (e.g. n updates of customer data at Amazon)
A file once created, written, and closed need not be changed
Simplifies data coherency issues and enables high throughput data access
(cf. Lecture 10: did we change data with map-reduce?) (A map-reduce application fits perfectly!)

‘Data Locality’ Principle
‘Moving Computation is Cheaper than Moving Data’
Impact if the size of the data set is huge (i.e. big data)
‘Minimizes network congestion’ and ‘increases the overall throughput’

Portability
Across heterogeneous hardware and software platforms (e.g. Java)

[27] The Hadoop Distributed File System
37 / 60

Page 38

Lecture 5 – Map Reduce Computing Paradigm

Implemented as ‘Specialized Distributed File System’

Key Properties
Designed as a large-scale file system
Provides much larger units than the disk blocks in a standard OS/FS
Enables redundancy to protect against media failures that occur when data is distributed over thousands of compute nodes (‘replication of data’)

[27] The Hadoop Distributed File System

38 / 60

Page 39

Lecture 5 – Map Reduce Computing Paradigm

HDFS Architecture

HDFS cluster consists of a single ‘NameNode’
Master server that manages the file system namespace
Regulates access to files by clients

HDFS cluster consists of N ‘DataNode(s)’
Typically one per node in the cluster
Manage storage attached to the nodes that they run on

HDFS exposes a ‘file system namespace’
User data can be stored in files and directories (‘as usual’)
Not directly accessible from outside

HDFS implements a ‘master/slave architecture’ with one NameNode and n DataNode(s)

(figure: Master = NameNode, Slaves = DataNodes)

[28] Stampede Virtual Workshop
39 / 60

Page 40

Lecture 5 – Map Reduce Computing Paradigm

Files in HDFS

HDFS enables ‘parallel reading and processing’ of the data files
File Handling
Files are enormous in size (GBs, TBs, etc.), typically ‘accumulated over time’
Files are rarely updated (optimized rather for read/append – not re-write)
Bring computation to the files, rather than the files to the computing resource (‘data locality principle’)

File Storage Approach
Files are divided into blocks – also named ‘chunks’ (default 64 MB), relatively ‘large blocks’ for a filesystem
Blocks are replicated at different compute nodes (default three times, configurable)
Blocks holding a copy of one dataset are distributed across different racks (block placement can be inspected as shown below)
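As a minimal sketch, block division and placement can be inspected on a running cluster with the HDFS file system checking utility (the file path is hypothetical):

hdfs fsck /user/alice/books.txt -files -blocks -locations
# lists each block of the file together with its replicas and the DataNodes holding them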

[28] Stampede Virtual Workshop

Distributed Filesystem

40 / 60

Page 41

Lecture 5 – Map Reduce Computing Paradigm

Master/NameNode

The master/name node is replicated
A directory for the file system as a whole knows where to find the copies
All participants using the DFS know where the directory copies are

Modified from [28] Stampede Virtual Workshop

The Master/Name node keeps metadata (e.g. node knows about the blocks of files)

E.g. horizontal partitioning

41 / 60

Page 42

Lecture 5 – Map Reduce Computing Paradigm

File Operations in HDFS

Different optimized operations on different ‘nodes’
NameNode
Determines the mapping of ‘pure data blocks’ to DataNodes (metadata)
Executes file system namespace operations, e.g. opening, closing, and renaming files and directories

DataNode
Serves ‘read and write requests’ from HDFS filesystem clients
Performs block creation, deletion, and replication (upon instruction from the NameNode) – see the command-line sketch below

[28] Stampede Virtual Workshop
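From the user side, these operations are driven by the HDFS command-line client; a minimal sketch with hypothetical paths (namespace operations go to the NameNode, data transfer happens with the DataNodes):

hdfs dfs -mkdir -p /user/alice          # namespace operation executed by the NameNode
hdfs dfs -put books.txt /user/alice/    # write: NameNode maps blocks, client streams data to DataNodes
hdfs dfs -ls /user/alice                # metadata lookup at the NameNode
hdfs dfs -cat /user/alice/books.txt     # read: blocks served directly by the DataNodes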

42 / 60

Page 43

Lecture 5 – Map Reduce Computing Paradigm

HDFS File/Block(s) Distribution Example

[27] The Hadoop Distributed File System

43 / 60

Page 44

Big Data Analytics with Apache Hadoop on OpenStack

Resource Manager (uses Nova)
Uses the scheduler YARN to allocate resources to the various applications on the cluster

NameNode (uses Nova & Cinder)
Metadata about the data blocks is stored in the NameNode
Provides lookup functionality and tracking for all data or files in the Hadoop cluster

NodeManager (uses Nova)
Takes instructions from YARN and is responsible for executing and monitoring applications

DataNode (uses Nova & Cinder)
Stores and processes the data

Lecture 5 – Map Reduce Computing Paradigm

[15] OpenStack Paper ‘Big Data’

Lecture 13 will provide more details on using the OpenStack Cloud Operating System for Clouds
44 / 60

Page 45

OpenStack Optional Services

Lecture 5 – Map Reduce Computing Paradigm

[14] OpenStack Web page

Lecture 13 will provide more details on using the OpenStack Cloud Operating System for Clouds
45 / 60

Page 46

Apache Spark vs. Apache Hadoop – Revisited

Lecture 5 – Map Reduce Computing Paradigm

Apache Hadoop (figure): DATA INPUT → Iteration 1 (HDFS read: slow, HDFS write: slow) → Iteration 2 (HDFS read: slow, HDFS write: slow) → HDFS read: slow → RESULT

Apache Spark (figure): DATA INPUT → Iteration 1 (HDFS read: slow, RAM write: fast) → Iteration 2 (RAM read: fast, RAM write: fast) → RAM read: fast → RESULT

Spark Results
Up to 100% faster than traditional Hadoop (for some applications)
Requires ‘big memory‘ systems to perform well

One key difference between Apache Spark and Apache Hadoop is that slow HDFS reads and writes are exchanged with fast RAM reads and writes in several iterations that might be part of a larger workflow

A workflow is a sequence of processing tasks on data that leads to results via ‘iterations‘

[31] big-data.tips, Apache Spark vs. Hadoop

46 / 60

Page 47

Cloud Computing with MS Azure HDInsight - Revisited

Microsoft Azure Cloud
Wide variety of different cloud-based services & resources for many application areas
Managed via the Microsoft Azure Portal Hub
Needs a Microsoft Azure account

HDInsight Cluster ‘Service‘
Fully managed services to deploy known open source analytics frameworks
Easy deployment & configuration via the Azure Portal Hub
Pay-per-use model

Lecture 5 – Map Reduce Computing Paradigm

HDInsight

[29] Azure Portal Hub

[30] Azure HDInsight Web page

Practical Lecture 5.1 will offer more insights into using map-reduce with MS Azure Clouds (known open source analytics frameworks)

47 / 60

Page 48

Understand AWS Cloud Service Portfolio – Analytics

Multiple analytics products on IAAS
Extracting insights and actionable information from data requires technologies like analytics & machine learning

Products & Usage
Amazon Athena: Serverless Query Service
Amazon ElasticMapReduce: Hadoop
Amazon ElasticSearch Service: Elasticsearch on AWS
Amazon Kinesis: Streaming Data
Amazon QuickSight: Business Analytics
Amazon Redshift: Data Warehouse
…

Lecture 5 – Map Reduce Computing Paradigm

[17] AWS Web page

Lecture 8 will provide more examples using Infrastructure-As-A-Service (IAAS) & AWS Cloud Services
48 / 60

Apache Spark on Hadoop YARN is natively supported in Amazon ElasticMapReduce (EMR)
The AWS management console offers easy creation and management of Apache Spark clusters (see the CLI sketch below)
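Besides the management console, such a cluster can also be created from the AWS command line; a minimal sketch (release label, instance type/count, and key name are assumptions to adapt to one’s own account):

aws emr create-cluster \
  --name "spark-cluster" \
  --release-label emr-5.20.0 \
  --applications Name=Hadoop Name=Spark \
  --instance-type m5.xlarge --instance-count 3 \
  --use-default-roles --ec2-attributes KeyName=my-key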

Page 49

AWS Marketplace and Amazon Elastic MapReduce Users

AWS Marketplace
E.g. collection of community- and Amazon-created pre-installed images
Software infrastructure, developer tools, business & desktop software
User success stories and details of how AWS was adopted in solutions

‘Startup company‘ Airbnb (travel)
Scales infrastructure automatically using AWS
Uses 200 Amazon EC2 instances for its application
Uses elastic load balancing with Amazon EC2 instances
Analyzes 50 GB of data daily via Amazon Elastic MapReduce (Amazon EMR)

‘Startup company‘ Spotify (music)
Instant access to over 16 million licensed songs
Stores its huge volume of content in Amazon S3

Lecture 5 – Map Reduce Computing Paradigm

[18] AWS Marketplace

Elastic MapReduce (Amazon EMR) uses Hadoop on AWS IAAS cloud & is used by Airbnb (50 GB/day)

49 / 60

Page 50

Google Cloud Example – Data Analytics & Machine Learning

Lecture 5 – Map Reduce Computing Paradigm

The GAE services re-use other existing Big Data services of the Google Cloud Platform through well-defined interfaces such as Google BigQuery or Google Dataproc for Apache Spark & Hadoop

[19] Google Dataproc service

Google Dataproc enables scalable & automated cluster management for Apache Spark with quick deployment, logging, and monitoring in order to focus on data analysis and not on infrastructure

Lecture 9 provides more detailed information about Google Cloud Platform-As-A-Service Tools
22 / 56

Page 51

Other PAAS Providers & Disadvantages – Revisited

Lecture 5 – Map Reduce Computing Paradigm

[1] Distributed & Cloud Computing Book

PAAS disadvantages are that application developers are locked into a certain platform (aka ‘vendor lock-in‘) and that they may not be able to use known tools (e.g. ORACLE DB) with the services

51 / 60

Page 52

Lecture 5 – Map Reduce Computing Paradigm

The Ecosystem around ‘Map-Reduce’ is expanding

Machine Learning & Data Mining

SQL Scripts

Metadata Management

Parallel Task Execution

Efficient & reliable data storage

Relational Data

Data-Streams

NoSQL Database

Modified from [23] Map-Reduce

52 / 60

Page 53

Docker Tool – Docker Hub Repository – Hadoop Example

Docker community image repository
100,000+ free applications including Hadoop (a hypothetical pull is sketched below)
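A minimal sketch of getting a Hadoop image from Docker Hub (the image name is a placeholder; pick a concrete one from the search results):

docker search hadoop                       # find community Hadoop images on Docker Hub
docker pull some-org/hadoop-docker         # hypothetical image name
docker run -it some-org/hadoop-docker /bin/bash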

Lecture 5 – Map Reduce Computing Paradigm

[24] Docker Hub Web page

Lecture 12 provides details on containers & Docker used in cloud computing for machine learning
53 / 60

Page 54

Lecture 5 – Map Reduce Computing Paradigm

Not every problem can be solved with Map-Reduce

Example: Amazon online retail sales
Requires a lot of updates on their product databases on each user action (a problem for the underlying file system optimization – next lecture)
Processes that involve ‘little calculation’ but still change the database
Employs thousands of computing nodes and offers them (‘Amazon Cloud’)
(maybe they use map-reduce for certain analytics: buying patterns)

Example: Selected machine learning methods
Some learning algorithms rather require ‘iterative map-reduce‘
The traditional ‘map-reduce, then done‘ approach needs to be extended

Map-Reduce is not a solution to every parallelizable problem
Only specific algorithms benefit from the map-reduce approach
Not for communication-intensive parallel tasks (e.g. PDE solvers) with MPI
Not for applications that require frequent updates of existing datasets (writes)
Implementations often have severe security limitations (distributed)

[10] Mining of Massive Datasets

54 / 60

Page 55

Lecture 5 – Map Reduce Computing Paradigm

[Video] IBM Commercial Hadoop-based Tools

[25] YouTube Video, IBM InfoSights

55 / 60

Page 56

Lecture Bibliography

Lecture 5 – Map Reduce Computing Paradigm 56 / 60

Page 57

Lecture Bibliography (1)

[1] K. Hwang, G. C. Fox, J. J. Dongarra, ‘Distributed and Cloud Computing’, Book, Online: http://store.elsevier.com/product.jsp?locale=en_EU&isbn=9780128002049

[2] HDFS Architecture Guide, Online: http://hadoop.apache.org/docs/stable/hdfs_design.html

[3] Microsoft Azure HDInsight Cluster Web page, Online: https://azure.microsoft.com/en-us/services/hdinsight/

[4] Apache Hadoop Web page,Online: http://hadoop.apache.org/

[5] J. Dean, S. Ghemawat, ‘MapReduce: Simplified Data Processing on Large Clusters’, OSDI'04: Sixth Symposium on Operating System Design and Implementation, December, 2004.

[6] Apache Spark Web page, Online: http://spark.apache.org/

[7] Introduction to Parallel Computing Tutorial, Online: https://computing.llnl.gov/tutorials/parallel_comp/

[8] Introduction to High Performance Computing for Scientists and Engineers, Georg Hager & Gerhard Wellein, Chapman & Hall/CRC Computational Science, ISBN 143981192X

[9] Science Progress, Online: http://scienceprogress.org/2009/11/dna-confidential/

[10] Mining of Massive Datasets, Online: http://infolab.stanford.edu/~ullman/mmds/book.pdf

Lecture 5 – Map Reduce Computing Paradigm 57 / 60

Page 58

Lecture Bibliography (2)

[11] J. Dean, S. Ghemawat, ‘MapReduce: Simplified Data Processing on Large Clusters’, OSDI'04: Sixth Symposium on Operating System Design and Implementation, December, 2004.

[12] SlideShare, ‘Introduction to Yarn and MapReduce‘, Online: https://www.slideshare.net/cloudera/introduction-to-yarn-and-mapreduce-2

[13] YouTube, Intro To MapReduce, Online: http://www.youtube.com/watch?v=HFplUBeBhcM

[14] OpenStack Web page, Online: https://www.openstack.org/software/

[15] OpenStack Paper, ‘OpenStack Workload Reference Architecture: Big Data‘, Online: https://www.openstack.org/assets/software/mitaka/OpenStack-WorkloadRefBigData-v4.pdf

[16] ‘Google Throws Open Doors to Its Top-Secret Data Center’, wired.com, Online: https://www.wired.com/2012/10/ff-inside-google-data-center/all/

[17] Amazon Web Services Web Page, Online: https://aws.amazon.com

[18] AWS Marketplace, Online: https://aws.amazon.com/marketplace/

[19] Google DataProc Service, Online: https://cloud.google.com/dataproc/

[20] Apache HBase Web Page, Online: https://hbase.apache.org/

Lecture 5 – Map Reduce Computing Paradigm 58 / 60

Page 59

Lecture Bibliography (3)

[21] Facebook Code, ‘HydraBase – The evolution of HBase@Facebook‘, Online: https://code.facebook.com/posts/321111638043166/hydrabase-the-evolution-of-hbase-facebook/

[22] Facebook Engineering, ‘Under the Hood: Building posts search‘, Online: https://www.facebook.com/notes/facebook-engineering/under-the-hood-building-posts-search/10151755593228920/

[23] Einführung in Hadoop (German language), Online: http://blog.codecentric.de/2013/08/einfuhrung-in-hadoop-die-wichtigsten-komponenten-von-hadoop-teil-3-von-5/

[24] Docker Hub, Online: https://hub.docker.com/

[25] YouTube Video, IBM InfoSights, Online: http://www.youtube.com/watch?v=LGsq7kHjdhI&feature=youtu.be

[26] Jean-Claude Laprie, ‘Dependable Computing: From Concepts to Design Diversity‘, Proceedings of the IEEE, 1986, pages 629-638

[27] Dhruba Borthakur, ‘The Hadoop Distributed File System: Architecture and Design’, Online: http://hadoop.apache.org/docs/r0.18.0/hdfs_design.pdf

[28] Stampede Virtual Workshop, Online: http://www.cac.cornell.edu/Ranger/MapReduce/dfs.aspx

[29] Microsoft Azure Portal Hub,Online: https://portal.azure.com/#create/hub

[30] Microsoft Azure HDInsight Cluster Web page, Online: https://azure.microsoft.com/en-us/services/hdinsight/

[31] Big Data Tips, ‘Apache Spark vs Hadoop‘, Online: http://www.big-data.tips/apache-spark-vs-hadoop

Lecture 5 – Map Reduce Computing Paradigm 59 / 60

Page 60

Lecture 5 – Map Reduce Computing Paradigm 60 / 60

