
Webinar: Productionizing Hadoop: Lessons Learned - 20101208

Transcript

Welcome to Production-izing Hadoop: Lessons Learned

Audio/Telephone: +1 916 233 3087

Access Code: 616-465-108

Audio PIN: Shown after joining the Webinar

Housekeeping

• Ask questions at any time using the questions panel

• Problems? Use the chat panel

• Book drawing - winner announced at the end

• Slides and recording will be available

2 Copyright 2010 Cloudera Inc. All rights reserved

Poll

What is your interest in Hadoop?

• Just learning about it

• I have a problem I think Hadoop can solve

• Using Hadoop in our labs

• Using Hadoop in production


Speaker: Eric Sammer

Eric is a Solution Architect and Training Instructor for Cloudera. He has worked with dozens of customers in a variety of industries, including Cloudera's largest Hadoop deployments. His experience ranges from clusters of a few nodes to clusters of hundreds of nodes with complex multi-tenant user environments. Prior to joining Cloudera, he held roles including System Architect, Director of Technical Operations, and Tech Lead at various New York City startups, focusing on distributed data collection, processing, and reporting systems. Eric has over 12 years of experience in development and technical operations and has contributed to various open source projects such as Gentoo Linux.

twitter: @esammer, @cloudera


Starting Out

[Image: a lone figure labeled "(You)" — credit: http://www.iccs.inf.ed.ac.uk/~miles/code.html]

“Let’s build a Hadoop cluster!”

Where you want to be

[Image: the Yahoo! Hadoop Cluster (2007), with "(You)" standing beside it]

What is Hadoop?

• A scalable, fault-tolerant distributed system for data storage and processing (open source under the Apache license)

• Core Hadoop has two main components
  • Hadoop Distributed File System (HDFS): self-healing, high-bandwidth clustered storage
  • MapReduce: fault-tolerant distributed processing

• Key value
  • Flexible -> store data without a schema and add it later as needed
  • Affordable -> cost / TB at a fraction of traditional options
  • Broadly adopted -> a large and active ecosystem
  • Proven at scale -> dozens of petabyte+ implementations in production today
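The MapReduce model described above can be sketched in plain Python — a toy, single-process illustration of the map/shuffle/reduce phases, not the Hadoop API:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to each input record, yielding (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key and its list of values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count: map emits (word, 1), reduce sums the counts.
def wc_mapper(line):
    for word in line.split():
        yield (word, 1)

def wc_reducer(word, counts):
    return sum(counts)

lines = ["hadoop is scalable", "hadoop is fault tolerant"]
counts = reduce_phase(shuffle(map_phase(lines, wc_mapper)), wc_reducer)
print(counts["hadoop"])  # 2
```

In real Hadoop the shuffle is distributed and fault tolerant; the programming model, though, is exactly this shape.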


Cloudera’s Distribution for Hadoop, Version 3

• Open source – 100% Apache licensed

• Simplified – Component versions & dependencies managed for you

• Integrated – All components & functions interoperate through standard APIs

• Reliable – Patched with fixes from future releases to improve stability

• Supported – Employs project founders and committers for >70% of components


[Diagram: CDH3 component stack — Hue, Hue SDK, Oozie, HBase, Flume, Sqoop, Zookeeper, Hive, Pig]

The Industry’s Leading Hadoop Distribution

Overview

• Proper planning

• Data Ingestion

• ETL and Data Processing Infrastructure

• Authentication, Authorization, and Sharing

• Monitoring


The production data platform

• Data storage

• ETL / data processing / analysis infrastructure

• Data ingestion infrastructure

• Integration with tools

• Data security and access control

• Health and performance monitoring


Proper planning

• Know your use cases!

• Log transformation, aggregation

• Text mining, IR

• Analytics

• Machine learning

• Critical to proper configuration

• Hadoop

• Network

• OS

• Resource utilization, deep job insight will tell you more


HDFS Concerns

• Name node availability

• HA is tricky

• Consider where Hadoop lives in the system

• Manual recovery can be simple, fast, effective

• Backup Strategy

• Name node metadata – hourly, ~2 day retention

• User data
  • Log shipping style strategies
  • DistCp
  • “Fan out” to multiple clusters on ingestion
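The hourly-backup, ~2-day-retention policy for name node metadata mentioned above reduces to a simple pruning rule. A minimal sketch (the `fsimage-YYYYMMDDHH` naming scheme here is hypothetical, not a Hadoop convention):

```python
from datetime import datetime, timedelta

def prune_backups(backup_names, now, retention_hours=48):
    """Return backup names older than the retention window (candidates to delete).

    Names are assumed to embed an hourly timestamp, e.g. 'fsimage-2010120814'.
    """
    cutoff = now - timedelta(hours=retention_hours)
    return [name for name in backup_names
            if datetime.strptime(name.split("-")[1], "%Y%m%d%H") < cutoff]

backups = ["fsimage-2010120611", "fsimage-2010120612", "fsimage-2010120811"]
stale = prune_backups(backups, now=datetime(2010, 12, 8, 12))
print(stale)  # ['fsimage-2010120611']
```

The point is that retention is a policy you automate and monitor, not something done by hand when disk fills up.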

13 Copyright 2010 Cloudera Inc. All rights reserved

Data Ingestion

• Many data sources

• Streaming data sources (log files, mostly)

• RDBMS

• EDW

• Files (usually exports from 3rd party)

• Common place we see DIY

• You probably shouldn’t

• Sqoop, Flume, Oozie (but I’m biased)

• No matter what - fault tolerant, performant, monitored
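"Fault tolerant, performant, monitored" in practice means every delivery path needs retry behavior. A minimal sketch of a retrying delivery wrapper (the `send` callable stands in for whatever transport you use; tools like Flume and Sqoop handle this for you):

```python
import time

def deliver_with_retries(send, payload, max_attempts=3, backoff_s=1.0):
    """Attempt delivery; back off exponentially between failures; re-raise when exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload)
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * 2 ** (attempt - 1))

# A flaky sink that fails twice, then accepts (stands in for a real transport).
attempts = []
def flaky_send(payload):
    attempts.append(payload)
    if len(attempts) < 3:
        raise IOError("transient failure")
    return "ok"

result = deliver_with_retries(flaky_send, b"log line", backoff_s=0.01)
print(result, len(attempts))  # ok 3
```

Getting the edge cases right (partial writes, duplicate delivery, backpressure) is exactly why DIY ingestion is discouraged.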


ETL and Data Processing

• Non-interactive jobs

• Establish a common directory structure for processes

• Need tools to handle complex chains of jobs

• Workflow tools support

• Job dependencies, error handling

• Tracking

• Invocation based on time or events

• Most common mistake: depending on jobs always completing successfully or within a window of time.

• Monitor for SLA rather than pray

• Defensive coding practices apply just as they do everywhere else!
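"Monitor for SLA rather than pray" can be sketched as a simple classifier over a job's start, finish, and deadline times (an illustrative model only, not any workflow tool's API):

```python
from datetime import datetime

def sla_status(started, finished, deadline):
    """Classify a job run against its SLA deadline; finished is None while running."""
    if finished is not None:
        return "ok" if finished <= deadline else "breach"
    # Still running: breach once the deadline passes, even without a failure.
    return "breach" if datetime.now() > deadline else "running"

deadline = datetime(2010, 12, 8, 6, 0)
on_time = sla_status(datetime(2010, 12, 8, 2), datetime(2010, 12, 8, 5, 30), deadline)
late    = sla_status(datetime(2010, 12, 8, 2), datetime(2010, 12, 8, 6, 45), deadline)
print(on_time, late)  # ok breach
```

Note the still-running branch: the common mistake above — assuming jobs always finish within the window — is precisely the case this check catches.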


Metadata Management

• Tool independent metadata about…

• Data sets we know about and their location (on HDFS)

• Schemata

• Authorization (currently HDFS permissions only)

• Partitioning

• Format and compression

• Guarantees (consistency, timeliness, permits duplicates)

• Currently still DIY in many ways, tool-dependent

• Most people rely on prayer and hard coding

• (H)OWL is interesting
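Absent a shared tool, a tool-independent metadata registry can be as simple as a lookup table covering the fields above. A sketch — every dataset name, path, and field here is hypothetical:

```python
REGISTRY = {
    "web_logs": {
        "location": "/data/raw/web_logs",        # HDFS path
        "schema": ["ts", "host", "url", "status"],
        "partitioning": "daily",
        "format": "text",
        "compression": "gzip",
        "guarantees": {"permits_duplicates": True},
    },
}

def dataset_path(name, day):
    """Resolve a dataset and partition to its HDFS directory."""
    entry = REGISTRY[name]
    if entry["partitioning"] == "daily":
        return f"{entry['location']}/{day}"
    return entry["location"]

path = dataset_path("web_logs", "2010-12-08")
print(path)  # /data/raw/web_logs/2010-12-08
```

Even this beats hard coding paths in every job: when a dataset moves or changes format, there is one place to update.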


Authentication and authorization

• Authentication
  • Don’t talk to strangers
  • Should integrate with existing IT infrastructure
  • Yahoo! security (Kerberos) patches now part of CDH3b3

• Authorization
  • Not everyone can access everything
  • Ex. Production data sets are read-only to quants / analysts. Analysts have home or group directories for derived data sets.
  • Mostly enforced via HDFS permissions; directory structure and organization is critical
  • Not as fine-grained as column-level access in an EDW or RDBMS

• HUE as a gateway to the cluster
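The read-only-production / writable-home layout described above falls out of ordinary owner/group/other permission bits. A sketch that approximates the semantics (a model for illustration, not HDFS code; the paths and groups are hypothetical):

```python
def may_write(path, user, groups, acl):
    """Approximate POSIX-style write checks: owner bit, then group bit, then other.

    acl maps a directory to (owner, group, mode) with mode like 0o750.
    """
    owner, group, mode = acl[path]
    if user == owner:
        return bool(mode & 0o200)
    if group in groups:
        return bool(mode & 0o020)
    return bool(mode & 0o002)

acl = {
    "/data/production": ("etl", "analysts", 0o750),   # analysts: read-only
    "/user/alice":      ("alice", "analysts", 0o755), # alice's home: writable
}
prod_write = may_write("/data/production", "alice", {"analysts"}, acl)
home_write = may_write("/user/alice", "alice", {"analysts"}, acl)
print(prod_write, home_write)  # False True
```

Because the enforcement unit is the directory, the directory layout effectively *is* the authorization policy — hence "directory structure and organization is critical."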


Resource Sharing

• Prefer one large cluster to many small clusters (unless maybe you’re Facebook)

• “Stop hogging the cluster!”

• Cluster resources

• Disk space (HDFS size quotas)

• Number of files (HDFS file count quotas)

• Simultaneous jobs

• Tasks – guaranteed capacity, full utilization, SLA enforcement

• Monitor and track resource utilization across all groups
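Tracking quota headroom per group, as the last bullet suggests, is a simple comparison once usage is collected. A sketch (the groups and numbers are invented for illustration):

```python
def quota_violations(usage, quotas):
    """Return groups exceeding either their space (bytes) or file-count quota.

    usage and quotas both map group -> (bytes, file_count).
    """
    violators = []
    for group, (used_bytes, files) in usage.items():
        max_bytes, max_files = quotas[group]
        if used_bytes > max_bytes or files > max_files:
            violators.append(group)
    return sorted(violators)

usage  = {"analytics": (9 * 10**12, 40_000), "etl": (2 * 10**12, 900_000)}
quotas = {"analytics": (10 * 10**12, 100_000), "etl": (5 * 10**12, 500_000)}
print(quota_violations(usage, quotas))  # ['etl']  (file-count quota exceeded)
```

Note that a group can be well under its space quota and still blow the file-count quota — lots of small files is the classic way to hurt the name node.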


Monitoring

• Critical for keeping things running

• Cluster health

• Duh.

• Traditional monitoring tools: Nagios, Hyperic, Zenoss

• Host checks, service checks

• When to alert? It’s tricky.

• Cluster performance

• Overall utilization in aggregate

• 30,000ft view of utilization and performance; macro level


Monitoring

• Hadoop-aware cluster monitoring
  • Traditional tools don’t cut it; Hadoop monitoring is inherently Hadoop specific
  • Analogous to RDBMS monitoring tools

• Job level “monitoring”
  • More like analysis
  • “What resources does this job use?”
  • “How does this run compare to last run?”
  • “How can I make this run faster, more resource efficient?”

• Two views we care about
  • Job perspective
  • Resource perspective (task slots, scheduler pool)
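"How does this run compare to last run?" reduces to diffing per-run metrics and flagging large relative changes. A sketch (metric names and the 25% threshold are illustrative choices, not from any tool):

```python
def compare_runs(current, previous, threshold=0.25):
    """Flag metrics whose relative change versus the last run exceeds `threshold`."""
    flagged = {}
    for metric, value in current.items():
        old = previous.get(metric)
        if old:
            change = (value - old) / old
            if abs(change) > threshold:
                flagged[metric] = round(change, 2)
    return flagged

previous = {"runtime_s": 1200, "hdfs_bytes_read": 8 * 10**9}
current  = {"runtime_s": 1800, "hdfs_bytes_read": 8 * 10**9}
drift = compare_runs(current, previous)
print(drift)  # {'runtime_s': 0.5}
```

Here runtime grew 50% while bytes read stayed flat — the kind of signal that says "investigate," which plain host-level checks never surface.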


Wrapping it up

• Hadoop proper is awesome, but is only part of the picture

• Much of Professional Services’ time is spent filling in the blanks

• There’s still a way to go

• Metadata management

• Operational tools and support

• Improvements to Hadoop core to improve stability, security, manageability

• Adoption and feedback drive progress

• CDH provides the infrastructure for a complete system


Cloudera Makes Hadoop Safe For the Enterprise


Software Services Training


• Increases reliability and consistency of the Hadoop platform

• Improves Hadoop’s conformance to important IT policies and procedures

• Lowers the cost of management and administration

Cloudera Enterprise Enterprise Support and Management Tools


References / Resources

• Cloudera documentation - http://docs.cloudera.com

• Cloudera Groups – http://groups.cloudera.org

• Cloudera JIRA – http://issues.cloudera.org

• Hadoop: The Definitive Guide

[email protected]

• irc.freenode.net #cloudera, #hadoop

• @esammer


Poll

What other topics would you be most interested in hearing about?

• More case studies of enterprises using Hadoop

• Technical "How to" sessions

• Industry specific applications of Hadoop

• Technical overviews of Hadoop and related components


Winner of the drawing is…


Q&A

Learn about upcoming events: www.cloudera.com/events

DBTA Webinar: Thursday, December 9th, 11am PT / 1pm ET

New Solutions for the Data Intensive Enterprise

Register at www.cloudera.com/events

Thank you for attending.


