November 16, 2009
San Francisco 2009
© 2009 Elastra Corporation, licensed under a Creative Commons Attribution-No Derivative Works 3.0 Unported License
Designing for the Cloud: A Tutorial
Stuart Charlton, CTO, Elastra
San Francisco 2009 2
Tutorial Objectives
• What has cloud computing done to IT systems design & architecture?
  » "The future is already here, it's just not very evenly distributed" (Gibson)
• How should new systems be designed with the new constraints?
  » Such as: parallelism, availability, on-demand infrastructure
• Where can I find practical frameworks, tools, and techniques, and what are the tradeoffs?
  » Hadoop, Cassandra, Parallel DBs, Actors, Caches, Containers, and Configuration Management
San Francisco 2009 3
About Your Presenter
Stuart Charlton
• Canadian, now in San Francisco
• CTO, Elastra
  » Focus on Customers, Products, Technology Directions
• In prior lives...
  » BEA Systems, Rogers Communications, financial services, global training & consulting
• RESTafarian and Data geek
• Stu Says Stuff: http://stucharlton.com/blog
San Francisco 2009 4
Tutorial Agenda, in 4 Words
1. Clouds
2. Service
3. Data
4. Control
San Francisco 2009 5
Agenda – Part 1
• Clouds: Fear of a Fluffy Planet
  » What has changed, and what remains the same?
  » Designing applications in this world
  » A Cloud Design Reference Architecture (aka a cheat sheet to categorize thinking in the clouds)
• Service: Foundations for Systems
  » Solving Big Problems vs. Little Problems
  » Amdahl's Law & the Universal Scalability Law
  » Actor-Based Concurrency: Dr. Strangelanguage (or How I Learned to Stop Worrying and Love Erlang)
San Francisco 2009 6
Agenda – Part 2
• Data: Management & Access
  » Contrasting philosophies: persistence vs. management; scale-up vs. scale-out; shared disk vs. shared nothing
  » A survey of solutions (from clustered DBMS to K/V stores)
  » Consistency, Availability, Partitioning (CAP) tradeoffs: a deep dig into what these really imply
• Control: Containers, Configuration & Modeling
  » The Dev/Ops Tennis Match
  » The Evolution of Automation: from Scripts to Runbooks to FSMs to HTNs
San Francisco 2009 7
Caveats
• Audience assumption: IT devs & architects
  » Some exposure to cloud, but not necessarily advanced
• The technology is a fast-moving target
  » Especially the state of specific tools & frameworks
• Theory vs. practice
  » I try to balance the two; both are essential
• Time is limited
  » Only scratching the surface of certain topics
  » Missing topics are usually full tutorials in their own right
• Much of the subject matter is up for debate
  » And this is a tutorial, not a workshop…
San Francisco 2009
CLOUDS
Fear of a Fluffy Planet
San Francisco 2009 9
(Comic courtesy of browsertoolkit.com)
San Francisco 2009 10
The Freedom!
• On-Demand Infrastructure via API calls
  » Inside or outside my data centres (Private / Public Cloud)
• Pay-per-use pricing models
  » Great for temporary growth needs
• Platform-as-a-Service
  » Scalability without Skill, Availability without Avarice
• Large Scale, Always On
  » New opportunities due to cheaper scale & availability
San Francisco 2009 11
The Horror!
• Hype Overdrive
  » Cloud Running Shoes! Cloud Chewing Gum! GOOG! Werner Vogels Action Figures! (well, not quite yet)
• Standards Support
  » So many to choose from! OCCI, vCloud + OVF, EC2, WBEM, WS-Management
• Platform-as-a-Service
  » What color would you like for your locked trunk's interior?
• Crazy Talk
  » No SQL! Eventual Consistency! Infrastructure as Code!
San Francisco 2009
Will the Real Slim Cloudy Please Stand Up?
“I, for one, welcome our new outsourced overlords”
• Finer-grained outsourcing
• Metered resource usage
• APIs & self-service UIs
• … but isn’t outsourcing often a shell game?
• See Distributed Computing Economics, Jim Gray (2003)
“Scale without skill, availability without avarice”
• Insert constrained code [here]
• Magically scalable & available
• GAE, Azure (some day)
• … but aren’t you locked in?
San Francisco 2009
Will the Real Slim Cloudy Please Stand Up?
“I like Big *aaS and I cannot lie”
• Private, Public, or Community Clouds
• Multiple stack levels
• "Real" SOA, not just web services
• … haven’t I heard this before?
“My name is… what? Slim Cloudy!”
• Reduced lead times to change
• Agile Operations / Lean IT
• Revolution in systems management
• … can we really change IT?
San Francisco 2009 14
Designing Applications in this World
• Distributed & networked systems have triumphed
  » The fallacies of distributed computing must be taken seriously now: the network is unreliable, latency > 0, bandwidth is finite, topology might change, etc.
• Scale-out & fault tolerance: the new design center
  » Versus productive business logic, data management, etc.
• What's old is new
  » Some challengers to mainstream ideas are old ideas being reapplied, e.g. Erlang, Map/Reduce, distributed file systems, replication
San Francisco 2009 15
Designing Applications in this World
• Autonomous services constitute most systems
  » Full-stack services, not just bits of code
• Design for constant operations
  » Interdependence + Distribution + Autonomy = Pain
  » FCAPS (Fault, Configuration, Accounting, Performance & Security Management)
• Security & Privacy
  » Multi-tenancy, data-in-transit vs. data-at-rest, etc.
San Francisco 2009 16
Solving for one’s own problems
• Mainstream tools, platforms, and servers have not consistently caught up
• LOTS of software experimentation in:
  » Web servers, containers, caches, databases, network configuration, systems management
• The danger is viewing new solutions as the better way of doing things in general
  » It's possible; but stuff is changing quickly
  » New territory always involves a level of reinvention
  » The tech world has not rebooted due to cloud computing
• Beware Fanbois/Fangrrls, Pundits & The Press
San Francisco 2009 17
A Cloud Design Reference Architecture
• Web – WebArch & REST
• Service, Data, & Control – this tutorial
• Resource – virtualization, management & infrastructure clouds
(Layer diagram: Web / Service / Data / Control / Resource)
San Francisco 2009 18
Service
• Organizing your computing domain for
  » fault
  » scale
  » management
San Francisco 2009 19
Data
• Storage, retrieval, integrity, and recovery given
  » Distributed systems
  » Large scale
  » High availability
  » (possible) Multi-tenancy
San Francisco 2009 20
Control
• Provisioning, configuration, governance, and optimization of infrastructure
  » Resource brokerage
  » Policy constraints
  » Dependency management
  » Software configuration
  » Authorization & auditability
San Francisco 2009
SERVICE
Foundation for Systems
San Francisco 2009 22
Designing a Service, circa 1998-2008
• Multi-Tier Hybrid Architecture
  » Some stateless, some stateful computing
  » Session state is replicated
• Independent servers / applications
  » Low-level redundancy (RAID, 2x NICs, etc.)
• "Put your eggs into a small number of baskets, and watch those baskets"
• General assumptions
  » Failure at the service layer shouldn't lead to downtime
  » Failure at the data layer may be catastrophic
San Francisco 2009 23
Designing a Service, circa 2008+
• Autonomous services
  » Divide the system into areas of functional responsibility (tiers irrelevant)
• Interdependent servers / applications
  » Software-level redundancy and fault handling
• "Many, many servers breaking big problems down or distributing lots of little problems around"
• New realities
  » Partial failure is a regular, normal occurrence; no excuse for downtime from any service
San Francisco 2009
Breaking or bridging a problem across resources
Big Problems (Parallel)
• Theory: Amdahl's Law; shared memory or disk vs. shared nothing
• New Practice: MapReduce (e.g. Hadoop), Spaces, Master/Worker
• Retro: Linda, MPI, OpenMP, IPC or threads
Little Problems (Concurrent)
• Theory: Actor model & process calculi
• New Practice: Lightweight messaging, Spaces, Erlang & Scala actors
• Retro: IPC, thread pools, components (COM+/EJB), big messaging (MQ, TIB, JMS)
San Francisco 2009 25
Case Study in “Big Problem” Solving: MapReduce & Apache Hadoop
• Input
  » Read your data from files as a K/V map
• Distribute Mapping Function
  » Input: one (k, v) pair
  » Returns: a new K/V list
• Partition & Sort
  » Handled by the framework (e.g. Hadoop)
  » Provide a comparator
• Distribute Reduce Function
  » Input: one (k, list of values) pair
  » Returns: a list of output values
• Output
  » Save the list as a file
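To ground these stages, here is a minimal single-process word-count sketch in Python. It is illustrative only (not Hadoop's actual API; the function names are assumptions), but it mirrors the map, partition/sort, and reduce phases above:

from collections import defaultdict

def map_fn(key, line):
    # Map: take one (k, v) pair, emit a new list of (k, v) pairs
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    # Reduce: take one (k, list-of-values) pair, return output values
    return [(word, sum(counts))]

def run_mapreduce(lines, mapper, reducer):
    # Partition & sort: group intermediate pairs by key, as the
    # framework (e.g. Hadoop) does between the two phases
    groups = defaultdict(list)
    for k, line in enumerate(lines):
        for ik, iv in mapper(k, line):
            groups[ik].append(iv)
    output = []
    for word in sorted(groups):  # sorted keys, as a comparator would order them
        output.extend(reducer(word, groups[word]))
    return output

print(run_mapreduce(["the cat", "the dog"], map_fn, reduce_fn))
# -> [('cat', 1), ('dog', 1), ('the', 2)]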
San Francisco 2009 26
… But how fast can I get?
Theory Interlude: Amdahl's Law
• How fast can I speed up a sequential process?
• Time = serial part + parallel part
• Thus, the speedup is: Speedup(N) = 1 / ((1 − P) + P / N)
  » Where P is the fraction of the program that can be parallelized, and N is the number of processors
• What happens when P is 95%? A maximum speedup of 20x, no matter how large N gets
  » How about 99.99%?
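As a quick sanity check of that arithmetic, a tiny Python sketch:

def amdahl_speedup(p, n):
    # Amdahl's Law: the serial fraction (1 - p) caps the speedup
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.95, 0.9999):
    print(p, round(amdahl_speedup(p, 10_000), 1), "cap:", round(1.0 / (1.0 - p)))
# P = 95%    -> ~20x cap, even with 10,000 processors
# P = 99.99% -> ~5,000x at N = 10,000, approaching a 10,000x cap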
San Francisco 2009 27
Gunther’s Universal Scalability Law
• It gets worse…
• Most scale-out systems experience retrograde behavior at peak loads (a numeric sketch follows)
• Capacity(N) = N / (1 + α(N − 1) + βN(N − 1))
  » α is the contention penalty; β is the coherency delay
• http://www.perfdynamics.com/Manifesto/gcaprules.html
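A small Python sketch makes the retrograde effect visible; the α and β coefficients below are assumed for illustration, not measured values:

def usl_capacity(n, alpha, beta):
    # Gunther's Universal Scalability Law: contention (alpha) flattens
    # the curve; coherency delay (beta) eventually makes throughput
    # decline as N grows (retrograde scaling)
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

for n in (1, 8, 32, 128, 512):
    print(n, round(usl_capacity(n, alpha=0.03, beta=0.0005), 1))
# Capacity rises (1.0, 6.5, 13.2), peaks near N ~ 44,
# then declines (9.9 at 128, 3.5 at 512) as coherency costs dominate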
San Francisco 2009 28
Case study in solving "little problems"
Actors: The Basic Idea
• Programmable entities are concurrent, share nothing, and communicate through messages
• Actors can
  » Send messages
  » Create other actors
  » Specify how they respond to messages
• Very lightweight (actors = objects)
• Usually no ordering guarantees
• Built in at the language level
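A minimal sketch of the idea in Python, with threads and queues standing in for truly lightweight actors (the names here are illustrative, not Erlang's or Scala's API):

import queue, threading

class Actor:
    # Shares nothing, owns a mailbox, processes one message at a time
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)  # asynchronous, fire-and-forget

    def _run(self):
        while True:
            self.behavior(self, self.mailbox.get())

def counter(self, msg):
    # Behavior: decide how to respond to a message; state stays inside the actor
    self.count = getattr(self, "count", 0) + 1
    print("got", msg, "count =", self.count)

a = Actor(counter)
a.send("hello"); a.send("world")

import time; time.sleep(0.1)  # let the mailbox drain before exit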
San Francisco 2009 29
Erlang Supervisors: Assuming failure will occur
• Failures require cleanup & restart
• Supervisor relationships can ensure the system tolerates faults
• Hot-swap patches
• Fundamentally in the language & libraries
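Erlang/OTP supervisors are built into the runtime; the Python sketch below captures only the restart discipline, under assumed names and a one-for-one strategy:

import time

def supervise(child_fn, max_restarts=3):
    # Assume the child *will* fail; clean up and restart it,
    # escalating only after the restart budget is exhausted
    restarts = 0
    while restarts <= max_restarts:
        try:
            child_fn()
            return  # child finished normally
        except Exception as err:
            restarts += 1
            print(f"child died ({err}); restart #{restarts}")
            time.sleep(0.1)  # back off before restarting
    raise RuntimeError("restart limit hit; escalate to the parent supervisor")

attempts = {"n": 0}
def flaky_worker():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient fault")
    print("worker succeeded on attempt", attempts["n"])

supervise(flaky_worker)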
San Francisco 2009
What kinds of failures? A Simplification.
Exceptional Conditions
• Conditions that the programmer can handle
• Handled through cleanup or “catch” code
• Examples
  » File not found, type conversion, bad arithmetic (divide by zero), malformed input
Error Conditions
• Conditions that a programmer did not or should not handle
• Tolerated through replication, fast failure, and/or restart(s)
• Examples
  » Hardware failures, network outages, "Heisenbugs", rare software conditions
San Francisco 2009
DATA
Management & Access
San Francisco 2009
Evolving the Database: Two Philosophies
Data Persistence Systems and Frameworks
• Goal: Store & retrieve data quickly and reliably, with minimal hassle to the programmer
• Often uses application tools & languages to manage & access data
• Focused set of features
Database Management Systems (DBMS)
• Goal: Manage the access, integrity, security, and reliability of data, independently of applications
• Hard separation of tools & languages (e.g. SQL, DBA tools)
• Broad set of features
San Francisco 2009
Scaling the Database: Two Philosophies
Scale-Up
• Concurrent processing & parallelism through hardware
  » SMP, NUMA, MPP
  » RAID arrays (SAN & NAS)
  » Shared disk or memory
• Benefit: It worked in the 90s.
• Drawback: Expensive, often bespoke, forklift upgrades
Scale-Out
• Concurrent processing & parallelism through software
  » Commodity hardware
  » Software provides the engine
  » Shared nothing
• Benefit: Linear scale, easy to standardize, easy to replicate / upgrade
• Drawback: Traditionally, the software sucked.
San Francisco 2009 34
… What happens when database clustering software stops sucking? (i.e. now)
• A flurry of programmer-oriented approaches
  » Persistence engines rule the bleeding edge in 2009
  » Key/Value stores, JSON document stores, etc.
• The declarative/imperative impedance mismatch (the "Vietnam" of the software tools industry) gets conflated with distributed data
• Lots of practical confusion
  » What are the tradeoffs with a widely scaled-out database system?
  » Too many choices, with idiosyncratic design histories
• Let’s detangle this…
San Francisco 2009
When should I share components?
Shared Disk
• Partition compute across nodes
• Storage is shared through NAS or SAN
• Good for:
  » Mixed workloads
  » Small random-access reads
• Worst case:
  » Inter-node network chatter caps scalability
  » Disk pings to propagate writes (e.g. Oracle pre-RAC)
Shared Nothing
• Partition data across nodes
• Each node owns its data
• Good for:
  » Read-mostly workloads
  » Parallel reads of huge data volumes
  » Consistent writes that go to one partition
• Worst case:
  » Repartitioning
  » Hotspot records don't scale
  » Writes that span partitions
San Francisco 2009 36
Modern Data Persistence Systems
• Object Persistence
  » "Navigational databases in Java, Smalltalk, C++"
  » GemStone, Versant, Objectivity
• Distributed Key-Value Stores
  » "Structured data with lesser need for complex queries"
  » Consistent: BigTable, HBase, Voldemort
  » Eventually consistent: Dynamo, Cassandra (see the partitioning sketch below)
• Document and/or Blob Stores
  » "Indexed structured data + binaries/fulltext"
  » CouchDB, BerkeleyDB, MongoDB
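Several of the key-value stores above (Dynamo and its clones in particular) partition keys across nodes with some form of consistent hashing. A toy Python sketch of the ring idea, not any product's actual implementation:

import bisect, hashlib

class ConsistentHashRing:
    # Each node gets several virtual points on a hash ring, so adding
    # or removing a node remaps only a small slice of the key space
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._hash(f"{n}:{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash
        i = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
for k in ("user:1", "user:2", "cart:99"):
    print(k, "->", ring.node_for(k))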
San Francisco 2009 37
Clustered DBMS for Transactions
• Oracle Real Application Clusters (RAC)
  » Shared disk, replicated memory ("Cache Fusion")
  » Limited by mesh interconnect to disk (partitioning possible)
• IBM DB2 Data Partitioning Feature
  » Shared-nothing database cluster, high number of nodes
• IBM DB2 pureScale
  » New (Oct 2009) technology that ports IBM mainframe shared-disk clustering to DB2 for open systems
• Microsoft SQL Server 2008
  » "Federated" shared-nothing databases, a longtime feature
San Francisco 2009 38
Clustered DBMS for Parallel Queries
• Teradata
  » The old standard data warehouse, hardware + software
• Netezza
  » Data warehousing appliance (hardware + software)
• Vertica
  » Column-oriented, shared-nothing clustered database
  » Mike Stonebraker's new company
• Greenplum
  » Column-oriented, shared-nothing clustered database
  » Based on PostgreSQL, with a MapReduce engine
San Francisco 2009
Scaling to Internet-Scale
Single Control Domain
• One database site
• Consistency is built-in
• Scalable with tradeoffs among different workloads
• Scale to the limits of network bandwidth & manageability
• Main example:
  » Clustered DBMS
Multiple Control Domains
• Many database sites
• Consistency requires an agreement protocol
• Scalable only if consistency is relaxed
• Nearly limitless (global) scale
• Main examples:
  » DNS
  » The Web
San Francisco 2009 40
How do I make consistency tradeoffs?
Theory interlude: The CAP theorem
• Consistency (the A+C in ACID)
  » There's a total ordering on all operations on the data; i.e. like a sequence
• Availability
  » Every request to non-failed servers must have a response
• Tolerance to Network Partitions
  » All messages might be lost between server nodes
• Choose at most two of these (as a spectrum).
(Diagram: a triangle of Consistency, Availability, and Tolerance to Network Partitions)
San Francisco 2009 41
CAP Tradeoffs: Consistency & Availability
(Triangle diagram, emphasizing the Consistency + Availability edge)
• The common case: fault tolerance through replicas & fast fail + fast recovery
• Implication:
  » A network outage between servers might halt the system
  » Generally requires a single domain of control
• Examples that emphasize C+A:
  » Single-site cluster databases: Google BigTable, Hadoop's HBase, Oracle RAC, IBM DB2 Parallel
  » Clustered file systems: Google File System & HDFS
  » Distributed spaces & caches: Coherence, GigaSpaces & Terracotta
San Francisco 2009 42
CAP Tradeoffs: Consistency & Partitions
(Triangle diagram, emphasizing the Consistency + Partition-tolerance edge)
• Common approach for traditional distributed systems
• Implication:
  » Multiple domains of control
  » Clients can't always read/write
  » Failures degrade scale & performance due to negotiation
• Examples that emphasize C+P:
  » Distributed shared-nothing databases
  » Two-phase commit
  » Distributed locks & file systems: Chubby & Hadoop's ZooKeeper
  » Paxos & consensus protocols
  » Synchronous log shipping
San Francisco 2009 43
CAP Tradeoffs: Partitions & Availability
(Triangle diagram, emphasizing the Availability + Partition-tolerance edge)
• New approach for Internet-scale systems
• Implication:
  » Multiple domains of control
  » Reads & writes always (eventually) succeed
  » Clients may read inconsistent (old or undone) data
• Examples that emphasize A+P:
  » Internet DNS
  » Web caching & content delivery networks
  » Amazon Dynamo (and clones)
  » Cassandra (Facebook, Digg)
  » CouchDB (BBC)
  » Asynchronous log shipping
San Francisco 2009 44
Summary of the CAP Tradeoffs
• Mix & match the tradeoffs where appropriate
  » Google's search engine uses all three!
• The tradeoffs are a spectrum, not static choices
  » e.g. there are adjustable levels of consistency to consider: strict, causal, snapshot/epoch, eventual, weak… (see the quorum sketch below)
• The main tradeoff: writes to multiple sites / domains of control (with or without high availability)
  » Single domain (don't tolerate network partitions), or an agreement protocol (reduces availability), or relaxed consistency (stale/inaccurate data is possible)
  » Weaker consistency is where the idea of a DBMS falters (it is contrary to its main purpose in life)
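One concrete way eventually consistent stores expose this adjustable spectrum is through tunable read/write quorums. A toy illustration of the overlap rule (an assumed example, not any product's API):

def is_strongly_consistent(n, r, w):
    # With N replicas, reads touching R of them and writes touching W,
    # R + W > N guarantees every read quorum overlaps the latest write
    # quorum; R + W <= N trades that guarantee for lower latency
    return r + w > n

print(is_strongly_consistent(n=3, r=2, w=2))  # True: quorums must overlap
print(is_strongly_consistent(n=3, r=1, w=1))  # False: fast, but eventual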
San Francisco 2009 45
Please don’t throw out logical/relationaldata design! (unless you have to)
• "Future users of massive datasets should be protected from having to know how the data is organized in the computing cloud…
• … Activities of users through web agents and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed."
  » Paraphrasing Ed Codd – 39 years ago!
San Francisco 2009
CONTROL
Containers, Configuration, & Modeling
San Francisco 2009 47
The Dev / Ops Game
San Francisco 2009
Example: Why can't these two servers communicate?
Possible problem areas:
• Security
  » Bad credentials
• Server Configuration
  » Wrong IP or port
  » Bad setup to listen or call
• Network Configuration
  » Wrong duplex
  » Bad DNS or DHCP
• Firewall Configuration
  » Ports or protocols not open
San Francisco 2009
Example: What do I need to do to make this change?
Desired Change
• Scale out this cluster
But…
• Impacts on other systems
  » Security systems
  » Load balancers
  » Monitoring
  » CMDB / Service Desk
• Architecture issues
  » Stateful or stateless nodes?
  » Repartitioning?
  » Limits/constraints on scale-out?
San Francisco 2009
Example: What is the authoritative reality?
Desired State
• Configuration template
• Model
• Script
• Workflow
• CMDB
• Code
Current State
• On the server
  » Might not be in a file
  » Might get changed at runtime
• And when you do change it…
  » It may not actually change
  » It might change to an undesirable setting
  » It might affect other settings that you didn't think about
San Francisco 2009
Configuration Code, Files, and Models
Bottom Up
• Scripts & Recipes
  » Hand-grown automation
• Runbooks
  » Workflow, policy
• Frameworks
  » Chef, Puppet, Cfengine
• Build Dependency Systems
  » Maven
Top Down
• Modeled Viewpoints
  » E.g. Microsoft Oslo, UML, Enterprise Architecture
• Modular Containers
  » E.g. OSGi, Spring, Azure roles
• Configuration Models
  » SML, CIM
  » ECML, EDML
San Francisco 2009 52
An Evolution of Automation
• Scripts
  » For automating common cases
• Run-Book Automation
  » Scripts as visual workflow
• Declarative
  » Separate what you want from how you want it done (see the sketch below)
• Finite State Machines
  » Organize scripts into described states & transitions
• Hierarchical Task Networks (Planning)
  » Assemble a plan by exploring hypothetical strategic paths
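To make "declarative" concrete: frameworks such as Chef and Puppet let you state a desired end state and converge to it idempotently. A minimal, hypothetical Python sketch of that style (not any framework's actual API):

import os

def ensure_line(path, line):
    # Declare the desired end state: "this line must exist in this file."
    # Applying the same declaration twice changes nothing (idempotence).
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return "unchanged"  # already converged; do nothing
    with open(path, "a") as f:
        f.write(line + "\n")
    return "converged"

print(ensure_line("/tmp/demo.conf", "max_connections = 100"))  # converged
print(ensure_line("/tmp/demo.conf", "max_connections = 100"))  # unchanged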
San Francisco 2009 53
An Approach to Integrated Design and Ops
San Francisco 2009
WRAP-UP
Cloudy, with a chance of …
San Francisco 2009 55
Revisiting the Cloud Design Reference Architecture
• Service – Big vs. Little Problems; MapReduce & Actors; Amdahl's Law
• Data – persistence vs. management; scale-up vs. scale-out; CAP tradeoffs
• Control – containers, configuration, automation
(Layer diagram: Web / Service / Data / Control / Resource)
San Francisco 2009 56
For More Information
• Hadoop» http://hadoop.apache.org/
• CAP Theorem Proof Paper» http://people.csail.mit.edu/sethg/pubs/BrewersConjecture-SigAct.pdf
• Google’s papers on Distributed & Parallel Computing» http://research.google.com/pubs/DistributedSystemsandParallelComputing.html
• Neil Gunther’s “Taking the Pith out of Performance” Blog» http://perfdynamics.blogspot.com/
• A Comparison of Approaches to Large-Scale Data Analytics» http://database.cs.brown.edu/sigmod09/benchmarks-sigmod09.pdf
• Model-Driven Operations for the Cloud» http://www.stucharlton.com/stuff/oopsla09.pdf