Page 1:

The Google File System

Ghemawat, Gobioff, Leung

via Kris Molendyke, CSE498 WWW Search Engines, Lehigh University

Page 2:

Operating Environment: Component Failure

- Component failure is the norm, not the exception
- Expect:
  - Application and OS bugs
  - Human error
  - Hardware failures: disks, memory, power supplies
- Need for:
  - Constant monitoring
  - Error detection
  - Fault tolerance
  - Automatic recovery

Page 3:

Non-standard data needs

- Files are huge: typically multi-GB, with data sets of many TB
- Cannot deal with typical file sizes: that would mean billions of KB-sized files and serious I/O problems
- Must rethink block sizes

Page 4:

Non-standard writing

- File mutation: data is often appended rather than overwritten
- Once written, a file is typically read-only and read sequentially
- Optimization focuses on appending to large files and on atomicity, not on caching

Page 5:

File System API & Applications

- Applications and the API are co-designed, which increases flexibility
- Goal is a simple file system that places a light burden on applications
- Atomic append is built into the API so that multiple clients can append with minimal synchronization

Page 6:

Design Assumptions

- Built from cheap commodity hardware
- Expect large files: 100 MB to many GB
- Support large streaming reads and small random reads
- Support large, sequential file appends
- Support producer-consumer queues for many-way merging and file atomicity
- Sustain high bandwidth by writing data in bulk

Page 7:

Interface and operations

- Interface resembles a standard file system: hierarchical directories and pathnames
- Standard operations such as create, delete, open, close, read, write
- Non-standard operations (as sketched below):
  - Snapshot: low-cost copy of a directory tree or file
  - Record append: allows multiple clients to append data concurrently while guaranteeing atomicity without blocking (more later)
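
A minimal sketch (in Python) of what a client-facing interface along these lines might look like; the class and method names are assumptions for illustration, not GFS's actual client library.

    # Illustrative client-facing interface only; names and signatures are assumed.
    class FileHandle:
        def __init__(self, path: str):
            self.path = path

    class GFSLikeClient:
        # Standard operations
        def create(self, path: str) -> None: ...
        def delete(self, path: str) -> None: ...
        def open(self, path: str) -> FileHandle: ...
        def close(self, handle: FileHandle) -> None: ...
        def read(self, handle: FileHandle, offset: int, length: int) -> bytes: ...
        def write(self, handle: FileHandle, offset: int, data: bytes) -> None: ...

        # Non-standard operations
        def snapshot(self, src_path: str, dst_path: str) -> None:
            """Low-cost copy of a file or directory tree."""
        def record_append(self, handle: FileHandle, data: bytes) -> int:
            """Append data atomically at least once; GFS chooses and returns the offset."""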

Page 8:

GFS Architecture

- 1 master, at least 1 chunkserver, many clients

Page 9:

Master

- Responsibilities
  - Maintain metadata: namespace, access control info, mappings from files to chunks, chunk locations (sketched below)
  - Activity control: chunk lease management, garbage collection, chunk migration
  - Communication with chunkservers: heartbeats for periodic syncing
- Lack of data caching
  - Block size is too big (64 MB)
  - Applications stream data (clients do cache metadata, though)
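
A rough sketch of the kind of in-memory state this implies for the master; the field names are illustrative assumptions, not the actual GFS data structures.

    # Illustrative master state; not the real GFS implementation.
    from dataclasses import dataclass, field
    from typing import Dict, List

    ChunkHandle = int  # globally unique 64-bit identifier assigned by the master

    @dataclass
    class ChunkInfo:
        version: int                                          # used for stale-replica detection
        locations: List[str] = field(default_factory=list)    # chunkserver addresses (not persisted)

    @dataclass
    class FileInfo:
        chunks: List[ChunkHandle] = field(default_factory=list)   # file -> ordered chunk handles

    @dataclass
    class MasterState:
        namespace: Dict[str, FileInfo] = field(default_factory=dict)       # path -> file metadata
        chunks: Dict[ChunkHandle, ChunkInfo] = field(default_factory=dict)

    state = MasterState()
    state.chunks[0x1A2B] = ChunkInfo(version=1, locations=["cs1", "cs2", "cs3"])
    state.namespace["/logs/part-0001"] = FileInfo(chunks=[0x1A2B])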

Page 10:

Chunks

- Identified by 64-bit chunk handles
  - Globally unique, assigned by the master at chunk creation time
- Triple redundancy: copies of each chunk are kept on multiple chunkservers
- Chunk size: 64 MB! (offset math sketched below)
  - Stored as a plain Linux file
- Advantages of the large chunk size
  - Reduces reads/writes and communication between clients and the master
  - Reduces network overhead (persistent TCP connection to the chunkserver)
  - Reduces the size of metadata on the master
- Disadvantage
  - Hotspots: small files of just one chunk that many clients need
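
Because chunks are fixed-size, mapping a file offset to a chunk is simple arithmetic. A minimal sketch, assuming the 64 MB chunk size above:

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB

    def locate(byte_offset: int) -> tuple[int, int]:
        chunk_index = byte_offset // CHUNK_SIZE       # which chunk of the file
        offset_in_chunk = byte_offset % CHUNK_SIZE    # where inside that chunk
        return chunk_index, offset_in_chunk

    # Example: byte 200,000,000 of a file lives in the third chunk (index 2).
    print(locate(200_000_000))  # -> (2, 65782272)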

Page 11:

Metadata

- Three main types kept in memory:
  - File and chunk namespaces
  - Mappings from files to chunks
  - Locations of each chunk's replicas
- The first two are also kept persistently in log files to ensure reliability and recoverability
- Chunk locations are held by the chunkservers; the master polls them for locations via heartbeats occasionally (and on startup)

Page 12:

What About Consistency?

- GFS guarantees
  - File namespace mutations are atomic (e.g., file creation)
- File regions
  - Consistent: all clients see the same data, regardless of which replicated chunk they read from
  - Defined: after a data mutation (write or record append), a region is defined if it is consistent and all clients see what the mutation wrote in its entirety

Page 13:

System Operations

- Leases (bookkeeping sketched below)
  - Mutations are applied at all of a chunk's replicas
  - The master grants a lease to one replica, the primary, which then decides a serial order for updates to the replicas
  - Lease timeout is 60 seconds, but it can be extended via the heartbeat messages exchanged with the master
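
A tiny sketch of lease bookkeeping on the master, assuming the 60-second timeout above; the class and names are illustrative, not the real implementation.

    import time

    LEASE_SECONDS = 60

    class Lease:
        def __init__(self, primary: str):
            self.primary = primary
            self.expires_at = time.monotonic() + LEASE_SECONDS

        def is_valid(self) -> bool:
            return time.monotonic() < self.expires_at

        def extend(self) -> None:
            # Extensions would normally be piggybacked on heartbeat replies.
            self.expires_at = time.monotonic() + LEASE_SECONDS

    lease = Lease(primary="chunkserver-07")   # hypothetical server name
    assert lease.is_valid()
    lease.extend()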

Page 14:

Write Control & Data Flow

1. Client asks the master which chunkservers hold the chunk
2. Master replies with the primary and secondaries (client caches this)
3. Client pushes data to all replicas
4. Client sends the write request to the primary
5. Primary forwards the request serially to the secondaries
6. Secondaries report back to the primary once the write completes
7. Primary replies to the client (this flow is sketched below)
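
An in-memory toy of the same flow; steps 1-2 (the master lookup) are collapsed into a pre-computed reply, and the class and method names are invented for the sketch, not real GFS APIs.

    class FakeChunkserver:
        def __init__(self, name, secondaries=None):
            self.name = name
            self.secondaries = secondaries or []
            self.buffer = {}   # chunk handle -> pushed-but-uncommitted data
            self.chunks = {}   # chunk handle -> committed bytes

        def push_data(self, handle, data):            # step 3: data flow
            self.buffer[handle] = data

        def apply_write(self, handle):                # commit buffered data locally
            self.chunks[handle] = self.chunks.get(handle, b"") + self.buffer.pop(handle)

        def write(self, handle):                      # steps 4-7, runs on the primary
            self.apply_write(handle)                  # primary picks the serial order
            for s in self.secondaries:                # step 5: forward serially
                s.apply_write(handle)                 # step 6: secondaries complete
            return "ok"                               # step 7: reply to the client

    def client_write(master_reply, handle, data):
        primary, secondaries = master_reply           # steps 1-2 (cached master reply)
        for server in [primary, *secondaries]:        # step 3: push data to all replicas
            server.push_data(handle, data)
        return primary.write(handle)                  # step 4: send the write to the primary

    secondaries = [FakeChunkserver("cs2"), FakeChunkserver("cs3")]
    primary = FakeChunkserver("cs1", secondaries)
    print(client_write((primary, secondaries), handle=42, data=b"hello"))  # -> ok

Note how the data push (step 3) is separate from the commit (steps 4-7); that separation is what the next slide calls decoupling data flow from control flow.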

Page 15:

Decoupling

- Data flow and control flow are separated
- Use the network efficiently: data is pipelined linearly along a chain of chunkservers
- Goals
  - Fully utilize each machine's network bandwidth
  - Avoid network bottlenecks and high-latency links
- The pipeline is "carefully chosen": each machine forwards to the closest machine that has not yet received the data, with distances estimated from IP addresses (a chaining sketch follows)
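
A toy sketch of that chaining idea: each hop forwards to the nearest remaining receiver. The distance function below, which just compares IP addresses numerically, is a deliberately crude stand-in for the real topology estimate.

    import ipaddress

    def distance(a: str, b: str) -> int:
        # Crude proxy: numeric gap between addresses stands in for topology distance.
        return abs(int(ipaddress.ip_address(a)) - int(ipaddress.ip_address(b)))

    def build_chain(source: str, receivers: list[str]) -> list[str]:
        chain, remaining, current = [], set(receivers), source
        while remaining:
            nxt = min(remaining, key=lambda ip: distance(current, ip))
            chain.append(nxt)
            remaining.remove(nxt)
            current = nxt
        return chain

    print(build_chain("10.0.0.1", ["10.0.2.5", "10.0.0.9", "10.0.1.3"]))
    # -> ['10.0.0.9', '10.0.1.3', '10.0.2.5']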

Page 16:

Atomic record appends

- Record append: GFS appends the data to the file at least once atomically (as a single continuous sequence of bytes)
- Similar to Unix O_APPEND, but without race-condition concerns when faced with multiple concurrent writers
- Extra logic implements this behavior (sketched below):
  - The primary checks whether appending the data would exceed the current chunk boundary
  - If so, it pads the current chunk out to the chunk size and tells the client to retry on the next chunk
  - When the data fits in the chunk, the primary writes it and tells the replicas to write at the exact same offset
  - The primary keeps the replicas in sync with itself
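
A sketch of the primary's decision, under the 64 MB chunk size above; the return convention is an assumption for illustration.

    CHUNK_SIZE = 64 * 1024 * 1024

    def record_append(chunk_used: int, record: bytes):
        """Return ('retry', padding) if the record does not fit in the current chunk,
        or ('written', offset) with the offset all replicas must use."""
        if chunk_used + len(record) > CHUNK_SIZE:
            padding = CHUNK_SIZE - chunk_used
            # Pad the current chunk to its full size; the client retries on the next chunk.
            return ("retry", padding)
        offset = chunk_used
        # Write locally, then tell the secondaries to write at exactly this offset.
        return ("written", offset)

    print(record_append(chunk_used=CHUNK_SIZE - 100, record=b"x" * 500))  # -> ('retry', 100)
    print(record_append(chunk_used=1024, record=b"x" * 500))              # -> ('written', 1024)

The retry path is why an appended record may appear with padding, or more than once, in some replicas; "at least once atomically" is the guarantee, and applications are expected to tolerate the rest.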

Page 17:

Master operation

- Executes all namespace operations
- Manages chunk replicas throughout the system
  - Placement decisions
  - Creation
  - Replication
  - Load balancing across chunkservers
- Reclaims unused storage throughout the system

Page 18:

Replica Placement Issues

- Worries
  - Hundreds of chunkservers spread over many machine racks
  - Hundreds of clients accessing them from any rack
  - The aggregate bandwidth in and out of a rack is less than the aggregate bandwidth of all the machines on the rack
- Want to distribute replicas to maximize data:
  - Scalability
  - Reliability
  - Availability

Page 19:

Where to Put a Chunk

- Considerations (scoring sketched below)
  - It is desirable to place a chunk on a chunkserver with below-average disk space utilization
  - Limit the number of recent creations on each chunkserver, to avoid heavy write traffic
  - Spread chunks across multiple racks
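
A toy version of these placement considerations: prefer servers with low disk utilization and few recent creations, and keep replicas on distinct racks. The fields and tie-breaking are illustrative assumptions, not GFS's actual policy.

    from dataclasses import dataclass

    @dataclass
    class Chunkserver:
        name: str
        rack: str
        disk_utilization: float   # 0.0 - 1.0
        recent_creations: int

    def pick_servers(candidates: list[Chunkserver], copies: int = 3) -> list[Chunkserver]:
        chosen, used_racks = [], set()
        for cs in sorted(candidates, key=lambda c: (c.disk_utilization, c.recent_creations)):
            if cs.rack in used_racks:
                continue              # spread replicas across racks
            chosen.append(cs)
            used_racks.add(cs.rack)
            if len(chosen) == copies:
                break
        return chosen

    servers = [Chunkserver("cs1", "rackA", 0.30, 2), Chunkserver("cs2", "rackA", 0.25, 9),
               Chunkserver("cs3", "rackB", 0.40, 1), Chunkserver("cs4", "rackC", 0.55, 0)]
    print([c.name for c in pick_servers(servers)])  # -> ['cs2', 'cs3', 'cs4']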

Page 20:

Re-replication & Rebalancing

- Re-replication occurs when the number of available chunk replicas falls below a user-defined limit
- When can this occur? (see the trigger sketch below)
  - A chunkserver becomes unavailable
  - Corrupted data in a chunk
  - Disk error
  - The replication limit is increased
- Rebalancing is done periodically by the master
  - The master examines the current replica distribution and moves replicas to even out disk space usage
  - Its gradual nature avoids swamping any one chunkserver
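
A small sketch of the trigger: a chunk needs re-replication when its live replica count falls below its limit, and chunks furthest below the limit can be handled first. The data layout is an assumption for illustration.

    def needs_rereplication(live_replicas: int, replication_goal: int) -> bool:
        return live_replicas < replication_goal

    def rereplication_queue(chunks: dict[int, tuple[int, int]]) -> list[int]:
        """chunks maps chunk handle -> (live replicas, replication goal);
        returns handles ordered with the most under-replicated first."""
        needy = [(goal - live, handle) for handle, (live, goal) in chunks.items() if live < goal]
        return [handle for _, handle in sorted(needy, reverse=True)]

    print(rereplication_queue({1: (3, 3), 2: (1, 3), 3: (2, 3)}))  # -> [2, 3]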

Page 21:

Collecting Garbage

- Lazy (sketched below)
  - Update the master log immediately, but...
  - Do not reclaim resources for a while (the lazy part)
  - The deleted file is first renamed to a hidden name, not removed
  - Periodic scans remove hidden files more than a few days old (user-defined), freeing their chunks
- Orphans
  - Chunks that exist on chunkservers but that the master has no metadata for
  - The chunkserver can freely delete these
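
A rough sketch of the lazy scheme: deletion only hides the file under a timestamped name, and a later scan reclaims anything older than the grace period. The naming convention and the three-day grace period here are illustrative assumptions.

    import time

    GRACE_SECONDS = 3 * 24 * 3600     # "more than a few days old"

    def delete(namespace: dict, path: str) -> None:
        # Logged immediately, but only renamed to a hidden, timestamped name.
        namespace[f".deleted{path}@{int(time.time())}"] = namespace.pop(path)

    def gc_scan(namespace: dict, now: float) -> list[str]:
        reclaimed = []
        for name in list(namespace):
            if name.startswith(".deleted"):
                deleted_at = int(name.rsplit("@", 1)[1])
                if now - deleted_at > GRACE_SECONDS:
                    namespace.pop(name)          # resources reclaimed here
                    reclaimed.append(name)
        return reclaimed

    ns = {"/logs/part-0001": ["chunk-1", "chunk-2"]}   # hypothetical file -> chunks
    delete(ns, "/logs/part-0001")
    print(gc_scan(ns, now=time.time() + 4 * 24 * 3600))   # reclaimed after 4 days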

Page 22:

Lazy Garbage Pros/Cons

- Pros
  - Simple and reliable
  - Makes storage reclamation a background activity of the master, done in batches when the master has time (letting the master prioritize responding to clients)
  - A simple safety net against accidental, irreversible deletion
- Con
  - The delay in deletion can make it harder to fine-tune storage use in tight situations

Page 23:

Stale replica detection

- Missed mutations on a chunk replica make it stale
- A chunk version number makes it possible to detect stale replicas (see the sketch below)
- Granting a new lease increments the chunk version number
- Stale replicas are removed during garbage collection
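
The version check itself is simple; the sketch below assumes the master records the latest version for each chunk and compares it against what each replica reports.

    def stale_replicas(master_version: int, replica_versions: dict[str, int]) -> list[str]:
        """Return the chunkservers whose copy of the chunk is behind the master's
        recorded version (they missed one or more mutations)."""
        return [server for server, v in replica_versions.items() if v < master_version]

    # Example: cs2 missed the mutation performed under the latest lease.
    print(stale_replicas(7, {"cs1": 7, "cs2": 6, "cs3": 7}))  # -> ['cs2']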

Page 24:

Fault Tolerance

- Cannot trust hardware
- Goal is high availability. How?
  - Fast recovery
  - Replication
- Fast recovery
  - The master and chunkservers can restore their state and restart in a matter of seconds, regardless of failure type
  - No distinction is made between failure types at all
- Chunk replication
  - Default is to keep three replicas, spread across different racks

Page 25:

More than One Master!

- Master replication
  - Really only one master is in control; the replicas are nearly-synchronous heirs to the throne
  - Replicas are spread over multiple machines
  - After a failure, a new master can restart almost instantly
  - Failover is controlled by a monitoring infrastructure outside of GFS
- Shadow master
  - Provides read-only access to the file system when the master is down
  - May lag slightly, hence "shadow", not "mirror"

Page 26:

Data Integrity

- Chunkservers use checksumming (see the sketch below)
- Each chunkserver is responsible for verifying its own copies of the data
- Chunkservers will not propagate errors
  - They will not return data to the requester on a checksum failure
  - Bad chunks are collected with the garbage
  - The master replicates a good copy of the chunk to replace the bad one
- Little effect on performance
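
A sketch of per-block checksum verification on a chunkserver. The 64 KB block size and the use of CRC32 here are assumptions for the sketch, not necessarily the exact GFS scheme.

    import zlib

    BLOCK_SIZE = 64 * 1024   # checksum granularity within a chunk

    def checksums(chunk: bytes) -> list[int]:
        return [zlib.crc32(chunk[i:i + BLOCK_SIZE]) for i in range(0, len(chunk), BLOCK_SIZE)]

    def read_block(chunk: bytes, stored: list[int], block_index: int) -> bytes:
        start = block_index * BLOCK_SIZE
        block = chunk[start:start + BLOCK_SIZE]
        if zlib.crc32(block) != stored[block_index]:
            # Do not return bad data; the master will re-replicate a good copy.
            raise IOError(f"checksum mismatch in block {block_index}")
        return block

    data = b"some chunk contents" * 10000
    sums = checksums(data)
    print(len(read_block(data, sums, 0)))   # -> 65536 (a verified block)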

Page 27:

Performance Environment

- GFS cluster: 1 master, 2 master replicas, 16 chunkservers, 16 clients
- Hardware per machine: dual 1.4 GHz PIII, 2 GB memory, two 80 GB disks (5400 RPM), 100 Mbps connection
- Two switches: the 19 GFS machines connected to one, the 16 clients to the other, with the switches connected by a 1 Gbps link

Page 28:

Aggregate Throughput

- Reads (random 4 MB region from a 320 GB file set, 256 times per client): 10 MB/s with 1 client, 94 MB/s aggregate with 16 clients
- Writes (1 GB written by each client in 1 MB writes): 6.3 MB/s with 1 client, 35 MB/s aggregate with 16 clients
- Record appends (to a single file): 6 MB/s with 1 client, 4.8 MB/s aggregate with 16 clients

Page 29:

Real World Clusters

- Cluster stats: performance after 1 week
  - Breakdown of % operations performed and % master request types
- Cluster X: research and development machine; Cluster Y: production data processing
- [Chart values are not recoverable from this transcript]

Page 30:

Final Issues

- More infrastructure is needed to keep users from interfering with each other
- Linux and hard-drive problems caused by mismatched IDE protocol versions silently corrupted data, which motivated checksumming
- Additional concerns with the cost of certain Linux kernel calls and with locking issues

Page 31:

Conclusions

- GFS is radically different from a traditional file system
  - Component failure is normal
  - Optimize for huge files
  - Append as much as possible
- Much attention is paid to monitoring, replication, and recovery
- Control flow and data flow are separated
- Master involvement in common operations is minimized to avoid a bottleneck
- The design is a success and is widely used within Google

