
Replication

Description:
In this session we will introduce the concepts around replica sets in MongoDB, which provide automated failover and recovery of nodes. You’ll learn how to set up, configure, and develop with replica sets, and how to tune consistency and durability according to your application’s requirements. We’ll also review common deployment scenarios.
Replication and Replica Sets – Tyler Brock, Software Engineer, 10gen – Wednesday, March 27, 13
Transcript
Page 1: Replication

Tyler Brock, Software Engineer, 10gen

Replication and Replica Sets


Page 2: Replication

Agenda

• Replica Sets Lifecycle
• Developing with Replica Sets
• Operational Considerations
• Behind the Curtain


Page 3: Replication

Why Replication?

• How many have faced node failures?
• How many have been woken up from sleep to do a fail-over?
• How many have experienced issues due to network latency?
• Different uses for data
  – Normal processing
  – Simple analytics


Page 4: Replication

Replica Set Lifecycle


Page 5: Replication

Replica Set – Creation


Page 6: Replication

Replica Set – Initialize


Page 7: Replication

Replica Set – Failure


Page 8: Replication

Replica Set – Failover


Page 9: Replication

Replica Set – Recovery


Page 10: Replication

Replica Set – Recovered


Page 11: Replication

Replica Set Roles & Configuration


Page 12: Replication

Replica Set Roles

[Diagram: replica set members exchange heartbeats with one another]
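The heartbeats drive failover: a new primary can only be elected when a majority of the set's members is reachable. A minimal sketch of that majority rule (plain JavaScript for illustration, not MongoDB's actual election code):

```javascript
// Sketch of the majority rule behind replica set elections.
// A candidate can become primary only if it can see a strict
// majority of all configured members (including itself).
function canElectPrimary(totalMembers, reachableMembers) {
  const majority = Math.floor(totalMembers / 2) + 1;
  return reachableMembers >= majority;
}

// A 3-member set that loses one node still has a majority:
console.log(canElectPrimary(3, 2)); // true
// A 2-member set that loses one node cannot elect a primary:
console.log(canElectPrimary(2, 1)); // false
```

This is why odd member counts (or an arbiter) are the usual recommendation: they keep a majority available after a single failure.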


Page 13: Replication

> conf = {
    _id : "mySet",
    members : [
      {_id : 0, host : "A", priority : 3},
      {_id : 1, host : "B", priority : 2},
      {_id : 2, host : "C"},
      {_id : 3, host : "D", hidden : true},
      {_id : 4, host : "E", hidden : true, slaveDelay : 3600}
    ]
  }

> rs.initiate(conf)

Configuration Options
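Under this configuration only some members can ever become primary: hidden members are never elected, and among the rest a higher priority wins. A small sketch of that rule (plain JavaScript mirroring the conf above; this is illustrative, not MongoDB's election logic, and in a real deployment hidden members must also carry priority 0):

```javascript
// Which members of a replica set config are eligible to become primary?
// Hidden members are never elected; unset priority defaults to 1.
function electableMembers(conf) {
  return conf.members
    .filter(m => !m.hidden && (m.priority === undefined || m.priority > 0))
    .sort((a, b) => (b.priority || 1) - (a.priority || 1)) // highest priority first
    .map(m => m.host);
}

const conf = { _id: "mySet", members: [
  { _id: 0, host: "A", priority: 3 },
  { _id: 1, host: "B", priority: 2 },
  { _id: 2, host: "C" },                                   // default priority 1
  { _id: 3, host: "D", hidden: true },                     // analytics node
  { _id: 4, host: "E", hidden: true, slaveDelay: 3600 },   // delayed backup node
]};

console.log(electableMembers(conf)); // [ "A", "B", "C" ]
```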


Page 14: Replication


Configuration Options

Primary DC


Page 15: Replication


Configuration Options

Secondary DC (default priority = 1)


Page 16: Replication


Configuration Options

Analytics node


Page 17: Replication


Configuration Options

Backup node


Page 18: Replication

Developing with Replica Sets


Page 19: Replication

Strong Consistency

[Diagram: reads and writes both go to the primary; data replicates to the secondaries]


Page 20: Replication

Delayed Consistency

[Diagram: writes go to the primary; reads are served by the secondaries after replication]


Page 21: Replication

Read Preference Modes

• 5 modes (new in 2.2)
  – primary (only) – default
  – primaryPreferred
  – secondary
  – secondaryPreferred
  – nearest

In every mode except primary, when more than one node is eligible, the closest node (by ping time) is used for reads.
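That selection can be sketched as follows (plain JavaScript, illustrative only; real drivers also apply a latency window around the nearest node rather than always picking a single minimum):

```javascript
// Pick a read target: under secondaryPreferred, prefer secondaries and
// fall back to the primary; under nearest, consider every member.
// Among candidates, choose the lowest measured ping time.
function pickReadNode(nodes, mode) {
  const secondaries = nodes.filter(n => n.state === "SECONDARY");
  let candidates;
  if (mode === "secondaryPreferred") {
    candidates = secondaries.length
      ? secondaries
      : nodes.filter(n => n.state === "PRIMARY");
  } else if (mode === "nearest") {
    candidates = nodes;
  } else {
    throw new Error("mode not covered in this sketch");
  }
  return candidates.reduce((best, n) => (n.pingMs < best.pingMs ? n : best)).host;
}

const nodes = [
  { host: "A", state: "PRIMARY",   pingMs: 2 },
  { host: "B", state: "SECONDARY", pingMs: 15 },
  { host: "C", state: "SECONDARY", pingMs: 40 },
];

console.log(pickReadNode(nodes, "secondaryPreferred")); // "B"
console.log(pickReadNode(nodes, "nearest"));            // "A"
```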


Page 22: Replication

Write Concern

• Network acknowledgement
• Wait for error
• Wait for journal sync
• Wait for replication
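"Wait for replication" with w: n means the write is acknowledged only once n members (the primary included) have applied it. A toy model of that check (plain JavaScript with made-up opTime numbers, not the server's getLastError implementation):

```javascript
// Toy model of { w: n } write concern: a write issued at opTime `t`
// is acknowledged once at least n members have applied opTime >= t.
function writeConcernSatisfied(members, t, w) {
  return members.filter(m => m.appliedOpTime >= t).length >= w;
}

const members = [
  { host: "A", appliedOpTime: 105 }, // primary
  { host: "B", appliedOpTime: 105 }, // caught-up secondary
  { host: "C", appliedOpTime: 98 },  // lagging secondary
];

console.log(writeConcernSatisfied(members, 105, 2)); // true
console.log(writeConcernSatisfied(members, 105, 3)); // false, until C catches up
```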


Page 23: Replication

Tagging

• New in 2.0.0
• Control where data is written to, and read from
• Each member can have one or more tags
  – tags: {dc: "ny"}
  – tags: {dc: "ny", subnet: "192.168", rack: "row3rk7"}
• Replica set defines rules for write concerns
• Rules can change without changing app code


Page 24: Replication

{ _id : "mySet",
  members : [
    {_id : 0, host : "A", tags : {"dc": "ny"}},
    {_id : 1, host : "B", tags : {"dc": "ny"}},
    {_id : 2, host : "C", tags : {"dc": "sf"}},
    {_id : 3, host : "D", tags : {"dc": "sf"}},
    {_id : 4, host : "E", tags : {"dc": "cloud"}}
  ],
  settings : {
    getLastErrorModes : {
      allDCs : {"dc" : 3},
      someDCs : {"dc" : 2}
    }
  }
}

> db.blogs.insert({...})
> db.runCommand({getLastError : 1, w : "someDCs"})

Tagging Example
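In the getLastErrorModes above, someDCs : {"dc" : 2} reads as "acknowledged by members covering at least 2 distinct values of the dc tag". A sketch of that counting rule (plain JavaScript mirroring the member tags above, not the server's implementation):

```javascript
// Does a set of acknowledging members satisfy a getLastErrorMode such as
// { dc: 2 }, i.e. cover at least 2 distinct values of the "dc" tag?
function modeSatisfied(ackedMembers, mode) {
  return Object.entries(mode).every(([tag, needed]) => {
    const values = new Set(
      ackedMembers.map(m => m.tags[tag]).filter(v => v !== undefined)
    );
    return values.size >= needed;
  });
}

const someDCs = { dc: 2 };
const acked = [
  { host: "A", tags: { dc: "ny" } },
  { host: "B", tags: { dc: "ny" } },
];
console.log(modeSatisfied(acked, someDCs)); // false: both acks are in "ny"

acked.push({ host: "C", tags: { dc: "sf" } });
console.log(modeSatisfied(acked, someDCs)); // true: "ny" and "sf" are covered
```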


Page 25: Replication

Tagged Read Preference

• Custom read preferences
• Control where you read from by (node) tags
  – E.g. { "disk": "ssd", "use": "reporting" }
• Use in conjunction with standard read preferences
  – Except primary
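Matching a tagged read preference is a subset test: a member qualifies if every key/value pair in the tag document appears in that member's tags. A minimal sketch (plain JavaScript with hypothetical member data):

```javascript
// A member matches a read-preference tag document when it carries
// every key/value pair in that document.
function matchesTags(member, tagDoc) {
  return Object.entries(tagDoc).every(([k, v]) => member.tags[k] === v);
}

const members = [
  { host: "A", tags: { disk: "ssd",      use: "production" } },
  { host: "B", tags: { disk: "ssd",      use: "reporting" } },
  { host: "C", tags: { disk: "spinning", use: "reporting" } },
];

const pref = { disk: "ssd", use: "reporting" };
console.log(members.filter(m => matchesTags(m, pref)).map(m => m.host)); // [ "B" ]
```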


Page 26: Replication

Operational Considerations


Page 27: Replication

Maintenance and Upgrade

• No downtime
• Rolling upgrade/maintenance
  – Start with a secondary
  – Primary last


Page 28: Replication

Replica Set – 1 Data Center

• Single datacenter
• Single switch & power
• Points of failure:
  – Power
  – Network
  – Data center
  – Two node failure
• Automatic recovery of single node crash


Page 29: Replication

Replica Set – 2 Data Centers

• Multi data center
• DR node for safety


Page 30: Replication

Replica Set – 3 Data Centers

• Three data centers
• Can survive full data center loss
• Can do w = { dc : 2 } to guarantee a write in 2 data centers (with tags)


Page 31: Replication

Behind the Curtain


Page 32: Replication

Implementation details

• Heartbeat every 2 seconds
  – Times out in 10 seconds
• Local DB (not replicated)
  – system.replset
  – oplog.rs
    • Capped collection
    • Idempotent version of operation stored


Page 33: Replication

> db.replsettest.insert({_id : 1, value : 1})
{ "ts" : Timestamp(1350539727000, 1),
  "h" : NumberLong("6375186941486301201"),
  "op" : "i",
  "ns" : "test.replsettest",
  "o" : { "_id" : 1, "value" : 1 } }

> db.replsettest.update({_id : 1}, {$inc : {value : 10}})
{ "ts" : Timestamp(1350539786000, 1),
  "h" : NumberLong("5484673652472424968"),
  "op" : "u",
  "ns" : "test.replsettest",
  "o2" : { "_id" : 1 },
  "o" : { "$set" : { "value" : 11 } } }

Op(erations) Log is idempotent
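Note how the $inc was rewritten as a $set to the resulting value: replaying the entry is therefore harmless. A sketch of replaying such an entry (plain JavaScript, handling only the $set form shown above):

```javascript
// Apply the "o" part of an update oplog entry to a document.
// Only the { $set: ... } form shown above is handled here.
function applyOplogSet(doc, op) {
  return { ...doc, ...op.$set };
}

let doc = { _id: 1, value: 1 };
const entry = { op: "u", o2: { _id: 1 }, o: { $set: { value: 11 } } };

doc = applyOplogSet(doc, entry.o);
console.log(doc.value); // 11
doc = applyOplogSet(doc, entry.o); // replaying the same entry is safe
console.log(doc.value); // still 11, not 21
```

Had the raw $inc been logged instead, a replay would have produced 21, which is exactly why the oplog stores the idempotent form.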


Page 34: Replication

> db.replsettest.update({}, {$set : {name : "foo"}}, false, true)
{ "ts" : Timestamp(1350540395000, 1),
  "h" : NumberLong("-4727576249368135876"),
  "op" : "u",
  "ns" : "test.replsettest",
  "o2" : { "_id" : 2 },
  "o" : { "$set" : { "name" : "foo" } } }
{ "ts" : Timestamp(1350540395000, 2),
  "h" : NumberLong("-7292949613259260138"),
  "op" : "u",
  "ns" : "test.replsettest",
  "o2" : { "_id" : 3 },
  "o" : { "$set" : { "name" : "foo" } } }
{ "ts" : Timestamp(1350540395000, 3),
  "h" : NumberLong("-1888768148831990635"),
  "op" : "u",
  "ns" : "test.replsettest",
  "o2" : { "_id" : 1 },
  "o" : { "$set" : { "name" : "foo" } } }

Single operation can have many entries
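The multi-update above matched three documents, so the oplog records three per-document entries, each pinned to a specific _id so that replay stays idempotent. A sketch of that expansion (plain JavaScript, illustrative only):

```javascript
// Expand one multi-update into per-document oplog entries,
// each targeting a single _id.
function oplogEntriesForMultiUpdate(matchedDocs, setFields, ns) {
  return matchedDocs.map(d => ({
    op: "u",
    ns: ns,
    o2: { _id: d._id },
    o: { $set: setFields },
  }));
}

const matched = [{ _id: 2 }, { _id: 3 }, { _id: 1 }];
const entries = oplogEntriesForMultiUpdate(matched, { name: "foo" }, "test.replsettest");
console.log(entries.length);  // 3
console.log(entries[0].o2);   // { _id: 2 }
```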


Page 35: Replication

What’s New in 2.2

• Read preference support with sharding
  – Drivers too
• Improved replication over WAN/high-latency networks
• rs.syncFrom command


Page 36: Replication

Just Use It

• Use replica sets
• Easy to set up
  – Try on a single machine
• Check the doc page for RS tutorials
  – http://docs.mongodb.org/manual/replication/#tutorials


Page 37: Replication

Tyler Brock, Software Engineer, 10gen

Thank You
