
Ceph Day Amsterdam 2015 - Ceph over IPv6

Page 1: Ceph Day Amsterdam 2015 - Ceph over IPv6

Ceph over IPv6

Page 2: Ceph Day Amsterdam 2015 - Ceph over IPv6

Who am I?

● Wido den Hollander (1986)
● Co-owner and CTO of PCextreme B.V., a Dutch hosting company
● Ceph trainer and consultant at 42on B.V.
● Part of the Ceph community since late 2009

– Wrote the Apache CloudStack integration

– libvirt RBD storage pool support

– PHP and Java bindings for librados

● IPv6 fan :-)

Page 3: Ceph Day Amsterdam 2015 - Ceph over IPv6

What is 42on?

● Consultancy company focused on Ceph and its ecosystem
● Founded in 2012
● Based in the Netherlands
● I'm the only employee
– My consultancy company

Page 4: Ceph Day Amsterdam 2015 - Ceph over IPv6

IPv6

Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 is intended to replace IPv4.

Source: Wikipedia, "IPv6"

Page 5: Ceph Day Amsterdam 2015 - Ceph over IPv6

Why do we need IPv6?

● IPv4 is running out
– ~3.2 billion addresses available for the whole planet
● 7 billion people on the planet
● >16 billion devices connected to the internet
● The Internet was designed to be peer-to-peer; NAT breaks that whole principle
– I see NAT as the evil of the Internet
– NAT is NOT a firewall

Page 6: Ceph Day Amsterdam 2015 - Ceph over IPv6

My IPv6 experience

● Deployed my first IPv6 tunnel in 2009
– Using SixXS as a tunnel broker
● Enabled IPv6 on my personal websites in 2010
● My office has had native IPv6 since 2012
– Thanks, XS4All!
● My home has had native IPv6 since summer 2014
– Thanks, ZeelandNet!
● I now try to deploy as many IPv6-only servers as possible

Page 7: Ceph Day Amsterdam 2015 - Ceph over IPv6

Ceph over IPv6

Page 8: Ceph Day Amsterdam 2015 - Ceph over IPv6

Ceph over IPv6

● It just works
– Add 'ms bind ipv6 = true' to ceph.conf (example below)

● Monitors, OSDs and librados support IPv6 properly

● Public and Cluster networks work as they should
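
For reference, a minimal ceph.conf sketch with IPv6 enabled. Only 'ms bind ipv6 = true' and the bracketed mon_host notation come from these slides; the monitor addresses and network prefixes (taken from the 2001:db8::/32 documentation space) are illustrative placeholders:

```ini
[global]
ms bind ipv6 = true
# Bare IPv6 addresses in mon_host need [] around them
# (see the ceph-deploy issue later in this deck); placeholder addresses.
mon_host = [2001:db8::10],[2001:db8::11],[2001:db8::12]
# Public and cluster networks as IPv6 prefixes (placeholder prefixes).
public network = 2001:db8:0:1::/64
cluster network = 2001:db8:0:2::/64
```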

Page 9: Ceph Day Amsterdam 2015 - Ceph over IPv6

Why?

● No more issues trying to find available space in RFC1918 ranges (10.0.0.0/8, 192.168.0.0/16, ..)

● Use top-of-rack Layer 3 routing to route traffic between racks
– No more large flat Layer 2 networks
● Use SLAAC (Auto-configuration) for OSDs and clients
● Ceph is the future, and so is IPv6! Why not combine them?

Page 10: Ceph Day Amsterdam 2015 - Ceph over IPv6

Dual-Stack

● Does not work
● Choose IPv4 or IPv6

– The OSDMap can only contain one address per OSD

– Hard, very hard, to switch after deployment

Page 11: Ceph Day Amsterdam 2015 - Ceph over IPv6

Top of rack routing

● Each top of rack switch is a Layer 3 router
– No more spanning-tree or Layer 2 loops
● Each rack has a /64 subnet assigned
– Available space is 'unlimited'
– Based on the IP address you know which rack a host is in (see the sketch below)
● Using OSPF or BGP, racks can find routes to other racks
– No need for a central core; the network can be distributed
– Easy to connect other datacenters, networks and/or customers
● Facebook uses this in their new network design; internally they are almost IPv6-only
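
As a small aside, here is a Python sketch of what "know the rack from the address" can look like, assuming a hypothetical site-wide /48 (using the 2001:db8::/32 documentation prefix) with the rack number encoded in the fourth hextet. The prefix and helper name are my own illustration, not from the talk:

```python
import ipaddress

# Hypothetical plan: each rack gets 2001:db8:0:<rack>::/64 out of a site /48.
SITE_PREFIX = ipaddress.ip_network("2001:db8::/48")

def rack_of(host_addr: str) -> int:
    """Return the rack number encoded in a host's IPv6 address."""
    addr = ipaddress.ip_address(host_addr)
    if addr not in SITE_PREFIX:
        raise ValueError(f"{addr} is not inside {SITE_PREFIX}")
    # Bits 48-63 of the address hold the /64 subnet ID, i.e. the rack number.
    return (int(addr) >> 64) & 0xFFFF

print(rack_of("2001:db8:0:2a:52e5:49ff:fec2:c976"))  # rack 42 (0x2a)
```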

Page 12: Ceph Day Amsterdam 2015 - Ceph over IPv6

Top of rack routing

Page 13: Ceph Day Amsterdam 2015 - Ceph over IPv6

Top of rack routing

Page 14: Ceph Day Amsterdam 2015 - Ceph over IPv6

Ethernet drives

● Seagate Kinetic is an Ethernet-connected drive
– In the future your OSDs might run on the drive itself
● Ethernet drives can reach high density per rack; ~250 IPs per rack won't be enough
– 1.844674407×10¹⁹ addresses should be sufficient, right? That is a /64 subnet
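
A quick back-of-the-envelope check of that figure; the comparison against an RFC1918 /24 is my own addition:

```python
# A /64 subnet offers 2^64 addresses; compare with the ~254 usable
# hosts of a typical RFC1918 /24 per rack.
slaac_addresses = 2 ** 64
print(f"{slaac_addresses:,}")          # 18,446,744,073,709,551,616
print(f"{slaac_addresses / 254:.2e}")  # 7.26e+16 times a /24
```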

Page 15: Ceph Day Amsterdam 2015 - Ceph over IPv6

Ethernet drives

● 12 3.5” drives in 1U
● 44 machines per rack
● 528 drives per rack
● 528 addresses per rack
– Hard to do with RFC1918

Page 16: Ceph Day Amsterdam 2015 - Ceph over IPv6

Issues?

Yes, a couple. But none of them were hard to fix

Page 17: Ceph Day Amsterdam 2015 - Ceph over IPv6

Issues: Char array size

● The char array for holding an IPv6 address was too small: 32 characters instead of 39
● A fully written out IPv6 address is 39 characters long
– E.g.: 2a02:0f6e:8007:0000:52e5:49ff:fec2:c976
● You would only run into this issue when using the full address notation
● Fixed by 7ccdae (2010)
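
A quick check of that length, using the example address from the slide: 8 groups of 4 hex digits plus 7 separating colons.

```python
# A fully expanded IPv6 address: 8 groups of 4 hex digits + 7 colons.
full = "2a02:0f6e:8007:0000:52e5:49ff:fec2:c976"
assert len(full) == 8 * 4 + 7 == 39
```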

Page 18: Ceph Day Amsterdam 2015 - Ceph over IPv6

Issues: Github

● GitHub is not available over IPv6...
– I contacted them a couple of times!
● My IPv6-only Ceph servers could not fetch the Ceph package signing key...
● The key is now on ceph.com, which is available over IPv6 :-)
– In the meantime I used an HTTP proxy for my machines

Page 19: Ceph Day Amsterdam 2015 - Ceph over IPv6

Issues: ceph-deploy

● ceph-deploy would write mon_host without the [ and ] around the addresses:
– mon_host = XXX:YYY:ZZZ::AA::BB
– Instead of
– mon_host = [XXX:YYY:ZZZ::AA::BB]
● It was just a small Python if-else statement with an IPv6-address test (see the sketch below)
● Fixed by d1750f (2014)
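
A minimal sketch of that kind of check, written here with Python's ipaddress module for illustration; this is not the actual ceph-deploy patch, and the helper name is hypothetical:

```python
import ipaddress

def format_mon_addr(addr: str) -> str:
    """Wrap bare IPv6 addresses in [] for mon_host; leave anything else alone."""
    try:
        if isinstance(ipaddress.ip_address(addr), ipaddress.IPv6Address):
            return f"[{addr}]"
    except ValueError:
        pass  # not a literal address (e.g. a hostname or an already-bracketed one)
    return addr

print(format_mon_addr("2001:db8::10"))  # [2001:db8::10]
print(format_mon_addr("10.0.0.10"))     # 10.0.0.10
```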

Page 20: Ceph Day Amsterdam 2015 - Ceph over IPv6

Issues: DAD

● DAD: Duplicate Address Detection
– Like the name says, it tries to prevent duplicate addresses
● When the monitor tried to bind to the address, the kernel would refuse since DAD was still in progress
– The network was, however, 'up'
● The fix was retrying the bind a couple of times (see the sketch below)
● Fixed by 2d4dca (2014)
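
The retry idea, sketched in Python for illustration. The real fix lives in Ceph's C++ code; the errno, attempt count and delay here are my assumptions:

```python
import errno
import socket
import time

def bind_with_retry(addr: str, port: int, attempts: int = 5, delay: float = 2.0) -> socket.socket:
    """Keep retrying a bind that fails because the IPv6 address is still
    tentative (DAD in progress)."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    for attempt in range(attempts):
        try:
            sock.bind((addr, port))
            return sock
        except OSError as exc:
            # EADDRNOTAVAIL is what Linux typically returns for a tentative address.
            if exc.errno != errno.EADDRNOTAVAIL or attempt == attempts - 1:
                raise
            time.sleep(delay)  # give DAD a moment to finish, then try again
```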

Page 21: Ceph Day Amsterdam 2015 - Ceph over IPv6

Running in production

● Network-wise I haven't run into any downtime or Ceph issues caused by IPv6
– It just works
● All issues I had were deployment-related
– Once fixed, it ran perfectly

● DON'T forget 'ms bind ipv6 = true'

Page 22: Ceph Day Amsterdam 2015 - Ceph over IPv6

Running in production

● PCextreme Aurora Compute
– My company
– 48 OSD machines, public IPv6 space (no private network)
– Over 100 clients
● GreenHost
– 20 OSD machines, public IPv6 space
– Tens of clients
● Government Cloud in The Netherlands (ODC)
– 24 OSD machines, will scale to hundreds later this year

Page 23: Ceph Day Amsterdam 2015 - Ceph over IPv6

IPv6 is easier

● No more NAT
– It's NOT a firewall!
● No more running out of subnets
– Overlapping subnets are history
● Stateless Auto-configuration (SLAAC) is useful
● Machines can be reached from the internet

– Scary, isn't it? Use a proper firewall

● It is the future!

Page 24: Ceph Day Amsterdam 2015 - Ceph over IPv6

Questions?

● Twitter: @widodh
● Skype: @widodh
● E-Mail: [email protected]
● Github: github.com/wido
● Blog: http://blog.widodh.nl/

