Persistence Patterns for Containers
Steve Watt, Red Hat
(photo: CC flickr @dopey)
The basics of configuring persistence for containers
- Must be a filesystem
- Always use a mount (docker -v)
- Mounts can be host-local or a network storage asset (that is mounted to the host)
- Read and write data to the mount point
- Don't store data in the container itself
- Also a good way to manage non-compiled code
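The last two points can be sketched in a few lines (a sketch, not code from the deck; the DATA_DIR variable and file names are illustrative): the service resolves its data directory from the mount point and writes only there, never into the container's own filesystem.

```python
import os
import tempfile

# Illustrative: DATA_DIR is assumed to be set to the container's mount point,
# e.g. `docker run -v /opt/appdata:/data -e DATA_DIR=/data ...`.
# Fall back to a temp dir so the sketch also runs outside a container.
DATA_DIR = os.environ.get("DATA_DIR") or tempfile.mkdtemp()

def save_record(name: str, payload: str) -> str:
    """Write application data under the mount point, not the container layer."""
    path = os.path.join(DATA_DIR, name)
    with open(path, "w") as f:
        f.write(payload)
    return path

record = save_record("states.txt", "Texas,27469114\n")
```

Because the data lives on the mount, it survives container restarts and rebuilds.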
[Diagram: a container on the host O/S, with a docker -v (bind) mount pointing at either the host-local filesystem or a network storage asset mounted on the host]
Common choices for container I/O
- Block (POSIX, single reader/writer): e.g. AWS EBS, iSCSI, Ceph RBD, local host filesystem
- Shared file (POSIX, multiple readers/writers): e.g. AWS EFS, NFS, GlusterFS
- Object (non-POSIX GET/PUT, immutable objects): e.g. Minio, Ceph Object Gateway, AWS S3
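The object tier's non-POSIX semantics can be sketched in a few lines (this in-memory ObjectStore class is illustrative, not a real client): there is no seek or append, only whole-object GET and PUT, so an "update" rewrites the object.

```python
class ObjectStore:
    """Illustrative in-memory model of object-store semantics (GET/PUT only)."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        # PUT replaces the whole object; there is no partial write or append.
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

store = ObjectStore()
store.put("states/tx", b"population=27469114")
store.put("states/tx", b"population=27862596")  # an "update" is a full rewrite
latest = store.get("states/tx")
```

This is why object storage suits write-once assets, while databases like the RDBMS below need block or file semantics.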
Common container persistence concerns
- Availability: most of this runs on commodity hardware. Things fail.
- Locking (differences between shared filesystems and a filesystem on a block device)
- Mount driver availability on the host
- Security: ACLs, SELinux
- Storage plugin support for your specific desired storage (NetApp, Gluster, etc.)
- Storage tiers, QoS, etc.
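The locking concern can be made concrete with POSIX advisory locks (a sketch using Python's fcntl module; whether flock actually enforces exclusion on a shared filesystem such as NFS depends on the server and mount options, which is exactly the shared-filesystem difference called out above):

```python
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Two independent opens of the same file get independent flock() locks,
# so a second exclusive lock attempt fails while the first is held.
writer = open(path, "w")
fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)   # first writer wins

second = open(path, "a")
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_lock_acquired = True
except BlockingIOError:
    second_lock_acquired = False                     # single-writer enforced
```

On a file-on-block-device setup the kernel enforces this locally; on a shared filesystem two containers on different hosts may or may not see each other's locks.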
Why do we need container orchestration?
Storage Plugins that most Orchestration platforms offer (or will soon offer):
- iSCSI
- Cinder
- NFS
- AWS EBS
- GCE Persistent Disks
- Azure Storage
- GlusterFS
- Ceph
- EMC portfolio
- Fibre Channel
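As an example of how one of these plugins surfaces to the developer, here is a Kubernetes-style sketch (all names, paths, and addresses are illustrative) of a pod mounting storage through the NFS plugin:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    nfs:                          # the orchestrator's NFS plugin does the mount
      server: nfs.example.com
      path: /exports/web
```

The orchestrator performs the mount on whichever host the pod lands on, which is why the mount-driver-availability concern above applies to every node in the cluster.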
So let’s explore these concepts via a simple microservice
[Diagram: a client calls the microservice over HTTP; the web application container queries the RDBMS container]
How the containers run on the host they get scheduled on
[Diagram: on node-1.example.com, the web application container bind-mounts the host directory containing /opt/app/service.php; the RDBMS container (172.17.0.2) mounts EBS disk vol-0b7eaad4 at /var/lib/mysql]
NGINX Web Application

[fedora@node-2] $ cat /opt/app/service.php
<?php
$host = gethostname();
$ip = gethostbyname($host);

$mysqli = new mysqli($ip, "myUserName", "myPassword", "us_states");
if ($mysqli->connect_errno) {
    echo "Failed to connect to MySQL: (" . $mysqli->connect_errno . ") " . $mysqli->connect_error;
}

if ($result = $mysqli->query("SELECT * from states")) {
    $result->data_seek(0);
    while ($row = $result->fetch_assoc()) {
        echo " The State of " . $row['state'];
        echo " has a population of " . $row['population'];
        echo " \n";
    }
    $result->close();
} else {
    echo "No result";
}
?>
So how can we attach storage to that?
Let's take a look at several patterns for how we can make persistence available to our container platforms:
- Pet Storage
- Cattle Storage
- Dynamic Provisioning
The examples from the demos I’ll be showing are available at:
https://github.com/wattsteve/docker-meetup
Pet Storage
1. Developer to storage admin: "Hi Sue, can you create a Ceph RBD disk & send me the connection info?"
2. Developer adds the storage connection info to the container deployment
3. Developer deploys the containers through the orchestration interface
(photos: CC flickr @torek @menard_mickael)
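In Kubernetes terms, pet storage looks roughly like this (a sketch; every value below, such as monitors, pool, image, and secret, stands in for the connection info the storage admin hands back): the Ceph RBD connection details are embedded directly in the pod spec by the developer.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - name: db
      mountPath: /var/lib/mysql
  volumes:
  - name: db
    rbd:                                 # connection info baked into the deployment
      monitors: ["ceph-mon.example.com:6789"]
      pool: rbd
      image: mysql-disk
      user: admin
      secretRef:
        name: ceph-secret
```

The drawback is visible in the spec itself: every deployment carries storage details, so changing or rotating them means editing every spec that uses the disk.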
Cattle Storage
1. Storage administrator registers storage connection objects for specific storage assets into a pool of storage connection objects
2. Developer submits a storage claim through the orchestration interface
3. Developer deploys containers referencing the claim
4. Release claim: when it is no longer needed, the claim can be released and the storage asset attached to it can be automatically wiped and placed back in the pool
(photo: CC flickr @torek)
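Sketched with Kubernetes objects (names and sizes are illustrative): the administrator's pool is a set of PersistentVolumes, the developer's claim is a PersistentVolumeClaim, and the pod references only the claim name, never the storage details.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - name: db
      mountPath: /var/lib/mysql
  volumes:
  - name: db
    persistentVolumeClaim:
      claimName: db-claim        # the only storage reference in the pod
```

Unlike the pet pattern, the pod spec is portable: the orchestrator matches the claim to any volume in the pool that satisfies it.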
But there's a new way of thinking about on-premises storage
Cattle Storage 2.0 (Dynamic Provisioning)
1. Developer submits a storage claim ("create me an EBS disk") through the orchestration interface; the AWS EBS provisioner creates the disk on demand
2. Developer deploys containers referencing the claim
3. Release claim: releasing the claim deletes the associated EBS disk
(photo: CC flickr @torek)
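In Kubernetes this pattern is a StorageClass plus a claim against it (a sketch with illustrative names): the claim triggers the AWS EBS provisioner to create the disk, and because the reclaim policy is Delete, releasing the claim deletes it.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                  # EBS volume type
reclaimPolicy: Delete        # releasing the claim deletes the backing disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-claim
spec:
  storageClassName: ebs-standard
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

No administrator pre-registers anything: the pool of cattle storage is replaced by a provisioner that creates assets on demand.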
Storage Tiering
Tier     Backend    Media     Users
Gold     Ceph RBD   SSD       Susan
Silver   Ceph RBD   Magnetic  Dave, Rob
Bronze   EBS GP     SSD       Rob
Copper   EBS IOPS   SSD       Mike, Dave, Rob
Iron     EBS        Magnetic  Everyone
Compute & Storage Containers, Hyperconverged
[Diagram: Host 1 and Host 2 each run a storage container (RHGS) backed by the host-local filesystem, together forming the storage platform, alongside compute containers such as NGINX and Spark]
https://github.com/wattsteve/glusterfs-kubernetes
Thanks for listening
If you have more questions, you can reach me on:
Email: [email protected]
Twitter: @wattsteve
GitHub: https://github.com/wattsteve/
Slack: @wattsteve (http://slack.kubernetes.io/)