Date posted: 13-Jan-2017
Uploaded by: rob-hirschfeld
Apply, Rinse, Repeat to get Location Agnostic
(re)Build OpenStack Ready Infrastructure Like a Pro
Rob Hirschfeld
OpenStack Foundation board.
RackN CEO & co-Founder - We specialize in portable infrastructure automation
Background: Dell and start-ups
● Twitter: @zehicle
● Blog: http://robhirschfeld.com
Parantap Lahiri
Juniper Networks
Sr. Director of Contrail Solutions Engineering
NFV-related solutions with service chaining and interaction with MPLS-based Telco network infrastructure
Background: Microsoft Online, UUNet
Complete Provisioning System (in containers!)
“API for Metal” automates physical infrastructure
“Start to Scale” works on any platform from desktop to datacenter
“Open Ops” makes DevOps portable between sites
Deploys container platforms using containers.
Seamless Virtual Network Across Multiple Orchestration Systems
Each container POD or VM gets IP address from separate Virtual Network Space
Policy Based Virtual Network Interconnect plus Filtering with micro-segmentation
On-Demand Virtual and Physical (VNF and PNF) Service Insertion
Detailed Analytics on flow data and resource utilization
Making Cloud Infrastructure Agnostic
We want hybrid clouds, but they are technically challenging
What major challenges do we face?
● Open Platforms - OpenStack
● Distributed Overlay Networking - OpenContrail
● Consistent Scale Operations - Digital Rebar
And…. Faster Iterations
Traditional Stacking
This approach creates a lot of complexity
Infrastructure needs are heterogeneous
[Diagram: stacked layers - Metal, Network, Cloud, Network, Containers - with apps on top]
Overlay networks can span all environments
[Diagram: a single overlay network spanning all environments, with apps attached]
True Hybrid: Private & Multiple Public
[Diagram: overlay network connecting On Prem and multiple Public clouds]
Why is this so hard?!
Scale faults from the “Fidelity Gap”
Testing for production on a desktop or cloud is not sufficient.
Automation is required at all levels.
We want to use the SAME deployment at every level to eliminate translation errors.
[Chart: Effort (Easy→Hard) vs Scale (5, 10, 20, 100+ nodes) across PoC, Dev, Test, Production]
How Do Deployments Fail? One step at a time
Fidelity Gap
Major Differences in:
● Networking
● Timing / Sequential Ops
● Need for fault tolerance
● Process Requirements
● Ops / Environmentals
● User motivation / priorities
● Ownership
Why a Fidelity Gap? Different needs
[Chart: Effort (Easy→Hard) vs Scale (5, 10, 20, 100+ nodes) for Desktop, Cloud, Lab, Datacenter]
Operationally Challenging
Different Requirements
Fragmented networking
Hybrid straddles multiple phases
[Chart: Effort (Easy→Hard) vs Scale (5, 10, 20, 100+ nodes) with hybrid spanning Desktop, Cloud, Lab, Datacenter]
Addressing the “Fidelity Gap”
Faithful ops between environments
Portable DevOps automation
Fast cycle times for developers
Transparent execution
True multi-node even when small
Mix-and-match environments
[Chart: Effort (Easy→Hard) vs Scale (5, 10, 20, 100+ nodes) across PoC, Dev, Test, Production]
Apply, Rinse & Repeat - cycle time matters!
Looking for at least 10x faster
If you have fidelity, work translates
However, that’s not useful if we’ve added too much time or effort overhead
Redeploy Virtual in 5 to 10 minutes
Redeploy Metal in 1 to 2 hours
[Chart: Effort vs Scale across PoC, Dev, Test, Production, with redeploy cost multipliers from 1x up to 10,000x]
Hybrid Infrastructure
Networks require logical & physical actions
Composable Approach
Building multi-site networks requires coordinating activities at multiple layers and sites.
ORCHESTRATION IS NOT OPTIONAL
Sequence Matters in System Construction
Digital Rebar orchestrates cross platform operational steps to bring up the physical and logical systems.
The Digital Rebar “annealer” coordinates activities over multiple control planes.
Target End State
Let’s keep it simple AND connected
Rob
Project http://rebar.digital & @digitalrebar
Rob: http://robhirschfeld.com & @zehicle
Parantap
OpenContrail http://OpenContrail.org & @OpenContrail
Parantap: [email protected]
Additional Material
Digital Rebar with Docker Compose
Complete Datacenter Ops in containers.
Fast to setup and reset
Low overhead and scales up to 100s
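As a sketch, a containerized control plane like the one described above could be expressed in Docker Compose roughly as follows. The service names, images, and ports here are illustrative assumptions, not the project’s actual compose file:

```yaml
version: "2"
services:
  consul:                          # registry & shared keystore
    image: consul:latest
    ports: ["8500:8500"]
  postgresql:                      # backing database for the Rebar API
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: rebar     # illustrative credential only
  rebar-api:                       # the "API for Metal" endpoint
    image: digitalrebar/api        # hypothetical image name
    depends_on: [consul, postgresql]
    ports: ["3000:3000"]
  dns:
    image: digitalrebar/dns        # hypothetical image name
  dhcp:
    image: digitalrebar/dhcp       # hypothetical image name
  provisioner:
    image: digitalrebar/provisioner  # hypothetical image name
```

A single `docker-compose up -d` brings the whole stack up on one host and `docker-compose down` resets it, which is what makes the fast setup/reset cycle practical.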
Docker Compose (15 containers)
Consul
Rebar API
Rebar Engine
Postgresql
NTP
DNS
DHCP
Provisioner
...
[Diagram: each service port-mapped to the host; images pulled from Docker Hub]
Digital Rebar with Consul
Consul (registry & shared keystore)
● registers all services
● shared secrets
● & more stuff we don’t use
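To illustrate the registry role, Consul accepts service registrations in its documented JSON service-definition format. The service name, port, and health check below are hypothetical:

```json
{
  "service": {
    "name": "rebar-api",
    "port": 3000,
    "tags": ["rebar", "api"],
    "check": {
      "http": "http://localhost:3000/health",
      "interval": "10s"
    }
  }
}
```

Other containers can then discover the API through Consul’s DNS or HTTP interface instead of hard-coding addresses.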
Rebar API & Orchestration (yellow)
Services Managed by Rebar (blue)
Services Used by Rebar (green)
Docker Containers
Consul Forwarder
Rebar API
Postgresql
Rebar Engine
DNS
Provision
NTP
DHCP
Chef Loggers
Kubernetes Metadata
Determines:
● which containers
● dependencies between
● port mapping
● variables injection
● start/stop/scale
● tenant networking
AND multi-system infrastructure
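As a sketch, those concerns map onto a Kubernetes manifest roughly like this (written in the modern `apps/v1` form; the names and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-front              # which containers to run
spec:
  replicas: 3                  # start/stop/scale
  selector:
    matchLabels: {app: web-front}
  template:
    metadata:
      labels: {app: web-front}
    spec:
      containers:
      - name: web
        image: nginx:1.25      # illustrative image
        ports:
        - containerPort: 80    # port mapping
        env:
        - name: DB_HOST        # variable injection
          value: database      # hypothetical dependency
```

Dependencies between services and tenant networking are layered on top via Services and network policy rather than in the Deployment itself.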
Kubernetes
Keystore
Database
Web Front
Service 1
Service 2
Batch Item
Foo
Bar
...
[Diagram: each service port-mapped to the host; images pulled from Docker Hub]
Running Kubernetes
Master + Minion: cluster via etcd
Builds networking tunnel for pods
Additional pluggable services (L3)
Manages container:
● life-cycle
● placement
● dependencies
[Diagram: Kubernetes Master with etcd (shared store); three Kubernetes Minions, each running an SDN Agent alongside app containers]
Flannel is a weak SDN (basically UDP)
Requires kernel modification (fast!)
When L2 and L3 support is needed
Multi-datacenter connections
Mix infrastructure (docker, VMs, metal)
Expect to have multiple SDN options
+ OpenContrail
[Diagram: Kubernetes Master with etcd (shared store) plus a Contrail Controller; three Kubernetes Minions, each running a Contrail Agent alongside app containers]
Key Contrail Features