ACI Multisite Deployment
John Schaper, Technical Solutions Architect
• ACI Introduction and Multi-Fabric Use Cases
• ACI Multi-Fabric Design Options
• ACI Stretched Fabric Overview
• ACI Multi-Pod Deep Dive
• ACI Multi-Site Solutions Overview
• Conclusions
Agenda
Session Objectives
At the end of the session, the participants should be able to:
Articulate the different Multi-Fabric deployment options offered with Cisco ACI
Understand the design considerations associated with those options
Initial assumption:
The audience already has a good knowledge of ACI main concepts (Tenant, BD, EPG, L2Out, L3Out, etc.)
Introducing: Application Centric Infrastructure (ACI)
[Figure: an application profile with Web, App and DB EPGs connected through QoS/filter/service functions, an Outside (tenant VRF) connection, the ACI fabric with its integrated GBP VXLAN overlay, and the Application Policy Infrastructure Controller (APIC).]
[Figure: the ACI model for tenant L3/L2 isolation – a Tenant contains an isolated L3 context (VRF), Bridge Domains with one or more subnets (with or without flooding semantics), EPGs grouping endpoints into an Application Profile, and an Outside L2 or L3 connection. The self-contained tenant definition is representable as a recursive structured text document.]
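The "recursive structured text document" view of a tenant can be made concrete with a small sketch. The snippet below builds a minimal tenant definition as a nested Python dictionary mirroring the APIC object model (fvTenant containing a VRF, a BD with a subnet, and an application profile with an EPG); the tenant, BD and EPG names are hypothetical and the structure is illustrative rather than an exhaustive schema.

```python
import json

# A minimal, illustrative tenant document: objects nest recursively,
# mirroring the Tenant -> VRF/BD/App Profile -> EPG containment shown above.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Example-Tenant"},            # hypothetical name
        "children": [
            {"fvCtx": {"attributes": {"name": "VRF1"}}},      # isolated L3 context
            {"fvBD": {
                "attributes": {"name": "BD-Web"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "VRF1"}}},
                    {"fvSubnet": {"attributes": {"ip": "10.1.3.1/24"}}},
                ],
            }},
            {"fvAp": {
                "attributes": {"name": "3Tier-App"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "EPG-Web"},
                        "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "BD-Web"}}}],
                    }},
                ],
            }},
        ],
    }
}

# The same document can be rendered as JSON (or XML) and pushed to the APIC REST API.
print(json.dumps(tenant, indent=2))
```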
ACI Multi-Pod/Multi-Site Use Cases
Single Site Multi-Fabric
Multiple fabrics connected within the same DC (between halls, buildings, etc. within the same campus location)
Driven by cabling limitations, HA requirements and scaling requirements
Single Region Multi-Fabric (classic Active/Active scenario)
Scoped by the application mobility domain (up to 10 msec RTT)
BDs/IP subnets can be stretched between sites
The goal is to reduce fate sharing across sites as much as possible while maintaining operational simplicity
Multi-Region Multi-Fabric
Creation of separate availability zones
Disaster recovery with minimal cross-site communication
Deployment of applications not requiring Layer 2 adjacency
• ACI Introduction and Multi-Fabric Use Cases
• ACI Multi-Fabric Design Options
• ACI Stretched Fabric Overview
• ACI Multi-Pod Deep Dive
• ACI Multi-Site Solutions Overview
• Conclusions
Agenda
ACI Multi-Fabric Design Options
Single APIC cluster / single domain:
Stretched Fabric – one ACI fabric stretched across Site 1 and Site 2
Multi-Pod (Q3CY16) – Pod ‘A’ … Pod ‘n’ interconnected by an IP network, single APIC cluster, MP-BGP EVPN between Pods
Multiple APIC clusters / multiple domains:
Multi-Site (Future) – Site ‘A’ … Site ‘n’ interconnected by an IP network, MP-BGP EVPN between sites
Dual-Fabric Connected – ACI Fabric 1 and ACI Fabric 2 interconnected with L2 and L3 extension
Dual-Fabric with Common Default GW – New in 11.2 (Brazos)
• Two independent ACI fabrics, two management and configuration domains
• Active/Active workload: L2 and the IP subnets are extended across fabrics (Subnet-1 with default GW 1.1.1.1, Subnet-2 with default GW 2.2.2.2)
• Common default GW on both fabrics (BD1 GW 1.1.1.1 and BD2 GW 2.2.2.2 exist in each fabric)
• L2 connection between fabrics, Active/Active or Active/Standby
• Consistent end-to-end policy
• Dark fiber/OTV for L2 extension
Dual-Fabric with Common Default GW – Active/Active Routing Consideration
• A new virtual IP and virtual MAC are introduced under the BD to support the common pervasive GW feature across ACI fabrics (a configuration sketch follows below).
• Each fabric keeps its own unique "SVI" MAC and unique "SVI" IP, while a common virtual IP and a common virtual MAC are shared between the two fabrics.
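As a rough illustration of how the common virtual MAC/IP might be pushed to each fabric, the sketch below posts a BD update to the APIC REST API with Python requests. The aaaLogin flow and the /api/mo URLs are standard APIC REST mechanics, but the specific attribute names used for the virtual MAC (vmac) and the virtual-IP subnet flag (virtual) are assumptions to verify against the 11.2 object model; the APIC address, credentials, tenant, MAC and IP values are placeholders.

```python
import requests

APIC = "https://apic-fabric1.example.com"   # placeholder APIC address for fabric 1
session = requests.Session()

# Authenticate against the APIC REST API (standard aaaLogin call).
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Push the BD with a common virtual MAC plus this fabric's unique SVI MAC, and two
# subnets: the shared virtual GW IP and the fabric-unique SVI IP.
# NOTE: 'vmac' and 'virtual' reflect the common pervasive gateway feature but the
# exact attribute names should be checked against the APIC object model.
bd_update = {
    "fvBD": {
        "attributes": {
            "name": "BD1",
            "vmac": "00:22:AA:AA:AA:AA",     # common virtual MAC (same on both fabrics, placeholder)
            "mac": "00:22:BB:BB:BB:01",      # unique SVI MAC for this fabric (placeholder)
        },
        "children": [
            {"fvSubnet": {"attributes": {"ip": "1.1.1.1/24", "virtual": "yes"}}},  # common virtual GW IP
            {"fvSubnet": {"attributes": {"ip": "1.1.1.2/24"}}},                    # unique SVI IP, fabric 1
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni/tn-Example-Tenant.json", json=bd_update, verify=False)
resp.raise_for_status()
```

The same payload, with the other fabric's unique SVI MAC/IP, would be posted to the second APIC cluster.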
Dual Fabric Policy
• Endpoint Group (EPG) membership is lost when packets transit between 2 separate fabrics
• ACI advantages lost when transiting fabrics
• Unable to enforce granular policy
• Inter-DC traffic treated the same as WAN traffic
ACI Toolkit Intersite Overview
What it does
• Extends Endpoint Groups (EPGs) across multiple sites
• Each site has its own APIC cluster
• Allows policies to be enforced across fabrics
• Preserves EPG membership without carrying it in packet encapsulation
• Runs as an External Application
What it doesn’t do
• Sync contract configuration across sites
• Cost additional $$$ (open source)
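The intersite tool itself is driven by its own JSON configuration, but since it builds on the open-source acitoolkit Python library, a minimal sketch of the kind of per-site session and EPG enumeration it relies on looks roughly like this (APIC URLs and credentials are placeholders; consult the acitoolkit intersite documentation for the tool's actual configuration format).

```python
from acitoolkit.acitoolkit import Session, Tenant, AppProfile, EPG

# One APIC session per site (placeholder URLs/credentials).
sites = {
    "site1": Session("https://apic-site1.example.com", "admin", "password"),
    "site2": Session("https://apic-site2.example.com", "admin", "password"),
}

for name, session in sites.items():
    resp = session.login()
    if not resp.ok:
        raise RuntimeError(f"Could not log in to APIC for {name}")

    # Enumerate tenants, application profiles and EPGs visible in this site's fabric.
    for tenant in Tenant.get(session):
        for app in AppProfile.get(session, tenant):
            for epg in EPG.get(session, app, tenant):
                print(f"{name}: {tenant.name}/{app.name}/{epg.name}")
```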
• ACI Introduction and Multi-Fabric Use Cases
• ACI Multi-Fabric Design Options
• ACI Stretched Fabric Overview
• ACI Multi-Pod Deep Dive
• ACI Multi-Site Solutions Overview
• Conclusions
Agenda
Stretched ACI Fabric
Fabric stretched to two sites works as a single fabric deployed within a DC
One APIC cluster: one management and configuration point
Anycast GW on all leaf switches
Works with one or more transit leaf nodes per site; any leaf node can be a transit leaf
The number of transit leaf nodes and links is dictated by redundancy and bandwidth capacity decisions
Different options for the inter-site links (dark fiber, DWDM, EoMPLS PWs)
[Figure: ACI Stretched Fabric spanning DC Site 1 and DC Site 2, with transit leaf nodes in each site and a vCenter attached to the fabric.]
Stretched ACI Fabric – Option: Ethernet over MPLS (EoMPLS) Port Mode
Port-mode EoMPLS is used to stretch the ACI fabric over long distance (EoMPLS pseudowires across the WAN between the two sites)
DC interconnect links can be 10G (minimum) or higher, with 40G facing the leaf/spine nodes (QSFP-40G-SR4)
DWDM or dark fiber provides connectivity between the two sites
Supported from the 1.0(3f) release or later, with a 10 ms maximum RTT between sites
Under normal conditions 10 ms RTT allows two DCs up to 800 km / 500 miles apart (a quick latency calculation follows below)
Other ports on the router are used for connecting to the WAN via L3Out
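A quick back-of-envelope check of the 10 ms / 800 km numbers, assuming roughly 5 microseconds of propagation delay per km of fiber (the constant and the remaining device-latency budget are illustrative):

```python
# Rough latency budget for a stretched fabric interconnect.
PROPAGATION_US_PER_KM = 5.0      # ~5 us/km one-way in optical fiber (approximation)
distance_km = 800                # DC-to-DC fiber distance

one_way_ms = distance_km * PROPAGATION_US_PER_KM / 1000.0
rtt_ms = 2 * one_way_ms
budget_left_ms = 10.0 - rtt_ms   # remaining budget for transponders, queuing, etc.

print(f"Propagation RTT over {distance_km} km: {rtt_ms:.1f} ms")
print(f"Headroom under the 10 ms limit: {budget_left_ms:.1f} ms")
# -> about 8.0 ms of pure propagation RTT, leaving ~2 ms for equipment latency.
```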
Stretched ACI Fabric – Support for 3 Interconnected Sites (11.2)
Transit leafs in all sites connect to the local and remote spines
Inter-site links of 2x40G or 4x40G between Site 1, Site 2 and Site 3
• ACI Introduction and Multi-Fabric Use Cases
• ACI Multi-Fabric Design Options
• ACI Stretched Fabric Overview
• ACI Multi-Pod Solution Deep Dive
• ACI Multi-Site Solutions Overview
• Conclusions
Agenda
ACI Multi-Pod Solution Overview
Multiple ACI Pods connected by an IP Inter-Pod L3 network; each Pod consists of leaf and spine nodes
Managed by a single APIC cluster
Single management and policy domain
Forwarding control-plane (IS-IS, COOP) fault isolation between Pods
Data-plane VXLAN encapsulation between Pods
End-to-end policy enforcement
[Figure: Pod ‘A’ … Pod ‘n’ interconnected by an Inter-Pod Network, with MP-BGP EVPN between Pods, a single APIC cluster, and IS-IS, COOP and MP-BGP running within each Pod.]
ACI Multi-Pod Solution Use Cases
Evolution of the Stretched Fabric design
Metro area (dark fiber, DWDM) or L3 core
More than 2 interconnected sites
Handling a 3-tier physical cabling layout
Cabling constraints (multiple buildings, campus, metro) require a second tier of "spines": an Inter-Pod Network above the spine nodes of each Pod
Preferred option when compared to a ToR FEX deployment
ACI Multi-Pod Solution SW and HW Requirements
Software
The solution will be available from Q3CY16 SW Release
Hardware
The Multi-Pod solution can be supported with all currently shipping Nexus 9000 platforms
The requirement is to use multicast in the Inter-Pod Network for handling BUM (L2 Broadcast, Unknown Unicast, Multicast) traffic across Pods
ACI Multi-Pod Solution Supported Topologies
Intra-DC: Pod 1 … Pod n within the same DC, each Pod connecting at 40G/100G to an IPN built with 10G/40G/100G links
Two DC sites connected back-to-back: Pod 1 and Pod 2 over dark fiber/DWDM (up to 10 msec RTT), 40G/100G links between the Pods
Multiple sites interconnected by a generic L3 network: e.g. 3 DC sites (Pod 1, Pod 2, Pod 3) connected via dark fiber/DWDM (up to 10 msec RTT) to an L3 core, 40G/100G toward the L3 network
ACI Multi-Pod Solution Scalability Considerations
These scalability values may change without warning before the Multi-Pod solution is officially released
At FCS, the maximum number of supported ACI leaf nodes is 400 (across all Pods)
200 is the maximum number of leaf nodes per Pod
Use case 1: larger number of Pods (up to 20) with a small number of leaf nodes in each Pod (20-30)
Use case 2: low number of Pods (2-3) with large number of leaf nodes in each Pod (up to 200)
ACI Multi-Pod Solution Inter-Pod Network (IPN) Requirements
Not managed by APIC, must be pre-configured
The IPN topology can be arbitrary; it is not mandatory to connect to all spine nodes
Main requirements:
40G/100G interfaces to connect to the spine nodes
Multicast (PIM Bidir) needed to handle BUM traffic
DHCP Relay to enable spine/leaf node discovery across Pods
OSPF to peer with the spine nodes and learn VTEP reachability
Increased MTU support to handle VXLAN-encapsulated traffic (see the MTU arithmetic sketched below)
QoS (to prioritise intra-APIC-cluster communication)
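On the MTU point, the sketch below shows the usual VXLAN overhead arithmetic (about 50 bytes of encapsulation on top of the endpoint's IP MTU); the 9150-byte figure mentioned in the comment is a common recommendation rather than a number quoted in this session.

```python
# VXLAN carries the original Ethernet frame inside outer IP/UDP/VXLAN headers.
INNER_ETH, OUTER_IP, OUTER_UDP, VXLAN_HDR = 14, 20, 8, 8
OVERHEAD = INNER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR   # ~50 bytes

def required_ipn_mtu(endpoint_ip_mtu: int) -> int:
    """Minimum IPN IP MTU needed to carry endpoint traffic without fragmentation."""
    return endpoint_ip_mtu + OVERHEAD

print(required_ipn_mtu(1500))   # 1550 for a standard 1500-byte endpoint MTU
print(required_ipn_mtu(9000))   # 9050 if servers use jumbo frames (9150 is often configured)
```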
APIC – Distributed Multi-Active Data Base
Processes are active on all nodes (not active/standby)
The data base is replicated across the APIC nodes and is distributed as one active plus two backup instances (shards) for every attribute
One copy is 'active' for every specific portion of the data base
[Figure: three APIC nodes, each holding replicas of Shard 1, Shard 2 and Shard 3, with the active copy of each shard on a different node.]
When an APIC fails, a backup copy of each affected shard is promoted to active and takes over all tasks associated with that portion of the data base
APIC – Design Considerations
Additional APIC nodes increase the system scale (today up to 5 nodes are supported) but do not add more redundancy
There is a maximum supported distance between data base (APIC) nodes: 800 km
APIC allows read-only access to the DB when only one node remains active (standard DB quorum); a toy model of this shard/quorum behaviour is sketched below
NOT RECOMMENDED: stretching the cluster so that failure of site 1 may cause irreparable loss of data for some shards and inconsistent behaviour for others
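A toy model of the shard/quorum behaviour described above. The replica placement and quorum rule here are deliberately simplified illustrations, not the actual APIC algorithm.

```python
APICS = ["apic1", "apic2", "apic3"]                      # 3-node cluster
# Each shard keeps 1 active + 2 backup replicas, i.e. one replica per node here.
SHARDS = {f"shard{i}": set(APICS) for i in (1, 2, 3)}

def shard_state(replica_nodes: set, healthy_nodes: set) -> str:
    """A shard stays writable only while a majority of its replicas are reachable."""
    alive = len(replica_nodes & healthy_nodes)
    if alive >= 2:
        return "read-write"
    if alive == 1:
        return "read-only"
    return "lost"

# Example: apic2 and apic3 fail (e.g. the site hosting them is lost).
healthy = {"apic1"}
for shard, replicas in SHARDS.items():
    print(shard, shard_state(replicas, healthy))         # every shard drops to read-only
```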
ACI Multi-Pod Solution APIC Cluster Deployment Considerations
APIC cluster is stretched across multiple Pods
Central Mgmt for all the Pods (VTEP address, VNIDs, class-IDs, GIPo, etc.)
Centralised policy definition
Recommended not to connect more than two APIC nodes per Pod (due to the creation of three replicas per ‘shard’)
The first APIC node connects to the ‘Seed’ Pod
Drives auto-provisioning for all the remote Pods
Pods can be auto-provisioned and managed even without a locally connected APIC node
ACI Multi-Pod Solution Auto-Provisioning of Pods
1. APIC Node 1 is connected to a leaf node in 'Seed' Pod 1
2. Discovery and provisioning of all the devices in the local Pod
3. Provisioning of the interfaces on the spines facing the IPN and of the EVPN control-plane configuration
4. Spine 1 in Pod 2 connects to the IPN and generates DHCP requests
5. The DHCP requests are relayed by the IPN devices back to the APIC in Pod 1
6. The DHCP response reaches Spine 1, allowing its full provisioning
7. Discovery and provisioning of all the devices in the local Pod (now Pod 2)
8. APIC Node 2 is connected to a leaf node in Pod 2
9. APIC Node 2 joins the single APIC cluster
10. Other Pods are discovered following the same procedure
ACI Multi-Pod Solution IPN Control Plane
Separate IP address pools for VTEPs are assigned by the APIC to each Pod (e.g. 10.0.0.0/16 for Pod 1, 10.1.0.0/16 for Pod 2)
Summary routes are advertised toward the IPN via OSPF routing
Spine nodes redistribute the other Pods' summary routes into the local IS-IS process (IS-IS to OSPF mutual redistribution)
This is needed for local VTEPs to communicate with remote VTEPs
IPN global VRF:
  10.0.0.0/16  next-hop Pod1-S1, Pod1-S2, Pod1-S3, Pod1-S4
  10.1.0.0/16  next-hop Pod2-S1, Pod2-S2, Pod2-S3, Pod2-S4
Leaf node underlay VRF (Pod 1):
  10.1.0.0/16  next-hop Pod1-S1, Pod1-S2, Pod1-S3, Pod1-S4
ACI Fabric – Integrated Overlay: Decoupled Identity, Location & Policy
The ACI fabric decouples the tenant endpoint address, its "identifier", from the location of that endpoint, which is defined by its "locator" or VTEP address
Forwarding within the fabric is between VTEPs (ACI VXLAN tunnel endpoints) and leverages an extended VXLAN header format referred to as the ACI VXLAN policy header
The mapping of the internal tenant MAC or IP address to location is performed by the VTEP using a distributed mapping database
Host Routing – Inside: Inline Hardware Mapping DB (1,000,000+ Hosts)
The forwarding table on the leaf switch is divided between local (directly attached) and global entries
The leaf global table is a cached portion of the full global table
If an endpoint is not found in the local cache, the packet is forwarded to the 'default' forwarding table in the spine switches (1,000,000+ entries in the spine forwarding table)
Local Station Table: contains the addresses of 'all' hosts attached directly to the leaf (e.g. 10.1.3.35 on Port 9)
Global Station Table: contains a local cache of the fabric endpoints (e.g. 10.1.3.11 behind Leaf 1), with a proxy entry pointing to the spines
Proxy Station Table: contains the addresses of 'all' hosts attached to the fabric (e.g. 10.1.3.35 behind Leaf 3); the lookup order is sketched below
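The lookup order described above can be sketched as a small function: check the leaf's local station table, then its cached global station table, and finally punt to the spine proxy, which holds the full endpoint mapping. The table contents and names are illustrative, not the actual hardware structures.

```python
# Illustrative leaf forwarding decision for the mapping database described above.
local_station_table = {"10.1.3.35": "port-9"}                         # directly attached hosts
global_station_table = {"10.1.3.11": "leaf-1"}                        # cached remote endpoints
proxy_station_table = {"10.1.3.11": "leaf-1", "10.1.3.35": "leaf-3"}  # full table held on the spines

def forward(dst_ip: str) -> str:
    if dst_ip in local_station_table:
        return f"deliver locally via {local_station_table[dst_ip]}"
    if dst_ip in global_station_table:
        return f"VXLAN-encapsulate to {global_station_table[dst_ip]}"
    # Cache miss: the packet is sent to the spine proxy VTEP; here we just consult the
    # spine table to show where it ends up (in the real fabric the leaf later learns
    # the location from return traffic).
    remote_leaf = proxy_station_table.get(dst_ip, "unknown (spine glean/drop)")
    return f"send to spine proxy, delivered toward {remote_leaf}"

print(forward("10.1.3.35"))   # local host
print(forward("10.1.3.11"))   # cached remote endpoint
print(forward("10.1.3.99"))   # miss -> spine proxy
```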
What is MP-BGP EVPN?
BGP-based control plane to advertise Layer-2 MAC and Layer-3 IP information
Leverages the EVPN address family
Virtual Routing and Forwarding (VRF): Layer-3 segmentation of the tenants' routing space
Route Distinguisher (RD): 8-byte field, VRF parameter; a unique value that makes VPN IP routes unique (RD + VPN IP prefix)
Selective distribution of VPN routes via the Route Target (RT): 8-byte field, VRF parameter; a unique value defining the import/export rules for VPNv4 routes (see the sketch below)
[Figure: VTEPs V1, V2 and V3 with iBGP adjacencies to a pair of BGP route reflectors (RR). Each VTEP holds VRF-A with an auto-derived RD (50000:1.1.1.1, 50000:1.1.1.2, 50000:1.1.1.3) and auto-derived import/export Route Targets (Imp 65000:50000, Exp 65500:50000).]
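The auto-derived RD/RT values in the figure follow an "administrator:value" style convention; the sketch below only shows that construction using the figure's numbers. The exact derivation rules used by APIC/NX-OS are not specified here, so treat this as illustrative.

```python
def route_distinguisher(admin_field: str, assigned_value: str) -> str:
    """RD = administrator field + assigned value, e.g. '50000:1.1.1.1' (unique per VTEP)."""
    return f"{admin_field}:{assigned_value}"

def route_target(asn: int, vrf_id: int) -> str:
    """RT = ASN:value, shared by all VTEPs of the same VRF; controls import/export."""
    return f"{asn}:{vrf_id}"

# Values taken from the figure above; the derivation itself is an assumption.
print(route_distinguisher("50000", "1.1.1.1"))   # RD on VTEP V1
print(route_target(65000, 50000))                # import RT for VRF-A
print(route_target(65500, 50000))                # export RT for VRF-A
```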
ACI Multi-Pod Solution Inter-Pods MP-BGP EVPN Control Plane
MP-BGP EVPN is used to communicate Endpoint (EP) and multicast group information between Pods
All remote-Pod entries are associated to a Proxy VTEP next-hop address
Single BGP AS across all the Pods
BGP EVPN runs on multiple spines in a Pod (a minimum of two for redundancy)
Some spines may also provide the route-reflector functionality (one in each Pod)
Within each Pod, COOP keeps the local endpoint-to-leaf mappings (e.g. 172.16.1.10 behind Leaf 1 in Pod 1, 172.16.2.40 behind Leaf 3 in Pod 2)
ACI Multi-Pod Solution Overlay Data Plane
Pod 1 hosts VM1 (172.16.1.20) and Pod 2 hosts VM2 (172.16.2.40). VXLAN packets between Pods carry the VTEP IP, VNID, policy group and the tenant packet.
1. VM1 sends traffic destined to remote VM2
2. VM2 is unknown, so the traffic is encapsulated toward the local Proxy A spine VTEP (adding source class, S_Class, information)
3. The spine encapsulates the traffic toward the remote Proxy B spine VTEP
4. The remote spine encapsulates the traffic toward the local leaf where VM2 resides
5. That leaf learns the remote VM1 location and enforces policy
6. If the policy allows it, VM2 receives the packet
ACI Multi-Pod Solution Overlay Data Plane (2)
7. VM2 sends traffic back to remote VM1
8. The leaf enforces policy at ingress and, if allowed, encapsulates the traffic directly to the remote leaf node where VM1 resides (its location was learned from the first packet)
9. The receiving leaf learns the remote VM2 location (no need to enforce policy again)
10. VM1 receives the packet
From this point on, VM1-to-VM2 communication is encapsulated leaf to leaf (VTEP to VTEP)
ACI Multi-Pod Solution Handling of Multi-Destination Traffic (BUM*)
VM1 (172.16.1.20) resides in Pod 1 and VM2 (172.16.2.40) resides in Pod 2.
1. VM1 generates a BUM frame
2. The BUM frame is associated to GIPo 1 and flooded intra-Pod along the corresponding tree (a toy GIPo assignment sketch follows after this list)
3. Spine 2 is responsible for sending GIPo 1 traffic toward the IPN
4. The IPN replicates the traffic to all the Pods that joined GIPo 1 (optimised delivery to Pods)
5. In the remote Pod the BUM frame is flooded along the tree associated to GIPo 1, and the receiving VTEP learns VM1's remote location
6. VM2 receives the BUM frame
*BUM = L2 Broadcast, Unknown Unicast and Multicast
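Each Bridge Domain is mapped to a GIPo (an outer multicast group) used for flooding. As a toy illustration, the sketch below deterministically picks a group for a BD from a configurable pool; the 225.0.0.0/15 pool and the hashing are assumptions for illustration, not the APIC's actual allocation algorithm.

```python
import hashlib
import ipaddress

# Hypothetical GIPo pool; APIC allocates GIPo addresses from a configurable range.
GIPO_POOL = ipaddress.ip_network("225.0.0.0/15")

def gipo_for_bd(bd_dn: str) -> ipaddress.IPv4Address:
    """Toy mapping of a Bridge Domain DN to a multicast group inside the pool."""
    digest = hashlib.sha256(bd_dn.encode()).digest()
    offset = int.from_bytes(digest[:4], "big") % GIPO_POOL.num_addresses
    return GIPO_POOL[offset]

# BUM traffic for this BD would be flooded on its GIPo tree inside each Pod,
# and carried across the IPN using PIM Bidir for the same group.
print(gipo_for_bd("uni/tn-Example-Tenant/BD-BD-Web"))
```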
ACI Multi-Pod Solution Traditional WAN Connectivity
A Pod does not need to have a dedicated WAN connection
Multiple WAN connections can be deployed across Pods
Traditional L3Out configuration
Shared between tenants or dedicated per tenant (VRF-Lite)
VTEPs always select WAN connection in the local Pod based on preferred metric
Inbound traffic may require “hair-pinning” across the IPN network
Recommended to deploy clustering technology when stateful services are deployed
ACI Integration with WAN at Scale – 'Project GOLF' Overview
Addresses both control-plane and data-plane scale
VXLAN data plane between the ACI spines and the 'GOLF' WAN routers
BGP EVPN control plane between the ACI spines and the WAN routers
OpFlex for exchanging config parameters (VRF names, BGP Route-Targets, etc.)
Consistent policy enforcement on the ACI leaf nodes (for both ingress and egress directions)
'GOLF' router support (Q3CY16): Nexus 7000, ASR9000 and ASR1000 (not yet committed)
ACI Integration with WAN at Scale – Supported Topologies
Directly connected WAN routers (MP-BGP EVPN from the spines to local 'GOLF' routers)
Remote WAN routers reached across an IP network (MP-BGP EVPN across the IP network)
Multi-Pod + GOLF (MP-BGP EVPN from each Pod toward the WAN routers)
Multi-Pod and GOLF – Intra-DC Deployment, Control Plane
Single APIC cluster/domain across multiple Pods connected through the IPN, with the GOLF devices attached above it
Public BD subnets are advertised to the GOLF devices with the external spine-proxy TEP as next-hop (MP-BGP EVPN control plane toward the WAN)
WAN routes are received on the Pod spines as EVPN routes and translated to VPNv4/VPNv6 routes with the spine proxy TEP as next-hop
Option to consolidate 'GOLF' and 'IPN' devices (*not available at FCS):
The consolidated devices perform pure L3 routing for Inter-Pod VXLAN traffic
They perform VXLAN encap/decap for WAN-to-DC traffic flows
Multi-Pod and GOLF – Multi-DC Deployment, Control Plane
Pod ‘A’ and Pod ‘B’ spines advertise, via the MP-BGP EVPN control plane, host routes for the endpoints belonging to public BD subnets in their own Pod
The GOLF devices inject these host routes into the WAN or register them in the LISP database
Multi-Pod and GOLF – Multi-DC Deployment, Data Plane
Traffic from an external user is steered toward the GOLF devices (via routing or LISP)
The GOLF devices VXLAN-encapsulate the traffic and send it to the Spine Proxy VTEP address (Proxy A or Proxy B)
The spine encapsulates the traffic to the destination VTEP, which can then apply policy
Multi-Pod and GOLF – Multi-DC Deployment, Data Plane (2)
The leaf applies policy and encapsulates the return traffic directly to the local GOLF VTEP address
The GOLF devices de-encapsulate the traffic and route it into the WAN (or LISP-encapsulate it to the remote router)
The traffic is received by the external user
ACI Multi-Pod Solution Summary
ACI Multi-Pod solution represents the natural evolution of the Stretched Fabric design
Combines the advantages of a centralised mgmt and policy domain with fault domain isolation (each Pod runs independent control planes)
Control and data plane integration with WAN Edge devices (Nexus 7000/7700 and ASR 9000) completes and enriches the solution
The solution is planned to be available in Q3CY16 and will be released with a companion Design Guide
• ACI Introduction and Multi-Fabric Use Cases
• ACI Multi-Fabric Design Options
• ACI Stretched Fabric Overview
• ACI Multi-Pod Solution Deep Dive
• ACI Multi-Site Solutions Overview
• Conclusions
Agenda
ACI Dual-Fabric Solution Overview
Independent ACI fabrics (ACI Fabric 1, ACI Fabric 2) interconnected via L2 and L3 DCI technologies
Each ACI fabric is independently managed by a separate APIC cluster
Separate management and policy domains
Data-plane VXLAN encapsulation is terminated at the edge of each fabric
VLAN hand-off to the DCI devices provides the Layer 2 extension service
Requires classifying inbound traffic to provide end-to-end policy extensibility
ACI Multi-Site (Future) Overview
Multiple ACI fabrics (Site ‘A’ … Site ‘n’) connected via an IP-based Inter-Site Network, with MP-BGP EVPN between sites
Separate availability zones with maximum isolation
Separate APIC clusters, separate management and policy domains, separate fabric control planes (IS-IS, COOP, MP-BGP per site)
End-to-end policy enforcement with policy collaboration
Supports multiple sites
Not bound by distance
ACI Multi-Site – Reachability
Host-level reachability advertised between fabrics via BGP
The transit network is IP-based
Host routes do not need to be advertised into the transit network
Policy context is carried with the packets as they traverse the transit IP network
Forwarding between multiple fabrics is allowed (not limited to two sites)
ACI Multi-Site – Policy Collaboration
EPG policy is exported by the source site to the desired peer (target) site fabrics
Fabric ‘A’ advertises which of its endpoints it allows other sites to see (in the example, Fabric ‘A’ exports Web, App and DB to Fabric ‘B’, while Fabric ‘B’ exports only Web and App to Fabric ‘A’)
Target site fabrics selectively import EPG policy from the desired source sites
Fabric ‘B’ controls what it wants to allow its endpoints to see in other sites (Fabric ‘B’ imports Web, App and DB from Fabric ‘A’; Fabric ‘A’ imports Web and App from Fabric ‘B’)
Policy export between multiple fabrics is allowed (not limited to two sites)
Scope of policy: the policy is applied at the provider of the contract (always at the fabric where the provider endpoint is connected)
Scoping of changes: there is no need to propagate all policies to all fabrics
Different policy can be applied based on the source EPG (i.e. which fabric the traffic comes from); a toy sketch of the export/import logic follows
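A toy model of the export/import behaviour above: each site declares which of its EPGs it exports, each site declares what it imports, and an EPG becomes visible remotely only when both sides agree. The names mirror the example; the logic is a conceptual sketch, not the Multi-Site implementation.

```python
# What each fabric offers to peers, and what each fabric is willing to consume.
exports = {
    "FabricA": {"Web1", "App1", "dB1"},    # Fabric 'A' exports Web, App, DB
    "FabricB": {"Web2", "App2"},           # Fabric 'B' exports only Web and App
}
imports = {
    "FabricA": {"Web2", "App2"},           # Fabric 'A' imports Web and App from 'B'
    "FabricB": {"Web1", "App1", "dB1"},    # Fabric 'B' imports Web, App, DB from 'A'
}

def visible_epgs(local_site: str, remote_site: str) -> set:
    """EPGs from the remote site that the local site can reference in its contracts."""
    return exports[remote_site] & imports[local_site]

print(visible_epgs("FabricB", "FabricA"))   # {'Web1', 'App1', 'dB1'}
print(visible_epgs("FabricA", "FabricB"))   # {'Web2', 'App2'}
```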
• ACI Introduction and Multi-Fabric Use Cases
• ACI Multi-Fabric Design Options
• ACI Stretched Fabric Overview
• ACI Multi-Pod Solution Deep Dive
• ACI Multi-Site Solutions Overview
• Conclusions
Agenda
Conclusions
Cisco ACI offers different multi-fabric options that can be deployed today
There is a solid roadmap to evolve those options in the short and mid term
Multi-Pod represents the natural evolution of the existing Stretched Fabric design
Multi-Site will replace the Dual-Fabric approach
Cisco will offer a smooth and gradual migration path to drive the adoption of those new solutions
Where to Go for More Information
ACI Stretched Fabric White Paper
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_kb-aci-stretched-fabric.html#concept_524263C54D8749F2AD248FAEBA7DAD78
ACI Dual Fabric Design Guide
Coming soon!
ACI Dual Fabric Live Demos
Active/Active ASA Cluster Integration
https://youtu.be/Qn5Ki5SviEA
vCenter vSphere 6.0 Integration
http://videosharing.cisco.com/p.jsp?i=14394
Q & A
Thank you