Networks ∙ Services ∙ People | www.geant.org
GÉANT Testbeds Service: Federation and Multi-Domain Strategy

Jerry Sobieski
Activity Leader, GÉANT4-1 SA2 Testbeds
Director, Int’l Research Initiatives, NORDUnet

GENI FIRE Coordination Workshop, Washington DC, Sep 18, 2015
“Federation” – different contexts depending upon your perspective

• Federated Authentication (identification)
  • A user’s credentials, issued by one of several possible administrative authorities, are recognized by multiple service providers.
  • This may involve different control interactions/APIs for each resource provider (or identity provider).
  • This has no direct bearing on the compatibility of data plane resources from the various providers.
  • Requires all providers to recognize a set of common authentication processes.
  • Providers may apply different authorization policies to the recognized user.

• Federated Authorization (common policy)
  • Providers recognize common classes (or “roles”) for users, and the classes provide a common level of functionality for users in each provider’s domain across the federation.

• Federated Resource [data plane] Provisioning
  • User data planes from different providers can be interconnected to form a single data plane object.
  • This requires some subset of the resource classes offered by two providers to share a common data plane resource model.
  • Example: two providers recognize 802.1 framing as the data format and transparent transport as the behavioural model for some resource instance in each provider’s domain. This allows the two data planes to be interconnected.
  • This may involve different user authentication and authorization processes for each provider.
  • This may involve different control interactions/APIs for each resource provider.
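As a toy illustration of the distinction between federated authentication (is the credential trusted?) and federated authorization (what does a common role mean in each domain?), a minimal sketch – the IdP names, provider names, and role tables below are all hypothetical, not taken from any real GTS or eduGAIN API:

```python
# Hypothetical sketch of federated authentication vs. authorization.
# All names here are illustrative placeholders.

# Federated authentication: the federation trusts credentials
# issued by any of several identity providers.
TRUSTED_IDPS = {"idp.nordu.net", "idp.example.org"}

# Federated authorization: each provider recognizes the common
# federation role but maps it to its own local capabilities.
PROVIDER_ROLES = {
    "gts":     {"experimenter": ["create_testbed", "attach_tunnel"]},
    "exogeni": {"experimenter": ["create_slice"]},
}

def authenticate(credential):
    """Accept the credential if its issuer is a trusted IdP."""
    return credential["issuer"] in TRUSTED_IDPS

def authorize(provider, role):
    """Return the capabilities this provider grants to the role."""
    return PROVIDER_ROLES.get(provider, {}).get(role, [])

cred = {"user": "alice", "issuer": "idp.nordu.net", "role": "experimenter"}
assert authenticate(cred)
# The same federation role yields different capabilities per domain:
print(authorize("gts", cred["role"]))      # ['create_testbed', 'attach_tunnel']
print(authorize("exogeni", cred["role"]))  # ['create_slice']
```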
“Federation” context

• Federated Control Plane – does this make sense?
  • If control protocols are the same for two providers... are they “federated”? Not necessarily (at all levels).
  • If two domains offer the same control API... are they client-server based? “Peer” based?
  • If control plane protocols are different... see “inter-working”.

• “Inter-working” := semantic interpretation of one control protocol and translation into another (and back)
  • Inter-working is often a complex process, since the capabilities offered by the two protocols may differ even when they are similar in purpose.
  • “n-body” problem: a unique interworking module is required for every pair of protocols – as the number of protocols/frameworks grows, the interworking complexity explodes.
  • Interworking will often provide only partial functional translation, or asymmetric conversion.
  • Interworking is an interim solution – generally used only to support an older service protocol until the older service facilities can be migrated to the newer service model.

• The common canonical model solution
  • A common canonical model that all protocols agree to use as the inter-domain protocol – the number of interworking modules is then linear in the number of protocols, and the original protocols are no longer necessary in the inter-domain context, becoming an intra-domain concern only.
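The “n-body” scaling argument can be made concrete with a few lines of arithmetic: pairwise interworking needs one translator per unordered pair of protocols (quadratic growth), while a common canonical model needs only one adapter per protocol (linear growth):

```python
# The interworking scaling argument from the slide, as arithmetic.

def pairwise_modules(n):
    # One unique translator per unordered pair of control protocols.
    return n * (n - 1) // 2

def canonical_adapters(n):
    # One adapter from each protocol to the shared canonical model.
    return n

for n in (3, 5, 10):
    print(n, pairwise_modules(n), canonical_adapters(n))
# e.g. 10 frameworks need 45 pairwise translators but only 10 adapters.
```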
GTS interoperability roadmap (near term)

• GTS “Federation” – stage 1: data plane connectivity
  • External Domain resources – data plane tunnels established out of band and associated with virtual resource instances in the RDB.
  • The tunnels originate at a specific interface at the edge of the GTS service domain (STP) and are [typically] manually engineered across several transit domains, using available transport services and conversions, to terminate on an interface of some box (end system or switch) in some remote facility.
  • Users can incorporate these External Domains into their GTS virtual environment just like any other resource.
  • We use these to reach ExoGENI, University of Rome, CreateNet, etc.
  • The common data plane format is 802.1 framing.
  • The common behavioural model is “transparent transport” – i.e. what goes in comes out, unmodified.
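To make the stage-1 compatibility rule concrete, a sketch of an External Domain resource as a plain data structure – this is not the actual GTS DSL; the field names and STP identifier are illustrative, and only the 802.1 framing and transparent-transport requirements come from the slide:

```python
# Illustrative sketch (not the real GTS resource DSL). Field names
# and identifiers are assumptions; only the framing/behaviour
# compatibility rule reflects the stage-1 description.

external_domain = {
    "name": "exogeni-tunnel-1",
    "stp": "gts-edge-port-7",             # hypothetical edge STP id
    "framing": "802.1",                   # common data plane format
    "behaviour": "transparent-transport", # what goes in comes out
    "remote_facility": "ExoGENI",
}

def interconnectable(a, b):
    """Two data plane resources can be joined only if they share
    the data format and the behavioural model (the stage-1 rule)."""
    return (a["framing"] == b["framing"]
            and a["behaviour"] == b["behaviour"])

other = dict(external_domain, name="rome-tunnel-1",
             remote_facility="University of Rome")
print(interconnectable(external_domain, other))  # True
```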
Next steps...

• GTS Federation – stage 2: client-based interworking
  • Fed4FIRE jFed GUI – provides a control API to several frameworks, now including GTS.
  • The user is able to establish facilities in different provider domains.
  • The client must construct data flow constructs between the multiple data planes – “stitching” (ugh).
  • For GTS, this is simplified by pre-provisioning External Domain resources in the data plane and making these available/known in the service domains at each end of the tunnel.
  • The user simply defines an “ExoGENI” port and links it into their normal GTS network description.

• GTS Federation – stage 3: federated authentication
  • eduGAIN: many identity providers. GTS will query the user’s IdP to authenticate user credentials.
  • eduGAIN was designed for web-based interactive services – not ideal for APIs... but we will make it work for now.
  • Available 2015-Q4.
  • Other suggestions for/from the GENI-FIRE community?
Stage the 4th: automated transport provisioning – NSI

• Automated global transport provisioning: NSI (!)
  • In the data plane, it is the transport circuits that provide the interconnectivity allowing integrated cross-domain testbeds/virtual environments.
  • Not all network transit domains are GENI or FIRE facilities, so we still need to provision across those domains.
  • The NSI standard is the emerging inter-domain circuit provisioning protocol/framework:
    • “Service oriented” connection services – technology agnostic
    • Open protocol (NSI WG)
    • Unique “top down” approach to global topology
    • Simple NSI topology model – and will easily incorporate ontologies
  • NSI is being deployed into production in numerous R&E networks globally, and so has global reach today.
  • The GENI FIRE community should encourage (endorse) the NSI services – we can benefit immensely from a common transport provisioning service.
  • GTS is already NSI ready – OpenNSA provisions the GTS core.
  • GÉANT BoD is already NSI enabled... GTS–BoD peering will be brought up this fall.
  • GÉANT BoD/NSI services are available at any GÉANT edge (e.g. MANLAN, London Open, NetherLight, WIX, etc.)
  • GTS will peer with other NREN GTS deployments directly, using the GTS MD functionality and NSI for transport provisioning.
  • GTS can interconnect into/across any other domains globally using NSI (only the STPs in each domain need to be known).

I offer to provide an “official” overview of the NSI provisioning model at the next FGCW.
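As a rough sketch of the NSI connection life cycle, the reserve, commit, and provision phases can be modeled as a small state machine. This is not OpenNSA code: the class, state names, and STP URNs below are illustrative placeholders that only loosely follow the NSI CS 2.0 pattern.

```python
# Hedged sketch of the NSI reserve -> commit -> provision phases.
# Not OpenNSA; names and URN-style STP identifiers are illustrative.
from dataclasses import dataclass

@dataclass
class Reservation:
    source_stp: str
    dest_stp: str
    bandwidth_mbps: int
    state: str = "INITIAL"

    def reserve(self):
        # Phase 1: hold resources along the inter-domain path.
        assert self.state == "INITIAL"
        self.state = "RESERVED"

    def commit(self):
        # Phase 2: confirm the held reservation.
        assert self.state == "RESERVED"
        self.state = "COMMITTED"

    def provision(self):
        # Phase 3: activate the data plane circuit.
        assert self.state == "COMMITTED"
        self.state = "PROVISIONED"

r = Reservation("urn:ogf:network:example-domain-a:stp-1",
                "urn:ogf:network:example-domain-b:stp-9", 1000)
r.reserve(); r.commit(); r.provision()
print(r.state)  # PROVISIONED
```

The point of the two-phase reserve/commit split is that a multi-domain path can be held tentatively across all transit domains before any of them activates the circuit.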
GTS Multi-Domain One Stop Shopping

• Stage 5: multi-domain deployments and One Stop Shopping
  • OSS allows the GTS server-side agent to do the multi-domain resource acquisition on behalf of the user.
  • GTS knows of other [GTS] domains that support the base resource class portfolio.
  • The user describes their overall experiment/testbed.
  • The GTS MDPA (multi-domain provider agent) assembles the pieces and delivers a single composite service environment (spanning multiple administrative domains) to the user.
  • This is working in the lab. Some adjunct issues are being addressed before we can roll it out in production.
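The core of the one-stop-shopping idea can be sketched as a partitioning step: the MDPA takes a single composite testbed description and splits it by the domain that will realize each resource, so that each local provider agent sees only its own share. The domain and resource names below are illustrative, not from the GTS API:

```python
# Sketch of MDPA-style request partitioning. Domain and resource
# names are hypothetical placeholders.

def partition(testbed):
    """Group the user's resources by target domain so each local
    provider agent receives only its own share of the request."""
    per_domain = {}
    for res in testbed:
        per_domain.setdefault(res["domain"], []).append(res["name"])
    return per_domain

testbed = [
    {"name": "A", "domain": "gts-domain-1"},
    {"name": "B", "domain": "gts-domain-1"},
    {"name": "C", "domain": "gts-domain-2"},
]
print(partition(testbed))
# {'gts-domain-1': ['A', 'B'], 'gts-domain-2': ['C']}
```

From the user's side the split is invisible: they describe the whole testbed once, and the composite environment comes back as a single facility.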
[Diagram: what the user requested, and what the provider assembled on behalf of the user – nodes A, B, C connected by links L1, L2, L3 – versus what the provider actually constructed in order to meet the user’s request: the same topology segmented across NSI transit domains DomX, DomY, and DomZ.]
The GÉANT Testbeds Service: Virtualisation, Management and User Control Layers

[Diagram: layered GTS architecture – a User Testbed Layer and User Agent sit above the GTS Virtual Resource Layer (virtual machines, virtual switches, virtual storage, virtual circuits). Beneath it, the GTS Virtualization Services Layer contains the MD Provider Agent, Resource Manager, and resource control agents (RCA-VM over OpenStack [VMW, ...], RCA-OFX over an OpenFlow proxy [HPOS, ...], RCA-VC over OpenNSA [OSCARS, ...], and RCA-ST), all resting on the GTS Physical Infrastructure Layer, with APIs out to other GTS domains’ PAs.]
GTS Multi-Domain One Stop Shopping

[Diagram: the user’s UA talks via the GTS API to the MDPA in GTS Domain A, which in turn acts as a user toward the MDPAs of GTS Domains B, C, and D (“downstream” domains) over the same GTS API; each domain’s local PAs (Local PA 0, Local PA 1) drive their RCA-VM, RCA-VC, and RCA-OFX agents, and the chain continues to other downstream domains.]
One Stop Shopping

[Diagram: a user’s testbed – hosts H1 and H2 at a London office and a Paris office – presented as a single integrated facility. The UA submits the request to the MDPA in the primary GTS service domain (London infrastructure); local PA processing acquires local resources, further resources are acquired downstream from a collaborating GTS service domain (Paris infrastructure), and links are segmented across the NSI transit domain(s) between the two infrastructures.]
Multi-Domain Deployments

• GÉANT has approximately 220 VMs as of Sep 2015. We plan to grow this over the next few years; specifics are TBD and will be resolved in 2016-Q2.
• EU pilots:
  • HEAnet – CY15Q4: Dublin
  • DFN – CY16Q1: Erlangen, Frankfurt, others TBD
  • CESNET – CY15Q4: Prague, plus others
  • NORDUnet – CY16Q1: CPH, STO, WIX, CHI, others TBD CY16+
  • GARR – TBD
• US pilots / PoPs:
  • Clemson/CloudLab – Clemson and CHI-SL
  • NDN
  • Others?
• We expect this will provide ~500 VMs with 10G connectivity available by 2016-Q2.
• Continue to smooth the interface with other FIRE and GENI facilities (and other user facilities).
• Integrate or incorporate technologies/APIs from these other facilities, or access to these facilities, as needed by the research community.
• Instituting workshops to gather feature/capability sets to roll into the development/deployment roadmap:
  • W#1: Oct 20-22, Copenhagen. W#2: Feb 2016, location TBD (suggestions?)
Recognizing Unexpected Outcomes: the “Software Defined eXchange”

• Software Defined eXchange (“SDX”)
  • SDX: an open exchange facility that can provide a wide range of resources and services – e.g. computational resources, data transport circuits, switching/forwarding elements, storage, etc. – and provides automated SDN interface and control capabilities. (Zink & Mambretti @ FIDC 2014)
  • The novelty of an SDX is its ability to dynamically allocate a wide range of resources beyond simple cross-connect circuits.
  • This concept is proving to be a consensus model emerging from the GENI and FIRE programs.
• We believe GTS delivers all the features of a Software Defined eXchange... plus more: indeed, the pan-European reach of GTS enables it to provide SDX “fabrics”.
“SDX Fabrics”

[Diagram: Labs A, B, and C interconnected through two GTS exchange fabrics, “Alpha” and “Beta”.]
GTS “Federation” prognosis

• Stage 1: data plane interconnection / External Domain tunnels – done.
• Stage 2: client interworking / jFed – in progress, 75%, 2015-Q4. DSL generation is working; API control is in progress.
• Stage 3: federated authentication / eduGAIN – in progress, 20%, 2016-Q1. Lab testing ETA is November; production deployment may be a bit later (Jan?).
• Stage 4: automated global transport provisioning / NSI – 90%, 2015-Q4.
• Stage 5: multi-domain GTS / One Stop Shopping – 80%, 2016-Q4.
• Stage 6: common canonical service model – 2016-H1?
• Stage 7: common canonical information model (see ontologies) – 2016-H2? GTS will move ahead in this area – it is progressing nicely.
• Stage 8: “Generalized Virtualization Framework”? – v1.0 (beta) 2016-H2?
Comments about the GTS OSS and other approaches to domain selection/partitioning

• GTS tries to provide virtual resources that users can leverage for experiments, applications, and services.
• A simple, intuitive interface is a priority (with possibly fewer knobs). This is proving an important aspect to the user community.
• One Stop Shopping simplifies an otherwise overwhelming prospect for many new users. There are still some BCP heuristics we need to do better for OSS (e.g. we still need an effective discovery approach).
• The GTS API allows the client agent to control who makes allocation decisions:
  • If the client asks one provider for the whole network, the MDPA will go find it.
  • If the client goes to multiple providers for portions of the overall testbed, then those PAs will service only those resources, and the client will be the only agent cognizant of the overall environment.
  • Example: jFed can submit to multiple providers/aggregates, or to a single provider.
• GTS applies almost no limiting policy (at this time). Our mission is to enable these activities, not limit them.
• GÉANT is considering a longer-term role for GTS in maturing new applications and services.
  • Scaling such [increasingly reliable and adopted] services for a broader user community means we need features that address availability/maintenance issues and minimize interruptions.
  • Roadmap: active modification, checkpoint/restart, performance verification and automatic fault localization, provider notification protocols, alarms, common self-recursive virtual resource management and operational control tools, accounting tools and processes, etc.
Finally... What’s in a Name?

• SA2 was originally “Testbeds as a Service” – TaaS.
• “TaaS” was not a strong, marketable moniker: it sounds like a Russian news agency... or a cartoon character.
• So we pondered... in the spirit of “PLANET Lab”, ... or the “GLOBAL Environment for Network Innovation”, ...
• How about: the “World Testbed Facility”... “WTF”?! wtf.geant.net
• Alas... management nixed WTF... they suggested instead: the GÉANT Testbeds Service – “GTS”.

services.geant.net/gts – information
gts.geant.net – user access