Deliverable D2.52
Planning of trials and evaluation - Final
Editor: G. Xilouris (NCSRD)
Contributors: E. Trouva (NCSRD), E. Markakis, G. Alexiou (TEIC), P. Comi, P. Paglierani (ITALTEL), J. Ferrer Riera (i2CAT), D. Christofy, G. Dimosthenous (PTL), J. Carapinha (PTIN), P. Harsh (ZHAW), Z. Bozakov, D. Dietrich, P. Papadimitriou (LUH), G. Gardikis (SPH).
Version: 1.0
Date: Oct 30th, 2015
Distribution: PUBLIC (PU)
T-NOVA | Deliverable D2.52: Planning of trials and evaluation - Final
© T-NOVA Consortium
Executive Summary

The validation, assessment and demonstration of the T-NOVA architecture as a complete end-to-end VNFaaS platform is critical for the success of T-NOVA as an Integrating Project. The aim is not only to present technical advances in individual components, but mainly to demonstrate the added value of the integrated T-NOVA system as a whole. To this end, the overall plan for the validation and assessment of the T-NOVA system, to take place in WP7, is mostly concentrated on end-to-end, system-wide use cases.
The first step is the assembly of a testing toolbox, taking into account standards and trends in benchmarking methodology, as well as industry-based platforms and tools for the testing of network infrastructures. Another valuable input is the current set of guidelines drafted by ETSI for NFV performance benchmarking.
The next step is the definition of the overall T-NOVA evaluation strategy. The challenges in NFV environment validation are first identified, namely: a) the functional and performance testing of VNFs; b) the reliability of the network service; c) the portability and stability of NFV environments; and d) the monitoring of the virtual network service. Then, a set of evaluation metrics is proposed, including system-level metrics (with focus on the physical system, e.g. VM deployment/scaling/migration delay, data plane performance, isolation, etc.) as well as service-level metrics (with focus on the network service, e.g. service setup time, re-configuration delay, network service performance).
The specification of the experimental infrastructure is another necessary step in the validation planning. A reference pilot architecture is defined, comprising NFVI-PoPs with compute and storage resources, each one controlled by the VIM. NFVI-PoPs are interconnected over an (emulated) WAN (Transport Network), while the overall management units (Orchestration and Marketplace) interface with the entire infrastructure. This reference architecture will be instantiated (with specific variations) in three integrated pilots (in Athens/Heraklion, Aveiro and Hannover, supported by NCSRD/TEIC, PTIN and LUH respectively), which will assess and showcase the entire set of T-NOVA system features. Other labs participating in the evaluation procedure (Milan/ITALTEL, Dublin/INTEL, Zurich/ZHAW, Rome/CRAT and Limassol/PTL) will focus on testing specific components/functionalities.
The validation plan is further refined by recalling the system use cases defined in D2.1 and specifying a step-by-step methodology, including pre-conditions and test procedure, for validating each of them. Apart from verifying the expected functional behaviour via well-defined fit criteria, a set of non-functional (performance) metrics, both system- and service-level, is defined for assessing the system behaviour under each UC. This constitutes a detailed plan for the end-to-end validation of all system use cases, while at the same time measuring and assessing the efficiency and effectiveness of the T-NOVA architecture.
Last, in addition to use-case-oriented testing, a plan is drafted for testing each of the six VNFs developed in the project (vSBC, vTC, vSA, vHG, vTU, vPXaaS). For each VNF, specific measurement tools are selected, mostly involving L3-L7 traffic generators producing application-specific traffic patterns for feeding the VNFs. A set of test procedures is then described, defining the tools and parameters to be adjusted during the tests, as well as the metrics to be collected.
The experimentation/validation plan laid out in the present document will be used as a guide for the system validation and assessment campaign to take place in WP7.
Table of Contents

1. INTRODUCTION .......... 7
2. OVERALL VALIDATION AND EVALUATION METHODOLOGY FRAMEWORK .......... 8
2.1. STANDARDS-BASED METHODOLOGIES REVIEW .......... 8
2.1.1. IETF .......... 8
2.1.2. ETSI NFV ISG .......... 9
2.2. INDUSTRY BENCHMARKING SOLUTIONS .......... 11
2.2.1. Spirent .......... 11
2.2.2. IXIA .......... 11
3. T-NOVA EVALUATION ASPECTS .......... 13
3.1. CHALLENGES IN NFV ENVIRONMENT VALIDATION .......... 13
3.2. DEFINITION OF RELEVANT METRICS .......... 13
3.2.1. System level metrics .......... 14
3.2.2. Service level metrics .......... 14
3.3. CLOUD TESTING METHODOLOGY .......... 15
3.3.1. Introduction .......... 15
3.3.2. Cloud testing in T-NOVA .......... 15
3.3.3. Cloud Environment tests .......... 16
3.3.4. Hardware disparity considerations .......... 16
4. PILOTS AND TESTBEDS .......... 17
4.1. REFERENCE PILOT ARCHITECTURE .......... 17
4.2. T-NOVA PILOTS .......... 17
4.2.1. Athens-Heraklion Pilot .......... 19
4.2.2. Aveiro Pilot .......... 21
4.2.3. Hannover Pilot .......... 22
4.3. TEST-BEDS FOR FOCUSED EXPERIMENTATION .......... 24
4.3.1. Milan (ITALTEL) .......... 24
4.3.2. Dublin (INTEL) .......... 25
4.3.3. Zurich (ZHAW) .......... 27
4.3.4. Rome (CRAT) .......... 30
4.3.5. Limassol (PTL) .......... 32
5. TRIALS, EXPECTED RESULTS AND METRICS .......... 34
5.1. SYSTEM LEVEL VALIDATION .......... 34
5.1.1. UC1.1 - Browse/select offerings: service + SLA agreement + pricing .......... 34
5.1.2. UC1.2 - Advertise VNFs .......... 35
5.1.3. UC1.3 - Bid/trade .......... 36
5.1.4. UC2 - Provision NFV services / Map and deploy services .......... 36
5.1.5. UC3 - Reconfigure/Rescale NFV services .......... 37
5.1.6. UC4 - Monitor NFV services .......... 38
5.1.7. UC4.1 - Monitor SLA .......... 39
5.1.8. UC5 - Bill NFV services .......... 40
5.1.9. UC6 - Terminate NFV services .......... 41
5.2. EVALUATION OF T-NOVA VNFS .......... 43
5.2.1. Generic tools for validation and evaluation .......... 44
5.2.2. VNF Specific validation tools .......... 48
6. CONCLUSIONS .......... 54
7. REFERENCES .......... 55
LIST OF ACRONYMS .......... 58
Index of Figures

Figure 1 Performance testing workload taxonomy .......... 10
Figure 2 T-NOVA Pilot reference architecture .......... 17
Figure 3 VIM and Compute Node details .......... 18
Figure 4 Athens topology .......... 19
Figure 5 Aveiro Pilot .......... 22
Figure 6 Hannover Pilot architecture .......... 23
Figure 7 Simplified scheme of Italtel test plant for vSBC characterization .......... 24
Figure 8 Simplified scheme of Italtel test plant for vTU characterization .......... 25
Figure 9 Dublin Testbed .......... 26
Figure 10 ICCLab “bart” testbed topology .......... 28
Figure 11 Lisa Network diagram .......... 29
Figure 12 ZHAW SDN testbed network diagram .......... 29
Figure 13 CRAT testbed .......... 31
Figure 14 Limassol Test Bed .......... 33
Index of Tables

Table 1 Main NFVI-PoP Specifications .......... 19
Table 2 Edge NFVI-PoP Specifications .......... 20
Table 3 TEIC IT Infrastructure description .......... 20
Table 4 TEIC Access Network Description .......... 21
Table 5 ZHAW Testbed availability .......... 27
Table 6 Lisa Cloud Environment specifications .......... 28
Table 7 SDN Test-bed specifications .......... 30
Table 8 CRAT Testbed Servers .......... 31
Table 9 CRAT Testbed Network Nodes .......... 31
Table 10 ETSI taxonomy mapping of T-NOVA VNFs .......... 44
Table 11 Summary of L2-L4 Traffic Generators .......... 47
1. INTRODUCTION
The aim of the T-NOVA project is to design and develop an integrated end-to-end architecture for NFV services, covering all layers of the technical framework, from the Marketplace down to the Infrastructure (NFVI). The purpose is to present a complete functional solution, which can be elevated to pre-operational status with minimal additional development after the end of the project.

In this context, the validation, assessment and demonstration of the T-NOVA solution on an end-to-end basis become critical for the success of T-NOVA as an Integrating Project. The aim is not only to present technical advances in individual components, but mainly to demonstrate the added value of the integrated T-NOVA architecture as a whole. To this end, the overall plan for the validation and assessment of the T-NOVA system, to take place in WP7, is mostly concentrated on end-to-end, system-wide use cases, rather than on unit tests of individual components or sub-components, which are expected to take place within the respective implementation WPs (WP3-WP6).
The present deliverable describes the planning of the validation/experimentation campaign of T-NOVA: the assets to be involved, the tools to be used and the methodology to be followed. It is an evolved version of the initial report (D2.51), containing updates on the methodology to be adopted and the infrastructure to be used, as well as some amendments to the test cases. It is structured as follows: Chapter 2 gives a high-level overview of the overall validation and evaluation methodology framework, highlighting some generic frameworks and recommendations for testing network and IT infrastructures. Chapter 3 discusses the challenges associated with NFV environment validation and identifies candidate system- and service-level metrics. Chapter 4 describes the pilot infrastructures (on which the entire T-NOVA system will be deployed) as well as the testbeds which will be used for focused experimentation. Chapter 5 defines the validation procedures (steps, metrics and fit criteria) to be used for validating each of the T-NOVA Use Cases; moreover, the procedures for assessing Virtual Network Function (VNF) specific scenarios are described. Finally, Chapter 6 concludes the document.
2. OVERALL VALIDATION AND EVALUATION METHODOLOGY FRAMEWORK

This section surveys the available standards-based and industry methodologies, as well as the recommendations from the ETSI NFV ISG.
2.1. Standards-Based Methodologies Review

2.1.1. IETF

Within the IETF, the Benchmarking Methodology WG (bmwg) [BMWG] is devoted to proposing the necessary methodologies and performance metrics to be measured in a lab environment, so that they will closely relate to the actual performance observed on production networks.
The bmwg WG is examining performance and robustness across various metrics that can be used for validating a variety of applications, networks and services. The main metrics that have been identified are:

§ Throughput (min, max, average, standard deviation)
§ Transaction rates (successful/failed)
§ Application response times
§ Number of concurrent flows supported
§ Unidirectional packet latency
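As an illustration (not part of the IETF documents themselves), the first two metric families above can be aggregated from raw measurement samples as sketched below; the function names and sample values are invented for this example:

```python
import statistics

def throughput_summary(samples_mbps):
    """Summarise throughput samples as bmwg-style reports expect:
    min, max, average and standard deviation."""
    return {
        "min": min(samples_mbps),
        "max": max(samples_mbps),
        "avg": statistics.mean(samples_mbps),
        "stdev": statistics.stdev(samples_mbps) if len(samples_mbps) > 1 else 0.0,
    }

def transaction_rates(successful, failed, duration_s):
    """Successful/failed transaction rates over a measurement window."""
    return {
        "success_per_s": successful / duration_s,
        "failure_per_s": failed / duration_s,
    }

if __name__ == "__main__":
    print(throughput_summary([940.0, 955.2, 948.1, 951.7]))
    print(transaction_rates(successful=9850, failed=150, duration_s=10.0))
```

In a real campaign the sample lists would come from the traffic generators discussed in Section 5.2, not from hard-coded values.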
The group has proposed benchmarking methodologies for various types of interconnect devices. Although these tests are focused on physical devices, the main methodologies might as well be applied in virtualised environments for the performance testing and benchmarking of VNFs. The most relevant RFCs identified are:

§ RFC 1944 Benchmarking Methodology for Network Interconnect Devices [RFC1944]
§ RFC 2889 Benchmarking Methodology for LAN Switching Devices [RFC2889]
§ RFC 3511 Benchmarking Methodology for Firewall Performance [RFC3511]
Additionally, the IETF IP Performance Metrics (ippm) WG [IPPM] has released a series of RFCs related to standard metrics that can be applied to measure the quality, performance and reliability of Internet data delivery services and applications running over IP. Related RFCs are:

§ RFC 2679 A One-way Delay Metric for IPPM [RFC2679]
§ RFC 2680 A One-way Packet Loss Metric for IPPM [RFC2680]
§ RFC 2681 A Round-trip Delay Metric for IPPM [RFC2681]
§ RFC 2498 IPPM Metrics for Measuring Connectivity [RFC2498]
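For instance, the round-trip delay metric of RFC 2681 can be approximated in a lab with a simple UDP probe loop. The sketch below is only illustrative: the tiny echo server stands in for a measurement target and is not part of any RFC, and timed-out probes are simply excluded from the delay samples:

```python
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    """Tiny UDP echo loop, used here as a stand-in measurement target."""
    while True:
        try:
            data, addr = sock.recvfrom(1024)
        except OSError:
            return  # socket closed, stop serving
        sock.sendto(data, addr)

def measure_rtt(host, port, probes=10, timeout=1.0):
    """Send small UDP probes and record round-trip delays in seconds,
    in the spirit of RFC 2681; timed-out probes count as lost."""
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for seq in range(probes):
            t0 = time.monotonic()
            sock.sendto(str(seq).encode(), (host, port))
            try:
                sock.recvfrom(1024)
            except socket.timeout:
                continue  # lost probe: excluded from the delay samples
            rtts.append(time.monotonic() - t0)
    return rtts

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))  # ephemeral port on loopback
    threading.Thread(target=udp_echo_server, args=(srv,), daemon=True).start()
    samples = measure_rtt("127.0.0.1", srv.getsockname()[1], probes=20)
    print(f"median RTT over loopback: {statistics.median(samples) * 1e6:.0f} us")
    srv.close()
```

Note that the one-way metrics of RFC 2679/2680 additionally require synchronised clocks at both endpoints, which this round-trip sketch deliberately avoids.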
In addition to the above, the IRTF Research Group on NFV (NFVRG) has recently addressed the issue of NFV benchmarking, focusing mostly on on-line, ad-hoc VNF benchmarking and highlighting the problems arising from the deviation of the actual run-time VNF behaviour from the performance parameters declared as part of the VNF description (i.e. the VNFD [ETSI-NFV-1]). This is the topic of a recently proposed Internet-Draft [ID-ROSA15], whose authors propose an architecture for the provision of VNF Benchmarking-as-a-Service integrated with the NFV architecture. From the T-NOVA point of view, related work items are the workload characterisation framework developed in WP4, Task 4.1, which would allow the creation of the VNF profiles anticipated by the proposed framework, as well as the monitoring framework, which is able to monitor in real time the performance metrics defined by the developer for the VNF. In addition, this framework has been proposed to the upstream OPNFV project Yardstick, along with the vTC VNF to be used as a proof of concept. OPNFV has accepted the framework and will include it in the next OPNFV release (i.e. Brahmaputra) in February 2016.
2.1.2. ETSI NFV ISG

The ETSI NFV Industry Specification Group (ISG) completed Phase 1 of its work at the end of 2014 with the publication of 11 specifications. One of those specifications (ETSI GS NFV-PER 001 V1.1.1) is focused on NFV performance and on methodologies for the testing of VNFs [NFVPERF]. The aim is to unify the testing and benchmarking of various heterogeneous VNFs under a common methodology. For the sake of performance analysis, the following workload types are distinguished:
§ Data-plane workloads, which cover all tasks related to packet handling in an end-to-end communication between edge applications.
§ Control-plane workloads, which cover any other communication between Network Functions (NFs) that is not directly related to the end-to-end data communication between edge applications.
§ Signal processing workloads, which cover all NF tasks related to digital processing, such as the FFT decoding and encoding in a C-RAN BaseBand Unit (BBU).
§ Store workloads, which cover all tasks related to disk storage.

The taxonomy of the workload characterisation is illustrated in Figure 1.
Figure 1 Performance testing workload taxonomy
A mapping of the above taxonomy to the VNFs offered by T-NOVA as a proof of concept is presented in Section 5.2 (Table 10).
ETSI NFV ISG Phase 2, spanning the period 2015-2016, has continued the work of ETSI NFV Phase 1. In particular, the responsibilities of the TST working group (Testing, Experimentation and Open Source) include, among others, the development of specifications on testing and test methodologies. Two TST Work Items, currently under development, should be taken into account by WP7:
• “Pre-deployment Testing; Report on Validation of NFV Environments and Services”: develops recommendations for pre-deployment validation of NFV functional blocks in a lab environment. The following aspects of lab testing are addressed: 1) functional validation of the interaction of VNFs with NFV functional blocks; 2) user and control plane performance validation [ETSI-NFV-TST001].
• “Testing Methodology; Report on NFV interoperability test methodology”: covers the analysis of the NFV interoperability methodology landscape and suggests a framework to be addressed [ETSI-NFV-TST002].
2.2. Industry benchmarking solutions

For the testing and validation of networks and network applications, several vendors have developed solutions for automated stress testing with a variety of network technologies and protocols, ranging from L2 to L7. Among these, the most prominent are IXIA [IXIA] and Spirent [SPIRENT]. They both adopt standardised methodologies, benchmarks and metrics for the performance evaluation and validation of a variety of physical systems. Lately, due to the ever-increasing need for testing in the frame of NFV, they have also developed methodologies that address the need for benchmarking in virtualised environments.
2.2.1. Spirent

Spirent supports standards-based methodologies for NFV validation. In general, the methodologies used are similar to those employed for physical Devices Under Test (DUTs). The functionalities and protocols offered by standard hardware devices also have to be validated in a virtual environment. VNF performance is tested against various data plane and control plane metrics, including:
§ Data plane metrics:
o latency;
o throughput and forwarding rate;
o packet-delay variation and short-term average latency;
o dropped and errored frames.
§ Control plane metrics:
o states and state transitions for various control plane protocols;
o control plane frames sent and received on each session;
o control plane error notifications;
o validation of control-plane protocols at high scale;
o scaling up on one protocol and validating protocol state machines and data plane;
o scaling up on multiple protocols at the same time and validating protocol state machines and data plane;
o scaling up on routes and MPLS tunnels.

These are a representative sample of a comprehensive set of control-plane and data-plane statistics, states and error conditions that are measured for a thorough validation of NFV functions.
2.2.2. IXIA

Ixia’s BreakingPoint Resiliency Score [IXIABRC] and the Data Center Resiliency Score are setting standards against which network performance and security (physical or virtual) can be measured. Each score provides an automated, standardized and deterministic method for evaluating and ensuring resiliency and performance.
The Resiliency Score is calculated using standards by organizations such as US-CERT, IEEE and IETF, as well as real-world traffic mixes from the world’s largest service providers. Users simply select the network or device for evaluation and the speed at which it is required to perform. The solution then runs a battery of simulations using a blended mix of application traffic and malicious attacks. The Resiliency Score simulation provides a common network configuration for all devices, in order to maintain fairness and consistency for all vendors and their solutions.

The Resiliency Score is presented as a numeric grade from 1 to 100. Networks and devices may receive no score if they fail to pass traffic at any point or if they degrade to an unacceptable performance level. The Data Center Resiliency Score is presented as a numeric grade reflecting how many typical concurrent users a data center can support without degrading to an unacceptable quality of experience (QoE) level. Both scores allow quick understanding of the degree to which infrastructure performance, security and stability will be impacted by user load, new configurations and the latest security attacks.

By using the Resiliency Score, it is possible to:
§ Measure the performance of Virtual Network Functions (VNFs) and compare it to that of their physical counterparts;
§ Measure the effect of changes to virtual resources (VMs, vCPUs, memory, disk and I/O) on VNF performance, making it possible to fine-tune the virtual infrastructure to ensure maximum performance;
§ Definitively measure the number of concurrent users that a virtualized server will support before response time and stability degrade;
§ Measure application performance in physical and virtual environments.
3. T-NOVA EVALUATION ASPECTS

This chapter provides a preliminary description of the T-NOVA evaluation aspects from the architectural and functional perspective. These aspects will be used for the definition of the evaluation strategy and as a starting point for the validation activities within Work Package 7.

3.1. Challenges in NFV Environment Validation

This section provides an overview of the challenges involved in the validation procedures for NFV environments.
Functional and performance testing of network functions - In the general case, where performance testing results are provided for end-user consumed network services, the primary concern is application performance and the exhibited quality of experience. The view in this case is more macroscopic and does not delve into the protocol level or into the operation of e.g. BGP, routing or CDN functionalities. For operators, however, additional concerns exist regarding specific control plane and data plane behaviour; whether, for example, the number of PPPoE sessions, throughput and forwarding rates, and the number of MPLS tunnels and routes supported are broadly similar between physical and virtual environments. Testing must ensure that the performance of virtual environments is equivalent to that of the corresponding physical environments, and provide the appropriate quantified metrics to support this.
Validating reliability of network service - Operators and users are accustomed to 99.999 percent availability of physical network services and will have the same expectations for virtual environments. It is important to ensure that node, link and service failures are detected within milliseconds and that corrective action is taken promptly, without degradation of services. In the event that virtual machines are migrated between servers, it is important to ensure that any loss of packets or services is within the acceptable limits set by the relevant SLAs.

Ensuring portability of VMs and stability of NFV environments - The ability to load and run virtual functions in a variety of hypervisor and server environments must also be tested. Unlike physical environments, instantiating or deleting VMs can affect the performance of existing VMs as well as services on the server. In accordance with established policies, new VMs should be assigned the appropriate number of compute cores and storage without degrading existing services. It is also critically important to test the virtual environment (i.e. NFVI and VNFs), including the Orchestrator and the Virtual Infrastructure Management (VIM) system.

Active and passive monitoring of virtual networks - In addition to pre-deployment and turn-up testing, it is also important to monitor services and network functions on either an on-going, passive basis or an as-needed, active basis. Monitoring virtual environments is more complex than monitoring their physical equivalents, because operators need to tap into either an entire service chain or just a subset of that service chain. For active monitoring, a connection between the monitoring end-points must also be created on an on-demand basis, again without degrading the performance of other functions that are not being monitored in that environment.
3.2. Definition of relevant metrics
In the context of VNF-based services, validation objectives should be defined based not only on traditional service performance metrics, which are generally applicable to network services (e.g. data plane performance: maximum delay, jitter, bit error rate, guaranteed bandwidth, etc.), but also on new NFV-specific metrics related to automated resource provisioning and multi-tenancy, e.g. time to deploy a new VNF instance, time to scale out/in, isolation between tenants, etc. On the other hand, validation objectives should be defined both from the system and the service perspective, which are considered separately in the following sub-sections.
3.2.1. System level metrics

System level metrics address the performance of the system and its several parts, without being associated with a specific NFV service. The following is a preliminary list of system level metrics to be checked for validation purposes. Although the overall system behaviour (e.g. performance, availability, security, etc.) depends on the several sub-systems or components, for evaluation purposes we are only interested in high-level service goals and the performance of the system as a whole.
• Time related metrics:
o Time to deploy a VM
o Time to scale-out a VM
o Time to scale-in a VM
o Time to migrate a VM
o Time to establish a virtual network
o Time to map a service request onto the physical infrastructure
• Data plane performance:
o Maximum achievable throughput between any two points in the network
o Packet delay (between any two points in the network)
• Performance under transient conditions:
o Stall under transient conditions (e.g. VM migration, VM scale-out/in)
o Time to modify an existing virtual network (e.g. insertion of a new node, reconfiguration of topology)
• Isolation in multi-tenant environments:
o Variability of data plane performance with the number of tenants sharing the same infrastructure resources
o Variability of control plane performance with the number of tenants sharing the same infrastructure resources
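Most of the time-related metrics above reduce to timestamping an operation at the management interface. The sketch below shows one way this could be wired up; `deploy_vm` is a hypothetical placeholder for a real, blocking VIM call (e.g. through an OpenStack client), not a T-NOVA API:

```python
import time

def timed(operation, *args, **kwargs):
    """Run an infrastructure operation and return (result, elapsed_seconds).
    time.monotonic() is immune to wall-clock adjustments during the test."""
    t0 = time.monotonic()
    result = operation(*args, **kwargs)
    return result, time.monotonic() - t0

def deploy_vm(flavor):
    """Hypothetical stand-in for a blocking VIM deployment call."""
    time.sleep(0.01)  # stand-in for the actual provisioning delay
    return {"flavor": flavor, "status": "ACTIVE"}

if __name__ == "__main__":
    vm, elapsed = timed(deploy_vm, "m1.small")
    print(f"time to deploy a VM: {elapsed:.3f} s (status: {vm['status']})")
```

The same `timed` wrapper applies unchanged to scale-out, scale-in, migration and virtual-network establishment calls, which keeps the collected samples directly comparable.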
3.2.2. Service level metrics

Service level metrics are supposed to reflect the service quality experienced by end users. Often, metrics of this kind are used as the basis for SLA contracts between service providers and their customers.

In general, NFV services may have different levels of complexity, and service level objectives may vary as a result of that variability. On the other hand, different types of NFV services may have different degrees of sensitivity to impairments.
• Time related metrics:
o Time to start a new VNF instance (interval between submission of the request through the customer portal and the time when the VNF becomes up and running)
o Time to modify/reconfigure a running VNF (interval between submission of the reconfiguration request through the customer portal and the time when the modification is enforced)
• Data plane performance:
o Maximum achievable throughput in a customer virtual network
o Latency (packet delay) between any two points in the customer virtual network
• Performance under transient conditions:
o Impact of inserting/removing a VNF/VNF chain in the data path on the network connectivity service already in place (transient packet loss)
o Impact of inserting/removing a VNF/VNF chain in the data path on the end-to-end delay
o Impact of inserting/removing a VNF/VNF chain in the data path on data throughput
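The transient packet loss entry can be quantified offline from probe sequence numbers recorded around the reconfiguration event. This helper and its window convention are illustrative, not a T-NOVA-defined procedure:

```python
def transient_loss(sent_seqs, received_seqs, window):
    """Count probe packets lost inside a reconfiguration window.

    `window` is an inclusive (start_seq, end_seq) pair bracketing the
    VNF insertion/removal event. Returns (lost_count, loss_ratio)
    relative to the probes sent inside the window."""
    start, end = window
    received = set(received_seqs)
    in_window = [s for s in sent_seqs if start <= s <= end]
    lost = [s for s in in_window if s not in received]
    ratio = len(lost) / len(in_window) if in_window else 0.0
    return len(lost), ratio

if __name__ == "__main__":
    sent = list(range(100))  # e.g. one probe every 10 ms
    received = [s for s in sent if s not in {40, 41, 42, 43, 44}]
    # 5 probes lost out of the 31 sent inside the window
    print(transient_loss(sent, received, window=(30, 60)))
```

Comparing the loss ratio inside and outside the window separates the transient impact of the VNF insertion from any steady-state loss.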
3.3. Cloud testing methodology

3.3.1. Introduction

Testing is a key phase in the software development and deployment lifecycle which, if done properly, can minimize service disruptions upon deployment and release to the end users in a production environment. With an increasing number of services being deployed in the cloud, traditional testing mechanisms are quickly becoming inadequate. The reasons are obvious: traditional test environments are typically based on a highly controlled, single-tenant setup, while the cloud offers its benefits, albeit in a multi-tenant environment. Multi-tenancy does not only mean multiple users using the services in a shared resource model; it can also mean multiple applications belonging to the same user being executed within a shared resource model.
The situation becomes even more significant when network functions are transformed from the bundled hardware-plus-software model to NFV deployment models over a virtualised infrastructure. In the T-NOVA project, in order to test NFV deployments in a provider's cloud environment, having a formal test methodology oriented to cloud environments assumes significant importance.

This section describes how a cloud-oriented testing approach can be applied in T-NOVA, focused on the cloud infrastructure and the deployed workloads, in addition to the system and service metrics described in the previous section.
3.3.2. Cloud testing in T-NOVA

The following activities belong to the cloud testing strategy to be adopted in T-NOVA:
▪ identification of the cloud performance characteristics to be evaluated: no. of connections, response times, latencies, throughput, etc.
▪ identification of self-sufficient components in a VNF to be tested
▪ conduction of individual VNFC tests in a VM (against the identified performance metrics)
▪ repetition of the tests with different VM flavors
▪ categorization of VNFCs into one or more of three performance groups: CPU intensive, memory intensive, or disk/storage intensive
▪ performance test runs with multiple VNFCs in the same VMs; the selection criterion would be to minimize the pairing of VNFCs tagged within the same performance group for such tests
▪ repetition of tests with different VM flavors
▪ end-to-end performance tests of VNFs separately
▪ repetition of tests with different VM flavors
▪ end-to-end performance tests of multiple VNFs deployed together
▪ repetition of tests with different VM flavors
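The co-location selection rule above (avoid pairing VNFCs from the same performance group) can be expressed as a trivial ranking over candidate pairs. The VNFC names and group tags below are invented purely for illustration:

```python
from itertools import combinations

# Hypothetical VNFCs tagged with the three performance groups
# named in the methodology above.
VNFC_GROUPS = {
    "transcoder": "cpu",
    "cache": "memory",
    "logger": "storage",
    "dpi": "cpu",
}

def same_group_penalty(pair):
    """1 if both VNFCs fall in the same performance group, else 0."""
    a, b = pair
    return 1 if VNFC_GROUPS[a] == VNFC_GROUPS[b] else 0

def ranked_pairings():
    """All 2-VNFC co-location candidates, mixed-group pairs first."""
    return sorted(combinations(VNFC_GROUPS, 2), key=same_group_penalty)

if __name__ == "__main__":
    for pair in ranked_pairings():
        tag = "avoid" if same_group_penalty(pair) else "prefer"
        print(f"{tag}: {pair}")
```

The same penalty function extends naturally to larger co-location sets by summing penalties over all pairs placed in one VM.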
The tests are to be conducted in two modes: one unconstrained, with no OpenStack scheduling hints allowed, and the other run with specific placement hints associated with the VM deployment Heat scripts.

Typically, in a cloud environment, interference from other workloads is to be expected, but in the private telco-cloud environment envisioned in T-NOVA, the cross-talk effect from unrelated private users' workloads can be safely ignored.

The outcome of such a methodology would allow the operator/Service Provider to gain useful insights into optimal deployment strategies for the controlled sets of VNFs to be hosted in the NFVI-PoP.
3.3.3. Cloud Environment tests

It will be useful to test the responsiveness of the OpenStack cloud framework too. A few tests that could be conducted are:
▪ OpenStack API responsiveness against a varying number of VMs deployed within the framework
▪ VM deployment latencies (against a varying number of VMs already running in the test system)
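A simple harness can produce the latency figures that the two tests above call for. The sketch below is a minimal, self-contained illustration: `fake_list_servers` is a stand-in stub (in a real run it would be replaced by an actual OpenStack client call, e.g. listing servers), and the percentile handling is deliberately simple.

```python
import time
import statistics

def measure_latency(operation, samples=10):
    """Time repeated invocations of an operation (e.g. an OpenStack API
    call) and report summary statistics in milliseconds. `operation` is
    any zero-argument callable."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        operation()
        durations.append((time.perf_counter() - start) * 1000.0)
    return {
        "min_ms": min(durations),
        "mean_ms": statistics.mean(durations),
        "p95_ms": sorted(durations)[max(0, int(0.95 * samples) - 1)],
    }

# Stand-in for an API call whose latency may grow with the number of
# VMs already deployed (the effect the tests above aim to quantify).
def fake_list_servers():
    time.sleep(0.001)

stats = measure_latency(fake_list_servers, samples=20)
```

The same helper can be re-run at different deployed-VM counts, plotting responsiveness against load as the tests above require.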
3.3.4. Hardware disparity considerations

Special attention will be paid when assessing the performance of VNFCs, as well as of VNFs, when these are to be deployed in compute nodes with special hardware capabilities, such as SSDs, SR-IOV and DPDK.
4. PILOTS AND TESTBEDS

This chapter contains the description of the different test-beds involved in the T-NOVA project, as well as the description of the different pilots, which will be used to perform all the testing and validation activities. The final deployment and infrastructure of the T-NOVA Pilots will be refined and presented during the WP7 activity, and more specifically in Task 7.1.
4.1. Reference Pilot architecture

In order to guide the integration activities, a reference pilot architecture has been elaborated. A preliminary view of the reference architecture is illustrated in Figure 2. It corresponds to a complete implementation of the T-NOVA system architecture described in D2.22, including a single instance of the Orchestration and Marketplace layers and one or more NFVI-PoPs, each one managed by a VIM instance, interconnected by a (real or emulated) WAN infrastructure (core, edge and access).

Figure 2 T-NOVA Pilot reference architecture

The reference pilot architecture will be enriched as the T-NOVA implementations progress, and will be detailed and refined in order to finally present all the building blocks and components of the Pilot deployment. The architecture will be implemented in several pilot deployments, as detailed in the next section. However, in each pilot deployment, given the equipment availability and the specific requirements for Use Case validation, the reference architecture will be adapted appropriately. Starting the description from the bottom up, the Infrastructure Virtualisation and Management Layer includes:

• an execution environment that provides IT resources (computing, memory and storage) for the VNFs. This environment comprises i) Compute Nodes (CNs) based on x86 architecture commodity hardware without particular platform capabilities and ii) enhanced CNs (eCNs) that are similarly based on x86 architecture commodity hardware, enhanced with particular data processing acceleration capabilities (e.g. DPDK, AES-NI, GPU acceleration).

• a Cloud Controller node (one per NFVI-PoP) for the management and control of the aforementioned IT resources, based on the OpenStack platform. The Liberty release is planned to be adopted for T-NOVA experimentation.

• a Network Node (one per NFVI-PoP), running the OpenStack Neutron service for managing the in-cloud networking created by the Open vSwitch instance in each CN and also in the Network Node.

• an SDN Controller (one per NFVI-PoP), based on a recent version of the OpenDaylight platform, for the control of the virtualised network resources. The interaction and integration of the SDN controller with the OpenStack platform is achieved via the ML2 plugin component provided by the Neutron service.

The latter three components, along with the implemented interfaces and agents, belong to the Virtualisation Infrastructure Management block (together with other VIM components, not fully detailed), as illustrated in more detail in Figure 3.

The integration of ODL with the OpenStack in-cloud network controller (Neutron) is achieved via the ML2 plugin. In this way, OpenStack is able to control the DC network through this plugin, while ODL controls the OVS instances via the OpenFlow protocol. An alternative deployment mode is to use the OpenStack Provider Network deployment mode, with the caveat that network provisioning and tenant network support need to be completely delegated to the NMS used by the NFVI-PoP, somewhat limiting the elasticity with respect to network provisioning.

In order to provide access to T-NOVA-specific functionalities, the VIM will allow direct orchestration communication with the ODL controller. Please refer to deliverables [D2.32], [D4.01] and [D4.1] for more details on the VIM components and structure.

The connectivity of this infrastructure with other deployed NFVI-PoPs is realized via an L3 gateway. As can be observed, in addition to the NFVI-PoP equipment, it is anticipated that an auxiliary infrastructure exists to facilitate the deployment of centralised components, such as the Orchestrator and the Marketplace modules.

Figure 3 VIM and Compute Node details

4.2. T-NOVA Pilots

4.2.1. Athens-Heraklion Pilot
4.2.1.1. Infrastructure and topology

The Athens-Heraklion pilot will be based on a distributed infrastructure between Athens (NCSRD premises) and Heraklion (TEIC premises). The interconnection will be provided by the Greek NREN (GRNET). This facility is freely available to academic institutes, supporting certain levels of QoS. The idea behind this Pilot is to be able to demonstrate T-NOVA capabilities over a distributed topology with at least two NFVI-PoPs, interconnected by pre-configured links. The setup is ideal for experimentation with NS and VNF deployment issues and performance, taking into account possible delays and losses on the interconnecting links. Additionally, this Pilot will offer to the rest of the WPs a continuous integration environment, in order to allow verification and validation of the proper operation of all developed and integrated software modules.

(a) Athens infrastructure

The Pilot architecture that will be deployed over the NCSRD testbed infrastructure is illustrated in Figure 4.

Figure 4 Athens topology
The detailed specifications of the Athens infrastructure are summarised in the following tables (Table 1 and Table 2).

Main NFVI-PoP

Table 1 Main NFVI-PoP Specifications

OpenStack Controller: Server, Intel(R) Xeon(R) [email protected], 4 cores, 16 GB RAM, 1 TB storage, Gigabit NIC
OpenStack Compute: 2 servers, each with 2x (Intel(R) Xeon(R) [email protected], 4 cores), 56 GB RAM, 1 TB storage, Gigabit NICs
OpenStack Network Node (Neutron): Server, Intel(R) Core(TM) [email protected], 8 GB RAM (to be upgraded)
OpenDaylight: Intel(R) Core(TM) [email protected], 8 GB RAM (to be upgraded)
Storage: 8 TB, SCSI, NFS NAS
Hypervisors: KVM
Cloud Platform: OpenStack Liberty
Networking: PICA8 OpenFlow 1.4 switch
Table 2 Edge NFVI-PoP Specifications

OpenStack (all-in-one, plus ODL): Server, Intel(R) Xeon(R) [email protected], 4 cores, 16 GB RAM, 2 TB storage, Gigabit NIC
Storage: 8 TB, SCSI, NFS NAS
Hypervisors: KVM
Cloud Platform: OpenStack Liberty
Networking: PICA8 OpenFlow 1.4 switch
In addition to a full-blown deployment of an NFVI-PoP (backbone DC), the accommodation of a legacy (non-SDN) network domain is also considered in the pilot architecture. This network domain will act as the Transport network, providing connectivity to other, simpler NFVI-PoPs. These PoPs will be deployed using an all-in-one logic, where the actual IT resources are implemented around a single commodity server (with, of course, limited capabilities). The selection of the above topology is justified by the need to be able to validate and evaluate the Service Mapping components and to experiment with VNF scaling scenarios. With the NCSRD and TEIC infrastructures already interconnected via the Greek NREN (GRNET), it is fairly easy to interconnect them into a distributed Pilot for T-NOVA experimentation. This will provide the opportunity to evaluate NS composition and Service Function Chaining issues over a larger-than-laboratory testbed deployment, under fully controllable conditions (depending on the SLA with the NREN).
(b) TEIC Infrastructure

In TEIC premises, a full implementation of the T-NOVA testbed will be deployed, conforming to the reference pilot architecture as described in this deliverable. The IT infrastructure that will be used for T-NOVA experimentation and validation is detailed in the following table (Table 3):
Table 3 TEIC IT Infrastructure description

Servers: 2x DELL R520 (2x E5-2400 Intel product family, 64 GB memory, 2x 512 GB SSD, 4x 620 GB 15K SAS); 4x (Intel Core i7-5930K Box, 32 GB 1600 MHz, 2x 1 TB SAS); 3x DELL R310 (dual CPU, 32 GB) (Compute Nodes)
RAM: 256 GB
Storage: 8 TB
Hypervisors: KVM
Cloud Platform: OpenStack Liberty
Networking: PICA8 OpenFlow 1.4 switch, HP 2920-48G
Internet Connection: 1 Gbps connection with the Greek Research and Technology Network, with a private IPv4 class C network
Firewall Capability: Virtualised implementation of PFSENSE (FreeBSD-based), currently working at 1 Gbps, expandable to 10 Gbps
The PASIPHAE Lab of TEIC features various access networks (Table 4) that can be used, if needed, to emulate the access part of the T-NOVA network in a large-scale deployment.

Table 4 TEIC Access Network Description

DVB-T Network: DVB-T Network (100 real users)
WiMAX: WiMAX Network (100 real users)
Ethernet: Local laboratory equipped with 300 PCs
WiFi: Campus WiFi infrastructure with 1000+ users

A detailed view of the testbed will be presented in WP7, with an explanation of the virtualised firewall used in order to interconnect with the GRNET network.
4.2.1.2. Deployment of T-NOVA components

TEIC plans to have a full T-NOVA deployment (i.e. including all the T-NOVA stack components), to be able to run local testing campaigns but also to participate in distributed evaluation campaigns along with the federated Athens Pilot.
4.2.2. Aveiro Pilot

4.2.2.1. Infrastructure and topology

PTIN's testbed facility is targeted at experimentation in the fields of Cloud Networking, network virtualization and SDN. It is distributed across two sites, the PTIN headquarters and the Institute of Telecommunications (IT), both located in Aveiro, as shown in the figure below (Figure 5). The infrastructure includes OpenStack-based IT virtualized environments, an OpenDaylight-controlled OpenFlow testbed and a legacy IP/MPLS network domain based on Cisco equipment (7200, 3700, 2800). This facility has hosted multiple experimentation and demonstration activities, in the scope of internal and collaborative R&D projects.
Figure 5 Aveiro Pilot

The infrastructure is as follows:

IT (main site):
• Intel Xeon / Intel Core i7 cores, currently totaling 157 GB RAM and 40 cores
• OpenFlow-based infrastructure (4 Network Nodes with Open vSwitch), controlled by the OpenDaylight SDN platform (Hydrogen release)
• OpenStack Kilo, OpenDaylight Lithium
• IP/MPLS infrastructure (Cisco 7200, 2800, 3700)

PTIN:
• 4x CPU Xeon E5-2670, 128 GB RAM, 1.8 TB HDD
4.2.2.2. Deployment of T-NOVA components

PTIN will be able to host all components of the NFV infrastructure. Distributed scenarios involving multiple NFVI-PoPs separated by legacy WAN domains will also be easily deployed, taking advantage of the IP/MPLS infrastructure available at the lab.
4.2.3. Hannover Pilot

4.2.3.1. Infrastructure and topology

The Future Internet Lab (FILab) – illustrated in Figure 6 – is a medium-scale experimental facility owned by the Institute of Communications Technology at LUH. FILab provides a controlled environment in which experiments can be performed on arbitrary, user-defined network topologies, using the Emulab management software.
FILab provides an experimental test-bed composed of:

• 60 multi-core servers
  o Intel Xeon E5520 quad-core CPU at 2.26 GHz
  o 8 GB DDR3 RAM at 1333 MHz
  o 1 NIC with 4x 1 Gbps ports
  o interconnected by a Cisco 6900 switch with 720 Gbps backplane switching capacity and 384x 1G ports
• 15 multi-core servers
  o Intel Xeon X5675 six-core CPU at 2.66 GHz
  o 6 GB DDR3 RAM at 1333 MHz
  o 1 NIC with 2x 10 Gbps ports
  o interconnected by a Cisco Nexus 5596 switch with 48x 10G ports
• 22 programmable NetFPGA cards
• 20 wireless nodes, and high-precision packet capture cards
• various software packages for server virtualization (e.g., Xen, KVM), flow/packet processing (e.g., Open vSwitch, FlowVisor, Click Modular Router, Snort) and routing control (e.g., NOX, POX, XORP), which have been deployed in FILab, allowing the development of powerful platforms for NFV and flow processing.
4.2.3.2. Deployment of T-NOVA components

The Hannover Pilot will be set up as an NFV PoP for the evaluation of selected components of the T-NOVA Orchestrator. More specifically, evaluation tests will be conducted for service mapping (i.e., assessing the efficiency of the T-NOVA service mapping methods) and for service chaining. In terms of service chaining, we will validate the correctness of NF chaining (i.e., that traffic traverses the NFs in the order prescribed by the client) and quantify any benefits in terms of state reduction using our SDN-based port switching approach.

Figure 6 Hannover Pilot architecture
4.3. Test-beds for focused experimentation

4.3.1. Milan (ITALTEL)

4.3.1.1. Description

The ITALTEL testing labs are composed of interconnected test plants (located in Milan and Palermo, Italy), based on proprietary or third-party equipment, to emulate real-life communication networks and carry out experiments on any type of voice and/or video over IP service. The experimental testbed will be based on the hardware platforms available in the Italtel test plants. This test plant will be used to verify the behaviour of the virtual SBC and the virtual TU VNFs.

A simplified scheme representing the connection of two Session Border Controllers is shown in Figure 7.
Figure 7 Simplified scheme of the Italtel test plant for vSBC characterization

In the scheme, two domains, here referred to as Site A and Site B, are interconnected through an IP network, using two Session Border Controllers. We use the term DUT to identify the Device Under Test.

The virtual SBC, which represents the Device Under Test, will be connected to Site B. By exploiting the capabilities offered by the Italtel test lab, a number of experiments will be designed in order to verify the DUT behaviour under a wide variety of test conditions.

The SBC in Site A is the current commercial solution of Italtel, namely the Italtel Netmatch-S. Netmatch-S is a proprietary SBC, based on bespoke hardware, which can sustain a high number of concurrent sessions and provide various services, such as NAT and transcoding of both audio and video sessions. A variety of end-user terminals is present in the test plant, and can be used in order to perform testing of any type of service. High-definition video communication and tele-presence solutions are also present in the lab, and can be used for testing activities. Traffic generators are available to verify the correct behaviour of the proposed solutions under load conditions. Finally, different types of measurement probes can be used, which can evaluate different Quality of Service parameters, both intrusively and non-intrusively.

The scheme in Figure 8 represents the virtual Transcoding Unit (TU) VNF. It provides the transcoding function for the benefit of many other VNFs, in order to create enhanced services.
Figure 8 Simplified scheme of the Italtel test plant for vTU characterization

The core task of the Video Transcoding Unit is to convert video streams from one video format to another. The tests will be performed by simulating another VNF requesting the transcoding of a file containing a video. A server with GPU accelerator cards will be used for testing this VNF. In particular, we plan to compare the behaviour of the vTU in the cases of general-purpose CPU and GPU acceleration.
4.3.1.2. Test Planning

The testbed will be mostly used for the validation and performance optimisation of the SBC and TU VNFs.
4.3.2. Dublin (INTEL)

4.3.2.1. Description

The Intel Labs Europe test-bed is a medium-scale data centre facility comprising 35+ HP servers of generations ranging from G4 to G9. The CPUs are Xeon-based, with differing configurations of RAM and on-board storage. Additional external storage options, in the form of Backblaze storage servers, are also available. This heterogeneous infrastructure is available as required by the T-NOVA project. However, for the initial experimental protocols, a dedicated lab-based configuration will be implemented, as outlined in Figure 9. This testbed is dedicated to the initial research activities for Task 3.2 (resource repository) and Task 4.1 (virtualised infrastructure). The nodes will be a mixture of Intel i7 4770 3.40 GHz CPUs with 32 GB of RAM, and one with 2x Xeon E5 2680 v2, 2.8 GHz and 64 GB of RAM. The latter provides 10 cores per processor (the compute node has in total 20 cores) and provides a set of platform features of interest to Tasks 4.1 and 3.2 (e.g. VT-x, VT-d, Extended Page Tables, TSX-NI, Trusted Execution Technology (TXT) and 8 GT/s QuickPath Interconnects for fast inter-socket communications). Each compute node features an X540-T2 network interface card. The X540 has dual 10 Gb Ethernet ports, which are DPDK-compatible, and is SR-IOV capable with support for up to 64 virtual functions. In the testbed configuration, one port on the NIC is connected to a Management Network and the other is connected to a Data Network. Inter-virtual-machine traffic on different compute nodes is facilitated via an Extreme Networks G670 48-port SDN switch with OpenFlow support. The management network is implemented with a 10 Gb 12-port Netgear ProSafe switch. From a software perspective, the testbed is running the Liberty version of OpenStack and the Helium version of OpenDaylight. Once the initial configuration has been functionally validated, the testbed will be upgraded to the Juno and Helium releases. Integration between the OpenStack Neutron module and OpenDaylight is implemented using the ML2 plugin. Virtualisation of the compute resources is based on the use of KVM hypervisors and a libvirt hypervisor controller. DPDK vSwitch delivers virtual VM connectivity through the Data Network.
Figure 9 Dublin Testbed

4.3.2.2. Test Planning

An experimental protocol has been executed on the testbed as part of the Task 4.1 activities and documented in deliverable D4.1. The primary focus of these activities was as follows:
• Workload Characterisation – capture of dynamic metrics and identification of the metrics which have the most significant influence on VNF workload performance.
• Technology Characterisation – evaluation of the candidate technologies for the IVM (e.g. Open vSwitch, DPDK, SR-IOV etc.) and identification of the most appropriate configurations etc.
• Functional Validation – evaluation of test-bed behaviour and performance.
• Enhanced Platform Awareness – identification of options to implement enhanced platform awareness within the context of the existing capabilities of OpenStack.
This testbed will continue to be used to support activities in Task 4.5. The primary focus will be:

• finalising the development of the VNF workload characterisation framework and test cases, for contribution to the OPNFV Yardstick project;
• evaluation of OVS netdev-DPDK in conjunction with the DPDK-enabled version of the virtualised traffic classifier VNF, also determining the effect of NUMA pinning, core pinning and huge pages, in combination with OVS netdev, on VNF performance.
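The NUMA-pinning effect mentioned in the last bullet comes down to keeping a VM's vCPUs (and memory) on a single socket. As a minimal illustration, the following Python sketch picks a core set entirely within one NUMA node; the topology values are illustrative (modelled loosely on the dual-socket, 10-cores-per-socket E5-2680 v2 node described above), and the function is a standalone example, not part of any T-NOVA component.

```python
def numa_pin(vcpus_needed, numa_topology):
    """Pick a core set for a VM entirely within one NUMA node, so that
    vCPUs and memory stay socket-local (the property whose performance
    impact the OVS netdev-DPDK tests examine). `numa_topology` maps a
    node id to the list of free host core ids (illustrative values)."""
    for node, free_cores in sorted(numa_topology.items()):
        if len(free_cores) >= vcpus_needed:
            return node, free_cores[:vcpus_needed]
    raise ValueError("no single NUMA node can host the VM without splitting")

# Example: two sockets with 10 cores each.
topology = {0: list(range(0, 10)), 1: list(range(10, 20))}
node, pinned = numa_pin(4, topology)
```

A VM whose request cannot fit on one node would have to be split across sockets, which is exactly the configuration the tests compare against.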
4.3.3. Zurich (ZHAW)

The Institute of Applied Information Technology (InIT)'s cloud computing lab (ICCLab) at Zurich University of Applied Sciences (ZHAW) runs multiple cloud testbeds for research and experimentation purposes. Below is a summary of the various experimentation cloud testbeds maintained by the lab.
Table 5 ZHAW Testbed availability

Testbed Name | No. of vCPUs | RAM (GB) | Storage (TB) | Purpose
Lisa | 200 | 840 | 14.5 | Used for education and by the external community
Bart | 64 | 377 | 3.2 | General R&D projects
Arcus | 48 | 377 | 2.3 | Energy research
XiFi | 192 | 1500 | 25 | Future Internet Zurich Node

4.3.3.1. Description

ICCLab's bart OpenStack cloud is generally used for various R&D projects. However, due to commitments in other projects, bart (as originally planned) will not be available for testing purposes until mid-2016.

The bart cloud consists of 1 controller node and 4 compute nodes, each being a Lynx CALLEO 1240 server. The details of each server are described next.

Type: Lynx CALLEO Application Server 1240
Model: SA1240A304R (1HE)
Processor: 2x Intel(R) Xeon(R) E5620 (4 cores)
Memory: 8x 8 GB DDR3 SDRAM, 1333 MHz, reg. ECC
Disk: 4x 1 TB Enterprise SATA-3 hard disk, 7200 U/min, 6 Gb (Seagate ST1000NM0011)

Each of the nodes of this testbed is connected through 1 Gbps Ethernet links to an HP ProCurve 2910AL switch, and uses a 1 Gbps link to the ZHAW university network. This testbed has been allocated 32 public IPs in the 160.85.4.0/24 block, which allows collaborative work to be conducted over this testbed. ICCLab currently has 3 OpenFlow switches that can be provisioned for use in T-NOVA at a later point. The characteristics of these switches are:

Model: Pica8 P-3290
Processor: MPC8541
Packet Memory Buffer: 4 MB
Memory: 512 MB System / 2 GB SD/CF
OS: PicOS, stock version

The schematic of ICCLab's bart testbed (Figure 10), which currently runs OpenStack Havana with VPNaaS and LBaaS enabled, is shown in the figure below. Virtualization in each physical node is supported through KVM using libvirt.

Figure 10 ICCLab "bart" testbed topology

This testbed can be easily modified to add more capacity if needed. Initially, this testbed will be used to support ZHAW's development work in Task 3.4 (Service Provisioning, Management and Monitoring) and Task 4.3 (SDK for SDN). Later, this testbed can be used for T-NOVA consortium-wide validation tests, as a Zurich point-of-presence (PoP) site for the overall T-NOVA demonstrator. For inter-site tests, the testbed can be connected to remote sites through a VPN setup.
Two testbed configurations are currently being used for the WP3 and WP4 activities, as laid down in the next subsections.

(a) OpenStack (non-SDN) testbed

Two operational OpenStack testbeds are available (named Bart and Lisa). The plan (due to the limited availability of bart at the current stage) is to use the semi-production Lisa cloud for T-NOVA experimentation. The specification of the Lisa testbed environment is provided in Table 6:
Table 6 Lisa Cloud Environment specifications

VCores: 224
RAM: 1.7 TB
Nova Storage: 18 TB
Cinder Storage (distributed): 1.7 TB
Glance Storage (distributed): 1.7 TB
Uplink: 100 Mbps
Floating IPs: 191 public floating IPs
Of the above resource pool, a part is planned to be dedicated to T-NOVA stack deployments. The network diagram of Lisa in the context of the ZHAW network is shown below in Figure 11:

Figure 11 Lisa network diagram

(b) SDN Testbed

Specifically, in the frame of Task 4.3, where ZHAW is developing an SDK for SDN based on the concrete implementations of agreed reference applications, a small SDN testbed has been set up with OpenStack and OpenDaylight Helium as the SDN controller. The description and characteristics of this testbed are illustrated in Figure 12:

Figure 12 ZHAW SDN testbed network diagram
As the figure shows, this testbed is made of 2 compute nodes, 1 controller node and 1 SDN controller node. All nodes are connected using a physical switch with Open vSwitch. The controller node is co-hosted with the switch. OpenStack (Juno release) is set up and configured with the OpenDaylight ML2 plugin, and the SDN controller is the OpenDaylight Helium release. Some of the work being tested in this environment includes achieving tenant isolation without using tunnels, flow redundancy to achieve resilience, service function chaining strategies, etc. The physical characteristics of this testbed are summarized in Table 7:
Table 7 SDN test-bed specifications

VCores: 12
RAM: 109 GB
Nova Storage: 147 GB
Cinder Storage (distributed): 450 GB
Glance Storage (distributed): 450 GB
Uplink: 100 Mbps
4.3.3.2. Test Planning

The ZHAW testbeds will be used to validate the T-NOVA stack and will be configured as a PoP for deploying the NFs through the Orchestrator. Furthermore, the SDK for SDN tool that will be developed in Task 4.3 will undergo functional testing using the ZHAW testbed. The tests will fall under four broad categories:
• SDK for SDN functional validation - this set of tests will be planned to undertake the feature coverage and functional evaluation of the SDK for SDN toolkit. For this, the testbed will be modified with the addition of OpenFlow switches and SDN controllers. It will also be used to validate the SFC use case during the development phase.
• Testbed validation - this set of tests will be planned to evaluate the general characteristics of the OpenStack testbed itself, VM provisioning latency studies, etc.
• Billing functional validation - this set of tests will be planned together with ATOS, to verify the different billing stakeholder scenarios.
• Marketplace testing and integration - ZHAW is adapting its Cyclops billing framework to incorporate the T-NOVA Marketplace requirements of end-user billing, as well as revenue-share reports for the FPs. The Lisa OpenStack testbed is being used to deploy Marketplace modules that interact with Cyclops, to aid in the development phase. The integration tests with the rest of the Marketplace modules will also be carried out after the development phase is over. These tests will be carried out in conjunction with ATOS and TEIC, who are the main contributors to the dashboard module.
4.3.4. Rome (CRAT)

4.3.4.1. Description

The Consortium for the Research in Automation and Telecommunication (CRAT) has developed a small SDN testbed at the Network Control Laboratory (NCLAB), with the purpose of performing academic analysis and research focused on network control and optimization.
Figure 13 CRAT testbed

The equipment comprises two physical servers, connected through a Gigabit switch, with the following specifications:
Table 8 CRAT Testbed Servers

Model: Dell PowerEdge T20
Processor: Intel Xeon E3-1225 (4 cores)
Memory: 1x 8 GB DDR3 SDRAM, 1333 MHz, reg. ECC
Disk: 1x 1 TB Enterprise SATA-3 hard disk, 7200 U/min, 6 Gb

Table 9 CRAT Testbed Network Nodes

Model: NETGEAR WNR3500L
Processor: 480 MHz MIPS 74K processor
Memory: 128 MB NAND flash and 128 MB RAM
Firmware: DD-WRT custom firmware with OpenFlow support
As shown in Figure 13, the physical servers host a total of five virtual machines, in order to set up a development and testing environment for a cluster of SDN network controllers. In this regard, the virtual machines are organized as follows:

• ODL-H1, ODL-H2 and ODL-H3 host three different instances of the OpenDaylight controller, forming a cluster of controllers.
• ODL-HDEV is used for development purposes. It holds the OpenDaylight source code, which can be extended, built and deployed on the ODL-H{DEV,1-3} machines.
• MININET is used to emulate different network topologies and evaluate the effectiveness of the cluster.
4.3.4.2. Test Planning

The CRAT testbed will be used to validate the functionalities of the SDN control plane under development in Task 4.2. Experimental plans will be developed to compare performance in different scenarios (single controller, multiple controllers). Moreover, research activities focused on the virtualization of the SDN control plane, in terms of elastic deployment and load balancing of control plane instances, will also benefit from the testbed described above.
4.3.5. Limassol (PTL)

4.3.5.1. Description

PrimeTel's triple-play platform, called Twister, is a converged end-to-end telecom platform, capable of supporting an integrated multi-play network for various media, services and devices. The platform encompasses all elements of voice, video and data in a highly customisable and upgradeable package. The IPTV streamers receive content from satellite, off-air terrestrial and studio sources and convert it to MPEG-2/MPEG-4 over UDP multicast, while Video on Demand services are delivered over UDP unicast. The Twister telephony platform uses Voice over IP (VoIP) technology. The solution is based on the open SIP protocol and provides all the essential features expected from Class 5 IP Centrex softswitches. Media Gateways are used for protocol conversion between VoIP and traditional SS7/ISDN telephone networks. IP interconnections with international carriers are provided through international PoPs. The platform also includes components that provide centralized and distributed traffic policy enforcement, monitoring and analytics in an integrated management system. The Twister Converged Billing System provides mediation, rating and bill generation for multiple services. It also maintains a profile for each subscriber. The customer premises equipment (CPE) provides customers with Internet, telephony and IPTV connectivity. It behaves as an integrated ADSL modem, IP router, Ethernet switch and VoIP media gateway. The STB receives multicast/unicast MPEG-2/MPEG-4 UDP streams and shows them on TV. Through a Sonus interface and IP connectivity, the platform is linked to a partner's 3G mobile network for offering IP service provisioning to mobile customers.
R&D Testbed

PrimeTel's R&D test-bed facilities can connect to the company's network backbone and utilize the network accordingly. Through the R&D test-bed, research engineers can connect to parts of interest on the real network. In collaboration with the Networks Department, R&D can conduct network analysis, traffic monitoring, power measurements etc., and also allow for testing and validation of newly introduced components as part of its research projects and activities. A number of beta testers can be connected to the test-bed to support validation, acting as real users and providing the necessary feedback on any proposed system, component or application developed.
Figure 14 Limassol Test Bed
Interconnections
Interconnections with other test-beds could be achieved with VPN tunnels over the Internet.
4.3.5.2. Test Planning

PrimeTel's test-bed is ideal for running the virtual home-box use case, and most suitable for testing it with real end users. PrimeTel currently has around 12,000 TV subscribers, a number of whom have expressed interest in participating in testing and evaluation activities. It is foreseen to allow real end-user testing of the T-NOVA platform, specifically for testing the HG VNF. PrimeTel's beta testers (around 100) will be invited to participate in the T-NOVA trials during Y3.
5. TRIALS, EXPECTED RESULTS AND METRICS

5.1. System Level Validation

This section approaches the system-level validation needs by providing a step-by-step approach for the validation of the T-NOVA Use Cases, as they have been laid out in Deliverable D2.1 [D2.1] and later updated in D2.22 [D2.22]. For each UC, the test description includes preconditions, methodology, metrics and expected results.
5.1.1. UC1.1 - Browse/select offerings: service + SLA agreement + pricing
Step Number: 1.1.1
Step Description: SP service description + SLA specification
Precondition: An SLA has been described for a standalone VNF by the FPs
Involved T-NOVA Components:
• SP Dashboard
• Business Service Catalogue
• SLA management module
Parameters: SLA template
Test Methodology: The SP will perform the service description procedure, involving the SLA template fulfilment, by means of the connection to the SLA management module, for different kinds of services.
Metrics:
• Time between the SP opening the service description GUI and the SLA template becoming available to be completed.
• Time between the service description being completed by the SP and the notification that the service information is available in the Business Service Catalogue and the SLA module.
Expected Results: SLA template fulfilled by the SP and stored in the SLA management module in a reasonable time.
Step Number: 1.1.2
Step Description: The Customer browses the Business Service Catalogue
Precondition: Service description and SLA specification exist for several services
Involved T-NOVA Components:
• Customer Dashboard
• Business Service Catalogue
• SLA management module
Parameters: Search time
Test Methodology: The Customer will introduce different search parameters.
Metrics: Time from when the Customer introduces a search parameter until the system shows service options.
Expected Results: The Dashboard will have to show, in a reasonable time, the offerings available in the Business Service Catalogue matching the parameters introduced.
Step Number: 1.1.3
Step Description: The Customer selects an offering and accepts the SLA conditions
Precondition: The Customer has performed a search in the Business Service Catalogue
Involved T-NOVA Components:
• Customer Dashboard
• Business Service Catalogue
• SLA management module
Parameters: Service selection
Test Methodology: The Customer selects an offering, which implies that the Customer will have to accept several conditions coming from the SLA specification in the SLA module.
Metrics: Time from when the Customer selects an offering until the SLA conditions are shown to be accepted.
Expected Results: Conditions shown to the Customer for acceptance in a reasonable time.
Step Number: 1.1.4
Step Description: The SLA agreement is created and stored
Precondition: The Customer has accepted the conditions associated with a given SLA specification
Involved T-NOVA Components:
• Customer Dashboard
• SLA management module
• Accounting
Parameters: SLA agreement
Test Methodology: The SLA contract is signed by the SP and the Customer and stored in the SLA module (the price will be stored in the Accounting module).
Metrics: Time from when the Customer has accepted the applicable conditions until the SLA contract is stored in the SLA module (including the SLA parameters that will need to be monitored by the Orchestrator monitoring system).
Expected Results: SLA agreement between Customer and SP in a reasonable time.
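Most metrics in the UC1.1 steps above follow the same pattern: time a user-facing action and judge it against a "reasonable time" bound. A minimal Python sketch of such a step-metric harness is given below; the metric names, thresholds and stub actions are illustrative assumptions, not values from the T-NOVA test plan.

```python
import time

# Hypothetical "reasonable time" thresholds (seconds) per metric; the
# actual values would come from the test campaign, not from this sketch.
THRESHOLDS = {
    "sla_template_available": 2.0,
    "catalogue_notification": 5.0,
}

def run_step_metric(name, action):
    """Execute one measurable action of a UC step and record whether it
    completed within its threshold. `action` is a stub standing in for
    a real interaction (e.g. opening the service description GUI)."""
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    return {"metric": name, "seconds": elapsed,
            "passed": elapsed <= THRESHOLDS[name]}

report = [
    run_step_metric("sla_template_available", lambda: time.sleep(0.01)),
    run_step_metric("catalogue_notification", lambda: time.sleep(0.01)),
]
```

A per-step report of this shape can then be aggregated across UC1.1.1-1.1.4 to give the pass/fail view the expected results call for.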
5.1.2. UC1.2 – Advertise VNFs
Step Number: 1.2.1
Step Description: The FP uploads the packaged VNF, also providing the metadata information
Precondition: The FP developer authenticates through the Dashboard
Involved T-NOVA Components: FP Dashboard, Marketplace, NF Store
Test Methodology: Multiple uploads of VNFs (package and metadata) will be executed, measuring the various metrics below.
Metrics: Upload time, system response, NF Store-specific database performance metrics.
Expected Results: Fast response of the Dashboard for the uploading of the VNF; quick update of the service catalogues.
5.1.3. UC1.3 – Bid/trade
Step Number 1.3.1
Step Description The SP trades with FPs via the brokerage platform
Precondition The Customer has selected an NS that is offered via the Service Catalogue and requires brokerage
Involved T-NOVA Components
• Customer dashboard
• SLA management module
• Brokerage module
Parameters Function Price, SLA agreement, Service Description
Test Methodology Validate that the returned NS is always the best fit to the Customer requirements
Metrics
• Time since the customer sends the requirement until the system returns the NS
Expected Results The brokerage platform returns the appropriate NS matching the requirements set by the Customer
5.1.4. UC2 – Provision NFV services / Map and deploy services
Step Number 2.1
Step Description Provision NFV service
Precondition Through the customer portal, the T-NOVA Customer has selected service components and relevant parameters (UC1)
Involved T-NOVA Components IVM, Orchestrator, VNF
Test Methodology Measure the time between the service request and the moment the service is fully operational (how to verify that the service is operational depends on the specific VNF)
Metrics
Metrics to verify:
• time to set up and activate the service from the moment the request is submitted by the customer;
• data plane performance (e.g. throughput, e2e delay);
• control plane performance (VNF-specific);
• time taken to enforce changes submitted by the customer
Expected Results Success criteria: the service is fully operational after the NFV service provisioning sequence
5.1.5. UC3 – Reconfigure/Rescale NFV services
Step Number 3.1
Step Description The scale of a VNF will need to change in accordance with the traffic load profile. Traffic thresholds defined in the SLA associated with the VNF define the network traffic levels.
Precondition
Service monitoring provides metric data on a VNF to the SLA Monitor component.
The SLA Monitor detects that the SLA associated with the VNF is approaching a trigger threshold.
The SLA Monitor determines the required action based on the associated SLA.
The SLA Monitor notifies the Reconfigure/Rescale NFV Service of the scaling action required.
Involved T-NOVA Components
• Virtual Infrastructure Manager (VIM)
• NFVI
• Orchestrator
Parameters
VNF-specific. Likely parameters will be:
• Network traffic load
• Number of concurrent users
• Number of concurrent sessions
• Throughput or latency
Test Methodology
1. Select an SLA parameter and specify a threshold which can be breached, e.g. network traffic load in the first VNF of the service chain.
2. Use a network traffic generator to generate load below the SLA threshold level.
3. Increase the traffic load in a stepwise manner up to the threshold.
4. Monitor the VIM to determine if a new VM is added to the OpenStack environment and to the correct VLAN.
5. Monitor that network traffic latency has been reduced (or throughput restored) after rescaling.
Metrics
• Accuracy and validity of the rescale decision
• Time delay from the variation of the metric until the rescale decision
• Time delay from the rescale decision to the completion of the rescaling of the service
• Service downtime during rescaling
Expected Results
• VNF is scaled as per SLA threshold conditions
• The additional VNF VM functions correctly, as measured by the expected impact on the trigger boundary condition, e.g. network latency/throughput
• Service downtime stays at a minimum
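As an illustration of the rescale logic exercised by this test, a minimal decision sketch is given below; the threshold value, sample window and load figures are illustrative assumptions, not actual T-NOVA SLA parameters.

```python
# Illustrative sketch of an SLA-threshold scale-out decision (assumed values,
# not actual T-NOVA SLA parameters).

def rescale_decision(load_samples, sla_threshold, window=3):
    """Return 'scale-out' when the last `window` traffic-load samples all
    exceed the SLA threshold, 'none' otherwise. Requiring several consecutive
    breaches avoids reacting to a single transient spike."""
    if len(load_samples) < window:
        return "none"
    return ("scale-out"
            if all(s > sla_threshold for s in load_samples[-window:])
            else "none")

# Stepwise load increase, as in the test methodology: start below the
# threshold and ramp up until it is breached.
threshold_mbps = 800          # assumed SLA traffic-load threshold
ramp = [100, 300, 500, 700, 850, 900, 950]

decisions = [rescale_decision(ramp[:i], threshold_mbps)
             for i in range(1, len(ramp) + 1)]
print(decisions)   # a scale-out is triggered once the load stays above the threshold
```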
5.1.6. UC4 – Monitor NFV services
Step Number 5.1
Step Description Monitor NFV Service – i) Measurement process
Precondition Service has been deployed, i.e. UCs 1, 2 and 3 have preceded
Involved T-NOVA Components
• VNF
• NFVI
• VIM
Test Methodology
1. Feed a node port with artificially generated traffic with known parameters
2. Artificially stress a VNF container (VM), consuming its resources with a mock-up resource-demanding process
Metrics
Observe the measurements collected by the VIM Monitoring Manager. Performance indicators to be observed:
• accuracy of measurement
• response time
Expected Results
• Metrics are properly propagated and correspond to the known traffic parameters and/or stress process
• Response time is kept to a minimum
Step Number 5.2
Step Description Monitor NFV Service – ii) Communication of metrics to the Orchestrator
Precondition Service has been deployed, i.e. UCs 1, 2 and 3 have preceded; Step 5.1 has been completed
Involved T-NOVA Components
• VNF
• NFVI
• VIM
• Orchestrator
Test Methodology
1. Send a metric subscription request from the Orchestrator
2. Feed a node port with artificially generated traffic with known parameters
3. Artificially stress a VNF container (VM), consuming its resources with a mock-up resource-demanding process
Metrics Observe the response time, i.e. the time interval from the change in resource usage until the Orchestrator becomes aware of the change
Expected Results
• Metrics are properly propagated and correspond to the known traffic parameters and/or stress process
• Response time is kept to a minimum
Step Number 5.3
Step Description Monitor NFV Service – iii) Communication of alarms to the Orchestrator
Precondition Service has been deployed, i.e. UCs 1, 2, 3 and 4 have preceded; Step 5.2 has been completed
Involved T-NOVA Components
• VNF
• NFVI
• VIM
• Orchestrator
Test Methodology
1. Send an alarm subscription request from the Orchestrator
2. Manually fail a network link
3. Manually drain VNF container resources
4. Artificially disrupt VNF operation
Metrics Observe the updates in the Orchestrator monitoring repositories and measure accuracy and response time, i.e. the time interval from the change in resource usage until the Orchestrator records the change
Expected Results
• Metrics are properly propagated and correspond to the known traffic parameters and/or stress process
• Response time is kept to a minimum
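The response-time metric used throughout UC4 can be illustrated with a small timing sketch: the interval between a resource-usage change and the moment the orchestrator records it. The component names and the fixed propagation delay below are stand-ins, not the real T-NOVA interfaces.

```python
# Toy sketch of measuring monitoring response time: the interval between a
# simulated resource-usage change and the moment a stand-in "Orchestrator"
# records the corresponding metric update.
import time

class Orchestrator:
    def __init__(self):
        self.records = []                  # (metric, value, time recorded)

    def notify(self, metric, value):
        self.records.append((metric, value, time.monotonic()))

orch = Orchestrator()

incident_time = time.monotonic()           # VNF container starts draining resources
time.sleep(0.01)                           # stand-in for VIM collection/propagation delay
orch.notify("cpu_load", 0.95)              # update reaches the Orchestrator

response_time = orch.records[-1][2] - incident_time
print(f"response time: {response_time * 1000:.1f} ms")
```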
5.1.7. UC4.1 – Monitor SLA
Step Number 5.4
Step Description Monitor SLA
Precondition Service has been deployed, i.e. UCs 1, 2 and 3 have preceded
Involved T-NOVA Components
• Orchestrator
• Marketplace
Test Methodology Follow a procedure similar to UC5.2 (artificially consume and drain VNF resources) and/or UC4.3 (disrupt service operation). Validate that the SLA status is affected.
Metrics Measure SLA monitoring accuracy, especially SLA violation alarms. Measure response time (from the incident to the display of the updated SLA status on the Dashboard).
Expected Results
• Proper SLA status update
• Proper indication of SLA violation
• Minimum response time
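The SLA-violation check validated in this step can be sketched as follows; the metric names and bounds are illustrative assumptions, not taken from an actual T-NOVA SLA template.

```python
# Minimal sketch of an SLA status check: compare monitored values against
# agreed SLA bounds and flag violations. Metric names and bounds are
# illustrative, not actual T-NOVA SLA parameters.

SLA_BOUNDS = {                          # metric -> (bound kind, limit)
    "latency_ms":      ("max", 50.0),   # must stay below 50 ms
    "throughput_mbps": ("min", 100.0),  # must stay above 100 Mbps
}

def sla_status(measurements):
    """Return a dict metric -> 'ok' / 'violated' for the given measurements."""
    status = {}
    for metric, (kind, limit) in SLA_BOUNDS.items():
        value = measurements[metric]
        violated = value > limit if kind == "max" else value < limit
        status[metric] = "violated" if violated else "ok"
    return status

status = sla_status({"latency_ms": 72.0, "throughput_mbps": 180.0})
print(status)   # latency exceeds its bound, throughput is within it
```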
5.1.8. UC5 – Bill NFV services
Step Number 5.1
Step Description Billing for the service provider (SP) – NF has been registered and deployed
Precondition Service has been deployed, i.e. UCs 1, 2, 3 and 4 have preceded
Involved T-NOVA Components
• VNF
• NFVI
• VIM
• Marketplace
Test Methodology Use the Marketplace to request deployment and provisioning of the NF
Metrics
Expected Results NF has been successfully deployed (as reported in the Marketplace dashboard)
Step Number 5.2
Step Description Billing for the service provider (SP) – NF usage data can be monitored
Precondition Service has been deployed, i.e. UCs 1, 2 and 3 have preceded
Involved T-NOVA Components
• Monitoring @ Marketplace level
• Accounting
• Marketplace
Test Methodology Use the Marketplace dashboard to check the resource usage by the deployed NF
Metrics All needed information is stored correctly (duration of service, billing info, SLA data)
Expected Results Resource consumption data is shown in the Marketplace dashboard (some time after deployment)
Step Number 5.3
Step Description Billing for the service provider (SP) – NF billable terms and SLA elements can be accessed
Precondition Service has been properly registered in the Marketplace
Involved T-NOVA Components Marketplace
Test Methodology Use the Marketplace interface to extract the NF billable metrics and SLA terms
Metrics All needed information is stored correctly (duration of service, billing info, SLA data)
Expected Results Retrieve the list of billable items and SLA terms from the Marketplace store
Step Number 5.4
Step Description Billing for the service provider (SP) – Get the pricing formula for the NF at this provider
Precondition Service has been properly registered in the Marketplace
Involved T-NOVA Components Marketplace
Test Methodology Use the Marketplace interface to extract the NF pricing / billing model
Metrics
Expected Results Receive the pricing equation for the NF for the provider where it is deployed
Step Number 5.5
Step Description Billing for the service provider (SP) – Generate the invoice for a time period
Precondition Service has been deployed, i.e. UCs 1, 2 and 3 have preceded
Involved T-NOVA Components Marketplace, Accounting
Test Methodology Use the Accounting interface to get the usage data for the period in question
Metrics
Expected Results The invoice from the provider to the user, for the NF and for the desired period, is generated and available from the dashboard
5.1.9. UC6 – Terminate NFV services
Step Number 6.1
Step Description A T-NOVA Customer terminates a provisioned service over the T-NOVA dashboard
Precondition There is an existing active service running (deployed) for the specific customer
Involved T-NOVA Components Marketplace, Orchestrator, Billing
Parameters N/A
Test Methodology Dispatch a termination request and observe the service status
Metrics
Metrics to verify:
1. Response time to tear down the service
2. Update of the associated information (duration of service, billing info, SLA data)
Expected Results The resources used by this service will be released. Billing information must be sent. In the customer's marketplace view, this service will be shown as stopped.
Step Number 6.2
Step Description A T-NOVA SP terminates all active services that it owns
Precondition There are several services running (deployed) for different Customers
Involved T-NOVA Components Marketplace, Orchestrator, Billing
Parameters N/A
Test Methodology Measure the time between the discard action and the moment all services are fully deactivated. Measure the response time in each component.
Metrics Response time to discard the services and inform the involved actors (SP, Customers)
Expected Results The resources used by the services must be released. Billing information must be sent. In each customer's marketplace view, the service will be shown as stopped. In the SP's portal view, all services must be stopped.
Step Number 6.3
Step Description A T-NOVA SP terminates a provisioned service for a specific T-NOVA Customer
Precondition There is an existing active service running (deployed) for the specific customer
Involved T-NOVA Components Marketplace, Orchestrator, Billing
Parameters N/A
Test Methodology Measure the time between the discard action and the moment the service is fully deactivated. Measure the response time in each component.
Metrics Response time to discard the service and inform the involved actors (SP, Customer)
Expected Results The resources used by this service will be released. Billing information must be sent. In the customer's marketplace view, this service will be shown as stopped. In the SP's portal view, the service must be stopped.
5.2. Evaluation of T-NOVA VNFs
Apart from the system-wide validation based on use cases, a separate evaluation campaign will be conducted in order to assess the efficiency and performance of the VNFs to be developed in T-NOVA, namely:
• Security Appliance (SA)
• Session Border Controller (SBC)
• Traffic Classifier (TC)
• Home Gateway (HG)
• Transcoding Unit (TU)
• Proxy (PXaaS)
The following figure presents a brief mapping of the VNFs to the taxonomy provided by the ETSI NFV ISG (see Section 2). This mapping assists the selection of the tools to be employed for the evaluation of each VNF.
Table 10 ETSI taxonomy mapping of T-NOVA VNFs
[The table maps each VNF – Security Appliance (SA), Traffic Classifier (vTC), Session Border Gateway (vSBC), Home Gateway (vHG), Proxy (vPXaaS) and Transcoder (vTU) – onto the ETSI taxonomy dimensions: Data Plane (Edge NF, Intermediate NF, Intermediate NF with Encryption), Control Plane (Routing, Authentication, Session Management), Signal Processing, and Storage (Non-Intensive, R/W Intensive).]
The following sections provide a description of some of the tools to be used for the validation of the specific VNFs.
5.2.1. Generic tools for validation and evaluation
5.2.1.1. Traffic Generators
(a) Non-Free
Enterprise-level, non-free packet generator and traffic analyser software can be used to assess system/component performance quickly and based on standard methodologies. However, these tools are expensive and might not be available for use during the evaluation activities. For example, vendors such as Spirent and Ixia (mentioned in Section 2) already provide end-to-end testing solutions that deliver high performance with deterministic results. These solutions are based on hardware and software capable of conducting repeatable test sequences utilizing a large number of concurrent flows containing a variety of L7 traffic.
(b) Open Source
Open Source community tools are easier to access, and their results are easier to compare.
L2-L4 Traffic Generation tools
Pktgen
The pktgen software package for Linux [PKTGEN] is a popular tool in the networking community for generating traffic loads for network experiments. Pktgen is a high-speed packet generator running in the Linux kernel, very close to the hardware, thereby making it possible to generate packets with very little processing overhead. The packet generation can be controlled through a user interface with respect to packet size, IP and MAC addresses, port numbers, inter-packet delay, and so on. Pktgen is used to test network equipment for stress, throughput and stability behavior. A high-performance traffic generator/analyzer can thus be created using a Linux PC.
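For quick experiments where kernel-level generators are unavailable, the same measurement structure (packet size, count, elapsed time, derived rate) can be sketched in user space; rates are of course far below pktgen's, and the packet size and count below are arbitrary choices.

```python
# Minimal user-space packet generator in the spirit of pktgen: blast
# fixed-size UDP datagrams at a local sink and report the achieved rate.
import socket
import time

PKT_SIZE = 64          # bytes per datagram (pktgen-style 64-byte frames)
N_PKTS = 10_000

sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sink.bind(("127.0.0.1", 0))                # receiver on an ephemeral port
src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

payload = b"\x00" * PKT_SIZE
t0 = time.monotonic()
for _ in range(N_PKTS):
    src.sendto(payload, sink.getsockname())
elapsed = time.monotonic() - t0

pps = N_PKTS / elapsed                     # offered packets per second
mbps = pps * PKT_SIZE * 8 / 1e6            # offered bit rate in Mbit/s
print(f"{pps:,.0f} pkt/s, {mbps:.1f} Mbit/s offered")
src.close()
sink.close()
```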
D-ITG
D-ITG (Distributed Internet Traffic Generator) [D-ITG] is a platform capable of producing IPv4 and IPv6 traffic by accurately replicating the workload of current Internet applications. At the same time, D-ITG is also a network measurement tool able to measure the most common performance metrics (e.g. throughput, delay, jitter, packet loss) at packet level.
D-ITG can generate traffic following stochastic models for packet size (PS) and inter-departure time (IDT) that mimic application-level protocol behavior. By specifying the distributions of the IDT and PS random variables, it is possible to choose different renewal processes for packet generation: by using characterization and modeling results from the literature, D-ITG is able to replicate the statistical properties of traffic of different well-known applications (e.g. Telnet, VoIP – G.711, G.723, G.729, Voice Activity Detection, Compressed RTP – DNS, network games).
At the transport layer, D-ITG currently supports TCP (Transmission Control Protocol), UDP (User Datagram Protocol), SCTP (Stream Control Transmission Protocol), and DCCP (Datagram Congestion Control Protocol). It also supports ICMP (Internet Control Message Protocol). Among its several features, an FTP-like passive mode is supported to conduct experiments in the presence of NATs, and it is possible to set the TOS (DS) and TTL IP header fields. The user simply chooses one of the supported protocols, and the distributions of both IDT and PS will be set automatically.
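The IDT/PS model described above can be sketched as follows; the exponential IDT (which yields a Poisson packet-arrival process), the rate and the constant packet size are illustrative assumptions rather than D-ITG defaults.

```python
# Sketch of a D-ITG-style traffic specification: draw inter-departure times
# (IDT) from a stochastic model and pair them with a packet size (PS). The
# exponential IDT yields a Poisson arrival process; the constant 172-byte PS
# loosely mimics an RTP voice packet. Values are illustrative.
import random

random.seed(42)   # reproducible sketch

def generate_flow(n_packets, mean_rate_pps, packet_size_bytes):
    """Return a list of (departure_time_s, size_bytes) tuples."""
    t, flow = 0.0, []
    for _ in range(n_packets):
        t += random.expovariate(mean_rate_pps)   # exponential IDT, mean 1/rate
        flow.append((t, packet_size_bytes))
    return flow

flow = generate_flow(n_packets=1000, mean_rate_pps=50, packet_size_bytes=172)
duration = flow[-1][0]
print(f"effective rate: {len(flow) / duration:.1f} pkt/s over {duration:.1f} s")
```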
Pktgen-DPDK
Pktgen-DPDK [PKTGEN-DPDK] is a traffic generator powered by Intel's DPDK, capable of generating 10 Gbit wire-rate traffic with 64-byte frames.
PF_RING
PF_RING is a high-speed packet capture library that turns a commodity PC into an efficient and cheap network measurement box suitable for both packet and active traffic analysis and manipulation.
NETMAP
netmap/VALE is a framework for high-speed packet I/O. Implemented as a kernel module for FreeBSD and Linux, it supports access to network cards (NICs), the host stack, virtual ports (the "VALE" switch), and "netmap pipes". netmap can easily achieve line rate on 10G NICs (14.88 Mpps), moves over 20 Mpps on VALE ports, and over 100 Mpps on netmap pipes. netmap/VALE can be used to build extremely fast traffic generators, monitors, software switches and network middleboxes, to interconnect virtual machines or processes, and to do performance testing of high-speed networking applications without the need for expensive hardware. Full libpcap support is provided, so most pcap clients can use it with no modifications. netmap, VALE and netmap pipes are implemented as a single, non-intrusive kernel module. Native netmap support is available for several NICs through slightly modified drivers; for all other NICs, an emulated mode on top of standard drivers is provided. netmap/VALE is part of standard FreeBSD distributions and is also available in source format for Linux.
MGEN
The Multi-Generator (MGEN) [MGEN] is open source software by the Naval Research Laboratory (NRL) PROTocol Engineering Advanced Networking (PROTEAN) group that provides the ability to perform IP network performance tests and measurements using UDP and TCP IP traffic. The toolset generates real-time traffic patterns so that the network can be loaded in a variety of ways. The generated traffic can also be received and logged for analysis. Script files are used to drive the generated loading patterns over the course of time. These script files can be used to emulate the traffic patterns of unicast and/or multicast UDP and TCP IP applications. The toolset can be scripted to dynamically join and leave IP multicast groups. MGEN log data can be used to calculate performance statistics on throughput, packet loss rates, communication delay, and more. MGEN currently runs on various Unix-based (including MacOS X) and WIN32 platforms. The principal tool is the mgen program, which can generate, receive, and log test traffic. Additional tools are available to facilitate automated script file creation and log file analyses.
IPERF
IPERF [IPERF] is a commonly used network-testing tool that can create Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) data streams and measure the throughput of a network that is carrying them. IPERF is a tool for network performance measurement, and specifically for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, delay jitter, datagram loss and other parameters. IPERF is written in C.
IPERF allows the user to set various parameters that can be used for testing a network, or alternatively for optimizing or tuning a network. IPERF has client/server functionality, and can measure the throughput between the two ends, either unidirectionally or bidirectionally. It is open-source software and runs on various platforms including Linux, Unix and Windows (either natively or inside Cygwin).
• UDP: When used for testing UDP capacity, IPERF allows the user to specify the datagram size and provides results for the datagram throughput and the packet loss.
• TCP: When used for testing TCP capacity, IPERF measures the throughput of the payload.
Typical IPERF output contains a time-stamped report of the amount of data transferred and the throughput measured.
IPERF is significant as it is a cross-platform tool that can be run over any network and outputs standardized performance measurements. Thus it can be used for comparison of both wired and wireless networking equipment and technologies. Since it is also open source, the user can scrutinize the measurement methodology as well.
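The client/server measurement structure that iperf implements can be sketched in a few lines over the loopback interface; the buffer size and transfer volume below are arbitrary choices.

```python
# Minimal iperf-style TCP throughput measurement over loopback: a server
# thread counts received bytes while the client streams a fixed amount of
# data, and throughput is derived from bytes / elapsed time.
import socket
import threading
import time

CHUNK, N_CHUNKS = 64 * 1024, 200           # 200 x 64 KiB, ~12.8 MB transfer

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
received = 0

def sink():
    global received
    conn, _ = server.accept()
    while data := conn.recv(CHUNK):        # count bytes until the client closes
        received += len(data)
    conn.close()

t = threading.Thread(target=sink)
t.start()

client = socket.create_connection(server.getsockname())
t0 = time.monotonic()
for _ in range(N_CHUNKS):
    client.sendall(b"\x00" * CHUNK)
client.close()                             # EOF lets the sink loop finish
t.join()
elapsed = time.monotonic() - t0
server.close()

print(f"{received / elapsed / 1e6:.1f} MB/s over loopback")
```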
Ostinato
Ostinato [OSTINATO] is an open-source, cross-platform network packet and traffic generator and analyzer with a friendly GUI. It aims to be "Wireshark in reverse" and thus complementary to Wireshark. It features custom packet crafting, with editing of any field, for several protocols: Ethernet, 802.3, LLC SNAP, VLAN (with Q-in-Q), ARP, IPv4, IPv6, IP-in-IP a.k.a. IP tunneling, TCP, UDP, ICMPv4, ICMPv6, IGMP, MLD, HTTP, SIP, RTSP, NNTP, etc. It can import and export PCAP capture files. Ostinato is useful for both functional and performance testing.
The following table summarizes some of the most widely used traffic generators for L2-L4 assessment.
Table 11 Summary of L2-L4 Traffic Generators

Traffic Generator | Operating System | Network Protocols | Transport Protocols | Measured Parameters
Pktgen | Linux | IPv4, IPv6 | UDP | Throughput
D-ITG | Linux, Windows | IPv4, IPv6 | UDP, TCP, DCCP, SCTP, ICMP | Throughput, packet loss, delay, jitter
Pktgen-DPDK | Linux | IPv4, IPv6 | UDP | Generation only
PF_RING | Linux | IPv4, IPv6 | UDP, TCP | Generation only
NETMAP | Linux, FreeBSD | IPv4, IPv6 | UDP, TCP | Generation only
MGEN | Linux, FreeBSD, NetBSD, Solaris, SunOS, SGI, DEC | IPv4 | UDP, TCP | Throughput, packet loss, delay, jitter
Iperf | Linux, Windows, BSD | IPv4 | UDP, TCP | Throughput, packet loss, delay, jitter
Ostinato | Linux | IPv4, IPv6, IP-in-IP (IP tunneling) | Ethernet, 802.3, LLC SNAP, VLAN (with Q-in-Q), ARP, TCP, UDP, ICMPv4, ICMPv6, IGMP, MLD, HTTP, SIP, RTSP, NNTP | Real-time port receive/transmit statistics and rates
L4-L7 Traffic Generation tools
• SIPp [SIPp]: a free Open Source test tool / traffic generator for the SIP protocol. It includes a few basic SipStone user agent scenarios (UAC and UAS) and establishes and releases multiple calls with the INVITE and BYE methods. It can also read custom XML scenario files describing anything from very simple to complex call flows. It features dynamic display of statistics about running tests (call rate, round trip delay, and message statistics), periodic CSV statistics dumps, TCP and UDP over multiple sockets or multiplexed with retransmission management, and dynamically adjustable call rates.
• Seagull [SEAGULL]: Seagull is a free, Open Source (GPL) multi-protocol traffic generator test tool. Primarily aimed at IMS (3GPP, TISPAN, CableLabs) protocols (and thus being the perfect complement to SIPp for IMS testing), Seagull is a powerful traffic generator for functional, load, endurance, stress and performance/benchmark tests for almost any kind of protocol. In addition, its openness allows adding support for a brand-new protocol in less than two hours, with no programming knowledge. For that, Seagull comes with several protocol families embedded in the source code: Binary/TLV (Diameter, Radius and many 3GPP and IETF protocols), External library (TCAP, SCTP), and Text (XCAP, HTTP, H248 ASCII).
• TCPReplay [TCPREP]: a suite of GPLv3-licensed utilities for UNIX (and Win32 under Cygwin) operating systems for editing and replaying network traffic previously captured by tools like tcpdump and Ethereal/Wireshark. It allows classifying traffic as client or server, rewriting Layer 2, 3 and 4 packets, and finally replaying the traffic back onto the network and through other devices such as switches, routers, firewalls, NIDS and IPSs. TCPReplay supports both single and dual NIC modes, for testing both sniffing and in-line devices.
5.2.1.2. SDN Controller evaluation tools
A measurement framework for the evaluation of OpenFlow switches and controllers has been developed in OFLOPS [OFLOPS]. OFLOPS is an open framework for OpenFlow switch evaluation. The software suite consists of two modules, OFLOPS and Cbench. OFLOPS (OpenFLow Operations Per Second) is a dummy controller used to stress and measure the control logic of OpenFlow switches. Cbench, on the other hand, emulates a collection of substrate switches by generating large numbers of packet-in messages and evaluating the rates of the corresponding flow-modification messages generated by the controller. As the source code of the framework is distributed under an open license, it can be adapted for performance evaluation within the T-NOVA project.
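The Cbench measurement principle (offered packet-in events versus returned flow-mods per second) can be illustrated with a toy stand-in controller; the handler below is a trivial placeholder, not a real OpenFlow stack.

```python
# Toy Cbench-style measurement: fire "packet-in" events at a stand-in
# controller function and report flow-mod responses per second. The control
# logic is a deliberate placeholder, not a real OpenFlow implementation.
import time

def controller_handle(packet_in):
    """Stand-in control logic: derive a trivial flow-mod from a packet-in."""
    return {"match": packet_in["src"], "action": "output:1"}

N_EVENTS = 50_000
events = [{"src": i % 256} for i in range(N_EVENTS)]

t0 = time.monotonic()
responses = [controller_handle(ev) for ev in events]
elapsed = time.monotonic() - t0

print(f"{len(responses) / elapsed:,.0f} flow-mods/s (toy controller)")
```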
5.2.1.3. Service/Resource mapping evaluation tools
AutoEmbed [DIETRICH13] was originally developed for the evaluation of various aspects of multi-provider VN embedding, such as the efficiency and scalability of embedding algorithms, the impact of different levels of information disclosure on VN embedding efficiency, and the suitability of VN request descriptions. The AutoEmbed framework supports different business roles and stores topology and request information, as well as the network state, in order to evaluate mapping efficiency. AutoEmbed includes an extendable library which supports the integration of additional embedding algorithms, which can be compared against a reference embedding, e.g. by using linear program optimization to find optimal solutions for different objectives, or by using a different resource visibility level. Request and topology information are exchanged using an XML schema, which simplifies intercommunication with existing components. The evaluation can either be done online by using the GUI, or by further processing the meta-statistics (.csv files) computed by the AutoEmbed library.
Alevin
ALgorithms for Embedding of VIrtual Networks (ALEVIN) is a framework to develop, compare, and analyze virtual network embedding algorithms [ALEVIN]. The focus in the development of ALEVIN has been on modularity and efficient handling of arbitrary parameters for resources and demands, as well as on supporting the integration of new and existing algorithms and evaluation metrics. ALEVIN is fully modular regarding the addition of new parameters to the virtual network model.
For platform independence, ALEVIN is written in Java. ALEVIN's GUI and multi-layer visualization component is based on MuLaViTo [MULATIVO], which enables visualizing and handling the SN and an arbitrary number of VNs as directed graphs.
5.2.2. VNF-specific validation tools
5.2.2.1. Traffic Classifier (vTC)
In T-NOVA, the vTC shares common properties with its hardware-based counterpart. Activities in the frame of the IETF Benchmarking Methodology WG have proposed benchmarking methodologies for such devices, i.e. [Hamilton07] (more specific to media-aware types of classification). The goal of that document is to generate performance metrics in a lab environment that closely relate to the actual observed performance on production networks. The document aims at examining performance and robustness across the following metrics:
• Throughput (min, max, average, standard deviation)
• Transaction rates (successful/failed)
• Application response times
• Number of concurrent flows supported
• Unidirectional packet latency
The above metrics are independent of the Device Under Test (DUT) implementation. The DUT should be configured as when used in a real deployment, or as typical for the use case for which the device is intended. The selected configuration should be made available along with the benchmarking results. In order to increase and guarantee repeatability of the tests, the configuration scripts and all the information relating to the testbed setup should be made available. A very important issue for the benchmarking of content-aware devices is the traffic profile that will be utilized during the experiments. Since the explicit purposes of these devices vary widely, but they all inspect deep into the packet payload in order to support their functionalities, the tests should utilize traffic flows that resemble the real application traffic. It is important for the testing procedure to define the following application-flow-specific characteristics:
• Data exchanged by flow, in bits
• Offered percentage of total flows
• Transport protocol(s)
• Destination port(s)
Planned Benchmarking Tests
1. Maximum application session establishment rate – Traffic pattern generation should begin at 10% of the expected maximum and go through to 110% of the expected maximum. The duration of each test should be at least 30 seconds. The following metrics should be observed:
• Maximum Application Flow rate – maximum rate at which the application is served
• Application flow duration – min/max/avg application duration, as defined by [RFC2647]
• Application Efficiency – the percentage ratio of the transmitted bytes minus retransmitted bytes over the transmitted bytes, as defined in RFC 6349
• Application flow latency – min/max/avg latency introduced by the DUT
2. Application Throughput – determine the forwarding throughput of the DUT. During this test the application flows pass through the DUT at 30% of the maximum rate. The following metrics should be observed:
• Maximum Throughput – maximum rate at which all application flows completed
• Application flow duration – min/max/avg application duration
• Application Efficiency – as defined previously
• Packet Loss
• Application flow latency
3. Malformed traffic handling – determine the effects on performance and stability that malformed traffic may have on the DUT. The DUT should be subjected to malformed traffic at all protocol layers (fuzzed traffic).
5.2.2.2. Session Border Controller (vSBC)
The vSBC incorporates two separate functions within a single device: the Interconnection Border Control Function (IBCF) for the signalling procedures, and the Border Gateway Function (BGF), focused on the user data plane. Signalling procedures are implemented using the Session Initiation Protocol (SIP), while the data or user plane usually adopts the Real-time Transport Protocol (RTP) for multimedia content delivery.
The metrics that will be adopted to characterize the virtual SBC performance necessarily refer to the sessions it can establish, and generally cover three main aspects:
• the maximum number of concurrent sessions that can be established by the SBC
• the maximum session rate (expressed as the number of originated/terminated sessions per second)
• the quality of service perceived by the end-users during audio/video sessions.
The provided quality of service is usually verified by analyzing a set of parameters evaluated in each active session. The basic parameters are related to network jitter, packet loss and end-to-end delay [RFC3550]. However, instrumental measurements of ad hoc objective parameters should also be performed. In particular, objective assessment of speech and video quality should be achieved using, for instance, the techniques described in Rec. ITU-T P.862 (Perceptual Evaluation of Speech Quality, PESQ) for audio, or following the guidelines given in ITU-T J.247 (Objective perceptual multimedia video quality measurement in the presence of a full reference) for video.
The metrics summarized above are strictly correlated. In fact, it must be verified that the maximum number of concurrent sessions and the maximum session rate can be achieved simultaneously. Moreover, the quality of service must be continuously monitored under loading conditions, to verify that the end-user perception is not affected. To this end, ad hoc experiments must be designed, for instance by analysing a few sample sessions, kept always active during loading tests.
Finally, overloading tests will also be designed. The maximum session rate will be exceeded by 10%; the overload condition will be maintained for a given time interval, and then removed. After a specified settling time, the vSBC is expected to converge again to the nominal performance.
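The jitter parameter cited from [RFC3550] is maintained as a running estimate, J += (|D| - J) / 16, where D is the difference between the transit times of consecutive packets. A sketch of the estimator that an RTP quality probe could implement follows, with illustrative timestamps.

```python
# Sketch of the RFC 3550 interarrival-jitter estimator for an RTP stream:
# J += (|D| - J) / 16, where D is the transit-time difference between
# consecutive packets. Timestamps below are illustrative.

def interarrival_jitter(send_times, recv_times):
    """Running RFC 3550 jitter estimate over paired send/receive timestamps."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# 20 ms RTP cadence with a 5 ms delay wobble on the second packet.
send = [0.000, 0.020, 0.040, 0.060]
recv = [0.050, 0.075, 0.090, 0.110]
j = interarrival_jitter(send, recv)
print(f"jitter estimate: {j * 1000:.2f} ms")
```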
5.2.2.3. Security Appliance (vSA)
For the validation of the vSA VNF, a broad set of intrusion/attack simulators exists. Depending on the type of attacks that will be tested, different tools that could be used are:
• Low Orbit Ion Cannon (LOIC): an open source application that can be used for stress testing and denial-of-service attack generation. It is written in C# and is currently hosted on SourceForge (http://sourceforge.net/projects/loic/) and GitHub (https://github.com/NewEraCracker/LOIC/).
• Internet Relay Chat (IRC) protocol: in the case of Distributed DoS attacks, the master can use the IRC protocol to send commands to the attacking machines equipped with LOIC. The IRC protocol (described in RFC 2812) enables the transfer of messages in the form of text between clients.
• hping (http://www.hping.org/): hping is a command-line oriented TCP/IP packet assembler/analyzer. The interface is inspired by the ping(8) unix command, but hping isn't only able to send ICMP echo requests. It supports the TCP, UDP, ICMP and RAW-IP protocols, has a traceroute mode, the ability to send files over a covert channel, and many other features, including firewall testing and port scanning. hping works on the following unix-like systems: Linux, FreeBSD, NetBSD, OpenBSD, Solaris, Mac OS X, Windows.
• SendIP (http://www.earth.li/projectpurple/progs/sendip.html): SendIP is a command-line tool to send arbitrary IP packets. It has a large number of options to specify the content of every header of a RIP, RIPng, BGP, TCP, UDP, ICMP, or raw IPv4/IPv6 packet. It also allows any data to be added to the packet. Checksums can be calculated automatically, but sending out wrong checksums is supported too.
• Internet Control Message Protocol (ICMP): this protocol can be used to report problems that occur during the delivery of IP datagrams within an IP network. It can be utilized, for instance, when a particular End System (ES) is not responding, when an IP network is not reachable, or when a node is overloaded.
• Ping: the "ping" application can be used to check whether an end-to-end Internet path is operational. Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP response. In the process, it measures the time from transmission to reception (Round Trip Time, RTT) and records any packet loss. This application can be used to detect whether a service is under attack or not. As an example, if a service is running in a virtual machine, checking the performance of the virtual machine through the RTT variation might show whether the service is under attack or not.
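The RTT-based detection idea in the last bullet can be sketched as a simple baseline-plus-deviation check; the threshold factor and the RTT samples are illustrative, not calibrated values.

```python
# Sketch of ping-based attack detection: establish a baseline RTT for the
# healthy service and flag a possible attack when a new sample exceeds the
# baseline mean by k standard deviations. Thresholds are illustrative.
from statistics import mean, stdev

def rtt_alarm(baseline_ms, sample_ms, k=3.0):
    """Flag when a sample exceeds baseline mean + k * standard deviation."""
    return sample_ms > mean(baseline_ms) + k * stdev(baseline_ms)

baseline = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]   # RTTs while service is healthy

print(rtt_alarm(baseline, 10.4))   # normal variation, no alarm
print(rtt_alarm(baseline, 48.0))   # RTT blow-up, possible DoS in progress
```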
5.2.2.4. Home Gateway (vHG)
The Virtual Home Box integrates various middleware and service layer modules. Part of the proposed functionalities are related to video streaming and, therefore, it can also be viewed as a media server for End-Users.
Validation methodologies for the service environment (such as server monitoring) can be applied to the vHG, to evaluate its performance as an individual entity during the content delivery and transcoding steps.
For testing the video quality at the user side, some standardized approaches exist. They will be used as performance metrics for validating the video encoding/decoding and QoS/QoE estimation tools.
The validation method for Video Streaming will be built upon previous work carried out by some partners in the ALICANTE project.
Peak Signal-to-Noise Ratio (PSNR)
For performing evaluations of the vHG, one can use the well-known PSNR metric, which offers a numeric representation of the fidelity of a frame/video. PSNR allows the evaluation of the video quality resulting from decisions of the adaptation chain at the user environment.
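For 8-bit video, PSNR is conventionally computed as 10·log10(255²/MSE) between a reference and a distorted frame. A minimal pure-Python sketch (frames represented as flat pixel lists for illustration):

```python
import math

def psnr(reference, distorted, max_val=255):
    """Peak Signal-to-Noise Ratio (in dB) between two equally sized
    frames, given as flat lists of 8-bit pixel values. Identical frames
    yield infinity (no distortion)."""
    assert len(reference) == len(distorted)
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    return math.inf if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

ref = [100, 150, 200, 250]
print(psnr(ref, ref))                    # identical frames -> inf
print(psnr(ref, [p - 5 for p in ref]))   # constant offset of 5 -> ~34.15 dB
```

A full video-level PSNR is typically the same computation averaged over all frames of the sequence.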
Video Quality Metric (VQM)
The NTIA Video Quality Metric (VQM) [ALICANTED8.1] is a standardized method of objectively measuring video quality by comparing the original and the distorted video sequences based only on a set of features extracted independently from each video. The method takes perceptual effects of various video impairments into account (e.g., blurring, jerky/unnatural motion, global noise, block distortion, colour distortion) and generates a single metric which predicts the overall quality of the video.
Subjective Quality Evaluations
As the user environment is dedicated to the quality of the service as perceived by the user, there is the need to perform subjective quality evaluations to effectively detect the quality of a system [ITU-RBT50013]. In this case, one can use a vast number of different evaluation methods, such as the Double Stimulus Continuous Quality Scale (DSCQS) [PINSON04]. The DSCQS provides means for comparing two sequences subjectively: the user evaluates once a reference version (i.e., a version not processed by the system under investigation) and once a processed version (i.e., a version processed by the system under investigation). The resulting rating provides feedback on how well the system under investigation performs and whether parameters need to be adjusted.
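The DSCQS outcome can be summarised numerically as the difference between the mean scores given to the reference and to the processed sequence. A toy sketch with hypothetical viewer scores (the full ITU-R procedure involves controlled viewing conditions and score normalisation, which are omitted here):

```python
from statistics import mean

def dscqs_difference(reference_scores, processed_scores):
    """Difference between mean quality scores (continuous 0-100 scale)
    for the reference and the processed sequence. Values near 0 mean the
    system under investigation preserves the perceived quality."""
    return mean(reference_scores) - mean(processed_scores)

ref_scores = [82, 78, 85, 80]    # viewers rating the hidden reference
proc_scores = [70, 68, 74, 72]   # same viewers rating the processed clip
print(dscqs_difference(ref_scores, proc_scores))   # 10.25
```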
5.2.2.5. vPXaaS
The vPXaaS VNF is based on the Squid open-source proxy software. The main characteristics that may impact proxy performance evaluation, and are also relevant to the virtualised version, are:
• Ratio of Cachable Objects – a bigger ratio increases the efficiency and performance of the proxy.
• Object Set Size – the total size of the objects cached by the proxy. The proxy cache must be able to quickly determine whether a requested object is cached, to reduce response latency. The proxy must also efficiently update its state on a cache hit, miss or replacement.
• Object Size – the issue for the proxy cache is to decide whether to cache a large number of small objects (which could potentially increase the hit rate) or to cache a few large objects (possibly increasing the byte hit rate).
• Recency of Reference – most web proxies use the Least Recently Used (LRU) replacement policy. Recency is a characteristic of web proxy workloads.
• Frequency of Reference – the popularity of certain objects dictates that a replacement policy that can discriminate against one-timers should outperform a policy that does not.
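The LRU replacement policy mentioned above can be illustrated with a toy cache built on Python's OrderedDict (a minimal sketch, not Squid's actual implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Toy Least-Recently-Used object cache: on overflow, evict the
    object that has gone longest without being referenced."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                      # cache miss
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("/a", "objA"); cache.put("/b", "objB")
cache.get("/a")                              # /a becomes most recent
cache.put("/c", "objC")                      # evicts /b, the LRU entry
print(list(cache.store))                     # ['/a', '/c']
```

A frequency-aware policy, by contrast, would also track reference counts so that one-time objects ("one-timers") are preferred for eviction.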
In order to conduct the performance assessment of the vPXaaS VNF, the metrics considered are hit rate and byte hit rate. The hit rate is the percentage of all requests that can be satisfied by searching the cache for a copy of the requested object. The byte hit rate represents the percentage of all data that is transferred directly from the cache rather than from the origin server. In addition, response time (latency) will be considered.
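Both metrics can be computed directly from a request trace. A sketch over hypothetical (hit, bytes) request records, showing why the two rates can diverge:

```python
def cache_metrics(requests):
    """Compute hit rate and byte hit rate from a request trace.

    requests: iterable of (was_hit, object_bytes) tuples.
    Hit rate      = fraction of requests served from the cache.
    Byte hit rate = fraction of transferred bytes served from the cache.
    """
    total_reqs = hit_reqs = total_bytes = hit_bytes = 0
    for was_hit, size in requests:
        total_reqs += 1
        total_bytes += size
        if was_hit:
            hit_reqs += 1
            hit_bytes += size
    return hit_reqs / total_reqs, hit_bytes / total_bytes

# Three of four requests hit the cache, but the single miss is a large
# object, so the byte hit rate is much lower than the hit rate.
trace = [(True, 1_000), (True, 2_000), (False, 90_000), (True, 7_000)]
hit_rate, byte_hit_rate = cache_metrics(trace)
print(hit_rate, byte_hit_rate)   # 0.75 0.1
```

In the actual campaign these counters would be extracted from Squid's access logs or from the traffic generator's own statistics.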
For the experimentation campaign, open-source tools capable of generating synthetic, realistic HTTP, FTP and SSL traffic will be used. A (non-exhaustive) list is provided below:
• Web Polygraph [WEBPOLY] is a freely available benchmarking tool for Web caching proxies. The Polygraph distribution includes high-performance Web client and server simulators. Polygraph is capable of modeling a variety of workloads for micro- and macro-benchmarking. Poly has been used to test and tune most leading caching proxies and is the benchmark used for TMF cache-offs.
• httperf [HTTPERF] is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks.
• http_load [HTTPLOAD] runs multiple HTTP fetches in parallel, to test the throughput of a web server. However, unlike most such test clients, it runs in a single process, so it doesn't bog down the client machine. It can be configured to do HTTPS fetches as well.
5.2.2.6. vTU
The virtual Transcoding Unit (vTU) evaluation involves a performance comparison between accelerated and non-accelerated versions of the same VNF. The accelerated version exploits a multicore GPU card installed in the host machine. The vTU performance evaluation will employ methodologies and tools already analysed in the above sections, especially the ones used for the vSBC and vHG. The main performance metrics considered are:
• CPU load
• Memory used
• Disk IOPS, in the case the vTU saves the transcoded content instead of live transcoding
• Latency – delay introduced by the transcoding process
• Frame loss
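The last two metrics can be derived from per-frame timestamps captured on both sides of the vTU. A measurement sketch with hypothetical frame records (the timestamp source and frame identifiers are assumptions for illustration):

```python
def vtu_frame_stats(sent, received):
    """sent/received: dicts mapping frame id -> timestamp (seconds),
    captured at the vTU input and output respectively.
    Returns (frame loss ratio, mean per-frame transcoding latency)."""
    latencies = [received[fid] - sent[fid] for fid in sent if fid in received]
    loss_ratio = (len(sent) - len(latencies)) / len(sent)
    mean_latency = sum(latencies) / len(latencies)
    return loss_ratio, mean_latency

sent = {1: 0.00, 2: 0.04, 3: 0.08, 4: 0.12}
received = {1: 0.03, 2: 0.07, 4: 0.16}        # frame 3 was dropped
loss, latency = vtu_frame_stats(sent, received)
print(loss, round(latency, 3))                # 0.25 0.033
```

Comparing these figures between the GPU-accelerated and the plain version of the vTU gives the intended head-to-head result.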
6. CONCLUSIONS
Deliverable D2.52 presented a revised plan for the validation/experimentation campaign of T-NOVA. The target of experimentation has been the entire integrated T-NOVA system as a whole, rather than individual components. Taking into account the challenges in NFV evaluation, a set of system- and service-level metrics were defined, as well as the experimentation procedures for the validation of each of the T-NOVA use cases. The testbeds already available at the partners' sites, as well as the pilots to be integrated, constitute an adequate foundation for the assessment and evaluation of the T-NOVA solution under diverse setups and configurations.
It can be deduced that the planning for the T-NOVA experimentation campaign to be carried out in the frame of WP7 is complete with regard to infrastructure allocation as well as methodology. It is, however, expected that some fine-tuning of this plan will be necessary during the roll-out of the tests. The actual sequence and description of the steps applied will be reflected in the final WP7 deliverables.
7. REFERENCES
[ALEVIN] VNREAL, "ALEVIN – ALgorithms for Embedding VIrtual Networks," May 2011. [Online]. Available: http://alevin.sf.net
[ALICANTED8.1] Alicante Project, "D8.1: Use Cases and Validation Methodology", http://www.ict-alicante.eu/validation/download/work-package/alicante_d8.1_v1.1.pdf
[BMWG] IETF Benchmarking Methodology Working Group (BMWG), on-line: https://datatracker.ietf.org/wg/bmwg/charter/
[DIETRICH13] David Dietrich, Amr Rizk, and Panagiotis Papadimitriou. 2013. AutoEmbed: automated multi-provider virtual network embedding. In Proceedings of the ACM SIGCOMM 2013 conference (SIGCOMM '13). ACM, New York, NY, USA, 465-466. DOI=10.1145/2486001.2491690, http://doi.acm.org/10.1145/2486001.2491690
[D2.1] T-NOVA Project, D2.1 "System Use Cases and Requirements", 15th June 2014, on-line: http://www.t-nova.eu/wp-content/uploads/2014/11/TNOVA_D2.1_Use_Cases_and_Requirements.pdf
[D2.32] Specification of the Infrastructure Virtualisation, Management and Orchestration – Final, on-line:
[D4.01] T-NOVA D4.01, Interim Report on Infrastructure Virtualisation and Management, on-line: http://www.t-nova.eu/wp-content/uploads/2015/01/TNOVA_D4.01_Interim-Report-on-Infrastructure-Virtualisation-and-Management_v1.0.pdf
[D4.1] T-NOVA D4.1, Deliverable: Resource Virtualization
[D-ITG] Distributed Internet Traffic Generator, on-line: http://traffic.comics.unina.it/software/ITG/
[ETSI-NFV-a] ETSI, "Virtual Network Functions Architecture - ETSI ISG, NFV-SWA001 V1.1.1", Dec 2014, <http://www.etsi.org/deliver/etsi_gs/NFV-SWA/001_099/001/01.01.01_60/gs_NFV-SWA001v010101p.pdf>
[ETSI-NFV-TST001] ETSI, Network Functions Virtualisation (NFV); Pre-deployment Testing; Report on Validation of NFV Environments and Services – ETSI ISG, NFV-TST001 v01.1, work in progress, <https://portal.etsi.org/webapp/workProgram/Report_WorkItem.asp?wki_id=46009>
[ETSI-NFV-TST002] ETSI, Network Functions Virtualisation (NFV); Testing Methodology; Report on Interoperability Testing Methodology, ETSI ISG, NFV-TST002 v0.1.1, work in progress, <https://portal.etsi.org/webapp/WorkProgram/Report_WorkItem.asp?WKI_ID=46043>
[NFVPERF] ETSI NFV ISG, NFV Performance & Portability Best Practices, v1.1.1, June 2014, on-line: http://www.etsi.org/deliver/etsi_gs/NFV-PER/001_099/001/01.01.01_60/gs_nfv-per001v010101p.pdf
[Hamilton07] M. Hamilton, S. Banks, "Benchmarking Methodology for Content-Aware Network Devices", draft-ietf-bmwg-ca-bench-meth-07.
[HTTPLOAD] http_load, on-line: http://www.acme.com/software/http_load/
[HTTPERF] httperf, on-line: https://github.com/httperf/httperf
[ID-ROSA15] R. Rosa, C. Rothenberg, R. Szabo, "VNF Benchmark-as-a-Service", Working Draft, Internet-Draft, draft-rorosz-nfvrg-vbaas-00.txt, Oct 19, 2015, IETF Secretariat. <https://www.ietf.org/id/draft-rorosz-nfvrg-vbaas-00.txt>
[IPERF] iperf, on-line: http://sourceforge.net/projects/iperf/
[IPPM] IETF IP Performance Metrics Working Group, on-line: http://datatracker.ietf.org/wg/ippm/charter/
[ITU-RBT50013] ITU-R Rec. BT.500-13, "Methodology for the subjective assessment of the quality of television pictures", 2012
[IXIA] IXIA Web Site, on-line: http://ixiacom.com
[IXIABRC] IXIA Breaking Point Score, on-line: http://www.ixiacom.com/sites/default/files/resources/datasheet/resiliency-score.pdf
[IXIAINF] IXIA Infrastructure Testing, on-line: http://www.ixiacom.com/solutions/infrastructure-testing/index.php
[MGEN] MGEN, on-line: http://cs.itd.nrl.navy.mil/work/mgen
[MULATIVO] M. Duelli, J. Ott, and T. Muller, "MuLaViTo – Multi-Layer Visualization Tool," Apr. 2011. [Online]. Available: http://mulavito.sf.net
[NETMAP] netmap, on-line: http://info.iet.unipi.it/~luigi/netmap/
[OFLOPS] Charalampos Rotsos, Nadi Sarrar, Steve Uhlig, Rob Sherwood, and Andrew W. Moore. 2012. OFLOPS: an open framework for OpenFlow switch evaluation. In Proceedings of the 13th international conference on Passive and Active Measurement (PAM '12), Nina Taft and Fabio Ricciato (Eds.). Springer-Verlag, Berlin, Heidelberg, 85-95. DOI=10.1007/978-3-642-28537-0_9, http://dx.doi.org/10.1007/978-3-642-28537-0_9
[OSTINATO] Ostinato, on-line: https://code.google.com/p/ostinato/
[PKTGEN] pktgen, on-line: http://www.linuxfoundation.org/collaborate/workgroups/networking/pktgen
[PKTGEN-DPDK] Pktgen-DPDK, on-line: https://github.com/Pktgen/Pktgen-DPDK/
[PF-RING] PF_RING, on-line: http://www.ntop.org/products/pf_ring/
[PINSON04] M. H. Pinson and S. Wolf, "A new standardized method for objectively measuring video quality," IEEE Transactions on Broadcasting, vol. 50, no. 3, pp. 312–322, September 2004
[RFC1944] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 1944, May 1996, <http://www.rfc-editor.org/info/rfc1944>
[RFC2498] Mahdavi, J. and V. Paxson, "IPPM Metrics for Measuring Connectivity", RFC 2498, January 1999, <http://www.rfc-editor.org/info/rfc2498>.
[RFC2647] Newman, D., "Benchmarking Terminology for Firewall Performance", RFC 2647, August 1999, <http://www.rfc-editor.org/info/rfc2647>.
[RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Delay Metric for IPPM", RFC 2679, September 1999, <http://www.rfc-editor.org/info/rfc2679>.
[RFC2680] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way Packet Loss Metric for IPPM", RFC 2680, September 1999, <http://www.rfc-editor.org/info/rfc2680>.
[RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip Delay Metric for IPPM", RFC 2681, September 1999, <http://www.rfc-editor.org/info/rfc2681>.
[RFC2889] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN Switching Devices", RFC 2889, August 2000, <http://www.rfc-editor.org/info/rfc2889>.
[RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin, "Benchmarking Methodology for Firewall Performance", RFC 3511, April 2003, <http://www.rfc-editor.org/info/rfc3511>.
[RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. Jacobson, "RTP: A Transport Protocol for Real-Time Applications", STD 64, RFC 3550, July 2003, <http://www.rfc-editor.org/info/rfc3550>.
[RFC6349] Constantine, B., Forget, G., Geib, R., and R. Schrage, "Framework for TCP Throughput Testing", RFC 6349, August 2011, <http://www.rfc-editor.org/info/rfc6349>.
[SEAGULL] Seagull, on-line: http://gull.sourceforge.net/
[SIPp] SIPp, on-line: http://sipp.sourceforge.net/
[SPIRENT] Spirent Web Page, on-line: http://www.spirent.com/
[TCPREP] TCPReplay, on-line: http://tcpreplay.appneta.com/
[WEBPOLY] Web Polygraph, on-line: http://www.web-polygraph.org/
LIST OF ACRONYMS
Acronym Explanation
AAA Authentication, Authorisation, and Accounting
API Application Programming Interface
CAPEX Capital Expenditure
CIP Cloud Infrastructure Provider
CSP Communication Service Provider
DASH Dynamic Adaptive Streaming over HTTP
DDNS Dynamic DNS
DDoS Distributed Denial of Service
DHCP Dynamic Host Configuration Protocol
DNS Domain Name System
DoS Denial of Service
DoW Description of Work
DPI Deep Packet Inspection
DPDK Data Plane Development Kit
DUT Device Under Test
E2E End-to-End
EU End User
FP Function Provider
GW Gateway
HG Home Gateway
HTTP Hypertext Transfer Protocol
IP Internet Protocol
IP Infrastructure Provider
ISG Industry Specification Group
ISP Internet Service Provider
IT Information Technology
KPI Key Performance Indicator
LAN Local Area Network
MANO MANagement and Orchestration
MVNO Mobile Virtual Network Operator
NAT Network Address Translation
NF Network Function
NFaaS Network Functions-as-a-Service
NFV Network Functions Virtualisation
NFVI Network Functions Virtualisation Infrastructure
NFVIaaS Network Functions Virtualisation Infrastructure-as-a-Service
NIP Network Infrastructure Provider
NS Network Service
OPEX Operational Expenditure
OSS/BSS Operational Support System / Business Support System
PaaS Platform-as-a-Service
PoC Proof of Concept
QoS Quality of Service
RTP Real-time Transport Protocol
SA Security Appliance
SaaS Software-as-a-Service
SBC Session Border Controller
SDN Software-Defined Networking
SDO Standards Development Organisation
SI Service Integrator
SIP Session Initiation Protocol
SLA Service Level Agreement
SME Small Medium Enterprise
SP Service Provider
TEM Telecommunication Equipment Manufacturers
TRL Technology Readiness Level
TSON Time Shared Optical Network
UC Use Case
UML Unified Modelling Language
vDPI Virtual Deep Packet Inspection
vHG Virtual Home Gateway
VM Virtual Machine
VNF Virtual Network Function
VNFaaS Virtual Network Function-as-a-Service
VNPaaS Virtual Network Platform-as-a-Service