
DATA CENTER NETWORK CONNECTIVITY WITH IBM SERVERS
Network infrastructure scenario designs and configurations

by Meiji Wang, Mohini Singh Dukes, George Rainovic, Jitender Miglani and Vijay Kamisetty


Juniper Networks Validated Solutions

Data Center Network Connectivity with IBM Servers

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Chapter 1: Introduction . . . . . . . . . . . . . . . . . . . . . . . 7

Chapter 2: Design Considerations . . . . . . . . . . . . . . . . . . 19

Chapter 3: Implementation Overview . . . . . . . . . . . . . . . . . 35

Chapter 4: Connecting IBM Servers in the Data Center Network . . . . 45

Chapter 5: Configuring Spanning Tree Protocols . . . . . . . . . . . 67

Chapter 6: Supporting Multicast Traffic . . . . . . . . . . . . . . . 83

Chapter 7: Understanding Network CoS and Latency . . . . . . . . . . 105

Chapter 8: Configuring High Availability . . . . . . . . . . . . . . 119

Appendix A: Configuring TCP/IP Networking in Servers . . . . . . . . 144

Appendix B: LAG Test Results . . . . . . . . . . . . . . . . . . . . 150

Appendix C: Acronyms . . . . . . . . . . . . . . . . . . . . . . . . 154

Appendix D: References . . . . . . . . . . . . . . . . . . . . . . . 158


© 2010 by Juniper Networks, Inc. All rights reserved.

Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. Junos-e is a trademark of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.

Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice. Products made or sold by Juniper Networks or components thereof might be covered by one or more of the following patents that are owned by or licensed to Juniper Networks: U.S. Patent Nos. 5,473,599, 5,905,725, 5,909,440, 6,192,051, 6,333,650, 6,359,479, 6,406,312, 6,429,706, 6,459,579, 6,493,347, 6,538,518, 6,538,899, 6,552,918, 6,567,902, 6,578,186, and 6,590,785.

Printed in the USA by Vervante Corporation.

Version History: v1 June 2010


Key Contributors

Chandra Shekhar Pandey is a Juniper Networks Director of Solutions Engineering. He is responsible for service provider, enterprise and OEM partners' solutions engineering and validation. Chandra has more than 18 years of networking experience, designing ASICs, architecting systems and designing solutions to address customers' challenges in the service provider, MSO and enterprise markets. He holds a bachelor's degree in Electronics Engineering from K.N.I.T, Sultanpur, India and an MBA in High Tech and Finance from Northeastern University, Boston, MA.

Louise Apichell is a Juniper Networks Senior Technical Writing Specialist in the Solutions Marketing Group. She assisted as a content developer, chief editor and project manager in organizing, writing and editing this book. Louise specializes in writing and editing all types of technical collateral, such as white papers, application notes, implementation guides, reference architectures and solution briefs.

Ravinder Singh is a Juniper Networks Director of Solution Architecture and Technical Marketing in the Solutions Marketing Group. He is responsible for creating technical knowledge bases and has significant experience working with sales engineers and channels to support Juniper's Cloud/Data Center Solutions for the enterprise, service providers and key OEM alliances. Prior to this role, Ravinder was responsible for Enterprise Solutions Architecture and Engineering, where his team delivered several enterprise solutions including Adaptive Threat Management, Distributed Enterprise and Juniper Simplified Data Center solutions. Ravinder holds bachelor's and master's degrees in Electronics and a master's degree in business in IT Management and Marketing.

Mike Barker is a Juniper Networks Technical Marketing Director, Solutions Engineering and Architectures. In this role, he focuses on developing architectures and validating multi-product solutions that create business value for enterprise and service provider customers. Prior to this role, Mike served in various consulting and systems engineering roles for federal, enterprise and service provider markets at Juniper Networks, Acorn Packet Solutions and Arbor Networks. Earlier in his career, Mike held network engineering positions at Cable & Wireless, Stanford Telecom and the USAF. Mr. Barker holds a Bachelor of Science degree in Business Management from Mount Olive College and an MBA from Mount St. Mary's University.

Karen Joice is a Juniper Networks Marketing Specialist who provided the technical illustrations for this book. Karen has been a graphic artist and marketing professional for more than 15 years, specializing in technical illustrations, Flash, and Web design, with expertise in print production.

You can purchase a printed copy of this book, or download a free PDF version, at: juniper.net/books.


About the Authors

Meiji Wang is a Juniper Networks Solutions Architect for data center applications and cloud computing. He specializes in application development, data center infrastructure optimization, cloud computing, Software as a Service (SaaS), and data center networking. He has authored three books focusing on databases, e-business web usage and, most recently, a data center network design Redbook written in partnership with the IBM team: IBM Redbooks | IBM j-type Data Center Networking Introduction.

Mohini Singh Dukes is a Juniper Networks Staff Solutions Design Engineer in the Solutions Engineering Group. She designs, implements and validates a wide range of solutions spanning mobile, Carrier Ethernet, data center interconnectivity and security, and business and residential services. Specializing in mobile networking solutions including backhaul, packet backbone and security, she has authored a number of white papers, application notes and implementation and design guides based on solution validation efforts. She has also published a series of blogs on energy-efficient networking.

George Rainovic is a Juniper Networks Solutions Staff Engineer. He specializes in designing technical solutions for data center networking and video CDN, and in testing IBM j-type Ethernet switches and routers. George has more than 15 years of networking and IT experience, designing, deploying and supporting networks for network service providers and business enterprise customers. He holds a bachelor's degree in Electrical Engineering from the University of Novi Sad, Serbia.

Jitender K. Miglani is a Juniper Networks Solutions Engineer for data center intra- and inter-connectivity solutions. As part of Juniper's OEM relationship with IBM, Jitender assists in qualifying Juniper's EX, MX and SRX Series platforms with IBM Open System platforms (Power P5/P6, BladeCenter and x3500). Jitender has development and engineering experience in various voice and data networking products, and with small/medium/large enterprise and carrier-grade customers. Jitender holds a bachelor's in Computer Science from the Regional Engineering College, Kurukshetra, India.

Vijay K. Kamisetty is a Juniper Networks Solutions Engineer. He specializes in technical solutions for IPTV-Multiplay, HD video conferencing, mobile backhaul, application-level security in the data center, development of managed services, and validation of adaptive clock recovery. He assists in qualifying Juniper EX and MX platforms with IBM Power P5 and x3500 platforms. He holds a bachelor's degree in Computer Science from JNTU Hyderabad, India.


Authors Acknowledgments

The authors would like to take this opportunity to thank Patrick Ames, whose direction and guidance was indispensable. To Nathan Alger, Lionel Ruggeri, and Zach Gibbs, who provided valuable technical feedback several times during the development of this booklet, your assistance was greatly appreciated. Thanks also to Cathy Gadecki for helping in the formative stages of the booklet. There are certainly others who helped in many different ways, and we thank you all.

And Special Thanks to our Reviewers...

Juniper Networks

Marc Bernstein

Venkata Achanta

Charles Goldberg

Scott Sneddon

John Bartlomiejczyk

Allen Kluender

Fraser Street

Robert Yee

Niraj Brahmbhatt

Paul Parker-Johnson

Travis O'Hare

Scott Robohn

Ting Zou

Krishnan Manjeri

IBM

Rakesh Sharma

Casimer DeCusatis


Preface

ENTERPRISES DEPEND MORE THAN EVER BEFORE on their data center infrastructure efficiency and business application performance to improve employee productivity, reduce operational costs and increase revenue. To achieve these objectives, virtualization, simplification and consolidation are three of the most crucial initiatives for the enterprise. These objectives not only demand high-performance server and network technologies, but also require smooth integration between the two to achieve optimal performance. Hence, successful integration of servers with a simplified networking infrastructure is pivotal.

This guide provides enterprise architects, sales engineers, IT developers, system administrators and other technical professionals guidance on how to design and implement a high-performance data center using Juniper Networks infrastructure and IBM Open Systems. With a step-by-step approach, readers can gain a thorough understanding of design considerations, recommended designs, technical details and sample configurations, exemplifying simplified data center network design. This approach is based on testing performed using Juniper Networks devices and IBM servers in Juniper Networks solution labs.

The IBM Open System Servers solution, including IBM Power Systems, System x, and BladeCenter systems, comprises the foundation for a dynamic infrastructure. IBM server platforms help consolidate applications and servers and virtualize system resources while improving overall performance, availability and energy efficiency, providing a more flexible, dynamic IT infrastructure.

Juniper Networks offers a unique best-in-class data center infrastructure solution based on open standards. It optimizes performance and enables consolidation, which in turn increases network scalability and resilience, simplifies operations, and streamlines management while lowering overall Total Cost of Ownership (TCO). The solution also automates network infrastructure management, making existing infrastructure easily adaptable and flexible, especially for third-party application deployment.

Key topics discussed in this book focus on the following routing and switching solutions in Juniper's simplified two-tier data center network architecture with IBM open systems:

• Best practices for integrating Juniper Networks EX and MX Series switches and routers with IBM Open Systems.

• Configuration details for various spanning tree protocols such as Spanning Tree Protocol (STP), Multiple Spanning Tree Protocol (MSTP), Rapid Spanning Tree Protocol (RSTP), and Virtual Spanning Tree Protocol (VSTP); deployment scenarios such as RSTP/MSTP and Virtual Spanning Tree Protocol/Per-VLAN Spanning Tree (VSTP/PVST) with Juniper EX and MX Series switches and routers connecting to IBM BladeCenter.

• Details for Layer 2 and Layer 3 multicast scenarios with Protocol Independent Multicast (PIM) and Internet Group Management Protocol (IGMP) snooping. Scenarios include a video streaming client running on IBM servers with PIM implemented on the network access and core/aggregation tiers, along with IGMP snooping at the access layer.

• Low-latency network design and techniques such as Class of Service (CoS) for improving data center network performance.

• Methods for increasing data center resiliency and high availability, including configuration details for Virtual Router Redundancy Protocol (VRRP), Redundant Trunk Group (RTG), Link Aggregation (LAG), Routing Engine redundancy, Virtual Chassis, Nonstop Bridging (NSB), Nonstop Routing (NSR), Graceful Restart (GR) and In-Service Software Upgrade (ISSU). A brief configuration taste follows this list.
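As a small taste of the configuration detail covered in the later chapters, the following is a minimal Junos-style sketch of a LAG with LACP fronted by a VRRP virtual gateway. The interface names, addresses and group numbers are hypothetical illustrations, not this book's validated configurations:

  set chassis aggregated-devices ethernet device-count 2
  set interfaces ge-0/0/0 ether-options 802.3ad ae0               # first member link (hypothetical port)
  set interfaces ge-0/0/1 ether-options 802.3ad ae0               # second member link
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 unit 0 family inet address 10.10.10.2/24 vrrp-group 1 virtual-address 10.10.10.1
  set interfaces ae0 unit 0 family inet address 10.10.10.2/24 vrrp-group 1 priority 200

A second switch or router would share the same virtual address with a lower priority, so that servers keep a single default gateway through a failover.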

Juniper Networks realizes that the scope of data center network design encompasses many facets, for example servers, storage and security. Therefore, to narrow the scope of this book, we have focused on network connectivity implementation details based on Juniper EX and MX Series switches and routers and IBM Open Systems. However, as new relevant technologies and best practices evolve, we will continue to revise this book to include additional topics.

Please make sure to send us your feedback with any new or relevant ideas that you would like to see in future revisions of this book, or in other Validated Solutions books, at: [email protected].


Chapter 1

Introduction


TODAY'S DATA CENTER ARCHITECTS and designers do not have the luxury of simply adding more and more devices to solve networking's constant and continuous demands such as higher bandwidth requirements, increased speed, rack space, tighter security, storage, interoperability among many types of devices and applications, and more and more diverse and remote users.

This chapter discusses in detail the data center trends and challenges now facing network designers. Juniper Networks and IBM directly address these trends and challenges with a data center solution that will improve data center efficiency by simplifying the network infrastructure, by reducing recurring maintenance and software costs, and by streamlining daily management and maintenance tasks.

Trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

IBM and Juniper Networks Data Center Solution . . . . . . . . . . . . 9

IBM and Juniper Networks . . . . . . . . . . . . . . . . . . . . . . 16


Trends

Although there are several types of data centers supporting a wide range of applications, such as financial services, web portals, content providers and IT back-office operations, they all share certain trends, such as:

More Data Than Ever Before

Since the dawn of the computer age, many companies have struggled to store their electronic records. That struggle can be greater than ever today, as regulatory requirements can force some companies to save even more records than before. The growth of the Internet may compound the problem; as businesses move online, they need to store enormous amounts of data such as customer account information and order histories. The total capacity of shipped storage systems is soaring by more than 50 percent a year, according to market researcher IDC. The only thing growing faster than the volume of data itself is the amount of data that must be transferred between data centers and users. Numerous large enterprises are consolidating their geographically distributed data centers into mega data centers to take advantage of cost benefits and economies of scale, increased reliability, and the latest virtualization technologies. According to research conducted by Nemertes, more than 50 percent of companies consolidated their dispersed data centers into fewer but larger data centers in the last 12 months, with even more planning to consolidate in the upcoming 12 months.

Server Growth

Servers are continuing to grow at a high annual rate of 11 percent, while storage is growing at an even higher rate of 22 percent; both are placing tremendous strain on the data center's power and cooling capacity. According to Gartner, OS and application instability is increasing server sprawl, with utilization rates of 20 percent leading to an increased adoption of server virtualization technologies.

Evolution of Cloud Services

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Large enterprises are adopting cloud-computing methodology in their mega data centers. Smaller businesses that cannot afford to keep up with the cost and complexity of maintaining their privately owned data centers may look to outsource those functions to cloud-hosting providers.

Challenges

Today's major data center challenges include scale and virtualization, complexity and cost, interconnectivity for business continuity, and security:

Scale and Virtualization

With the evolution of mega data centers and cloud-computing architectures, tremendous strain is being placed on current network architectures. Scaling networking and security functions can quickly become a limiting factor to the success of growing data centers as they strive to meet stringent performance and high-availability requirements. However, simply adding more equipment does not always satiate the appetite of hungry mega data centers. If the network and security architecture does not enable application workload mobility and quick responses to variable capacity requirements to support multi-tenancy within servers (as required in a cloud environment), then the full value of data center virtualization cannot be realized.

Complexity and Cost

Many data centers have become overly complex, inefficient and costly. Networking architectures have stagnated for over a decade, resulting in network device sprawl and increasingly chaotic network infrastructures designed largely to work around low-performance and low-density devices. The ensuing capital expenses, rack space, power consumption and management overhead all add to the overall cost, not to mention the environmental impact. Unfortunately, instead of containing costs and reallocating the savings into enhancing and accelerating business practices, the IT budget all too often is misappropriated into sustaining already unwieldy and rapidly growing data center operations. Emerging applications that use Service Oriented Architecture (SOA) and Web services are increasingly computational and network intensive; however, the network is not efficient. Gartner (2007) asserts that 50 percent of the Ethernet switch ports within the data center are used for switch interconnectivity.

Interconnectivity for Business Continuity

As data centers expand, they can easily outgrow a single location. When this occurs, enterprises may have to open new centers and transparently interconnect these locations so they can interoperate and appear as one large data center. Enterprises with geographically distributed data centers may want to "virtually" consolidate them into a single, logical data center in order to take advantage of the latest technology.

Security

The shared infrastructure in the data center or cloud should support multiple customers, each with multiple hosted applications; provide complete, granular and virtualized security that is easy to configure and understand; and support all major operating systems on a plethora of mobile and desktop devices. In addition, a shared infrastructure should integrate seamlessly with existing identity systems, check host posture before allowing access to the cloud, and make all of this accessible for thousands of users, while protecting against sophisticated application attacks, Distributed Denial of Service (DDoS) attacks and hackers.

Today, a data center infrastructure solution requires a dynamic infrastructure, a high-performance network and a comprehensive network management system.

IBM and Juniper Networks Data Center Solution

The IBM Servers solution, including IBM Power Systems, System x and BladeCenter systems, comprises the foundation for a dynamic infrastructure.


IBM Power System

The IBM Power™ Systems family of servers includes proven server platforms that help consolidate applications and servers, virtualize system resources while improving overall performance, availability and energy efficiency, and provide a more flexible, dynamic IT infrastructure. A Power server can run up to 254 independent servers, each with its own processor, memory and I/O resources, within a single physical Power server. Processor resources can be assigned at a granularity of 1/100th of a core.

IBM System x

The IBM System x3850 X5 server is the fifth generation of the Enterprise X-Architecture, delivering innovation with enhanced reliability and availability features to enable optimal performance for databases, enterprise applications and virtualized environments. According to a recent IBM Redbooks paper, a single IBM System x3850 X5 host server can support up to 384 virtual machines. For details, please refer to High density virtualization using the IBM System x3850 X5 at www.redbooks.ibm.com/technotes/tips0770.pdf.

IBM BladeCenter

The BladeCenter is built on IBM X-Architecture to run multiple business-critical applications with simplification, cost reduction and improved productivity. Compared to first-generation Xeon-based blade servers, IBM BladeCenter HS22 blade servers can help improve the economics of your data center with:

• Up to 11 times faster performance

• Up to 90 percent reduction in energy costs alone

• Up to 95 percent IT footprint reduction

• Up to 65 percent less in connectivity costs

• Up to 84 percent fewer cables

For detailed benefits concerning the IBM BladeCenter, please refer to www-03.ibm.com/systems/migratetoibm/systems/bladecenter/.

Juniper Network Products for a High Performance Network Infrastructure Solution

Juniper Networks data center infrastructure solutions provide operational simplicity, agility and efficiency, simplifying the network with the following key technologies:

• Virtual Chassis technology, combined with wire-rate 10-Gigabit Ethernet performance in the Juniper Networks EX Series Ethernet Switches, reduces the number of networking devices and interconnections. This effectively eliminates the need for an aggregation tier, contributing to a significant reduction of capital equipment and network operational costs, improved application performance, and faster time to deploy new servers and applications.

• Dynamic Services Architecture in the Juniper Networks SRX Series Services Gateways consolidates security appliances with distinct functions into a highly integrated, multifunction platform that results in simpler network designs, improved application performance, and a reduction of space, power and cooling requirements.


• Network virtualization with MPLS in the Juniper Networks MX Series 3D Universal Edge Routers and the Juniper Networks M Series Multiservice Edge Routers enables network segmentation across data centers and to remote offices for applications and departments without the need to build separate or overlay networks.

• The Juniper Networks Junos® operating system operates across the network infrastructure, providing one operating system, enhanced through a single release train, and developed upon a common modular architecture, giving enterprises a "1-1-1" advantage.

• J-Care Technical Services provide automated incident management and proactive analysis assistance through the Advanced Insight Solutions technology resident in Junos OS.

MX Series 3D Universal Edge Routers

The Juniper Networks MX Series 3D Universal Edge Routers are a family of high-performance Ethernet routers with powerful switching features designed for enterprise and service provider networks. The MX Series provides unmatched flexibility and reliability to support advanced services and applications, and addresses a wide range of deployments, architectures, port densities and interfaces. High-performance enterprise networks typically deploy MX Series routers in high-density Ethernet LAN and data center aggregation, and in the data center core.

The MX Series provides carrier-grade reliability, density, performance, capacity and scale for enterprise networks with mission-critical applications. High availability features such as nonstop routing (NSR), fast reroute, and unified in-service software upgrade (ISSU) ensure that the network is always up and running. The MX Series delivers significant operational efficiencies enabled by Junos OS, and supports a collapsed architecture requiring less power, cooling and space. The MX Series also provides open APIs for easily customized applications and services.

The MX Series enables enterprise networks to profit from the tremendous growth of Ethernet transport with the confidence that the platforms they install now will have the performance and service flexibility to meet the challenges of their evolving requirements.

The MX Series 3D Universal Edge Routers include the MX80 and MX80-48T, MX240, MX480 and MX960. Their common key features include:

• 256K multicast groups

• 1M MAC addresses and IPv4 routes

• 6K L3VPN and 4K VPLS instances

• Broadband services router

• IPsec

• Session border controller

• Video quality monitoring

As a member of the MX Series, the MX960 is a high-density Layer 2 and Layer 3 Ethernet platform with up to 2.6 Tbps of switching and routing capacity, and offers the industry's first 16-port 10GbE card. It is optimized for emerging Ethernet network architectures and services that require high availability, advanced QoS, and the performance and scalability to support mission-critical networks. The MX960 platform is ideal where SCB and Routing Engine redundancy are required. All major components are field-replaceable, increasing system serviceability and reliability and decreasing mean time to repair (MTTR). Enterprise customers typically deploy the MX960 or MX480 in their data center core.

NOTE We deployed the MX480 in this handbook. However, the configurations and discussions pertaining to the MX480 also apply to the entire MX product line.

EX Series Ethernet Switches

Among the EX Series Ethernet Switches, the EX4200 Ethernet switches with Virtual Chassis technology and the EX8200 modular chassis switches are commonly deployed in the enterprise data center. We used the EX4200 and EX8200 for most of our deployment scenarios.

EX4200 Ethernet Switches with Virtual Chassis Technology

The EX4200 line of Ethernet switches with Virtual Chassis technology combines the HA and carrier-class reliability of modular systems with the economics and flexibility of stackable platforms, delivering a high-performance, scalable solution for data center, campus, and branch office environments.

The EX4200 Ethernet switches with Virtual Chassis technology have the following major features:

• Deliver the high availability, performance and manageability of chassis-based switches in a compact, power-efficient form factor.

• Offer the same connectivity, Power over Ethernet (PoE) and Junos OS options as the EX3200 switches, with an additional 24-port fiber-based platform for Gigabit aggregation deployments.

• Enable up to 10 EX4200 switches to be interconnected with Virtual Chassis technology as a single logical device supporting up to 480 ports (see the preprovisioning sketch after this list).

• Provide redundant, hot-swappable, load-sharing power supplies that reduce mean time to repair (MTTR), while Graceful Routing Engine Switchover (GRES) ensures hitless forwarding in the unlikely event of a switch failure.

• Run the same modular fault-tolerant Junos OS as other EX Series switches and all Juniper routers.
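As an illustration of the Virtual Chassis feature, here is a minimal, hypothetical sketch of preprovisioning a three-member EX4200 Virtual Chassis. The serial numbers are placeholders and the role assignments are one reasonable choice, not this book's validated configuration:

  set virtual-chassis preprovisioned
  set virtual-chassis member 0 serial-number BP0208AA0001 role routing-engine   # placeholder serial numbers
  set virtual-chassis member 1 serial-number BP0208AA0002 role routing-engine
  set virtual-chassis member 2 serial-number BP0208AA0003 role line-card

With preprovisioning, only the listed members can join the Virtual Chassis, and the two routing-engine members are elected master and backup.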

EX8200 Modular Chassis Switches

The EX8200 modular chassis switches have the following major features:

• High-performance 8-slot (EX8208) and 16-slot (EX8216) switches support data center and campus LAN core and aggregation layer deployments.

• Scalable switch fabric delivers up to 320 Gbps per slot.

• 48-port 10/100/1000BASE-T and 100BASE-FX/1000BASE-X line cards support up to 384 (EX8208) or 768 (EX8216) GbE ports per chassis.

• 8-port 10GBASE-X line cards with SFP+ interfaces deliver up to 64 (EX8208) or 128 (EX8216) 10GbE ports per chassis.

• Carrier-class architecture includes redundant internal Routing Engines, switch fabrics, and power and cooling, all ensuring uninterrupted forwarding and maximum availability.

• Run the same modular fault-tolerant Junos OS as other EX Series switches and all Juniper routers.

Juniper Networks high-performance data center network architecture reduces cost and complexity by requiring fewer tiers of switching, consolidating security services, and providing a common operating system and one extensible model for network management. As shown in Figure 1.1, the Junos OS runs on many data center network switching, routing and security platforms, including the Juniper Networks EX Series, MX Series and SRX Series, as well as the IBM j-type data center network products for which Juniper Networks is the original equipment manufacturer (OEM) of the EX and MX Series. For details concerning product mapping between IBM and Juniper Networks products, see Table 1.1 at the end of this chapter or visit the website, IBM and Junos in the Data Center: A Partnership Made for Now, at https://simplifymydatacenter.com/ibm.

Figure 1.1 Junos Operating System Runs on the Entire Data Center Network: Security, Routers, and Switching Platforms

[The figure groups the platforms by function: security (SRX100, SRX210, SRX240, SRX650, SRX3000 line, SRX5000 line), routers (J Series, M Series, MX Series, T Series), switches (EX2200, EX3200 and EX4200 lines, EX8208, EX8216), and management (NSM, NSMXpress, Junos Space, Junos Pulse), all running Junos.]


Figure 1.2 Data Center and Cloud Architecture (spans pages 14-15)

[The figure shows remote/cloud users, teleworkers (SRX100 with SSL VPN), small and large branches, and headquarters connecting over the WAN via SSL VPN, IPsec VPN and MPLS/VPLS to a public cloud and two enterprise-owned data centers. Each data center combines EX4200 Virtual Chassis access switches, EX8208/EX8216 and MX240/MX480/MX960 core platforms, SRX5600/SRX5800 security gateways, IC6500 Unified Access Control, SA6500, SBR Appliance, WXC3400 WAN acceleration, and NSM/Junos Space and STRM Series management, connecting IBM System z, System p, System x and BladeCenter servers with FC SAN, iSCSI and NAS storage (NFS/CIFS file systems), managed by Network Manager, NetView, Provisioning Manager, Netcool, Access Manager and Federated Identity Manager.]




IBM and Juniper Networks Data Center and Cloud Architecture

As shown in Figure 1.2 (divided into two sections on pages 14-15), the sample data center and cloud architecture deploys IBM servers, IBM software and Juniper Networks data center network products. Juniper is the OEM for IBM j-type e-series switches and m-series routers (EX Series and MX Series). For details concerning product mapping between IBM and Juniper Networks products, see Table 1.1.

IBM Tivoli and Juniper Networks Junos Space for Comprehensive Network Management Solution

Managing the data center network often requires many tools from different vendors, as the typical network infrastructure is often a complex meshed deployment that combines different network topologies and often includes devices from multiple vendors and network technologies for delivery. IBM Tivoli products and Juniper Networks Junos Space together can manage data center networks effectively and comprehensively. The tools include:

• IBM Systems Director

• Tivoli Netcool/OMNIbus

• IBM Tivoli Provisioning Manager

• Junos Space Network Application Platform

• Juniper Networks Junos Space Ethernet Activator

• Juniper Networks Junos Space Security Designer

• Juniper Networks Junos Space Route Insight Manager

• Juniper Networks Junos Space Service Now

MORE For the latest IBM and Juniper Networks data center solution, visit http://www.juniper.net/us/en/company/partners/global/ibm/#dynamic.

IBM and Juniper Networks

The collaboration between IBM and Juniper Networks began a decade ago. In November of 1997, IBM provided custom Application Specific Integrated Circuits (ASICs) for Juniper Networks' new class of Internet backbone devices as part of a strategic technology relationship between the two companies.

Since 2007, the two companies have been working together on joint technology solutions, standards development, and network management and managed security services. IBM specifically included Juniper Networks switching, routing and security products in its data center network portfolio, with IBM playing an invaluable role as systems integrator.

Most recently, the two companies jointly collaborated on a global technology demonstration highlighting how enterprises can seamlessly extend their private data center clouds. The demonstration between Silicon Valley and Shanghai showed a use case where customers could take advantage of remote servers in a secure public cloud to ensure that high-priority applications are given preference over lower-priority ones when computing resources become constrained. IBM and Juniper are installing these advanced networking capabilities into IBM's nine worldwide Cloud Labs for customer engagements. Once installed, the capabilities will let IBM and Juniper seamlessly move client-computing workloads between private and publicly managed cloud environments, enabling customers to deliver reliably on service-level agreements (SLAs).

In July of 2009, Juniper and IBM continued to broaden their strategic relationship by entering into an OEM agreement that enables IBM to provide Juniper's Ethernet networking products and support within IBM's data center portfolio. The addition of Juniper's products to IBM's data center networking portfolio provides customers with a best-in-class networking solution and accelerates the shared vision of both companies for advancing the economics of networking and the data center by reducing costs, improving services and managing risk.

IBM j-type Data Center Products and Juniper Networks Products Cross Reference

The IBM j-type e-series Ethernet switches and m-series Ethernet routers use Juniper Networks technology. Table 1.1 shows the mapping of IBM switches and routers to their corresponding Juniper Networks models. For further product information, please visit the website, IBM and Junos in the Data Center: A Partnership Made For Now, at https://simplifymydatacenter.com/ibm.

Table 1.1 Mapping of IBM j-type Data Center Network Products to Juniper Networks Products

IBM Description                                IBM Machine Type and Model    Juniper Networks Model

IBM j-type e-series Ethernet Switch J48E       4273-E48                      EX4200
IBM j-type e-series Ethernet Switch J08E       4274-E08                      EX8208
IBM j-type e-series Ethernet Switch J16E       4274-E16                      EX8216
IBM j-type m-series Ethernet Router J02M       4274-M02                      MX240
IBM j-type m-series Ethernet Router J06M       4274-M06                      MX480
IBM j-type m-series Ethernet Router J11M       4274-M11                      MX960
IBM j-type s-series Ethernet Appliance J34S    4274-S34                      SRX3400
IBM j-type s-series Ethernet Appliance J36S    4274-S36                      SRX3600
IBM j-type s-series Ethernet Appliance J56S    4274-S56                      SRX5600
IBM j-type s-series Ethernet Appliance J58S    4274-S58                      SRX5800


Chapter 2

Design Considerations


Network Reference Architecture . . . . . . . . . . . . . . . . . . . 20

Design Considerations . . . . . . . . . . . . . . . . . . . . . . . . 24

Two-Tier Network Deployment . . . . . . . . . . . . . . . . . . . . . 30

THIS CHAPTER FOCUSES ON Juniper Networks data center network reference architecture. It presents technical considerations for designing a modern-day data center network that must support consolidated and centralized server and storage infrastructure, as well as enterprise applications.


Network Reference Architecture

The data center network is realigning itself to meet new global demands by providing better efficiency, higher performance and new capabilities. Today's data center network can:

• Maximize efficiency gains from technologies such as server virtualization.

• Provide required components with improved capabilities: security, performance acceleration, high-density and resilient switching, and high-performance routing.

• Use virtualization capabilities such as MPLS and virtual private LAN service (VPLS) to enable a flexible, high-performance data center backbone network between data centers.

The evolving network demands a new network reference architecture that can sustain application performance, meet the demands of customer growth, reinforce security compliance, reduce operational costs and adopt innovative technologies.

As shown in Figure 2.1, Juniper Networks data center network reference architecture consists of the following four tiers:

• Edge Services Tier – provides all WAN services at the edge of the data center network; connects to the WAN services in other locations, including other data centers, campuses, headquarters, branches, carrier service providers, managed service providers and even cloud service providers.

• Core Network Tier – acts as the data center network backbone, which interconnects the other tiers within the data center and can connect to the core network tier in other data centers.

• Network Services Tier – provides centralized network security and application services, including firewall, Intrusion Detection and Prevention (IDP) and server load balancing.

• Applications and Data Services Tier – connects mainly servers and storage in the LAN environment and acts as an uplink to the core network tier.

The subsequent sections in this chapter explain each network tier in detail.


Figure 2.1 Juniper Networks Data Center Network Reference Architecture

[The figure shows the four tiers: edge services (M Series or SRX Series Internet access and VPN termination gateways, WX Series/WXC Series WAN acceleration, SA Series secure access, Intrusion Detection and Prevention) facing the private WAN and Internet; a core network of core aggregation routers (MX Series, EX8200); a network services tier with SRX Series core firewalls and server security gateways and IDP Series platforms; and an applications and data services tier of EX4200 switches connecting the infrastructure, business application, internal application and IP storage networks.]


Edge Services Tier

The edge services tier is responsible for all connectivity and network-level security aspects of connecting the data center to the outside world, including other data centers, campuses, headquarters, branches, carrier service providers, managed service providers, or even cloud service providers. Typically, routers and firewall/VPNs reside in this tier. It is likely that the data center connects to various leased lines connecting to partners, branch offices and the Internet. When connecting all of these networks, it is important to plan for the following (a NAT sketch follows this list):

• Internet routing isolation, for example separating the exterior routing protocols from the interior routing protocols.

• Network Address Translation (NAT) to convert your private IP addresses to public Internet-routable IP addresses.

• IPsec VPN tunnel termination for partner, branch and employee connections.

• Border security to enforce stateful firewall policies and content inspection.

• Quality of Service (QoS).
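As a hedged illustration of the NAT and border security items above, the following Junos-style sketch applies interface-based source NAT between hypothetical trust and untrust zones on an SRX platform. The zone names, interfaces and addresses are assumptions for illustration, not configurations validated in this book:

  set security zones security-zone trust interfaces ge-0/0/1.0       # data-center-facing side (hypothetical)
  set security zones security-zone untrust interfaces ge-0/0/0.0     # Internet-facing side
  set security nat source rule-set inside-out from zone trust
  set security nat source rule-set inside-out to zone untrust
  set security nat source rule-set inside-out rule nat-all match source-address 10.0.0.0/8
  set security nat source rule-set inside-out rule nat-all then source-nat interface
  set security policies from-zone trust to-zone untrust policy allow-out match source-address any destination-address any application any
  set security policies from-zone trust to-zone untrust policy allow-out then permit

The stateful security policy controls which traffic may leave, while the NAT rule translates private addresses to the egress interface address.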

Core Network Tier

The core network acts as the backbone of the data center network; it interconnects the other tiers within the data center and can connect to the core network tier in other data centers as well. It connects the network services tier and aggregates uplink connections for the applications and data services tier. This tier consolidates the functionality of the core and aggregation tiers in a traditional three-tier network architecture, thereby significantly reducing the number of devices.

Combining the traditional three-tier core and aggregation tiers into a single consolidated core provides other benefits, such as:

• Significant power savings

• Reduced facilities system footprint

• Simplified device management

• Tighter security control

• Reduced number of system failure points

Network Services Tier

The network services tier provides centralized network security and application services, including firewall, IDP, server load balancing, SSL offload, HTTP cache, TCP multiplexing, and global server load balancing (GSLB). This tier typically connects directly to the core network tier, resulting in low latency and high throughput.

This tier is responsible for handling service policies for any network, server and/or application. Because network service is centralized, it must provide service to all servers and applications within the data center; it should apply a network-specific policy to a particular network, or apply an application-specific policy to the set of servers associated with particular applications. For example, a security service such as traffic SYN checking/sequence number checking must apply to any server that is exposed to public networks.

The network services tier requires:

• High-performance devices, for example high-performance firewalls to process traffic associated with large numbers of endpoints, such as networks, servers and applications.

• Virtualization capabilities, such as virtual instances to secure many simultaneous logical services.

Applications and Data Services Tier

The applications and data services tier (also known as the access tier) is primarily responsible for connecting servers and storage in the LAN environment and acts as an uplink to the core network tier. It includes the access tier in a data network and storage network. This tier supports interoperability with server connections and high-throughput network interconnections. When the number of servers increases, the network topology remains agile and can scale seamlessly. Based on different business objectives and IT requirements, the applications and data services tier can have many networks (a VLAN sketch follows this list), including:

• External applications networks, which can include multiple external networks that serve separate network segments. These typically include applications such as the public Web, public mail transfer agent (MTA), Domain Name System (DNS) services, remote access, and potential file services that are available through unfiltered access.

• Internal applications networks, which can include multiple internal networks serving different levels of internal access from campus or branch locations. These networks typically connect internal applications such as finance and human resources systems. Also residing in the internal network are partner applications and/or any specific applications that are exposed to partners, such as inventory systems and manufacturing information.

• Infrastructure services networks, which provide secure infrastructure network connections between servers and their supporting infrastructure services, such as Lightweight Directory Access Protocol (LDAP), databases, file sharing, content management and middleware servers. Out-of-band management is also a part of this network.

• Storage networks, which provide remote storage to servers using different standards, such as Fibre Channel, InfiniBand or Internet Small Computer System Interface (iSCSI). Many mission-critical application servers typically use a Host Bus Adapter (HBA) to connect to a remote storage system, ensuring fast access to data. However, large numbers of servers use iSCSI to access remote storage systems over the TCP/IP network for simplicity and cost efficiency.
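To make the segmentation above concrete, here is a minimal, assumed EX-style VLAN sketch that places access ports into separate VLANs for the external, internal and infrastructure networks. The VLAN names, IDs and port assignments are hypothetical:

  set vlans external-apps vlan-id 110
  set vlans internal-apps vlan-id 120
  set vlans infrastructure vlan-id 130
  set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access
  set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members external-apps
  set interfaces ge-0/0/20 unit 0 family ethernet-switching port-mode access
  set interfaces ge-0/0/20 unit 0 family ethernet-switching vlan members internal-apps
  set interfaces ge-0/0/30 unit 0 family ethernet-switching port-mode access
  set interfaces ge-0/0/30 unit 0 family ethernet-switching vlan members infrastructure

Inter-VLAN traffic would then be routed, and filtered, at the core tier rather than switched directly at the access layer.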


Design Considerations

The following key design considerations are critical attributes for designing today's data center network architecture:

• High availability and disaster recovery

• Security

• Simplicity

• Performance

• Innovation

NOTE The design considerations discussed in this handbook are not necessarily specific to Juniper Networks solutions and can be applied universally to any data center network design, regardless of vendor selection.

High Availability and Disaster Recovery

From the perspective of a data center network designer, high availability and disaster recovery are key requirements and must be considered not only in light of what is happening within the data center, but also across multiple data centers. Network high availability should be deployed using a combination of link redundancy (both external and internal connectivity) and critical device redundancy to ensure network operations and business continuity. In addition, using site redundancy (multiple data centers) is critical to meeting disaster recovery and regulatory compliance objectives. Moreover, devices and systems deployed within the confines of the data center should support component-level high availability, such as redundant power supplies, fans and Routing Engines. Another important consideration is the software/firmware running on these devices, which should be based on a modular architecture that provides features such as ISSU (as in the MX Series) to prevent software failures and upgrade events from impacting the entire device. Software upgrades should only impact a particular module, thereby ensuring system availability (a brief configuration sketch follows).
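As a sketch of the component-level and software high availability described above, the following Junos-style statements enable Routing Engine redundancy with graceful switchover, nonstop routing and bridging, and synchronized commits on a dual Routing Engine platform. This is an illustrative minimum, not a configuration validated in this book:

  set chassis redundancy graceful-switchover      # GRES: the backup Routing Engine takes over without restarting forwarding
  set routing-options nonstop-routing             # NSR: preserve routing protocol state through a switchover
  set protocols layer2-control nonstop-bridging   # NSB: preserve Layer 2 protocol state
  set system commit synchronize                   # keep both Routing Engines' configurations in sync

Chapter 8: Configuring High Availability covers these features, along with VRRP, RTG and LAG, in detail.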

Security

The critical resources in any enterprise location are typically the applications themselves, and the servers and supporting systems such as storage and databases. Financial, human resources and manufacturing applications, with their supporting data, typically represent a company's most critical assets and, if compromised, can create a potential disaster for even the most stable enterprise. The core network security layers must protect these business-critical resources from unauthorized user access and attacks, including application-level attacks.

The security design must employ layers of protection from the network edge through the core to the various endpoints (defense in depth). A layered security solution protects critical network resources that reside on the network: if one layer fails, the next layer will stop the attack and/or limit the damage that can occur. This level of security allows IT departments to apply the appropriate level of resource protection to the various network entry points based upon their different security, performance and management requirements.


Layers of security that should be deployed at the data center include the following (a rate-limiting sketch follows this list):

• DoS protection at the edge

• Firewalls to tightly control who and what gets in and out of the network

• VPN to provide secure remote access

• Intrusion Prevention System (IPS) solutions to prevent a more generic set of application-layer attacks
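As one hedged illustration of the first layer, DoS protection at the edge, the following Junos-style firewall filter rate-limits ICMP with a policer on an edge interface. The filter name, interface and limits are hypothetical values for illustration:

  set firewall policer limit-icmp if-exceeding bandwidth-limit 1m
  set firewall policer limit-icmp if-exceeding burst-size-limit 64k
  set firewall policer limit-icmp then discard                      # drop ICMP beyond the limit
  set firewall family inet filter edge-protect term icmp from protocol icmp
  set firewall family inet filter edge-protect term icmp then policer limit-icmp
  set firewall family inet filter edge-protect term icmp then accept
  set firewall family inet filter edge-protect term everything-else then accept
  set interfaces ge-0/0/0 unit 0 family inet filter input edge-protect

The stateful firewall, VPN and IPS layers would then be enforced on the SRX Series platforms in the edge and network services tiers.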

Further, application-layer firewalls and gateways also play a key role in protecting specific application traffic such as XML.

For further details, refer to the National Institute of Standards and Technology (NIST) recommended best practices, as described in the Guide to General Server Security at http://csrc.nist.gov/publications/nistpubs/800-123/SP800-123.pdf.

Policy-based networking is a powerful concept that enables devices in the network to be managed efficiently, especially within virtualized configurations, and can provide granular levels of network access control. The policy and control capabilities should allow organizations to centralize policy management while offering distributed enforcement at the same time. The network policy and control solution should provide appropriate levels of access control, policy creation, and management and network and service management, ensuring secure and reliable networks for all applications. In addition, the data center network infrastructure should integrate easily into a customer's existing management frameworks and third-party tools, such as Tivoli, and provide best-in-class centralized management, monitoring and reporting services for network services and the infrastructure.

Simplicity

Simplicity can be achieved by adopting new architectural designs, new technologies, and network operating systems.

The two-tier network architecture is a new design that allows network administrators to simplify the data center infrastructure. Traditionally, data center networks were constructed using a three-tier design approach, resulting in access, aggregation and core layers. A large number of devices must be deployed, configured and managed within each of these tiers, increasing cost and complexity.

This is primarily because of scalability requirements, performance limitations and key feature deficiencies in traditional switches and routers. Juniper Networks products support a data center network design that requires fewer devices, interconnections and network tiers. Moreover, the design also enables the following key benefits:

• Reduced latency due to fewer device hops

• Simplified device management

• Significant power, cooling and space savings

• Fewer system failure points


Figure 2.2 shows data center network design trends, from a traditional data center network, to a network consisting of a virtualized access tier and collapsed aggregation and core tiers, to a network with improved network virtualization on the WAN.

Figure 2.2 Data Center Network Design Trends

[The figure contrasts three designs: a traditional data center network design with three tiers (access, aggregation, core), multiple L2/L3 switches at aggregation and multiple L2 access switches to be managed; a virtualized access, consolidated core/aggregation design in which up to 10 EX4200 Ethernet Switches can be managed as a single device with Virtual Chassis technology and high-performance EX8208 and EX8216 Ethernet Switches collapse the core and aggregation layers; and an integrated WAN design with an MPLS-capable core and WAN interfaces available on MX240, MX480 and MX960 Ethernet Routers, fronted by SRX5600/SRX5800 platforms.]

Converged I/O technology is a new technology that simplifies the data center infrastructure by supporting flexible storage and data access on the same network interfaces on the server side, and by consolidating storage area networks (SANs) and LANs into a single logical infrastructure. This simplification and consolidation makes it possible to dynamically allocate any resource, including routing, switching, security services, storage systems, appliances and servers, without compromising performance.

Keeping in mind that network devices are complex, designing an efficient hardware platform is not, by itself, sufficient to achieve an effective, cost-efficient and operationally tenable product. Software in the control plane plays a critical role in the development of features and in ensuring device usability. Because Junos is a proven modular software network operating system that runs across different platforms, implementing Junos is one of the best approaches to simplifying the daily operations of the data center network.

In a recent study titled The Total Economic Impact™ of Juniper Networks Junos Network Operating System, Forrester Consulting reported a 41 percent reduction in overall network operational costs, based on dollar savings across specific task categories including planned events, reduction in the frequency and duration of unplanned network events, the sum of planned and unplanned events, the time needed to resolve unplanned network events, and the "adding infrastructure" task.

As the foundation of any high-performance network, Junos exhibits the following key attributes, as illustrated in Figure 2.3:

• One operating system, with a single source base and a single consistent feature implementation.

• One software release train, extended through a highly disciplined and firmly scheduled development process.

• One common modular software architecture that stretches across many different Junos hardware platforms, including the MX Series, EX Series, and SRX Series.

Figure 2.3 Junos: A 1-1-1 Advantage

Performance

To address performance requirements related to server virtualization, centralization and data center consolidation, the data center network should boost the performance of all application traffic, whether local or remote. The data center should offer LAN-like user experience levels for all enterprise users irrespective of their physical location. To accomplish this, the data center network should optimize application, server, storage and network performance.



WAN optimization techniques that include data compression, TCP and application protocol acceleration, bandwidth allocation, and traffic prioritization improve the performance of network traffic. In addition, these techniques can be applied to data replication, and to backup and restoration between data centers and remote sites, including disaster recovery sites.

Within the data center, Application Front Ends (AFEs) and load balancing solutions boost the performance of both client-server and Web-based applications, as well as speeding Web page downloads. In addition, designers must consider offloading CPU-intensive functions, such as TCP connection processing and HTTP compression, from backend application and Web servers.

Beyond application acceleration, critical infrastructure components such as routers, switches, firewalls, remote access platforms and other security devices should be built on a non-blocking modular architecture, so that they have the performance characteristics necessary to handle the higher volumes of mixed traffic types associated with centralization and consolidation. Designers also should account for remote users.

Juniper Networks' innovative silicon chipset and virtualization technologies deliver a unique high-performance data center solution.

• Junos Trio represents Juniper's fourth generation of purpose-built silicon and is the industry's first "network instruction set", a new silicon architecture unlike traditional ASICs and network processing units (NPUs). The new architecture leverages customized "network instructions" that are designed into silicon to maximize performance and functionality, while working closely with Junos software to ensure programmability of network resources. The new Junos One family thus combines the performance benefits of ASICs and the flexibility of network processors to break the standard trade-offs between the two.

Built in 65-nanometer technology, Junos Trio includes four chips with a total of 1.5 billion transistors and 320 simultaneous processes, yielding total router throughput of up to 2.6 terabits per second and up to 2.3 million subscribers per rack, far exceeding the performance and scale possible through off-the-shelf silicon. Junos Trio includes advanced forwarding, queuing, scheduling, synchronization and end-to-end resiliency features, helping customers provide service-level guarantees for voice, video and data delivery. Junos Trio also incorporates significant power efficiency features to enable more environmentally conscious data center and service provider networks.

The Junos Trio chipset with revolutionary 3D Scaling technology enables networks to scale dynamically for more bandwidth, subscribers and services, all at the same time without compromise. Junos Trio also yields breakthroughs for delivering rich business, residential and mobile services at massive scale, all while using half as much power per gigabit. The new chipset includes more than 30 patent-pending innovations in silicon architecture, packet processing, QoS and energy efficiency.

• The Juniper Networks data center network architecture employs a mix of virtualization technologies, such as Virtual Chassis technology with VLANs and MPLS-based advanced traffic engineering, VPN-enhanced security, QoS, VPLS, and other virtualization services. These virtualization technologies address many of the challenges introduced by server, storage and application virtualization. For example, Virtual Chassis supports low-latency server live migration from server to server in completely different racks within a data center, and from server to server between data centers in a flat Layer 2 network, when these data centers are within reasonably close proximity. Virtual Chassis with MPLS allows the Layer 2 domain to extend across data centers to support live migration from server to server when data centers are distributed over significant distances.

Juniper Networks virtualization technologies support the low latency, throughput, QoS and high availability required by server and storage virtualization. MPLS-based virtualization addresses these requirements with advanced traffic engineering to provide bandwidth guarantees, label switching and intelligent path selection for optimized low latency, and fast reroute for extremely high availability across the WAN. MPLS-based VPNs enhance security, with QoS to efficiently meet application and user performance needs.

These virtualization technologies serve to improve efficiencies and performance with greater agility while simplifying operations. For example, acquisitions and new networks can be folded quickly into the existing MPLS-based infrastructure without reconfiguring the network to avoid IP address conflicts. This approach creates a highly flexible and efficient data center WAN.

Innovation

Innovation, for example green initiatives, influences data center design. A green data center is a repository for the storage, management and dissemination of data in which the mechanical, lighting, electrical and computer systems provide maximum energy efficiency with minimum environmental impact. As older data center facilities are upgraded and newer data centers are built, it is important to ensure that the data center network infrastructure is highly energy and space efficient.

Network designers should consider power, space and cooling requirements for all network components, and they should compare different architectures and systems so that they can ascertain the environmental and cost impacts across the entire data center. In some environments, it might be more efficient to implement high-end, highly scalable systems that can replace a large number of smaller components, thereby promoting energy and space efficiency.

Greeninitiativesthattrackresourceusage,carbonemissionsandefficientutilizationofresources,suchaspowerandcoolingareimportantfactorswhendesigningadatacenter.AmongthemanyJuniperenergyefficiencydevices,theMX960ispresentedinTable2.1todemonstrateitseffectsonreductionsinenergyconsumptionandfootprintwithinthedatacenter.

Table 2.1 Juniper Networks MX960 Power Efficiency Analysis

Characteristics                      Juniper Networks Core MX960 (2x Chassis)
Line-rate 10GigE (ports)             96
Throughput per chassis (Mpps)        720
Output current (Amps)                187.84
Output power (Watts)                 9020.00
Heat dissipation (BTU/Hr)            36074.33
Chassis required (rack space)        2 chassis
Rack space (racks)                   2/3rds of a single rack


Two-Tier Network Deployment

In this handbook, we deploy a two-tier network architecture in a data center network, as shown in Figure 2.4. These two tiers consist of the core network and access network tiers. These two tiers are associated with the data services and applications tier and core network tier, which define the Juniper Networks data center reference architecture. For further details concerning Juniper's data center reference architecture, refer to the Enterprise Data Center Network Reference Architecture – Using a High Performance Network Backbone to Meet the Requirements of the Modern Enterprise Data Center at www.juniper.net/us/en/local/pdf/reference-architectures/8030001-en.pdf.

NOTE For a detailed discussion of two-tier network deployment, see Chapter 3: Implementation Overview. The two-tier network architecture defined in this handbook does not include a storage network.

Figure 2.4 Sample Two-Tier Data Center Network Deployment

[Figure: MX480 routers form the core tier, uplinked to the edge services tier; EX8200 Virtual Chassis and EX4200 virtual chassis form the access tier, connecting IBM blade servers and IBM PowerVM hosts (VIOS, LPARs, NICs/HEA and virtual switches).]


Core Network Tier

The core network tier commonly uses Juniper Networks MX Series Ethernet Services Routers or the Juniper Networks EX8200 line of Ethernet switches, such as the MX960, MX480, EX8216 and EX8208. Deciding on a particular device depends on various factors, including functional requirements in the core network tier, budgetary constraints or phased deployment considerations. The following represent several customer scenarios:

• Extend the Layer 2 broadcasting domain across a geographically dispersed data center so that all the servers associated with the Layer 2 domain appear on the same Ethernet LAN. Then the enterprise can leverage many existing provisioning and data migration tools to manage worldwide-distributed servers effectively. The MX960 and MX480 are ideal devices for building an MPLS backbone in the enterprise core network tier and for leveraging VPLS to create an extended Layer 2 broadcasting domain between data centers. In the core network tier, also known as the consolidated core layer, two MX Series routers connect to two SRX Series platforms, which have many virtual security services that can be configured into independent security zones. The MX Series routers connect to top-of-rack Juniper Networks EX Series Ethernet Switches in the access layer, which in turn aggregate the servers in the data center.

• Consolidate a traditional three-tier network infrastructure to support traffic-intensive applications and multi-tier business applications to lower latency, support data and video and integrate security. The MX960 and SRX5800 are ideal products to provide a consolidated solution, as illustrated in Figure 2.5.

Figure 2.5 Integrating Security Service with Core Network Tier

[Figure: MX960 routers in the consolidated core layer sit between the WAN edge (M Series, IP VPN) and the EX4200 Virtual Chassis access layer VLANs (HR, Finance, Guest departments). VRFs on the MX960s map to firewall, IPS and NAT security zones on the SRX5800. Key annotations: mapping of VLANs to security zones; map VRFs on the core to routing instances on the SRX Series; establish adjacency between VRFs on the core; traffic between networks runs through the SRX Series by default, or is filtered on the MX Series.]


Two MX960 routers are shown to indicate high availability between these devices, providing end-to-end network virtualization for applications by mapping Virtual Routing and Forwarding (VRF) instances in the MX Series to security zones in the SRX. In Figure 2.5, for example, VRF #1 is mapped to security zones Firewall #1, NAT #1 and IPS #1, and VRF #2 is mapped to Firewall #2 and IPS #2.

For details concerning network virtualization on the MX Series, refer to the Juniper Networks white paper, Extending the Virtualization Advantage with Network Virtualization – Virtualization Techniques in Juniper Networks MX Series 3D Universal Edge Routers at www.juniper.net/us/en/local/pdf/whitepapers/2000342-en.pdf.

Access Tier

We typically deploy the EX4200 Ethernet Switch with Virtual Chassis technology as a top-of-rack virtual chassis in the access tier.

The EX4200, together with server virtualization technology, supports high availability and high maintainability – two key requirements for mission critical, online applications.

Figure 2.6 Deploying PowerVM Using Dual VIOS and Dual Top-of-Rack Virtual Chassis

As illustrated in Figure 2.6:

• The Power 570 servers are deployed with dual Virtual I/O Servers (VIOS): the primary VIOS runs in active mode while the secondary VIOS runs in standby mode. The primary VIOS connects to one top-of-rack virtual chassis while the secondary one connects to another top-of-rack virtual chassis.


• The typical bandwidth between the PowerVM's VIOS and the top-of-rack virtual chassis switch is 4Gbps, realized as 4x1Gbps ports in the NIC combined in a LAG. The bandwidth can scale up to 8Gbps by aggregating eight ports in a LAG interface.

• The two Hardware Management Consoles (HMCs) connect to two different top-of-rack virtual chassis, for example HMC1 and HMC2.

Besides preventing a single point of failure (SPOF), this approach also provides a highly available maintenance architecture for the network: when a VIOS or virtual chassis instance requires maintenance, operators can upgrade the standby VIOS or virtual chassis while the environment runs business as usual, then switch the environment to the upgraded version without disrupting application service.

For connecting a larger number of servers, it is straightforward to duplicate the top-of-rack virtual chassis deployment at the access layer. Figure 2.7 shows a top-of-rack virtual chassis with seven EX4200s connected to a group of 56 Power 570 systems.

To connect an additional 56 Power 570 systems, an additional top-of-rack virtual chassis is deployed at the access layer. As a result, the access layer can connect a large number of Power 570 systems.

After addressing all of the connectivity issues, we must not lose sight of the importance of performance in the other network layers and of network security, because we are operating the data center network as one secured network.

Figure 2.7 Top-of-Rack Virtual Chassis with Seven EX4200s Connected to Power 570 Systems

[Figure: EX8200s in the core layer; EX4200 virtual chassis in the access layer; a server layer of 56 IBM Power 570 systems (4,480 client partitions) behind each virtual chassis.]


The EX4200 top-of-rack virtual chassis supports different types of physical connections. The EX4200 provides 48 1000Base-TX ports and two ports for 10Gbps XFP transceivers through its XFP uplink module. The XFP port can uplink to other network devices or connect to IBM Power Systems, based on user requirements. Table 2.2 lists three typical 10Gbps connections used in a Power System and the XFP uplink module required for each EX4200 connection.

MORE For further details concerning IBM PowerVM and EX4200 top-of-rack virtual chassis scalability, refer to Implementing IBM PowerVM Virtual Machines on Juniper Networks Data Center Networks at www.juniper.net/us/en/local/pdf/implementation-guides/8010049-en.pdf.

Table 2.2 Physical Connectivity Between IBM Power 570 and EX4200

IBM POWER 570                                 XFP Uplink Module    Cable
10Gbps Ethernet – LR PCI-Express Adapter      XFP Uplink Module    XFP LR 10Gbps Optical Transceiver Module SMF
10Gbps Ethernet – LR PCI-X 2.0 DDR Adapter    XFP Uplink Module    XFP LR 10Gbps Optical Transceiver Module SMF
Logical Host Ethernet Adapter (lp-hea)        XFP Uplink Module    XFP SR 10Gbps Optical Transceiver Module SMF


Chapter 3

Implementation Overview


THIS CHAPTER SERVES AS a reference to the later chapters in this handbook by presenting an overview of the next generation intra-data center network implementation scenarios. The implementation scenarios summarized in this chapter address the requirements, as previously discussed in Chapter 2. The network topology of this reference data center is covered specifically as a part of this chapter.

Chapters 4 through 8 focus on the technical aspects of the implementation that primarily include server connectivity, STP, multicast, performance, and high availability.

Implementation Network Topology Overview

Server and Network Connections

Spanning Tree Protocols

Multicast

Performance

High Availability

Implementation Scenarios


Implementation Network Topology Overview

This chapter presents the implementation of a two-tier data center network topology. This topology is common to all scenarios described in the later chapters. Please note that the setup diagrams for each individual scenario can differ despite the common overall network topology.

As shown in Figure 3.1, the implementation network topology is a two-tier data center network architecture.

Figure 3.1 Implementation Network Topology Overview

[Figure: the WAN edge + core (an EX8200 and two MX480s, interconnected with LAG and running PIM) sits above an access tier of EX4200 virtual chassis (Virtual Chassis 1, Virtual Chassis 2) and individual EX4200 top-of-rack switches (ToR1, ToR2, ToR3). Servers in VLANs A, B and C attach over 1GE links with LAG/VRRP; 10GE links with VRRP/RTG/STP connect the access tier to the core/aggregation tier; a multicast streaming source feeds IGMP host receivers. Legend: VC – EX4200 Virtual Chassis; RTG – Redundant Trunk Group; LAG – Link Aggregation Group; VRRP – Virtual Router Redundancy Protocol; STP – Spanning Tree Protocol.]


NOTE Each individual implementation can differ based on network design and requirements.

The topology described here consists of the following tiers and servers:

• Core/aggregation tier consisting of EX8200s or MX480s.

• Access tier comprised of EX4200s. These access switches can be deployed either individually or configured to form a virtual chassis. Either of these options can be implemented as top-of-rack switches to meet different Ethernet port density requirements. Pertaining to the topology under discussion:

- Three EX4200 switches form a virtual chassis (VC1), functioning as a top-of-rack switch (ToR1).

- Two EX4200 switches form a virtual chassis (VC2), functioning as a top-of-rack switch (ToR2).

- The EX4200-1, EX4200-2 and EX4200-3 are three individual access switches, functioning as top-of-rack switches (ToR3).

• Servers where the IBM BladeCenter, IBM x3500 and IBM PowerVM reside for all scenarios presented. For ease of configuration, one server type is used for each scenario.

Servers are segmented into different VLANs, for example VLAN A, B, and C, as shown in Figure 3.1. The physical network topology consists of the following connections:

• The servers connect to the access tier through multiple 1GbE links with Link Aggregation (LAG) to prevent a single point of failure (SPOF) in the physical link and to improve bandwidth (see the configuration sketch following this list).

• The access switches connect to the core layer with multiple 10GbE links.

• At the core tier, the MX480s and EX8200s interconnect to each other using redundant 10GbE links. These devices connect to the WAN edge tier, which interconnects the different data centers and connects to external networks.
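The server-facing LAG described above can be sketched in configuration. The following is a minimal, hypothetical example for an EX4200 access switch (the interface names and the ae0 bundle number are illustrative, not taken from the tested topology); Chapter 8 covers LAG in detail:

chassis {
    aggregated-devices {
        ethernet {
            device-count 2;           # number of aeX interfaces to create
        }
    }
}
interfaces {
    ge-0/0/0 {
        ether-options {
            802.3ad ae0;              # first 1GbE member link to the server
        }
    }
    ge-0/0/1 {
        ether-options {
            802.3ad ae0;              # second 1GbE member link to the server
        }
    }
    ae0 {
        aggregated-ether-options {
            lacp {
                active;               # negotiate the bundle with LACP
            }
        }
        unit 0 {
            family ethernet-switching;
        }
    }
}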

NOTE Choosing different connection configurations is based on network design and requirements. Redundant physical links are extremely important for achieving network high availability.

Server and Network Connections

Chapter 4 discusses the IBM System p, PowerVM, and Juniper Networks MX and EX Series network configurations. Typically, these network configurations are required for any implementation scenario.

For the IBM System p and PowerVM, we discuss their production networks and management networks. We also discuss key PowerVM server virtualization concepts, including the Shared Ethernet Adapter (SEA) and Virtual I/O Server (VIOS).

For the Juniper Networks MX and EX Series, we discuss the Junos operating system, which runs on both the MX and EX Series platforms. In addition, we discuss the jumbo frame Maximum Transmission Unit (MTU) setting.


Spanning Tree Protocols

Spanning Tree Protocol is enabled on the connections between the access switches and the servers, and on the connections between the access and core/aggregation devices. For server-to-access switch connections, STP is configured on the switch side so that the links to the servers are designated as "edge ports." There are no other bridges attached to edge ports. Administrators can configure RSTP, MSTP or VSTP between the access and aggregation/core devices.

NOTE Both the MX Series and EX Series devices support all spanning tree protocols.

Spanning Tree Protocols, such as RSTP, MSTP and VSTP, prevent loops in Layer 2-based access and aggregation layers. MSTP and VSTP are enhancements over RSTP. MSTP is useful when it is necessary to divide a Layer 2 network into multiple, logical spanning tree instances. For example, it is possible to have two MSTP instances that are mutually exclusive from each other while maintaining a single broadcast domain. Thus, MSTP provides better control throughout the network by dividing it into smaller regions. MSTP is preferred when different devices must fulfill the role of the root bridge. Thus, the role of the root bridge is spread across multiple devices.

The tradeoff for implementing MSTP is increased administrative overhead and network complexity. A higher number of root devices increases the latency during root bridge election.

NOTE When using MSTP, it is important to distribute the root bridge functionality across an optimal number of devices without increasing the latency during root bridge election.

VSTP can be compared to Cisco's PVST+ protocol. VSTP is implemented when spanning tree is enabled across multiple VLANs. However, VSTP is not scalable and cannot be used for a large number of VLANs. See Chapter 5 for a detailed discussion of STP protocols.
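As a minimal sketch of the edge-port arrangement described above (the interface names are hypothetical; Chapter 5 contains the tested configurations), an EX access switch might run RSTP, mark its server-facing port as an edge port, and leave root bridge election to the core:

protocols {
    rstp {
        interface ge-0/0/10.0 {
            edge;                     # server-facing port; no bridges behind it
        }
        interface xe-0/1/0.0;         # uplink toward the aggregation/core tier
    }
}

An MSTP or VSTP deployment would use a similar stanza under protocols mstp or protocols vstp, adding region and per-instance (or per-VLAN) parameters.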

Multicast

Multicast optimizes the delivery of streaming video and improves network infrastructure and overall efficiency. In Chapter 6, we present multicast implementation scenarios, including Protocol Independent Multicast (PIM) and IGMP snooping.

In these scenarios, the video streaming client runs on IBM servers. PIM is implemented on the core/aggregation tiers, while IGMP snooping is implemented on the access tier.
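A rough configuration sketch of this split, assuming PIM sparse mode with a hypothetical static rendezvous point on the core device and IGMP snooping for a hypothetical VLAN named Data01 on the access switch (Chapter 6 shows the tested configurations):

# On the core/aggregation device
protocols {
    pim {
        rp {
            static {
                address 10.255.0.1;   # hypothetical rendezvous point address
            }
        }
        interface all {
            mode sparse;
        }
    }
}

# On the EX access switch
protocols {
    igmp-snooping {
        vlan Data01;                  # constrain multicast flooding in the VLAN
    }
}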

Performance

In Chapter 7, two methods for improving data center network performance are covered in detail:

• Using CoS to manage traffic.

• Considering latency characteristics when designing networks using Juniper Networks data center network products.


Using CoS to Manage Traffic

Configuring CoS on the different devices within the data center enables SLAs for voice, video and other critical services. Traffic can be prioritized using different forwarding classes. Prioritization between streams assigned to a particular forwarding class can be achieved using a combination of Behavior Aggregate (BA) and Multifield (MF) classifiers.
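To make the two classifier types concrete, here is a hedged sketch (the class, filter and interface names are hypothetical; Chapter 7 contains the tested configurations). The BA classifier trusts DSCP markings on arriving traffic, while the MF firewall filter classifies on packet fields and would be applied as an input filter on an interface:

class-of-service {
    classifiers {
        dscp ba-classifier {
            forwarding-class expedited-forwarding {
                loss-priority low code-points ef;   # trust EF-marked traffic
            }
        }
    }
    interfaces {
        xe-0/1/0 {
            unit 0 {
                classifiers {
                    dscp ba-classifier;             # BA classification on ingress
                }
            }
        }
    }
}
firewall {
    family inet {
        filter mf-classifier {
            term video {
                from {
                    destination-port 5004;          # hypothetical video stream port
                }
                then {
                    forwarding-class assured-forwarding;
                    accept;
                }
            }
            term default {
                then accept;
            }
        }
    }
}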

Latency

The evolution of Web services and SOA has been critical to the integration of applications that use standard protocols such as HTTP. This tight integration of applications with web services has generated an increase of almost 30 to 75 percent in east-west traffic (server-to-server traffic) within the data center.

As a result, latency between servers must be reduced. Reduced latency can be achieved by:

• Consolidating the number of devices, and thus the tiers, within the data center.

• Extending the consolidation between tiers using techniques such as virtual chassis. With a virtual chassis, multiple access layer switches can be grouped logically to form one single switch. This reduces the latency to a few microseconds because the traffic from the server does not need to be forwarded through multiple devices to the aggregation layer.

In the latency implementation scenario, we primarily focus on how to configure the MX480 for measuring Layer 2 and Layer 3 latency.

High Availability

High availability can provide continuous service availability when implementing redundancy, stateful recovery from a failure, and proactive fault prediction. High availability minimizes failure recovery time.

Junos OS provides several high availability features to improve user experience and to reduce network downtime and maintenance. For example, features such as virtual chassis (supported on the EX4200), Nonstop Routing/Bridging (NSR/NSB, both supported on the MX Series), Graceful Routing Engine Switchover (GRES), Graceful Restart (GR) and Routing Engine redundancy can help increase availability at the device level. The Virtual Router Redundancy Protocol (VRRP), Redundant Trunk Group (RTG) and LAG features control the flow of traffic over chosen devices and links. The ISSU feature on the MX Series reduces network downtime for a Junos OS software upgrade. For further details concerning a variety of high availability features, see Chapter 8: Configuring High Availability.

Each high availability feature can address certain technical challenges but may not address all the challenges that today's customers experience. To meet network design requirements, customers can implement one or many high availability features. In the following section, we discuss high availability features by comparing their characteristics and limitations within the following groups:

• GRES, GR versus NSR/NSB

• Routing Engine Switchover

• Virtual Chassis

• VRRP


Comparing GRES and GR to NSR/NSB

Table 3.1 provides an overview of the GRES, GR and NSR/NSB high availability features available in Junos.

Table 3.1 High Availability Features in Junos OS

GRES
  Functions: Provides uninterrupted traffic forwarding. Incapable of providing router redundancy by itself. Works with GR protocol extensions.
  Implementation considerations: Maintains kernel state between REs and the PFE. Network churn and processing are not proportional to the effective change.

GR (protocol extensions)
  Functions: Allows a failure of a neighboring router not to disrupt adjacencies or traffic forwarding for a certain time interval. Enables adjoining peers to recognize RE switchover as a transitional event, which prevents them from starting the process of reconverging network paths.
  Implementation considerations: Network topology changes can interfere with graceful restart. Neighbors are required to support graceful restart. GR can cause black-holing if the RE failure occurs due to a different cause.

NSR/NSB
  Functions: RE switchover is transparent to network peers. No peer participation is required. No drop in adjacencies or sessions. Minimal impact on convergence. Allows switchover to occur at any point, even when routing convergence is in progress.
  Implementation considerations: Unsupported protocols must be refreshed using the normal recovery mechanisms inherent in each protocol.

Nonstop active routing/bridging and graceful restart are two different mechanisms for maintaining high availability when a router restarts.

A router undergoing a graceful restart relies on its neighbors to restore its routing protocol information. Graceful restart requires a restart process where the neighbors have to exit a wait interval and start providing routing information to the restarting router.

NSR/NSB does not require a router restart. Both primary and backup Routing Engines exchange updates with neighbors. Routing information exchange continues seamlessly with the neighbors when the primary Routing Engine fails because the backup takes over.

NOTE NSR cannot be enabled when the router is configured for graceful restart.
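The note above maps directly to configuration. A minimal sketch for a router with dual Routing Engines enabling GRES together with NSR (graceful restart, being mutually exclusive with NSR, would instead be enabled under routing-options):

chassis {
    redundancy {
        graceful-switchover;      # GRES: preserve kernel and PFE state across failover
    }
}
routing-options {
    nonstop-routing;              # NSR: replicate protocol state to the backup RE
}
system {
    commit synchronize;           # keep both Routing Engines on the same configuration
}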


Routing Engine Switchover

Because Routing Engine switchover works well with other high availability features, including graceful restart and NSR, many implementation options are possible. Table 3.2 summarizes the feature behavior and process flow of these options. The dual (redundant) Routing Engines option means that Routing Engine switchover is disabled. We also use the dual Routing Engines only option as a baseline to compare other options with high availability enabled, such as the graceful Routing Engine switchover enabled option.

Table 3.2 Routing Engine Switchover Implementation Options Summary

Dual Routing Engines only (no high availability features enabled)
  Feature behavior: Routing convergence takes place and traffic resumes when the switchover to the new primary Routing Engine is complete.
  Process flow:
  • All physical interfaces are taken offline.
  • Packet Forwarding Engines restart.
  • The backup Routing Engine restarts the routing protocol process (rpd).
  • The new primary Routing Engine discovers all hardware and interfaces.
  • The switchover takes several minutes and all of the router's adjacencies are aware of the physical (interface alarms) and routing (topology) change.

Graceful Routing Engine switchover enabled
  Feature behavior: Interface and kernel information is preserved during switchover. The switchover is faster because the Packet Forwarding Engines are not restarted.
  Process flow:
  • The new primary Routing Engine restarts the routing protocol process (rpd).
  • All adjacencies are aware of the router's change in state.

Graceful Routing Engine switchover and nonstop active routing enabled
  Feature behavior: Traffic is not interrupted during the switchover. Interface, kernel and routing protocol information is preserved.
  Process flow:
  • Unsupported protocols must be refreshed using the normal recovery mechanisms inherent in each protocol.

Graceful Routing Engine switchover and graceful restart enabled
  Feature behavior: Traffic is not interrupted during the switchover. Interface and kernel information is preserved. Graceful restart protocol extensions quickly collect and restore routing information from the neighboring routers.
  Process flow:
  • Neighbors are required to support graceful restart and a wait interval is required.
  • The routing protocol process (rpd) restarts. For certain protocols, a significant change in the network can cause graceful restart to stop.


Virtual Chassis

Between 2 and 10 EX4200 switches can be connected and configured to form a single virtual chassis that acts as a single logical device to the rest of the network. A virtual chassis typically is deployed in the access tier. It provides high availability for the connections between the servers and access switches. The servers can be connected to different member switches of the virtual chassis to prevent SPOF.

Virtual Router Redundancy Protocol

The Virtual Router Redundancy Protocol (VRRP), described in IETF standard RFC 3768, is a redundancy protocol that increases the availability of a default gateway in a static routing environment. VRRP enables hosts on a LAN to use redundant routers on that LAN without requiring more than the static configuration of a single default route on the hosts. The VRRP routers share the IP address corresponding to the default route configured on the hosts.

At any time, one of the VRRP routers is the master (active) and the others are backups. If the master fails, one of the backup routers becomes the new master router, thus always providing a virtual default router and allowing traffic on the LAN to be routed without relying on a single router.

Junos OS provides two tracking capabilities to enhance VRRP operations (a configuration sketch follows this list):

• Track the logical interfaces and switch to a VRRP backup router.

• Track the reachability to the primary router. An automatic failover to the backup occurs if the route to the given primary no longer exists in the routing table.
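A minimal VRRP sketch with interface tracking, assuming a hypothetical gateway address of 11.11.1.254, group 100, and tracked uplink ge-1/0/1 (all values are illustrative; Chapter 8 contains the tested configurations):

interfaces {
    ge-1/0/0 {
        unit 0 {
            family inet {
                address 11.11.1.2/24 {
                    vrrp-group 100 {
                        virtual-address 11.11.1.254;   # shared default gateway
                        priority 200;                  # higher value wins mastership
                        preempt;
                        track {
                            interface ge-1/0/1 {
                                priority-cost 60;      # demote master if uplink fails
                            }
                        }
                    }
                }
            }
        }
    }
}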

Implementation Scenarios

Table 3.3 summarizes the implementation scenarios presented in this handbook. It provides a mapping between each scenario, network tier, and devices. Using this table as a reference, you can map the corresponding chapter to each particular implementation scenario.


Table 3.3 Implementation Scenarios Summary

Spanning Tree (MSTP/RSTP/VSTP): Chapter 5. Deployment: Access-Aggregation/Core, Aggregation-Aggregation, Aggregation-Core. Devices: EX4200, EX8200, MX Series.

PIM: Chapter 6. Deployment: Access. Devices: EX4200, EX8200, MX Series.

IGMP snooping: Chapter 6. Deployment: Access. Devices: EX4200, EX8200, MX Series.

CoS: Chapter 7. Deployment: Access, Aggregation/Core. Devices: EX4200, EX8200, MX Series.

Virtual Chassis: Chapter 8. Deployment: Access. Devices: EX4200.

VRRP: Chapter 8. Deployment: Access, Aggregation/Core. Devices: EX4200, EX8200, MX Series.

ISSU: Chapter 8. Deployment: Aggregation/Core. Devices: MX Series only.

RTG: Chapter 8. Deployment: Access-Aggregation. Devices: EX Series only.

Routing Engine Redundancy: Chapter 8. Deployment: Aggregation/Core. Devices: MX Series, EX8200.

Non-Stop Routing: Chapter 8. Deployment: Aggregation/Core. Devices: MX Series only.

GR: Chapter 8. Deployment: Access, Aggregation/Core. Devices: EX4200, EX8200, MX Series.

LAG: Chapter 8. Deployment: Access-Server, Aggregation/Core. Devices: EX4200, EX8200, MX Series.


Table 3.4 functions as a reference aid to help our customers thoroughly understand how Juniper Networks products and features, which are available in Junos 9.6, can be implemented into their networks. This table summarizes implementation scenarios and their supported products that are defined in detail later in this guide.

Table 3.4 Mapping of Implementation Scenarios to Juniper Networks Supported Products

Implementation Scenarios        EX4200   EX8200   MX480

High Availability
NSR/NSB                         –        –        Yes
GRES+GR                         –        Yes      Yes
Virtual Chassis                 Yes      –        –
VRRP                            Yes      Yes      Yes
RTG                             Yes      Yes      –
LAG                             Yes      Yes      Yes
ISSU                            –        –        Yes
Routing Engine Redundancy       Yes      Yes      Yes

Spanning Tree Protocol
STP/RSTP, MSTP, VSTP            Yes      Yes      Yes

Performance
CoS                             Yes      Yes      Yes

Multicast
PIM                             Yes      Yes      Yes
IGMP                            Yes      Yes      Yes


Chapter 4

Connecting IBM Servers in the Data Center Network


THIS CHAPTER DISCUSSES the IBM System p and PowerVM network configuration and the Juniper Networks MX and EX Series network configuration. The IBM System p server is based on POWER processors, such as POWER5, POWER6 and the recently announced POWER7. In addition to the System p server, IBM offers PowerVM, which is a new brand for the system virtualization powered by POWER processors, and which includes elements such as Micro-Partitioning, logical partitioning (LPAR), Virtual I/O Server (VIOS) and hypervisor. Both System p servers and PowerVM typically are deployed in the data center and support mission critical applications.

IBM System p and PowerVM Production Networks

IBM System p and PowerVM Management Networks

Configuring IBM System p Servers and PowerVM

IBM PowerVM Network Deployment

Junos Operating System Overview

Configuring Network Devices


IBM System p and PowerVM Production Networks

As illustrated in Figure 4.1, the POWER Hypervisor is the foundation of virtual machine implementation in the IBM PowerVM system. Combined with features designed into IBM's POWER processors, the POWER Hypervisor enables dedicated-processor partitions, Micro-Partitioning, virtual processors, an IEEE VLAN compatible virtual switch, virtual Ethernet adapters, virtual SCSI adapters, and virtual consoles within the individual server. The POWER Hypervisor is a firmware layer positioned between the hosted operating systems and the server hardware. It is automatically installed and activated, regardless of system configuration. The POWER Hypervisor does not require specific or dedicated processor resources assigned to it.

Figure 4.1 IBM Power Systems Virtualization Overview

The VIOS, also called the Hosting Partition, is a special-purpose LPAR in the server, which provides virtual I/O resources to client partitions. The VIOS owns the resources, such as physical network interfaces and storage connections. The network or storage resources, reachable through the VIOS, can be shared by client partitions running on the machine, enabling administrators to minimize the number of physical servers deployed in their network.

In PowerVM, client partitions can communicate among each other on the same server without requiring access to physical Ethernet adapters. Physical Ethernet adapters are required to allow communication between applications running in the client partitions and external networks. A Shared Ethernet Adapter (SEA) in VIOS bridges the physical Ethernet adapters from the server to the virtual Ethernet adapters functioning within the server.



Because the SEA functions at Layer 2, the original MAC address and VLAN tags of the frames associated with the client partitions (virtual machines) are visible to other systems in the network. For further details, refer to IBM's white paper Virtual Networking on AIX 5L at www.ibm.com/servers/aix/whitepapers/aix_vn.pdf.

In PowerVM, the physical Network Interface Card (NIC) typically is allocated to the VIOS for improved utilization; in the IBM System p, the physical NIC is exclusively allocated to an LPAR.

IBM System p and PowerVM Management Networks

The System p servers and PowerVM in the data center can be managed by the HMC, which is a set of applications running on a dedicated IBM xSeries server that provides a CLI-based and web-based server management interface. The HMC typically connects a monitor, keyboard and mouse for local access. However, the management network, which connects the HMC and its managed servers, is critical to remote access, an essential operational task in today's data center.

Figure 4.2 IBM Power Systems Management Networks Overview

As illustrated in Figure 4.2, IBM Power Systems management requires two networks:

• Out-of-band management network.

• HMC private management network.



The out-of-band management network connects the HMC and client networks so that a client's request for access can be routed to the HMC. The HMC private management network is dedicated to communication between the HMC and its managed servers. The network uses a selected range of non-routable IP addresses, and a Dynamic Host Configuration Protocol (DHCP) server is available in the HMC for IP allocation. Each p server connects to the private management network through its Flexible Service Processor (FSP) ports.

Through the HMC private management network, the HMC manages servers in the following steps:

1. Connects the p server's FSP port to the HMC private management network so that the HMC and the server are in the same broadcast domain, and the HMC runs the DHCP server (dhcpd).

2. Powers on the server. The server's FSP runs the DHCP client and requests a new IP address. The FSP gets the IP address, which is allocated from the HMC.

3. The HMC communicates with the server and updates its managed server list with this new server.

4. The HMC performs operations on the server, for example powers the server on/off, creates LPARs, sets shared adapters (Host Ethernet and Host Channel) and configures virtual resources.

Configuring IBM System p Servers and PowerVM

In this section, we discuss IBM's System p server and PowerVM network configuration, including NIC, virtual Ethernet adapter and virtual Ethernet switch configurations, the SEA in VIOS, and the Host Ethernet Adapter.

Network Interface Card

As illustrated in Figure 4.3, the NIC can be allocated exclusively to an LPAR through the HMC. In the LPAR, system administrators will further configure NIC operation parameters, such as auto-negotiation, speed, duplex, flow control and support for jumbo frames.

Figure 4.3 IBM Power Systems Management Networks Overview

To allocate (or remove) the NIC on the LPAR, perform the following steps:

1. Select LPAR.
2. Select Configuration >> Manage Profiles.
3. Select the profile that you want to change.
4. Select the I/O tab.
5. Select the NIC (physical I/O resource).
6. Click Add to add the NIC (or Remove to remove the NIC).
7. Select OK to save changes, then click Close.



NOTE The NIC can be allocated to multiple profiles. Because the NIC allocation is exclusive during the profile runtime, only one profile activates and uses this NIC. If the NIC is already used by one active LPAR, and you attempt to activate another LPAR that requires the same NIC adapter, the activation process will be aborted. Adding or removing the NIC requires re-activating the LPAR profile.

Configuring Virtual Ethernet Adapter and Virtual Ethernet Switch

As illustrated in Figure 4.4, the POWER Hypervisor implements an IEEE 802.1Q VLAN style virtual Ethernet switch. Similar to a physical Ethernet switch, it provides virtual ports that support IEEE 802.1Q VLAN tagged or untagged Ethernet frames.

Similar to a physical Ethernet adapter on the physical server, the virtual Ethernet adapter on the partition provides network connectivity to the virtual Ethernet switch. When you create a virtual Ethernet adapter on the partition from the HMC, the corresponding virtual port is created on the virtual Ethernet switch, and there is no need to explicitly attach a virtual Ethernet adapter to a virtual port.

The virtual Ethernet adapters and the virtual Ethernet switch form a virtual network among the client partitions so that they can communicate with each other while running on the same physical server. The VIOS is required for a client partition to further access the physical network outside of the physical server. As shown in Figure 4.4, three LPARs and the VIOS connect to two virtual Ethernet switches through virtual Ethernet adapters. The VIOS also connects to the physical NIC so that LPAR 2 and LPAR 3 can communicate with each other; LPAR 1, LPAR 2 and the VIOS can communicate with each other and further access the external physical network through the physical NIC.

Figure 4.4 Configuring Virtual Ethernet Switches and Virtual Ethernet Adapters

This section provides steps for the following:

• Creating a virtual Ethernet switch

• Removing a virtual Ethernet switch

• Creating a virtual Ethernet adapter

• Removing a virtual Ethernet adapter

• Changing virtual Ethernet adapter properties



To Create a Virtual Ethernet Switch

1. Select server (Systems Management >> Servers >> select server).
2. Select Configuration >> Virtual Resources >> Virtual Network Management.
3. Select Action >> Create VSwitch.
4. Enter a name for the VSwitch, then select OK.
5. Click Close to close the dialog.

To Remove a Virtual Ethernet Switch

1. Select server (Systems Management >> Servers >> select server).
2. Select Configuration >> Virtual Resources >> Virtual Network Management.
3. Select Action >> Remove VSwitch.
4. Click Close to close the dialog.

To Create a Virtual Ethernet Adapter

1. Select server (Systems Management >> Servers >> select server).
2. Select LPAR.
3. Select Configuration >> Manage Profiles.
4. Select the profile that you want to change.
5. Select the Virtual Adapters tab.
6. Select Actions >> Create >> Ethernet Adapter (see Figure 4.5).

Figure 4.5 Virtual Ethernet Adapter Properties Window


7. In the Virtual Ethernet Adapter Properties window (as shown in Figure 4.5), enter the following:

a. Adapter ID (a default value displays).

b. VSwitch, the virtual Ethernet switch that this adapter connects to.

c. VLAN ID, the VLAN ID for untagged frames; the VSwitch will add/remove the VLAN header.

d. Select the checkbox This adapter is required for partition activation.

e. Select the checkbox IEEE 802.1q compatible adapter to control whether VLAN tagged frames are allowed on this adapter.

f. Select Add, Remove, New VLAN ID and Additional VLANs for adding/removing the VLAN IDs that are allowed for VLAN tagged frames.

g. Select the checkbox Access external network only on LPARs used for bridging traffic from the virtual Ethernet switch to some other NIC. Typically this should be kept unchecked for regular LPARs and checked for VIOS.

h. Click OK to save changes made in the profile and then select Close.

To Remove a Virtual Ethernet Adapter

1. Select server (Systems Management >> Servers >> select server).
2. Select LPAR.
3. Select Configuration >> Manage Profiles.
4. Select the profile that you want to change.
5. Select the Virtual Adapters tab.
6. Select the Ethernet adapter that you want to remove.
7. Select Actions >> Delete.
8. Click OK to save changes made in the profile and then select Close.

To Change a Virtual Ethernet Adapter's Properties

1. Select server (Systems Management >> Servers >> select server).
2. Select LPAR.
3. Select Configuration >> Manage Profiles.
4. Select the profile that you want to change.
5. Select the Virtual Adapters tab.
6. Select the Ethernet adapter that you want to edit.
7. Select Actions >> Edit.
8. Enter the required information in the fields, as illustrated in Figure 4.5.
9. Click OK to save changes made in the profile and then select Close.


Shared Ethernet Adapter in VIOS

The SEA is a software-implemented Ethernet bridge that connects a virtual Ethernet network to an external Ethernet network. With this connection, the SEA becomes a logical device in VIOS, which typically connects two other devices: the virtual Ethernet adapter on VIOS connects to the virtual Ethernet switch; the physical NIC connects to the external Ethernet network.

NOTE Make sure that the Access external network option is checked when the virtual Ethernet adapter is created on VIOS.

To create a SEA on VIOS, use the following command syntax:

mkvdev -sea <target_device> -vadapter <virtual_ethernet_adapters> -default <DefaultVirtualEthernetAdapter> -defaultid <SEADefaultPVID>

Table 4.1 lists and defines the parameters associated with this command.

Table 4.1 mkvdev Command Parameters and Description

target_device: The physical port that connects to the external network, on a NIC exclusively allocated to VIOS, an LPAR or an LHEA.

virtual_ethernet_adapters: One or more virtual Ethernet adapters that the SEA will bridge to target_device (typically only one adapter).

DefaultVirtualEthernetAdapter: The default virtual Ethernet adapter that will handle untagged frames (typically the same as the previous parameter).

SEADefaultPVID: The VID for the default virtual Ethernet adapter (typically has the value of 1).

The following sample command creates a SEA (ent3), as shown in Figure 4.6:

mkvdev -sea ent1 -vadapter ent2 -default ent2 -defaultid 1

Figure 4.6 Creating a Shared Ethernet Adapter in VIOS

[Figure: the VIOS LPAR on server SRV 1 bridges the physical NIC to Virtual Switch 1, which serves LPAR 1 and LPAR 2. Legend: ent1 – Ethernet interface to the NIC assigned to the VIOS LPAR; ent2 – Ethernet interface to the virtual switch; ent3 – Shared Ethernet Adapter (logical device).]


Host Ethernet Adapter

The HEA, also called the Integrated Virtual Ethernet adapter, is an integrated high-speed Ethernet adapter with hardware-assisted virtualization, which is a standard feature on every POWER6 processor-based server. The HEA provides a physical high-speed connection (10G) to the external network and provides logical ports. Figure 4.7 shows the LHEAs for LPARs.

Figure 4.7 Host Ethernet Adapter Overview

Because the HEA creates a virtual network for the client partitions and bridges the virtual network to the physical network, it replaces the need for both the virtual Ethernet and the Shared Ethernet Adapter. In addition, the HEA enhances performance and improves utilization for Ethernet because it eliminates the need to move packets (using virtual Ethernet) between partitions and then through a SEA to the physical Ethernet interface. For detailed information, refer to IBM's Redpaper Integrated Virtual Ethernet Adapter Technical Overview and Introduction at www.redbooks.ibm.com/abstracts/redp4340.html.

The HEA is configured through the HMC. The following list includes some HEA configuration rules:

• An LPAR uses only one logical port to connect to the HEA.

• The HEA consists of one or two groups of logical ports.

• Each group of logical ports has 16 logical ports (16 or 32 total for the HEA).

• Each group of logical ports can have one or two external ports assigned to it (predefined).

• A logical port group consists of one or two Ethernet switch partitions, one for each external port.

• An LPAR can have only one logical port connected to an Ethernet switch partition. This means that only one logical port can connect to the external port.

• Multi-Core Scaling (MCS) increases bandwidth between the LPAR and NIC. MCS reduces the number of logical ports; for MCS=2 the number of logical ports is 16/2=8. For MCS to take effect, a server restart is required.

• Only one logical port in a port group can be set in promiscuous mode.

In this section, we discuss the following HEA configurations:

• Configuring a HEA physical port

• Adding a LHEA logical port

• Removing a LHEA logical port



Configuring a HEA Physical Port

To configure the HEA physical port, perform the following steps (refer to Figure 4.8 as a reference):

1. Select server (Systems Management >> Servers >> select server).
2. Select Hardware Information >> Adapters >> Host Ethernet.
3. Select the adapter (port).
4. Click the Configure button.
5. Enter parameters for the following fields: Speed, Duplex, Maximum receiving packet size (jumbo frames), Pending Port Group Multi-Core Scaling value, Flow control, Promiscuous LPAR.
6. Click OK to save your changes.

Figure 4.8 HEA Physical Port Configuration Overview Window

Adding a LHEA Logical Port

To add an LHEA, perform the following steps:

1. Select server (Systems Management >> Servers >> select server).
2. Select LPAR.
3. Select Configuration >> Manage Profiles.
4. Select the profile that you want to change.
5. Select the Logical Host Ethernet Adapters (LHEA) tab.
6. Select the external port that the LHEA connects to.
7. Click Configure.
8. Enter the parameters for the following fields: Logical port, select one port 1…16; if MCS is greater than 1, some logical ports will be identified as Not Available.
9. Select the checkbox Allow all VLAN IDs. Otherwise, enter the actual VLAN ID in the VLAN to add field, as shown in Figure 4.9.
10. Click OK.


Figure 4.9 HEA Physical Port Configuration Overview

Removing a LHEA Logical Port

To remove the LHEA, perform the following steps:

1. Select server (Systems Management >> Servers >> select server).
2. Select LPAR.
3. Select Configuration >> Manage Profiles.
4. Select the profile that you want to change.
5. Select the Logical Host Ethernet Adapters (LHEA) tab.
6. Select the external port that the LHEA connects to.
7. Click the Reset button.
8. Click OK to close the window.
9. Click OK to save changes and close the window.


IBM PowerVM Network Deployment

In this section, we discuss a typical IBM PowerVM network deployment. As illustrated in Figure 4.10, two IBM System p servers are deployed in a data center and three networks are required:

• HMC private network (192.168.128.0/17).

• Out-of-band management network (172.28.113.0/24).

• Production network (11.11.1.0/24). Typically, testing traffic is sent to interfaces on the production network.

Figure 4.10 IBM Power Series Servers, LPARs, and Network Connections

[Figure: the HMC (running dhcpd, sshd and a web application, with an X-Window server and a management workstation running web and Telnet/SSH clients) connects to an Ethernet switch on the private network (192.168.128.0/17) and to an Ethernet switch on the management network (172.28.113.0/24). A p5 server and a p6 server, each with an FSP, a VIOS LPAR and client LPARs (RHEL, SUSE, AIX 5.3, AIX 6.1, with a Host Ethernet Adapter on the p6), attach their NICs under test to the DUT on the production network (11.11.1.0/24).]


The HMC runs on a Linux server with two network interfaces: one connects to a private network for all managed P5/P6 systems (the on-board Ethernet adapter on the servers, controlled by the FSP process); the other network interface connects to a management network. In the management network, the management workstation accesses the HMC Web interface through a Web browser.

There are two ways to set up communication with LPARs (logical partitions):

• Using a console window through the HMC.

• Using Telnet/SSH over the management network. Each LPAR has one dedicated Ethernet interface for connecting to the management network using the first physical port on the HEA (IVE) shared among LPARs.

Each LPAR must connect to the virtual Ethernet switch using the virtual Ethernet adapter. You create a virtual Ethernet switch and a virtual Ethernet adapter using the HMC. Virtual Ethernet adapters for VIOS LPARs must have the Access External Network option enabled.

The VIOS LPAR, which is a special version of AIX, performs the bridging between the virtual Ethernet switch (implemented in the Hypervisor) and the external port. For bridging frames between the physical adapter on the NIC and the virtual Ethernet adapter connected to the virtual Ethernet switch, another logical device (the SEA) is created in VIOS.

As illustrated in Figure 4.11, the typical network deployment with the access switch and LPAR (virtual machine) is as follows:

• The access switch connects to the physical NIC, which is assigned to ent1 in VIOS.

• The ent3 (SEA) bridges ent1 (physical NIC) and ent2 (virtual Ethernet adapter).

• The ent2 (virtual Ethernet adapter) is created and dedicated to the LPAR, which runs Red Hat Enterprise Linux.

• The ent3 also supports multiple VLANs. Each VLAN will associate with one logical Ethernet adapter, for example ent4.

Figure 4.11 Detailed Network Deployment with SEA

[Figure: an external Ethernet switch connects to the physical NIC; in the VIOS LPAR, the SEA (ent3) bridges ent1 and ent2 to Virtual Switch 1, which serves the RHEL LPAR. Legend: ent1 – Ethernet interface to the NIC assigned to the VIOS LPAR; ent2 – Ethernet interface to the virtual switch (switch in the Hypervisor); ent3 – Shared Ethernet Adapter (logical device); ent4 – logical Ethernet adapter for one VLAN.]


Junos Operating System Overview

As shown in Figure 4.12, the Junos OS includes two components: the Routing Engine and the Packet Forwarding Engine. These two components provide a separation of control plane functions, such as routing updates and system management, from packet data forwarding. Hence, products from Juniper Networks can deliver superior performance and highly reliable Internet operation.

Figure 4.12 Junos OS Architecture

Routing Engines

The Routing Engine runs the Junos operating system, which includes the FreeBSD kernel and the software processes. The primary operator processes include the device control process (dcd), routing protocol process (rpd), chassis process (chassisd), management process (mgd), traffic sampling process (sampled), automatic protection switching process (apsd), simple network management protocol process (snmpd) and system logging process (syslogd). The Routing Engine installs directly into the control plane and interacts with the Packet Forwarding Engine.



Packet Forwarding Engine

The Packet Forwarding Engine is designed to perform Layer 2 and Layer 3 switching, route lookups and rapid forwarding of packets. The Packet Forwarding Engine includes the backplane (or midplane), Flexible PIC Concentrator (FPC), Physical Interface Cards (PICs), the control board (switching/forwarding) and a CPU that runs the microkernel.

The microkernel is a simple, cooperative, multitasking, real-time operating system designed and built by Juniper Networks. The microkernel, which has many features, comprises fully independent software processes, each with its own chunk of memory. These applications communicate with one another. The hardware in the router prevents one process from affecting another. If a process fails, a snapshot is taken of where the failure occurred so that engineers can analyze the core dump and resolve the problem. The Switch Control Board (SCB) powers cards on and off, controls clocking, resets and boots, and then monitors and controls system functions, including the fan speed, board power status, PDM status and control, and the system front panel.

Interaction Between Routing Engine and Packet Forwarding Engine

The kernel on the Routing Engine communicates with the Packet Forwarding Engine and synchronizes a copy of the forwarding table on the Packet Forwarding Engine with that on the Routing Engine. Figure 4.12 shows the interaction between the Routing Engine and Packet Forwarding Engine with respect to the forwarding activity. The Routing Engine builds a master forwarding table based on its routing table. The kernel on the Routing Engine communicates with the Packet Forwarding Engine and provides the Packet Forwarding Engine the forwarding table. From this point on, the Packet Forwarding Engine performs traffic forwarding.

The Routing Engine itself is never involved in the forwarding of packets. The ASICs in the forwarding path only identify and send the Routing Engine any exception packets or routing control packets for processing. There are security mechanisms in place that prevent the Routing Engine (and control traffic) from being attacked or overwhelmed by these packets. Packets sent to the control plane from the forwarding plane are rate limited to protect the router from DoS attacks. The control traffic is protected from excess exception packets using multiple queues that provide a clean separation between the two. The packets are prioritized by the packet-handling interface, which sends them to the correct queues for appropriate handling.

Redundant function components in the network devices prevent SPOF and increase high availability and reliability. Juniper Networks devices typically are configured with a single Routing Engine and Packet Forwarding Engine. To achieve high availability and reliability, the user has two options:

• Create redundant Routing Engines and a single Packet Forwarding Engine, or

• Create redundant Routing Engines and redundant Packet Forwarding Engines.


Junos Processes

Junos processes run on the Routing Engine and maintain the routing tables, manage the routing protocols used on the router, control the router interfaces, control some chassis components, and act as the interface for system management and user access to the router. Major processes are discussed in detail later in this section.

A Junos process is a UNIX process that runs nonstop in the background while a machine is running. All of the processes operate through the Command Line Interface (CLI). Each process is a piece of the software and has a specific function or area to manage. The processes run in separated and protected address spaces. The following sections briefly cover two major Junos processes: the routing protocol process (rpd) and the management process (mgd).

Routing Protocol Process

The routing protocol process (rpd) provides the routing protocol intelligence to the router, controlling the forwarding of packets. Sitting in the user space (versus the kernel) of the Routing Engine, rpd is a mechanism for the Routing Engine to learn routing information and construct the routing table, which stores route information.

This process starts all configured routing protocols and handles all routing messages. It maintains one or more routing tables, which consolidate the routing information learned from all routing protocols. From this routing information, the rpd process determines the active routes to network destinations and installs these routes into the Routing Engine's forwarding table. Finally, the rpd process implements the routing policy, which enables an operator to control the routing information that is transferred between the routing protocols and the routing table. Using a routing policy, operators can filter and limit the transfer of information as well as set properties associated with specific routes.

NOTE RPD handles both unicast and multicast routing protocols, where data travels to one destination or to many destinations, respectively.

Management Process

Several databases connect to the management process (mgd). The config schema database merges the packages /usr/lib/dd/libjkernel-dd.so, /usr/lib/dd/libjroute-dd.so and /usr/lib/dd/libjdocs-dd at initialization time to make /var/db/schema.db, which controls what the user interface (UI) is. The config database holds /var/db/juniper.db.

The mgd works closely with the CLI, allowing the CLI to communicate with all the other processes. Mgd knows which process is required to execute commands (user input).

When the user enters a command, the CLI communicates with mgd over a UNIX domain socket using Junoscript, an XML-based remote procedure call (RPC) protocol. The mgd is connected to all the processes, and each process has a UNIX domain management socket.


If the command is legal, the socket opens and mgd sends the command to the appropriate process. For example, the chassis process (chassisd) implements the actions for the command show chassis hardware. The process sends its response to mgd in XML form and mgd relays the response back to the CLI.
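This XML exchange is visible from the CLI itself. Appending a display modifier to the same command shows, approximately, the response that mgd relays (output abridged; the namespace varies by release):

user@router> show chassis hardware | display xml
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/...">
    <chassis-inventory>
        <chassis>
            <name>Chassis</name>
            ...
        </chassis>
    </chassis-inventory>
</rpc-reply>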

Mgd plays an important part in the commit check phase. When you edit a configuration on the router, you must commit the change for it to take effect. Before the change actually is made, mgd subjects the candidate configuration to a check phase. The management process writes the new configuration into the config db (juniper.db).

Junos Operating System Network Management

The Junos operating system network management features work in conjunction with an operations support system (OSS) to manage the devices within the network. The Junos OS can assist in performing the following management tasks:

• Fault management: includes device monitoring and detecting and fixing faults

• Configuration management

• Accounting management: collects statistics for accounting purposes

• Performance management: monitors and adjusts device performance

• Security management: controls device access and authenticates users

The following interfaces (APIs) typically are used to manage and monitor Juniper Networks network devices:

• CLI

• J-Web

• SNMP

• NETCONF

In addition, Junos also supports other management interfaces to meet various requirements from enterprise and carrier providers, including J-Flow, sFlow, Ethernet OAM, TWAMP, etc.

MORE For detailed configuration information concerning the network management interfaces, please refer to the Junos Software Network Management Configuration Guide, Release 10.0, at http://www.juniper.net/techpubs/en_US/junos10.0/information-products/topic-collections/config-guide-network-mgm/frameset.html.

Configuring Network Devices

Table 4.2 lists and describes the ways by which IBM servers can connect to Juniper switches and routers in the data center.


Table 4.2 Methods for Connecting IBM Servers to Juniper Switches and Routers

The network device acts as a Layer 2 switch.
  To the IBM servers, the network device appears as a Layer 2 switch. The network device interfaces and the IBM server's NIC are in the same Layer 2 broadcast domain. Because the network device interfaces do not configure Layer 3 IP addresses, they do not provide routing functionality.

The network device acts as a switch with a Layer 3 address.
  To the IBM servers, the network device appears as a Layer 2 switch. The network device interfaces and the IBM server's NIC are in the same Layer 2 broadcast domain. The network device interfaces configure Layer 3 IP addresses so that they can route traffic to other connected networks.

The network device acts as a router.
  To the IBM servers, the network device appears as a Layer 3 router with a single Ethernet interface and IP address. The network device does not provide Layer 2 switching functionality.

In the next section, several different but typical methods for configuring the MX Series routers and EX Series switches are presented.

Configuring MX Series 3D Universal Edge Routers

In an MX Series configuration, one physical interface can have multiple logical interfaces, so that each logical interface is defined as a unit under the physical interface, followed by the logical interface ID number. Use the following statements to configure the mapping of Ethernet traffic to logical interfaces:

encapsulation and vlan-tagging

Configuring Layer 2 Switching

As illustrated in the following code, two Ethernet ports are in the same broadcast domain: the ge-5/1/5 interface is configured with an untagged VLAN, while the ge-5/1/7 interface is configured with a tagged VLAN.

Ethernet interfaces in MX Series routers can support one or many VLANs. Each Ethernet VLAN is mapped into one logical interface. If logical interfaces are used to separate traffic to different VLANs, we recommend using the same numbers for the logical interface (unit) and VLAN ID. For instance, the logical interface and the VLAN ID in the following sample use the same number (100):

interfaces ge-5/1/5 {
    unit 0 {
        family bridge;
    }
}
interfaces ge-5/1/7 {
    vlan-tagging;
    encapsulation flexible-ethernet-services;
    unit 100 {
        encapsulation vlan-bridge;
        family bridge;
    }
}
bridge-domains {
    Data01 {
        domain-type bridge;
        vlan-id 100;
        interface ge-5/1/5.0;
        interface ge-5/1/7.100;
    }
}
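Once committed, the bridge domain can be checked from the MX Series CLI; a brief sketch (the device prompt is hypothetical):

user@mx> show bridge domain
user@mx> show bridge mac-table bridge-domain Data01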

Configuring Layer 2 Switching and Layer 3 Interface

As illustrated in the following code, two Ethernet ports are in the same broadcast domain: the ge-5/1/5 interface is configured with an untagged VLAN, while the ge-5/1/7 interface is configured with a tagged VLAN.

In addition, IRB on the MX Series provides simultaneous support for Layer 2 bridging and Layer 3 routing on the same interface, such as irb.100, so that local packets can be routed to another routed interface or to another bridge domain that has a Layer 3 protocol configured.

interfaces ge-5/1/5 {
    unit 0 {
        family bridge;
    }
}
interfaces ge-5/1/7 {
    vlan-tagging;
    encapsulation flexible-ethernet-services;
    unit 100 {
        encapsulation vlan-bridge;
        family bridge;
    }
}
interfaces irb {
    unit 100 {
        family inet {
            address 11.11.1.1/24;
        }
    }
}
bridge-domains {
    Data01 {
        domain-type bridge;
        vlan-id 100;
        interface ge-5/1/5.0;
        interface ge-5/1/7.100;
        routing-interface irb.100;
    }
}

Configuring Layer 3 Routing

As illustrated in the following code, one Ethernet interface (ge-5/0/0) is configured with a tagged VLAN and an IP address:

interfaces ge-5/0/0 {
    description "P6-1";
    vlan-tagging;
    unit 30 {
        description Data01;
        vlan-id 30;
        family inet {
            address 11.11.1.1/24;
        }
    }
}


Configuring EX Series 4200 and 8200 Ethernet Switches

In a typical EX Series configuration, one physical interface can have multiple logical interfaces; a logical interface is defined as a unit under the physical interface, followed by a logical interface ID number. However, for Ethernet switching between ports on the EX Series, the interface configuration must include family ethernet-switching under unit 0.

Define the configuration of Layer 2 broadcast (bridge) domains under the vlans stanza. Interface membership in VLANs can be defined using one of the following two methods:

• Under the vlans x interface hierarchy (the preferred method).

• Under interface y unit 0 family ethernet-switching vlan members (see the sketch after this list).

If the Ethernet port carries only untagged frames for one VLAN, the port mode should be defined as access (the default). If the Ethernet port carries tagged frames, the port mode must be defined as trunk (the case with two or more VLANs on one port).
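For reference, the following minimal sketch shows the second membership method, declaring VLAN membership directly under the interface; the interface name and the VLAN name Data01 simply mirror the examples used in this section:

interfaces ge-5/1/5 {
    unit 0 {
        family ethernet-switching {
            port-mode access;      /* default; shown for clarity */
            vlan {
                members Data01;    /* membership defined on the interface */
            }
        }
    }
}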

Configuring Layer 2 Switching

As illustrated in the following code, two Ethernet ports are in the same broadcast domain: the ge-5/1/5 interface is configured with an untagged VLAN, while the ge-5/1/7 interface is configured with a tagged VLAN.

The Ethernet interfaces on EX Series switches can support one or many VLANs. Each VLAN is mapped to one logical interface. If logical interfaces are used to separate traffic onto different VLANs, we recommend using the same number for the logical interface (unit) and the VLAN ID. For example, the logical interface and the VLAN ID in the following sample use the same number (100):

interfaces ge-5/1/5 {
    unit 0 {
        family ethernet-switching;
    }
}
interfaces ge-5/1/7 {
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
        }
    }
}
vlans {
    Data01 {
        vlan-id 100;
        interface {
            ge-5/1/5.0;
            ge-5/1/7.0;
        }
    }
}


Configuring Layer 2 Switching and Layer 3 Interface

As illustrated in the following code, two Ethernet ports are in the same broadcast domain: the ge-5/1/5 interface is configured with an untagged VLAN, while the ge-5/1/7 interface is configured with a tagged VLAN.

In addition, the EX Series Ethernet switches support routed interfaces called routed VLAN interfaces (RVIs). RVIs are needed to route traffic from one VLAN to another. As opposed to IRB, which routes between bridge domains, an RVI routes between VLANs. In the following code, the RVI with IP address 11.11.1.1/24 is associated with the VLAN 100 logical interface.

interfaces ge-5/1/5 {
    unit 0 {
        family ethernet-switching;
    }
}
interfaces ge-5/1/7 {
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
        }
    }
}
interfaces vlan {
    unit 100 {
        family inet {
            address 11.11.1.1/24;
        }
    }
}
vlans {
    Data01 {
        vlan-id 100;
        interface {
            ge-5/1/5.0;
            ge-5/1/7.0;
        }
        l3-interface vlan.100;
    }
}

Configuring Layer 3 Routing

As illustrated in the following code, the Ethernet interface (ge-5/0/0) is configured with a tagged VLAN and an IP address:

interfaces ge-5/0/0 {
    description "P6-1";
    vlan-tagging;
    unit 30 {
        description Data01;
        vlan-id 30;
        family inet {
            address 11.11.1.1/24;
        }
    }
}


MX and EX Series Ethernet Interface Setting

In general, the default values of the Ethernet interface settings are as follows:

• Auto-negotiation for the speed setting.
• Automatic for the link-mode setting.
• Flow-control for the flow-control setting.

Because these default settings on the MX and EX Series work well in many use cases, we recommend using them as a starting point and then optimizing individual settings only when necessary.

The Ethernet interface configuration stanzas on the MX and EX Series are different. On the MX Series, the interface settings can be changed under the interface x gigether-options stanza; on the EX Series, the interface settings can be changed under the interface x ether-options stanza.

Under these configuration stanzas, the following settings are available:

• Link speed can be set to 10m, 100m, 1g, or auto-negotiation.
• Link-mode can be set to automatic, full-duplex, or half-duplex.
• Flow-control can be set to flow-control or no-flow-control.
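As a minimal sketch, and reusing interface names from earlier in this chapter, forcing fixed settings instead of the defaults might look as follows; exact option names vary by platform and Junos release, so treat this as illustrative rather than a validated configuration:

/* MX Series: settings under the gigether-options stanza */
interfaces ge-5/1/5 {
    gigether-options {
        no-auto-negotiation;    /* disable auto-negotiation */
        no-flow-control;        /* disable flow control */
    }
}

/* EX Series: settings under the ether-options stanza */
interfaces ge-0/0/7 {
    ether-options {
        link-mode full-duplex;  /* fixed duplex */
        speed {
            100m;               /* fixed speed */
        }
        no-flow-control;
    }
}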

NOTE When one device is set to auto-negotiate link mode while the other device is set to full-duplex link mode, the connection between these two devices will not work properly because of limitations set by the IEEE 802.3 standard. We highly recommend using the auto-negotiate link setting for Gigabit Ethernet.

NOTE The MX Series does not support half-duplex operation on 10/100/1000BASE-T interfaces.

MX and EX Series Support for Jumbo Frames (MTU)

The EX and MX Series can support frame sizes of up to 9216 octets on Ethernet interfaces. In a Junos configuration, this parameter is called the maximum transmission unit (MTU). In Junos, the MTU includes Ethernet overhead such as the source address, destination address, and VLAN tag. However, it does not include the preamble or the frame check sequence (FCS). The default Ethernet frame size in Junos is 1514 octets (a 1500-octet payload plus a 14-octet Ethernet header), while the default frame size on other vendors' devices can be 1500 octets.

It is important to understand that all devices in one broadcast domain must use the same jumbo frame MTU size. Otherwise, devices that do not support jumbo frames can silently discard some frames. The result is intermittent network problems, such as failures between routers to establish OSPF neighbor adjacency.

The EX and MX Series devices have different types of interfaces, such as physical and IRB interfaces. Because the MTU is associated with each interface type, the MTU configuration syntax differs, as listed in Table 4.3.

Table 4.3 MTU Configuration Syntax

MX Series routers:
• Physical interface: set interfaces <interface-name> mtu <mtu>
• IRB interface: set interfaces irb mtu <mtu>

EX Series Ethernet switches:
• Physical interface: set interfaces <interface-name> mtu <mtu>
• VLAN interface: set interfaces vlan mtu <mtu>
• Interface VLAN unit: set interfaces vlan unit 100 family inet mtu <mtu>
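For example, a minimal sketch of enabling 9216-octet jumbo frames on the interfaces used earlier in this chapter (remember that every device in the broadcast domain must use the same size) might be:

interfaces {
    ge-5/1/7 {
        mtu 9216;    /* physical interface (MX or EX) */
    }
    irb {
        mtu 9216;    /* MX Series IRB interface */
    }
}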


Chapter 5

Configuring Spanning Tree Protocols

THIS CHAPTER FOCUSES on the different spanning tree protocols – STP, RSTP, MSTP, and VSTP – that are used in Layer 2 networks to prevent loops. Typically, STP is supported only on legacy equipment and has been replaced with RSTP and other variants of spanning tree. Support for RSTP is mandatory on all devices that are capable of spanning tree functionality. When interoperating with legacy switches, an RSTP-capable switch automatically reverts to STP. We discuss STP in this chapter to provide background on spanning tree functionality.

Spanning Tree Protocols

Configuring RSTP/MSTP

Configuring VSTP/PVST+/Rapid-PVST+


Spanning Tree Protocols

STP works on the concept of a switch elected as a root bridge that connects in a mesh to other non-root switches. The active path of least cost is selected between each of the non-root bridges and the root bridge. In addition, a redundant path is identified and used when a failure occurs. All of these bridges exchange bridge protocol data units (BPDUs) that contain the bridge IDs and the cost to reach the root port.

The root bridge is elected based on priority. The switch assigned the lowest priority is elected as the root. The ports on a switch that are closest (in cost) to the root bridge become the root ports (RP).

NOTE There can be only one RP on a switch. A root bridge cannot have an RP.

The ports that have the least cost to the root bridge in the network are known as the designated ports (DP). Ports that are not selected as RP or DP are considered to be blocked. An optimized active path, based on bridge/port priority and cost, is chosen to forward data in the network. BPDUs that provide information on the optimal path are referred to as "superior" BPDUs, while those that provide sub-optimal metrics are referred to as "inferior" BPDUs.

BPDUs mainly consist of the following fields, which are used as the basis for determining the optimal forwarding topology:

• Root identifier – a representation of the switch's current snapshot of the network, assuming itself as the root bridge.

• Root path cost – the link speed of the port on which the BPDU is received.

• Bridge identifier – the identity used by the switch to send BPDUs.

• Port identifier – the identity of the port from which the BPDU originated.

Convergence of a spanning-tree-based network consists of a three-step process:

1. Root bridge election.

2. Root port election (on non-root switches).

3. Designated port election (on each network segment).

Figure 5.1 shows three switches: one root and two non-root bridges. The ports on the root bridge are the designated ports (DP). The ports with the least cost to the root bridge are the root ports (RP). All other interfaces running STP on the non-root bridges are alternate ports (ALT).

Rapid Spanning Tree Protocol

Rapid Spanning Tree Protocol (RSTP) is a later, enhanced version of STP that provides faster convergence times. The faster times are possible because RSTP uses protocol handshake messages, unlike STP, which uses fixed timeouts. When compared to STP, RSTP provides enhanced performance by:


• Generating and transmitting BPDUs from all nodes at the configured hello interval, irrespective of whether they receive any BPDUs from the RP. This allows the nodes to monitor any loss of hello messages and thus detect link failures more quickly than STP.

• Expediting changes in topology by directly transitioning a port (either an edge port or a port connected to a point-to-point link) from a blocked to a forwarding state.

• Providing a distributed model in which all bridges in the network actively participate in network connectivity.

Figure 5.1 STP Network

The new interface types defined in RSTP are:

• Point to point

• Edge

• Shared or non-edge

Point to Point

A point-to-point (P2P) interface provides a direct connection between two switches. Usually, a full-duplex interface is automatically set to be P2P.

Edge

The edge interface is another enhancement in RSTP that helps reduce convergence time when compared to STP. Ports connected to servers (where no bridges are attached) are typically defined as edge ports. Any changes made to the status of an edge port do not result in changes to the forwarding network topology and thus are ignored by RSTP.

[Figure 5.1 legend: RP – root port, DP – designated port, ALT – alternate port. All ports on the root bridge are DPs; each non-root bridge has one RP toward the root; the link between the two non-root bridges carries a DP on one side and a blocked ALT on the other.]


Shared or Non-edge

A shared or non-edge interface is an interface that is half-duplex or has more than two bridges on the same LAN.

When compared to STP, RSTP introduces the concepts of port state, port role, and interface type. The state and role of an RSTP-based port are independent. A port can send or receive BPDUs or data based on its current state. The role of a port depends on its position in the network and can be determined by performing a BPDU comparison during convergence.

Table 5.1 shows the mapping between RSTP port states and roles.

Table 5.1 Mapping between RSTP Port States and Roles

RSTP Role      RSTP State
Root           Forwarding
Designated     Forwarding
Alternate      Discard
Backup         Discard
Disabled       Discard

The alternate role in RSTP is analogous to the blocked port in STP. Defining an edge port allows the port to transition directly into a forwarding state, eliminating the 30-second delay that occurs with STP.

Multiple Spanning Tree Protocol

Multiple Spanning Tree Protocol (MSTP) is an enhancement to RSTP. MSTP supports the logical division of a Layer 2 network, or even a single switch, into regions. A region here refers to a network, a single VLAN, or multiple VLANs. With MSTP, separate spanning tree groups or instances can be configured for each network, VLAN, or group of VLANs. There can be multiple spanning tree instances (MSTIs) in each region. MSTP can thus control the spanning tree topology within each region. The common instance spanning tree (CIST), on the other hand, is a separate instance that is common across all regions; it controls the topology between the different regions.

Each MSTI has a spanning tree associated with it, and RSTP-based spanning tree tables are maintained per MSTI. Using the CIST to distribute this information over the common instance minimizes the exchange of spanning-tree-related packets, and thus network traffic, between regions.

MSTP is compatible with STP and makes use of RSTP for its convergence algorithms.


Figure 5.2 MSTP Example

Figure 5.2 shows three MSTIs: A, B, and C. Each of these instances consists of one or more VLANs. BPDUs specific to a particular instance are exchanged within each of the MSTIs. The CIST handles all BPDU information required to maintain the topology across the regions. The CIST is the instance that is common to all regions.

With MSTP, bridge priorities and related configurations can be applied on a per-instance basis. Thus, the root bridge of one instance does not necessarily have to be the root bridge of another instance.

VLAN Spanning Tree Protocol

In the case of VLAN Spanning Tree Protocol (VSTP), each VLAN has a spanning tree associated with it. The problem with this approach is mainly one of scalability – the processing resources consumed increase proportionally with the number of VLANs.

When configuring VSTP, the bridge priorities and the rest of the spanning tree configuration can be applied on a per-VLAN basis.

NOTE When configuring VSTP, please pay close attention to the following:

• When using virtual switches, VSTP cannot be configured on virtual switch bridge domains that contain ports with either VLAN ranges or mappings.

• VSTP can be enabled only for a VLAN ID that is associated with a bridge domain or VPLS routing instance. All logical interfaces assigned to the VLAN must have the same VLAN ID.

• VSTP is compatible with the Cisco PVST implementation.

Table 5.2 lists the support that exists on different platforms for the different spanning tree protocols.

[Figure 5.2: MSTI-A (VLAN 501), MSTI-B (VLANs 990 and 991), and MSTI-C (VLANs 100, 200, and 300) each exchange BPDUs internally, while the CIST carries the BPDUs between instances.]


Table 5.2 Overview of Spanning Tree Protocols and Platforms

STP:
  IBM BladeCenter (Cisco ESM) – configuration not supported; works with MSTP/PVST (backwards compatible)
  EX4200 – STP
  EX8200 – STP
  MX Series – configuration not supported; works with RSTP (backwards compatible)

RSTP:
  IBM BladeCenter (Cisco ESM) – configuration not supported; works with MSTP/PVST (backwards compatible)
  EX4200 – RSTP
  EX8200 – RSTP
  MX Series – RSTP

MSTP:
  Supported on all four platforms (IBM BladeCenter Cisco ESM, EX4200, EX8200, and MX Series).

PVST+ (Cisco)/VSTP (Juniper):
  IBM BladeCenter (Cisco ESM) – PVST+
  EX4200 – VSTP
  EX8200 – VSTP
  MX Series – VSTP

Rapid-PVST+ (Cisco)/VSTP (Juniper):
  IBM BladeCenter (Cisco ESM) – Rapid-PVST+
  EX4200 – VSTP
  EX8200 – VSTP
  MX Series – VSTP

Configuring RSTP/MSTP

Figure 5.3 shows a sample MSTP network that can be used to configure and verify RSTP/MSTP functionality. The devices in this network connect in a full mesh. The switches and the IBM BladeCenter are assigned these priorities:

• EX4200-A – 0K (lowest bridge priority number)

• MX480 – 8K

• EX8200 – 16K

• IBM BladeCenter (Cisco ESM) – 16K

• EX4200-B – 32K

We configure EX4200-A as the root bridge. Two MSTP instances, MSTI-1 and MSTI-2, correspond to VLANs 1122 and 71, respectively. Either one, or both, of these VLANs are configured on the links between the switches in this spanning tree network. Table 5.3 shows the association between the links, VLANs, and MSTI instances.

Table 5.3 Association between Links, VLANs and MSTI Instances

Links between Switches           MSTI Instance     VLAN ID
EX4200-B – EX8200                MSTI-1            1122
EX4200-B – IBM BladeCenter       MSTI-1            1122
EX4200-B – MX480                 MSTI-1, MSTI-2    1122, 71
EX4200-A – IBM BladeCenter       MSTI-2            71
EX4200-A – MX480                 MSTI-1, MSTI-2    1122, 71
EX4200-A – EX8200                MSTI-1            1122
MX480 – IBM BladeCenter          MSTI-1, MSTI-2    1122, 71


Figure 5.3 Spanning Tree MSTP/RSTP

Another instance, MSTI-0 (which constitutes the CIST), is created by default to exchange the overall spanning tree information for all MSTIs between the switches. The blade servers connect to each of the switches as hosts/servers. The switch ports on the different switches that connect to these BladeCenter servers are defined as edge ports, and the servers are assigned IP addresses. The selection of the root bridge is controlled by explicit configuration. That is, a bridge can be prevented from being elected as a root bridge by enabling root protection.

[Figure 5.3: the MX480 (priority 8K), EX8200 (priority 16K), EX4200-A (172.28.113.175, priority 0K), EX4200-B (172.28.113.180, priority 32K), and the IBM BladeCenter Cisco ESM (priority 16K) connect in a full mesh running MSTP, with VLAN 1122 and/or VLAN 71 on each link as listed in Table 5.3. Server connections are simulated to each DUT via the eth1 interfaces of BladeCenter blades 6-10 (11.22.1.6-11.22.1.10) on a pass-through module; each blade's internal eth0 interface connects via ESM trunk ports 17-20.]


Configuration Snippets

The following code pertains to the EX4200-A (RSTP/MSTP):

// Enable RSTP by assigning bridge priorities.
// Set priorities on interfaces to calculate the least cost path.
// Enable root protection so that the interface is blocked for the RSTP
// instance that receives superior BPDUs. Also, define the port to be an
// edge port.
rstp {
    bridge-priority 4k;
    interface ge-0/0/0.0 {
        priority 240;
    }
    interface ge-0/0/7.0 {
        priority 240;
        edge;
        no-root-port;
    }
    interface ge-0/0/9.0 {
        priority 240;
        edge;
        no-root-port;
    }
    interface ge-0/0/20.0 {
        priority 240;
    }
    interface ge-0/0/21.0 {
        priority 240;
    }
}

// Enable MSTP by assigning bridge priorities and interface priorities.
// Enable root protection so that the interface is blocked when it receives
// superior BPDUs. An operator can configure a bridge not to be elected as a
// root bridge by enabling root protection, which increases user control over
// the placement of the root bridge in the network. Also, define the port to
// be an edge port.
chandra@EX-175-CSR# show protocols mstp
configuration-name MSTP;
bridge-priority 8k;
interface ge-0/0/0.0 {
    priority 240;
}
interface ge-0/0/7.0 {
    priority 240;
    edge;
    no-root-port;
}
interface ge-0/0/9.0 {
    priority 240;
    edge;
    no-root-port;
}
interface ge-0/0/20.0 {
    priority 224;
}
interface ge-0/0/21.0 {
    priority 192;
}
interface ge-0/0/23.0 {
    priority 224;
}
// Define MSTI-1 and provide a bridge priority for the instance. Associate a
// VLAN with the instance.
msti 1 {
    bridge-priority 8k;
    vlan 1122;
}
// Define MSTI-2 and provide a bridge priority for the instance. Associate a
// VLAN and an interface with the instance.
msti 2 {
    bridge-priority 8k;
    vlan 71;
    interface ge-0/0/23.0 {
        priority 224;
    }
}

The following code snippet pertains to the EX4200-B:

// Enable RSTP by assigning bridge priorities and interface priorities.
// Enable root protection so that the interface is blocked when it receives
// superior BPDUs. Also, define the port to be an edge port.
rstp {
    bridge-priority 4k;
    interface ge-0/0/10.0 {
        priority 240;
    }
    interface ge-0/0/12.0 {
        priority 240;
    }
    interface ge-0/0/14.0 {
        priority 240;
    }
    interface ge-0/0/15.0 {
        priority 240;
        edge;
        no-root-port;
    }
}

// Assign bridge priorities and interface priorities.
// Enable root protection so that the interface is blocked when it receives
// superior BPDUs. Also, define the port to be an edge port.
chandra@SPLAB-EX-180> show configuration protocols mstp
configuration-name MSTP;
bridge-priority 0;
interface ge-0/0/10.0 {
    priority 240;
}
interface ge-0/0/11.0 {
    priority 240;
}
interface ge-0/0/12.0 {
    priority 224;
}
interface ge-0/0/13.0 {
    priority 224;
}
interface ge-0/0/14.0 {
    priority 192;
}
interface ge-0/0/15.0 {
    priority 240;
    edge;
    no-root-port;
}
// Define MSTI-1 and provide a bridge priority for the instance. Associate a
// VLAN with the instance.
msti 1 {
    bridge-priority 0;
    vlan 1122;
}
// Define MSTI-2 and provide a bridge priority for the instance. Associate a
// VLAN and an interface with the instance.
msti 2 {
    bridge-priority 0;
    vlan 71;
    interface ge-0/0/13.0 {
        priority 224;
    }
}

The following code snippet pertains to the MX480:

rstp {
    bridge-priority 40k;
    interface ge-5/1/1 {
        priority 240;
    }
    interface ge-5/1/2 {
        priority 240;
    }
    interface ge-5/2/2 {
        priority 240;
    }
    interface ge-5/3/3 {
        priority 240;
    }
    interface ge-5/3/4 {
        priority 240;
        edge;
        no-root-port;
    }
}

chandra@HE-RE-0-MX480# show protocols mstp
bridge-priority 8k;
interface ge-5/1/1 {
    priority 224;
}
interface ge-5/1/2 {
    priority 192;
}
interface ge-5/2/2 {
    priority 192;
}
interface ge-5/3/3 {
    priority 224;
}
interface ge-5/3/4 {
    priority 240;
    edge;
    no-root-port;
}
msti 1 {
    bridge-priority 4k;
    vlan 1122;
}
msti 2 {
    bridge-priority 4k;
    vlan 71;
    interface ge-5/1/1 {
        priority 224;
    }
}

Verification

Based on the sample network, administrators can verify the RSTP/MSTP configuration by issuing show commands to confirm that two MSTI instances and one MSTI-0 common instance are present on each switch. The following CLI sample shows these three different MSTI instances and the VLANs associated with each of them:

chandra@SPLAB-EX-180> show spanning-tree mstp configuration
MSTP information
Context identifier   : 0
Region name          : MSTP
Revision             : 0
Configuration digest : 0xeef3ba72b1e4404425b44520425d3d9e

MSTI   Member VLANs
0      0-70,72-1121,1123-4094
1      1122
2      71

Each of these instances should have an RP (ROOT), BP (ALT), and DP (DESG) of its own:

chandra@SPLAB-EX-180> show spanning-tree interface

Spanning tree interface parameters for instance 0
Interface    Port ID   Designated  Designated bridge ID  Port cost  State  Role
                       port ID
ge-0/0/10.0  240:523   240:513     0.0019e2544040        20000      FWD    ROOT
ge-0/0/11.0  240:524   240:524     32768.0019e2544ec0    20000      FWD    DESG
ge-0/0/12.0  224:525   224:525     32768.0019e2544ec0    20000      FWD    DESG
ge-0/0/13.0  224:526   224:526     32768.0019e2544ec0    20000      FWD    DESG
ge-0/0/14.0  192:527   192:213     8192.001db5a167d1     20000      BLK    ALT
ge-0/0/15.0  240:528   240:528     32768.0019e2544ec0    20000      FWD    DESG
ge-0/0/36.0  128:549   128:549     32768.0019e2544ec0    20000      FWD    DESG
ge-0/0/46.0  128:559   128:559     32768.0019e2544ec0    20000      FWD    DESG

Spanning tree interface parameters for instance 1
Interface    Port ID   Designated  Designated bridge ID  Port cost  State  Role
                       port ID
ge-0/0/10.0  128:523   128:513     1.0019e2544040        20000      FWD    ROOT
ge-0/0/12.0  128:525   128:525     32769.0019e2544ec0    20000      FWD    DESG
ge-0/0/14.0  128:527   192:213     4097.001db5a167d1     20000      BLK    ALT
ge-0/0/15.0  128:528   128:528     32769.0019e2544ec0    20000      FWD    DESG

Spanning tree interface parameters for instance 2
Interface    Port ID   Designated  Designated bridge ID  Port cost  State  Role
                       port ID
ge-0/0/10.0  128:523   128:513     2.0019e2544040        20000      FWD    ROOT
ge-0/0/13.0  224:526   224:526     16386.0019e2544ec0    20000      FWD    DESG
ge-0/0/14.0  128:527   192:213     4098.001db5a167d1     20000      BLK    ALT

The following CLI output shows the MSTI-0 information on the root bridge. All ports are in the forwarding state.

chandra@EX-175-CSR> show spanning-tree interface

Spanning tree interface parameters for instance 0
Interface    Port ID   Designated  Designated bridge ID  Port cost  State  Role
                       port ID
ge-0/0/0.0   240:513   240:513     12288.0019e2544040    20000      FWD    DESG
ge-0/0/7.0   240:520   240:520     12288.0019e2544040    20000      FWD    DESG
ge-0/0/9.0   240:522   240:522     12288.0019e2544040    20000      FWD    DESG
ge-0/0/20.0  240:533   240:533     12288.0019e2544040    20000      FWD    DESG
ge-0/0/21.0  240:534   240:534     12288.0019e2544040    20000      FWD    DESG
ge-0/0/24.0  128:537   128:537     12288.0019e2544040    20000      FWD    DESG
ge-0/0/25.0  128:538   128:538     12288.0019e2544040    20000      FWD    DESG

1. Check that only the information from instance MSTI-0 (but not MSTI-1 and MSTI-2) is available on all switches.

2. Confirm that there is only one direct path to any other interface within each MSTI instance on a switch. All other redundant paths should be designated as blocked. Use the show spanning-tree interface command for this purpose.

3. Verify that a change in priority on any MSTI instance on a switch is propagated through the entire mesh, using the show spanning-tree interface command.

Configuring VSTP/PVST+/Rapid-PVST+

Figure 5.4 depicts a sample network consisting of a mesh of EX8200/EX4200 and MX480 devices with the Cisco ESM switch. PVST+ and VSTP must be enabled on the Cisco and Juniper devices, respectively, for interoperability. Two VLANs, 1122 and 71, are created on all devices; VSTP is enabled for both of these VLANs.


Figure 5.4 Spanning Tree – VSTP/(PVST+, Rapid-PVST+)

[Figure 5.4: the same mesh of EX4200-A (172.28.113.175), EX4200-B, EX8200, and MX480 with the IBM BladeCenter Cisco ESM, here running Cisco PVST+/Rapid-PVST+ against Juniper VSTP. Per-VLAN bridge priorities (bc_ext/bc_int) of 24K/12K, 16K/8K, and 16K/16K are assigned across the devices, with VLANs 71 and 1122 on the inter-switch links. Server connections are simulated to each DUT via the eth1 interfaces of blades 6-10 (11.22.1.6-11.22.1.10) on the BladeCenter's pass-through module; each blade's internal eth0 interface connects via ESM trunk ports 17-19.]


Table 5.4 lists the bridge priorities for each of the VLANs.

Table 5.4 VSTP Bridge Priorities

VLAN ID    Bridge Priority
71         EX4200-A – 8K
           EX4200-B – 4K
           EX8200 – 12K
           MX480 – 16K
1122       EX4200-A – 16K
           EX4200-B – 32K
           EX8200 – 24K
           MX480 – 16K

Verification

Based on the sample setup shown in Figure 5.4, verify interoperability of the VSTP configuration with Cisco PVST+ by performing the following steps.

1. Verify that each of the switches with VSTP/PVST+ enabled has two spanning trees corresponding to the two VLANs. Each VLAN has its own RP (ROOT), BP (ALT), and DP (DESG). Use the show spanning-tree interface command:

chandra@SPLAB-EX-180> show spanning-tree interface

Spanning tree interface parameters for VLAN 1122
Interface    Port ID   Designated  Designated bridge ID  Port cost  State  Role
                       port ID
ge-0/0/10.0  128:523   128:513     17506.0019e2544040    20000      FWD    ROOT
ge-0/0/12.0  224:525   224:525     33890.0019e2544ec0    20000      FWD    DESG
ge-0/0/14.0  240:527   240:213     17506.001db5a167d0    20000      BLK    ALT
ge-0/0/15.0  240:528   240:528     33890.0019e2544ec0    20000      FWD    DESG

Spanning tree interface parameters for VLAN 71
Interface    Port ID   Designated  Designated bridge ID  Port cost  State  Role
                       port ID
ge-0/0/10.0  128:523   128:523     4167.0019e2544ec0     20000      FWD    DESG
ge-0/0/13.0  224:526   224:526     4167.0019e2544ec0     20000      FWD    DESG
ge-0/0/14.0  240:527   240:527     4167.0019e2544ec0     20000      FWD    DESG

2. Confirm that there is only one direct active path per VLAN instance to any other non-root bridge. All redundant paths should be identified as blocked. Use the output of the show spanning-tree interface command for this purpose. Rebooting the root bridge should cause the device with the next-lowest priority to step up as the root for that particular VLAN; this information must be updated in the VLAN table on all devices.

3. Verify that the original root bridge becomes the primary (active) root again after the reboot. This information should be updated on all devices in the mesh.

NOTE Any change in bridge priorities in either of the VSTP VLANs must be propagated through the mesh.


Configuration Snippets

The following code pertains to the EX4200-A:

chandra@EX-175-CSR> show configuration protocols vstp
// Define VLAN bc-external; assign bridge and interface priorities.
// Enable root protection so that the interface is blocked when it receives
// superior BPDUs. Also, define the port to be an edge port.
vlan bc-external {
    bridge-priority 16k;
    interface ge-0/0/7.0 {
        priority 240;
        edge;
        no-root-port;
    }
    interface ge-0/0/20.0 {
        priority 224;
    }
    interface ge-0/0/21.0 {
        priority 240;
    }
}
// Define VLAN bc-internal; assign bridge and interface priorities.
// Enable root protection so that the interface is blocked when it receives
// superior BPDUs. Also, define the port to be an edge port.
vlan bc-internal {
    bridge-priority 8k;
    interface ge-0/0/9.0 {
        priority 240;
        edge;
        no-root-port;
    }
    interface ge-0/0/21.0 {
        priority 240;
    }
    interface ge-0/0/23.0 {
        priority 224;
    }
}

The following code pertains to the MX480:

// Define VLAN 71; assign bridge and interface priorities.
// Define VLAN 1122; assign bridge and interface priorities.
chandra@HE-RE-0-MX480> show configuration protocols vstp
vlan 71 {
    bridge-priority 16k;
    interface ge-5/1/1 {
        priority 240;
    }
    interface ge-5/1/2 {
        priority 240;
    }
    interface ge-5/2/2 {
        priority 240;
    }
    interface ge-5/3/3 {
        priority 240;
    }
}
vlan 1122 {
    bridge-priority 16k;
    interface ge-5/1/1 {
        priority 240;
    }
    interface ge-5/1/2 {
        priority 240;
    }
    interface ge-5/2/2 {
        priority 240;
    }
    interface ge-5/3/3 {
        priority 240;
    }
    interface ge-5/3/4 {
        priority 240;
    }
}


Chapter 6

Supporting Multicast Traffic

IPv4 SENDS IP DATAGRAMS to a single destination or a group of interested receivers by using three fundamental types of addresses:

• Unicast – sends a packet to a single destination.

• Broadcast – sends a datagram to an entire subnetwork.

• Multicast – sends a datagram to a set of hosts that can be on different sub-networks and can be configured as members of a multicast group.

Internet Group Management Protocol Overview

Configuring Protocol Independent Multicast

IGMP Snooping

Configuring IGMP Snooping


Internet Group Management Protocol Overview

A multicast datagram is delivered to destination group members with the same best-effort reliability as a standard unicast IP datagram. This means that multicast datagrams are not guaranteed to reach all members of a group, or to arrive in the same order in which they were transmitted. The only difference between a multicast IP packet and a unicast IP packet is the presence of a group address in the IP header destination address field.

NOTE According to RFC 3171, IP addresses 224.0.0.0 through 239.255.255.255 are designated as multicast addresses in IPv4. Individual hosts can join or leave a multicast group at any time. There are no restrictions on the physical location or on the number of members in a multicast group. A host can be a member of more than one multicast group at any time and does not have to belong to a group to send packets to members of that group.

Routers use a group membership protocol to learn about the presence of group members on directly attached subnetworks. When a host joins a multicast group, it transmits a group membership protocol message to the group and sets its IP process and network interface card to receive frames addressed to the multicast group.

Junos software supports IP multicast routing with many protocols, such as:

• Internet Group Management Protocol (IGMP), versions 1, 2, and 3.

• Multicast Listener Discovery (MLD), versions 1 and 2.

• Distance Vector Multicast Routing Protocol (DVMRP).

• Protocol Independent Multicast (PIM).

• Multicast Source Discovery Protocol (MSDP).

• Session Announcement Protocol (SAP) and Session Description Protocol (SDP).

For details concerning the IP multicast feature and how to configure it using Junos OS v10.0, please refer to the IP Multicast Operational Mode Commands Guide at https://www.kr.juniper.net/techpubs/en_US/junos10.0/information-products/topic-collections/swcmdref-protocols/chap-ip-multicast-op-mode-cmds.html#chap-ip-multicast-op-mode-cmds.

Implementing an IP multicast network requires a number of building blocks. Figure 6.1 shows a typical end-to-end video streaming service with IP multicasting. Both the client computer and adjacent network switches use IGMP to connect the client to a local multicast router. Between the local and remote multicast routers, we used Protocol Independent Multicast (PIM) to direct multicast traffic from the video server to many multicast clients.

The Internet Group Management Protocol (IGMP) manages the membership of hosts and routers in multicast groups. IP hosts use IGMP to report their multicast group memberships to any adjacent multicast routers. For each of their attached physical networks, multicast routers use IGMP to learn which groups have members.


Figure 6.1 IP Multicasting Network Deployment

[Figure 6.1: a video server streams to a local multicast router; PIM carries the multicast traffic across the routed network to a remote multicast router, which delivers it as UDP/RTP over the LAN through a Layer 2 switch with IGMP snooping to the video client, with IGMP running between the client, switch, and router.]

In addition to managing group membership, IGMP is used as the transport for several related multicast protocols, such as DVMRP and PIMv1. Hosts and routers support three versions of IGMP:

IGMPv1 – The original protocol, defined in RFC 1112. An explicit join message is sent to the router, but a timeout is used to determine when hosts leave a group.

IGMPv2 – Defined in RFC 2236. Among other features, IGMPv2 adds an explicit leave message to the join message so that routers can easily determine when a group has no listeners.

IGMPv3 – Defined in RFC 3376. IGMPv3 supports the ability to specify which sources can send to a multicast group. This type of multicast group is called a source-specific multicast (SSM) group, and its multicast address range is 232/8. IGMPv3 is also backwards compatible with IGMPv1 and IGMPv2.

For SSM mode, we can configure the multicast source address so that only that source can send traffic to the multicast group. In this example, we create group 225.1.1.1 and accept IP address 10.0.0.2 as the only source.

user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1 source 10.0.0.2
user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1 source 10.0.0.2 source-count 3
user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1 source 10.0.0.2 source-count 3 source-increment 0.0.0.2
user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1 exclude source 10.0.0.2

NOTE The SSM configuration requires that the IGMP version on the interface be set to IGMPv3.
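Because the static groups above include a source address, the interface must be explicitly set to IGMPv3. A minimal sketch, reusing the same interface as the example:

user@host# set protocols igmp interface fe-0/1/2 version 3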



IGMP Static Group Membership

We can create IGMP static group membership to enable multicast forwarding without a receiver host. The following examples show various options used when creating static groups:

user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1
user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1 group-count 3
user@host# set protocols igmp interface fe-0/1/2 static group 225.1.1.1 group-count 3 group-increment 0.0.0.2

When we enable IGMP static group membership, data is forwarded to an interface without that interface receiving membership reports from downstream hosts.

NOTE When we configure static IGMP group entries on point-to-point links that connect routers to a rendezvous point (RP), the static IGMP group entries do not generate join messages toward the RP.

Various Multicast Routing Protocols

Multicast routing protocols enable a collection of multicast routers to build (join) distribution trees when a host on a directly attached subnet, typically a LAN, wants to receive traffic from a certain multicast group. Five multicast routing protocols can be used to achieve this: DVMRP, Multicast Open Shortest Path First (MOSPF), Core Based Tree (CBT), PIM sparse mode, and PIM dense mode. Table 6.1 summarizes the differences among the five multicast routing protocols.

Table 6.1 Multicast Routing Protocols Summary

Protocol          Dense Mode  Sparse Mode  Implicit Join  Explicit Join  (S,G) SBT  (*,G) Shared Tree
DVMRP             Yes         No           Yes            No             Yes        No
MOSPF             Yes         No           No             Yes            Yes        No
PIM dense mode    Yes         No           Yes            No             Yes        No
PIM sparse mode   No          Yes          No             Yes            Yes        Yes
CBT               No          Yes          No             Yes            No         Yes

Because PIM sparse mode and PIM dense mode are the most widely deployed techniques, they were used in this reference design.

Protocol Independent Multicast

The predominant multicast routing protocol used on the Internet today is Protocol Independent Multicast (PIM). PIM has two versions, v1 and v2. The main difference between PIMv1 and PIMv2 is the packet format: PIMv1 messages use Internet Group Management Protocol (IGMP) packets, whereas PIMv2 has its own IP protocol number (103) and packet structure.

In addition, it is important to select the appropriate mode. Although PIM provides four modes – sparse mode, dense mode, sparse-dense mode, and source-specific mode – users typically use one of the two basic modes: sparse mode or dense mode.


PIM dense mode requires only a multicast source and a series of multicast-enabled routers running PIM dense mode to allow receivers to obtain multicast content. Dense mode ensures that the traffic reaches its prescribed destinations by periodically flooding the network with multicast traffic, and it relies on prune messages to ensure that subnets where all receivers are uninterested in a particular multicast group stop receiving its packets.

PIM sparse mode requires establishing special routers called rendezvous points (RPs) in the network core. The RP is the point where upstream join messages from interested receivers meet downstream traffic from the source of the multicast group content. A network can have many RPs, but PIM sparse mode allows only one RP to be active for any multicast group.

A multicast router typically has two kinds of IGMP interfaces: upstream and downstream. We must configure PIM on the upstream IGMP interfaces to enable multicast routing, to perform reverse-path forwarding for multicast data packets so that the multicast forwarding table is populated for the upstream interfaces and, in the case of PIM sparse mode, to distribute IGMP group memberships into the multicast routing domain.

Only one "pseudo PIM interface" is required to represent all IGMP downstream (IGMP-only) interfaces on the router. Therefore, PIM is generally not required on IGMP downstream interfaces, reducing the consumption of router resources, such as memory.

IGMP and Nonstop Active Routing

NSR configurations include passive support for IGMP in association with PIM. The primary Routing Engine uses IGMP to determine its PIM multicast state, and this IGMP-derived information is replicated on the backup Routing Engine. After a failover, IGMP on the new primary Routing Engine quickly relearns the state information through normal IGMP operation. In the interim, the new primary Routing Engine retains the IGMP-derived PIM state received through the replication process from the original primary Routing Engine. This state information times out unless refreshed by IGMP on the new primary Routing Engine. Additional IGMP configuration is not required.
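Although no IGMP-specific configuration is needed, NSR itself must be turned on. A minimal sketch on a dual-Routing Engine chassis (assuming the platform supports nonstop active routing) is:

chassis {
    redundancy {
        graceful-switchover;    /* enables GRES, a prerequisite for NSR */
    }
}
routing-options {
    nonstop-routing;            /* replicates state to the backup Routing Engine */
}
system {
    commit synchronize;         /* keeps both Routing Engine configurations in sync */
}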

Filtering Unwanted IGMP Reports at the IGMP Interface Level

The group-policy statement enables the router to filter unwanted IGMP reports at the interface level. When this statement is enabled on a router running IGMP version 2 (IGMPv2) or version 3 (IGMPv3), after the router receives an IGMP report, it compares the group against the specified group policy and performs the action configured in that policy. For example, the router rejects the report if the group does not match the address or network defined in the policy.

To enable IGMP report filtering for an interface, include the group-policy statement:

protocols {
    igmp {
        interface ge-1/1/1.0 {
            group-policy reject_policy;
        }
    }
}


policy-options {
    // IGMPv2 policy
    policy-statement reject_policy {
        from {
            router-filter 192.1.1.1/32 exact;
        }
        then reject;
    }
    // IGMPv3 policy
    policy-statement reject_policy {
        from {
            router-filter 192.1.1.1/32 exact;
            source-address-filter 10.1.0.0/16 orlonger;
        }
        then reject;
    }
}

IGMP Configuration Command Hierarchy

To configure the Internet Group Management Protocol (IGMP), include the following igmp statement:

igmp {
    accounting;                          // accounting purposes
    interface interface-name {
        disable;
        (accounting | no-accounting);    // individual interface-specific accounting
        group-policy [ policy-names ];
        immediate-leave;                 // see Note 1 at end of code snippet
        oif-map map-name;
        promiscuous-mode;                // see Note 2 at end of code snippet
        ssm-map ssm-map-name;
        static {
            group multicast-group-address {
                exclude;
                group-count number;
                group-increment increment;
                source ip-address {
                    source-count number;
                    source-increment increment;
                }
            }
        }
        version version;                 // see Note 3 at end of code snippet
    }
    query-interval seconds;
    query-last-member-interval seconds;  // default 1 second
    query-response-interval seconds;     // default 10 seconds
    robust-count number;                 // see Note 4 at end of code snippet
    traceoptions {                       // tracing purposes
        file filename <files number> <size size> <world-readable | no-world-readable>;
        flag flag <flag-modifier> <disable>; // flag can be: [leave (IGMPv2 only) | mtrace | packets | query | report]
    }
}


NOTE 1 Use this statement only on IGMP version 2 (IGMPv2) interfaces to which one IGMP host is connected. If more than one IGMP host is connected to a LAN through the same interface, and one host sends a leave group message, the router removes all hosts on the interface from the multicast group. The router loses contact with the hosts that must remain in the multicast group until they send join requests in response to the router's next general group membership query.

NOTE 2 By default, IGMP interfaces accept IGMP messages only from the same subnetwork. The promiscuous-mode statement enables the router to accept IGMP messages from different subnetworks.

NOTE 3 By default, the router runs IGMPv2. If a source address is specified in a multicast group that is configured statically, the IGMP version must be set to IGMPv3. Otherwise, the source will be ignored, only the group will be added, and the join will be treated as an IGMPv2 group join.

When we reconfigure the router from IGMPv1 to IGMPv2, the router continues to use IGMPv1 for up to 6 minutes and then uses IGMPv2.

NOTE 4 The robustness variable provides fine-tuning to allow for expected packet loss on a subnetwork. The value of the robustness variable is used in calculating the following IGMP message intervals:

• Group member interval = (robustness variable x query-interval) + (1 x query-response-interval)
• Other querier present interval = (robustness variable x query-interval) + (0.5 x query-response-interval)
• Last-member query count = robustness variable

By default, the robustness variable is set to 2. Increase this value if you expect a subnetwork to lose packets.
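As a worked example, assuming the Junos defaults of robust-count 2, query-interval 125 seconds, and query-response-interval 10 seconds:

Group member interval = (2 x 125) + (1 x 10) = 260 seconds
Other querier present interval = (2 x 125) + (0.5 x 10) = 255 seconds
Last-member query count = 2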

Configuring Protocol Independent Multicast

This section focuses on configuring PIM sparse mode on the MX Series Ethernet routers and EX Series Ethernet switches with various routing protocols, based on the following scenarios:

• Scenario 1: configure PIM on the MX480 and EX4200 with OSPF.

• Scenario 2: configure PIM on the EX8200 and EX4200 with RIP.

In each scenario, we used an IGMP server as the source of the multicast streams and the VideoLAN (VLC) media player as the IGMP client, which requests to join the multicast group.

Scenario 1: Configuring PIM on the MX480 and EX4200 with OSPF

As illustrated in Figure 6.2, the MX480 and EX4200 are the multicast routers, which interoperate using the OSPF routing protocol. PIM is configured on both routers, and only on the upstream interfaces, to enable multicast routing. The multicast client runs on the IBM blade server, which connects to the access switch, in this example the EX4200.


Figure 6.2 Configuring PIM on MX480 and EX4200 with OSPF

[Figure 6.2: the IGMP streaming source connects to the MX480 (lo0.0 8.8.8.8) on ge-5/2/6; the MX480 and EX4200 (lo0.0 6.6.6.6) are the multicast routers, linked from ge-5/2/5 to ge-0/0/44 and running PIM with OSPF; the EX4200 delivers the stream on VLAN 1119 through a BNT pass-through module to the IGMP multicast client.]

Configuring the MX480

chandra@HE-RE-1-MX480# show ge-5/2/5
unit 0 {
    family inet {
        address 22.11.5.5/24;
    }
}

{master}[edit interfaces]
chandra@HE-RE-1-MX480# show lo0
unit 0 {
    family inet {
        address 8.8.8.8/32;
    }
}

chandra@HE-RE-1-MX480# show protocols igmp
interface all {
    promiscuous-mode;
}
interface ge-5/2/6.0 {
    static {
        group 239.168.1.1 {
            group-count 10;
            source 10.10.10.254;
        }
    }
}
interface ge-5/2/5.0 {
    static {
        group 239.168.1.4;
    }
}

{master}[edit]
chandra@HE-RE-1-MX480# show protocols pim
rp {
    local {
        address 8.8.8.8;
    }
}
interface all {
    mode sparse;
}
interface fxp0.0 {
    disable;
}

chandra@HE-RE-1-MX480# show protocols ospf
area 0.0.0.0 {
    interface ge-5/2/5.0;
    interface lo0.0 {
        passive;
    }
    interface fxp0.0 {
        disable;
    }
}

chandra@HE-RE-1-MX480# show routing-options
router-id 8.8.8.8;

Configuring the EX4200

chandra@EX-175-CSR# show interfaces ge-0/0/2
unit 0 {
    family ethernet-switching;
}

chandra@EX-175-CSR# show interfaces ge-0/0/44
unit 0 {
    family inet {
        address 22.11.5.44/24;
    }
}

chandra@EX-175-CSR# show interfaces vlan
unit 1119 {
    family inet {
        address 10.10.9.100/24;
    }
}

chandra@EX-175-CSR# show protocols igmp
interface me0.0 {
    disable;
}
interface vlan.1119 {
    immediate-leave;
}
interface ge-0/0/6.1119;
interface all;

chandra@EX-175-CSR# show protocols pim
rp {
    static {
        address 8.8.8.8;
    }
}
interface vlan.1119;
interface me0.0 {
    disable;
}
interface all {
    mode sparse;
}

chandra@EX-175-CSR# show interfaces lo0
unit 0 {
    family inet {
        address 6.6.6.6/32;
    }
}

chandra@EX-175-CSR# show protocols ospf
area 0.0.0.0 {
    interface ge-0/0/44.0;
    interface lo0.0 {
        passive;
    }
    interface me0.0 {
        disable;
    }
}

chandra@EX-175-CSR# show routing-options
router-id 6.6.6.6;

Validating the MX480 Configuration

chandra@HE-RE-1-MX480> show route | grep PIM
224.0.0.2/32                 *[PIM/0] 06:21:14
                               MultiRecv
224.0.0.13/32                *[PIM/0] 06:21:14
                               MultiRecv
239.168.1.1,10.10.10.254/32  *[PIM/105] 01:28:54
                               Multicast (IPv4)
239.168.1.2,10.10.10.254/32  *[PIM/105] 01:23:33
                               Multicast (IPv4)
239.168.1.3,10.10.10.254/32  *[PIM/105] 01:23:33
                               Multicast (IPv4)
239.168.1.4,10.10.10.254/32  *[PIM/105] 01:23:33
                               Multicast (IPv4)

chandra@HE-RE-1-MX480> show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface   IP  V  Mode  Option  Uptime    Neighbor addr
ge-5/2/5.0  4   2        HPLG    01:14:14  22.11.5.44

chandra@HE-RE-1-MX480> show pim join
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 239.168.1.1


    Source: *
    RP: 8.8.8.8
    Flags: sparse,rptree,wildcard
    Upstream interface: Local

Group: 239.168.1.1
    Source: 10.10.10.254
    Flags: sparse,spt
    Upstream interface: ge-5/2/6.0

Group: 239.168.1.2
    Source: *
    RP: 8.8.8.8
    Flags: sparse,rptree,wildcard
    Upstream interface: Local

Group: 239.168.1.2
    Source: 10.10.10.254
    Flags: sparse,spt
    Upstream interface: ge-5/2/6.0

chandra@HE-RE-1-MX480> show pim source
Instance: PIM.master Family: INET

Source 8.8.8.8
    Prefix 8.8.8.8/32
    Upstream interface Local
    Upstream neighbor Local

Source 10.10.10.254
    Prefix 10.10.10.0/24
    Upstream interface ge-5/2/6.0
    Upstream neighbor 10.10.10.2

Source 10.10.10.254
    Prefix 10.10.10.0/24
    Upstream interface ge-5/2/6.0
    Upstream neighbor Direct

Validating the EX4200 Configuration

chandra@EX-175-CSR# run show pim join
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 239.168.1.1
    Source: *
    RP: 8.8.8.8
    Flags: sparse,rptree,wildcard
    Upstream interface: ge-0/0/44.0

Group: 239.168.1.1
    Source: 10.10.10.254
    Flags: sparse,spt
    Upstream interface: ge-0/0/44.0

chandra@EX-175-CSR# run show pim neighbors
Instance: PIM.master
B = Bidirectional Capable, G = Generation Identifier,
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority

Interface    IP  V  Mode  Option  Uptime    Neighbor addr
ge-0/0/44.0  4   2        HPLG    01:06:07  22.11.5.5

chandra@EX-175-CSR# run show pim source
Instance: PIM.master Family: INET

Source 8.8.8.8
    Prefix 8.8.8.8/32
    Upstream interface ge-0/0/44.0
    Upstream neighbor 22.11.5.5

Source 10.10.10.254
    Prefix 10.10.10.0/24
    Upstream interface ge-0/0/44.0
    Upstream neighbor 22.11.5.5

Scenario 2: Configuring PIM on the EX8200 and EX4200 with RIP

As illustrated in Figure 6.3, the EX8200 and EX4200 are the multicast routers, with RIP enabled. PIM is configured on both routers, and only on the upstream interfaces, to enable multicast routing. The multicast client runs on IBM PowerVM, which connects to the EX4200 access switch.

Figure 6.3 Configuring PIM on EX8200 and EX4200 with RIP

[Figure 6.3: the IGMP streaming source connects to the EX8200 (lo0.0 9.9.9.9) on ge-1/0/20; the EX8200 and EX4200 (lo0.0 6.6.6.6) are the multicast routers, linked from ge-1/0/26 to ge-0/0/17 and running PIM with RIP; the EX4200 delivers the stream on VLAN 2211 through ge-0/0/2 to the IGMP client on IBM PowerVM via the shared Ethernet adapter (SEA), virtual network, and NICs/host Ethernet adapter (HEA).]

Configuring the EX4200

chandra@EX-175-CSR# show interfaces ge-0/0/2
unit 0 {
    family ethernet-switching;
}

chandra@EX-175-CSR# show interfaces ge-0/0/17
unit 0 {
    family inet {
        address 22.11.2.17/24;
    }
}

chandra@EX-175-CSR# show interfaces vlan
unit 2211 {
    family inet {
        address 10.10.9.200/24;
    }
}

chandra@EX-175-CSR# show protocols igmp
interface me0.0 {
    disable;
}
interface vlan.2211 {
    immediate-leave;
}
interface ge-0/0/2.2211;
interface all;

chandra@EX-175-CSR# show protocols pim
rp {
    static {
        address 9.9.9.9;
    }
}
interface vlan.2211;
interface me0.0 {
    disable;
}
interface all {
    mode sparse;
}

chandra@EX-175-CSR# show interfaces lo0
unit 0 {
    family inet {
        address 6.6.6.6/32;
    }
}

chandra@EX-175-CSR# show protocols rip
send broadcast;
receive both;
group jweb-rip {
    export jweb-policy-rip-direct;
    neighbor ge-0/0/2.0;
    neighbor lo0.0;
    neighbor vlan.2211;
}

chandra@EX-175-CSR# show policy-options
policy-statement jweb-policy-rip-direct {
    term 1 {
        from {
            protocol [ direct rip ];
            interface [ ge-0/0/2.0 ge-0/0/17.0 ];
        }
        then accept;
    }
    term 2 {
        then accept;
    }
}

Configuring the EX8200

chandra@SPLAB-8200-1-re0# show protocols rip
send broadcast;
receive both;
group jweb-rip {
    export jweb-policy-rip-direct;
    neighbor ge-1/0/26.0;
    neighbor lo0.0;
}

chandra@SPLAB-8200-1-re0# show policy-options
policy-statement jweb-policy-rip-direct {
    term 1 {
        from {
            protocol [ direct rip ];
            interface [ ge-1/0/26.0 ];
        }
        then accept;
    }
    term 2 {
        then accept;
    }
}

IGMP Snooping

An access switch usually learns unicast MAC addresses by checking the source address field of the frames it receives. However, a multicast MAC address can never be the source address of a packet. As a result, the switch floods multicast traffic on the VLAN, consuming significant amounts of bandwidth.


IGMP snooping regulates multicast traffic on a VLAN to avoid flooding. When IGMP snooping is enabled, the switch intercepts IGMP packets and uses their contents to build a multicast cache table. The cache table is a database of multicast groups and their corresponding member ports, and it is used to regulate multicast traffic on the VLAN.

When the switch receives multicast packets, it uses the cache table to selectively forward the packets only to the ports that are members of the destination multicast group.

As illustrated in Figure 6.4, the access switch (an EX4200) connects four hosts and segments their data traffic with two VLANs: host 1 and host 2 belong to VLAN 1, and host 3 and host 4 belong to VLAN 2. Hosts in the same VLAN might make different choices about whether to subscribe to or unsubscribe from a multicast group. For instance, host 1 has subscribed to multicast group 1, while host 2 is not interested in multicast group 1 traffic; host 3 has subscribed to multicast group 2, while host 4 is not interested in multicast group 2 traffic. The EX4200 IGMP snooping feature accommodates this so that host 1 receives multicast group 1 traffic and host 2 does not, and host 3 receives multicast group 2 traffic and host 4 does not.

Figure 6.4 IGMP Traffic Flow with IGMP Snooping Enabled

Hosts can join multicast groups in two ways:

• By sending an unsolicited IGMP join message to a multicast router that specifies the IP multicast group that the host is attempting to join.

• By sending an IGMP join message in response to a general query from a multicast router.

A multicast router continues to forward multicast traffic to a VLAN if at least one host on that VLAN responds to the periodic general IGMP queries. To leave a multicast group, a host can either not respond to the periodic general IGMP queries, which results in a silent leave, or send a group-specific IGMPv2 leave message.

[Figure 6.4: multicast groups 1 and 2 arrive at the EX4200 on a trunk; with IGMP snooping, group 1 traffic is forwarded only to host 1 on VLAN 1 and group 2 traffic only to host 3 on VLAN 2, while hosts 2 and 4 receive neither.]


IGMP Snooping in EX Series Ethernet Switches

In the EX Series Ethernet switches, IGMP snooping works with both Layer 2 interfaces and routed VLAN interfaces (RVIs) to regulate multicast traffic in a switched network. Switches use Layer 2 interfaces to send traffic to hosts that are part of the same broadcast domain, and use an RVI to route traffic from one broadcast domain to another.

When an EX Series switch receives a multicast packet, the Packet Forwarding Engines in the switch perform an IP multicast lookup on the packet to determine how to forward it to the local ports. From the results of the IP multicast lookup, each Packet Forwarding Engine extracts a list of Layer 3 interfaces (which can include VLAN interfaces) that have ports local to that Packet Forwarding Engine. If an RVI is part of this list, the switch provides a bridge multicast group ID for each RVI to the Packet Forwarding Engine.

Figure 6.5 shows how multicast traffic is forwarded on a multilayer switch. The multicast traffic arrives through the xe-0/1/0.0 interface. A multicast group is formed by the Layer 3 interfaces ge-0/0/2.0, vlan.0, and vlan.1. The ge-2/0/0.0 interface is a common trunk interface that belongs to both vlan.0 and vlan.1. The letter R next to an interface name in Figure 6.5 indicates that a multicast receiver host is associated with that interface.

Figure 6.5 IGMP Traffic Flow with Routed VLAN Interfaces

[Figure 6.5: multicast traffic enters the EX4200 multilayer switch on xe-0/1/0.0 and is forwarded to the receivers on ge-0/0/2.0 and on the VLAN 0 and VLAN 1 member ports, including the common trunk ge-2/0/0.0 that belongs to both VLANs.]


IGMP Snooping Configuration Command

The IGMP snooping feature is available on both the MX Series Ethernet routers and the EX Series Ethernet switches. However, the configuration command hierarchy differs between the two.

On the EX Series Ethernet switches, the configuration is at the [edit protocols] hierarchy level in the Junos CLI, and the detailed configuration stanza is as follows:

igmp-snooping {
    vlan (vlan-id | vlan-number) {
        disable {
            interface interface-name;
        }
        immediate-leave;
        interface interface-name {
            multicast-router-interface;
            static {
                group ip-address;
            }
        }
        query-interval seconds;
        query-last-member-interval seconds;
        query-response-interval seconds;
        robust-count number;
    }
}

NOTE By default, IGMP snooping is not enabled. Statements configured at the VLAN level apply only to that particular VLAN.
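As a minimal sketch of the EX Series syntax, assuming a VLAN named Data01 as used elsewhere in this book, IGMP snooping on one VLAN could be enabled with:

protocols {
    igmp-snooping {
        vlan Data01 {
            immediate-leave;    /* optional: fast leave for single-host ports */
        }
    }
}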

With the MX Series Ethernet routers, a Layer 2 broadcast domain is configured in the Junos CLI as a bridge domain, so IGMP snooping is configured at the [bridge-domains] configuration hierarchy. The detailed configuration stanza is as follows:

multicast-snooping-options {
    flood-groups [ ip-addresses ];
    forwarding-cache {
        threshold suppress value <reuse value>;
    }
    graceful-restart <restart-duration seconds>;
    ignore-stp-topology-change;
}


Configuring IGMP Snooping

This section focuses on configuring IGMP snooping on the MX Series Ethernet routers and EX Series Ethernet switches with various IGMP client platforms, based on the following scenarios:

• Scenario 1: MX480, EX Series, and IBM BladeCenter.
• Scenario 2: MX480 and IBM x3500 server.

In each scenario, we used an IGMP server as the source of the multicast streams and the VideoLAN (VLC) media player as the IGMP client, which requests to join the multicast group.

Scenario 1: MX480, EX Series and IBM Blade Center

As illustrated in Figure 6.6, the IGMP multicast source generates the IGMP Group 2 flow: from the MX480 to the EX8200, and then on to the IGMP client, which runs on the IBM BladeCenter.

Two interfaces (ge-5/2/3 and ge-5/2/6) in the MX480 are configured as Layer 2 switch ports by using a bridge domain, which is associated with VLAN 1117. The ge-5/2/6 interface is configured as the multicast-router interface and connects to the multicast source; interface ge-5/2/3 is a Layer 2 interface with a static join for multicast group 239.168.1.3. This configuration allows the interface to receive and then forward the multicast packets to their target.

Figure 6.6 MX480, EX8200, EX4200 and IBM Blade Center – IGMP Traffic Flow with IGMP Snooping



Configuring the MX480

chandra@HE-RE-1-MX480> show configuration bridge-domains 1117
domain-type bridge;
vlan-id 1117;
interface ge-5/2/3.0;
interface ge-5/2/6.0;
protocols {
    igmp-snooping {
        interface ge-5/2/3.0 {
            static {
                group 239.168.1.3;
            }
        }
        interface ge-5/2/6.0 {
            multicast-router-interface;
        }
    }
}

Configuring the EX4200

{master:0}
chandra@EX-175-CSR> show configuration protocols igmp-snooping
vlan IGMP {
    interface ge-0/0/2.0 {
        static {
            group 239.168.1.1;
        }
    }
    interface ge-0/0/17.0 {
        static {
            group 239.168.1.1;
        }
        multicast-router-interface;
    }
}

chandra@EX-175-CSR> show configuration vlans 2211
vlan-id 2211;
interface {
    ge-0/0/2.0;
    ge-0/0/17.0;
}

Configuring the EX8200

chandra@SPLAB-8200-1-re0> show configuration protocols igmp-snooping
vlan 1117 {
    interface ge-1/0/18.0 {
        static {
            group 239.168.1.3;
        }
        multicast-router-interface;
    }
    interface ge-1/0/22.0 {
        static {
            group 239.168.1.3;
        }
    }
}


Validating IGMP Snooping

laka-bay1#show ip igmp snooping group
Vlan    Group          Version    Port List
--------------------------------------------
2211    239.168.1.1    v2         Gi0/17

laka-bay1#show ip igmp snooping mrouter
Vlan    ports
----    -----
2211    Gi0/19(dynamic)

laka-bay1#show ip igmp snooping querier
Vlan    IP Address    IGMP Version    Port
-----------------------------------------------
2211    11.22.3.24    v2              Gi0/19

chandra@HE-RE-1-MX480> show igmp snooping statistics
Bridge: bc-igmp
IGMP Message type       Received    Sent    Rx errors
. . .
Membership Query               0       9            0
V1 Membership Report           0       0            0
DVMRP                          0       0            0
. . .
Group Leave                    1       4            0
. . .
V3 Membership Report          43      56            0
. . .

chandra@HE-RE-1-MX480> show igmp snooping membership detail
Instance: default-switch

Bridge-Domain: bc-igmp
Learning-Domain: default
Interface: ge-5/2/6.0
Interface: ge-5/2/5.0
Group: 239.168.1.2
    Group mode: Exclude
    Source: 0.0.0.0
    Last reported by: 10.10.10.1
    Group timeout: 76  Type: Dynamic

chandra@EX-175-CSR> show igmp-snooping membership
VLAN: IGMP
239.168.1.2 *
    Interfaces: ge-0/0/2.0, ge-0/0/44.0

chandra@EX-175-CSR> show igmp-snooping membership detail
VLAN: IGMP  Tag: 2211 (Index: 10)
    Router interfaces:
        ge-0/0/44.0 static  Uptime: 00:31:59
    Group: 239.168.1.2
        Receiver count: 1, Flags: <V2-hosts Static>
        ge-0/0/2.0   Uptime: 00:39:34
        ge-0/0/44.0  Uptime: 00:39:34

chandra@EX-175-CSR> show igmp-snooping statistics
Bad length: 0  Bad checksum: 0  Invalid interface: 0
Not local: 0  Receive unknown: 0  Timed out: 2
IGMP Type    Received    Transmitted    Receive Errors
Queries:          156             12                 0
Reports:          121            121                 0
Leaves:             2              2                 0
Other:              0              0                 0

Scenario 2: MX480 and IBM x3500 Server

In this scenario, the IGMP group traffic flow is generated from the IGMP source and sent to the MX480; it then continues to the client, which runs on the IBM x3500 series platform.

As shown in Figure 6.7, two interfaces in the MX480 (ge-5/2/4 and ge-5/2/6) are configured as Layer 2 switch ports by using a bridge domain, which is associated with VLAN 1118. Interface ge-5/2/6, which is configured with the multicast-router-interface statement, connects to the multicast source; interface ge-5/2/4 is a Layer 2 interface with a static join for multicast group 239.168.1.4 and is set up to receive and forward multicast packets to their respective servers.

Figure 6.7 MX480 and IBM x3500 IGMP Traffic Flow with IGMP Snooping

chandra@HE-RE-1-MX480> show configuration bridge-domains 1118
domain-type bridge;
vlan-id 1118;
interface ge-5/2/4.0;
interface ge-5/2/6.0;
protocols {
    igmp-snooping {
        interface ge-5/2/6.0 {
            multicast-router-interface;
        }
        interface ge-5/2/4.0 {
            static {
                group 239.168.1.4;
            }
        }
    }
}



Chapter 7

Understanding Network CoS and Latency

AN APPLICATION'S PERFORMANCE directly relies on network performance. Network performance typically refers to bandwidth because bandwidth is the primary measure of computer network speed and represents the overall capacity of a connection. Greater capacity typically generates improved performance. However, network bandwidth is not the only factor that contributes to network performance.

The performance of an application relies on different network characteristics. Some real-time applications, such as voice and video, are extremely sensitive to latency, jitter, and packet loss, while some non real-time applications, such as web applications (HTTP), email, File Transfer Protocol (FTP), and Telnet, do not require any specific reliability from the network, and a "best effort" policy works well in transmitting these traffic types.

Class of Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Configuring CoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115


In today's converged networks, including data/voice and data/storage converged networks and cloud-ready data centers with server virtualization, different types of applications are transmitted over the same network. To ensure application performance for all types of applications, additional provisions are required within the network to minimize latency and packet loss.

This chapter covers two techniques for improving data center network performance:

• Using class of service (CoS) to manage packet loss.

• Considering latency characteristics when designing networks using Juniper Networks data center network products.

Class of Service

Typically, when a network experiences congestion and delay, some packets will be dropped. However, as an aid in preventing dropped packets, Junos CoS allows an administrator to divide traffic into classes and offer various levels of throughput and packet loss when congestion and delay occur. This allows packet loss to occur only when specific rules are configured on the system.

In designing CoS applications, we must consider service needs, and we must thoroughly plan and design the CoS configuration to ensure consistency across all routers in a CoS domain. We must also consider all the routers and other networking equipment in the CoS domain to ensure interoperability among different types of equipment. However, before proceeding with implementing CoS in Junos, we should understand the CoS components and how a packet flows through the CoS process.

Junos CoS Process

Figure 7.1 shows a typical CoS process: the general flow of a packet as it passes through the CoS components in a QoS-enabled router.

Figure 7.1 CoS Processing Model



The following is a list of the key steps in the QoS process, together with the corresponding configuration commands for each step.

1. Classifying: This step examines packet header markings (for example, EXP bits, IEEE 802.1p bits, or DSCP bits) to separate incoming traffic.

One or more classifiers must be assigned to a physical or logical interface to separate the traffic flows. The classifier assignment is configured at the [edit class-of-service interfaces] hierarchy level in the Junos CLI.

In addition, the classifier statement further defines how to assign the packet to a forwarding class with a loss priority. The configuration is at the [edit class-of-service classifiers] hierarchy level in the Junos CLI. For details concerning packet loss priority and forwarding class, see Defining Loss Priorities and Defining Forwarding Classes on page 109 of this handbook.

Furthermore, each forwarding class can be assigned to a queue. The configuration is at the [edit class-of-service forwarding-classes] hierarchy level.

2. Policing: This step meters traffic. It changes the forwarding class and loss priority if a traffic flow exceeds its predefined service level.

3. Scheduling: This step manages all attributes of queuing, such as transmission rate, buffer depth, priority, and Random Early Detection (RED) profile.

A scheduler map is assigned to the physical or logical interface. The configuration is at the [edit class-of-service interfaces] hierarchy level in the Junos CLI.

In addition, the scheduler statement defines how traffic is treated in the output queue, for example, the transmit rate, buffer size, priority, and drop profile. The configuration is at the [edit class-of-service schedulers] hierarchy level.

Finally, the scheduler-maps statement assigns a scheduler to each forwarding class. The configuration is at the [edit class-of-service scheduler-maps] hierarchy level.

4. Packet Dropping: This step manages drop profiles to avoid TCP synchronization and to protect high-priority traffic from being dropped.

The drop profile defines how aggressively to drop packets that are using a particular scheduler. The configuration is at the [edit class-of-service drop-profiles] hierarchy level (a hedged sketch follows this list).

5. Rewrite Marker: This step rewrites the packet CoS fields (for example, EXP or DSCP bits) according to the forwarding class and loss priority of the packet.

The rewrite rule takes effect as the packet leaves a logical interface that has a rewrite rule. The configuration is at the [edit class-of-service rewrite-rules] hierarchy level in the Junos CLI.
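To make the packet dropping step concrete, here is a hedged drop-profile sketch; the profile name and fill levels are illustrative. Packets begin to be dropped once the queue is 75% full and are always dropped beyond 95%:

class-of-service {
    drop-profiles {
        BE-DROP {
            fill-level 75 drop-probability 50; //start dropping at 75% queue fill
            fill-level 95 drop-probability 100; //drop everything beyond 95% fill
        }
    }
}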


JUNOS CoS Implementation Best Practices

Best practices include the following:

• Selecting the appropriate classifier.

• Using code-point aliases.

• Defining loss priorities.

• Defining forwarding classes.

• Defining comprehensive schedulers.

• Defining policers for traffic classes.

Selecting the Appropriate Classifier

Selecting the appropriate classifier is key in distinguishing traffic. Table 7.1 lists classifier comparisons between the Juniper Networks MX Series and EX Series.

Table 7.1 Packet Classifiers Comparison Between MX Series and EX Series

Packet Classifiers   MX960 & MX480 Series   EX8200 & EX4200 Series   Function
dscp                 Yes                    Yes                      Handles incoming IPv4 packets.
dscp-ipv6            Yes                    –                        Handles incoming IPv6 packets.
exp                  Yes                    –                        Handles MPLS packets using Layer 2 headers.
ieee-802.1           Yes                    Yes                      Handles Layer 2 CoS.
ieee-802.1ad         Yes                    –                        Handles the IEEE 802.1ad (DEI) classifier.
inet-precedence      Yes                    Yes                      Handles incoming IPv4 packets; IP precedence mapping requires only the upper three bits of the DSCP field.

Using Code-Point Aliases

Using code-point aliases requires an operator to assign a name to a pattern of code-point bits. We can use this name instead of the bit pattern when configuring other CoS components, such as classifiers, drop-profile maps, and rewrite rules, for example: ieee-802.1 { be 000; af12 101; af11 100; be1 001; ef 010; }.
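In configuration form, the same aliases sit under the code-point-aliases stanza; this mapping is reused by the validation scenario later in this chapter:

class-of-service {
    code-point-aliases {
        ieee-802.1 {
            be 000;
            af12 101;
            af11 100;
            be1 001;
            ef 010;
        }
    }
}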

Defining Loss Priorities

Loss priority affects the scheduling of a packet without affecting the packet's relative ordering. An administrator can use the packet loss priority (PLP) bit as part of a congestion control strategy and can use the loss priority setting to identify packets that have experienced congestion. Typically, an administrator will mark packets exceeding a specified service level with a high loss priority and set the loss priority by configuring a classifier or a policer. The loss priority is used later in the workflow to select one of the drop profiles used by random early detection (RED).
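As a hedged illustration, a classifier can set the PLP at the same time it assigns the forwarding class; the classifier name and code points below are illustrative:

class-of-service {
    classifiers {
        dscp DSCP-CLASSIFIER {
            forwarding-class best-effort {
                loss-priority low code-points 001010; //in-contract traffic
                loss-priority high code-points 000000; //out-of-contract traffic, dropped first by RED
            }
        }
    }
}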


Defining Forwarding Classes

The forwarding class affects the forwarding, scheduling, and marking policies applied to packets as they move through a router. Table 7.2 summarizes the mapping between queues and different forwarding classes for both the MX and EX Series.

Table 7.2 Forwarding Classes for MX480, EX4200 and EX8200 Series

Forwarding Class       MX Series Queue    EX Series Queue
Voice (EF)             Q3                 Q5
Video (AF)             Q2                 Q4
Data (BE)              Q0                 Q0
Network Control (NC)   –                  Q7

The forwarding class plus the loss priority defines the per-hop behavior. If the use case requires associating the forwarding classes with next hops, then the forwarding-policy options are available only on the MX Series.

Defining Comprehensive Schedulers

An individual router interface has multiple queues assigned to store packets. The router determines which queue to service based on a particular method of scheduling. This process often involves a determination of which type of packet should be transmitted before another type of packet. Junos schedulers allow an administrator to define the priority, bandwidth, delay buffer size, rate control status, and RED drop profiles to be applied to a particular queue for packet transmission.
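The following hedged sketch shows a scheduler and its scheduler-map binding; names and values are illustrative, and the validation scenario later in this chapter shows the configuration actually tested:

class-of-service {
    schedulers {
        VOICE-SCHED {
            transmit-rate percent 30; //guaranteed share of interface bandwidth
            buffer-size percent 10; //delay buffer allotted to the queue
            priority strict-high; //always serviced ahead of other queues
        }
    }
    scheduler-maps {
        EXAMPLE-MAP {
            forwarding-class expedited-forwarding scheduler VOICE-SCHED;
        }
    }
}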

Defining Policers for Traffic Classes

Policers allow an administrator to limit traffic of a certain class to a specified bandwidth and burst size. Packets exceeding the policer limits can be discarded or can be assigned to a different forwarding class, a different loss priority, or both. In Junos, policers are defined within firewall filters that can be associated with input or output interfaces.
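A hedged policer sketch follows; the name and limits are illustrative. Rather than discarding excess traffic outright, this policer re-marks it with a high loss priority:

firewall {
    policer LIMIT-STREAMING {
        if-exceeding {
            bandwidth-limit 10m; //sustained rate cap
            burst-size-limit 64k; //permitted burst above the rate
        }
        then loss-priority high; //mark, rather than drop, the excess
    }
}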

Table 7.3 compares the availability of the CoS configuration statements on the Juniper Networks MX480, EX8200, and EX4200.


Table 7.3 Comparison of CoS Configuration Statements

Field                      Description                                                    MX480 Series    EX8200 Series    EX4200 Series
classifiers                Classify incoming packets based on code point value            Yes             Yes              Yes
code-point-aliases         Mapping of code point aliases to bit strings                   Yes             Yes              Yes
drop-profiles              Random Early Drop (RED) data point map                         Yes             Yes              Yes
fabric                     Define CoS parameters of switch fabric                         Yes             Yes              –
forwarding-classes         One or more mappings of forwarding class to queue number       Yes             Yes              Yes
forwarding-policy          Class-of-service forwarding policy                             Yes             –                –
fragmentation-maps         Mapping of forwarding class to fragmentation options           Yes             –                –
host-outbound-traffic      Classify and mark host traffic to forwarding engine            Yes             –                –
interfaces                 Apply class-of-service options to interfaces                   Yes             Yes              –
multi-destination          Multicast class of service                                     –               Yes              –
restricted-queues          Map forwarding classes to restricted queues                    Yes             –                –
rewrite-rules              Write code point value of outgoing packets                     Yes             Yes              Yes
routing-instances          Apply CoS options to routing instances with VRF table label    Yes             –                –
scheduler-maps             Mapping of forwarding classes to packet schedulers             Yes             Yes              Yes
schedulers                 Packet schedulers                                              Yes             Yes              Yes
traffic-control-profiles   Traffic shaping and scheduling profiles                        Yes             –                –
translation-table          Translation table                                              Yes             –                –
tri-color                  Enable tricolor marking                                        Yes             –                –


Configuring CoS

In this section, we demonstrate a sample scenario for configuring CoS on the EX4200. Two blade servers connect to two different interfaces and simulate production traffic by issuing a ping command; the test device (N2X) generates significant network traffic, classified as background traffic, through the EX4200 to one of the blade servers. This background traffic contends with the production traffic, causing packet loss in the production traffic. Because the EX4200 is central to network traffic aggregation in this scenario, it is reasonable to apply a CoS packet loss policy on the EX4200 to ensure that no packet loss occurs in the production traffic.

NOTE The configuration scenario and snippet are also applicable to MX Series Ethernet routers.

Configuration Description

As illustrated in Figure 7.2, the EX4200 is the DUT, which interconnects the IBM blade servers and the Agilent N2X traffic generator.

Figure 7.2 EX4200 CoS Validation Scenario

The test includes the following steps:

1. The N2X generates network traffic as background traffic onto the EX4200 through two ingress GigE ports (ge-0/0/24 and ge-0/0/25).

2. The EX4200 forwards the background traffic to a single egress GigE port (ge-0/0/9).

3. At the same time, the blade server uses the ping command to generate production traffic onto the EX4200 through a different interface (ge-0/0/7).

4. The EX4200 also forwards the production traffic to the same egress port (ge-0/0/9). From a packet loss policy perspective, the production traffic is low loss priority, while the background traffic is high.



To verify the status of packets on the ingress/egress ports, we use the show interfaces queue <ge-0/x/y> command to confirm that only high loss priority packets from the BACKGROUND forwarding class are being tail-dropped.

NOTE The configuration used in this setup was sufficient to confirm CoS functionality in its simplest form. Other detailed configuration options are available and can be enabled as needed. Refer to the CoS command hierarchy levels in the Junos Software CLI User Guide at www.juniper.net/techpubs/software/junos/junos95/swref-hierarchy/hierarchy-summary-configuration-statement-class-of-service.html#hierarchy-summary-configuration-statement-class-of-service.

The following steps summarize the setup configuration process.

1. Configure the setup as illustrated in Figure 7.2 and by reviewing the CoS configuration code snippet.

2. Create some simple flows on the N2X to send from each port to port ge-0/0/9.

3. Send the traffic at 50% from each port to 11.22.1.9 (in the absence of two ports, one port could be used to send 100% traffic).

4. Configure the DUT to perform CoS-based processing on ingress traffic: traffic from source 11.22.1.7 arriving over interface ge-0/0/7 is classified as high class with a low probability of being dropped, while traffic arriving over interfaces ge-0/0/24 and ge-0/0/25 is given a high probability of being dropped.

5. Start the ping from 11.22.1.7 to 11.22.1.9.

6. Tune the line-rate parameter of the N2X traffic arriving at ge-0/0/9.

7. Observe the egress interface statistics and ingress port statistics to confirm that the ping traffic is tagged with the higher forwarding class and does not get dropped, while traffic coming from ports ge-0/0/24 and ge-0/0/25 gets dropped on ingress (sample verification commands follow this list).
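The per-queue counters referenced in step 7 can be pulled with the following operational commands; the first shows the egress queue and tail-drop counters, the second an ingress port carrying the background traffic:

chandra@EX> show interfaces queue ge-0/0/9
chandra@EX> show interfaces queue ge-0/0/24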

CoS Configuration Snippet

chandra@EX> show configuration class-of-service
classifiers {
    ieee-802.1 DOTP-CLASSIFIER { //define the type of classifier
        forwarding-class CONVERSATIONAL { //assign expedited forwarding to the CONVERSATIONAL class
            loss-priority low code-points ef;
        }
        forwarding-class INTERACTIVE {
            loss-priority low code-points af12;
        }
        forwarding-class STREAMING {
            loss-priority low code-points af11;
        }
        forwarding-class BACKGROUND {
            loss-priority high code-points be;
        }
    }
}


code-point-aliases {
    ieee-802.1 { //associate the code point aliases
        be 000;
        af12 101;
        af11 100;
        be1 001;
        ef 010;
    }
}
forwarding-classes { //assign the four queues to the forwarding classes
    queue 0 BACKGROUND;
    queue 3 CONVERSATIONAL;
    queue 2 INTERACTIVE;
    queue 1 STREAMING;
}
interfaces {
    ge-0/0/9 { //associate the scheduler map, rewrite rules and classifier with the interface
        scheduler-map SCHED-MAP;
        unit 0 {
            classifiers {
                ieee-802.1 DOTP-CLASSIFIER;
            }
            rewrite-rules {
                ieee-802.1 DOTP-RW;
            }
        }
    }
}
rewrite-rules { //define the rewrite rules for each of the forwarding classes; set the code points to be used in each case
    ieee-802.1 DOTP-RW {
        forwarding-class CONVERSATIONAL {
            loss-priority low code-point ef;
        }
        forwarding-class INTERACTIVE {
            loss-priority low code-point af12;
        }
        forwarding-class STREAMING {
            loss-priority low code-point af11;
        }
        forwarding-class BACKGROUND {
            loss-priority high code-point be;
        }
    }
}
scheduler-maps { //define the scheduler map entries for each forwarding class
    SCHED-MAP {
        forwarding-class BACKGROUND scheduler BACK-SCHED;
        forwarding-class CONVERSATIONAL scheduler CONV-SCHED;
        forwarding-class INTERACTIVE scheduler INTERACT-SCHED;
        forwarding-class STREAMING scheduler STREAMING-SCHED;
    }
}
schedulers { //specify the scheduler properties for each forwarding class; priorities assigned here define how the scheduler handles the traffic
    CONV-SCHED {
        transmit-rate remainder;
        buffer-size percent 80;
        priority strict-high;
    }


    INTERACT-SCHED;
    STREAMING-SCHED {
        transmit-rate percent 20;
    }
    BACK-SCHED {
        transmit-rate remainder;
        priority low;
    }
}

chandra@EX> show configuration firewall
family ethernet-switching { //configure a multifield classifier for better granularity; the CONVERSATIONAL class gets higher priority than BACKGROUND
    filter HIGH {
        term 1 {
            from {
                source-address {
                    11.22.1.7/32;
                }
            }
            then {
                accept;
                forwarding-class CONVERSATIONAL;
                loss-priority low;
            }
        }
        term 2 {
            then {
                accept;
                count all;
            }
        }
    }
    filter LOW {
        term 1 {
            from {
                source-address {
                    11.22.1.100/32;
                    11.22.1.101/32;
                }
            }
            then {
                accept;
                forwarding-class BACKGROUND;
                loss-priority high;
            }
        }
        term 2 {
            then {
                accept;
                count all;
            }
        }
    }
}

chandra@EX> show configuration interfaces ge-0/0/24
unit 0 {
    family ethernet-switching { //assign the firewall filter to the interface
        port-mode access;
        filter {
            input LOW;
            output LOW;
        }
    }
}

chandra@EX> show configuration interfaces ge-0/0/25
unit 0 {


    family ethernet-switching {
        port-mode access;
        filter {
            input LOW;
            output LOW;
        }
    }
}

chandra@EX> show configuration interfaces ge-0/0/7
unit 0 {
    family ethernet-switching {
        port-mode access;
        filter {
            input HIGH;
            output HIGH;
        }
    }
}

chandra@EX> show configuration interfaces ge-0/0/9
unit 0 {
    family ethernet-switching {
        port-mode access;
    }
}

Latency

Network latency is critical to business. Today, competitiveness in the global financial markets is measured in microseconds. High-performance computing and financial trading demand an ultra-low-latency network infrastructure. Voice and video traffic is time-sensitive and typically requires low latency.

Because network latency in a TCP/IP network can be measured at different layers, such as Layer 2/3, and for different types of traffic, such as unicast or multicast, it often refers to one of the following: Layer 2 unicast, Layer 3 unicast, Layer 2 multicast or Layer 3 multicast latency.

Often, latency is measured at various frame sizes: 64, 128, 256, 512, 1024, 1280 and 1518 bytes for Ethernet.

The simulated traffic throughput is a critical factor in the accuracy of test results. For a 1 Gbps full-duplex interface, the transmitting (TX) and receiving (RX) throughput of the simulated traffic should approach 1 Gbps, and the TX/RX throughput ratio must be at least 99%.

Measuring network latency often requires sophisticated test appliances, such as those from Agilent (N2X), Spirent Communications, and IXIA.

Network World validated Juniper Networks EX4200 performance, including Layer 2 unicast, Layer 3 unicast, Layer 2 multicast and Layer 3 multicast latency. For detailed test results, please refer to www.networkworld.com/reviews/2008/071408-test-juniper-switch.html.


In this section, we discuss the concept of measuring device latency and demonstrate the sample configuration for measuring Layer 2 and Layer 3 unicast latency on the MX480.

Measuring Latency

IETF standard RFC 2544 defines performance test criteria for measuring latency of the DUT. As shown in Figure 7.3, the ideal way to test DUT latency is to use a tester with both transmitting and receiving ports. The tester connects to the DUT with two connections: the transmitting port of the tester connects to the receiving port of the DUT, and the sending port of the DUT connects to the receiving port of the tester. The same setup also applies to measuring the latency of multiple DUTs, as shown in Figure 7.3.

Figure 7.3 Measuring Latency

Figure 7.4 illustrates two latency test scenarios. We measured the latency of the MX480 in one scenario; we measured the end-to-end latency of the MX480 and Cisco's ESM in the other scenario. We used Agilent's N2X, with transmitting port ge-2/3/1 and receiving port ge-3/4/4, as the tester.

Figure 7.4 Latency Setup



In the first test scenario, the N2X-to-MX480 connections, represented by the dashed line, are made from the sending port (ge-2/3/1) of the N2X to the receiving port (ge-5/3/5) of the MX480 and from the sending port (ge-5/3/6) of the MX480 back to the receiving port (ge-3/4/4) of the tester.

In the second test scenario, the connection among the N2X, MX480 and Cisco's ESM (represented by the solid line in Figure 7.4) occurs in the following order:

• Connection from the sending port of the N2X to the receiving port of the MX480.

• Connection from the sending port of the MX480 to the receiving port (Port 18) of Cisco's ESM.

• Connection from the sending port (Port 20) of Cisco's ESM to the receiving port of the N2X.

Configuration on Measuring Layer 2 Latency

To measure the Layer 2 latency, all participating ports on the DUTs must be configured with the same VLAN, that is, the same Layer 2 broadcast domain. Here is a sample Layer 2 configuration on the MX480:

ge-5/3/5 { //Define a VLAN-tagged interface and Ethernet-bridge encapsulation
    vlan-tagging;
    encapsulation ethernet-bridge;
    unit 1122 { //Define a logical unit, vlan-id and a vlan-bridge type encapsulation
        encapsulation vlan-bridge;
        vlan-id 1122;
    }
}
bc-ext { //Define a bridge domain and assign the VLAN ID and interfaces
    domain-type bridge;
    vlan-id 1122;
    interface ge-5/3/5.1122;
    interface ge-5/3/7.1122;
}


Configuration on Measuring Layer 3 Latency

To measure the Layer 3 latency, each participating DUT port must share an IP subnet with the tester port it connects to, so that the DUT routes the test traffic between the two subnets.

Configuring the MX480

ge-5/3/5 {
    unit 0 {
        family inet {
            address 11.22.1.1/24;
        }
    }
}
ge-5/3/7 {
    unit 0 {
        family inet {
            address 11.22.2.1/24;
        }
    }
}


Chapter 8

Configuring High Availability

IMPLEMENTING HIGH AVAILABILITY (HA) is critical when designing a network.

Operators can implement high availability using one or more of

the several methods described in Chapter 3: Implementation Overview.

Routing Engine Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Graceful Routing Engine Switchover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Virtual Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

Nonstop Active Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Nonstop Bridging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

Graceful Restart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

In-Service Software Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

Virtual Router Redundancy Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

Link Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Redundant Trunk Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


This chapter covers the following software-based high availability features that operators can enable in the data center:

• Routing Engine Redundancy

• Graceful Routing Engine Switchover (GRES)

• Virtual Chassis

• Nonstop Routing (NSR)

• Nonstop Bridging (NSB)

• Graceful Restart (GR)

• In-Service Software Upgrade (ISSU)

• Virtual Router Redundancy Protocol (VRRP)

• Link Aggregation (LAG)

• Redundant Trunk Group (RTG)

Enabling one feature or a combination of the features listed increases the reliability of the network.

This chapter first introduces Junos OS based features such as Routing Engine redundancy, GRES, GR, NSR, NSB and ISSU that are critical to implementing high availability in the data center. Reliability features such as VRRP, RTG and LAG are implemented over these key high availability elements.

Routing Engine Redundancy

Routing Engine redundancy occurs when two physical Routing Engines reside on the same device. One of the Routing Engines functions as the primary, while the other serves as a backup. When the primary Routing Engine fails, the backup Routing Engine automatically becomes the primary Routing Engine, thus increasing the availability of the device. (Routing Engine redundancy, with respect to the scope of this handbook, is available only on the MX Series and EX8200 platforms.)

Any one of the following failures can trigger a switchover from the primary to the backup Routing Engine:

• Hardware failure – This can be a hard disk error or a loss of power on the primary Routing Engine.

• Software failure – This can be a kernel crash or a CPU lock. These failures cause a loss of keepalives from the primary to the backup Routing Engine.

• Software process failure – Specific software processes that fail at least four times within the span of 30 seconds on the primary Routing Engine.

NOTE To revert to the original primary after failure recovery, operators must perform a manual switchover.

Configuration Hierarchy for Routing Engine Redundancy

The following redundancy statements define the Routing Engine roles and failover mechanism and are available at the [edit chassis] hierarchy:


redundancy {
    graceful-switchover;
    keepalive-time seconds;
    routing-engine slot-number (master | backup | disabled);
}

1. Configuring the automatic failover from the active to the backup Routing Engine without any interruption to packet forwarding can be done at the [edit chassis redundancy] hierarchy. The triggers are either the detection of a hard disk error or a loss of keepalives from the primary Routing Engine:

[edit chassis redundancy]
failover on-disk-failure;
failover on-loss-of-keepalives;

2. Specify the threshold time interval for loss of keepalives, after which the backup Routing Engine takes over from the primary Routing Engine. The failover occurs by default after 300 seconds when graceful Routing Engine switchover is not configured.

[edit chassis redundancy]
keepalive-time seconds;

3. Configure automatic switchover to the backup Routing Engine following a software process failure by including the failover other-routing-engine statement at the [edit system processes process-name] hierarchy level:

[edit system processes]
<process-name> failover other-routing-engine;

4. The Routing Engine mastership can be manually switched using the following CLI commands:

request chassis routing-engine master acquire    (on the backup Routing Engine)
request chassis routing-engine master release    (on the primary Routing Engine)
request chassis routing-engine master switch     (on either the primary or backup Routing Engine)

Graceful Routing Engine Switchover

Junos OS provides a separation between the routing and control planes. Graceful Routing Engine switchover (GRES) leverages this separation to provide a switchover between the Routing Engines without disrupting traffic flow. Configuring graceful Routing Engine switchover on a router enables the interface information and kernel state to be synchronized on both Routing Engines. This leads to the same routing and forwarding states being preserved on both Routing Engines. Any routing changes occurring on the primary Routing Engine are replicated in the kernel of the backup Routing Engine. Although graceful Routing Engine switchover synchronizes the kernel state, it does not preserve the control plane.


It is important to note that graceful Routing Engine switchover only offers Routing Engine redundancy, not router-level redundancy. Traffic flows through the router for a short interval during the Routing Engine switchover. However, the traffic is dropped as soon as any of the routing protocol timers expire and the neighbor relationship with the upstream router ends. To avoid this situation, operators must apply graceful Routing Engine switchover in conjunction with Graceful Restart (GR) protocol extensions.

NOTE Although graceful Routing Engine switchover is available on many other platforms, with respect to the scope of this handbook, graceful Routing Engine switchover is available only on the MX Series and EX8200 platforms.

Figure 8.1 shows a primary and backup Routing Engine exchanging keepalive messages.

Figure 8.1 Primary and Backup Routing Engines

For details concerning GR, see the Graceful Restart section on page 126.

Configuring Graceful Routing Engine Switchover

1. Graceful Routing Engine switchover can be configured under the [edit chassis redundancy] hierarchy:

[edit chassis redundancy]
graceful-switchover;

2. The operational show system switchover command can be used to check the graceful Routing Engine switchover status on the backup Routing Engine:

{backup}
chandra@HE-RE-1-MX480-194> show system switchover
Graceful switchover: On
Configuration database: Ready
Kernel database: Ready
Switchover Status: Steady State

Virtual Chassis

Routing Engines are built into the EX Series chassis. In this case, Routing Engine redundancy can be achieved by connecting and configuring two (or up to ten) EX switches as part of a virtual chassis. This virtual chassis operates as a single network entity and consists of designated primary and backup switches. Routing Engines on each of these two switches then become the master and backup Routing Engines of the virtual chassis, respectively. The rest of the switches of



the virtual chassis assume the role of line cards. The master Routing Engine on the primary switch manages all the other switches that are members of the virtual chassis and has full control of the configuration and processes. It receives and transmits routing information, builds and maintains routing tables, and communicates with interfaces and the forwarding components of the member switches.

The backup switch acts as the backup Routing Engine of the virtual chassis and takes over as the master when the primary Routing Engine fails. The virtual chassis uses GRES and NSR to recover from control plane failures. Operators can physically connect individual chassis using either virtual chassis extension cables or 10G/1G Ethernet links.

Using graceful Routing Engine switchover on a virtual chassis enables the interface and kernel states to be synchronized between the primary and backup Routing Engines. This allows the switchover between the primary and backup Routing Engines to occur with minimal disruption to traffic. The graceful Routing Engine switchover behavior on the virtual chassis is similar to the description in the Graceful Routing Engine Switchover section on page 121.

When graceful Routing Engine switchover is not enabled, the line card switches of the virtual chassis initialize to the boot-up state before connecting to the backup that takes over as the master when Routing Engine failover occurs. Enabling graceful Routing Engine switchover eliminates the need for the line card switches to re-initialize their state. Instead, they resynchronize their state with the new master Routing Engine, thus ensuring minimal disruption to traffic.

Some of the resiliency features of a virtual chassis include the following:

• A software upgrade either succeeds or fails on all or none of the switches belonging to the virtual chassis.

• Virtual chassis fast failover, a hardware mechanism that automatically reroutes traffic and reduces traffic loss when a link failure occurs.

• Virtual chassis split and merge, which causes the virtual chassis configuration to split into two separate virtual chassis when member switches fail or are removed.

Figure 8.2 shows a virtual chassis that consists of three EX4200 switches: EX-6, EX-7 and EX-8. A virtual chassis cable connects the switches to each other, ensuring that the failure of one link does not cause a virtual chassis split.

Figure 8.2 Virtual Chassis Example Consisting of Three EX4200s



Virtual Chassis Configuration Snippet

// Define members of a virtual chassis.
virtual-chassis {
    member 1 {
        mastership-priority 130;
    }
    member 2 {
        mastership-priority 130;
    }
}
// Define a management interface and address for the VC.
interfaces {
    vme {
        unit 0 {
            family inet {
                address 172.28.113.236/24;
            }
        }
    }
}

The show virtual-chassis CLI command provides the status of a virtual chassis that has a master switch, a backup switch and a line card. There are three EX4200 switches connected and configured to form a virtual chassis. Each switch has a member ID and sees the other two switches as its neighbors when the virtual chassis is fully functioning. The master and backup switches are assigned the same priority (130) to ensure non-revertive behavior after the master recovers.

show virtual-chassis

Virtual Chassis ID: 555c.afba.0405
                                             Mastership          Neighbor List
Member ID  Status  Serial No     Model         priority  Role    ID  Interface
0 (FPC 0)  Prsnt   BQ0208376936  ex4200-48p         128  Linecard  1  vcp-0
                                                                   2  vcp-1
1 (FPC 1)  Prsnt   BQ0208376979  ex4200-48p         130  Backup    2  vcp-0
                                                                   0  vcp-1
2 (FPC 2)  Prsnt   BQ0208376919  ex4200-48p         130  Master*   0  vcp-0
                                                                   1  vcp-1

Member ID for next new member: 0 (FPC 0)

Use the following operational CLI commands to define the 10G/1G Ethernet ports that are used only for virtual chassis inter-member connectivity:

request virtual-chassis vc-port set pic-slot 1 port 0
request virtual-chassis vc-port set pic-slot 1 port 1

Nonstop Active Routing

Nonstop Active Routing (NSR) preserves kernel and interface information in a manner similar to graceful Routing Engine switchover. However, compared to graceful Routing Engine switchover, NSR goes a step further and saves the routing protocol information on the backup Routing Engine. It also preserves the protocol connection information in the kernel. Any switchover between the Routing Engines is dynamic, is transparent to the peers, and occurs without any disruption to protocol peering. For these reasons, NSR is beneficial in cases where the peer routers do not support graceful restart.


Juniper Networks recommends enabling NSR in conjunction with graceful Routing Engine switchover because this maintains the forwarding plane information during the switchover.

State information for a protocol that is not supported by NSR is not replicated to the backup Routing Engine; after a switchover, that state must be refreshed using the normal recovery mechanism inherent to the protocol.

• Automatic route distinguishers for multicast can be enabled simultaneously with NSR.

• It is not necessary to start the primary and backup Routing Engines at the same time.

• Activating a backup Routing Engine at any time automatically synchronizes it with the primary Routing Engine.

For further details, refer to the Junos High Availability Guide for the latest Junos software version at www.juniper.net/techpubs/en_US/junos10.1/information-products/topic-collections/swconfig-high-availability/noframes-collapsedTOC.html.

Configuring Nonstop Active Routing

1. Enable graceful Routing Engine switchover under the chassis stanza.

[edit chassis redundancy]
graceful-switchover;

2. Enable nonstop active routing under the routing-options stanza.

[edit routing-options]
nonstop-routing;

3. When operators enable NSR, they must synchronize configuration changes on both Routing Engines.

[edit system]
commit synchronize;

4. A switchover to the backup Routing Engine must occur when the routing protocol process (rpd) fails three times consecutively, in rapid intervals. For this to occur, the following statement must be included.

[edit system processes]
routing failover other-routing-engine;

5. Operators must add the following command to achieve synchronization between the Routing Engines after configuration changes.

[edit system]
commit synchronize;

6. Operators can use the following operational command to verify whether NSR is enabled and active.

show task replication


Nonstop Bridging

Nonstop Bridging (NSB) enables a switchover between the primary and backup Routing Engines without losing Layer 2 Control Protocol (L2CP) information. NSB is similar to NSR in that it preserves interface and kernel information. The difference is that NSB saves the Layer 2 control information by running a Layer 2 Control Protocol process (l2cpd) on the backup Routing Engine. For NSB to function, operators must enable graceful Routing Engine switchover.

The following Layer 2 control protocols support NSB:

• Spanning Tree Protocol (STP)

• Rapid STP (RSTP)

• Multiple STP (MSTP)

Configuring Nonstop Bridging

1. Enable graceful Routing Engine switchover under the chassis stanza.

[edit chassis redundancy]
graceful-switchover;

2. Explicitly enable NSB:

[edit protocols layer2-control]
nonstop-bridging;

3. Ensure synchronization between Routing Engines whenever a configuration change is committed.

[edit system]
commit synchronize;

NOTE It is not necessary to start the primary and backup Routing Engines at the same time. Bringing a backup Routing Engine online at any time automatically synchronizes it with the primary Routing Engine when NSB is enabled.

Graceful Restart

A service disruption necessitates routing protocols on a router to recalculate peering relationships, protocol-specific information and routing databases. Disruptions due to an unprotected restart of a router can cause route flapping, greater protocol reconvergence times or forwarding delays, ultimately resulting in dropped packets. However, Graceful Restart (GR) alleviates this situation, acting as an extension to the routing protocols.

A router with GR extensions can be defined in the role of either "restarting" or "helper." These extensions provide the neighboring routers with the status of a router when a failure occurs. Consider a router on which a failure has occurred: the GR extensions signal the neighboring routers that a restart is occurring. This prevents the neighbors from sending out network updates to the router for the duration of the graceful restart wait interval. A router with GR enabled must negotiate GR support with its neighbors at the start of a routing session. The primary advantages of GR are uninterrupted packet forwarding and temporary suppression of all routing protocol updates.


NOTE A helper router undergoing Routing Engine switchover drops the GR wait state that it may be in and propagates the adjacency's state change to the network. GR support is available for routing/MPLS-related protocols and Layer 2 or Layer 3 VPNs.

MORE See Table B.3 in Appendix B of this handbook for a list of GR protocols supported on the MX and EX Series platforms.

Configuring Graceful Restart

1. Enable GR either at the global level or at specific protocol levels. When configuring at the global level, operators must use the routing-options hierarchy. The restart duration specifies the duration of the GR period.

NOTE The GR helper mode is enabled by default even when GR is not enabled. If necessary, the GR helper mode can be disabled on a per-protocol basis. If GR is enabled globally, it can be disabled for each individual protocol as required.

[edit routing-options]
graceful-restart {
    restart-duration seconds;
}
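As a hedged per-protocol example, helper mode can be switched off for a single protocol (OSPF here) without touching the global setting:

[edit protocols]
ospf {
    graceful-restart {
        helper-disable;
    }
}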

2. GR can be enabled for static routes under the routing-options hierarchy:

[edit routing-options]
graceful-restart;

In-Service Software Upgrade

In-service software upgrade (ISSU) facilitates software upgrades of Juniper devices in environments where there is a high concentration of users and business-critical applications. Operators can use ISSU to upgrade the software from one Junos release to another without any disruption to the control plane. Any disruption to traffic during the upgrade is minimal.

ISSU runs only on platforms that support dual Routing Engines and requires that graceful Routing Engine switchover and NSR be enabled. Graceful Routing Engine switchover is required because a switch from the primary to the backup Routing Engine must happen without any packet forwarding loss. NSR with graceful Routing Engine switchover maintains routing protocol and control information during the switchover between the Routing Engines.

NOTE Similar to regular upgrades, Telnet sessions, SNMP, and CLI access can be interrupted briefly while ISSU is being performed.

If BFD is enabled, the detection and transmission session timers increase temporarily during the ISSU activity. The timers revert to their original values once the ISSU activity is complete.

When attempting to perform an ISSU, the following conditions must be met:

• The primary and backup Routing Engines must be running the same software version.

• The status of the PICs cannot be changed during the ISSU process. For example, the PICs cannot be brought online or taken offline.

• The network must be in a steady, stable state.


An ISSU can be performed in one of the following ways:

• Upgrading and rebooting both Routing Engines automatically – Both Routing Engines are upgraded to the newer version of the software and then rebooted automatically.

• Upgrading both Routing Engines and then manually rebooting the new backup Routing Engine – The original backup Routing Engine is rebooted first after the upgrade to become the new primary Routing Engine. Following this, the original primary Routing Engine must be rebooted manually for the new software to take effect. The original primary Routing Engine then becomes the backup Routing Engine.

• Upgrading and rebooting only one Routing Engine – In this case, the original backup Routing Engine is upgraded and rebooted and becomes the new primary Routing Engine. The former primary Routing Engine must be upgraded and rebooted manually. (A sample invocation follows this list.)
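All three methods are variants of the same operational command. A hedged sketch of the fully automatic form, with an illustrative package name, is:

{master}
chandra@MX480-131-0> request system software in-service-upgrade /var/tmp/jinstall-10.0R2.10-domestic-signed.tgz reboot

Omitting the reboot option corresponds to the second method, and adding the no-old-master-upgrade option to the third.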

MORE For more details on performing an ISSU using the above-listed methods, see Appendix A of this handbook.

Verifying Conditions and Tasks Prior to ISSU Operation

1. Verify that the primary and backup Routing Engines are running the same software version using the show version invoke-on all-routing-engines CLI command:

{master}
chandra@MX480-131-0> show version invoke-on all-routing-engines
re0:
--------------------------------------------------------------------------
Hostname: MX480-131-0
Model: mx480
JUNOS Base OS boot [10.0R1.8]
JUNOS Base OS Software Suite [10.0R1.8]
JUNOS Kernel Software Suite [10.0R1.8]
JUNOS Crypto Software Suite [10.0R1.8]
JUNOS Packet Forwarding Engine Support (M/T Common) [10.0R1.8]
JUNOS Packet Forwarding Engine Support (MX Common) [10.0R1.8]
JUNOS Online Documentation [10.0R1.8]
JUNOS Voice Services Container package [10.0R1.8]
JUNOS Border Gateway Function package [10.0R1.8]
JUNOS Services AACL Container package [10.0R1.8]
JUNOS Services LL-PDF Container package [10.0R1.8]
JUNOS Services Stateful Firewall [10.0R1.8]
JUNOS AppId Services [10.0R1.8]
JUNOS IDP Services [10.0R1.8]
JUNOS Routing Software Suite [10.0R1.8]
re1:
--------------------------------------------------------------------------
Hostname: MX480-131-1
Model: mx480
JUNOS Base OS boot [10.0R1.8]
JUNOS Base OS Software Suite [10.0R1.8]
JUNOS Kernel Software Suite [10.0R1.8]
JUNOS Crypto Software Suite [10.0R1.8]


JUNOS Packet Forwarding Engine Support (M/T Common) [10.0R1.8]
JUNOS Packet Forwarding Engine Support (MX Common) [10.0R1.8]
JUNOS Online Documentation [10.0R1.8]
JUNOS Voice Services Container package [10.0R1.8]
JUNOS Border Gateway Function package [10.0R1.8]
JUNOS Services AACL Container package [10.0R1.8]
JUNOS Services LL-PDF Container package [10.0R1.8]
JUNOS Services Stateful Firewall [10.0R1.8]
JUNOS AppId Services [10.0R1.8]
JUNOS IDP Services [10.0R1.8]
JUNOS Routing Software Suite [10.0R1.8]

2. Verify that graceful Routing Engine switchover and NSR are enabled using the show system switchover and show task replication commands.

3. BFD timer negotiation can be disabled explicitly during the ISSU activity using the [edit protocols bfd] hierarchy:

[edit protocols bfd]
no-issu-timer-negotiation;

4. Perform a software backup on each Routing Engine using the request system snapshot CLI command:

{master}
chandra@MX480-131-0> request system snapshot
Verifying compatibility of destination media partitions...
Running newfs (899MB) on hard-disk media / partition (ad2s1a)...
Running newfs (99MB) on hard-disk media /config partition (ad2s1e)...
Copying '/dev/ad0s1a' to '/dev/ad2s1a' .. (this may take a few minutes)
Copying '/dev/ad0s1e' to '/dev/ad2s1e' .. (this may take a few minutes)
The following filesystems were archived: / /config

Verifying a Unified ISSU

Execute the show chassis in-service-upgrade command on the primary Routing Engine to verify the status of the FPCs and their corresponding PICs after the most recent ISSU activity.


Virtual Router Redundancy Protocol

Virtual Router Redundancy Protocol (VRRP) is a protocol that runs on routing devices connected to the same broadcast domain. VRRP configuration assigns these devices to a group. The grouping eliminates the possibility of a single point of failure and thus provides high availability of network connectivity to the hosts on the broadcast domain. Routers participating in VRRP share a virtual IP address and a virtual MAC address. The shared virtual IP address corresponds to the default route configured on the hosts. For example, hosts on a broadcast domain can use a single default route to reach multiple redundant routers belonging to the VRRP group on that broadcast domain.

One of the routers is elected dynamically as the default primary of the group and is active at any given time. All the other participating routing devices perform a backup role. Operators can assign priorities to devices manually, forcing them to act as primary and backup devices. The VRRP primary sends out multicast advertisements to the backup devices at regular intervals (the default interval is 1 second). When the backup devices do not receive an advertisement for a configured period, the device with the next highest priority becomes the new primary. This occurs dynamically, thus enabling an automatic transition with minimal traffic loss. This VRRP action eliminates the dependence on a single routing platform for connectivity, which would otherwise be a single point of failure. In addition, the change between the primary and backup roles occurs with minimal VRRP messaging and no intervention on the host side.

Figure 8.3 shows a set of hosts connected to three EX switches (EX4200-0, EX8200-1 and EX8200-2) on the same broadcast domain. EX4200-0 is configured as a Layer 2 switch only, without any routing functionality. EX8200-1 and EX8200-2 are configured to have their respective IP addresses on the broadcast domain and are configured as VRRP members with a virtual address of 172.1.1.10/16. EX8200-1 is set to be the primary, while EX8200-2 is the backup. The default gateway on each of the hosts is set to the virtual address.

Traffic from the hosts is sent to hosts on other networks through EX8200-1 because it is the primary. When the hosts lose connectivity to EX8200-1, either due to a node or a link failure, EX8200-2 becomes the primary. The hosts start sending the traffic through EX8200-2. This is possible because the hosts forward the traffic to the gateway that owns virtual IP address 172.1.1.10, and the IP packets are encapsulated in Ethernet frames destined to the virtual MAC address.

Junos provides a solution that prevents re-learning of ARP information on the backup router when the primary router fails. This solution increases performance when a large number of hosts exists on the LAN.


Figure 8.3 VRRP

MORE For VRRP configuration details, refer to the Junos High Availability Guide at www.juniper.net/techpubs/software/junos/junos90/swconfig-high-availability/high-availability-overview.html.

VRRP Configuration Diagram

Figure 8.4 shows a sample VRRP network scenario. In this scenario, two EX4200 devices (EX4200-A and EX4200-B) are configured as part of a VRRP group.

NOTE Although this VRRP sample scenario uses EX4200 devices, it is possible to configure other combinations of VRRP groups consisting of devices such as:

• EX8200–EX4200

• EX8200–MX480

• MX480–MX480

• EX8200–EX8200

Figure 8.4 shows devices EX8200-A and EX8200-B, MX480-A and MX480-B to illustrate the choices of different platforms when configuring VRRP in the network.



Figure 8.4 VRRP Test Network

The virtual address assigned to the EX4200 group discussed here is 11.22.1.1. The two devices and the IBM blade servers physically connect on the same broadcast domain. EX4200-A is elected as the primary, so the path between the servers and EX4200-A through the Cisco ESM is the preferred primary path. The link between the Cisco ESM and EX4200-B is the backup path.

NOTE Cisco's ESM included in the IBM BladeCenter is a Layer 2 switch that does not support VRRP, but it serves as an access network layer switch connected to routers that use VRRP. Other switch modules for the IBM BladeCenter support Layer 3 functionality but are outside the scope of this book.



Configuring VRRP

To configure VRRP on the sample network, perform the following steps:

1. Create two trunk ports on Cisco's ESM. Assign an internal eth0 port on Blade[x] to the same network as VRRP, for example 11.22.1.x.

2. Add a router with a Layer 3 address that is reachable from the 11.22.1.x network on the BladeCenter. In this case, the MX480 acts as a Layer 3 router that connects to both EX4200-A and EX4200-B through the 11.22.2.x and 11.22.3.x networks, respectively.

3. This Layer 3 MX480 router also terminates the 11.22.5.x network via interface ge-5/3/5 with family inet address 11.22.5.1.

4. Verify that this address is reachable from the blade server by configuring the default gateway to be either 11.22.1.11 (ge-0/0/11) or 11.22.1.31 (ge-0/0/31).

5. Configure VRRP between the two interfaces ge-0/0/11 (EX4200-A) and ge-0/0/31 (EX4200-B). The virtual address (known as the vrrp-id) is 11.22.1.1, with ge-0/0/11 on EX4200-A set to have a higher priority.

Verify operation on the sample network by performing the following steps:

1. Reconfigure the default route on 11.22.1.60 (blade server) to 11.22.1.1 (the VRRP router ID).

2. Confirm that 11.22.5.1 is reachable from 11.22.1.60 and vice versa. Perform a traceroute to ensure that the next hop is 11.22.1.11 on EX4200-A.

3. Either lower the priority on EX4200-A or administratively disable the interface ge-0/0/11 to simulate an outage of EX4200-A.

4. Confirm that pings from 11.22.1.60 to 11.22.5.1 are still working but use the backup path to EX4200-B.

5. Perform a traceroute to confirm that the backup path is being used.

NOTE The traceroute command can be used for confirmation in both directions, to and from the BladeCenter.

VRRP Configuration Snippet

The VRRP configuration snippet shows the minimum configuration required on the EX Series to enable a VRRP group.

// Configure the interface ge-0/0/31 on EX4200-B with an IP address of 11.22.1.31/24 on logical unit 0.
// Define a VRRP group with a virtual IP of 11.22.1.1 and a priority of 243.
show configuration interfaces ge-0/0/31
unit 0 {
    family inet {
        address 11.22.1.31/24 {
            vrrp-group 1 {
                virtual-address 11.22.1.1;
                priority 243;
                preempt {
                    hold-time 0;
                }
                accept-data;
            }
        }
    }
}


// Interface ge-0/0/36 to the MX480 with an IP of 11.22.2.36/24.
show configuration interfaces ge-0/0/36
unit 0 {
    family inet {
        address 11.22.2.36/24;
    }
}
// Configure the interface ge-0/0/11 on EX4200-A with an IP address of 11.22.1.11/24 on logical unit 0.
// Define a VRRP group with a virtual IP of 11.22.1.1 and a priority of 240.
show configuration interfaces ge-0/0/11
unit 0 {
    family inet {
        address 11.22.1.11/24 {
            vrrp-group 1 {
                virtual-address 11.22.1.1;
                priority 240;
                preempt {
                    hold-time 0;
                }
                accept-data;
            }
        }
    }
}

VRRP Configuration Hierarchy for IPv4

This section shows the VRRP statements that can be included at the interface hierarchy level.

[edit interfaces interface-name unit unit-number family inet address address]
vrrp-group group-id {
    (accept-data | no-accept-data);
    advertise-interval seconds;
    authentication-key key;
    authentication-type authentication;
    fast-interval milliseconds;
    (preempt | no-preempt) {
        hold-time seconds;
    }
    priority number;
    track {
        interface interface-name {
            priority-cost priority;
            bandwidth-threshold bits-per-second {
                priority-cost priority;
            }
        }
        priority-hold-time seconds;
        route prefix routing-instance instance-name {
            priority-cost priority;
        }
    }
    virtual-address [ addresses ];
}


Configuring VRRP for IPv6 (MX Series Platform Only)

As mentioned earlier, operators can configure VRRP for IPv6 on the MX platform. To configure VRRP for IPv6, include the following statements at this hierarchy level:

[edit interfaces interface-name unit unit-number family inet6 address address]
vrrp-inet6-group group-id {
    (accept-data | no-accept-data);
    fast-interval milliseconds;
    inet6-advertise-interval seconds;
    (preempt | no-preempt) {
        hold-time seconds;
    }
    priority number;
    track {
        interface interface-name {
            priority-cost priority;
            bandwidth-threshold bits-per-second {
                priority-cost priority;
            }
        }
        priority-hold-time seconds;
        route prefix routing-instance instance-name {
            priority-cost priority;
        }
    }
    virtual-inet6-address [ addresses ];
    virtual-link-local-address ipv6-address;
}

Link Aggregation

Link Aggregation (LAG) is a feature that aggregates two or more physical Ethernet links into one logical link to obtain higher bandwidth and to provide redundancy. LAG provides high link availability and capacity, which results in improved performance.

Traffic is balanced across all links that are members of an aggregated bundle. The failure of a member link does not cause traffic disruption; because there are multiple member links, traffic continues over the remaining active links.

LAG is defined in the IEEE 802.3ad standard and can be used in conjunction with the Link Aggregation Control Protocol (LACP). Using LACP, multiple physical ports can be bundled together to form a logical channel. Enabling LACP on two peers that participate in a LAG group enables them to exchange LACP packets and negotiate the automatic bundling of links.

NOTE LAG can be enabled on interfaces spread across multiple chassis; this is known as Multichassis LAG (MC-LAG). This means that the member links of a bundle can be configured between multiple chassis instead of only two chassis.

Currently, MC-LAG support exists only on the MX Series platforms.


Some points to note with respect to LAG:

• Ethernet links between two points support LAG.

• A maximum of 16 Ethernet interfaces can be included within a LAG on the MX Series platforms. The LAG can consist of interfaces that reside on different Flexible PIC Concentrator (FPC) cards in the same MX chassis. However, these interface links must be of the same type.

• The EX Series platforms support a maximum of 8 Ethernet interfaces in a LAG. In the case of an EX4200-based virtual chassis, the interfaces that belong to a LAG can be on different switch members of the virtual chassis, as the sketch after this list illustrates.
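As a minimal sketch of an EX Series LAG spanning virtual chassis members (the interface names are placeholders, and ge-1/0/0 assumes a second virtual chassis member):

// Allow aggregated interfaces on the chassis, then bind two member links to ae0 with LACP.
set chassis aggregated-devices ethernet device-count 2
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ge-0/0/0 ether-options 802.3ad ae0
set interfaces ge-1/0/0 ether-options 802.3ad ae0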

Link Aggregation Configuration Diagram

Figure 8.5 shows a sample link aggregation and load balancing setup. In this configuration, LAG is enabled on the interfaces between the MX480 and Cisco's ESM switch on the IBM BladeCenter, thus bundling the physical connections into one logical link.

Figure 8.5 LAG and Load Balancing Setup

NOTE The EX8200 or any of the MX Series devices can be used instead of the MX480 shown in Figure 8.5.

Link Aggregation Configuration Hierarchy

This section describes the steps involved in configuring and verifying LAG on the test network. A physical interface can be associated with an aggregated Ethernet interface on the EX and MX Series platforms. Enable the aggregated link as follows:

1. At the [edit chassis] hierarchy level, configure the maximum number of aggregated devices available on the system:

aggregated-devices {
    ethernet {
        device-count X;
    }
}



NOTE Here X refers to the number of aggregated interfaces (0-127).

2. At the [edit interfaces interface-name] hierarchy level, include the 802.3ad statement:

[edit interfaces interface-name (fastether-options | gigether-options)]
802.3ad aeX;

3. A statement defining aeX also must be included at the [edit interfaces] hierarchy level.

4. Some of the physical properties that specifically apply to aggregated Ethernet interfaces can also be configured:

chandra@HE-Routing Engine-1-MX480> show configuration interfaces aeX
aggregated-ether-options {
    minimum-links 1;
    link-speed 1g;
    lacp {
        active;
        periodic fast;
    }
}
unit 0 {
    family bridge {
        interface-mode trunk;
        vlan-id-list 1122;
    }
}

An aggregated Ethernet interface can be deleted from the configuration by issuing the delete interfaces aeX command at the [edit] hierarchy level in configuration mode.

[edit]
user@host# delete interfaces aeX

NOTE When an aggregated Ethernet interface is deleted from the configuration, Junos removes the configuration statements related to aeX and sets this interface to the DOWN state. However, the aggregated Ethernet interface is not deleted until the chassis aggregated-devices ethernet device-count configuration statement is deleted.
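After committing the configuration, the bundle and its LACP negotiation can be verified operationally. This is a minimal sketch; the output is abbreviated here and varies by release:

// Confirm that the bundle is up and lists its member links.
user@host> show interfaces aeX terse

// Confirm that LACP has placed the member links in the collecting/distributing state.
user@host> show lacp interfaces aeX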

Forwarding Options in LAG (MX480 only)

By default, hash-key algorithms use the interface as the default parameter to generate hash keys for load distribution. Forwarding options must be configured to achieve load balancing based on source and destination IP, source and destination MAC, or any other combination of Layer 3 or Layer 4 parameters.

NOTE Although EX Series platforms can also perform hash-key based load balancing as of Release 9.6R1.13, they do not have the flexibility to configure the criteria for hashing.

hash-key {
    family multiservice {
        source-mac;
        destination-mac;
        payload {
            ip {
                layer-3 {
                    [source-ip-only | destination-ip-only];
                }
                layer-4;
            }
        }
        symmetric-hash;
    }
}
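Expressed as set commands, a sketch of enabling Layer 3 and Layer 4 payload hashing for bridged traffic on the MX480 might look as follows (verify the exact options against the Junos release in use):

// Include Layer 3 and Layer 4 fields of the IP payload in the LAG hash key.
set forwarding-options hash-key family multiservice payload ip layer-3
set forwarding-options hash-key family multiservice payload ip layer-4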

Link Aggregation Configuration Description

// Specify the number of aggregated devices
aggregated-devices {
    ethernet {
        device-count X;
    }
}
// Specify the aeX interface properties such as minimum number of links, speed, and LACP options.
aggregated-ether-options {
    minimum-links 1;
    link-speed 1g;
    lacp {
        active;
        periodic fast;
    }
}
// Define a logical unit that is a bridge type trunk interface and its vlan-id.
unit 0 {
    family bridge {
        interface-mode trunk;
        vlan-id-list 1122;
    }
}

Link Failover Scenarios - LAG with LACP and NSR

Link failover between members of a LAG on the MX480 can occur in conjunction with different combinations of LACP and NSR. Various failure scenarios, such as Routing Engine/FPC/switch fabric failover and system upgrade with and without ISSU, are possible for each of the LACP/NSR combinations.

The different LACP/NSR combinations on the MX480 include the following:

• LACP Enabled, NSR Enabled

• LACP Enabled, NSR Disabled

• LACP Disabled, NSR Enabled

• LACP Disabled, NSR Disabled

Table B.1 and Table B.2 in Appendix B of this handbook provide detailed LAG testing results based on the scenarios listed above.


The salient test results, listed in Appendix B, are as follows:

• Enabling LACP provided seamless recovery from Routing Engine failover on the MX480. The Routing Engine took approximately 20 seconds to recover from a failure with LACP disabled, as opposed to no disruption when it was enabled.

• FPCs with only one LAG interface recovered more quickly (in 1.5 seconds) than FPCs with two interfaces (approximately 55 seconds).

• The switch fabric recovered immediately after a failure in all the scenarios.

• A similar validation was performed using the EX4200 instead of the MX480. In this case, enabling or disabling LACP did not make a difference. The following scenarios were validated:

- Routing Engine failover

- FPC failover (two LAG links and an interface to the traffic generator)

- Switch fabric failover

- System upgrade (without ISSU or graceful Routing Engine switchover)

- System upgrade (without ISSU, with graceful Routing Engine switchover)

MORE Table B.1 and Table B.2 in Appendix B of this handbook provide detailed LAG test results for these platforms.

Redundant Trunk Group

Redundant Trunk Group (RTG), a Layer 2 redundancy mechanism similar to STP, is available on the EX Series switches and eliminates the need for spanning tree. In its simplest form, RTG is implemented on a switch that is dual-homed to network devices. Enabling RTG makes one of the links active and the other a backup; traffic is forwarded over the active link. The backup link takes over traffic forwarding when the active link fails, thus reducing the convergence time. There is, however, a distinction between how data and control traffic are handled by the backup link: Layer 2 control traffic, for example LLDP session messages, is permitted over the backup link, while data traffic is blocked. This behavior is consistent irrespective of whether the switch is a physical or virtual chassis.

Figure 8.6 shows an EX Series switch that has links to Switch 1 and Switch 2, respectively. RTG is configured on the EX Series switch so that the link to Switch 1 is active and performs traffic forwarding. The link to Switch 2 is the backup link and starts forwarding traffic when the active link fails.

NOTE Given the multichassis scenario, it is better to use RTG instead of MC-LAG.


Figure 8.6 RTG-based Homing to Two Switches

Figure 8.7 shows an EX Series switch that has two links to Switch 1. RTG is configured on the EX Series switch so that one of the links to Switch 1 is active and performs traffic forwarding while the other link acts as the backup. The backup link starts forwarding traffic to Switch 1 when the active link fails.

NOTE In this scenario, it may be more efficient in terms of bandwidth and availability to use LAG instead of RTG. LAG provides better use of bandwidth and faster recovery because there is no flushing and relearning of MAC addresses.

Figure 8.7 RTG-based Homing to a Single Switch

Based on these two scenarios, RTG can be used to control the flow of traffic over links from a single switch to multiple destination switches while providing link redundancy.

This feature is enabled on a physical interface and is functionally similar to STP. However, RTG and STP are mutually exclusive on a physical port; Junos does not permit the same interface to be part of both RTG and STP simultaneously. The significance of RTG is local rather than network-wide, since decisions are made locally on the switch.

Typically, RTG is implemented on an access switch or on a virtual chassis that is connected to two or more devices that do not operate as a virtual chassis or multichassis, or that do not use STP. It is configured between the access and core layers in a two-tier data center architecture, or between the access and aggregation layers in a three-tier model. There can be a maximum of 16 RTGs in a standalone switch or in a virtual chassis.

Both RTG active and backup links must be members of the same VLANs.

NOTE Junos does not allow the configuration to take effect if there is a mismatch of VLAN IDs between the links belonging to an RTG.
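As a hedged illustration of this requirement on an EX switch (the aeX interface names are placeholders, and VLAN 1122 is borrowed from the earlier LAG example), both members of an RTG would carry the same VLAN:

// Configure both RTG member trunks with an identical VLAN membership.
set interfaces ae1 unit 0 family ethernet-switching port-mode trunk
set interfaces ae1 unit 0 family ethernet-switching vlan members 1122
set interfaces ae2 unit 0 family ethernet-switching port-mode trunk
set interfaces ae2 unit 0 family ethernet-switching vlan members 1122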



Figure 8.8 shows a sample two-tier architecture with RTG and LAG enabled between the access-core layers and access-to-server layers. The core consists of two MX Series devices: MX480-A and MX480-B. Two EX4200-based virtual chassis (EX4200 VC-A and EX4200 VC-B) and EX8200-A and EX8200-B form the access layer. There are connections from each of the access layer devices to MX480-A and MX480-B, respectively.

Figure 8.8 RTG and LAG in 2-Tier Model

We enable LAG and RTG on these links to ensure redundancy and to control traffic flow.

We enable LAG on the access devices for links between the following devices:

• A-ae1 (EX4200 VC-A -> MX480-A)

• A-ae2 (EX4200 VC-A -> MX480-B)

• B-ae1 (EX4200 VC-B -> MX480-A)

• B-ae2 (EX4200 VC-B -> MX480-B)

• EX-A-ae1 (EX8200-A -> MX480-A)

• EX-A-ae2 (EX8200-A -> MX480-B)

• EX-B-ae1 (EX8200-B -> MX480-A)

• EX-B-ae2 (EX8200-B -> MX480-B)

In addition, we configure LAG on the EX8200-A and EX8200-B to provide aggregation on links to the IBM PowerVM servers.

We enable RTG on the EX4200 VC-A and VC-B so that links AL-A and AL-B to MX480-A are active and are used to forward traffic. The set of backup links RL-A and RL-B from the virtual chassis to MX480-B takes over the traffic forwarding activity when the active link(s) fails.



Configuration Details

To configure a redundant trunk link, an RTG first must be created. As stated earlier, RTG can be configured on the access switch that has two links – a primary (active) and a secondary (backup) link. The secondary link automatically starts forwarding data traffic when the active link fails.

Execute the following commands to configure RTG and to disable RSTP on the EX switches.

• Define RTG on the LAG interface ae1:

set ethernet-switching-options redundant-trunk-group group DC_RTG interface ae1

• Define RTG on the LAG interface ae2:

set ethernet-switching-options redundant-trunk-group group DC_RTG interface ae2

• Disable RSTP on interface ae1, which is a member of the RTG:

set protocols rstp interface ae1 disable

• Disable RSTP on interface ae2, which is a member of the RTG:

set protocols rstp interface ae2 disable
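Optionally, one member link can be designated as the primary so that it is preferred whenever it is up, and the group state can then be checked operationally. This is a minimal sketch based on the DC_RTG group above; verify the keyword support against the Junos release in use:

// Mark ae1 as the primary link of the group.
set ethernet-switching-options redundant-trunk-group group DC_RTG interface ae1 primary

// Verify which link is currently active and which is the backup.
user@EX> show redundant-trunk-group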


Appendices

Appendix A: Configuring TCP/IP Networking in Servers . . . 144

Appendix B: LAG Test Results . . . 150

Appendix C: Acronyms . . . 154

Appendix D: References . . . 158


Appendix A: Configuring TCP/IP Networking in Servers

Server network configuration includes many tasks, such as enabling the interface, setting an IP address and routing information, creating a logical interface, and optimizing Ethernet port settings, which include speed, duplex, flow control, MTU (jumbo frames), and VLAN ID.

The engineering testers enabled many network configuration commands in different OSs, including RHEL, SUSE, AIX, and Windows. This appendix lists the common network configuration commands with their associated OS as a convenient reference.

Table A.1 lists tasks that are associated with system-dependent commands. Obviously, a command that works on one platform may not work on another. For example, the lsdev command only works on the AIX platform.

Table A.1 Network Interface Configuration Tasks on Different Server Platforms

Interfaces | Server Platform | Configuration Tasks
Physical NIC | IBM System p | Uses HMC to allocate the physical NIC to a partition. The adapter configuration in the partition depends on the OS, including RHEL, SUSE and AIX.
Virtual Ethernet Adapter | IBM PowerVM | Uses HMC to allocate the virtual Ethernet adapter to each partition. The adapter configuration in the partition depends on the OS, including RHEL, SUSE, AIX.
Host Ethernet Adapter (HEA) | IBM PowerVM | Uses HMC to allocate the virtual Ethernet adapter to each partition. The adapter configuration in the partition depends on the OS, including RHEL, SUSE, AIX.
Logical Host Ethernet Adapter (LHEA) | IBM PowerVM | Uses HMC to allocate the virtual Ethernet adapter to each partition. The adapter configuration in the partition depends on the OS, including RHEL, SUSE, AIX.
Shared Ethernet Adapter (SEA) | IBM PowerVM | Uses HMC to allocate the interface to VIOS. Uses VIOS commands to configure SEA.
Interfaces in the Ethernet Pass-Thru Module | IBM BladeCenter | Uses BladeCenter Management Module (GUI) to allocate the interface to the blade server. Interface configuration in the blade server depends on the OS, including RHEL, SUSE, AIX, Windows.
Physical NIC | IBM x3500 | The physical NIC configuration depends on the OS, including RHEL, SUSE, AIX and Windows.

NOTE Some of these commands will change IP address settings immediately, while others require a restart of the network service.

NOTE Not all tools will save changes in the configuration database. This means that the changes may not be preserved after a server reboot.


Configuring Red Hat Enterprise Linux Network

In Red Hat Enterprise Linux (RHEL), the configuration files for network interfaces, and the scripts to activate and deactivate them, are located in the /etc/sysconfig/network-scripts/ directory:

• The file /etc/sysconfig/network specifies routing and host information for all network interfaces.

• The file /etc/sysconfig/network-scripts/ifcfg-<interface-name>

For each network interface on a Red Hat Linux system, there is a corresponding interface configuration script. Each of these files provides information specific to a particular network interface. The following is a sample ifcfg-eth0 file for a system using a fixed IP address:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.0.1.0
NETMASK=255.255.255.0
IPADDR=10.0.1.27
USERCTL=no

In addition, several other commands can be helpful, as listed in Table A.2.

Table A.2 Additional Commands

Commands | Description
ethtool | Queries and changes settings of an Ethernet device, such as auto-negotiation, speed, link-mode and flow-control.
kudzu | Detects and configures new and/or changed hardware on a system.
ifconfig | Queries and changes settings of an Ethernet interface. The changes made via ifconfig take effect immediately, but they are not saved in the configuration database.

The following is a sample ifconfig command to configure an interface with a fixed IP address:

# ifconfig eth0.5 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255 up


vconfig adds or removes a VLAN interface. When vconfig adds a VLAN interface, a new logical interface is formed from the base interface name and the VLAN ID. Below is a sample vconfig command to add a VLAN 5 interface on the eth0 interface:

#vconfig add eth0 5

The eth0.5 interface configuration file will be created in /etc/sysconfig/network-scripts/ifcfg-eth0.5.

• service network restart restarts networking.

• system-config-network launches a GUI-based network administration tool for configuring the interface.

• route allows operators to inquire about the routing table or to add a static route. A static route added by the route command is not persistent after a system reboot or network service restart (a persistent alternative is sketched after this list).

• netstat allows operators to check network configuration and activity. For instance, netstat -i shows interface statistics reports; netstat -r shows routing table information.

• ping allows operators to check network connectivity.

• traceroute allows operators to trace the route packets take from an IP network to a given host.
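To make a static route persist across reboots on RHEL, a per-interface route file can be used instead. This is a sketch with placeholder addresses; the exact file name and syntax can vary by release:

# /etc/sysconfig/network-scripts/route-eth0
# Each line defines one static route applied when eth0 is brought up.
10.10.0.0/16 via 10.0.1.254 dev eth0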

For further details concerning these commands, refer to the Red Hat Linux Reference Guide at www.redhat.com/docs/manuals/linux/RHL-9-Manual/pdf/rhl-rg-en-9.pdf.

Configuring SUSE Linux Enterprise Network

Table A.3 lists and defines commonly used SUSE Linux network configuration commands.

Table A.3 SUSE Linux Enterprise Network Configuration Commands

Commands | Description
ifconfig | Configures network interface parameters.
rcnetwork restart | Restarts the network service.
netstat | Prints network connections, routing tables, interface statistics and protocol statistics.
ping | Checks network connectivity.
traceroute | Tracks the route packets take from an IP network on their way to a given host.

For further details concerning the SUSE Linux network configuration commands, refer to Novell's Command Line Utilities at www.novell.com/documentation/oes/tcpipenu/?page=/documentation/oes/tcpipenu/data/ajn67vf.html.


Configuring AIX Network

AIX network configuration can be performed using smitty, a system management tool that provides a cursor-based text (command line) interface. Table A.4 lists and defines smitty and related AIX commands.

Table A.4 Smitty Commands and Definitions

Commands | Definition
lscfg | Displays configuration, diagnostic and vital product data (VPD) information about the system and its resources.
lslot | Displays dynamically reconfigurable slots, such as hot plug slots, and their characteristics.
lsdev | Displays devices in the system and their characteristics.
rmdev | Removes devices from the configuration database.
cfgmgr | Configures devices and optionally installs device software by running the programs specified in the Configuration Rules object class.
lsattr | Displays attribute characteristics and possible values of attributes for devices in the system.
smitty | Provides a cursor-based text interface to perform system management. In addition to a hierarchy of menus, smitty allows FastPath to take users directly to a dialog, bypassing the interactive menus.
smitty chgenet | Configures an adapter, determines a network adapter hardware address, sets an alternative hardware address or enables jumbo frames.
smit mktcpip | Sets the required values for starting TCP/IP on a host, including setting the hostname, setting the IP address of the interface in the configuration database, setting the subnetwork mask, or adding a static route.
ifconfig | Configures or displays network interface parameters for a TCP/IP network.
netstat | Displays network status, including the number of packets received, transmitted and dropped, and the routes and their status.
entstat | Shows Ethernet device driver and device statistics. For example, the command entstat ent0 displays the device generic statistics for ent0.
ping | Checks network connectivity.
traceroute | Tracks the route packets take from an IP network to a given host.

For details concerning the above-listed commands, refer to publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp.


Configuring Virtual I/O Server Network

Virtual I/O Server (VIOS) network configuration is used in POWER5, POWER6 and POWER7 systems. Table A.5 lists and defines some of the more commonly used VIOS network configuration commands.

Table A.5 VIOS Commands and Definitions

Commands | Definitions
mkvdev | Creates a mapping between a virtual adapter and a physical resource. For example, the following command creates a SEA that links physical ent0 to virtual ent2: mkvdev -sea ent0 -vadapter ent2 -default ent1 -defaultid 1
lsmap | Lists the mappings between virtual adapters and physical resources. For example, use the following lsmap command to list all virtual adapters attached to vhost1: lsmap -vadapter vhost1
chdev | Changes an attribute on a device. For instance, use the following chdev command to enable jumbo frames on the ent0 device: chdev -dev ent0 -attr jumbo_frame=yes
chtcpip | Changes the VIOS TCP/IP settings and parameters. For example, use the following command to change the current network address and mask to a new setting: chtcpip -interface en0 -inetaddr 9.1.1.1 -netmask 255.255.255.0
lstcpip | Displays the VIOS TCP/IP settings and parameters. For example, use the following command to list the current routing table: lstcpip -routetable
oem_setup_env | Initiates the OEM installation and setup environment so that users can install and set up software in the traditional way. For example, the oem_setup_env command can place a user in a non-restricted UNIX root shell so that the user can implement the AIX commands to install and set up software and use most of the AIX network commands, including lsdev, rmdev, chdev, netstat, entstat, ping and traceroute.

For further details concerning VIOS network commands, refer to publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphcg/iphcg_network_commands.htm.


Configuring Windows 2003 Network

Typically, Windows 2003 network configuration is performed by the network applet in the GUI-based Control Panel. NIC vendors also might provide a web GUI to configure the NIC settings, including frame size. Table A.6 lists and defines some of the more commonly used Windows 2003 commands for network configuration.

Table A.6 Windows 2003 Network Commands

Commands | Definitions
ipconfig | Command line utility to get the TCP/IP network adapter configuration.
route | Command line utility to add or remove a static route. You can make the change persistent by using the -p option when adding routes.
ping | Used to check network connectivity.
tracert | Used to track the route packets take from an IP network on their way to a given host.
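As an illustration of the route command's -p option from Table A.6, the following sketch (with placeholder addresses) adds a static route that survives reboots:

C:\> route -p add 10.10.0.0 mask 255.255.0.0 10.0.1.254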

For details concerning Windows 2003 network commands, refer to the Windows 2003 product help at technet.microsoft.com/en-us/library/cc780339%28WS.10%29.aspx.


Appendix B: LAG Test Results

Table B.1 lists detailed LAG test results for the MX480.

NOTE The values listed in Table B.1 represent approximations in seconds.

Table B.1 MX480 Link Aggregation Failover Scenarios (Graceful Restart Enabled)

LACP Enabled, NSR Enabled
  Routing Engine failover: 0
  FPC failover (FPC with one link of LAG): 1.5
  FPC failover (FPC with 2 links of LAG and interface to traffic generator): 53 (53, 53)
  Switch fabric failover: Immediate
  System upgrade without ISSU: 0 (upgrade the backup first, and then upgrade the primary)
  System upgrade with ISSU (NSR must be enabled): 0

LACP Enabled, NSR Disabled
  Routing Engine failover: 0
  FPC failover (FPC with one link of LAG): 1.5
  FPC failover (FPC with 2 links of LAG and interface to traffic generator): ~52 (51, 52, 53)
  Switch fabric failover: Immediate
  System upgrade without ISSU: –
  System upgrade with ISSU: –

LACP Disabled, NSR Enabled
  Routing Engine failover: ~20
  FPC failover (FPC with one link of LAG): 10
  FPC failover (FPC with 2 links of LAG and interface to traffic generator): ~63 (57, 63, 64)
  Switch fabric failover: Immediate
  System upgrade without ISSU: ~20 (upgrade the backup first, and then upgrade the primary)
  System upgrade with ISSU (NSR must be enabled): ~20*

LACP Disabled, NSR Disabled
  Routing Engine failover: ~20
  FPC failover (FPC with one link of LAG): 10
  FPC failover (FPC with 2 links of LAG and interface to traffic generator): ~63 (63, 64)
  Switch fabric failover: Immediate
  System upgrade without ISSU: –
  System upgrade with ISSU: –

NOTE The values listed in Table B.2 represent approximations in seconds.

Table B.2 EX8200 Link Aggregation Failover Scenarios

LACP Enabled or Disabled (results are the same in either case)
  Routing Engine failover: 0
  FPC failover (FPC with LAG and interface to traffic): ~84 (82, 86)
  Switch fabric failover: Immediate
  System upgrade (without ISSU/without GRES): 527
  System upgrade (without ISSU/with GRES): 152


NOTE Refer to Table B.2 when reviewing the following system upgrade steps.

Steps associated with system upgrade (without ISSU/without GRES):

1. Break GRES between the primary and the backup device.

2. Upgrade the backup device.

3. Upgrade the primary device. (Observe the outage in approximate seconds.)

4. Re-establish GRES between the primary and the backup device.

Steps associated with system upgrade (without ISSU/with GRES):

1. Break GRES between the primary and the backup device.

2. Upgrade the backup device.

3. Re-establish GRES between the primary and the backup device.

4. Reverse the roles between the primary and backup devices (the primary device becomes the backup and the backup device becomes the primary). Ignore the warning about version mismatch.

5. Break GRES between the primary device and the backup device.

6. Upgrade the backup device.

7. Re-establish GRES between the primary and the backup device.

Methods for Performing Unified ISSU

The three methods for performing a unified ISSU are the following:

• Upgrading and Rebooting Both Routing Engines Automatically

• Upgrading Both Routing Engines and Manually Rebooting the New Backup Routing Engine

• Upgrading and Rebooting Only One Routing Engine

Method 1: Upgrading and Rebooting Both Routing Engines Automatically

This method uses the following reboot command:

request system software in-service-upgrade package-name reboot

1. Download the software package from the Juniper Networks Support Web site.

2. Copy the package to the /var/tmp directory on the router:

user@host> file copy ftp://username:[email protected]/filename /var/tmp/filename

3. Verify the current software version on both Routing Engines, using the show version invoke-on all-routing-engines command:

{backup}
user@host> show version invoke-on all-routing-engines


4. Issue the request system software in-service-upgrade package-name reboot command on the master Routing Engine:

{master}
user@host> request system software in-service-upgrade /var/tmp/jinstall-9.0-20080114.2-domestic-signed.tgz reboot
ISSU: Validating Image
PIC 0/3 will be offlined (In-Service-Upgrade not supported)
Do you want to continue with these actions being taken ? [yes,no] (no) yes
ISSU: Preparing Backup RE
Pushing bundle to re1
Checking compatibility with configuration
. . .
ISSU: Old Master Upgrade Done
ISSU: IDLE
Shutdown NOW!
. . .
*** FINAL System shutdown message from root@host ***
System going down IMMEDIATELY
Connection to host closed.

5. Log in to the router once the new master (formerly the backup Routing Engine) is online. Verify that both Routing Engines have been upgraded:

{backup}
user@host> show version invoke-on all-routing-engines

6. To make the backup Routing Engine (the former master Routing Engine) the primary Routing Engine, issue the following command:

{backup}
user@host> request chassis routing-engine master acquire
Attempt to become the primary routing engine ? [yes,no] (no) yes
Resolving mastership...
Complete. The local routing engine becomes the master.
{master}
user@host>

7. Issue the request system snapshot command on each of the Routing Engines to back up the system software to the router's hard disk.

Method 2: Upgrading Both Routing Engines and Manually Rebooting the New Backup Routing Engine

1. Issue the request system software in-service-upgrade command.

2. Perform steps 1 through 4 as described in Method 1.

3. Issue the show version invoke-on all-routing-engines command to verify that the new backup Routing Engine (former master) is still running the previous software image, while the new primary Routing Engine (former backup) is running the new software image:

{backup}
user@host> show version

4. At this point, a choice can be made between installing the newer software and retaining the old version. To retain the older version, execute the request system software delete install command.


5. To ensure that the newer version of software is activated, reboot the new backup Routing Engine by issuing the following:

{backup}
user@host> request system reboot
Reboot the system ? [yes,no] (no) yes
Shutdown NOW!
. . .
System going down IMMEDIATELY
Connection to host closed by remote host.

6. Log in to the new backup Routing Engine and verify that both Routing Engines have been upgraded:

{backup}
user@host> show version invoke-on all-routing-engines

7. To make the new backup the primary, issue the following command:

{backup}
user@host> request chassis routing-engine master acquire
Attempt to become the master routing engine ? [yes,no] (no) yes

8. Issue the request system snapshot command on each of the Routing Engines to back up the system software to the router's hard disk.

Method 3: Upgrading and Rebooting Only One Routing Engine

Use the request system software in-service-upgrade package-name no-old-master-upgrade command on the master Routing Engine.

1. Request an ISSU upgrade:

{master}
user@host> request system software in-service-upgrade /var/tmp/jinstall-9.0-20080116.2-domestic-signed.tgz no-old-master-upgrade

2. To install the new software version on the new backup Routing Engine, issue the request system software add command.

Troubleshooting Unified ISSU

NOTE The following unified ISSU steps relate only to the Junos 9.6 release.

Perform the following steps if the ISSU procedure stops progressing.

1. Execute a request system software abort in-service-upgrade command on the master Routing Engine.

2. To verify that the upgrade has been aborted, check the existing router session for the following message: ISSU: aborted!


Appendix C: Acronyms

A
AFE: Application Front Ends
apsd: automatic protection switching process

B
BPDU: Bridge Protocol Data Unit
BSR: Bootstrap Router

C
CBT: Core Based Tree
CIST: Common Instance Spanning Tree
CLI: Command Line Interface
CoS: class of service

D
dcd: device control process
DDoS: Distributed Denial of Service
DHCP: Dynamic Host Configuration Protocol
DNS: Domain Name System
DSCP: Diffserv Code Points
DUT: Device Under Test
DVMRP: Distance Vector Multicast Routing Protocol

E
ESM: Ethernet Switch Module; Embedded Syslog Manager

F
FC: Fibre Channel
FCS: frame check sequence
FPC: Flexible PIC Concentrator
FSP: Flexible Service Processor

G
GRES: Graceful Routing Engine Switchover
GSL: global server load balancing

H
HBA: Host Bus Adapter
HEA: Host Ethernet Adapter
HMC: Hardware Management Console

I
IDP: Intrusion Detection and Prevention
IGMP: Internet Group Management Protocol
iSCSI: Internet Small Computer System Interface
ISSU: In-Service Software Upgrade
IVE: Instant Virtual Extranet
IVM: Integrated Virtualization Manager

L
LAG: Link Aggregation
LDAP: Lightweight Directory Access Protocol
LPAR: Logical Partitions
LHEA: Logical Host Ethernet Adapter

M
MAC: Media Access Control
MCS: Multi-Core Scaling
mgd: management process
MLD: Multicast Listener Discovery
MM: Management Module
MOSPF: Multicast Open Shortest Path First
MSTI: Multiple Spanning Tree Instance
MSDP: Multicast Source Discovery Protocol
MSTP: Multiple Spanning Tree Protocol
MTA: mail transfer agent
MTTR: mean time to repair
MTU: Maximum Transmission Unit

N
NAT: Network Address Translation
NIC: Network Interface Card
NIST: National Institute of Standards and Technology
NPU: network processing unit
NSB: Nonstop Bridging
NSR: nonstop active routing

O
OEM: Original Equipment Manufacturer
OSS: operations support systems

P
PDM: Power Distribution Module
PIC: Physical Interface Card
PIM: Protocol Independent Multicast
PLP: packet loss priority
PM: Pass-through Module
PoE: Power over Ethernet
PVST: Per-VLAN Spanning Tree

Q
QoS: Quality of Service

R
RED: random early detection
ROI: return on investment
RP: rendezvous point
RPC: remote procedure call
rpd: routing protocol process
RTG: Redundant Trunk Group
RSTP: Rapid Spanning Tree Protocol
RVI: routed VLAN interface

S
SAN: storage area network
SAP: Session Announcement Protocol
SCB: Switch Control Board
SDP: Session Description Protocol
SEA: Shared Ethernet Adapter
SMT: Simultaneous Multithreading
SNMP: Simple Network Management Protocol
snmpd: simple network management protocol process
SOA: Service Oriented Architecture
SOL: Serial over LAN
SPOF: single point of failure
STP: Spanning Tree Protocol
SSH: Secure Shell
SSL: Secure Sockets Layer
SSM: Source-Specific Multicast
syslogd: system logging process

T
TWAMP: Two-Way Active Measurement Protocol

V
VID: VLAN Identifier (IEEE 802.1Q)
VIOS: Virtual I/O Server
VLAN: Virtual LAN
VLC: VideoLAN
VPLS: virtual private LAN service
VRF: Virtual Routing and Forwarding
VRRP: Virtual Router Redundancy Protocol
VSTP: Virtual Spanning Tree Protocol

W
WPAR: Workload-based Partitioning


Appendix D: References

• www.juniper.net/techpubs/software/junos/junos90/swconfig-high-availability/swconfig-high-availability.pdf

• The Junos High Availability Configuration Guide, Release 9.0, presents an overview of high availability concepts and techniques. By understanding the redundancy features of Juniper Networks routing platforms and the Junos software, a network administrator can enhance the reliability of a network and deliver highly available services to customers.

• IEEE 802.3ad link aggregation standard

• STP - IEEE 802.1D-1998 specification

• RSTP - IEEE 802.1D-2004 specification

• MSTP - IEEE 802.1Q-2003 specification

• www.nettedautomation.com/standardization/IEEE_802/standards_802/Summary_1999_11.html

Provides access to the IEEE 802 Organization website with links to all 802 standards.

• RFC 3768, Virtual Router Redundancy Protocol

• https://datatracker.ietf.org/wg/vrrp/

Provides access to all RFCs associated with the Virtual Router Redundancy Protocol (VRRP).

• RFC 2338, Virtual Router Redundancy Protocol for IPv6

• https://datatracker.ietf.org/doc/draft-ietf-vrrp-ipv6-spec/

Provides access to the abstract that defines VRRP for IPv6.


7100125-001-EN June 2010

Data Center Network Connectivity with IBM Servers

Data Center Network Connectivity Handbook

This handbook serves as an easy-to-use reference tool for implementing a two-tier data center network by deploying IBM open systems as the server platform with Juniper Networks routing and switching solutions.

"A must-read, practical guide for IT professionals, network architects and engineers who wish to design and implement a high performance data center infrastructure. This book provides a step-by-step approach, with validated solution scenarios for integrating IBM open system servers and Juniper Networks data center network, including technical concepts and sample configurations."

− Scott Stevens, VP Technology, Worldwide Systems Engineering, Juniper Networks

"This book is a valuable resource for anyone interested in designing network infrastructure for next generation data centers... It provides clear, easy to understand descriptions of the unique requirements for data communication in an IBM open systems environment. Highly recommended!"

− Dr. Casimer DeCusatis, IBM Distinguished Engineer

