Data Center Poster

Transcript

10 GbE
GbE

Cisco Application Control Engine

Service Devices

Placement Location

Service Appliances

Area

Service Modules

Area

Cisco 3000 Series

Multifabric Server Switch

Wireless Connection

End-user Workstation

1536 GbE servers

4992 GbE attached servers

Blade Servers

Blade Servers

When using pass-through modules, dual-home servers to access/edge layer switches. Pass-through modules allow Fibre Channel environments to avoid interoperability issues while still allowing access to advanced SAN fabric features.

Use PortChannels and trunks to aggregate multiple physical inter-switch links (ISLs) into one logical link. Use VSANs to segregate multiple distinct SANs on a physical fabric, consolidating isolated SANs and SAN fabrics. Use core-edge topologies to connect multiple workgroup fabric switches when a tolerable oversubscription level is a design objective.
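As an illustration, a minimal SAN-OS-style sketch of both practices; the VSAN numbers and names, PortChannel number, and fc interface IDs are assumptions:

    ! Segregate two distinct SANs on the shared physical fabric (assumed VSANs)
    vsan database
      vsan 10 name TRANSACTIONAL
      vsan 20 name BACKUP
    ! Aggregate two physical ISLs into one logical trunking PortChannel
    interface port-channel 1
      switchport mode E
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20
    interface fc1/1
      channel-group 1 force
      no shutdown
    interface fc1/2
      channel-group 1 force
      no shutdown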

Consolidate application and security services (service modules or appliances) at the aggregation layer switches. Ensure the access layer design (whether L2 or L3) provides predictable and deterministic behavior and allows the server farm to scale up to the expected number of nodes. Use VLANs in conjunction with instances of application and security services applied to each application environment independently.

Application, security and virtualisation services provided by service modules or appliances are best offered from the aggregation layer. Services are made available to all servers, provisioning is centralized and the network topology is kept predictable and deterministic.

Select a primary aggregation switch to be the primary default gateway and STP root. Set the redundant aggregation switch as the backup default gateway and secondary root. Use HSRP and RPVST+ as the primary default gateway and STP protocols, and ensure the active service devices are in the STP root switch.
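A minimal IOS-style sketch of aligning the HSRP active gateway with the STP root, assuming VLAN 10 and illustrative addresses:

    ! Primary aggregation switch: STP root and HSRP active gateway
    spanning-tree mode rapid-pvst
    spanning-tree vlan 10 root primary
    interface Vlan10
      ip address 10.1.10.2 255.255.255.0
      standby 1 ip 10.1.10.1
      standby 1 priority 110
      standby 1 preempt

    ! Redundant aggregation switch: secondary root and standby gateway
    spanning-tree mode rapid-pvst
    spanning-tree vlan 10 root secondary
    interface Vlan10
      ip address 10.1.10.3 255.255.255.0
      standby 1 ip 10.1.10.1
      standby 1 priority 90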

Use firewalls to control the traffic path between tiers of servers and to isolate distinct application environments. Use ACE as a content switch to monitor and control server and application health, and to distribute traffic load between clients and the server farm, and between server/application tiers.
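A hedged ACE-style sketch of the health-monitoring and load-distribution pattern; the probe, rserver, and serverfarm names and addresses are assumptions:

    ! HTTP probe checks application health before traffic is sent
    probe http WEB-PROBE
      interval 10
      expect status 200 200
    rserver host WEB1
      ip address 10.1.20.11
      inservice
    rserver host WEB2
      ip address 10.1.20.12
      inservice
    ! Server farm distributes client load across healthy real servers
    serverfarm host WEB-FARM
      probe WEB-PROBE
      rserver WEB1
        inservice
      rserver WEB2
        inservice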

Deploy access layer switches in pairs to enable server dual-homing and NIC teaming. Use trunks and channels between access and aggregation switches. Carry VLANs that are needed throughout the server farm on every trunk, to increase flexibility. Trim unneeded VLANs from every trunk.
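A minimal IOS-style sketch of the trunk-and-channel uplink with VLAN pruning; the VLAN IDs and interface names are assumptions:

    ! Logical uplink carrying only the VLANs the server farm needs
    interface Port-channel1
      switchport trunk encapsulation dot1q
      switchport mode trunk
      switchport trunk allowed vlan 10,20,30
    ! Bundle two physical uplinks into the channel
    interface range GigabitEthernet1/1 - 2
      switchport mode trunk
      channel-group 1 mode desirable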

Connect access switches used in application and back-end segments to each other across application tier function boundaries through EtherChannel® links. Use VLANs to separate groups of servers by function or application service type.

Use VSANs to group isolated fabrics into a shared infrastructure while keeping their dedicated fabric services, security, and stability integral per group. Dual-home hosts to each of the SAN fabrics using Fibre Channel Host Bus Adapters (HBAs).

Use VSANs to create separate SANs over a shared physical infrastructure. Use two distinct SAN fabrics to maintain a highly available SAN environment. Use PortChannels to increase path redundancy and speed recovery from link failure. Use FSPF for equal-cost load balancing across redundant paths. Use storage virtualisation to pool distinct physical storage arrays as one, hiding physical details (arrays, spindles, LUNs).

COLLAPSED MULTITIER DESIGN

EXPANDED MULTI-TIER DESIGN

LARGE-SCALE PRIMARY DATA CENTER

DATA CENTER CORE

The Campus core provides connectivity between the major areas of an Enterprise network including the data center, extranet, Internet edge, Campus, Wide Area Network (WAN), and Metropolitan Area Network (MAN). Use a fully-meshed Campus core to provide high-speed redundant Layer 3 connectivity between the different network areas. Use dual-stack IPv6-IPv4 in all Layer 3 devices and desktop services.
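A minimal sketch of the dual-stack recommendation on a Layer 3 interface; the VLAN, the private IPv4 address, and the 2001:db8::/32 documentation prefix are assumptions:

    ! Enable IPv6 routing alongside IPv4 on the same interface
    ipv6 unicast-routing
    interface Vlan100
      ip address 10.100.0.2 255.255.255.0
      ipv6 address 2001:db8:100::2/64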

When Layer 2 is used in the Campus access layer, select a primary distribution switch to be the primary default gateway and STP root. Set the redundant distribution switch as the backup default gateway and secondary root. Use HSRP and PVRST+ as the primary default gateway and STP protocols.

Use 10GbE throughout the infrastructure (between distribution switches and between access and distribution) when high throughput is required. Use Layer 3 access switches when shared VLANs are not needed in more than one access switch at a time and very fast convergence is required.

CAMPUS CORE

Building Y

Building X

SECONDARY INTERNET EDGE AND EXTRANET

SECONDARY DATA CENTER

Use storage virtualisation to further increase the effective storage utilization and centralise management of storage arrays. Arrays form a single pool of virtual storage that is presented as virtual disks to applications.

Use a collapsed Internet Edge and extranet design for a highly centralized and integrated edge network. Edge services are provided by embedding intelligence from service modules such as firewall, content switching and SSL (ACE) and VPN modules, and appliances such as Guard XT and the Anomaly Detector for DDoS protection. Additional edge functionality includes site selector and content caching, as well as event correlation engines and traffic monitoring provided by the integrated service devices. Consider the use of dual-stack IPv4-IPv6 services in Layer 3 devices and the need to support IPv6 firewall policies and IPv6 filtering capabilities.

Use the secondary data center as a backup location that houses critical standby transactional applications (near-zero RPO and RTO) and redundant active non-transactional applications (RPO and RTO in the 12-24 hour range).

Group servers providing like functions in the same VLANs to apply a consistent and manageable set of security, SSL, load balancing, and monitoring policies. Dual-home critical servers to different access switches, and stagger primary physical connections between available access switches.

The secondary data center design is a smaller replica of the primary that houses backup critical application environments. These support business functions that must be resumed to achieve regular operating conditions.

• Modular chassis per rack group
• 8 aggregation - 16 access switches
• 16 10GbE downlinks per aggregation
• 8 10GbE uplinks per access
• Layer 3 in aggregation and access
• 8-way equal cost multipath (ECMP)
• 312 GbE ports per access switch
• 3.9:1 oversubscription
• 80 Gigabit per access switch
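The oversubscription figure follows directly from the port counts above: each access switch offers 312 GbE server-facing ports (about 312 Gb/s) against 8 x 10GbE = 80 Gb/s of uplink capacity, and 312 / 80 = 3.9, hence 3.9:1.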

The core layer is required when the cluster needs to connect to an existing IP network environment. The modular access layer switches provide access functions to groups of racks at a time. The design is aimed at reducing the hop count between any two nodes in the cluster.

CAMPUS NETWORK

Use a high-speed (10GbE) metro optical network for packet-based and transparent LAN services between distributed Campus and Data Centre environments.

MAN INTERCONNECT

Building Z

Place the Global Site Selector in the DMZ to prevent DNS traffic from penetrating the edge security boundaries. Consider the design of the Internet-facing server farm following the same best practices used in intranet server farms, with specific scalability and security requirements driven by the size of the target user population. Use the AON application gateway for XML filtering, such as schema and digital signature validation, and to provide transaction integrity and security for legacy application message formats.

Perform Distributed Denial of Service (DDoS) attack mitigation at the Enterprise Edge. Place the Guard and Anomaly Detector to detect and mitigate high-volume attack traffic by diverting it through anti-spoofing and attack-specific dynamic filter countermeasures. Use the Adaptive Security Appliance to concurrently perform firewall, VPN, and intrusion protection functions at the edge of the enterprise network. Use dual-stack IPv6-IPv4 on the edge routers and consider IPv6 firewall and filtering capabilities.
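A minimal ASA-style sketch of the edge firewall interface roles; the interface names and the documentation-range addresses are assumptions:

    ! Internet-facing interface: lowest trust level
    interface GigabitEthernet0/0
      nameif outside
      security-level 0
      ip address 198.51.100.2 255.255.255.0
    ! Enterprise-facing interface: highest trust level
    interface GigabitEthernet0/1
      nameif inside
      security-level 100
      ip address 10.1.1.1 255.255.255.0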

Use a dedicated extranet as a highly scalable and secure termination point for IPsec and SSL VPNs to support business partner connectivity. Apply the intranet server farm design best practices to the specific partner-facing application environments, considering their specific security and scalability requirements. Consider the use of the Application Control Engine (ACE) to provide high-performance, highly scalable load balancing, SSL, and application security capabilities per partner profile by using virtual instances of these functions.

INTERNET EDGE

EXTRANET

Partners

VPN

Service Provider 1

Service Provider 2

INTERNET

Use a non-blocking design for server clusters dedicated to computational tasks. In a non-blocking design, for every HCA connected to the edge/access layer there is an uplink to an aggregation layer switch.

Topology Details
Edge
• 24 switches - 24 ports each
• 12 servers per switch
• 12 uplinks to aggregation layer
Aggregation
• 12 switches - 24 ports each
• 1 or more uplinks to each core switch
Core
• number of core switches based on connectivity needs to IP network
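These counts are self-consistent: 24 edge switches x 12 servers each = 288 nodes, matching the 288 Node Infiniband Cluster shown in the diagram.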

Integrate wireless controllers at the distribution layer and wireless access points at the access layer. Use EtherChannel between the distribution switches to provide redundancy and scalability. Dual-home access switches to the distribution layer to increase redundancy by providing alternate paths. Consider the use of dual-stack IPv4-IPv6 services at the access, distribution and core Layer 3 and/or Layer 2 devices.

To achieve additional redundancy on an HA server cluster, distribute a portion of the servers in the HA cluster to a different data center. This distribution of HA clusters across distributed data centers, referred to as geo-clusters or stretched clusters, often requires Layer 2 adjacency between distributed nodes. Adjacency means the same VLAN (IP subnet) and VSAN have to be extended over the shared transport infrastructure between the distributed data centers. The HA cluster then spans multiple geographically distant data center hosting facilities.

Use a dual-fabric (fabrics A and B) topology to achieve high resiliency in SAN environments. A common management VSAN is recommended to allow the fabric manager to manage and monitor the entire network environment.

Use ACE as a content switch to scale application services including SSL off-loading on server farms. Use virtual firewalls to isolate application environments. Use AON to optimise inter-application security and communications services, and to provide visibility into real-time transactions. Use MARS to detect security anomalies by correlating data from different traffic sources.

Use a SONET/SDH transport network for FCIP, in addition to voice, video, and additional IP traffic, between distributed locations in metro or long-haul environments. Consider the use of RPR/802.17 technology to create a highly available MAN core for distributed locations.

Use a DWDM/SONET/SDH/Ethernet transport network to support high-speed, low-latency uses, such as synchronous data replication between distributed disk subsystems. The common transport network supports multiple protocols such as FC, GbE, and ESCON concurrently.


Use dual Integrated Services Routers for a large branch office. Each router is connected to different WAN links for higher redundancy. Use dual-stack IPv6-IPv4 services on Layer 3 devices. Use the integrated IOS firewall and intrusion prevention for edge security and the integrated Call Manager for remote voice capabilities. Consider IPv6 firewall policies, filtering and DHCP prefix delegation when IPv6 traffic is expected to/from branch offices. Use integrated Wide Area Engines for file caching, local video broadcast, and static content serving.

Consider the storage network design and available storage connectivity options: FC, iSCSI and NAS. Plan the data replication process from the branch to headquarters based on latency and transaction rate requirements. Consider QoS classification to ensure the different types of traffic match the loss and latency requirements of the applications.
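A hedged IOS MQC sketch of that QoS classification on the branch WAN edge; the class names, DSCP matches, bandwidth figures, and interface are assumptions:

    ! Classify traffic by DSCP marking (assumed values)
    class-map match-any VOICE
      match ip dscp ef
    class-map match-any VIDEO
      match ip dscp af41
    ! Give voice strict priority, video a bandwidth guarantee
    policy-map BRANCH-WAN-EDGE
      class VOICE
        priority percent 20
      class VIDEO
        bandwidth percent 25
      class class-default
        fair-queue
    interface Serial0/0/0
      service-policy output BRANCH-WAN-EDGE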

LARGE BRANCH OFFICE

Large branches have LAN/SAN designs similar to small data centers and small campuses for server, storage, and client connectivity. Use Layer 3 switches to house branch-wide LAN services and to provide connectivity to all required access layer switches. In the LAN design, consider a number of VLANs based on branch functions such as server farm, point of sale, voice, video, data, wireless and management.

HIGH AVAILABILITY SERVER CLUSTER

Consider the home office as an extension of the enterprise network. Basic services include access to applications and data, voice and video. Use VPN to ensure security for teleworker environments while upholding the corporate security policies. Also consider the use of wireless access as a secure and reliable extension of the enterprise network. Enable QoS to police and enforce service levels for voice, data and video traffic. Consider a dual-stack IPv6-IPv4 router to support IPv6 remote devices. Security policies for IPv6 traffic should include IPv6 filtering and IPv6-capable firewalls.

REMOTE OFFICES

HOME OFFICE SMALL OFFICE

PSTN

Use the WAN as the primary path for user traffic destined for the intranet server farm. Use DNS and RHI to control the granularity with which applications are independently advertised and to track the state of distributed application environments. Ensure the proper QoS classification is used for voice, data and video traffic. Use dual-stack IPv6-IPv4 in all Layer 3 devices.

CAMPUS CORE

SECONDARY CAMPUS NETWORK

WIDE AREA NETWORK

Use an Infiniband fabric for applications that execute a high rate of computational tasks and require low latency and high throughput. Select the proper oversubscription rate between edge and aggregation layers for intracluster purposes, or between aggregation, core and the Ethernet fabric. In a 2:1 blocking topology, for every two HCAs connected to edge switches there is one uplink to an aggregation layer switch. Core switches provide connectivity to the Ethernet fabric. Use VFrame to manage the I/O virtualisation capabilities of server fabric switches.

HIGH PERFORMANCE INFINIBAND CLUSTER

Use a NOC VLAN to house critical management tools and to isolate management traffic from client/server traffic. Use NTP, SSH-2, SNMPv3, CDP and Radius/TACACS+ as part of the management infrastructure. Use CiscoWorks LMS to manage the network infrastructure and monitor IPv4-IPv6 traffic, and the Cisco Security Manager to control, configure and deploy firewall, VPN and IPS security policies. Use the Performance Visibility Manager to measure end-to-end application performance. Use the Monitoring, Analysis, and Response System to correlate traffic for anomaly detection purposes. Use the Network Planning Solution to build network topology models for failure scenario analysis and other what-if scenarios based on device configuration, routing tables, NAM and NetFlow data. Use the MDS Fabric Manager to manage the storage network. Use NetFlow and the Network Analysis Module for capacity planning and traffic profiling.
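A partial IOS-style sketch of that management baseline (NTP, SSH-2, SNMPv3, TACACS+); the server addresses, group/user names, and keys are placeholder assumptions:

    ! Time sync and secure management access
    ntp server 10.0.100.5
    ip ssh version 2
    ! SNMPv3 with authentication and privacy (assumed credentials)
    snmp-server group NOC-GROUP v3 priv
    snmp-server user noc-admin NOC-GROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123
    ! Centralized AAA via TACACS+ with local fallback
    aaa new-model
    aaa authentication login default group tacacs+ local
    tacacs-server host 10.0.100.6 key tac-key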

Attach integrated Infiniband switches to Server Fabric Switches acting as gateways to the Ethernet network. Connect the gateway switches to the aggregation switches to reach the IP network.

HIGH DENSITY ETHERNET CLUSTER

BLADE SERVER COMPLEX

High availability clusters consist of multiple servers supporting mission-critical applications in business continuance or disaster recovery scenarios. The applications include databases, filers, mail servers or file servers. The nodes of a single application cluster use a clustering mechanism that relies on unicast packets if there are two nodes, or multicast if using more than two nodes. The nodes back up each other and use heartbeats to determine node status. The network infrastructure supporting the HA cluster is shared by other server farms. Additional VLANs and VSANs are used to connect the additional NICs and HBAs required by the cluster to operate. The application data must be available to all nodes in the cluster. This requires the disk to be shared, so it cannot be local to each node. The shared disk or disk array is accessible through IP (iSCSI or NAS), Fibre Channel (SAN) or shared SCSI. The transport technologies that can be used to connect the LANs and SANs of the data centers include Dark Fiber, DWDM, CWDM, SONET, Metro Ethernet, EoMPLS and L2TPv3, among the options shown above.

NOC VLAN/VSAN

SERVER CLUSTER VSAN

4992 Node Ethernet Cluster

288 Node Infiniband Cluster

VLAN X VLAN Y VSAN P VSAN Q

The nodes in HA clusters are linked to multiple networks using existing network infrastructure. Use the private network for heartbeats and the public network for inter-cluster communication and client access. Nodes in distributed data centers may need to be in the same subnet, requiring Layer 2 adjacency.

PUBLIC NETWORK

PRIVATE NETWORK

METRO ETHERNET

The transport network supports multiple communication streams between nodes in the stretched clusters. Use multiple VLANs to separate intracluster traffic from client traffic. Use multiple SAN fabrics to provide path redundancy for the extended SAN. Use multipathing on the hosts and IVR between the SAN fabric Directors to take advantage of the redundant fabrics. Use write acceleration to improve the performance of the data replication process. Consider the use of encryption to secure data transfers and compression to increase the data transfer rates.
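On MDS directors, the write-acceleration and IVR pairing might look like this sketch; the FCIP profile/interface numbers and the documentation-range peer addresses are assumptions:

    ! Enable write acceleration on the FCIP tunnel carrying replication
    fcip enable
    fcip profile 1
      ip address 192.0.2.10
    interface fcip1
      use-profile 1
      peer-info ipaddr 192.0.2.20
      write-accelerator
      no shutdown
    ! Inter-VSAN Routing between the redundant extended fabrics
    ivr enable
    ivr vsan-topology auto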

FABRIC C

FABRIC D

PUBLIC VLANs

CLUSTER VLANs

SERVER CLUSTER VSAN

HIGH AVAILABILITY SERVER CLUSTER

NETWORK OPERATIONS CENTER (NOC)

PRIMARY SITE SECONDARY SITE

Topology details for 1024 servers include:
Edge:
• 96 switches - 24 ports each - 12 servers per switch
• 12 uplinks to aggregation layer
Aggregation:
• 6 switches - 96 ports each
• 12 downlinks - one per edge switch
Core:
• Number of switches based on fabric connectivity needs

A KEY FOUNDATION OF CISCO SERVICE-ORIENTED NETWORK ARCHITECTURE

DATA CENTRE NETWORKED APPLICATIONS BLUEPRINT

Consider an integrated services design for a full service branch environment. Services include voice, video, security and wireless. Voice services include IP phones, local call processing, local voice mail, and VoIP gateways to the PSTN. Security services include integrated firewall, intrusion protection, IPsec and admission control. Connect small office networks to headquarters through VPN, and ensure QoS classification and enforcement provides adequate service levels to the different traffic types. Configure multicast for applications that require concurrent recipients of the same traffic. Consider a dual-stack IPv4-IPv6 router to support IPv6 traffic. Ensure IPv6 firewall rules and filtering capabilities are enabled on the router.

Use a data center core layer to support multiple aggregation modules. Use multiple aggregation modules when the number of servers per module exceeds the capacity of the module. Connect the data center core switches to the campus core switches to reach the rest of the Enterprise network. Consider the use of 10GbE links between core and aggregation switches. Use dual-stack IPv6-IPv4 in all Layer 3 devices in the data center, and identify the server farm requirements to ensure IPv6 traffic conforms to firewall and filtering policies.

High density Ethernet clusters consist of multiple servers that operate concurrently to solve computational tasks. Some of these tasks require a certain degree of processing parallelism while others require raw CPU capacity. Common applications of large Ethernet clusters include large search engine sites and large web server farms. The diagram shows a tiered design using “top of rack” 1RU access switches for a total of 1536 servers.

Topology Details:

- 8 core switches connected to each aggregation module through a 10GbE link per switch
- 4 aggregation modules, each with 2 Layer 3 switches that provide 10GbE connectivity to the access layer switches
- 8 access switches per aggregation module, each switch connecting to 2 aggregation switches through 10GbE links
- Each access layer switch supports 48 10/100/1000 ports and 2 10GbE uplinks
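The server total follows from these figures: 4 aggregation modules x 8 access switches x 48 GbE ports = 1536 servers, matching the count given above.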

Consider blade server direct attachment and network fabric options: pass-through modules or integrated switches, and Fibre Channel, Ethernet and Infiniband.

Data Center Fundamentals: www.ciscopress.com/datacenterfundamentals

Design Best Practices: www.cisco.com/go/datacenter

Infiniband Network

In an integrated Ethernet switch fabric, set up half the blades active on switch 1 and half active on switch 2. Dual-home each Ethernet switch to Layer 3 switches through GbE channels. Use RPVST+ for fast STP convergence. Use link-state tracking to detect uplink failure and allow the blade's standby NIC to take over.
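A minimal sketch of link-state tracking on an integrated blade switch; the group number and interface IDs are assumptions:

    ! Tie blade-facing ports to the state of the uplinks
    link state track 1
    ! Uplink toward the Layer 3 aggregation switches
    interface GigabitEthernet0/17
      link state group 1 upstream
    ! Blade server port: goes down if all upstream links fail,
    ! forcing the blade's standby NIC to take over
    interface GigabitEthernet0/1
      link state group 1 downstream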

Place all network-based service devices (modules or appliances) at the aggregation layer to centralise the configuration and management tasks and to leverage service intelligence applied to the entire server farm.

Designed By:

Mauricio Arregoces [email protected]

Questions:

[email protected]

Part #: 910300406R01
