Industry Brief
Streamlining Server Connectivity: It Starts at the Top
Featuring vNET™ I/O Maestro
Where IT perceptions are reality
Copyright © 2013 IT Brand Pulse. All rights reserved. Document # INDUSTRY2013002 v11, February 2013
Discrete Data Centers
The High Cost and Complexity of Server I/O Stands Out
While discussions about data center architecture focus on pools of virtualized resources, over half of existing infrastructure is still based on a discrete data center architecture. Discrete data centers consist of islands of non-virtualized servers, storage and networking deployed for specific applications.
Deployment of Server I/O Did Not Keep Pace
As global business data exploded, the technologies used to efficiently scale IT islands did not keep pace. Server I/O stands out as an example: the cost of network adapters and the complexity of thousands of cables have outpaced budgets and cable management systems.
450 Server Case Study
In a recent case study by IT Brand Pulse, a manufacturer with 2,500 employees had 450 servers in its main data center. To illustrate the high cost and complexity of server I/O in discrete data centers, look at what is required for that organization to deploy 450 servers, starting with network adapters. In demanding application environments where 10GbE LAN and 8Gb FC SAN technologies are deployed, the average cost per port is approximately $400. With an average of 8 ports per server, 2 of which are on-board the server motherboard, the cost of the 3 additional network adapters (6 ports at $400 each) is $2,400, equal to the cost of many rack mount servers. As for cabling, deploying 450 rack mount servers typically requires 30 racks with 15 servers per rack. With an average of 8 network ports per server, each rack streams 120 I/O cables through the ceiling or floor to network switches. The quantity for all 30 racks totals 3,600 I/O cables.
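The case-study arithmetic is simple enough to check in a few lines. The sketch below is a minimal back-of-the-envelope model in Python, using only the figures quoted above ($400 per port, 8 ports per server, 2 on-board, 15 servers per rack):

```python
# Back-of-the-envelope model of the 450-server discrete I/O case study.
# All figures come from the case study above; nothing vendor-specific.

SERVERS = 450
PORTS_PER_SERVER = 8        # average LAN + SAN ports per server
ONBOARD_PORTS = 2           # LOM ports on the motherboard
COST_PER_PORT = 400         # approx. $ per 10GbE / 8Gb FC port
SERVERS_PER_RACK = 15

adapter_ports = PORTS_PER_SERVER - ONBOARD_PORTS          # 6 ports on add-in adapters
adapter_cost_per_server = adapter_ports * COST_PER_PORT   # 6 x $400 = $2,400

racks = SERVERS // SERVERS_PER_RACK                       # 30 racks
cables_per_rack = SERVERS_PER_RACK * PORTS_PER_SERVER     # 120 cables per rack
total_cables = racks * cables_per_rack                    # 3,600 I/O cables

print(f"{racks} racks, ${adapter_cost_per_server:,} in adapters per server, "
      f"{total_cables:,} I/O cables")
```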
Server I/O: Server adapters used to get data in and out of servers in data centers include 10GbE NICs, Fibre Channel HBAs, iSCSI HBAs, 10GbE CNAs, and InfiniBand HCAs.
At 15 servers per rack, it takes 30 racks to house 450 servers. With 8 network ports per server, each rack requires 120 network cables.
450 servers x 8 network cables = 3,600 cables, which must be channeled more than 10 miles from the data center racks through the floor or ceiling.
A typical midrange server has a combination of SAN and LAN ports totaling 8 ports per server.
Virtualized Data Centers
Virtual I/O Emerges as an Important Category of Virtual Infrastructure
The epic migration to virtualized data centers is well underway. While more than half of installed infrastructure is still part of a discrete architecture, the percentage of new workloads deployed into a virtualized infrastructure has grown to over 70%. The IT community, having familiarized itself with server virtualization over the last several years, has now set out to increase the number of VMs per server. IT professionals polled by IT Brand Pulse expect the number of VMs per server to double in the next 24 months. What IT professionals said they need most to increase the density of VMs per server is more memory and more I/O bandwidth. With three times the memory capacity of the previous generation, a new generation of servers is addressing the need for more memory. Virtual I/O has emerged to efficiently deliver more I/O bandwidth by virtualizing physical links and supporting multiple I/O protocols in each virtual I/O system.
Survey chart: The average number of VMs per server in my environment. IT professionals expect the number of VMs per server in their environment to double in the next 24 months.
Survey chart: What I need most to increase the density of VMs per physical server. What IT professionals need most to increase VM density is more memory and I/O bandwidth.
Virtual I/O: The ability to virtualize physical 10GbE NICs, Fibre Channel HBAs, iSCSI HBAs, 10GbE CNAs, and InfiniBand HCAs into multiple virtual adapters. The result is fewer adapters, fewer cables and lower cost.
It Starts at the Top
Top-of-Rack 1.0
Consolidating server connectivity started with the use of switches to aggregate network cables at the top of server racks, with only a few uplink cables running across the data center from ToR switches to core switches. This is an effective solution for reducing the number of cables from a server rack, but it does nothing to reduce the number of expensive network adapters or the quantity of cables inside a server rack.
Top-of-Rack 2.0
The advent of 10Gbps converged networks allowed ten 1GbE server links to be consolidated into one server link. This technology also enabled multiple ToR switches to be replaced by a single converged switch supporting TCP/IP LAN and NAS traffic, as well as FCoE and iSCSI SAN traffic. But 65% of network ports in data centers today remain 1GbE, and the adoption of converged networks has been slow. Because adoption of FCoE has been limited, most server racks still include separate Ethernet, Fibre Channel and InfiniBand adapters and ToR switches.
Top-of-Rack 3.0
ToR 3.0 solutions consist of virtual I/O systems. Virtual I/O systems overcome the limitations of ToR 1.0 and 2.0 solutions by replacing all types of network adapters with a single protocol-agnostic PCIe server adapter, forever eliminating multiple I/O cables from the server. In addition, a single ToR appliance provides hundreds of virtual NICs, HBAs and HCAs, while eliminating the need for Ethernet, Fibre Channel and InfiniBand switches.
ToR: Switches, virtual I/O appliances and storage systems installed top-of-rack for sharing by servers in that rack domain.
Survey chart: My virtualized servers are configured in "rack domains" with (select all that apply). Only 14% of IT Pros in large enterprises and HPC environments have not implemented a rack domain with some form of ToR technology.
Multi-Protocol I/O for 15 Servers

            Ext. cables   Adapters   ToR switches/appliances
No ToR      120           45         0 switches
ToR 1.0     8             45         3 switches
ToR 2.0     8             30         2 switches
ToR 3.0     8             15         1 appliance
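The per-rack figures in the table follow from simple per-server assumptions. The sketch below derives them; note that the per-server adapter counts (3, 3, 2, 1) are inferred from the table's adapter totals rather than stated explicitly in the text:

```python
# Derives the "Multi-Protocol I/O for 15 Servers" figures above.
# Per-server adapter counts are inferred from the table (45/15 = 3, etc.).

SERVERS_PER_RACK = 15
PORTS_PER_SERVER = 8

# name: (adapters per server, ToR switches/appliances, external cables)
scenarios = {
    "No ToR":  (3, 0, SERVERS_PER_RACK * PORTS_PER_SERVER),  # every cable leaves the rack
    "ToR 1.0": (3, 3, 8),  # only a few uplinks leave the rack
    "ToR 2.0": (2, 2, 8),
    "ToR 3.0": (1, 1, 8),
}

for name, (adapters, tor_devices, ext_cables) in scenarios.items():
    print(f"{name}: {ext_cables} ext. cables, "
          f"{SERVERS_PER_RACK * adapters} adapters, {tor_devices} ToR devices")
```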
Top-of-Rack Industry Road Map
No Top of Rack: Multiple adapters per server; servers cabled to external Ethernet, FC, FCoE and IB switches. Pros: Isolated networks for security. Cons: Too many expensive server adapters; too many cables in the rack, floor and ceiling.
1.0 ToR Switches: Multiple adapters per server; Ethernet, FC, FCoE and IB ToR switches. Pros: Reduced cables from rack to floor and ceiling. Cons: Too many expensive server adapters; multiple ToR switches needed; too many cables in the rack.
2.0 Converged Fabric: 10GbE CNAs and IB adapters; converged 10GbE ToR switches and InfiniBand ToR switches. Pros: Fewer adapters and switches. Cons: Limited adoption of expensive 10GbE and converged networks; InfiniBand not part of converged networks.
3.0 Virtual I/O: One adapter per server; ToR virtual I/O to any network. Pros: Only one adapter needed; only one ToR system needed. Cons: Designed for 20-30 server domains.
EOR: Switches, virtual I/O appliances and storage systems installed end-of-row for sharing by servers in multiple racks.
Anatomy of a Virtual I/O System
Discrete Infrastructure: Server, storage and network piece-parts architected, integrated and deployed in application silos.

Figure: Virtual I/O Appliance, front and rear views, with the following callouts.
PCIe Bus Extenders: One PCIe bus extension card in each server provides single- or dual-port connectivity to the virtual I/O appliance.
Scalable Server Connectivity: Slots in the rear enable server connectivity to scale cost effectively.
Operating Systems: Off-the-shelf drivers are used for standard NICs, HBAs and HCAs in the virtual I/O appliance.
Universal LAN, SAN and Cluster Connectivity: A virtual I/O appliance connects to any LAN, SAN or HPC cluster.
Standard Server Adapters: Universal connectivity is achieved with a modular design which uses standard server adapters as network interfaces.
vNICs and vHBAs: A few physical adapters in the virtual I/O appliance are transformed into hundreds of vNICs, vHBAs and vHCAs which can be provisioned to servers.
NextIO, Inc.
The Virtual I/O Innovation Leader
NextIO was founded with a vision of creating shared server I/O resource pools. To that end, NextIO pioneered
any-to-any connectivity among a wide variety of data center resources. The NextIO architecture gives data
center managers a blueprint for consolidating, sharing and provisioning server I/O at top of rack.
NextIO separates networking and storage I/O from the compute nodes within
the rack and creates pools of virtual I/O resources that may be shared by
multiple servers and dynamically allocated among the servers in the rack.
Instead of over-provisioned, fixed, and underutilized resources per server, the NextIO architecture allows infrastructure such as Ethernet, Fibre Channel, Flash SSD and GPU accelerators to be fully utilized and provisioned based on application needs. By basing the solution on industry-standard PCIe, NextIO delivers a simple, low-cost top-of-rack architecture that can be used by every server and I/O device.
The Virtual I/O Market Leader
NextIO is recognized by the industry as a technology pioneer, and by IT professionals in multiple categories of
I/O Virtualization leadership. In the 2012 I/O Virtualization Brand Leader Survey, NextIO swept the leader
awards after being selected by IT professionals in the Market, Performance, Price, Reliability, Innovation and
Service & Support categories.
Market Leader: 55.9% of IT Pro respondents selected NextIO as I/O Virtualization Market Leader, 30.4% more than the second-place vendor.
In March of 2012, IT professionals in SMBs, large enterprises and HPC environments were asked who they perceive as the I/O virtualization leader in six different categories. NextIO was selected over other I/O virtualization vendors in all six categories.
vNET I/O Maestro
The First in a New Class of ToR 3.0 Virtual I/O Appliances
Best-in-class virtual I/O technology is embodied in the vNET I/O
Maestro from NextIO. vNET I/O Maestro is a rack-level appliance that
simplifies the deployment and management of complex server I/O.
vNET I/O Maestro eliminates the need for individual physical storage
and networking adapters to be installed in every server by
consolidating these devices into a shared pool of I/O resources. vNET
replaces the I/O resources of physical servers with virtual NICs and
virtual HBAs that can be dynamically deployed and re-allocated to servers any time a workload changes. The
virtual I/O resources function exactly like traditional server I/O and appear to the OS and application just like
physical NICs and HBAs, so they require no application or OS modification. vNET I/O Maestro traffic appears
as traditional server I/O to the network and SAN resources. Its ports are discovered and managed as physical
entities so they do not require any changes to your infrastructure. vNET I/O Maestro also consolidates
multiple Ethernet and Fibre Channel cables per server into a single industry standard PCI Express® cable (or
two for redundancy) and eliminates the corresponding network and storage leaf switches from the rack.
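To make the provisioning idea concrete, here is a minimal sketch of the concept: a shared pool of virtual devices backed by a few physical adapter ports, carved up and reassigned as workloads move. The class and method names are hypothetical illustrations of the technique, not NextIO's actual management API:

```python
# Hypothetical model of virtual I/O provisioning: a pool of vNICs/vHBAs
# backed by a few physical adapters in a ToR appliance is assigned to
# servers on demand and reclaimed when workloads move. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualIOPool:
    kind: str                 # "vNIC" or "vHBA"
    capacity: int             # virtual devices the physical ports can back
    assigned: dict = field(default_factory=dict)  # server -> device count

    def provision(self, server: str, count: int) -> None:
        in_use = sum(self.assigned.values())
        if in_use + count > self.capacity:
            raise RuntimeError(f"{self.kind} pool exhausted")
        self.assigned[server] = self.assigned.get(server, 0) + count

    def reclaim(self, server: str) -> None:
        # Freed virtual devices return to the shared pool.
        self.assigned.pop(server, None)

# A few physical adapters back hundreds of virtual devices.
vnics = VirtualIOPool("vNIC", capacity=200)
vhbas = VirtualIOPool("vHBA", capacity=200)
vnics.provision("server-01", 4)   # reallocated any time a workload changes
vhbas.provision("server-01", 2)
vnics.reclaim("server-01")        # capacity freed for another server
```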
CapEx: NextIO vNET I/O Maestro is designed to reduce capital expenditures (CapEx) up to 40%.
NextIO vNET I/O Maestro:
- vNICs and vHBAs
- Passive PCIe bus extender
- Single 10Gb or 20Gb cable per server (dual for redundancy)
- Single vNET I/O Maestro (dual for redundancy)
- Up to 30 physical servers per vNET I/O Maestro
- Up to 8 I/O modules (any combination of 10GbE or 8Gb FC)
- 10GbE uplink ports
- 8Gb FC uplink ports
450 Servers with vNET I/O Maestro
The Low Cost and Simplicity of vNET I/O Maestro Stands Out
To illustrate the low cost and simplicity of server I/O in virtualized data centers, look again at what's required to deploy 450 servers, this time with NextIO vNET I/O Maestro. Starting with network adapters, where high-performance 10GbE LAN and 8Gb FC SAN technologies are deployed, the average cost per port is approximately $400. Without vNET I/O Maestro, an average of 6 additional adapter ports are required, with the cost for network adapters totaling $2,400, equal to the cost of many rack mount servers. With vNET I/O Maestro, only 1 adapter port per server is required, with the cost for network adapters totaling $250, a small fraction of the cost of a rack mount server. As for cabling, deploying 450 rack mount servers typically requires 30 racks with 15 servers per rack. Using ToR switches or vNET I/O Maestro, each rack with 15 servers would have 45 I/O cables, for a total of 1,350 cables streaming through the ceiling or floor to end-of-row or core switches. However, vNET I/O Maestro has the unique ability to reduce the quantity of network adapters to one per server, in many cases allowing 2U servers to be used instead of 4U servers. The result is that the number of data center cabinets, and the floor space they occupy, is cut in half.
In summary, the low cost and simplicity of vNET I/O Maestro stand out. For a 450 server deployment, the CapEx savings from eliminating roughly 2,700 cables and 1,350 network adapters would be in the range of $1.5M. Over time, the OpEx savings would exceed that amount as network and cable management and service are vastly simplified.
At 15 servers per rack, it still takes 30 racks to house 450 servers.
OpEx: NextIO vNET I/O Maestro is designed to reduce operational expenditures (OpEx) up to 60%.
Because 1 pair of redundant bus extenders replaces six network adapters, 2U servers replace 4U servers and the quantity of data center cabinets is reduced from 30 to 15.
The number of network ports per server is reduced from 2 LOM ports plus a combination of 6 NIC, HBA and HCA ports, to 2 LOM ports plus 2 PCIe bus extender ports.
Cables which must be channeled from the data center racks through the floor or ceiling are reduced from 3,600 to 1,350 with ToR switches and virtual I/O appliances.
vNET I/O Maestro Advantage
46%: The amount saved by deploying vNET I/O Maestro instead of ToR switches.
I/O for 450 Servers                                ToR Switches   vNET Maestro   vNET Advantage

Servers
  4U servers                                       450            0
  2U servers                                       0              450
  Cost of 450 servers                              $3,825,000     $3,150,000     18%

Data Center Cabinets
  Qty. of data center cabinets                     30             15
  Cost of data center cabinets                     $75,000        $37,500        50%

Adapters
  Qty. of servers per cabinet                      15             30
  10GbE, FC, IB adapters per server                3              0
  10GbE, FC, IB adapter ports per server           6              0
  10GbE, FC, IB adapter cost per server            $2,400         $0
  PCIe bus extender ports per server               0              2
  Adapters and bus extenders per server            3              2
  PCIe bus extender cost per server                $0             $200
  Cost of adapters for 450 servers                 $1,080,000     $180,000       83%

Network Cables
  Network cables per server                        8              2
  Internal 15 ft network cables                    3,600          900
  Miles of cables                                  10.2           3
  Cost of cables for 450 servers                   $180,000       $45,000        75%

Switches & Appliances
  10GbE, FC, IB switches per cabinet               4              0
  10GbE, FC, IB switch cost per cabinet            $20,000        $0
  Virtual I/O appliances per cabinet               0              2
  Virtual I/O appliance cost per cabinet           $0             $25,000
  Cost of switches and appliances for 450 servers  $2,400,000     $750,000       69%

Total
  Total cost of cabinets, adapters, switches,
  appliances & cables for 450 servers              $3,735,000     $1,012,500     73%
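The table's totals can be verified directly from its own line items. The sketch below recomputes them (server hardware excluded, matching the Total row) and shows how the 46% headline figure plausibly follows once the server rows are included:

```python
# Recomputes the comparison table's totals from its own line items.
tor  = {"cabinets": 75_000, "adapters": 1_080_000, "cables": 180_000, "switches": 2_400_000}
vnet = {"cabinets": 37_500, "adapters": 180_000, "cables": 45_000, "appliances": 750_000}

tor_total, vnet_total = sum(tor.values()), sum(vnet.values())
print(f"ToR: ${tor_total:,}  vNET: ${vnet_total:,}  "
      f"advantage: {1 - vnet_total / tor_total:.0%}")  # 73%, the table's Total row

# Adding the server hardware rows ($3,825,000 vs $3,150,000) brings the
# overall advantage close to the 46% headline figure quoted above.
print(f"with servers: {1 - (vnet_total + 3_150_000) / (tor_total + 3_825_000):.0%}")
```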
Streamlining Server Connectivity
The Bottom Line
Streamlining network connectivity for servers starts at the top of the rack. It's simple: if you like how server virtualization slashes your investment in servers and simplifies server management, you'll appreciate how virtual I/O cuts the need for hundreds of expensive network adapters and thousands of cables.
Related Links
To learn more about the companies, technologies, and products mentioned in this report, visit the following web pages:
NextIO, Inc.
vNET I/O Maestro
IT Brand Pulse

About the Author
Frank Berry is founder and senior analyst for IT Brand Pulse, a trusted source of data and analysis about IT infrastructure, including servers, storage and networking. As former vice president of product marketing and corporate marketing for QLogic, and vice president of worldwide marketing for the automated tape library (ATL) division of Quantum, Mr. Berry has over 30 years of experience in the development and marketing of IT infrastructure. If you have any questions or comments about this report, contact [email protected].