eGuide
IN THIS eGUIDE
NETWORKING REDEFINED
We’re living in an era of server consolidation, virtualization, green initiatives and cloud computing—initiatives throwing the data center network into a state of flux. Is legacy infrastructure, typically comprising multiple switching tiers running proprietary protocols, capable of handling next-generation, dynamic application demands? Or is it time for a network overhaul built on the concepts of open, virtual switching, unified fabrics and bandwidths of 10 Gigabit Ethernet and beyond? In these articles, Network World examines how the data center network is evolving into a more simplified, open infrastructure.
Data Center Derby Heats Up: Handicapping the crowded field, from the odds-on favorites to the long shots
10G Ethernet Shakes Net Design to the Core: Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics
Remaking the Data Center: Low-latency switches are the foundation for building a unified-fabric data center
Standards for Soothing Headaches in the Data Center: Emerging IEEE specifications aim to address serious management issues raised by the explosion of virtual machines
A Bridge to Terabit Ethernet: With 40/100G Ethernet products on the way, Ethernet experts look ahead to Terabit Ethernet standards and products by 2015
Data Center as Ethernet Switch Driver: How next-generation data center initiatives shape the LAN switching market
Networking Resources
DATA CENTER DERBY HEATS UP
By Beth Schultz • Network World
Handicapping the crowded field, from the odds-on favorites to the long shots
Network thoroughbred Cisco jumps into the blade server market. Server stallion HP adds security blades to its ProCurve switches. IBM teams up with Brocade. Oracle buys Sun. And everybody courts that prize filly VMware.
In this era of server consolidation and virtualization, green initiatives and cloud computing, the data center is in flux and all the major vendors are jockeying for position, galloping in with new products, strategies and alliances.
“What you see right now is everybody shoring up and getting as many offerings as they can to provide all the hardware in the data center. Cisco, for example, wants to make it so you can be a complete Cisco shop, including all your servers,” says Mitchell Ashley, principal consultant with Converging Networks and a Network World blogger.
Cisco’s blade servers are part of its data center platform, called the Unified Computing System (UCS), which includes storage, network and virtualization resources. Cisco’s platform includes VMware’s vSphere technology and partnerships with BMC Software, EMC, Intel, Microsoft and Oracle.
But Cisco’s entry into the data center fray has kicked up some dust among its longtime server partners HP and IBM, and forced all of the major players to respond in some way. “Cisco has been so successful in the network space, all the other vendors have to take it seriously at the data center level,’’ says Anne Skamarock, a research director at Focus Consulting.
The resultant flurry of activity has included:
• HP releasing the BladeSystem Matrix, a converged software, server, storage and network platform.
• IBM deepening its relationship with Brocade, deciding to sell Brocade’s Foundry switches and routers under the IBM banner.
• Juniper unveiling Stratus Project, a multiyear undertaking through which it will partner with server, storage and software companies to develop a converged data center fabric.
• Oracle buying Sun for its hardware and software, then grabbing Virtual Iron for its Xen-based hypervisor.
“Everything is pointing to a unified fabric,” says John Turner, director of network and systems at Brandeis University in Waltham, Mass.
“We’re in a transition, and it’s very important not to just
buy who you bought from before. This is a great time to evalu-
ate your vendors, ask about long-term road maps and part-
nerships, see how integrated they are,” says Yankee Group
analyst Zeus Kerravala. “I wouldn’t make any decisions hastily
if I were in IT.”
This industry shakeup also could provide an opportunity for
some long-shot vendors to make a move on the leaders. Kerrav-
ala puts Brocade in this category because of its storage and net-
work strengths, Citrix Systems for virtualization, F5 Networks for
networking, and Liquid Computing for fabric computing. “These
could be the dark horses,” he says.
Turner agrees that opportunities are available for the right
vendors. “I’m happy with my Cisco network. I’m thrilled with
it. No, I’m wowed by it. But that doesn’t mean there isn’t an
opportunity for another vendor to come in, pique my inter-
est, gain my respect and get in here,” Turner says. “This is an
opportunity to take a big leap. Companies are going to be
doing big refreshes.”
These changing times for IT infrastructure require an open
mind, says Philip Buckley-Mellor, a designer with BT Vision,
a provider of digital TV service in London. Yet Buckley-Mellor
admits he can’t imagine BT Vision’s future data center with-
out HP at the core.
Buckley-Mellor expects most of Vision’s data center opera-
tions to run on HP’s latest blades, the Intel Nehalem multicore
processor-based G6 servers. The infrastructure will be virtualized
using VMware as needed. HP’s Virtual Connect, a BladeSystem
management tool, is an imperative.
“The ability to use Virtual Connect to re-patch our re-
sources with networks and storage live, without impacting
any other service, without having to send guys out to site,
without having the risk of broken fibers, has shaved at least
50%, and potentially 60% to 70%, off the time it takes to
deploy a new server or change the configuration of existing
servers,” Buckley-Mellor says.
Within another year or so, he expects Vision to move to a Ma-
trix-like orchestrated provisioning system. The HP BladeSystem
Matrix packages and integrates servers, networking, storage,
software infrastructure and orchestration in a single platform.
“We already have most of the Matrix pieces ... so orches-
trating new servers into place is the next logical step,” Buck-
ley-Mellor says.
Place your wagers
Gartner analyst George Weiss says Cisco and HP unified
compute platforms run pretty much neck and neck. How-
ever, IBM, HP’s traditional blade nemesis in the data center,
has more work to do in creating the fabric over which the
resources are assembled, he adds.
“IBM can do storage, and the server component in
blades, and the networking part through Cisco or Bro-
cade, so from a user perspective, it seems a fairly inte-
grated type of architecture. But it’s not as componentized
as what Cisco and HP have,” Weiss says.
“But with Virtual Connect and networking solutions like
ProCurve [switches], and virtualization software, virtualiza-
tion management, blade-based architecture, all of the ele-
ments Cisco is delivering are within HP’s grasp and to a large
extent HP already delivers. It may not be everything, but there may be things HP delivers that Cisco doesn’t, like a
command of storage management,” he explains.
Buckley-Mellor sees one technology area in which Cisco
is a step ahead of HP—converged networking, a la Fibre
Channel over Ethernet (FCoE). Cisco’s Nexus 7000 data
center switch supports this ANSI protocol for converging
storage and networking and the UCS will feature FCoE in-
terconnect switches.
“There are no two ways about it, we’re very interested in
converged networking,” Buckley-Mellor says. Still, he’s not
too worried. “That technology needs to mature and I’m sure
HP will be there with a stable product at the right time for us.
In the meantime, Virtual Connect works great and saves me
an ocean of time,” he adds.
All this is not to say that Cisco and HP are the only horses
in the race for the next-generation data center. But they,
as well as companies like IBM and Microsoft—each of which has its own next-generation data center strategy—will have leads because they’ve already got deep cus-
tomer relationships.
“IT organizations will look to vendors for their strategies and
determine how they’ll utilize those capabilities vs. going out and
exploring everything on the market and figuring out what new
things they’ll try and which they’ll buy,” Ashley says.
Cover your bets
In planning for their next-generation data centers, IT executives should minimize the number of vendors they’ll be working with. At the same time, it’s unrealistic not to consider a multivendor approach from the get-go, says Andreas Antonopoulos, an analyst with Nemertes Research.
“They’ll never be able to reduce everything down to one ven-
dor, so unless they’ve got a multivendor strategy for integration,
they’re going to end up with all these distinct islands, and that
will limit flexibility,” he says.
He espouses viewing the new data center in terms of orches-
tration, not integration.
“Because we’ll have these massive dependencies among
servers, network and storage, we need to make sure we
can run these as systems and not individual elements. We
have to be able to coordinate activities, like provisioning
and scaling, across the three domains. We have to keep
them operating together to achieve business goals,” Anto-
nopoulos says.
From that perspective, a unified compute-network-stor-
age platform makes sense—one way to get orchestration is
to have as many resources as possible from a single ven-
dor, he says. “Problem is, you can only achieve that within
small islands of IT or at small IT organizations. Once you get
to a dozen or more servers, chances are even if you bought
them at the same time from the same vendor, they’ll have
some differences,” he adds.
Skamarock equates these emerging unified data center
platforms to the mainframes of old. “With the mainframe, IT
had control over just about every component. That kind of con-
trol allows you to do and make assumptions that you can’t
when you have a more distributed, multi-vendor environment.”
That means every vendor in this race needs to contin-
ue to build partnerships and build out their ecosystems,
especially in the management arena.•
Schultz is a longtime IT writer and editor. You can reach her at
10G ETHERNET SHAKES NET DESIGN TO THE CORE
By Jim Duffy • Network World
Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics

The emergence of 10 Gigabit Ethernet, virtualization and
unified switching fabrics is ushering in a major shift in
data center network design: three-tier switching architec-
tures are being collapsed into two-tier ones.
Higher, non-blocking throughput from 10G Ethernet
switches allows users to connect server racks and top-of-rack
switches directly to the core network, obviating the need for an
aggregation layer. Also, server virtualization is putting more ap-
plication load on fewer servers due to the ability to decouple
applications and operating systems from physical hardware.
More application load on less server hardware requires
a higher-performance network.
Moreover, the migration to a unified fabric that converges
storage protocols onto Ethernet also requires a very low-la-
tency, lossless architecture that lends itself to a two-tier ap-
proach. Storage traffic cannot tolerate the buffering and laten-
cy of extra switch hops through a three-tier architecture that
includes a layer of aggregation switching, industry experts say.
All of this necessitates a new breed of high-performance,
low-latency, non-blocking 10G Ethernet switches now hitting
the market. And it won’t be long before these 10G switches
are upgraded to 40G and 100G Ethernet switches when
those IEEE standards are ratified in mid-2010.
“Over the next few years, the old switching equipment
needs to be replaced with faster and more flexible switch-
es,” says Robin Layland of Layland Consulting, an adviser
to IT users and vendors. “This time, speed needs to be
coupled with lower latency, abandoning spanning tree
and support for the new storage protocols. Networking in
the data center must evolve to a unified switching fabric.”
A three-tier architecture of access, aggregation and
core switches has been common in enterprise networks
for the past decade or so. Desktops, printers, servers and
LAN-attached devices are connected to access switches,
which are then collected into aggregation switches to
manage flows and building wiring.
Aggregation switches then connect to core routers/
switches that provide routing, connectivity to wide-area
network services, segmentation and congestion manage-
ment. Legacy three-tier architectures naturally have a
large Cisco component–specifically, the 10-year-old Cata-
lyst 6500 switch–given the company’s dominance in en-
terprise and data center switching.
Cisco says a three-tier approach is optimal for segmen-
tation and scale. But the company also supports two-tier
architectures should customers demand it.
“We are offering both,” says Senior Product Manager
Thomas Scheibe. “It boils down to what the customer
tries to achieve in the network. Each tier adds another two
hops, which adds latency; on the flipside it comes down
to what domain size you want and how big of a switch
fabric you have in your aggregation layer. If the customer
wants to have 1,000 10G ports aggregated, you need a
two-tier design big enough to do that. If you don’t, you
need another tier to do that.”
Blade Network Technologies agrees: “Two-tier vs. three-
tier is in large part driven by scale,” says Dan Tuchler, vice
president of strategy and product management at Blade
Network Technologies, a maker of blade server switches
for the data center. “At a certain scale you need to start
adding tiers to add aggregation.”
But the latency inherent in a three-tier approach is inade-
quate for new data center and cloud computing environments
that incorporate server virtualization and unified switching
fabrics that converge LAN and storage traffic, experts say.
Applications such as storage connectivity, high-perfor-
mance computing, video, extreme Web 2.0 volumes and the
like require unique network attributes, according to Nick Lip-
pis, an adviser to network equipment buyers, suppliers and
service providers. Network performance has to be non-block-
ing, highly reliable and faultless with low and predictable la-
tency for broadcast, multicast and unicast traffic types.
“New applications are demanding predictable perfor-
mance and latency,” says Jayshree Ullal, CEO of Arista Net-
works, a privately held maker of low-latency 10G Ethernet
top-of-rack switches for the data center. “That’s why the
legacy three-tier model doesn’t work because most of the
switches are 10:1, 50:1 oversubscribed,” meaning different
applications are contending for limited bandwidth which
can degrade response time.
This oversubscription plays a role in the latency of today’s
switches in a three-tier data center architecture, which is 50
to 100 microseconds for an application request across the
network, Layland says. Cloud and virtualized data center
computing with a unified switching fabric requires less than
10 microseconds of latency to function properly, he says.
Part of that requires eliminating the aggregation tier in a
data center network, Layland says. But the switches themselves
must use less packet buffering and oversubscription, he says.
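Oversubscription is just the ratio of server-facing bandwidth to uplink bandwidth on a switch. A minimal Python sketch, using hypothetical port counts rather than any particular product’s line card, shows how ratios like the 10:1 figure Ullal cites arise:

```python
# Oversubscription ratio = total server-facing bandwidth / total uplink bandwidth.
# Port counts and speeds below are hypothetical, chosen only to show the arithmetic.

def oversubscription(server_ports, server_gbps, uplink_ports, uplink_gbps):
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# 48 Gigabit Ethernet server ports sharing two 10G uplinks
print(oversubscription(48, 1, 2, 10))    # 2.4 : 1
# 48 ten-gigabit server ports sharing four 10G uplinks
print(oversubscription(48, 10, 4, 10))   # 12 : 1
```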
Most current switches are store-and-forward devices
that store data in large buffer queues and then forward it
to the destination when it reaches the top of the queue.
“The result of all the queues is that it can take 80 micro-
seconds or more to cross a three-tier data center,” he says.
New data centers require cut-through switching–which
is not a new concept–to significantly reduce or even elimi-
nate buffering within the switch, Layland says. Cut-through
switches can reduce switch-to-switch latency from 15 to
50 microseconds to 2 to 4, he says.
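A rough back-of-the-envelope sketch makes that comparison concrete. The frame size, queuing delays and hop counts below are illustrative assumptions, not measurements of any particular switch:

```python
# Per-hop latency: store-and-forward (whole frame buffered at each switch) vs.
# cut-through (forwarding starts once the header has been read). Figures are illustrative.

FRAME_BITS = 1518 * 8          # full-size Ethernet frame
HEADER_BITS = 64 * 8           # roughly the bytes needed to make a forwarding decision
LINK_RATE_BPS = 10e9           # 10G Ethernet

def per_hop_us(bits_before_forwarding, queue_wait_us):
    """Microseconds spent at one switch before the frame moves on."""
    serialization_us = bits_before_forwarding / LINK_RATE_BPS * 1e6
    return serialization_us + queue_wait_us

store_and_forward = per_hop_us(FRAME_BITS, queue_wait_us=15.0)   # deep buffers assumed
cut_through = per_hop_us(HEADER_BITS, queue_wait_us=1.0)         # minimal queuing assumed

for hops in (2, 4):
    print(f"{hops} hops: store-and-forward ~{hops * store_and_forward:.0f} us, "
          f"cut-through ~{hops * cut_through:.0f} us")
```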
Another factor negating the three-tier approach to data
center switching is server virtualization. Adding virtualization
to blade or rack-mount servers means that the servers them-
selves take on the role of access switching in the network.
Virtual switching inside servers takes place in the hypervisor; in other cases the network fabric is stretched to the rack level using fabric extenders. The result is that the access switching layer has been subsumed into the servers themselves, Lippis notes.
“In this model there is no third tier where traffic has
to flow to accommodate server-to-server flows; traffic is
either switched at access or in the core at less than 10
microseconds,” he says.
Because of increased I/O associated with virtual switching
in the server there is no room for a blocking switch in between
the access and the core, says Asaf Somekh, vice president
of marketing for Voltaire, a maker of InfiniBand and Ethernet switches for the data center. “It’s problematic to have so many layers.”

[Figure: FORK IN THE ROAD. Virtualization, inexpensive 10G links and unified Ethernet switching fabrics are catalyzing a migration from three-tier Layer 3 data center switching architectures to flatter two-tier Layer 2 designs that subsume the aggregation layer into the access layer. Proponents say this will decrease cost, optimize operational efficiency, and simplify management. The diagram contrasts a three-tier design (access, aggregation, core) with a two-tier design (access/aggregation, core).]
Another requirement of new data center switches is to
eliminate the Ethernet spanning tree algorithm, Layland says.
Currently all Layer 2 switches determine the best path from
one endpoint to another using the spanning tree algorithm.
Only one path is active, the other paths through the fabric
to the destination are only used if the best path fails. The
lossless, low-latency requirements of unified fabrics in virtu-
alized data centers requires switches using multiple paths
to get traffic to its destination, Layland says. These switches
continually monitor potential congestion points and pick the
fastest and best path at the time the packet is being sent.
“Spanning tree has worked well since the beginning of
Layer 2 networking but the ‘only one path’ [approach] is not
good enough in a non-queuing and non-discarding world,”
Layland says.
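The difference is easy to see on a toy topology. In the sketch below, a hypothetical pair of access switches is dual-homed to two core switches; spanning tree would block one uplink and leave a single active path, while a multipath fabric can spread flows across every equal-cost path (the topology and names are invented for illustration):

```python
from collections import deque

# Hypothetical two-tier fabric: two access switches, each uplinked to two cores.
links = {
    "access1": ["core1", "core2"],
    "access2": ["core1", "core2"],
    "core1":   ["access1", "access2"],
    "core2":   ["access1", "access2"],
}

def equal_cost_paths(src, dst):
    """Enumerate every shortest path -- the paths a multipath fabric can use."""
    paths, queue, best = [], deque([[src]]), None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        if path[-1] == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in links[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

paths = equal_cost_paths("access1", "access2")
print("usable by a multipath fabric:", paths)
print("left active by spanning tree:", paths[:1])   # all but one path is blocked
```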
Finally, cost is a key factor in driving two-tier architec-
tures. Ten Gigabit Ethernet ports are inexpensive–about
$500, or twice that of Gigabit Ethernet ports yet with 10
times the bandwidth. Virtualization allows fewer servers to
process more applications, thereby eliminating the need
to acquire more servers.
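Worked out as cost per gigabit, those rough prices make the point; the Gigabit Ethernet price below is inferred from the “twice that of” comparison above and is an assumption, not a quoted list price:

```python
# Cost per gigabit of capacity, using the rough per-port prices quoted above.
ten_gig_port = {"price_usd": 500, "gbps": 10}
one_gig_port = {"price_usd": 250, "gbps": 1}   # assumed: half the price of a 10G port

for name, port in (("10G", ten_gig_port), ("1G", one_gig_port)):
    print(f"{name}: ${port['price_usd'] / port['gbps']:.0f} per Gbps")
```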
And a unified fabric means a server does not need sepa-
rate adapters and interfaces for LAN and storage traffic.
Combining both on the same network can reduce the num-
ber and cost of interface adapters by half, Layland notes.
And by eliminating the need for an aggregation layer of
switching, there are fewer switches to operate, support,
maintain and manage.
“If you have switches with adequate capacity and
you’ve got the right ratio of input ports to trunks, you don’t
need the aggregation layer,” says Joe Skorupa, a Gartner
analyst. “What you’re doing is adding a lot of complexity
and a lot of cost, extra heat and harder troubleshooting
for marginal value at best.” •
REMAKING THE DATA CENTER
By Robin Layland • Network World
Low-latency switches are the foundation for building a unified-fabric data center
A major transformation is sweeping over data center switch-
ing. Over the next few years the old switching equipment
needs to be replaced with faster and more flexible switches.
Three factors are driving the transformation: server vir-
tualization, direct connection of Fibre Channel storage to
the IP switching and enterprise cloud computing.
They all need speed and higher throughput to succeed but
unlike the past it will take more than just a faster interface.
This time speed needs to be coupled with lower latency, aban-
doning spanning tree and supporting new storage protocols.
Without these changes, the dream of a more flexible and
lower-cost data center will remain just a dream. Networking
in the data center must evolve to a unified switching fabric.
Times are hard, money is tight; can a new unified fabric really be justified? The answer is yes. The cost savings from supporting server virtualization along with merging the separate IP and storage networks are just too great. Supporting these changes is impossible without the next evolution in switching.
The good news is that the switching transformation will take
years, not months, so there is still time to plan for the change.
The drivers
The story of how server virtualization can save money is well-
known. Running a single application on a server commonly
results in utilization in the 10% to 30% range. Virtualization
allows multiple applications to run on the server within their
own image, allowing utilization to climb into the 70% to 90%
range. This cuts the number of physical servers required; saves
on power and cooling and increases operational flexibility.
The storage story is not as well-known, but the savings
are as compelling as the virtualization story. Storage has
been moving to IP for years, with a significant amount of
storage already attached via NAS or iSCSI devices. The
cost-savings and flexibility gains are well-known.
The move now is to directly connect Fibre Channel stor-
age to the IP switches, eliminating the separate Fibre Chan-
nel storage-area network. Moving Fibre Channel to the IP
infrastructure is a cost-saver. The primary way is by reducing
the number of adapters on a server. Currently servers need
an Ethernet adapter for IP traffic and a separate storage
adapter for the Fibre Channel traffic. Guaranteeing high
availability means that each adapter needs to be duplicated,
resulting in four adapters per server. A unified fabric reduces
the number to two since the IP and Fibre Channel or iSCSI
traffic share the same adapter. The savings grow since halv-
ing the number of adapters reduces the number of switch
ports and the amount of cabling. It also reduces operational
costs since there is only one network to maintain.
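The savings scale linearly with the server count. A small sketch of the arithmetic, with an assumed 200-server data center and the per-server adapter counts described above (each adapter implies a switch port and a cable):

```python
# Adapters, switch ports and cables for N servers, before and after a unified
# fabric. Per-server counts follow the text: two Ethernet plus two Fibre Channel
# adapters today, two converged adapters afterward. Server count is assumed.

def plant(servers, adapters_per_server):
    adapters = servers * adapters_per_server
    return {"adapters": adapters, "switch ports": adapters, "cables": adapters}

before = plant(servers=200, adapters_per_server=4)   # separate LAN and SAN
after = plant(servers=200, adapters_per_server=2)    # converged fabric

for item, count in before.items():
    print(f"{item}: {count} -> {after[item]}")
```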
The third reason is internal or enterprise cloud comput-
ing. In the past when a request reached an application, the
work stayed within the server/application. Over the years,
this way of design and implementing applications has
changed. Increasingly when a request arrives at the server,
the application may only do a small part of the work; it dis-
tributes the work to other applications in the data center,
making the data center one big internal cloud.
Attaching storage directly to this IP cloud only increases
the number of critical flows that pass over the switching
cloud. A simple example shows why low latency is a must.
If the action took place within the server, each storage get would take only a nanosecond to a few microseconds to perform. With most of the switches installed in enterprises,
the get can take 50 to 100 microseconds to cross the cloud,
which, depending on the number of calls, adds significant
delays to processing. If a switch discards the packet, the
response can be even longer. It becomes critical that the
cloud provides very low latency with no dropped packets.
The network and switch problem
Why can’t the current switching infrastructure handle vir-
tualization, storage and cloud computing? Compared with
the rest of the network the current data center switches
provide very low latency, discard very few packets and
support 10 Gigabit Ethernet interconnects. The problem is
that these new challenges need even lower latency, better
reliability, higher throughput and support for Fibre Chan-
nel over Ethernet (FCoE) protocol.
The first challenge is latency. The problem with the
current switches is that they are based on a store-and-
forward architecture. Store-and-forward is generally asso-
ciated with applications such as e-mail where the mail
server receives the mail, stores it on a disk and then later
forwards it to where it needs to go. Store-and-forward is
considered very slow. So how can layer 2 switches, which are very fast, be store-and-forward devices?
Switches have large queues. When a switch receives
a packet, it puts it in a queue, and when the message
reaches the front of the queue, it is sent. Putting the pack-
et in a queue is a form of store-and-forward. A large queue
has been sold as an advantage since it means the switch
can handle large bursts of data without discards.
The result of all the queues is that it can take 80 micro-
seconds or more for a large packet to cross a three-tier data
center. The math works as follows. It can take 10 microsec-
onds to go from the server to the switch. Each switch-to-
switch hop adds 15 microseconds and can add as much
as 40 microseconds. For example, assume two servers are
at the “far” end of the data center. A packet leaving the
requesting server travels to the top of rack switch, then the
end-of-row switch and onward to the core switch. The hops
are then repeated to the destination server. That is four
switch-to-switch hops for a minimum of 60 microseconds.
Add in the 10 microseconds to reach each server and the
total is 80 microseconds. The delay can increase to well
over 100 microseconds and becomes a disaster if a switch
has to discard the packet, requiring the TCP stack on the
sending server to time out and retransmit the packet.
Latency of 80 microseconds each way was acceptable
in the past when response time was measured in seconds,
but with the goal to provide sub-second response time, the
microseconds add up. An application that requires a large
chunk of data can take a long time to get it when each get
can only retrieve 1,564 bytes at a time. A few hundred round trips add up. The impact is not only on response time. The application has to wait for the data, resulting in an increase in the elapsed time it takes to process the transaction.
That means that while a server is doing the same amount
of work, there is an increase in the number of concurrent tasks, lowering the server’s overall throughput.
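A short worked example shows how those per-get delays compound. The 1,564-byte get size and the 80- and 10-microsecond one-way latencies are the figures used above; the 1MB block size is an assumption for illustration:

```python
import math

# How per-get latency compounds when an application pulls a large block of data
# across the fabric 1,564 bytes at a time. Latency figures come from the text;
# the block size is an illustrative assumption.

GET_BYTES = 1_564
BLOCK_BYTES = 1_000_000
gets = math.ceil(BLOCK_BYTES / GET_BYTES)          # roughly 640 round trips

for one_way_us in (80, 10):                        # three-tier today vs. unified-fabric target
    total_ms = gets * 2 * one_way_us / 1000
    print(f"{one_way_us} us each way: {gets} gets add ~{total_ms:.0f} ms of network wait")
```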
The new generation of switches overcomes the large
latency of the past by eliminating or significantly reducing
queues and speeding up their own processing. The words
used to describe it are: lossless transport; non-blocking; low
latency; guaranteed delivery; multipath and congestion man-
agement. Lossless transport and guaranteed delivery mean
they don’t discard packets. Non-blocking means they either
don’t queue the packet or have a queue length of one or two.
The first big change in the switches is the design of the
way the switch forwards packets. Instead of a store-and-
forward design, a cut-through design is generally used,
which significantly reduces or eliminates queuing inside
the switch. A cut-through design can reduce switch time
from 15 to 50 microseconds to two to four microseconds.
Cut-through is not new, but it has always been more com-
plex and expensive to implement. It is only now with the
very low-latency requirement that switch manufacturers
can justify spending the money to implement it.
The second big change is abandoning spanning tree
within the data center switching fabric. The new genera-
tion of switches uses multiple paths through the switching
fabric to the destination. They are constantly monitoring
potential congestion points, or queuing points, and pick
the fastest and best path at the time the packet is being
sent. Currently all layer 2 switches determine the “best”
path from one endpoint to another one using the span-
ning tree algorithm. Only one path is active, the other
paths through the fabric to the destination are only used
if the “best” path fails. Spanning tree has worked well
since the beginning of layer 2 networking but the “only
one path” is not good enough in a non-queuing and non-
discarding world.
A current problem with the multipath approach is that there is no standard for how it is done. Work is under way within standards groups to correct this, but for the early versions each vendor has its own solution. A significant amount of the work falls under a set of standards referred to as Data Center Bridging (DCB). The reality is that for the immediate future, mixing and matching different vendors’ switches within the data center is not possible. Even when DCB and other standards are finished there will be many interoperability problems to work out, so a single-vendor solution may be the best strategy.
Speed is still part of the solution. The new switches are
built for very dense deployment of 10 Gigabit and prepared
for 40/100 Gigabit. The result of all these changes reduces
the trip time mentioned from 80 microseconds to less than
10 microseconds, providing the needed latency and through-
put to make Fibre Channel and cloud computing practical.
Virtualization curve ball
Server virtualization creates additional problems for the
current data center switching environment. The first prob-
lem is each physical server has multiple virtual images,
each with its own media access control (MAC) address.
This causes operational complications and is a real prob-
lem if two virtual servers communicate with each other.
The easiest answer is to put a soft switch in the VM, which
all the VM vendors provide. This allows the server to pres-
ent a single MAC address to the network switch and per-
form the functions of a switch for the VMs in the server.
There are several problems with this approach. The
soft switch needs to enforce policy and access control
list (ACL); make sure VLANs are followed and implement
security. For example, if one image is compromised, it
should not be able to freely communicate with the other
images on the server if policy says they should not be
talking to each other.
If they were on different physical servers the network
would make sure policy and security procedures were fol-
lowed. The simple answer is that the group that maintains
the server and the soft switch needs to make sure all the
network controls are followed and in place. The practical
problem with this approach is the coordination required
between the two groups and the level of knowledge of the
networking required by the server group. Having the net-
work group maintain the soft switch in the server creates
the same set of problems.
Today, the answer is to learn to deal with confusion and
develop procedures to make the best of the situation and
hope for the best. A variation on this is to use a soft switch
from the same vendor as the switches in the network. The
idea is that coordination will be easier since the switch
vendor built it and has hopefully made the coordination
easier. Cisco is offering this approach with VMware.
The third solution is to have all the communications
from the virtual server sent to the network switch. This
would simplify the switch in the VM since it would not
have to enforce policy, tag packets or worry about secu-
rity. The network switch would perform all these functions
as if the virtual servers were directly connected to the switch and this was the first hop into the network.
This approach has appeal since it keeps all the well
developed processes in place and restores clear account-
ability on who does what. The problem is spanning tree
does not allow a port to receive a packet and send it back
on the same port. The answer is to eliminate the spanning
tree restriction of not allowing a message to be sent back
over the port it came from.
Spanning tree and virtualization
The second curve ball from virtualization is ensuring that
there is enough throughput to and from the server and
that the packet takes the best path through the data cen-
ter. As the number of processors on the physical server
keeps increasing, the number of images increase, with
the result that increasingly large amounts of data need
to be moved in and out of the server. The first answer is
to use 10 Gigabit and eventually 40 or 100 Gigabit. This
is a good answer but may not be enough since the data
center needs to create a very low-latency, non-blocking
fabric with multiple paths. Using both adapters attached
to different switches allows multiple paths along the en-
tire route, helping to ensure low latency.
Once again spanning tree is the problem. The solution
is to eliminate spanning tree, allowing both adapters to
be used. The reality is the new generation layer 2 switches
in the data center will act more like routers, implementing
their own version of OSPF at layer 2.
Storage
The last reason new switches are needed is Fibre Channel
storage. Switches need to support the ability to run stor-
age traffic over Ethernet/IP such as NAS, iSCSI or FCoE.
Besides adding support for the FCoE protocol they will also
be required to abandon spanning tree and enable greater
cross-sectional bandwidth. For example Fibre Channel re-
quires that both adapters to the server are active and carry-
ing traffic, something the switch’s spanning tree algorithm
doesn’t support. Currently the FCoE protocol is not finished
and vendors are implementing a draft version. The good
news is that it is getting close to finalization.
Current state of the market
How should the coming changes in the data center affect your plan? The first step is to determine how much of your traffic needs very low latency right now. If cloud computing, migrating critical storage or a new low-latency application such as algorithmic stock trading is on the drawing board, then it is best to start the move to the new architecture now. Most enterprises don’t fall in that group yet, but they will this year or next and thus have time to plan an orderly transformation.
The transformation can also be taken in steps. For ex-
ample, one first step would be to migrate Fibre Channel
storage onto the IP fabric and immediately reduce the
number of adapters on each server. This can be accom-
plished by replacing just the top-of-the-rack switch. The
storage traffic flows over the server’s IP adapters and to
the top-of-the-rack switch, which sends the Fibre Channel
traffic directly to the SAN. The core and end-of-rack switch
do not have to be replaced. The top-of-the-rack switch
supports having both IP adapters active for storage traffic, with spanning tree’s requirement of only one active adapter applying just to the data traffic. Brocade and
Cisco currently offer this option.
If low latency is needed, then all the data center switches
need to be replaced. Most vendors have not yet implement-
ed the full range of features needed to support the switch-
ing environment described here. To understand where a
vendor is, it is best to break it down into two parts. The
first part is whether the switch can provide very low latency.
Many vendors such as Arista Networks, Brocade, Cisco, Ex-
treme, Force 10 and Voltaire have switches that can.
The second part is whether the vendor can overcome the
spanning tree problem along with support for dual adapt-
ers and multiple pathing with congestion monitoring. As is
normally the case vendors are split on whether to wait until
standards are finished before providing a solution or pro-
vide an implementation based on their best guess of what
the standards will look like. Cisco and Arista Networks have
jumped in early and provide the most complete solutions.
Other vendors are waiting for the standards to be complet-
ed in the next year before releasing products.
If low latency is only a future requirement, what is the best plan? Whenever the data center switches are sched-
uled for replacement they should be replaced with switch-
es that can support the move to the new architecture and
provide very low latency. This means it is very important
to understand the vendor’s plans and migration schemes
that will move you to the next-generation unified fabric.
Layland is head of Layland Consulting. He can be reached
STANDARDS FOR SOOTHING HEADACHES IN THE DATA CENTER
By Jim Duffy • Network World
Emerging IEEE specifications aim to address serious management issues raised by the explosion of virtual machines
Cisco, HP and others are waging an epic battle to gain
control of the data center, but at the same time they are
joining forces to push through new Ethernet standards
that could greatly ease management of those increasingly
virtualized IT nerve centers.

The IEEE 802.1Qbg and 802.1Qbh specifications are
designed to address serious management issues raised
by the explosion of virtual machines in data centers
that traditionally have been the purview of physical serv-
ers and switches. In a nutshell, the emerging standards
would offload significant amounts of policy, security and
management processing from virtual switches on network
interface cards (NIC) and blade servers and put it back
onto physical Ethernet switches connecting storage and
compute resources.
The IEEE draft standards boast a feature called Virtual
Ethernet Port Aggregation (VEPA), an extension to physical
and virtual switching designed to eliminate the large number
of switching elements that need to be managed in a data
center. Adoption of the specs would make management
easier for server and network administrators by requiring
fewer elements to manage, and fewer instances of element
characteristics—such as switch address tables, security and
service attribute policies, and configurations—to manage.
“There needed to be a way to communicate between the
hypervisor and the network,” says Jon Oltsik, an analyst
at Enterprise Strategy Group. “When you start thinking
about the complexities associated with running dozens of
VMs on a physical server the sophistication of data center
switching has to be there.”
But adding this intelligence to the hypervisor or host
would add a significant amount of network processing
overhead to the server, Oltsik says. It would also dupli-
cate the task of managing media access control address
tables, aligning policies and filters to ports and/or VMs
and so forth.
“If switches already have all this intelligence in them, why
would we want to do this in a different place?” Oltsik notes.
VEPA does its part by allowing a physical end station
to collaborate with an external switch to provide bridg-
ing support between multiple virtual end stations and
VMs, and external networks. This would alleviate the
need for virtual switches on blade servers to store and
process every feature—such as security, policy and ac-
cess control lists (ACLs)—resident on the external data
center switch.
Diving into IEEE draft standard details
Together, the 802.1Qbg and bh specifications are de-
signed to extend the capabilities of switches and end sta-
tion NICs in a virtual data center, especially with the pro-
liferation and movement of VMs. Citing data from Gartner,
officials involved in the IEEE’s work on bg and bh say 50%
of all data center workloads will be virtualized by 2012.
Some of the other vendors involved in the bg and bh
work include 3Com, Blade Network Technologies, Bro-
cade, Dell, Extreme Networks, IBM, Intel, Juniper Net-
works and QLogic. While not the first IEEE specifications
to address virtual data centers, bg and bh are amend-
ments to the IEEE 802.1Q specification for virtual LANs
and are under the purview of the organization’s 802.1
Data Center Bridging and Interworking task groups.
The bg and bh standards are expected to be ratified
around mid-2011, according to those involved in the IEEE
effort, but pre-standard products could emerge late this
year. Specifically, bg addresses edge virtual bridging: an
environment where a physical end station contains mul-
tiple virtual end stations participating in a bridged LAN.
VEPA allows an external bridge—or switch—to perform in-
ter-VM hairpin forwarding of frames, something standard
802.1Q bridges or switches are not designed to do.
“On a bridge, if the port it needs to send a frame on is
the same it came in on, normally a switch will drop that
packet,” says Paul Congdon, CTO at HP ProCurve, vice
chair of the IEEE 802.1 group and a VEPA author. “But
VEPA enables a hairpin mode to allow the frame to be
forwarded out the port it came in on. It allows it to turn
around and go back.”
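A toy model of a bridge’s forwarding decision shows how small the change is. The `hairpin` flag below stands in for VEPA’s reflective-relay behavior; the MAC table, port names and function are invented for illustration, not taken from any standard text:

```python
# Toy model of a bridge forwarding decision. A standard 802.1Q bridge never
# sends a frame back out the port it arrived on; a VEPA-style "reflective
# relay" (hairpin) port is allowed to, so an external switch can forward
# traffic between two VMs that sit behind the same physical uplink.

mac_table = {"vm-a": "port1", "vm-b": "port1", "server-x": "port2"}

def output_port(dst_mac, ingress_port, hairpin=False):
    egress = mac_table.get(dst_mac)
    if egress is None:
        return "flood"                     # unknown destination
    if egress == ingress_port and not hairpin:
        return "drop"                      # classic bridge behavior
    return egress                          # hairpin mode may reflect the frame

print(output_port("vm-b", "port1"))                # drop: both VMs sit behind port1
print(output_port("vm-b", "port1", hairpin=True))  # port1: reflected back to the server
```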
VEPA does not modify the Ethernet frame format but only
the forwarding behavior of switches, Congdon says. But
VEPA by itself was limited in its capabilities. So HP combined its VEPA proposal with Cisco’s VN-Tag proposal for
server/switch forwarding, management and administration
to support the ability to run multiple virtual switches and
multiple VEPAs simultaneously on the endpoint.
This required a channeling scheme for bg, which is
based on the VN-Tag specification created by Cisco and
VMware to have a policy follow a VM as it moves. This
multichannel capability attaches a tag to the frame that
identifies which VM the frame came in on.
But another extension was required to allow users to
deploy remote switches—instead of those adjacent to the
server rack—as the policy controlling switches for the vir-
tual environment. This is where 802.1Qbh comes in: It
allows edge virtual bridges to replicate frames over mul-
tiple virtual channels to a group of remote ports. This will
enable users to cascade ports for flexible network design,
and make more efficient use of bandwidth for multicast,
broadcast and unicast frames.
The port extension capability of bh lets administrators
choose the switch they want to delegate policies, ACLs,
filters, QoS and other parameters to VMs. Port extenders
will reside in the back of a blade rack or on individual
blades and act as a line card of the controlling switch,
says Joe Pelissier, technical lead at Cisco.
“It greatly reduces the number of things you have to
manage and simplifies management because the control-
ling switch is doing all of the work,” Pelissier says.
What’s still missing from bg and bh is a discov-
ery protocol for autoconfiguration, Pelissier says. Some
in the 802.1 group are leaning toward using the existing
Logical Link Discovery Protocol (LLDP), while others, includ-
ing Cisco and HP, are inclined to define a new protocol for
the task.
“LLDP is limited in the amount of data it can carry and how
quickly it can carry that data,” Pelissier says. “We need some-
thing that carries data in the range of 10s to 100s of kilobytes
and is able to send the data faster rather than one 1,500 byte
frame a second. LLDP doesn’t have fragmentation capability
either. We want to have the capability to split the data among
multiple frames.”
Cisco, HP say they’re in synch
Cisco and HP are leading proponents of the IEEE effort de-
spite the fact that Cisco is charging hard into HP’s tradition-
al server territory while HP is ramping up its networking ef-
forts in an attempt to gain control of data centers that have
been turned on their heads by virtualization technology.
Cisco and HP say their VEPA and VN-Tag/multichannel and
port extension proposals are complementary despite reports that
they are competing techniques to accomplish the same thing:
reducing the number of managed data center elements and de-
fining a clear line of demarcation between NIC, server and switch
administrators when monitoring VM communications.
“This isn’t the battle it’s been made out to be,” Pelissier says.
Though Congdon acknowledges he initially proposed
VEPA as an alternative to Cisco’s VN-Tag technique, the two
together present “a nice layered architecture that builds
upon one another where virtual switches and VEPA form
the lowest layer of implementation, and you can move all
the way to more complex solutions such as Cisco’s VN-Tag.”
And the proposals seem to have broad industry support.
“We do believe this is the right way to go,” says Dhritiman
Dasgupta, senior manager of data center marketing at Juniper.
“This is putting networking where it belongs, which is on net-
working devices. The network needs to know what’s going on.”•
A BRIDGE TO TERABIT ETHERNET
By Jim Duffy • Network World
With 40/100G Ethernet products on the way, Ethernet experts look ahead to Terabit Ethernet standards and products by 2015
IT managers who are getting started with--or even pushing
the limits of--10 Gigabit Ethernet in their LANs and data
centers don’t have to wait for higher-speed connectivity.
Pre-standard 40 Gigabit and 100 Gigabit Ethernet
products--server network interface cards, switch uplinks
and switches—have hit the market. And standards-com-
pliant products are expected to ship in the second half of
this year, not long after the expected June ratification of
the 802.3ba standard.
The IEEE, which began work on the standard in late
2006, is expected to define two different speeds of Eth-
ernet for two different applications: 40G for server con-
nectivity and 100G for core switching.
Despite the global economic slowdown, global revenue
for 10G fixed Ethernet switches doubled in 2008, accord-
ing to Infonetics. And there is pent-up demand for 40
Gigabit and 100 Gigabit Ethernet, says John D’Ambrosia,
chair of the 802.3ba task force in the IEEE and a senior
research scientist at Force10 Networks.
“There are a number of people already who are using
link aggregation to try and create pipes of that capacity,”
he says. “It’s not the cleanest way to do things ... [but]
people already need that capacity.”
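The catch with link aggregation is that standard hash-based load sharing pins each flow to one member link, so the aggregate grows but the ceiling for any single flow does not. A small sketch of that arithmetic (the hash function and addresses are stand-ins for whatever a real switch uses):

```python
import hashlib

# A 4 x 10G aggregate offers 40G in total, but hash-based distribution keeps
# each flow on one member link, so no single flow can exceed 10G.

MEMBER_LINKS = 4
LINK_GBPS = 10

def member_for_flow(src_ip, dst_ip):
    """Pick a member link from the flow's addresses, as LACP-style hashing does."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % MEMBER_LINKS

for i in range(6):
    print(f"10.0.0.1 -> 10.0.1.{i} uses member link", member_for_flow("10.0.0.1", f"10.0.1.{i}"))

print("aggregate capacity:", MEMBER_LINKS * LINK_GBPS, "Gbps")
print("ceiling for any single flow:", LINK_GBPS, "Gbps")
```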
D’Ambrosia says even though 40/100G Ethernet prod-
ucts haven’t arrived yet, he’s already thinking ahead to Tera-
bit Ethernet standards and products by 2015. “We are going
to see a call for a higher speed much sooner than we saw the
call for this generation” of 10/40/100G Ethernet, he says.
According to the 802.3ba task force, bandwidth re-
quirements for computing and core networking applica-
tions are growing at different rates, necessitating the defi-
nition of two distinct data rates for the next generation of
Ethernet. Servers, high-performance computing clusters,
blade servers, storage-area networks and network-at-
tached storage all currently make use of 1G and 10G Eth-
ernet, with 10G growing significantly in 2007 and 2008.
I/O bandwidth projections for server and computing
applications, including server traffic aggregation, indicate
that there will be a significant market potential for a 40G
Ethernet interface, according to the task force. Ethernet
at 40G will provide approximately the same cost balance
between the LAN and the attached stations as 10G Ether-
net, the task force believes.
Core networking applications have demonstrated the
need for bandwidth beyond existing capabilities and be-
yond the projected bandwidth requirements for computing
applications. Switching, routing, and aggregation in data
centers, Internet exchanges and service provider peering
points, and high-bandwidth applications such as video on
demand and high-performance computing, need a 100
Gigabit Ethernet interface, according to the task force.
“Initial applications (of 40/100G Ethernet) are already
showing up, in stacking and highly aggregated LAN links,
but the port counts are low,” says George Zimmerman, CTO
of SolarFlare, a maker of Ethernet physical layer devices.
Zimmerman says 10G is just now taking off in the ac-
cess layer of large networks and will eventually move to
the client side, creating the need for 40/100G in the dis-
tribution layer and the network core.
He says the application of 100 Gigabit Ethernet in the
core is imminent, and is about two years away in the distri-
bution layer. “Both will be driven by and drive 10G adoption
in the access and client end of the network, where today the
numbers are still much smaller than the potential,” he says.
Spec designed for seamless upgrades
The 802.3ba specification will conform to the full-duplex
operating mode of the IEEE 802.3 Media Access Control
(MAC) layer, according to the task force. As was the case in
previous 802.3 amendments, new physical layers specific
to either 40Gbps or 100Gbps operation will be defined.
By employing the existing 802.3 MAC protocol,
802.3ba is intended to maintain full compatibility with the
installed base of Ethernet nodes, the task force says. The
spec is also expected to use “proven and familiar media,”
including optical fiber, backplanes and copper cabling,
and preserve existing network architecture, management
and software, in an effort to keep design, installation and
maintenance costs at a minimum.
With initial interoperability testing commencing in late
2009, public demonstrations will emerge in 2010, and
certification testing will start once the standard is ratified,
says Brad Booth, chair of the Ethernet Alliance.
The specification and formation of the 40/100G task
force did not come without some controversy, however.
Participants in the Higher Speed Study Group (HSSG)
within the IEEE were divided on whether to include 40G
Ethernet as part of their charter or stay the course with
100 Gigabit Ethernet.
After about a month though, the HSSG agreed to work on
a single standard that encompassed both 40G and 100G.
“In a sense, we were a little bit late with this,” D’Ambrosia
says. “By our own projections, the need for 100G was in
the 2010 timeframe. We should have been done with the
100G [spec] probably in the 2007-08 timeframe, at the
latest. We actually started it late, which is going to make
the push for terabit seem early by comparison. But when
we look at the data forecasts that we’re seeing, it looks
to be on cue.”
Driving demand for 40/100G Ethernet are the same
drivers currently stoking 10G: data center virtualization
and storage, and high-definition videoconferencing and
medical imaging. Some vendors are building 40/100G
Ethernet capabilities into their products now.
Vendors prepare for 100 Gigabit Ethernet
Cisco’s Nexus 7000 data center switch, which debuted in ear-
ly 2009, is designed for future delivery of 40/100G Ethernet.
“We have a little more headroom, which isn’t bad to
have when you look at future Ethernet speed transitions
coming in the market,” says Doug Gourlay, senior director of
data center marketing and product management at Cisco.
“We’re pretty early advocates of the 100G effort in the IEEE.
“[But] the earliest you’ll see products from any com-
pany that are credible deliveries and reasonably priced:
second half of 2010 onward for 40/100G,” he adds.
Verizon Business offers 10G Ethernet LAN and Ethernet
Virtual Private Line services to customers in 100 U.S. met-
ro markets. Verizon Business also offers “10G-capable”
Ethernet Private Line services.
The carrier has 40G Ethernet services on its five-year
road map but no specific deployment dates, says Jeff
Schwartz, Group Manager, Global Ethernet Product Mar-
keting. Instead, Verizon Business has more 10G Ethernet
access services on tap.
“We want to get to 100G,” Schwartz says. “40G may be
an intermediary step.”
Once Verizon Business moves its backbone architec-
ture toward 40/100G, products and services will be fol-
lowing, he says.
Spirent Communications, a maker of Ethernet testing
gear, offers 40G Ethernet testing modules, with 100 Giga-
bit Ethernet modules planned for release in early 2010,
says Tim Jefferson, general manager of the converged
core solutions group at Spirent. Jefferson says one of the
caveats that users should be aware of as they migrate
from 10G to 40/100G Ethernet is the need to ensure pre-
cise clocking synchronization between systems--especially
between equipment from different vendors.
Imprecise clocking between systems at 40/100G--even at
10G--can increase latency and packet loss, Jefferson says.
“This latency issue is a bigger issue than most people anticipate,” he says. “At 10G, especially at high densities, the specs allow for a little variance for clocks. As you aggregate traffic into 10G ports, just the smallest difference in the clocks between ports can cause high latency and packet loss. At 40G, it’s an order of magnitude more important than it is for 10G and Gig.
“This is a critical requirement in data centers today because a lot of the innovations going on with Ethernet and a lot of the demand for all these changes in data centers are meant to address lower latencies,” Jefferson adds.
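To make that failure mode concrete, here is a minimal sketch, not vendor code, of what a small, sustained clock mismatch does to an aggregation buffer. It assumes a sender whose port clock runs a configurable number of parts per million fast relative to the receiver; since IEEE 802.3 allows a clock tolerance on the order of ±100 ppm per interface, the 200 ppm default models two interfaces at opposite ends of that tolerance. The buffer depth and frame size are arbitrary illustration values.

# Toy model of the clocking caveat Jefferson describes: a transmit-side port clock
# that runs slightly fast relative to the receive side. The excess bits pile up in a
# fixed-depth buffer, first as queuing latency and then as drops once the buffer is
# full. Parameters are illustrative only, not taken from any vendor specification.

def clock_mismatch(ppm_offset=200, line_rate_gbps=10, buffer_kb=512,
                   duration_s=10.0, frame_bytes=1500):
    sent_bps = line_rate_gbps * 1e9 * (1 + ppm_offset / 1e6)   # slightly fast sender
    drained_bps = line_rate_gbps * 1e9                         # nominal receiver rate
    excess_bps = sent_bps - drained_bps                        # surplus that accumulates

    buffer_bits = buffer_kb * 1024 * 8
    fill_time_s = buffer_bits / excess_bps                     # time until the buffer overflows
    added_latency_us = buffer_bits / drained_bps * 1e6         # delay added by a full buffer

    overflow_bits = max(0.0, duration_s - fill_time_s) * excess_bps
    dropped_frames = int(overflow_bits // (frame_bytes * 8))
    return fill_time_s, added_latency_us, dropped_frames

if __name__ == "__main__":
    fill, latency, drops = clock_mismatch()
    print(f"buffer full after ~{fill:.1f} s; up to ~{latency:.0f} us added latency; "
          f"~{drops} frames dropped over 10 s")

Real interfaces compensate for small offsets by stretching or shrinking the gaps between frames, and flow control can throttle a fast sender, so this toy model overstates steady-state loss; the point it illustrates is that the margin for clocking error shrinks as rates and port densities rise.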
Cabling challenges
Another challenge is readying the cabling infrastructure for 40/100G, experts say. Ensuring the appropriate grade and length of fiber is essential to smooth, seamless operation, they say.
“The big consideration is, what’s a customer’s cabling
installation going to look like and what they’re looking for
to be able to handle that,” Booth says. “They are probably
going to need to have a parallel fiber capability.”
“The recommendations we’re making to customers on
their physical plant today are designed to take them from
1G to 10G; 10G to a unified fabric; and then address
future 40G,” Cisco’s Gourlay says.
The proposed physical interfaces (PHYs) for 40G Ethernet cover distances inside the data center of up to 100 meters, accommodating a range of server form factors, including blade, rack and pedestal, according to the Ethernet Alliance. The 100 Gigabit Ethernet rate will include distances and media appropriate for the data center as well as for service provider interconnection, for both intra-office and inter-office applications, according to the organization.
The proposed PHYs for 40G Ethernet are 1 meter backplane, 10 meter copper and 100 meter multimode fiber; for 100 Gigabit Ethernet they are 10 meter copper, 100 meter multimode fiber, and 10 kilometer and 40 kilometer single-mode fiber.•
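For planning purposes, those reach targets boil down to a small lookup table. The sketch below encodes only the figures listed above; the PHY designations in the comments are the names these proposals are generally known by in the eventual 802.3ba work and are included for orientation, not taken from the article.

# Reach targets for the proposed 40G/100G PHYs as listed by the Ethernet Alliance
# above. The names in the comments are assumed 802.3ba designations, added here
# only to help map each reach to a familiar label.

REACH_M = {
    ("40G", "backplane"):          1,        # 40GBASE-KR4
    ("40G", "copper"):             10,       # 40GBASE-CR4
    ("40G", "multimode fiber"):    100,      # 40GBASE-SR4 (parallel fiber)
    ("100G", "copper"):            10,       # 100GBASE-CR10
    ("100G", "multimode fiber"):   100,      # 100GBASE-SR10 (parallel fiber)
    ("100G", "single-mode fiber"): 40_000,   # 100GBASE-LR4 (10 km) / -ER4 (40 km)
}

def within_reach(rate: str, medium: str, link_length_m: float) -> bool:
    """Rough check of whether a planned cabling run fits the proposed reach."""
    return link_length_m <= REACH_M[(rate, medium)]

# Example: a 150 m multimode run between rows exceeds the 100 m multimode target,
# so it would need single-mode fiber (or a shorter path) at 40G or 100G.
print(within_reach("100G", "multimode fiber", 150))   # False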
DATA CENTER AS ETHERNET SWITCH DRIVER
By Jim Duffy • Network World
How next-generation data center initiatives shape the LAN switching market
2010 promises to be an interesting year in the enterprise
LAN switching market.
With the exception of Avaya, next-generation data center initiatives are driving the LAN switching market and its consolidation. And they are all intended to compete more intensely with Cisco, which owns 70% of the Ethernet switching market but still has an insatiable appetite for growth.
“Big data center vendors are driving LAN switching decisions, and purchases,” says Zeus Kerravala, an analyst
with The Yankee Group. “Where innovation’s been needed
is in the data center.”
“Innovation is being driven in the data center,” says
Steve Schuchart of Current Analysis. The drive to automate the data center is making the all-in-one buy from a
single large vendor more attractive to customers, he says.
Indeed, the LAN switching market is no longer “Cisco and the Seven Dwarves”–the seven companies all vying for that 25% to 30% share Cisco doesn’t own. The LAN switching market is now steered by Cisco, IBM, HP and Dell, and perhaps Brocade–data center networking, server and storage stalwarts looking to take their customers to the next-generation infrastructure of unified fabrics, virtualization, and the like.
Data center deployments of 10G Ethernet are helping to drive the market, according to Dell’Oro Group. The firm expects the global Ethernet switching market to grow modestly in 2010, to $16.3 billion from $15.6 billion in 2009. That is still down considerably from the $19.3 billion market of 2008, Dell’Oro notes.
And pricing pressure is expected to increase, according to a Nov. 19, 2009, Goldman Sachs survey of 100 IT executives on IT spending. With its 3Com buy, HP can now offer a core data center switch in addition to the enterprise switches it sells at roughly half the price of comparable Cisco products, the survey notes. And with Juniper ramping up its IBM and Dell OEM channels, Cisco’s market share will be squeezed if profit margins are to be maintained, the survey suggests.
Buying patterns should also work in favor of Juniper and its high-performance networking pitch: the Goldman Sachs survey found that most respondents base their purchases on price/performance rather than on architectural road map.
Where does all this jockeying among the top tier leave
Extreme, Enterasys, Force10 and the rest of the pack?
They’ve always claimed price/performance advances over
Cisco but never gained any meaningful market share. And
in terms of marriages, Enterasys united with Siemens Enterprise
Communications to go squarely after the secure
wired/wireless unified communications opportunity.
Force10 is merging with Turin Networks, a provider of wireless backhaul, Carrier Ethernet and converged access systems for service providers. Force10 seems to be gravitating more and more to the carrier cloud, but is still a high-performance data center play–though one that was left behind by the data center systems mainstays.
That leaves Extreme Networks virtually alone in LAN switching. The company has been extending its product line for data center-specific applications, such as virtualization and 10G Ethernet. But analysts say it will have little relevance beyond its installed base.
“What problem is Extreme solving that nobody else is?”
Kerravala asks. “There just isn’t a differentiator compelling enough.”
Extreme begs to differ. “Extreme Networks delivers a network that requires fewer resources to operate and acquire while offering unique capabilities to scale for future requirements and changing demands,” says chief marketing officer Paul Hooper. “We achieve this through the delivery of a consistent Ethernet portfolio, stretching from the edge of the network to the core, all powered by a single OS, ExtremeXOS. Extreme’s network platform also enables organizations to migrate their data centers from physical to virtual to cloud networks. The benefit is that enterprises can smoothly transition from separate to converged networks and carriers can adopt pure Ethernet-based services.”
Switching may not be a differentiator for Avaya either, after the Nortel deal. Given the price-sensitive and hotly competitive nature of the LAN switching business, Kerravala believes Avaya will look to part with its acquired Nortel data networking products.
Avaya says it will issue a Nortel/Avaya product road
map 30 days after the deal’s close.
“The best place for Nortel data is in HP/3Com or Brocade, a company looking to expand its customer base,” Kerravala says.
The best place for everyone else is with a major OEM partner, according to Current Analysis’ Schuchart. And if they haven’t had much success selling on price/performance, perhaps they should play the architectural road map card.
“For companies that don’t have a deal or are not wholly owned by a compute vendor, next year’s going to be tough sailing for them,” Schuchart says. “There’s also a fair amount of room out there for companies who have best-of-breed products, although in a data center moving towards virtualized automation, the standalone providers are going to have a harder time.”•
NETWORKING RESOURCES
White Paper: Redefining the Economics of Networking
This paper provides an overview of the challenges businesses face today and how IT addresses the explicit need to manage network costs, provide choice and flexibility, and reduce complexity. In addition, this paper highlights the difference between proprietary and standards-based networking and how innovative networking solutions that embrace a standards-based approach allow organizations to break free from restrictive proprietary networking solutions and enable better business outcomes.
White Paper: ROI of Ethernet Networking Solutions
To determine the return on investment (ROI) associated with implementation of an HP ProCurve network solution, IDC conducted a study of medium-sized to large organizations with an HP ProCurve implementation up and running in their production environment. IDC estimates that these businesses were able to achieve a 473% ROI; a three-year (discounted) benefit of $38,466 per 100 users; and payback on their initial investment within 5.7 months.
White Paper: Why Your Firewall, VPN, and IEEE 802.11i Aren’t Enough to Protect Your Network
With a comprehensive approach to WLAN security, an intrusion detection and prevention system (IDS/IPS) for WLANs adds to IEEE standards-based technology and wired network security mechanisms. An IDS/IPS specifically designed for WLANs addresses the risks associated with this networking technology.
White Paper: 802.11n Drives an Architectural Evolution
Today’s enterprises deploy wireless LANs (WLANs) as a standard business tool to drive productivity and enhance collaboration. Enter the state-of-the-art WLAN: 802.11n. Organizations can expand their wireless capabilities with this technology to dramatically boost network capacity and speeds, up to 600 Mbps.
White Paper: Green Networking in the Data Center
This paper provides an overview of the challenges faced in today’s data centers, addressing the issues surrounding data center power, cooling and efficiency, with an emphasis on how specific networking tools and strategies can help address them. It also highlights the HP ProCurve data center solutions that focus on efficiency in the data center and the benefits they provide.
White Paper: ROI of Switched Ethernet Networking Solutions for the Midmarket
Sponsored by: HP ProCurve
Randy Perry and Abner Germanow, August 2009
Executive Summary
New generations of network equipment continue to be more reliable than previous generations. Meanwhile, the applications running across the network have become more ubiquitous and more demanding. Underlying this cycle, the network has become much more important to businesses of all sizes — including midmarket firms — and in all industries.
Driven by the financial crisis, midmarket firms are taking a close look at all budget line items. They demand solutions that provide more than sufficient functionality for their current networking needs and also leave plenty of headroom to scale their network in the years to come, in terms of both bandwidth and functionality. At the same time, they want these network systems to be cost effective to deploy and run.
One company striving to address these needs is HP. HP ProCurve networking products include a broad line of LAN core switches, LAN edge switches, and wireless LAN and network security solutions that are all brought together under a unified management suite. To determine the return on investment (ROI) associated with implementation of an HP ProCurve network solution, IDC conducted a study of medium-sized to large organizations with an HP ProCurve implementation up and running in their production environment. IDC estimates that these businesses were able to achieve a 473% ROI; a three-year (discounted) benefit of $38,466 per 100 users; and payback on their initial investment within 5.7 months.
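Those headline figures hang together arithmetically. As a rough back-of-the-envelope check, not taken from the IDC study itself, if ROI is read as net discounted benefit divided by initial investment, the reported 473% ROI and $38,466 three-year benefit imply an investment of roughly $6,700 per 100 users, which is broadly consistent with payback inside six months; the small gap versus IDC's 5.7 months comes from discounting and the evenly spread benefit stream assumed here.

# Back-of-the-envelope check of the IDC figures quoted above. The ROI definition
# (net discounted benefit / investment) and the evenly spread, undiscounted benefit
# stream are assumptions made here, not IDC's methodology.

roi = 4.73                 # 473% ROI expressed as a ratio
benefit_3yr = 38_466       # three-year discounted benefit per 100 users (USD)

investment = benefit_3yr / (1 + roi)        # implied initial investment per 100 users
monthly_benefit = benefit_3yr / 36          # crude average monthly benefit
payback_months = investment / monthly_benefit

print(f"implied investment: ~${investment:,.0f} per 100 users")
print(f"estimated payback:  ~{payback_months:.1f} months (IDC reports 5.7)")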
Network Infrastructure Growth Drivers in Today’s Midmarket Environments
The IT industry in general and the networking market in particular are finally showing signs of stabilizing after the financial crisis of late 2008/early 2009. Looking forward, IDC anticipates that networking will rebound more strongly than other areas of IT spending, driven by the fact that the recession has not changed the fundamental reasons for businesses to continue investing in their networks. Major drivers for midmarket firms to continue investing in networking equipment include:
Migration of voice and video to IP. As businesses look to reduce expenses by adopting technologies such as videoconferencing and voice over IP, the increasing amount of voice and video traffic is creating new challenges for the network. Response times for Web sites or applications of up to a second used to be acceptable, but the human eye and ear can detect delays measured in milliseconds. Simply throwing bandwidth at the problem is insufficient as the mix of application demands on the network rises. Midmarket firms must incorporate new levels of bandwidth and intelligence into their network to handle these more complex quality-of-service requirements.
White paper
802.11n Drives an Architectural Evolution
Introduction
Today’s enterprises deploy wireless LANs (WLANs) as a standard business tool to drive productivity and enhance collaboration.
Enter the state-of-the-art WLAN: 802.11n. Organizations can expand their wireless capabilities with this technology to dramatically boost network capacity and speed, up to 600 Mbps (see Figure 1). There are major implications for how organizations will use and implement wireless networks moving forward. Contrast this with the 54 Mbps of 802.11a/g networks or 100 Mbps Fast Ethernet. This extra capacity and speed will allow organizations upgrading to 802.11n to expand the range of applications mobilized over wireless networks, including both existing and ground-breaking high-bandwidth applications, which may help to streamline business processes and foster corporate competitive advantage.
Figure 1: 802.11n brings a dramatic increase in traffic
White paper
Why Your Firewall, VPN, and IEEE 802.11i Aren’t Enough to Protect Your Network
Overview
Like any network technology, wireless local area networks (WLANs) need to be protected from security threats. Though recent developments in IEEE standards have been designed to help ensure privacy for authenticated WLAN users, WLAN clients and enterprise infrastructure can still be vulnerable to a variety of threats that are unique to WLANs. Mischievous hackers may try to attack the network, or a negligent employee may create a security breach that leaves the corporate WLAN or a client device vulnerable to attack. These threats cannot be mitigated by traditional firewall technologies and virtual private networks (VPNs), nor eliminated through encryption and authentication mechanisms used in conventional enterprise network security systems. With a comprehensive approach to WLAN security, an intrusion detection and prevention system (IDS/IPS) for WLANs adds to IEEE standards-based technology and wired network security mechanisms. An IDS/IPS specifically designed for WLANs addresses the risks associated with this networking technology.
A new class of security threats to enterprise networks
The prevailing model of enterprise network security is rooted in the axiom that being “physically inside is safe and outside is unsafe.” Connecting to a network point within the enterprise is generally considered safe and is subject to weaker security controls. On the other hand, tight security controls are enforced at the network traffic entry and exit points using firewalls and VPNs.
A WLAN breaks the barrier provided by the building perimeter, the physical security envelope for a wired network, because the invisible radio signals used by the WLAN cannot be confined within the physical perimeter of a building and usually cut through walls and windows. This creates a backdoor for unauthorized devices to connect to the enterprise network. Some specific security threats from WLANs are described below.