
Advanced Network Infrastructure 2015

8

Actionable Guidance for Deploying Superior Intelligent Building and Data Center Networks

WWW.SIEMON.COM


TABLE OF CONTENTS

Zone Cabling for Cost Savings
Killer App Alert! IEEE 802.11ac 5 GHz Wireless Update and Structured Cabling Implications
Advantages of Using Siemon Shielded Cabling Systems to Power Remote Network Devices
Data Center Storage Evolution: An Update on Storage Network Technologies including DAS, NAS, and SAN
The Need for Low-Loss Multifiber Connectivity in Today's Data Center
Considerations for choosing top-of-rack in today's fat-tree switch fabric configurations
Tech Brief: SFP+ Cables and Encryption – Cost-Effective Alternatives Overcome Vendor Locking
Getting Smart, Getting Rugged: Extending LANs into Harsher Environments
Case Study: L.A. Dodgers hit a home run with cabling upgrade from Siemon


Zone Cabling for Cost Savings

Workspaces are becoming increasingly social and flexible, and are constantly being re-arranged and updated. To determine how structured cabling can best support this evolving trend, Siemon studied the cost and environmental impact of various structured cabling designs. The results are in: zone cabling deployments provide the optimum balance of performance, flexibility, and efficient use of cabling materials in today's enterprise environments.


What is Zone Cabling?

A zone cabling design (or topology) begins with horizontal cables run from patch panels in the telecommunications room (TR) to connections within a zone enclosure (ZE, sometimes referred to as a zone box), which can be mounted under a raised floor, in the ceiling, or on the wall. Cables are then run from the outlets or connecting blocks in the zone enclosure to telecommunications outlets in the work area (WA), equipment outlets serving BAS devices, or directly to BAS devices. Patch cords are used to connect voice and data equipment to telecommunications outlets and to connect BAS equipment to equipment outlets. Note that the connections in the zone enclosure are made using modular outlets and/or punch-down blocks; there is no active equipment in the zone enclosure. When deploying a zone cabling solution, Siemon recommends positioning zone enclosures in the most densely populated areas of the floor space. Figure 1 shows an example of a zone cabling layout.

Figure 1: Example zone cabling layout serving voice, data, and BAS applications

Enabling flexible client work spaces that efficiently accommodate moves, adds, and changes (MACs) is a signature element of a zone cabling design. Through analyzing customers' office reconfiguration needs, Siemon observed that zone cabling deployments have the potential to provide significant cost savings compared to traditional "home run" work area to TR cabling. This is because MACs performed on traditional home run topologies require more cabling materials and more installation time to implement.

As an example, Figure 2 shows a traditional home run cabling link and a zone cabling link, both of which are supporting a work area outlet located 200 feet away from the TR. The zone enclosure is pre-cabled from the TR with spare ports available to support new services and is located 50 feet from the work area outlet. If a second cable needs to be deployed, 200 feet of new cable must be pulled from the TR with a traditional design, while only 50 feet needs to be pulled when using a zone design. Significantly reduced installation times and minimized client disruption are additional benefits associated with pulling 75% less cable, all of which contributes to improved return-on-investment (ROI) when using zone cabling designs.
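The savings arithmetic in this example is simple enough to script. The sketch below (Python; the function name is illustrative, and the distances are the 200 ft / 50 ft figures from the Figure 2 scenario) compares the new cable pulled for one added service under each topology.

def new_cable_ft(topology: str, tr_to_wa_ft: float = 200.0, ze_to_wa_ft: float = 50.0) -> float:
    """Feet of new cable pulled for one added service in each topology."""
    # Home run: pull all the way from the telecommunications room (TR).
    # Zone: pull only from the pre-cabled zone enclosure (ZE).
    return tr_to_wa_ft if topology == "home-run" else ze_to_wa_ft

saving = 1 - new_cable_ft("zone") / new_cable_ft("home-run")
print(f"Zone topology pulls {saving:.0%} less new cable per add")  # -> 75%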


Figure 2: Example 200-foot traditional and zone cabling links, depicting the new cabling length required to support the addition of a new service

Zone Cabling Designs

Zone cabling systems are easily implemented using a variety of Siemon components, which encompass all categories of cabling and connectivity. The diagrams in Figures 3a, 3b, and 3c depict example zone and traditional cabling channel topologies for a sampling of media types. For demonstration purposes, the connection within the zone enclosure, but not the zone enclosure itself, is shown. The components shown in these figures, with the addition of cable managers (Siemon RS3-RWM-2) and a plenum-rated ceiling zone enclosure (Chatsworth A1024-LP), formed the material list used in Siemon's MAC cost impact study discussed later in this paper.

Figure 3a depicts Siemon's recommended category 5e and 6 UTP zone cabling topology. Note that Siemon's category 5e or category 6 connecting block system is the recommended connection in the zone enclosure. This solution eliminates the need to stock factory pre-terminated and tested interconnect cords for connections in the zone enclosure and simplifies cable management by eliminating cable slack. The traditional category 5e and 6 UTP cabling topology is shown for comparison purposes and for use as a reference in the cost comparison analysis.

Figure 3a: Siemon's recommended category 5e and 6 UTP zone cabling topology and reference traditional topology


Figure 3b depicts typical category 6A UTP zone and traditional cabling topologies. These figures are provided for reference and are used in the cost comparison analysis; however, Siemon does not recommend category 6A UTP media for use in zone cabling deployments, for both performance and flexibility reasons. UTP cabling may be susceptible to excessive alien crosstalk under certain installation conditions and is not the optimum media for support of remote powering applications carrying 30W and higher power loads. In addition, because category 6A UTP zone deployments rely on modular connections within the zone enclosure, factory pre-terminated and tested interconnect cords must be on hand in order to quickly facilitate MAC requests. Siemon recommends cost effective shielded zone cabling solutions to overcome these concerns.

Figure 3b: Reference category 6A UTP zone cabling topology and traditional topology

Figure 3c depicts Siemon's recommended category 6A zone topology, which is comprised of shielded cables and components. Note that Siemon's TERA® connector is used in the zone enclosure. Because this shielded modular connector is field-terminatable, it eliminates the need to stock factory pre-terminated and tested interconnect cords and simplifies cable management by eliminating cable slack in the zone enclosure. The traditional category 6A shielded cabling topology is shown for comparison purposes and for use as a reference in the cost comparison analysis.

Figure 3c: Siemon's recommended category 6A zone cabling topology and reference traditional topology constructed from shielded components


Quantifying the Cost Savings

Siemon designed traditional and zone cabling layouts for a typical one-floor commercial building space and analyzed the capital and operating costs associated with each design. For the purposes of this analysis, the traditional cabling topology scenario provided two outlets to 36 work areas for a total of 72 cables or "drops", and the zone topology scenario provided two outlets at 36 work areas and 72 connection points in a zone enclosure, plus an additional 24 cables pulled to the zone enclosure to accommodate future expansion.

To establish a baseline, Siemon first calculated the material and installation costs for the category 5e UTP, category 6 UTP, category 6A UTP, category 6A shielded, and category 7A shielded traditional (72 drops) and zone (96 drops to the zone enclosure and 72 drops to the work area) cabling designs and plotted the results shown in Figure 4. Since zone cabling is most commonly deployed in the ceiling where air handling spaces are prevalent, media costs were derived using plenum-rated materials where applicable. Not surprisingly, the total cost for the zone cabling design is higher than for the traditional design because there is additional connectivity in each channel and some pre-cabling between the TR and zone enclosure is included for future connections. This baseline also clearly demonstrates that Siemon's recommended shielded category 6A zone cabling design provides the added benefits of performance and termination flexibility at the zone enclosure at virtually no additional cost over a category 6A UTP zone cabling design.

Although additional capital expenditure ("CAPEX") is required when zone cabling is initially deployed, a more accurate assessment of the total comparative costs of these solutions must include operating expense ("OPEX"). MAC work performed on a cabling plant falls into the category of OPEX, and it is in this area that the real cost benefits of a zone cabling solution become apparent. For this analysis, a cabling "add" represents the cost to pull one new cable, and a cabling "move" is the cost to pull one new cable and remove the abandoned cable. The table in Figure 5 depicts Siemon's calculated cost savings per move or add for all of the categories of cabling evaluated, and the number of MACs that need to be performed for the combined CAPEX and OPEX cost associated with the traditional cabling design to equal that of the zone cabling design. This tipping point is often referred to as the time when return-on-investment ("ROI") is achieved for a zone cabling design.

Enterprise clients' information technology needs are dynamic and often require rapid floor space reconfiguration. Due to zone cabling's enhanced ability to support MACs, building owners can realize a significant ROI benefit with their zone cabling systems within two to five years compared to traditional cabling systems. According to the cost analysis, either 14 moves and 17 adds or 16 moves and 20 adds (depending upon cabling type) will realize a full ROI of the additional CAPEX for a zone cabling solution, and each MAC above the ROI threshold yields additional OPEX benefits over a traditional cabling design. Depending on the number of MACs performed, a zone cabling design can pay for itself quickly. Figure 6 shows that the combined CAPEX and OPEX costs for all category zone cabling designs are always lower than for traditional cabling designs after 16 moves and 20 adds are performed, and there is still flexibility to add additional services to the zone cabling design!

Figure 4: Installation and materials costs (CAPEX) for traditional and zone cabling scenarios

Category/Topology           $ Saved per Move   $ Saved per Add   MACs Until ROI
Any Traditional Topology    $0                 $0                No ROI
Zone Category 5e            $144               $128              14 Moves & 17 Adds
Zone Category 6             $163               $146              14 Moves & 17 Adds
Zone Category 6A UTP        $265               $249              16 Moves & 20 Adds
Zone Category 6A Shielded   $282               $266              16 Moves & 20 Adds
Zone Category 7A Shielded   $409               $393              14 Moves & 17 Adds

Figure 5: Cost of work area MACs and ROI for traditional and zone cabling designs
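The break-even logic behind Figure 5 can be sketched in a few lines of Python. The per-MAC savings below come from the table; the CAPEX premium of the zone design over the traditional design is a hypothetical placeholder (the paper reports only the resulting MAC counts), and the move/add mix follows the 14:17 ratio quoted above.

def macs_until_roi(capex_premium: float, saved_per_move: float,
                   saved_per_add: float, adds_per_move: float = 17 / 14):
    """Count moves and adds until cumulative OPEX savings repay the CAPEX premium."""
    moves = 0
    while True:
        moves += 1
        adds = round(moves * adds_per_move)
        if moves * saved_per_move + adds * saved_per_add >= capex_premium:
            return moves, adds

# Zone category 5e ($144/move, $128/add) against a hypothetical $4,100 premium:
print(macs_until_roi(4100, 144, 128))  # -> (14, 17), the mix shown in Figure 5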


Zone Cabling ROI

The results of this analysis may be extrapolated and applied to small, medium, and large zone cabling installations. While obviously dependent upon the exact number of moves, adds, and changes (MACs) performed per year, typical zone cabling plants of any size planned with 25% spare port availability not only significantly reduce client disruption, but allow the building owner to recoup the cost of the extra port capacity within a two to five year span or after reaching the ROI threshold (i.e. either 14 moves and 17 adds or 16 moves and 20 adds, depending upon cabling type) in the example provided in this paper.

Figure 6: Combined CAPEX and OPEX costs for traditional and zone cabling scenarios after 16 moves and 20 adds

Additional Benefits

In addition to the obvious cost benefits, deployment of zone cabling provides the following additional benefits:

• Factory pre-terminated and tested trunking cables may be used for expedited installation and reduced labor cost.
• Spare ports in the zone enclosure allow for the rapid addition of new devices and facilitate moves and changes of existing services.
• Pathways are more efficiently utilized throughout the building space.
• Deployment of the structured cabling system is faster and less disruptive.
• New IP devices, such as WAPs, BAS devices, voice/data security devices, audio/video devices, digital signage, etc., are easily integrated into the existing structured cabling system via connections made at the zone enclosure.

Going Green?

Zone cabling systems are ideal for use in smart and green building designs. Factory pre-terminated trunking cables can be installed for a reduction in labor costs and on-site waste, and the centralized connection location within the zone enclosures allows for more efficient pathway routing throughout the building.

Integrating Siemon's end-to-end category 7A/class FA TERA® cabling system into a zone topology allows customers to further take advantage of cable sharing strategies, which maximizes the potential to qualify for LEED credits as issued by the United States Green Building Council (USGBC). Cable sharing supports multiple low-speed, low pair count applications operating over one 4-pair cabling system, which results in more efficient cable and pathway utilization.

For example, a standard IP security door deployment configuration typically consists of two category 5e cables (one for an IP camera and the other for access control) installed in a traditional home run topology. By switching to a category 7A/class FA TERA cabling system configured in a zone topology, a single cable can serve both devices, thereby reducing cabling and pathway materials. Although the CAPEX associated with implementing category 7A/class FA TERA cabling may be slightly higher, the benefits realized by obtaining LEED accreditation can justify this additional cost.
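A quick sketch of the cable-sharing arithmetic above, assuming (as is common) that the IP camera and access-control links each run 10/100BASE-T over two pairs; the application names and pair counts are illustrative.

import math

PAIRS_PER_CABLE = 4
app_pairs = {"IP camera (100BASE-T)": 2, "access control (100BASE-T)": 2}

cables_home_run = len(app_pairs)  # one dedicated home-run cable per device
cables_shared = math.ceil(sum(app_pairs.values()) / PAIRS_PER_CABLE)
print(cables_home_run, "home-run cables vs", cables_shared, "shared cable")  # 2 vs 1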


Executive Summary:

• Today's enterprise workspaces are increasingly social and flexible and are subject to frequent reconfiguration and updating.
• Zone cabling enables flexible client work spaces that can accommodate moves, adds, and changes more quickly and with less disruption than traditional cabling.
• Zone cabling supports more efficient utilization of pathways and materials and is ideal for today's smarter green building designs.
• Siemon recommends category 6A shielded cabling in zone cabling designs for maximum performance.
• Shielded category 6A zone cabling designs provide the added benefits of performance, superior support of remote powering applications, and termination flexibility at the zone enclosure at virtually no additional cost compared to category 6A UTP designs.
• Zone cabling plants of any size planned with 25% spare port availability not only significantly reduce client disruption, but typically allow the building owner to recoup the cost of the extra port capacity within a two to five year span or after reaching the ROI threshold.


Killer App Alert! IEEE 802.11ac 5 GHz Wireless Update and Structured Cabling Implications

Killer app alert! The newly published IEEE 802.11ac Very High Throughput wireless LAN standard1 has far-reaching implications with respect to cabling infrastructure design. Users can expect their current wireless speeds to appreciably increase by switching to 802.11ac gear with 1.3 Gb/s data rate capability that is available today. And 256-QAM modulation, 160 MHz channel bandwidth, and a maximum of eight spatial streams can theoretically deliver 6.93 Gb/s in the future! For the first time, the specification of high performance cabling supporting access layer switches and uplink connections is critical to achieving multi-Gigabit throughput and fully supporting the capacity of next generation wireless access points.


Key cabling design strategies to ensure that the wired network is ready to support 802.11ac wireless LANs addressed in this paper include:

• Specifying category 6A or higher performing horizontal cabling in combination with link aggregation to ensure immediate support of the 1.3 Gb/s theoretically achievable data rate deliverable by 802.11ac 3-stream wireless access points (WAPs) and routers available today
• Installing a minimum of 10 Gb/s capable balanced twisted-pair copper or multimode optical fiber backbone to support increased 802.11ac uplink capacity
• Utilizing a grid-based zone cabling architecture to accommodate additional WAP deployments, allow for rapid reconfiguration of coverage areas, and provide redundant and future-proof connections
• Using solid conductor cords, which exhibit better thermal stability and lower insertion loss than stranded conductor cords, for equipment connections in the ceiling or in plenum spaces where higher temperatures are likely to be encountered
• Recognizing that deploying Type 2 PoE to remotely power 802.11ac wireless access points can cause heat to build up in cable bundles
  — Siemon's shielded class EA/category 6A and class FA/category 7A cabling systems inherently exhibit superior heat dissipation and are qualified for mechanical reliability up to 75°C (167°F), which enables support of the Type 2 PoE application over the entire operating temperature range of -20°C to 60°C (-4°F to 140°F)
  — Shielded systems are more thermally stable and support longer channel lengths (i.e. less length de-rating is required at elevated temperatures to satisfy TIA and ISO/IEC insertion loss requirements) when deployed in high temperature environments
  — A larger number of shielded cables may be bundled without concern for excessive heat build-up within the bundle
• Specifying IEC 60512-99-001 compliant connecting hardware to ensure that contact seating surfaces are not damaged when plugs and jacks are unmated under 802.11ac remote powering current loads

What's in a name?

The latest 802.11ac wireless LAN technology goes by many names, including:

• 5 GHz Wi-Fi – for the transmit frequency
• Gigabit Wi-Fi – for the short range data rate of today's three spatial stream implementation
• 5G Wi-Fi – for 5th generation (i.e. 802.11a, 802.11b, 802.11g, 802.11n, and 802.11ac)
• Very High Throughput Wi-Fi – from the title of the application standard

No matter what you call it, the fact is that the increasing presence and capacity of mobile and handheld devices, the evolution of information content from text to streaming video and multimedia, combined with limits on cellular data plans that encourage users to "off-load" to Wi-Fi, are all driving the need for faster Wi-Fi networks. As Wi-Fi becomes the access media of choice, faster wireless LAN equipment will play an important role in minimizing bottlenecks and congestion, increasing capacity, and reducing latency, but only if the cabling and equipment connections can support the additional bandwidth required. The Wi-Fi Alliance certified the first wave of production-ready 802.11ac hardware in June 2013, and adoption of 802.11ac is anticipated to occur more rapidly than any of its 802.11 predecessors. Today, 802.11ac routers, gateways, and adapters are widely available to support a range of 802.11ac-ready laptops, tablets, and smart phones. In fact, sales of 802.11ac devices are predicted to cross the 1 billion mark (to total 40% of the entire Wi-Fi enabled device market) by the end of 2015!2


Figure 1: Example 16-QAM Constellation and Correlating Symbol Bit Information

A Technology Evolution

The enhanced throughput of 802.11ac devices is facilitated by an evolution of existing and proven 802.11n3 Wi-Fi communication algorithms. Like 802.11n, 802.11ac wireless transmission utilizes the techniques of beamforming to concentrate signals and transmitting over multiple send and receive antennas to improve communication and minimize interference (often referred to as multiple input, multiple output, or MIMO). The signal associated with one transmit and one receive antenna is called a spatial stream, and the ability to support multiple spatial streams is a feature of both 802.11ac and 802.11n. Enhanced modulation, wider channel spectrum, and twice as many spatial streams are the three key technology enablers that support faster 802.11ac transmission rates while ensuring backward compatibility with older Wi-Fi technology.

Quadrature amplitude modulation (QAM) is an analog and digital modulation scheme that is used extensively for digital telecommunications systems. Using this scheme, a four quadrant arrangement or "constellation" of symbol points is established, with each point representing a short string of bits (e.g. 0's and 1's). Sinusoidal carrier waves that are phase shifted by 90° are modulated using amplitude-shift keying (ASK) digital modulation or amplitude modulation (AM) analog modulation schemes and are used to transmit the constellation symbols. Figure 1 depicts a rudimentary example of a 16-QAM constellation for demonstration purposes. Note that there are four points in each quadrant of the 16-QAM constellation and each point equates to four information bits, ranging from 0000 to 1111. The 64-QAM scheme utilized by 802.11n equipment carries 6 bits of information per constellation point, and the 256-QAM scheme utilized by 802.11ac equipment carries an amazing 8 bits of information per constellation point!

Amplitude   Phase   Data
25%         45°     0000
75%         22°     0001
75%         45°     0011
75%         68°     0010
25%         135°    1000
75%         112°    1001
75%         135°    1011
75%         158°    1010
25%         225°    1100
75%         202°    1101
75%         225°    1111
75%         248°    1110
25%         315°    0100
75%         292°    0101
75%         315°    0111
75%         337°    0110
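The bits-per-symbol relationship described above is simply the base-2 logarithm of the constellation size; a minimal sketch:

import math

def bits_per_symbol(points: int) -> int:
    """An M-point QAM constellation encodes log2(M) bits per symbol."""
    return int(math.log2(points))

for m in (16, 64, 256):
    print(f"{m}-QAM carries {bits_per_symbol(m)} bits per constellation point")

# Moving from 802.11n's 64-QAM (6 bits) to 802.11ac's 256-QAM (8 bits)
# raises the raw per-symbol payload by a third, all else being equal.
print(f"Payload gain: {bits_per_symbol(256) / bits_per_symbol(64) - 1:.0%}")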


802.11ac devices will transmit exclusively in the less crowded 5 GHz spectrum. This spectrum supports higher transmission rates because of more available non-overlapping radio channels. It is considered "cleaner" because there are fewer devices operating in the spectrum and less potential for interference. One disadvantage to operating in this spectrum is that 5 GHz signals have a shorter transmission range and have more difficulty penetrating building materials than 2.4 GHz signals. Designing a flexible cabling infrastructure that can accommodate the addition of future WAPs and enable rapid reconfiguration of coverage areas can save headaches later. Figure 2 depicts a recommended zone cabling approach utilizing enclosures that house consolidation points (CPs) with spare port capacity to facilitate connections to equipment outlets (EOs) that are positioned in a grid pattern. In addition, because most WAPs are located in the ceiling or in plenum spaces where higher temperatures are likely to be encountered, the use of solid conductor cords, which exhibit better thermal stability and lower insertion loss than stranded conductor cords4, is recommended for all equipment connections in high temperature environments. Refer to ISO/IEC 247045 and TIA TSB-162-A6 for additional design and installation guidelines describing a grid-based cabling approach that maximizes WAP placement and reconfiguration flexibility.

The Implications of Speed

In 802.11n and 802.11ac, channels that are 20 MHz wide are aggregated to create the "pipe" or "highway" for wireless transmission. 802.11ac technology allows radio transmission over either four or eight bonded 20 MHz channels, supporting maximum throughput of 433 Mb/s and 866 Mb/s per stream, respectively. In addition, 802.11ac can accommodate up to eight antennas and their associated spatial streams for an unprecedented maximum theoretical data speed of 6.93 Gb/s! Note that, unlike full duplex balanced twisted-pair BASE-T type Ethernet transmission, where throughput is fixed in both the transmit and receive orientations, the speed specified for wireless applications represents the sum of upstream and downstream traffic combined. Figure 3 summarizes the key capability differences between 802.11n and 802.11ac technology.

Figure 2: Example Grid-Based WAP Zone Cabling Deployment Design
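The speed arithmetic above reduces to a per-stream rate times the stream count; a minimal sketch using the per-stream rates quoted in the text:

PER_STREAM_MBPS = {80: 433, 160: 866}  # bonded channel width (MHz) -> Mb/s per stream

def phy_rate_gbps(channel_mhz: int, spatial_streams: int) -> float:
    """Theoretical maximum 802.11ac data rate for one configuration."""
    return PER_STREAM_MBPS[channel_mhz] * spatial_streams / 1000

print(phy_rate_gbps(80, 3))   # ~1.3 Gb/s: first-wave high-end laptop
print(phy_rate_gbps(160, 3))  # ~2.6 Gb/s: second wave
print(phy_rate_gbps(160, 8))  # ~6.9 Gb/s: theoretical maximum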


Because of the variables of channel bandwidth and number of spatial streams, 802.11ac deployments are highly configurable. In general, the lower end of the throughput range will be targeted for small handheld devices with limited battery capacity such as smart phones, the middle of the throughput range will be targeted towards laptops, and the highest end of the throughput range will be targeted at specialized and outdoor applications where there is less device density compared with indoors. Figure 4 provides examples of currently available first wave and second wave (available mid 2015) 802.11ac implementation configurations with target devices indicated. Possible future 802.11ac implementations are also shown, but these implementations may not be available for years, if at all. While this may seem surprising, consider that there are no 4-stream implementations of 802.11n even though the technology is standardized. Wireless LAN provider Aruba Networks suggests that manufacturers will leapfrog 4-stream 802.11n products in favor of 802.11ac products. The bottom line is that end-users can reasonably expect their current wireless speeds to at least double by switching to 802.11ac gear that is available today and more than quadruple when second wave products become available.


                                           802.11n        802.11ac
Transmit Frequency                         2.4 or 5 GHz   5 GHz only
Channel Bandwidth                          20 or 40 MHz   80 or 160 MHz
Modulation                                 64-QAM         256-QAM
Maximum Number of Spatial Streams          4              8
Theoretical Maximum Data Rate per Stream   144 Mb/s       866 Mb/s
Theoretical Maximum Data Rate              576 Mb/s       6.93 Gb/s

Figure 3: 802.11n versus 802.11ac Technology Comparison

Channel Bandwidth   Spatial Streams   Maximum Speed   Target Device or Application

First Wave – Products Available Now
80 MHz              1                 433 Mb/s        Dual-band smartphone, next VoIP handset, or tablet
80 MHz              3                 1.3 Gb/s        High-end laptop

Second Wave – Products Available Mid 2015
80 MHz              2                 867 Mb/s        Netbook/low-end laptop
160 MHz             3                 2.6 Gb/s        High-end laptop

Possible Future Implementations
160 MHz             4                 3.5 Gb/s        Outdoor or low coverage areas
160 MHz             8                 6.9 Gb/s        Specialized

Figure 4: Example 802.11ac Implementation Configurations


Figure 5: Data Rate versus Coverage Radius (provided courtesy of Broadcom)

When comparing wireless capabilities, it's important to keep in mind that the maximum realizable data rate is impacted by the number of wireless users, protocol overhead, and the spatial distribution of end-user devices from the access point. The image in Figure 5 illustrates how data rate decreases as distance from the WAP transmitter increases for a commonly available 802.11ac 3-stream 80 MHz transmitter and 802.11n 1- and 3-stream transmitters. The chart shows that 1.3 Gb/s data rates are theoretically achievable within a coverage radius of 5m (16.4 ft) from an 802.11ac 3-stream WAP. Transfer data collected for first generation wireless products confirms that the 802.11ac 3-stream data rate at relatively close range to a single device is roughly on par with that achievable with a wired Gigabit Ethernet (1000BASE-T) link. In some cases, the 802.11ac wireless data transfer rate was fast enough to saturate the 1000BASE-T copper balanced twisted-pair cabling link provided between the 802.11ac router and the server!7

Greater than 1 Gb/s wireless data rate capability has serious implications related to wired media selection for router to server and other uplink connections. For example, two 1000BASE-T connections may be required to support a single 802.11ac WAP (this is often referred to as link aggregation) if 10GBASE-T uplink capacity is not supported by existing equipment (refer to Figure 2, which depicts two horizontal link connections to each equipment outlet). As 802.11ac equipment matures to support 2.6 Gb/s and even higher data rates, 10 Gb/s uplink capacity will become even more critical. Moreover, access layer switches supporting 802.11ac deployments must have a minimum of 10 Gb/s uplink capacity to the core of the network in order to sufficiently accommodate multiple WAPs.
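A minimal sketch of that uplink sizing logic, treating the WAP's theoretical wireless rate as the demand a wired uplink must carry (a worst-case assumption, since wireless throughput is shared between upstream and downstream traffic):

import math

def uplinks_needed(wap_gbps: float, uplink_gbps: float) -> int:
    """Wired links to aggregate so the uplink is not the bottleneck."""
    return math.ceil(wap_gbps / uplink_gbps)

print(uplinks_needed(1.3, 1.0))   # first-wave 3-stream WAP on 1000BASE-T -> 2
print(uplinks_needed(2.6, 1.0))   # second-wave WAP on 1000BASE-T -> 3
print(uplinks_needed(6.9, 10.0))  # future 8-stream WAP on 10GBASE-T -> 1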

Power Consumption

Although 802.11ac radio chips are more efficient than prior generation wireless chips, they are doing significantly more complex signal processing, and the amount of power required to energize 802.11ac devices is higher than for any previous 802.11 implementation. In fact, 802.11ac WAPs are unable to work within the 13-watt budget of Type 1 Power over Ethernet (PoE) and must be supported by either a direct DC power adapter or 30-watt Type 2 PoE remote power. (Note that some 802.11ac products may be able to draw power from two Type 1 PoE connections, but this is an impractical and fairly uncommon implementation.) While safe for humans, Type 2 PoE remote power delivery, at an applied current of 600mA per pair, can produce up to a 10°C (18°F) temperature rise in cable bundles8 and create electrical arcing that can damage connector contacts. Heat rise within bundles has the potential to cause bit errors because insertion loss is directly proportional to temperature. In extreme environments, temperature rise and contact arcing can cause irreversible damage to cable and connectors. Fortunately, the proper selection of network cabling, as described next, can eliminate these risks.
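A minimal sketch of the power-budget check implied above. The 13 W Type 1 figure is from the text; the 25.5 W device-side budget for Type 2 is the IEEE 802.3at value (30 W leaves the PSE, less cable losses), and the example draws are illustrative.

POE_DEVICE_BUDGET_W = {"Type 1": 13.0, "Type 2": 25.5}  # usable watts at the device

def poe_options(device_draw_w: float) -> list:
    """PoE types whose device-side budget covers the given draw."""
    return [t for t, w in POE_DEVICE_BUDGET_W.items() if device_draw_w <= w]

print(poe_options(10.0))  # ['Type 1', 'Type 2'] -- e.g. many 802.11n WAPs
print(poe_options(18.0))  # ['Type 2'] -- typical 802.11ac WAP territory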

The Wired Infrastructure

Existing wireless access devices, client devices, and the back end network and cabling infrastructure may need to be upgraded in order to fully support 802.11ac and Type 2 power delivery. In addition, 802.11ac's 5 GHz transmission band requires relatively dense WAP coverage areas, and existing 802.11n grid placement layouts may not be sufficient. For both new and existing wireless deployments, now is the time to seriously consider the wired cabling uplink infrastructure. Under all circumstances, the equipment outlets, patch panels, and other connecting hardware used in the channel should comply with IEC 60512-99-0019 to ensure that critical contact seating surfaces are not damaged when plugs and jacks are unmated under 802.11ac remote powering current loads. In addition, the use of Siemon shielded class EA/category 6A and class FA/category 7A cabling systems, which support longer channel lengths (i.e. less length de-rating is required at elevated temperatures to satisfy TIA and ISO/IEC insertion loss requirements) and are qualified for mechanical reliability up to 75°C (167°F), is recommended for Type 2 PoE remote powering applications in locations having an ambient temperature greater than 20°C (68°F). Furthermore, larger numbers of shielded cables may be bundled without concern for excessive heat build-up within the bundle.

Designing a cabling infrastructure to robustly support 802.11ac deployment requires consideration of the switch, server, and device connection speeds commonly available today as well as strategies to support redundancy, equipment upgrades, and future wireless technologies. A grid-based category 6A zone cabling approach using consolidation points housed in zone enclosures is an ideal way to provide sufficient spare port density to support 1000BASE-T link aggregation to each 802.11ac WAP as necessary, while also allowing for more efficient port utilization when 10GBASE-T equipment connections become available. Zone cabling is highly flexible, enables rapid reconfiguration of coverage areas, and conveniently provides additional capacity to accommodate next generation technology, which may require 10GBASE-T link aggregation. Additional WAPs can be easily incorporated into the wireless network to enhance coverage with minimal disruption when spare connection points in a zone cabling system are available. This architecture is especially suited for deployment in financial, medical, and other critical data-intensive environments because redundant 10GBASE-T data and backup power connections provided to each WAP can safeguard against outages.

Siemon recommends that each zone enclosure support a coverage radius of 13m (42.7 ft) with 24-port pre-cabled consolidation points available to facilitate plug and play device connectivity. For planning purposes, an initial spare port capacity of 50% (i.e. 12 ports unallocated) is recommended. Spare port availability may need to be increased and/or coverage radius decreased if the zone enclosure is also providing service to building automation system (BAS) devices and telecommunications outlets (TOs). Backbone cabling should be a minimum design of 10 Gb/s capable balanced twisted-pair copper or multimode optical fiber media to support 802.11ac uplink capacity.
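A minimal planning sketch built on the guidance above (13 m radius, 24-port consolidation points, 50% held spare). Covering the floor with squares inscribed in each enclosure's coverage circle is a simplifying assumption, not Siemon's method, and the floor area is hypothetical.

import math

RADIUS_M, PORTS_PER_CP, SPARE_FRACTION = 13.0, 24, 0.50

def enclosures_for_floor(area_m2: float) -> int:
    """Enclosures needed if each serves a square inscribed in its 13 m circle."""
    side = RADIUS_M * math.sqrt(2)
    return math.ceil(area_m2 / side ** 2)

n = enclosures_for_floor(3000)  # hypothetical 3000 m^2 floor
day_one_ports = int(n * PORTS_PER_CP * (1 - SPARE_FRACTION))
print(n, "enclosures;", day_one_ports, "ports allocated, the rest held spare")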

Conclusion:

A killer app forces consumers to stop and question legacy views about broadly deployed operating platforms or systems. IEEE 802.11ac is a dual-edged killer app in that it requires both 10GBASE-T and Type 2 remote powering for optimum performance – swiftly making the wait-and-see stance concerning 10GBASE-T adoption in support of LAN applications a position of the past. A properly designed and deployed zone cabling architecture, utilizing thermally stable shielded category 6A or higher cabling products engineered to withstand the maximum TIA and ISO/IEC ambient temperature of 60°C (140°F) plus the associated heat rise generated by 600mA Type 2 PoE current loads, will ensure that your cabling infrastructure is a killer app enabler.


Footnotes:
1 IEEE Std 802.11ac™-2013, "IEEE Standard for Information technology – Telecommunications and information exchange between systems – Local and metropolitan area networks – Specific requirements – Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications – Amendment 4: Enhancements for Very High Throughput for Operation in Bands below 6 GHz", December 11, 2013
2 Strategy Analytics' Connected Home Devices (CHD) service report, "Embedded WLAN (Wi-Fi) CE Devices: Global Market Forecast"
3 IEEE Std 802.11n™-2009, "IEEE Standard for Information technology – Local and metropolitan area networks – Specific requirements – Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications – Amendment 5: Enhancements for Higher Throughput", October 29, 2009
4 Siemon white paper, "Advantages of Using Siemon Shielded Cabling Systems to Power Remote Network Devices", 2013
5 ISO/IEC TR 24704, "Information technology – Customer premises cabling for wireless access points", July 2004
6 TIA TSB-162-A, "Telecommunications Cabling Guidelines for Wireless Access Points", November 2013
7 APC, "Five Things to Know about 802.11ac", May 2013
8 Siemon white paper, "IEEE 802.3at PoE Plus Operating Efficiency: How to Keep a Hot Application Running Cool", 2010
9 IEC 60512-99-001, "Connectors for Electronic Equipment – Tests and Measurements – Part 99-001: Test Schedule for Engaging and Separating Connectors Under Electrical Load – Test 99A: Connectors Used in Twisted Pair Communication Cabling with Remote Power", 2012


Advantages of Using Siemon Shielded Cabling Systems To Power Remote Network Devices

Remote powering applications utilize the copper balanced twisted-pair IT cabling infrastructure to deliver dc power to IP-enabled devices. The popularity of this technology and the interest in expanding its capabilities is staggering. Consider:

• Over 100 million Power over Ethernet (PoE) enabled ports are shipping annually
• Cisco® 60W Universal PoE (UPOE) technology is driving the adoption of virtual desktop infrastructure (VDI) and, when paired with Cisco's EnergyWise IOS-based intelligent energy management solution, supports using the IT network to monitor and control power consumption as well as turn devices on and off remotely to save power when the devices are not being used
• Published, but not yet commercially available, Power over HDBaseT (POH)1 technology can deliver up to 100W over twisted-pair cable to support full HD digital video, audio, 100BASE-T, and control signals in television and display applications
• The IEEE 802.3 4-Pair Power over Ethernet (PoE) Study Group has been formed to investigate developing a new remote powering application that will provide better energy efficiency than a 2-pair application and expand the market for PoE systems.


In less than a decade, remote powering technology has revolutionized the look and feel of the IT world. Now, devices such as surveillance cameras, wireless access points, RFID readers, digital displays, IP phones, and other equipment all share network bandwidth that was once exclusively allocated for computers. It's common knowledge that the networking of remotely powered devices for autonomous data transmission and collection is driving the need for larger data center infrastructures and storage networks. However, many IT managers aren't aware that remote power delivery produces temperature rise in cable bundles and electrical arcing damage to connector contacts. Heat rise within bundles has the potential to cause higher bit error rates because insertion loss is directly proportional to temperature. In extreme environments, temperature rise and contact arcing can cause irreversible damage to cable and connectors. Fortunately, the proper selection of network cabling can completely eliminate these risks.

Choosing qualified shielded category 6A and category 7A cabling systems provides the following advantages that ensure a "future-proof" cabling infrastructure capable of supporting remote powering technology for a wide range of topologies and operating environments:

• Assurance that critical connecting hardware contact mating surfaces are not damaged when plugs and jacks are cycled under remote powering current loads
• Higher maximum operating temperature for IEEE 802.3 Type 2 PoE Plus applications2
• Fully compliant transmission performance for a wider range of channel configurations in environments having an ambient temperature greater than 20°C (68°F)
• An option to support remote powering currents up to 600mA applied to all four pairs and all networking applications up to and including 10GBASE-T in 70°C (158°F) environments over a full 4-connector, 100-meter channel topology
• Reliable and thermally stable patching solutions for converged zone cabling connections (e.g. device to horizontal connection point) in hot environments

Protecting your connections

Telecommunications modular plug and jack contacts are carefully engineered and plated (typically with gold or palladium) to ensure a reliable, low resistance mating surface. Today's remote powering applications offer some protection to these critical connection points by ensuring that dc power is not applied over the structured cabling plant until a remotely powered device (PD) is sensed by the power sourcing equipment (PSE). Unfortunately, unless the PD is shut off beforehand, the PSE will not discontinue power delivery if the modular plug-jack connection is disengaged. This condition, commonly referred to as "unmating under load", produces an arc as the applied current transitions from flowing through conductive metal to air before becoming an open circuit. While the current level associated with this arc poses no risk to humans, arcing creates an electrical breakdown of gases in the surrounding environment that results in corrosion and pitting damage on the plated contact surface at the arcing location.

While it's important to remember that arcing and subsequent contact surface damage is unavoidable under certain mating and unmating conditions, contacts can be designed in such a way as to ensure that arcing will occur in the initial contact "wipe" area and not affect mating integrity in the final seated contact position. Figure 1 depicts an example of such a design, which features a distinct "make-first, break-last" zone that is separated by at least 2mm from the "fully mated" contact zone on both the plug and outlet contacts. Note that any potential damage due to arcing will occur well away from the final contact mating position for this design.

Figure 1: Arc location in "wipe" area occurs outside of final seated Z-MAX® contact position


To ensure reliable performance and contact integrity, Siemon recommends that only connecting hardware that is independently certified for compliance to IEC 60512-99-0013 be used to support remote powering applications. This standard was specifically developed to ensure reliable connections for remote powering applications deployed over balanced twisted-pair cabling. It specifies the maximum allowable resistance change that mated connections can exhibit after being subjected to 100 insertion and removal cycles under a load condition of 55V dc and 600mA applied to each of the eight separate plug/outlet connections.

All Siemon Z-MAX® and TERA® connecting hardware has been certified by an independent test lab to be in full compliance with IEC 60512-99-001.
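A minimal sketch of the pass/fail idea in that test schedule: after 100 powered mating cycles (55 V dc, 600 mA), each of the eight plug/outlet contact paths must stay within a maximum allowed resistance change. The milliohm threshold below is a placeholder, not the standard's actual limit.

MAX_DELTA_MILLIOHM = 20.0  # placeholder; see IEC 60512-99-001 for the real limit

def passes_iec_60512_99_001(before_mohm, after_mohm) -> bool:
    """True if all eight contact resistances stay within the allowed change."""
    assert len(before_mohm) == len(after_mohm) == 8
    return all(abs(a - b) <= MAX_DELTA_MILLIOHM
               for b, a in zip(before_mohm, after_mohm))

before = [12.0] * 8                                        # milliohms, pre-test
after = [12.5, 13.1, 12.2, 12.8, 12.4, 12.9, 12.3, 12.6]   # after 100 cycles
print(passes_iec_60512_99_001(before, after))              # True under the placeholder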

Keeping it cool

The standard ISO/IEC operating environment for structured cabling is -20°C to 60°C (-4°F to 140°F). Compliance to industry standards ensures reliable long term mechanical and electrical operation of cables and connectors in environments within these temperature limits. Exceeding the specified operating range can result in degradation of the jacket materials and loss of mechanical integrity that may have an irreversible effect on transmission performance that is not covered by a manufacturer's product warranty. Since deployment of certain remote powering applications can result in a temperature rise of up to 10°C (18°F) within bundled cables (refer to Table A.1 in TIA TSB-1844 and Table 1 in ISO/IEC TR 291255), the typical rule of thumb is to not install minimally compliant cables in environments above 50°C (122°F).

This restriction can be problematic in regions such as the American southwest, the Middle East, or Australia's Northern Territory, where temperatures in enclosed ceiling, plenum, and riser shaft spaces can easily exceed 50°C (122°F). To overcome this obstacle, Siemon recommends the use of shielded category 6A and 7A cables that are qualified for mechanical reliability up to 75°C (167°F). Not only do these cables inherently exhibit superior heat dissipation (refer to Siemon's white paper, "IEEE 802.3at PoE Plus Operating Efficiency: How to Keep a Hot Application Running Cool6"), but they may be installed in high temperature environments up to the maximum 60°C (140°F) specified by TIA and ISO/IEC structured cabling standards without experiencing mechanical degradation caused by the combined effects of high temperature environments and heat build-up inside cable bundles due to remote power delivery.

Maximizing reach

Awareness of the amount of heat build-up inside the cable bundle due to remote power delivery is important because cable insertion loss increases (signals attenuate more) in proportion to temperature. The performance requirements specified in all industry standards are based on an operating temperature of 20°C (68°F). The temperature dependence of cables is recognized in cabling standards, and both TIA and ISO specify an insertion loss de-rating factor for use in determining the maximum channel length at temperatures above 20°C (68°F). The temperature dependence is different for unshielded and shielded cables, and the de-rating coefficient for UTP cable is actually three times greater than for shielded cable above 40°C (104°F) (refer to Annex G in ANSI/TIA-568-C.27 and Table 21 in ISO/IEC 11801, 2nd edition8). For example, at 60°C (140°F), the standard-specified length reduction for category 6A UTP horizontal cables is 18 meters. In this case, the maximum permanent link length must be reduced from 90 meters to 72 meters to offset increased insertion loss due to temperature. For minimally compliant category 6A F/UTP horizontal cables, the length reduction is 7 meters at 60°C (140°F), which means reducing maximum link length from 90 meters to 83 meters. The key takeaway is that shielded cabling systems have more stable transmission performance at elevated temperatures and are best suited to support remote powering applications and installation in hot environments.

Siemon's category 6A and 7A shielded cables exhibit extremely stable transmission performance at elevated temperatures and require less length reduction than specified by TIA and ISO/IEC standards to satisfy insertion loss requirements, thus providing the cabling designer with significantly more flexibility to reach the largest number of work areas and devices in "converged" building environments. As shown in Figure 2, the length reduction for Siemon 6A F/UTP horizontal cable at 60°C (140°F) is 3 meters, which means reducing maximum link length from 90 meters to 87 meters. Furthermore, Siemon 6A F/UTP horizontal cable may be used to support remote powering currents up to 600mA applied to all four pairs up to 60°C (140°F). In this case, the maximum link length must be reduced from 90 meters to 86 meters. Note that the TIA and ISO/IEC profiles from 60°C to 70°C (140°F to 158°F) are extrapolated assuming that the de-rating coefficients do not change and are provided for reference only. Due to their superior and stable insertion loss performance, Siemon's fully-shielded category 7A cables do not require any length de-rating to support remote powering currents up to 600mA applied to all four pairs and all networking applications up to and including 10GBASE-T over a full 4-connector, 100-meter channel topology in environments up to 70°C (158°F)!

Figure 2: Horizontal cable length de-rating versus temperature for application speeds up to 10GBASE-T
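The length figures quoted above can be collected into a small planning helper; a minimal sketch (the dict simply encodes the 60°C reductions stated in the text, not the TIA/ISO de-rating formulas):

LENGTH_REDUCTION_AT_60C_M = {
    "category 6A UTP (minimally compliant)": 18,    # 90 m -> 72 m
    "category 6A F/UTP (minimally compliant)": 7,   # 90 m -> 83 m
    "Siemon 6A F/UTP": 3,                           # 90 m -> 87 m
    "Siemon 7A fully shielded": 0,                  # no de-rating up to 70°C
}

def max_permanent_link_m(cable: str, base_m: float = 90.0) -> float:
    """Maximum permanent link length at 60°C for the cables quoted above."""
    return base_m - LENGTH_REDUCTION_AT_60C_M[cable]

for cable in LENGTH_REDUCTION_AT_60C_M:
    print(f"{cable}: {max_permanent_link_m(cable):.0f} m at 60°C")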



A better patching solution

While TIA and ISO/IEC temperature dependence characterization focuses on the performance of solid conductor cables, it is well known that the stranded conductor cables used to construct patch cords exhibit significantly greater insertion loss rise due to elevated temperature than do solid conductor cables. To maximize flexibility and minimize disruptions when device moves, adds, and changes are made, a zoned cabling solution is the topology of choice for the building automation systems (BAS) most likely to take advantage of remote powering solutions. However, most BAS horizontal connection points in a zoned topology are located in the ceiling or in plenum spaces where high temperatures are most likely to be encountered. Fortunately, the risk of performance degradation due to elevated temperatures in zone cabling environments can be mitigated by using solid conductor cords for equipment connections. Equipment cords constructed from Siemon shielded category 6A solid conductor cable are recommended for support of remote powering applications in environments up to 60°C (140°F), and equipment cords constructed from Siemon shielded category 7A solid conductor cable are recommended for support of remote powering applications in environments up to 70°C (158°F).

The future of remote powering applications:
The advent of remote powering technology has significantly increased the number of networked devices, with surveillance cameras, IP phones, and wireless access points driving the market for PoE chipsets today. As the PD market matures, new and emerging remote powering technology continues to evolve to support advanced applications, improved efficiency, and increased power delivery. Power over HDBaseT, UPOE, and the work of the IEEE 802.3 4-Pair Power over Ethernet Study Group formed to investigate more efficient power injection schemes are enabling remote powering applications that will support new families of devices, such as lighting fixtures, high definition displays, digital signage, and point-of-sale (POS) devices that can consume more than 30W of power. All trends indicate that four-pair power delivery is the future of remote powering technology. Choosing connectors and cables that are specifically designed to handle remote powering current loads, associated heat build-up, and contact arcing are important steps that can be taken to minimize the risk of component damage and transmission errors.

Conclusions:
As the market for remotely powered IP devices grows and more advanced powering technology is developed, the ability of cables and connectors to operate in higher temperature environments and perform under dc load conditions will emerge as critical factors in the long-term reliability of cabling infrastructure used to support PoE and other low voltage applications that deliver power over twisted pairs. Fortunately, cabling products designed to operate under demanding environmental and remote powering conditions are already available today. Siemon’s shielded category 6A and category 7A cabling systems provide the following implementation advantages when deploying remote powering technology:

• Siemon’s Z-MAX® and TERA® connecting hardware complies with IEC 60512-99-001, which ensures that critical contact seating surfaces are not damaged when plugs and jacks are mated and unmated under remote powering current loads

• Siemon’s Z-MAX shielded category 6A and TERA category 7A cabling solutions support the IEEE 802.3 Type 2 PoE Plus application over the entire ISO/IEC operating temperature range of -20°C to 60°C (-4°F to 140°F)

• Siemon’s Z-MAX shielded category 6A cabling solutions require less than one-fifth the length de-rating of minimally compliant category 6A UTP cables at 60°C (140°F)

• Siemon’s TERA category 7A cabling solutions support data rates up to at least 10GBASE-T in 70°C (158°F) environments over a full 4-connector, 100-meter channel topology - no length de-rating required

• Siemon’s shielded category 6A and 7A solid equipment cords are uniquely capable of maintaining highly reliable and stable performance with no mechanical degradation when used for converged zone cabling connections in hot environments.


Data Center Storage Evolution
Executive Summary

Data is growing at explosive rates in today's businesses. Big Data is increasing storage demands in a way that could only

be imagined just a few short years ago. A typical data record has tripled if not quadrupled in size in just the last five years, and this data now takes many forms, including structured, semi-structured and unstructured. In fact, according to a

recent IBM® study, 2.5 quintillion bytes of data are written every day and 90% of global data has been created in the last two

years alone. It is glaringly apparent that the size of databases is growing exponentially.


Aside from a company's human resources, data has become

the most valuable corporate asset both tangibly and

intangibly. How to effectively store, access, protect and manage critical data is a new challenge facing IT departments. A

Storage Area Network (SAN) applies a networking model to

storage in the data center. The SANs operate behind the

servers to provide a common path between servers and storage devices. Unlike server-based Direct Attached Storage

(DAS) and file-oriented Network Attached Storage (NAS)

solutions, SANs provide block-level or file level access to

data that is shared among computing and personnel

resources. The predominant SAN technology is implemented

in a Fibre Channel (FC) configuration, although new

configurations are becoming popular including iSCSI and

Fibre Channel over Ethernet (FCoE). The media on which

the data is stored is also changing.

With the growth of SANs and the worldwide domination of

Internet Protocol (IP), using IP networks to transport storage

traffic is in the forefront of technical development. IP

networks provide increasing levels of manageability,

interoperability and cost-effectiveness. By converging the

storage with the existing IP networks (LANs/MANs/WANs)

immediate benefits are seen through storage consolidation,

virtualization, mirroring, backup, and management. The

convergence also provides increased capacities, flexibility,

expandability and scalability.

The two main standards that converge storage onto Ethernet and IP networks are FCoE (Fibre Channel over Ethernet) and iSCSI (Internet Small Computer System Interface). iSCSI carries SCSI commands inside TCP/IP datagrams, so it operates over standard Ethernet networks and standard Ethernet adapters at the edge device, called the initiator. FCoE is different in that Fibre Channel frames are encapsulated directly into Ethernet frames rather than IP packets, which requires a converged network adapter (CNA) capable of speaking both Fibre Channel and Ethernet for encapsulation.
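The layering difference between the two approaches is easiest to see written out explicitly. The sketch below is a simplified illustration; the layer labels are invented for the example and omit real header detail.

```python
# Simplified, illustrative view of the two convergence stacks described
# above. Real frames carry many more fields; the layer names are labels
# invented for this sketch, not protocol field names.

ISCSI_STACK = [
    "Ethernet frame",
    "IP packet",            # routable, so iSCSI can cross LAN/MAN/WAN routers
    "TCP segment",          # reliable transport
    "iSCSI PDU",            # session and target addressing
    "SCSI command (CDB)",
]

FCOE_STACK = [
    "Ethernet frame",       # no TCP/IP layer: relies on lossless Ethernet + CNA
    "FCoE header",          # FC frame placed directly in the Ethernet payload
    "Fibre Channel frame",  # native FC framing is preserved end to end
    "SCSI command (CDB)",
]

def show(name, stack):
    print(name)
    for depth, layer in enumerate(stack):
        print("  " * depth + "- " + layer)

if __name__ == "__main__":
    show("iSCSI encapsulation:", ISCSI_STACK)
    show("FCoE encapsulation:", FCOE_STACK)
```

The practical consequence is the one noted above: iSCSI is natively routable over existing IP infrastructure, while FCoE stays within the Ethernet domain and depends on CNAs and lossless Ethernet features at the edge.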

Today, 10Gigabit Ethernet is becoming increasingly popular

as the horizontal application of choice in corporate data centers. Gaining a competitive edge from deploying 10 Gigabit

Ethernet in the enterprise requires a robust IT infrastructure.

Increasingly, 10GBASE-T and 10Gb SFP+ applications

provide a reliable foundation for data centers’ networking

components and SAN networking. With a structured cabling

system capable of 10GBASE-T, users are provided with an

open and industry standards-based infrastructure that can

support multiple converged applications.

This paper provides some useful insight into both existing

and new storage technologies.

Storage Technologies

With the advent of the Internet, Big Data, corporate intranets,

e-mail, e-commerce, business-to-business (B2B), ERP

(Enterprise Resource Planning), Customer Resource

Management (CRM), data warehousing, CAD/CAM, rich

media streaming, voice/video/data convergence, and many

other real time applications, the demands on the enterprise

storage capacity have grown by leaps and bounds. The data

itself is as important to a business's successful operation as

its personnel and systems. The need to protect this strategic

asset has far exceeded the capabilities of a tape backup.

Tape access and capacities can simply not address the

growing demands. Growing data stores meant having to

implement tape libraries. Even then, there are inherent

issues with tape media that could only be addressed with

either supplemental storage or replacement of the

media altogether.

Downtime is one critical factor in today's businesses. Based

on a recently published study by Dun & Bradstreet, 59% of

Fortune 500 companies experience a minimum of 1.6 hours

of downtime per week. Wages alone levy a downtime cost

of $896,000 per week or just over $46 million per year. A

recent conservative Gartner study lists downtime costs at

$42,000 per hour. A USA Today survey of 200 data center managers found that over 80% reported that their downtime costs exceed $50,000 per hour, and 20% said they exceed $500,000 per hour. These costs alone

have pushed the storage industry to provide redundancy and high availability. Further, federal mandates for the medical and financial industries have created additional requirements for security and high availability to satisfy compliance obligations.

Storage network technology has developed in the following

three main configurations: Direct Attached Storage (DAS),

Network Attached Storage (NAS), and Storage Area

Networks (SAN).


Network Attached Storage (NAS)

NAS is a file-level access storage architecture with storage elements attached directly to a LAN. It provides file access to

heterogeneous computer systems. Unlike other storage systems, the storage is accessed directly via the network, as shown in

Figure 2. An additional layer is added to address the shared storage files. This system typically uses NFS (Network File System)

or CIFS (Common Internet File System) both of which are IP applications. A separate computer usually acts as the "filer" which

is basically a traffic and security access controller for the storage which may be incorporated into the unit itself. The advantage

to this method is that several servers can share storage on a separate unit. Unlike DAS, each server does not need its own

dedicated storage, which enables more efficient utilization of available storage capacity. The servers can be different platforms

as long as they all use the IP protocol.

Figure 1: A Simple DAS Diagram

Direct Attached Storage (DAS)

DAS is the traditional method of locally attaching storage devices to servers via a direct communication path between the server

and storage devices. As shown in Figure 1, the connectivity between the server and the storage devices is on a dedicated path

separate from the network cabling. Access is provided via an intelligent controller. The storage can only be accessed through the

directly attached server. This method was developed primarily to address shortcomings in drive-bays on the host computer

systems. When a server needed more drive space, a storage unit was attached. This method also allowed for one server to mirror

another. The mirroring functionality may also be accomplished via directly attached server to server interfaces.


Figure 2: Simple NAS Architecture
Figure 3: Meshed SAN Architecture (Switch/SAN Director)

Storage Area Networks (SANs)

Like DAS, a SAN is connected behind the servers. SANs

provide block-level access to shared data storage. Block

level access refers to the specific blocks of data on a

storage device as opposed to file level access. One file will

contain several blocks. SANs provide high availability and

robust business continuity for critical data environments.

SANs are typically switched fabric architectures using Fibre

Channel (FC) for connectivity. As shown in Figure 3, the term switched fabric refers to each storage unit being connected to each server via multiple SAN switches, also called SAN directors, which provide redundancy within the

paths to the storage units. This provides additional paths

for communications and eliminates one central switch as a

single point of failure.
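With all three models now on the table, a side-by-side summary helps; the sketch below simply restates the descriptions above in data form and is illustrative only.

```python
# Side-by-side summary of the three storage models described above; the
# traits restate the text in data form (illustrative sketch).

STORAGE_MODELS = {
    "DAS": {"access": "block",
            "path": "dedicated link, direct to one server",
            "sharing": "only through the attached server"},
    "NAS": {"access": "file (NFS or CIFS over IP)",
            "path": "existing LAN",
            "sharing": "any platform that speaks IP"},
    "SAN": {"access": "block",
            "path": "switched fabric (FC; iSCSI/FCoE over Ethernet)",
            "sharing": "shared behind the servers, with redundant paths"},
}

for name, traits in sorted(STORAGE_MODELS.items()):
    print(name + ":")
    for key, value in traits.items():
        print(f"  {key}: {value}")
```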

Ethernet has many advantages similar to Fibre Channel for

supporting SANs. Some of these include high speed,

support of a switched fabric topology, widespread interoperability, and a large set of management tools. In a storage

network application, the switch is the key element. With the

significant number of Gigabit and 10 Gigabit Ethernet ports

shipped, leveraging IP and Ethernet for storage is a natural

progression for some environments.

SAN over IP

IP was developed as an open standard with complete interoperability of components. Two new IP storage network technologies are Fibre Channel over Ethernet (FCoE) and SCSI over IP (iSCSI). IP communication across a standard IP network via Fibre Channel tunneling, or storage tunneling, has the benefit of utilizing storage in locations that may exceed the directly attached limit of nearly 10 km when using fiber as the transport medium. Internal to the data center, legacy Fibre Channel can also be run over coaxial cable or twisted pair cabling, but at significantly shorter distances. The incorporation of the IP standard into these storage systems offers performance benefits through speed, greater availability, fault tolerance and scalability. These solutions, properly implemented, can almost guarantee 100% availability of data. The IP-based management protocols also provide network managers with a new set of tools, warnings and triggers that were proprietary in previous generations of storage technology. Security and encryption solutions are also greatly enhanced. With 10G gaining popularity and the availability of new faster WAN links, these solutions can offer true storage on demand.


Fibre Channel (FC) and Fibre Channel over Ethernet

(FCoE)

Native FC is a standards-based SAN interconnection

technology within and between data centers limited by

geography. It is an open, high-speed serial interface for

interconnecting servers to storage devices (discs, tape

libraries or CD jukeboxes) or servers to servers. FC has large

addressing capabilities. Similar to SCSI, each device

receives a number on the channel. It is the dominant storage-networking interface today. The Fibre Channel can be fully

meshed providing excellent redundancy. FC can operate at

the following speeds: 1, 2, 4, 8, 16 and 32 Gb/s with 8Gb/s

to 16 Gb/s currently being predominant. The

transmission distances vary with the speed and media. With

FCoE, the packets are processed with the lengths and

distances afforded by an Ethernet Network and again, vary

according to speed and media. According to the IEEE

802.3ae standard for 10Gigabit Ethernet over fiber, when

using singlemode optical fiber cables, the distance supported

is 10 kilometers, up to 300m when using laser optimized

50 micron OM3 multimode fiber and up to 400m with OM4

as compared to native Fibre Channel with a distance of only

130m. Laser optimized OM3 and OM4 fiber is an important

consideration in fiber selection for 10Gb/s transmission.
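Those reach figures are convenient to keep as a small lookup when planning links; the sketch below captures only the values quoted above and is illustrative, not a substitute for the governing standard.

```python
# Reach values for 10 Gigabit Ethernet as quoted in the paragraph above
# (the IEEE 802.3ae singlemode figure plus the OM3/OM4 multimode figures).
# Illustrative sketch only.

REACH_10GBE_M = {
    "singlemode": 10_000,
    "OM3 laser-optimized 50 micron": 300,
    "OM4 laser-optimized 50 micron": 400,
}

def link_supported(media, planned_length_m):
    """True if the planned run fits within the quoted reach for the media."""
    return planned_length_m <= REACH_10GBE_M[media]

if __name__ == "__main__":
    print(link_supported("OM3 laser-optimized 50 micron", 150))   # True
    print(link_supported("OM4 laser-optimized 50 micron", 450))   # False
```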

FC Topology

Native Fibre Channel supports three different connection

topologies: point-to-point, arbitrated loop, and switched

fabric. Switched fabric, as the name implies, is the better

solution as it allows for a mesh within the Fibre Channel. It

may also be configured in what is known as fabric islands.

Fabric islands connect geographically diverse Fibre Channel

fabrics. These fabrics may be anywhere within the range of

the medium without IP. With IP, the fabric can reach greater

distances as it is extended by routers and links outside of the

fabric. They may also comprise different topologies

(cascaded, ring, mesh, or core-to-edge), but may require

additional connectivity for shared data access, resource

consolidation, data backup, remote mirroring, or disaster

recovery.

FCoE Topology

Native Fibre Channel runs on a separate network from the Ethernet network. With Fibre Channel over Ethernet,

Converged Network Adapters are used in place of Ethernet

adapters and allow a single channel to pass both Ethernet

and Fibre Channel encapsulated packets across a standard Ethernet network, extending distance over an entire enterprise,

regardless of geography via Ethernet routers and bridges.

For replication between storage systems over a wide area

network, FCoE provides a mechanism to interconnect

islands of FC SAN or FCoE SANs over the IP

infrastructure (LANs/MANs/WANs) to form a single, unified

FC SAN fabric.

Native Fibre Channel SAN Typical Components and Elements

Fibre Channel hardware interconnects storage devices with

servers and forms the Fibre Channel fabric through the

connection of the following:

•  Interconnect device: switches, directors

•  Translation devices: Host bus adapters (HBAs) installed in

server, adapters, bridges, routers, and gateways

•  Storage devices: non-RAID or RAID (Redundant Array of

Independent Disks) disk arrays, tape libraries

•  Servers: The server is the initiator in the Fibre Channel

SAN and provides the interface to an IP network. Servers

interact with the Fibre Channel fabric through the HBA.

•  Physical layer/media: Coax, twisted-pair and/or fiber-optic cables; however, fiber is the most predominant.

The FC SAN switches are classified as either switches or directors. A SAN fabric switch contains a low to medium port

count, while a director is a high port count switch (generally

above 64 ports). Fibre Channel switches can be networked

together to build larger storage networks. The HBA is more

complex than a traditional Ethernet card. It connects the

Fibre Channel network to the IP network via the networking

cabling subsystem. A bridge may be used to connect legacy

SCSI or ESCON (Enterprise System Connection) storage

devices to the Fibre Channel network. The bridge will serve

to translate and/or encapsulate the various protocols allowing

communication with legacy storage devices via the SAN.


Figure 4: iSCSI SAN Diagram

Small Computer Systems Interface (SCSI) over IP (iSCSI)

The iSCSI protocol unites storage and IP networking. iSCSI

uses existing Ethernet devices and the IP protocol to carry and

manage data stored in a SCSI SAN. It is a simple, high speed,

low-cost, long-distance storage solution. One problem with

traditional SCSI attached devices was the distance limitation.

By using existing network components and exploiting the

advantages of IP networking such as network management

and other tools for LANs, MANs and WANs, iSCSI is expanding

in the storage market and extending SAN connectivity without

distance limitations. It is more cost effective due to its

use of existing equipment and infrastructure. With a 10x

increase from existing 1Gigabit to 10Gigabit Ethernet, it will

become a major force in the SAN market. Using 10Gigabit

Ethernet, SANs are reaching the highest storage transportation

speeds ever.

iSCSI Typical Component/Elements:

•  iSCSI Host Bus Adapter (HBA) or NIC (installed in server)

•  Storage devices disk arrays or tape libraries

•  Servers

•  Standard IP Ethernet Switches and Routers

•  Storage Switches and Routers

•  Gateways

•  Physical layer media - Fiber, twisted-pair

Generally, to deploy an iSCSI storage network in a data center,

connectivity is provided via iSCSI Host Bus Adapters

(HBAs) or storage NICs, which connect the storage resources

to existing Ethernet via IP Ethernet switches or IP Storage

switches and routers. Specified storage IP routers and switches

have a combination of iSCSI interfaces and other storage interfaces such as SCSI or Fibre Channel, providing multi-protocol connectivity not available in conventional IP and Ethernet switches.

When connecting to FC SANs, an IP storage switch or router

is needed to convert the FC protocol to iSCSI. IP storage

routers and switches extend the reach of the FC SAN and

bridge FC SANs to iSCSI SANs. For example, an IP storage

switch allows users to perform FC-to-FC switching, FC-to-iSCSI switching, or FC-to-Ethernet switching in addition to Ethernet-to-Ethernet switching.


Mixed Architectures Storage Networks

Flexibility and low cost are the important driving factors for

implementing an iSCSI approach, especially for long

distance storage. In addition, as Ethernet speeds are

continually increasing, it is believed that the 10 Gigabit

Ethernet based iSCSI will be widely used for SANs in data

centers. A number of devices have been developed to

address the large installed base of native FC storage

solutions in place today. In order to protect an organization's

current investment in storage technology, SAN installations

may evolve from a single specific storage network to a mix

of Fibre Channel and iSCSI products.

Furthermore, a convergence or integration of NAS and SANs

is expected and multilingual (combination) Fibre Channel

and Ethernet switches are expected to evolve. The

integrated SAN and NAS network will be scalable and cost-effective, supporting multiple protocols and interfaces. This integration will enable customers to optimize their native Fibre Channel SANs, providing reliable connections over long distances using existing electronics through a convergence of Ethernet, Fibre Channel and iSCSI protocols.

Evolving Standards for SANs

FC standards are developed by the technical subcommittee

NCITS/T11 of the National Committee for Information

Technology Standards (NCITS). The original FC standards

were approved as ANSI X3.230 in 1994. The first SCSI

standard was ratified by ANSI in 1986. Since then, there

have been multiple amendments mirroring changes within

the industry.

The Internet Engineering Task Force (IETF) is expanding on

these standards through IP protocol enhancements to the

existing interface and operational standards above. In February 2003, the iSCSI specification was officially approved

as a "proposed standard" by the IETF. Additionally, the

Storage Networking Industry Association (SNIA), the Fibre

Channel Industry Association (FCIA), and other industry

groups are also working on the SAN standard's

implementation and development. The data center is the

critical infrastructure hub of an organization. Besides the

SAN /NAS components, a typical data center includes a

variety of other components and connectivity. To address the

evolutions of data centers, the TIA TR-42.1.1 group

developed the "Telecommunications Infrastructure Standard

for Data Centers” published as ANSI/TIA/EIA-942 and later

amended and published as TIA 942-A. The standard covers

the cabling system design, pathway, and spaces. Likewise,

ISO/IEC developed the ISO/IEC 24764 international cabling standard for

data centers.

Cabling Considerations and Design Factors
SANs are most prevalent in data centers, but data centers also carry video, voice, and other converged applications. A robust network

cabling foundation is essential. In a data center environment

the basic requirements for the cabling system are:

• Standards-based open system

• Support for 10GbE, 8, 16 and 32Gb/s FC

• Support for multiple types of SAN / NAS and protocols

• Support for cumulative bandwidth demands for

converged applications

• High Reliability

• Redundancy

• Flexible and scalable, with mechanisms for easy deployment of MACs

• It is highly desirable to use the highest performing fiber

with low loss connectors to allow reconfigurations without

running new fiber.

To meet all of the above requirements, 10GbE copper and laser-optimized multimode fiber are the first choices. TIA recommends category 6A as a minimum copper cabling standard

and now OM4 as the minimum fiber standard. ISO 24764

recommends 6A as a minimum for copper and OM3 for fiber.

A 10GbE capable infrastructure is predominant in data

centers today, with 40 and 100GbE fast approaching for

backbone applications. In order to improve the reliability of

the communications infrastructure, redundancy is a principal

design consideration in a data center. The redundancy can

be achieved by providing physically separated

services, cross-connected areas and pathways, or by

providing redundant electronic devices in fabric topologies.


Conclusion

Storage Area Networks are but one component of converged applications that traverse today's networks. The benefits of these

systems are not only numerous, but completely essential to a business. Providing the bandwidth necessary for all networked

applications using a high performance structured cabling infrastructure will ensure their functionality for years to come. Upgrading

or replacing your infrastructure reactively is costly. Industry experts agree that cabling infrastructure should be planned to carry

data for at least 10 years.

Storage solutions are plentiful and there is no one size fits all for today’s data centers. In fact some data centers utilize a variety of

storage architectures depending on the application requirements. While Fibre Channel in native form is the predominant architecture

for storage, iSCSI and FCoE are gaining some momentum. When fibre channel SANs complement Ethernet networks, dual paths

for moving data are provided. Converging fibre channel over Ethernet decreases the number of connections required, but

doubles the traffic over the used channels. Increasing bandwidth from gigabit to 10GbE provides more bandwidth for these

applications. When increasing the horizontal server to switch speed, uplink ports also need to increase in speed, generally using

multiple 10GbE links or newer 40/100GbE speeds. Siemon’s data center design assistance experts can

help design a storage and network architecture to support your business needs.

The Siemon Company is a global market leader specializing in high performance, high quality cabling systems. Siemon offers a

broad range of copper and fiber cable, connectivity and cable management systems for Data Centers including Storage Area

Networks and beyond. Siemon’s LightStack™ Fiber Plug and Play system combines superior performance with ultra high density

(144 LC and 864 MTP fibers in 1U) and best in class accessibility. Siemon cabling systems are backed by an extended warranty

covering product quality, performance headroom and applications assurance for up to 20 years. For more information on Siemon

Data Center solutions please visit: www.siemon.com/datacenter.

Bibliography

•  Worldwide Disk Storage Systems Report, IDC, www.idc.com
•  SAN for the Masses, Computing Technology Industry Association, http://www.comptia.org/research/
•  Storage Network Infrastructure, 2003 Forecast (Executive Summary), Dataquest of Gartner, www.gartner.com
•  ANSI, American National Standards Institute, www.ansi.org
•  TIA, Telecommunications Industry Association, www.tiaonline.org
•  EIA, Electronic Industries Alliance, www.eia.org
•  IETF, Internet Engineering Task Force, www.ietf.org
•  SNIA, Storage Networking Industry Association, www.snia.org
•  FCIA, Fibre Channel Industry Association, www.fibrechannel.org


The Need for Low-Loss Multifiber Connectivity In Today’s Data Center

Optical insertion loss budgets are now one of the top concerns among data center managers, especially in today’s large

virtualized server environments with longer-distance 40 and 100 gigabit Ethernet (GbE) backbone switch-to-switch

deployments for networking and storage area networks (SANs). In fact, loss budgets need to be carefully considered during

the early design stages of any data center—staying within the loss budget is essential for ensuring that optical data signals

can properly transmit from one switch to another without high bit error rates and performance degradation.

With the length and type of the fiber cable and number of connectors and splices all contributing to the link loss, data

center managers are faced with the challenge of calculating each connection point and segment within their fiber channels.

Multi-fiber push on (MPO) or mechanical transfer push on (MTP) connectors are rapidly becoming the norm for switch-to-

switch connections due to their preterminated plug and play benefits and ease of scalability from 10 to 40 and 100 gigabit

speeds. Unfortunately, typical MPO/MTP module insertion loss may not allow for having more than two mated connections

in a fiber channel, which significantly limits design flexibility and data center management. Low loss, rather than standard

loss, MPO/MTP connectors better support multiple mated connections for flexibility over a wide range of distances and

configurations while remaining within the loss budget.


Evolving Data Center Architectures Impact Loss

Traditional three-tier Layer 3 switch architectures have been common practice in the data center environment for several

years. These traditional architectures consist of core network and SAN switches located in the main distribution area (MDA);

aggregation switches located in the MDA, intermediate distribution area (IDA) or horizontal distribution area (HDA); and

access switches located in the HDA (see Figure 1).

Figure 1: Traditional three-tier switch architecture per TIA-942 data center standards.

With multiple switch tiers and fiber backbone speeds of 10 gigabits per second (Gb/s), the distance and data rates between

switches have remained short enough for most data centers to maintain two or more connectors without exceeding optical

link loss budgets. However, traditional three-tier architectures are no longer ideal for large virtualized data centers.

While the traditional three-tier architecture was well suited for data traffic between servers that reside on the same access

switch, it does not adequately support the non-blocking, low-latency, high-bandwidth requirements of today’s large virtualized

data centers that divide single physical servers into multiple isolated virtual environments. Non-blocking refers to having sufficient bandwidth so that any port can communicate with any other port at the full bandwidth capacity of the port, while latency

refers to the amount of time it takes for a data packet to travel from its source to its destination. With equipment now located

anywhere in the data center, data traffic between two access switches in a three-tier architecture may have to traverse in a

north-south traffic pattern through multiple aggregation and core switches, resulting in an increased number of switch hops

and increased latency.

In a high-bandwidth, virtualized environment, the traditional north-south (switch-to-switch) traffic pattern often leaves links without enough bandwidth to support the traffic.

This has many data centers moving to switch fabric architectures that use only two tiers of switches with fewer switch-to-

switch hops. Switch fabrics provide lower latency and greater bandwidth between any two points by taking advantage of wire-

speed transmissions on the backplanes (port to port) of switches, as opposed to uplinks from a lower-level switch to a higher-level switch. This enables dynamic east-west server-to-server traffic where it is needed, eliminating the need for communication

between two servers to travel north-south through multiple switch layers.


Fat-tree switch fabrics, also referred to as leaf and spine architectures, are one of the most common switch fabrics being

deployed in today’s virtualized data center. The fat-tree architecture consists of interconnection (spine) switches placed in

the MDA and access (leaf) switches placed in the HDA or EDA that each connect, or uplink, to every interconnection switch

in a mesh, typically via optical fiber (see Figure 2).
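Because every access switch uplinks to every interconnection switch, the number of backbone fiber runs grows multiplicatively with switch count; a quick sketch (the switch counts are assumed example values):

```python
# Fat-tree (leaf-and-spine) fiber counting: every access (leaf) switch
# uplinks to every interconnection (spine) switch, so uplink runs grow
# multiplicatively. Switch counts below are assumed examples.

def mesh_uplinks(leaf_switches, spine_switches):
    """Full mesh of uplinks: one run from each leaf to each spine."""
    return leaf_switches * spine_switches

if __name__ == "__main__":
    for leaves in (8, 16, 32):
        print(f"{leaves} leaves x 4 spines = {mesh_uplinks(leaves, 4)} uplink runs")
```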

While the fat-tree flattened architecture leverages the reach of standards-based optical fiber cabling to establish large

numbers of active connections between fewer switches, these new data center designs often result in longer distances

between interconnection and access switches. These longer fiber runs can be difficult to deploy in data center pathways,

and adding new access switches presents the challenge of adding additional long fiber runs to already populated pathways.

To maintain flexibility and manageability, ease deployments and upgrades, and limit access to critical switches, many data

center managers are looking to deploy multiple mated pairs that support distribution points or convenient fiber patching areas

(cross connects).

Convenient patching areas

include the use of fiber connect

panels that mirror interconnection

switch ports and connect via

permanent, or fixed, links to fiber

connect panels that mirror access

switch ports (see Figure 3). These

panels can be located in separate

cabinets, which allows the

switches to remain untouched and

secure. They also enable easier

moves, adds and changes

(MACs) by creating an “any to all”

configuration where any switch

port can be connected to any

other switch port by simply

repositioning fiber jumper connections at the patching area.

Figure 2: Fat-tree switch fabric architecture per the TIA-942-A addendum.

Figure 3: Cross connects can be deployed for ease of manageability, flexibility and/or keeping switches secure. However, the extra mated connections in the backbone channel may require low loss optical fiber connectivity.


Figure 4: Top-view of a data center showing the use of cross connects for core SAN and network switch connections in the MDA and servers and access switches in the HDA. A server’s fiber connection can easily be switched from network to SAN via a simple fiber jumper change at the end of row cross connect.

Unfortunately, the use of these valuable cross connects adds additional connection points and subsequent loss in a fiber channel. Consequently, standard loss MPO/MTP insertion loss values can put data center managers at risk of exceeding their optical link loss budgets, often preventing the use of cross connects and requiring continued use of long fiber runs that significantly limit flexibility and complicate MACs and upgrades.

Higher Bandwidth Speeds Impact Loss

One of the key driving factors that is making fiber loss budgets a growing concern in the data center environment is the migration of transmission speeds from 1 Gb/s, to 10 Gb/s, to now 40 and 100 Gb/s for Ethernet-based networks and from 8 Gb/s, to 16 Gb/s, to now 32 Gb/s for Fibre Channel-based SANs.

As speeds increase, insertion loss requirements become more stringent than ever, making the use of cross connects more difficult in most scenarios where standard insertion loss values are used. A closer look at the evolution of Ethernet standards demonstrates the impact of speed on insertion loss.

The Institute of Electrical and Electronics Engineers (IEEE) 1000BASE-SX standard (1 GbE) allows for a maximum channel loss of 4.5 dB over 1000 meters of OM3 multimode fiber and 4.8 dB over 1100 meters of OM4. The maximum channel loss for 10GBASE-SR (10 GbE) was reduced to 2.6 dB over 300 meters of OM3 fiber and 2.9 dB over 400 meters of OM4.

Ideal for larger data centers or when optical fiber is distributed to multiple functional areas or zones, the use of cross

connects at interconnection and/or access switches can also allow for one-time deployment of permanent high-fiber-count

cabling from the MDA to the HDA. This allows for the fiber backbone cabling to be used for various purposes (networking or

SAN) without multiple MACs and simplifies the process of adding new access switches and equipment to the data center.

For example, all it takes to swap a server’s fiber connection from a network connection to a SAN connection is a simple fiber

jumper change at the cross connect located at the end of each row (see Figure 4).


Figure 5: Per IEEE 802.3ae, a 10 gigabit channel over OM3 fiber has a maximum distance of 300 meters with a maximum loss budget of 2.6 dB.

IEEE 40GBASE-SR4 and 100GBASE-SR10 standards for 40 and 100 GbE over multimode fiber with an 850 nm source now

have more stringent loss requirements for the fiber, which lowers the overall channel loss. As shown in Table 1, for OM3 fiber

cabling, the 40 and 100 GbE standards allow for a channel distance of 100 meters with a maximum channel loss of 1.9 dB,

including a maximum connector loss of 1.5 dB. For OM4 fiber cabling, the distance is increased to 150 meters but with a maximum channel loss of 1.5 dB, including a maximum connector loss of 1.0 dB.

Table 1: As speeds have increased from 1 Gb/s to 40 and 100 Gb/s, maximum channel distance and loss have decreased significantly

It should be noted that current TIA and ISO standards require a minimum of OM3 fiber, while TIA recommends the use of

OM4 due to its longer transmission capabilities. In fact, the upcoming 100GBASE-SR4 standard that will use eight fibers

(i.e., four transmitting and four receiving) at 25 Gb/s is anticipated to be supported by OM4 fiber to 100 meters, but to only

70 meters using OM3.

Typical MPO/MTP connectors, which are required for 40 and 100 GbE deployments, have insertion loss values that range

from 0.3 dB to 0.5 dB. Typical LC multimode fiber connectors have loss values that range from 0.3 dB to 0.5 dB. While better

than the allowed 0.75 dB TIA value, typical connector loss still limits how many connections can be deployed in 10, 40 and

100 GbE channels. For example, with an LC connector loss of 0.5 dB, a 300-meter 10 GbE channel over OM3 fiber can

include only three connectors with no headroom. Having just two or three connections prevents the use of cross connects

at both interconnection (MDA) and access switches (HDA).

Based on the 0.75 dB maximum acceptable connector loss and the 3.5dB/km maximum fiber loss specified in TIA-568-C.0-2

standards, the loss values for 10 GbE assume two connection points in the channel with connector pairs contributing a total of

1.5 dB allocated for insertion loss and the fiber contributing a total of 1.1 dB for OM3 and 1.4 dB for OM4, respectively. For example,

Figure 5 shows a two-connector 10 GbE channel using OM3 fiber and maximum loss values per the TIA and IEEE standards.
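These allocations can be sanity-checked with a few lines of arithmetic. The sketch below recomputes the channel loss from the figures quoted above (3.5 dB/km multimode fiber loss at 850 nm and a per-mated-pair connector loss); the function name and structure are illustrative, not drawn from any standard.

```python
# Sketch of the channel insertion loss arithmetic described above, using
# the quoted TIA figures. Assumed values, for illustration only.

FIBER_LOSS_DB_PER_KM = 3.5  # multimode at 850 nm, per TIA-568-C.0-2

def channel_loss_db(length_m, mated_pairs, loss_per_pair_db):
    """Total loss = fiber contribution + sum of mated-connector losses."""
    return (length_m / 1000.0) * FIBER_LOSS_DB_PER_KM + mated_pairs * loss_per_pair_db

if __name__ == "__main__":
    # Two 0.75 dB pairs over 300 m of OM3 (the Figure 5 channel):
    print(f"{channel_loss_db(300, 2, 0.75):.2f} dB vs 2.6 dB budget")  # ~2.55 dB
    # Three typical 0.5 dB LC pairs over the same 300 m run:
    print(f"{channel_loss_db(300, 3, 0.5):.2f} dB vs 2.6 dB budget")   # ~2.55 dB, no headroom
```

The second case reproduces the observation above: three 0.5 dB connectors consume essentially the entire 2.6 dB budget of a 300-meter 10 GbE OM3 channel.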


Low Loss Fiber Connectivity to the Rescue

Due to improvements in connector technology and manufacturing techniques,

Siemon has succeeded in lowering the loss to 0.20 dB for MTP connectors and

to 0.15 dB (0.1 dB typical) for LC and SC connectors, well below the industry

standard of 0.75 dB and loss values offered by other manufacturers.

For 10 GbE, Siemon low loss LC BladePatch fiber jumpers offer a loss of 0.15 dB

(typical 0.1 dB) and Siemon low loss plug and play MTP to LC or SC modules

offer a loss of 0.35 dB (typical 0.25 dB). For 40 and 100 GbE, Siemon MTP to

MTP pass-through adapter plates and MTP fiber jumpers offer a loss of 0.2 dB.

These lower loss values allow data center managers to deploy more connection

points in fiber channels, enabling the use of distribution points or cross connects

that significantly increase flexible configuration options.

Table 2 below provides an example of how many connections can be deployed

in 10, 40 and 100 GbE channels over OM3 and OM4 multimode fiber using

Siemon low loss MTP to LC modules for 10 GbE and low loss MTP to MTP

pass-through adapters for 40 and 100 GbE versus standard loss solutions.

Table 2: Siemon low loss multifiber connectivity allows for more connectors in 10, 40 and 100 Gb/s channels over multimode fiber at the 850 nm wavelength.

As indicated in Table 2, the use of low loss connectivity allows for four connections in a 10 GbE OM3 or OM4 channel

compared to just two when using standard loss connectivity. Low loss connectivity allows for eight connections in a 100-meter 40/100 GbE channel over OM3 versus just four connections using standard loss, and five connections in a 150-meter

40/100 GbE channel over OM4 fiber compared to just two connections using standard loss. Deploying cross connects

between interconnection and access switches requires a minimum of four connections, depending on the configuration.

Therefore, cross connects in a full-distance optical channel are simply not feasible without low loss connectivity.
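The connector counts in Table 2 follow from the same budget arithmetic: subtract the fiber’s share from the channel budget and divide the remainder by the per-mated-pair loss. A minimal check for the 300-meter 10 GbE OM3 case, using the loss values quoted earlier (assumed for illustration):

```python
# How many mated pairs fit in the 300 m / 10 GbE / OM3 budget? A minimal
# check of the Table 2 comparison, using the loss values quoted above.
import math

BUDGET_DB = 2.6          # 10GBASE-SR over OM3 (IEEE 802.3ae)
FIBER_DB = 0.300 * 3.5   # 300 m at 3.5 dB/km

for label, per_pair_db in (("standard loss pair (0.75 dB)", 0.75),
                           ("low loss MTP-LC module (0.35 dB)", 0.35)):
    pairs = math.floor((BUDGET_DB - FIBER_DB) / per_pair_db)
    print(f"{label}: up to {pairs} mated pairs")
# Prints 2 and 4, matching the Table 2 counts for the 10 GbE channel.
```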

Figures 6, 7 and 8 show some example scenarios for deploying cross connects in 10 GbE and 40/100 GbE channels over

OM3 and OM4 fiber using Siemon low loss fiber connectivity. In Figure 6, all changes are made at the cross connect with

LC fiber jumpers. The switches remain separate and the permanent MTP trunk cables need only be installed once. The

cross connect can be placed anywhere within the channel to maximize ease of deployment and manageability.


Figure 6: Four Siemon Low Loss MTP-LC Modules can be deployed in a 10 GbE channel, enabling a cross connect for superior flexibility and manageability.

Figure 7 shows an OM3 40/100 GbE channel with six Siemon low loss MTP-MTP pass-through adapter plates and low loss

trunks. This scenario offers 0.4 dB of headroom and provides even better manageability and security. All changes are made

at the cross connects via MTP fiber jumpers, switches remain separate, and the MTP trunk cables need only be installed

once. Once again, the cross connects can be located anywhere in the data center for maximum flexibility. This allows for one-

time deployment of high fiber-count cabling from the cross connect at the interconnection switch to the cross connect at the

access switch. Adding additional access switches can be accomplished with short fiber runs from the cross connect.

Figure 7: For maximum flexibility, manageability and security, up to eight Siemon low loss MTP-MTP pass-through adapters can be deployed using low loss trunks in a 100-meter 40/100 GbE switch-to-switch backbone channel over OM3 fiber.


Figure 8: Low loss connectivity allows for two cross connects within a 150-meter 40/100 GbE channel over OM4 fiber to easily change from a network uplink port to a SAN port via a jumper change at the cross connect.

If the loss budget does not permit deploying six MTP to MTP adapters, one option is to

deploy MTP to LC or MTP to MTP jumpers from the cross connect to the equipment,

depending on the equipment interface. For example, if using OM4 fiber to extend the

channel distance to 150 meters, up to five Siemon Low Loss MTP-MTP pass-through

adapters can be deployed as shown in Figure 8.


In addition to enabling more connections in 10, 40 and 100 gigabit Ethernet channels, low loss connectivity provides the same

benefits for Fibre Channel deployments in SANs. For example, a 150-meter 8 Gb/s Fibre Channel (GFC) deployment allows

for up to four Siemon low loss MTP to LC modules versus only two modules when using standard loss

components. Using low loss connectivity to deploy cross connects therefore makes it easy to change server connections from

a network uplink port to a SAN port and vice versa by simply changing a jumper at the cross connect as shown in Figure 9.

Figure 9: Low loss connectivity also supports additional connection points for Fibre Channel, making it easy tochange from a network uplink port to a SAN port by changing a jumper at the cross connect.


Table 3 below provides a detailed summary of several Ethernet and Fibre Channel scenarios using Siemon low loss

connectivity versus standard loss connectivity. The blue areas indicate that the loss is within the standards requirements,

while red indicates over budget. As indicated, the maximum operating distance is impacted by the number of connections.

However, low loss connectivity clearly enables more connections in both Ethernet and Fibre Channel fiber links.

Table 3: Siemon low loss fiber enables multiple mated pairs in Ethernet and Fibre Channel applications.
* 550 meters is beyond the standard for OM4 but supported by most of today’s equipment

** 0.5 meters is the minimum distance for all Ethernet and FC applications


Additional Considerations

In addition to considering low loss connectivity for more connection points in switch-to-switch and server-to-switch fiber links,

it is important to remember that not all MPO connectors are the same.

Siemon’s MTP connector interface offers higher performance than generic MPO connectors. It features advanced engineering

and design enhancements that offer ease of use, including the ability to re-polish connectors and change connector gender

in the field. The MTP connector also offers improved mechanical performance to maintain physical contact under load, enhanced guide pins that provide for better alignment and a metal pin clamp for centering the push spring and

eliminating fiber damage from the spring.

Data center managers should also consider ferrule material when selecting fiber connectivity. Siemon uses high precision

Zirconia ceramic ferrules for optical performance over metal or lower-cost plastic. Zirconia ceramic offers better durability

and dimensional control, which enables more efficient polishing with repeatable results and a finer finish. Using advanced

precision molding techniques, Zirconia ceramic ferrules also provide better physical contact of fibers than other materials.

This provides accurate fiber alignment, which when combined with the benefits of the MTP connector, allows for the best

overall performance with minimal loss.

Summary

With today’s flattened switch architectures and shrinking optical insertion loss budgets, Siemon low loss fiber connectivity

enables more connection points in both Ethernet and Fibre Channel applications in the data center. These additional con-

nection points allow for the use of distribution points and cross connects in fiber network and SAN channels to:

• Deploy shorter fiber runs

• Prevent access to critical switches

• Make easy changes with an “any to all” configuration

• Use fiber backbone cabling for various purposes without having to run new fiber

• Simplify the process of adding new equipment

With loss budgets needing to be carefully considered during the early design stages of any data center, data center

managers can turn to Siemon low loss fiber connectivity to support more connections in 10, 40 and 100 GbE applications or

in 8, 16 and 32 GFC SAN applications. Contact Siemon today for more information about how our low loss LC BladePatch

fiber jumpers, plug and play MTP to LC or SC modules, MTP to MTP pass-through adapter plates and MTP fiber jumpers

and trunks can help you stay within your loss budgets and provide flexibility over a wide range of distances and future proof

configurations.


Considerations for Choosing Top of Rack inToday’s Fat-Tree Switch Fabric Configurations

Three-tier switch architectures have been common practice in the data center environment

for several years. However, this architecture does not adequately support the low-latency,

high-bandwidth requirements of large virtualized data centers. With equipment now located

anywhere in the data center, data traffic between two servers in a three-tier architecture may

have to traverse in a north-south traffic (i.e., switch to switch) pattern through multiple switch

layers, resulting in increased latency and network complexity. This has many data centers

moving to switch fabric architectures that are limited to just one or two tiers of switches.

With fewer tiers of switches, server-to-server communication is improved by eliminating the

need for communication to travel through multiple switch layers.


Fat-tree switch fabrics, also referred to as leaf and spine, are one of the most common switch fabrics being deployed in today’s

data center. In a fat-tree switch fabric, data center managers are faced with multiple configuration options that require decisions

regarding application, cabling and where to place access switches that connect to servers. In a fat-tree switch fabric, access

switches can reside in traditional centralized network distribution areas, middle of row (MoR) positions or end of row (EoR)

positions—all of which use structured cabling to connect to the servers. Alternatively, they can be placed in a top of rack (ToR)

position using point-to-point cabling within the cabinet for connecting to the servers.

There is no single ideal configuration for every data center, and real-world implementation of newer fat-tree switch fabric

architectures warrants CIOs, data center professionals and IT managers taking a closer look at the pros and cons of each

option based on their specific needs within the data center ecosystem. Undertaking a study that looks at the impact of the

various configurations, applications and cabling on manageability, cooling, scalability, and total cost of ownership (TCO) will

help facilities and data center managers ultimately make the best educated decision as they move from traditional three-

tier switch architectures to newer fat-tree switch fabrics.

A Closer Look at the Options
In April 2013, the Telecommunications Industry Association (TIA) released ANSI/TIA-942-A-1, an addendum to the ANSI/TIA-

942-A data center standard that provides cabling guidelines for switch fabrics. The fat-tree switch fabric outlined in the addendum consists of interconnection (spine) switches placed in the main distribution area (MDA) and access (leaf) switches

placed in the horizontal distribution area (HDA) and/or equipment distribution area (EDA). Each access switch connects to

every interconnection switch in a mesh topology, typically via optical fiber (see Figure 1).

Figure 1: Fat-tree switch architecture. Source: ANSI/TIA-942-A-1

In a fat-tree switch fabric, access switches that connect to servers and storage equipment in rows can be located at the MoR

or EoR position to serve the equipment in that row, or they can be located in a separate dedicated area to serve multiple rows of

cabinets (see Figure 2). MoR and EoR configurations, which function in the same manner, are popular for data center environments

where each row of cabinets is dedicated to a specific purpose, and growth is accomplished on a row-by-row basis. For the

purposes of this article, we will concentrate on the more popular EoR configuration.


Figure 2: In a fat-tree architecture, the access switches (HDA) can be located in MoR or EoR positions to serve equipment in rows, or they can be located in separate dedicated areas to serve multiple rows.

Figure 3: In a ToR configuration, small access switches placed in the top of each cabinet connect directly to the equipment in the cabinet via point-to-point cabling. Source: TIA-942-A-1

EoR configurations that place access switches at the end of each row use structured cabling with passive patch panels to serve as the connection point between the access switches and servers. Patch panels that mirror the switch and server ports (cross connect) at the EoR location connect to corresponding patch panels at the access switch and in server cabinets using permanent links. The connections between switch and server ports are made at the cross connect via patch cords.

The alternative to placing access switches in the EoR position is to place them in the ToR position. In this scenario, fiber cabling runs from each interconnection switch in the MDA to smaller (1RU to 2RU) access switches placed in each cabinet. Instead of access switches, active port extenders can be deployed in each cabinet. Port extenders, sometimes referred to as fabric extenders, are essentially physical extensions of their parent access switches. For the purposes of this article, we will refer to ToR switches in general to represent both access switches and port extenders placed in the ToR position.

Within each cabinet, the ToR switches connect directly to the servers in that cabinet using point-to-point copper cabling, often via short preterminated small form-factor pluggable (e.g., SFP+ and QSFP) twinaxial cable assemblies, active optical cable assemblies or RJ-45 modular patch cords (see Figure 3).

ToR configurations are geared towards dense 1 rack unit (1RU) server environments, enabling fast server-to-server connections within a rack versus within a row. ToR is ideal for data centers that require cabinet-at-a-time deployment and cabinet-level management.

The use of a ToR configuration places the access switch in the EDA, eliminating the HDA and patching area for making connections between switches and servers. In fact, ToR is often positioned as a replacement for and reduction of structured cabling. However, structured cabling offers several benefits, including improved manageability and scalability, and overall reduced TCO. These factors should be considered when evaluating ToR and structured cabling configurations in today's fat-tree switch fabric environments.


Figure 4: Structured Cabling vs. ToR Topology. ToR eliminates the convenient patching area for making changes.

Manageability Considerations

With structured cabling, where connections between active equipment are made at patch panels that mirror the equipment ports, all moves, adds and changes (MACs) are accomplished at the patching area. Any equipment port can be connected to any other equipment port by simply repositioning patch cord connections, creating an "any-to-all" configuration.

Because ToR switches connect directly to the servers in the same cabinet, all changes must be made within each individual

cabinet rather than at a convenient patching area. Depending on the size of the data center, making changes in each cabinet

can become complicated and time consuming. Imagine having to make changes in hundreds of server cabinets versus

being able to make all your changes at the patching area in each EoR location. Figure 4 provides a visual representation

of the difference between structured cabling and ToR.

With structured cabling, the patch panels that mirror active equipment ports connect to the corresponding panels in patching

areas using permanent, or fixed, links. With all MACs made at the patching area, the permanent portion of the channel

remains unchanged, which allows the active equipment to be left untouched and secure. As shown in Figure 4, the patching

area can reside in a completely separate cabinet so there is no need to access the switch cabinet. This scenario can be ideal

for when switches and servers need to be managed by separate resources or departments.

ToR configurations do not allow for physically segregating switches and servers into separate cabinets, and MACs require

touching critical switch ports. The ToR configuration can be ideal when there is a need to manage a group of servers and

their corresponding switch by application.

Another manageability consideration is the ability for servers across multiple cabinets to "talk" to each other. While ToR enables fast server-to-server connections within a rack, communication from one cabinet to another requires switch-to-switch transmission. One advantage of the EoR approach is that any two servers in a row, rather than in a cabinet, can experience low-latency communication because they are connected to the same switch.

Cabling distance limitations can also impact manageability. For ToR configurations, ANSI/TIA-942-A-1 specifies that the point-to-point

cabling should be no greater than 10 m (33 ft). Moreover, the SFP+ twinaxial cable assemblies often used with ToR switches limit the

distance between the switches and the servers to a length of 7 meters in passive mode. The cabling channel lengths with structured

cabling can be up to 100 meters, which allows for more flexible equipment placement throughout the life of the data center.


ToR configurations can also land-lock equipment placement due to the short cabling lengths of SFP+ cable assemblies and

data center policies that do not allow patching from cabinet to cabinet. This can prevent placing equipment where it makes

the most sense for power and cooling within a row or set of rows.

For example, if the networking budget does not allow for outfitting another cabinet with a ToR switch to accommodate new

servers, placement of the new servers may be limited to where network ports are available. This can lead to hot spots, which

can adversely impact neighboring equipment within the same cooling zone and, in some cases, require supplemental cooling.

Structured cabling configurations avoid these problems.

Cooling Considerations

A ToR switch can technically be placed in the middle or bottom of a cabinet, but it is most often placed at the top for easier accessibility and manageability. According to the Uptime Institute, the failure rate for equipment placed in the top third of the cabinet is three times greater than that of equipment located in the lower two thirds. In a structured cabling configuration, the passive patch panels are generally placed in the upper position, leaving the cooler space for the equipment.

Scalability Considerations

ToR configurations allow for cabinet-at-a-time scalability, which can be a preference for some data centers, depending on the budget and business model in place. Once several cabinets are deployed, a widespread switch upgrade in a ToR configuration obviously will impact many more switches than with structured cabling. An upgrade to a single ToR switch also improves connection speed to only the servers in that cabinet. With an EoR structured cabling configuration, a single switch upgrade can increase connection speeds to multiple servers across several cabinets in a row.

Application and cabling for the switch-to-server connections is also a consideration when it comes to scalability. For EoR configurations with structured cabling, standards-based category 6A twisted-pair cabling is typically the cable media of choice. Category 6A supports 10GBASE-T up to 100 m (328 ft) distances. The 10GBASE-T standard includes a short reach (i.e., low power) mode that requires category 6A and higher performing cabling up to 30 m (98 ft). Recent advancements in technology have also enabled 10GBASE-T switches to rapidly drop in price and power consumption, putting them on par with ToR switches.

For direct switch-to-server connections in a ToR configuration, many data center managers choose SFP+ twinaxial cable assemblies rather than category 6A modular patch cords. While these assemblies support low power and low latency, which can be ideal for supercomputing environments with high port counts, there are some disadvantages to consider.

Standards-based category 6A cabling supports autonegotiation, but SFP+ cable assemblies do not. Autonegotiation is the ability for a switch to automatically and seamlessly switch between different speeds on individual ports depending on the connected equipment, enabling partial switch or server upgrades on an as-needed basis. Without autonegotiation, a switch upgrade requires all the servers connected to that switch to also be upgraded, incurring full upgrade costs all at once.

For decades, data center managers have valued standards-based interoperability to leverage their existing cabling investment during upgrades regardless of which vendors' equipment is selected. Unlike category 6A cabling that works with all BASE-T switches, regardless of speed or vendor, higher cost proprietary SFP+ cable assemblies may be required by some equipment vendors for use with their ToR switches. While these requirements help ensure that vendor-approved cable assemblies are used with corresponding electronics, proprietary cabling assemblies are not interoperable and can require cable upgrades to happen simultaneously with equipment upgrades. In other words, the SFP+ assemblies will likely need to be swapped out if another vendor's switch is deployed.


Some ToR switches are even designed to check vendor security IDs on the cables connected to each port and either display errors or prevent ports from functioning when connected to an unsupported vendor ID. SFP+ cable assemblies are also typically more expensive than category 6A patch cords, causing additional expense during upgrades. In addition, many of the proprietary cable assemblies required by switch vendors come with an average 90-day warranty. Depending on the cable vendor, category 6A structured cabling typically carries a 15 to 25 year warranty.

Equipment, Maintenance and Energy Costs

In a ToR configuration with one switch in each cabinet, the total number of switch ports depends on the total number of cabinets, rather than on the actual number of switch ports needed to support the servers. For example, if you have 144 server cabinets, you will need 144 ToR switches (or 288 if using dual primary and secondary networks for redundancy). ToR configurations can therefore significantly increase the number of switches required, compared to the use of structured cabling configurations that use patch panels to connect access switches to servers in multiple cabinets.

Having more switches also equates to increased annual maintenance fees and energy costs, which impacts TCO. This is especially relevant as power consumption is one of the top concerns among today's data center managers. As data centers consume more energy and energy costs continue to rise, green initiatives are taking center stage. Reducing the number of switches helps reduce energy costs while contributing to green initiatives like LEED, BREEAM or STEP.

Based on a low-density 144-cabinet data center using a fat-tree architecture, Table 1 compares the cost for installation, maintenance and annual power consumption for a ToR configuration using SFP+ cable assemblies to an EoR configuration using category 6A 10GBASE-T structured cabling. The ToR configuration ultimately costs over 30% more than using an EoR configuration.

The example assumes an average 5 to 6 kW of power per cabinet, which supports ~14 servers per cabinet. It also assumes primary and secondary switches for redundancy. Installation costs include all switches, uplinks, fiber line cards, fiber backbone cabling and copper switch-to-server cabling. Annual maintenance costs are based on an average 15% of active equipment costs. Annual power costs are based on the maximum power rating of each switch for 24x7 operation. The example excludes the cost of software, servers, cabinets and pathways.

Table 1: Low Density ToR SFP+ vs. EoR Structured Cabling Cost Comparison (based on MSRP at time of print) for an actual 144-cabinet data center.

Low-Density, 144 Server Cabinets, 14 Servers Per Cabinet
Material, Power & Maintenance                    ToR (SFP+)    EoR (10GBASE-T)
Material Cost                                    $11,786,200   $8,638,300
Annual Maintenance Cost                          $1,655,200    $1,283,100
Annual Energy Cost                               $101,400      $44,400
Total Cabling Cost (Included in Material Cost)   $1,222,300    $70,300
TOTAL COST OF OWNERSHIP                          $13,542,800   $9,965,800

Table 2 compares a ToR configuration to an EoR configuration with the same assumptions but in a high-density environment that assumes an average of 15 to 20 kW of power per cabinet to support 40 servers per cabinet. In this scenario, the total cost of ownership for ToR is still over 20% more than that of using an EoR configuration.

Table 2: High Density ToR SFP+ vs. EoR Structured Cabling Cost Comparison (based on MSRP at time of print) for an actual 144-cabinet data center.

High-Density, 144 Server Cabinets, 40 Servers Per Cabinet
Material, Power & Maintenance                    ToR (SFP+)    EoR (10GBASE-T)
Material Cost                                    $26,394,000   $21,596,100
Annual Maintenance Cost                          $3,371,900    $2,737,900
Annual Energy Cost                               $177,600      $106,700
Total Cabling Cost (Included in Material Cost)   $5,123,900    $2,078,200
TOTAL COST OF OWNERSHIP                          $29,943,500   $24,440,700
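The TCO rows in these tables are simply the material cost plus one year of maintenance and energy. A minimal Python sketch, using the Table 1 figures quoted above (illustration only), reproduces the arithmetic:

# Sketch: reproduce the TCO arithmetic behind Tables 1 and 2.
# Values are the low-density (Table 1) figures quoted above.
def tco(material: int, annual_maintenance: int, annual_energy: int,
        years: int = 1) -> int:
    """TCO as presented in the tables: material cost plus the stated
    number of years of maintenance and energy (the tables use 1 year)."""
    return material + years * (annual_maintenance + annual_energy)

tor = tco(11_786_200, 1_655_200, 101_400)   # -> 13,542,800
eor = tco(8_638_300, 1_283_100, 44_400)     # ->  9,965,800
print(f"ToR premium over EoR: {tor / eor - 1:.0%}")  # -> 36%

Running the same function over the Table 2 rows gives the high-density totals shown above, and extending the years parameter shows how the maintenance and energy gap compounds over the life of the installation.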


Figure 5: 144-Cabinet EoR Configuration

Figure 6: 144-Cabinet ToR Configuration


Table 3: Low-density switch port utilization for ToR vs. EoR Structured Cabling for a 144-cabinet data center (assumes average 5 to 6 kW per cabinet, dual network, 14 servers per cabinet).

Figures 5 and 6 show the graphical representation of the ToR and EoR configurations used in the above cost examples.

Switch Port Utilization

Low utilization of switch ports also equates to higher total cost of ownership. In a low-density environment of 5 to 6 kW that can accommodate just 14 servers in a cabinet, server switch port demand will be lower than the 32 switch ports available on a ToR switch. As shown in Table 3, the same 144-cabinet example used in Table 1 equates to 5,184 unused ports with ToR versus just 576 unused ports with EoR. That equates to more than 162 unnecessary switch purchases and related maintenance and power. Using an EoR configuration with structured cabling allows virtually all active switch ports to be fully utilized because they are not confined to single cabinets. Via the patching area, the switch ports can be divided up, on demand, to any of the server ports across several cabinets in a row.

Low-Density, 144 Server Cabinets, 14 Servers Per Cabinet   ToR (SFP+)   EoR (10GBASE-T)
TOTAL UNUSED PORTS                                         5,184        576
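The unused-port math is easy to reproduce. The following Python sketch is a simplified model, assuming 32-port ToR switches and the dual primary/secondary network used in the tables; it yields the ToR figures, while the EoR counts depend on the particular chassis and line-card granularity chosen and so are not modeled here:

import math

# Sketch: unused ToR ports in a cabinet-per-switch deployment.
# Assumes 32-port ToR switches and a dual (primary/secondary) network.
def tor_unused_ports(cabinets=144, servers_per_cabinet=14,
                     ports_per_switch=32, networks=2):
    """Every cabinet needs enough switches per network to cover its
    servers, whether or not the ports are actually filled."""
    switches_per_cabinet = math.ceil(servers_per_cabinet / ports_per_switch) * networks
    total_ports = cabinets * switches_per_cabinet * ports_per_switch
    used_ports = cabinets * servers_per_cabinet * networks
    return total_ports - used_ports

print(tor_unused_ports())                        # 5184 (low density, Table 3)
print(tor_unused_ports(servers_per_cabinet=40))  # 6912 (high density, Table 4)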


Even when enough power and cooling can be supplied to a cabinet to support a full complement of servers, the number of unused ports can remain significantly higher with ToR than with EoR and structured cabling. As shown in Table 4, the same high-density 144-cabinet example used in Table 2 with 40 servers per cabinet equates to 6,912 unused ports with ToR versus just 224 unused ports with EoR. The reason for this is that two 32-port ToR switches are required in each cabinet to support the 40 servers, or four for a dual network using primary and secondary switches. That equates to 24 unused ports per cabinet, or 48 in a dual network. In a 144-cabinet data center, the number of unused ports quickly adds up.

Table 4: High-density switch port utilization for ToR vs. EoR Structured Cabling for a 144-cabinet data center (assumes average 15 to 20 kW per cabinet, dual network, 40 servers per cabinet).

High-Density, 144 Server Cabinets, 40 Servers Per Cabinet   ToR (SFP+)   EoR (10GBASE-T)
TOTAL UNUSED PORTS                                          6,912        224

In reality, the only way to truly improve switch port utilization with ToR is to limit the number of servers to no more than the number of switch ports per cabinet. However, limiting the number of servers to the number of ToR switch ports is not always the most efficient use of power and space. For example, in a high-density environment that supports 40 servers per cabinet, limiting the number of servers per cabinet to 32 (to match the number of switch ports) results in 8 unused rack units per cabinet, or 1,152 unused rack units across the 144-cabinet data center. In addition, once the number of servers surpasses the number of available switch ports, the only option is to add another ToR switch (or two for dual networks). This significantly increases the number of unused ports.

Regardless of the configuration being considered, it's always best to consider port utilization when designing the data center and ensure that either empty rack spaces or unused ports can be effectively managed.

Conclusion

With several configurations available for fat-tree switch fabrics, data center professionals and IT managers need to examine the pros and cons of each based on their specific needs and total cost of ownership (TCO).

There is no single cabling configuration for every data center. A ToR configuration with access switches placed in each rack or cabinet and SFP+ cable assemblies used for switch-to-server connections is ideal for data center environments that demand extremely low-latency server connections and cabinet-level deployment and maintenance.

However, many data center environments can benefit from the manageability, cooling, scalability, lower cost and better port utilization provided by category 6A structured cabling and 10GBASE-T used in End of Row, Middle of Row or centralized configurations.


TECH BRIEF

SFP+ Cables and Encryption
Cost-Effective Alternatives Overcome Vendor Locking

As 10Gb SFP+ cables and transceivers become more common in today's data centers, the question of vendor lock or encryption can become an issue for data center professionals. This paper addresses the mechanism that is used to implement the encryption, why it is employed and how to overcome it.

I2C INTERFACE

SFP+ cables and transceivers employ a 2-wire serial interface (called I2C) that allows the network equipment to poll a particular port and get information about the cable or transceiver that is plugged into that port. This interface is also commonly referred to as the Digital Diagnostic Management Interface, Digital Diagnostic Monitoring Interface or DDMI.

The DDMI provides information about the cable or transceiver assembly, such as vendor, serial number, part number and date of manufacture, that is stored on a memory chip or microprocessor within the cable assembly.

SFP+ passive copper cables contain EEPROMs within the connector back shell that have I2C ports. These cables may also be referred to as DAC or "direct attached copper" cables. An EEPROM is an "Electrically Erasable Programmable Read-Only Memory" chip that is programmed at the factory with specific information about the cable assembly.

SFP+ active copper cables and optical transceivers contain microprocessors within the connector back shell. The microprocessor has memory that is accessible to the network through the 2-wire I2C interface. For active cables and transceivers, the interface allows real-time access to device operating parameters and includes alarm and warning flags, which alert the equipment when particular operating parameters are outside of the factory settings.

Typically, these EEPROMs and microprocessors comply with the SFF or Small Form Factor standards, which define the I2C interface protocol and allocate certain information to specific memory locations.
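To make the memory map tangible, here is a minimal Python sketch that decodes the identification fields from a raw dump of a module's A0h page. The byte offsets follow the SFF-8472 layout; the 256-byte input eeprom_a0 is a hypothetical dump you would obtain from your NIC or switch diagnostic tooling:

# Sketch: decode SFP+ identification fields from a raw A0h page dump.
# Byte offsets follow the SFF-8472 memory map; eeprom_a0 is assumed
# to be a 256-byte dump read over the 2-wire (I2C) interface.
def decode_sfp_id(eeprom_a0: bytes) -> dict:
    field = lambda lo, hi: eeprom_a0[lo:hi].decode("ascii", "replace").strip()
    return {
        "vendor_name": field(20, 36),  # bytes 20-35: vendor name
        "vendor_pn":   field(40, 56),  # bytes 40-55: vendor part number
        "vendor_sn":   field(68, 84),  # bytes 68-83: vendor serial number
        "date_code":   field(84, 92),  # bytes 84-91: date of manufacture
    }

A vendor-lock check of the kind described in the next section is typically little more than the switch firmware reading these same fields (often together with a vendor-specific security byte or checksum) and comparing them against an approved list.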


Figure: PCB termination of Siemon's SFP+ cables. Laser-stripped conductors; automated welding for unmatched consistency; welding results in less dielectric shrink-back than soldering; the overmold provides additional strain relief to minimize pistoning.


ENCRYPTION OR VENDOR LOCK

Some vendors incorporate encryption or "vendor lock" into their equipment that will issue a warning message if a non-vendor-approved cable assembly is plugged into a port. Theoretically, this ensures that equipment manufacturers won't have to troubleshoot problems caused by sub-standard cables. In many cases, equipment vendors who use encryption charge more for their own cords because they lock out the use of other cords. In reality, encryption is unnecessary, as all reputable manufacturers of SFP+ cables and transceivers meet the standards that IEEE and SFF have established for SFP+, and interoperability is never a concern. Most network equipment vendors that employ encryption allow a workaround as long as the user acknowledges the warning. For example, the user may have to acknowledge that they understand the warning and accept it before moving on.

SIEMON’S SFP+ CABLES

SFP+ passive copper cable assemblies from Siemon Interconnect Solutions (SIS) are a cost-effective and lower-power alternative to optical fiber cables for short reach links in high-speed interconnect applications such as high performance computing (HPC), enterprise networking, and network storage markets.

Siemon's SFP+ connectors feature robust die cast housings and cable strain reliefs as well as gold-plated contacts. They are SFF-8083, SFF-8431 and SFF-8432 compliant, which are the industry standards for this particular connector form factor.

SIEMON’S CISCO COMPATIBLE OFFERING

Siemon has offered industry standard SFP+ cables for several years, which have been tested by the UNH Interoperability Lab and proven to be compatible with Cisco and equipment from other major vendors. Siemon is now introducing Cisco Compatible SFP+ passive copper cables.

Siemon's Cisco compatible SFP+ passive copper cables use proprietary encryption within the assembly's EEPROM to circumvent the warning messages that Cisco equipment may produce when non-Cisco-approved cables are plugged in. This allows data center designers to avoid unwarranted concern that may be associated with startups when users see these warning messages. Siemon's cables meet the industry standards for SFP+ cables and are offered in the same lengths and wire gauges as Cisco DAC assemblies, but at a significant cost reduction.

Ordering Information:

Cisco Part Number    Siemon Cisco Compatible Part Number   Length (Meters)   Gauge (AWG)
SFP-H10GB-CU1M       SFPH10GBCU1MS                         1 (3.3 ft)        30
SFP-H10GB-CU1.5M     SFPH10GBCU1.5MS                       1.5 (4.9 ft)      30
SFP-H10GB-CU2M       SFPH10GBCU2MS                         2 (6.6 ft)        30
SFP-H10GB-CU2.5M     SFPH10GBCU2.5MS                       2.5 (8.2 ft)      30
SFP-H10GB-CU3M       SFPH10GBCU3MS                         3 (9.8 ft)        30
SFP-H10GB-CU5M       SFPH10GBCU5MS                         5 (16.4 ft)       24


SIEMON’S INDUSTRY STANDARD OFFERING

Siemon has one of the industry's most comprehensive SFP+ direct attached copper cable assembly offerings, with lengths of up to 7 meters. Please visit http://www.siemon.com/sis/ to learn more.

STANDARDS

SFF-8431, "Enhanced Small Form Factor Pluggable Module SFP+", Chapter 4, "SFP+ 2 Wire Interface"
SFF-8472, "Digital Diagnostic Management Interface for Optical Transceivers"
SFF-8636, "Common Management Interface"
"InfiniBand™ Architecture Specification Volume 2, Release 1.3, PHYSICAL SPECIFICATIONS"

Ordering Information:

Siemon Industry Standard Part Number   Length (Meters)   Gauge (AWG)
SFPP30-00.5                            0.5 (1.6 ft)      30
SFPP30-01                              1 (3.3 ft)        30
SFPP30-01.5                            1.5 (4.9 ft)      30
SFPP30-02                              2 (6.6 ft)        30
SFPP30-02.5                            2.5 (8.2 ft)      30
SFPP30-03                              3 (9.8 ft)        30
SFPP28-05                              5 (16.4 ft)       28
SFPP24-07                              7 (23.0 ft)       24


Getting Smart, Getting Rugged
Extending LANs into Harsher Environments

Virtually everything we do now on a daily basis touches the network—whether it's buying a snack, sending an email or taking a ride at an amusement park. The proliferation of digital information, wireless handheld devices and Ethernet into every facet of our lives means that connections to networks need to be in more places than ever before.

With manufacturing environments having rapidly migrated to Industrial Ethernet over the past decade as a means to deliver information for industrial automation and control systems and to integrate factory environments with the corporate LAN, it's no wonder that the industry is seeing a growing demand for network cables, patch cords and connectors capable of withstanding more severe conditions.


But what about environments that fall somewhere in between—not quite severe enough to be considered "industrial" but in need of something more ruggedized than what exists in everyday commercial office environments? Extending the network into these types of environments is becoming more common than one might think. As our world becomes more digital, these types of environments are popping up everywhere and demanding ruggedized network cables, patch cords and connectors that maintain long-term network reliability and prevent the need to replace components due to corrosion and damage from a variety of elements.

Knowing the Standards – From MICE to NEMA

While standards for industrial environments are certainly applicable to factory floors, manufacturing plants and processing facilities, the same standards can be used to determine the type of ruggedized cable and connectivity required for those in-between environments that are not as clearly identified as either commercial or industrial.

The international standard ISO/IEC 24702 provides application-independent requirements for both balanced copper and fiber optic cable systems that support Ethernet-based data communications in industrial environments. The standard provides implementation options and requirements for cable and connectivity that reflect the operating environments within industrial premises. ISO/IEC 24702, along with its comparable U.S. TIA-1005 and European EN 50173-3 standards, incorporates the MICE method of classifying parameters for the materials needed to build an industrial network.

MICE stands for Mechanical, Ingress, Climatic and Electromagnetic and includes three levels of environmental harshness—

level 1 for everyday commercial office environments, level 2 for light industrial and level 3 for industrial. For example, M3I3C3E3

environments require network infrastructure components that are able to withstand the highest levels of vibration, shock,

tensile force, impact and bending (see Table 1).

While the MICE method is used to determine the harshness level of commercial, light industrial and industrial environments, rarely is an environment exclusive to one MICE classification. Furthermore, one run of cabling from point A to point B can traverse various MICE classifications along the route. Designers planning cabling systems in harsh environments therefore need to have a good understanding of the environment and what constitutes levels 1, 2 and 3 for each parameter. In some cases, measuring the environment can require specialized equipment, especially when it comes to measuring vibration and electromagnetic interference. The standards include MICE tables to help determine which levels exist within the targeted environment (see Table 2).

The trick to using MICE levels to determine components is to always consider the worst case scenario and worst case level parameter, regardless of the other parameters. For example, an environment exposed to liquid may be classified as M1I3C1E1. If only ruggedized components meeting M3I3C3E3 are available, they may need to be used regardless of whether that level of protection is required for all parameters.
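To make the worst-case rule concrete, here is a minimal Python sketch (an illustration with hypothetical zone classifications, not part of any standard) that derives the classification a cable run must meet when it passes through several differently classified zones:

# Sketch: derive the worst-case MICE classification along a cable route.
# Zone classifications below are hypothetical examples, not measured data.
def required_mice(zones):
    """Each zone is an (M, I, C, E) tuple of levels 1-3; a component
    must meet the worst (highest) level seen for each parameter
    independently, regardless of the other parameters."""
    m, i, c, e = (max(levels) for levels in zip(*zones))
    return f"M{m}I{i}C{c}E{e}"

# A run that starts in an office (M1I1C1E1), crosses a wash-down
# area (M1I3C1E1) and ends near induction heating gear (M2I1C2E3):
print(required_mice([(1, 1, 1, 1), (1, 3, 1, 1), (2, 1, 2, 3)]))  # M2I3C2E3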

Table 1: MICE Parameters


Table 2: MICE Classifications

Another standards-based rating to consider for harsh environments is the ingress protection (IP) rating developed by the European Committee for Electrotechnical Standardization (CENELEC). Sometimes referred to as an IP code, the IP rating consists of the letters IP followed by two digits—the first digit classifying protection against solids (i.e., dust) and the second classifying protection against liquids (i.e., water). For example, as shown in Table 3, an IP rating of IP22 would indicate protection against finger-size objects and vertically dripping water.


There is yet another standard to consider related to enclosures, which can include cabinets, surface mount boxes, floor and ceiling boxes, junction boxes and even network equipment housing. The National Electrical Manufacturers Association (NEMA) uses a standard rating system for enclosures that defines the types of environments where they can be used. NEMA ratings for enclosures also have IP code equivalents, as shown in Table 4, which highlights the most common NEMA enclosures.

Table 4: NEMA Enclosure Ratings and IP Equivalents

NEMA 4X enclosures provide protection against dust, water and corrosion in rugged environments.

Table 3: IP Code Ratings

One of the common IP ratings seen for ruggedized connectivity in our industry is IP66/IP67, which offers total protection against dust ingress and short-term protection against water ingress. While the IP rating is especially useful for determining the level of protection needed when dealing with wet, dusty environments, it's important to remember the remaining MICE parameters, such as the ability to withstand higher temperature and humidity ranges or to maintain performance amidst higher levels of electrostatic discharge (ESD) or radio frequency interference (RFI).
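Decoding an IP code is mechanical enough to script. The Python sketch below is an illustration only, with descriptions abridged from the common CENELEC definitions rather than quoted from Table 3:

# Sketch: decode a two-digit IP rating into its two protection levels.
# Descriptions abridged from the common CENELEC definitions.
SOLIDS = {0: "no protection", 1: "objects > 50 mm", 2: "objects > 12.5 mm (fingers)",
          3: "objects > 2.5 mm (tools)", 4: "objects > 1 mm (wires)",
          5: "dust protected", 6: "dust tight"}
LIQUIDS = {0: "no protection", 1: "vertically dripping water",
           2: "dripping water when tilted 15 degrees", 3: "spraying water",
           4: "splashing water", 5: "water jets", 6: "powerful water jets",
           7: "temporary immersion", 8: "continuous immersion"}

def decode_ip(code: str) -> str:
    solids, liquids = int(code[2]), int(code[3])  # e.g., "IP67" -> 6, 7
    return f"{code}: solids - {SOLIDS[solids]}; liquids - {LIQUIDS[liquids]}"

print(decode_ip("IP67"))  # IP67: solids - dust tight; liquids - temporary immersion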


Identifying the Key Components – From Cables to Connectors

When it comes to selecting ruggedized cable and connectivity, both copper and fiber solutions may need to be considered—especially as more fiber is extending out of the commercial data center and telecommunications room environment to bring higher bandwidth closer to the work area outlet or to deal with longer distance requirements.

While not all MICE parameters will relate to both copper and fiber, especially with fiber being immune to electromagnetic interference, the IP66/IP67 rating on connectivity can easily apply to both, as can other mechanical, climatic and chemical parameters. In general, ruggedized cable and connectivity solutions for harsher environments should feature components and characteristics such as the following:

• Chemical-resistant thermoplastic housing on connectivity — Plugs and outlets should use materials that provide the widest range of protection from most solvents and common industrial chemicals.

• Dust caps for outlets — Ruggedized dust caps can protect unused outlets and seal outlets during wash downs.

• IP67-rated copper and fiber connectivity — Ruggedized outlets and modular patch cords with an IP66/IP67-rated seal protect plugs and outlet contacts from dust and moisture.

• Shielded twisted-pair cabling for copper — Shielded copper cabling such as F/UTP cables and S/FTP cables will provide much higher resistance to EMI/RFI.

• More durable cable jacket materials — Jacket materials such as polyurethane and thermoplastic elastomers can provide better tensile strength and lower temperature flexibility and brittle points, as well as better tear, abrasion, chemical and moisture resistance.

• IP44-rated faceplates — Stainless steel faceplates with rear sealing gaskets provide a protective seal from moisture and debris.

• NEMA 4X enclosures — Enclosures and surface mount boxes with a NEMA rating will protect the termination points of ruggedized outlets.

Stainless steel faceplates with rear sealing gaskets and dust caps for unused connections are ideal for protecting critical network connections in harsh environments.

The need for ruggedized connectivity can also relate to fiber outlets in a variety of environments.


Making the Best Choice – From Home Plates to Seafood Platters

With the proliferation of digital information, handheld devices and Ethernet, consumers and employees everywhere demand network and Internet access at all times and digital applications that make their lives and jobs easier. Consequently, enterprise businesses are required to expand their networks into places that in the past would have gone without network connections and wireless service. With many of the environments that now need access being outside the realm of standard commercial environments, enterprise businesses are partnering with manufacturers that offer ruggedized cable and connectivity in addition to commercial-grade components.

Dodger Stadium

In a $150 million upgrade at Dodger Stadium, the 52-year-old home of the Los Angeles Dodgers and the third oldest park in Major League Baseball, plenty of enhancements were made to deliver a state-of-the-art experience to fans, including a new high-performance copper and optical fiber cable system to support stadium-wide WiFi, digital displays, IP-based security, box offices, kiosks and point-of-sale locations.

As part of the upgrade, two new plazas were added at the left field and right field stadium entrances. While concession stands are located throughout the stadium, the new Bullpen Overlook Bars, the Think Blue Bar-B-Que and Tommy Lasorda's Italian Trattoria concession stands in the new plazas have drawn the most pre- and post-game attention.

During the design stages of the network, Ralph Esquibel, the Dodgers' vice president of IT, worked with Siemon to determine which products would best ensure reliability for LAN connectivity at the outside food and beverage locations. Due to the potential for harmful environmental factors that could adversely impact commercial-grade components, Siemon Ruggedized Z-MAX category 6A shielded IP66/IP67-rated outlets and modular cords were selected for use at these locations.

The Ruggedized Z-MAX connectors offer total protection against dust ingress and short-term protection against water ingress, as well as the ability to withstand higher temperature and humidity ranges. They feature a durable, chemical-resistant, industrial-grade thermoplastic and patented bayonet-style quarter-turn mating design for superior protection. Siemon shielded F/UTP cabling was also selected to provide the performance and noise immunity required throughout the stadium.

"We don't know when we'll be able to make this type of investment again," says Esquibel. "We have a lot of technology here, and we need to make sure we are protecting it."

Siemon Ruggedized Z-MAX Category 6A shielded IP66/IP67-rated outlets and cords were deployed at Dodger Stadium's outdoor concession stands and kiosks.


Trident Seafoods

The largest seafood company in the U.S., Seattle-based Trident Seafoods is a vertically integrated seafood company of fishing vessels and processing plants that produce fresh, frozen, canned, smoked and ready-to-eat seafood products under a variety of brand names, including Trident, Louis Kemp and Rubenstein's. When the company wanted to extend network access throughout its three factory trawlers, they turned to Siemon's ruggedized connectivity.

Starting with the 276-foot Kodiak Enterprise, Trident sought to upgrade the entire on-board network to not only improve existing wheel house communications, but also to provide whole-ship Wi-Fi for the more than 125-person crew that lives on the ship for extended periods of time during peak fishing season. During the short month of dry-dock time, Cabling & Technology Services (CTS), a full-service integrator of network infrastructure systems, removed and replaced the ship's entire cabling infrastructure.

"It's very challenging to deploy cabling on a ship due to tight spaces, corrosive sea water and other environmental elements," says James Gannon, service project manager for CTS. "We needed to deploy connections throughout for Wi-Fi access and to connect to computerized packaging systems in the fish processing area, which is often wet from floor to ceiling and undergoes wash downs as part of the company's sanitation process."

Throughout the ship, Siemon Ruggedized MAX IP66/IP67-rated category 6 outlets and modular cords were once again deployed to offer protection against water ingress, as well as the ability to withstand the corrosive nature of sea water that can typically cause non-ruggedized components to fail.

"Trident wanted something that could handle the wet, and Siemon had the product," says Gannon. "While I've used Siemon products for many projects in the past, I had not used their ruggedized connectivity before. We're also using it in the two other factory trawlers—the Island Enterprise and the Seattle Enterprise—both of which will be completed this year."

Choosing the Right Partner – From Experience to Breadth of Product

With an increase in the number of harsh environments that are an extension of the corporate LAN, designers and installers who are experienced in commercial environments may not necessarily understand industrial standards, how to use MICE parameters or which product features to look for. Furthermore, standards-based methods and parameters for determining the level of harshness and the components required are not always cut and dried.

While industry standards can be used for determining components based on environment, they often refer to in-between environments as "light industrial." This term can be confusing when the environment is clearly not one that is industrial but is simply an extension of the commercial LAN into a harsher environment. Consequently, "industrial" standards are not always followed during the planning stages of these environments, often resulting in the use of inadequate components and network failures.

Trident Seafoods’ 276-foot Kodiak Enterprise is just one of the company’s trawlers that uses Siemon Ruggedized MAX IP66/IP67-rated outlets.


Experience goes a long way in designing for these environments. For example, designers experienced with deploying networks in industrial and harsh environments will likely know that induction heating within about 10 feet of a component can require an E3 classification, while fluorescent lighting located a few feet away will have little impact and require only an E1 classification.

Another consideration when selecting ruggedized cable and connectivity is a breadth of copper and fiber types in a variety of performance levels. Most manufacturers of industrial/ruggedized components provide category 6 at best for copper, with many offering only category 5e. Furthermore, few offer the latest fiber cable and connectivity in ruggedized versions. This could very well be due to the fact that many industrial systems don't require the higher bandwidth associated with category 6A and fiber. However, as more LANs extend into harsher environments, designers are looking to maintain the same performance level as the rest of the corporate LAN. Selecting a manufacturer with ruggedized copper and fiber cable connectivity available in the same copper and fiber performance as the rest of the LAN will prevent connections in more demanding environments from having to compromise on bandwidth and performance.

Commercial designers with limited experience in planning for cable and connectivity that extends into harsh environments would be wise to work closely with cable and connectivity manufacturers who understand the standards and specifications, offer the latest copper and fiber ruggedized components and have experience in determining the type of cable and connectivity required based on a variety of environmental factors.


LA Dodgers Hit Home Run in Major League Upgrade

While the rest of us were busy watching football, hockey and basketball, something big was happening at Dodger Stadium. The 52-year-old home of the Los Angeles Dodgers and the third oldest park in Major League Baseball has spent the past two off-seasons undergoing more than $150 million of stadium upgrades.

Many of the enhancements were obvious to fans on opening day of the LA Dodgers' 2014 season in early April, including expanded entries, new seating and lounges, additional food services, new retail stores, memorabilia displays, improved access, beautified landscaping and a fenced walkway for navigating the stadium's entire circumference.



Plenty of new technology was also a key component of the upgrade and one that was carefully considered to enhance the experience of longtime fans. Not only will fans have access to state-of-the-art wireless throughout the stadium, but season and mini-plan ticket holders will enjoy the new Dodgers Pride Rewards program that allows them to use their smartphones to gain entrance at all gates, make purchases, order food and beverages, and check earned rewards. In turn, the program will allow the stadium to keep tabs on fan preferences and provide a more tailored game experience. Enhanced technology will also improve the performance of the players on the field by supporting data analytics and video replays from the past eight years of plays.

What won't be obvious to the fans and players are the high-performance copper and optical fiber cable systems installed throughout the stadium that make all of this exciting technology possible.

One of the first upgrades at Dodger Stadium that was a critical foundation for all other enhancements was the electrical service. "We had electrical systems in place that were 40 and 20 years old, and these systems typically have a life expectancy of 15 to 20 years," said Ralph Esquibel, the Dodgers' vice president of IT. "Those systems were at the end of their life and we couldn't do much of anything innovative like implement larger video boards or other technologies that needed more power."

During the first phase of the expansion during the 2013 off-season, the stadium deployed new equipment and substations in coordination with the Los Angeles Department of Water and Power. With an updated electrical infrastructure in place, the Dodgers' IT department was able to turn their focus to the network cabling infrastructure.

Forming a Dedicated Team

During the design stages of the network, Esquibel conducted some research and encountered expertise and products from Siemon (www.siemon.com). After reaching out to Siemon for more information, a new dedicated team was formed.

"When I started meeting the people at Siemon, I realized that they were smart people who knew what they were talking about. Valerie Maguire, Siemon's director of standards and technology, came and met with us several times and made quite an impression on me," said Esquibel. "Since the beginning, Siemon has been invested in the LA Dodgers and our success. While we may have a big name, we are actually a small company with only about 300 employees. That's why I look for true partnerships in my vendors and require that they act almost as extensions of my team. We looked at other companies, but it was Siemon that gave us that level of comfort."

Ed Havlik, Siemon's Sales Manager for the project, assembled a team of technical and product resources who were ready to lend expert support when needed. For example, when discussions arose related to the pros and cons of various wireless technologies, Havlik made sure that Esquibel and Maguire connected.

"It was clear from our first meeting that Ralph had a vision of providing guests at Dodger Stadium with the highest level of electronic technology," said Maguire. "Ralph wanted to hear our thoughts on the entire gamut of IT technologies—from which products would best support fans at outside kiosks and entry points, to ensuring congestion-free support of IEEE 802.11ac wireless and which media would be most mechanically and electrically suited to support Type 2 PoE. He even sought our recommendation on how to best save space in the data center."


Ralph Esquibel, the Dodgers’ vice president of IT.


Building a Reliable Backbone

Once the decision was made to deploy Siemon cabling systems for the upgrade, the project went out for bid. Due to the sheer size of the installation, several entities were involved in the installation, which included upgraded core switches in two main equipment rooms, new telecommunications rooms (TRs) and faster transmission speed via plenty of Siemon fiber and copper cabling.

"With all of the technology we wanted to add, we knew we had to introduce a 10 gigabit per second edge network and deploy new core switches, firewalls and redundancy," said Esquibel. "In 1963 when the Dodgers first started playing here, there were no telecommunications rooms. As part of the upgrade, we built out 18 proper rooms that tie back to the two updated cores via 24 strands of singlemode optical fiber."

According to Esquibel, the LA Dodgers also have a redundant data center at their spring training center in Arizona that is capable of running 90% of the stadium business in the event of downtime, which is something he doesn't anticipate needing. "We have the redundancy and reliability in place that I don't foresee our network ever going down completely and having to rely on backup from Arizona," he says.

In each of the main equipment rooms, switches and networking equipment are housed in Siemon's VersaPOD cabinets that feature Zero-U cable management and vertical patching capabilities to support high density while saving space and providing excellent thermal efficiency through improved airflow in and around the equipment. Within the VersaPOD cabinets, Siemon's intelligent power distribution units (PDUs) provide even more energy savings while reliably delivering power to the critical IT equipment. The PDUs can provide real-time power information and connect to environmental sensors for troubleshooting and optimizing energy efficiency.

Within the main equipment rooms and TRs, the singlemode fiber is terminated at Siemon's Quick-Pack adapter plates and housed in RIC3 enclosures that provide superior protection of the fiber and enhanced accessibility with front and rear door locks and a fully removable fiber management tray. Depending on size and configuration, a combination of Siemon's VersaPOD cabinets, V600 cabinets and four-post racks were deployed in the TRs to support active equipment and passive connectivity.



Meeting the Latest Expectations

From each of the telecommunications rooms, Siemon's Z-MAX category 6A shielded (F/UTP) end-to-end cabling system supports a wide range of devices throughout the stadium to bring technology to the fans and the field. The category 6A F/UTP cable is terminated on Z-MAX patch panels in each of the TRs and at Z-MAX 6A outlets for voice and data access in offices, wireless access points (WAPs) throughout the stadium, point of sale (POS) locations, box offices, kiosks, scanners and at several locations for eventually connecting digital displays, IPTVs and IP-based surveillance cameras.

While Dodger Stadium hasn't yet switched over video displays and cameras to IP-based systems using the Siemon cabling, Esquibel knew it made sense to install cabling to all of these locations while the walls were open. The stadium was, however, able to roll out digital scanning locations for the Dodger Pride Rewards program and the largest Wi-Fi network ever in Major League Baseball.

"Wireless is now like a utility. People show up at a venue to enjoy themselves and expect Wi-Fi access in the same way they expect to be able to turn on the water in a bathroom," says Esquibel. "We have successfully architected our Wi-Fi infrastructure to support 50 percent of our 56,000 fans at any given moment."

Unlike traditional overhead placement of WAPs, Dodger Stadium used a heterogeneous approach and placed the approximately 1,000 WAPs in unique locations that provide much better coverage. "If you only place the access points overhead, people in the prime front-row seating end up with poor service," explains Esquibel. "To provide the best coverage, we're using a combination of overhead and underseat placement. We even have some access points placed within hand rails."

Siemon’s VersaPOD cabinets provide space savings and thermal efficiency in the main equipment rooms.


Protecting the Investment

Siemon's Z-MAX 6A shielded solution that supports all the WAPs and other technology throughout the stadium represents the cutting edge of category 6A cabling. The Z-MAX 6A shielded system provides the highest margins on all TIA and ISO performance requirements for category 6A/class EA, including critical alien crosstalk parameters. Dodger Stadium also deployed Siemon's Ruggedized Z-MAX IP67-rated outlets in harsher areas such as concession stands and kiosks to protect against dust and water ingress.

"This has been a huge undertaking, and I don't know when we'll be able to make this type of investment again. We decided to go with shielded cabling to protect the harmonics of the cable and provide better performance for future technologies," says Esquibel. "For example, when next generation 802.11ac wireless devices hit the market we can easily swap out access points because we have the cabling in place to support the bandwidth and Power over Ethernet requirements."

Due to the stringent timeline and the need to finalize the cabling infrastructure in time for opening day of the 2014 season, superior logistics and just-in-time delivery were also vital to the project's success. Siemon partnered with Accu-Tech Los Angeles to ensure proper warehousing and staging of the cabling components in conjunction with the installation schedule.

"This was a huge project with many reels of cable. We had to work behind the scenes to make sure we could consistently stage and release product as it was needed. Our dedicated driver was familiar with the stadium and knew where and when to deliver the material," says John Ittu, branch manager for Accu-Tech Los Angeles. "It was a great experience and exciting for us to be a part of something like this and actually go into that stadium and see where all of the cabling we supplied was being installed."


The Dodgers IT team, including Esquibel (3rd from left), Hisayo....

Winning On and Off the Field

After their 92-win campaign and trip to the National League Championship Series in 2013, the LA Dodgers embarked on the 2014 season with their upgraded stadium and new high-performance cabling infrastructure that will satisfy the technology demands of the fans and hopefully improve their own performance on the field.

"We're not done implementing technology, but we'll get there because we have a great IT team that is dedicated to this baseball team and the organization. Throughout this technology upgrade, we've always asked ourselves how we are going to give our fans a better experience. The ability for our fans to interact with the team through the Dodger Pride Rewards program is the direction that any consumer-facing industry should be moving in," says Esquibel. "Not only will we be able to tailor their game experience with exclusive offers, but fans appreciate the convenience of being able to order food right from their smartphones and avoid lines."

According to Esquibel, the technology upgrade at Dodger Stadium will also deliver a better game and help better prepare the players. "We have eight years of video on every player and every play that has ever transpired here. If we have a matchup of a specific player and a pitcher, the players can view past videos and review how they performed in the past," says Esquibel. "With our new cabling infrastructure, the players will have the ability to quickly and easily access the videos they need—they won't accept lag time."

