Page 1: Data Centre – Managing a Reliability Quotient Bala Chandran.

Data Centre – Managing a Reliability Quotient

Bala Chandran

Page 2:

Expanding Depth & Breadth of a CIO

In 2009: Lowering the company's overall operating costs

In 2010: Driving innovative new market offerings

• 49% of CIOs have responsibilities outside of IT

• 41% of that 49% are part of, or head, a Strategy Committee

Internal Challenge External Challenge

Driven by economic challenges

Page 3:

Tough times – Hard work

Changed economic scenario

Do even more with less

Market expectation rising sharply

Cost expectation dropping sharply

Service levels getting tougher to meet

Benefit becomes a necessity & necessity becomes redundant - some churns turn into a storm!

Page 4:

• Consolidation of Data Centres into a few central locations

– Enhancing capacity across networking, redundancy, computing, storage and management

• Virtualisation: Background swapping of Data

• Availability: High reliability requires redundancy

– Secondary and tertiary data centers hundreds of miles away from primary

• Operational cost: Driving ‘Green’ initiatives

– Cooling and Powering #1 concern

• Traffic Growth: Driven by an increasingly connected world

– Servers growing 11% per year & Storage at a median rate of 22%

– Strain on data center capacities in environmental control, power, and space

– Struggle to balance between sprawling low-density racks and super-hot power-hungry high-density racks.
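The growth figures above compound quickly; a back-of-envelope sketch (the base quantities below are invented for illustration):

```python
# Compound growth of server and storage demand at the median annual
# rates quoted above (11% for servers, 22% for storage).

def project(base: float, annual_rate: float, years: int) -> float:
    """Compound `base` forward by `annual_rate` for `years` years."""
    return base * (1 + annual_rate) ** years

servers_now = 1000   # hypothetical server count
storage_now = 500    # hypothetical TB of storage

for years in (1, 3, 5):
    print(years,
          round(project(servers_now, 0.11, years)),
          round(project(storage_now, 0.22, years)))
```

At these rates, storage demand roughly doubles in about four years (1.22^4 ≈ 2.2), which is exactly the capacity strain on environmental control, power, and space described above.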

The Perfect Storm

And it's your infrastructure that needs to weather the storm

Page 5:

Key DC Infrastructure

Cabling & Racks

Power & Cooling

Other Structural

Active Equipment

Security & Fire

How do all of these stack up?

Page 6:

Services & Applications

Systems & Subsystems

Routing · Switching · Transport · Access

BSS/OSS

Services and Applications – key focus of a CIO

Infrastructure, Connectivity Solutions, Physical Media, Build-out – considered as de-focus for a CIO

Old World

Page 7:

Services & Applications

Systems & Subsystems

Routing · Switching · Transport · Access

BSS/OSS

Key focus of a CIO / CTO in the New World now includes Infrastructure, Connectivity Solutions, Physical Media, Build-out:

• Critical Infrastructure

• High reliance on vendors & SIs

• Morphs into the realm of FM, Admin & Proj. Mgt

• Can impact TCO

Page 8:

Ownership costs

• RoI analysis is key for business decisions

– Includes assessing & measuring total ownership costs for physical infrastructure

• Utilization of Data Centres typically varies from 10% to 90%

– Measure cost of ownership in terms of useful work performed, not per DC or per sq. ft

• Virtualisation and Consolidation to address power savings
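The utilization point can be made concrete with a hypothetical sketch (all figures invented): dividing annual cost by the work actually performed, rather than by square footage, shows how strongly cost per unit of useful work depends on utilization.

```python
# Cost of ownership per unit of useful work, not per DC or per sq. ft.
# All figures below are hypothetical.

def cost_per_useful_work(total_annual_cost: float,
                         capacity_units: float,
                         utilization: float) -> float:
    """Annual cost divided by the work actually performed,
    where utilization is the fraction of capacity in real use."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return total_annual_cost / (capacity_units * utilization)

# The same $1M/year, 100-unit facility at the two ends of the
# 10%-90% utilization range quoted above:
low_util  = cost_per_useful_work(1_000_000, 100, 0.10)  # ~$100k per unit
high_util = cost_per_useful_work(1_000_000, 100, 0.90)  # ~$11.1k per unit
```

Identical hardware, a nine-fold difference in what each unit of useful work costs.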

The Eco factor

Page 9:

Source: VMWare

Server Virtualization – The Ecological Factor

How do running costs stack up??

Page 10:

Running Costs

So what’s the connection??

Page 11:

The Connection

Cabling & Racks

Power & Cooling

Other Structural

Active Equipment

Security & Fire

But first the Cable

Page 12:

Copper vs Fibre

Copper – Advantages

• Better understood by the industry
• Fewer connector combinations (e.g. RJ45 only)
• Can be 'punched down'
• More copper interfaces in the market
• Interfaces are cheaper; no electrical-to-optical conversion point
• Data centre managers report that copper NICs are coming for free

Fibre – Advantages (it depends)

• Immune to crosstalk
• Smaller cable, so more dense (12-fibre trx = 1 copper cable)
• Can transmit further distances
• OS1 can offer 'unlimited' bandwidth for future-proofing
• Interfaces at 10G are available in volume
• Industry is settling on LC as a connector
• Consumes less energy; cheaper cooling and power required per port

Copper – Disadvantages

• Not immune to crosstalk
• Large cable for one transmit/receive pair
• Limited transmission distances
• Greater than 10G may require a new connector type
• Volatility in copper commodity pricing
• Low green credentials due to mining activity and shipping costs

Fibre – Disadvantages

• Many combinations (e.g. MM vs SM, LC vs ST)
• Cannot be 'punched down'
• Fewer interfaces in the market than copper
• Interfaces for conversion are expensive

Copper or Fibre for Data Centre?

Should Cabling be a concern for a Network Engineer??

Page 13:

Network Engineer’s Concerns within the Data Centre

• Scalable

– Density of equipment, cabinets, frames

– Fast and Accurate Moves, Adds, and Changes

• Thermal

– Problematic in most data centers – more acute with Blade Servers

– Poor Air Flow a Problem in many Data Centers

• Reliability & Manageability

– Can’t afford any downtime even during expansion

How can you positively impact each area of concern?

Page 14:

Scalability & Space

• Environmentally controlled real-estate is expensive

– Maximising Space resources is the most critical aspect of data center design.

• Provide adequate empty floor space when designing data center

– Enables flexibility of reallocating space to a particular function, and adding new racks and equipment as needed.

• Ample overhead and under-floor cable pathways and troughs

– Necessary for future growth and manageability.

– Expanding the physical space of a data center can cost more than the original data center build itself

Page 15:

Scalability

• Managed Density

– Choose Fiber and Copper solutions that have the highest manageable density in the industry.

• Fast and Accurate MAC work

– Install products that are designed to be easily installed and managed

• Four fundamentals of Fibre management

– Bend radius protection

– Cable and connector access

– Intuitive cable routing paths

– Physical protection

Page 16:

Thermal Issues

• Proper data centre design: deployment of hot aisle/cold aisle cooling

– Good Airflow/Proper Cooling = Optimal Performance of Servers and Switches

Page 17:

Thermal issues

But what happens if poor cable management blocks airflow?

Ridiculous, isn't it?

Page 19:

Thermal

• Proper Cable Management promotes good air flow

– Overhead Fiber cabling through FiberGuide eliminates “Air Dams” below the raised floor

– Glide Cable Management & Angular Panels organize cables so they don't restrict airflow to switches and servers

Page 20:

Reliability

• Design for redundant, fail-safe reliability and availability

– Downtime can cost anywhere from $50K to over $6 million per hour

• Reliability is also defined by the performance of the infrastructure

– Must consistently support the flow of data without errors that cause retransmission and delays.
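Why redundancy buys availability can be sketched with the standard independent-failure model (an assumption of this sketch, not a formula from the slides): a redundant pair is down only when both halves are down at once.

```python
# Availability of n redundant components, assuming independent failures:
# the system is down only when every component is down simultaneously.

def parallel_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

single = parallel_availability(0.99, 1)          # 0.99
dual   = parallel_availability(0.99, 2)          # 0.9999

HOURS_PER_YEAR = 8760
downtime_single = (1 - single) * HOURS_PER_YEAR  # ~87.6 h/year
downtime_dual   = (1 - dual) * HOURS_PER_YEAR    # ~0.88 h/year
```

At the downtime costs quoted above ($50K to over $6M per hour), the gap between roughly 88 hours and under one hour of expected downtime per year dwarfs the cost of the second component.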

Transmission delay and data integrity

Page 21:

Cabling & Data

Cabling Infra

Data

Page 22:

Throughput degradation

• One marginal bit can result in:

– retransmission of a packet

– retransmission of a packet train, AFTER a half-second timeout!

• Impact to business:

– network sluggishness

– network congestion

"A 1 percent drop in Ethernet packets correlates to an 80 percent drop in bandwidth." -- Robert Metcalfe
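A rough way to quantify why small loss rates crater throughput is the Mathis TCP throughput bound, BW ≤ (MSS/RTT) · 1.22/√p. This model is not cited on the slide, and the MSS/RTT values below are illustrative:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Upper bound on steady-state TCP throughput (bits/s) under the
    Mathis model: BW <= (MSS/RTT) * 1.22 / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss)

MSS = 1460      # bytes, typical for Ethernet
RTT = 0.002     # 2 ms round trip, illustrative

clean = mathis_throughput_bps(MSS, RTT, 0.0001)  # 0.01% packet loss
lossy = mathis_throughput_bps(MSS, RTT, 0.01)    # 1% packet loss
# Raising loss 100x (0.01% -> 1%) cuts the bound 10x (sqrt scaling),
# before counting the half-second timeouts described above.
```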

Page 23:

Reliability/Uptime

• Choose a cabling system that addresses causes of network downtime

– 70% of downtime is caused by the physical layer

– Poor cooling affects network equipment reliability

• The cabling system must be designed for data centres, taking into account all the issues that face network designers, engineers, and technicians

Application

Presentation

Session

Transport

Network

Data Link

Physical Layer

70% of networking problems are caused by the cabling infrastructure, wrongly specified and/or installed

Page 24:

Manageability

• Designed as a flexible utility to accommodate disaster recovery, upgrades and modifications

– Unified cable management

– Keeps cabling and connections properly stored and organized

– Easy to locate and access, and simple to reconfigure.

• Lower operating costs

– Reduce time taken for modifications, upgrades and maintenance

• Reduced risk of downtime

– Ability to isolate network segments for troubleshooting

Page 25:

The Challenge

Chart: Ports per ft²/m² (low → high) plotted against Manageability (good → poor) – density rising while manageability falls is Unmanaged Density.

Page 26:

Unmanaged Density…?

“Simply increasing the density of connectivity can create new challenges.”

• Impaired access, obscured ports.

• Unacceptable stress and pressure on cables and connectors when gaining access

• System Downtime due to inadvertent patch cord removal (or knocked out)

Increasing density may harm network integrity and performance if not managed

Page 27:

Objective – Managed Density

Chart: Ports per ft²/m² (low → high) plotted against Manageability (good → poor) – high density delivered with good manageability is Managed Density.

Deliver Density in a Managed Way

Page 28:

Managed Density in a Nutshell…

Managed Density focuses on:

1. Performance

2. Reliability

3. Accessibility

4. Flexibility

Managed Density = high density without jeopardising any of the above

Page 29:

Design – Product – Practice

• Best possible accessibility and serviceability – without disturbing floor tiles for adds or changes

• Easy to add or move raceway exits

• Dramatically reduce congestion beneath the raised floor improving airflow

• Implement Physical Layer Management solutions

• Racks designed for slack cable storage (Horizontal & Vertical)

Page 30:

Cabling Reliability Quotient

• Future scenario demands a higher reliability on the cabling highway

– Virtualisation & Consolidation demand more bandwidth in cabling

– Transmission performance is a factor of product and design

– Fibre and manageability of the physical layer are key criteria

– Reduce data centre complexity

What should be the cost containment strategy?

Page 31:

Cost containment strategy

Courtesy: Symantec State of the Data Center

• Don't be infatuated with upfront costs

• A cabling plant is meant to last decades

• Evaluate it based on life-cycle costs

• Extract full life-cycle value rather than premature replacement
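The evaluation the bullets call for can be sketched as a simple, undiscounted life-cycle comparison; every figure below is invented for illustration:

```python
# Life-cycle cost = upfront cost + operating cost over the service life.
# A cabling plant is meant to last decades, so opex dominates.

def life_cycle_cost(upfront: float, annual_opex: float, years: int) -> float:
    return upfront + annual_opex * years

SERVICE_LIFE = 20  # years, hypothetical

cheap_plant   = life_cycle_cost(upfront=100_000, annual_opex=30_000,
                                years=SERVICE_LIFE)   # 700,000
premium_plant = life_cycle_cost(upfront=180_000, annual_opex=20_000,
                                years=SERVICE_LIFE)   # 580,000
# The plant that looked 80% more expensive upfront is the cheaper one
# over its full service life.
```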

There is more at risk than before if cabling reliability is not established and guaranteed at the outset

Page 32:

Any Thoughts??

