Data Centre – Managing a Reliability Quotient
Bala Chandran
Expanding Depth & Breadth of a CIO
In 2009: Lowering the company’s overall operating costs
In 2010: Driving innovative new market offerings
• 49% of CIOs have responsibilities outside of IT
• 41% of this 49% are part of, or head, a Strategy Committee
Internal Challenge vs External Challenge
• Driven by economic challenges – tough times, hard work
• Changed economic scenario – do even more with less
• Market expectations rising sharply; cost expectations dropping sharply
• Service levels getting tougher to meet
• Benefit becomes a necessity and necessity becomes redundant – some churns turn into a storm!
• Consolidation of data centres into a few central locations – enhancing capacity across networking, redundancy, computing, storage and management
• Virtualisation: Background swapping of Data
• Availability: High reliability requires redundancy
– Secondary and tertiary data centers hundreds of miles away from primary
• Operational cost: Driving ‘Green’ initiatives
– Cooling and Powering #1 concern
• Traffic Growth: Driven by an increasingly connected world
– Servers growing 11% per year & Storage at a median rate of 22%
– Strain on data center capacities in environmental control, power, and space
– Struggle to balance between sprawling low-density racks and super-hot power-hungry high-density racks.
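The growth rates above compound quickly. A quick sketch (using the quoted figures of 11% per year for servers and 22% for storage; the starting quantities are hypothetical) shows how the strain builds:

```python
# Compound growth at the quoted annual rates; starting figures are hypothetical.
def project(base, annual_rate, years):
    """Project a quantity forward at a fixed annual growth rate."""
    return base * (1 + annual_rate) ** years

servers = project(100, 0.11, 5)  # 100 servers today, growing 11%/yr
storage = project(100, 0.22, 5)  # 100 TB today, growing 22%/yr
print(round(servers, 1), round(storage, 1))  # 168.5 270.3
```

Storage nearly triples in five years while the server estate grows by two-thirds, which is why environmental control, power and space come under strain together.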
The Perfect Storm
And it’s your infrastructure that needs to weather the storm
Key DC Infrastructure
• Cabling & Racks
• Power & Cooling
• Active Equipment
• Security & Fire
• Other structural elements
How do all of these stack up?
Old World
• Key focus of a CIO: Services & Applications; Systems & Subsystems (Routing, Switching, Transport, Access, BSS/OSS)
• Considered as de-focus for a CIO: Infrastructure – Connectivity Solutions, Physical Media, Build-out
New World
• Key focus of a CIO / CTO: Services & Applications; Systems & Subsystems (Routing, Switching, Transport, Access, BSS/OSS)
• Infrastructure – Connectivity Solutions, Physical Media, Build-out – now part of the key focus
• Critical infrastructure
• High reliance on vendors & SIs
• Morphs into the realm of FM, Admin & Project Management
• Can impact TCO
Ownership costs
• RoI analysis is key for business decisions
– Includes assessing & measuring total ownership costs for physical infrastructure
• Utilisation of data centres typically varies from 10% to 90%
– Measure cost of ownership in terms of useful work performed, not per DC or per sq. ft
• Virtualisation and Consolidation to address power savings
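The point about useful work can be made concrete. A minimal sketch (all figures hypothetical) of cost of ownership per unit of useful work at the two ends of the quoted 10%–90% utilisation range:

```python
# Illustrative only: the same facility costs nine times more per unit of
# useful work at 10% utilisation than at 90%. All figures are hypothetical.
def cost_per_useful_work(annual_cost, capacity_units, utilisation):
    """Annual cost divided by the work actually performed."""
    return annual_cost / (capacity_units * utilisation)

low  = cost_per_useful_work(1_000_000, 1000, 0.10)  # 10% utilised
high = cost_per_useful_work(1_000_000, 1000, 0.90)  # 90% utilised
print(low, round(high, 2))  # 10000.0 1111.11
```

Measured per square foot, both scenarios look identical; measured per unit of useful work, the under-utilised facility is nine times more expensive.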
The Eco Factor
Server Virtualization – The Ecological Factor
Source: VMware
How do running costs stack up?
Running Costs
So what’s the connection?
The Connection
• Cabling & Racks
• Power & Cooling
• Active Equipment
• Security & Fire
• Other structural elements
But first, the Cable: Copper vs Fibre

Copper – Advantages
• Better understood by the industry
• Fewer combinations (e.g. RJ45 only)
• Can be ‘punched down’
• More copper interfaces in the market
• Interfaces are cheaper – no electrical-to-optical conversion point
• Data centre managers report that copper NICs are coming for free

Copper – Disadvantages
• Not immune to crosstalk
• Large cable for one transmit/receive
• Limited transmission distances
• Greater than 10G may require a new connector type
• Volatility in copper commodity pricing
• Low green credentials due to mining activity and shipping costs

Fibre – Advantages (it depends)
• Immune to crosstalk
• Smaller cable, so more dense (12 fibre trx = 1 copper cable)
• Can transmit further distances
• OS1 can offer ‘unlimited’ bandwidth for future-proofing
• Interfaces at 10G are available in volume
• Industry is settling on LC as a connector
• Consumes less energy; cheaper cooling and power required per port

Fibre – Disadvantages
• Many combinations (e.g. MM vs SM, LC vs ST)
• Cannot be ‘punched down’
• More copper interfaces in the market
• Interfaces for conversion are expensive
Copper or Fibre for Data Centre?
Should cabling be a concern for a network engineer?
Network Engineer’s Concerns within the Data Centre
• Scalable
– Density of equipment, cabinets, frames
– Fast and Accurate Moves, Adds, and Changes
• Thermal
– Problematic in most data centers – more acute with Blade Servers
– Poor Air Flow a Problem in many Data Centers
• Reliability & Manageability
– Can’t afford any downtime even during expansion
How can you positively impact each area of concern?
Scalability & Space
• Environmentally controlled real-estate is expensive
– Maximising Space resources is the most critical aspect of data center design.
• Provide adequate empty floor space when designing data center
– Enables flexibility of reallocating space to a particular function, and adding new racks and equipment as needed.
• Ample overhead and under-floor cable pathways and troughs
– Necessary for future growth and manageability
– Expanding the physical space of a data centre can cost more than the original data centre build itself
Scalability
• Managed Density
– Choose Fiber and Copper solutions that have the highest manageable density in the industry.
• Fast and Accurate MAC work
– Install products that are designed to be easily installed and managed
• Four fundamentals of Fibre management
– Bend radius protection
– Cable and connector access
– Intuitive cable routing paths
– Physical protection
Thermal Issues
• Proper data centre design: deployment of hot-aisle/cold-aisle cooling
– Good Airflow/Proper Cooling = Optimal Performance of Servers and Switches
Thermal issues
But what happens if poor cable management blocks airflow?
Ridiculous, isn’t it?
Thermal issues
Cables blocking air inlets and exits will raise the temperature of switches and servers, lowering their reliability!
Thermal
• Proper Cable Management promotes good air flow
– Overhead Fiber cabling through FiberGuide eliminates “Air Dams” below the raised floor
– Glide Cable Management & Angular Panels organizes cables so they don’t restrict airflow to switches and servers
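The airflow point can be quantified with the standard volumetric heat capacity of air (about 1200 J per m³ per kelvin); the server power draw and airflow figures below are hypothetical:

```python
# Back-of-envelope sketch: cutting cooling airflow in half doubles the
# temperature rise across the equipment. Figures are hypothetical.
RHO_CP = 1200.0  # volumetric heat capacity of air, J/(m^3*K), approximate

def delta_t(power_w, airflow_m3s):
    """Temperature rise of cooling air passing through equipment."""
    return power_w / (RHO_CP * airflow_m3s)

clear   = delta_t(500, 0.05)   # 500 W server, unobstructed inlet
blocked = delta_t(500, 0.025)  # cables blocking half the airflow
print(round(clear, 2), round(blocked, 2))  # 8.33 16.67
```

A cable dam that halves airflow pushes a comfortable 8 °C rise to nearly 17 °C, exactly the reliability hit the previous slide warns about.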
Reliability
• Design for redundant, fail-safe reliability and availability
– Downtime can cost anywhere from $50K to over $6 million per hour
• Reliability is also defined by the performance of the infrastructure
– Must consistently support the flow of data without errors that cause retransmission and delays.
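Why redundancy buys reliability can be shown with the standard parallel-availability formula; the 0.99 component availability used here is a hypothetical figure:

```python
# Standard parallel-availability formula for n independent redundant paths.
# The component availability of 0.99 is hypothetical.
def parallel_availability(a, n):
    """Availability of n redundant components, each with availability a."""
    return 1 - (1 - a) ** n

single = parallel_availability(0.99, 1)  # ~87.6 hours of downtime per year
dual   = parallel_availability(0.99, 2)  # ~53 minutes of downtime per year
print(single, dual)
```

At the quoted $50K–$6M per hour, the difference between one path and two is the difference between days of exposure and minutes.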
Transmission delay and data integrity
Cabling & Data: Throughput Degradation
• One marginal bit can result in:
– Retransmission of a packet
– Retransmission of a packet train after a 1/2-second timeout!
• Impact to business:
– Network sluggishness
– Network congestion
“A 1 percent drop in Ethernet packets correlates to an 80 percent drop in bandwidth.” – Robert Metcalfe
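A rough feel for why one marginal bit matters: a stop-and-wait sketch (my own simplification, using the half-second timeout quoted above; the link speed and packet size are hypothetical):

```python
# Illustrative stop-and-wait model (an assumption, not from the slides):
# each lost packet stalls the sender for the half-second retransmission
# timeout before the packet is resent.
def effective_throughput(link_mbps, pkt_bits, loss_rate, timeout_s=0.5):
    send_time = pkt_bits / (link_mbps * 1e6)      # seconds to send one packet
    avg_time = send_time + loss_rate * timeout_s  # expected time incl. stalls
    return (pkt_bits / avg_time) / 1e6            # effective Mb/s

print(effective_throughput(100, 12000, 0.0))   # no loss: full 100 Mb/s
print(effective_throughput(100, 12000, 0.01))  # 1% loss: ~2.3 Mb/s
```

Under this simplified model a 1% loss rate costs far more than 1% of bandwidth, which is the spirit of the Metcalfe quote above.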
Reliability/Uptime
• Choose a cabling system that addresses causes of network downtime
– 70% of downtime is caused by physical layer
– Poor cooling affects network equipment reliability
• Must be designed for Data Centers taking into account all the issues that face the network designers, engineers, and technicians
(OSI stack: Application, Presentation, Session, Transport, Network, Data Link, Physical Layer)
70% of networking problems are caused by the cabling infrastructure being wrongly specified and/or installed
Manageability
• Designed as a flexible utility to accommodate disaster recovery, upgrades and modifications
– Unified cable management
– Keeps cabling and connections properly stored and organized
– Easy to locate and access, and simple to reconfigure.
• Lower operating costs
– Reduce time taken for modifications, upgrades and maintenance
• Reduced risk of downtime
– Ability to isolate network segments for troubleshooting
The Challenge
(Chart: density in ports per ft²/m², low to high, plotted against manageability, good to poor – Unmanaged Density)
Unmanaged Density…?
“Simply increasing the density of connectivity can create new challenges.”
• Impaired access, obscured ports
• Unacceptable stress and pressure on cables and connectors when gaining access
• System downtime due to inadvertent patch cord removal (or knock-out)
Increasing density may harm network integrity and performance if not managed
Objective – Managed Density
(Chart: density in ports per ft²/m², low to high, plotted against manageability, good to poor – Managed Density)
Deliver density in a managed way
Managed Density in a Nutshell…
Managed Density focuses on:
1. Performance
2. Reliability
3. Accessibility
4. Flexibility
Managed Density = high density without jeopardising any of the above
Design – Product – Practice
• Best possible accessibility and serviceability – without disturbing floor tiles for adds or changes
• Easy to add or move raceway exits
• Dramatically reduce congestion beneath the raised floor, improving airflow
• Implement Physical Layer Management solutions
• Racks designed for slack cable storage (Horizontal & Vertical)
Cabling Reliability Quotient
• Future scenarios demand higher reliability from the cabling highway
– Virtualisation and consolidation demand more bandwidth in cabling
– Transmission performance is a factor of product and design
– Fibre and manageability of the physical layer are key criteria
– Reduce data centre complexity
What should be the cost containment strategy?
Cost containment strategy
Courtesy: Symantec State of the Data Center
• Don’t be infatuated with upfront costs
• A cabling plant is meant to last decades
• Evaluate it based on life-cycle costs
• Extract full life-cycle value rather than premature replacement
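The life-cycle argument can be sketched numerically (all figures hypothetical): a plant that is cheap upfront but needs premature replacement loses to one evaluated over decades:

```python
# Life-cycle cost comparison over a 20-year horizon. All figures hypothetical.
def lifecycle_cost(upfront, annual_opex, years, replacements=0):
    """Total cost of ownership: install cost (plus any full replacements)
    plus operating expense over the evaluation horizon."""
    return upfront * (1 + replacements) + annual_opex * years

cheap   = lifecycle_cost(100_000, 20_000, 20, replacements=1)  # replaced once
premium = lifecycle_cost(150_000, 12_000, 20)                  # lasts decades
print(cheap, premium)  # 600000 390000
```

The plant with the higher upfront price is cheaper over its life, which is the evaluation the slide argues for.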
There is more at risk than before if cabling reliability is not established and guaranteed at the beginning
Any thoughts?