- Pravin Agarwal, Sr. Consultant (MS Technologies & Virtualization)
+91 9324 338551 | http://www.linkedin.com/in/agarwalpravin
Disasters Happen
Defined by the Telecommunications Infrastructure Standard for Data Centers (TIA-942), which classifies data centers into tiers; each tier offers a higher degree of sophistication and reliability.
Basic (Tier I): 99.671% availability
• Annual downtime of 28.8 hours
• Susceptible to disruptions from both planned and unplanned activity
• Single path for power and cooling distribution, no redundant components (N)
• May or may not have a raised floor, UPS or generator
• 3 months to implement
Redundant Components (Tier II): 99.741% availability
• Annual downtime of 22.0 hours
• Less susceptible to disruption from both planned and unplanned activity
• Single path for power and cooling distribution, includes redundant components (N+1)
• Includes raised floor, UPS and generator
• 3 to 6 months to implement
Concurrently Maintainable (Tier III): 99.982% availability
• Annual downtime of 1.6 hours
• Enables planned activity without disrupting computer hardware operation, but unplanned events will still cause disruption
• Multiple power and cooling distribution paths, but with only one path active; includes redundant components (N+1)
• Includes raised floor, UPS and generator
• 15 to 20 months to implement
Fault Tolerant (Tier IV): 99.995% availability
• Annual downtime of 0.4 hours
• Planned activity does not disrupt the critical load, and the data center can sustain at least one worst-case unplanned event with no critical-load impact
• Multiple active power and cooling distribution paths with redundant components
• 15 to 20 months to implement
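The availability percentages and downtime hours above are tied together by simple hours-per-year arithmetic. A minimal sketch (purely illustrative) that reproduces the quoted figures to within rounding; note that Tier II's commonly quoted 22.0 hours is slightly lower than the raw arithmetic gives:

```python
# Annual downtime (hours) = (1 - availability) * 8760 hours/year.
TIERS = {
    "Tier I (Basic)": 0.99671,
    "Tier II (Redundant Components)": 0.99741,
    "Tier III (Concurrently Maintainable)": 0.99982,
    "Tier IV (Fault Tolerant)": 0.99995,
}

for name, availability in TIERS.items():
    downtime_hours = (1 - availability) * 8760
    print(f"{name}: {downtime_hours:.1f} h/yr")
# -> 28.8, 22.7, 1.6 and 0.4 hours respectively
```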
Start-up Expenses

Item                        Est. $/ft²   Est. $ for 30,000 ft²   INR (@ Rs. 50/USD)
Land                              400        $12,000,000         Rs. 60,00,00,000
Raised Floor                      220         $6,600,000         Rs. 33,00,00,000
Design Engineering                 50         $1,500,000         Rs. 7,50,00,000
Power Distribution                 41         $1,230,000         Rs. 6,15,00,000
UPS                                28           $840,000         Rs. 4,20,00,000
Generator & Bus                    55         $1,650,000         Rs. 8,25,00,000
Fire Suppression                   20           $600,000         Rs. 3,00,00,000
Security Systems                    3            $90,000         Rs. 45,00,000
Environmental Monitoring            5           $150,000         Rs. 75,00,000
Other Construction                 45         $1,350,000         Rs. 6,75,00,000
Network Termination Equip       1,000        $30,000,000         Rs. 1,50,00,00,000
Network Install                   600        $18,000,000         Rs. 90,00,00,000
Insurance                           5           $150,000         Rs. 75,00,000
Reserved                           10           $300,000         Rs. 1,50,00,000
Estimated TOTAL                              $74,460,000         Rs. 3,72,30,00,000
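A minimal sketch of how the table's total falls out: cost per ft² times the 30,000 ft² facility, converted at the Rs. 50/USD rate the INR column implies.

```python
COST_PER_SQFT = {              # USD per ft², from the table above
    "Land": 400, "Raised Floor": 220, "Design Engineering": 50,
    "Power Distribution": 41, "UPS": 28, "Generator & Bus": 55,
    "Fire Suppression": 20, "Security Systems": 3,
    "Environmental Monitoring": 5, "Other Construction": 45,
    "Network Termination Equip": 1000, "Network Install": 600,
    "Insurance": 5, "Reserved": 10,
}
AREA_SQFT = 30_000
USD_TO_INR = 50                # implied by the table's INR column

total_usd = sum(COST_PER_SQFT.values()) * AREA_SQFT
print(f"Total: ${total_usd:,}  (Rs. {total_usd * USD_TO_INR:,})")
# -> Total: $74,460,000  (Rs. 3,723,000,000)
```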
30,000 sq. ft. facility required
Capital costs range from $12 million to $36 million (average: $22 million)
Operating costs range from $1 million to $4 million per year (average: $3.5 million)
Rural or urban location possible, but travel time is important
Data Centre Requirements: Tier 3, 30,000 sq. ft.
• 10,000 sq. ft. for servers and racks
• 5,000 sq. ft. for future growth
• 15,000 sq. ft. for support
A data center, with individual representations of each of the physical components: servers, racks, ACUs, PDUs, chilled water pipes, power and data cables, floor grilles…
Entrance Room ◦ Analogy: “Entrance Facility”
Main Distribution Area (MDA) ◦ Analogy: “Equipment Room”
Horizontal Distribution Area (HDA) ◦ Analogy: “Telecom Room”
Zone Distribution Area (ZDA) ◦ Analogy: “Consolidation Point”
Equipment Distribution Area (EDA) ◦ Analogy: “Work Area”
Requirements & guidelines for the design & installation of a data center or computer room
Intended for use by designers needing a comprehensive understanding of data center design
A comprehensive document covering:
• Cabling
• Network design
• Location
• Access
• Architectural design
• Environmental design
• Electrical design
• Fire protection
• Water intrusion
• Redundancy
Spaces:
1. Computer room
2. Telecommunications room
3. Entrance room
4. Main distribution area
5. Horizontal distribution area
6. Zone distribution area
7. Equipment distribution area
Cabling subsystems:
8. Backbone cabling
9. Horizontal cabling
Power
Racks & Physical Structure
Cooling
Security & Fire Suppression
Structured Cabling
CPI Management – IP Based
CPI Management – Building Management Systems
UPS & Batteries
PDU
Surge Protection
Switch Gear
Branch Circuits
Dist Panels
Transformers
Generators
CRAC
Chillers
Cooling Towers
Condensers
Ductwork
Pump Packages
Piping
ADU
Server Racks
Telco Racks
Raised Floor
Dropped Ceiling
Air Dams
Aisle Partitions
Power
Data
Conduit Trays
Overhead
Sub-floor
In-rack
Room Security
Rack Security
EPO
Halon
FM-200
INERGEN®
Novec™
Not widely adopted, but can utilize SNMP traps
Traditional facilities management – analog DCI
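Where IP-based CPI management is exposed over SNMP, the same data can be polled from a script. A minimal sketch, assuming the pysnmp library, a hypothetical device at 192.0.2.10, and the standard sysUpTime OID as a stand-in for whatever vendor MIB the equipment actually exposes:

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Poll one OID from a hypothetical SNMP-capable PDU/CRAC controller.
errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),           # SNMPv2c
        UdpTransportTarget(('192.0.2.10', 161)),      # hypothetical host
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.3.0')),  # sysUpTime
    )
)

if errorIndication:
    print(errorIndication)
else:
    for oid, value in varBinds:
        print(f'{oid} = {value}')
```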
Room → Zone → Row → Rack
Cooling
CRAC
Chillers
Cooling Towers
Condensers
Ductwork
Pump Packages
Piping
ADU
Power
UPS & Batteries
PDU
Surge Protection
Switch Gear
Branch Circuits
Dist Panels
Transformers
Generators
Racks & Physical Structure
Server Racks
Telco Racks
Raised Floor
Dropped Ceiling
Air Dams
Aisle Partitions
Cooling
CRAC and CRAH
• Operational efficiencies can range from 40-90%
• Standard speed and variable speed drives available
• Almost always over-provisioned
• Measured in tonnage
• Single largest power OpEx in the Data Center
• Not traditionally managed via the network
Air Distribution
• Aid CRAC in getting air to targeted areas
• Typically just extra fans, no cooling capacity
• Inexpensive, flexible and no “forklift” required
• Can be within racks, rear door or end of row
• Can buy a customer extra time to plan a move
• Typically not managed via the network but can be
[Chart: typical data center power breakdown – ICT infrastructure 37%, cooling 50%, conversion loss 10%, lighting 3%.]
Each watt consumed by IT infrastructure carries a “burden factor” of 1.2 to 2.5 for power consumption associated with cooling, conversion/distribution and lighting
Sources: EYP Mission Critical Facilities, Cisco IT, Network World, Customer Interviews, APC
The fewer the power supplies supporting a service, the fewer the conversion losses.
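A minimal sketch of that burden, reading the factor as overhead watts per IT watt (which is consistent with the 37% ICT share in the breakdown above):

```python
def facility_load_kw(it_load_kw: float, burden_factor: float) -> float:
    """Total facility draw: IT load plus cooling/conversion/lighting."""
    return it_load_kw * (1 + burden_factor)

# A 100 kW IT load at both ends of the quoted 1.2-2.5 range:
for bf in (1.2, 2.5):
    total = facility_load_kw(100, bf)
    print(f"burden {bf}: 100 kW IT -> {total:.0f} kW total "
          f"(IT share {100 / total:.0%})")
```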
[Chart: power consumption by IT equipment type – servers 50%, storage 35%, network 15%.]
Cooling – Rack/Row: in-row cooling, densities up to 30 kW
• 1/3 rack footprint
• Chilled water
• 18 kW nominal capacity, 30 kW with containment
• Hot-swappable fans
• Dual power feeds
• kW metering
• Network manageable
• Inexpensive way to meet high-density requirements
[Diagram: front view of an IT rack with in-row cooling; HP BladeSystem example]
4 x blade chassis in one rack equates to ~15 kW
                                  Legacy Server    High-Density Server
Power per rack                    2-3 kW           > 20 kW
Power per floor space             30-40 W/ft²      700-800 W/ft²
Cooling needs (chilled airflow)   200-300 cfm      3,000 cfm

Source: Gartner 2006
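The kW and cfm columns are linked by the standard sensible-heat rule of thumb, BTU/hr = 1.08 × CFM × ΔT(°F). A minimal sketch; the table's figures imply a supply/return ΔT somewhere in the 20-40 °F range:

```python
def required_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    """Chilled airflow needed to absorb a given heat load."""
    btu_per_hr = load_kw * 3412          # 1 kW = 3412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

print(f"2.5 kW legacy rack:      {required_cfm(2.5):,.0f} cfm")
print(f"20 kW high-density rack: {required_cfm(20):,.0f} cfm")
```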
Legacy DC designed to accommodate 2-3 kW per rack: 20,000 ft², 800 kW, 100-200 racks; annual operating expense = $800k.
Same facility with 1/3 high-density infrastructure (+33%): annual operating expense = $4.6M* (*peripheral DC costs considered).
Introducing 1/3 high-density infrastructure into a legacy facility is cost prohibitive.
Cooling – top 10 steps:
1. Conduct a cooling checkup/survey.
2. Route data cabling in the hot aisles and power cabling in the cold aisles.
3. Control air path leaks and manage cabling system pathways.
4. Remove obstructions below the raised floor and seal cutouts.
5. Separate blade server cabinets.
6. Implement ASHRAE TC9.9 hot aisle/cold aisle design.
7. Place CRAC units at the ends of the hot aisles.
8. Manage floor vents.
9. Install air flow assisting devices as needed.
10. In extreme cases, consider self-contained cooling units.
Power – top 10 steps:
1. See the Cooling top 10 steps!
2. Standardize on a rack SOE.
3. Implement scalable UPS systems.
4. Increase voltage.
5. Target higher UPS loading.
6. Investigate DC power.
7. Load balance.
8. Limit branch circuit proliferation.
9. Monitor power.
10. Manage and target power based on a monitoring benchmark.
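Steps 5 and 9 reinforce each other: monitoring shows where UPS modules run lightly loaded, and lightly loaded double-conversion UPSes waste the most energy. A minimal sketch with an illustrative efficiency curve (the values are assumptions, not any vendor's data sheet):

```python
# Illustrative (assumed) double-conversion UPS efficiency by load factor.
ASSUMED_EFFICIENCY = {0.2: 0.86, 0.4: 0.92, 0.6: 0.94, 0.8: 0.95}

def annual_ups_loss_kwh(it_load_kw: float, load_factor: float) -> float:
    """kWh lost per year in the UPS at a given load factor."""
    eff = ASSUMED_EFFICIENCY[load_factor]
    return (it_load_kw / eff - it_load_kw) * 8760   # hours per year

# The same 100 kW IT load served at 20% vs 80% of UPS capacity:
for lf in (0.2, 0.8):
    print(f"load factor {lf:.0%}: "
          f"{annual_ups_loss_kwh(100, lf):,.0f} kWh/yr lost")
```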
Technology
I/O Subsystem
CPU
Operating System
Networking
Storage
Management
Applications
Cost savings/cost avoidance – a natural by-product of virtualization
Increase server utilization – optimize existing investments, including legacy systems
Reclaim data center floor space
Increase agility – faster and easier response to incidents, problems, and new mandates for energy efficiency and compliance
Higher availability
Do more with less
Process Recommendations
◦ Virtualize to cut Data Center energy consumption by 50 to 70%
◦ Find the Hidden Data Center to extend the life of your facilities (detailed scenario)
◦ Plan for Advanced Power Management
Virtualization: Reduce & Refresh
Decommission, consolidate, virtualize, replatform, or leave alone
→ 70% server volume reduction
+ refresh with energy-efficient products
= 50-70% less data center energy use
Increased server utilization to nearly 80%
Consolidated servers at a 20:1 ratio, and data center space at 20:1
No staff increase needed
New servers deployed in hours, not weeks

Common use case: 200 servers virtualized to 10 physical chassis
Power and cooling savings at $0.10/kWh: ~$595 per server, $113,050 annual power savings*
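A minimal sketch of that savings arithmetic:

```python
SERVERS_BEFORE = 200            # physical servers virtualized
CHASSIS_AFTER = 10              # physical chassis remaining
SAVINGS_PER_SERVER = 595        # USD/yr power+cooling at $0.10/kWh

removed = SERVERS_BEFORE - CHASSIS_AFTER
print(f"Annual power savings: ${removed * SAVINGS_PER_SERVER:,}")
# -> Annual power savings: $113,050
```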
[Diagram: traditional disparate hardware elements – production, DR site, DEV/TEST, SAN and backup servers – consolidated into virtualized shared resource pools on a storage network.]
Virtualization enables higher levels of efficiency
Simplifies management
◦ Centralize server & storage management
◦ Improve service levels
Increases flexibility
◦ Optimize capacity planning
◦ Eliminate scheduled downtime
Reduces total cost of ownership
◦ Improve capacity utilization
◦ Require fewer physical systems
Enables high-speed SAN-based non-disruptive server workload movement
◦ Migration (VMotion)
◦ Failover (HA – High Availability)
◦ Load balancing (DRS – Distributed Resource Scheduler)
Server Virtualization and Network Storage
Reduce TCO including large capital expenditures for Data Center facilities
Resolve immediate environmental issues e.g., hot spots
Combat server and storage growth rates as they exceed data center capacity
Stop data center sprawl to simplify capacity planning and operations
Contain escalating energy costs as power costs begin to exceed IT equipment costs
Support increasing compute density which can overwhelm existing power/cooling capacity
Comply with corporate “Green” initiatives such as carbon neutrality
[Chart: investment vs. reliability – risky under-investment at one extreme, wasteful over-provisioning at the other, with the optimal investment level in between.]
Find the Hidden Data Center: Approach
Four main types of virtualization technologies are emerging:
◦ Server virtualization – virtualizes the physical CPU, memory and I/O of servers
◦ I/O virtualization – virtualizes the physical network topology and the mappings between servers and storage
◦ File virtualization – virtualizes files and namespaces across file servers
◦ Storage virtualization – virtualizes physical block storage devices
“Pools” of commonly grouped physical resources
Dynamic allocations based on application-level grouping and usage policies
Interconnected and controlled through an intelligent interconnect fabric
[Diagram: virtual servers running applications on pooled server, processing, I/O and storage resources, including a stand-by resource pool, all joined by an intelligent fabric.]
Compute, Networking and Storage Virtualization
Server, Storage, and Network Consolidation

              Physical              Virtual
Servers       1,000                 80
Storage       Direct attach         Tiered SAN and NAS
Network       3,000 cables/ports    300 cables/ports
Facilities    200 racks,            10 racks,
              400 power whips       20 power whips
[Diagram: three physical servers, each running an ESX layer hosting its virtual machines.]
• Infrastructure in the data centre running out of capacity
◦ SAN ports
◦ IP ports
• Disaster Recovery planning in place, but the ability to execute was not present
• Desire to increase the capabilities and service offerings of the IT department
• Server environment well structured and built to an SOE
Server Utilisation Summary

Number of servers analysed       13
Total processing capacity        61,634 MHz
Unused processing capacity       55,373 MHz
Total memory capacity            31 GB
Unused memory capacity           13 GB
Total storage capacity           1,040 GB
Unused storage capacity          487 GB
Total page file capacity         49,910 GB
Unused page file                 46,709 GB
Total storage I/O utilisation    88 MBytes/sec
Total network I/O utilisation    3 MBytes/sec
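A quick sketch of what the summary implies: the share of each resource that sat idle across the 13 servers, i.e. the "hidden data center":

```python
CAPACITY = {                    # (total, unused) from the table above
    "processing (MHz)": (61_634, 55_373),
    "memory (GB)": (31, 13),
    "storage (GB)": (1_040, 487),
    "page file (GB)": (49_910, 46_709),
}

for name, (total, unused) in CAPACITY.items():
    print(f"{name}: {unused / total:.0%} unused")
# -> roughly 90%, 42%, 47% and 94% unused respectively
```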
Case Study – Darwin City Council
[Network diagram: the virtual infrastructure is segmented into PUBLIC, STAFF and DMZ zones (plus the vmkernel network), connected to the Internet and the NT Government Network via software firewall/routers and a PIX firewall/router appliance over 100 Mb/sec links, with access for specific outbound traffic only. Virtual machines in the resource pool include: INTRANET1 (SharePoint, Doc Svc), TERRA (MapInfo Exponare), WILLIAM (Civica Authority), DRACO (DataWorks Doc Mgt), ALEXANDRIA (PDC/F&P, web filter), ERMES (Exchange mail server), SENTINEL (Web/Mail Marshall), THOR (DNA app suite/SMS), ZEUS (PDC/F&P services), ADNWMIGRTN (BDC), ELLIS (Civica eServices) and MEDUSA (HTTP proxy, DNS).]
• Virtual infrastructure built on Dell 2950s with VT-enabled Woodcrest processors
• Network segmented into three security zones
• Virtualisation architecture designed to enhance the overall security of the DCC network
Case Study – Darwin City Council
Virtualisation Cost Analysis

No change (three years):
  As-is cost (hardware, electricity)        -$194,166.41
  Provisioning of new hardware              -$26,974.36
  Total                                     -$221,140.77
  Greenhouse emissions (tonnes)             387.23
  (Assumes software costs are static.)

Virtualisation (three years):
  Virtualisation hardware                   -$49,700.00
  Gain in productivity                      $29,587.50
  Virtualisation software                   -$14,700.00
  Internal implementation costs
    (including provisioning)                -$6,069.23
  Consulting costs                          -$16,000.00
  Total                                     -$56,881.73
  Greenhouse emissions (tonnes)             44.28

Net change                                  $164,259.04 (74% reduction)
Greenhouse reduction                        114.32 tonnes per annum
Electricity savings                         $19,053.00 over 3 years
Server count reduction                      10
NPV after 3 years                           $153,973.94 (77%)
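A minimal sketch of a 3-year NPV check on the net change above; the discount rate is an assumption, since the case study does not state the rate behind its $153,973.94 figure:

```python
def npv(annual_cash_flow: float, rate: float, years: int) -> float:
    """Present value of a constant annual cash flow."""
    return sum(annual_cash_flow / (1 + rate) ** t
               for t in range(1, years + 1))

annual_saving = 164_259.04 / 3   # net change spread over three years
for rate in (0.03, 0.07):
    print(f"NPV at {rate:.0%}: ${npv(annual_saving, rate, 3):,.2f}")
```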
• 46 x86-based servers (retired 30 servers)
• 122 x86-based hosts
• 10 ESX servers hosting 86 VMs
• 86 virtual hosts vs. 36 physical hosts
• 70% virtualized in the x86 space
Cost of each blade: $6,250.00
◦ Includes disk, memory, dual processors, etc.
Number of additional servers avoided: 86 virtual - 10 ESX servers = 76
Cost to provide physical servers: 76 x $6,250 = $475,000.00
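A minimal sketch of that cost-avoidance arithmetic:

```python
BLADE_COST = 6_250              # USD per blade (disk, memory, dual proc)
VIRTUAL_HOSTS = 86
ESX_SERVERS = 10

avoided = VIRTUAL_HOSTS - ESX_SERVERS          # 76 servers not bought
print(f"Avoided hardware spend: ${avoided * BLADE_COST:,}")
# -> Avoided hardware spend: $475,000
```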