The New Enterprise Data Center: An evolutionary new model for efficient IT service delivery
Presenter
Presentation Notes
Many of our clients are still plagued by the 20th-century model of highly distributed, fragmented islands of computing, but the business is demanding an efficient, dynamic and highly responsive data center: one grounded in the goal of simplifying first, addressing the most critical operational issues with consolidation, virtualization, new energy-efficient technologies, improved service management capabilities, and strong resiliency and security tactics.

Transition line: However, this new approach isn’t just about the underlying infrastructure; it must also address processes and people.

Note to presenter: Each one of the initiatives is hot-linked to a 4-page set of charts providing more information in the back of the deck. (There is a return link on pages 1 and 4 in each of them.) Information Infrastructure is not yet hot-linked and will be provided in the September refresh.
Trusted Advisor: Proven ability to lead clients through difficult transformations.
Industry Leadership: Client-level execution of global NEDC strategy.
Technology
Services
Transformation
Adaptive Infrastructure
Data Center 3.0 / Next Generation Data Center
Cloud Computing
Network
Why IBM…
Presenter
Presentation Notes
To drive maximum client value and differentiation, IBM must leverage strengths across Technology, Services and Business Transformation. The competitive landscape shows each vendor’s view of the market; it doesn’t show leadership. IBM owns three powerful weapons to use against all competitors:

Broad Portfolio: Hardware, software and services, with not only leadership in all three categories but the best position we have been in for over 15 years.
Trusted Advisor: IBM’s rich history with clients gives us permission to engage them in discussions about the future of their IT and businesses; we have a proven ability to lead clients through disruptive IT transformations.
Industry Leadership: IBM industry teams and services teams have expertise that is unmatched in the industry.

No other competitor has this combination of capabilities.

HP: Very much like IBM, has permission to talk to clients about data center transformation; they have a very good marketing message but lack the breadth and strength of the IBM portfolio. HP relies on partners and recent acquisitions to try to offer a solution. They center the conversation on the strength of the x86 market, which they got from the Compaq acquisition. Their view is centered on blades, because HP lacks the diverse platforms, from mainframe to x86, that IBM brings to the table.
Accenture: Approaches from a services and business-value perspective. Partners to satisfy portfolio requirements. Has front-office permission but frequently lacks data center permission and expertise. Natural partnership with HP.
Cisco: Always comes back to the network. They use the concept of “cloud computing” to justify moving more workloads to the network. They lack a balanced view and expertise for data center transformation, and lack portfolio and permission, but are investing to gain both.
Google: Approaches from the transformation and cloud computing angle but lacks an enterprise-class portfolio and permission in the data center. Its innovation reputation and cloud computing mindshare could open doors.
Let’s take a closer look at our definition of virtualization. Virtualization is the logical representation of resources, not constrained by physical limitations. Virtualization can mean taking one big thing and creating a bunch of smaller resources: one great example is leveraging z/VM on the mainframe to create hundreds of virtual servers using a handful of physical engines. Another element of virtualization is making many smaller resources act as one. This could be workloads spread across many physical servers working in concert with one another, or leveraging common resources to serve many discrete servers, as in the example of a BladeCenter. Most of our clients start with one of these two forms of virtualization, then take the next step of leveraging automation and service management to create a flexible infrastructure that can dynamically adjust and change according to the needs of the business.

Transition line: An important point to keep in mind is that you need to adopt a virtualization strategy for both your server and storage resources.
Grid computing grew out of distributed computing. It was often comprised of multiple independent computing clusters composed of resource nodes not located or maintained within a single (administrative) domain, and potentially geographically dispersed. As grid computing evolved, it began to take on the characteristics of a utility. Utility computing – storage, for example – provided a metered service similar to that of a public utility, offering a fee-based, “rented” computing capability, and often encompassed some form of virtualization. As demand for these “utility” services grew, On Demand became a popular model in which computing resources are made available to the user as needed. These resources may be maintained within the user’s enterprise or made available by a service provider, helping enterprises meet fluctuating demands efficiently. Cloud computing is where we are today: building on all that we have collectively learned over the last 15 years, and leveraging advancements in networks and technology to provide ubiquitous, scalable computing capabilities – anywhere, anytime.

Transition: That is not only a vision, but a reality now for many customers we are working with.
IBM has been an industry leader along the way in these multiple models for outsourcing and hosting. Its data centers have served thousands of enterprises since the 1970s, first as service bureaus and more recently through data center and other outsourcing, accumulating deep experience and a track record in the business of off-premises IT. Starting in 2007, IBM began actively investing in new initiatives like cloud computing. IBM introduced cloud computing with Blue Cloud, is investing in multiple cloud computing centers around the world (US, China, Ireland and others), and has entered partnerships with Google (and others), for example providing cloud computing to university research centers. IBM and Google recently took this partnership to a new level, with the goal of providing ubiquitous access to anyone, combining the strengths of IBM with the reach of Google.

Technology Incubation: The IBM Research Compute Cloud (RC2) is a self-service, on-demand IT delivery solution established in 2H07 in the U.S. and currently being deployed across the Research team worldwide. It was created to improve resource utilization and speed time-to-test of new technologies. It serves as a virtualized service environment, integrating existing assets and products based on SOA. RC2 supports the full life cycle of service delivery, from offering creation through order placement, contract fulfillment, monitoring, reporting and billing, allowing a “zero touch” option that requires no involvement from administrators to execute selected business processes.

Collaborative Innovation – Sogeti: Sogeti Group is the IT consulting arm of Capgemini, with employees across 14 countries in Europe, Asia and the Americas.
Sogeti established a cloud environment for idea exchange with the goals to: improve its sense of being “one company”; foster innovation internally and transform the mindset to one of innovation; change client perception of Sogeti to that of an innovative company; create ideas that will result in improved value for Sogeti clients; and create ideas that can result in improved profit margins for Sogeti.
Phase 1: A 72-hour ideation event for users in 14 countries, held in mid-April 2008, hosted on the IBM Cloud Computing Centre in Dublin; 4,183 users generated 1,903 ideas, 3,356+ comments, 11,727 ratings and 67,372 reviews.
Phase 2: Idea consolidation and refinement (to take place over the next several months).
Phase 3: Incubation and trial.

Government-led Initiatives: IBM is helping the government of Vietnam accelerate its technology development base and skills. Previously, they had been unable to adequately leverage skills, education and expertise on a broader scope, but have a real need to facilitate real-time collaboration between major universities and research institutions.
Benefits: They can foster collaboration among education, research and industry, accelerate development of next-generation skills, and facilitate real-time collaboration between major universities and research institutions.
Workload: Turns the static content of the Vietnam Information for Science and Technology Advance (VISTA) Web site into dynamic, rich, user-generated content; the portal provides tools that enable users to build online communities for developing and sharing ideas.

Software Development: IBM is working with Wuxi Tai Lake Industry Investment and Development Company Limited and the Wuxi municipal government to build the China Cloud Computing Center, a shared facility providing each software company in the park with its own virtualized computing resource. For example, a company will be able to use the allocated resource for designing, developing and testing its software products. Such virtual environments can replace the traditional data center model, in which each company owns and manages its own hardware and software.
Improving the responsiveness of systems and people starts with having a highly flexible, scalable, dynamic IT infrastructure. Having a choice of highly scalable systems becomes a key factor in making that happen, both by providing systems that scale up and systems that scale out. Virtualization is also key here, since it helps you manage and move resources around as you need them.

Scale up: Scalability and easy, modular growth are available with enterprise SMP systems, the largest, most scalable servers in the world. These servers provide support for z/OS, UNIX, Linux, i5/OS® and Windows that is seamless from bottom to top. For example, System z just broke its own record with the largest core banking benchmark result, delivering a record 9,445 business transactions per second in real time. System p has over 70 benchmarks today surpassing HP and Sun for significant UNIX scalability. No other Intel-based server company even tries to go up near 16-way anymore like System x, the most scalable x86 platform in the market today. System Storage offers a variety of disk storage to choose from to address the varied needs of information management and storage.

Scale out: If you want to scale “out,” the architecture of our BladeCenter products supports rapid deployment while reducing cost and complexity. IBM’s SAN Volume Controller helps manage across a broad range of storage solutions, hiding the boundaries among disk systems and allowing better availability of all storage resources. Managing storage across the enterprise is key, and working with both IBM and non-IBM storage is something that only IBM’s SAN Volume Controller can do.

Open standards: IBM’s embrace of open standards continues to allow customers to pick the right application for their business need, knowing that the operating system needed can be supported on most IBM Systems. An example of this openness? IBM and Sun announced that IBM will distribute the Solaris OS and Solaris Subscriptions for select IBM System x servers and BladeCenter servers.

Scale within: IBM virtualization solutions allow you to blend this strategy across your technology, and in many cases across other vendors’ technology as well. Virtualization allows you to drive up utilization, and also to divide powerful systems into many smaller systems to meet unique application or workload requirements instead of always having to buy more servers. The important point here is that we offer choices in how you scale, providing confidence that you will have the processing power to get the work done, and flexibility in doing so.

Transition line: We teed up the point of virtualization here again; it is so important for creating a truly dynamic infrastructure that is responsive to whatever workload challenges occur in your business. And the reality is that your IT environment will run across multiple types of servers and storage. To manage a truly heterogeneous environment, you need a truly open systems management approach, one that is provided with IBM Systems Director.
Data sources: “Green IT: A New Industry Shock Wave,” Gartner Symposium/ITxpo, October 2007; IDC; US DOE; F. Renzi
Carbon footprint
– Estimates: Gartner (worldwide), IDC (USA)
– Roughly equal to the airline industry
– Expected to double in 4 years
Presenter
Presentation Notes
Gartner estimates that IT accounts for 2% of worldwide CO2 production; IDC estimates a 3% impact for IT in the USA. Both predict that this impact will double in 4 years. This impact equals the CO2 production of the airline industry. We are talking about CO2 linked to human activity (since CO2 is also generated by plants).
On the site www.impattozero.it anyone can calculate their own environmental impact, and hence their CO2 production, from a series of personal data. In this case it is the impact of one IBM employee (you can say it is yours, if you like). Since air travel, car use and home electricity (used for the PC) are mainly attributable to work activity, and hence to IT, one can deduce that 80% of this person’s CO2 production is due to IT.
Page summary: Four factors that require a company to address the environmental aspect. Rules and regulations – recall the Kyoto Protocol and the European 20-20-20 directive, which requires that by 2020 CO2 be reduced by 20%, at least 20% of energy come from renewables, and a 20% energy-efficiency improvement be achieved. Image. Costs: for every euro spent on hardware, €0.50 goes to energy consumption. Business development: generation of new business linked to this theme. The house = “Carbon House” according to IBM. For each block, IBM has a solution for you (we see it in the next chart). I am also adding a chart with explanations.
$1 billion allocated each year to accelerate the development of “green” services and technologies:
- Development of technology solutions combining hardware, software and services with the objective of reducing IT energy consumption.
- Development of methodologies and skills for “green” services and consulting.
Project “Big Green”
First actions and results
40% – Between 1990 and 2005, IBM’s energy-saving initiatives reduced or avoided CO2 emissions amounting to 40% of IBM’s 1990 emissions.
Awards and recognition – IBM’s efforts are published and verified:
- Computer Program: charter member 1992, since inception
- Business Environmental Leadership Council: charter member 2002
- WRI Green Power Market Development Group: charter member 2000
- 1605(b) voluntary emissions reporting: since 1995
- FORTUNE 500 Top 20: 2004, 2005, 2006
- The Climate Group: 2005
- Green Power Purchaser Award: 2006
- US EPA Climate Protection Award: 1998 and 2006
Future targets
12% – From 2005 to 2012: a further 12% reduction in CO2 emissions from IBM’s own energy consumption. This against a projected doubling of ICT consumption. (2008)
Presenter
Presentation Notes
IBM to reallocate $1 billion each year:
- To accelerate “green” technologies and services
- To offer a roadmap for clients to address the IT energy crisis while leveraging IBM hardware, software, services, research, and financing teams
- To create a global “green” team of almost 1,000 energy efficiency specialists from across IBM

Re-affirming a long-standing commitment at IBM:
- Energy conservation efforts from 1990 to 2005 have resulted in a 40% reduction in CO2 emissions and a quarter billion dollars of energy savings
- Annually invest $100M in infrastructure to support remanufacturing and recycling best practices
- Will double compute capacity by 2010 without increasing power consumption or carbon footprint, saving 5 billion kilowatt-hours per year, equal to the energy consumed by Paris, “the City of Lights”

What “green” solutions can mean for clients: for the typical 25,000-square-foot data center that spends $2.6 million in power annually, energy costs can be cut in half, equal to the reduction in emissions from taking 1,300 automobiles off the road.

Worth underlining are the various awards IBM received during the year for its commitment; in particular, in 2008, Best Green IT Company from Computerworld (HP in seventh place, Sun in twelfth).
For every euro spent acquiring hardware, €0.50 goes to power & cooling (the UPS/air-conditioning portion).
Data center watts/sq. ft.
Source: Gartner, “U.S. Data Centers: The Calm Before the Storm,”
ID #G00151687, September 25, 2007
Presenter
Presentation Notes
[Chart: IT energy crisis] According to Gartner, most enterprise data centers – at least in the United States – were built more than seven years ago and were not designed to handle the rising energy and cooling demands of today’s systems and networking equipment. They also weren’t designed to be adaptable enough to take advantage of the changing competitive IT landscape. These legacy data centers were typically built to a design specification of about 35 watts per square foot on the low end to 70 watts per square foot on the high end. Current design needs can vary between 150 and 200 watts per square foot, and by 2011 this could rise to more than 300 watts per square foot.

These figures represent just the energy needed to power the IT equipment in data centers; they don’t include the energy needed by air conditioning systems to remove the heat generated by this equipment. Cooling can increase the overall power requirements by an additional 80% to 120%. The implication is that most current data centers will be unable to host the next generation of high-density equipment. Unlike rising management costs and increased administration, power limits are experienced as a wall you run into. Companies are forced to make a decision: either expand their existing data centers, which can cost many millions of dollars, or innovate inside the current envelope.

To help address this data center energy crisis and spur “green” innovation, IBM announced a comprehensive initiative aimed at it last May.
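The arithmetic behind those figures can be sketched quickly. The densities and cooling-overhead factors below come from the Gartner numbers above; the floor area is just an assumed example, not a figure from the chart:

```python
# Total facility power (IT load plus cooling) for an assumed floor area,
# using the watts-per-square-foot densities and the 80-120% cooling
# overhead quoted above. All inputs are illustrative assumptions.
def facility_power_kw(area_sqft, it_watts_per_sqft, cooling_overhead):
    """Return IT load plus cooling load, in kilowatts."""
    it_kw = area_sqft * it_watts_per_sqft / 1000.0
    return it_kw * (1.0 + cooling_overhead)

area = 10_000  # sq ft (assumed example)
legacy = facility_power_kw(area, 50, 0.8)    # ~50 W/sq ft legacy design, 80% overhead
modern = facility_power_kw(area, 200, 1.2)   # ~200 W/sq ft current design, 120% overhead
print(round(legacy), round(modern))          # kilowatts
```

The same legacy floor space ends up needing several times its original design power once both density and cooling overhead rise, which is the "wall" described above.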
Datacenters are Capital Intensive and Must Run at High Utilization for Cost-Effective Operation (model from GTO2008)
To minimize cost of computing:
1. Run at the highest possible utilization, while preserving response time
2. Buy enough machines to fully utilize power and cooling at the average utilization rate
3. Consider buying more expensive machines if their utilization can be higher
Presenter
Presentation Notes
[CHART 12: Utilization] With this next chart I’ll show the impact of utilization. We will compute cost per compute kilowatt-hour (lower is better). We’re trying to come up with a single metric that combines the electrical cost, which we consider a variable cost, and all of the fixed costs of the server that are allocated into it.

If you look at the first bar, what we’ve included there is the cost of the infrastructure: the cooling towers and the electrical distribution. The thin line is the floor space. The reason that line is so small is that buildings typically have a lifetime amortization on the order of 20 years; the cost of the floor space sitting underneath the footprint of a server is relatively small. In the second bar we put servers in the building and, again, the cost goes up. But at this point the servers are not turned on. You’re starting with a baseline expense of 20 cents per compute kilowatt-hour to build out the data center and have servers in it doing nothing. From the third bar onward, we start to turn on the servers, and as you move across the horizontal axis, the CPU utilization of the server increases, so the power consumption goes up slightly all the way out to 100 percent.

The second important part of this graph is the diagonal line labeled “throughput” that runs at a 45-degree angle. This represents the amount of work done by the server. It is a very simplistic model that says the amount of work scales linearly with CPU utilization. That means if you’re producing two units of work with the system running at 20 percent, and you increase the utilization to 60 percent – three times greater utilization – the units of work go from two up to six.

The part that’s very interesting on this graph is the curve labeled, starting right in the middle, “cost per compute kilowatt hour.” We’ve calculated it by dividing the value of the bars, which is the cost, by the value on that 45-degree line, which is the amount of work getting done. That gives you a cost per compute kilowatt-hour or, in the factory analogy, a cost of finished goods. The very interesting thing is that this curve goes down very dramatically from around 20 percent CPU utilization out to 60 percent: the cost of the actual computing you’re getting done drops sharply. This is one of the reasons driving the whole virtualization movement. The payback is especially large at very low utilizations: you have such sunk cost in your overall data center, with servers sitting there barely producing any work, that the economic value of driving utilization up from very low levels into the higher range is incredibly large.
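The model described above can be sketched in a few lines. The 20-cents-per-kWh baseline comes from the chart; the idle/peak power draw and energy price are assumed placeholder values, and work is taken to scale linearly with utilization as in the chart’s simplistic model:

```python
# Illustrative cost-per-compute-kWh model (assumed numbers, not the chart's data).
# Fixed build-out cost plus variable electricity cost, divided by throughput,
# where throughput scales linearly with CPU utilization.
def cost_per_compute_kwh(utilization,
                         fixed_cost_per_kwh=0.20,  # build-out baseline from the chart
                         idle_power_kw=0.6,        # draw at 0% utilization (assumed)
                         max_power_kw=1.0,         # draw at 100% utilization (assumed)
                         energy_price=0.10):       # $ per electrical kWh (assumed)
    """Cost of one unit of delivered compute at a given CPU utilization (0-1)."""
    if utilization <= 0:
        raise ValueError("utilization must be positive")
    # Power rises only slightly with load; work rises linearly with utilization.
    power_kw = idle_power_kw + (max_power_kw - idle_power_kw) * utilization
    variable_cost = power_kw * energy_price          # electricity, per hour
    fixed_cost = fixed_cost_per_kwh * max_power_kw   # amortized build-out, per hour
    throughput = utilization                         # units of work per hour (linear)
    return (fixed_cost + variable_cost) / throughput

low = cost_per_compute_kwh(0.2)   # lightly loaded server
high = cost_per_compute_kwh(0.6)  # same server, 3x the utilization
print(round(low, 3), round(high, 3))
```

Even with these rough inputs, tripling utilization cuts the cost per unit of work by well over half, which is the economic argument for virtualization made above.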
Optimal buildings and products with a real time control loop
Datacenter of the future (from GTO2008)
Evidence that there may be optimal datacenter building blocks (~10,000 ft², 1 MW) for cost efficiency and delivery
High value in optimizing all components (servers, storage, networking) into this footprint
24x7 real-time sensors (power, heat) should feedback into systems management environment
Presenter
Presentation Notes
[CHART 13: The Building] The other element of this model looks at the overall building. If you look at the way you build a building, there’s also an optimal building-block size: roughly 10,000 square feet and about one megawatt of electricity (which works out to about 100 watts per square foot). This is not an upper limit on size; rather, designing around 10,000 square feet and one megawatt optimizes that particular part of the data center. If somebody wanted a 50,000-square-foot data center, they’d end up with five of these building blocks.

Having uniform heat production across the datacenter is important. Since networking gear, storage and server equipment all produce different amounts of heat, it may be good to intersperse them to keep the average load of your data center down. There are also trends toward real-time sensors in data centers, both for sensing how much power is being used and how much heat is being produced.
Power Reduction: Monitor and reduce power to idle logic within cores
NAP Mode: Power off inactive cores, restore power when needed
Thermal Tuning: Sensors monitor and reduce power to overactive circuits
We tend to think of the processor as the power culprit, but as you can see from this chart, the ‘other’ category drives more power requirements than the processor. Some of the areas included in ‘other’ are AC-to-DC conversion, DC-to-DC delivery, and fans and air movement. IBM engineers saw these issues as an opportunity to innovate and deliver better power solutions. We’ve built BladeCenter using more efficient power supplies to reduce the waste during the AC-to-DC conversion; this waste can account for as much as 40% of electrical usage (see the next chart). Our super energy-efficient power supplies deliver more power to the server, with fewer wasted watts in the AC-to-DC conversion. We’ve also designed with fewer parts: this smarter shared-infrastructure design means fewer components that draw power, and less hardware means fewer watts. A third way we’ve been creative is a smarter thermal solution: compared to 1U servers, we’ve reduced the number of fans from 112 down to just 2 low-power blowers, a smarter solution than even HP’s newest blade designs.
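To see why AC-to-DC conversion matters so much, consider the wall power needed to deliver a fixed DC load at different supply efficiencies. The load and efficiency figures below are assumed for illustration, not BladeCenter specifications:

```python
# Hypothetical illustration of AC-to-DC conversion waste.
# A less efficient power supply draws more wall power for the same DC load.
def wall_power_watts(dc_load_watts, efficiency):
    """AC input power required to deliver a given DC load (efficiency in 0-1)."""
    return dc_load_watts / efficiency

load = 500                                   # DC watts the server uses (assumed)
legacy = wall_power_watts(load, 0.70)        # older supply, ~70% efficient (assumed)
efficient = wall_power_watts(load, 0.90)     # high-efficiency supply (assumed)
print(round(legacy - efficient, 1))          # wall watts saved per server
```

At these assumed figures the saving is over 150 W per server before any cooling benefit, and the saved waste heat no longer has to be removed by the air-conditioning plant either.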
The IBM Project Big Green action plan has dramatically improved cooling innovations from the molecular level to the facility level. ‘Airgap’ technology in chips reduces power by 15%, or increases performance by 35% at current power levels. New power management modes at the processor level on IBM POWER6 include Power Save, Performance-Aware Power Save, Power Capping, Turbo, and Acoustic Optimization. At the system level, the new design of IBM power supplies meets the new 80/20 requirements: greater than 80% efficiency at any load greater than 20%. IBM server developers use a system design methodology known as Calibrated Vectored Cooling: we package design features that direct cooling to specific locations based on thermal needs, using zone cooling, counter-rotating fan blades, hexagonal air holes and advanced heat-sink designs for faster heat removal. At the rack level, we can save at least 15% of cooling energy by switching from air to water, since water is a better medium than air for transferring heat out of the server environment; the IBM Rear Door heat exchanger cooling system can be placed within the rack, using chilled water flowing through pipes in a closed-loop system.

At the facility level, IBM is active with the EPA and the industry in helping to get Energy Star Tier 1 completed. We are also working with the EPA/DOE and the EU on data-center-level metrics for assessments and ratings. We participate in 80 PLUS to drive efficient power supply designs and a measurement and rating system for them. IBM has services to assist customers seeking LEED green building certification, and is participating in developing a scorecard for green data center certifications. We are active in both DMTF and SNIA to deliver standards for collecting energy information from IT and facilities equipment in the data center. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is where we are working on defining resilient and efficient data center guidelines.
Measure and Manage: Track things like temperature and power usage, with trending over time to help with planning. Power capping can be very important to clients who have signed an agreement with their utility provider under which usage beyond a set threshold means they pay a premium. This capability allows the client to make the appropriate trade-offs between performance and efficiency. (Note: power capping means setting policies to throttle down the clock speed when certain thresholds are met, to lower power consumption.)

Key takeaway: AEM allows customers to trade off power for performance, providing the tools to decide how much energy each server should be allocated and to set thresholds not to exceed that level. Customers can work with actual data rather than the nameplate data on the server.
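The throttling policy described in the note above can be sketched as a simple control loop. This is a hypothetical illustration of the idea, not Active Energy Manager’s actual algorithm or API; the cap, step size and hysteresis band are all assumptions:

```python
# Minimal sketch of a power-capping policy: throttle the clock when measured
# draw exceeds the contracted cap, restore it once there is headroom again.
def next_clock_scale(measured_watts, cap_watts, current_scale,
                     step=0.05, min_scale=0.5):
    """Return the CPU clock scaling factor (min_scale..1.0) for the next interval."""
    if measured_watts > cap_watts:
        return max(min_scale, current_scale - step)   # over the cap: throttle down
    if measured_watts < 0.9 * cap_watts:
        return min(1.0, current_scale + step)         # clear headroom: restore speed
    return current_scale                              # near the cap: hold steady

scale = 1.0
for watts in (950, 1050, 1080, 1020, 880):  # simulated readings against a 1000 W cap
    scale = next_clock_scale(watts, 1000, scale)
print(round(scale, 2))
```

The 10% hysteresis band keeps the policy from oscillating when draw hovers near the cap, which is the kind of performance-versus-efficiency trade-off the client controls through thresholds.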
Active Energy Manager
- Data Center Infrastructure / Assets
- Consumption trends for individual elements or for groups
- Information on energy consumption and heat dissipation
- Three-dimensional visualization
Presenter
Presentation Notes
So far we have spent much of this session looking at improved performance per watt, energy-saving modes, energy monitoring and trending, dynamic energy optimization, and improving system utilization. But a green agenda is not complete without looking at overall data center and building efficiency and reliability. How can we tie IT and the data center infrastructure together? How can we integrate with enterprise management to deliver complete data center efficiency, thermal monitoring and management, and new tools for data center modeling?

This chart represents the expanded capabilities IBM is now delivering through an integrated solution stack. IBM Director/AEM monitors and manages energy at the resource level, for both IBM and non-IBM systems. Tivoli products then expand the AEM scope and function to the IT services, workloads, and service-level agreements in the data center; Tivoli integrates energy management into enterprise management. This allows IBM to monitor power usage and thermal data from IT resources through embedded or remote sensors, leverage partner capabilities for data center and facilities assets, and integrate with application performance metrics. All of this creates a method that brings traditional IT measurements and emerging environmental measurements onto a common dashboard with thresholding, trending, and event generation. This aggregation of IT and environmental metrics makes it possible to take manual or automated actions when needed for monitoring and managing physical and virtual systems. Specific to facilities management, it gives IBM the unique ability to map and visualize data center facilities; obtain information on power, temperature, and layout; identify problem areas; and enable improved facilities management in support of IT.

Solution scenarios include Measure & Monitor at the lower levels: performance, utilization, response times, power usage, and thermals. Control & Optimization can then be used for power capping, virtualization, storage tiering, and intelligent provisioning. As you move up, Dynamic Optimization takes over, saving power through dynamic consolidation using live VM mobility or by coordinating with the facilities infrastructure. What IBM has accomplished is the creation of energy management as a component of systems management.
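The thresholding, trending, and event-generation flow described in these notes can be illustrated with a small sketch. The class and metric names here are assumptions for the example, not the Tivoli/AEM API:

```python
# Illustrative sketch of aggregating IT and environmental metrics onto
# one view, with thresholding, trending, and event generation.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    threshold: float
    samples: list = field(default_factory=list)

    def record(self, value):
        """Store a sample; return an event string if the threshold is crossed."""
        self.samples.append(value)
        if value > self.threshold:
            return f"EVENT: {self.name} = {value} exceeds {self.threshold}"
        return None

    def trend(self):
        """Simple trend: the average of recorded samples."""
        return sum(self.samples) / len(self.samples)

# One environmental metric and one IT metric on a common dashboard
dashboard = [Metric("inlet_temp_C", threshold=27.0),
             Metric("rack_power_W", threshold=5000.0)]

events = []
for value, metric in [(25.0, dashboard[0]), (28.5, dashboard[0]),
                      (4800.0, dashboard[1])]:
    evt = metric.record(value)
    if evt:
        events.append(evt)   # could trigger a manual or automated action

print(events)                # one over-temperature event
print(dashboard[0].trend())  # average inlet temperature so far
```

An event here is where an automated action (power capping, VM migration, a facilities alarm) would hook in, which is the manual-or-automated response path the notes describe.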
The New Enterprise Data Center An evolutionary new model for efficient IT service delivery
Presenter
Presentation Notes
Many of our clients are still plagued with the 20th-century model of highly distributed, fragmented islands of computing, but the business is demanding an efficient, dynamic, and highly responsive data center: one grounded in the goal of simplifying first, addressing the most critical operational issues with consolidation, virtualization, new energy-efficient technologies, improved service management capabilities, and strong resiliency and security tactics. Transition line: However, this new approach isn't just about the underlying infrastructure; it must address processes and people as well. Note to presenter: Each one of the initiatives is hotlinked to a 4-page set of charts providing more information in the back of the deck. (There is a return link on pages 1 and 4 in each of them.) Information Infrastructure is not yet hotlinked and will be provided in the September refresh.
toward a holistic, integrated approach that combines technology with processes and the organization, and ICT security with the protection of critical infrastructure
Sources: IBM Tivoli Market Needs and Profiling Study, 2005; "The Costs of Enterprise Downtime: NA Vertical Markets 2005," Information Research; IBM Market Intelligence
Presenter
Presentation Notes
In order to manage this explosive growth, businesses require an Information Infrastructure consisting of servers, software, storage, and networks, all integrated and optimized to deliver information from the storage media to the business applications. IBM Information Infrastructure is an initiative that helps clients meet the challenge of the Information Explosion by improving competencies around four key areas: Information Availability, Information Security, Information Retention, and Information Compliance.