
EXECUTIVE GUIDE

Virtualization meets reality

Virtualization is being used to make the most out of everything from applications and servers to desktops and storage. But as companies expand their reliance on the technology, challenges mount.

Sponsored by VMware
www.vmware.com


Table of Contents

Introduction

Virtualization basics
  Variations on a virtualization theme
  Server virtualization goes mainstream
  New data centers mean new job opportunities

Management and security
  The 8 key challenges of virtualizing your data center
  Q&A: Virtualization invites management nightmare, says Yankee Group analyst
  Security and virtualization
  Virtualization security risks being overlooked, Gartner warns
  Virtualization reality check
  Virtualization: Xen and the art of hypervisor maintenance

Money matters
  Virtualization ROI hard to quantify
  New ways to save on virtualization

Case studies
  Application virtualization paying off for university
  Combo of single sign-on and virtualization pays big dividends at hospital
  IT management done right


Introduction

Virtualization meets reality

Half of the 1,770 North American businesses surveyed by Forrester Research said they were using or testing server virtualization last year. Buyers vouch for the cost savings and performance benefits. And software vendors are salivating at what some say will be a $1 billion market in 2007.

Virtualization — for servers, storage, applications and desktops — is clearly taking root.

But that doesn’t mean all the questions have been answered. As the technology is used more, buyers are becoming more concerned with everything from management to security to ROI.

“We’ve dealt with physical-server sprawl and we know what that is,” says George Hamilton, director of Yankee Group’s enabling technologies enterprise group. “Now we have virtual server sprawl. You can end up with capacity issues and resource allocation issues, so a lot of the initial challenges are around how to optimize the physical infrastructure for all the virtual machines.”

Andreas Antonopoulos, an analyst at Nemertes Research, says that the terms “security” and “virtualization” were rarely heard together until recently, but those who are operating virtualized environments need to think seriously about security.

“Security has a lot to gain from virtualization — and virtualization has a lot to lose if it has no security controls,” he says.

Allwyn Sequeira, senior vice president of product operations at patching specialist Blue Lane Technologies, agrees: “The decoupling [of hardware and software] risks blinding security pros to what is going on behind their network security appliances.”

These and other adoption lessons have some customers questioning whether they are getting the most out of their early efforts. Some 44% of 800 IT organizations polled in one survey said they weren’t sure if their server virtualization rollouts were successful.

Other early adopters have run into assorted gotchas, from overloaded WAN connections to departments that don’t want to give up control of physical servers to applications that aren’t optimized for virtualized systems. “I have heard of companies that have gotten a lot of pushback from departments who don’t want to give up their own hardware or applications,” says Charles King, principal analyst with Pund-IT, a technology analysis firm.

With virtualization being a relatively new technology, particularly in x86 server environments, many organizations are still sorting out the best way to use it as they shift from testing and development to production. Some companies are using virtualization for basic physical server consolidation, while others are mastering the intricacies of hypervisors to coordinate communication between operating systems and server hardware. Still others are looking to cluster their virtualized systems.

Southwest Washington Medical Center likes what it sees so far with virtualization. It adopted the technology as part of a broader single sign-on project to serve up about 200 applications to more than 6,000 employees and partners. Using virtualization, the medical center has been able to support these users with about 20% fewer servers.

Among the benefits has been easier troubleshooting. “If it’s one user having a problem but you’ve got 300 other users using the same application on the same cluster, your triage cycle is greatly pruned,” says Christopher Paidhrin, CSO for Southwest Washington Medical Center.

Northeastern University used to take a month or more to roll out a new application to workstations at campus labs. It now takes almost no time for authorized users because desktop virtualization technology has essentially changed the meaning of the term “installation process.”

“Because most applications are modular these days, when you want to run an application you don’t have to download the whole thing. You just take or cache the [part] you need,” says Navid Atoofi, director of system production services.

At Babson College, the school’s director of architecture and development, manager of networks and servers, and manager of software services are all behind virtualization. “As of February 2007, we’re running 29 virtual servers on four physical dual-processor machines,” says Kuljit Dharni, Babson’s director of architecture and development. “The conservative estimate is [we can] scale to 80 virtual machines on the same hardware.”

Vendors pushing virtualization — and it seems most are — say the bottom line for the technology is that it should keep data centers from getting out of control.

“Customers who run a data center with 50 or 100 physical servers may need 500 or 1,000 of those machines some day,” says Kevin Leahy, director of virtualization at IBM. “How do you manage all of that environment?”

The answer is virtualization.


Section 1: Virtualization basics

Variations on a virtualization theme

Which strategy is right for your data center: consolidation, clusters or grids?

By Phil Hochmuth

To virtualize or not to virtualize -- that is no longer the question when it comes to deploying Linux in the data center. Today, the question is which virtualization approach to take.

One option is to junk dozens, or hundreds, of stand-alone server boxes and consolidate virtualized Linux server images onto a few large hosts. Another is to buy hundreds of new Linux machines and tie them together as a single, virtual system via clustering or grid technology.

“Linux is the strongest example of an operating system that runs on almost any hardware you can think of, and almost any deployment scenario you can think of,” says Jean Bozman, research vice president with IDC’s Enterprise Server Group. “The style of a virtualized Linux deployment you use depends who you are and what problems you’re trying to solve. Clusters, grids, virtualized servers are all possible from the basic building blocks of Linux.”

Scale up with consolidation

The trendy data-center virtualization scheme among Linux users is server consolidation, which aims to address a problem that has roots in the economic downturn of 2001 to 2003, when cash-strapped enterprises started favoring smaller servers over larger ones, Bozman says.

“Over that time, there was a proliferation of volume servers, the likes of which has never been seen,” she says. Before 2001, Linux server shipments were around 3 million to 4 million units per year. Now they top 7 million. For customers who built out data centers using hundreds of machines, there is now a push to pare down the amount of “pizza box” hardware.

“Customers who run a data center with 50 or 100 physical servers may need 500 or 1,000 of those machines someday,” says Kevin Leahy, director of virtualization at IBM. “How do you manage all of that environment? That’s where the scale-up environment takes advantage of that.”

The drivers behind the scale-up model include the ability to manage and provision servers more easily, with virtualized servers all running inside one box. Cost savings on power consumption of one large machine, vs. hundreds of single-rack-unit boxes, can be significant. A study by Gartner found that the cost of energy in data centers is in some cases almost equal to the cost of the server hardware itself.

For Nationwide Insurance, consolidation of 416 Linux servers onto a single Big Iron box means less walking around and pushing buttons. This is not insignificant when considering wide-scale server maintenance, such as applying Linux kernel or application software patches, according to Steve Womer, senior IT architect at Nationwide Insurance.

“Let’s say it takes you 45 minutes per server to apply patches and software fixes, to reboot them and get them back up,” Womer says. “Forty-five minutes, with 418 servers -- that’s 315 man-hours. I’ve got eight people to do all this. That’s a long time.”

Womer uses a single shared-root file system, which the 418 servers share, running on top of the IBM z/VM virtualization layer of the mainframe. “If you only have one root, it’s only two man-hours to patch the copy of the shared read-only root, then you start rolling it through.”
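The arithmetic behind those numbers is worth making explicit. A rough back-of-the-envelope check using only the figures Womer cites (an illustrative sketch, not Nationwide's actual tooling):

# Patching each server individually vs. patching one shared read-only
# root. Figures are the ones Womer cites; the 315 man-hours he quotes
# is a rounding of 313.5.
SERVERS = 418
MINUTES_PER_SERVER = 45

per_server_hours = SERVERS * MINUTES_PER_SERVER / 60
shared_root_hours = 2  # patch the single shared root, then roll it out

print(f"per-server patching: {per_server_hours:.1f} man-hours")  # 313.5
print(f"shared-root patching: {shared_root_hours} man-hours")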

Hype over hypervisors

Several key Linux kernel and system-tool advancements over the last several years are helping these virtualized data-centers-in-a-box and grid-style deployments to evolve.

“The introduction of hypervisor technology you might say is the single most important virtualization advancement over the past five years,” says Justin Steinman, Novell’s director of product marketing for Linux and open source.

The hypervisor is a software layer that sits between the guest operating system and the physical server. “The best way to think of it is as the traffic cop,” Steinman says. The software controls the different operating systems that are running on a virtualized server and manages the flow of hardware resources, such as I/O, storage, processor use and memory access. Open source and vendor-specific products in this area include the open source Xen virtualization technology, IBM’s z/VM and VMware’s ESX Server.
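To make the “traffic cop” idea concrete, here is a deliberately tiny, hypothetical sketch of a hypervisor-like scheduler that admits guests against a host memory budget and hands out CPU time slices round-robin. It is a toy model for illustration only; real hypervisors such as Xen, z/VM and ESX Server are far more sophisticated.

from collections import deque

class ToyHypervisor:
    """Toy model of the hypervisor's 'traffic cop' role."""

    def __init__(self, host_memory_mb):
        self.free_memory_mb = host_memory_mb
        self.run_queue = deque()

    def boot_guest(self, name, memory_mb):
        # Admission control: refuse guests the host cannot hold.
        if memory_mb > self.free_memory_mb:
            raise MemoryError(f"not enough host memory for {name}")
        self.free_memory_mb -= memory_mb
        self.run_queue.append(name)

    def run(self, time_slices):
        # Round-robin scheduling: each guest runs one slice, then
        # rejoins the back of the queue.
        for _ in range(time_slices):
            guest = self.run_queue.popleft()
            print(f"{guest} runs for one time slice")
            self.run_queue.append(guest)

hv = ToyHypervisor(host_memory_mb=8192)
hv.boot_guest("linux-web", memory_mb=2048)
hv.boot_guest("linux-db", memory_mb=4096)
hv.run(time_slices=4)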

Virtualization in which the guest is modified to cooperate with the hypervisor is called paravirtualization, Steinman says, as opposed to standard VMware-style full virtualization, in which an unmodified guest operating system runs inside a host without any knowledge of the host system. Novell’s SUSE Linux Enterprise Server 10 has a Xen hypervisor built into the Linux distribution, and Red Hat’s forthcoming update to its Enterprise Linux Server also will have this virtualization piece built in.

“You need to put software drivers [in the guest Linux systems] to make them aware that they’re being virtualized,” he says.



This enables the virtualized Linux systems to use processor resources more efficiently. Otherwise, the systems would compete for resources, with the software functioning as if running on a weak hardware system.

In clustering and distributed computing, some of the important advances have happened inside the Linux kernel, as well as with system and management tools offered by vendors to harness and control dozens, hundreds or thousands of Linux-based processors.

“That’s a challenge for high-performance computing users,” Steinman says. “How do you make sure all those processors are the exact same operating system with the exact same patch, with all the different tweaks there? If one box is out of sync, it could bring the whole system down.”

Tweaks in the Linux kernel over the last few years also have expanded possibilities for distributed, virtualized Linux.

“Some of the advancements inside of Linux that have helped this stuff are improvement in scalability and performance,” Steinman says. Linux software can now scale to 10TB of memory across a grid or cluster, and as many as 1,024 processors. “That’s an advantage where the open source technology has improved to enable that. You could go out this afternoon and download the code and find the exact code tweak that was made to implement that kind of advancement.”

Linux virtualization also is being used to consolidate Windows servers in some IT shops. Success Apparel, a children’s clothing company in New York, boiled down its 17 separate Windows servers to nine servers running SUSE Enterprise Linux, VMware and virtual Windows instances on top.

The move “has reduced operating expenses by 25% while allowing our IT staff to concentrate on other projects,” says Steven Golub, the company’s IT manager.

Scale out with clustering

“It’s funny with all the excitement about virtualization, people have sort of almost forgotten that clustering is a form of virtualization,” Bozman says. “Clustering was one of the earliest forms of virtualization, in the sense that when an application is cluster-aware, it views all of the attached server nodes as being resources that it can use, as if it were on a big SMP [symmetric multiprocessing] machine.”

Users of large, high-powered Linux cluster systems say the mix of proprietary virtualization management software, along with low-cost hardware and free Linux, is opening up the processing-power floodgates.

CIS Hollywood is a digital special-effects house that produced digital images for “Pirates of the Caribbean,” the fantasy epic “Eragon” and an “X-Men” movie sequel, among dozens of other movies. Much of CIS Hollywood’s rendering work -- in which large computer files are processed and crunched down into a viewable digital movie format -- is done on a cluster of 40 Linux PCs, running the free 64-bit version of the CentOS Linux distribution, which are managed by software from Linux Networx.

“The big key with Linux Networx is manageability,” says Matt Ashton, systems manager for CIS Hollywood. “Instead of having to maintain individual nodes -- which can be done with a variety of scripts -- they’ve got all of that all set up to go. I can update all 40 machines with a few mouse clicks without having to do it by hand.”

To CIS’ users -- artists, graphic designers and computer technicians -- the Linux cluster appears as one large virtual machine. Fronting the cluster is a scheduling application written in-house, which distributes rendering jobs to the 40 machines. “Users don’t interact with individual nodes,” Ashton says. “They just submit jobs, and the queue management software takes care of it.”
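The pattern Ashton describes is a classic work queue. A minimal sketch of the idea, with hypothetical node names and job formats standing in for CIS Hollywood's in-house scheduler:

import queue

# Users submit jobs; they never address individual nodes.
jobs = queue.Queue()
for frame in range(1, 7):
    jobs.put(f"render frame {frame:04d}")

nodes = [f"node{i:02d}" for i in range(1, 4)]  # three of the 40 nodes

# The dispatcher hands the next job to the next free node.
while not jobs.empty():
    for node in nodes:
        if jobs.empty():
            break
        print(f"{node} <- {jobs.get()}")  # in reality, sent over the network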

CIS has used a clustered, virtual rendering system for more than four years as a way to process the work of its artists more quickly and inexpensively. Ashton says nodes in the cluster -- dual-processor AMD Opteron boxes with 4GB of memory -- cost about $4,000 each. CIS’ large SMP Linux machines -- four-processor, dual-core machines with 32GB of memory -- cost between $30,000 and $40,000 each. The cost savings on a per-node basis is between $2,000 and $3,000 when scaling the system out, as opposed to up, he says.

PayPal, the online payment system owned by eBay, uses thousands of Linux machines to run its Web presence. The Web company replicates a single Linux/Apache image, bundled with its own transaction software, across these servers that appear as a single system to customers.

“Rather than have a monolithic box, we just have so many [nodes] that the breakages are irrelevant,” says Matthew Mengerink, vice president of core technologies for PayPal.

However, few enterprises need the kind of computing power of a CIS Hollywood, or the scale of a global payment system, such as PayPal’s.

Google is another example of the scale-out model, Steinman says. Its search engine runs on thousands of distributed Linux computers, which provide its signature fast, accurate search results. “But will an enterprise run its SAP platform on that model?” Steinman asks. “Probably not.”

Griddy up

However, this does not preclude the use of distributed, virtualized computing in enterprises.

“Businesses tend to use [a distributed Linux] model in certain specialized enterprise applications, such as actuary or risk management applications,” IBM’s Leahy says. “You could build a stand-alone environment, which could deliver these processes in minutes or hours, but it would be pretty expensive and dedicated to one thing.” This single-purpose system also would remain idle most of the time, he adds.

This is popular in Wall Street firms, where trading desks have very powerful workstations that often sit idle during the hours when the markets are closed.

“Some people would like to have a series of distributed resources, the kind of work you used to do on a mainframe,” IDC’s Bozman says. “This is a work in progress, but clearly people would like to do that.”

Whether Linux users deploy virtualization in a consolidated deployment, or in clustered applications or grids, Bozman says there’s a common thread running through the trends.

“It’s like back to the future. What we’re doing is reinventing the economics of computing, but we still want the same results that we had before” in the mainframe and large-system days -- “lots of reliability and lots of availability and utilization. But we’re doing it today at lower price points than we did in the early ‘90s.”


Server virtualization goes mainstream

Forrester Research survey shows 51% of North American companies polled are using or piloting the technology

By Denise Dubie

The number of IT shops putting server virtualization technology to use in production and pilots surpassed 50% in 2006, according to a survey by Forrester Research.

The research firm surveyed about 1,770 enterprise and smaller companies and found that use of server virtualization grew from 29% in 2005 to 40% in 2006, and the number of firms piloting remained flat at 11%.

Forrester also found that “interest and awareness” in the technology increased in 2006. The survey shows the number of those aware but uninterested dropped from 23% to 17% in the same time period, and firms unaware of server virtualization decreased from 19% to 8%. About 92% of those responding said they were at least aware of the technology.

Adoption and awareness could have increased due to more commercial and open source technologies being available -- and bigger marketing efforts on the part of vendors, Forrester says.

“Server virtualization moved from a niche Unix technology to mainstream use in x86 servers in many firms in 2005,” reads the report. “As a result server virtualization market leader VMware has to compete with Microsoft’s Virtual Server product and the Xen open source virtualization technology.”

Despite more vendor choices, VMware still came out on top as the virtualization vendor of choice among North American survey respondents.

More than half (53%) of those polled named VMware as the single vendor they would consider most for virtualization on Intel-based servers. Some 11% wrote in HP as their top vendor, and 9% chose Microsoft. IBM and Dell were also write-in choices for 9% and 8% of respondents, respectively.

Forrester also discovered that technology use isn’t as varied among different-sized companies as it had been in the past. The three categories of enterprise companies -- Global 2000 (more than 20,000 employees), very large (5,000 to 19,999 employees) and large enterprise (1,000 to 4,999 employees) -- have similar adoption and awareness profiles, with 43% of both Global 2000 and very large enterprises, and 37% of large enterprises, already using server virtualization technology.

Small-to-midsize companies are lagging a bit behind the bigger companies. Twenty percent of those with 500 to 999 employees and 13% of those with 100 to 499 employees have the technology in use. Another 3% and 4%, respectively, are piloting server virtualization products.

“SMBs trail by half in deployment, even though larger SMBs usually have enough x86 servers to merit server virtualization,” the report states.


New data centers mean new job opportunities

By Bob Brown

BOSTON—The shift to more centralized data centers that use virtualization, service-oriented architectures and other new technologies should get companies thinking about new job definitions for their IT staff as well.

That was the message sent by Johna Till Johnson, president of Nemertes Research and a speaker at Network World’s IT Roadmap: Boston event.

One reason better organization is needed is that overhauling data centers is difficult. Nemertes found in a survey of 82 executives from 65 organizations that less than half considered their data center strategies highly successful, as they struggle with issues such as soaring power requirements, server and storage growth, bigger cooling needs, availability and plain old floor space.

What’s more, the pressure is on for companies to rework their data centers (“Most data centers are outdated,” Johnson said). Nemertes found that about a third of those surveyed said their data centers went up in the 1980s and another third were built in the 1990s. More than half the respondents said their companies have consolidated data centers in the past 12 months, and more than half said they have consolidation plans for the next 12 months (about half also have data center construction on tap over the next 18 months).

Close to half of those surveyed said their organizations are managing data centers via a director of operations. Johnson said this is good in that operations leaders tend to be good at keeping things up and running with low overhead, but it’s bad in that they don’t tend to have a long-term view.

What’s needed is a dedicated data center architect who oversees facilities, computing, networking, management and security. This person should be looking out 10 to 15 years and be thinking about things such as how SOA will affect power requirements. “This is one thing few organizations have, but it’s starting to emerge,” Johnson said.

A storage SWAT team should also be put in place to oversee all aspects of storage.



As she noted, storage requirements aren’t going to get any smaller, with financial and other organizations socking away e-mails and other data in case they get called on to present the information in court.

Another emerging role is that of a compute manager in charge of getting CPU cycles to users and overseeing business continuance, Johnson said.

Organizations are also hiring service delivery managers to ensure that services enabled by virtualized data centers make their way to users. These managers would define and implement service-level agreements (SLAs) and track service delivery metrics.

“Most people don’t even have internal SLAs,” Johnson said. “But most are on the road to developing them.”

Establishing and sticking to good service levels is being driven in part by IT organizations through a sense of self-preservation -- they don’t want their jobs outsourced, Johnson said.

Data center management teams also need to foster tighter links with non-IT business groups, including facilities, legal, human resources and compliance, Johnson said.

Companies that successfully change their data center management strategy should see a big payoff, Johnson said. Early indications are that technologies such as virtualization can save millions of dollars on hardware costs and labor, and put companies in a position for greater application flexibility.


Section 2: Management and security

The 8 key challenges of virtualizing your data center

By Jennifer Mears

The benefit of virtualizing x86 servers is clear: break the link between software and hardware and create the foundation for a more dynamic, flexible and efficient data center. With the market for virtualization software expected to grow to more than $1 billion in 2007, companies are more than kicking the tires on the technology. But the road to a virtual data center isn’t without its twists and turns. The move to a virtual environment must be done carefully and with an understanding of how the new infrastructure will change IT planning and management. What follows is a list of eight virtualization “gotchas” — hurdles that users may face as they deploy virtual environments — that we’ve compiled through discussions with IT professionals, analysts and vendors.

1. Forgoing the physical:

The idea of moving to a virtual environment is to run more virtual workloads on fewer physical systems, but that doesn’t mean hardware moves down on the list of priorities. If organizations don’t carefully consider what physical resources are necessary to support virtual workloads and monitor the hardware resources accordingly, they may find themselves in trouble. “With virtualization, it’s really a matter of putting the right physical systems behind it,” says David Payne, CTO at Xcedex, a virtualization consulting firm based in Minneapolis. “Some people think they can buy a cheap system from Dell or HP, throw in the hardware, then put virtualization on top of it and have their virtual environment. But many times that’s done based on commodity price, rather than really considering what the virtual workloads are going to be. The companies we’ve worked with that have been most successful have paid a lot of attention to the planning portion and they end up with a really good result, getting high utilization on these systems and a really good consolidation ratio.”
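Payne's advice boils down to sizing the host against the actual virtual workloads before buying, rather than on sticker price. A simple sanity check of that kind might look like the following; all figures are hypothetical placeholders:

# Compare aggregate VM demand against host capacity, with headroom.
host = {"cpu_ghz": 8 * 2.6, "memory_gb": 64}  # 8 cores at 2.6GHz
vms = [
    {"name": "web",  "cpu_ghz": 2.0, "memory_gb": 4},
    {"name": "db",   "cpu_ghz": 5.0, "memory_gb": 16},
    {"name": "mail", "cpu_ghz": 1.5, "memory_gb": 8},
]
HEADROOM = 0.25  # keep 25% spare for spikes and failover

for resource in ("cpu_ghz", "memory_gb"):
    demand = sum(vm[resource] for vm in vms)
    budget = host[resource] * (1 - HEADROOM)
    status = "ok" if demand <= budget else "OVERCOMMITTED"
    print(f"{resource}: demand {demand:.1f} vs budget {budget:.1f} -> {status}")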

2. Sub-par application performance:

While virtualization is becoming increasingly widespread, many applications aren’t yet tuned for virtual environments. For example, Daniel Burtenshaw, senior systems engineer at University Health Care in Salt Lake City, deployed VMware’s ESX Server in 2006 with mostly good results. “Our biggest issues have been with some of our application vendors not being willing to support their applications on virtual servers, as well as limitations with the version of ESX that we are using,” he says. The healthcare organization has a large Citrix environment, but when it moved some of its Citrix servers into the VMware environment, it found that performance didn’t keep up, Burtenshaw says. “Basically, we get a very limited number of users per server, so if we virtualize, a bunch of virtual servers on a host is equivalent to just having one physical host,” he says, adding that his firm is upgrading to VMware’s Virtual Infrastructure 3. “From what we have read — but we have not tested it yet — Virtual Infrastructure 3 is supposed to be optimized better for hosting Citrix, so we should be able to get a more normal user load on the virtual servers.”

3. Sneaky security:

Once you deploy a virtual environment, you’re removing the link between hardware and software, which can create confusion when it comes to securing your infrastructure. “The decoupling risks blinding security pros to what is going on behind their network security appliances,” says Allwyn Sequeira, senior vice president of product operations at patching specialist Blue Lane Technologies. “The server environment gets more fluid, more complex and the security pros ultimately lose the stability that hardware offered. Any type of vulnerability scan could be rendered obsolete in minutes.” Dennis Moreau, CTO at security and compliance firm Configuresoft, agrees. Virtualization streamlines provisioning and processes such as patching, but it also adds complications that IT professionals may not be thinking about. “We always had to patch the operating system and the application, and you still have to do that when you virtualize, but now, all of a sudden, you also have to patch the [virtual machine manager] layer where vulnerabilities can exist,” he says. “So the work of maintaining a secure environment and of documenting that for compliance purposes, just by the fact of introducing a virtualization technology layer, gets more complex.”

4. Left in lock-in:

The virtualization market is evolving quickly and even VMware is pushing for a standard way to create and manage virtual machines. But standards and interoperability will come slowly. Companies that aren’t careful may find themselves locked in to a certain vendor’s approach, making it difficult and expensive to move among other approaches as technologies mature.

The 8 key challenges of virtualizing your data center The benefit of virtualizing x86 servers is clear: break the link between software and hardware and create the foundation for a more dynamic, flexible and efficient data center. With the market for virtualization software expected to grow to more than $1 billion in 2007, compa-nies are more than kicking the tires on the technology. But the road to a virtual data center isn’t without its twists and turns. The move to a virtual environment must be done carefully and with an under-standing of how the new infrastructure will change IT planning and management. What follows is a list of eight virtualization “gotchas” — hurdles that users may face as they deploy virtual environments — that we’ve compiled through discussions with IT professionals, analysts and vendors.

Management and security

By Jennifer Mears


“Try to pick products that can be considered somewhat standard and open to the virtualization market, like products where you can import [virtual machines] from other products,” says Ulrich Seif, CTO at National Semiconductor in Santa Clara, Calif. “Too many things can happen in this space in the next couple of years, so don’t corner yourself if you can help it.”

5. VM sprawl:

Originally, virtualization was a big hit simply for consolidating physical servers — and thus reducing power demands and heat output. But because of the ease with which virtual machines can be deployed, organizations may find that while they have reduced the number of physical devices, the number of virtual systems to be managed has exploded. “One of the biggest gotchas out there is [virtual machine] sprawl,” says John Humphreys, a program director at IDC. “We see this again and again: customers that before virtualization had 500 servers each with one image on them, for example, after virtualization all of a sudden have 700 images they’re trying to manage.” The best way to avoid that kind of sprawl is to plan virtual machine life cycles, recovering virtual instances that are no longer being used, he says.
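One way to practice the lifecycle discipline Humphreys recommends is to tag every virtual machine with an owner and an expiry date, then periodically flag lapsed instances for reclamation. A minimal sketch with a hypothetical inventory:

from datetime import date

inventory = [
    {"vm": "build-test-01", "owner": "qa",    "expires": date(2007, 1, 15)},
    {"vm": "crm-prod",      "owner": "ops",   "expires": date(2008, 6, 30)},
    {"vm": "demo-sandbox",  "owner": "sales", "expires": date(2006, 11, 1)},
]

today = date(2007, 3, 1)
for vm in inventory:
    if vm["expires"] < today:
        # In practice: notify the owner, then archive or delete the image.
        print(f"reclaim {vm['vm']} (owner {vm['owner']}, expired {vm['expires']})")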

6. Licensing costs:

Just as companies may be haggling with independent software vendors that set license fees based on CPU usage over pricing on multicore servers, they also may find surprises when it comes to licensing in virtual environments. “Software licenses may be a barrier,” says John Enck, a research vice president at Gartner. “You may want to run an application in a large, virtualized server, but the license may be written to apply to the physical processor cores in the machine. So if, for example, you move such an application from a two-way server to a four-way virtualized server, your software license costs may increase — even though the software is only using two processors in the virtual environment.”
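Enck's scenario is easy to work through with round numbers. Assuming a hypothetical per-core license fee, moving the same two-processor application to a bigger host doubles the bill:

# License priced per physical core, as in Enck's example. The fee is a
# hypothetical placeholder; the VM itself only ever uses two processors.
PRICE_PER_CORE = 10_000
vm_processors_used = 2

for host_cores in (2, 4):  # two-way server, then four-way server
    cost = PRICE_PER_CORE * host_cores
    print(f"{host_cores}-way host: ${cost:,} (VM still uses {vm_processors_used})")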

7. Stuck on storage:

Because many of the candidates for virtualization were on distributed x86 systems, it’s easy to forget how the more centralized architecture of virtual resources can impact things. Storage, for example, should get a close look because in many cases virtual resources will all access a shared storage-area network (SAN). “Some companies may buy a certain type of storage array and they may not consider the workload that the VMware environment is going to put on it and it ends up being that that array just can’t handle it: too much throughput, too much I/O,” Xcedex’s Payne says. “If that array goes down and has an issue on the SAN every single virtual machine is going to be negatively affected, meaning they’re probably going to crash, they’re probably going to get corrupted and it’s going to be a really bad experience.”

National Semiconductor’s Seif says storage concerns should be a priority when planning a virtual environment. SAN storage “is essential to reap the benefits of [business continuity/disaster recovery], allowing shifting workloads for optimizing uptime/performance and better scaling of guests to hosts,” he says. “The amount of storage — shifting from operating system, software and data on local server hard drives to SAN capacity — can add up very fast, 40GB per host for us, and without a solid tiered storage strategy, it can eat up very expensive SAN storage very quickly.”
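Seif's 40GB-per-host figure adds up quickly at scale. For a hypothetical 200-guest environment:

# Rough SAN sizing from Seif's figure; the guest count is hypothetical.
GB_PER_GUEST = 40
guests = 200
print(f"{guests * GB_PER_GUEST:,} GB of SAN capacity")  # 8,000 GB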

8. Virtual roadblocks:

With AMD and Intel servers running side-by-side in many data centers, some companies may think mobile virtual machines can be moved across any x86 hardware, but that’s not the case. “The question people are struggling with is, ‘As I move these [virtual machines] around, do I have to have similar hardware?’” says IDC’s Humphreys. Today, VMware virtual machines can’t move between Intel- and AMD-based systems, says Raghu Raghuram, vice president of product and solutions marketing at VMware. “Our VMotion technology allows you to migrate a running application from one physical box to another, but the processors in those boxes have to be the same: so you can move from AMD to AMD or from Xeon to Xeon,” he says. “It’s because of the difference in processor architectures and the behavior of certain instructions. It’s a problem that will get solved over the longer term.”
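A pre-flight check in the spirit of Raghuram's caveat might look like the following sketch. This is a hypothetical helper, not VMware's actual VMotion interface:

# Live migration requires like-for-like processors: AMD to AMD, Xeon to Xeon.
def can_live_migrate(source_cpu, target_cpu):
    return source_cpu == target_cpu

hosts = {"esx01": "AMD Opteron", "esx02": "Intel Xeon", "esx03": "AMD Opteron"}

print(can_live_migrate(hosts["esx01"], hosts["esx03"]))  # True
print(can_live_migrate(hosts["esx01"], hosts["esx02"]))  # False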

Thinking ahead

In a January report titled “Virtualization considerations: Forewarned is forearmed,” Saugatuck Technology analysts lay out issues companies should think about when they’re virtualizing servers:

• Will the physical site have adequate and appropriate electrical power?
• Will the physical site have adequate and appropriately concentrated cooling capacity?
• Will the physical site have appropriate security facilities?
• Will the physical site have adequate utility backup?
• Will the consolidated/virtualized platform provide the availability needed for the workloads it will run?
• Will the consolidated/virtualized platform require new support tools and/or staff skills?


Q&A: Virtualization invites management nightmare, says Yankee Group analyst

George Hamilton lays out the inevitable management challenges that come with virtual environments

By Denise Dubie

IT managers looking to unleash virtualization technology in their production networks should anticipate a major overhaul to their management strategies as well. That’s because as virtualization adds flexibility and mobility to server resources, it also increases the complexity of the environment in which the technology lives. George Hamilton, director of Yankee Group’s enabling technologies enterprise group, talked with Denise Dubie, Network World senior editor, about what network managers need to do to get ahead of the management challenges that virtualization could introduce to their networks. What follows is an excerpt from their conversation.

Virtualization technology is expected to start moving onto production networks. How should network managers prepare?

We’ve dealt with physical-server sprawl and we know what that is. Now we have virtual-server sprawl. You can very quickly deploy virtual machines into an environment, and it’s very easy to get virtual-machine sprawl. You can end up with capacity issues and resource allocation issues, so a lot of the initial challenges are around how to optimize the physical infrastructure for all the virtual machines.

What will help prevent this virtual sprawl from affecting network performance?

It’s getting visibility into the behavior of the virtual machines that are running in production. If you think of VMware’s VMotion technology, which allows you to move live servers around in real time, there are still some manual processes for being able to identify virtual machines that may be performing badly and trying to correlate that with how the physical servers are performing. IT managers need to orchestrate the alerts and then be able to move VMs to the right place at the right time to optimize performance and capacity.

Why won’t the tools or processes network managers are using today stand up in this virtual environment?

It would just be impossible based on the number of alerts -- which could be hundreds to thousands of alerts on performance issues. To be able to manually reallocate resources, that is just not feasible and it goes against the whole value of virtualization. You would end up having to provision a bunch of overhead so that you could move the VMs around successfully without interrupting anything. And that defeats the whole purpose of trying to optimize the infrastructure.

Will virtualization change the way vendors have to manage infrastructure?

Absolutely. A lot of monitoring today is still infrastructure-focused. It focuses on static thresholds that are set on the physical devices. And there are still a lot of manual processes that take place to understand and correlate the performance of virtual machines with the performance of the physical machines.

Is automation the best option to handle virtualization in today’s data centers?

Data-center administrators are still very nervous about turning the keys over to an automation engine. But if they can get a baseline of behavior, and get the same alerts over and over again asking, “Are you sure you want to do this?” they can eventually click the little box that says “don’t ask me anymore” and trust the product to do it.
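The trust-building loop Hamilton describes can be sketched as a simple policy: keep prompting the operator until the same remediation has been approved often enough, then automate it. A hypothetical illustration:

APPROVALS_BEFORE_AUTO = 3
approved_count = {}

def handle_alert(alert):
    n = approved_count.get(alert, 0)
    if n >= APPROVALS_BEFORE_AUTO:
        return "auto-remediate"       # operator has opted out of prompts
    approved_count[alert] = n + 1     # assume the operator approves
    return "ask operator"

for _ in range(5):
    print(handle_alert("cpu-hot: move VM off host-7"))
# Prints "ask operator" three times, then "auto-remediate".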

What impact, if any, does such server virtualization technology have on the network?

That’s the thing. If you get into a full production environment, and you have VMs that are being reallocated and reprovisioned on the fly continuously, there will be an impact to the network. The switch architecture is going to have to have some visibility into the application traffic. Moving servers around within the data center will increase variability on the network itself.

How will vendors have to innovate to tackle virtualization?

What’s really next and the emerging growth area in management is around service orchestration. A user makes a request, application components get bound together based on that request, and infrastructure needs to be associated to fulfill that request. That is where a lot of research and development is being done, especially by IBM, HP, CA and BMC. They are looking at how to dynamically allocate resources per policy or when behavior is a certain way. A lot of the innovation is going to be in that level of orchestration.



Security and virtualization

By Andreas M. Antonopoulos

I sometimes find myself talking about a topic and getting blank stares. Then a year or two later, everyone is suddenly talking about it. One such topic is security virtualization. Until now, those two words were seldom seen together. You would have to live in a cave to have not heard about server virtualization, and storage virtualization also is discussed widely in storage circles. Network virtualization applies to virtual LANs (VLAN) and MPLS, so lots of people discuss that. But security was never brought into the virtualization discussion.

This is a shame, because security has a lot to gain from virtualization — and virtualization has a lot to lose if it has no security controls. So what do I mean by security virtualization? At the most basic level, it is security that has the physical layer abstracted. One easy example is the ability to take a single physical firewall and partition it into multiple virtual firewalls to serve different administrative domains or applications.

But the real challenge, and the reason security and virtualization are discussed a lot today, is that server virtualization is moving beyond the development environment and into production. In a production setting, many of the ideas that seemed great in development are running into objections by the security team and auditors. “So, you took the three-tier architecture with firewalls and collapsed it into a single server pool? How are you controlling between the virtual machines?” And thus, the on-demand, virtual-moving dream of dynamic servers smacks hard into the static, inflexible reality of security-by-physical architecture.

Which leads to the conundrum: Is security going to thwart your business agility and new computing paradigms? Or are you going to find a new, more dynamic way of doing security? Security virtualization is therefore more about making security infrastructure (hardware, software or both) flexible enough to co-exist with and contribute to a virtualized data center environment. In a virtualized environment, some of the old concepts have to go: IP addresses do not identify servers, because servers can be redeployed on the fly to a different subnet. So your “IP A.A.A.A can send packets to IP B.B.B.B” access control design is no longer relevant or helpful. What was at IP A.A.A.A has moved to a different subnet/data center/continent.
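The shift Antonopoulos describes is from rules keyed to IP addresses to rules keyed to workload identity. A minimal sketch, with a hypothetical policy table rather than any product's API:

# Identity-based rule: it names workloads, not addresses.
policy = {("web-tier", "db-tier"): "allow:3306"}

# Where each workload happens to live today; this can change at any time.
vm_location = {"web-tier": "10.1.4.20", "db-tier": "10.9.7.5"}

def allowed(src_vm, dst_vm):
    # Look up identities, not IPs, so the rule survives re-addressing.
    return (src_vm, dst_vm) in policy

vm_location["db-tier"] = "172.16.0.9"  # VM redeployed to another subnet
print(allowed("web-tier", "db-tier"))  # still True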

Dynamically allocated virtual servers need dynamically allocated virtual security. Maybe it is software in the virtual machine, in the hypervisor, as a virtual switch I/O path plug-in, or some combination of software and hardware. But it cannot be a ring of physical appliances surrounding the pool of servers and trying to make sense of three dozen VLAN segments. For virtualization companies, 2007 is going to be the year of security, either because they create an entirely new security market and paradigm, or they get stigmatized by a massive security problem. Or maybe I will get two more years of blank stares.


Virtualization security risks being overlooked, Gartner warns

Gartner raises warning on virtualization and security

By Ellen Messmer

Companies in a rush to deploy virtualization technologies for server consolidation efforts could wind up overlooking many security issues and exposing themselves to risks, warns research firm Gartner.

“Virtualization, as with any emerging technology, will be the target of new security threats,” said Neil MacDonald, a vice president at Gartner, in a published statement.

Virtualization software offers the ability to run multiple operating systems, or multiple sessions of a single operating system, on a single physical machine, whether server or desktop. But virtualization software, such as hypervisors, presents a layer that will be attacked, and security strategies need to be put in place in advance, Gartner warns.

“Many organizations mistakenly assume that their approach for securing virtual machines will be the same as securing any OS and thus plan to apply their existing

configuration guidelines, standards and tools,” MacDonald said. While this is a start, a closer look at securing virtual machines is required, especially since needed tools may be “imma-ture or non-existent,” according to Gartner.

Among the specific points about virtualization and security which Gartner will address at the conference are:

• Loss of separation of duties for administrative tasks.

• Patching and signature updates and protection from tampering.



• Limited visibility into the host OS and virtual network to find vulnerabilities and correct configuration.

• Restricted views into “inter-VM traffic” for inspection by intrusion prevention systems.

• Mobile VMs and security policy.

• Immature and incomplete security and management tools.

Gartner speculates that the “rush to adopt virtualization for server consolidation efforts” will result in many security issues being overlooked. That, in combination with the lack of available security tools for virtualization, will mean “as a result, through 2009, 60% of production [virtual machines] will be less secure than their physical counterparts.”

Virtualization reality check

By Robert Mullins

Mike Williams considered his virtualization project a success after consolidating 17 U.S. datacenters into three. But then the traffic jams started.

As CIO of the U.S. Defense Contract Management Agency (DCMA), which monitors work on military contracts, Williams had a problem on his hands. Consolidating all those datacenters without reconfiguring the WAN was like consolidating 17 cities into three without widening the freeways.

“It is important to make sure to optimize the WAN. We actually didn’t,” Williams says. “All of a sudden the speed of light is not so fast anymore.”

Williams’ story is one of many cautionary tales surrounding early virtualization efforts. Although virtualization promises cost-saving optimization of datacenter resources, the path to that payoff is littered with hazards.

Not just network configuration, but software licensing, security, and systems management are all potential pitfalls, say industry experts and enterprises that have gone virtual. And people issues can be more troublesome than technical ones if the corporate culture resists virtualization.

When the city of Charlotte, N.C., began virtualization, some departments hesitated, says Philip Borneman, assistant director of information technology. “You’ll always have some early adopters and others who want to wait and see.”

Elsewhere, some IT fiefdoms simply won’t share. “I have heard of companies that have gotten a lot of pushback from departments who don’t want to give up their own hardware or applications,” says Charles King, principal analyst with Pund-IT, a technology analysis firm.

The organization chart can complicate other projects, adds Nick van der Zweep, vice president for virtualization at Hewlett-Packard. One unidentified insurance company, notes van der Zweep, maintained separate IT resources for group insurance, individual insurance, financial investments, and other departments. “When you decide to bring them together, you get turf wars,” van der Zweep says. “You’ve got to convince a lot of people.”

Virtualization also upends the software model. Typically, software is licensed to run on just one server, but having to license it to each of 50 virtual servers limits potential cost savings, van der Zweep explains.

HP faced that problem when deploying BEA WebLogic software on 400 virtual servers. HP created a shared application server utility, a cluster of five server nodes. HP paid for five licenses even though each cluster feeds up to 60 virtual machines.

Some but not all software companies are revising their software licenses for virtualization, while others withhold support if their software is run in a virtual environment.

Virtualization presents security issues, too, says Michelle Bailey, an IDC research director.

If security software runs on a physical server but one of the virtual servers is moved to another physical server without it, “that could be a problem,” says Bailey. “The security policy has to live somewhere else, such as on the network layer.”

Using the right management tool is critical to making virtualization work, she says, and maintaining security is just one of its functions.

Companies assigning virtual workloads to physical servers need to make sure they are properly configured, have up-to-date patches and don’t still contain rogue software that could cause problems, says Erik Josowitz, vice president of product strategy at Surgient, a provider of virtualization for software development and testing.

The rush to virtualization poses a danger that some companies will practice the equivalent of “finding a server by the side of the road and plugging it in,” says Josowitz. “There will be some breach [in 2007] that will be a lesson to all,” he predicts.



Virtualization: Xen and the art of hypervisor maintenance

By Jennifer Mears

The virtualization market is expected to heat up in 2007 as Microsoft and the open source Xen project challenge VMware, which has been the only game in town when it comes to virtualizing x86 servers.

“Both have been talking up their plans and efforts for months, as well as their proposed superiority over competing solutions — namely VMware. In 2007, customers will finally be able to tell for themselves,” says Charles King, analyst at Pund-IT Research.

Don’t expect Microsoft or Xen to leapfrog VMware right away. After all, VMware has been selling its products since 2001. In addition, it has evolved beyond a simple way to partition a server into multiple operating-system instances into a broad systems infrastructure tool that lets customers pool virtual resources and allocate them as business needs demand.

Customers can expect virtualization vendors to start heading in that direction as the focus moves beyond the hypervisor into the management realm. VMware, Microsoft, XenSource and Virtual Iron offer low-level virtualization capabilities for free. But IDC expects North American software revenue in the virtual machine market to continue to grow from $324 million in 2005 to more than $1 billion by 2010, as companies spend money on virtualization-management tools.

For example, Art Beane, IT enterprise architect at Aegis Mortgage in Houston, is moving VMware out of test and development and into production environments. He plans to use VMware to support business continuity and disaster recovery. “For 2007, we’re going to be putting most of our effort into moving production [servers] to virtual [environments], and that will more than likely drive us to enhancing the management environment,” he says.

As such companies as Aegis consider virtualizing their production servers, they will have a growing number of options from vendors that include Microsoft, XenSource, Virtual Iron, SWsoft and Parallels. But those vendors still are chasing the incumbent, VMware. Ed Baldwin, senior network engineer at a major oil and gas company, puts it this way: “We have looked at what Microsoft has to offer and feel that it does not meet high-availability production needs at this time. As for XenSource, we have found better performance and support from the VMware products at this time and see no reason to switch.”

While the challengers to VMware have some catching up to do, they are making strides. In Xen’s case, Novell ships its SUSE Linux Enterprise Server with Xen technology. Red Hat included Xen in Red Hat Enterprise Linux 5. Egenera chose Xen as the basis for its newest virtualization-management tools, and Sun says it will embed the Xen hypervisor in Solaris. XenSource, the commercial front for Xen, partnered with Microsoft in 2006 to make Linux run better in Windows Virtual Server environments. Finally, Virtual Iron uses Xen technology as the basis for its virtualization products.

“Xen won’t have the maturity of VMware in 2007, but it might be a cheaper alternative, if that’s a major consideration,” says Gordon Haff, an analyst at Illuminata.

For its part, Microsoft has been slow in making its virtualization promises reality. The company offers a free version of Virtual Server but doesn’t plan to ship its hypervisor, code-named Viridian, until after it ships Longhorn. That means Windows customers won’t see Viridian until year-end or early 2008.



Section 3: Money matters

Virtualization ROI hard to quantify

Study reveals majority of IT shops aren’t sure if virtualization is a complete success, face management issues

By Denise Dubie

Server virtualization is becoming more popular in enterprise data centers, but a survey says that doesn’t necessarily mean the technology has proven itself successful when deployed.

CA released the findings of a study showing that more than 40% of 800 IT organizations polled worldwide were uncertain if their use of server virtualization technology was successful.

“In fact, 44% of the organizations that have deployed server virtualization are unable to say whether or not the deployment has been successful -- pointing to problems with measurement negatively influencing server virtualization satisfaction levels [for example] measuring server and network infrastructure performance,” reads the report, which The Strategic Counsel conducted on behalf of CA.

The research firm estimates that 39% of organizations with more than 500 employees have adopted server virtualization technology, and it expects that figure to grow by 20% over the next 18 months. Such uptake in the adoption of virtual servers will shine a light on some issues around managing the technology, the report says.

“The pattern of organizational server virtualization deployment has led to the creation of multiple, heterogeneous server virtualization environments within single organizations. That is the norm shown by the survey, not the exception,” the report reads. “With heterogeneity comes management issues and constraints.”

According to the survey, those organizations with heterogeneous server environments are already experiencing a few key management problems, including server sprawl, configuration workload changes, difficulties in reporting and staff skill-set limitations. Specifically, 39% of organizations that run multiple server virtualization technologies indicated they suffer from server sprawl. Thirty-two percent said they suffer significantly increased difficulty with reporting, visibility and metrics to get a consistent view of server performance. And one-quarter reported increased configuration requirements and workloads as an “important constraint or issues to deal with.” Lastly, 24% said they need to maintain multiple skill sets among staffers.

“The future of server virtualization will be just as much about virtualization itself as it is about managing multiple, heterogeneous server virtualization environments,” the report concludes.


New ways to save on virtualization

By Robert Mullins and China Martens

In 2006, many enterprise IT groups saw the potential in virtualization, rushed to consolidate servers and subsequently propelled VMware software to a market-leading spot. As 2007 began, VMware’s prices were under attack, just as more CIOs look to virtualization to control server and storage sprawl and tame data center power costs.

For starters, VMware rivals XenSource and Virtual Iron Software have launched open source alternatives, undercutting VMware on price. Also, new processors, such as IBM’s Power 5 and upcoming Power 6 chips, and new operating systems have virtualization capabilities built in, negating the need for some additional software, says Clay Ryder, president of The Sageza Group. Sun’s Solaris 10 operating system includes a virtualization feature it calls Containers. Microsoft was expected to release a beta version of Windows Virtual Server in the first quarter of 2007 and ship it in 2008.

That’s good news: As virtualization features become part of the hardware or the operating system, software providers will have to offer useful extra features, Ryder says, such as automated new software testing or security patch management.

Virtual Iron introduced Version 3.1 of its virtualization platform in December 2006 for a license price of $499 per socket -- compared to $2,875 per socket for a comparable VMware license.



XenSource also introduced new virtualization products at sub-$1,000 prices.

Both XenSource and Virtual Iron build their proprietary products on top of the open source Xen hypervisor. (A hypervisor lets a computer run multiple operating systems at once.) VMware’s products are not based on open source.

Meanwhile, middleware vendor BEA Systems is making its own cost-cutting move. In the first half of 2007, BEA planned to release WebLogic Server Virtual Edition (WLS-VE), a version of its Java application server that includes Liquid VM -- a BEA-specific Java Virtual Machine that lets Java applications run directly on a hypervisor without requiring an operating system. This will let users substantially reduce the computing power required, lowering the hardware costs per application, says Guy Churchward, vice president and general manager of the Java Runtime Products Group at BEA Systems. Initially, Liquid VM will work only with VMware’s ESX Server hypervisor.

Analysts advise CIOs planning for virtualization in 2007 to think strategically, not just tactically.

“To really make virtualization work you want to do it so end users access capability, not just specific machines,” Ryder says.

“That’s only going to be possible if you take a strategic approach,” he says, noting that virtualization needs to be applied to storage, networking and the introduction of new software, not just servers.

Mike Williams, CIO for the Defense Contract Management Agency (DCMA), learned a lesson about thinking strategically when he did a virtualization project in 2006. (The DCMA, a federal Department of Defense agency, places contract managers inside companies fulfilling defense contracts for weapons systems, jets, military equipment and parts.) Williams deployed VMware, reducing the agency’s number of servers to 160 from 560 and the number of data centers to three from 17. But that move taxed the WAN when all the network traffic converged on the three data centers. His advice: Optimize the WAN first.

Still, Williams likes the results. Before virtualization, DCMA replaced about one-third of its 560 servers annually at a cost of about $2 million, Williams says; virtualization cut that expense to $560,000.
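Those two figures track the consolidation ratio almost exactly, which is worth making explicit. A quick back-of-the-envelope sketch in Python (our own arithmetic on the numbers Williams cites, not DCMA’s accounting):

```python
# Sanity check on the DCMA figures above, assuming annual hardware
# refresh cost scales roughly with the number of servers in service.
servers_before, servers_after = 560, 160
annual_refresh_cost_before = 2_000_000   # ~1/3 of servers replaced yearly

cost_per_server = annual_refresh_cost_before / servers_before
projected_cost_after = cost_per_server * servers_after
print(f"${projected_cost_after:,.0f}")   # ~$571,000, close to the $560,000 reported
```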


Case studies

Application virtualization paying off for university

Microsoft software saving Northeastern University both time and bandwidth

By Paul Desmond

Not so long ago, it took Northeastern University some four to five weeks to roll out a new application to one of the more than 1,000 workstations in its various campus laboratories. Today, it takes almost no time – any user with proper authorization can merely request the application and almost immediately begin using it.

The difference is virtualization software that completely changes the software installation process, and indeed the very meaning of the term “installation.” Virtualization also brings savings of 50% to 60% in people, time and network bandwidth, according to Navid Atoofi, director of system production services at Northeastern, who presented a case study on his experiences with Microsoft SoftGrid Application Virtualization at the recent Network World IT Roadmap Conference & Expo in Boston, where the school is based. The implementation was so successful that the university is now poised to use SoftGrid for all application deployments to its 25,000-plus students, faculty and staff.

Defining the problem and solution

Before Northeastern implemented SoftGrid in September 2005, installing a new application on a lab machine meant going through a painstaking regression testing process, to ensure the new program would play nice with all the existing applications on the machine. That could take two to three weeks, “and we were always wrong, because there are hundreds and hundreds of applications,” Atoofi said in a follow-up interview.

It could take another couple of weeks to package the application and send it to the user, sometimes by creating a new desktop image for the user or using a software distribution program such as Microsoft Systems Management Server. “It wasn’t always as clean as it should be,” he says. If a user had an application installed locally that IT didn’t know about, the new application may override it, for example. “We were not able to deploy applications fast enough,” he says.

When Atoofi encountered SoftGrid (which Microsoft acquired along with Softricity in July 2006), he saw it as a potential solution. SoftGrid requires only a small footprint on the client machine, a “container” in which applications are cached after being streamed on demand from a central server. Sitting in its own virtual container, the application is never actually installed on the desktop in the traditional sense, meaning writing to registry files and the like. Because of that, it can’t interfere with other applications; each is in its own virtual container. That alone eliminates the two to three weeks of regression testing that Northeastern used to conduct.

Now, if a user is already authorized to use a particular application, he can download it at will. If he needs authorization, that requires only a simple update in Northeastern’s Active Directory infrastructure, which Atoofi says typically takes two or three days at most.

Implementing SoftGrid was fairly straightforward as well, he says, although it does take time at first. Each application must be “sequenced” to prepare it for streaming from the SoftGrid server. Using a sequencer that comes with the product, the process is fairly simple for applications built in a modular fashion, Atoofi says, but can take longer for larger, more monolithic applications.

Once the sequencing is done, Northeastern uses Active Directory and Group Policy to make applications available to various groups. Users can see what applications are available to them on their menus, just as with traditional desktops.

An array of benefits

When a user clicks on an application for the first time, it begins downloading to the SoftGrid cache on his desktop. “Because most applications are modular these days, when you want to run an application you don’t have to download the whole thing. You just take or cache the amount that you need,” Atoofi says. “During the time you are using it, the other part will get downloaded and cached.” So, users may see a small delay the first time they use an application, but none after.
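What Atoofi describes is on-demand streaming backed by a local cache: fetch only the blocks the user touches, and keep them for next time. The Python sketch below illustrates that general pattern only; the class and function names are invented for this example, not SoftGrid’s actual protocol.

```python
# Illustrative sketch of on-demand application streaming with a local
# cache -- the general pattern described above, not SoftGrid's protocol.
from typing import Callable, Dict

class StreamedApp:
    def __init__(self, name: str, fetch_block: Callable[[str, int], bytes]):
        self.name = name
        self.fetch_block = fetch_block      # pulls one block from the central server
        self.cache: Dict[int, bytes] = {}   # the per-desktop "container" cache

    def read(self, block_id: int) -> bytes:
        # First use of a block: fetch and cache it (the small initial delay).
        if block_id not in self.cache:
            self.cache[block_id] = self.fetch_block(self.name, block_id)
        # Every later use is served locally, with no further network traffic.
        return self.cache[block_id]

# Example with a stand-in server that returns dummy blocks.
app = StreamedApp("labapp", lambda name, block: b"<block %d>" % block)
app.read(0)   # first touch: streamed from the server and cached
app.read(0)   # second touch: served from the local cache
```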



That feature helps conserve bandwidth, Atoofi says, requiring 50% or less of the bandwidth needed to distribute an entire application at once. Also, when you stream an application, the load is distributed among different servers at different times, depending upon user requests. The traditional method of updating a user desktop requires creating a new software image and distributing it to all users at once, producing far more load on the server and the network.

SoftGrid also saves Northeastern the time it takes IT staff to test applications and prepare them for distribution. Additionally, the virtual desktop approach has reduced the load on the help desk, because users don’t change the applications; everyone is using the application exactly as it appears on the central SoftGrid server. “Not only does it reduce the help desk calls, we can help people more effectively because we see exactly what they see,” Atoofi says.

Software licensing provided more big savings, since SoftGrid makes it easy to see exactly who is using what applications – and for how long. In one case, Northeastern bought 1,000 licenses for an application and found it actually needed fewer than 10. IT can also dictate usage, say enabling a user to employ an application only during certain hours.

Determining an exact ROI is a bit tricky for an educational institution, Atoofi notes. “But I can easily say it has saved us over 50% in everything – resource allocation, software costs and deployment.”

Parting advice

Using a tool like SoftGrid doesn’t mean that you can ditch all of your other software distribution tools, Atoofi warns. “Not every application is sequenceable, so you have to have a different means to deploy some applications,” he says, noting Northeastern had five or six applications that it was not able to sequence – out of more than 120. “So we had to have a good infrastructure not only for deploying the other applications that were not sequenceable, but also to support SoftGrid.”

A good Active Directory infrastructure with well-defined Group Policy Objects is likewise a must, he says, since that dictates who gets access to what applications.

And the organization also has to buy in to the concept that users don’t “own” the software on their desktops anymore. “You have to know your organization to be able to do that,” he says.

Combo of single sign-on and virtualization pays big dividends at hospital

Southwest Washington Medical Center gets quick payback from Imprivata and SoftGrid implementations

By Paul Desmond

It’s a technology that many companies still find elusive, but single sign-on (SSO) is working as promised at Southwest Washington Medical Center (SWMC), while delivering a return on investment in just eight months. As a bonus, the SSO project also prompted the company to delve into virtualization technology, which is saving the firm some 20% on server resources along with heating, electricity and support costs.

The Vancouver, Wash.-based SWMC embarked on its SSO project to reduce the “hassle factor” for the 6,000 users who log on to an average of six to 12 applications per day, according to Christopher Paidhrin, CSO for the firm. During a session at the Network World IT Roadmap Conference & Expo in San Francisco, Paidhrin told attendees that SSO saves 15 to 30 seconds per logon, or roughly five minutes per day per employee – paying back the project’s $100,000 price tag in just eight months.
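The payback arithmetic is easy to reconstruct. A small Python sketch using the ranges quoted above (our own midpoint math, not Imprivata’s or SWMC’s model):

```python
# Rough reconstruction of the time-savings arithmetic behind the
# eight-month payback; inputs are the ranges quoted in the talk.
users = 6000
logons_per_day = (6 + 12) / 2      # apps each user logs on to daily
seconds_saved = (15 + 30) / 2      # saved per logon

minutes_per_user_day = logons_per_day * seconds_saved / 60
staff_hours_per_day = users * minutes_per_user_day / 60
print(f"{minutes_per_user_day:.1f} min/user/day, "
      f"{staff_hours_per_day:,.0f} staff-hours recovered daily")
# -> about 3.4 minutes per user (the quoted "roughly five" uses the high
#    end) and ~340 staff-hours a day; valuing even a fraction of that
#    time covers a $100,000 project well inside eight months.
```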

The SSO project, which involved implementation of the Imprivata OneSign appliance, was impressive enough on its own to earn SWMC a Network World All-Star Award. But during his IT Roadmap presentation, and in a follow-up interview, Paidhrin also expounded on the virtualization angle of the project. That involved implementation of the Softricity (now Microsoft) SoftGrid application virtualization platform, which reduced the number of Citrix servers required to provision applications for some 2,500 remote users while simplifying provisioning for internal users as well.

Driving the need

SWMC’s quest for SSO began in early 2005, driven by business and IT considerations. Reducing the hassle factor was important not only from a business productivity standpoint but also a competitive one, Paidhrin says. “Physicians work in a highly competitive environment and there’s competition right down the street,” he says, referring to the hospital eight miles away. Making their logon experience as seamless as possible can help encourage physicians to bring their patients to SWMC instead of another facility.

From an IT perspective, Paidhrin was looking to gain centralized control over all access management. And of course the medical center had to comply with regulations, including HIPAA and the Joint Commission on Accreditation of Healthcare Information Management requirements.

“There are 45 technical HIPAA elements, and single sign-on alone fully addresses eight and somewhat addresses 15 of them, at least as implemented in the Imprivata product. That gets us most of the way to our technical compliance,” Paidhrin says.

Selection and implementation

SWMC spent nine months researching SSO products before settling on Imprivata OneSign. The company looked at players both large and small, including Novell, IBM, CA and Sentillion. Many solutions were “very nice, but very expensive,” Paidhrin says.

Ultimately, Imprivata proved to be a good fit because it cost less than some competitors and could deal with multiple back-end sources of authentication information. That was important because SWMC, while on its way to migrating to Microsoft Active Directory as its sole source of authentication data, in the meantime had to deal with data stored in Novell NDS, a RADIUS server and a couple of proprietary healthcare-specific data stores.

At the time, Imprivata was still a relatively young company, however, so Paidhrin had one more requirement: that the product be easy to remove, just in case something went wrong. “We tested it. We turned the power off on both [Imprivata OneSign] devices and it had zero impact on the rest of the network,” he said. In that case, users would simply revert to their old logon routines.

The actual implementation took three months—not bad, considering SWMC at the time had more than 160 applications, a figure that is now closer to 200, and more than 6,600 employees or partners. The OneSign device requires no changes to any applications, only an agent to be installed on each client. OneSign builds XML-based profiles that describe the logon requirements for each application by “observing” typical application behavior. These profiles are stored on the OneSign appliance along with any company-defined policies.
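The article doesn’t reproduce one of those profiles, but the concept – an observed description of an application’s logon screen plus policy, consulted by the client agent at runtime – can be sketched abstractly. All field and application names below are hypothetical, and Imprivata’s real XML schema will differ:

```python
# Hypothetical illustration of an "observed" logon profile. Every name
# here is invented for this sketch; it is not Imprivata's actual schema.
profile = {
    "application": "ClinicalChart",                  # made-up app name
    "window_title_match": "ClinicalChart - Sign In",
    "fields": {"username": "edit_user", "password": "edit_pass"},
    "submit_control": "btn_ok",
}

def agent_should_fill(active_window_title: str) -> bool:
    # The agent watches window titles and offers to fill credentials
    # only when the pattern recorded in the profile matches.
    return profile["window_title_match"] in active_window_title

print(agent_should_fill("ClinicalChart - Sign In (PROD)"))  # True
print(agent_should_fill("Notepad"))                         # False
```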

SWMC started its implementation with 50 core applications, a process that was so successful that the company quickly expanded to the remaining applications.

Going virtual

Some 200 clinics, with about 2,500 physicians and medical staff, tie into SWMC remotely via SSL-based VPNs. To support those users, the company opted to create a Web portal through which remote users could access patient information. The back-end applications and SSL services run on Citrix servers, and SoftGrid enables the company to support more users with about 20% fewer servers. With SoftGrid, applications need to be loaded on only one server; the rest run virtual implementations that are served up as needed.

The company faced just a few hiccups in its Citrix/SoftGrid implementation, with only five to 10 applications presenting a serious challenge. “There are no applications that we have that cannot be Citrix-served,” Paidhrin says. That includes a large suite of new McKesson healthcare applications that the company is now installing.

Besides reducing server hardware requirements, SoftGrid saves SWMC an average of 20% on costs for HVAC, power, rack space and desktop support. It also makes upgrades, including patches, far easier to implement, because applications actually exist on only a single server. After testing the upgrade, “one individual can update an entire server farm in a matter of minutes,” he says.

Similarly, the virtual approach simplifies troubleshooting of user problems. “If it’s one user having a problem but you’ve got 300 other users using the same application on the same cluster, your triage cycle is greatly pruned,” Paidhrin says. Most problems are solved by simply closing and reopening the Citrix client or rebooting the machine.

Tallying the benefits

With its SSO and virtualization implementations up and running for about 12 months, SWMC is now enjoying the benefits—and ROI is certainly one. In addition to the eight-month ROI for OneSign, the SoftGrid ROI was immediate, based on the 20% savings in the number of servers required.

At the same time, security has improved for SWMC. In the past, password policies were difficult to enforce, even though policy stipulated password changes only every six months. For users who routinely used even six applications, that was too much. “We were lucky to get everyone to change once per year,” Paidhrin admits. Now, password changes are far simpler and the company can easily run an audit to determine when user passwords are out of compliance with company policy.

SWMC will also soon have all of its authentication data migrated to Active Directory. That single source for user data and policy information likewise improves security by eliminating duplicate accounts and making it simpler to enforce policies.

OneSign also supports a variety of authentication mechanisms, including biometrics, SecurID tokens and the traditional username/password. SWMC uses some of each for various applications. Its fingerprint readers work well most of the time, although Paidhrin says certain individuals have had problems. For example, one doctor swims every morning and comes in with hands swollen from chlorine. “In the afternoon [the reader] works flawlessly, but it doesn’t work in the morning.”

SWMC is now testing Indala Corp.’s passive stripe proximity cards, which are ID cards that can be read by a reader at distances of about 3 to 6 inches. The same card can be used to open doors to secure areas, log on to computers and even buy food in the cafeteria. In the trial, users swipe the card and enter a PIN, providing for two-factor authentication. Without the PIN, a lost or stolen card could be used by anyone to gain unfettered access until it was disabled in the system.
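That last point is the essence of two-factor authentication: possession of the card proves nothing without knowledge of the PIN. A toy Python sketch of the check (invented names; real systems store and verify PINs through the vendor’s own mechanisms):

```python
import hashlib, hmac

# Toy two-factor check: possession (card ID) plus knowledge (PIN).
# Names and the storage scheme are illustrative only.
enrolled = {"CARD-0042": hashlib.sha256(b"salt" + b"1234").hexdigest()}

def authenticate(card_id: str, pin: str) -> bool:
    stored = enrolled.get(card_id)
    if stored is None:                       # unknown or disabled card
        return False
    candidate = hashlib.sha256(b"salt" + pin.encode()).hexdigest()
    return hmac.compare_digest(stored, candidate)   # constant-time compare

print(authenticate("CARD-0042", "1234"))   # True: both factors presented
print(authenticate("CARD-0042", "9999"))   # False: the card alone is useless
```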

Perhaps the greatest benefit for SWMC from its SSO project is that the frustration level among users is down, Paidhrin says. “Once people get used to [Imprivata OneSign], they love it. Now they only have to remember one user account and one password.”


IT management done right

How Babson College syncs its network, systems and software groups

By Network World

With the emergence of such technologies as VoIP, server virtualization and wireless networking, the lines between network, systems and applications groups keep getting blurrier.

Network World Editor Bob Brown conducted separate interviews with three members of the 38-person IT team at Babson College, a business school for 3,300 students based in Wellesley, Mass., to gain insight into the perspectives of members from different departments and to learn how they are coordinating their efforts to manage a 9,000-port network (plus 300 wireless access points) across roughly 60 buildings.

The school is pushing hard on three big projects: Digital Babson, an initiative to get Babson content and thought leadership online; next-generation application infrastructure; and network/server architecture enhancements.

Kuljit Dharni, director of architecture and development

How do you coordinate interaction among your network, applications and systems teams?

On the formal end we have a full team meeting on a biweekly basis, and a full list of all current projects is published prior to this meeting. Additionally, I hold one-on-ones with my direct reports on a biweekly basis. On an informal basis, I talk with pretty much every team member daily -- this may be just a hello or a quick chat about a current project of theirs.

Where is the interaction most natural, and where do you need to work hardest to make it happen?

The informal talks are the most natural. I love to talk tech, and people are happy to talk about their current projects to demonstrate “cool” things. The formal group meetings are a 50-50 affair: Half the people like them, and half can’t stand them. Additionally, not everything that comes out of a casual meeting gets documented -- many times there is no need, but some projects need my input, and if I approach someone out of the formal project context, good discussions/solutions tend to get forgotten ‘til sometime in the future (i.e., at app review/testing time).

How do you see the roles of your network, applications and systems teams changing in the light of new technologies and business requirements?

The biggest issue we face is that our customers demand more-complex software on a short time horizon -- both in terms of the notice given to us as well as the time to complete the projects. Since the teams (server, app, network) manage all aspects of the life cycle (definition, creation, testing, implementation and support), we often feel rushed and overwhelmed. To manage this, the teams are approaching the projects in a more flexible manner -- i.e., through an iterative process vs. let’s define everything and build it in one go. We are also attempting to layer more formal processes for support and project requests. This is tough in a college environment, since people are not driven by a bottom line.

Industry pundits increasingly tend to dismiss core network technologies, such as switching and routing, as mere plumbing. But is designing a network really a no-brainer these days?

Yes, it’s plumbing, but you still need good plumbers. It is true that these services are now “dismissed” as dial-tone, but the lack of a well-designed infrastructure hurts. The approach we take to network design is to define the business problem to solve and then through an iterative process come up with potentials. These potentials are fleshed out, and I make the pick based on a few criteria -- namely, how well does it serve the initial purpose, how complex is it (i.e., how will we support it) and what will it cost (initial and ongoing). The other thing we do is to continually keep reviewing technologies to make sure we have the best solution for us -- sometimes this is based on feature and functions, sometimes purely on cost.

You’ve mentioned that you want to emphasize a certain QoS for users that goes beyond the sort of Layer 2 and 3 QoS that we sometimes hear network-product companies discuss. What do you mean?

The QoS that I find relevant is not at the bits and bytes level, it is all about the “perceived performance” from the users’ perspective. This is not to say we take all user comments of “the network/application is slow” at face value, but we do try to find patterns in these comments and investigate the causes that lead to the comments. If there is one standard measurement in computing, it is that happy users don’t complain. Therefore, we take all complaints seriously. Setting arbitrary performance levels at the network layer is not useful to me, the entire experience of the user has to be taken into account. This usually means we look at the application on down when researching performance/quality of service issues.

In short, the line between the network vs. servers vs. application is so fuzzy that you have to take a macro view toward QoS, no matter what your job role is. I expect everyone within my organization to look at the user experience first and then at their statistics to support or refute the claim.

You have a background in disaster recovery and continuity services with Comdisco. How do you exploit that experience?

I have a healthy paranoia of things going wrong. I try to bring my experience to bear in terms of balancing the need/cost of recovery solutions. I.e., anyone can throw money at a problem (especially recovery), but how do you do it cost effectively and how do you prioritize?

How do you cope with the budget constraints of running a college IT department?

Pretty much all of my comments mentioned the need to be cost effective. I continually try to challenge my team to present solutions that do more than one thing -- i.e., how do we maximize our investment? E.g., we purchased a product from Acronis for making DR images of critical servers (that was issue No. 1 -- identify critical servers, not all servers). Using it in the mode it was intended for is great (i.e., server recovery), but how about using it as an imaging tool to create server replicas for testing, or how about using it to do a P-to-V (physical-to-virtual) move? Another example is that we tested HBAs, accelerator cards and built-in NICs for use with iSCSI vs. just taking the HBA route (per everyone’s suggestion). The result: We use the built-in NIC; our testing (and now experience) shows no loss in performance (I/O or CPU) using the server NICs from Intel.

What one technology has you most excited these days?

In my position, I find it difficult to be excited by any one technology, but I will limit my answer to two:

* Virtualization, in conjunction with iSCSI. We adopted this dual architecture over the summer of ‘06 and have been thrilled with the performance, flexibility and relatively low cost. This environment has allowed us to scale our operations while cutting back on the amount of physical hardware. As of February, we’re running 29 virtual servers on four physical dual-processor machines. The conservative estimate is to scale to 80 virtual machines on the same hardware. With the redundant iSCSI array from EqualLogic, we are able to keep all hard disks for the virtuals on the SAN (including boot volumes). Therefore, we fully leverage our VMware ESX environment with automatic failover and load balancing.

* On the application side, I am excited about the ready availability of enterprise applications that are “user focused,” i.e., social-networking capabilities, wikis and blogs, etc. While these technologies are not exactly new, they are underutilized on the enterprise level, including at colleges and universities. The interest I have is not to replicate MySpace, but to truly create a useful and engaging experience for our students and faculty.

Name a technology you think is overhyped or underrated?

I would count SOA as an overhyped technology. Don’t get me wrong, I believe in the intent of SOA; it’s just that it’s the latest in a long list of terms for an obvious application architecture. Anyone trained in computing science in the last 20 years should have been thinking about and building applications that were loosely coupled and supported the business rather than IT. However, if it finally has people thinking about architecting scalable solutions it will have been worth the pain.

What one technology do you wish never existed or that you could have back?

The administrative side of me wishes P2P had never been invented. From a technology viewpoint I love it, so simple, so clever. From the administrative side, especially in a college setting, it has been frustrating to keep P2P under control, i.e., managing use of our bandwidth, complying with intellectual-property issues, etc. Basically, it makes us in IT into the bad guys in the eyes of the students, when all we’re trying to do is a) preserve a good computing experience for all of our clients and b) stay within the confines of the legal system as well as the rules of the college.

Steve Thurlow, manager of enterprise services

How do you interact with your peers in software services?

On all levels. For starters the network and systems groups are managed by me, so we have a great deal of interaction where otherwise we might not.

We also have integrated weekly operational and project meetings in which staff from applications/systems and networks work together in delivering our services. Typically, in IT this kind of interaction only happens on projects. The problem with that scenario for us is that we do both operations and project work. I have found that including everyone in the operational meetings as well as project meetings has produced a balance from all groups that truly understand we are not just project-focused and point fingers at each other when we have issues with project deadlines. The other aspect is that it becomes harder to separate the pieces when you have applications running on virtual servers connecting to an IP SAN with VLAN trunking on backup and production networks. All these technologies are merging to a point that it becomes difficult to function well if we are not working together.

What are the keys to successfully working with the other groups?

360-degree communication is critical. Since we are technical groups, we often listen to one scenario while thinking of something totally different. This means we walk away thinking we have consensus that we don’t actually have. This happens all the time, and the only way to effectively deal with it is clearly stating after the fact what we talked about. This will identify areas that need more discussion prior to getting knee-deep doing something that you find out later was not what everyone agreed to in their own minds.

Give me a thumbnail sketch of a project where your groups worked with software services.

The deployment of virtual servers has pulled everyone together in a way that required the trusting of each other’s disciplines in the delivery of services to our customers. Combining applications on underutilized servers to a common virtual server is an area where the application experts needed to trust the system experts in providing a platform that would perform for the applications that couldn’t be physically touched. The system admins needed to work with the network admins in delivering VLAN trunks and LAG ports to the virtual servers as well as connectivity to disk delivered over an Ethernet network.

Can you think of any examples where you should have worked more closely with these other groups?

The most important lesson learned is: don’t be afraid to ask for help understanding something, and don’t hold back in giving information to others. The biggest mistake is to assume that because someone works in another group, they either wouldn’t understand or don’t care about the expertise that you provide.

How do you see the role/priorities of the network team changing in the light of such new technologies as VoIP and wireless?

Over the next five to 10 years, we will continue to see a blurring of service providers for voice, video and data. As this trend continues, companies will struggle with how they can morph their separate groups for these services into a combined force that will be able to really utilize the emerging technologies in ways that will improve their service offerings.

Babson emphasizes QoS beyond Layers 2 and 3. How does your group contribute to optimizing the user experience?

By simply looking at it as an experience and not just a narrow view of how fast the packets get from one part of the network to another. Everyone feels a responsibility for making sure that the user’s experience is the best that we can offer. Sometimes that may mean that a system admin helps a user troubleshoot their network connectivity when the call that came in was related to server performance. Our user community sees IT as one face. We try to make sure the face they see does not have multiple personalities.

What one technology has you most excited these days?

Virtual server technology is by far the most exciting. With IP SAN connectivity to a virtual server infrastructure, we can reduce our physical footprint in the data center while allowing for ease of deployment of new servers and applications. It wasn’t too long ago that we had to plan out hardware purchases. We would work to size CPU, RAM and disk for point solutions. The result would be a longer time to get the hardware ready, and it would create a static point in time for individual configurations that would be difficult to change if we either under- or overcalculated the original requirements. Today, we can build servers from home if needed and make changes to the environment virtually. This has reduced our time building servers dramatically.

Name a technology you think is overhyped or underrated?

VoIP can be overhyped in some environments. Most existing conventional PBX implementations have installed wiring and are running on mature hardware. Alongside the existing wired PBX is an Ethernet infrastructure that has grown from many past standards. These networks often range in capabilities in the following ways: wiring from sub-Category 3 to Category 6; network hardware with speeds varying from 10 to 1,000Mbps; some switches with Power over Ethernet and some without. In most cases the wiring closets where network gear is located are not even on UPS, let alone a backup generator. Uplifting this kind of environment to support VoIP can be cost prohibitive, especially when the wired PBX is doing the job.

VoIP can also be underrated, though. This is most likely the case where the voice and data groups are not being managed closely together. This I believe is the case in a fair number of IT groups. For some of the reasons mentioned above, conventional ways of thinking prevail to the point of installing costly dual systems in new architecture, when installing an integrated VoIP solution would be the best idea.

What one technology do you wish never existed or that you could have back?

While I don’t think that wireless networking should never have existed, I do wish it had been introduced into mainstream networking earlier, so that it would be more mature today. The problem is we have applications like peer-to-peer file sharing, which can consume a tremendous amount of resources. Ethernet evolved from a shared wired technology using repeaters to full duplex switched ports for every user. Users got used to this kind of performance, and now they don’t understand having to share again, as is inherent in wireless technologies. Wireless will get there. However, the applications we are prepared to run over it today can make the delivery of a well-performing system problematic at best.

How do you keep up on new technologies?

We have regular reviews with our existing vendors and their competitors to find out where they are taking new technology initiatives. We also attend seminars on specific technologies that we have implemented or are planning to implement. These seminars give us an opportunity to see vendors’ offerings while hearing how other IT groups have used their solutions to solve their problems. We also connect with other educational institutions to share how we do things and to gain insights into how they do things.


Andy Lymburner, manager of software services

How do you interact with your peers in networks and systems?

In an environment like ours, where small teams are counted upon to continuously innovate while still providing the highest level of support to applications and services that are already in use by our customers, it’s critical that we maintain a good relationship with the enterprise-services team that provides both networking and systems. In general, we consult regarding resource allocation if there is a project that requires a hefty commitment from someone on the other team. Otherwise, we are content to allow the members of our teams to work together as often as possible -- and we’ve never said no when that collaboration is requested. Two members of the applications team are involved with all network and systems planning in order to keep the “gotchas” to a minimum. Team members work together daily to troubleshoot and resolve support requests from our customers and to work through problems uncovered by our internal monitoring of applications, systems and the network.

What are the keys to successfully working with the other groups?

The keys are simple. Good, talented people who respect each other personally and professionally coupled with the understanding that a problem at any point in the service-delivery chain is perceived by our customers to be a failure of the entire IT organization. As a manager, I work to empower each individual to work across team lines to solve the issue at hand and then implement safeguards to keep it from happening again.

Give me a thumbnail sketch of a project where your group worked with one or both of these other groups?

The server-consolidation and -virtualization project has numerous examples of how the groups work together. An ongoing project for over a year, it has been necessary for every application move to be coordinated meticulously to minimize downtime. A typical scenario would involve members of both teams meeting to iron out the details, then coordinating a run-through in our test environment. This run-through is documented thoroughly, and the trial runs continue until everything is as smooth as possible. Then the production move is scheduled. Moving applications to new, virtual servers with file-system changes can be a complex process, and it requires clear, open communication across the team.

Can you think of any examples where you should have worked more closely with these other groups?

The culture that we are trying to foster is one of cooperation and responsibility with minimal top-down management and maximum flexibility. The biggest challenge is communicating the constant changes across and within the network, server and application groups.

How do you see the role/priorities of the apps team changing in the light of such new technologies as SOA, VoIP and wireless?

The priority remains the same. Of course, this means that our platform must evolve to take advantage of SOA, wireless and other technologies, as our custom applications and those delivered by our vendors become even more ubiquitous.

Babson emphasizes QoS beyond Layers 2 and 3. How does your group contribute to optimizing the user experience?

For Babson, QoS can be defined simply by customer satisfaction. Our effectiveness is measured by consistently high application availability and performance. We continuously improve our applications and infrastructure based upon the feedback from our users and the logging/monitoring that we build into all of our in-house applications.

How do you get buy-in from nontech groups at Babson when you’re rolling out new technology?

One of the advantages of working in higher education, particularly in a school with such an entrepreneurial focus, is that even our most nontechnical users are being pushed by their customers -- generally, the student body -- to continuously evolve their processes and offerings. This environment of near constant change has made it relatively easy for us to roll out new technologies that provide perceived advantages, no matter how incremental. The larger challenge for us is to rein in some of our more bleeding-edge groups to derive more value from the existing applications.

What one technology has you most excited these days?

Mashups and using the Web as a platform. Our development team is working on projects to further enhance our portal by offering Babson-specific mashups, customized RSS feeds and more community functionality.

Name a technology you think is overhyped or underrated?

For all the hype that it’s gotten, businesses are just now beginning to find ways to create value for the enterprise through the use of mashup technology. Yahoo has created an interesting site that allows people without programming skills to utilize the power of RSS and Web-based APIs to bring together data from multiple sites, manipulate it and then publish it out with value added. This concept has been taken a bit further by Kapow.com and its product, RoboSuite. As more and more functionality is made available to the user, it will become incumbent upon application and data providers and integrators to find ways to make work- and school-related information securely available through the same means that people are accessing the information related to the rest of their lives.

What one technology do you wish never existed or that you could have back?

I can’t think of any technology that I wish never existed -- and since nothing really goes away (I contend that it simply evolves into something else), there is nothing that I’d like to have back. There are numerous technologies that have made things difficult for us over the years, most notably products with security holes and the proliferation of worms, etc., to take advantage of those holes. However, to me, it’s what people do with the technology that may be a problem, not the technology itself.

How do you keep up on new technologies?

I am both an avid reader and an endless Web surfer. If I find something that particularly interests me, I’ll read more deeply on a subject -- or create a project for my team to dig into it.


VMware White Paper


www.ovum.com

Virtualization delivers IT and business benefits for SMBs

07 June 2007

John [email protected]


Table of Contents

Virtualization delivers IT and business benefits for SMBs
WTC Communications discovers virtualization’s value for efficient IT management
Bowdoin College achieves more effective data center management and disaster recovery
SMBs share common needs for virtualization deployment
Survey data supports virtualization’s spread among SMBs

© Ovum Summit 2007. Unauthorized reproduction prohibited.


Virtualization delivers IT and business benefits for SMBs

After years of vendor hype about virtualization, customers are realizing real-life business and IT benefits from implementing this technology. By moving away from the traditional ‘siloed’ approach of linking applications to specific IT infrastructure, towards an approach that creates shared pools of virtualized server, storage and network resources, customers can dynamically assign the pooled resources wherever and whenever needed.

Although many vendors have focused their selling and marketing efforts on the virtualization benefits for enterprise-class customers, more and more SMB customers are reaping virtualization rewards as well. Nonetheless, a good portion of SMBs still view IT virtualization as something that can only be attained by large enterprises, and as a technology that has little relevance in their comparatively smaller IT operations. However, virtualization’s benefits of increased utilization, improved service reliability and the positive impact on both internal and external business processes can apply to SMB as well as to enterprise-class operations.

Virtualization leader VMware, an independent EMC subsidiary, has been a pioneer in both enterprise and SMB virtualization deployments, particularly in server virtualization. VMware’s products logically ‘break’ each physical server into several independent virtual servers, allowing customers to run multiple operating systems and applications on a single machine simultaneously. Each virtual server is independent of the others, so failure in one will not affect others. Furthermore, the workload from the failed virtual server can be reassigned to another virtual machine.
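That isolation-plus-reassignment behavior is the conceptual heart of the approach. A toy Python sketch of what “reassign the failed server’s workload” means (illustrative only; this is not VMware’s actual placement algorithm, and all names are invented):

```python
# Conceptual sketch of the failover behavior described above: guests on
# a failed host are restarted on survivors with spare capacity.
hosts = {
    "esx1": {"capacity": 4, "vms": ["mail", "web"]},
    "esx2": {"capacity": 4, "vms": ["billing"]},
}

def fail_over(dead_host: str) -> None:
    orphans = hosts.pop(dead_host)["vms"]
    for vm in orphans:
        # Real schedulers weigh CPU, memory and affinity rules,
        # not just a free-slot count as this toy does.
        target = next(h for h, s in hosts.items()
                      if len(s["vms"]) < s["capacity"])
        hosts[target]["vms"].append(vm)

fail_over("esx1")
print(hosts)   # billing, mail and web all running on esx2
```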

In June 2006, VMware launched its VMware Infrastructure 3 bundled solutions, packaging together the company’s older and newer product functionality into several easily digestible bundles. This initiative included some entry-level pricing designed to put virtualization in reach of a wider array of enterprises and, notably, SMB customers grappling with various IT management and disaster recovery issues.

This paper profiles the virtualization experience of two VMware SMB-size customers, and provides a window into the ‘real-life’ impact of virtualization for IT managers juggling the dual priorities of remaining competitive and keeping costs in check.

WTC Communications discovers virtualization’s value for efficient IT management

WTC Communications is a small telephone company in rural Kansas that provides telephone, cable and Internet services to about 8,000 customers. Starting in 2000, when it began branching into new areas such as Internet access, the company’s IT infrastructure grew rapidly, from only a few Intel-based servers to almost 15. Although that is a small number by enterprise standards, keep in mind that WTC has only about 25 employees, and only two dedicated to maintaining its servers and keeping the IT infrastructure in step with an expanding business. As the head of the company’s IT department states, ‘it was really becoming a headache [to manage]’.

After considering different options, in early 2005, WTC decided to deploy VMware ESX Server. ESX Server functions as a virtualization layer (commonly called a hypervisor) that abstracts the processor, memory, storage and networking resources on one x86 Intel-based server into multiple virtual machines. This deployment allowed the company to reduce its physical 15-server environment to three servers supporting 25 virtual machines among them. The virtual machines, in turn, run everything from internal billing systems to customer email and web hosting services. Most of the virtual machines run Windows, although some run Linux.

Although WTC has not performed a detailed financial analysis, its executives believe the company has saved thousands of dollars in hardware costs, and has freed up precious IT man hours to focus on other projects. ‘We know that we’re doing better than if we were doing it the other way’, says one WTC IT manager.

Since its initial deployment, WTC upgraded to the VMware Infrastructure 3 suite, taking advantage of the $1,000 pricing of the two-CPU Starter edition, as well as some additional features in other VMware Infrastructure 3 editions. Among the benefits WTC cites are:

• automated resource allocation: with the dynamic allocation functions of VMware Distributed Resource Scheduler (DRS), administrators no longer have to manually allocate more server resources depending on end-user or application needs. WTC can set rules and priorities ahead of time, and can change or alter them with no interruption to services (a toy sketch of this share-based idea follows this list)

• consolidated back-up: in the past, system or server crashes meant hours or days spent rebuilding server configurations. WTC can now perform virtually instantaneous restores, accessing back-up files and leveraging other technologies in the VMware Infrastructure 3 suite such as VMware Consolidated Backup functionality

• centralized management: VMware VirtualCenter allows WTC’s IT managers to make alterations or changes, where needed, without internal users or customers being aware that anything has occurred

• ongoing support from VMware: according to WTC, it has never felt that VMware treats it poorly just because it is a smaller customer, and it praises VMware’s ongoing technical support processes. The vendor provides ongoing training on new product features, as well as timely and accurate information on any patches, fixes or bugs. In fact, WTC has actively participated in VMware’s customer councils to help VMware identify ways to improve its products.
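The “rules and priorities ahead of time” idea in the first bullet can be pictured as share-based division of a pooled resource. A minimal Python sketch (conceptual only; DRS’s real scheduler weighs far more than static shares, and the workload names are invented):

```python
# Toy illustration of rule-based resource allocation: admin-set shares
# divide a pooled resource, and changing a rule re-divides the pool
# without touching the guests.
pool_mhz = 8000
shares = {"billing": 4000, "email": 2000, "web_hosting": 2000}

def allocate(pool, shares):
    total = sum(shares.values())
    return {vm: round(pool * s / total) for vm, s in shares.items()}

print(allocate(pool_mhz, shares))
# {'billing': 4000, 'email': 2000, 'web_hosting': 2000}

shares["web_hosting"] = 4000          # a priority rule changed ahead of time
print(allocate(pool_mhz, shares))
# {'billing': 3200, 'email': 1600, 'web_hosting': 3200}
```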

Bowdoin College achieves more effective data center management and disaster recovery

Bowdoin College is a well-known private liberal arts college in Brunswick, Maine. Bowdoin’s IT infrastructure has grown rapidly over the years to serve the needs of its diverse academic community, which currently stands at about 1,700 students and more than 850 faculty and staff. The IT department is responsible for everything from maintaining custom administrative, admissions and instructional applications, to managing the college’s website (through which students and staff can access customized portals), to running data archiving solutions for the college’s three museums.

In early 2004, Bowdoin’s IT department realized it was quickly running out of data center space due to its need to run different operating systems (including multiple versions of Windows, Red Hat Linux and Sun Microsystems Solaris). The growth of servers also stressed its data center power and cooling systems. For example, as one IT administrator described it, its uninterruptible power supply (UPS) for back-up was at 92% capacity. ‘We were right at the edge of boiling the thing up,’ he said.

What’s more, the college at the time had an inadequate disaster recovery plan, with no remote storage or back-up capabilities. In the event of a disaster, Bowdoin faced the prospect of needing weeks or months to fully recover everything in its IT infrastructure.

To begin addressing these problems, the college purchased its first VMware ESX Servers in mid 2004 for some web server applications, with the intention to build from there. The initial deployment was so successful that the IT department decided that when servers reached end-of-life or were to be retired, Bowdoin would migrate to VMware virtual environments running on blades, rather than purchasing one new physical server for each application.

Three years on, Bowdoin now has more than 100 virtualized environments on 11 ESX Servers, running on three racks of blades from Hewlett-Packard, with room to grow if necessary. The college uses VirtualCenter and VMware DRS for dynamic management and resource allocation across its virtualized servers, to respond to the fluctuating needs of various college departments. Bowdoin’s IT department estimates about 70% of its server environment is now virtualized, and says the use of virtualized blades has allowed the college to save on power and cooling costs.

Virtualization has also allowed the college to implement a unique disaster recovery program. Bowdoin teamed up with Loyola Marymount University in Los Angeles – which also utilizes data center virtualization – to co-host virtual machines for each other’s Web, DNS and Windows Active Directory servers. Each college, in effect, acts as a cross-country remote disaster recovery site for the other, using a dedicated network tunnel to send regular updates between the two.

Bowdoin has purchased, and is currently testing, VMware Infrastructure 3, and plans to migrate to the new suite fairly soon. It wants to move ahead on VMware Infrastructure 3 because of the positive IT and business outcomes realized since its VMware deployment three years ago, including:

• substantial cost savings. The college estimates it has saved at least $1 million in three years on the purchase of physical servers by utilizing virtualized blades. It has also achieved a higher degree of energy efficiency and saved on power and cooling

• the ability for IT staff to concentrate on other critical IT projects, including a planned voice-over-IP (VoIP) implementation and an upgrade of its email infrastructure

• the peace of mind of knowing that the college now has an innovative and highly resilient disaster recovery plan. In the event of a failure, Bowdoin and its partner college will be able to recover far more quickly than under their previous programs.
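The hardware component of the savings estimate above can be reconstructed roughly from the numbers in the case study. This is a sketch under stated assumptions: only the VM and host counts come from the text, and the per-server cost is hypothetical.

    # Back-of-the-envelope reconstruction of the avoided-hardware estimate.
    # The unit cost is hypothetical; the VM and host counts come from the
    # case study (100+ virtual environments consolidated onto 11 ESX hosts).
    vms_consolidated = 100      # virtual environments in production
    esx_hosts = 11              # physical blade hosts actually bought
    cost_per_server = 11000     # hypothetical cost of one physical server ($)
    years = 3

    servers_avoided = vms_consolidated - esx_hosts
    hardware_savings = servers_avoided * cost_per_server
    print(f"~{servers_avoided} server purchases avoided "
          f"over {years} years: ${hardware_savings:,}")

With these assumptions the avoided spend comes to roughly $979,000, consistent with the college’s ‘at least $1 million’ estimate, which also counts power and cooling savings.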

SMBs share common needs for virtualization deployment

Both WTC Communications and Bowdoin College had common needs, and took some common approaches, as they implemented VMware’s virtual infrastructure.

• The need to consolidate and better utilize their server environments due to ongoing cost pressures, which entailed an assessment and inventory of their overall IT environments. Virtualization allows SMB IT managers to optimize their current server environments, without the need for additional or extensive hardware investments. (Some organizations such as Bowdoin College can decide to migrate to newer blade-based systems for additional efficiencies; others want to maintain the servers they have, but to utilize them more effectively with virtualization.)

• A new approach to disaster recovery and/or improved back-up capability. Many SMBs have come to realize that they need enterprise-class recovery procedures in order to be properly prepared for unforeseen disasters or, in some cases, for regulatory compliance. However, many SMBs lack the financial resources or personnel to implement a complex disaster recovery solution. Through virtualization, IT managers can restore virtual servers on any physical hardware device, ensuring faster recovery capabilities.


• A means to reduce their overall energy costs by reducing their physical data center server count, which in turn lowers their overall power and cooling costs. With skyrocketing energy prices and maxed-out power grids, leveraging consolidation projects and virtualization technologies to cut energy requirements is emerging as an important decision-making driver.

• A desire to free up IT staff from time-consuming server administration duties, and leverage automated management functions as ways to make IT a business driver, rather than solely a cost center.

• A way to ratchet up application availability and scalability ‘on the fly’ in order to meet unexpected demands on IT resources, which requires that IT and business leaders work together to plan for potential resource demands, priorities and allocations (a simple illustration of such scaling follows this list).

• An IT approach that would allow their operations to grow (in terms of applications, data and transactions), while maintaining a reduced and predictable IT cost structure. In fact, the ability to work with VMware to ‘start small’ with limited and controlled virtualization testing and deployments – and grow as necessary – was an attractive option for both of the SMB-sized organizations profiled here.
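As a simple illustration of the ‘on the fly’ scaling idea raised in the list above, the toy control loop below powers on standby virtual machines as request load climbs and retires them as it falls. Everything here is hypothetical: the VM names, the thresholds and the simulated load feed stand in for a real monitoring system and a real management API.

    # Toy autoscaling loop (illustrative only): bring standby VMs online as
    # demand rises and retire them as it falls. Names, thresholds and the
    # get_requests_per_second() feed are hypothetical stand-ins.
    import random
    import time

    STANDBY_POOL = ["app-vm-2", "app-vm-3", "app-vm-4"]
    SCALE_UP_RPS = 500    # add a VM when per-VM load exceeds this rate
    SCALE_DOWN_RPS = 200  # retire a VM when per-VM load falls below this
    active = ["app-vm-1"]

    def get_requests_per_second():
        # Stand-in for a real monitoring feed (simulated values).
        return random.uniform(100, 1500)

    def power_on(vm):
        print("powering on", vm)   # placeholder for a management API call

    def power_off(vm):
        print("powering off", vm)  # placeholder for a management API call

    for _ in range(10):  # a real loop would run indefinitely
        per_vm = get_requests_per_second() / len(active)
        if per_vm > SCALE_UP_RPS and STANDBY_POOL:
            vm = STANDBY_POOL.pop(0)
            power_on(vm)
            active.append(vm)
        elif per_vm < SCALE_DOWN_RPS and len(active) > 1:
            vm = active.pop()
            power_off(vm)
            STANDBY_POOL.insert(0, vm)
        time.sleep(1)  # a production loop would poll less aggressively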

Survey data supports virtualization’s spread among SMBs

A recent survey conducted by Ovum Summit makes clear that a wider variety of small and mid-sized customers – in addition to traditional enterprise clients – are experiencing the benefits of virtualization more than ever before. Other SMBs, particularly mid-sized businesses, plan to introduce virtualization into their IT environments as part of their future strategies, according to the survey data.

As shown in Figure 1, of the more than 150 mid-sized businesses surveyed (we categorize ‘mid-sized’ as customers with 100–999 employees), 26% said that storage virtualization was fully operational within their organizations, supporting either many or a few applications and business areas; 25% of respondents said the same of server virtualization, and 46% said they were involved in consolidation projects. In more and more cases, virtualization is seen as a key technology for successful and cost-effective data center consolidation within SMBs as they attempt to contain costs and optimize their current data center environments; some SMBs, in fact, turn to virtualization as a way of avoiding large-scale, expensive hardware investments.


Figure 1 Mid-sized businesses technology adoption

Question: To what extent has your organization currently implemented each of the following? (Please select ONE response in EACH ROW.) All mid-sized businesses: N = 155.

Response key: (1) Fully operational, supporting many applications and business areas; (2) Fully operational, supporting a few applications and some business areas; (3) Committed to use as part of long term strategy – still in early stages; (4) Still in evaluation/pilot stage – very limited or no active deployments; (5) Evaluated and rejected; (6) Unsure/N/A.

Technology                 (1)   (2)   (3)   (4)   (5)   (6)
ITSM                        4%    6%   19%   25%   10%   35%
SOA                         5%   12%   19%   23%   14%   27%
Server virtualization       6%   19%   26%   21%    8%   18%
Clustered servers/grids     8%   17%   26%   19%    9%   19%
Storage virtualization      9%   17%   25%   25%    6%   19%
Blade servers              10%   23%   14%   22%   10%   21%
Web services               15%   23%   23%   17%    6%   15%
Consolidation              21%   25%   20%   17%    5%   12%

Source: Ovum Summit
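The combined ‘fully operational’ shares quoted above can be reproduced directly from the first two columns of Figure 1; a few lines of Python make the arithmetic explicit.

    # Combine Figure 1's two "fully operational" columns (percentages).
    fully_operational = {  # (many apps, a few apps)
        "ITSM": (4, 6),
        "SOA": (5, 12),
        "Server virtualization": (6, 19),
        "Clustered servers/grids": (8, 17),
        "Storage virtualization": (9, 17),
        "Blade servers": (10, 23),
        "Web services": (15, 23),
        "Consolidation": (21, 25),
    }

    for tech, (many, few) in fully_operational.items():
        print(f"{tech}: {many + few}% fully operational")
    # Storage virtualization comes to 26%, server virtualization to 25%
    # and consolidation to 46%, matching the percentages cited in the text.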

When asked about the extent to which certain technologies would be implemented in their organizations by mid-2009, a full 40% of mid-sized businesses expected storage virtualization to be fully operational, supporting either many or a few applications and business areas. In addition, 38% said the same of server virtualization. More than half of mid-sized respondents said they would be involved in an IT consolidation initiative in the same time frame (see Figure 2).


Figure 2 Projected technology adoption – mid-sized businesses

[Stacked-bar chart covering the same eight technologies as Figure 1 – ITSM, SOA, storage virtualization, server virtualization, clustered servers/grids, blade servers, Web services and consolidation – with the same six response options, from ‘Fully operational, supporting many applications and business areas’ through ‘Evaluated and rejected’ and ‘Unsure/N/A’. Question: To what extent do you believe your organization will have implemented each of the following technologies by mid-year 2009? (Please select ONE response in EACH ROW.) All mid-sized businesses: N = 155.]

Source: Ovum Summit

Done properly, virtualization can lower SMBs’ overall IT costs, enable more efficient IT operations, improve disaster recovery and business continuity, advance overall business productivity, and deliver energy savings on data center power and cooling. It can also serve as a solid foundation for other critical IT projects, such as implementing new service-oriented architecture (SOA)-related services, and can result in more satisfied employees, customers and suppliers, thanks to improved levels of availability and reliability.


SMBs that still think of virtualization as too expensive or too unwieldy to implement can take a lesson from their counterparts at WTC Communications and Bowdoin College. Whether they are looking for cost-effective consolidation or robust disaster recovery, VMware’s various technologies and management capabilities can help SMBs achieve enterprise-class IT and business benefits that may previously have been out of reach.

This paper was commissioned by VMware.

Ovum Summit is a leading analyst and research firm tracking the evolution of enterprise and mid-market dynamic computing strategies, including virtualization, SOA, IT management, and related enabling services. Every care has been taken to ensure the accuracy of the information contained in this report. The facts, estimates and opinions stated are based on information and sources that, while we believe them to be reliable, are not guaranteed. Ovum Summit maintains final editorial control over its research and does not endorse specific vendors or offerings. No liability can be accepted by Ovum Europe Limited, its directors or employees for any loss occasioned to any person or entity acting or failing to act as a result of anything contained in, or omitted from, the content of this material, or our conclusions as stated.

© Ovum Summit 2007. Unauthorized reproduction prohibited.

