Service Virtualization in Enterprise Application Development

Complementing hardware virtualization with a unique and compelling ability to reduce cost and increase business agility across the entire software lifecycle

January 2009
By John Michelsen, iTKO LISA

iTKO LISA
1505 LBJ Freeway, Suite 250
Dallas, TX 75234 USA
www: http://www.itko.com
email: [email protected]
tel: 877-BUY-ITKO (289-4856)

© 2002 - 2009, Interactive TKO, Inc. All rights reserved.


Executive Summary

Service virtualization removes costly constraints on the development and deployment of dynamic enterprise applications.

Virtualization is the practice of simulating the behavior of a physical asset in a software emulator, and hosting that emulator in a virtual environment. Virtualization can help companies better manage their physical resources, improve their utilization, and increase business agility by removing physical constraints on resource access, capacity, and data volatility. Most large organizations are in the process of adopting hardware virtualization technology for the development of their dynamic enterprise applications. However, hardware virtualization is not appropriate or feasible in all circumstances. Many remaining critical challenges can be resolved with the complementary use of Service Virtualization, which yields substantial additional benefits and improved ROI.

Hardware virtualization is only possible when the whole system involved can be imaged into a virtual machine, yet many of the enterprise's most complex and costly systems cannot be imaged in this way. Hardware virtualization pools resources, but it does not eliminate the need for them, so the organization is still constrained by its physical capacity. It does not adequately address non-hardware costs like licensing, data volatility, patch release management, and other IT administration issues. Finally, hardware virtualization cannot help organizations struggling to contain costs and stabilize the data of 3rd party hosted applications, SaaS, and cloud-based systems.

Service virtualization involves imaging the behavior of software services and modeling a virtual service to stand in for the actual service during development and testing. Service virtualization is complementary to hardware virtualization, and addresses each of the aforementioned hardware virtualization limitations by:

• Providing 24/7 access to the service end point on your terms
• Removing capacity constraints
• Addressing data volatility across distributed systems
• Reducing or eliminating the cost of invoking 3rd party systems for non-production use

Organizations that want to realize the full benefits of a virtualization strategy should evaluate the practices and technologies that support service virtualization, and establish criteria for a business case justification. iTKO can conduct an assessment to evaluate the improvements that service virtualization can deliver to reduce costs and improve business agility in the development of dynamic enterprise applications.

"Virtualization technology delivers so much to an enterprise. First and foremost, it allows for a malleable enterprise environment to be delivered anywhere independent of an operating system." Theresa Lanowitz, Senior Analyst, voke, inc., "What's Next?" blog, May 17, 2007


Introduction

Virtualization is hot. But are we realizing its full potential?

Leading technology analysts (and Wall Street financial analysts) cite the adoption of virtualization, broadly defined as the practice of simulating the behavior of a technology asset, as one of the most significant technology priorities of this decade. There is a reason for this excitement: virtualization at the hardware and data center level generates an almost immediate payback in saved IT operations costs, potentially saving several million dollars in months, not years.

However, by focusing solely on hardware virtualization concepts, are we leaving opportunities to save time and money on the table? It is significant that we can reduce the cost of servers, but does this represent the bulk of the IT budget? What if we could also apply the benefits of virtualization where we spend 80 percent or more of that budget -- in the enterprise software that runs our business, and the extensive development, support, and maintenance costs of these applications?

Market pressures for greater business agility are driving enterprise applications to become more flexible and dynamic. Applications are increasingly assembled from distributed services and components that are shared across departments and organizations on a global scale. When we distribute component or service development tasks across multiple teams, we inevitably find that these teams still need access to live versions of the rest of the application in order to complete their own development and testing goals. There is still a high level of dependency and interconnectedness among all of those teams to deliver a completed workflow. For large-scale enterprise systems, this represents a significant cost and agility challenge.

This paper introduces the practice of Service Virtualization (SV) to address these challenges and provide a key complement to hardware virtualization. Service virtualization entails capturing the behavior of deployed software assets, along with virtualizing the behavior of those not yet in existence, to facilitate the efficient development and testing of dynamic enterprise applications. This paper also discusses the benefits of the SV approach, and describes the value of SV in the context of a hardware-virtualized environment. The opportunities presented by SV offer significant breakthroughs in productivity and agility for the entire enterprise application development and deployment lifecycle.

Introduction to Virtualization

There has been a lot of industry buzz around virtualization, and with good reason, as business continually drives IT to accomplish more with fewer resources. Hardware virtualization can be used to virtualize test beds and maintain the countless configurations of operating systems and platforms that software needs to run against.

"The ability to predict the reliability of an application by virtually testing performance at any point in the lifecycle is a necessary ingredient for managing the complex software of the 21st century. By virtualizing away the dependencies inherent in testing a services-based environment, Quality Assurance and IT operations teams are empowered to tackle the most difficult performance challenges with a higher degree of automation." -- Theresa Lanowitz, Senior Analyst, voke inc., 2008


Virtualization Defined

Virtualization is the practice of simulating the behavior of a physical technology asset, such as a server or application, in a software emulator, and hosting that emulator in a virtual environment. The virtual environment behaves enough like the physical environment that communication with the emulated asset is identical to communication with the real thing. This provides a number of benefits:

Benefit 1. Better Manage Technology Assets

Many enterprises have challenging configuration and change management issues with technology. The additional abstraction of a virtual environment makes adherence to best practices around change and configuration management easier.

Benefit 2. Improve Utilization of Physical Capacity

We can better leverage the physical capacity of technology assets with virtualization, stemming the tide of server proliferation. Instead of every team acquiring its own hardware platforms for development, QA, and integration testing, the same servers can be shared across disciplines and even across teams. This assumes, of course, that the physical utilization of these assets is low.

Benefit 3. Improve Agility

Demands on our physical resources shift back and forth as project teams perform different functions. Most organizations can redirect human resources much faster than physical hardware resources. With virtualization, we can access the applications and services we need more rapidly, without costly delays in reconfiguring or waiting on IT to switch servers.

Hardware Virtualization Definition

Hardware virtualization presents what is in effect a simulator of hardware behavior, but is in fact an intelligent proxy to the physical world. These proxies can negotiate with the virtual environment in which they are deployed, so they do not require exclusive access to the hardware they represent.

For example, one team may need a set of servers for order management while another team leverages the same physical hardware to provision servers for web ecommerce development. Both teams access proxies for memory, CPU, and disk that share a single physical asset managed by the virtual environment.

How it works

The market is abuzz with case studies of companies taking advantage of hardware virtualization using technology from VMware, Citrix, Microsoft, and others. Most often, the key driver is to improve utilization of physical capacity.

While a complete discussion of how companies accomplish a transition to a virtual hardware environment is outside the scope of this paper, a few steps are worth noting:

• Evaluate and deploy the virtual hardware environment. Among the tools provided with this environment are those that can migrate physical hardware configurations to virtual ones.

• Image the existing physical asset, usually by cloning the hard disk contents. From this image, the tool constructs a virtual machine.

• Deploy the virtual machine into the virtual environment, making it available for use. The virtual environment is frequently called a hypervisor.

Hardware Virtualization is not a Complete Solution

Hardware virtualization by itself does not represent a complete virtualization strategy, nor will it deliver the full benefits possible through virtualization. Several kinds of IT assets are exceedingly difficult or impossible to virtualize at the hardware level.

Systems that can't be imaged from hardware

Many systems simply cannot be imaged through the hardware virtualization process. Systems not on Intel-class servers, such as mainframes, externally hosted partner services, SaaS, and cloud-based applications, could clearly benefit from virtualization but often cannot leverage hardware virtualization. In many cases, these assets are the most mission-critical and the most costly to provide access and capacity for.

Still Constrained to Real-World Capacity

While your need for infrastructure resources continues to expand, you cannot virtualize beyond the limits of your physical world. For example, you can virtualize your network connectivity, but the aggregate I/O used can't exceed the bandwidth you actually have in the physical world. Likewise, you can use a virtual environment to run several independent virtual machines, but those virtual machines cannot produce more output than the physical machine they run on. In that case, you must still buy more hardware to increase your capacity.

TCO is Much More than the Hardware "Box"

Where hardware virtualization is possible, it clearly reduces the hardware acquisition component of TCO. However, the acquisition cost of the hardware is usually just a fraction of the total. The majority of the cost consists of the IT labor overhead of maintaining server images and software configurations (especially when publishing changes to them), along with installs, patches, and licensing costs.

Hardware Virtualization is Not Appropriate in Some Circumstances

Finally, there are circumstances where hardware virtualization is simply not appropriate. A clear example is the performance management lab. You might deploy hardware virtualization in the performance lab, but you generally cannot realize its typical benefits there. The basic assumption of underutilized capacity is inherently false when you are driving thousands of transactions into the system, and we cannot allow the variability of shared hardware when collecting performance and resource consumption metrics on applications in the lab.

Service Virtualization Approach

We spend a lot of time in software architecture trying to work out a decoupling of software components. What we want is the ability for one software component to change without that change immediately affecting other software components. Middleware products and standardized protocols like WSDL and SOAP allow software components to be loosely coupled and language neutral. Yet the teams that work on these components remain tightly coupled.

“Through 2009, 60% of unplanned downtime for SOA-based, loosely coupled applications will be due to application failure, up from 40% on non-SOA-based applications (0.7 probability).” -- Allie Young, Gartner, Inc. “Application Testing: The New Offshore Frontier” Gartner Outsourcing Summit, March 2007


Until we get a loose coupling of the teams that work on these components, development cycles will suffer significant constraints. Service virtualization can offer a solution.

To illustrate, recall other virtual worlds you may be familiar with: Webkinz.com if you have young kids or grandkids, Second Life if you're a GenX'er, or even the Holodeck on the Starship Enterprise if you're a Trekkie. In each of these, we take real-world things, capture their behavior, and bring them into a virtual environment so that we can interact with them freely, without real-world constraints. You can control their behavior, their capacity, and their access, and even constrain them in ways completely unrelated to what the real world would impose.

By capturing the behavior of real-world assets and placing them in the virtual world, the real-world constraints are eliminated and can be redefined on your terms. Let's look at how SV plays a role in dynamic enterprise application development and deployment.
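To make the idea concrete, here is a minimal sketch of a virtual service in Java, using only the JDK's built-in HTTP server. The endpoint, port, and canned XML payload are hypothetical, and a real SV tool builds a far richer behavioral model; the sketch only shows the essential shape of the idea: a stand-in process that answers requests the way the real asset would, on your terms, around the clock.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A minimal virtual service: it answers a request the way the real back end
// would, but its availability, capacity, and data are under our control.
public class VirtualQuoteService {
    // Canned response, hypothetically captured from the real service.
    private static final String RECORDED_RESPONSE =
        "<quote><symbol>EXAMPLE</symbol><price>42.00</price></quote>";

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/quote", VirtualQuoteService::handle);
        server.start();
        System.out.println("Virtual service at http://localhost:8080/quote");
    }

    private static void handle(HttpExchange exchange) throws IOException {
        byte[] body = RECORDED_RESPONSE.getBytes("UTF-8");
        exchange.getResponseHeaders().set("Content-Type", "text/xml");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}

Point the application under development at this endpoint instead of the real system, and the team is no longer blocked by the real system's access windows.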


Example of Service Virtualization

Let's assume we're going to change an existing application with 3 downstream dependencies (starting from the bottom up):

Access to a mainframe: There are dozens of project teams, each needing access to the mainframe. Yet these teams receive very limited access, often during timeslots that do not align with their project plan. Team leaders will exclaim, "I can't build or test my application without access to a mainframe, and yet I barely ever get access." This is a persistent problem that introduces risk, unacceptable delays, lost productivity, and added cost.

Access to a database: Another team owns a database that is accessed by others. This database is currently accessible in a development lab, but one team needs a new stored procedure added to it. It may take the DBA team 3 or 4 months to deploy that stored procedure. Until this constraint is cleared, the team cannot move forward, threatening the delivery date and hurting productivity.

Access to the web service: A legacy system is deployed, and due to an architectural mandate, access to its functionality will be provided via web services. However, that web service doesn't actually exist yet, and the selection process for the vendor that will build it is ongoing. Once again, the team has a scope and a due date, yet no web service to test against, and not even an individual responsible for creating it.

All of the above are examples of very real-world constraints, and there are many more. Let's now apply SV to capture these IT assets as virtual services, and bring them into a virtual world where each team can eliminate these physical constraints and redefine them on their own terms.


After Applying Service Virtualization

Let's look at the same environment with 3 downstream dependencies after applying SV techniques (starting from the bottom up):

The mainframe where we had access issues: For the few hours that the mainframe is accessible, iTKO's LISA Virtualize product captures traffic between the application and the mainframe. LISA analyzes the way the system responds and builds a sophisticated model of that behavior, which is then hosted as a virtual service that is accessible 24/7 in a virtual service environment.

Capture and enhance the database: In this example, we need more than just database access; we need the ability to change the database rapidly without impacting other teams. LISA allows you to change the virtual model of the database -- to include stored procedures, to add columns, whatever changes the team needs. In essence, you are capturing the requirements for the future state of the database by changing it in the virtual model. These requirements can be handed off to the DBA team in parallel, but SV lets you recapture those 3 or 4 months of productive time by modifying the database's behavior in an unconstrained virtual world rather than the physical one.

The web service: LISA can construct a virtual web service from a WSDL, SOAP documents, XML samples, and other artifacts. With such a model in place, teams can be productive much more rapidly, as opposed to waiting for the contract to be signed and then waiting for someone to build and deliver the web service.
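The capture step described above can be pictured as a recording proxy. The sketch below, again plain JDK Java with a hypothetical back-end host name, forwards requests to the real system while it is reachable and remembers each answer; when the real system is unavailable, it replays the remembered answers instead. This is not LISA's implementation -- commercial tools also model state, timing, and parameterized data -- but it illustrates the record-and-replay principle.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Record-and-replay proxy: while the real system is reachable, forward each
// request and remember the answer; when it isn't, serve the remembered answer.
public class RecordingProxy {
    private static final String REAL_SYSTEM = "http://mainframe-gateway:9090"; // hypothetical
    private static final Map<String, String> recordings = new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        HttpServer proxy = HttpServer.create(new InetSocketAddress(8081), 0);
        proxy.createContext("/", RecordingProxy::handle);
        proxy.start();
        System.out.println("Recording proxy at http://localhost:8081/");
    }

    private static void handle(HttpExchange exchange) throws IOException {
        String key = exchange.getRequestMethod() + " " + exchange.getRequestURI();
        String response = recordings.get(key);
        if (response == null) {
            try {
                response = forward(exchange.getRequestURI().toString());
                recordings.put(key, response);  // capture for later replay
            } catch (IOException realSystemDown) {
                response = "<error>no recording available</error>";
            }
        }
        byte[] body = response.getBytes("UTF-8");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) { out.write(body); }
    }

    private static String forward(String pathAndQuery) throws IOException {
        HttpURLConnection conn = (HttpURLConnection)
            new URL(REAL_SYSTEM + pathAndQuery).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) sb.append(line);
            return sb.toString();
        }
    }
}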


Reference Architecture 1: No Virtualization In Use

Let's now explore a combined hardware virtualization and service virtualization environment and walk through a reference architecture including both.

This example application has access to a middle-tier set of services through some form of application server or integration layer. Those services frequently make calls out to legacy mainframe systems and third-party hosted applications. Several challenges are presented here.

In the back end there are access restrictions, and capacity constraints such as: “you’re not allowed to put a lot of load on that mainframe”, or “don’t test against that hosted solution because you are charged for every transaction.” In addition, there are significant data volatility and data dependency concerns. These challenges can be particularly acute for teams working on packaged ERP implementations.

How can I keep all my data synchronized among all these different systems? There's data in Service1, data in the SaaS solution, data in Application1, and data in the mainframe, and keeping all that data synchronized and working in concert is very difficult. A common IT complaint is: "I do a 2-hour load test against the system, then I spend 2 days resetting the data. Then I run the load test again for 2 hours, and I spend another 2 days resetting the data." Data management obviously represents a significant cost and delivery constraint.

How do I handle middle-tier challenges? Several teams are all trying to share the same infrastructure for their own development processes; their configurations are tied up together, every team wants a different configuration, and IT people are concerned about resource pooling. And what happens when Service2 changes? Even if a team has access to it, they still need the new version or they're stymied.

Let’s recap the challenges in this scenario:

• Server proliferation in the middle tier to support multiple teams and workflows
• Lost productivity while waiting for things like Service2 to be developed
• Increased cost from 3rd party systems: every time you access that SaaS-based solution or that hosted application, it costs more money
• Increased risk because the team can't do as much repeatable testing, due to data volatility
• Increased risk for integration testing when the last component shows up late in the integration lab. You may have six months of access to everything else, but if one piece is late, six months becomes just two weeks to test -- a very risky proposition.

Reference Architecture 2: Leveraging Hardware Virtualization

Hardware Virtualization: Helpful… but Still Not Enough

Regarding hardware virtualization, in a sense every team wants its own server so as not to be affected by all the other teams. Even the service providers themselves need their own resource pools and their own configuration of the application server. Hardware virtualization can largely address this: it gives you the effect of server proliferation (where everybody gets their own server) without requiring physical assets to do it.

However, beyond the hardware assets, there are still difficulties with parallel application development. If Service2 is not ready for an application to use (like the database in our example above), virtualized hardware won't help until the development team implements the needed changes. These constraints can't be removed with hardware. And unfortunately, I still have access issues on the mainframe, I still have capacity constraints, I still have costly access on the SaaS side, and I still have a tremendous amount of data volatility. There may also be security concerns when sharing a process among teams: local teams may have access to the mainframe, but offshore teams may not, because of security and premises issues, limited time windows, and other constraints.


Reference Architecture 3: Added Value through SV

Improvements with Service Virtualization

SV allows us to address the problems we encountered in the middle tier: we can create virtual models, modify those models, and enable the parallel development between teams that was previously lacking. This brings a significant improvement in project team agility and improves time to market.

In the back end, those constrained resources can now be made accessible 24/7. We can take IT assets that had capacity constraints -- for example, a performance lab that could only drive 10 TPS against the back-end systems -- and virtualize them to allow hundreds or thousands of transactions per second.

Data consistency issues are also resolved. Specific data scenarios are easy to control and manage. Predictable, stable test data is now available across a variety of systems, even systems that are not under direct ownership and control -- a crucial requirement for any distributed system.

And finally, the security constraints around sensitive systems can be removed. For example, I can make a model of the behavior of Service1 and the mainframe and give that to an offshore team, so they have unfettered access and more productivity, without the security concerns.

Same Virtualization Process… Just Different Targets

The high-level virtualization process is essentially the same whether you are doing hardware virtualization or service virtualization. When I virtualize a piece of hardware, I take an image of the disk, I construct a virtual machine, and I host that in a virtual hardware environment from VMware, Citrix, or Microsoft. With a virtual service, you image the behavior of a particular service, you construct the virtual service from that behavior, and then you deploy it to a virtual service environment. Same steps, but different tools, and different outcomes based on the type of virtualization.
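To underline the parallel, here is a toy sketch of the artifact a virtual service environment hosts: a behavior model built from observed request/response pairs plus a matching rule. All class names, patterns, and payloads are illustrative; production SV tools capture far richer behavior (state, timing, parameterized data), but the image-construct-deploy shape is the same.

import java.util.ArrayList;
import java.util.List;

// A toy "virtual service model": recorded request/response pairs plus a
// matching rule. "Imaging" a service populates the list; "deploying" it
// means answering live requests from the recorded behavior.
public class ServiceModel {
    static class Transaction {
        final String requestPattern;   // e.g. an operation name or a regex
        final String response;
        Transaction(String requestPattern, String response) {
            this.requestPattern = requestPattern;
            this.response = response;
        }
    }

    private final List<Transaction> transactions = new ArrayList<>();

    // "Imaging": add behavior observed from the real service.
    public void record(String requestPattern, String response) {
        transactions.add(new Transaction(requestPattern, response));
    }

    // "Replay": answer a new request from the recorded behavior.
    public String respond(String request) {
        for (Transaction t : transactions) {
            if (request.matches(t.requestPattern)) return t.response;
        }
        return "<fault>unmodeled request</fault>";
    }

    public static void main(String[] args) {
        ServiceModel model = new ServiceModel();
        model.record(".*getOrderStatus.*", "<status>SHIPPED</status>");
        model.record(".*getInventory.*", "<inventory>112</inventory>");
        System.out.println(model.respond("<soap:Body><getOrderStatus id='7'/></soap:Body>"));
    }
}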


You can isolate IT from physical constraints by using Service Virtualization. We invite you to explore how this strategy can deliver the cost savings and agility you expect across the lifecycle of your entire software infrastructure.

Recommendations

Organizations involved in the development of dynamic enterprise applications should consider the benefits afforded by Service Virtualization. If there is an existing initiative to virtualize hardware, consider adding SV capabilities to complete the strategy and add more value. If there is not a virtualization initiative, consider whether the benefits of hardware virtualization and service virtualization warrant investigation. Hardware virtualization will likely deserve your attention when middle-tier server proliferation is occurring among servers that are largely stable and incrementally inexpensive to replicate. Service Virtualization will be justified when architectural complexity, data volatility, 3rd party costs, and the need for development agility are high.

Contact iTKO for an assessment of your environment. iTKO can help you leverage both complementary approaches to virtualization so you can achieve the most from your constrained budget.

About the Author

John Michelsen, Founder & Chief Architect, iTKO, Inc.

John has over fifteen years of experience as a technical leader at all organization levels, designing, developing, and managing large-scale, object-oriented solutions in traditional and network architectures. He is the chief architect of iTKO's LISA automated testing product and a leading industry advocate for software quality. Before forming iTKO, Michelsen was Director of Development at Trilogy Inc. and VP of Development at AGENCY.COM, and he held the title of Chief Technical Architect at companies like Raima, Sabre, and Xerox while working as a consultant. Through work with clients like Cendant Financial, Microsoft, American Airlines, Union Pacific, and Nielsen Market Research, John has deployed solutions using technologies from the mainframe to the handheld device.

About iTKO LISA™

iTKO LISA Testing, Validation & Virtualization solutions help companies mitigate the business risks of increasing change and complexity within Service-Oriented Architectures (SOA). iTKO's mission is to allow everyone involved in IT to own quality, from developers to QA and business analysts. Our LISA solution performs unit, functional, regression, load, and performance tests, as well as service virtualization, without requiring test coding or script maintenance efforts. LISA can test and validate websites, web services, J2EE, .NET, ESB/messaging, databases, and many more technologies. iTKO customers include Intel, Bank of America, eBay, DISA, and TIBCO.

For more information on iTKO LISA solutions, visit http://www.itko.com/lisa, contact iTKO at [email protected] or call 1-877-BUY-ITKO (289-4856).

