Mainframe Linux Benchmark Project

Microsoft Corporation Published: July 2003


The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2003 Microsoft Corporation. All rights reserved.

Microsoft®, Windows®, and Windows Server™ are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.


Table of Contents

Executive Summary
    NetBench Results without z/VM
    NetBench Results with z/VM
    NetBench Conclusion
    WebBench Results
    WebBench Conclusion
    Cost Analysis
    Conclusion
Introduction
    Purpose of the Benchmark
Project Methodology
    NetBench Description
    WebBench Description
    Benchmark Procedures
    Test Process
        Setup
        Initial Testing
        Pre-test
    Optimizations and Observations
        Hardware Optimization
        Software Optimization
    NetBench Results
    WebBench Results
Analysis of Results
    NetBench Analysis
    WebBench Analysis
Implications for Customers
    Server Consolidation: File Serving
    Linux under z/VM Cost Comparison to Windows Server 2003
    Server Consolidation – Web Serving
    Conclusion
Appendix A: Configurations
    EXHIBIT A
    Hardware
        Mainframe
        Clients Configuration
        CISCO Catalyst Switch configuration
        900 MHz Pentium III Xeon Server Used for Comparison
        Windows Server 2003 Cost Table
    EXHIBIT B
    Software
        Mainframe Software
        Client Software
        Cisco 6506 Network Switch Software
Appendix B
    Optimizations
        z/VM Optimizations
        Linux on z/VM Tuning
        Resource sharing
        OSA Adapters
    Linux
    WebBench
        z/VM
        Linux
        Apache
        Linux
        Samba
Appendix C
    NetBench Result Details and Mainframe CPU Utilization
Appendix D
    WebBench Result Details and Mainframe CPU Utilization

List of Tables and Figures

Table 1. File serving cost comparison summary
Table 2. Static (HTTP) web serving cost comparison summary
Figure 1. Test bed hardware topology
Table 3. Pre-Test Plan
Table 4. Test matrix and the relations C/S (Client/Server)
Table 5. NetBench Single Server Comparisons
Figure 2. NetBench Single Server Comparisons
Figure 3. Two CPU LPAR Linux (no z/VM)
Figure 4. Two CPU LPAR, one Linux Server Image, SuSE Linux V.8 under z/VM 4.03
Table 6. NetBench Linux Server Image Comparisons
Figure 5. NetBench Linux Server Image Comparisons
Figure 6. Throughput Four Virtual Linux Server NetBench
Figure 7. Average Response Time Four Linux Virtual Server NetBench
Figure 9. Accumulated z/VM CPU Usage Run, Twenty Virtual Linux Servers
Table 7. WebBench Single Server Comparisons
Figure 10. WebBench Single Server Comparisons
Table 8. WebBench Linux Server Image Comparisons
Figure 11. WebBench Linux Server Image Comparisons
Figure 12. Accumulated CPU Usage Four Virtual Linux Servers
Table 9. NetBench Max Average Throughput
Figure 13. NetBench Linux server images
Table 10. NetBench Cost Comparison
Table 11. File serving cost comparison summary
Table 12. WebBench Cost Comparison


Executive Summary

For the past several years, IBM has positioned its z900 mainframe as a superior platform for consolidation of Windows® servers, especially for file serving and web serving. IBM provides many examples, both real and hypothetical, where millions of dollars are saved. IBM's premise is that a z900 processor running Linux under z/VM is the equivalent of three to four Intel processors of the same clock speed (900 MHz) running Windows server operating systems, each running at full capacity, for data intensive tasks such as file serving. IBM further assumes that the typical Windows server operates at less than 5 percent of capacity, so that a single z900 processor could consolidate over one hundred Windows servers. However, IBM has provided no data based on standard, cross-platform benchmarks to prove these claims, citing that each customer situation is different and has to be studied independently. Customers have expressed concerns about these claims, especially considering the significant improvement in both PC server technology and in Windows server operating system functionality and performance over the past few years.

Microsoft® felt that customers should have objective benchmark data on hand in order to determine the validity of IBM's claims, measuring the IBM mainframe against Windows servers running the same standard file serving and web serving benchmarks. In the absence of IBM providing these benchmarks to customers, Microsoft has chosen to do so. The Mainframe Linux Benchmark Project assembled a team of mainframe Linux experts, made available to them suitable IBM technology and PC clients, and gave the team the tools and freedom to do the best benchmark possible. The team optimized for maximum mainframe performance without employing extraordinary measures that would be impossible for customers to reproduce or that would invalidate licenses, such as building customized operating system kernels.

The facility used by the team included a dedicated two CPU IBM z900 1C6 LPAR with 24 Gigabytes of memory providing an estimated 425 MIPS of power, 1.2 Terabytes of dedicated IBM ESS (Shark) storage, two dedicated fiber channels (IBM's latest FICON Express channels), two OSA adapters with four dedicated Gigabit Ethernet ports, a current generation Cisco 6500 Series Switch with 5 Gigabit Ethernet connections, and a ninety-six PC client pool to drive the benchmarks. The PCs were co-located in the data center with the mainframe to eliminate possible network latency delays. The latest stable versions of the industry standard Ziff Davis Media PC Magazine NetBench™ and WebBench™ test suites were used. Mainframe software included z/VM version 4.03 and SuSE Enterprise Linux version 8, which includes Samba for file serving and Apache for web serving. See Appendix A, Configurations, for more details. All software and hardware were at the latest fix levels.

A detailed project plan was developed and reviewed by an independent auditor from the META Group. The tests were conducted over a two-month period without interruption. The tests included twenty-five NetBench iterations, one run of the suite for each increment in the number of Linux server images used, from one to ninety-six. The tests consumed more than four hours for each set of servers. The tests also included WebBench, which was run in a comparable fashion, taking 2.15 hours per set of server images. Over 400 hours of testing were conducted during the two months as we sought to achieve the best possible results.
The META Group, an independent consultancy, has audited the plan, the facility, the tests, and the final report. META Group was asked to verify that the benchmark configuration and procedures were appropriate but was not asked to endorse the results one way or the other. Likewise, neither VeriTest nor Ziff Davis Media, who provided the PC Magazine benchmark test suites, was approached about endorsing the results.

Based on IBM's marketing, expectations going into the project were that mainframe Linux would produce results at the higher end of Windows server performance. The results turned out quite the opposite.

NetBench Results without z/VM

The IBM z900 two processor LPAR achieved 14 percent less performance than an Intel-based server with two 900 MHz Intel Xeon processors running Windows Server™ 2003. The Windows configuration was benchmarked in an independent test performed at the request of Microsoft by VeriTest, a division of Lionbridge Technologies Ltd. The VeriTest benchmark was conducted under similar circumstances using an identical version of NetBench and WebBench. The VeriTest study, conducted in April 2003, is available for download at http://www.veritest.com/clients/reports/microsoft/ms_performance_updated.pdf.

The results showed that, contrary to IBM's claims, one z900 processor is clearly not equivalent to three or four Intel processors for data intensive workloads. It is not even equivalent to a single 900 MHz Intel Xeon processor running Windows Server 2003. The IBM z900 two processor LPAR was also not equivalent to a two processor Intel server using 900 MHz Intel Xeon CPUs and running Windows Server 2003. Both the z900 and 900 MHz Intel servers have been succeeded by later generation products, but there is no reason to think these results aren't indicative of what would happen if IBM's newest z990 processor were compared to the latest 3.0 GHz Intel Xeon processor.

On the NetBench Enterprise DiskMix suite for testing file serving, the z900 achieved only 546 Megabits per Second maximum throughput, compared to the 632 Megabits per Second maximum throughput the Windows server achieved in the VeriTest study. The IBM z900 results were achieved running a single Linux server image on the two processor LPAR without z/VM. The Windows result was achieved with a single Windows image on the two processor 900 MHz Intel Xeon-based server.

NetBench Results with z/VM

z/VM, which is required to run multiple virtual Linux servers on the mainframe, exerts a heavy penalty on mainframe performance for file serving. The conventional rule of thumb is that z/VM results in only a 5 percent penalty on mainframe performance. This may be true for combination workloads involving computation as well as file I/O, but for true file serving, the penalty measured by the Mainframe Linux Benchmark Project was 24 percent at maximum throughput with four virtual Linux server images. Overhead exceeded 48 percent with ninety-six server images.

Overall, the highest NetBench results for Linux on z/VM were 417 Megabits per Second throughput with four Linux server images and sixty clients. Additionally, the z900 started generating read errors on the clients after twenty server images were reached, resulting in the benchmark software dropping clients. At twenty server images with ninety-six clients, the maximum throughput achieved was 288 Megabits per Second. At ninety-six server images, and ninety-five clients with one dropped client, the mainframe achieved 199 Megabits per Second maximum throughput. This means that the maximum average throughput per server at ninety-six servers was only 2.071 Megabits per Second, and that it would take one of these server images 38.62 seconds to serve one 10 Megabyte file.
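As a worked check of the per-server figure (noting that a 10 Megabyte file is 80 Megabits), the arithmetic is simply:

    \[ \frac{199\ \text{Mbit/s}}{96\ \text{server images}} \approx 2.07\ \text{Mbit/s per image},
       \qquad
       \frac{10\ \text{MB} \times 8\ \text{bit/byte}}{2.071\ \text{Mbit/s}} \approx 38.6\ \text{s} \]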

NetBench Conclusion

IBM's claim of hundreds of Windows servers at 5 percent capacity being consolidated on one or two z900 processors is not supported by these results.


WebBench Results

Windows Server 2003 proved to be superior to the z900 for static web page serving, providing nearly three times the performance on the WebBench benchmark. The Apache Web Server running on the two CPU IBM z900 Linux LPAR achieved 5,042 requests per second for static web page serving with sixty clients in a Linux-only configuration with no z/VM. This compares to 14,214 requests per second for a two processor Intel 900 MHz Xeon server running Windows Server 2003 in the VeriTest benchmark.

The highest z/VM performance for the IBM two CPU z900 LPAR again was with four Linux server images, this time at seventy-six clients, at 3,428 requests per second. As it did in the NetBench test, the z/VM performance declined steadily until it reached 1,562 requests per second with ninety-six server images. The z/VM performance penalty was higher with WebBench than it was with NetBench, reaching a 70 percent degradation when comparing the Linux native single server image results to the ninety-six server image results. However, unlike file serving, no errors were encountered: adding server images had the cumulative effect of reducing requests per second and throughput, but it did not cause read errors or dropped clients.

WebBench Conclusion

Simply put, the results on the WebBench benchmark showed that Web server consolidation on a z900 running Linux makes no sense. The two CPU LPAR under z/VM at its maximum provided performance about equal to a single 900 MHz Intel Xeon-based server running Windows NT 4.0, again based on the VeriTest report. Keep in mind, however, that Windows NT 4.0 is seven-year-old technology released in 1996. The performance improvement of Windows Server 2003 over Windows NT 4.0 was over 162 percent in that study.

With web serving, one does not have the same legacy issues one has with file serving. One also does not have the isolation and separation of application issues to the extent that one would have in a file server environment. Therefore, given the disparity in performance, not even considering the additional negative disparity in cost, it is clear that server consolidation for web serving on a mainframe is very hard to justify.

Cost Analysis

The following table summarizes the file server financial analysis. Three mainframe scenarios for file server consolidation are presented along with one Windows scenario. Column one is the z900 two CPU LPAR at a fully loaded estimated enterprise cost including CPU, memory, z/VM software, disks, hardware and software maintenance, and Linux support. Column two assumes existing capacity and is the mainframe with the CPU at no charge, half price for memory, and the rest at estimated enterprise costs. Column three is the cost for new capacity from IBM at its special Linux pricing rates. Column four is the VeriTest benchmark configuration for Windows Server 2003 running on a two CPU 900 MHz Intel Xeon server.

It is interesting to note that the closest relative comparison between the z900 and the Windows Server 2003 server is that between the existing capacity mainframe scenario running Linux alone and the Intel-based server. But even here, the cost differential as expressed in cost per Megabit per Second is over a factor of ten in favor of Windows Server 2003. The Microsoft solution is only 8.7 percent of the cost of the least expensive IBM scenario, where many of the resources are considered "free".


| Cost measure | Mainframe Base Cost | Mainframe Sunk Cost | Mainframe IFL (New capacity) | Windows Server 2003, two 900 MHz Intel Xeon |
| Annualized Costs | $479,118 | $252,849 | $407,556 | $25,440 |
| Cost per Megabit Throughput per Second – Linux under z/VM | $1148.34 | $606.02 | $976.82 | $40.25 |
| Cost per Megabit Throughput per Second – Linux LPAR, no z/VM | $878.74 | $463.75 | $747.49 | $40.25 |
| Cost per Megabit Throughput per Second – Linux LPAR, no z/VM, no z/VM software costs | $830.11 | $414.83 | $688.45 | $40.25 |

Table 1. File serving cost comparison summary

As one would expect, the financial results of the WebBench comparison are even more dramatic, as seen in Table 2. Here, the smallest disparity between Windows Server 2003 and mainframe Linux is a factor of more than twenty-five, with Windows at only 3.7 percent of the cost per Peak Request per Second compared to mainframe Linux.

| Cost measure | Mainframe Base Cost | Mainframe Sunk Cost | Mainframe IFL (New capacity) | Windows Server 2003, two 900 MHz Intel Xeon |
| Annualized Costs | $470,899 | $244,416 | $393,163 | $25,440 |
| Cost per Peak Requests per Second – Linux under z/VM | $137.37 | $71.23 | $114.69 | $1.79 |
| Cost per Peak Requests per Second – Linux LPAR, no z/VM | $93.40 | $48.42 | $77.98 | $1.79 |
| Cost per Peak Requests per Second – Linux LPAR, no z/VM, no z/VM software costs | $88.16 | $43.19 | $72.76 | $1.79 |

Table 2. Static (HTTP) web serving cost comparison summary
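The cost-per-unit rows in Tables 1 and 2 appear to follow from dividing the annualized cost by the peak benchmark result reported for each scenario; for example, using the figures given earlier in this summary:

    \[ \frac{\$479{,}118}{417.2\ \text{Mbit/s}} \approx \$1{,}148 \text{ per Mbit/s (z900, Linux under z/VM)} \]
    \[ \frac{\$25{,}440}{632\ \text{Mbit/s}} \approx \$40.25 \text{ per Mbit/s (Windows Server 2003, NetBench)} \]
    \[ \frac{\$25{,}440}{14{,}214\ \text{requests/s}} \approx \$1.79 \text{ per request/s (Windows Server 2003, WebBench)} \]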

Conclusion

Given that the industry standard Ziff Davis Media NetBench and WebBench benchmarks are representative of file serving and web serving, the Mainframe Linux Benchmark Project demonstrates that the IBM z900 mainframe running Linux by itself or under z/VM is much less capable and vastly more expensive than Windows Server 2003 as a platform for server consolidation. Customers considering using IT resources to implement IBM's recommendation would be wise to consider allocating those resources to server consolidation solutions based on Windows Server 2003. At a minimum, they would achieve higher performance levels at a significantly lower cost.


Introduction

Purpose of the Benchmark

For the past two years, IBM has been attempting to convince its mainframe enterprise customers to use Linux on IBM mainframes in order to achieve significant cost savings through consolidation of both UNIX and Windows servers, especially for file and print and web serving workloads. IBM provides other rationales for Linux on the mainframe as well, for example, simplifying the web server/firewall/application server/back-end legacy database server scenario through use of Linux for web serving and possibly application serving. Another rationale could be in developing Enterprise Application Integration scenarios involving Linux web-facing front-ends combined with legacy System/390 back-ends. Having all these components on one Parallel Sysplex connected internally achieves, in IBM's view, superior throughput, availability, redundancy, and security. However, it is commodity server consolidation where IBM has placed great emphasis and resources, and this represents the major use of mainframe Linux by enterprise customers to date.

Early on, Linux on the mainframe was viewed as a good choice for consolidating lower utilization legacy UNIX workloads. More recently, IBM has begun emphasizing the consolidation of Windows server workloads, particularly in situations involving Windows NT and file/print and email/Exchange servers, as well as web servers. Even though IBM has been claiming superior price/performance for Linux on the mainframe compared to Windows running on Intel-based servers, it has not provided proof of its claims in the form of objective, quantified and repeatable benchmarks. IBM has released some synthetic benchmarks run without a network and some indices of performance that cannot be compared to other systems. Customers need to know how mainframe Linux stacks up, particularly against Windows Server 2003 solutions.

IBM's marketing material says that it is possible to run hundreds of Linux server images on z900 processors.1 However, IBM does not tell the marketplace how much z900 resource in terms of MIPS, memory, and channels is required to support its claims. To date, all IBM has provided are examples of specific mainframe configurations that may possibly have replaced collections of Intel and Sun Solaris servers. It isn't clear whether these examples are theoretical or actual. There are, however, IBM technical RedPapers and Redbooks available for download that provide some guidance in terms of sizing, implementation and tuning, but they then add the caveat that the customer not use the guidance for final sizing and that the customer needs to contact an IBM representative to get an official sizing.2 The specific RedPaper that is most relevant is "Server consolidation with Linux for zSeries", published in July 2002. In this paper,3 IBM states that for data-intensive tasks one z900 processor is the equivalent of three to four Intel Windows servers, and provides as an example replacing one hundred Intel Windows servers at 5 percent utilization with two to three z800 processors running at 80 percent utilization. A z900 processor running at 100 percent is roughly equivalent to two z800 processors running at 80 percent. In the absence of publicly available guidance for customers to consider, Microsoft has embarked on collecting performance data on its own.

1 p. 11, Winning with Consolidation: Optimizing Your I.T. Infrastructure, IBM, May 2002.
2 p. 7, Server Consolidation with Linux for zSeries, IBM RedPaper, Erich Amrehn, Joachim Jordan, Frank Kirschner, Bill Reeder, 2002.
3 p. 7, Server Consolidation with Linux for zSeries, IBM RedPaper, Erich Amrehn, Joachim Jordan, Frank Kirschner, Bill Reeder, 2002.


This report of the results of Microsoft's Mainframe Linux Benchmark Project provides some answers, both in terms of raw performance data points and in the interpretation of this data as relevant and usable cost/benefit information.

IBM's server consolidation rationale has been to compare low utilization Windows and UNIX servers against very high utilization IBM mainframe scenarios. The problem, though, is that at high utilization, performance can deteriorate. With the advent of virtualization technology and enhanced system management functionality, and the continued relentless march of Intel server technology (blades, Xeon processors, IA64 architecture, Itanium, Gigabit Ethernet, fiber connectivity and 3 GHz processors), the Windows environment is capable of achieving both high utilization and high bandwidth. The new Windows Server 2003 technology, combining hardware and software advances, appears to be better able to handle many of the data intensive and business-centric tasks for which the mainframe has been chosen in the past.

An objective of this project was to see how close the mainframe and Windows servers are in terms of absolute performance and cost for file serving and web serving. The results in this report should be viewed as objective evidence to help enterprise customers evaluate IBM's claims regarding the relative performance of Windows and mainframe servers. From the results reported here, which any customer with a mainframe facility can duplicate, new and more realistic comparisons of consolidation on Windows Server 2003 versus consolidation on Linux mainframes can be developed for specific customer situations.

Project Methodology

The major goal of the Mainframe Linux Benchmark Project was to determine the performance of Linux server images on the IBM z900 mainframe performing file serving and web serving tasks. The tests were to be run in an objective way using standard third party benchmarks and test scripts, control programs, and data. The tests have been fully documented, including all of the settings and parameters that were used (see Appendix B), and are completely reproducible, so that customers with or without mainframes will be able to have a good idea what the price/performance of comparable workloads would be on a variety of platforms, including the IBM mainframe, and the amount of effort necessary to optimize performance.

Ziff Davis Media's PC Magazine NetBench™ and WebBench™ test suites were run against varying numbers of z900 Linux server images. The tests employed as many as ninety-six physical PC clients, which were current generation 1.7 GHz Intel PCs running Windows XP Professional Service Pack 1, connected over Gigabit Ethernet links to a two processor z900 configuration with 24 Gigabytes of available memory and two FICON Express channels connected to a dedicated 1.2 Terabyte IBM Shark storage system (see Figure 1, Project Topology). See Appendix A, Configurations, for additional detail.


[Figure 1 (diagram): Project Topology – Hardware. Client side: ninety-six Compaq Evo 510d Ultra PCs (Pentium 4, 1.7 GHz) and the benchmark controller, connected through a Cisco 2924 XL switch to a Cisco Catalyst 6506. Mainframe side: IBM z900 2064-1C6 with two CPUs running z/VM, attached to the Catalyst 6506 through four OSA Gigabit Ethernet ports (OSA1 through OSA4) and to the Shark storage array over two FICON Express channels.]

Figure 1. Test bed hardware topology

The hardware configuration chosen was tailored to running a performance benchmark. The project team wanted to make sure that there would be no bottlenecks caused by the peripheral hardware or by the network topology. For example, the two FICON Express channels can handle up to 2 Gigabits per second of throughput each. Since there were four OSA ports, each capable of 1 Gigabit of throughput, neither the input nor storage access was a constraint on benchmark performance. It is true that ninety-six simultaneous clients could theoretically generate up to 9.6 Gigabits per second of throughput (96 times 100 Megabits each), more than double the capacity of the OSA ports. As it turned out, the NetBench maximum throughput at 545 Megabits per Second and the WebBench maximum throughput at 296 Megabits per Second did not come close to exhausting one Gigabit Ethernet adapter, much less four adapter ports.

In terms of the network topology, the ninety-six clients can be connected through the Cisco 6500 Switch to any of the OSA adapters on the z900. The z/VM guest Linux server images had to be assigned to specific OSA adapter ports. IBM recommends that no more than thirty server images be assigned to a specific port. Using more than thirty guests per port would require internal routing, which would have affected performance. As a result, for this project, the ninety-six Linux server images were assigned in groups of twenty-four to each of the four ports. The clients were attached through the Cisco switch, which can target the clients at any of the ports, so that one can easily run benchmark tests from one client to ninety-six clients against any server. This flexibility was valuable since a NetBench client can only be targeted at one server at a time; WebBench, on the other hand, permits one client to target many servers. Additionally, the ninety-six clients plus the benchmark controllers (connected to the Cisco 6500 through a Cisco 2924 switch) formed a LAN, so that the VeriTest server was able to communicate with all of the clients as well as the mainframe. For additional detail, see Appendix A, Configurations.

The Linux distribution used for this benchmark was SuSE SLES 8 (31-bit version). This is the Linux distribution chosen by a majority of enterprise mainframe customers. The project team used the standard SuSE kernel and optimized performance through use of parameters that are covered by SuSE's standard support and maintenance (see the optimization discussion in Appendix B). The SuSE distribution includes currently supported versions of Samba for file sharing and Apache for Web serving. Custom kernel building was not performed, since most customers would not be willing or able to perform or support such a customized environment.

The benchmark software chosen, NetBench Enterprise DiskMix 7.03 and WebBench 4.1, has been run many times and represented the most current stable versions available. The client software was installed using Ghost, with client images all cloned from a master image, thus guaranteeing the consistency of all the client configurations.

NetBench Description

The Ziff Davis Media NetBench Enterprise DiskMix Test Suite is a comprehensive simulation of real-world file serving. It uses many physical clients to make network-based file requests to a server and then records throughput and response times. It exercises twenty-one different operations in a scripted pattern with randomizing elements and consumes approximately four hours of elapsed time. The detailed operations, including number of calls and response times, are: Open File calls, Read response times, Read calls, Write response times, Write calls, Lock File response times, Lock file calls, Unlock file, Close, GetFile Attributes, SetFile Attributes, Rename File, Delete File, Create File, Find/Open, Find/Next, Find/Close, GetFile Time, SetFileTime, FlushFileBuffers, GetDiskFreeSpace.

The NetBench Test Suite provides a great deal of well-organized data for individual clients as well as aggregated data (see Appendix C). This information is captured by the benchmark controller, which, because it is located on a LAN along with the clients, does not impact the server performance being measured. In the case of the mainframe Linux benchmark, the standard test scripts provided with the benchmark were utilized. The only changes had to do with expanding the number of physical clients to ninety-six from the standard sixty, and with automating the running of the suites for varying numbers of servers, specifically the virtual Linux server images within z/VM.

The NetBench results are reported in detail with average, minimum/maximum, and standard deviations highlighted. The details are summarized for Total Throughput, Average Megabit per Second Throughput, and Average Response Time for each set of runs. Generally, the benchmark test suite is run through a number of iterations by number of clients for a single server, beginning with one and proceeding in groups of four until the maximum number of clients is reached. For the Mainframe Linux Benchmark Project, the benchmark test suite was run for all relevant clients for various numbers of virtual Linux server images, beginning with one and proceeding through four, six, eight, ten, twelve, fourteen, sixteen and twenty server images. Additionally, twenty-four, forty-eight, and ninety-six virtual Linux server image runs were attempted, although not with complete success, as the mainframe performance began to deteriorate. In addition to the tests with virtual Linux servers, tests were carried out for Linux running without z/VM in both a two CPU LPAR and a one CPU LPAR. The average throughput and average response time were then plotted.

Whether the test suite was successful is noted in the NetBench-produced reports, and if there are errors, they are assigned to a particular client. Once an error occurs, that client is dropped from all successive runs. In addition to the results reported by NetBench, the Mainframe Linux Benchmark Project also summarized the results by looking at the maximum average throughput per second achieved by number of server images and at the response time associated with that maximum. These results were compared against CPU utilization graphs produced inside z/VM and within the Linux images.

The NetBench Suite is widely considered the most representative benchmark process available for determining file-serving performance. Once configured, with standard test scripts being used, it is easily reproducible as long as the hardware and software topology are documented and can be replicated.
This is what has been done in this project.


WebBench Description

WebBench provides a way to measure the performance of web servers. WebBench uses client PCs to simulate web browsers. However, unlike real web browsers, the WebBench clients do not display the files that the server sends in response to their requests. Instead, when a client receives a response from the server, it records the information associated with the response and then immediately sends another request to the server. (Adapted from the Ziff Davis Media PC Magazine web site, WebBench FAQs.)

The WebBench series of tests includes dynamic web testing as well as static web testing. The Mainframe Linux Benchmark Project only performed static web page processing benchmarks. The reason for this was that WebBench 4.1 was selected as the most mature and stable of the WebBench suites available, but CGI source code was not available for compilation of CGI executables for WebBench 4.1 to run on the mainframe. Although WebBench 5.0 includes CGI source, it was not believed to be sufficiently stable to introduce into the environment at the time the benchmarks were performed.

WebBench's standard test suites provide two measures for the web server: requests per second and throughput as measured in bytes per second. In the Mainframe Linux Benchmark Project, we have provided both results, as well as CPU utilization, for a wide range of server image counts.

Benchmark Procedures

The objectives of the tests were to create enough data to plot the performance curves of both response time and throughput for NetBench (file serving), in order to determine the maximum number of Linux server images that could be accommodated on the z900 test bed, as well as the response time and throughput before degradation begins. For WebBench, the number of requests per second and the throughput as measured in bytes per second were obtained. Tests were also run with single server images in one and two CPU LPARs, both with and without z/VM. This provided the Mainframe Linux Benchmark Project with data on the effect of z/VM on the key metrics. The team also examined the CPU utilization of selected tests closely, and looked at and documented the CPU utilization of all tests. The test bed configuration was also analyzed for market costs so that direct price/performance information could be developed and compared to comparable Windows Server 2003 on Intel benchmark scenarios.

The Mainframe Linux Benchmark Project used a three-phase approach for implementation:

1) Development of a detailed Project Plan

2) Engage in an intensive Pre-Test exercise to:
   A) Validate the plan
   B) Test hardware/software to ensure smooth operations
   C) Tune/optimize software
   D) Establish limits of NetBench and WebBench performance to determine test parameters, where performance degradation begins, to establish final test iterations (which numbers of servers to test)
   E) Modify plan for final, formal testing

3) Conduct formal benchmark tests

The Project Plan covered procedures for conducting the tests, and for configuring and mapping both hardware and software. Implementation scripts were written to automate the testing process as much as possible, since time was limited and the team wanted to utilize the hardware fully during its availability. For example, the 18 GB of CPU memory available to Linux (6 GB was reserved for z/VM) was mapped against server images as follows:


| Number of server images | Number of active clients | Main memory per server image (MB) | Number of clients per share |
| 1  | 12 | 2048 | 12 |
| 1  | 24 | 2048 | 24 |
| 1  | 48 | 2048 | 48 |
| 1  | 72 | 2048 | 72 |
| 1  | 96 | 2048 | 96 |
| 24 | 24 | 768  | 1  |
| 24 | 48 | 768  | 2  |
| 24 | 96 | 768  | 4  |
| 48 | 48 | 384  | 1  |
| 48 | 96 | 384  | 2  |
| 96 | 96 | 192  | 1  |

Table 3. Pre-Test Plan (Note: the maximum memory for a 31-bit Linux server image is 2048 MB.)

The IBM RedBooks currently available recommend that at least 128 MB be available for each Linux server image; with the mapping above as actually implemented by the team, the least amount of memory per server image was 192 MB. The Pretest was performed using both one and two real CPUs to make sure that the two CPU z900 was not oversized for the maximum of ninety-six clients. The result certainly demonstrated that it was not oversized.

In order to develop accurate data points to create meaningful performance curves, only scenarios involving the same number of clients and with identical generated load per client were used. The question was: how will the mainframe handle this workload with different scenarios of server images? To ensure a complete benchmark, the test would ideally run against from one to ninety-six server images divided into granular steps. At least one client is needed for each server. Tests with fewer clients than servers (to find out the influence of idle Linux images) were not recommended and could not be performed, due to limitations in the NetBench client, where NetBench cannot point one client at many servers. As it turned out, due to mainframe limitations it was not feasible to go in granular steps from one to ninety-six clients for NetBench. The plan was changed to run more iterations at smaller virtual Linux server image counts and just forty-eight and ninety-six server images at the high end for NetBench, because of read buffer underflow and dropped client problems. For WebBench, the team was able to run steps that are more granular, from one to ninety-six server images. The tests were themselves conducted in granular steps of clients, in groups of four, as explained below.

In Table 4, the numeric ratios represent symmetric distributions of clients to servers on a particular row. These are fully comparable instances. The blank row elements are not fully comparable.

| Clients | One LPAR | 1 | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 | 72 | 80 | 88 | 96 |
| 1  | 1:1  | 1:1  | –    | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 4  | 4:1  | 4:1  | –    | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 8  | 8:1  | 8:1  | 1:1  | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 12 | 12:1 | 12:1 |      | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 16 | 16:1 | 16:1 | 2:1  | 1:1 | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 20 | 20:1 | 20:1 |      |     | –   | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 24 | 24:1 | 24:1 | 3:1  |     | 1:1 | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 28 | 28:1 | 28:1 |      |     |     | –   | –   | –   | –   | –   | –   | –   | –   | –   |
| 32 | 32:1 | 32:1 | 4:1  | 2:1 |     | 1:1 | –   | –   | –   | –   | –   | –   | –   | –   |
| 36 | 36:1 | 36:1 |      |     |     |     | –   | –   | –   | –   | –   | –   | –   | –   |
| 40 | 40:1 | 40:1 | 5:1  |     |     |     | 1:1 | –   | –   | –   | –   | –   | –   | –   |
| 44 | 44:1 | 44:1 |      |     |     |     |     | –   | –   | –   | –   | –   | –   | –   |
| 48 | 48:1 | 48:1 | 6:1  | 3:1 | 2:1 |     |     | 1:1 | –   | –   | –   | –   | –   | –   |
| 52 | 52:1 | 52:1 |      |     |     |     |     |     | –   | –   | –   | –   | –   | –   |
| 56 | 56:1 | 56:1 | 7:1  |     |     |     |     |     | 1:1 | –   | –   | –   | –   | –   |
| 60 | 60:1 | 60:1 |      |     |     |     |     |     |     | –   | –   | –   | –   | –   |
| 64 | 64:1 | 64:1 | 8:1  | 4:1 |     | 2:1 |     |     |     | 1:1 | –   | –   | –   | –   |
| 68 | 68:1 | 68:1 |      |     |     |     |     |     |     |     | –   | –   | –   | –   |
| 72 | 72:1 | 72:1 | 9:1  |     | 3:1 |     |     |     |     |     | 1:1 | –   | –   | –   |
| 76 | 76:1 | 76:1 |      |     |     |     |     |     |     |     |     | –   | –   | –   |
| 80 | 80:1 | 80:1 | 10:1 | 5:1 |     |     | 2:1 |     |     |     |     | 1:1 | –   | –   |
| 84 | 84:1 | 84:1 |      |     |     |     |     |     |     |     |     |     | –   | –   |
| 88 | 88:1 | 88:1 | 11:1 |     |     |     |     |     |     |     |     |     | 1:1 | –   |
| 92 | 92:1 | 92:1 |      |     |     |     |     |     |     |     |     |     |     | –   |
| 96 | 96:1 | 96:1 | 12:1 | 6:1 | 4:1 | 3:1 |     | 2:1 |     |     |     |     |     | 1:1 |
| #  | 25   | 25   | 23   | 21  | 19  | 17  | 15  | 13  | 11  | 9   | 7   | 5   | 3   | 1   |
| X  | 25   | 25   | 12   | 6   | 4   | 3   | 2   | 2   | 1   | 1   | 1   | 1   | 1   | 1   |

Table 4. Test matrix and the relations C/S (Client/Server). Linux server images run across the columns (servers) and clients run down the rows, increasing in steps of four clients per measurement. Legend:
# = possible combinations of clients and Linux server images (at least one client per server image)
X = number of client/server combinations with symmetric workloads to be tested
N:1 = ratio of clients to server images for a symmetric workload; 1:1 marks the point where the number of clients equals the number of server images
(blank) = available runs without symmetric workloads between clients and servers
– = not relevant (fewer clients than server images; shown as gray fields in the original matrix)

To get a full set of data points, to verify the slope of the performance curves, it was necessary to run some of the blank instances. The matrix indicates an optimal number of tests; the team expected that fewer tests would be run because of time constraints. As it turned out, more tests were run, primarily by utilizing off-shift automated test runs. For most of the test runs, twenty-five measurements were taken, so that quite a few of the "blank instances" were in fact tested. The fields with a numeric ratio show symmetric shared load to all server images; fields without a numeric ratio have a non-symmetric shared load, because there is a fixed mapping of server images to their shares and clients. When a new server image configuration was tested, script changes were necessary for both the client and server side of the benchmark. As a result, the complete NetBench scenario was run against one server configuration scenario and then switched to the next. This meant that the team tested one to ninety-six clients against one server, then one to ninety-six clients against four servers, and so on.
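The structure of the test plan can be checked programmatically. The short sketch below is only an illustration (the client and server-image counts are taken from Tables 3 and 4; the actual automation scripts are not reproduced here), but it yields the same per-image memory figures as Table 3 and the same "#" and "X" totals as Table 4:

    # Reproduce the memory mapping of Table 3 and the combination counts of Table 4.
    CLIENTS = [1] + list(range(4, 97, 4))                   # 1, 4, 8, ..., 96 -> 25 client counts
    SERVER_IMAGES = [1, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96]

    def memory_per_image_mb(images, linux_pool_mb=18 * 1024, cap_mb=2048):
        """18 GB was available to Linux; a 31-bit image can address at most 2 GB."""
        return min(cap_mb, linux_pool_mb // images)

    for s in SERVER_IMAGES:
        possible = [c for c in CLIENTS if c >= s]           # "#" row: at least one client per image
        symmetric = [c for c in possible if c % s == 0]     # "X" row: whole-number client:server ratio
        print(s, memory_per_image_mb(s), len(possible), len(symmetric))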

Test Process

The testing process proceeded according to the following schedule:


Setup
• Clients
• Benchmark Controller and Workstation
• Cisco Switches
• Mainframe hardware and software

Initial Testing
• Connectivity
• Ensuring smooth software operation and configuration
• Preliminary tuning/optimization of hardware/software

Pre-test
• Initial runs of WebBench and NetBench in single image configuration
• Examination of results for parameter tuning, timing and pretest planning
• Decision to pretest WebBench first while working on improving NetBench results and reliability
• Successful execution of WebBench test suites for one to ninety-six clients and one to ninety-six servers as delineated in the Project Plan
• Continued work on NetBench: parameter changes, downloads of NetBench code fixes, use or non-use of the z/VM Timer Patch for Linux, and experimentation with various OSA filling strategies
• Iterative execution of NetBench suites with examination of test results and parameter changes (see the optimization discussion in the appendices to this document)
• Final successful running of pretest NetBench suites for one to ninety-six clients and one to ninety-six servers in general accordance with the Project Plan (modifications to the plan noted below)

The formal NetBench test runs were conducted according to the following guidelines (a sketch of this prepare/run/collect cycle appears at the end of this section):

o Activate the required number of server images
o Prepare them for NetBench (mount disks, establish partitions, start Samba)
o Remap all clients and verify mapping
o Start and verify collection of system statistics
o Start NetBench for the specific scenario
o Stop statistic collection
o Shut down server images

The formal WebBench test runs were conducted using the following guidelines:

o Start the appropriate number of server images (See Appendix B for server thread discussion)

o Prepare them for WebBench (mount disks, establish partitions, start Apache, run the optimization script; see Appendix B)

o Remap all clients and verify mapping
o Start and verify collection of system statistics
o Start WebBench for the specific scenario
o Stop statistic collection
o Shut down server images

All final test runs for NetBench and WebBench with associated scripts, test data and configuration files were saved along with the associated z/VM (FCON) and Linux (SAR) statistics. These results were correlated based on run name and time stamp.
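For illustration, the sketch below shows the general shape of the automation used for one scenario's prepare/run/collect cycle. It is a minimal, hypothetical example: the guest names, volume paths, and file locations are illustrative only (the scripts actually used are not reproduced here), and it assumes key-based ssh access from the controller workstation to each Linux guest.

    import subprocess
    from datetime import datetime

    GUESTS = [f"linuxsrv{i:02d}" for i in range(1, 5)]        # e.g. a four-image scenario
    RUN = "netbench_4images_" + datetime.now().strftime("%Y%m%d_%H%M")

    def ssh(guest, command):
        # Run one command on a guest; abort the scenario if anything fails.
        subprocess.run(["ssh", guest, command], check=True)

    # Prepare each server image: mount the benchmark volume, start Samba (rcsmb is
    # the SuSE init wrapper), and begin collecting SAR statistics for later
    # correlation by run name and time stamp.
    for guest in GUESTS:
        ssh(guest, "mount -o noatime /dev/bench_vg/share_lv /srv/share")
        ssh(guest, "rcsmb start")
        ssh(guest, f"nohup sar -o /tmp/{RUN}.sar 30 480 >/dev/null 2>&1 &")

    # ... the NetBench controller (a Windows workstation) is now pointed at the
    # prepared shares and the suite is run for this scenario ...

    # Afterwards, pull the statistics back to the controller and shut the images down.
    for guest in GUESTS:
        subprocess.run(["scp", f"{guest}:/tmp/{RUN}.sar", f"results/{guest}_{RUN}.sar"], check=True)
        ssh(guest, "rcsmb stop && shutdown -h now")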


Optimizations and Observations

The following discussion of optimizations covers some of the key learnings from the course of the Mainframe Linux Benchmark Project. Implementing an optimized file serving benchmark for Linux under z/VM was not trivial, even for a team with significant mainframe Linux experience. Many of the optimizations occurred in response to problems in obtaining a stable benchmark. We are confident that we achieved optimal performance based on published recommendations from IBM and the experience of the mainframe Linux team.

Hardware Optimization

The Cisco 3500 and 2500 Series switches initially selected for the test network were found to be insufficient, in that they could not easily deploy ninety-six clients through four Gigabit adapters and simultaneously connect the benchmark controller. As a result, the switches were upgraded to the Cisco 6506, with an auxiliary switch (Cisco 2924) for the benchmark controller and workstation connected to the main switch over a Gigabit connection. This provided five 1 Gigabit Ethernet connections: four connected the Cisco 6506 to the mainframe, and the other connected the 6506 to the auxiliary Cisco switch where the benchmark controller and workstation were attached. Initially, long 100 Megabit copper runs connected the benchmark controller to the PC clients. These runs were shortened to eliminate network latency that was thought to be causing the initially poor results. For the final tests, all the client workstations, the benchmark controller, and the workstation were located within the computer room close to the mainframe.

The 18 GB allocation for Linux server images was divided by the number of server images, with a maximum of 2 GB per server (the maximum addressable by a 31-bit operating system). The minimum main storage available at ninety-six server images was still well above the minimum recommended by IBM in the relevant RedBook. The 6 GB allocation for z/VM was also considered optimal. Based on the amount of memory available both to z/VM and to the Linux server images, no swapping of memory to disk was observed. However, sufficient 834 cylinder disks (one quarter of a 3390-3) were attached to each server image so that there was more swap space available than allocated memory, should it have been required. For further details, see the optimization discussion in Appendix B.

The dedicated IBM ESS storage area network with 1.2 Terabytes provided more storage than was necessary for the benchmarks. For NetBench, the workspace is an LVM (Logical Volume Manager) volume consisting of two 3390-3 physical volumes. LVM is used for file serving on the mainframe because of the small size of the 3390-3 physical volumes; LVM permits clustering of multiple physical volumes for file serving. Within Linux, the ext2 file system was run on the logical volume and mounted with noatime (no access time updates on the file system); only the root file system was run as ext3, which is preferred in the event of an improper shutdown. Further information is in Appendix B. There was no additional hardware tuning, because all the storage and communication devices on the mainframe were completely dedicated to the benchmark tests.
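For illustration only, the NetBench workspace mount described above could be expressed with /etc/fstab entries of roughly the following form; the volume group, logical volume, device, and mount point names here are hypothetical, and the definitions actually used are those documented in Appendix B:

    # LVM volume spanning two 3390-3 DASD physical volumes, ext2, no access-time updates
    /dev/bench_vg/share_lv   /srv/share   ext2   noatime    1 2
    # root file system on ext3 (journaled), as described above
    /dev/dasda1              /            ext3   defaults   1 1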

Software Optimization

Early testing provided some counter-intuitive results, especially with regard to the OSA adapters and the way they work with Linux server images under z/VM. The first observation was that running without z/VM and working with a single consolidated Linux LPAR was extremely sub-optimal: the network was configured so that all clients were communicating through a single adapter in the single image scenario. However, the preliminary benchmark results showed that the mainframe was unable to use all the bandwidth of even a single Gigabit adapter; therefore, this limitation did not affect the benchmark results.

On the other hand, the Project Plan carefully plotted the allocation of the clients through the Cisco switch to the four adapter ports. Initially, each client was to be directed to a specific Gigabit port through allocations in groups of four, one to each port in order. Exhaustive testing showed that the counter-intuitive approach of filling each adapter before going on to the next resulted in significantly better mainframe performance. As a result, the NetBench and WebBench final tests reported below were carried out using that approach. This may not have been as important for WebBench as for NetBench, because the WebBench application works differently than the NetBench software: rather than assigning a client to a specific server image, WebBench uses a round robin approach to direct requests to server images, so that requests are filled equitably. However, there was a significant improvement in WebBench performance between the Pre-test and the Final Test because of optimizations. See the Appendix B discussion of OSA Adapters and the Timer Patch.

In terms of general software tuning, no special z/VM parameters were selected. This was because the Quickdsp (quick dispatch) option was selected for all Linux server images (VM guests). This option provides better performance than the "SET SRM" settings mentioned in the IBM Performance RedBook, which work on "regularly scheduled" guests. Both Linux and Samba were tuned. All the specifics are available in the configuration files and directory entries, and additional discussions are in Appendix B.

One of the project breakthroughs in improving performance was within the Samba configuration file. The early tests showed an increasing number of network errors as the number of server images increased, as well as a precipitous decline in throughput. This was attributed to read overruns due to the 4K/8K blocksize specified in the socket options parameter of the Samba configuration file. By doubling the "adaptive transmit threshold" from twelve to twenty-four, errors were reduced. The NetBench results reported here used the 4K setting because it was possible to get more server images running before the clients started recording read errors. The summary graphs and tables are described in the next section, following a brief illustration of the settings just discussed.
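Purely as an illustration of the two settings discussed above (the complete configuration files and z/VM directory entries used for the benchmark are the ones reproduced in Appendix B), a Samba socket options line with 4K buffers and a guest directory option for quick dispatch take roughly this form:

    # smb.conf (illustrative): 4K send/receive buffers, as used for the reported NetBench runs
    socket options = TCP_NODELAY SO_SNDBUF=4096 SO_RCVBUF=4096

    * added to each Linux guest's z/VM user directory entry (illustrative)
    OPTION QUICKDSP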

NetBench Results

The highest average throughput measured by the NetBench controller was for a single server image running on the two CPU LPAR without z/VM (see Table 5 and Figure 2 below). This was with fifty-two clients; the result was 546.086 Megabits per Second, with an average response time of 1.520 Milliseconds. Table 5 reports the peak throughput achieved in the single server image test runs for varying numbers of clients; the detailed test reports in the appendices show the individual runs with all client instances. There is an inverse correlation between response time and maximum throughput: the shorter the response time, the greater the throughput, once a reasonable level of work is generated by client requests. Thus, at ninety-six clients, the single-image Linux two CPU LPAR test reached 516.302 Megabits per Second with an average response time of 2.969 Milliseconds. Based on Linux sar measurements, total CPU utilization was relatively constant, approaching 100 percent once fifty-two clients were engaged, with approximately 83 to 87 percent of utilization coming from system time and 13 to 17 percent from user time (see Figure 3).

Number of CPUs/Clients at maximum performance level     Maximum Average Throughput (Megabits per Second)     Response Time (Milliseconds) at Max Throughput
one CPU z/VM / eight Clients                            150.597                                              0.847
two CPU z/VM / forty Clients                            257.927                                              2.483
one CPU Linux LPAR / ninety-six Clients                 267.679                                              5.733
two CPU Linux LPAR / fifty-two Clients                  546.086                                              1.520

Table 5. NetBench Single Server Comparisons



Figure 2. NetBench Single Server Comparisons

Single-image file serving, however, is not the usual mainframe server consolidation scenario. The usual example involves running Linux under z/VM so that one can have multiple server images corresponding to different legacy server workloads, server identities, or server owners. The typical example found in IBM RedPapers and in marketing presentations involves one server image for each server to be consolidated. There appears to be a heavy penalty for this level of virtualization, at least on the z900 mainframe when running the NetBench suite. As Table 5 and Figure 2 demonstrate, there is over a 50 percent drop in throughput when comparing the two CPU LPAR running a single server image under z/VM to the best-performing configuration running without z/VM. The z/VM run did, however, achieve lower CPU utilization, at roughly 170 out of 200 percent (100 percent for each of the two processors, or 85 percent overall), and did not sustain this level as long as the Linux LPAR did (Figure 4).
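The virtualization penalty quoted above is simply the relative drop between the Table 5 peak-throughput figures; the following minimal sketch, included only as an illustration, shows the arithmetic.

```python
# Illustrative only: relative throughput drop between the Table 5 peaks.
def penalty_percent(native: float, virtualized: float) -> float:
    return 100.0 * (native - virtualized) / native

native_2cpu = 546.086   # two CPU Linux LPAR, no z/VM (Mbit/s)
zvm_2cpu = 257.927      # two CPU LPAR, single image under z/VM (Mbit/s)
print(f"{penalty_percent(native_2cpu, zvm_2cpu):.1f}% drop under z/VM")
```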


Figure 3. Two CPU LPAR Linux (no z/VM)

Figure 4. Two CPU LPAR, one Linux Server Image, SuSE Linux V.8 under z/VM 4.03


Table 6 summarizes the peak throughput per second for NetBench for Linux under z/VM, showing the maximum throughput per second for each run of server images. Each peak occurred at a specific client load; the response time reported is the one at that peak, not necessarily the absolute minimum for the run, although when it was not the absolute minimum it was close. Typically, the test runs for each group of server images reached an asymptote once a certain level of client activity was achieved, with the asymptote continuing through ninety-six clients and typically not rising at the end. The highest maximum average throughput for z/VM was achieved with four Linux server images and ninety-two clients: 417.228 Megabits per Second with an average response time of 3.521 Milliseconds. The throughput degraded gracefully and the response time increased gradually until, at twenty server images and ninety-six clients, the results were 288.167 Megabits per Second of maximum average throughput with an average response time of 5.344 Milliseconds. Unfortunately, after twenty server images the benchmark controller encountered errors on the mainframe. These errors took the form of Read Underflows, i.e., short reads, indicating saturation of the mainframe LPAR. This problem was not experienced in the WebBench tests, where the IBM mainframe reacted predictably to the load. Because of these errors, only final tests at twenty-four, forty-eight, and ninety-six server images were conducted above the last complete and successful test at twenty server images. The NetBench controller dropped one client at twenty-four server images, beginning at Client 52. It dropped thirty-one clients with forty-eight server images, beginning at Client 48, its initial run; as discussed in the methodology section, NetBench requires a minimum of one client per server image, so at forty-eight server images the initial run occurred with forty-eight clients. NetBench dropped only one client at ninety-six server images; possibly the one-server-to-one-client ratio at ninety-six server images, where there is only a single test run, resulted in only one being dropped. There was also a response time anomaly at forty-eight server images, possibly because so many clients were dropped that response time decreased, although this drop in response time did not result in increased throughput. The maximum throughput per second results for the failed runs as well as the successful runs are included in Table 6 and Figure 5 below.

Number of server images / # of Clients     Maximum Average Throughput (Megabits per Second)     Response Time (Milliseconds) at Max Throughput
4 Servers / 92 Clients                     417.228                                              3.521
6 Servers / 92 Clients                     404.154                                              3.640
8 Servers / 92 Clients                     377.019                                              3.897
10 Servers / 96 Clients                    361.924                                              4.232
12 Servers / 96 Clients                    334.230                                              4.599
14 Servers / 96 Clients                    321.305                                              4.778
16 Servers / 96 Clients                    311.287                                              4.960
20 Servers / 96 Clients                    288.167                                              5.344
24 Servers / 95 Clients*                   271.674*                                             5.593*
48 Servers / 64 Clients*                   231.589*                                             4.429*
96 Servers / 95 Clients*                   198.880*                                             7.692*

Table 6. NetBench Linux Server Image Comparisons by Max Throughput, two CPU LPAR, Linux under z/VM

Note: * indicates that clients dropped due to mainframe Read Underflows may have affected the results.



Figure 5. NetBench Linux Server Image Comparisons by Max Throughput two CPU LPAR, Linux under z/VM

(Dropped clients due to Mainframe Read Underflows may have affected Results for twenty-four, forty-eight, & ninety-six Servers)

The average throughput per second approached an asymptote in the NetBench runs, predominantly as higher numbers of clients were engaged (see Figure 6 for the four-server-image run as a sample). There was also a corresponding increase in response time at the higher numbers of clients within the individual server runs (see Figure 7). Peak CPU utilization was also reached, as shown in Figure 9, "Accumulated z/VM CPU Usage, Twenty Server Images." Figure 8, the accumulated CPU utilization for four server images, shows slightly lower utilization at the very highest numbers of clients, but still well in excess of 95 percent. As a result, it is clear that the maximum z/VM throughput per second was achieved at four server images and ninety-two clients. It is also clear that a two CPU z900 Turbo LPAR is incapable of supporting a hundred active server images while running the NetBench simulation, even at a moderate load.



Figure 6. Throughput Four Virtual Linux Server NetBench


Figure 7. Average Response Time Four Linux Virtual Server NetBench


Figure 8. Accumulated z/VM CPU Usage Run, Four Virtual Linux Servers, NetBench


Figure 9. Accumulated z/VM CPU Usage Run, Twenty Virtual Linux Servers

WebBench Results

In WebBench testing, the single most meaningful measure is the number of Requests per Second. The tests summarize the results for both Requests per Second and Total Throughput (Bytes per Second); the summary is obtained by adding up the individual Requests per Second and Throughput for each client. The peak results of both are reported in the tables below. In general, the highest Total Throughput coincides with the peak Requests per Second. Occasionally the highest total throughput for a given number of server images occurs at a different number of clients, but in those cases the differences in Requests per Second are quite small: typically both Requests per Second and Throughput have reached an asymptote, and the remaining variation is minor, especially given the randomizing function that drives the benchmark. The decision was made to report the Total Throughput that corresponded to the maximum number of Requests per Second rather than illustrating both sets of numbers at different numbers of clients. Again, as with NetBench, the highest measure occurred in the two CPU LPAR Linux without z/VM, single server image scenario (see Table 7 and Figure 10). The two CPU z900 LPAR achieved 5,042 Requests per Second with sixty clients. The results scaled well, with Requests per Second approaching 5,000 beginning at twenty-four clients and continuing to hover around 5,000 through ninety-six clients. According to the Linux-produced CPU utilization report, as depicted in the time-stamped Figure 11, the two-processor LPAR approached 100 percent utilization beginning at the fourth test, which would be twelve clients, and continued at close to 100 percent through the course of the testing. User utilization was slightly over 40 percent and system utilization was close to 60 percent.
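A minimal sketch of the per-client summation described above follows; the record layout and the sample values are hypothetical, for illustration only, and do not reflect WebBench's actual report format.

```python
# Hypothetical per-client records; WebBench's own report format differs.
clients = [
    {"requests_per_sec": 52.6, "bytes_per_sec": 381_000},   # made-up sample values
    {"requests_per_sec": 51.9, "bytes_per_sec": 376_500},
    # ... one entry per client engine ...
]
total_rps = sum(c["requests_per_sec"] for c in clients)
total_bytes = sum(c["bytes_per_sec"] for c in clients)
print(f"{total_rps:.3f} requests/s, {total_bytes:,.0f} bytes/s")
```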


Also as with NetBench, there is a distinct z/VM penalty in terms of performance. The two CPU LPAR running Linux natively delivered slightly more than double the result of the two CPU LPAR running a single server image under z/VM, which achieved 2,507 Requests per Second. The one CPU LPAR runs achieved a bit less than half the results of the two CPU LPAR runs: the one CPU native Linux result was 2,337 Requests per Second and the one CPU Linux under z/VM result was 1,166 Requests per Second.

Number of CPUs/Clients               Maximum Requests per Second     Throughput in Bytes per Second
One CPU z/VM / four Clients          1,166.412                       7,059,165.125
Two CPU z/VM / sixty Clients         2,507.433                       15,061,539.64
One CPU Linux LPAR / sixty Clients   2,337.038                       14,011,151.579
Two CPU Linux LPAR / sixty Clients   5,042.375                       36,592,049.314

Table 7. WebBench Single Server Comparisons


Figure 10: WebBench Single Server Comparisons


Figure 11. Two CPU Linux LPAR

Table 8 contains the peak results for WebBench under z/VM in the two CPU LPAR. Because no errors were encountered during WebBench testing, collecting data on runs using between twenty-four and ninety-six server images was more straightforward than was the case with NetBench. As with NetBench, the maximum number of Requests per Second and the highest Total Throughput were achieved with four servers; in the WebBench case, this occurred with seventy-six clients. The peak achieved was 3,428 Requests per Second. This is 33 percent less than the result for the two CPU LPAR Linux without z/VM, indicating that the z/VM penalty is even greater for web serving than for file serving. The maximum result then declined as the number of server images increased, although not as uniformly as with NetBench, to the point where performance fell to 1,562 Requests per Second at ninety-six servers and eighty-four clients. Again as with NetBench, the z/VM penalty running WebBench increased with the number of server images: at ninety-six server images it was 69 percent, far greater than the 48 percent decline experienced at ninety-six server images with NetBench. The graph of Accumulated z/VM CPU Usage (Figure 12) shows that at four virtual Linux servers the z900 was already approaching 100 percent utilization. (The graph's scale runs to 200 percent, indicating that two CPUs are being monitored in the LPAR.) With four images driving 100 percent utilization to process 3,428 Requests per Second, it is clear that adding more virtual Linux servers diverted CPU resources away from Apache and toward running the virtual servers, accounting for the 54 percent decline to the 1,562 Requests per Second processed by ninety-six virtual Linux servers.


Number of server images / Number of Clients     Requests per Second     Throughput in Bytes per Second
1 Server / 60 Clients                           2,507.433               15,061,539.641
4 Servers / 76 Clients                          3,428.388               20,703,393.236
8 Servers / 88 Clients                          2,938.946               17,687,099.110
12 Servers / 96 Clients                         2,976.629               17,802,265.798
16 Servers / 88 Clients                         2,614.679               16,011,264.02
20 Servers / 88 Clients                         2,759.713               16,754,751.954
24 Servers / 32 Clients                         2,341.433               14,193,634.314
28 Servers / 92 Clients                         2,037.862               12,757,114.056
32 Servers / 88 Clients                         1,939.600               11,776,365.353
36 Servers / 96 Clients                         2,125.950               12,809,707.361
40 Servers / 76 Clients                         1,973.224               11,937,129.986
44 Servers / 88 Clients                         1,914.963               11,611,717.213
48 Servers / 88 Clients                         1,795.371               10,823,556.415
56 Servers / 96 Clients                         2,059.758               12,449,263.188
64 Servers / 96 Clients                         1,931.573               11,734,103.641
80 Servers / 96 Clients                         1,832.183               11,077,371.532
88 Servers / 96 Clients                         1,765.429               10,633,839.219
96 Servers / 84 Clients                         1,562.233               9,512,007.422

Table 8. WebBench Linux Server Image Comparisons by Maximum Requests per Second, Linux under z/VM, two CPU LPAR, z900 1C6


Figure 11. WebBench Linux Server Image Comparisons by Maximum Requests per Second, Linux under z/VM, two CPU LPAR, z900 1C6


Figure 12. Accumulated CPU Usage Four Virtual Linux Servers

Analysis of Results

IBM has talked about the ability to consolidate hundreds, even thousands, of Windows servers using mainframe technology. However, the examples chosen are set at a very low utilization, typically 5 percent. In addition to the unusually low utilization, the unspoken assumption is that the servers being consolidated are at least two generations obsolete and are most probably running early versions of Windows NT. There is no other way to give any credence to IBM's claims based on the findings of this study. Moore's Law continues to be a factor in the Intel world. The 900 MHz server, which was state of the art two years ago, has been replaced by 3 GHz servers today. Scalability, which was at eight processors for Intel-based Windows servers three years ago, is at sixty-four today. Peripherals have also been improving, with Fibre Channel Storage Area Networks attached to Intel servers, Gigabit Ethernet adapters, and recently TOE (TCP Offload Engine) adapters, which take TCP/IP overhead off the CPU. Some state-of-the-art high-speed peripheral advances are reaching the Intel world before they reach the IBM mainframe. Most important from the standpoint of this project, however, is the maturing and improvement of the base operating system and associated infrastructure software of Windows servers. Virtualization, advanced systems management, better security, and improved systems administration have brought to the Windows world many of the features that have kept mainframes at the heart of mission-critical enterprise applications. The Windows server has also been much improved in both file serving and web serving performance; the VeriTest benchmark comparing Windows Server 2003 to Windows 2000 and Windows NT 4 demonstrated that. For example, Windows Server 2003 tested on an eight-way multiprocessor delivered 485 percent more performance than Windows NT 4 and 355 percent more than Windows 2000 Advanced Server for static web serving. For file serving, the performance improvements were 148 percent and 84 percent respectively. Imagine how much better the VeriTest results would have been had they been run on the latest Intel Xeon processors rather than on 900 MHz processors.

NetBench Analysis

As discussed in the prior section, the IBM z900 two CPU LPAR achieved 546 Megabits per Second of peak throughput running a single Linux server image without z/VM, and 417 Megabits per Second at four server images with z/VM. The VeriTest result for a two-CPU 900 MHz Intel Xeon server with 4 Gigabytes of memory, a client test bed of 120, over 600 Gigabytes of RAID disk storage, and four Gigabit Ethernet adapters was 632 Megabits per Second of peak throughput. Windows Server 2003 achieved this at twenty-four clients, and the results stayed asymptotic to 600 Megabits per Second through ninety-six clients, tailing off slightly to just over 550 Megabits per Second at 120 clients. Contrast the Windows results with the IBM single-server Linux results for the two CPU LPAR, which, after achieving 546 Megabits per Second at fifty-two clients, began a slight decline, reaching 516.3 Megabits per Second at ninety-six clients. Overall, the Windows Server 2003 two CPU alternative achieved nearly 16 percent greater performance than the mainframe and sustained that performance over a greater range of clients. In this instance, Windows Server 2003 not only performed better but scaled better as well. Compared to Linux on z/VM, Windows Server 2003 running on two Intel CPUs achieved 52 percent greater performance than the four Linux server images under z/VM. The two CPU Intel-based server did not even produce the highest NetBench results in the VeriTest benchmark report; those were achieved by an eight-processor 900 MHz Intel Xeon server that produced 1,088 Megabits per Second of throughput running without the TOE adapter and 1,370 Megabits per Second with it. It is unlikely, based on our findings, that a four CPU z900 LPAR would be able to achieve these results, certainly not running z/VM.

In evaluating mainframe performance as a consolidator of Windows servers, one must look not only at the combined throughput of all the server images, as was done in the prior section and above, but also at the throughput of the individual server images that make up that total. This is where the problems with the concept of the mainframe as a consolidator of anything but the most obsolete and least-used servers become clear. Once the maximum average throughput that each set of Linux server images achieved on the NetBench tests is known, it is easy to calculate the maximum average throughput per server image, and from that, how long it would take one server image to transfer a typical 10-Megabyte file. This is shown in Table 9 and Figure 13 (on the graph, the pink bars show individual server images and the blue bars the sum across all server images, both expressed in Megabits per Second of throughput). As the number of server images increases, the throughput per server image gets smaller: at sixteen server images it would take over 4 seconds to serve a 10-Megabyte file, and at twenty server images it would take 5.6 seconds. Disregarding the dropped clients, it would take over 38 seconds for a server image to serve a 10-Megabyte file at ninety-six server images.


Number of Linux server images     Number of Clients at Max Average Throughput     Max Average Throughput (Megabits per Second)     Max Average Throughput per Server Image (Megabits per Second)
1                                 40                                              257.927                                          257.927
4                                 92                                              417.228                                          104.307
6                                 92                                              404.154                                          67.359
8                                 92                                              377.019                                          47.127
10                                96                                              361.924                                          36.192
12                                96                                              334.230                                          27.850
14                                96                                              321.305                                          22.950
16                                96                                              311.287                                          19.455
20                                96                                              288.167                                          14.408
24*                               95*                                             271.674*                                         11.320*
48*                               64*                                             231.589*                                         4.825*
96*                               95*                                             198.880*                                         2.071*

Table 9. NetBench Max Average Throughput per Linux Server Image, z/VM, two CPU LPAR

Note: * indicates clients dropped due to Mainframe Read Underflows may have affected results
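The arithmetic behind Table 9 and the 10-Megabyte-file estimates above can be reproduced with the short sketch below; it is illustrative only and not part of the project's tooling.

```python
# Illustrative recreation of the Table 9 per-image and file-transfer arithmetic.
FILE_MEGABITS = 10 * 8                 # a 10-Megabyte file is 80 Megabits

runs = {16: 311.287, 20: 288.167, 96: 198.880}   # server images -> max avg Mbit/s
for images, total_mbps in runs.items():
    per_image = total_mbps / images
    seconds = FILE_MEGABITS / per_image
    print(f"{images:2d} images: {per_image:6.3f} Mbit/s per image, "
          f"{seconds:4.1f} s to serve a 10 MB file")
```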


Figure 13. NetBench Linux server images

The above casts doubt on the base premise: how capable would a mainframe be at hosting over a hundred images, and how large a z/VM LPAR would be required? Clearly two z900 processors are insufficient, yet the examples given by IBM are typically of one or two CPU LPARs. Some idea of how many Windows servers could actually be consolidated comes from the VeriTest results. Since Windows Server 2003 as a file server achieved 632 Megabits per Second of throughput at high CPU utilization, probably approaching 100 percent, it is interesting to ask how many equivalent server images running at 5 percent utilization could be supported. The answer is not many. Ten Linux server images on a z900 two CPU LPAR are the equivalent of one two-CPU 900 MHz Windows Server 2003 server running at 5 percent CPU utilization. Above ten server images, the mainframe LPAR can only match a still lower utilization percentage: the maximum average throughput of one server image when twenty server images are run with ninety-six clients is only 2.3 percent of one semi-modern Windows server.
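The 5 percent equivalence above follows directly from the Table 9 per-image figures and the VeriTest peak; a minimal illustrative check is shown below.

```python
# Illustrative check of the 5 percent utilization equivalence described above.
WINDOWS_PEAK_MBPS = 632.0                 # VeriTest NetBench peak, two 900 MHz Xeons
target = 0.05 * WINDOWS_PEAK_MBPS         # ~31.6 Mbit/s: one Windows server at 5% load

per_image = {10: 36.192, 12: 27.850, 20: 14.408}   # from Table 9
for images, mbps in per_image.items():
    status = "meets" if mbps >= target else "falls below"
    print(f"{images} images: {mbps:.3f} Mbit/s each "
          f"({100 * mbps / WINDOWS_PEAK_MBPS:.1f}% of the Windows peak) {status} the 5% target")
```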

WebBench Analysis

Although the WebBench results achieved by the Mainframe Linux Benchmark Project contain no errors and reach relatively high asymptotic measures at higher numbers of simultaneous client requests, they pale in comparison to the static web serving results achieved in the Windows Server 2003 VeriTest benchmark. This is obviously true for the z/VM results, where the virtualization penalty is even greater for web serving than for file serving, but it is also true for the Linux single-server-image results. The Mainframe Linux Benchmark Project peak result for WebBench was achieved in the Linux two CPU LPAR at sixty clients, with 5,042 Requests per Second. This contrasts with the two-processor Intel-based server running Windows Server 2003 at 14,214 Requests per Second: the Windows platform delivered nearly three times the performance with two 900 MHz Intel processors compared to a two-processor mainframe LPAR. With eight processors, Windows Server 2003 achieved 33,991 Requests per Second. Again, with no more Gigabit Ethernet connections than the z900 had, but with a multithreaded request stream per client (the single-threaded stream the Mainframe Linux Benchmark Project used would not have been enough to drive eight processors to maximum throughput), the Intel multiprocessor achieved over six times the throughput of the IBM mainframe two-processor LPAR. According to the IBM RedPaper, as previously discussed, the two CPU z900 LPAR should have been roughly equivalent to eight Intel processors for data-intensive tasks; instead, it delivered roughly one-sixth of the eight-way server's throughput.

The much greater disparity between the WebBench and VeriTest results, compared to the NetBench/VeriTest results (which were already unfavorable to the mainframe), may be more web-server related than platform related. The web serving aspects of Windows Server 2003 have improved much more than the file serving aspects, on the order of 100 percent. It is widely recognized that Apache does not perform well, but it is the overwhelming choice for open source web serving: Apache has competitive functionality, but not competitive performance. Tux, which might perform better, lacks functionality and is not used in production situations, not even registering in NetCraft's survey of web servers.

This analysis is based on the results from the single-image Linux LPAR. If the results for Linux under z/VM are considered, the case for mainframe web server consolidation becomes even more difficult to justify. As previously stated, the peak z/VM results were at four server images and seventy-six clients, with 3,428 Requests per Second, 33 percent less than the single-server-image Linux case. At ninety-six server images, Requests per Second are down to 1,562, nearly 70 percent degradation from the single Linux server case. With the two CPU Intel Xeon-based server at more than 14,000 Requests per Second, there is just no way that consolidating Windows-based web serving on IBM mainframes makes sense from a performance standpoint, much less when economics are taken into account.


Implications for Customers

From a performance standpoint, the results of the Mainframe Linux Benchmark Project demonstrate that the IBM z900 mainframe may not be an appropriate choice for server consolidation for file serving, or as a platform for web serving, whether adding new servers or consolidating existing ones. This becomes obvious when the results from the VeriTest Windows Server 2003 NetBench and WebBench performance tests are compared to the mainframe Linux NetBench and WebBench results reported here. Comparing the two CPU mainframe LPAR to the dual-processor Intel-based Windows server used in the VeriTest report, for both file serving and web serving, the Windows environment delivered greater throughput and lower response times. However, performance in and of itself does not provide the complete answer for customers; financial and operational factors have to be considered alongside the benchmark results. Server consolidation discussions generally include a Total Cost of Ownership (TCO) analysis, which typically covers the costs of hardware, software, facilities, maintenance, support, administration, and projected downtime. IBM has a number of consulting offerings in this area, including Scorpion studies for complex multi-platform situations and ALIGN methodology studies for simpler single-platform situations. Gartner Group has developed a complex financial model that is often used.4 What all of these approaches assume is the truth of IBM's basic performance premise: that Intel servers are only being utilized at 5 percent, and that the capability of an Intel server under a Windows operating system is not even close to the capacity of an IBM zSeries mainframe processor in accomplishing data-intensive business tasks. The results of this benchmark demonstrate that a two-processor LPAR of a z900 mainframe has less capability than a dual-processor 900 MHz Intel Xeon-based server running Windows Server 2003 for either file serving or web serving. Based on the VeriTest results for Windows NT, Windows 2000, and Windows Server 2003, a single-processor 900 MHz Intel Xeon-based server with 500 MB of memory provides greater throughput than a single z900 CPU with 12 GB of memory.

Server Consolidation: File Serving

Customers are interested in server consolidation for file serving for a number of reasons: to get control of a growing server farm, to centralize IT infrastructure and deliver better service to end users, to make new server-based applications easier to implement, and above all to significantly reduce costs. Customers with large mainframes understand that they have benefited from the mainframe consolidation of years past, as smaller distributed mainframes have been replaced by fewer, larger, more powerful complexes (and Sysplexes). IBM, in its effort to increase the amount of new workload going onto the mainframe, has been championing the use of the mainframe with z/VM and Linux as a consolidation vehicle. IBM needs to increase the new workload because the legacy workload is either stagnant or declining by a few percentage points while the capacity of IBM's mainframe computers increases. From a customer viewpoint, there are two scenarios for file serving. In the first scenario, the customer frees up existing z900 capacity and chooses to use it for server consolidation. The methodology the customer employs to determine how much cost to assign to server consolidation in this case can vary widely. Should the cost of the mainframe, which is typically under some form of multi-year lease, be assigned to the new use? Should it be considered sunk, with the remaining cost assigned to its original use and user? Should the remainder of the lease term be assigned to server consolidation? Should the entire cost be assigned to the new use, at least for analysis purposes? There is a whole range of possibilities.

4 Projecting, Monitoring, and Managing Business Value, Kathy Harris, Gartner Group, January 21, 2003; TVO Methodology: Valuing IT Investments via the Gartner Business Performance Framework, Audrey L. Apfel and Michael Smith, March 2, 2003.


The analysis below provides the two ends of the spectrum for Scenario 1, existing capacity. Column one of Table 10 reflects the total costs for the z900 capacity to be consumed as if it were a new obligation; column two reflects the view that all the existing mainframe resources have been paid in full and only the new costs incurred to support server consolidation are counted. In the second scenario, column three of Table 10, IBM provides new capacity in the form of IFLs. An IFL is a z900 processor that can run Linux or Linux under z/VM but cannot run traditional z/OS workloads. IBM charges much less for IFLs than it does for full-function processors capable of running z/OS; however, IBM tends to compensate for this savings by charging for the full complement of memory added to support the IFLs, whereas with the base CPU a significant amount of memory is typically included in the cost of the mainframe. IBM also isolates the new capacity from hardware and software charges that are based on total mainframe MIPS, so as not to push the customer into a higher cost tier for IBM-supplied technology.

The cost comparisons discussed here use a measure that is consistent not only across the mainframe scenarios but also across the Windows scenarios: annualized direct systems and support costs divided by maximum average throughput per second. The cost information is for the Mainframe Linux Benchmark Project's configuration, which was designed to get the maximum performance from the mainframe; as a result, it has fewer disks than a typical customer would have, and possibly more memory. Looking only at the mainframe alternatives, the New Capacity alternative is 36 percent more expensive than the existing capacity alternative where the mainframe costs are considered "sunk" (column two), and 15 percent less expensive than the total cost scenario (column one). All mainframe scenarios are much more costly, in both absolute and relative terms, than any plausible Windows scenario. It certainly appears that the existing capacity scenario, in which the customer considers the mainframe "free," makes the most sense; even then, however, the customer should calculate the opportunity cost and value of any competing uses for the capacity. Moreover, in both the existing and new capacity scenarios, does the customer want to burden the enterprise with additional expensive mainframe resources when it might be to the enterprise's advantage to migrate as much off the mainframe as possible?

It is surprising that the IFL scenario (column three) is so close in cost to the full purchase price scenario (column one). This may be because the cost of memory is very high relative to the cost of the processors. In addition, this analysis does not include much of the cost of mainframe computing: the cost of infrastructure and applications software. IBM and independent software vendors, including Computer Associates, BMC, SAS Institute, and others, typically charge based on the amount of mainframe capacity a customer has, not on how much the software is used, so as a customer consolidates and adds mainframe MIPS, the cost of software goes up even if the software gets no additional use. It is estimated that customers spend nearly 70 percent more on software than on hardware in a typical mainframe installation. With an IFL approach, some additional software costs are avoided. Certainly the direct additional costs from IBM are avoided, but the costs from the ISVs may go up, depending on how customer contracts are worded.

Another point of interest is the effect of the z/VM performance penalty on the relative cost per Megabit per second of throughput. The base cost difference is the same as the difference in performance in the best case, 24 percent, when the cost of z/VM software is included in the Linux-only scenario, but it increases when the cost of z/VM, at $40,000 per IFL, is removed. Expressed in dollar terms, the differentials are $318 per Megabit per second of throughput for the base case, $191 for the free CPU case, and $289 for the new capacity scenario.


Table 10 columns: (1) Existing Capacity (Base); (2) Existing Capacity, Mainframe and ½ Memory as Sunk Cost, with z/VM, ½ Memory, and Disks considered a new purchase; (3) New Capacity (2 IFLs).

Mainframe Costs: (1) $786,250; (2) $0; (3) $250,000
z/VM Software: (1) $80,000; (2) $80,000; (3) $80,000
Linux Software: (1) $0; (2) $0; (3) $0
Memory (at $25K per GB): (1) $300,000 (first 12 GB included in mainframe costs above); (2) $300,000; (3) $600,000 (assumes existing memory is not available to be used, therefore 24 GB needed)
Disks (607.2 GB at $45 per GB): (1) $27,324; (2) $27,324; (3) $27,324
Total Capital Cost: (1) $1,193,574; (2) $407,324; (3) $957,324
Monthly Cost: (1) $32,680 (36-month lease at 6 percent with a 10 percent Lessor residual); (2) $11,315 (purchase divided by 36); (3) $26,211 (36-month lease at 6 percent with a 10 percent Lessor residual)
Annualized Costs: (1) $392,157; (2) $135,775; (3) $314,532
Annual Linux Support: (1) $24,000; (2) $24,000; (3) $24,000
Annualized Mainframe Hardware and Software Maintenance (for columns 1 and 3, first year free; years 2 and 3 spread over 3 years, 90,341 x 2/3): (1) $60,228; (2) $90,341; (3) $60,228
Annualized Disk Maintenance at 15 percent of initial purchase (first year free; years 2 and 3 spread over 3 years, 4,100 x 2/3): (1) $2,733; (2) $2,733; (3) $2,733
Total Annual Cost: (1) $479,118; (2) $252,849; (3) $407,556
Max Average z/VM Throughput in Megabits per Second (4 Linux server images, 92 Clients): 417.228 in all three scenarios
Cost per Megabit per Second – z/VM (annualized): (1) $1,148.34; (2) $606.02; (3) $976.82
Max Average Linux LPAR Throughput in Megabits per Second (Single Linux Server Image, 64 Clients): 545.234 in all three scenarios
Cost per Megabit per Second – Linux (annualized): (1) $878.74; (2) $463.75; (3) $747.49
Cost per Megabit per Second – Linux, taking z/VM software out of mainframe costs: (1) $830.11; (2) $414.83; (3) $688.46

Table 10. NetBench Cost Comparison, two CPU LPAR

The Table 10 row definitions and explanations are as follows:

1) Column one. Existing capacity: Base Costs. Three year costs of mainframe, software, memory, disks, maintenance, and support based on the original costs to the Enterprise.


A) The mainframe cost, $786,250, is based on a Gartner estimate of $1,850 per mainframe MIP as of year-end 2003; this represents the average cost per z900 MIP, and Gartner states that 75 percent of transactions will occur at a higher price.5 The number of MIPS is obtained by taking the IBM value of 1,276 MIPS for a z900 model 1C6, which has six CPUs, dividing by six to get the average performance of one CPU, and then multiplying by two to get the estimated MIPS of a two CPU LPAR, which is 425 MIPS. This number is multiplied by $1,850 to obtain the purchase price of the benchmarked LPAR. Included in the cost of the mainframe are the first 12 Gigabytes of memory, the OSA adapters (four Gigabit Ethernet ports), and the two FICON Express channels (each FICON channel is the equivalent of eight ESCON channels).

B) z/VM software, $80,000, is the one-time price IBM charges for z/VM on two processors for use with Linux. A zero cost is assumed for Linux, although there is generally a small charge.

C) Memory - $300,000, is at a Gartner suggested cost of $25,000 per Gigabyte, for Gigabytes 12 through 24. The first 12 gigabytes as explained above are considered to have been included with the cost of the mainframe, again as per Gartner.

D) Disks – $27,324, using a cost of $45 per Gigabyte, which is based on server consolidation analyses for 8-Terabyte competitive situations, as opposed to the actual cost of about $65 per Gigabyte for the 1.2-Terabyte IBM ESS system used in the benchmark. This rate is multiplied by 607.2, the number of Gigabytes used in the most extensive NetBench cases, as explained in Appendix B. (607.2 happens to fall on a 3390-3 2.3 GB boundary, so there was no need to round up to the next 3390-3 boundary; 3390-3s were used in the benchmark.)

E) Total Capital Cost – $1,193,574. The sum of rows A – D, this is an estimate of the average purchase cost of an IBM mainframe two CPU LPAR and associated disk in 2003, based primarily on Gartner data with corroborating data from IBM customers with 15,000 MIPS of mainframe capacity or greater.

F) Monthly Cost – $32,680. This represents the monthly lease cost of the above, assuming typical IBM Global Financing three-year operating lease terms: a 36-month term, a 6 percent interest rate, and a 10 percent residual held by IBM. IBM Global Financing does over 90 percent of the financing deals for new mainframes, and a majority of recent transactions are 3-year operating leases. To qualify as an operating lease, the Lessor, IBM in this case, has to retain a minimum of 10 percent equity in the equipment being financed.

5 IBM Mainframe Futures: Phoenix or Dodo?, Mike Chuba, Gartner Group, 21st Annual Data Center Conference, Las Vegas, NV, December 2002.


G) Annualized Cost – $392,157. This is the monthly cost times 12.

H) Annual Linux Support – $24,000. This is the cost of either an IBM Global Services or a SuSE annual support contract per CPU (IFL).

I) Annualized Mainframe Hardware and Software Maintenance – $60,228. The first year's maintenance is included in the purchase price; this is the effect of the maintenance costs for years 2 and 3, spread over the three-year term.

J) Annualized Disk Maintenance – $2,733. The same treatment as above, applied to the disks.

K) Total Annual Cost – $479,118. The sum of rows G, H, I, and J.

L) Max Average z/VM Throughput – 417.228. The highest throughput per second result for Linux server images under z/VM during the entire NetBench benchmark period; in this case, four server images with ninety-two active clients.

M) Cost per Megabit per Second for z/VM – $1,148.34. The annual cost defined in K divided by the maximum throughput per second from L.

N) Max Average Linux LPAR Throughput – 545.234. The highest average throughput per second achieved during the entire NetBench benchmark period, on a two CPU LPAR running SuSE Linux natively, with no z/VM.

O) Cost per Megabit per Second for Linux – $878.74. The annual cost defined in K divided by N, the Linux two CPU LPAR's highest average throughput per second.

P) Cost per Megabit per Second for Linux, no z/VM software charges – $830.11. The calculation in O above, less the effect of $40,000 per IFL for z/VM.
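For readers who want to trace the column-one arithmetic, the sketch below reproduces the published figures under one assumption of mine: the lease payment is modeled by amortizing 90 percent of the capital cost over 36 months at 6 percent, with the 10 percent residual simply excluded. This happens to match the numbers in Table 10 but is not a statement of IBM Global Financing's actual method.

```python
# Rough reconstruction of Table 10, column one (Existing Capacity, Base).
# Lease model is an assumption: amortize 90% of capital over 36 months at 6%/year.
capital = 786_250 + 80_000 + 300_000 + 27_324        # mainframe, z/VM, memory, disks
i, n, residual = 0.06 / 12, 36, 0.10
monthly = capital * (1 - residual) * i / (1 - (1 + i) ** -n)
annualized = 12 * monthly
total_annual = annualized + 24_000 + 60_228 + 2_733  # Linux support + maintenance
print(f"monthly ~${monthly:,.0f}, annualized ~${annualized:,.0f}, total ~${total_annual:,.0f}")
print(f"cost per Mbit/s, z/VM peak (417.228):  ~${total_annual / 417.228:,.2f}")
print(f"cost per Mbit/s, Linux LPAR (545.234): ~${total_annual / 545.234:,.2f}")
```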

2) Column two. Existing capacity, with mainframe costs considered to be at no charge to this benchmark. This is a three-year straight-line analysis of the costs not considered sunk, i.e., those that would be directly incurred for server consolidation.

A) Mainframe Costs – 0. In addition to the mainframe, half of the memory, the OSA adapters, and the FICON Express channels are considered to be at no charge.

B–D) Same as Column one; these incremental costs are incurred to support server consolidation.

E–G) The assumption here is that the additional items will be purchased and amortized over 36 months rather than placed on an operating lease.

H) Same as Column one.

I) Since the mainframe is not new, the assumption is this project will assume maintenance costs from day 1.

J-O) Same as Column one

3) Column Three. New Capacity.

A) Mainframe Costs – $250,000. IBM charges $125,000 per IFL processor, for two processors. An IFL is a z900 processor that can run either Linux or z/VM and Linux, but has had part of the z/OS instruction set disabled so that it cannot run traditional z/OS workloads.

B–C) Same as Column one.

D) Memory – $600,000. The assumption is that there is no existing memory of which the IFLs can take advantage; therefore, the full 24 GB at $25,000 per GB is assumed to be purchased by the customer.

E–O) Same as Column one.

Linux under z/VM Cost Comparison to Windows Server 2003

In the following cost comparison, the Mainframe Linux Benchmark Project configuration costs by alternative are compared to estimated costs for the Windows Server 2003 configuration that was tested in the VeriTest benchmark of April 2003. The Intel-based Windows servers in the VeriTest benchmark were configured similarly to the z900 mainframe in having four Gigabit Ethernet adapters. They also had 4 GB of memory and a total of 653.6 GB of disk storage, consisting of 144 GB of SCSI RAID disk storage directly attached and an additional 509.6 GB of SCSI RAID storage connected through a SmartArray RAID controller. The specific 900 MHz Intel Xeon-based Windows servers used in the VeriTest benchmark are no longer in production and have been replaced by much more powerful 2.4 to 3 GHz machines. The estimated cost of the Windows servers that were used, including Enterprise licensing of Windows Server 2003 and Client Access Licenses for ninety-six clients (the number of clients in the Mainframe Linux Benchmark Project test bed; there were as many as 120 clients in the VeriTest test bed), is $49,285. Including maintenance and support at 25 percent of purchase price, the annualized three-year cost for the Windows environment is $25,440. See Appendix A for cost and configuration details of the Windows server used in the VeriTest benchmark.

Based on the 632 Megabits per Second maximum average throughput for NetBench, the Windows Server 2003 cost per Megabit per Second is $40.25. The contrast with the mainframe alternatives is striking. The difference in performance between the Windows Server 2003 alternative and the Linux LPAR alternative was 14 percent in favor of Windows; however, as seen in Table 11, on a cost per Megabit per Second basis the Windows server is over 90 percent less expensive than even the least costly mainframe scenario, with z/VM licensing excluded from the mainframe cost. Table 11 summarizes the relative and absolute annualized costs of the IBM mainframe scenarios and the two-processor Windows Server 2003 configuration from the VeriTest report. The analysis makes it difficult to justify server consolidation on the z900, given how much more cost effective and higher performing Windows Server 2003 is today and will be as virtualization, advanced systems management, CPU failover, and additional "mainframe technologies" become more and more prevalent on Windows Server platforms.
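A minimal illustrative check of the Windows figures quoted above and compared in Table 11 follows; the inputs are taken directly from this report.

```python
# Illustrative check of the Windows Server 2003 cost-per-throughput figure.
windows_annual_cost = 25_440
windows_peak_mbps = 632.0
cost_per_mbps = windows_annual_cost / windows_peak_mbps
print(f"Windows Server 2003: ${cost_per_mbps:.2f} per Mbit/s")       # ~$40.25

cheapest_mainframe = 414.83   # sunk-cost scenario, Linux LPAR, z/VM software excluded
print(f"Windows cost is {100 * (1 - cost_per_mbps / cheapest_mainframe):.0f}% lower")
```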

                                                   Mainframe      Mainframe      Mainframe IFL/    Windows Server 2003,
                                                   Base Cost      Sunk Cost      New Capacity      two 900 MHz Intel Xeon
Annualized Costs                                   $479,118       $252,849       $407,556          $25,440
Cost per Megabit Throughput per Second,
  Linux under z/VM                                 $1,148.34      $606.02        $976.82           $40.25
Cost per Megabit Throughput per Second,
  Linux LPAR, no z/VM                              $878.74        $463.75        $747.49           $40.25

Table 11. File serving cost comparison summary


Server Consolidation – Web Serving

For web serving, the use of the mainframe as a consolidation vehicle is a bit more problematic, since there are fewer web servers to consolidate. Given the threefold performance disparity between Windows Server 2003 as reported in the VeriTest WebBench benchmark and the results from running WebBench in the Mainframe Linux Benchmark Project, it is hard to think of a scenario where the mainframe would be a suitable alternative for web serving. Table 12 presents the financial comparison between the mainframe scenarios, and Table 13 presents the comparison between the mainframe and the Windows Server 2003 server as tested by VeriTest. The cost differential is even greater than in the NetBench cases discussed above. The smallest difference, between the Windows Server 2003 metric and the mainframe two CPU LPAR without z/VM with much of the mainframe cost considered "free," is a factor of almost twenty-four; put another way, Windows Server 2003 running on a two-processor 900 MHz Intel Xeon server costs less than 5 percent of the mainframe alternative on a cost per peak Requests per Second basis. The fully priced existing capacity scenario without z/VM licensing is nearly twice as expensive as the "free" scenario, which makes it nearly 50 times more expensive than the Windows Server 2003 alternative, and the new capacity mainframe scenario is over 40 times more expensive. For web serving, the basic mainframe configuration and scenarios are the same as in the file serving analysis above, just with fewer disks.


Table 12 columns: (1) Existing Capacity (Base); (2) Existing Capacity, Mainframe and ½ Memory as Sunk Cost, with z/VM, ½ Memory, and Disks considered a new purchase; (3) New Capacity (2 IFLs).

Mainframe Costs: (1) $786,250; (2) $0; (3) $250,000
z/VM Software: (1) $80,000; (2) $80,000; (3) $80,000
Linux Software: (1) $0; (2) $0; (3) $0
Memory (at $25K per GB): (1) $300,000 (first 12 GB included in mainframe costs above); (2) $300,000; (3) $600,000 (assumes existing memory is not available, therefore 24 GB needed)
Disks (174.8 GB at $45 per GB): (1) $7,866; (2) $7,866; (3) $7,866
Total Capital Cost: (1) $1,174,116; (2) $387,866; (3) $937,866
Monthly Cost: (1) $32,147 (36-month lease at 6 percent with a 10 percent Lessor residual); (2) $10,774 (purchase divided by 36); (3) $25,679 (36-month lease at 6 percent with a 10 percent Lessor residual)
Annualized Costs: (1) $385,884; (2) $129,288; (3) $308,148
Annual Linux Support: (1) $24,000; (2) $24,000; (3) $24,000
Annualized Mainframe Hardware and Software Maintenance (for columns 1 and 3, first year free; years 2 and 3 spread over 3 years, 90,341 x 2/3): (1) $60,228; (2) $90,341; (3) $60,228
Annualized Disk Maintenance at 15 percent of initial purchase (first year free; years 2 and 3 spread over 3 years): (1) $787; (2) $787; (3) $787
Total Annual Cost: (1) $470,899; (2) $244,416; (3) $393,163
Peak Requests per Second – z/VM, 4 Linux server images, 76 Clients, two CPU LPAR: 3,428 in all three scenarios
Annual Cost per Peak Request per Second – z/VM: (1) $137.37; (2) $71.23; (3) $114.69
Peak Requests per Second – Linux Single Server Image, no z/VM, 60 Clients, two CPU LPAR: 5,042 in all three scenarios
Annual Cost per Peak Request per Second – Linux Single Server Image, two CPU LPAR: (1) $93.40; (2) $48.42; (3) $77.98
Annual Cost per Peak Request per Second – Linux Single Server Image, no z/VM, no z/VM software cost: (1) $88.16; (2) $43.19; (3) $72.76

Table 12. WebBench Cost Comparison, two CPU LPAR

The Table 12 row definitions and explanations are the same as for the NetBench financial table above, except for Row D (the WebBench benchmark used much less disk than the NetBench benchmark, so that cost was reduced substantially) and Rows L–O (the web serving metric, peak Requests per Second, replaces the file serving metric, maximum average throughput per second).


                                                   Mainframe      Mainframe      Mainframe IFL/     Windows Server 2003,
                                                   Base Cost      Sunk Cost      New Capacity       two 900 MHz Intel Xeon
                                                   Scenario       Scenario       Scenario
Annualized Costs                                   $470,899       $244,416       $393,163           $25,440
Cost per Peak Request per Second,
  Linux under z/VM                                 $137.37        $71.23         $114.69            $1.79
Cost per Peak Request per Second,
  Linux LPAR, no z/VM                              $93.40         $48.42         $77.98             $1.79
Cost per Peak Request per Second,
  Linux LPAR, no z/VM, no z/VM software costs      $88.16         $43.19         $72.76             $1.79

Table 13. Static web serving cost comparison summary

Conclusion

The Mainframe Linux Benchmark Project has charted new territory in a sense. For years, independent software vendors have been reluctant to challenge IBM on the performance and capabilities of the mainframe. This may be due to the cost of mainframe use, which is not inexpensive, or to the unfamiliarity that many ISVs have with Linux on the mainframe. In any event, IBM has been able to make unchallenged and possibly exorbitant claims. In the case of server consolidation, IBM should consider itself challenged: a back-level two CPU Intel Xeon-based server running Windows Server 2003 outperforms the equivalent IBM two CPU z900 LPAR for file serving, and especially for web serving, based on comparisons of both systems running identical industry-standard benchmarks.

Mainframe server consolidation is based on the premise that z/VM is a benign, non-intrusive operating system that permits total and effortless virtualization. The Mainframe Linux Benchmark Project results show that while z/VM functionally does what it is supposed to do, from a performance and resource-use standpoint it is much more intrusive than expected, and that at significant virtualization levels, well within the number of servers IBM claims are feasible for consolidation, performance suffers to such an extent as to make consolidation infeasible. Of course, if there is next to no activity to consolidate, the customer could consolidate on a mainframe, or could simply consolidate on existing platforms and not spend any money.

From a cost standpoint, the Mainframe Linux Benchmark Project has attempted to take the mask off IBM's mainframe pricing strategies. IBM's IFL pricing is much more expensive than the equivalent Windows Server 2003 costs would be, and not much less than the cost of a mainframe configured for traditional applications. Even when one considers a "free" mainframe, one that an organization makes available to a new function while treating much of the cost as "sunk" and not charged to the new use, the residual costs are such that the Windows Server 2003 alternative is still over ten times less expensive.

To conclude, Windows Server 2003 is quickly becoming a superior platform for server consolidation, for file serving, and for web serving. The z900 has many good qualities and legacy application sets that it is very good at running. However, it is not a good choice for the consolidation of Windows servers; Windows Server 2003 clearly is the right choice for customers.


Appendix A: Configurations

EXHIBIT A Hardware

Mainframe: IBM zSeries 900, model 2064-1C6, meeting the following requirements:

• one LPAR of a z900 (IBM 2064-1C6) with two CPUs and 24 GB memory
• 2 OSA Express GbE adapters, 2 ports each, IBM FC 2365, wired
• 2 FICON Express LX, IBM FC 2319, wired to the ESS 2105-F20
• IBM ESS 2105-F20 (Shark), 1.2 TB, containing 384 3390-3 disks at 2.3 GB each (for the NetBench benchmark 607.2 GB, or 264 3390-3s, were utilized at maximum, and for WebBench 174.8 GB were utilized); the remaining 316.8 GB were 3390-9 disks (not utilized for the benchmarks, only for monitoring data)
• z/VM 4.3 preinstalled
• FCON/ESA monitoring preinstalled
• OSA Express GbE Adapter with Switch, Multimode SC

Clients Configuration:

• Compaq Evo 510d Ultra
• 1.7 GHz Pentium 4
• 1 GB RAM
• 40 GB Disk Drive
• 10/100 Network Interface Cards

NetBench/WebBench Test Controller Configuration:

• Compaq Evo 510d Ultra
• 1.7 GHz Pentium 4
• 1 GB RAM
• 40 GB Disk Drive
• 10/100 Network Interface Cards

CISCO Catalyst Switch configuration

• Cisco Catalyst 6506 system with ninety-six 10/100 Ethernet ports
• 5-port 1000SX Gigabit Ethernet
• Detail:
   o 1x WS-C6506-1300AC
   o 1x WS-CAC-1300W
   o 1x WS-X6K-S1A-MSFC2
   o 2x WS-X6348-RJ45V
   o 1x WS-X6408A-GBIC


o 5x WS-G5484

• Cisco 2924 MXL (Connecting VeriTest Server and Workstation to 6500 through one of the 1000SX Gigabit Ethernet Adapters)


900 MHz Pentium III Xeon Server Used for Comparison

HP DL760 with two 900 MHz Pentium III Xeon processors (configuration used in the VeriTest benchmark):

Item                               Unit Price   Quantity   Extended Price
DL760 4 CPU 2 GB Memory            $44,400      1          $44,400
Delete 2 CPU                       $(6,999)     2          $(13,998)
Add 2 GB Memory                    $2,099       2          $4,198
SmartArray 5300 SCSI Controller    $2,099       1          $2,099
Ultra3 36.4 GB SCSI Drives         $799         4          $3,196
StorageWorks Cabinet               $2,995       1          $2,995
Ultra3 18.2 GB SCSI Drives         $539         28         $15,092
Total                                                      $57,982
15% Discount                                               $49,285

Windows Server 2003 Cost Table

Total Hardware Costs                                               $49,285
Annualized Hardware Costs (3 years, straight-line amortization)    $16,428
Maintenance at 15% for 2 years, annualized over 3 years             $4,929
Windows Server 2003 Standard Edition                                  $689
Client Access Licenses (29.8 per 96 clients)                        $2,861
Software support license at 15% per year of total software            $533
Total Annualized Windows Server 2003 Configuration Costs           $25,440
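As a cross-check of the annualization arithmetic in the table above, the following short Python sketch re-derives the line items from the figures given. The variable names are ours, and the table's "29.8 per 96 clients" is read here as $29.80 per Client Access License for 96 clients.

hardware_total = 49285                            # discounted DL760 configuration
annual_hw      = hardware_total / 3               # 3-year straight-line amortization  -> ~16,428
maintenance    = hardware_total * 0.15 * 2 / 3    # 15% for 2 years, spread over 3 years -> ~4,929
server_license = 689                              # Windows Server 2003 Standard Edition
cals           = 29.80 * 96                       # Client Access Licenses               -> ~2,861
sw_support     = 0.15 * (server_license + cals)   # 15% per year of total software       -> ~533

total = annual_hw + maintenance + server_license + cals + sw_support
print(f"annualized total: ${total:,.0f}")         # ~ $25,439; the table rounds each line item, giving $25,440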


EXHIBIT B Software

Mainframe Software:

• SuSE SLES 8
• Apache, version 1.2.23 (included with SuSE SLES 8)
• Samba, version 2.2.3 (included with SuSE SLES 8)

Client Software:

• Windows XP Professional with SP1
• Office XP with FrontPage
• r/server, which is part of the NetBench software

The Test Controller machines were configured as follows:
• Windows Server with Service Pack 3
• Office XP with FrontPage
• NetBench 7.0.3
• WebBench 4.1

Cisco 6506 Network Switch Software: Internetwork Operating System Software (IOS) version 12.1 (13)E4 Early Deployment Release (compiled 1/31/03)


Appendix B

Optimizations

In general, the same optimizations were performed for both NetBench and WebBench. For example, in every configuration file a relaxed handling of symlinks was defined, so no server had to test whether a directory or file was actually a regular file or a symlink (a relaxation that does have security implications). Those tests would use a lot of processor resources because their results cannot be cached by the server processes, so avoiding them improved benchmark performance.

During the course of the project much more time was spent on NetBench optimization than on WebBench optimization. That was due to the errors that were encountered, the OSA adapter port-filling issues that were found, and the key Samba optimizations that had to be identified. From the details provided below it may appear that there was much more to optimize for WebBench, but the amount of time needed to achieve optimum results was definitely weighted towards NetBench.

Part of this apparent disparity is caused by the differences in the way the benchmarks handle the virtual server images. For WebBench, one has to provide a list identifying the servers, and WebBench then decides randomly which server to access. Every request has to be handled by a server process, and there can be a maximum of 96 * 5 parallel requests if all clients are accessing a server simultaneously. In order to avoid the excess processor time this would otherwise consume, all server processes were started before test initiation. For NetBench, a drive is mapped before the benchmark run and this connection is persistent during the whole run, so the Project Team knew exactly how many Samba processes were needed and there was no randomization to deal with; this aspect was therefore faster to optimize. The following headings discuss the optimizations by software product.

z/VM Optimizations

There were only a few changes to z/VM that one could call actual z/VM tuning. Most of the work performed in z/VM was optimization by way of distributing available resources, including disks, OSA adapters, and memory. These are explained below.

Linux on z/VM Tuning

Scheduling

All Linux guest images were configured as QUICKDSP. Using Quick-Dispatch for Linux guests (Linux server images) is recommended by IBM in several Redbooks and Redpapers discussing Linux. Further tuning with SET SRM, especially with SET SRM DSPSLICE, did not have any positive effect on Linux server image performance. This was thoroughly tested in all directions: shorter and longer slices, on few guests as well as on many guests.

Minidisk Cache

The minidisk cache was activated, but it was only allowed to use 4096 MB of expanded storage and no main storage.


Resource Sharing

DASD

The project had a dedicated IBM ESS with 1.2 terabytes available for benchmarking. There were 3390-3 disks configured on addresses FX00-FX3F, where X was 0-5; this means 64 * 6 = 384 DASD 3390 Model 3. There were additional 3390-9 disks defined, which were only used for maintenance tasks. The following ranges were used:

• Range F0XX was used for Linux root file systems.
• Range F10X was used for monitoring data storage and maintenance.
• Range F11X was used for WebBench data disks.
• Ranges F12X and F13X were used for Linux swap disks.
• Ranges F2XX, F3XX, F4XX and F5XX were used for NetBench working disks.

There was a shared read-only A minidisk for every guest, with a CMS profile and an automated IPL. This is counter to the examples in the latest Linux under z/VM consolidation Redpaper, which suggests sharing all available DASD to conserve space. Sharing was not used for two reasons: 1) to improve performance, and 2) to preserve standard Linux updating capabilities.

Every Linux server image was allocated half of a 3390 Model 3 disk as its root file system; these were distributed over the complete range F0XX. Half a Model 3 equates to 1669 cylinders, which is about 1.15 GB. This was enough for the root file system and all the Linux log files generated during the benchmark runs. Server000-Server047 were placed on the lower half of a DASD, while Server048-Server095 were placed on the upper half of a DASD. This meant that only in scenarios with more than forty-eight server images were disks put into double use. The Linux server images' root disks were distributed with the formula

0xf000 + (SERVER mod 4) * 0x10 + int((SERVER mod 48) / 4)

where SERVER is replaced with the number of the server (0-95).

Every Linux server image requires the availability of swap space. All calculations were performed so that no guest (Linux server image) would ever have to use the swap space during the course of a benchmark run. Even in the ninety-six server scenario there were 192 MB per guest, because 18 GB of memory were available; IBM recommends 128 MB as a minimum for Samba file serving or for Apache web serving. The complete range containing the swap areas was defined as minidisks of 834 cylinders each, which is about 0.57 GB of space. These disks were distributed among the guests so that every guest had at least two times its allocated real memory as swap space. The disks were distributed using a similar algorithm to the one described above for the root file system minidisks, so that only in scenarios with a high number of server images would some images have to share a DASD for their swap minidisks.

Additional configuration detail: 0200 was the root file system, 0201-020X were the swap disks, and the WebBench data minidisks begin at 0210.
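As a minimal sketch of the root-minidisk placement rule quoted above (the helper name and the sample loop are illustrative, not taken from the project's scripts):

def root_dasd_address(server):
    """Return the F0xx DASD address holding this server's root minidisk (server = 0-95)."""
    return 0xF000 + (server % 4) * 0x10 + (server % 48) // 4

for server in (0, 1, 47, 48, 95):
    half = "lower" if server < 48 else "upper"
    print(f"SRV{server:03d} -> DASD {root_dasd_address(server):04X} ({half} half of the volume)")

Servers 0-47 and 48-95 map to the same F000-F03F addresses, which is why a volume is only shared once more than forty-eight images are active.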


Excerpt from the ninety-six user directory:

* Minidisks
MDISK 0200 3390 1 1669 LXF000 MW
MDISK 0201 3390 1 834 LXF120 MW

Excerpt from the forty-eight user directory:

* Minidisks
MDISK 0200 3390 1 1669 LXF000 MW
MDISK 0201 3390 1 834 LXF120 MW
MDISK 0202 3390 835 834 LXF128 MW

Excerpt from the twenty-four user directory:

* Minidisks
MDISK 0200 3390 1 1669 LXF000 MW
MDISK 0201 3390 1 834 LXF120 MW
MDISK 0202 3390 1 834 LXF12C MW
MDISK 0203 3390 835 834 LXF128 MW

Excerpt from the twelve user directory:

* Minidisks
MDISK 0200 3390 1 1669 LXF000 MW
MDISK 0201 3390 1 834 LXF120 MW
MDISK 0202 3390 1 834 LXF126 MW
MDISK 0203 3390 1 834 LXF12C MW
MDISK 0204 3390 835 834 LXF122 MW
MDISK 0205 3390 835 834 LXF128 MW
MDISK 0206 3390 835 834 LXF12E MW

Linux always uses the parameter DASD=0200-0210,300-301. The 0200-0210 parameter value reserves a DASD device name for every device on addresses 0200, 0201 and so on. If there is no device defined on an address, the name is still reserved, but an error is reported if you try to access it. The Linux startup was modified so that the swap disks were not enabled with the fstab mechanism but with a special script integrated into the boot process. If swap disks appear, they are always on addresses 0201-020F, and only swap disks can appear on these addresses. So a swap area was created and activated on each of the devices mapped to 0201-020F, and the errors for undefined addresses were ignored. In the end, all disks defined in the actual z/VM directory for use as swap disks were used as swap disks, and no manual changes had to be made to Linux during a scenario switch, that is, when changing from one group of server images to the next. Linux activated all the swap areas, but in general never made active use of any of them, because there was always enough memory.

OSA Adapters

There were 2 OSA adapters available, with 2 ports on each. Every OSA adapter port was configured to be shared by 25 guests (Linux server images). This means that 4 additional guests besides the benchmark servers could be attached to the adapters; these were used for the z/VM TCP/IP stack and 2 more Linux guests for statistics evaluation and collection. Every CHPID was used by twenty-four server images. The ports within the OSA adapters were filled sequentially. Pretests showed that, counter to the description in the planning methodology, this was the most efficient way to distribute the 4 OSA ports to the Linux guests. The following short excerpt illustrates the distribution of the adapters' addresses to the guests:

USER SRV000
DEDICATE E800 E800
DEDICATE E801 E801
DEDICATE E802 E802
USER SRV001
DEDICATE E800 E804
DEDICATE E801 E805
DEDICATE E802 E803
USER SRV002
DEDICATE E800 E806
DEDICATE E801 E807
DEDICATE E802 E808
....
USER SRV023
DEDICATE E800 E846
DEDICATE E801 E847
DEDICATE E802 E845
USER SRV024
DEDICATE E900 E900
DEDICATE E901 E901
DEDICATE E902 E902
...
USER SRV095
DEDICATE EB00 EB46
DEDICATE EB01 EB47
DEDICATE EB02 EB45

Linux

The general Linux optimization was performed using the sysctl mechanism. The following values were set explicitly:

net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.ip_no_pmtu_disc = 0
kernel.hz_timer = 0

The kernel.hz_timer setting is the special Linux for S/390 option formerly known as the timer patch; IBM has now made it possible to enable or disable it at runtime. The net.ipv4 options are recommended for all Linux benchmarks to reduce the kernel overhead of optional IPv4 features. Tests were conducted during the pretest period to determine whether to run with the timer patch on or off. The function of the timer patch is to reduce overhead when Linux server images are not being used, so that idle images are not awakened automatically. Given that the benchmark environment is one of high usage and processor utilization, it was thought that running with the timer patch off might provide better performance. It was determined during the pretest that there was very little difference between running with it on or off. The effect of the timer patch can be seen in the table below:


The three OSA port distributions compared are OSA planned, OSA filled, and OSA filled half, each measured with and without the timer patch (throughput in MBits/sec, response time in milliseconds):

                         OSA planned                 OSA filled                  OSA filled half
                         Timer       No Timer        Timer       No Timer        Timer       No Timer
Servers  Clients         Tput  Resp  Tput   Resp     Tput  Resp  Tput   Resp     Tput  Resp  Tput   Resp
48       48              34.3  22.6   33.3  23.4     36.5  21.5   35.8  22.0     33.8  22.8   32.6  23.9
48       60              44.7  21.8   43.4  22.3     49.3  19.5   47.1  20.7     42.2  22.9   41.7  22.9
48       72              54.6  21.4   51.8  21.9     58.7  19.6   61.5  19.0     53.6  21.7   51.9  22.1
48       84              64.7  21.0   61.7  21.3     73.7  18.3   72.2  18.9     63.0  21.6   61.5  21.7
48       96              75.0  21.0   70.7  21.0     86.3  17.5   93.6  16.8     74.0  21.0   72.1  21.1
24       24              28.1  13.9   26.5  14.8     34.1  11.3   40.7   9.5     29.6  13.1   27.5  14.2
24       36              43.5  13.3   41.0  14.3    153.1   3.8  196.4   3.0     48.0  12.2   42.9  13.6
24       48              58.1  13.4   56.7  13.7    215.0   3.6  212.4   3.6     67.1  11.5   64.1  12.2
24       60              76.0  12.8   71.2  13.7    230.7   4.2  229.3   4.1    103.9   9.4   92.5  10.3
24       72              94.2  12.4   90.8  12.9    239.4   4.8  238.3   4.8    145.0   8.0  135.2   8.4
24       84             114.3  11.9  109.6  12.3    251.3   5.4  249.7   5.4    214.7   6.3  206.6   6.4
24       96             138.9  11.2  135.2  11.5    259.5   5.9  258.4   6.0    222.4   6.9  221.8   6.9

The 96-server and 12-server rows record only two throughput/response-time pairs each:

96 servers, 96 clients:   75.3 / 20.7    69.0 / 21.7
12 servers, 12 clients:   18.5 / 10.6    17.5 / 11.2
12 servers, 24 clients:   36.6 / 10.6    40.2 /  9.7
12 servers, 36 clients:   55.9 / 10.4   171.9 /  3.4
12 servers, 48 clients:   75.1 / 10.3   248.2 /  3.1
12 servers, 60 clients:   96.2 / 10.1   272.7 /  3.5
12 servers, 72 clients:  120.6 /  9.6   295.3 /  3.9
12 servers, 84 clients:  148.8 /  9.1   314.0 /  4.3
12 servers, 96 clients:  197.0 /  7.8   328.7 /  4.7

The Linux root file system uses the ext3 file system, which makes it easier to handle the root file systems if a box is not shut down properly. Even the sysstat performance monitoring data was kept on that partition, where it was advantageous to have consistent data even if this meant a slight amount of overhead (slight being 0.1-0.2 percent). As few services as possible were started on a Linux server image (for example the secure shell daemon) in order to achieve better performance, and some services useful for normal operation, such as cron, were stopped. The Apache or Samba services were only started directly before a benchmark run, and the monitoring facility was started at the same time as the benchmarked service. The Linux server images were freshly booted before every new benchmark run in order to have a known state. This means that when switching from the 4-server to the 8-server scenario, the 4 servers were stopped and 8 servers were started, rather than just booting 4 more server images.

All available swap disks were enabled with the same priority. This meant that, had swapping occurred, the allocated disks would have been used round-robin instead of filling up one and then the next; this too was done to improve performance. To explain what is meant by round robin in this context, the following is excerpted from the Linux man pages (man 2 swapon):

Swap pages are allocated from areas in priority order, highest priority first. For areas with different priorities, a higher-priority area is exhausted before using a lower-priority area. If two or more areas have the same priority, and it is the highest priority available, pages are allocated on a round-robin basis between them.

So if Linux had had any reason to use swap space, and there was more than one minidisk for swapping, all swap minidisks were defined with the same priority so that swapping would be spread round-robin across them rather than filling the first minidisk, then the second, and so on.


WebBench

z/VM

Every server image had a 139-cylinder (98 MB) minidisk containing the WebBench data files. This meant that twenty-four minidisks could be put on one real disk, so only 4 physical volumes were required for all ninety-six minidisks. It would have been possible to put the data on one disk and share it among all Linux guests, but as mentioned above, the Project Team elected not to share any data or program disks during the benchmark. The minidisks were distributed over the 4 DASD by server number modulo 4, and the start cylinder was the server number divided by 4, multiplied by 139.
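A small Python sketch of that layout rule follows (the constant and function names are ours; cylinder numbering is shown 0-based purely for illustration):

CYL_PER_MINIDISK = 139            # 139 cylinders, about 98 MB, per WebBench data minidisk

def webbench_minidisk(server):
    volume = server % 4                            # one of the 4 physical WebBench volumes
    start_cyl = (server // 4) * CYL_PER_MINIDISK   # 24 minidisks fit on each volume
    return volume, start_cyl

for server in (0, 1, 4, 95):
    vol, cyl = webbench_minidisk(server)
    print(f"SRV{server:03d}: volume {vol}, cylinders {cyl}-{cyl + CYL_PER_MINIDISK - 1}")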

Linux

The minidisk containing the WebBench data files was on an ext2 file system and mounted read only during the benchmarks.

Apache

The original SuSE RPM distribution for Apache was utilized, but extraneous modules were removed through use of the configuration files. No modules were required, because Apache only needs the core to deliver static files. The httpd.conf was provided this way:

ServerType standalone
ServerRoot "/srv/www"
LockFile /var/lock/subsys/httpd/httpd.accept.lock
PidFile /var/run/httpd.pid
ScoreBoardFile /var/run/httpd.scoreboard
Timeout 30
MaxRequestsPerChild 0
KeepAlive Off
Port 80
User wwwrun
Group nogroup
ServerAdmin webmaster@server000
ServerName server000
DocumentRoot "/bench/web"
<Directory />
AllowOverride None
Options FollowSymLinks
</Directory>
AccessFileName .htaccess
UseCanonicalName Off
DefaultType text/plain
HostnameLookups Off
ErrorLog /var/log/httpd/error_log
LogLevel crit
ServerSignature Off

Most of these lines are not relevant for benchmarking. KeepAlive was disabled because it is not used by the WebBench clients.


Setting MaxRequestsPerChild to 0 means that server processes are never restarted; this is only relevant for avoiding loss of memory in the case of memory leaks. Disabling AllowOverride on Directory / means that Apache does not search for .htaccess files in every directory, significantly improving performance. Allowing Apache to follow symlinks improves performance as well, because not every directory or file has to be tested on every request to see whether it is a symlink or not. HostnameLookups are disabled as well; the error log is the only log written, and only if the log level is critical. This means 404s (page not found, intentionally created by the clients) are not logged.

The dimensioning of the Apache web server also changed between the pretest and the final tests. When ninety-six clients with 5 threads each are exercising a single server image, there is a maximum of nearly 500 simultaneous requests that the single image has to handle. With ninety-six server images handling the ninety-six clients with 5 threads each, every image receives only a part of the total requests, but the requests are distributed randomly, making it hard to size the Apache web server. If each of the ninety-six server images provided 500 server processes, then nearly 50,000 Apache processes would be waiting to handle at most 500 requests in the ninety-six server scenario. This is an excessive number, causing a lot of useless scheduling overhead inside Linux and then inside z/VM, which first has to schedule ninety-six Linux images, which in turn have to schedule 500 processes each. If only 5 Apache processes per image wait for requests, however, that is too low to handle all the randomly distributed requests, resulting in a large number of failed requests. The pretests showed that it is very expensive to dynamically manage the number of Apache processes, making it preferable to define a maximum number of processes, start as many as necessary, and not stop them when they are unused. This is how the Apache web server was dimensioned. The following parameter definitions were used:

MinSpareServers 125
MaxSpareServers 125
StartServers 125
MaxClients 125

This means that when Apache is started, 125 httpd processes are started and stay up all the time, and no more than 125 processes are allowed. After the results of the pretests were examined, it was decided to use the following dimensioning for the several scenarios, replacing the 125s in the example above with the appropriate number from the table below (see also the sizing sketch after the table):

Maximum number of server images    Apache parameter
 1                                 500
 7                                 250
19                                 125
35                                 100
55                                  75
96                                  50
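The Python sketch below is ours; the client and thread counts are taken from the text and the httpd counts from the table (using the upper bound of each band). It illustrates the tradeoff: the expected peak request load per image versus the fixed number of httpd processes configured.

CLIENTS, THREADS = 96, 5                                      # WebBench engines and threads per engine
SIZING = {1: 500, 7: 250, 19: 125, 35: 100, 55: 75, 96: 50}   # server images -> httpd processes per image

for images, httpd_per_image in SIZING.items():
    peak_per_image = CLIENTS * THREADS / images               # if requests were spread evenly
    total_httpd = images * httpd_per_image
    print(f"{images:3d} images: ~{peak_per_image:6.1f} requests/image, "
          f"{httpd_per_image} httpd each, {total_httpd} httpd processes in total")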

In order to keep the WebBench results comparable and reproducible, an optimization script was started before each run to fill the buffer cache with the WebBench test data. This meant that the initial client scenarios started with the same relative number of transactions to run as the later scenarios. If this had not been done, the earlier scenarios using fewer clients would have had less than optimal throughput, because they would have had to fill the buffer cache from disk, which the later scenarios would not have to do. Another optimization was keeping the minidisk cache (mdcache) in a defined state; without this, the prior benchmark run would determine whether data was in the mdcache or not. With this optimization script the Project Team was able to ensure comparable runs and repeatable results.

NetBench

z/VM

By default, every client is assigned a pair of disks for the NetBench file space. This is more than is required in the mainframe environment, because the file space needed by Enterprise DiskMix is only about 20 MB per client; in the one-server scenario this means less than 2,000 MB are required for the server, and in the ninety-six server scenario every server only needs about 20 MB of disk space. For this, two DASD would have been sufficient. Both DASD are dedicated to the server image, and the addresses are selected in such a way that sequential server images do not use sequential addresses. The first DASD address is calculated as follows:

(SERVER mod 2) * 0x100 + 0xf200 + ((SERVER / 2) mod 4) * 0x10 + int(SERVER / 8)

The second address is calculated as follows:

(SERVER mod 2) * 0x100 + 0xf400 + ((SERVER / 2) mod 4) * 0x10 + int(SERVER / 8)
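A minimal Python sketch of those two address formulas follows (the helper name is illustrative; mod is the % operator and the divisions are integer divisions):

def netbench_dasd_pair(server):
    """Return the two working-disk DASD addresses for a server image (server = 0-95)."""
    offset = (server % 2) * 0x100 + ((server // 2) % 4) * 0x10 + server // 8
    return 0xF200 + offset, 0xF400 + offset

for server in (0, 1, 2, 95):
    first, second = netbench_dasd_pair(server)
    print(f"SRV{server:03d}: {first:04X} and {second:04X}")

This keeps consecutive server images on different volumes and spreads the ninety-six disk pairs across the F2XX-F5XX ranges listed earlier.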

Linux

Why were two disks per Linux image utilized, if the data would fit on a single DASD? On Linux for zSeries you cannot build a file server without an LVM (Logical Volume Manager), because in this environment DASD are provided as Model 3 (about 2.3 GB) or Model 9 (about 6.9 GB) volumes, which means the pieces have to be concatenated. It was decided to include the LVM layer and build a volume group out of two disks for a more realistic scenario. The two physical volumes were added to a volume group with striping, and a single logical volume spanning the whole volume group was created. This logical volume was formatted with an ext2 file system and mounted with the option noatime, which means access times are not updated. Before every run this file system was cleaned, except for the lost+found directory, and the client mapping was verified so that every client mapped to the correct server. Before each run the Samba services were started and cron was stopped.

Samba

Samba optimization was done through the Samba configuration file:

[global]
workgroup = WORKGROUP
server string = %v@%h
# no need for printers
load printers = no
# no mangling, because it consumes excessive CPU cycles
mangle case = no
case sensitive = yes
# only scan every 5 minutes
change notify timeout = 300
# hiding also consumes CPU time
hide dot files = no
# w2k performance++
large readwrite = yes
log level = 0
max log size = 0
security = share
# Pretests
socket options = TCP_NODELAY SO_RCVBUF=4096 SO_SNDBUF=65336
domain master = no
preferred master = no
local master = no
wins support = no
dns proxy = no
guest account = root
# no link verification
wide links = yes
# need to be no, to enable level2 oplocks
kernel oplocks = no

[bench]
comment = /bench/net
path = /bench/net
browseable = yes
writable = yes
guest only = yes
public = yes
level2 oplocks = yes
oplocks = yes
write cache size = 4096

Most of these settings are recommended on Samba performance sites and seemed reasonable; some were verified during the pretest phase. A write cache size of 4 KB matches the page size of the S/390 hardware, which seems to be quite easy and fast to allocate. Logging is disabled and oplocks are enabled. All of the tests that consume CPU were disabled: case sensitivity, name mangling, link verification, hiding of dot files, and so on. NetBench does not require any of these features, and authorization was reduced to the minimum to avoid overhead. The Samba optimizations, in particular the socket options TCP_NODELAY … SO_RCVBUF=4096, were key to improving NetBench performance. However, paradoxically, once performance improved at the lower server-image counts, buffer underflows started occurring at the higher server-image counts, confirming the scalability issues in this environment.


Appendix C

NetBench Result Details and Mainframe CPU Utilization


NetBench Results Summary Produced by NetBench

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-01.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_1_client 1 48.745 0.324 Engine Types: file

dm_4_client 4 156.610 0.404 NetBench 7.0.3

dm_8_client 8 232.198 0.547 Start Suite: Mon May 12 11:22:30 2003

dm_12_client 12 267.435 0.714 Finish Suite: Mon May 12 16:17:36 2003

dm_16_client 16 282.888 0.902 Elapsed Time: 04:55:06

dm_20_client 20 286.590 1.114 Status: Suite completed successfully

dm_24_client 24 284.195 1.348 Comments: 1 CPU LPAR - One Linux Server

dm_28_client 28 284.126 1.573dm_32_client 32 283.488 1.804dm_36_client 36 281.516 2.044dm_40_client 40 280.123 2.286dm_44_client 44 277.850 2.531dm_48_client 48 275.032 2.787dm_52_client 52 277.350 2.995dm_56_client 56 276.610 3.230dm_60_client 60 271.328 3.529dm_64_client 64 270.871 3.768dm_68_client 68 270.157 4.023dm_72_client 72 268.257 4.288dm_76_client 76 267.568 4.549dm_80_client 80 268.987 4.756dm_84_client 84 269.247 4.983dm_88_client 88 268.621 5.236dm_92_client 92 266.693 5.519dm_96_client 96 267.679 5.733

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-01.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_1_client 1 46.681 0.339 Engine Types: file

dm_4_client 4 155.519 0.407 NetBench 7.0.3

dm_8_client 8 235.695 0.539 Start Suite: Mon May 12 17:27:55 2003

dm_12_client 12 288.159 0.663 Finish Suite: Mon May 12 22:18:48 2003

dm_16_client 16 326.527 0.781 Elapsed Time: 04:50:53

dm_20_client 20 399.455 0.797 Status: Suite completed successfully

dm_24_client 24 438.216 0.873 Comments: LPAR 2 CPU Run - One Linux Server

dm_28_client 28 466.705 0.956dm_32_client 32 490.358 1.042dm_36_client 36 507.571 1.133dm_40_client 40 524.868 1.217dm_44_client 44 536.255 1.310dm_48_client 48 543.014 1.411dm_52_client 52 546.086 1.520dm_56_client 56 544.813 1.642dm_60_client 60 544.583 1.762dm_64_client 64 545.234 1.877dm_68_client 68 542.189 2.007dm_72_client 72 536.144 2.150dm_76_client 76 535.416 2.273dm_80_client 80 535.290 2.391dm_84_client 84 528.188 2.544dm_88_client 88 527.034 2.673dm_92_client 92 520.659 2.825dm_96_client 96 516.302 2.969

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-01.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_1_client 1 44.525 0.356 Engine Types: file

dm_4_client 4 126.969 0.500 NetBench 7.0.3

dm_8_client 8 150.597 0.847 Start Suite: Mon May 19 23:09:15 2003

dm_12_client 12 150.523 1.272 Finish Suite: Tue May 20 04:03:08 2003

dm_16_client 16 147.823 1.729 Elapsed Time: 04:53:53

dm_20_client 20 147.110 2.174 Status: Suite completed successfully

dm_24_client 24 146.081 2.623 Comments: z/VM 1 CPU One Linux Image

dm_28_client 28 143.712 3.114dm_32_client 32 143.003 3.570dm_36_client 36 141.473 4.066dm_40_client 40 137.431 4.652dm_44_client 44 138.116 5.081dm_48_client 48 137.233 5.608dm_52_client 52 135.645 6.139dm_56_client 56 135.150 6.635dm_60_client 60 134.836 7.157dm_64_client 64 133.857 7.686dm_68_client 68 133.361 8.207dm_72_client 72 131.876 8.777dm_76_client 76 129.656 9.448dm_80_client 80 131.572 9.797dm_84_client 84 130.392 10.379dm_88_client 88 130.506 10.875dm_92_client 92 128.832 11.545dm_96_client 96 127.864 12.094

Mainframe CPU usage charts: generated by Linux nice when Linux was running on an LPAR, and by FCON when Linux was running on z/VM.


Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-01.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_1_client 1 41.249 0.383 Engine Types: file

dm_4_client 4 108.184 0.588 NetBench 7.0.3

dm_8_client 8 149.148 0.854 Start Suite: Mon May 19 13:52:39 2003

dm_12_client 12 187.547 1.022 Finish Suite: Mon May 19 18:45:45 2003

dm_16_client 16 207.476 1.231 Elapsed Time: 04:53:06

dm_20_client 20 224.779 1.419 Status: Suite completed successfully

dm_24_client 24 241.535 1.588 Comments: z/VM, 2 CPUs - One Linux Virtual Server

dm_28_client 28 247.224 1.808dm_32_client 32 252.891 2.024dm_36_client 36 255.260 2.258dm_40_client 40 257.927 2.483dm_44_client 44 252.107 2.791dm_48_client 48 247.469 3.101dm_52_client 52 253.755 3.276dm_56_client 56 256.945 3.480dm_60_client 60 247.759 3.873dm_64_client 64 241.592 4.233dm_68_client 68 239.465 4.546dm_72_client 72 231.926 4.972dm_76_client 76 233.442 5.215dm_80_client 80 232.290 5.516dm_84_client 84 212.446 6.333dm_88_client 88 209.256 6.756dm_92_client 92 226.039 6.515dm_96_client 96 230.285 6.686

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-02.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_2_client 2 79.340 0.399 Engine Types: file

dm_4_client 4 136.812 0.464 NetBench 7.0.3

dm_8_client 8 202.678 0.628 Start Suite: Tue May 20 20:40:24 2003

dm_12_client 12 243.616 0.786 Finish Suite: Wed May 21 01:31:58 2003

dm_16_client 16 275.026 0.927 Elapsed Time: 04:51:34

dm_20_client 20 300.779 1.062 Status: Suite completed successfully

dm_24_client 24 320.917 1.194 Comments: z/VM, 2 CPUs - 2 Linux Virtual Servers

dm_28_client 28 337.969 1.322dm_32_client 32 353.795 1.446dm_36_client 36 362.610 1.586dm_40_client 40 374.635 1.705dm_44_client 44 382.270 1.839dm_48_client 48 383.568 2.004dm_52_client 52 391.048 2.129dm_56_client 56 392.416 2.285dm_60_client 60 393.092 2.441dm_64_client 64 395.374 2.593dm_68_client 68 386.338 2.815dm_72_client 72 386.258 2.978dm_76_client 76 392.400 3.093dm_80_client 80 387.036 3.298dm_84_client 84 390.339 3.431dm_88_client 88 388.001 3.623dm_92_client 92 388.199 3.784dm_96_client 96 386.586 3.965

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-04.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_4_client 4 134.239 0.473 Engine Types: file

dm_8_client 8 191.822 0.663 NetBench 7.0.3

dm_12_client 12 224.924 0.850 Start Suite: Sun May 11 23:15:31 2003

dm_16_client 16 248.308 1.028 Finish Suite: Mon May 12 03:54:34 2003

dm_20_client 20 268.343 1.191 Elapsed Time: 04:39:03

dm_24_client 24 287.520 1.333 Status: Suite completed successfully

dm_28_client 28 306.519 1.457 Comments: z/VM, 2 CPUs - 4 Linux Virtual Servers

dm_32_client 32 323.335 1.581dm_36_client 36 338.855 1.698dm_40_client 40 345.158 1.854dm_44_client 44 359.083 1.960dm_48_client 48 370.275 2.076dm_52_client 52 377.052 2.209dm_56_client 56 389.221 2.304dm_60_client 60 395.212 2.435dm_64_client 64 399.688 2.569dm_68_client 68 405.124 2.690dm_72_client 72 409.151 2.816dm_76_client 76 411.069 2.955dm_80_client 80 410.913 3.103dm_84_client 84 414.022 3.239dm_88_client 88 411.034 3.426dm_92_client 92 417.228 3.521dm_96_client 96 417.090 3.675


Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-06.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_6_client 6 161.709 0.590 Engine Types: file

dm_8_client 8 182.961 0.695 NetBench 7.0.3

dm_12_client 12 217.275 0.880 Start Suite: Wed May 07 13:12:56 2003

dm_16_client 16 236.476 1.080 Finish Suite: Wed May 07 17:51:40 2003

dm_20_client 20 252.109 1.267 Elapsed Time: 04:38:44

dm_24_client 24 265.773 1.441 Status: Suite completed successfully

dm_28_client 28 279.699 1.598 Comments: z/VM, 2 CPUs - 6 Linux Virtual Servers

dm_32_client 32 292.721 1.747dm_36_client 36 304.674 1.888dm_40_client 40 317.976 2.014dm_44_client 44 325.167 2.164dm_48_client 48 333.643 2.303dm_52_client 52 343.315 2.424dm_56_client 56 348.823 2.567dm_60_client 60 356.208 2.698dm_64_client 64 367.083 2.789dm_68_client 68 379.969 2.860dm_72_client 72 383.812 3.003dm_76_client 76 389.607 3.117dm_80_client 80 396.988 3.221dm_84_client 84 398.211 3.375dm_88_client 88 400.483 3.514dm_92_client 92 404.154 3.640dm_96_client 96 402.840 3.810

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-08.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_8_client 8 177.110 0.718 Engine Types: file

dm_12_client 12 208.110 0.919 NetBench 7.0.3

dm_16_client 16 230.088 1.111 Start Suite: Tue May 20 06:39:53 2003

dm_20_client 20 243.245 1.313 Finish Suite: Tue May 20 11:07:03 2003

dm_24_client 24 254.987 1.503 Elapsed Time: 04:27:10

dm_28_client 28 264.019 1.695 Status: Suite completed successfully

dm_32_client 32 271.472 1.886 Comments: z/VM, 2 CPUs, 8 Linux Virtual Servers

dm_36_client 36 281.276 2.049dm_40_client 40 292.245 2.193dm_44_client 44 301.751 2.338dm_48_client 48 311.208 2.471dm_52_client 52 322.534 2.584dm_56_client 56 329.460 2.719dm_60_client 60 338.641 2.829dm_64_client 64 345.327 2.960dm_68_client 68 352.559 3.083dm_72_client 72 355.648 3.228dm_76_client 76 358.194 3.381dm_80_client 80 366.593 3.482dm_84_client 84 371.308 3.609dm_88_client 88 371.387 3.782dm_92_client 92 377.019 3.897dm_96_client 96 375.446 4.084

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-10.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_10_client 10 187.226 0.852 Engine Types: file

dm_12_client 12 199.338 0.959 NetBench 7.0.3

dm_16_client 16 219.342 1.165 Start Suite: Wed May 07 21:27:05 2003

dm_20_client 20 235.756 1.355 Finish Suite: Thu May 08 01:54:25 2003

dm_24_client 24 246.102 1.559 Elapsed Time: 04:27:20

dm_28_client 28 254.203 1.762 Status: Suite completed successfully

dm_32_client 32 261.221 1.960 Comments: z/VM, 2 CPUs, 10 Linux Virtual Servers

dm_36_client 36 268.070 2.149dm_40_client 40 275.685 2.325dm_44_client 44 283.847 2.486dm_48_client 48 291.312 2.639dm_52_client 52 298.311 2.785dm_56_client 56 306.456 2.917dm_60_client 60 309.340 3.089dm_64_client 64 318.498 3.205dm_68_client 68 326.899 3.323dm_72_client 72 333.437 3.452dm_76_client 76 340.035 3.570dm_80_client 80 344.218 3.710dm_84_client 84 351.091 3.821dm_88_client 88 353.228 3.974dm_92_client 92 357.117 4.106dm_96_client 96 361.924 4.232


Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-12.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_12_client 12 196.336 0.974 Engine Types: file

dm_16_client 16 215.828 1.184 NetBench 7.0.3

dm_20_client 20 229.494 1.391 Start Suite: Tue May 06 17:54:01 2003

dm_24_client 24 242.491 1.581 Finish Suite: Tue May 06 22:10:17 2003

dm_28_client 28 250.091 1.790 Elapsed Time: 04:16:16

dm_32_client 32 255.710 2.003 Status: Suite completed successfully

dm_36_client 36 260.460 2.216 Comments: z/VM, 2 CPUs, 12 Linux Virtual Servers

dm_40_client 40 267.134 2.400dm_44_client 44 273.068 2.581dm_48_client 48 277.977 2.759dm_52_client 52 283.799 2.930dm_56_client 56 290.062 3.084dm_60_client 60 295.742 3.240dm_64_client 64 301.832 3.386dm_68_client 68 308.194 3.526dm_72_client 72 314.361 3.662dm_76_client 76 318.064 3.812dm_80_client 80 322.194 3.964dm_84_client 84 326.664 4.125dm_88_client 88 330.349 4.269dm_92_client 92 332.239 4.437dm_96_client 96 334.230 4.599

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-14.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_14_client 14 199.108 1.123 Engine Types: file

dm_16_client 16 206.372 1.239 NetBench 7.0.3

dm_20_client 20 223.935 1.426 Start Suite: Thu May 08 09:51:04 2003

dm_24_client 24 235.005 1.630 Finish Suite: Thu May 08 14:07:13 2003

dm_28_client 28 244.425 1.833 Elapsed Time: 04:16:09

dm_32_client 32 251.543 2.036 Status: Suite completed successfully

dm_36_client 36 257.054 2.243 Comments: z/VM, 2 CPUs, 14 Linux Virtual Servers

dm_40_client 40 257.042 2.490dm_44_client 44 264.405 2.662dm_48_client 48 269.414 2.849dm_52_client 52 273.509 3.040dm_56_client 56 277.199 3.228dm_60_client 60 282.614 3.387dm_64_client 64 286.392 3.567dm_68_client 68 289.913 3.749dm_72_client 72 296.729 3.880dm_76_client 76 302.148 4.023dm_80_client 80 305.736 4.189dm_84_client 84 308.529 4.345dm_88_client 88 315.442 4.450dm_92_client 92 319.237 4.612dm_96_client 96 321.305 4.778

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-16.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_16_client 16 204.343 1.251 Engine Types: file

dm_20_client 20 217.964 1.466 NetBench 7.0.3

dm_24_client 24 231.707 1.656 Start Suite: Tue May 06 23:31:27 2003

dm_28_client 28 242.146 1.851 Finish Suite: Wed May 07 03:36:41 2003

dm_32_client 32 248.904 2.056 Elapsed Time: 04:05:14

dm_36_client 36 252.278 2.284 Status: Suite completed successfully

dm_40_client 40 256.299 2.500 Comments: z/VM, 2 CPUs - 16 Linux Virtual Servers

dm_44_client 44 260.031 2.706dm_48_client 48 260.322 2.944dm_52_client 52 266.104 3.122dm_56_client 56 270.218 3.311dm_60_client 60 273.545 3.509dm_64_client 64 277.324 3.686dm_68_client 68 281.839 3.851dm_72_client 72 288.329 3.993dm_76_client 76 292.754 4.146dm_80_client 80 291.358 4.380dm_84_client 84 291.614 4.602dm_88_client 88 301.350 4.671dm_92_client 92 307.438 4.801dm_96_client 96 311.287 4.960


Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-20.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_20_client 20 209.313 1.526 Engine Types: file

dm_24_client 24 221.395 1.733 NetBench 7.0.3

dm_28_client 28 231.620 1.933 Start Suite: Thu May 08 14:47:38 2003

dm_32_client 32 237.468 2.160 Finish Suite: Thu May 08 18:40:48 2003

dm_36_client 36 246.962 2.338 Elapsed Time: 03:53:10

dm_40_client 40 252.010 2.540 Status: Suite completed successfully

dm_44_client 44 254.357 2.769 Comments: z/VM, 2 CPUs - 20 Linux Virtual Servers

dm_48_client 48 257.454 2.985dm_52_client 52 257.610 3.220dm_56_client 56 259.230 3.456dm_60_client 60 260.315 3.684dm_64_client 64 262.820 3.889dm_68_client 68 266.917 4.072dm_72_client 72 268.805 4.284dm_76_client 76 272.067 4.464dm_80_client 80 274.211 4.663dm_84_client 84 277.247 4.845dm_88_client 88 279.975 5.032dm_92_client 92 285.252 5.172dm_96_client 96 288.167 5.344

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-24.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_24_client 24 213.336 1.800 Engine Types: file

dm_28_client 28 222.251 2.018 NetBench 7.0.3

dm_32_client 32 230.829 2.219 Start Suite: Thu May 08 19:49:53 2003

dm_36_client 36 234.394 2.461 Finish Suite: Thu May 08 23:41:41 2003

dm_40_client 40 242.542 2.639 Elapsed Time: 03:51:48

dm_44_client 44 246.711 2.853 Status: Suite completed successfully

dm_48_client 48 249.003 3.077 Comments: z/VM, 2 CPUs - 24 Linux Virtual Servers

dm_52_client 51 250.476 3.254

Comments: Final Run; Mix dm_52_client had errors; Mix dm_56_client had errors; Mix dm_60_client had errors; Mix dm_64_client had errors; Mix dm_68_client had errors; Mix dm_72_client had errors; Mix dm_76_client had errors; Mix dm_80_client had errors; Mix dm_84_client had errors; Mix dm_88_client had errors; Mix dm_92_client had errors; Mix dm_96_client had errors

dm_56_client 55 253.753 3.459dm_60_client 59 255.315 3.701dm_64_client 63 254.516 3.952dm_68_client 67 257.895 4.148dm_72_client 71 258.030 4.398dm_76_client 75 262.030 4.593dm_80_client 79 265.025 4.777dm_84_client 83 266.331 4.990dm_88_client 87 268.969 5.184dm_92_client 91 270.026 5.399dm_96_client 95 271.674 5.593

Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-48.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_48_client 47 213.237 3.519 Engine Types: file

dm_52_client 48 214.091 3.582 NetBench 7.0.3

dm_56_client 50 212.807 3.753 Start Suite: Sun May 11 09:08:38 2003

dm_60_client 50 214.246 3.729 Finish Suite: Sun May 11 11:54:12 2003

dm_64_client 51 213.690 3.813 Elapsed Time: 02:45:34

dm_68_client 53 223.054 3.787 Status: Suite completed successfully

dm_72_client 54 214.948 4.017 Comments: z/VM, 2 CPUs - 48 Linux Virtual Servers

dm_76_client 56 225.944 3.973

Comments: Final Run; Mix dm_48_client had errors; Mix dm_52_client had errors; Mix dm_56_client had errors; Mix dm_60_client had errors; Mix dm_64_client had errors; Mix dm_68_client had errors; Mix dm_72_client had errors; Mix dm_76_client had errors; Mix dm_80_client had errors; Mix dm_84_client had errors; Mix dm_88_client had errors; Mix dm_92_client had errors; Mix dm_96_client had errors

dm_80_client 56 218.189 4.115dm_84_client 59 224.090 4.214dm_88_client 60 226.257 4.251dm_92_client 62 228.620 4.338dm_96_client 64 231.589 4.429


Table 1: NetBench Summary (C:\NetBench\Controller\Suites\NetBench\edm-96.tst)
Columns: Mix Name, Engines Participating, Total Throughput (MBits/sec), Average Response Time (milliseconds), Test Information

dm_96_client 95 198.880 7.692 Engine Types: file

NetBench 7.0.3

Start Suite: Sun May 11 21:23:45 2003

Finish Suite: Sun May 11 21:37:07 2003
Elapsed Time: 00:13:22
Status: Suite completed successfully
Comments: z/VM, 2 CPUs, 96 Linux Images

Comments: Final Run; Mix dm_96_client had errors


Appendix D

WebBench Result Details and Mainframe CPU Utilization


WebBench Results Summary Produced by WebBench

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-01.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 563.246 3398673.500 Engine Types: http

4_client 2177.608 13174385.500 Ziff Davis Media WebBench 4.1

8_client 2218.688 13420152.000 Start Suite: Tue May 13 10:19:42 2003

12_client 2185.638 13323766.250 Finish Suite: Tue May 13 12:35:33 2003

16_client 2117.767 12701603.688 Elapsed Time: 02:15:51

20_client 2293.042 13881201.500 Status: Suite completed successfully

24_client 2328.821 13945623.688 Comments: LPAR 1 CPU One Linux Image

28_client 2327.696 13972523.62632_client 2294.333 13792293.90736_client 2328.750 14003151.62640_client 2314.058 13955194.34544_client 2306.933 13937846.40748_client 2328.504 14015225.65852_client 2319.584 13978052.14256_client 2328.571 14011151.57960_client 2337.038 14066775.37664_client 2323.513 14066436.56368_client 2327.763 14171913.11172_client 2334.721 13982045.47076_client 2328.838 14094018.65780_client 2310.454 13959665.68984_client 2299.662 13916103.67388_client 2302.654 13972289.57992_client 2319.667 13960780.09596_client 2317.575 14049268.290

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-01.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 564.454 3373467.750 Engine Types: http

4_client 2201.517 13280359.250 Ziff Davis Media WebBench 4.1

8_client 4156.963 25002440.250 Start Suite: Tue May 13 06:21:00 2003

12_client 4681.958 28423001.250 Finish Suite: Tue May 13 08:36:28 2003

16_client 4656.054 28063666.500 Elapsed Time: 02:15:28

20_client 4786.200 29074122.250 Status: Suite completed successfully

24_client 4944.533 29814023.125 Comments: LPAR 2 CPUs One Linux Image

28_client 4935.404 29766572.00032_client 4994.783 30286016.06336_client 5020.025 30366031.00040_client 5007.171 30285738.00044_client 5038.387 30519047.31348_client 5034.796 30415763.62552_client 5009.629 30184690.12556_client 5013.808 30416601.62560_client 5042.375 30592049.31464_client 5026.383 30446927.93968_client 5035.725 30394289.37772_client 4983.846 29964100.97176_client 5027.338 30460676.62780_client 5034.454 30322515.34684_client 5018.433 30130076.37788_client 5037.208 30498464.25392_client 4989.858 30223221.94096_client 4991.067 30122673.534

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-01.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 505.233 3027504.750 Engine Types: http

4_client 1824.492 11015875.750 Ziff Davis Media WebBench 4.1

8_client 1929.133 11692927.250 Start Suite: Sun May 18 07:17:42 2003

12_client 1925.158 11645684.563 Finish Suite: Sun May 18 09:32:48 2003

16_client 2082.163 12615946.313 Elapsed Time: 02:15:06

20_client 2231.525 13459465.563 Status: Suite completed successfully

24_client 2231.029 13569094.813 Comments: z/VM 2 CPU One Linux Image

28_client 2274.525 13745646.87632_client 2226.425 13342608.65736_client 2317.900 14002822.34540_client 2308.058 13838719.25144_client 2250.308 13560132.00148_client 2432.067 14573622.59552_client 2428.008 14732380.40756_client 2469.663 14906477.04860_client 2507.433 15061539.64164_client 2373.741 14341243.53268_client 2381.188 14425274.79872_client 2329.925 14064467.31476_client 2364.008 14357533.62680_client 2345.996 14112764.15884_client 2348.121 14216289.01788_client 2302.529 13906044.02492_client 2303.779 13885585.10396_client 2374.104 14416890.627

Mainframe CPU usage charts: generated by Linux nice when Linux was running on an LPAR, and by FCON when Linux was running on z/VM.


Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-04.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 564.117 3368076.500 Engine Types: http

4_client 1987.646 11929012.500 Ziff Davis Media WebBench 4.1

8_client 2888.079 17460843.500 Start Suite: Tue May 13 20:53:28 2003

12_client 3108.217 18888123.000 Finish Suite: Tue May 13 23:07:59 2003

16_client 3105.396 18753145.875 Elapsed Time: 02:14:31

20_client 3078.517 18600568.875 Status: Suite completed successfully

24_client 3085.154 18596791.563 Comments: z/VM 2 CPU 4 Linux Images

28_client 3107.858 18802601.56332_client 3140.354 18979452.50036_client 3110.017 18777508.18840_client 3155.350 19065973.81444_client 3181.858 19246605.93948_client 3184.242 19342458.00152_client 3274.566 19800812.72056_client 3326.100 20114714.56460_client 3309.883 20046248.72064_client 3361.367 20257705.59568_client 3387.163 20481148.56472_client 3409.150 20574084.93976_client 3428.388 20703393.23680_client 3416.042 20584859.42384_client 3427.917 20621353.01788_client 3427.821 20804054.20592_client 3386.096 20333862.90796_client 3424.396 20852257.689

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-08.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 562.483 3363056.000 Engine Types: http

4_client 1869.908 11268594.250 Ziff Davis Media WebBench 4.1

8_client 2426.779 14740806.750 Start Suite: Sun May 18 19:27:57 2003

12_client 2764.904 16752469.375 Finish Suite: Sun May 18 21:42:04 2003

16_client 2909.842 17667828.750 Elapsed Time: 02:14:07

20_client 2938.871 17764519.313 Status: Suite completed successfully

24_client 2861.583 17278184.813 Comments: z/VM 2 CPU 8 Linux Images

28_client 2827.646 17103665.00032_client 2837.046 17210348.93836_client 2844.542 17162732.81340_client 2826.188 17138950.68944_client 2851.325 17269339.43948_client 2869.250 17421379.28352_client 2849.021 17293168.31456_client 2853.196 17335032.78360_client 2868.012 17186971.26764_client 2868.700 17304995.70468_client 2859.587 17230662.56472_client 2879.042 17452165.56476_client 2914.546 17662149.28380_client 2909.796 17492260.36184_client 2912.737 17598699.73588_client 2938.946 17687099.11092_client 2904.762 17642091.20596_client 2924.167 17675696.626

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-12.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 563.508 3369360.250 Engine Types: http

4_client 1804.504 10899475.500 Ziff Davis Media WebBench 4.1

8_client 2211.558 13327479.500 Start Suite: Fri May 16 09:18:30 2003

12_client 2464.088 14945937.875 Finish Suite: Fri May 16 11:32:41 2003

16_client 2638.938 15785413.875 Elapsed Time: 02:14:11

20_client 2744.192 16553540.313 Status: Suite completed successfully

24_client 2773.317 16713242.500 Comments: z/VM 2 CPUs 12 Linux Images

28_client 2815.625 17029295.31332_client 2807.925 16972853.09436_client 2796.887 16906248.09540_client 2746.988 16574673.15744_client 2759.446 16672262.53348_client 2856.858 17176279.22052_client 2868.408 17244639.47056_client 2881.788 17367095.22060_client 2851.779 17278515.65864_client 2892.175 17503045.06468_client 2909.171 17573154.07972_client 2926.750 17665954.45476_client 2902.708 17541870.25180_client 2935.163 17737420.97084_client 2949.575 17808011.95488_client 2959.721 17772008.89292_client 2939.167 17701537.90896_client 2976.629 17902265.798


Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-16.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 560.192 3359502.500 Engine Types: http

4_client 1763.650 10622860.750 Ziff Davis Media WebBench 4.1

8_client 2078.642 12568992.750 Start Suite: Sun May 18 22:24:18 2003

12_client 2257.200 13750592.500 Finish Suite: Mon May 19 00:38:24 2003

16_client 2409.254 14444850.938 Elapsed Time: 02:14:06

20_client 2488.883 15025885.938 Status: Suite completed successfully

24_client 2538.738 15307622.063 Comments: z/VM 2 CPUs 16 Linux Images

28_client 2565.825 15515652.56332_client 2568.912 15491662.28236_client 2581.108 15563712.59540_client 2527.063 15342850.37644_client 2594.771 15654213.78248_client 2596.008 15680458.37652_client 2588.038 15665671.04856_client 2602.696 15728569.09560_client 2602.141 15791480.25164_client 2612.004 15821988.87768_client 2610.017 15760450.18972_client 2518.716 15178306.28276_client 2563.521 15673611.47080_client 2602.575 15851649.45484_client 2612.400 15979259.34588_client 2614.679 16011264.00292_client 2576.100 15555842.28396_client 2544.634 15480989.181

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-20.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 560.600 3409473.750 Engine Types: http

4_client 1730.321 10460010.250 Ziff Davis Media WebBench 4.1

8_client 1977.167 12017093.500 Start Suite: Mon May 19 06:47:35 2003

12_client 2129.608 12865149.563 Finish Suite: Mon May 19 09:01:46 2003

16_client 2260.896 13641712.813 Elapsed Time: 02:14:11

20_client 2357.141 14216325.563 Status: Suite completed successfully

24_client 2448.221 14907619.563 Comments: z/VM 2 CPUs 20 Linux Images

28_client 2518.533 15233568.87532_client 2551.117 15387345.68836_client 2588.975 15671349.84540_client 2569.592 15561441.53244_client 2635.779 15948574.00148_client 2675.692 16319509.31352_client 2691.137 16192077.40856_client 2604.050 15712085.59660_client 2682.654 16253637.70464_client 2669.450 16097032.06468_client 2658.783 16067609.40772_client 2674.042 16125614.79876_client 2707.071 16329374.87680_client 2725.875 16494743.39284_client 2727.329 16467660.45488_client 2759.713 16754751.95492_client 2757.921 16729455.81496_client 2751.538 16652219.142

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-24.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 559.250 3379417.000 Engine Types: http

4_client 1719.904 10387346.500 Ziff Davis Media WebBench 4.1

8_client 1947.296 11813242.500 Start Suite: Thu May 15 03:02:59 2003

12_client 2073.650 12595173.875 Finish Suite: Thu May 15 05:18:01 2003

16_client 2177.262 13102762.438 Elapsed Time: 02:15:02

20_client 2249.767 13653494.438 Status: Suite completed successfully

24_client 2316.987 13907446.125 Comments: z/VM 2 CPUs 24 Linux Images

28_client 2334.813 14083687.03232_client 2341.433 14193634.31436_client 2313.029 13959968.59540_client 2336.996 14119424.12644_client 2331.921 14035371.18948_client 2325.558 14003989.36152_client 2309.584 13955021.89256_client 2296.033 13938331.28360_client 2302.783 13960011.22064_client 2321.792 13981329.95468_client 2246.108 13700879.43872_client 2255.271 13812596.11176_client 2250.079 13767829.92380_client 2259.379 13864094.79884_client 2293.125 14114571.15888_client 2269.363 13931416.86892_client 2270.345 13914144.83096_client 2292.062 14156476.048


Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-40.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 558.175 3391431.750 Engine Types: http

4_client 1678.858 10149347.250 Ziff Davis Media WebBench 4.1

8_client 1766.083 10691188.750 Start Suite: Thu May 15 10:49:17 2003

12_client 1743.058 10595756.938 Finish Suite: Thu May 15 13:03:18 2003

16_client 1772.521 10747898.938 Elapsed Time: 02:14:01

20_client 1856.996 11311295.563 Status: Suite completed successfully

24_client 1891.050 11407227.376 Comments: z/VM 2 CPUs 40 Linux Servers

28_client 1818.896 11010126.72032_client 1901.413 11441536.28236_client 1923.179 11512355.03240_client 1893.958 11372200.20444_client 1941.413 11763545.04848_client 1942.946 11812385.39152_client 1934.417 11694579.90756_client 1939.271 11732199.65760_client 1960.584 11736520.76664_client 1963.109 11819712.87668_client 1952.242 11772909.17372_client 1964.033 11876912.82976_client 1973.224 11937129.98680_client 1952.441 11824514.44684_client 1965.846 11893376.44788_client 1964.154 11808394.74392_client 1965.275 11922617.43896_client 1938.617 11690220.587

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-44.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 559.025 3382154.500 Engine Types: http

4_client 1710.687 10318486.000 Ziff Davis Media WebBench 4.1

8_client 1777.646 10736989.125 Start Suite: Sat May 17 11:50:10 2003

12_client 1811.917 10910610.000 Finish Suite: Sat May 17 14:04:10 2003

16_client 1827.767 11160598.875 Elapsed Time: 02:14:00

20_client 1843.979 11109855.313 Status: Suite completed successfully

24_client 1857.238 11221209.313 Comments: z/VM 2 CPUs 44 Linux Servers

28_client 1871.604 11173207.62532_client 1884.687 11367858.50136_client 1807.325 10965330.53240_client 1885.033 11344136.06344_client 1897.629 11446783.07948_client 1884.025 11477940.21952_client 1894.517 11408044.62656_client 1899.375 11442210.87660_client 1904.792 11528534.53264_client 1907.488 11590390.17368_client 1908.250 11468460.96972_client 1904.596 11468343.54876_client 1906.492 11511428.54880_client 1911.963 11577086.07184_client 1910.904 11541971.97088_client 1914.963 11611717.24392_client 1905.967 11492635.39296_client 1901.917 11517389.634

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-48.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 557.263 3348689.750 Engine Types: http

4_client 1626.975 9874928.000 Ziff Davis Media WebBench 4.1

8_client 1709.821 10455951.000 Start Suite: Fri May 16 06:46:38 2003

12_client 1751.579 10659308.875 Finish Suite: Fri May 16 09:01:42 2003

16_client 1755.129 10657422.125 Elapsed Time: 02:15:04

20_client 1755.100 10671111.813 Status: Suite completed successfully

24_client 1775.688 10779052.720 Comments: z/VM 2 CPUs 48 Linux Servers

28_client 1762.875 10737687.34532_client 1787.858 10834600.37636_client 1734.079 10445297.34540_client 1798.321 10792374.36044_client 1794.554 10852874.04848_client 1802.267 10834219.09552_client 1807.309 10874597.18856_client 1771.117 10667830.39260_client 1796.263 10858867.37664_client 1793.592 10719215.31468_client 1793.083 10799822.95472_client 1739.079 10502080.06476_client 1736.016 10520693.10280_client 1746.842 10563001.57184_client 1751.250 10580426.18088_client 1795.371 10823556.41592_client 1781.508 10796380.22096_client 1780.004 10717943.540


Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-56.tst)
Columns: Mix Name, Requests Per Second, Throughput (Bytes/Sec), Test Information

1_client 552.458 3314623.000 Engine Types: http

4_client 1642.704 9873124.750 Ziff Davis Media WebBench 4.1

8_client 1718.900 10427925.625 Start Suite: Sat May 17 15:30:21 2003

12_client 1777.733 10628706.813 Finish Suite: Sat May 17 17:44:25 2003

16_client 1816.679 10980369.500 Elapsed Time: 02:14:04

20_client 1855.400 11189225.625 Status: Suite completed successfully

24_client 1900.088 11332563.688 Comments: z/VM 2 CPUs 56 Linux Servers

28_client 1920.779 11635789.907
32_client 1950.758 11777611.189
36_client 1807.242 10879347.939
40_client 1968.775 11824954.283
44_client 1988.758 11987196.517
48_client 1996.512 12078779.282
52_client 2014.754 12070849.563
56_client 2021.646 12188380.876
60_client 2030.367 12242943.720
64_client 2038.062 12249220.251
68_client 2024.133 12350120.814
72_client 2047.317 12388277.813
76_client 2054.325 12391683.985
80_client 2054.567 12453583.329
84_client 2051.929 12333812.142
88_client 2057.166 12484305.915
92_client 2053.258 12383662.041
96_client 2059.758 12449263.188

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-64.tst)

Mix Name    Requests Per Second    Throughput (Bytes/Sec)    Test Information

1_client 557.842 3387013.250 Engine Types: http

4_client 1590.088 9653997.250 Ziff Davis Media WebBench 4.1

8_client 1677.875 10153138.375 Start Suite: Fri May 16 22:08:44 2003

12_client 1721.592 10404900.188 Finish Suite: Sat May 17 00:22:42 2003

16_client 1763.838 10684377.000 Elapsed Time: 02:13:58

20_client 1791.983 10741862.625 Status: Suite completed successfully

24_client 1815.113 10894529.594 Comments: z/VM 2 CPUs 64 Linux Servers

28_client 1760.100 10611595.407
32_client 1845.029 11206210.688
36_client 1865.875 11238274.938
40_client 1868.071 11372715.720
44_client 1877.954 11368059.548
48_client 1893.946 11457944.578
52_client 1893.646 11409724.892
56_client 1914.008 11533759.407
60_client 1909.708 11564541.938
64_client 1908.721 11535119.438
68_client 1906.520 11463587.376
72_client 1911.071 11542555.611
76_client 1913.892 11506095.330
80_client 1924.954 11620424.861
84_client 1898.542 11481648.048
88_client 1914.191 11543051.274
92_client 1927.708 11524023.321
96_client 1931.575 11734103.641

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-72.tst)

Mix Name    Requests Per Second    Throughput (Bytes/Sec)    Test Information

1_client 553.987 3362606.500 Engine Types: http

4_client 1571.729 9531120.500 Ziff Davis Media WebBench 4.1

8_client 1655.588 10038319.750 Start Suite: Sat May 17 18:20:32 2003

12_client 1695.279 10155749.375 Finish Suite: Sat May 17 20:34:38 2003

16_client 1734.267 10537166.500 Elapsed Time: 02:14:06

20_client 1754.925 10643923.156 Status: Suite completed successfully

24_client 1783.179 10798784.282 Comments: z/VM 2 CPUs 72 Linux Images

28_client 1789.141 10895295.594
32_client 1634.429 9918269.657
36_client 1836.521 11060138.563
40_client 1836.454 11102858.423
44_client 1852.512 11237194.532
48_client 1858.375 11229655.594
52_client 1864.525 11290662.657
56_client 1880.229 11337805.813
60_client 1875.233 11313143.798
64_client 1885.488 11327219.267
68_client 1865.729 11230638.470
72_client 1859.359 11250124.235
76_client 1865.392 11342706.282
80_client 1866.371 11263875.158
84_client 1874.034 11215555.298
88_client 1867.596 11269815.704
92_client 1876.317 11352613.149
96_client 1871.504 11235911.595

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-80.tst)

Mix Name    Requests Per Second    Throughput (Bytes/Sec)    Test Information

1_client 548.921 3340701.250 Engine Types: http

4_client 1523.592 9229480.250 Ziff Davis Media WebBench 4.1

8_client 1625.225 9880551.625 Start Suite: Fri May 16 16:52:16 2003

12_client 1645.558 9930997.375 Finish Suite: Fri May 16 19:06:26 2003

16_client 1687.896 10206509.438 Elapsed Time: 02:14:10

20_client 1712.046 10340124.438 Status: Suite completed successfully

24_client 1733.100 10437356.594 Comments: z/VM 2 CPUs 72 Linux Images

28_client 1663.825 10040566.626
32_client 1752.392 10675723.657
36_client 1773.371 10685010.407
40_client 1784.750 10809669.970
44_client 1796.775 10906910.876
48_client 1803.913 10895226.376
52_client 1814.146 10876173.672
56_client 1813.079 10979246.782
60_client 1821.317 11032901.032
64_client 1827.871 11102496.658
68_client 1820.858 10986564.407
72_client 1810.379 10942795.751
76_client 1818.900 11012425.110
80_client 1816.158 11015670.540
84_client 1830.366 11033962.086
88_client 1796.267 10835510.258
92_client 1815.533 10906116.891
96_client 1832.183 11077371.532

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-88.tst)

Mix Name    Requests Per Second    Throughput (Bytes/Sec)    Test Information

1_client 556.163 3347506.500 Engine Types: http

4_client 1542.817 9245833.000 Ziff Davis Media WebBench 4.1

8_client 1600.971 9647649.250 Start Suite: Sun May 18 01:02:32 2003

12_client 1628.254 9790460.063 Finish Suite: Sun May 18 03:16:43 2003

16_client 1660.763 10054929.438 Elapsed Time: 02:14:11

20_client 1666.488 10072985.094 Status: Suite completed successfully

24_client 1700.138 10263284.844 Comments: z/VM 2 CPUs 88 Linux Images

28_client 1703.737 10219315.282
32_client 1706.921 10392114.438
36_client 1728.742 10447621.439
40_client 1723.616 10284350.016
44_client 1730.388 10515820.001
48_client 1732.171 10472185.532
52_client 1749.854 10521974.532
56_client 1743.837 10486953.485
60_client 1727.050 10435019.376
64_client 1745.066 10441043.110
68_client 1737.600 10530495.845
72_client 1754.392 10522449.454
76_client 1740.133 10517175.384
80_client 1751.146 10567636.547
84_client 1739.504 10545819.876
88_client 1747.038 10565820.821
92_client 1748.896 10587564.173
96_client 1765.429 10633839.219

Table 1: WebBench Summary (C:\WebBench\Controller\Suites\WebBench\image\wb-96.tst)

Mix Name    Requests Per Second    Throughput (Bytes/Sec)    Test Information

1_client 535.733 3276261.750 Engine Types: http

4_client 1362.246 8196081.125 Ziff Davis Media WebBench 4.1

8_client 1410.942 8556899.000 Start Suite: Thu May 15 23:14:49 2003

12_client 1461.717 8844944.750 Finish Suite: Fri May 16 01:30:05 2003

16_client 1492.342 8966252.688 Elapsed Time: 02:15:16

20_client 1485.329 9002009.219 Status: Suite completed successfully

24_client 1439.816 8718959.563 Comments: z/VM 2 CPUs 96 Linux Images

28_client 1498.254 9013426.001
32_client 1532.467 9258621.344
36_client 1522.604 9303637.391
40_client 1534.463 9325291.422
44_client 1522.296 9171833.735
48_client 1532.354 9255364.220
52_client 1513.588 9193509.204
56_client 1493.283 8954128.907
60_client 1492.508 9026783.594
64_client 1462.800 8867448.243
68_client 1508.250 9149836.352
72_client 1516.179 9183160.571
76_client 1534.329 9265169.805
80_client 1553.004 9385860.626
84_client 1562.233 9512007.422
88_client 1546.583 9358848.216
92_client 1518.646 9282281.626
96_client 1509.492 9165824.891
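
Note: the tables above list only raw per-mix results. As an aside, and not part of the original benchmark tooling, the following minimal Python sketch shows one way to pull the per-mix rows out of any of these WebBench summary tables (pasted as plain text) and report the mix with the highest requests-per-second figure. The only assumption it makes is the row layout used in this appendix: "<N>_client <requests/sec> <bytes/sec>".

    import re

    # Minimal sketch (hypothetical helper, not from the original study):
    # extract per-mix rows from a WebBench summary table pasted as text
    # and return the mix with the highest requests per second.
    ROW = re.compile(r"(\d+)_client\s+([\d.]+)\s+([\d.]+)")

    def peak_mix(table_text):
        rows = [(int(clients), float(rps), float(bps))
                for clients, rps, bps in ROW.findall(table_text)]
        return max(rows, key=lambda row: row[1])

    # Example using two rows taken from the 96-server table above.
    sample = "4_client 1362.246 8196081.125\n84_client 1562.233 9512007.422"
    print(peak_mix(sample))  # -> (84, 1562.233, 9512007.422)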

