© 2009 VMware Inc. All rights reserved
Virtualizing Oracle Databases with VMware
Richard McDougall Chief Performance Architect
Agenda
• VMware Platform Introduction
• Why Virtualize Databases?
• Virtualization Technical Primer
• Performance Studies and Proof Points
• Deploying Databases in Virtual Environments
  • Consolidation and Sizing
VMware Virtualization Basics
VMotion Technology
VMotion Technology moves running virtual machines from one host to another while maintaining continuous service availability.
• Enables Resource Pools
• Enables High Availability
Resource Controls
Reservation
• Minimum service-level guarantee (in MHz)
• Applies even when the system is overcommitted
• Must pass admission control

Shares
• CPU entitlement is directly proportional to the VM's shares and depends on the total number of shares issued
• An abstract number; only the ratio matters

Limit
• Absolute upper bound on CPU entitlement (in MHz)
• Applies even when the system is not overcommitted

[Figure: entitlement scale from 0 MHz to total MHz; shares apply between the reservation (floor) and the limit (ceiling)]
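The proportional-share math above can be sketched with plain shell arithmetic (a toy illustration, not VMware tooling; the VM names and numbers are invented):

```shell
#!/bin/sh
# Toy illustration of proportional-share CPU entitlement (not VMware code).
# Hypothetical host capacity and per-VM share counts:
HOST_MHZ=3000
SHARES_A=2000   # VM A
SHARES_B=1000   # VM B
TOTAL_SHARES=$((SHARES_A + SHARES_B))
# Each VM's entitlement is proportional to its fraction of all issued shares.
ENT_A=$((HOST_MHZ * SHARES_A / TOTAL_SHARES))
ENT_B=$((HOST_MHZ * SHARES_B / TOTAL_SHARES))
echo "VM A: ${ENT_A} MHz, VM B: ${ENT_B} MHz"
```

Note that doubling both VMs' share counts changes nothing; as the bullet above says, only the ratio matters.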
Resource Control Example
[Figure: each step below shows the resulting share of total capacity]
1. Start with one VM: 100%
2. Add a 2nd VM with the same number of shares: 50% / 50%
3. Add a 3rd VM with the same number of shares: 33.3% each
4. Set the 3rd VM's limit to 25% of total capacity: 37.5% / 37.5% / 25%
5. Set the 1st VM's reservation to 50% of total capacity
6. Add a 4th VM with its reservation set to 75% of total capacity: FAILED ADMISSION CONTROL
Resource Pools
Motivation
• Allocate aggregate resources for sets of VMs
• Isolation between pools, sharing within pools
• Flexible hierarchical organization
• Access control and delegation

What is a resource pool?
• An abstract object with permissions
• Reservation, limit, and shares
• Parent pool, child pools, and VMs
• Can be used on a stand-alone host or in a cluster (group of hosts)

[Figure: an Admin root pool with two child pools: one with L: not set, R: 600 MHz, S: 60 shares (60%), the other with L: 2000 MHz, R: not set, S: 40 shares (40%); VM1, VM3, and VM4 run in Pool A, VM2 in Pool B]
Balanced Cluster

Example migration scenario: 4_4_0_0 with DRS

[Figure: four HP ProLiant DL380 G6 hosts managed by vCenter; DRS migrates VMs from the heavily loaded hosts to the lightly loaded ones, turning an imbalanced cluster into a balanced one]
DRS Scalability – Transactions per Minute (higher is better)

[Chart: transactions per minute, DRS vs. no DRS, for run scenarios 2_2_2_2, 3_2_2_1, 3_3_1_1, 3_3_2_0, 4_2_1_1, 4_2_2_0, 4_3_1_0, 4_4_0_0, 5_3_0_0]

An already-balanced cluster sees fewer gains; the gains grow with imbalance (> 40%).
DRS Scalability – Application Response Time (lower is better)

[Chart: transaction response time (ms), DRS vs. no DRS, for the same run scenarios]
VMware HA

[Figure: two ESX hosts each running app/OS VMs; when one host fails, HA reboots its VMs on the surviving host]
VMware Fault Tolerance

[Figure: a VM runs in lockstep on two ESX hosts; on host failure there is no reboot, just a seamless cutover to the FT shadow VM]
vApp: The application of the cloud
Lifting the virtualized workload higher up the stack:
• VM = virtualized hardware box
• vApp = virtualized software solution
• Takes the benefits of virtualization (encapsulation, isolation, and mobility) higher up the stack

Properties:
• Comprises one or more VMs (may be a multi-tier application)
• Encapsulates requirements on the deployment environment
• Distributed as an OVF package

Built by:
• ISVs / virtual appliance vendors
• IT administrators
• SIs / VARs

[Figure: example vApp spanning SAP, Tomcat, WebSphere, and Exchange VMs, with policies: 1. Product: eCommerce; 2. Topology; 3. Resource requirements: CPU, memory, disk, bandwidth; 4. Only port 80 is used; 5. DR RPO: 1 hour; 6. VRM: encrypt with SHA-1; 7. Decommission in 2 months]
The Progression of Virtualization to Cloud
• 1998: VMware Workstation (virtualization)
• 2001: VMware ESX® (server virtualization)
• 2003: VMware Infrastructure (virtual resource pools)
• 2009: VMware vSphere™ (complete virtualization platform)

From the desktop through the datacenter... to the cloud.
Datacenter of the Future: the Private Cloud
• On-demand capacity
• Pooling and load balancing of server, storage, and network
• Built-in availability, security, and scalability

[Figure: a compute factory of resource pools spanning multiple vSphere hosts, exposed through an API]
vSphere 4.0 – The Most Complete Virtualization Platform
Application Services
• Availability: clustering, data protection, fault tolerance
• Security: firewall, anti-virus, intrusion prevention, intrusion detection
• Scalability: dynamic resource sizing

Infrastructure Services
• vCompute: hardware assist, enhanced live-migration compatibility
• vStorage: storage management and replication, storage virtual appliances
• vNetwork: network management
Business-Critical Application Momentum
Source: VMware customer survey, September 2008, sample size 1038. Data: within the subset of VMware customers running a specific app, the % that have at least one instance of that app in production in a VM.

[Chart: % of customers running apps in production on VMware: MS Exchange, MS SharePoint, MS SQL, Oracle Middleware, Oracle DB, and IBM WebSphere range from 34% to 56%; IBM DB2 24%; SAP 27%]

In a recent Gartner poll, 73% of customers said they use x86 virtualization for mission-critical applications in production. Source: Gartner IOM Conference (June 2008), "Linux and Windows Server Virtualization Is Picking Up Steam" (ID Number: G00161702)
Agenda
• VMware Platform Introduction
• Why Virtualize Databases?
• Virtualization Technical Primer
• Performance Studies and Proof Points
• Deploying Databases in Virtual Environments
  • Picking a Hardware Platform
  • Configuring Storage
  • Configuring the Virtual Machine
  • OS Choices and Tuning
  • Database Configuration
  • Performance Monitoring
Provision DB On-Demand
Pre-Configured vApps
• Standardize on optimal app and OS configurations
• Minimize configuration drift and errors
• Support multi-tier apps

Provision On Demand
• Accelerate app development
• Faster service availability

[Figure: a pre-built database vApp (OS + SQL database, Enterprise Edition, 4 vCPU, 4 GB) cloned from the lab into production to accelerate dev/test and speed service availability]
Databases: Why Use VMs Rather than DB Virtualization?
Virtualization at the hypervisor level provides the best abstraction
• Each DBA has their own hardened, isolated, managed sandbox

Strong isolation
• Security
• Performance/resources
• Configuration
• Fault isolation

Scalable performance
• Low-overhead virtual database performance
• Efficiently stack databases per host
Agenda
• VMware Platform Introduction
• Why Virtualize Databases?
• Virtualization Technical Primer
• Performance Studies and Proof Points
• Deploying Databases in Virtual Environments
  • Picking a Hardware Platform
  • Configuring Storage
  • Configuring the Virtual Machine
  • OS Choices and Tuning
  • Database Configuration
  • Performance Monitoring
VMware ESX Architecture
[Figure: guest VMs run on top of the monitor (BT, HW, PV); the VMkernel provides the scheduler, memory allocator, virtual switch, file system, virtual NIC, and virtual SCSI on top of the physical hardware and the native NIC and I/O drivers]

CPU is controlled by the scheduler and virtualized by the monitor. The monitor supports:
• BT (Binary Translation)
• HW (Hardware assist)
• PV (Paravirtualization)

Memory is allocated by the VMkernel and virtualized by the monitor.

Network and I/O devices are emulated and proxied through native device drivers.
Agenda
• VMware Platform Introduction
• Why Virtualize Databases?
• Virtualization Technical Primer
• Performance Studies and Proof Points
• Deploying Databases in Virtual Environments
  • Picking a Hardware Platform
  • Configuring Storage
  • Configuring the Virtual Machine
  • OS Choices and Tuning
  • Database Configuration
  • Performance Monitoring
Evolution of Performance for Large Apps on ESX
Ability to satisfy performance demands, from the general population of apps up to mission-critical apps:

• ESX 2.x: overhead 30-60%; 2 vCPUs; 3.6 GB VM RAM; 64 GB phys RAM; 16-core pCPUs; <10,000 IOPS; 380 Mb/s network; monitor type: binary translation
• VI 3.0: overhead 20-40%; 2 vCPUs; 16 GB VM RAM; 64 GB phys RAM; 16-core pCPUs; 10,000 IOPS; 800 Mb/s network; Gen-1 HW virtualization; monitor type: VT / SVM
• VI 3.5: overhead 10-30%; 4 vCPUs; 64 GB VM RAM; 256 GB phys RAM; 64-core pCPUs; 100,000 IOPS; 9 Gb/s network; 64-bit OS support; Gen-2 HW virtualization; monitor type: NPT
• vSphere 4.0: overhead 2-15%; 8 vCPUs; 255 GB VM RAM; 1 TB phys RAM; 64-core pCPUs; 350,000 IOPS; 28 Gb/s network; 64-bit OS support; 320 VMs per host; 512 vCPUs per host; monitor type: EPT
Can I Virtualize CPU-Intensive Applications? (VMware ESX 3.x compared to native)

• SPEC CPU results are covered in the Agesen and Adams paper
• WebSphere results were published jointly by IBM and VMware
• SPECjbb results are from recent internal measurements

Most CPU-intensive applications have very low overhead.
Debunking the myth: high-throughput, low-overhead I/O
• Maximum reported storage: 365K IOPS (100K on VI3)
• Maximum reported network: 16 Gb/s (measured on VI3)
Can I Virtualize High-Networking-I/O Applications?

Overall response time is lower when CPU utilization is less than 100%, due to multi-core offload.
Enterprise Workload Demands vs. Capabilities

(vSphere 4 provides up to 8 vCPUs, 256 GB RAM, 120k IOPS, and 9900 Mb/s per VM)

• Oracle 11g requires: 8 vCPUs for 95% of DBs; 64 GB for 95% of DBs; 60k IOPS max for OLTP @ 8 vCPUs; 77 Mb/s for OLTP @ 8 vCPUs
• SQL Server requires: 8 vCPUs for 95% of DBs; 64 GB @ 8 vCPUs; 25k IOPS max for OLTP @ 8 vCPUs; 115 Mb/s for OLTP @ 8 vCPUs
• SAP SD requires: 8 vCPUs for 90% of SAP installs; 24 GB @ 8 vCPUs; 1k IOPS @ 8 vCPUs; 115 Mb/s @ 8 vCPUs
• Exchange requires: 4 vCPUs per VM (multiple VMs); 16 GB @ 4 vCPUs; 1000 IOPS for 2000 users; 8 Mb/s for 2000 users
• Apache SPECweb requires: 2-4 vCPUs per VM (multiple VMs); 8 GB @ 4 vCPUs; 100 IOPS for 2000 users; 3 Gb/s for 2000 users
Measuring the Performance of DB Virtualization

Throughput delivered with minimal overheads. How large is your database instance? (one VM shown)

[Chart: scaling ratio vs. number of virtual/physical CPUs (1, 2, 4, 8), native vs. VM]
IO In Action: Oracle/TPC-C*

58,000 IOPS

• ESX achieves 85% of native performance with an industry-standard OLTP workload on an 8-vCPU VM
• 1.9x increase in throughput with each doubling of vCPUs

Eight-vCPU Oracle System Characteristics

Metric                            8-vCPU VM
Business transactions per minute  250,000
Disk IOPS                         60,000
Disk bandwidth                    258 MB/s
Network packets/sec               27,000
Network throughput                77 Mb/s

* Our benchmark was a fair-use implementation of the TPC-C business model; our results are not TPC-C compliant and are not comparable to official TPC-C results.
Oracle/TPC-C* Experimental Details

Host was an 8-CPU system with Xeon 5500 processors. OLTP benchmark: fair-use implementation of the TPC-C workload. Software stack: RHEL 5.1, Oracle 11g R1, internal build of ESX (ESX 4.0 RC).

Were there many tweaks in getting this result? Not really...
• ESX development build with these features:
  • Async I/O, pvscsi driver, virtual interrupt coalescing, topology-aware scheduling
  • EPT: hardware-MMU-enabled processor
• The only ESX "tunable" applied: static vmxnet TX coalescing (3% improvement in performance)
VMware vSphere enables you to use all those cores...

Most applications don't scale beyond 4- or 8-way. Virtualization provides a means to exploit the hardware's increasing parallelism.

VMware ESX scaling keeps up with growing core counts.
“Bonus” Memory During Consolidation: Sharing!
Content-based sharing
• A hint (hash of page content) is generated for each 4K page
• The hint is used to find a match
• On a hint match, a bit-by-bit comparison confirms the pages are identical

COW (copy-on-write)
• Shared pages are marked read-only
• A write to the page breaks sharing

[Figure: three VMs before and after page sharing; identical pages across VM 1, VM 2, and VM 3 are collapsed to a single copy in the hypervisor]
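The hint-then-verify flow above can be mimicked at the command line (a sketch of the idea only; ESX does this on 4K machine pages in memory, not on files):

```shell
#!/bin/sh
# Sketch of content-based page sharing (not ESX internals): hash two 4K
# "pages" as a cheap hint, and only if the hints match, confirm with a
# full byte-by-byte comparison before declaring them shareable.
printf 'A%.0s' $(seq 4096) > page1   # two pages with identical content
printf 'A%.0s' $(seq 4096) > page2
HINT1=$(md5sum < page1 | cut -d' ' -f1)
HINT2=$(md5sum < page2 | cut -d' ' -f1)
SHARED=no
if [ "$HINT1" = "$HINT2" ] && cmp -s page1 page2; then
  SHARED=yes   # the hypervisor would now map one copy read-only (COW)
fi
rm -f page1 page2
echo "shared=$SHARED"
```

The full comparison matters because a hash match alone could be a collision; the hint only narrows the search.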
Multi-VM Performance: DVD-Rental Workload

Simulates a large multi-tier application with an RDBMS:
• Simulates DVD store transactions
• Java client tier
• Microsoft SQL Server and Oracle Database

Oracle setup: Sun 16-core x4600 M2, VMware ESX 3.5, Oracle 10g R2, RHEL4 Update 4 (64-bit), EMC CLARiiON CX-340. SQL Server setup: Dell PE2950, 2 x quad-core Intel Xeon X5450, 32 GB RAM.

[Chart: aggregate TPM and CPU utilization vs. number of VMs (1-7)]
Consolidating Multiple Oracle VMs

Scaling to 16 cores, 256 GB RAM. An average of 1 GB of memory was saved per instance from page sharing.

[Chart: Oracle performance; average response time (secs) and CPU utilization vs. number of VMs (1-7)]

• Oracle scales very well on ESX in consolidation scenarios
• Efficient, guaranteed resource allocation to individual virtual machines
Agenda
• VMware Platform Introduction
• Why Virtualize Databases?
• Virtualization Technical Primer
• Performance Studies and Proof Points
• Deploying Databases in Virtual Environments
  • Consolidation and Sizing
  • Picking a Hardware Platform
  • Configuring Storage
  • Configuring the Virtual Machine
  • OS Choices and Tuning
  • Database Configuration
  • Performance Monitoring
General Best Practices for Virtualizing DBs

Characterize DBs into three rough groups:
• Green DBs (typically 70%): ideal candidates for virtualization
  • Well tuned, with modest CPU consumption
  • Less than 1000 IOPS, up to 4 cores
• Yellow DBs (typically 25%): likely candidates for virtualization
  • May need some SQL tuning and monitoring to understand CPU and I/O requirements
  • 4-8 cores, >1000 IOPS
  • Storage I/O planning and configuration required
• Red DBs (typically 5%): unlikely candidates until larger VMs are available
  • Consume more than 8 physical cores
  • Little SQL tuning left to be done
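The triage above can be sketched as a small shell function (the function name and thresholds are taken from this slide; a rough heuristic, not an official sizing tool):

```shell
#!/bin/sh
# Toy triage of a database into the green/yellow/red buckets above, from its
# physical core count and peak IOPS (thresholds taken from the slide).
classify_db() {
  cores=$1; iops=$2
  if [ "$cores" -gt 8 ]; then
    echo red       # consumes more than 8 physical cores
  elif [ "$cores" -le 4 ] && [ "$iops" -lt 1000 ]; then
    echo green     # modest CPU and <1000 IOPS
  else
    echo yellow    # 4-8 cores or >1000 IOPS: plan storage I/O carefully
  fi
}
B1=$(classify_db 2 400)     # small, well-behaved DB
B2=$(classify_db 8 5000)    # needs I/O planning
B3=$(classify_db 16 20000)  # too large for a 2009-era VM
echo "$B1 $B2 $B3"
```

In practice the inputs would come from monitoring data (peak, not average, utilization), not from guesses.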
CPU Utilization Distribution

[Chart: number of systems (log scale) vs. % CPU utilization]

Consolidation targets are often <30% utilized:
• Windows average utilization: 5-8%
• Linux/Unix average: 10-35%
Consolidation and Sizing

Sizing and Requirements

Virtual machine sizing is different from physical sizing:
• Don't simply take the number of CPUs in the physical system as the vCPU requirement
• Many physical systems are sized for peak utilization, with ample headroom for future growth
• As a result, utilization is often very low on physical systems
• With virtual machines, it is not necessary to build in headroom
• For example, many databases running on 4-CPU systems can easily run in a 2-vCPU guest

Moving older RISC/SPARC machines to virtual x86:
• Even a large older-generation SPARC system may be a good candidate
• 48 x 1.2 GHz SPARC cores ≈ 1 x 8-core Nehalem VM
• Since most large SPARC machines are already consolidated, your larger databases can likely run inside a VM
Picking Hardware: Recent Hardware Has Lower Overhead

[Chart: Intel architecture VMEXIT latencies in cycles, falling steadily across the Prescott, Cedar Mill, Merom, Penryn, and Nehalem generations]

Hardware virtualization support improves from one CPU generation to the next. Use Intel Nehalem or AMD Barcelona, or later.
Hardware memory management units (MMUs) improve efficiency:
• AMD RVI is currently available
• Dramatic gains can be seen
• But some workloads see little or no benefit, and a small few actually slow down

[Chart: AMD RVI speedup (0.0-1.6x) for SQL Server, Citrix XenApp, and Apache compile]
Databases: Top Ten Tuning Recommendations

1. Optimize storage layout and the number of disk spindles
2. Use a 64-bit database
3. Add enough memory to cache the DB and reduce I/O
4. Optimize storage layout and the number of disk spindles
5. Use Direct I/O, the high-performance uncached path in the guest operating system
6. Use asynchronous I/O to reduce system calls
7. Optimize storage layout and the number of disk spindles
8. Use large MMU pages
9. Use the latest hardware, with AMD RVI or Intel EPT
10. Optimize storage layout and the number of disk spindles
Databases: Workload Considerations

OLTP
• Short transactions
• Limited number of standardized queries
• Small amounts of data accessed
• Uses data from only one source
• I/O profile: small synchronous reads/writes (2 KB-8 KB); heavy latency-sensitive log I/O
• Memory and I/O intensive

DSS
• Long transactions
• Complex queries
• Large amounts of data accessed
• Combines data from different sources
• I/O profile: large sequential I/Os (up to 1 MB); extreme bandwidth required; heavy read traffic against data volumes; little log traffic
• CPU, memory, and I/O intensive
• Indexing enables higher performance
Databases: Storage Configuration

Storage considerations:
• VMFS or RDM
• Fibre Channel, NFS, or iSCSI
• Partition alignment
• Multiple storage paths

Place OS/app, data, transaction log, and TempDB on separate physical spindles. Use RAID 10 or RAID 5 for data and RAID 1 for logs. Tune queue depth and controller cache settings. Optimize TempDB.
Disk Fundamentals

Databases have mostly random I/O access patterns, and disk accesses are dominated by seek and rotate time:
• 10k RPM disks: 150 IOPS max, ~80 IOPS nominal
• 15k RPM disks: 250 IOPS max, ~120 IOPS nominal

Database storage performance is controlled by two primary factors:
• The size and configuration of the cache(s)
• The number of physical disks at the back end
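Those nominal per-disk rates make spindle counts easy to estimate (a back-of-envelope sketch; the 6000-IOPS workload is invented for illustration):

```shell
#!/bin/sh
# Back-of-envelope spindle count for a random-I/O workload, using the nominal
# per-disk rates above (~80 IOPS for 10k RPM, ~120 IOPS for 15k RPM).
REQUIRED_IOPS=6000   # hypothetical workload peak
DISKS_10K=$(( (REQUIRED_IOPS + 79) / 80 ))     # ceiling division
DISKS_15K=$(( (REQUIRED_IOPS + 119) / 120 ))
echo "10k RPM: $DISKS_10K disks, 15k RPM: $DISKS_15K disks"
```

Sizing by nominal rather than maximum IOPS leaves headroom for seek-heavy bursts.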
Disk Performance
Higher sequential performance (bandwidth) on the outer tracks
Databases: Storage Hierarchy

[Figure: I/Os from the guest (/dev/hda) pass through the database cache, the guest OS cache, and the array controller cache before reaching disk]

• In a recent study, we scaled to 320,000 IOPS to an EMC array from a single ESX server (8 KB read/write mix)
• Cache as much as possible in the caches
• Q: What is the impact on the number of disks if we improve the cache hit rate from 90% to 95%?
• A: 10 misses in 100 become 5 in 100, so the number of disks is reduced by 2x!
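The answer above is just miss-rate arithmetic, worked out here for 100 accesses:

```shell
#!/bin/sh
# Worked version of the question above: going from a 90% to a 95% cache hit
# rate halves the misses, so the disk back end sees half the I/Os and needs
# half the spindles for the same latency.
ACCESSES=100
MISSES_90=$((ACCESSES - 90))   # 10 of every 100 accesses hit disk
MISSES_95=$((ACCESSES - 95))   # 5 of every 100 accesses hit disk
REDUCTION=$((MISSES_90 / MISSES_95))
echo "disk I/O reduced by ${REDUCTION}x"
```

This is why a modest cache improvement can matter more than adding disks: the back end only ever sees the misses.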
Storage – VMFS or RDM

VMFS
• Leverage templates and quick provisioning
• Fewer LUNs means you don't have to watch heap usage
• Scales better with Consolidated Backup
• The preferred method

RDM (raw device mapping)
• Provides direct access to a LUN from within the VM
• Allows portability between physical and virtual
• Means more LUNs, and more provisioning time
• Advanced features still work

[Figure: with VMFS, guests see virtual disks (database1.vmdk, database2.vmdk) on a shared FC or iSCSI LUN; with RDM, a guest's /dev/hda maps directly to an FC LUN]

Best practice: the performance of VMFS and RDM is similar.
Databases: Typical I/O Architecture

[Figure: DB reads and writes (2 KB, 8 KB, 16 KB x n) flow from the database cache through the file system and FS cache; log writes (512 B-1 MB) flow alongside. In a VM, the path runs from the application through the guest file system and I/O drivers, the virtual SCSI layer, and the VMkernel file system to the device]

Know your I/O: use a top-down latency analysis technique. Each layer adds latency:
• A = application latency
• G = guest latency
• K = ESX kernel
• D = device latency
• R = Perfmon physical disk "Disk secs/transfer"
• S = Windows physical disk service time
Checking for Disk Bottlenecks

Disk latency issues are visible from Oracle statistics:
• Enable statspack
• Review the top latency events

Top 5 Timed Events
                                                     % Total
Event                        Waits     Time (s)     Ela Time
---------------------------  --------  ----------   --------
db file sequential read         2,598       7,146      48.54
db file scattered read         25,519       3,246      22.04
library cache load lock           673       1,363       9.26
CPU time                        2,154         934       7.83
log file parallel write        19,157         837       5.68
[Charts: Oracle file system sync vs. DIO; Oracle DIO vs. RAW]
Direct I/O

A guest-OS-level option for bypassing the guest cache:
• Uncached access avoids multiple copies of data in memory
• Avoids read/modify/write when the I/O is not a multiple of the file system block size
• Bypasses many file-system-level locks

Enabling Direct I/O for Oracle on Linux:

# vi init.ora
filesystemio_options="setall"
Check: # iostat 3  (check for I/O sizes matching the DB block size)

Enabling Direct I/O for MySQL on Linux:

# vi my.cnf
innodb_flush_method = O_DIRECT
Check: # iostat 3  (check for I/O sizes matching the DB block size)
Asynchronous I/O

An API that lets a single-threaded process launch multiple outstanding I/Os:
• Multi-threaded programs could instead just use multiple threads
• Oracle databases use this extensively
• See aio_read(), aio_write(), etc.

Enabling AIO on Linux:

# rpm -Uvh aio.rpm
# vi init.ora
filesystemio_options="setall"
Check:
# ps -aef | grep dbwr
# strace -p <pid>
io_submit()...  <- check for io_submit in the syscall trace
Picking the Size of Each VM

vCPUs from one VM stay on one socket.* With two quad-core sockets there are only two placements for a 4-way VM, while 1- and 2-way VMs can be arranged many more ways across the cores. Newer ESX schedulers handle the reduced placement options more efficiently via relaxed co-scheduling.

[Figure: placement options on two quad-core sockets by VM size]
Use Large Pages

A guest-OS-level option to use large MMU pages:
• Maps the large SGA region with fewer TLB entries
• Reduces MMU overheads
• ESX 3.5 uniquely supports large pages!

Enabling large pages on Linux:

# vi /etc/sysctl.conf   (add the following lines:)
vm/nr_hugepages=2048
vm/hugetlb_shm_group=55
# cat /proc/meminfo | grep Huge
HugePages_Total:  1024
HugePages_Free:    940
Hugepagesize:     2048 kB
Large Pages

Large pages increase TLB memory coverage:
• Remove TLB misses, improving efficiency
• Improve the performance of applications that are sensitive to TLB miss costs

Configure the OS and application to leverage large pages; they are not enabled by default.

[Chart: performance gains from large pages, up to roughly 12%]
Linux Versions

Some older Linux versions use a 1 kHz timer to optimize desktop-style applications:
• There is no reason to use such a high timer rate for server-class applications
• The timer rate on a 4-vCPU Linux guest is over 70,000 interrupts per second!

Use RHEL 5.1 or later, or the latest tickless-timer kernels:
• Install the 2.6.18-53.1.4 kernel or later
• Add divider=10 to the end of the kernel line in grub.conf and reboot, or rely on the default with a tickless kernel
• All the RHEL clones (CentOS, Oracle EL, etc.) work the same way
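One way to see the timer rate for yourself is to sample the local-timer counters in /proc/interrupts one second apart (a sketch assuming a Linux/x86 guest, where the row is labeled LOC; the helper name is invented):

```shell
#!/bin/sh
# Sketch: estimate the guest's timer interrupt rate by sampling the LOC
# (local timer) row of /proc/interrupts twice, one second apart.
sum_loc() {
  # Sum the per-CPU counter columns of the LOC row; print 0 if absent.
  awk '/LOC/ { for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) s += $i }
       END { print s + 0 }' /proc/interrupts
}
T1=$(sum_loc)
sleep 1
T2=$(sum_loc)
RATE=$((T2 - T1))
echo "timer interrupts/sec (all CPUs): $RATE"
```

On a 4-vCPU guest with a 1 kHz tick this approaches the 70,000/sec figure above; on a tickless or divider=10 kernel it is far lower.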
Monitor and Control Service Levels with AppSpeed

• Automatically map services to infrastructure
• Monitor service levels and identify bottlenecks
• Size infrastructure dynamically to meet the SLA cost-effectively

[Figure: end users reach a web/app/DB stack; SLA policies of 99.9% uptime, 100 ms latency, and 0.01% error rate are enforced against the infrastructure]
Performance Whitepapers

• VMware vCenter Update Manager Performance and Best Practices
• Microsoft Exchange Server 2007 Performance on VMware vSphere
• Virtualizing Performance-Critical Database Applications in VMware vSphere
• Performance Evaluation of Intel EPT Hardware Assist
• SAP Performance on VMware vSphere
• A Comparison of Storage Protocol Performance
• Microsoft SQL Server Performance
• Fault-Tolerance Performance
• Overview of Memory Management in VMware vSphere
• Scheduler Improvements in VMware vSphere
• Comparison of Storage Protocols with Microsoft Exchange 2007
• Networking Performance and Scalability in VMware vSphere
• Performance Analysis of the VMware VMFS Filesystem
• Performance Impact of PVSCSI
• vSphere Performance Best Practices
For more info: www.vmware.com/oracle
Richard McDougall Chief Performance Architect