A study of the introduction of virtualization technology into operator consoles
T.Ohata, M.Ishii / SPring-8
ICALEPCS 2005, October 10-14, 2005, Geneva, Switzerland
Contents
- Virtualization technology overview: categorizing virtualization technologies
- Performance evaluation: how many virtual machines can run on a server
- Introduction into the control system: system setup
- Conclusion
What is virtualization technology?
Overview of virtualization technology
- Originated from the IBM System/360
- Enables consolidation of many computers into a small number of host computers
- Each virtual machine (VM) has independent resources (CPU, disks, MAC address, etc.) like a stand-alone computer
[Figure: many VMs consolidated from stand-alone computers onto a host computer or mainframe, sharing its CPU, network card, memory, and disk]
Why do we need virtualization technology?
Problems of the present control system
- Network distributed computing is the standard method, and it lets us construct an efficient control system.
- But it leads to computer proliferation: we have over 200 computers in the beamline control system alone, maintained by only a few staff.
- Maintenance tasks such as version upgrades and patching keep increasing.
- We face increasing hardware failures.
Virtualization technology has revived
- Server consolidation onto general-purpose servers reduces the number of computers.
- We can cut hardware costs and maintenance costs drastically.
Categories of virtualization technology - three virtualization approaches -

Approach: typical products
1. Resource multiplexing: Xen*, LPAR (IBM), nPartition (HP)
2. Emulation: VMware*, VirtualPC, QEMU, Bochs, User-Mode-Linux*, coLinux
3. Application shielding: Solaris container*, jail, chroot

* Evaluated products
1. Resource multiplexing
- Originated from the mainframe; major UNIX vendors have released several products
- A layer (called a hypervisor or virtual machine monitor) multiplexes the hardware resources (CPU, memory, etc.)
- Needs a small patch to the kernel: a special OS to suit the layer interface
- Less overhead
[Figure: guest OSes and their software running on the multiplexing layer above the hardware]
2. Emulation
- Many emulators exist for PC/AT, 68K, and game machines
- Suitable for development and debugging
- An unmodified OS is usable
- Some overhead in translating instructions
[Figure: guest OSes running on an emulation layer on top of the host operating system; the emulation layer adds overhead]
3. Application shielding
- Developed for web hosting at ISPs (Internet service providers) to obtain separate computing environments
- Partitions make each computing space invisible to the others
- No overhead
[Figure: applications running in isolated partitions on a single operating system]
Performance evaluation
How many VMs can run on a server computer
Evaluated products

- VMware Workstation 4.5: host OS Linux-2.6.8, guest OS Linux-2.6.8; commercial, supports many OSes
- User-Mode-Linux (UML): host OS Linux-2.6.8, guest OS Linux-2.4.26um; Linux on x86 only
- Solaris container: host OS Solaris 10; SPARC and x86; FSS*, CPU pinning*
- Xen 2.0.6: host OS Linux-2.6-xen0, guest OS Linux-2.6-xenU; FSS, CPU pinning, live migration*

* Special functions: see the next slide
Special functions
- Fair Share Scheduler (FSS): a scheduling policy in which CPU usage is distributed equally among tasks
- CPU pinning: pins a VM to a specific CPU (effective in an SMP environment). Linux has an "affinity" function, but it can pin only a process.
- Live migration: VMs migrate to another host dynamically and can keep running during the migration
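As an aside, the per-process pinning that Linux offers (the "affinity" function mentioned above, as opposed to pinning a whole VM) can be sketched on a Linux host with Python's `os.sched_getaffinity`/`os.sched_setaffinity`; the choice of target CPU here is illustrative:

```python
import os

# Query the set of CPUs the current process may run on (Linux only).
allowed = os.sched_getaffinity(0)  # pid 0 = the calling process
print("allowed CPUs:", sorted(allowed))

# Pin this process to a single CPU; picking the lowest-numbered one
# keeps the sketch portable across machines.
target = min(allowed)
os.sched_setaffinity(0, {target})

# Verify the pinning took effect.
assert os.sched_getaffinity(0) == {target}
```

A hypervisor-level pin (e.g. of a Xen VM) works per virtual machine rather than per process, which is what makes it effective on SMP hosts.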
[Figure: VMs live-migrating from Host 1 to Host 2]
Measurement procedure
(MADOCA: Message And Database Oriented Control Architecture)
- Response time between a virtual machine and a VME system, measured with a MADOCA application
- Communication uses a message queue (SysV IPC) and the ONC-RPC (Remote Procedure Call) network protocol
- The message size is 350 bytes, including the RPC header and the Ethernet frame header
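The shape of this round-trip measurement can be sketched with a plain loopback TCP echo standing in for the MADOCA/ONC-RPC call; the server, port, and payload below are stand-ins, not the actual MADOCA setup, but the 350-byte message size matches the slide:

```python
import socket
import threading
import time

MSG_SIZE = 350  # bytes, matching the measured message size

def echo_server(listener: socket.socket) -> None:
    """Accept one client and echo everything back, like a trivial RPC server."""
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

# Loopback stand-in for the VME-side server.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
payload = b"x" * MSG_SIZE

samples = []
for _ in range(100):
    t0 = time.perf_counter()
    client.sendall(payload)
    received = b""
    while len(received) < MSG_SIZE:  # read until the full reply is back
        received += client.recv(4096)
    samples.append(time.perf_counter() - t0)

print(f"min {min(samples) * 1e3:.3f} ms, "
      f"avg {sum(samples) / len(samples) * 1e3:.3f} ms")
client.close()
```

Collecting min, average, and standard deviation over many such round trips gives the statistics reported later in the talk.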
Measurement bench
- 1~10 VMs run on a single server computer (dual Xeon 3.0 GHz)
- A MADOCA client runs on each VM; 1~10 MADOCA servers sit on the network
- Measure the response time
[Figure: VMs with MADOCA clients on the server computer, connected to the MADOCA servers over the network]
Number-of-VMs dependency of the average response time
- The HP B2000 (reference) is the present operator console
- VMware and UML become worse as the number of VMs grows
- With 5~6 VMs, Solaris and Xen are comparable to the HP workstation
[Figure: average response time [sec] vs. number of VMs, with the HP B2000 as reference]
Statistics of response time @ 10 VMs
[Figure: min., average, standard deviation, and max. response time [msec] for VMware, UML, Solaris container, Xen, and HP B2000; lower is better]
Limit of hardware resources - CPU utilization -
- CPU utilization (%) of the host of the VMs (Solaris container)
- No more idle time at 5~6 VMs
- 5~6 VMs are the optimum
[Figure: CPU utilization (%) vs. number of VMs]
Limit of hardware resources - network interface card (NIC) utilization -
- Traffic on the GbE network interface card (Solaris container)
- Utilization is a few percent of the full bandwidth
- The saturation therefore comes from CPU overload, not from the network
[Figure: NIC utilization (MB/s) vs. number of VMs]
Limit of hardware resources - page fault frequency -
- Page faults waste CPU time and degrade performance (Solaris container)
- The saturation comes from TLB misses and swap-outs
[Figure: page fault frequency vs. number of VMs (1-10)]
How many VMs are optimum?
- 5~6 VMs are optimum (dual Xeon 3.0 GHz)
- If you want to run more VMs:
  - A large page size on a large-address-space architecture is important: Physical Address Extension (PAE) or a 64-bit architecture
  - Many-core CPUs are attractive: one CPU core is enough for 2~3 VMs
Introduction into the control system
- We installed virtualization technology into a beamline control system.
- We use Xen on Linux PC servers, replacing the HP operator consoles.
- The control application programs were ported onto the VMs (Linux).
- We installed a pair of Xen hosts and an NFS server that keeps the VM image files.
System setup and live migration
- A primary and a secondary Xen host run the VMs with the control programs; an NFS server on Gigabit Ethernet holds the VM images.
- A VM migrates between the hosts in a few hundred msec.
- The X server (thin client) can be used continuously during maintenance, so a host can be shut down.
[Figure: VMs migrating between the primary and secondary Xen hosts, with the VM images kept on the NFS server]
Future plan - high availability cluster -
- We are studying a high-availability Single System Image (SSI) cluster configuration with Xen.
- The migration function of Xen is not effective when a host computer suddenly dies.
[Figure: structure of OpenSSI with Xen: VMs and software on multiple Xen hypervisors forming a Single System Image cluster]
Future plan (cont'd) - redundant storage -
- The NFS server is a single point of failure.
- We will introduce a redundant storage system such as a SAN, iSCSI, or NAS.
[Figure: primary and secondary Xen hosts connected to SAN storage through FC switches and SAN fibers]
Cost estimation
- About 50 HP-UX workstations will be replaced by 8 PC-based servers plus redundant storage (6 VMs run on each PC server).
- 75% of the total cost can be saved (hardware only).
Conclusion
- We studied several virtualization technologies for introduction as operator consoles.
- We measured the performance of several virtualization environments and verified that they are stable.
- 5~6 VMs are optimum for one server computer.
- We introduced Xen, which has a live migration function, into the beamline control system.
- We plan to apply Xen to more beamlines.

Thank you for your attention.