CS136, Advanced Architecture: Virtual Machines
  • CS136, Advanced Architecture: Virtual Machines


  • Outline
    - Virtual Machines
    - Xen VM: Design and Performance
    - Conclusion


  • Introduction to Virtual Machines
    - VMs developed in late 1960s
    - Remained important in mainframe computing over the years
    - Largely ignored in single-user computers of 1980s and 1990s
    - Recently regained popularity due to:
      - Increasing importance of isolation and security in modern systems
      - Failures in security and reliability of standard operating systems
      - Sharing of a single computer among many unrelated users
      - Dramatic increases in raw speed of processors, making VM overhead more acceptable


  • What Is a Virtual Machine (VM)?
    - Broadest definition: any abstraction that provides a Turing-complete and standardized programming interface
      - Examples: x86 ISA; Java bytecode; even Python and Perl
      - As the level gets higher, the utility of the definition gets lower
    - Better definition: an abstract machine that provides a standardized interface similar to a hardware ISA, but at least partly under control of software that provides added features
    - Best to distinguish true VM from emulators (although the Java VM is entirely emulated)
    - Often, VM is partly supported in hardware, with minimal software control
      - E.g., give multiple virtual x86s on one real one, similar to the way virtual memory gives the illusion of more memory than reality


  • System Virtual Machines
    - (Operating) system virtual machines provide a complete system-level environment at the binary ISA
      - Assumes ISA always matches native hardware
      - E.g., IBM VM/370, VMware ESX Server, and Xen
    - Presents illusion that VM users have an entire private computer, including a copy of the OS
    - Single machine runs multiple VMs, and can support multiple (and different) OSes
      - On a conventional platform, a single OS owns all HW resources
      - With VMs, multiple OSes all share HW resources
    - Underlying HW platform is the host; its resources are shared among the guest VMs


  • Virtual Machine Monitors (VMMs)
    - Virtual machine monitor (VMM) or hypervisor is the software that supports VMs
    - VMM determines how to map virtual resources to physical ones
    - Physical resource may be time-shared, partitioned, or emulated in software
    - VMM much smaller than a traditional OS; isolation portion of a VMM is roughly 10,000 lines of code


  • VMM Overhead
    - Depends on workload
    - User-level CPU-bound programs (e.g., SPEC) have near-zero virtualization overhead
      - Run at native speeds since OS is rarely invoked
    - I/O-intensive workloads are OS-intensive
      - Execute many system calls and privileged instructions
      - Can result in high virtualization overhead
    - Goal for system VMs: run almost all instructions directly on native hardware
    - But if an I/O-intensive workload is also I/O-bound:
      - Processor utilization is low (since waiting for I/O)
      - Processor virtualization can be hidden in I/O costs
      - So virtualization overhead is low
    - (A simple cost model is sketched below)
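    A back-of-the-envelope model (my own sketch, not from the slides) of why overhead tracks OS time: if a fraction f of execution is OS work and that work runs s times slower under the VMM, runtime grows to (1 - f) + f*s, so relative performance is the inverse. The numbers below are illustrative assumptions only.

    /* Hypothetical VMM cost model in C; f, s, and the example values are assumptions. */
    #include <stdio.h>

    /* Relative performance under a VMM if a fraction f of time is OS work
       and OS work runs s times slower when virtualized. */
    double vm_relative_perf(double f, double s)
    {
        return 1.0 / ((1.0 - f) + f * s);
    }

    int main(void)
    {
        /* CPU-bound job: ~1% OS time; even a 5x OS slowdown barely shows up. */
        printf("CPU-bound   : %.2f of native\n", vm_relative_perf(0.01, 5.0));
        /* OS-intensive job: ~40% OS time; the same 5x slowdown is painful.   */
        printf("OS-intensive: %.2f of native\n", vm_relative_perf(0.40, 5.0));
        return 0;
    }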


  • Important Uses of VMs
    - Multiple OSes
      - No more dual boot!
      - Can even transfer data (e.g., cut-and-paste) between VMs
    - Protection
      - Crash or intrusion in one OS doesn't affect others
      - Easy to replace a failed OS with a fresh, clean one
    - Software management
      - VMs can run a complete SW stack, even old OSes like DOS
      - Run legacy OS, stable current release, and test release on same HW
    - Hardware management
      - Independent SW stacks can share HW
      - Run each application on its own OS (helps dependability)
      - Migrate a running VM to a different computer, to balance load or to evacuate from failing HW


  • Virtual Machine Monitor Requirements
    - VM monitor:
      - Presents SW interface to guest software
      - Isolates guests' states from each other
      - Protects itself from guest software (including guest OSes)
    - Guest software should behave exactly as if running on native HW
      - Except for performance-related behavior or limitations of fixed resources shared by multiple VMs
      - Hard to achieve perfection in a real system
    - Guest software shouldn't be able to change allocation of real system resources directly
      - Hence, VMM must control everything, even though the guest VM and OS currently running are temporarily using the resources:
        - Access to privileged state, address translation, I/O, exceptions and interrupts, ...


  • Virtual Machine Monitor Requirements (continued)
    - VMM must be at a higher privilege level than guest VM, which generally runs in user mode
      - Execution of privileged instructions handled by VMM
    - E.g., timer or I/O interrupt (sequence sketched below):
      - VMM suspends currently running guest
      - Saves its state
      - Handles the interrupt
        - Possibly handles it internally, possibly delivers it to a guest
      - Decides which guest to run next
      - Loads its state
    - Guest VMs that want a timer are given a virtual one
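    The interrupt-handling sequence above can be pictured as code; a toy illustration with invented names, not any real VMM.

    /* Toy VMM timer-interrupt path: suspend the current guest, handle the tick,
       optionally deliver a virtual timer interrupt, pick the next guest, resume it. */
    #include <stdio.h>

    struct guest { int id; int wants_timer; int suspended; };

    static struct guest guests[2] = { {0, 1, 0}, {1, 0, 1} };
    static int current = 0;

    static void vmm_timer_interrupt(void)
    {
        guests[current].suspended = 1;            /* suspend current guest, save its state */
        printf("VMM handles timer tick\n");       /* handle the interrupt inside the VMM   */
        if (guests[current].wants_timer)          /* maybe deliver a *virtual* timer       */
            printf("deliver virtual timer to guest %d\n", guests[current].id);
        current = (current + 1) % 2;              /* decide which guest to run next        */
        guests[current].suspended = 0;            /* load its state and resume             */
        printf("resume guest %d\n", guests[current].id);
    }

    int main(void)
    {
        vmm_timer_interrupt();
        vmm_timer_interrupt();
        return 0;
    }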


  • Hardware Requirements
    - Hardware needs roughly the same support as paged virtual memory:
      - At least 2 processor modes, system and user
      - Privileged subset of instructions
        - Available only in system mode
        - Trap if executed in user mode
      - All system resources controllable only via these instructions
    - (Trap-and-emulate sketch below)
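    A minimal sketch of how the two requirements combine into trap-and-emulate: privileged operations executed in user mode trap, and the VMM emulates them against the guest's virtual state. All names here are invented for illustration; this is not any real VMM.

    /* Toy trap-and-emulate loop. */
    #include <stdio.h>

    enum mode { USER, SYSTEM };
    enum op   { OP_ADD, OP_DISABLE_INTERRUPTS };   /* one ordinary op, one privileged op */

    struct vcpu {
        enum mode mode;        /* guest always runs in USER mode under the VMM    */
        int       virt_if;     /* the guest's *virtual* interrupt-enable flag     */
    };

    /* "Hardware": privileged ops trap unless executed in SYSTEM mode. */
    static int execute(enum op op, enum mode mode)
    {
        if (op == OP_DISABLE_INTERRUPTS && mode != SYSTEM)
            return -1;                     /* trap to the VMM */
        return 0;                          /* executed directly on native hardware */
    }

    /* VMM trap handler: emulate the privileged op against virtual state only. */
    static void vmm_emulate(struct vcpu *v, enum op op)
    {
        if (op == OP_DISABLE_INTERRUPTS)
            v->virt_if = 0;                /* real IF stays under VMM control */
    }

    int main(void)
    {
        struct vcpu guest = { USER, 1 };
        enum op program[] = { OP_ADD, OP_DISABLE_INTERRUPTS, OP_ADD };

        for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++) {
            if (execute(program[i], guest.mode) < 0) {
                printf("op %u trapped; VMM emulates it\n", i);
                vmm_emulate(&guest, program[i]);
            } else {
                printf("op %u ran directly\n", i);
            }
        }
        printf("guest's virtual IF is now %d\n", guest.virt_if);
        return 0;
    }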


  • ISA Support for Virtual Machines
    - If ISA designers plan for VMs, it is easy to limit:
      - What instructions the VMM must handle
      - How long it takes to emulate them
    - Because chip makers ignored VM technology, ISA designers didn't plan ahead
      - Including 80x86 and most RISC architectures
    - Guest system must see only virtual resources
      - Guest OS runs in user mode on top of VMM
      - If guest tries to touch a HW-related resource, it must trap to the VMM
        - Requires HW support to initiate the trap
        - VMM must then insert emulated information
      - If HW built wrong, guest will see or change privileged state
        - VMM must then modify the guest's binary code


  • ISA Impact on Virtual Machines
    - Consider x86 PUSHF/POPF instructions
      - Push flags register onto stack or pop it back
      - Flags contains condition codes (good to be able to save/restore) but also the interrupt-enable flag (IF)
    - Pushing flags isn't privileged
      - Thus, guest OS can read IF and discover it's not the way it was set
      - VMM isn't invisible any more
    - Popping flags in user mode ignores IF
      - VMM now doesn't know what the guest wants IF to be
      - Should trap to VMM
    - Possible solution: modify code, replacing pushf/popf with special interrupting instructions
      - But now guest can read its own code and detect the VMM


  • Hardware Support for Virtualization
    - Old correct implementation: trap on every pushf/popf so the VMM can fix up results
      - Very expensive, since pushf/popf used frequently
    - Alternative: IF shouldn't be in the same place as condition codes
      - Pushf/popf can be unprivileged
      - IF manipulation is now very rare
    - Pentium has even better solution (modeled below)
      - In user mode, VIF (Virtual Interrupt Flag) holds what the guest wants IF to be
      - Pushf/popf manipulate VIF instead of IF
      - Host can now control real IF; guest sees the virtual one
    - Basic idea can be extended to many similar OS-only flags and registers
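    One way to picture the VIF idea; a conceptual model only, not the actual Pentium implementation, and the names are invented.

    /* Conceptual model of IF vs. VIF, not actual x86 semantics. */
    #include <stdio.h>

    struct cpu {
        int user_mode;     /* 1 when a guest is running                       */
        int real_if;       /* real interrupt-enable flag, owned by the VMM    */
        int vif;           /* virtual IF: what the current guest thinks IF is */
    };

    /* popf-like operation: in user mode the IF bit lands in VIF, not real IF. */
    void popf(struct cpu *c, int if_bit)
    {
        if (c->user_mode)
            c->vif = if_bit;
        else
            c->real_if = if_bit;
    }

    /* pushf-like operation: guests see their virtual flag reflected back. */
    int pushf(const struct cpu *c)
    {
        return c->user_mode ? c->vif : c->real_if;
    }

    int main(void)
    {
        struct cpu c = { .user_mode = 1, .real_if = 1, .vif = 1 };

        popf(&c, 0);                                   /* guest "disables interrupts"   */
        printf("guest sees IF = %d\n", pushf(&c));     /* 0: guest's view is consistent */
        printf("real IF is still %d\n", c.real_if);    /* 1: VMM keeps control          */
        return 0;
    }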


  • Impact of VMs on Virtual Memory
    - Each guest manages its own page tables
      - How to make this work?
    - VMM separates real and physical memory
      - Real memory is an intermediate level between virtual and physical
      - Some instead call the three levels virtual, physical, and machine memory
      - Guest maps virtual to real memory via its page tables; VMM page tables map real to physical
    - VMM maintains a shadow page table that maps directly from guest virtual address space to HW physical address space (sketched below)
      - Rather than pay an extra level of indirection on every memory access
      - VMM must trap any attempt by guest OS to change its page table or to access the page table pointer
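    The mapping chain can be made concrete with a toy one-level sketch (illustration only; real page tables are multi-level and hardware-defined): the guest maps virtual pages to real pages, the VMM maps real pages to physical frames, and the shadow table caches the composed virtual-to-physical mapping so ordinary accesses pay only one lookup.

    /* Toy one-level page tables showing guest, VMM, and shadow mappings. */
    #include <stdio.h>

    #define NPAGES 8
    #define INVALID -1

    int guest_pt[NPAGES];    /* guest virtual page -> guest "real" page   (guest-managed) */
    int vmm_pt[NPAGES];      /* guest real page    -> host physical frame (VMM-managed)   */
    int shadow_pt[NPAGES];   /* guest virtual page -> host physical frame (VMM-maintained)*/

    /* Rebuild one shadow entry; the VMM does this when it traps a guest page-table write. */
    void update_shadow(int vpage)
    {
        int rpage = guest_pt[vpage];
        shadow_pt[vpage] = (rpage == INVALID) ? INVALID : vmm_pt[rpage];
    }

    int main(void)
    {
        for (int i = 0; i < NPAGES; i++)
            guest_pt[i] = vmm_pt[i] = shadow_pt[i] = INVALID;

        vmm_pt[2] = 7;          /* VMM: guest real page 2 lives in physical frame 7 */
        guest_pt[5] = 2;        /* guest OS: its virtual page 5 -> its real page 2  */
        update_shadow(5);       /* trapped write lets the VMM refresh the shadow    */

        /* Hardware walks only the shadow table: one level, virtual -> physical. */
        printf("virtual page 5 -> physical frame %d\n", shadow_pt[5]);   /* prints 7 */
        return 0;
    }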


  • ISA Support for VMs & Virtual Memory
    - IBM 370 architecture added an additional level of indirection, managed by the VMM
      - Guest OS kept page tables as before, so shadow pages were unnecessary
    - To virtualize a software TLB, VMM manages the real one and keeps a copy of its contents for each guest VM
      - Any instruction that accesses the TLB must trap
    - Hardware TLB still managed by hardware
      - Must flush on VM switch unless PID tags are available
    - HW or SW TLBs with PID tags can mix entries from different VMs
      - Avoids flushing TLB on VM switch (see sketch below)
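    A toy tagged-TLB lookup (illustrative only; real TLBs are set-associative hardware): each entry carries a VM/address-space tag, so a VM switch just changes the current tag instead of flushing every entry.

    /* Toy TLB whose entries carry a tag identifying the VM (or process). */
    #include <stdio.h>

    struct tlb_entry { int valid, tag, vpage, pframe; };

    #define TLB_SIZE 4
    struct tlb_entry tlb[TLB_SIZE];
    int current_tag;                       /* identifies the VM now running */

    int tlb_lookup(int vpage)
    {
        for (int i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].tag == current_tag && tlb[i].vpage == vpage)
                return tlb[i].pframe;
        return -1;                         /* miss: would fall back to a page-table walk */
    }

    int main(void)
    {
        tlb[0] = (struct tlb_entry){1, /*tag*/0, /*vpage*/3, /*pframe*/10};  /* VM 0 */
        tlb[1] = (struct tlb_entry){1, /*tag*/1, /*vpage*/3, /*pframe*/22};  /* VM 1 */

        current_tag = 0;
        printf("VM0, vpage 3 -> frame %d\n", tlb_lookup(3));   /* 10 */

        current_tag = 1;                   /* VM switch: no flush, both entries survive */
        printf("VM1, vpage 3 -> frame %d\n", tlb_lookup(3));   /* 22 */
        return 0;
    }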


  • Impact of I/O on Virtual Machines
    - Most difficult part of virtualization
      - Increasing number of I/O devices attached to each computer
      - Increasing diversity of I/O device types
      - Sharing a real device among multiple VMs
      - Supporting a myriad of device drivers, especially with differing guest OSes
    - Give each VM generic versions of each type of I/O device, and let the VMM handle real I/O
      - Drawback: slower than giving VM direct access
    - Method for mapping a virtual I/O device to a physical one depends on the type:
      - Disks partitioned by VMM to create virtual disks for guests (sketch below)
      - Network interfaces shared between VMs in short time slices
        - VMM tracks messages for virtual network addresses and routes them to the proper guest
      - USB might be directly attached to a VM
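    For the partitioned-disk case, the virtual-to-physical mapping is simple arithmetic; a sketch with invented names, not any particular VMM's scheme.

    /* Toy virtual-disk mapping: the VMM gives each guest a contiguous slice
       of the physical disk and rebases every block number. */
    #include <stdio.h>

    struct vdisk { long base_block; long nblocks; };   /* one slice per guest */

    /* Translate a guest's virtual block number to a physical one, or -1 if out of range. */
    long vdisk_to_physical(const struct vdisk *d, long vblock)
    {
        if (vblock < 0 || vblock >= d->nblocks)
            return -1;                                 /* guest stays inside its slice */
        return d->base_block + vblock;
    }

    int main(void)
    {
        struct vdisk guest0 = {      0, 100000 };      /* physical blocks 0..99999       */
        struct vdisk guest1 = { 100000, 100000 };      /* physical blocks 100000..199999 */

        printf("guest1 block 42 -> physical block %ld\n",
               vdisk_to_physical(&guest1, 42));        /* 100042 */
        printf("guest0 block 123456 -> %ld (rejected)\n",
               vdisk_to_physical(&guest0, 123456));
        return 0;
    }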


  • Example: Xen VM
    - Xen: open-source system VMM for the 80x86 ISA
      - Project started at University of Cambridge, GNU license
    - Original vision of a VM is running an unmodified OS
      - Significant wasted effort just to keep guest OS happy
    - Paravirtualization: small modifications to guest OS to simplify virtualization
    - Three examples of paravirtualization in Xen:
      - To avoid flushing the TLB when invoking the VMM, Xen is mapped into the upper 64 MB of the address space of each VM
      - Guest OS allowed to allocate pages; Xen just checks that it doesn't violate protection restrictions
      - To protect the guest OS from user programs in the VM, Xen takes advantage of the 80x86's four protection levels
        - Most x86 OSes keep everything at privilege level 0 or 3
        - Xen VMM runs at the highest privilege level (0); guest OS runs at the next level (1); applications run at the lowest (3)


  • Xen Changes for Paravirtualization
    - Port of Linux to Xen changed 3000 lines, or 1% of 80x86-specific code
    - Doesn't affect application binary interface (ABI/API) of guest OS
    - OSes supported in Xen 2.0:

    http://wiki.xensource.com/xenwiki/OSCompatibility


  • Xen and I/O
    - To simplify I/O, privileged VMs assigned to each hardware I/O device: driver domains
      - Xen jargon: domains = virtual machines
    - Driver domains run physical device drivers
      - Interrupts still handled by VMM before being sent to the appropriate driver domain
    - Regular VMs (guest domains) run simple virtual device drivers
      - Communicate with physical device drivers in driver domains to access physical I/O hardware (toy model below)
    - Data sent between guest and driver domains by page remapping
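    A toy model of the split-driver idea (not Xen's actual I/O-ring interface; all names invented): the guest's virtual driver posts requests into a shared ring, and the driver domain consumes them and drives the real device.

    /* Toy shared request ring between a guest domain and a driver domain. */
    #include <stdio.h>

    #define RING_SLOTS 4
    struct request { int page; int len; };

    struct ring {
        struct request slots[RING_SLOTS];
        int prod, cons;                    /* producer (guest) and consumer (driver domain) */
    };

    int ring_put(struct ring *r, struct request req)
    {
        if (r->prod - r->cons == RING_SLOTS) return -1;      /* ring full  */
        r->slots[r->prod % RING_SLOTS] = req;
        r->prod++;
        return 0;
    }

    int ring_get(struct ring *r, struct request *req)
    {
        if (r->cons == r->prod) return -1;                    /* ring empty */
        *req = r->slots[r->cons % RING_SLOTS];
        r->cons++;
        return 0;
    }

    int main(void)
    {
        struct ring r = { .prod = 0, .cons = 0 };
        struct request req;

        ring_put(&r, (struct request){ .page = 17, .len = 1500 });   /* guest domain side  */
        while (ring_get(&r, &req) == 0)                              /* driver domain side */
            printf("driver domain sends page %d (%d bytes) to the real NIC\n",
                   req.page, req.len);
        return 0;
    }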


  • Xen Performance
    - Performance relative to native Linux for Xen, for 6 benchmarks (from the Xen developers)

    But are these user-level CPU-bound programs? I/O-intensive workloads? I/O-bound, I/O-intensive ones?


    Performance relative to native Linux (native Linux = 1.00):

    Benchmark               Xen    VMware Workstation 3.2    User Mode Linux
    SPEC INT2000 (score)    1.00   0.98                      0.97
    Linux build time (s)    0.97   0.79                      0.49
    OSDB-IR (tup/s)         0.92   0.47                      0.38
    OSDB-OLTP (tup/s)       0.95   0.12                      0.18
    dbench (score)          0.96   0.74                      0.27
    SPEC WEB99 (score)      0.99   0.29                      0.33

  • Xen Performance, Part II
    - Subsequent study noticed that the Xen experiments were based on 1 Ethernet network interface card (NIC), and that the single NIC was the performance bottleneck


    Receive throughput (Mbits/sec) vs. number of network interface cards:

    NICs   Linux   Xen w/ privileged driver VM ("driver domain")   Xen w/ guest VM + driver VM
    1       942     942                                             849
    2      1882    1878                                             849
    3      2462    1539                                             849
    4      2446    1593                                             849

  • Xen Performance, Part III
    - > 2X instructions for guest VM + driver VM
    - > 4X L2 cache misses
    - 12X - 24X data TLB misses


    Hardware event counts for the web-server workload, relative to Xen with privileged driver VM only (= 1.00):

    Event          Linux   Xen-privileged driver VM only   Xen-guest VM + driver VM   Xen-guest VM + driver VM (2 CPUs)
    Instructions   0.86    1.00                            2.26                       2.30
    L2 misses      1.00    1.00                            4.32                       3.27
    I-TLB misses   0.62    1.00                            1.74                       1.77
    D-TLB misses   0.09    1.00                            1.88                       2.08

  • Xen Performance, Part IV
    - 1. > 2X instructions: caused by page remapping and transfer between driver and guest VMs, and by communication over the channel between the 2 VMs
    - 2. 4X L2 cache misses: Linux uses a zero-copy network interface that depends on the ability of the NIC to do DMA from different locations in memory
      - Since Xen doesn't support gather DMA in its virtual network interface, it can't do true zero-copy in the guest VM
    - 3. 12X - 24X data TLB misses: 2 Linux optimizations missing
      - Superpages for part of Linux kernel space: one 4 MB page lowers TLB misses versus using 1024 4 KB pages. Not in Xen
      - PTEs marked global aren't flushed on context switch, and Linux uses them for kernel space. Not in Xen
    - Future Xen may address 2 and 3, but is 1 inherent?


  • Conclusion
    - VM monitor presents a SW interface to guest software, isolates guest states, and protects itself from guest software (including guest OSes)
    - Virtual machine revival
      - Overcome security flaws of large OSes
      - Manage software, manage hardware
      - Processor performance no longer highest priority
    - Virtualization challenges for processor, virtual memory, and I/O
      - Paravirtualization to cope with those difficulties
    - Xen as example VMM using paravirtualization
      - 2005 performance on non-I/O-bound, I/O-intensive apps: 80% of native Linux without driver VM, 34% with driver VM


