
CS 423 – Operating Systems Design

Lecture 36 – Virtualization (Part 3)

Klara Nahrstedt

Fall 2011

Based on slides by Andrew S. Tanenbaum, Sam King

Mendel Rosenblum’s slides and talk at ASPLOS Keynote “Impact of Virtualization on Computer Architecture and Operating Systems”

White Paper “Understanding Full Virtualization, Paravirtualization, and Hardware Assist”, 2007 VMware


Administrative

MP4 is out

◦ Deadline – December 2 + bonus days

Homework 2 posted on November 28

◦ Deadline - December 7


Hardware-assisted Virtualization Technologies (Update on 2nd Gen. of Hardware Assists)

New Generation of Hardware Assist

◦ Intel’s EPT (Extended Page Table) technology is in current Intel processors: EPT is available in all Nehalem-based CPUs with virtualization support (Core i7, Core i5, Core i3, Pentium G6950, and appropriate Xeons); it is not available in the earlier Core 2-based Intel CPUs

◦ EPT is a form of SLAT (Second Level Address Translation); SLAT-based hardware virtualization will be in Microsoft’s Hyper-V virtualization technology (used in Windows Server 8)

◦ AMD’s RVI (Rapid Virtualization Indexing) technology, equivalent to Intel’s EPT, was introduced in the 3rd generation of Opteron processors, code name Barcelona: RVI offers up to 42% performance gains over a software-only shadow page table implementation (according to a VMware research paper – Wikipedia source on AMD RVI)


Source: http://arstechnica.com/business/news/2011/09/hyper-v-coming-to-windows-8with-new-hardware-virtualization-requirement.ars

Outline

Types of Device Virtualization

Virtual Appliances


Computer System Organization


Device Virtualization

Goals

◦ Isolation

◦ Multiplexing

◦ Speed

◦ Mobility

Device Virtualization Strategies

◦ Direct Access (Type 1 Hypervisor)

◦ Emulation (Type 2 Hypervisor)

◦ Para-virtualization


Goals

High performance I/O from guest VM

Problem: VMM adds extra layer of software

◦ Shared across all VMs – can be bottleneck

Solution: Direct Access to Hardware


I/O Virtualization

Guest OS starts probing hardware to find out what kinds of I/O devices are attached

◦ These probes trap to hypervisor

◦ Hypervisor reports back the disks, printers, etc. that exist

◦ Guest OS loads device drivers for these devices and accesses them

◦ Device drivers access the actual I/O devices, reading and writing their hardware registers

◦ These instructions are sensitive, so they trap to the hypervisor

◦ Hypervisor copies the required values to and from the real hardware registers

What is the problem with this approach?
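The problem hinted at by the question above becomes visible in code: every register access costs a trap. Below is a minimal trap-and-emulate sketch in C; the virtual serial device, its register layout, and names like vserial_io are hypothetical, chosen only to illustrate the mechanism on this slide.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical virtual serial device with one status and one data
 * register. Nothing here comes from a real VMM. */
struct vserial {
    uint8_t status;              /* bit 0: transmitter ready */
    uint8_t data;                /* last byte the guest wrote */
};

enum { REG_STATUS = 0, REG_DATA = 1 };

/* Called by the hypervisor when a guest register access traps.
 * Instead of touching real hardware, the access is emulated
 * against the virtual device's state. */
static uint8_t vserial_io(struct vserial *dev, int reg,
                          int is_write, uint8_t value)
{
    if (is_write && reg == REG_DATA) {
        dev->data = value;
        putchar(value);          /* forward the output to the host side */
        return 0;
    }
    if (!is_write && reg == REG_STATUS)
        return dev->status;      /* guest polls "transmitter ready" */
    return 0xFF;                 /* unimplemented register */
}

int main(void)
{
    struct vserial dev = { .status = 0x01 };
    /* Pretend the guest polled status, then wrote three bytes:
     * four separate traps for four register accesses. */
    if (vserial_io(&dev, REG_STATUS, 0, 0) & 0x01) {
        vserial_io(&dev, REG_DATA, 1, 'h');
        vserial_io(&dev, REG_DATA, 1, 'i');
        vserial_io(&dev, REG_DATA, 1, '\n');
    }
    return 0;
}

Even this toy device needs one world switch per register access, which is why high-rate devices make this path expensive.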

Direct Access Device


Memory Isolation w/ Direct Access Device


I/O Virtualization - DMA

Problem: DMA uses absolute memory addresses

Solution:

◦ Hypervisor must intervene and remap addresses before DMA starts

◦ Use I/O MMU hardware, which virtualizes I/O the same way the MMU virtualizes memory
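A minimal sketch of the remapping step, assuming a toy page-granular translation table of the kind an I/O MMU walks in hardware; the sizes and names here are illustrative.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16                    /* toy guest with 16 pages */

/* Hypothetical guest-physical -> host-physical page table, the same
 * kind of mapping an I/O MMU applies to device DMA in hardware. */
static uint64_t gpa_to_hpa_page[NUM_PAGES];

/* Hypervisor step: translate the guest-physical DMA target before
 * the transfer starts, so the device only sees host-physical addresses. */
static uint64_t dma_remap(uint64_t guest_paddr)
{
    uint64_t page   = guest_paddr >> PAGE_SHIFT;
    uint64_t offset = guest_paddr & (PAGE_SIZE - 1);
    return (gpa_to_hpa_page[page] << PAGE_SHIFT) | offset;
}

int main(void)
{
    /* Toy mapping: guest page n happens to live at host page n + 100. */
    for (uint64_t n = 0; n < NUM_PAGES; n++)
        gpa_to_hpa_page[n] = n + 100;

    uint64_t guest_buf = 0x3004;         /* guest page 3, offset 4 */
    printf("guest 0x%llx -> host 0x%llx\n",
           (unsigned long long)guest_buf,
           (unsigned long long)dma_remap(guest_buf));
    return 0;
}

With an I/O MMU the same translation happens in hardware on every device access, so the hypervisor does not need to intercept each DMA setup.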


Virtualization Enabled Devices - NIC


Direct Access Device Virtualization

Positives

◦ Fast

◦ Simplifies monitor – limited device drivers needed

Negatives

◦ Need hardware support for safety (I/O MMU)

◦ Need hardware support for multiplexing – VMM complexity now in hardware

◦ Hardware interface visible to guest – limits mobility of VM

◦ Interposition hard by definition

Is this worth it?


Virtual Device I/O – Emulated Devices

Problem: Need more flexibility

Solution: Move device emulation into VMM


I/O Virtualization - Disk

Problem: Multiple VMs

◦ Each guest OS thinks it owns an entire disk partition, and there may be many more VMs than there are partitions

Solution:

◦ Create a file or region on disk for each virtual machine’s physical disk

◦ A guest OS disk block number is translated into an offset in the file or disk region being used for storage, and the I/O is performed there
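A minimal sketch of the file-backed approach, assuming POSIX pread/pwrite and a flat image file named guest.img; error handling is omitted and the names are illustrative.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK_SIZE 512

/* Hypothetical file-backed virtual disk: the guest's whole disk is one
 * host file, so guest block n lives at byte offset n * BLOCK_SIZE. */
struct vdisk {
    int fd;                              /* backing file on the host */
};

static ssize_t vdisk_read(struct vdisk *d, uint64_t block, void *buf)
{
    off_t off = (off_t)block * BLOCK_SIZE;   /* block number -> file offset */
    return pread(d->fd, buf, BLOCK_SIZE, off);
}

static ssize_t vdisk_write(struct vdisk *d, uint64_t block, const void *buf)
{
    off_t off = (off_t)block * BLOCK_SIZE;
    return pwrite(d->fd, buf, BLOCK_SIZE, off);
}

int main(void)
{
    struct vdisk d = { .fd = open("guest.img", O_RDWR | O_CREAT, 0644) };
    char out[BLOCK_SIZE] = "data the guest put in its block 7";
    char in[BLOCK_SIZE];

    vdisk_write(&d, 7, out);             /* guest writes "its" block 7 */
    vdisk_read(&d, 7, in);               /* read it back through the map */
    printf("%s\n", in);

    close(d.fd);
    return 0;
}

Sparse files make this cheap on the host: blocks the guest never writes need not consume real disk space.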


I/O Virtualization - Disk

Problem: Guest OS has a disk model different from the real one.

Example: the guest OS could assume it has a plain old IDE disk, with an IDE disk driver, while the host actually has a RAID.

Solution:

◦ Remap hardware devices

◦ When the guest OS driver issues an IDE disk command, the hypervisor converts it into commands that drive the new disk

This is a good strategy for upgrading hardware without changing software.

This strategy was one of the reasons VM/370 became popular.
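The remapping can be pictured as a small translation function. This sketch decodes a simplified, illustrative IDE read command and re-issues it as a generic host block request of the kind a RAID driver would consume; the structures and field names are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Minimal slice of the register-level IDE command the guest driver
 * issues. Field names are illustrative. */
struct ide_cmd {
    uint8_t  opcode;        /* 0x20 = READ SECTORS */
    uint64_t lba;           /* starting sector */
    uint16_t count;         /* number of sectors */
};

/* What the host storage stack (e.g., a RAID volume) actually consumes:
 * a generic block request, not IDE register writes. */
struct host_req {
    int      is_read;
    uint64_t first_block;
    uint32_t nblocks;
};

/* The hypervisor's translation step: decode the emulated IDE command
 * and re-issue it as a generic host request. */
static struct host_req ide_to_host(const struct ide_cmd *c)
{
    struct host_req r = {
        .is_read     = (c->opcode == 0x20),
        .first_block = c->lba,
        .nblocks     = c->count,
    };
    return r;
}

int main(void)
{
    struct ide_cmd c = { .opcode = 0x20, .lba = 1403, .count = 8 };
    struct host_req r = ide_to_host(&c);
    printf("%s %u blocks at %llu\n", r.is_read ? "read" : "write",
           (unsigned)r.nblocks, (unsigned long long)r.first_block);
    return 0;
}

Since the guest only ever sees the IDE model, the host storage can be swapped out without touching guest software, which is exactly the upgrade benefit the slide describes.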


Emulated Devices

Positives

◦ Platform stability

◦ Allows interposition

◦ No special hardware support needed – isolation, multiplexing provided by VMM

Negatives

◦ Can be slow (there are ways to make it less slow)

◦ Drivers needed in monitor or host


Para-Virtualized Devices

Guest passes requests to VMM at a higher abstraction level

◦ VMM calls made to initiate requests

◦ Buffers shared between guest / VMM

Positives

◦ Simplifies monitor

◦ Fast

Negatives

◦ VMM needs to supply guest-specific drivers
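A minimal sketch of the shared-buffer design mentioned above: a request ring in memory visible to both sides, with the guest producing high-level requests and the VMM consuming them. The layout and names are illustrative, loosely in the spirit of (but not copied from) Xen's I/O rings.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8   /* power of two so indices wrap with a mask */

/* One high-level I/O request: the guest says what it wants
 * ("read block N") instead of poking device registers. */
struct pv_req {
    int      is_read;
    uint64_t block;
};

/* Shared memory page visible to both guest and VMM.
 * The guest advances 'prod' when posting; the VMM advances 'cons'. */
struct pv_ring {
    uint32_t prod, cons;
    struct pv_req req[RING_SIZE];
};

/* Guest side: post a request into the shared ring; in a real system
 * this would be followed by a hypercall to notify the VMM. */
static int guest_post(struct pv_ring *r, struct pv_req q)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;                         /* ring full */
    r->req[r->prod & (RING_SIZE - 1)] = q;
    r->prod++;
    return 0;
}

/* VMM side: drain the ring and service each request with host drivers. */
static void vmm_drain(struct pv_ring *r)
{
    while (r->cons != r->prod) {
        struct pv_req *q = &r->req[r->cons & (RING_SIZE - 1)];
        printf("VMM: %s block %llu\n",
               q->is_read ? "read" : "write",
               (unsigned long long)q->block);
        r->cons++;
    }
}

int main(void)
{
    static struct pv_ring ring;
    guest_post(&ring, (struct pv_req){ .is_read = 1, .block = 1403 });
    guest_post(&ring, (struct pv_req){ .is_read = 0, .block = 7 });
    vmm_drain(&ring);
    return 0;
}

Because the requests carry their meaning explicitly, the VMM never has to reverse-engineer register writes, and batching several requests per notification amortizes the hypercall cost.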


I/O Virtualization – Para-virtualization

Alternative Handling of I/O:

◦ Dedicate one VM to run a standard OS and reflect all I/O calls from other VMs to it

This approach is enhanced when para-virtualization is used

◦ The command issued to the hypervisor says what the guest OS wants (e.g., block 1403 from disk 1) rather than being a series of commands writing to device registers (which a type 1 hypervisor must decode to figure out what the guest OS is trying to do)

Xen uses this approach to do I/O with a VM called domain 0
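The difference in what the hypervisor sees can be sketched directly. The register offsets and the hypercall below are illustrative only; the point is that full virtualization delivers a series of traps whose meaning must be inferred, while para-virtualization delivers the intent in one call.

#include <stdint.h>
#include <stdio.h>

/* Full virtualization: the guest driver performs a series of device
 * register writes, and each one traps. The hypervisor must piece the
 * sequence back together to learn "read block 1403 from disk 1". */
static void fully_virtualized_read(void)
{
    /* Each line below stands for a separate trap into the hypervisor.
     * Register offsets are illustrative IDE-style ports. */
    printf("trap: write 0x1F2 <- 1      (sector count)\n");
    printf("trap: write 0x1F3 <- 0x7B   (LBA bits 0-7 of 1403)\n");
    printf("trap: write 0x1F4 <- 0x05   (LBA bits 8-15)\n");
    printf("trap: write 0x1F5 <- 0x00   (LBA bits 16-23)\n");
    printf("trap: write 0x1F7 <- 0x20   (READ SECTORS command)\n");
}

/* Para-virtualization: one call that states the intent directly.
 * The call name and signature are hypothetical. */
static void paravirtual_read(int disk, uint64_t block)
{
    printf("hypercall: read disk %d, block %llu\n",
           disk, (unsigned long long)block);
}

int main(void)
{
    fully_virtualized_read();   /* five traps, meaning must be inferred */
    paravirtual_read(1, 1403);  /* one call, meaning is explicit */
    return 0;
}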


I/O Virtualization - Type 2 vs Type 1 Hypervisor

I/O virtualization on a type 2 hypervisor has a practical advantage over a type 1 hypervisor:

◦ The host OS includes device drivers for all I/O devices, regular and strange

◦ When an application program accesses a strange I/O device, the translated code can call the existing host device driver to get the work done

With a type 1 hypervisor:

◦ The hypervisor must either contain the driver itself, or make a call to a driver in domain 0 (which plays a role similar to the host OS)
