
Linux PV on HVM

paravirtualized interfaces in HVM guests

Stefano Stabellini

Linux as a guest: problems

Linux PV guests have limitations:

- difficult ("different") to install
- some performance issues on 64 bit
- limited set of virtual hardware

Linux HVM guests:

- install the same way as native
- very slow

Linux PV on HVM: the solution

- install the same way as native

- PC-like hardware

- access to fast paravirtualized devices

- exploit nested paging

Linux PV on HVM: initial feats

Initial version in Linux 2.6.36:

- introduce the xen platform device driver

- add support for HVM hypercalls, xenbus and grant table (hypercall setup is sketched after this list)

- enable blkfront, netfront and PV timers

- add support for PV suspend/resume

- the vector callback mechanism
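
None of this works until the kernel has noticed that it is running as a Xen HVM guest and has installed the hypercall page. Below is a minimal, hypothetical C sketch of that detection, following the documented Xen CPUID interface (signature leaves starting at 0x40000000); it is an illustration, not the actual Linux code.

```c
/* Sketch: detect Xen from inside an HVM guest and install the hypercall
 * page via the standard Xen CPUID interface.  Illustration only. */
#include <stdint.h>
#include <string.h>

static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
    asm volatile("cpuid" : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d) : "a"(leaf), "c"(0));
}

static void wrmsr(uint32_t msr, uint64_t val)
{
    asm volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

/* Page that the hypervisor fills in with hypercall stubs. */
static char hypercall_page[4096] __attribute__((aligned(4096)));

int xen_hvm_init(void)
{
    uint32_t eax, ebx, ecx, edx, base;
    char sig[13];

    /* Xen's leaves start at 0x40000000, possibly offset in 0x100 steps. */
    for (base = 0x40000000; base < 0x40010000; base += 0x100) {
        cpuid(base, &eax, &ebx, &ecx, &edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = 0;
        if (!strcmp(sig, "XenVMMXenVMM") && eax >= base + 2)
            goto found;
    }
    return -1;                        /* not running on Xen */

found:
    /* Leaf base+2: EAX = number of hypercall pages, EBX = MSR to program. */
    cpuid(base + 2, &eax, &ebx, &ecx, &edx);

    /* Writing (address | page index) asks Xen to fill in that hypercall page.
     * A real kernel would pass the page's physical address; this sketch
     * assumes an identity mapping for simplicity. */
    wrmsr(ebx, (uint64_t)(uintptr_t)hypercall_page | 0);
    return 0;
}
```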

Old style event injection

Receiving an interrupt:

do_IRQ → handle_fasteoi_irq → handle_irq_event → xen_evtchn_do_upcall

ack_apic_level: >= 3 VMEXITs (each access to the emulated APIC traps to the hypervisor)

The new vector callback

Receiving a vector callback:

xen_evtchn_do_upcall (called directly from the callback vector handler, no APIC EOI to emulate)
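
Delivery through the callback vector has to be requested from Xen first. The sketch below is a hypothetical illustration of that registration: the constants mirror Xen's public HVM parameter interface (HVM_PARAM_CALLBACK_IRQ, with type 2 in the top byte meaning "deliver as a vector"), and HYPERVISOR_hvm_op() stands in for a real hypercall wrapper.

```c
/* Sketch: ask Xen to deliver event-channel notifications as a plain
 * interrupt vector instead of an emulated PCI/GSI interrupt. */
#include <stdint.h>

#define HVMOP_set_param        0
#define HVM_PARAM_CALLBACK_IRQ 0

/* From Xen's public hvm/params.h: top byte 2 selects "vector" delivery,
 * the low byte carries the vector number. */
#define HVM_CALLBACK_VECTOR(v) (((uint64_t)2 << 56) | (v))

#define DOMID_SELF 0x7FF0

struct xen_hvm_param {
    uint16_t domid;   /* DOMID_SELF for the calling guest */
    uint32_t index;   /* which parameter to set           */
    uint64_t value;   /* new value                        */
};

extern long HYPERVISOR_hvm_op(unsigned int op, void *arg); /* assumed wrapper */

/* Arbitrary high vector chosen for this sketch; Linux has a dedicated
 * HYPERVISOR_CALLBACK_VECTOR for the same purpose. */
#define XEN_CALLBACK_VECTOR 0xf3

int xen_enable_vector_callback(void)
{
    struct xen_hvm_param p = {
        .domid = DOMID_SELF,
        .index = HVM_PARAM_CALLBACK_IRQ,
        .value = HVM_CALLBACK_VECTOR(XEN_CALLBACK_VECTOR),
    };

    /* After this call, pending event channels show up directly on the
     * chosen vector and the handler can run xen_evtchn_do_upcall()
     * without any APIC EOI round trip to the hypervisor. */
    return HYPERVISOR_hvm_op(HVMOP_set_param, &p);
}
```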

Linux PV on HVM: newer feats

Later enhancements (2.6.37+):

- ballooning

- PV spinlocks

- PV IPIs

- Interrupt remapping onto event channels

- MSI remapping onto event channels

Interrupt remapping

MSI remapping
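
Both remappings rely on the same mechanism: rather than receiving the device interrupt through the emulated IOAPIC or a virtual MSI, the guest binds the physical IRQ (pirq) to an event channel. The sketch below is a hypothetical illustration built on Xen's EVTCHNOP_bind_pirq operation; HYPERVISOR_event_channel_op() is assumed to be an existing hypercall wrapper and error handling is minimal.

```c
/* Sketch: remap a physical IRQ (pirq) onto a Xen event channel, so the
 * device interrupt arrives through the paravirtualized event-channel
 * path instead of the emulated interrupt controller. */
#include <stdint.h>

#define EVTCHNOP_bind_pirq    2
#define BIND_PIRQ__WILL_SHARE 1

typedef uint32_t evtchn_port_t;

struct evtchn_bind_pirq {
    uint32_t pirq;        /* IN:  physical IRQ to remap        */
    uint32_t flags;       /* IN:  BIND_PIRQ__WILL_SHARE or 0   */
    evtchn_port_t port;   /* OUT: event channel chosen by Xen  */
};

extern long HYPERVISOR_event_channel_op(int cmd, void *arg); /* assumed wrapper */

/* Returns the event-channel port now carrying interrupts for 'pirq',
 * or a negative value on failure. */
long xen_remap_pirq_to_evtchn(uint32_t pirq)
{
    struct evtchn_bind_pirq bind = {
        .pirq  = pirq,
        .flags = BIND_PIRQ__WILL_SHARE,
    };
    long rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind);

    if (rc)
        return rc;

    /* From here on the guest handles this device through the event-channel
     * upcall; for MSIs, the pirq itself would first be obtained with the
     * PHYSDEVOP_map_pirq hypercall. */
    return bind.port;
}
```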

PV spectrum

                        HVM guests   Classic PV on HVM   Enhanced PV on HVM   Hybrid PV on HVM   PV guests
Boot sequence           emulated     emulated            emulated             emulated           paravirtualized
Memory                  hardware     hardware            hardware             hardware           paravirtualized
Interrupts              emulated     emulated            paravirtualized      paravirtualized    paravirtualized
Timers                  emulated     emulated            paravirtualized      paravirtualized    paravirtualized
Spinlocks               emulated     emulated            paravirtualized      paravirtualized    paravirtualized
Disk                    emulated     paravirtualized     paravirtualized      paravirtualized    paravirtualized
Network                 emulated     paravirtualized     paravirtualized      paravirtualized    paravirtualized
Privileged operations   hardware     hardware            hardware             paravirtualized    paravirtualized

Benchmarks: the setup

Hardware setup:
- Dell PowerEdge R710
- CPU: dual Intel Xeon E5520 quad core CPUs @ 2.27GHz
- RAM: 22GB

Software setup:
- Xen 4.1, 64 bit
- Dom0: Linux 2.6.32, 64 bit
- DomU: Linux 3.0-rc4, 8GB of memory, 8 vcpus

PCI passthrough: benchmark

PCI passthrough of an Intel Gigabit NIC
CPU usage, the lower the better
[bar chart: CPU usage of domU and dom0, with and without interrupt remapping]

Kernbench

Results: percentage of native, the lower the better
[bar chart: PV on HVM 64 bit, PV on HVM 32 bit, HVM 64 bit, HVM 32 bit, PV 64 bit, PV 32 bit]

Kernbench

Results: percentage of native, the lower the better
[bar chart: same configurations plus KVM 64 bit]

PBZIP2

Results: percentage of native, the lower the better
[bar chart: PV on HVM 64 bit, PV 64 bit, PV on HVM 32 bit, PV 32 bit]

PBZIP2

Results: percentage of native, the lower the better
[bar chart: same configurations plus KVM 64 bit]

SPECjbb2005

Results: percentage of native, the higher the better
[bar chart: PV 64 bit, PV on HVM 64 bit]

SPECjbb2005

Results: percentage of native, the higher the better
[bar chart: PV 64 bit, PV on HVM 64 bit, KVM 64 bit]

Iperf tcp

Results: gbit/sec, the higher the better
[bar chart: PV 64 bit, PV on HVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit, HVM 32 bit]

Iperf tcp

Results: gbit/sec, the higher the better
[bar chart: same configurations plus KVM 64 bit]

Conclusions

PV on HVM guests are very close to PV guests in benchmarks that favor PV MMUs.

PV on HVM guests are far ahead of PV guests in benchmarks that favor nested paging.

Questions?
