SierraVisor is the first integrated secure hypervisor for ARM that supports TrustZone and virtualization in a seamless, easy-to-use package.
Sierraware Overview
Simply Secure
Sierraware: leading provider of integrated hypervisor and TEE solutions
Delivered as source code; flexible and easy to customize
Unified TEE and hypervisor implementation
Adheres to GlobalPlatform specifications
Products
– Residential gateways
– Set-top boxes
– TVs
– Mobile phones
– Automotive and avionics
– Industrial control
Software Suite
SierraVisor:
– Hypervisor for ARM
– Para-virtualization, TrustZone virtualization, hardware virtualization
– 64-bit support for Cortex-A5x cores
– Linux, uC/OS, and various RTOSes
SierraTEE / Micro-Kernel:
– TrustZone/GlobalPlatform TEE
– Android, uC/OS, and various other OSes
– Runs on a range of ARM cores: ARM11, Cortex-A9/A15, and Cortex-A53/57
SierraSHIELD: Integrity Management
– Linux Kernel Integrity Management
– Application Rootkit Scanner
– Incremental Log Scanner
DRM and Content Protection:
– Hardware accelerated media streaming and DTCP toolkit
– Integration with Microsoft Playready
SierraVisor – Hypervisor for ARM
Integrated with TrustZone and Android
SierraVisor
A universal hypervisor with three different choices:
Hardware-assisted virtualization – Cortex-A15 based SOCs
TrustZone monitor as VMM – TrustZone-capable Cortex-A9 and ARM11 based SOCs
Para-virtualization – Cortex-A9 and ARM11 based SOCs
Virtualization for Cortex A9, ARM11
• Cortex-A9 and ARM11 are the most popular ARM cores found in today's SOCs
• They have no hardware support for virtualization
• They offer only two levels of privilege
• TrustZone provides a third level; almost all ARM cores provide TrustZone support
• There are two distinct ways to virtualize the CPU:
• Hypercalls for sensitive instructions: run both the guest kernel and guest user space in ARM virtual user mode
• TrustZone monitor as VMM: lets guests run unmodified at their native privilege levels
Hypercalls and Sensitive Instructions
• Co-processor access instructions: MRC/MCR/CDP/LDC/STC
• TrustZone: SMC
• I/O regions: reads and writes to I/O regions
• Operations on the CPSR: MRS, MSR, CPS, SRS, RFE, LDM, DPSPC
• Indirect CPSR access: LDRT, STRT
• Hypercalls are inserted at compilation time
• Very low overhead at run time
• All the work of identifying the instructions to be rewritten is done at compilation time
• Enables a very flexible scheme: the system designer can choose which operations to override and can differentiate based on actual use
• Not all Virtualization solutions have to be the same
TrustZone Monitor / Virtualization VMM
(Diagram: Guest0 through Guest(n) run with both user and kernel in virtual user mode, while the hypervisor runs in system mode; the secure world hosts a secure micro-kernel running tasklets.)
TrustZone Monitor as a VMM
TrustZone provides a mirror world whose memory and other resources are completely isolated from the normal world
The TrustZone monitor can be extended to act as a hypervisor:
– Guests continue to work without modifications
– The kernel continues to run in supervisor mode
– Guest OSes run in their individual containers with low overhead
TrustZone Monitor and Virtualization VMM
(Diagram: Guest0 through Guest(n), each with its kernel in supervisor mode and its user space in user mode; the secure world runs a kernel with tasklets under the TrustZone monitor.)
Hardware-Assisted Virtualization
The ARM virtualization extensions and LPAE provide CPU, memory, and I/O virtualization support in hardware.
HYP mode
– A new execution mode, "HYP", is added for the hypervisor
– Allows the guest to continue running in the normal supervisor/user modes
– Virtualizes all CPU features, including CP15
Large Physical Addressing
– 40-bit physical addressing
– Guest virtual memory retains its 4 GB address space
– Up to 1 TB of physical address space
Hardware Assisted Virtualization continued…
Nested page table support
– Two stages of hardware address translation, i.e. separate hardware-based page tables for the hypervisor and the Guest OS
– Guest OS tables translate VA to IPA
– Hypervisor page tables translate IPA to PA
VGIC
– Virtual interrupts are routed to the non-secure IRQ/FIQ/Abort vectors
– Virtual masking bits allow each guest to mask and unmask the GIC virtually
– The Guest OS manipulates the virtualized interrupt controller
SierraVisor - Flexibility
(Diagram: SierraVisor block architecture on an ARM platform. A Memory Access Manager and I/O Access Manager mediate normal and protected memory and I/O, with privileged-access and execution-time controls on the CPU. A VCPU cluster provides pseudo-virtual I/O, a virtualized GIC, abort/trap-based and hardware nested MMUs, a hypercall/TrustZone scheduler, and hardware VCPUs. Rich guest OSes (kernel/user space) and secure stacks (user/kernel) communicate through a TrustZone mailbox.)
SIERRA TEE – TRUSTED EXECUTION ENVIRONMENT
Android, RTOS Ready
SierraTEE: TrustZone Environment
(Diagram: SierraTEE architecture on an ARM SOC with a crypto engine, secure memory, an external bus, and secure peripherals: flash, keyboard, display. The normal world OS (Android/uCOS/RTOS) hosts the GlobalPlatform Client API and a secure driver in its kernel. The secure OS contains a dispatcher, kernel, and monitor/real-time scheduler, and runs secure tasks against the GlobalPlatform Internal API — for example media playback with DRM, crypto/display/file-system services, DASM services, a trustlet manager, and Java payment with secure input/output.)
Powerful, Purpose-built OS
Flexible Neon and VFP support
– Fully shared mode
– Supports both the "Secure" and "Normal" worlds
Thwarts side-channel attacks by protecting branch target buffers, TLBs, etc.
Supports several interrupt models
– FIQ and IRQ on dedicated secure cores
– FIQ-only mode when sharing cores
– Interrupt routing from the secure to the non-secure world
Multi-core Ready: AMP/SMP
Dedicated cores for the secure and normal worlds (AMP)
– Satisfies size- and performance-constrained designs
– Ideally suited for high-performance applications such as media playback and transcoding
Secure and non-secure kernels share cores (SMP)
– Provides maximum peak CPU bandwidth
– Both secure and non-secure kernels can utilize all available cores
(Diagram: two ARM MP core (Core0–Core3) configurations — one with cores dedicated separately to the normal world and to the secure world running SierraTEE, and one where SierraTEE and the normal world share all four cores.)
ARCHITECTURE AND INTERNALS
Real-time, Flexible and Safe
Hardware Assisted Virtualization and TrustZone
SierraVisor provides the secure TEE alongside its hypervisor support.
Supports a secure TEE and secure applications
(Diagram: the TrustZone monitor and hardware virtualization VMM host Guest0 through Guest(n) alongside a secure world running a secure micro-kernel with tasklets.)
Start, Stop, and Restart of Guests – VCPU state transition diagram
Init: Pre-configured guest state
Reset: VCPU is initialized and waiting to start
Ready: Ready-to-run state, available to the scheduler
Running: The currently running VCPU
Pause: The VCPU is suspended and not available to the scheduler
Halt: The VCPU enters this state due to an error condition
(Diagram: VCPU state machine — Create moves Init to Reset; Start moves Reset to Ready; Schedule toggles between Ready and Running; Pause and Resume move between Running and Pause; Halt and Reset transitions bring a VCPU to the Halt and Reset states; Destroy tears the VCPU down.)
Device Tree Configuration for Guests
Device tree configuration is used for hypervisor and guest configuration
The device tree configures CPU details, memory, device associations, and interrupt mappings
Guest configuration example:
    guest0 {
        model = "vexpress-a15";
        device_type = "guest";
        compatible = "ARMv7a,cortex-a15";
        start_pc = <0x80000000>;
    };
Start, Stop and Restart of Guests Commands
A management console is provided to control the guests
The hypervisor shell is implemented on top of the virtual serial console driver and can display both Guest OS and hypervisor console output
Start, stop, and restart commands are provided to control each guest
Start command: sw# guest 0 start
Pause command: sw# guest 0 pause
Resume command: sw# guest 0 resume
Application Support on Hypervisor
Application framework support added by implementing command support in the management console
Support for run-time loadable applications
Pre-defined commands such as guest start, pause, resume, dumpreg, and ps
Module list, load, and unload commands
Hypervisor application configuration:
    struct sw_task {
        u32   task_id;       /* Task ID */
        u32   entry_addr;    /* Task entry address */
        char  name[32];      /* Task user defined name */
        u32   state;         /* Task state */
        void* task_sp;       /* Task stack pointer */
        u32   task_sp_size;  /* Task stack size */
        void* tls;           /* Task local storage */
        acl_t acl;           /* User access control list */
    };
Application Support on Hypervisor Task Helper Functions
Create task – Allocates the task structure, task local storage, and task stack, and puts the task in the suspended state.
create_task(sa_config_t *psa_config, int *task_id);
Start task – Called on command invocation; puts the task in the ready-to-run state.
start_task(int task_id, u32* params);
Destroy task – Cleans up the created task, freeing the resources allocated during task creation.
destroy_task(int task_id);
IPC API – Sends data between two applications or guests as a single request and response.
ipc_api(int origin_id, int target_id, int target_cmd_id, void *req_buf, int req_buf_len, void *res_buf, int res_buf_len, int *ret_res_buf_len);
Devices and I/O abstraction
Pass Through Devices
Direct hardware access support for the Guest OS
Device memory access is restricted to the Guest OS it is mapped to
Pass-through device interrupts are mapped to the Guest OS
The device tree is used to configure pass-through devices
The device tree holds each device's memory region and interrupt information
Pass Through Devices Continued…
Direct I/O driver data flow:
(Diagram: the guest driver accesses the device directly for DMA; the VMM's memory manager and an access control list mediate the mappings, and device interrupts are delivered to the guest.)
Pass Through Device Configuration
A device tree list defines the pass-through devices
Interrupts are configured per device
Device configuration example for pass-through devices:
    struct device_conf_entry linuxguest1_dtable[] = {
        { .dce_irq = 42, .dce_name = "ttc-0",
          .dce_type = DEVTYPE_PASSTHROUGH, },
        { .dce_irq = 54, .dce_name = "ethernet",
          .dce_type = DEVTYPE_PASSTHROUGH, },
    };
Device Emulation - UART
(Diagram: UART emulation — the guest driver writes to emulated device memory, the write traps to the virtual device in the VMM, and completion is signaled back to the guest driver as a virtual interrupt (VIRQ).)
Shared Devices – System MMU
Systems with SMMU support:
– Adding an SMMU for stage 2 translation removes the need for the hypervisor to manage shadow translation tables entirely in software
– With stage 2 translation, the SMMU enables any Guest OS to directly configure all DMA-capable devices in the system sharing the same address space at the IPA level
– The SMMU can also be configured to ensure that devices operating on behalf of one Guest OS cannot corrupt the memory of another Guest OS
– The stage 2 address translation system uses a translation scheme with 64-bit descriptors so that it can address physical memory larger than 4 GB
Devices – System MMU
The following diagram depicts two-stage System MMU translation.
(Diagram: per-process 4 GB virtual address (VA) spaces in each guest are mapped by stage 1 translation into that guest's intermediate physical address (IPA) space, and stage 2 translation maps the IPA regions into the physical address (PA) space.)
Separation from Core Functionalities Continued…
DMA remapping functionality
(Diagram: DMA remapping — CPU accesses go through the CPU MMU (GVA → GPA → HPA), while device accesses go through the IO MMU (DVA → HPA); devices are grouped into domains (Domain 1, Domain 2) so that each device's DMA is confined to its own domain's host physical memory.)
Virtual Framebuffer
– A ring buffer is used for communication between the guest and the VMM
– The guest signals a change by writing a request into the ring buffer
    struct fb_device {
        int width;        /* Width of the frame buffer */
        int height;       /* Height of the frame buffer */
        u32 line_length;  /* Length of a row of pixels */
        u32 mem_length;   /* Framebuffer length */
    };
– Because of its size, the pixel data is not placed in the ring buffer: copying this data (~4 MB) would be an expensive operation. Instead, both the guest and the VMM have statically mapped entries, and the guest driver notifies the VMM driver of the region that needs to be refreshed.
GPU Virtualization API Remoting
(Diagram: API remoting — OpenGL ES and EGL commands issued in the slave guest are marshaled by a remote API layer and carried over a pipe driver to the primary guest, where a state translator converts them into GPU commands for the GPU driver, which issues the hardware commands.)
Android GPU Integration
(Diagram: Android GPU integration — in the Android slave guest, applications and SurfaceFlinger use emulated EGL/GLES libraries, gralloc, and a native window; a session/context manager TLV-encodes the commands into a protocol byte stream carried over a shared-memory pipe. In the master guest (Guest 0), a renderer application uses the GLES host library and a custom GPU/EGL translation layer, with a VFB driver in the kernel.)
GPU Virtualization API Remoting Continued…
Each user call to the graphics API is translated into a remote procedure call
Marshaled commands are transferred over shared memory; local pointers (for textures, surfaces) are translated into safe pointers
On receiving the commands, the proxy server translates them into hardware commands for the GPU driver
The GPU driver creates a context for each remote rendering context
It executes the drawing commands in the appropriate context as it services each remote client; by controlling context switches, it prevents guest applications from modifying the state of other contexts
OMX and Secure Codec
Para-virtualized IL
(Diagram: in the Android guest, the media player stack — extractor, OMX codec, audio/video tracks and sources — talks to an OMX IL proxy; commands cross kernel shared memory over a shared-memory pipe to the master guest (Guest 0), where an OMX application handles decode and display using a context manager, media extractor, containers, codec, color conversion, and renderer, with VFP support.)
Virtio
Para-virtualized I/O: virtio provides an efficient abstraction for hypervisors and a common set of I/O APIs
Components of virtio:
– Front-end drivers
– Back-end drivers
virtio defines two layers to support guest-to-hypervisor communication:
– Virtqueue: the virtual queue interface that conceptually attaches front-end drivers to back-end drivers. Drivers can use zero or more queues, depending on their needs. For example, the virtio network driver uses two virtual queues (one for receive and one for transmit).
– The queues are actually implemented as rings to traverse the guest-to-hypervisor transition.
Virtio: Hierarchy
Virtio Internals
From the perspective of the guest:
1. At the top is the virtio_driver, which represents the front-end driver in the guest.
2. Devices that match this driver are encapsulated by the virtio_device (a representation of the device in the guest).
3. The virtio_config_ops structure defines the operations for configuring the virtio device.
4. The virtio_device is referred to by the virtqueue, which includes a reference to the virtio_device it serves.
5. Finally, each virtqueue object references the virtqueue_ops object, which defines the underlying queue operations for dealing with the hypervisor driver.
Network Emulation
Network virtualization support:
– An emulated virtual switch that can handle multiple virtual network ports
– A virtual network port is a logical connection between the switch and a network driver
– A "dbuf" is the data buffer that holds packet data
– Two ring buffers, for transmit and receive operations, are used to carry instructions
– Data transfer is done using shared memory pages
Shared Devices Continued…
Para-virtualization driver data flow:
(Diagram: the guest's virtio-net front end sends packets through the VMM's device I/O manager to the physical device driver; completions come back to the guest as interrupts.)
Sierraware – Here to Help
Simply Secure
Professional Services
Custom Services
– Porting software to processors
– Integrating TEE and SierraVisor with applications
– Developing drivers, encoders, or apps
ARM Design Expertise
– Extensive experience with ARM processors and kernel code
– Android, Linux, BSD, and VxWorks development
– Hardware & FPGA
Project Management
– Phased approach from planning and development to testing & certification
– Carefully defined schedules and communication with customers to avoid surprises & delays
Technical Support
Telephone and Email Support
Online technical documentation
Software updates for commercial products
Previews of upcoming releases
Ability to influence feature enhancements
Commitment to Quality
– A Service Level Agreement (SLA) details support response times and escalation levels