Software Based Memory Protection For Sensor Nodes
Ram Kumar, Eddie Kohler, Mani Srivastava
([email protected])
CENS Technical Seminar Series
10/20/06 CENS Seminar 2
Memory Corruption

[Figure: Sensor node address space (0x0000 - 0x0200): run-time stack; globals and heap (apps., drivers, OS); no protection]

• Single address space CPU
• Shared by apps., drivers and OS
• Most bugs in deployed systems come from memory corruption
• Corrupted nodes trigger network-wide failures
• Memory protection is an enabling technology for building robust software for motes
Why is Memory Protection Hard?

• No MMU in embedded micro-controllers
• MMU hardware requires a lot of RAM
• Increases area and power consumption

Core        MMU   Cache   Area (mm²) 0.13u Tech.   mW per MHz
ARM7-TDMI   No    No      0.25                     0.1
ARM720T     Yes   8 KB    2.40 (~10x)              0.2 (~2x)
Software-based Approaches

Software-based Fault Isolation (Sandbox)
• Coarse grained protection
• Check all memory accesses at run-time
• Introduce low-overhead inline checks

Application Specific Virtual Machine (ASVM)
• Interpreted code is safe and efficient
• ASVM instructions are not type-safe
Software-based Approaches

Type-safe languages
• Language semantics prevent illegal memory accesses
• Fine grained memory protection
• Challenge is to interface with non type-safe software
• Ignores large existing code-base
• Output of type-safe compiler is harder to verify
  • Especially with performance optimizations

CCured - type-safe retrofitting of C code
• Combines static analysis and run-time checks
• Provides fine-grained memory safety
• Difficult to interface to pre-compiled libraries
  • Different representation of pointer types
Overview

Ideal: combination of software-based approaches
• E.g.: Sandbox for ASVM instructions

Software-based Fault Isolation (SFI)
• Building block for providing coarse-grained protection
• Enhanced using other approaches (e.g. static analysis)

Memory Map Manager
• Ensures integrity of memory accesses

Control Flow Manager
• Ensures integrity of control flow
SOS Operating System

[Figure: SOS architecture]
Dynamically loaded modules: Tree Routing Module, Data Collector Application, Photosensor Module
Static SOS Kernel:
• Kernel Components: Dynamic Memory, Message Scheduler, Dynamic Linker
• SOS Services: Sensor Manager, Messaging I/O, System Timer
• Device Drivers: Radio, I2C, ADC
Design Goals

Provide coarse-grained memory protection
• Protect OS from applications
• Protect applications from one another

Targeted for resource constrained systems
• Low RAM usage
• Acceptable performance overhead

Memory safety verifiable on the node
Outline

• Introduction
• System Components
  • Memory Map
  • Control Flow Manager
  • Binary Re-Writer
  • Binary Verifier
• Evaluation
System Overview

[Figure: Raw Binary → Binary Re-Writer (on the desktop) → Sandboxed Binary → Binary Verifier (on the sensor node) → Memory Safe Binary; the inserted checks call into the Memory Map and Control Flow Mgr.]
System Components

Re-writer
• Introduce run-time checks

Verifier
• Scans for unsafe operations before admission

Memory Map Manager
• Tracks fine-grained memory layout and ownership info.

Control Flow Manager
• Handles context switch within single address space
Classical SFI (Sandboxing)

• Partition address space of a process into contiguous domains
• Application extensions loaded into separate domains
• Run-time checks force memory accesses to own domain
• Checks have very low overhead

[Figure: address space partitioned into run-time stack, Kernel Module #1, Operating System, Kernel Module #2]
Challenges - SFI on a Mote

Partitioning address space is impractical
• Total available memory is severely limited
• Static partitioning further reduces memory

Our approach
• Permit arbitrary memory layout
• But maintain a fine-grained map of the layout
• Verify valid accesses through run-time checks
Memory Map

[Figure: address space (0x0000 - 0x0200) divided into a User Domain and a Kernel Domain]

• Fine-grained layout and ownership information
• Partition address space into blocks
• Allocate memory in segments (sets of contiguous blocks)
• Encoded information per block:
  • Ownership - Kernel/Free or User
  • Layout - start of segment bit
Memmap in Action: User-Kernel Protection

• Block size on Mica2 - 8 bytes
• Efficiently encoded using 2 bits per block:
  • 00 - Free / Start of Kernel Allocated Segment
  • 01 - Later portion of Kernel Allocated Segment
  • 10 - Start of User Allocated Segment
  • 11 - Later portion of User Allocated Segment

[Figure: heap blocks transition between Free, Kernel and User as the following calls execute:
pA = malloc(KERN, 16);
pB = malloc(USER, 30);
mem_chown(pA, USER);]
Memmap API

memmap_set(Blk_ID, Num_blk, Dom_ID)
Updates the memory map
• Blk_ID: ID of starting block in a segment
• Num_blk: number of blocks in a segment
• Dom_ID: domain ID of owner (e.g. USER / KERN)

Dom_ID = memmap_get(Blk_ID)
Returns domain ID of owner for a memory block

API accessible only from trusted domain (e.g. Kernel)
• Property verified before loading
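The API above can be sketched in C. This is a minimal illustration, not the actual SOS implementation: it assumes the 2-bit-per-block encoding from the previous slide (owner in the high bit, start-of-segment flag in the low bit) and Mica2-style parameters (512 blocks of 8 bytes); all identifiers are illustrative.

```c
#include <stdint.h>

/* Illustrative sketch of the memmap API, assuming the 2-bit encoding
 * from the slides: high bit = owner (0 kernel/free, 1 user), low bit =
 * later-portion-of-segment. Not the actual SOS code. */
#define NUM_BLOCKS    512   /* 4 KB protected range / 8 B blocks */
#define RECS_PER_BYTE 4     /* 2 bits per record                 */
#define DOM_KERN      0
#define DOM_USER      1

static uint8_t memmap[NUM_BLOCKS / RECS_PER_BYTE];  /* 128 bytes */

void memmap_set(uint16_t blk_id, uint16_t num_blk, uint8_t dom_id) {
    for (uint16_t i = 0; i < num_blk; i++) {
        uint16_t b   = blk_id + i;
        /* first block: start-of-segment (low bit 0); rest: low bit 1 */
        uint8_t  rec = (uint8_t)((dom_id << 1) | (i != 0));
        uint8_t  sh  = (uint8_t)((b % RECS_PER_BYTE) * 2);
        memmap[b / RECS_PER_BYTE] &= (uint8_t)~(0x3u << sh);
        memmap[b / RECS_PER_BYTE] |= (uint8_t)(rec << sh);
    }
}

uint8_t memmap_get(uint16_t blk_id) {
    uint8_t sh = (uint8_t)((blk_id % RECS_PER_BYTE) * 2);
    /* return only the owner bit of the block's record */
    return (memmap[blk_id / RECS_PER_BYTE] >> (sh + 1)) & 0x1;
}
```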
Using the Memory Map for Protection

Protection Model
• Write access to a block is granted only to its owner

Systems using the memory map need to ensure:
1. Ownership information in memory map is current
2. Only block owner can free/transfer ownership
3. Single trusted domain has access to memory map API
4. Memory map is stored in protected memory

Easy to incorporate into existing systems
• Modify dynamic memory allocator - malloc, free
• Track function calls that pass memory from one domain to another
• Changes to SOS kernel ~ 1%
  • 103 lines in SOS memory manager
  • 12720 lines in kernel
Memmap Checker

• Enforces the protection model
• Checker invoked before EVERY write access

Protection Model
• Write access to a block is granted only to its owner

Checker Operations
• Look up memory map based on write address
• Verify current executing domain is block owner
Address to Memory Map Lookup

[Figure: Address (bits 15-0) split into Block Number (bits 11-3, 9 bits) and Block Offset (bits 2-0); the block number indexes the memmap table, where 1 byte holds 4 memmap records]
Optimizing the Memmap Checker

Minimize performance overhead of checks

Address to memory map lookup
• Requires multiple complex bit-shift operations
• Micro-controllers support only single-bit shift operations
• Use a FLASH-based look-up table
• 4x speed-up - from 32 to 8 clock cycles

Overall overhead of a check - 66 cycles
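The lookup and ownership check can be sketched in plain C (the FLASH look-up table mentioned above replaces the shifts below with a table read on the AVR; identifiers are illustrative, not the actual SOS code):

```c
#include <stdint.h>

/* Sketch of the address-to-memmap lookup, assuming the layout on the
 * slides: 8-byte blocks (offset in address bits 2-0), block number in
 * address bits 11-3, and four 2-bit records packed per table byte. */
static uint8_t memmap_table[128];       /* 512 blocks x 2 bits = 128 B */

uint8_t memmap_lookup(uint16_t addr) {
    uint16_t blk = (addr >> 3) & 0x1FF;         /* bits 11-3 (9 bits)  */
    uint8_t  sh  = (uint8_t)((blk & 0x3) * 2);  /* record within byte  */
    return (memmap_table[blk >> 2] >> sh) & 0x3;
}

/* A write to addr is legal only if the record's owner bit matches the
 * currently executing domain (0 = kernel, 1 = user). */
int memmap_check(uint16_t addr, uint8_t cur_dom) {
    return (memmap_lookup(addr) >> 1) == cur_dom;
}
```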
Memory Map is Tunable

Number of memmap bits per block
• More bits → multiple protection domains

Address range of protected memory
• Protect only a small portion of total memory

Block size
• Match block size to size of memory objects
• Mica2 - 8 bytes, Cyclops - 128 bytes

Memory Map Overhead - 8 Byte Blocks
Bits / Block   0B - 4096B    256B - 3072B
2              128 B (~3%)   88 B (~2%)
4              256 B (~6%)   176 B (~4%)
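The overhead figures in the table follow directly from (protected range / block size) × bits-per-block / 8. A quick sanity check of that arithmetic:

```c
/* Memory map size in bytes for a protected address range. The numbers
 * reproduce the table above, e.g. a 4096 B range with 8 B blocks and
 * 2 bits per block needs 128 B (~3% of 4 KB). */
unsigned memmap_overhead(unsigned range_bytes, unsigned block_bytes,
                         unsigned bits_per_block) {
    return (range_bytes / block_bytes) * bits_per_block / 8;
}
```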
Outline

• Introduction
• System Components
  • Memory Map
  • Control Flow Manager
  • Binary Re-Writer
  • Binary Verifier
• Evaluation
What about Control Flow?

State within a domain can become corrupt
• Memory map only protects one domain from another

Function pointers in data memory
• Calls to arbitrary locations in code memory

Return address on stack
• Single stack for entire system
• Returns to arbitrary locations in code memory
Control Flow Manager

[Figure: program memory with Domain A (call foo) and Domain B (foo: ... call local_fn ... ret)]

Ensure control flow integrity
• Control flow enters a domain at a designated entry point
• Control flow leaves a domain to the correct return address

Track current active domain
• Required for memmap checker

Require binary modularity
• Program memory is partitioned
• Only one domain per partition
Ensuring Control Flow Integrity

Check all CALL and RETURN instructions

CALL check
• If address within bounds of current domain, then CALL
• Else transfer to Cross Domain Call Handler

RETURN check
• If address on stack within bounds of current domain, then RETURN
• Else transfer to Cross Domain Return Handler

Checks are optimized for performance
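Both checks reduce to a bounds test against the current domain's program-memory partition. A sketch with illustrative types (the deployed checks are hand-optimized AVR assembly, not C):

```c
#include <stdint.h>

/* Each domain owns one contiguous partition of program memory. */
typedef struct { uint16_t lo, hi; } domain_t;

static int within(const domain_t *d, uint16_t addr) {
    return addr >= d->lo && addr < d->hi;
}

/* Returns 1 when the transfer may proceed directly; 0 means control
 * must go to the cross-domain call/return handler instead. */
int call_check(const domain_t *cur, uint16_t target) {
    return within(cur, target);
}
int return_check(const domain_t *cur, uint16_t ret_addr) {
    return within(cur, ret_addr);
}
```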
Cross Domain Control Flow

Function call from one domain to another
• Determine callee domain identity
• Verify valid entry point in callee domain
• Save current return address
Cross Domain Call

[Figure: Domain A executes "call fooJT"; the jump table entry for the registered exported function (fooJT: jmp foo) redirects into Domain B's foo, which ends with ret]

Cross Domain Call Stub
• Verify call into jump table
• Get callee domain ID from call address
• Store return address
Cross Domain Return

[Figure: caller executes "call foo"; foo's ret transfers to the stub]

Cross Domain Return Stub
• Verify return address
• Restore caller domain ID
• Restore previous return address
• Return
Stack Protection

[Figure: data memory with a single stack growing down; a stack bound separates user and kernel frames]

• Single stack shared by all domains
• Stack bound set at cross domain calls and returns

Protection Model
• No writes beyond latest stack bound
• Limits corruption to current stack frame
• Enforced by memmap_checker
  • Checks all write addresses
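Under this model the stack test is a single comparison against the saved bound. A sketch, under the assumption (from the figure) that the stack grows down, so frames older than the bound sit at higher addresses; the function name is illustrative:

```c
#include <stdint.h>

/* Stack grows down: the current domain may write only below the bound
 * recorded at the last cross-domain call/return; frames at or above
 * the bound belong to callers and stay protected. Illustrative only. */
int stack_write_ok(uint16_t write_addr, uint16_t stack_bound) {
    return write_addr < stack_bound;
}
```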
Outline

• Introduction
• System Components
  • Memory Map
  • Control Flow Manager
  • Binary Re-Writer
  • Binary Verifier
• Evaluation
Binary Re-Writer

• Re-writer is a C program running on a PC
• Input is the raw binary output by the cross-compiler
• Performs basic block analysis
• Inserts inline checks, e.g. for memory accesses
• Preserves original control flow, e.g. branch targets

[Figure: Raw Binary → Binary Re-Writer (on the PC) → Sandboxed Binary]
Memory Write Checks

Original instruction:
    st Z, Rsrc

Sandboxed sequence:
    push X
    push R0
    movw X, Z
    mov R0, Rsrc
    call memmap_checker
    pop R0
    pop X

• Actual sequence depends upon addressing mode
• Sequence is re-entrant, works in presence of interrupts
• Can be improved by using dedicated registers
Control Flow Checks

Return instruction:
    ret   →   jmp ret_checker

Direct call instruction:
    call foo   →   ldi Z, foo
                   call call_checker

Indirect call instruction:
    icall   →   call call_checker
Outline

• Introduction
• System Components
  • Memory Map
  • Control Flow Manager
  • Binary Re-Writer
  • Binary Verifier
• Evaluation
Binary Verifier

• Verification done at every node
• Correctness of scheme depends upon correctness of verifier
• Verifier is very simple to implement
  • Single in-order pass over instruction sequence
  • No state maintained by verifier
  • Verifier line count: 205 lines
  • Re-Writer line count: 3037 lines
Verified Properties

• All store instructions to data memory are sandboxed
• Store instructions to program memory are not permitted
• Static jump/call/branch targets lie within domain bounds
• Indirect jumps and calls are sandboxed
• All return instructions are sandboxed
Outline

• Introduction
• System Components
  • Memory Map
  • Cross Domain Calls
  • Binary Re-Writer
  • Binary Verifier
• Evaluation
Resource Utilization

• Implemented scheme in the SOS operating system
• Compiling blank SOS kernel for Mica2 sensor platform

Type        Normal    Protected   Overhead
Prog. Mem   41690 B   47232 B     13%
Data Mem    2892 B    3040 B      5%

• Size of memory map - 128 bytes
• Additional memory used for storing parameters
  • Stack bound, return address, etc.
Memory Map Overhead

API modification overhead in CPU cycles - the cost of setting and clearing memory map bits:

API          Normal   Protected   Increase
ker_malloc   363      622         82%
ker_free     138      446         238%
change_own   55       270         418%
Control Flow Checks and Transfers

OPERATION           CYCLES
cross_domain_call   38 (9x)
cross_domain_ret    38 (9x)
ker_ret_check       14
ker_icall_check     8

• Inline checks occur most frequently
• ker_ret_check: push and pop of return address
• Module verification ~ 175 ms for a 2600 byte module
Impact on Module Size

Code size increase due to inline checks
• Can be reduced if performance is not critical
  • True for most sensor network apps
• Increased cost for module distribution
No change in data memory used

Name           Normal   Protected   Inc.   # Instr.
Blink          246 B    342 B       39%    10
Surge          688 B    1170 B      70%    41
Tree Routing   2616 B   4584 B      75%    136
Time Sync      1070 B   1992 B      86%    53
Performance Impact

Experiment setup
• 3-hop linear network simulated in Avrora
• Simulation executed for 30 minutes
• Tree Routing and Surge modules inserted into the network
• Data pkts. transmitted every 4 seconds
• Control packets transmitted every 20 seconds

1.7% increase in relative CPU utilization
• Absolute increase in CPU - 8.41% to 8.56%
• 164 run-time checks introduced
• Checks executed ~20000 times
• Can be reduced by introducing fewer checks
Deployment Experience

• Run-time checker signaled a violation in Surge
• Offending source code in Surge:

    hdr_size = SOS_CALL(s->get_hdr_size, proto);
    s->smsg = (SurgeMsg*)(pkt + hdr_size);
    s->smsg->type = SURGE_TYPE_SENSORREADING;

• SOS_CALL fails in some conditions and returns -1
• Unchecked return value used as buffer offset
• Protection mechanism prevents such corruption
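A defensive version of that Surge code would check the SOS_CALL result before using it as an offset. A sketch with stubbed-in types (the SurgeMsg struct, constant value, and function name here are stand-ins, not the real Surge module code):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hardened version of the Surge snippet: bail out when
 * the header-size call fails instead of indexing the packet with -1. */
typedef struct { uint8_t type; } SurgeMsg;
enum { SURGE_TYPE_SENSORREADING = 0 };

int surge_set_type(uint8_t *pkt, int8_t hdr_size, SurgeMsg **smsg) {
    if (hdr_size < 0) {          /* SOS_CALL failure path */
        *smsg = NULL;
        return -1;               /* report error, no memory touched */
    }
    *smsg = (SurgeMsg *)(pkt + hdr_size);
    (*smsg)->type = SURGE_TYPE_SENSORREADING;
    return 0;
}
```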
Conclusion

Software-based memory protection
• Enabling technology for reliable software systems

Memory map and cross domain calls
• Building blocks for software-based fault isolation
• Low resource utilization
• Minimal performance overhead

Widely applicable
• SOS kernel with dynamic modules
• TinyOS components using dynamic memory
• Natively implemented ASVM instructions
Future Work

Explore CPU architecture extensions
• Prototype AVR implementation in progress

Static analysis of binary
• Reduce number of inline checks
• Improve overall system performance
• Increases complexity of verifier
Thank You!
http://nesl.ee.ucla.edu/projects/sos-1.x
Ram Kumar
CENS Seminar, October 20, 2006
SOS Memory Layout

[Figure: data memory (0x0000 - 0x0200): static kernel state, dynamically allocated heap, run-time stack]

Static kernel state
• Accessed only by kernel

Heap
• Dynamically allocated
• Shared by kernel and applications

Stack
• Shared by kernel and applications
Reliable Sensor Networks

Reliability is a broad and challenging goal

Data integrity
• How do we trust data from our sensors?

Network integrity
• How to make the network resilient to failures?

System integrity
• How to develop robust software for sensors?