


SMART: Secure and Minimal Architecture for (Establishing a Dynamic) Root of Trust

Karim El Defrawy, UC Irvine ([email protected])

Aurélien Francillon, Institute EURECOM ([email protected])

Daniele Perito, INRIA ([email protected])

Gene Tsudik, UC Irvine ([email protected])

Abstract

Remote attestation is the process of securely verifying internal state of a remote hardware platform. It can be achieved either statically (at boot time) or dynamically, at run-time, in order to establish a dynamic root of trust. The latter allows full isolation of a code region from pre-existing software (including the operating system) and guarantees untampered execution of this code. Despite the untrusted state of the overall platform, a dynamic root of trust facilitates execution of critical code. Prior software-based techniques lack concrete security guarantees, while hardware-based approaches involve security co-processors that are too costly for low-end embedded devices.

In this paper, we develop a new primitive (called SMART) based on hardware-software co-design. SMART is a simple, efficient and secure approach for establishing a dynamic root of trust in a remote embedded device. We focus on low-end micro-controller units (MCUs) that lack specialized memory management or protection features. SMART requires minimal changes to existing MCUs (while providing concrete security guarantees) and assumes few restrictions on adversarial capabilities. We demonstrate both practicality and feasibility of SMART by implementing it, via hardware modifications, on two common MCU platforms: AVR and MSP430. Results show that SMART implementations require only a few changes to memory bus access logic. We also synthesize both implementations to a 180nm ASIC process to confirm their small impact on MCU size and overall cost.

1 Introduction

Verifying the internal state of a remote embedded device is an important task in many scenarios and application settings, e.g., smart meters, implantable medical devices (IMDs) and actuators in industrial control systems that perform critical functions and operate unattended for long periods of time. In addition, increasing adoption of wireless networking prompts concerns about remote exploits of such devices. The recent Stuxnet worm [15] demonstrated the magnitude of damage from attacks on embedded devices. Stuxnet infected Programmable Logic Controllers (PLCs) used in industrial control systems and caused considerable physical damage by modifying their control software. Embedded devices are also sometimes placed in physically inaccessible locations, e.g., IMDs, military or industrial sensors and actuators. In such settings, it is hard to physically connect to an external interface to verify the state of a target device.

The discussion above motivates the need for attestation techniques to detect, and possibly disable, malicious code prior to performing critical operations. Current attestation methods fall somewhat short of meeting requirements for a wide range of embedded devices. They generally fall into two extremes on the design spectrum: hardware- and software-based techniques. The former rely on specialized hardware (e.g., a TPM [6]) or on the availability of special CPU instructions [24] to perform attestation, either statically (at boot time) or dynamically, during normal run-time operation. These techniques have attracted a lot of attention from both the research community and industry. They are best suited for higher-end devices, such as laptops and smart-phones. Experimental devices that include a full TPM as a separate chip have been constructed [23]. However, this approach cannot provide a dynamic root of trust and is expensive 1 for low-end devices.

Several software-based attestation methods have been proposed for commodity [37] and embedded devices [38, 36, 25, 35, 47]. However, they generally offer uncertain security guarantees [41] and some have been subject to attacks [8]. Furthermore, all current software-based techniques involve restrictive assumptions on adversarial capabilities that make them unsuitable for many realistic applications. In particular, they typically assume "adversarial silence", meaning that, during each attestation process, only the intended prover (the device being attested) communicates with the verifier (the entity that performs attestation). In other words, even though the prover might have malware installed, it is not aided, or impersonated, by any external party during attestation. The same assumption is sometimes referred to as "no collusion". Any attestation technique that makes this assumption is limited to close-range (one-hop) communication between the prover and the verifier, and its security often relies on strict round-trip time measurements. It is easy to see that the adversarial-silence assumption is necessary as long as no secret information can be maintained on the prover. Maintaining secrets, however, requires secure storage, which, in turn, prompts the need for hardware support.

1 The cost of a TPM chip is close to that of a low-end MCU.

Software-based attestation also assumes that the adversary impersonating (or colluding with) the prover must use the same hardware as the genuine prover. While this assumption might hold in a few specific settings, it is unrealistic for many applications.

Finally, there are some proprietary techniques for embedded processors currently on the market. For example, ARM TrustZone [3] provides an additional, secure, processor mode of execution. It includes a new set of shadow registers and a few KBytes of on-chip SRAM 2, and allows the operating system to control access to peripherals. Though TrustZone inherently relies on secure boot, it can be used to provide a dynamic root of trust [26, 12]. However, it targets more powerful devices than those considered in this paper 3.

The problem with a static root of trust is that, in general, it does not offer any guarantees about the current state of a device, since adversarial exploits can occur post-boot. Even worse, a static root of trust (e.g., TPM v1.1 or Secure Boot) is unsuitable for detecting a powerful attack class based on Return-Oriented Programming (ROP) [39]. ROP allows execution of an arbitrary return-oriented program by merely manipulating the return addresses on the stack, i.e., without changing code. In order to detect such attacks, techniques that do not rely on the code isolation provided by a dynamic root of trust have to check areas of memory that are highly volatile, e.g., the stack and heap.

2 In contrast with our target MCUs, most devices with TrustZone do not include RAM and Flash on chip.

3 TrustZone is available on high-end ARM processors (ARM11 and Cortex-A series). However, to the best of our knowledge, it is unavailable for low-end ARM devices that correspond to the MCUs we focus on, e.g., ARM Cortex-M1. Low-end ARM cores with security extensions are known as SecureCore; however, no detailed information is publicly available about them [4].

1.1 Roadmap

In this paper, we stay clear of both efficient-but-limited software-based techniques and heavy-weight TPM-based approaches to attestation. We focus on the design space area that has not been previously explored, by utilizing a software/hardware co-design approach to architect an attestation mechanism with minimal hardware requirements.

Our main design guideline is to carefully justify each component necessary to achieve secure establishment of a dynamic root of trust in a remote embedded device. Following this guideline leads us to an approach, called SMART: Secure and Minimal Architecture for (Establishing a Dynamic) Root of Trust, that entails minimal hardware modifications to current embedded MCUs. To the best of our knowledge, this represents the first minimal hardware solution for establishing a dynamic root of trust in low-end embedded devices. We implemented it on two widely available low-cost MCU platforms, Atmel AVR and Texas Instruments MSP430, by modifying open-source implementations of these MCUs in VHDL and Verilog (both obtained from the OpenCores Project [31]).

Organization: Section 2 discusses our goals and building blocks. Then, Section 3 presents SMART details and features. Security issues are addressed in Section 4. Next, Section 5 presents several concrete protocols utilizing SMART as a primitive. Implementation details are discussed in Section 6 and related work in Section 7.

2 Goals and Design Elements

The main result of this paper is the development of a new primitive called SMART: Secure and Minimal Architecture for (Establishing a Dynamic) Root of Trust. SMART is executed by the prover, PRV, and, in doing so, attests a region of code and jumps to it. A proof of execution is computed and sent to the verifier, VRF. SMART guarantees that attested code is executed even if the entire prover system is compromised (except SMART ROM code). In the rest of this section, we describe our security objectives, adversarial assumptions and SMART's main building blocks.

2.1 Security Objectives

SMART has three security objectives, based upon successful completion of the attestation protocol:


• Prover Authentication: VRF obtains entity authentication of PRV.

• External Verification: VRF is assured that memory segment [a, b] on PRV contains the expected content.

• Guaranteed Execution: VRF is assured that code at location x was executed by PRV.

2.2 Adversarial Assumptions

We assume that the adversary, ADV, has complete control over the software state, code and data of PRV before and after SMART execution. In particular, ADV can modify any writable code on PRV and learn any secret that is not explicitly protected by the MCU on PRV. Furthermore, ADV has complete control over the communication channel and, during the protocol, can use multiple colluding devices in order to pass or subvert attestation.

We also assume that ADV does not perform hardware attacks on PRV. Specifically, it does not alter code stored in ROM, induce hardware faults or retrieve K using external side-channels. Likewise, ADV has no means of interrupting execution of ROM-resident code on PRV.

Protection against hardware-based attacks could be added by encasing the MCU in tamper-resistant coating and employing standard techniques to prevent side-channel key leakage. Since our approach is confined to the MCU, employing such techniques is quite natural. Furthermore, hardware attacks could be mitigated using well-known tamper-resistance techniques, such as anomaly detection, internal power regulators and additional metal layers or meshes for tamper detection.

Some processor peripherals might be capable of modifying memory without interaction with the MCU core, e.g., a DMA engine. We assume that such peripherals can be disabled during SMART execution.

Finally, we assume that PRV and VRF share a secret key K. This key can be pre-loaded onto PRV at production time or later. We do not address the details of this procedure.

2.3 Building Blocks

Our design relies on four main components that reside on PRV:

• Attestation Read-Only Memory: a memory region in ROM inside the MCU. The key K can only be accessed from this region.

• Secure Key Storage: a memory region inside the CPU; it can be accessed only from SMART code in ROM.

• MCU Access Controls: controls access to K and prevents non-SMART code from accessing it.

• Reset and Memory Erasure: if any error is reported by the above components, a hardware reset of the MCU is performed. Upon reset, hardware enforces a memory cleanup.

We argue that these four components are both necessary and sufficient for building a dynamic root of trust in a low-end embedded system. We detail their purpose in the next section.

2.4 SMART Overview

As discussed above, the central goal of SMART is guaranteed execution of a piece of code on the prover (PRV), demonstrated to an external verifier (VRF), even when the prover is fully compromised. SMART relies on a challenge-based protocol, initiated by VRF, that leverages special hardware features of PRV. At the start of SMART (Figure 1), VRF sends several parameters to PRV: attestation region boundaries a and b; an address x, to which PRV optionally passes control after attestation if xflag is set; and a nonce n to prevent replay attacks. A ROM-resident code segment on PRV computes a cryptographic checksum C of the region [a, b] in PRV's memory (using nonce n) and then passes control to x. After execution of the code starting at x, PRV returns C to VRF. The latter verifies the correctness of C by re-computing it using the same parameters and K. We refer to the ROM-resident code as RC and to the code optionally executed thereafter as HC. The sequence of operations of SMART is shown in Figure 1 and the corresponding pseudo-code of RC is illustrated in Algorithm 1.
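The challenge-response flow just described can be modeled end-to-end in a few lines. The sketch below is illustrative only: it abstracts SMART's ROM routine as a Python function, uses a toy one-byte encoding of the parameters, and the key and memory contents are placeholder values, not anything from the paper.

```python
import hashlib
import hmac
import os

def smart_checksum(key, a, b, x, xflag, n, mem):
    """Model of checksum C: HMAC-SHA-256 over the request
    parameters followed by the contents of memory region [a, b]."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    mac.update(bytes([a, b, x, xflag]))  # toy 1-byte parameter encoding
    mac.update(n)                        # verifier's nonce (freshness)
    mac.update(mem[a:b + 1])             # attested region, inclusive bounds
    return mac.digest()

# --- Verifier side: choose parameters and a fresh nonce ---
K = b"\x00" * 16                         # pre-shared key (placeholder value)
expected_mem = bytes(range(64))          # memory image VRF expects on PRV
a, b, x, xflag = 0, 31, 0, 1
n = os.urandom(16)                       # nonce prevents replay attacks

# --- Prover side (honest): compute C over its actual memory ---
prover_mem = bytes(range(64))
C = smart_checksum(K, a, b, x, xflag, n, prover_mem)

# --- Verifier re-computes C and compares in constant time ---
C_ref = smart_checksum(K, a, b, x, xflag, n, expected_mem)
accept = hmac.compare_digest(C, C_ref)
```

A prover whose attested region was modified by malware produces a different C, which the verifier rejects.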

We note that a non-keyed function, such as a cryptographic hash (e.g., SHA-256), is unsuitable for attestation. This is because, without a secret key, anyone can compute a hash of any input and fake a reply by PRV. In particular, malware that infected PRV can do so. Therefore, our cryptographic checksum is implemented as an HMAC keyed with K, which resides in secure storage on PRV's MCU 4. Usage of, and access to, K is restricted by the MCU such that only (trusted and immutable) RC is allowed to use it. For its part, RC only uses K to compute the HMAC and then passes control to HC. RC is instrumented using both static and dynamic analysis tools to prevent accidental leakage of K.
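This argument can be made concrete with Python's standard hmac module. The region bytes and key values below are hypothetical, chosen only to illustrate the difference between an unkeyed hash and a keyed checksum:

```python
import hashlib
import hmac

nonce = b"\x01" * 16
region = b"expected firmware image"  # hypothetical attested content

# Unkeyed hash: malware holding a saved copy of the clean image can
# recompute the exact same digest, so the reply proves nothing.
honest = hashlib.sha256(nonce + region).digest()
forged = hashlib.sha256(nonce + region).digest()  # attacker needs no secret

# Keyed checksum: without K (readable only by ROM code), the correct
# tag cannot be produced; a wrong key yields an unrelated tag.
K = b"\x2a" * 16
tag = hmac.new(K, nonce + region, hashlib.sha256).digest()
wrong = hmac.new(b"\x00" * 16, nonce + region, hashlib.sha256).digest()
```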

In addition, when xflag is set, interrupts remain disabled after execution of RC. This is to ensure that HC is subsequently executed, and to prevent time-of-check-to-time-of-use (TOCTTOU) attacks. A TOCTTOU attack could entail installing a malicious interrupt handler and scheduling an interrupt (e.g., a timer) to occur during the first instructions of HC. Such an interrupt handler could allow reading or writing memory between executions of RC and HC. Furthermore, hardware modifications to the MCU are added to avoid code reuse attacks.

4 If minimality was not a primary goal, public key cryptography could be used to improve key management.

Figure 1: Overview of Protocol Using SMART. VRF generates a nonce n and sends (n, a, b, x, xflag) to PRV. PRV computes C = SMART(n, a, b, x, xflag, −, −) and, if xflag == True, executes Exec(x). PRV returns C; if C is correct, VRF accepts, else it rejects.

Figure 2: SMART Operation Overview. (1) The user application starts SMART; (2) code attestation of the region [a, b] (with a = x) is performed using the protected key K; (3) the HMAC result is written to global memory, at a predefined location; (4) HC, the code to attest, is executed (optional).

3 SMART in Detail

This section describes, in detail, the features and components of SMART.

3.1 Attestation ROM

ROM is a standard feature in many commodity MCUs. Generally, it incurs very little overhead in the design and construction of the MCU, since it constitutes a cheap form of storage. Typically, ROM is hardwired during manufacturing, rendering it immutable. What makes our attestation ROM special is its exclusive, hardware-enforced ability to access K.

ROM Code. RC must guarantee the following properties:

1. Key Isolation: K must not be leaked from ROM.

2. Memory Safety: software bugs should not allow temporary memory exposure or K leakage.

3. Atomic Execution: ROM code must be executed atomically and cannot be invoked partially.

Property (3) is guaranteed by the MCU, as discussed in Section 3.3 below. Properties (1) and (2) are guaranteed by using two code instrumentation tools: CQUAL [19] and Deputy [11].

As shown in Algorithm 1, SMART computes an HMAC of a particular memory segment and then jumps, without being interrupted, to a verifier-specified address within that segment. The implementation consists of approximately 500 lines of C code, which makes checking its correctness both feasible and relatively easy.

Key Secrecy. Upon termination, SMART passes control to the untrusted portion of PRV, where malicious code can sift through memory and search for traces of K or intermediate states used in the HMAC computation. This could lead to disclosure of K. For this reason, we instrumented SMART code with CQUAL, a tool that detects information leakage in C programs. Specifically, K is marked with a SECRET type. CQUAL propagates this type to each variable that is computed with any involvement of any other variable of type SECRET. Each function is equipped with a check for leakage of any SECRET
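The effect of that instrumentation can be sketched as follows. This is a Python analogue (the real RC is C, and the function and variable names here are invented for illustration): a key-tainted temporary is zeroed in place before the function returns, so only the intended output survives.

```python
def derive(key: bytes, block: bytes):
    """Mix the key into a block, then wipe the key-tainted temporary.

    Mirrors the CQUAL-guided rule: every SECRET-typed variable is
    zeroed at the end of the function; only the output may survive.
    """
    tmp = bytearray(k ^ b for k, b in zip(key, block))  # SECRET-tainted
    out = bytes(tmp)              # the only value allowed to leave
    for i in range(len(tmp)):     # zero the temporary in place
        tmp[i] = 0
    return out, tmp               # tmp returned here only for inspection

out, leftover = derive(b"\x0f" * 4, b"\xf0" * 4)
```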


Algorithm 1: SMART code in ROM.

input:  a, b    start/end addresses for attestation
        x       address to jump to after attestation
        xflag   jump or not?
        n       nonce sent by verifier
        out     output address where to store checksum
        in      (optional) input parameter
output: HMAC output

begin
    /* Disable interrupts during SMART code execution */
    DisableIRQ();
    /* Attestation key K is unlocked automatically by the MCU */
    InitHmac(K);
    /* Attest all parameters */
    HmacProcess(a || b || x || xflag || n || in || out);
    /* Attest memory region [a, b] */
    for i ∈ [a, b] do
        HmacProcess(Mem[i]);
    end
    C ← FinishHmac();
    /* Store HMAC result in global variable */
    Copy(*out, C);
    /* Erase temporary variables */
    ResetMemory();
    if xflag = True then
        /* If execute flag set, exec function at address x */
        Call(x, in);
    else
        /* Restore interrupt status as before SMART execution */
        RestoreIRQ();
    end
end

variable. CQUAL instrumentation is performed off-line; it does not incur any overhead during operation of SMART. The end-result is simple: each variable marked SECRET by CQUAL is zeroed out at the end of each function. The only variables not erased are the outputs of each function. Also, the memory location of K is no longer accessible upon completion of SMART.

Memory Safety. Key isolation alone does not prevent key leakage, since our code could contain vulnerabilities that allow ADV to retrieve K by running SMART on malicious (or malformed) inputs. Fortunately, SMART involves only around 500 lines of code. This relatively small size allows manual inspection for memory corruption bugs. We also enhance manual inspection using Deputy, a C compiler based on GCC that provides an annotation language for describing memory boundaries in C. For example, a C array can be augmented with information about its size. The compiler adds instructions to check all memory accesses to the array and detects memory corruptions. Once SMART code is reinforced with Deputy, whenever a memory corruption is detected, a special reset is performed by Deputy instrumentation code. As for other error conditions that could cause a reset, we deal with them by making sure that, at each reset, all memory (stack, heap, and registers) is erased.

Furthermore, the stack and out pointers might be controlled by ADV when SMART code is called. If invalid values are provided 5, memory corruption may occur during SMART execution. This could be exploited by ADV to abuse SMART, e.g., to recover bits of K or skip execution of important code. Therefore, both the stack pointer and the out pointer are checked at the beginning of SMART code.

Side-Channels. Another avenue for ADV to extract K is via side-channel attacks. Since hardware side-channels are out of scope of this work, we focus on software side-channels, i.e., malware on PRV trying to learn K by observing SMART execution. Low-end MCUs (such as MSP430 or AVR) do not have caches that could be used for timing attacks based on hits and misses. Also, differences in execution time due to bus contention are data-independent and cannot leak K 6. Finally, a software-only timing side-channel attack against the HMAC-SHA used in SMART is not viable. The code used for HMAC computation does not have conditional branching instructions, resulting in constant execution time. Moreover, to the best of our knowledge, no timing attacks have been reported against HMAC-SHA.

3.2 Secure Key Storage

The next question is where to store the key K used for computing the HMAC. Clearly, it cannot be stored in normal memory, since malware could easily access it and pass attestation. We use a special hardware-controlled memory location to house a single symmetric key, K. This storage must be immune to software attacks. Recall that hardware attacks are out of scope of this work. We also note that hardware attacks require direct physical access, or at least very close physical proximity, to the target device. This is improbable in many access-restricted settings, e.g., manufacturing plants, utility stations, fabrication labs or implantable medical devices (IMDs).

5 For example, a stack pointer that points to an invalid memory region, such as I/O register space, or an out pointer pointing to the stack itself, leading to corruption of the stack region used by SMART when the HMAC result is written to it.

6 There is no bus contention on AVR, due to its Harvard architecture. On MSP430, we manually verified rare cases of bus contention. On other processors, wait cycles can be added to address this issue (only needed when executing SMART code).

Although details of K initialization are not discussed in this paper, there are at least two viable approaches. In the first, K is hard-coded at production time and never changed again, i.e., in addition to being access-restricted, the K storage location is read-only. Alternatively, there could be a secure means of modifying, but not reading, K by an authorized party (e.g., the verifier) that would rely on a special authenticated channel 7.

3.3 MCU Access Controls

Simplicity and minimal cost are among the primary objectives of SMART. Hardware modifications are limited to memory access checking and the availability of ROM. We now describe the hardware modifications necessary to enforce key protection and to restrict execution of RC.

Key Access Controls. To enforce the secrecy of K, we need to ensure that it can be accessed only when the program counter (PC) is in the RC memory region. One simple method to enforce this is to connect the data bus to key memory when the program counter is in ROM range and the data address is pointing to the K address range. The internal reset signal is triggered if key memory is accessed while the program counter is not in ROM range. Figure 3 shows how access to K is controlled in the MCU.
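The check described above is purely combinational and can be modeled directly. The address ranges below are invented placeholders, not the layouts of the actual AVR or MSP430 ports:

```python
# Hypothetical address map (illustrative only).
ROM_MIN, ROM_MAX = 0xF000, 0xFFFF    # RC (SMART ROM code)
KEY_LOW, KEY_HIGH = 0x1F00, 0x1F0F   # protected key K

def data_access(pc: int, addr: int) -> str:
    """Model of the Figure 3 logic: route a data access to 'key'
    memory or ordinary 'sram', or trigger an internal 'reset'."""
    pc_in_rom = ROM_MIN <= pc <= ROM_MAX
    addr_in_key = KEY_LOW <= addr <= KEY_HIGH
    if addr_in_key and not pc_in_rom:
        return "reset"   # untrusted code touched K: reset the MCU
    if addr_in_key:
        return "key"     # data bus connected to key memory
    return "sram"        # ordinary access
```

In hardware this is two range comparators, a few gates and a multiplexer on the data bus, rather than software.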

ROM Execution Control. Since RC is authorized to access K, its usage must be controlled to prevent recovery by malware. For example, ADV can attempt to selectively execute portions of RC by using code reuse techniques (e.g., return-to-libc [42], borrowed code chunks [27] or Return-Oriented Programming [39, 7, 9]). To prevent such attacks, we provide additional access controls upon RC entry and exit. The program counter is only allowed to move into ROM starting at SMART's initial address. Similarly, the program counter can leave ROM only from the last SMART address. These controls ensure that RC cannot be invoked partially: once any attempt to do otherwise is detected, the MCU is immediately reset. This requires RC to be compiled such that any valid termination of SMART execution ends with a return from its last instruction address.
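These entry/exit rules amount to a check on consecutive program-counter values. The following is a sketch with made-up addresses; in the actual design this is enforced by hardware comparators, not software:

```python
ROM_MIN, ROM_MAX = 0xF000, 0xFFFF  # hypothetical RC address range

def pc_transition_ok(prev_pc: int, pc: int) -> bool:
    """Allow ROM entry only at SMART's first address and exit only
    from its last; any other boundary crossing resets the MCU."""
    prev_in = ROM_MIN <= prev_pc <= ROM_MAX
    cur_in = ROM_MIN <= pc <= ROM_MAX
    if cur_in and not prev_in:
        return pc == ROM_MIN       # jumping into the middle of RC: denied
    if prev_in and not cur_in:
        return prev_pc == ROM_MAX  # leaving RC anywhere else: denied
    return True                    # transitions wholly inside or outside ROM
```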

7 This topic is deferred to future work.

3.4 Cleaning Memory on Reset

When an invalid operation takes place, such as an attempt to violate SMART memory access controls, a hardware exception occurs, leading to an immediate MCU reset. However, if SMART code does not terminate properly, it cannot clean up its working memory, and keying material could remain in memory after reset. (The situation is similar if a power loss occurs.) This technique was used in several attacks on MCUs to recover keying material or store information across resets [20, 21]. Therefore, it is mandatory to perform memory cleanup upon each reset. In SMART, memory cleanup is performed by processor logic triggered upon every boot or reset.

We note that the aforementioned phenomenon is similar to cold boot attacks [22], whereby a computer is stopped during execution and its memory is removed in order to recover keying material. However, since a typical MCU features processor and memory in a single package, the latter cannot be accessed directly. If debugging interfaces are permanently deactivated and memory is freed upon each reset, only hardware attacks (which are out of scope of SMART) would allow recovery of parts of memory.
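A toy model of this reset path (class, names and sizes are all invented for illustration): even if SMART is interrupted mid-computation, the wipe runs before any application code executes at the next boot.

```python
class ToyMCU:
    """Illustrative model: boot/reset logic erases all data memory
    before any software runs, so an interrupted SMART run cannot
    leave key-dependent state behind."""

    def __init__(self, ram_size: int = 256):
        self.ram = bytearray(ram_size)

    def interrupted_smart(self, secret: bytes) -> None:
        # Power loss / fault mid-HMAC: intermediate state stays in RAM.
        self.ram[0:len(secret)] = secret

    def reset(self) -> None:
        for i in range(len(self.ram)):  # hardware-triggered cleanup
            self.ram[i] = 0

mcu = ToyMCU()
mcu.interrupted_smart(b"\xaa" * 16)
dirty = any(mcu.ram)   # key-dependent residue present before reset
mcu.reset()
clean = not any(mcu.ram)
```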

4 Security Analysis

Our present security argument is informal. A more substantial argument (or a proof) would require formal analysis and verification of SMART code, which is planned as part of future work. The security argument is based on the following assertions:

A1 Cryptographic checksum C computed by PRV cannot be forged. Since C is the result of a secure HMAC function (e.g., HMAC-SHA), we assume that, for any ADV external to PRV that observes a polynomial number of such checksums, finding HMAC collisions and/or learning bits of the attestation key is infeasible.

A2 Physical and hardware-based attacks on PRV are beyond ADV's capabilities.

A3 Attestation key K can be accessed only from within ROM-resident SMART code. This is guaranteed by MCU-based access controls.

A4 SMART code cannot be modified, since it resides in ROM.

A5 SMART code can only be invoked at its beginning. The hardware checks that, except for the very first instruction in RC, if the program counter is in RC range, then the previously executed instruction must also be in ROM.

Figure 3: Schematic view of access control for the attestation key. Comparators check whether the program counter is within ROM range (ROM_MIN to ROM_MAX) and whether the data address is within the key range (KEY_LOW to KEY_HIGH); a multiplexer connects the data bus to the key memory bus only when both conditions hold, and the internal reset signal is asserted when key memory is accessed from outside ROM.

A6 RC execution can only terminate at the very last instruction address in RC. The hardware checks that, except for the very last instruction in RC, if the program counter is not in RC range, then the previously executed instruction must also be outside RC range.

A7 Upon each invocation of SMART, all interrupts are disabled 8 and remain so if, upon completion of SMART, control is passed to HC.

A8 K cannot be extracted by any software-based ADV internal to PRV. Upon completion of SMART execution, K is no longer accessible. Also, all memory used by SMART code is securely erased. The only value based on (statistically dependent on) K is the output C.

A9 For each invocation, SMART computes C based on the contents of the requested memory segment [a, b]. Although C is guaranteed to be computed correctly, it may or may not result in PRV passing attestation, since [a, b] might have been previously corrupted by ADV.

8 From the security perspective, executing SMART with interrupts disabled is redundant with respect to assertions A5 and A6. However, this (assertion A7) prevents a reset if an interrupt occurs during SMART execution, thus improving reliability.

A10 Any erroneous state (e.g., violation of assertions A3, A5, A6) leads to a hardware reset. Upon reset, all data memory and registers are erased, which prevents K leakage. This boot-time memory erasure also guarantees that, if power loss occurs during SMART execution, no information about K is retained in memory.

A11 Observing normal execution of SMART should leak no information about K. Therefore, SMART execution time and the amount of memory used must not be key-dependent.

Key Protection Guarantee. Assertion A3 implies that K is not directly available to untrusted software. Assertions A5 and A6 guarantee that code-reuse attacks to recover K are impossible. A10 implies that, when an error condition occurs, execution is stopped and no information about K is leaked. A11 guarantees that side channels cannot be used by untrusted software executing on the MCU to gather information about K. Other side channels commonly used in key recovery attacks rely on power consumption analysis and electromagnetic emanations [34]. However, these are hardware/physical attacks 9.
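The program-counter transition rule behind assertions A5 and A6 can be modeled in software as a predicate over consecutive PC values (an illustrative model with hypothetical names, not the actual hardware check): control may enter the RC region only at its first instruction and may leave it only from its last one.

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of the A5/A6 checks: returns true if the PC transition
 * prev_pc -> pc is legal with respect to the RC region [rc_min, rc_max]. */
static bool transition_ok(uint16_t prev_pc, uint16_t pc,
                          uint16_t rc_min, uint16_t rc_max)
{
    bool was_in = (prev_pc >= rc_min) && (prev_pc <= rc_max);
    bool is_in  = (pc >= rc_min) && (pc <= rc_max);
    if (!was_in && is_in) return pc == rc_min;      /* A5: enter at start only */
    if (was_in && !is_in) return prev_pc == rc_max; /* A6: exit from end only  */
    return true;                                    /* staying in or out is fine */
}
```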

9 We note that these side channels might be exploitable in very specific cases by a local attacker, e.g., if hardware to perform such measurements is available as a peripheral of the device, e.g., a coulomb counter that measures remaining battery power. This could, in theory, provide information on power consumption of SMART code. We assume that such features are not available on the device.


Given the above assertions and the key protection guarantee, and assuming that VRF receives and successfully verifies C, we argue that the postulated security objectives are satisfied:

Prover Authentication. If C is correctly computed and n is a random nonce of sufficient bit-length, VRF concludes that C was computed by PRV within the interval of time between the initial request message and the receipt of C. This yields fresh authentication of PRV.

External Verification. Assertions A1-A8 imply that C was computed by SMART code on PRV. Therefore, memory region [a, b] on PRV contained code or data expected by VRF.

Guaranteed Execution. Assertion A6 implies that, immediately after computing C, PRV executes code at x, if xflag is set. If C is deemed correct by VRF and x = a, VRF is assured that the expected code at location a was executed.

5 Other Uses of SMART

In this section we describe several techniques that can be implemented using SMART as a building block.

Algorithm 2: SMART usage to attest a memory range.

input : n — nonce sent by VRF
        a — start address to attest
        b — end address to attest
        H — HMAC result (global variable)
output: HMAC output
begin
    SMART(a, b, ∅, False, n, &H, ∅);
    Send(H);
end

5.1 Remote Attestation of Parts of Memory

The most natural usage of SMART is to attest a memory segment and verify that it contains the data (or code) that it is expected to contain. This can be achieved by invoking SMART with the start and end addresses of the memory range to be attested, as shown in Algorithm 2.

5.2 Remote Proof of Reset

Some applications need to ensure that a device has been reset successfully. This can easily be done with SMART, as shown in Algorithm 3. The HMAC guarantees that the reset function (R) has been verified and executed. Here, we assume that the output of HMAC is not erased during reset, e.g., it is stored in Flash or EEPROM.

5.3 Attested Reading of Measurements

Some applications need to make sure that values read from a peripheral device cannot be forged by malware possibly present on that device. For example, large-scale incorrect reports of current electricity consumption by smart meters might lead to power outages. Or, an IMD that returns incorrect values when queried by a physician might result in an incorrect prescription issued to a patient, with potentially catastrophic consequences. Predictably, attestation of measurements should provide: (1) freshness of the values read, (2) proof of reading the values from the peripheral and (3) integrity of the values.

Freshness is provided via a nonce, present by default in the SMART invocation. Proof of reading the value is provided by calling SMART to attest and run HC, which reads the values. Finally, HC calls SMART a second time, as a normal HMAC function, to protect integrity of the read values. Algorithm 4 presents this primitive.

Although this approach, using hash chains, bears some resemblance to the extend operation of a TPM, there are some important differences: HMAC attests each output of SMART with the secret key of the device. This allows for a simpler design. Besides integrity, HMAC correctness confirms that it was produced by SMART. This is fundamentally different from the extend operation performed by a TPM, since integrity of the PCR is enforced by hardware.

We note that the Send function, which sends the HMAC to VRF, is not guaranteed to be executed, since it is not verified by SMART. However, this does not impact validity of the HMAC or the obtained measurements.

5.4 Further Uses and Extensions

A primitive providing a dynamic root of trust, such as SMART, can be used for many other purposes. For example, if certain known malware propagates over a network of embedded devices, VRF can introduce detection or disinfection code. This code could be launched by SMART to perform a remote search for known malicious patterns in code or data. Using SMART, validity of the returned HMAC would guarantee that detection code was executed uninterrupted and that the detection result is genuine.

SMART can also facilitate mutual authentication and shared key generation between two (or more) previously paired devices. In this case, each device acts as both a PRV and a VRF. SMART guarantees that, even in the


Algorithm 3: SMART usage to securely reset a device.

input : n — nonce sent by VRF
        R — reset function address
        |R| — the reset function size
        H — HMAC result (global variable)
output: HMAC output
begin
    SMART(R, R + |R|, R, True, n, &H, ∅);
end
// Reset function: R()
begin
    ShutdownDevices();
    EraseAllMemoryButH();
    PC = 0;
end
// The value H will be returned to VRF
// after boot is completed.

event of full software compromise of either device, a device's long-term attestation key cannot be modified or disclosed. Consequently, the adversary cannot clone a genuine device or eavesdrop on communication between two devices. One possible application example is in car key fobs. Such a fob, paired with the car's on-board Embedded Compute Unit (ECU), could share a key protected by SMART.

Fine-grained access control to sensitive peripherals can be limited to HC only, with simple hardware extensions to SMART. For example, HC code can be provided in a bundle with its own HMAC and a bit field that describes authorization to access specific memory regions corresponding to memory-mapped peripherals. Access to these memory regions would, in turn, be authorized only if the HMAC is validated. This is useful in many applications, e.g., pacemakers, where it could control delivery of pacing impulses.
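As a sketch of how such a bit field might be checked (peripheral names and layout are illustrative, not from the paper), each bit of the bundle's field authorizes one memory-mapped peripheral region, and the grant is honored only after the bundle's HMAC has been validated:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-peripheral authorization bits carried in an HC bundle. */
enum {
    PERIPH_ADC   = 1u << 0,
    PERIPH_UART  = 1u << 1,
    PERIPH_PACER = 1u << 2
};

/* Access is granted only if the bundle's HMAC validated AND the
 * corresponding authorization bit is set in the bundle's bit field. */
static bool hc_may_access(uint16_t auth_bits, uint16_t periph, bool hmac_ok)
{
    return hmac_ok && (auth_bits & periph) != 0;
}
```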

6 Implementation

To assess feasibility, practicality and impact of SMART, we implemented it on two low-end commodity MCU platforms. We believe that this is the best way to understand its benefits and limitations, as well as to evaluate the impact of required MCU modifications. We chose to base our implementation on two fully open-source clones of widely used off-the-shelf MCUs: Atmel AVR and Texas Instruments MSP430. These processors share many features. They both have a limited memory address space

Algorithm 4: SMART usage to attest a measurement, e.g., a reading from a peripheral accessed via memory-mapped I/O.

input : n — nonce sent by VRF
        in — address to read from device
        R — reading function address
        |R| — the reading function size
        H1 — first HMAC (global variable)
        H2 — second HMAC (global variable)
output: HMAC output
begin
    SMART(R, R + |R|, R, True, n, &H1, in);
    /* Function R will be called by SMART code */
    Send(V, H2);
end
// Reading function: R(in)
begin
    V ← ReadValueFromHW(in);
    tmp = V || H1;
    SMART(&tmp, &tmp + sizeof(tmp), 0, False, n, &H2, ∅);
    RestoreIRQ();
end

with 16-bit addresses. Common memory sizes in both devices are between 2 and 16 KBytes of SRAM used as data memory, and between 16 and 64 KBytes of flash memory used for program storage. Both are designed for low power as well as low cost, and are widely adopted in many application areas, e.g., the automotive industry, utility meters, consumer devices and peripherals.

AVR and MSP430 also have some major architectural differences. Notably, MSP430 is a 16-bit Von Neumann architecture processor with common data and code address spaces, whereas AVR is an 8-bit Harvard architecture processor with separate address spaces for data and program memory. Another prominent difference is in the instruction set: AVR is a RISC architecture with most instructions requiring a single 16-bit word and executing in one clock cycle. In contrast, MSP430 can perform multiple memory accesses within a single instruction. Its instruction execution time ranges from 1 to 6 clock cycles, and instruction length varies from 16 to 48 bits.

These differences between AVR and MSP430 make them good representatives of architectures commonly used in many modern embedded systems.


6.1 Implementation Details

The SMART implementation consists of three main components:

• Processor modifications to add ROM code, key storage and memory access controls.

• A largely architecture-independent SMART routine stored in ROM that implements Algorithm 1. This C code has a small number of architecture-dependent lines.

• One or more software protocol implementations that utilize the SMART primitive.

Implementation on AVR and MSP430 Cores. We first implemented the hardware part of SMART on the AVR processor, an Atmega103 [5] clone from the OpenCores Project [31]. Figure 4a illustrates the execution core and its memory. Parts that had to be modified or added are shaded. They mainly correspond to memory and memory access controls on memory buses.

Next, we implemented SMART on MSP430. We used the open-source OpenMSP430 core from the OpenCores Project [31] and ported SMART to it. The port consists of processor modifications, adaptation of ROM code to the MSP430 architecture, as well as testing and synthesizing the resulting core. These tasks were performed in one week by one developer with moderate Verilog knowledge and no previous experience with the OpenMSP430 core. Processor modifications were limited to implementing and adding modules for ROM code and key memory. In addition, minor modifications and address checks were required in the memory backbone module of the OpenMSP430 core. The memory backbone module performs arbitration of memory accesses. Figure 4b presents the required modifications (shaded) for MSP430.

In both processors, less than 200 lines of code (Table 1) were changed to implement these modifications. In addition to processor modifications, we extended existing regression tests (or test benches) to verify correct implementation of each assertion from Section 4 that is relevant here: A3, A5, A6, and A10.

ROM-Resident Code. This code corresponds to 487 lines of portable C and uses a standard SHA-1 implementation [13]. It requires 4 KBytes of ROM for the AVR and 6 KBytes for the MSP430. It executes in tens to hundreds of milliseconds (see Table 2), depending on the size of HC to attest.

Memory usage in SMART has to be carefully managed. SMART code cannot reserve memory for its own usage. Memory should only be allocated on the stack (i.e., local function variables). It should not attempt to use global

Component               Original Lines   Changed Lines   Ratio
AVR, core (VHDL)        3932             151             3.84%
AVR, tests              2244             760
MSP430, core (Verilog)  4593             182             3.96%
MSP430, tests           17665            1122

Table 1: Changes made (in # of HDL lines of code) in AVR and MSP430 processors, respectively, excluding comments and blank lines.

Data Size   Cycles    Time at 8 MHz
1 KByte     2302281   287 ms
512 Bytes   1281049   160 ms
32 Bytes    387471    48 ms

Table 2: HMAC execution timing.
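The times in Table 2 follow directly from the cycle counts: at 8 MHz, one millisecond is 8,000 cycles. A quick sanity check:

```c
/* Elapsed time is cycles / f_clk; with f_mhz in MHz, dividing by
 * (f_mhz * 1000) cycles-per-millisecond yields milliseconds. */
static double cycles_to_ms(unsigned long cycles, double f_mhz)
{
    return (double)cycles / (f_mhz * 1000.0);
}
```

For instance, 2302281 cycles at 8 MHz is about 287.8 ms, matching the table's 287 ms entry.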

variables or heap-allocated memory. Doing so allows us to avoid relying on untrusted data. Finally, the code is compiled, and linker scripts are used to generate the ROM image suitable for the modified processor.

Hardware Footprint. Simulating the design demonstrates its functional status, while comparing the number of lines of code of its implementation provides insights into the amount of effort required to implement SMART on a given MCU. However, this is insufficient to assess the real impact of SMART in terms of hardware overhead, i.e., the surface increase due to its presence on an actual manufactured device. A single line of HDL can add a simple wire, a register or an entire memory block; each of these would be counted as one line of code, although they have very different impact on synthesized hardware. We synthesized the original and SMART-ified designs for both AVR and MSP430. This provides an initial estimate of the impact of SMART on the final devices. Synthesizing is the act of transforming (or compiling) the design from a high-level description language (Verilog or VHDL) into a set of wires and elementary gates that serve as building blocks of an Application-Specific Integrated Circuit (ASIC).

Synthesis needs to be performed for a specific target hardware. We used the library from the UMC 180nm process [18] and Synopsys Design Compiler [44]. For better performance, RAM and ROM memories were generated with a specific tool [17, 16]. Flash memory numbers were gleaned from publicly available information [10]. Results can vary substantially depending on many parameters, such as: required maximum frequency, latency, placement and routing, and availability of better memory IP.

(a) AVR: Dark gray boxes represent logic added to the processor. Core control signals provide information about internal processor status to memory bus controls.

(b) MSP430: The memory backbone was modified to control access to ROM and K. Since MSP430 is based on a Von Neumann architecture, concurrent accesses can occur to different memory parts (e.g., instruction fetch and data read). In that case, the memory backbone arbitrates bus access and temporarily saves/restores data.

Figure 4: Modifications to AVR and MSP430.

However, our current measurements (in Table 3) show that the impact of SMART on surface area is minimal. Adding SMART to both AVR and MSP430 caused only a 10% increase in their respective surface areas. As mentioned before, most of that added area is due to the ROM housing SMART code. Modifications to the core required only 1K and 0.7K gate equivalents in AVR and MSP430, respectively. This could probably be reduced, as we did not perform optimizations.

6.2 Lessons Learned From Experiments

The first observation from our experiments is that implementing SMART is not a complex task, and porting it to a different architecture is even easier. Second, the additional footprint of our implementation is minimal. The change that impacted chip surface area the most is the additional ROM storing SMART code.

Another important result is that, in both cases, we did not have to change the processor core itself. Instead, we only had to modify the memory access controller.10

Therefore, SMART might also be well-suited to settings

10 The only exception is that, in some cases, we needed information about the execution engine state (e.g., detection of wait states).

where the processor core is available only as a "black box" and provides enough information about accessed memory

Component       Size in kGE: Orig.   with SMART   Ratio
AVR MCU         103                  113          10%
  Core          11.3                 11.6         2.6%
  SRAM 4 kB     26.6                 26.6         0%
  Flash 32 kB   65                   65           0%
  ROM 6 kB      -                    10.3         -
MSP430 MCU      128                  141          10%
  Core          7.6                  8.3          9.2%
  SRAM 10 kB    55.4                 55.4         0%
  Flash 32 kB   65                   65           0%
  ROM 4 kB      -                    12.7         -

Table 3: Comparison of chip surface used by each component of the original MCU to its modified version. kGE stands for thousands of Gate Equivalents (GEs). One GE is proportional to the surface of the chip and is computed from the module surface divided by the surface of a NAND2 gate, 9.37 × 10⁻⁶ mm² with this library.
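The caption's metric can be checked with a one-line computation, using the NAND2 cell area quoted for this library:

```c
/* Gate equivalents = module area / NAND2 cell area.
 * With this library's NAND2 at 9.37e-6 mm^2, a 0.0937 mm^2
 * module corresponds to 10 kGE. */
static double gate_equivalents(double module_area_mm2)
{
    const double NAND2_AREA_MM2 = 9.37e-6;
    return module_area_mm2 / NAND2_AREA_MM2;
}
```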


on its external interface, e.g., low-end ARM cores.

One limitation is that we rely on "reasonably" fast HMAC computation, which might make SMART too slow for some applications. This is a consequence of the conscious trade-off made when we chose to limit the amount of hardware changes in the processor. Depending on the application, it may be possible to use a hardware-based SHA-1 implementation (e.g., [30]), which would significantly improve performance without requiring major processor modifications.

7 Related Work

Related work falls into several categories:

Hardware Attestation. Secure boot [2] checks the integrity of a system at power-on. The root of trust, usually a small bootloader, computes a hash of loaded memory and compares it to a signed value; a device is allowed to boot up only if all checks pass. Trusted Platform Modules (TPMs) [45] are secure co-processors that are nowadays present in most commodity systems. TPMs compute integrity checksums of loaded memory at boot time and send them to be verified by a remote verifier. TPMs also protect data against a compromised operating system, i.e., they make an encryption key available only when Platform Configuration Registers (PCRs) are in a given state. The integrity measurements are stored in PCRs inside the TPM. Security is based on the following facts: (1) PCRs are only accessible through the TPM and (2) measurements stored in PCRs can only be extended by including the previous values in the computation. Each extension is computed using a cryptographic hash of the current measurement and the previous PCR value. Trust is established because the very first extension is performed by the BIOS upon boot. Several approaches have been proposed that rely on the TPM as a common foundation [33, 14, 25].

Software Attestation. Pioneer [37] provides device attestation without relying on any specialized hardware or secure co-processor. It computes a checksum of memory using a function that relies on "enough" side-effects of computation (status registers, etc.) such that malicious emulation of this function incurs a temporal overhead that is sufficient to detect cheating. Attestation that relies on timed software checksums has also been adapted to embedded devices in [36, 38, 40]. However, security of such solutions has been challenged by [41], and several attacks on such schemes have been proposed [8]. Other hybrid solutions (e.g., [32]) rely on ROM and fill the prover's entire memory to ensure absence of malicious code, and then restore the device to a known secure state. All software solutions rely on strong assumptions about adversarial capabilities and do not consider that colluding devices can actively participate in the protocol to defeat attestation. This, combined with the high overhead of software solutions, makes the application of software attestation to time-critical devices questionable.

Dynamic Root of Trust. Recently, a dynamic root of trust mechanism has been added to the TPM specifications [46] and has been implemented as AMD SVM [1] and Intel TXT [24]. This provides a way to perform attestation dynamically, after boot. This is accomplished by allowing a specific CPU instruction to atomically reset the state of some PCRs, isolate a region of memory, hash the contents of that memory and execute it. Several hardware protection measures, such as disabling DMA, debugging and resetting the TPM PCRs, are included to prevent fraudulent attestation. The Flicker system architecture [29] establishes a dynamic root of trust on commodity computers, leveraging AMD and Intel advances, by running a Piece of Application Logic (PAL) on the prover. The execution of the PAL is guaranteed even if the BIOS, OS and DMA of the system are all compromised. This was further extended into TrustVisor [28], which provides a dynamic root of trust for PALs directly from a minimal hypervisor. This significantly improves the performance of the dynamic root of trust mechanism. Flicker and TrustVisor are the closest to the approach considered in this paper. However, their complexity and reliance on a TPM and Intel or AMD architectures inhibit their use in low-cost commodity embedded devices.

Other Hardware-Based Techniques. SPM [43] is a hardware-based mechanism for process isolation. It relies on a special vault module that must be bootstrapped with a static root of trust. This vault bootstraps SPM-protected programs that gain exclusive control over the protection of their own memory pages.
SPM and SMART share some key features, such as the use of the program counter to restrict access to secret storage, and code entry point enforcement. However, unlike SMART, SPM does not provide a dynamic root of trust. It also involves a larger TCB and is generally oriented towards higher-end embedded systems with an MMU or an MPU. Furthermore, SPM requires new custom instructions to be added to the core. Finally, its feasibility (i.e., the effort needed to implement it on a real hardware platform) and footprint remain unclear.

8 Conclusions

This paper is motivated by the lack of currently feasible techniques for providing a dynamic root of trust on remote embedded devices. We proposed SMART, a very simple,


lightweight and low-cost architecture that nonetheless offers concrete security guarantees in the presence of any kind of non-physical attack. Future work will consist in formally verifying the ROM-resident code in order to obtain a strong security proof for the entire architecture; this is likely to be a challenging task. More experiments using current MCU implementations need to be performed to better assess the overhead. We also plan to implement and evaluate SMART on several other common MCU platforms and, as part of a larger project, we plan to produce a few test ASIC samples of microcontrollers with SMART.

9 Acknowledgements

We thank Frank K. Gurkaynak for his help with ASIC design, as well as Travis Goodspeed, Kasper Rasmussen, Srdjan Capkun and the NDSS'12 anonymous reviewers for their insightful comments that helped us improve this paper.

Daniele Perito was supported in part by the European Commission within the STREP WSAN4CIP project. Aurelien Francillon was supported by the European Commission within the STREP TAMPRES project. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of the WSAN4CIP or TAMPRES projects or the European Commission.

References

[1] ADVANCED MICRO DEVICES. AMD Secure Virtual Machine Architecture Reference Manual. Publication No. 33047, Revision 3.01, May 2005.

[2] ARBAUGH, W. A., FARBER, D. J., AND SMITH, J. M. A secure and reliable bootstrap architecture. In SP '97: Proceedings of the 1997 IEEE Symposium on Security and Privacy (Washington, DC, USA, 1997), IEEE Computer Society, p. 65.

[3] ARM CORPORATION. Building a secure system using TrustZone technology. Publication number: PRD29-GENC-009492C.

[4] ARM CORPORATION. SecurCore processors, 2011. http://www.arm.com/products/processors/securcore/index.php.

[5] ATMEL CORPORATION. 8-bit microcontroller with 128K bytes in-system programmable flash.

[6] ATMEL CORPORATION. ATMEL Trusted Platform Module AT97SC3201, June 2005. http://www.atmel.com/atmel/acrobat/doc5010.pdf.

[7] BUCHANAN, E., ROEMER, R., SHACHAM, H., AND SAVAGE, S. When good instructions go bad: generalizing return-oriented programming to RISC. In CCS '08: Proceedings of the 15th ACM Conference on Computer and Communications Security (2008), ACM.

[8] CASTELLUCCIA, C., FRANCILLON, A., PERITO, D., AND SORIENTE, C. On the difficulty of software-based attestation of embedded devices. In CCS '09: Proceedings of the 16th ACM Conference on Computer and Communications Security (November 2009).

[9] CHECKOWAY, S., DAVI, L., DMITRIENKO, A., SADEGHI, A.-R., SHACHAM, H., AND WINANDY, M. Return-oriented programming without returns. In Proceedings of CCS 2010 (Oct. 2010), A. Keromytis and V. Shmatikov, Eds., ACM Press, pp. 559–72.

[10] CHINGIS TECHNOLOGY CORPORATION. Embedded e2 Flash IP, PF32K17E, 32Kbyte (16K x 16) Embedded Flash Macro (EFM), 2011. Details available online at: http://www.chingistek.com/pfusion_03.asp?seq=9.

[11] CONDIT, J., HARREN, M., ANDERSON, Z., GAY, D., AND NECULA, G. Dependent types for low-level programming. In Programming Languages and Systems, R. De Nicola, Ed., vol. 4421 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2007, pp. 520–535.

[12] COSTAN, V., SARMENTA, L. F., DIJK, M., AND DEVADAS, S. The trusted execution module: Commodity general-purpose trusted computing. In Proceedings of the 8th IFIP WG 8.8/11.2 International Conference on Smart Card Research and Advanced Applications (Berlin, Heidelberg, 2008), CARDIS '08, Springer-Verlag, pp. 133–148.

[13] EASTLAKE, D., AND JONES, P. RFC 3174 - US Secure Hash Algorithm 1 (SHA1). IETF RFC, September 2001.

[14] ENGLAND, P., LAMPSON, B., MANFERDELLI, J., PEINADO, M., AND WILLMAN, B. A trusted open platform. IEEE Computer 36, 7 (2003).


[15] FALLIERE, N., MURCHU, L. O., AND CHIEN, E. W32.Stuxnet dossier, version 1.4. Symantec Security Response, February 2011.

[16] FARADAY TECHNOLOGY CORPORATION. 0.18µm synchronous VIA1 programmable ROM compiler, FSA0A C SP, 2004. Details available online at: http://www.faraday-tech.com.

[17] FARADAY TECHNOLOGY CORPORATION. 0.18µm synchronous high speed single port memory compiler, FSA0A C SU, 2004. Details available online at: http://www.faraday-tech.com.

[18] FARADAY TECHNOLOGY CORPORATION. Faraday FSA0A C 0.18µm ASIC standard cell library, 2004. Details available online at: http://www.faraday-tech.com.

[19] FOSTER, J. S., TERAUCHI, T., AND AIKEN, A. Flow-sensitive type qualifiers. In Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation (New York, NY, USA, 2002), PLDI '02, ACM, pp. 1–12.

[20] FRANCILLON, A., AND CASTELLUCCIA, C. Code injection attacks on Harvard-architecture devices. In CCS '08: Proceedings of the 15th ACM Conference on Computer and Communications Security (2008), P. Ning, P. F. Syverson, and S. Jha, Eds., ACM.

[21] GOODSPEED, T. Extracting keys from second generation ZigBee chips. Black Hat USA, July 2009.

[22] HALDERMAN, J. A., SCHOEN, S. D., HENINGER, N., CLARKSON, W., PAUL, W., CALANDRINO, J. A., FELDMAN, A. J., APPELBAUM, J., AND FELTEN, E. W. Lest we remember: Cold boot attacks on encryption keys. In USENIX Security Symposium (2008), USENIX Association, pp. 45–60.

[23] HU, W., TAN, H., CORKE, P., SHIH, W. C., AND JHA, S. Toward trusted wireless sensor networks. ACM Trans. Sen. Netw. 7 (August 2010), 5:1–5:25.

[24] INTEL CORPORATION. Intel Trusted Execution Technology (Intel TXT) – Software Development Guide, December 2009. Document Number: 315168-006.

[25] KIL, C., SEZER, E. C., AZAB, A. M., NING, P., AND ZHANG, X. Remote attestation to dynamic system properties: Towards providing complete system integrity evidence. In DSN '09: Proceedings of the 39th IEEE/IFIP Conference on Dependable Systems and Networks (June 2009).

[26] KOSTIAINEN, K., DMITRIENKO, A., EKBERG, J.-E., SADEGHI, A.-R., AND ASOKAN, N. Key attestation from trusted execution environments. In Proceedings of the 3rd International Conference on Trust and Trustworthy Computing (Berlin, Heidelberg, 2010), TRUST '10, Springer-Verlag, pp. 30–46.

[27] KRAHMER, S. x86-64 buffer overflow exploits and the borrowed code chunks exploitation technique. Tech. rep., SUSE, September 2005. Available at http://www.suse.de/~krahmer/no-nx.pdf.

[28] MCCUNE, J. M., LI, Y., QU, N., ZHOU, Z., DATTA, A., GLIGOR, V. D., AND PERRIG, A. TrustVisor: Efficient TCB reduction and attestation. In Proceedings of the IEEE Symposium on Security and Privacy (Oakland 2010) (May 2010).

[29] MCCUNE, J. M., PARNO, B. J., PERRIG, A., REITER, M. K., AND ISOZAKI, H. Flicker: an execution infrastructure for TCB minimization. In Proceedings of the 3rd ACM SIGOPS/EuroSys European Conference on Computer Systems 2008 (New York, NY, USA, 2008), EuroSys '08, ACM, pp. 315–328.

[30] O'NEILL (MCLOONE), M. Low-Cost SHA-1 Hash Function Architecture for RFID Tags. In Workshop on RFID Security – RFIDSec '08 (Budapest, Hungary, July 2008).

[31] The OpenCores Project. http://opencores.org/.

[32] PERITO, D., AND TSUDIK, G. Secure code update for embedded devices via proofs of secure erasure. In Proceedings of the 15th European Conference on Research in Computer Security (Berlin, Heidelberg, 2010), ESORICS '10, Springer-Verlag, pp. 643–662.

[33] SAILER, R., ZHANG, X., JAEGER, T., AND VAN DOORN, L. Design and implementation of a TCG-based integrity measurement architecture. In SSYM '04: Proceedings of the 13th USENIX Security Symposium (Berkeley, CA, USA, 2004), USENIX Association, pp. 16–16.

[34] SCHRAMM, K., LEMKE, K., AND PAAR, C. Embedded cryptography: Side channel attacks. In Embedded Security in Cars, K. Lemke, C. Paar, and M. Wolf, Eds. Springer Berlin Heidelberg, 2006, pp. 187–206.


[35] SESHADRI, A., LUK, M., AND PERRIG, A. SAKE: Software attestation for key establishment in sensor networks. In DCOSS '08: Proceedings of the 4th IEEE International Conference on Distributed Computing in Sensor Systems (2008).

[36] SESHADRI, A., LUK, M., PERRIG, A., VAN DOORN, L., AND KHOSLA, P. SCUBA: Secure code update by attestation in sensor networks. In WiSe '06: Proceedings of the 5th ACM Workshop on Wireless Security (2006), ACM.

[37] SESHADRI, A., LUK, M., SHI, E., PERRIG, A., VAN DOORN, L., AND KHOSLA, P. Pioneer: verifying code integrity and enforcing untampered code execution on legacy systems. In SOSP '05: Proceedings of the Twentieth ACM Symposium on Operating Systems Principles (2005), ACM.

[38] SESHADRI, A., PERRIG, A., VAN DOORN, L., AND KHOSLA, P. K. SWATT: SoftWare-based ATTestation for embedded devices. In IEEE Symposium on Security and Privacy (2004), IEEE Computer Society.

[39] SHACHAM, H. The geometry of innocent flesh on the bone: Return-into-libc without function calls (on the x86). In CCS '07: Proceedings of the 14th ACM Conference on Computer and Communications Security (2007), ACM.

[40] SHANECK, M., MAHADEVAN, K., KHER, V., AND KIM, Y. Remote software-based attestation for wireless sensors. In ESAS (2005).

[41] SHANKAR, U., CHEW, M., AND TYGAR, J. D. Side effects are not sufficient to authenticate software. In Proceedings of the 13th USENIX Security Symposium (August 2004).

[42] SOLAR DESIGNER. return-to-libc attack. Bugtraq mailing list, August 1997.

[43] STRACKX, R., PIESSENS, F., AND PRENEEL, B. Efficient isolation of trusted subsystems in embedded systems. In Proceedings of the ICST Conference on Security and Privacy in Communication Networks (SecureComm '10) (2010).

[44] SYNOPSYS, INC. Design Compiler 2010, 2010. http://www.synopsys.com/home.aspx.

[45] TRUSTED COMPUTING GROUP. TCPA main specification, version 1.1b.

[46] TRUSTED COMPUTING GROUP. TPM main specification level 2, version 1.2.

[47] YANG, Y., WANG, X., ZHU, S., AND CAO, G. Distributed software-based attestation for node compromise detection in sensor networks. In SRDS (2007), IEEE Computer Society.