Software Platforms


  • 7/29/2019 Software Platforms


    Lecture 2: Software Platforms

    Anish Arora

    CIS788.11J

    Introduction to Wireless Sensor Networks

    Lecture uses slides from tutorials prepared by the authors of these platforms


    Outline

    Discussion includes OS and also programming methodology; some environments focus more on one than the other

    Focus is on node-centric platforms (vs. distributed-system-centric platforms): composability, energy efficiency, robustness, reconfigurability, and pros/cons of the interpreted approach

    Platforms

    TinyOS (applies to XSMs): slides from UCB

    EmStar (applies to XSSs): slides from UCLA

    SOS: slides from UCLA

    Contiki: slides from Uppsala

    Virtual machines (Maté): slides from UCB


    References

    nesC paper, TinyOS Programming Manual, "The Emergence of Networking Abstractions and Techniques in TinyOS", TinyOS webpage:
    http://www.tinyos.net/papers/nesc.pdf
    http://csl.stanford.edu/~pal/pubs/tinyos-programming.pdf
    http://www.cs.berkeley.edu/~pal/pubs/tinyos-nsdi04/
    http://www.tinyos.net/tinyos-2.x/doc/

    EmStar: An Environment for Developing Wireless Embedded Systems Software; EmTOS Sensys04 paper; EmStar webpage:
    http://lecs.cs.ucla.edu/Publications/papers/emstar.pdf
    http://lecs.cs.ucla.edu/~girod/papers/emtos-sensys04.pdf
    http://cvs.cens.ucla.edu/emstar/

    SOS Mobisys paper, SOS webpage:
    http://nesl.ee.ucla.edu/projects/sos-1.x/publications/sos_mobisys_05.pdf
    http://nesl.ucla.edu/projects/sos-1.x/

    Contiki EmNets paper, Sensys06 paper, Contiki webpage:
    http://www.sics.se/~adam/dunkels04contiki.pdf
    http://www.sics.se/~adam/dunkels06runtime.pdf
    http://www.sics.se/~adam/contiki/

    Maté ASPLOS paper, Maté webpage (SUNSPOT):
    http://www.cs.berkeley.edu/~pal/pubs/asplos02.pdf
    http://www.cs.berkeley.edu/~pal/research/mate.html

    Traditional Systems

    Well-established layers of abstraction

    Strict boundaries

    Ample resources

    Independent applications at endpoints communicate point-to-point through routers

    Well attended

    [Figure: traditional layered host; user applications above a system providing a network stack (physical layer, data link, network, transport), threads, address spaces, drivers, and files; applications at endpoints communicate through routers]


    Sensor Network Systems

    Highly constrained resources: processing, storage, bandwidth, power; limited hardware parallelism, relatively simple interconnect

    Applications spread over many small nodes: self-organizing collectives, highly integrated with changing environment and network, diversity in design and usage

    Concurrency intensive in bursts: streams of sensor data & network traffic

    Robust: inaccessible, critical operation

    Unclear where the boundaries lie; need a framework for resource-constrained concurrency, defining boundaries, and application-specific processing: allow abstractions to emerge


    Choice of Programming Primitives

    Traditional approaches

    command processing loop (wait for request, act, respond)

    monolithic event processing

    full POSIX thread/socket regime

    Alternative

    provide a framework for concurrency and modularity

    never poll, never block

    interleaving flows, events


    TinyOS

    Microthreaded OS (lightweight thread support) and efficient network interfaces

    Two-level scheduling structure: long-running tasks that can be interrupted by hardware events

    Small, tightly integrated design that allows crossover of software components into hardware


    TinyOS Concepts

    Scheduler + graph of components: constrained two-level scheduling model (threads + events)

    Component: commands, event handlers, frame (storage), tasks (concurrency)

    Constrained storage model: frame per component, shared stack, no heap

    Very lean multithreading

    Efficient layering

    [Figure: messaging component with internal state and an internal thread; commands in (init, power(mode), send_msg(addr, type, data)), events out (msg_rec(type, data), msg_send_done); below, it issues commands (init, power(mode), TX_packet(buf)) and handles events (TX_packet_done(success), RX_packet_done(buffer))]


    Application = Graph of Components

    Example: ad hoc, multi-hop routing of photo sensor readings

    3450 B code, 226 B data

    Graph of cooperating state machines on a shared stack

    [Figure: component graph; Sensor Appln over Router and Route map, Active Messages, Radio Packet / Serial Packet, Radio byte / UART, RFM / ADC / Temp / Photo, clock; the HW/SW boundary sits at the bit level, with byte and packet layers above]


    TOS Execution Model

    commands request action: ack/nack at every boundary; call commands or post tasks

    events notify occurrence: HW interrupt at lowest level; may signal events, call commands, post tasks

    tasks provide logical concurrency: preempted by events

    [Figure: RFM (event-driven bit-pump), Radio byte (event-driven byte-pump, encode/decode), Radio Packet (event-driven packet-pump, CRC), active message (message-event driven), application comp (data processing)]


    Event-Driven Sensor Access Pattern

    clock event handler initiates data collection

    sensor signals data-ready event

    data event handler calls output command

    device sleeps or handles other activity while waiting

    conservative send/ack at component boundary

    command result_t StdControl.start() {
      return call Timer.start(TIMER_REPEAT, 200);
    }

    event result_t Timer.fired() {
      return call sensor.getData();
    }

    event result_t sensor.dataReady(uint16_t data) {
      display(data);
      return SUCCESS;
    }

    [Figure: SENSE application; Timer, Photo, LED components]


    TinyOS Commands and Events

    Calling a command:

    {
      ...
      status = call CmdName(args);
      ...
    }

    command CmdName(args) {
      ...
      return status;
    }

    Signaling an event:

    {
      ...
      status = signal EvtName(args);
      ...
    }

    event EvtName(args) {
      ...
      return status;
    }


    TinyOS Execution Contexts

    Events generated by interrupts preempt tasks

    Tasks do not preempt tasks

    Both essentially process state transitions

    [Figure: hardware interrupts arrive as events; events call commands and post tasks]


    Handling Concurrency: Async or Sync Code

    Async methods call only async methods (interrupts are async)

    Sync methods/tasks call only sync methods

    Potential race conditions: any update to shared state from async code; any update to shared state from sync code that is also updated from async code

    Compiler rule: if a variable x is accessed by async code, then any access of x outside of an atomic statement is a compile-time error

    Race-free invariant: any update to shared state is either not a potential race condition (sync code only) or is within an atomic section


    Tasks

    provide concurrency internal to a component: longer-running operations

    are preempted by events, not preempted by tasks

    able to perform operations beyond event context

    may call commands, may signal events

    {
      ...
      post TskName();
      ...
    }

    task void TskName() {
      ...
    }


    Typical Application Use of Tasks

    event-driven data acquisition; schedule a task to do the computational portion

    event result_t sensor.dataReady(uint16_t data) {
      putdata(data);
      post processData();
      return SUCCESS;
    }

    task void processData() {
      int16_t i, sum = 0;
      for (i = 0; i < maxdata; i++)
        sum += (rdata[i] >> 7);
      display(sum >> shiftdata);
    }

    128 Hz sampling rate

    simple FIR filter

    dynamic software tuning for centering the magnetometer signal (1208 bytes)

    digital control of analog, not DSP: ADC (196 bytes)


    Task Scheduling

    Typically a simple FIFO scheduler

    Bound on number of pending tasks

    When idle, shuts down node except clock

    Uses a non-blocking task queue data structure

    Simple event-driven structure + control over the complete application/system graph, instead of complex task priorities and IPC
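    The bounded, non-blocking FIFO described above can be sketched in plain C. This is a host-side model, not TinyOS code; names such as task_fn_t, MAX_TASKS, and demo_task are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_TASKS 8                 /* bound on pending tasks */

typedef void (*task_fn_t)(void);

static task_fn_t queue[MAX_TASKS];  /* circular FIFO of posted tasks */
static size_t head = 0, tail = 0, count = 0;

/* post: enqueue a task; fails (returns false) when the queue is full,
   mirroring the bounded task queue described in the slide */
bool task_post(task_fn_t t) {
    if (count == MAX_TASKS) return false;
    queue[tail] = t;
    tail = (tail + 1) % MAX_TASKS;
    count++;
    return true;
}

/* run_next: dequeue and run one task in FIFO order; returns false when
   idle (a real node would then sleep until the next interrupt) */
bool task_run_next(void) {
    if (count == 0) return false;
    task_fn_t t = queue[head];
    head = (head + 1) % MAX_TASKS;
    count--;
    t();
    return true;
}

/* Example task used for demonstration. */
static int demo_runs = 0;
static void demo_task(void) { demo_runs++; }
```

    A real scheduler would run tasks from a loop that sleeps when task_run_next reports an empty queue.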


    Maintaining Scheduling Agility

    Need logical concurrency at many levels of the graph, while meeting hard timing constraints (e.g., sample the radio in every bit window)

    Retain event-driven structure throughout the application

    Tasks extend processing outside the event window

    All operations are non-blocking


    The Complete Application

    [Figure: SenseToRfm component graph; SenseToRfm over Timer/photo and IntToRfm; AMStandard (generic comm) over noCRCPacket and CRCfilter; MicaHighSpeedRadioM with RadioTiming, SecDedEncode, ChannelMon, RandomLFSR, SPIByteFIFO, and SlavePin; RadioCRCPacket, UART/UARTnoCRCPacket, ADC, ClockC; HW/SW boundary at the bit level, with byte and packet layers above]


    Programming Syntax

    TinyOS 2.0 is written in an extension of C, called nesC

    Applications too are just additional components, composed with OS components

    Provides syntax for the TinyOS concurrency and storage model: commands, events, tasks; local frame variables

    Compositional support: separation of definition and linkage; robustness through narrow interfaces and reuse; interpositioning

    Whole-system analysis and optimization


    Component Interface

    logically related set of commands and events

    StdControl.nc:

    interface StdControl {
      command result_t init();
      command result_t start();
      command result_t stop();
    }

    Clock.nc:

    interface Clock {
      command result_t setRate(char interval, char scale);
      event result_t fire();
    }


    Component Types

    Configuration

    links together components to compose a new component

    configurations can be nested

    the complete main application is always a configuration

    Module

    provides code that implements one or more interfaces and internal behavior


    Example of Top Level Configuration

    configuration SenseToRfm {
      // this module does not provide any interface
    }

    implementation {
      components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;

      Main.StdControl -> SenseToInt;
      Main.StdControl -> IntToRfm;

      SenseToInt.Clock -> ClockC;
      SenseToInt.ADC -> Sensor;
      SenseToInt.ADCControl -> Sensor;
      SenseToInt.IntOutput -> IntToRfm;
    }

    [Figure: Main's StdControl wired to SenseToInt and IntToRfm; SenseToInt's Clock wired to ClockC, its ADC and ADCControl to Photo, and its IntOutput to IntToRfm]


    Nested Configuration

    includes IntMsg;

    configuration IntToRfm {
      provides {
        interface IntOutput;
        interface StdControl;
      }
    }

    implementation {
      components IntToRfmM, GenericComm as Comm;

      IntOutput = IntToRfmM;
      StdControl = IntToRfmM;

      IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG];
      IntToRfmM.SubControl -> Comm;
    }

    [Figure: IntToRfmM provides StdControl and IntOutput; its SubControl and Send are wired through to GenericComm's StdControl and SendMsg[AM_INTMSG]]


    IntToRfm Module

    includes IntMsg;

    module IntToRfmM {
      uses {
        interface StdControl as SubControl;
        interface SendMsg as Send;
      }
      provides {
        interface IntOutput;
        interface StdControl;
      }
    }

    implementation {
      bool pending;
      struct TOS_Msg data;

      command result_t StdControl.init() {
        pending = FALSE;
        return call SubControl.init();
      }

      command result_t StdControl.start()
      { return call SubControl.start(); }

      command result_t StdControl.stop()
      { return call SubControl.stop(); }

      command result_t IntOutput.output(uint16_t value) {
        ...
        if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
          return SUCCESS;
        ...
      }

      event result_t Send.sendDone(TOS_MsgPtr msg, result_t success) {
        ...
      }
    }


    Atomicity Support in nesC

    Split-phase operations require care to deal with pending operations

    Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (preemptible by an event which then uses that state)

    nesC supports atomic blocks

    implemented by turning off interrupts

    for efficiency, no calls are allowed in the block

    access to shared variables outside an atomic block is not allowed
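    As a rough host-side illustration of the "turn off interrupts" implementation described above (this is not nesC's actual code generation; the interrupt-enable bit is modeled with a plain variable):

```c
#include <stdint.h>

/* Model of an interrupt-mask-based atomic section. On a real MCU,
   atomic_start would read and clear the global interrupt-enable bit
   (e.g. the I-bit of SREG on AVR); here a variable stands in for it. */
static volatile int irq_enabled = 1;

static int atomic_start(void) {
    int prev = irq_enabled;   /* save previous interrupt state */
    irq_enabled = 0;          /* disable interrupts */
    return prev;
}

static void atomic_end(int prev) {
    irq_enabled = prev;       /* restore saved state: allows nesting */
}

static volatile uint16_t shared = 0;

/* An update to state shared with async code, wrapped in the atomic
   section (and containing no calls, per the slide's rule). */
void shared_increment(void) {
    int s = atomic_start();
    shared = shared + 1;
    atomic_end(s);
}
```

    Saving and restoring the previous state, rather than unconditionally re-enabling interrupts, is what makes nested atomic sections safe.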


    Supporting HW Evolution

    Distribution broken into:

    apps: top-level applications

    tos/lib: shared application components

    tos/system: hardware-independent system components

    tos/platform: hardware-dependent system components (includes HPLs and hardware.h)

    tos/interfaces

    tools: development support tools

    contrib, beta

    Component design so HW and SW look the same; example: a temp component may abstract a particular ADC channel on the microcontroller, or may be a SW I2C protocol to a sensor board with a digital sensor or ADC

    HW/SW boundary can move up and down with minimal changes


    Example: Radio Byte Operation

    Pipelines transmission: transmits a byte while encoding the next byte

    Trades 1 byte of buffering for an easy deadline

    Encoding task must complete before byte transmission completes

    Decode must complete before the next byte arrives

    Separates high-level latencies from low-level real-time requirements

    [Figure: timeline; the encode task for byte n+1 runs during the bit-level transmission of byte n (start, Byte 1 through Byte 4 over RFM bits)]


    Dynamics of Events and Threads

    bit event filtered at the byte layer

    bit event => end of byte => end of packet => end of msg send

    thread posted to start sending the next message

    radio takes clock events to detect receive


    Sending a Message

    bool pending;
    struct TOS_Msg data;

    command result_t IntOutput.output(uint16_t value) {
      IntMsg *message = (IntMsg *)data.data;
      if (!pending) {
        pending = TRUE;
        message->val = value;
        message->src = TOS_LOCAL_ADDRESS;
        if (call Send.send(TOS_BCAST_ADDR,   // destination
                           sizeof(IntMsg),   // length
                           &data))
          return SUCCESS;
        pending = FALSE;
      }
      return FAIL;
    }

    Refuses to accept the command if the buffer is still full or the network refuses to accept the send command

    User component provides structured msg storage


    Send done Event

    Send done event fans out to all potential senders; originator determined by match

    free buffer on success, retry or fail on failure

    Others use the event to schedule pending communication

    event result_t IntOutput.sendDone(TOS_MsgPtr msg, result_t success) {
      if (pending && msg == &data) {
        pending = FALSE;
        signal IntOutput.outputComplete(success);
      }
      return SUCCESS;
    }


    Receive Event

    Active message automatically dispatched to associated handler: knows format, no run-time parsing; performs action on message event

    Must return a free buffer to the system: typically the incoming buffer if processing is complete

    event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) {
      IntMsg *message = (IntMsg *)m->data;
      call IntOutput.output(message->val);
      return m;
    }


    Tiny Active Messages

    Sending

    declare buffer storage in a frame

    request transmission

    name a handler

    handle completion signal

    Receiving

    declare a handler

    firing a handler: automatic

    Buffer management

    strict ownership exchange

    tx: send done event => reuse

    rx: must return a buffer


    Tasks in Low-level Operation

    transmit packet

    send command schedules a task to calculate the CRC

    task initiates the byte-level data pump

    events keep the pump flowing

    receive packet

    receive event schedules a task to check the CRC

    task signals packet ready if OK

    byte-level tx/rx

    task scheduled to encode/decode each complete byte

    must take less time than the byte data transfer
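    The CRC step those tasks perform can be sketched as a plain C function. A generic CRC-16-CCITT is shown for illustration; the slides do not specify TinyOS's actual polynomial or table layout:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-16-CCITT (polynomial 0x1021, initial value 0x0000):
   the kind of per-packet checksum a posted task could compute over an
   outgoing or incoming buffer, outside interrupt context. */
uint16_t crc16(const uint8_t *buf, size_t len) {
    uint16_t crc = 0x0000;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;          /* fold in next byte */
        for (int b = 0; b < 8; b++) {
            if (crc & 0x8000)
                crc = (uint16_t)((crc << 1) ^ 0x1021);
            else
                crc = (uint16_t)(crc << 1);
        }
    }
    return crc;
}
```

    Running this in a task rather than in the receive interrupt keeps the interrupt handler short, which is exactly the point of the task split described above.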


    TinyOS tools

    TOSSIM: a simulator for TinyOS programs

    ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from a base node

    Oscilloscope: Java tool to visualize (sensor) data in real time

    Memory usage: breaks down memory usage per component (in contrib)

    Peacekeeper: detects RAM corruption due to stack overflows (in lib)

    Stopwatch: tool to measure execution time of a code block by timestamping at entry and exit (in the OSU CVS server)

    Makedoc and graphviz: generate and visualize the component hierarchy

    Surge, Deluge, SNMS, TinyDB


    Scalable Simulation Environment

    target platform: TOSSIM

    whole application compiled for the host's native instruction set

    event-driven execution mapped into event-driven simulator machinery

    storage model mapped to thousands of virtual nodes

    radio model and environmental model plugged in: bit-level fidelity

    sockets = basestation

    complete application, including GUI


    Simulation Scaling


    TinyOS 2.0: basic changes

    Scheduler: improve robustness and flexibility

    reserved tasks by default (fault tolerance)

    priority tasks

    New nesC 1.2 features:

    network types enable link-level cross-platform interoperability

    generic (instantiable) components, attributes, etc.

    Platform definition: simplify porting

    structure OS to leverage code reuse

    decompose h/w devices into 3 layers: presentation, abstraction, device-independent

    structure common chips for reuse across platforms, so platforms are a collection of chips: msp430 + CC2420 + ...

    Power management architecture for devices controlled by resource reservation

    Self-initialisation

    App-level notion of instantiable services


    TinyOS Limitations

    Static allocation allows for compile-time analysis, but can make programming harder

    No support for heterogeneity: support for other platforms (e.g. Stargate); support for high-data-rate apps (e.g. acoustic beamforming); interoperability with other software frameworks and languages

    Limited visibility: debugging; intra-node fault tolerance

    Robustness solved in the details of implementation: nesC offers only some types of checking


    Em*

    Software environment for sensor networks built from Linux-class devices

    Claimed features:

    Simulation and emulation tools

    Modular, but not strictly layered architecture

    Robust, autonomous, remote operation

    Fault tolerance within node and between nodes

    Reactivity to dynamics in environment and task

    High visibility into system: interactive access to all services


    Contrasting EmStar and TinyOS

    Similar design choices

    programming framework: component-based design; wiring together modules into an application

    event-driven: reactive to sudden sensor events or triggers

    robustness: nodes/system components can fail

    Differences

    hardware platform-dependent constraints: EmStar develops without optimization; TinyOS develops under severe resource constraints

    operating system and language choices: EmStar uses the easy-to-use C language, tightly coupled to Linux (devfs, redhat, ...); TinyOS uses an extended C compiler (nesC), not wedded to any OS


    Em* Transparently Trades-off Scale vs. Reality

    Em* code runs transparently at many degrees of reality: high-visibility debugging before low-visibility deployment

    [Figure: scale vs. reality; pure simulation, data replay, portable array, ceiling array, deployment]


    Em* Modularity

    Dependency DAG

    Each module (service):

    manages a resource & resolves contention

    has a well-defined interface

    has a well-scoped task

    encapsulates mechanism

    exposes control of policy

    minimizes work done by the client library

    Application has the same structure as services

    [Figure: dependency DAG from hardware (radio, sensors, audio) up through neighbor discovery, topology discovery, reliable unicast, time sync, state sync, leader election, acoustic ranging, and 3D multilateration to a collaborative sensor processing application]


    Em* Robustness

    Fault isolation via multiple processes

    Active process management (EmRun)

    Auto-reconnect built into libraries

    Crashproofing prevents cascading failure

    Soft-state design style: services periodically refresh clients; avoid diff protocols

    [Figure: EmRun supervising a process tree; camera, depth map, path_plan, scheduling, motor_x, motor_y]


    Em* Reactivity

    Event-driven software structure

    React to asynchronous notification, e.g. reaction to a change in the neighbor list

    Notification through the layers: events percolate up

    Domain-specific filtering at every level, e.g. neighbor-list membership hysteresis; time-synchronization linear fit and outlier rejection

    [Figure: notifications filtered as they percolate up through path_plan and scheduling to motor_y]


    EmStar Components

    Tools: EmRun, EmProxy/EmView, EmTOS

    Standard IPC: FUSD, device patterns

    Common services: NeighborDiscovery, TimeSync, Routing


    EmRun: Manages Services

    Designed to start, stop, and monitor services

    EmRun config file specifies service dependencies

    Starting and stopping the system

    starts up services in the correct order

    can detect and restart unresponsive services

    respawns services that die

    notifies services before shutdown, enabling graceful shutdown and persistent state

    Error/debug logging

    per-process logging to in-memory ring buffers

    configurable log levels, at run time
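    A per-process in-memory ring buffer of the kind described above can be sketched in C. This is illustrative only; EmRun's actual buffer format and API are not given in these slides:

```c
#include <string.h>

#define LOG_SLOTS 4            /* keep only the most recent 4 entries */
#define LOG_LINE  64

/* Fixed-size ring of log lines: appending never blocks and never
   allocates; old entries are simply overwritten. This suits a
   crash-prone embedded service whose recent history we want to keep. */
static char log_ring[LOG_SLOTS][LOG_LINE];
static unsigned log_next = 0;   /* total number of appends so far */

void log_append(const char *msg) {
    strncpy(log_ring[log_next % LOG_SLOTS], msg, LOG_LINE - 1);
    log_ring[log_next % LOG_SLOTS][LOG_LINE - 1] = '\0';
    log_next++;
}

/* Oldest entry still retained (or NULL if the log is empty). */
const char *log_oldest(void) {
    if (log_next == 0) return 0;
    unsigned start = (log_next > LOG_SLOTS) ? log_next - LOG_SLOTS : 0;
    return log_ring[start % LOG_SLOTS];
}
```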


    EmSim/EmCee

    Em* supports a variety of types of simulation and emulation, from simulated radio channels and sensors to emulated radio and sensor channels (ceiling array)

    In all cases, the code is identical

    Multiple emulated nodes run in their own spaces, on the same physical machine


    EmView/EmProxy: Visualization

    [Figure: the emview GUI queries emproxy, which aggregates motenic, linkstat, and neighbor state from emulator nodes and attached motes]


    Inter-module IPC: FUSD

    Creates device file interfaces

    Text/binary on the same file

    Standard interface: language independent; no client library required

    [Figure: a client performs system calls on /dev/servicename; the kfusd.o kernel module forwards them through /dev/fusd to the user-space server]


    Device Patterns

    FUSD can support virtually any semantics: what happens when a client calls read()?

    But many interfaces fall into certain patterns

    Device patterns

    encapsulate specific semantics

    take the form of a library: objects, with method calls and callback functions

    priority: ease of use


    Status Device

    Designed to report current state: no queuing; clients not guaranteed to see every intermediate state

    Supports multiple clients

    Interactive and programmatic interface: ASCII output via cat; binary output to programs

    Supports client notification: notification via select()

    Client configurable: a client can write a command string; the server parses it to enable per-client behavior

    [Figure: status device server with a config handler and a state request handler, serving multiple clients over one device]


    Packet Device

    Designed for message streams

    Supports multiple clients

    Supports queuing: round-robin service of output queues; delivery of messages to all/specific clients

    Client-configurable: input and output queue lengths; input filters; optional loopback of outputs to other clients (for snooping)

    [Figure: packet device server with per-client input/output queues and filters]


    Device Files vs Regular Files

    Regular files:

    require locking semantics to prevent race conditions between readers and writers

    support status semantics but not queuing

    no support for notification, polling only

    Device files:

    leverage the kernel for serialization: no locking needed

    arbitrary control of semantics: queuing, text/binary, per-client configuration

    immediate action, like a function call: a system call on the device triggers an immediate response from the service, rather than setting a request and waiting for the service to poll


    Interacting With em*

    Text/binary on the same device file

    text mode enables interaction from the shell and scripts

    binary mode enables easy programmatic access to data as C structures, etc.

    EmStar device patterns support multiple concurrent clients

    IPC channels used internally can be viewed concurrently for debugging

    Live state can be viewed in the shell (echo/cat -w) or using emview


    SOS: Motivation and Key Feature

    Post-deployment software updates are necessary to

    customize the system to the environment

    upgrade features

    remove bugs

    re-task the system

    Remote reprogramming is desirable

    Approach: remotely insert binary modules into the running kernel

    software reconfiguration without interrupting system operation

    no stop and re-boot, unlike differential patching

    Performance should be superior to virtual machines


    Architecture Overview

    [Figure: SOS architecture; dynamically loadable modules (tree routing, light sensor application) above a static kernel consisting of kernel services (dynamic memory, scheduler, function pointer control blocks), low-level device drivers (timer, sensor manager, comm. stack, serial framer), and a hardware abstraction layer (clock, I2C, ADC, SPI, UART)]

    Static kernel

    provides hardware abstraction & common services

    maintains data structures to enable module loading

    costly to modify after deployment

    Dynamic modules

    drivers, protocols, and applications

    inexpensive to modify after deployment

    position independent


    SOS Kernel

    Hardware Abstraction Layer (HAL): clock, UART, ADC, SPI, etc.

    Low-layer device drivers interface with the HAL: timer, serial framer, communications stack, etc.

    Kernel services: dynamic memory management; scheduling; function control blocks


    Kernel Services: Memory Management

    Fixed-partition dynamic memory allocation: constant allocation time; low overhead

    Memory management features: guard bytes for run-time memory overflow checks; ownership tracking; garbage collection on completion

    pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
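    A minimal sketch of fixed-partition allocation with guard bytes and ownership tracking, in plain C. This is illustrative only; SOS's real block sizes, guard values, and ownership table are not given in these slides:

```c
#include <stdint.h>
#include <string.h>

#define NBLOCKS   8
#define BLK_SIZE  32            /* fixed partition size */
#define GUARD     0xAB          /* guard byte appended to each block */

static uint8_t pool[NBLOCKS][BLK_SIZE + 1];  /* +1 for the guard byte */
static uint8_t owner[NBLOCKS];               /* 0 = free, else module pid */

/* Allocation over a small fixed table: stamp the guard byte and record
   the owning module's pid for later garbage collection. */
void *blk_malloc(uint8_t pid) {
    for (int i = 0; i < NBLOCKS; i++) {
        if (owner[i] == 0) {
            owner[i] = pid;
            pool[i][BLK_SIZE] = GUARD;
            return pool[i];
        }
    }
    return 0;                   /* out of partitions */
}

/* Overflow check: a clobbered guard byte means the owner wrote past
   the end of its partition. */
int blk_guard_ok(void *p) {
    return ((uint8_t *)p)[BLK_SIZE] == GUARD;
}

/* Free every block owned by a module (GC when the module completes). */
void blk_free_owner(uint8_t pid) {
    for (int i = 0; i < NBLOCKS; i++)
        if (owner[i] == pid) owner[i] = 0;
}
```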


    Kernel Services: Scheduling

    SOS implements non-preemptive priority scheduling via priority queues

    An event is served when there is no higher-priority event

    low-priority queue for scheduling most events

    high-priority queue for time-critical events, e.g., h/w interrupts & sensitive timers

    Prevents execution in interrupt contexts

    post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
              hdr_size + sizeof(SurgeMsg), (void*)packet, SOS_MSG_DYM_MANAGED);
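    The two-level, non-preemptive dispatch described above can be modeled in C. Names here are illustrative, not the SOS API:

```c
#include <stddef.h>

#define QCAP 8

typedef struct { int msgs[QCAP]; size_t head, len; } queue_t;

static queue_t high_q, low_q;   /* two priority levels, FIFO within each */

int q_push(queue_t *q, int msg) {
    if (q->len == QCAP) return 0;
    q->msgs[(q->head + q->len) % QCAP] = msg;
    q->len++;
    return 1;
}

static int q_pop(queue_t *q, int *out) {
    if (q->len == 0) return 0;
    *out = q->msgs[q->head];
    q->head = (q->head + 1) % QCAP;
    q->len--;
    return 1;
}

/* Dispatch: always drain the high-priority queue first, so a
   low-priority event runs only when no higher-priority event is
   pending. Handlers run to completion (non-preemptive). */
int next_event(int *out) {
    if (q_pop(&high_q, out)) return 1;
    return q_pop(&low_q, out);
}
```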


    Modules

    Each module is uniquely identified by its ID, or pid

    Has private state

    Represented by a message handler with the prototype:

    int8_t handler(void *private_state, Message *msg)

    Return value follows errno conventions: SOS_OK for success; -EINVAL, -ENOMEM, etc. for failure
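    In plain C, a module following that handler convention might look like this. This is a sketch; the Message layout, state type, and message-type names are placeholders, not SOS definitions:

```c
#include <stdint.h>

/* Placeholder message and state types standing in for SOS's. */
typedef struct { uint8_t type; void *data; } Message;
typedef struct { uint16_t samples_seen; } mod_state_t;

#define MSG_INIT       1
#define MSG_DATA_READY 2
#define SOS_OK         0
#define MY_EINVAL      22

/* All module behavior funnels through one handler that switches on
   the message type and mutates only its private state. */
int8_t mod_handler(void *private_state, Message *msg) {
    mod_state_t *s = (mod_state_t *)private_state;
    switch (msg->type) {
    case MSG_INIT:
        s->samples_seen = 0;
        return SOS_OK;
    case MSG_DATA_READY:
        s->samples_seen++;
        return SOS_OK;
    default:
        return -MY_EINVAL;      /* unknown message type */
    }
}
```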


    Kernel Services: Module Linking

    Orthogonal to module distribution protocol

    Kernel stores new module in free block located in program memory

    and critical information about module in the module table

    Kernel calls initialization routine for module

    Publish functions for other parts of the system to use

    char tmp_string = {'C', 'v', 'v', 0};

    ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string, (fn_ptr_t)tr_get_header_size);

    Subscribe to functions supplied by other modules

    char tmp_string[] = {'C', 'v', 'v', 0};

    s->get_hdr_size = (func_u8_t*)ker_get_handle(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string);

    Set initial timers and schedule events
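The register/subscribe pair above can be modeled with a small table keyed by (pid, fid) that also stores the published type string. The structures and names below are illustrative, not the real SOS kernel tables.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef void (*fn_ptr_t)(void);

/* one published function: provider ids plus an encoded prototype
 * string such as "Cvv", as in the slide's tmp_string */
typedef struct {
    uint8_t     pid, fid;
    const char *type;
    fn_ptr_t    fn;
} fcb_t;

#define MAX_FCB 8
static fcb_t fcb_table[MAX_FCB];
static int   n_fcb;

int reg_fn(uint8_t pid, uint8_t fid, const char *type, fn_ptr_t fn) {
    if (n_fcb == MAX_FCB) return -1;
    fcb_table[n_fcb++] = (fcb_t){ pid, fid, type, fn };
    return 0;
}

/* subscription succeeds only if the requested type string matches
 * the published one */
fn_ptr_t get_handle(uint8_t pid, uint8_t fid, const char *type) {
    for (int i = 0; i < n_fcb; i++)
        if (fcb_table[i].pid == pid && fcb_table[i].fid == fid)
            return strcmp(fcb_table[i].type, type) == 0
                       ? fcb_table[i].fn : NULL;
    return NULL;
}

/* a sample provider for demonstration */
static int called;
static void sample_fn(void) { called = 1; }
```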


    Kernel provides system services and access to hardware

    ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);

    ker_led(LED_YELLOW_OFF);

    Kernel jump table re-directs system calls to handlers

    enables upgrading the kernel independently of the modules

    Interrupts & messages from kernel dispatched by a high priority message buffer

    low latency

    concurrency safe operation

    Module-to-Kernel Communication

    (Figure: a module's system call enters the SOS kernel through the system
    jump table; hardware interrupts arrive through a HW-specific API, and
    kernel system messages reach the module via the high priority message
    buffer.)


    Inter-Module Communication

    (Figure: Module A makes an indirect function call to Module B through
    the module function pointer table.)

    Inter-Module Message Passing

    Asynchronous communication

    Messages dispatched by a two-level priority scheduler

    Suited for services with long latency

    Type-safe binding through publish / subscribe interface

    (Figure: Module A posts a message that is delivered to Module B through
    the message buffer.)

    Inter-Module Function Calls

    Synchronous communication

    Kernel stores pointers to functions registered by modules

    Blocking calls with low latency

    Type-safe runtime function binding


    Synchronous Communication

    Module can register function for low latency blocking call (1)

    Modules which need such function can subscribe to it by getting a
    function pointer pointer (i.e. **func) (2)

    When service is needed, module dereferences the function pointer
    pointer (3)

    (Figure: registration (1) and subscription (2) through the module
    function pointer table, followed by the dereferenced call (3) from
    Module A to Module B.)


    Asynchronous Communication

    Module is active when it is handling the message (2)(4)

    Message handling runs to completion and can only be interrupted by
    hardware interrupts

    Module can send message to another module (3) or send message to the
    network (5)

    Message can come from both network (1) and local host (3)

    (Figure: a message arrives from the network (1) into the message queue,
    is dispatched to Module A (2), which can post to Module B (3, 4) or
    place a packet on the send queue for the network (5).)


    Module Safety

    Problem: Modules can be remotely added, removed, & modified on deployed nodes

    Accessing a module

    If module doesn't exist, kernel catches messages sent to it & handles
    dynamically allocated memory

    If module exists but can't handle the message, then module's default
    handler gets message & kernel handles dynamically allocated memory

    Subscribing to a module's function

    Publishing a function includes a type description that is stored in a
    function control block (FCB) table

    Subscription attempts include type checks against corresponding FCB

    Type changes/removal of published functions result in subscribers being
    redirected to system stub handler function specific to that type

    Updates to functions w/ same type assumed to have same semantics
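The stub redirection works because subscribers hold a function pointer *pointer* into a kernel-owned slot, so the kernel can swap the slot without relinking subscribers. A minimal sketch, with all names invented:

```c
#include <assert.h>

/* Why **func: the kernel owns the slot, so it can install a typed
 * stub when the provider is removed; subscribers pick up the change
 * on their next dereference. */

typedef int (*get_size_t)(void);

static int real_get_size(void) { return 8; }
static int stub_get_size(void) { return -1; }  /* error stub for this type */

static get_size_t slot = real_get_size;   /* kernel function table entry */
static get_size_t *subscription = &slot;  /* what a subscriber stores */

void unload_provider(void) { slot = stub_get_size; }

int call_subscribed(void) { return (*subscription)(); }
```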


    Module Library

    (Figure: a Surge application with debugging, composed of Surge, Memory
    Debug, Photo Sensor, and Tree Routing modules.)

    Some applications created by combining already written and tested modules

    SOS kernel facilitates loosely coupled modules

    Passing of memory ownership

    Efficient function and messaging interfaces


    Module Design

    Uses standard C

    Programs created by wiring

    modules together

    #include <module.h>

    typedef struct {
        uint8_t pid;
        uint8_t led_on;
    } app_state;

    DECL_MOD_STATE(app_state);
    DECL_MOD_ID(BLINK_ID);

    int8_t module(void *state, Message *msg)
    {
        app_state *s = (app_state*)state;

        switch (msg->type) {
        case MSG_INIT: {
            s->pid = msg->did;
            s->led_on = 0;
            ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
            break;
        }
        case MSG_FINAL: {
            ker_timer_stop(s->pid, 0);
            break;
        }
        case MSG_TIMER_TIMEOUT: {
            if (s->led_on == 1) {
                ker_led(LED_YELLOW_ON);
            } else {
                ker_led(LED_YELLOW_OFF);
            }
            s->led_on++;
            if (s->led_on > 1) s->led_on = 0;
            break;
        }
        default:
            return -EINVAL;
        }
        return SOS_OK;
    }


    Sensor Manager

    Enables sharing of sensor data between multiple modules

    Presents uniform data access API to diverse sensors

    Underlying device specific drivers register with the sensor manager

    Device specific sensor drivers control

    Calibration

    Data interpolation

    Sensor drivers are loadable: enables

    post-deployment configuration of sensors

    hot-swapping of sensors on a running node

    (Figure: Modules A and B obtain data from a MagSensor driver through the
    sensor manager — periodic access via getData and polled access via
    dataReady signals, with the driver using ADC and I2C.)


    Application Level Performance

    Comparison of application performance in SOS, TinyOS, and Mate VM

    Platform               ROM        RAM
    SOS Core               20464 B    1163 B
    Dynamic Memory Pool    -          1536 B
    TinyOS with Deluge     21132 B    597 B
    Mate VM                39746 B    3196 B

    Memory footprint for base operating system with the ability to
    distribute and update node programs.

    System     Active Time (in 1 min)   Active Time (%)   Overhead relative to TOS (%)
    TinyOS     3.31 sec                 5.22%             NA
    SOS        3.50 sec                 5.84%             5.70%
    Mate VM    3.68 sec                 6.13%             11.00%

    CPU active time for surge application.

    (Plots: Surge tree formation latency, Surge forwarding delay, Surge
    packet delivery ratio.)


    Reconfiguration Performance

    Energy trade offs

    SOS has slightly higher base operating cost

    TinyOS has significantly higher update cost

    SOS is more energy efficient when the system is updated

    one or more times a week

    Module Name     Code Size (Bytes)
    sample_send     568
    tree_routing    2242
    photo_sensor    372

    Energy (mJ): 2312.68
    Latency (sec): 46.6

    Module size and energy profile for installing surge under SOS

    System     Code Size (Bytes)   Write Cost (mJ/page)   Write Energy (mJ)
    SOS        1316                0.31                   1.86
    TinyOS     30988               1.34                   164.02
    Mate VM    NA                  NA                     NA

    Energy cost of surge application update

    System     Code Size (Bytes)   Write Cost (mJ/page)   Write Energy (mJ)
    SOS        566                 0.31                   0.93
    TinyOS     31006               1.34                   164.02
    Mate VM    17                  0                      0

    Energy cost of light sensor driver update


    Platform Support

    Supported micro controllers

    Atmel ATmega128: 4 KB RAM, 128 KB FLASH

    Oki ARM: 32 KB RAM, 256 KB FLASH

    Supported radio stacks

    Chipcon CC1000

    BMAC

    Chipcon CC2420

    IEEE 802.15.4 MAC

    (NDA required)


    Simulation Support

    Source code level network simulation

    Pthread simulates hardware concurrency

    UDP simulates perfect radio channel

    Supports user defined topology & heterogeneous software configuration

    Useful for verifying the functional correctness

    Instruction level simulation with Avrora

    Instruction cycle accurate simulation

    Simple perfect radio channel

    Useful for verifying timing information

    See http://compilers.cs.ucla.edu/avrora/

    EmStar integration under development


    Contiki

    Dynamic loading of programs (vs. static)

    Multi-threaded concurrency managed execution (in addition to event driven)

    Available on MSP430, AVR, HC12, Z80, 6502, x86, ...

    Simulation environment available for BSD/Linux/Windows


    Key ideas

    Dynamic loading of programs

    Selective reprogramming

    Static/pre-linking (early work: EmNets)

    Dynamic linking (recent work: SenSys)

    Key difference from SOS: no assumption of position independence

    Concurrency management mechanisms

    Events and threads

    Trade-offs: preemption, size


    Loadable programs

    One-way dependencies

    Core resident in memory

    Language run-time, communication

    If programs know the core

    Can be statically linked and call core functions and reference core
    variables freely

    Individual programs can be loaded/unloaded

    Need to register their variable and function information with core


    Loadable programs (contd.)

    Programs can be loaded from anywhere

    Radio (multi-hop, single-hop), EEPROM, etc.

    During software development, usually change only one module


    Core Symbol Table

    Registry of names and addresses of all externally visible variables and
    functions of core modules and run-time libraries

    Offers API to linker to search registry and to update registry

    Created when Contiki core binary image is compiled

    multiple pass process
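Conceptually the core symbol table is just a compiled-in array of (name, address) pairs that the linker searches. A toy version — the entries are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy core symbol table: entries are made up for illustration. */

struct symbol { const char *name; void *addr; };

static int core_add(int a, int b) { return a + b; }
static int core_var = 42;

static const struct symbol symtab[] = {
    { "core_add", (void *)core_add },   /* exported core function */
    { "core_var", &core_var },          /* exported core variable */
};

void *symtab_lookup(const char *name) {
    for (size_t i = 0; i < sizeof symtab / sizeof symtab[0]; i++)
        if (strcmp(symtab[i].name, name) == 0)
            return symtab[i].addr;
    return NULL;    /* unresolved: the module load must fail */
}
```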


    Linking and relocating a module

    1. Parse payload into code, data, symbol table, and list of relocation entries

       each entry corresponds to an instruction or address in code or data
       that needs to be updated with a new address

       consists of:

       o a pointer to a symbol, such as a variable name or a function name,
         or a pointer to a place in the code or data

       o address of the symbol

       o a relocation type which specifies how the data or code should be updated

    2. Allocate memory for code & data in flash ROM and RAM

    3. Link and relocate code and data segments

       for each relocation entry, search core symbol table and module symbol table

       if relocation is relative then calculate absolute address

    4. Write code to flash ROM and data to RAM
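Step 3 in miniature: each relocation entry names a symbol and an offset to patch in the loaded image. Real relocations are type- and architecture-specific; this sketch handles only flat 32-bit absolute addresses, and the resolved symbol addresses are made up.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct reloc {
    const char *symbol;   /* name to resolve via the symbol tables */
    uint32_t    offset;   /* where in the image to patch */
};

/* stand-in for searching the core + module symbol tables */
static uint32_t resolve(const char *sym) {
    if (strcmp(sym, "ker_led") == 0) return 0x1000;
    if (strcmp(sym, "state") == 0)   return 0x2000;
    return 0;
}

/* patch each entry's target location with the resolved address */
void apply_relocs(uint8_t *image, const struct reloc *r, int n) {
    for (int i = 0; i < n; i++) {
        uint32_t addr = resolve(r[i].symbol);
        memcpy(image + r[i].offset, &addr, sizeof addr);
    }
}
```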


    Contiki size (bytes)

    Module                    Code (AVR)   Code (MSP430)   RAM
    Kernel                    1044         810             10 + e + p
    Program loader            -            658             8
    Multi-threading library   678          582             8 + s
    Timer library             90           60              0
    Memory manager            226          170             0
    Event log replicator      1934         1656            200
    uIP TCP/IP stack          5218         4146            18 + b


    Revisiting Multi-threaded Computation

    Threads blocked, waiting for events

    Kernel unblocks threads when event occurs

    Thread runs until next blocking statement

    Each thread requires its own stack

    Larger memory usage

    (Figure: kernel serving three threads.)


    Event-driven vs multi-threaded

    Event-driven

    - No wait() statements

    - No preemption

    - State machines

    + Compact code

    + Locking less of a problem

    + Memory efficient

    Multi-threaded

    + wait() statements

    + Preemption possible

    + Sequential code flow

    - Larger code overhead

    - Locking problematic

    - Larger memory requirements

    How to combine them?


    Contiki: event-based kernel with threads

    Kernel is event-based

    Most programs run directly on top of the kernel

    Multi-threading implemented as a library

    Threads only used if explicitly needed

    Long running computations, ...

    Preemption possible

    Responsive system with running computations


    Responsiveness

    Computation in a thread


    Threads implemented atop an event-based kernel

    (Figure: event-based kernel dispatching a queue of events, with threads
    running alongside the event handlers.)


    Implementing preemptive threads 1

    (Figure: a timer IRQ preempts the running thread, returning control to
    the event handler.)


    Implementing preemptive threads 2

    (Figure: the thread voluntarily yields control back to the event handler
    via yield().)


    Memory management

    Memory allocated when module is loaded

    Both ROM and RAM

    Fixed block memory allocator

    Code relocation made by module loader

    Exercises flash ROM evenly


    Protothreads: light-weight stackless threads

    Protothreads: mixture between event-driven and threaded

    A third concurrency mechanism

    Allows blocked waiting

    Requires no per-thread stack

    Each protothread runs inside a single C function

    2 bytes of per-protothread state
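The mechanism can be sketched with the switch-on-line-number trick that underlies Contiki's protothreads. This is a simplified stand-in for the real pt.h/lc.h macros, with invented names:

```c
#include <assert.h>

/* A protothread's only state is the local-continuation line number,
 * so no per-thread stack is needed; "blocking" is return + re-entry. */

typedef struct { unsigned short lc; } pt_t;   /* the ~2 bytes of state */

#define PT_BEGIN(pt)  switch ((pt)->lc) { case 0:
#define PT_WAIT_UNTIL(pt, cond) \
    do { (pt)->lc = __LINE__; case __LINE__: \
         if (!(cond)) return 0; } while (0)
#define PT_END(pt)    } (pt)->lc = 0; return 1

static int flag;   /* the condition the thread blocks on */
static int runs;   /* tracks how far the thread has progressed */

int example_thread(pt_t *pt) {
    PT_BEGIN(pt);
    runs++;                       /* executed once, before blocking */
    PT_WAIT_UNTIL(pt, flag);      /* "blocks" until flag is set */
    runs++;                       /* resumes here after the wait */
    PT_END(pt);
}
```

Each invocation resumes the function at the saved line; local variables do not survive across a wait, which is the price of running without a stack.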


    Mate: A Virtual Machine for Sensor Networks

    Why VM?

    Large number (100s to 1000s) of nodes in a coverage area

    Some nodes will fail during operation

    Change of function during the mission

    Related Work

    PicoJava

    assumes Java bytecode execution hardware

    K Virtual Machine

    requires 160-512 KB of memory

    XML

    too complex and not enough RAM

    Scylla

    VM for mobile embedded system


    Mate features

    Small (16KB instruction memory, 1KB RAM)

    Concise (limited memory & bandwidth)

    Resilience (memory protection)

    Efficient (bandwidth)

    Tailorable (user defined instructions)


    Mate in a Nutshell

    Stack architecture

    Three concurrent execution contexts

    Execution triggered by predefined events

    Tiny code capsules; self-propagate into network

    Built in communication and sensing instructions


    When is Mate Preferable?

    For small number of executions

    GDI example:

    Bytecode version is preferable for a program running less than 5 days

    In energy constrained domains

    Use Mate capsule as a general RPC engine


    Mate Architecture

    (Figure: Mate VM — four subroutines plus Clock, Send, and Receive event
    contexts; each context has its own code, operand stack, return stack,
    and PC, and contexts share heap state via gets/sets.)

    Stack based architecture

    Single shared variable

    gets/sets

    Three events:

    Clock timer

    Message reception

    Message send

    Hides asynchrony

    Simplifies programming

    Less prone to bugs


    Instruction Set

    One byte per instruction

    Three classes: basic, s-type, x-type

    basic: arithmetic, halting, LED operation

    s-type: messaging system

    x-type: pushc, blez

    8 instructions reserved for users to define

    Instruction polymorphism

    e.g. add(data, message, sensing)


    Code Example(1)

    Display Counter to LED

    gets      # Push heap variable on stack
    pushc 1   # Push 1 on stack
    add       # Pop twice, add, push result
    copy      # Copy top of stack
    sets      # Pop, set heap
    pushc 7   # Push 0x0007 onto stack
    and       # Take bottom 3 bits of value
    putled    # Pop, set LEDs to bit pattern
    halt
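To see the stack machine in action, here is a toy interpreter that executes the capsule above against a single shared heap variable. The opcode values and the simulated LED register are invented for illustration — they are not real Mate encodings.

```c
#include <assert.h>
#include <stdint.h>

enum { OP_GETS, OP_PUSHC1, OP_PUSHC7, OP_ADD, OP_COPY,
       OP_SETS, OP_AND, OP_PUTLED, OP_HALT };

static int16_t heap;   /* the single shared variable (gets/sets) */
static uint8_t leds;   /* what putled last wrote */

void run(const uint8_t *code) {
    int16_t stack[8];
    int sp = 0;                         /* next free slot */
    for (int pc = 0; ; pc++) {
        switch (code[pc]) {
        case OP_GETS:   stack[sp++] = heap;              break;
        case OP_PUSHC1: stack[sp++] = 1;                 break;
        case OP_PUSHC7: stack[sp++] = 7;                 break;
        case OP_ADD:    sp--; stack[sp-1] += stack[sp];  break;
        case OP_COPY:   stack[sp] = stack[sp-1]; sp++;   break;
        case OP_SETS:   heap = stack[--sp];              break;
        case OP_AND:    sp--; stack[sp-1] &= stack[sp];  break;
        case OP_PUTLED: leds = (uint8_t)stack[--sp];     break;
        case OP_HALT:   return;
        }
    }
}

/* the "display counter on LEDs" capsule from the listing above */
static const uint8_t capsule[] = {
    OP_GETS, OP_PUSHC1, OP_ADD, OP_COPY, OP_SETS,
    OP_PUSHC7, OP_AND, OP_PUTLED, OP_HALT
};
```

Each invocation increments the heap counter and mirrors its low three bits on the LEDs, which is exactly the behavior the commented listing describes.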


    Code Capsules

    One capsule = 24 instructions

    Fits into single TOS packet

    Atomic reception

    Code Capsule

    Type and version information

    Type: send, receive, timer, subroutine


    Viral Code

    Capsule transmission: forw

    Forwarding other installed capsule: forwo (use within clock

    capsule)

    Mate checks the version number on reception of a capsule

    -> if it is newer, install it

    Versioning: 32-bit counter

    Disseminates new code over the network


    Component Breakdown

    Mate runs on mica with 7286 bytes code, 603 bytes RAM


    Network Infection Rate

    42 node network in 3 by

    14 grid

    Radio transmission: 3 hop

    network

    Cell size: 15 to 30 motes

    Every mote runs its clock

    capsule every 20 seconds

    Self-forwarding clock

    capsule


    Bytecodes vs. Native Code

    Mate IPS: ~10,000

    Overhead: Every instruction executed as separate TOS task


    Installation Costs

    Bytecodes have computational overhead

    But this can be compensated by using small packets

    on upload (to some extent)


    Customizing Mate

    Mate is general architecture; user can build customized VM

    User can select bytecodes and execution events

    Issues: Flexibility vs. Efficiency

    Customizing increases efficiency, at the cost of flexibility when
    requirements change

    Java's solution:

    General computational VM + class libraries

    Mate's approach:

    More customizable solution -> let user decide


    How to

    Select a language

    -> defines VM bytecodes

    Select execution events

    -> execution context, code image

    Select primitives

    -> beyond language functionality


    Constructing a Mate VM

    This generates a set of files

    -> which are used to build the TOS application and to configure the
    script program


    Compiling and Running a Program

    Write programs in the scripter

    VM-specific binary code

    Send it over the network to a VM


    Bombilla Architecture

    Once context: performs operations that only need single execution

    16 word heap shared among the contexts; setvar, getvar

    Buffer holds up to ten values; bhead, byank, bsorta


    Bombilla Instruction Set

    basic: arithmetic, halt, sensing

    m-class: access message header

    v-class: 16 word heap access

    j-class: two jump instructions

    x-class: pushc


    Enhanced Features of Bombilla

    Capsule Injector: programming environment

    Synchronization: 16-word shared heap; locking scheme

    Provide synchronization model: handler, invocations,

    resources, scheduling points, sequences

    Resource management: prevent deadlock

    Random and selective capsule forwarding

    Error State


    Discussion

    Comparing to the traditional VM concept, is Mate platform independent?
    Can we have it run on heterogeneous hardware?

    Security issues:

    How can we trust the received capsule? Is there a way to prevent version
    number race with an adversary?

    In viral programming, is there a way to forward messages other

    than flooding? After a certain number of nodes are infected by

    new version capsule, can we forward based on need?

    Bombilla has some sophisticated OS features. What is the size of

    the program? Does sensor node need all those features?

    .NET MicroFramework (MF) Architecture


    .NET MF is a bootable runtime environment tailored for embedded development

    MF services include:

    Boot Code

    Code Execution

    Thread Management

    Memory Management

    Hardware I/O

    .NET MF Hardware Abstraction Layer (HAL)


    Provides an interface to access hardware and peripherals

    Relevant only for system, not application developers

    Does not require operating system

    Can run on top of one if available

    Interfaces include:

    Clock Management

    Core CPU

    Communications

    External Bus Interface Unit (EBIU)

    Memory Management

    Power

    .NET MF Platform Abstraction Layer (PAL)


    Provides hardware independent abstractions

    Used by application developers to access system resources

    Application calls to PAL managed by Common Language

    Runtime (CLR)

    In turn calls HAL drivers to access hardware

    PAL interfaces include:

    Time

    Memory Management

    Input/Output

    Events

    Debugging

    Storage

    Threading Model


    User applications may have multiple threads

    Represented in the system as Managed Threads serviced by

    the CLR

    Time sliced context switching with (configurable) 20ms

    quantum

    Threads may have priorities

    CLR has a single thread of execution at the system level

    Uses cooperative multitasking

    Timer Module


    MF provides support for accessing timers from C#

    Enables execution of a user specified method

    At periodic intervals or one-time

    Callback method can be selected when timer is

    constructed

    Part of the System.Threading namespace

    Callback method executes in a thread pool thread provided

    by the system

    Timer Interface


    Callback: user specified method to be executed

    State: information used by callback method; may be null

    Duetime: delay before the timer first fires

    Period: time interval between callback invocations

    Change method allows user to stop timer: change period to -1

    ADC Extension to the HAL


    Extended MF HAL to support ADC APIs

    High-precision, low latency sampling using hardware clock

    Critical for many signal processing applications

    Supported API functions include

    Initialize: initialize ADC peripheral registers and the clocks

    UnInitialize: reset ADC peripheral registers and uninitialize clocks

    ConfigureADC: select ADC parameters (mode, input channels, etc)

    StartSampling: starts conversion on selected ADC channel

    GetSamplingStatus: whether in progress or complete

    GetData: returns data stored in ADC data register

    Radio Extension to the HAL


    Extended the MF HAL to support radio APIs

    Supported API functions include

    On: powers on radio, configures registers, SPI bus, initializes clocks

    Off: powers off radio, resets registers, clocks and SPI bus

    Configure: sets radio options for 802.15.4 radio

    BuildFrame: constructs data frame with specified parameters

    destination address, data, ack request

    SendPacket: sends data frame to specified address

    ReceivePacket: receives packet from a specified source address

    MAC Extension to PAL


    Built-in, efficient wireless communication protocol

    OMAC (Cao, Parker, Arora: ICNP 2006)

    Receiver centric MAC protocol

    Highly efficient for low duty cycle applications

    Implemented as a PAL component natively on top of HAL radio extensions
    for maximum efficiency

    Exposes rich set of wireless communication interfaces

    OMACSender

    OMACReceiver

    OMACBroadcast