Technical Report, IDE0711, January 2007

THE EVALUATION OF TINYOS WITH WIRELESS SENSOR NODE OPERATING SYSTEMS

Master's thesis in Computer Systems Engineering

Famoriyo Olusola

Supervised by: Prof. Tony Larsson, Halmstad University

School of Information Science, Computer and Electrical Engineering (IDE), Halmstad University



THE EVALUATION OF TINYOS WITH WIRELESS SENSOR NODE
OPERATING SYSTEMS
Famoriyo Olusola
School of Information Science, Computer and Electrical Engineering, IDE
Halmstad University
Box 823, S-301 18 Halmstad, Sweden
January 2007
Preface and Acknowledgment
This report is the result of a master's project carried out at the School of Information Science, Computer and Electrical Engineering (IDE section) at Halmstad University. The project concludes the Master of Science in Computer Systems Engineering. It was initiated in March 2006 and concluded in January 2007.
The work would not have been possible without the help and interest of a number of people who contributed immensely to the success of the thesis, financially, morally and educationally:
The supervisor of the thesis at Halmstad University, Prof. Tony Larsson, for his huge support and advice during the whole duration of the project.
Markus Adolfsson, Anders Ahlander and Veronica Gaspes of Halmstad University, for their courses and support, which really helped me in the process of completing the thesis.
I would also like to say a big thank you to my Dad and Mum, Mr and Mrs J.G. Famoriyo, for the financial support and love given to me throughout the duration of my studies in Sweden, and also to all my family members who have contributed in one way or the other.
Finally, I would like to acknowledge the support of all my good friends in Sweden, Canada, USA, UK and Nigeria.
2.4 Memory: Compile and Run Time
2.5 SurgeC [1]
2.7 MantisOS Architecture
3.2 Compilation process of TestSerial Application on all OS
3.3 Directory Structure of TinyOS-1.x
3.4 Directory Structure of TinyOS-2.x
3.5 Application Services in TinyOS-1.x
3.6 Application Services in TinyOS-2.x
3.7 The Cygwin Interface of Blink Application
4.1 Comparative overview of features of sensor node operating systems
4.2 Program memory size of applications in TinyOS-1.x in Test Environment (a)
4.3 Program memory size of applications in TinyOS-1.x in Test Environment (b)
4.4 Program memory size of applications in TinyOS-1.x on simulation
4.5 Program memory size of applications in TinyOS-2.x on Test-Bed
4.6 Program memory size of Arbiter applications in TinyOS-2.x on Test-Bed
4.7 Program memory size of Storage applications in TinyOS-2.x on Test-Bed
4.8 Program memory size of the TestSerial application in operating systems on Test-Bed
A.1 Library files of the TestSerial application on sensor node operating systems
Contents
2.1.1 Processes management
2.1.2 Storage management
2.1.3 Power management
2.1.5 Memory Management
2.1.6 Security Management
2.2.1 TINYOS
2.2.2 CONTIKI
3.3 TOOLS
3.3.1 Java
3.3.3 Graphviz
3.3.6 Cygwin
3.3.7 Binutils
3.3.8 LIBC
3.3.9 Python-tools
3.3.10 Base
3.3.11 Avarice
3.3.12 Insight
3.4 PLATFORM
5.1 Reflections on the Evaluation
5.2 Tasks and Schedulers in TinyOS
5.2.1 Tasks in TinyOS-1.x
5.3 CONCLUSION
Abstract
Wireless sensor nodes fall somewhere between single-application devices that do not need an operating system and the more capable, general-purpose devices with the resources to run a traditional embedded operating system. Sensor node operating systems such as TinyOS, Contiki, MantisOS and SOS, which are discussed in this report, exhibit characteristics of both traditional embedded systems and general-purpose operating systems, providing a limited number of common services for application developers, linking software and hardware.
These common services typically include platform support, hardware management of sensors, radios and I/O buses, and application construction. They also provide services needed by applications, including task coordination, power management, adaptation to resource constraints, and networking. The evaluation concentrates on TinyOS, including an analysis of resource management and flexibility in versions 1.x and 2.x, and a comparison with the other wireless sensor node operating systems.
Keywords: TinyOS, Sensor node Operating system, TelosB, nesC, Task, Applications, Evaluation
1 INTRODUCTION
Wireless sensor nodes, also known as motes, are single-application devices that combine in units of one, tens or thousands to form a wireless sensor network, in which all devices communicate among themselves and pass information from node to node towards a central application that coordinates the gathered information at a terminal for human use [2]. For the received information to be useful, a general-purpose application that coordinates all the information has to be put in place; this can be regarded as an operating system, with the ability to offer both traditional embedded and general-purpose operating system features.
A sensor node operating system must exhibit characteristics of both traditional embedded and general-purpose operating systems, providing a number of common services for application developers, linking software and hardware implementation [3].
The most common services provided typically include process management, storage management, power management, input and output systems, memory management, and protection and security. Other important internal services include task coordination, time management, adaptation to resource constraints and networking capabilities. In practice, many embedded devices do not use or need an operating system, but for the operations carried out in a sensor network, an operating system simplifies the handling of constraints that cannot be addressed in the hardware implementation due to factors such as size, cost and complexity. Maximizing performance within the constraints of limited hardware resources, by integrating a software approach to cater for the remaining deficiencies, is therefore of paramount importance [4, 5].
In performance optimization, the process emphasizes an efficient design of interfaces while searching for the optimum mapping of modules to hardware and software, focusing on those parts of the design that could alternatively be implemented in software or hardware, and on their corresponding interfaces. The prototyping environment for the sensor node operating system gathers characteristic information about hardware and software modules [3, 6, 4, 7] and stores it in a library of reusable modules. Parameters for the implementation of hardware and software interfaces include execution time, size, cost, resources, interoperability, reliability, scalability, power management and memory management, amongst others.
The introduction of a sensor node operating system has the advantage of giving priority to computational tasks, which can be catered for by event handlers within the various capabilities of the sensor mote, depending on the service being rendered by the remote sensor. Networks of sensors are employed to offer services such as environmental monitoring, acoustic detection, habitat monitoring, medical monitoring, military surveillance and process monitoring [4, 8, 5, 9]. A sensor network can perform one of these services, or a combination of them, based on the detection of light, temperature, sound and the like.
Sensor node operating systems do not usually offer as many features as a general-purpose desktop operating system such as Linux or Windows, but they must be able to stand alone in coordinating the services needed to provide all the basic necessary features, which include small size, low energy consumption, diversity in design, usage and operation, limited hardware parallelism and a good controller hierarchy.
The sensor node operating system configuration must also include standard networking protocols, data acquisition tools, distributed services, and drivers to read sensor information for it to be useful in a wireless sensor network.
In this report, we evaluate sensor node operating systems that are commonly employed in practice. The most commonly used sensor node operating systems are described in the background section, using TinyOS as a benchmark, since it is currently the most widely employed sensor node operating system; the other commonly used operating systems considered are Contiki, MantisOS and SOS. The evaluation is carried out on these four wireless sensor node operating systems to analyze their effectiveness, comparing their memory, mode of execution and energy consumption. We also run TinyOS on a different wireless sensor node and examine how the expected features perform, both in conformance with TinyOS and generally as an embedded operating system. The priority goals set out by the four wireless sensor node operating systems are stated below.
TinyOS's [7, 1, 10] design is motivated by four main goals, which are the capabilities to cater for:
• Limited resources
• Power management.
The construction of TinyOS is built around these four main goals, and it encompasses all the general features expected of a sensor node operating system in order to perform effectively and maximize performance.
Contiki [11, 12, 13] has its design motivated by;
• Lightweight
• Event-driven kernel
• Protothreads / preemptive multi-threading (optional).
SOS [3, 14] is another sensor node operating system mainly motivated by;
• Dynamic re-configurability through fault tolerance, heterogeneous deployment and new programming paradigms.
MantisOS [15, 16] is another sensor node operating system, designed with time-sliced multi-threading and offering all the basic features rendered by a good sensor node operating system.
2.1 Sensor Node Operating System Concepts
Sensor node operating systems are built on certain fundamental principles that guide their structure [17, 18, 19]. All of these principles must be kept in mind when creating an operating system for sensor nodes.
2.1.1 Processes management
Processes management controls how a program's execution takes place, what goes on at various instances, and what has to be catered for to provide smooth running of all processes. How this is achieved is one of the fundamental issues that need to be taken into account when creating a sensor node operating system. Inter-process communication among processes must be well administered, and a good coordination pattern formulated when building the operating system. Switching between processes and contexts determines which services must be running, which are to be blocked, and which should be made ready for use while the other services continue running in the background [18, 20]. The three states a task can take are shown in Fig 2.1.
Figure 2.1: Task Execution States
The ready state indicates a task that is about to run but cannot, because a higher priority task is still running.
Blocked describes a task requesting a resource that is not available: it is waiting for an event handler or a resource to be freed, or delaying itself by waiting for a timing delay to end. The blocked state is very important, because without it lower priority tasks would never have the opportunity to run; starvation would occur if higher priority tasks did not block. A blocked task stays blocked until its blocking condition is met, for example by the release of a semaphore token, the expiry of its time delay, or the arrival of the message it is waiting for in a message queue.
Running is the state of the task with the highest priority; it is implemented by loading the processor's registers with the task's context, which is then executed. A task may move back from the running state to the ready state when it is preempted by a higher priority task, as in step 4 of Fig 2.2. A running task can move to the blocked state by making a call that requests an unavailable resource, a call to delay the task for a period of time, or a request to wait for an event to occur.
The steps in Fig 2.2 show what happens in the ready state, using four tasks with the same or different priority levels. Task 1 has the highest priority (30), tasks 2 and 3 are the next-highest priority tasks (50), and task 4 has the lowest priority (60).
Step 1: Tasks 1, 2, 3 and 4 are in the task ready list, waiting to be run.
Step 2: Since task 1 has the highest priority (30), it is the first to run; the kernel moves it from the waiting list to the running state. While task 1 is executing, it makes a blocking call, and the kernel moves it to the blocked state, takes task 2 (50), which is now the highest priority task in the ready list, and moves it to the running state.
Step 3: When task 2 makes its own blocking call, task 3 is moved to the running state.
Step 4: As task 3 runs, it frees up the resource requested by task 2; the kernel then returns task 2 to the ready list and inserts it before task 4, because it still has a higher priority than task 4, while task 3 continues as the currently running task.
Step 5: If task 1 becomes unblocked at any point during this, the kernel moves task 1 to the ready list, and task 3 is moved back to the ready list so that task 1 can run, since it still has a higher priority level than all the remaining tasks.
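The ready/blocked/running bookkeeping walked through in the steps above can be sketched in plain C. This is an illustrative model, not code from any of the operating systems discussed; as in the text, a lower number means a higher priority (task 1 = 30 is the highest).

```c
#include <assert.h>
#include <stddef.h>

enum state { READY, RUNNING, BLOCKED };

struct task { int id; int prio; enum state st; };

/* The four tasks of Fig 2.2, all initially on the ready list (step 1). */
struct task tasks[4] = {
    {1, 30, READY}, {2, 50, READY}, {3, 50, READY}, {4, 60, READY}
};

/* The kernel's choice: highest-priority READY task (lowest prio value;
 * the earlier entry wins among equal priorities). */
struct task *next_ready(void) {
    struct task *best = NULL;
    for (int i = 0; i < 4; i++)
        if (tasks[i].st == READY && (best == NULL || tasks[i].prio < best->prio))
            best = &tasks[i];
    return best;
}

/* Move the chosen task to RUNNING; returns its id (0 if none is ready). */
int dispatch(void) {
    struct task *t = next_ready();
    if (t == NULL) return 0;
    t->st = RUNNING;
    return t->id;
}

/* A blocking call made by the running task (steps 2 and 3). */
void block(int id) { tasks[id - 1].st = BLOCKED; }

/* A blocked task whose blocking condition is met returns to the ready
 * list (steps 4 and 5). */
void unblock(int id) { tasks[id - 1].st = READY; }
```

Running `dispatch`/`block`/`unblock` in the order of steps 1-5 reproduces the schedule in the text: task 1 runs first, tasks 2 and 3 follow as each predecessor blocks, and task 1 runs again as soon as it unblocks.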
Ready, blocked and running are represented in some embedded systems by different notations; in VxWorks kernels [21] they are represented as suspended, pended and delayed, where pended and delayed are sub-states of the blocked state. These are not visible to the user, but the operating system designer must take all of this into consideration at the initial building stages. A good scheduling structure, giving priority to tasks, also helps the operating system to know when to schedule operations that do not require many resources and yet are useful.

Figure 2.2: Tasks Scheduling Operation

When tasks are well handled, they provide room for adequate interrupt mapping, queues and synchronization. Task interrupts in the operating system must also be flexible, or triggered by certain event conditions. Data race conditions [7, 10] must be avoided, and a standard such as the IEEE 802.15.4 networking protocol [22] must be employed to achieve an efficient wireless sensor network.
2.1.2 Storage management
Storage management deals with how sensor motes are manufactured, either for a specific operating system or for interoperability with all other platforms. Sensor operating systems should not be designed for just one particular type of hardware; they must be able to run on several platforms, either alone or simultaneously across different motes. Examples of commonly known sensor motes are the Mica, MicaZ, Telos, ESB and Tmote Sky [23, 24, 25, 22], which all run on either the AVR ATmega128 microprocessor platform or the Texas Instruments MSP430 microprocessor, with the minimal memory capacity that the operating system builds on in reducing size and cost.
The TelosB used in this report has an IEEE 802.15.4 / ZigBee [25] compliant RF transceiver operating at 2.4 to 2.4835 GHz, a globally compatible ISM band, and transmits data at a rate of 250 kbps. It has an integrated onboard antenna and low power consumption. The goal of a platform that caters for extremely constrained hardware resources is achieved through the software services carried out by the operating system; most sensor node operating systems are written in the C language, and some in extensions of C, such as nesC in the case of TinyOS.

Figure 2.3: TelosB Sensor Mote
2.1.3 Power management
Power management can also be regarded as one of the key factors in storage management: since motes depend on batteries to run, the operating system must provide additional services such as the sleep and wake-up of motes, to save and extend battery life and so provide adequate power to the mote for longer.
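A back-of-envelope sketch illustrates why these sleep/wake-up services matter so much. The current and capacity figures below are illustrative assumptions (roughly the order of magnitude of a TelosB-class mote on AA batteries), not measured values from this thesis:

```c
#include <assert.h>

/* Average current draw (uA) for a node that is active for `duty`
 * fraction of the time and asleep for the rest. */
double avg_current_ua(double active_ua, double sleep_ua, double duty) {
    return duty * active_ua + (1.0 - duty) * sleep_ua;
}

/* Battery life in hours for a capacity given in microamp-hours. */
double life_hours(double capacity_uah, double avg_ua) {
    return capacity_uah / avg_ua;
}
```

With assumed figures of 20 mA when active, 10 uA asleep, and about 2500 mAh of battery, an always-on node lasts roughly 125 hours, while a 1% duty cycle stretches this to over 10,000 hours; the operating system's sleep/wake-up handling therefore directly determines node lifetime.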
2.1.4 Input and Output System
Input and output systems deal with what triggers the sensors in performing their duties of data acquisition, activation, communication and so on. This also includes the design principles of the various sensor motes and how their porting is provided for in the sensor node operating system. These systems provide the services listed above in the operation of a sensor.
2.1.5 Memory Management
Memory management is a way of allocating portions of memory to programs at their request, and releasing them for reuse when no longer needed. It deals with the program memory and the flash memory; flash memory is currently used for the storage of application code, the text segment and data. Two approaches used in the management of memory for sensor operating systems are:
• Allocation of physical memory into software-usable memory objects (allocation).
• Availability of memory objects and its management by software (management).
CHAPTER 2. BACKGROUND
To increase performance, memory usage must be shared by the operating system, since memory is a large array of words or bytes, each with its own address. The operating system fetches instructions from memory and processes them; after an instruction has been executed, the results are stored back in memory. The binding of instructions and data to memory addresses is carried out in various ways, depending on the principle employed by the sensor node operating system.
In a compile time scheme, memory usage is known ahead of time: a process resides at a particular location, and compiler code is generated from that location, extending up to the completion of the process. If the location changes at any point, the code has to be recompiled to reflect the changes in the memory address location.
In the load and execution scheme, instruction binding is delayed until the process is ready to be loaded and executed. When an address location changes, binding is simply delayed until the next run. The kernel controls the memory management unit, and thus its files and attributes must be protected, which is one of the reasons why security should be considered in a sensor node operating system. Fig 2.4 shows the basic memory operation for compile time and run time implementations. The compile and run time schemes essentially describe static and dynamic memory allocation, the two types of memory allocation in sensor node operating systems. Many of the problems that arise in operating systems come from the way their memory allocators have been implemented.
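The static style can be sketched as a fixed pool: every buffer is a global sized at compile time, so the total RAM footprint is known before the node ever runs, and "allocation" degenerates to handing out slots. The pool below is a hypothetical illustration of this idea, not code from any of the operating systems discussed:

```c
#include <assert.h>
#include <stddef.h>

#define MSG_SIZE 32   /* bytes per message buffer  */
#define POOL_LEN  4   /* buffers in the whole pool */

/* Static allocation: the entire footprint (4 x 32 bytes plus flags)
 * appears in the binary's data/BSS segment at compile/link time. */
static unsigned char msg_pool[POOL_LEN][MSG_SIZE];
static unsigned char in_use[POOL_LEN];

/* Hand out a free slot; returns NULL when the pool is exhausted -- a
 * failure mode that is easy to reason about on a device with ~10 KB of
 * RAM, unlike heap fragmentation under dynamic allocation. */
void *pool_alloc(void) {
    for (int i = 0; i < POOL_LEN; i++)
        if (!in_use[i]) { in_use[i] = 1; return msg_pool[i]; }
    return NULL;
}

void pool_free(void *p) {
    for (int i = 0; i < POOL_LEN; i++)
        if (p == msg_pool[i]) in_use[i] = 0;
}
```

Dynamic allocation (malloc-style) would instead bind buffers to addresses at run time, which is more flexible but makes worst-case memory use much harder to guarantee on a constrained mote.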
A well-structured memory manager reclaims garbage memory when the system is idle, and the time taken to perform a garbage collection cycle must at worst be proportional to the maximum amount of concurrent live memory. SRAM, a volatile random access memory, is generally fast for reads and writes, but consumes more energy than flash and is even more expensive; for this reason, it is used in small quantities as data memory.
Many sensor node operating systems are constrained in terms of energy, SRAM and flash memory. Sensor network nodes ("motes") typically support up to 10 KB of RAM and up to 1 MB of flash memory. The MSP430 has a single (von Neumann) address space, with data RAM and program ROM all accessed through a single 16-bit pointer. It supports a variety of addressing modes and has dedicated stack instructions and a stack pointer register [26]. EEPROM (electrically erasable programmable read-only memory) acts similarly to flash memory: it is nonvolatile, although slower to write, and can only sustain a limited number of writes.
2.1.6 Security Management
Processes handled by a sensor operating system's file system must be protected from one another's activities, using various mechanisms to ensure that only processes with authority can gain access to the resources of the operating system. The kernel must be created in a way that protects itself, and other processes, from being accessed by irregular processes, within an address space: a collection of virtual memory locations where access rights apply. An address space is a unit for the management of a process's virtual memory.

Figure 2.4: Memory : Compile and Run Time
The execution of application code is done in a distinct user-level address space for the application. A user process, or user-level process, executes in user mode in a user-level address space with restricted memory access; kernel execution takes place in the kernel's address space. Both application processes and kernel processes therefore need to be protected. Control can be transferred from the user-level address space to the kernel's address space via an exception such as an interrupt or a system call trap, the invocation mechanism for resources managed by the kernel. The kernel shares key data structures, such as the queue of runnable threads, yet still keeps some of its working data private.
The main reason for including protection is to prevent mischievous or intentional violation of access restrictions. Protection can also improve reliability by detecting errors at the interfaces of component subsystems [18]. File systems in sensor node operating systems follow certain principles in administering permissions, granting access rights to specific users for specific uses of their processes. The permissions are managed in three different address spaces, which have READ, WRITE and EXECUTE options on shared files. A component of an operating system can operate with a single one of these options or with a combination of them.
Within these three domains, processes can dynamically switch permissions between themselves, and each domain may be represented either by a process or by a system/procedure call to other processes, depending on the call made from the loader.
2.2 SENSOR NODE OPERATING SYSTEM
2.2.1 TINYOS
TinyOS is a component-based operating system with a software architecture designed around a static resource allocation model for resource-constrained sensor network nodes [27]; very little of it is based on dynamic allocation, which can introduce difficult failure modes into applications. TinyOS is intended for wireless sensor networks executing concurrent, reactive programs that operate under severe memory and power constraints [1, 10] in an event-driven way, with the scheduler operating within a single thread of execution. System execution is typically triggered when an event posts one or more tasks to the queue and quickly leaves its event context [28].
The components in TinyOS are written in nesC ("network embedded systems C") [7], a dialect of C that adds new features to support the TinyOS structure and execution model. A TinyOS application gives a warning when any global variable that can be touched by an interrupt handler is accessed outside of an atomic section. An application is a graph of components, each with its interfaces; it typically conserves energy by sleeping most of the time when not in use, running at a low duty cycle in an interrupt-driven manner. Commands, tasks and events are the mechanisms for inter-component communication in TinyOS [10, 29]. A command is a request sent to a component to perform a service, while an event signals the completion of that service. Commands and events cannot block, because they are decoupled from the task through a split-phase mechanism.
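The split-phase decoupling just described can be sketched in plain C with function pointers: the command starts an operation and returns immediately, and a separate event callback signals completion later. All names here (`send_msg`, `radio_tx_complete`, `on_send_done`) are illustrative stand-ins, not real TinyOS interfaces:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*send_done_t)(int error);

static send_done_t pending_done;  /* event to signal on completion */
static int radio_busy;

/* Command: request the send and return at once -- it never blocks.
 * Returns 0 on acceptance, -1 if the radio is already busy. */
int send_msg(const char *msg, send_done_t done) {
    if (radio_busy) return -1;
    radio_busy = 1;
    pending_done = done;
    (void)msg;                    /* real code would start the TX here */
    return 0;
}

/* In a real system the radio interrupt would call this; it signals the
 * completion event, finishing the second phase of the operation. */
void radio_tx_complete(void) {
    radio_busy = 0;
    if (pending_done != NULL) pending_done(0);
}

/* Example completion event a client component might implement. */
int done_calls;
void on_send_done(int error) { (void)error; done_calls++; }
```

Because the command returns before the operation finishes, the caller never spins or sleeps waiting for the radio; completion arrives later as an event, which is how TinyOS avoids blocking in a single thread of execution.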
TinyOS maintains a two-level concurrency scheduling structure: a small amount of processing associated with hardware events is performed immediately, while long-running tasks can be interrupted. The execution model is similar to finite state machine models, but considerably more programmable. The task scheduler uses a non-preemptive FIFO scheduling policy; interrupts may preempt tasks (and each other), but not during atomic sections, which are implemented by disabling interrupts. Table 2.1 below shows some of the core interfaces provided by TinyOS. We discuss the execution model of TinyOS, its components, and the nesC language in the following sections.
Interface          Description
ADC                Sensor hardware interface
Clock              Hardware clock
EEPROM Read/Write  EEPROM read and write
Hardware ID        Hardware ID access
I2C                Interface to I2C
Leds               Red/yellow/green LEDs
MAC                Radio MAC layer
Mic                Microphone interface
Pot                Hardware potentiometer for transmit
Random             Random number generator
ReceiveMsg         Receive Active Message
SendMsg            Send Active Message
StdControl         Init, start, and stop components
Time               Get current time
TinySec            Lightweight encryption/decryption
WatchDog           Watchdog timer control

Table 2.1: Core Interfaces provided by TinyOS-2.x [1]
2.2.1.1 Execution Model
TinyOS uses an event-based execution model to provide the levels of operating efficiency required in wireless sensor networks [30]. An event is handled by executing its handler immediately, because events are time critical. In a typical operating system, stack memory is allocated for storing activation records and local variables during the execution of a task, which allows a separate stack for each running task. TinyOS was built around the memory limitations of low-power microcontrollers: its applications consist of multiple tasks that all share a single stack, reducing the amount of memory used during execution. Because of this design, a task must run to completion before giving up the processor and stack memory to another task. Tasks can be preempted by hardware event handlers, which also run to completion before giving up the shared stack; a task must store any required state in global memory. The TinyOS scheduler follows a FIFO policy, but other policies, such as earliest deadline first, round-robin and priority scheduling, have also been implemented [30, 29].
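The scheduler described above can be sketched as a queue of parameter-less functions, each run to completion on the single shared stack before the next is dequeued. This is an illustrative single-threaded model, not the actual TinyOS scheduler; in particular it omits the interrupt masking a real scheduler needs when handlers post tasks concurrently:

```c
#include <assert.h>

typedef void (*task_t)(void);

#define QLEN 8
static task_t queue[QLEN];
static int head, tail, count;

/* Post a task; returns 0 when the queue is full (the post fails). */
int post_task(task_t t) {
    if (count == QLEN) return 0;
    queue[tail] = t;
    tail = (tail + 1) % QLEN;
    count++;
    return 1;
}

/* Drain the queue in FIFO order; returns how many tasks ran. Each call
 * t() returns only when the task is finished -- run to completion. */
int run_tasks(void) {
    int ran = 0;
    while (count > 0) {
        task_t t = queue[head];
        head = (head + 1) % QLEN;
        count--;
        t();
        ran++;
    }
    return ran;
}

/* Two toy tasks recording their execution order. */
int order[4], steps;
void task_a(void) { order[steps++] = 1; }
void task_b(void) { order[steps++] = 2; }
```

Because every task runs to completion, no task ever needs its own stack frame saved across a context switch, which is exactly why the single-stack design fits in a few kilobytes of RAM.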
• Event Based Programming
In TinyOS, a single execution context is shared between unrelated processing tasks, with each system module designed to operate by continually responding to incoming events. An event arrives with its required execution context, and when event processing is completed, the context is returned to the system. It has been shown [31, 32] that event-based programming can achieve high performance in concurrency-intensive applications.
• Task
A task is a function which a component tells TinyOS to run later: there are times when a component needs to do something, but it can be done a little later, giving TinyOS the ability to defer the computation so it can first deal with everything else that is waiting. An event-based program is limited by long-running calculations that affect the execution of other time-critical sub-systems; if an event does not complete in due time, all other system functions are halted while the long-running computation executes fully. Tasks can be scheduled at any time but do not execute until all currently pending events are completed. They can also be interrupted by low-level system events, allowing long computations to run in the background while system event processing continues. Priority scheduling for tasks is usually implemented on top of the FIFO queue, but it is unusual to have multiple outstanding tasks.
• Atomicity
The TinyOS task primitive also provides a mechanism for creating mutually exclusive sections of code for long-running computations. Although non-preemption eliminates races among tasks, there are still potential races between code reachable from tasks (synchronous code) and code reachable from at least one interrupt handler (asynchronous code). In interrupt-based programming, data race conditions create bugs that are difficult to detect. An application uses tasks to guarantee that data modification occurs atomically with respect to other tasks: because tasks run to completion without being interrupted by other tasks, all tasks are atomic with respect to each other, eliminating data races between tasks. To reinstate atomicity with respect to interrupt handlers, the programmer can either convert all of the conflicting code to tasks (synchronous only) or use atomic sections to update the shared state.
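On a single-processor microcontroller, an atomic section of the kind mentioned above boils down to briefly masking interrupts around a shared read-modify-write. The sketch below models the idea; the flag stands in for the hardware interrupt-enable bit, where real code would use the platform intrinsics instead:

```c
#include <assert.h>

static int irq_enabled = 1;   /* stand-in for the hardware IRQ flag  */
int shared_count;             /* state touched by tasks and handlers */

/* Enter an atomic section; returns the previous interrupt state so
 * that sections nest correctly. */
int atomic_begin(void) {
    int was = irq_enabled;
    irq_enabled = 0;          /* real code: disable interrupts */
    return was;
}

void atomic_end(int was) {
    irq_enabled = was;        /* restore rather than blindly re-enable */
}

/* A task-side update that would otherwise race with a handler. */
void bump_shared(void) {
    int s = atomic_begin();
    shared_count++;           /* read-modify-write is now indivisible */
    atomic_end(s);
}

int irq_state(void) { return irq_enabled; }
```

Saving and restoring the previous state, instead of unconditionally re-enabling interrupts, is what allows atomic sections to nest safely: an inner section ending must not re-enable interrupts while an outer section is still open.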
2.2.1.2 TinyOS Components
The component model provided by nesC allows an application programmer to combine independent components, accessed through interfaces, into an application-specific configuration, rather than developing libraries of functions to be called by user programs. Components are separate blocks of code whose boundaries are defined by interfaces for both input and output. To provide an interface, a component must implement the set of commands defined by it; to use an interface, a component must implement a different set of functions, called events [1]. A component that wants to call the commands of a specific interface must therefore also implement the events of that same interface. Components that are not used are not included in the application image. Once a set of components has been chosen, they are organized in an application-specific way to implement the desired functionality.
A component has two kinds of interfaces: those it provides and those it uses. A component should split large computations into smaller chunks that execute one after the other. Configurations wire the functional components together; a component may provide or use multiple interfaces, as well as multiple instances of a single interface, and components are re-usable as long as each instance is given a separate name. A TinyOS component has four interrelated parts: a set of command handlers, a set of event handlers, an encapsulated private data frame, and a bundle of simple tasks. A typical example of how TinyOS components are interrelated is the StdControl interface of Main in a Surge application (Fig. 2.5 [10]), which is represented as a directed graph in which the wiring of commands and events between components defines the edges of the graph.
StdControl is wired to Photo, TimerC and Multihop; each component has its own namespace, which it uses to refer to the commands and events of its interfaces, and interfaces are bidirectional.
The programming structure is represented below, with each interface declared separately.
interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}
interface Timer {
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  event result_t fired();
}
interface SendMsg {
  command result_t send(TOS_Msg *msg, uint16_t address, uint8_t length);
  event result_t sendDone(TOS_Msg *msg, result_t success);
}
Typical TinyOS interface declarations, based on [1].

The SurgeC application built from these interfaces is represented in the diagram below, showing its components, interfaces, events and commands.
Figure 2.5: SurgeC [1]
2.2.1.3 The nesC Language
The implementation of TinyOS is written in a dialect of C [7] known as nesC, whose main goals are to allow strict checking at compile time and to ease the development of TinyOS components. It was developed to replace the large macros that were needed in plain C to express the component graph and the command/event interfaces between components. nesC provides a programming model in which components interact through interfaces, and a concurrency model based on run-to-completion tasks that event handlers may interrupt; the compiler detects data races at compile time. The concurrency model of nesC also forbids blocking calls in interfaces [7]. Since most program analysis is done at compile time, cross-component optimization is possible, providing function in-lining and elimination of unreachable code. nesC is a static language: memory is allocated statically and there are no function pointers. Component behavior is implemented in modules of C-like code, while configurations compose the system using a select-and-wire mechanism. Although nesC provides no memory protection, variables still cannot be accessed directly from outside a component. These limitations help the programmer to know the resource requirements of a given application, and to determine those requirements in advance rather than face the error-prone behavior that run-time characteristics might otherwise impose.
2.2.2 CONTIKI
Contiki is a hybrid-model operating system based on a very lightweight event-driven kernel with protothreads and optional per-process preemptive multithreading. It is designed in a flexible manner that allows individual programs and services to be dynamically loaded and unloaded in a running system across a large number of sensor devices [11, 13, 12].
Contiki is designed for a variety of constrained systems, ranging from modern 8-bit microcontrollers for embedded systems to old 8-bit home computers [33, 12]. A protothread is driven by repeated procedure calls to the function in which the protothread runs. Whenever the function is called, the protothread runs until it blocks or exits; the scheduling of protothreads is done by the application that uses them.
Preemptive multithreading is included as an optional library, linked only by programs that explicitly need it. Computations can then be performed in a separate thread, allowing events to be handled while the computation runs in the background [34].
The main abstraction provided by the Contiki kernel is event-based CPU multiplexing, together with memory management that supports loadable programs; other abstractions are implemented as libraries or application programs [35, 11]. The non-preemptive nature of the kernel means that Contiki lacks real-time guarantees; such guarantees can instead be provided through interrupts from an underlying real-time executive or hardware timers, which is why interrupts are never disabled by default in Contiki. The number of abstractions provided by the Contiki kernel is kept to a minimum to reduce size and complexity while still keeping the system flexible. Since the ESB hardware for which Contiki was designed does not support memory protection, the system was designed without any protection mechanism in mind: all processes on the ESB [24] hardware share the same address space and run in the same protection domain.
2.2.2.1 Design
A running Contiki system is made up of the kernel, libraries, the program loader, and processes that can be dynamically loaded or unloaded at run-time. A process is either an application program or a service, where a service implements functionality used by more than one application process. The kernel does not provide a hardware abstraction layer; device drivers and applications communicate directly with the hardware. The kernel keeps only a pointer to the process state, which resides in the process's private memory, and inter-process communication takes place by posting events.
Figure 2.6: Contiki Partitioning into core and loaded programs
The Contiki system is divided into two parts at compile time: the core and the loaded program modules (Fig 2.6). The core is made up of four internal modules: the kernel, the program loader, the language run-time libraries, and the communication services module. The core is compiled into a single binary image that is stored in the devices before deployment; once deployed, it cannot be modified unless a special boot loader is used to overwrite it. The program loader obtains program binaries either from the communication stack or from the EEPROM; programs to be loaded over the network are first stored in EEPROM before being written into code memory.
2.2.2.2 Kernel
The Contiki kernel is designed to be very small in terms of code size and memory requirements. It consists of a lightweight event scheduler that dispatches events to running processes, using a polling mechanism to call processes in between. Processes use polling to check the status of hardware devices; polls can be seen as high-priority events scheduled in between the asynchronous events. Event handlers run to completion once scheduled, because the kernel does not preempt them, although event handlers may use internal mechanisms to achieve preemption. CPU multiplexing and message passing are the only basic features provided by the kernel; the other abstractions come as built-in libraries.
Programs can be linked with libraries in three different ways;
• Statically with libraries that’s are part of the core
• Statically with optional libraries that are part of the loadable program
• Calling libraries through dynamic linking replaceable at run-time
Contiki uses a single shared stack for the execution of its processes. The kernel supports two kinds of events: asynchronous events, which are a form of deferred procedure call, and synchronous events, mainly used for inter-process communication. Asynchronous events are enqueued by the kernel and dispatched to the target process at a later time, which reduces stack space requirements since the stack is shared between event handlers. A synchronous event, in contrast, immediately causes the target process to be scheduled, like a function call. The process of looking up the target process ID in the list of processes, storing the ID in an internal variable, and calling the event handler of that process constitutes a context switch, and its cost is critical to performance; it occurs when asynchronous and poll events are dispatched, and when synchronous events are passed between processes.
The Contiki kernel does not contain any explicit power-save abstractions [33, 36], but leaves it to the application-specific parts of the system and to the networking protocols to exploit idle periods for reducing power consumption.
Preemptive multithreading is implemented as an optional library in Contiki, used by applications that explicitly require a multi-threaded model of operation. Preemption is implemented using a timer interrupt that saves the processor registers onto the stack and switches back to the kernel stack. The library provides the necessary stack management functions, so threads execute on their own stacks until they either explicitly yield or are preempted. An operation that invokes the kernel must first switch to the system stack and turn off preemption in order to avoid race conditions, which is why the multi-threading library provides its own functions for posting events to the kernel.
2.2.3 MANTISOS (MOS)
The MANTIS (Multimodal Networks of In-situ Sensors) operating system (MOS) [15, 16] is a sensor operating system written in standard C, in which applications execute as threads, on an integrated hardware and software platform with built-in functionality. Its multi-layered design behaves similarly to a UNIX runtime environment, with the difference that MOS uses zero-copy mechanisms at all levels of the network stack, including the COMM layer [37]. It is called multimodal because of its applicability to various deployment scenarios such as weather surveys, biomedical research, embedded interfaces, wireless networking research and artistic works. MOS seeks to provide services such as preemptive multithreading through an interface similar to the standard POSIX threads API [19], together with support for multiple hardware platforms, including the commonly used sensor motes MICA2, MICA2DOT and MICAZ, TelosB, the Mantis Nymph, and x86 Linux.
MOS also provides a hardware driver system designed for resource-constrained environments, power management, dynamic reprogramming, fast context switching, a round-robin scheduling policy, and a small memory footprint, which can be as low as 500 bytes including kernel, scheduler and network stack.
In a thread-driven system, an application programmer is not really concerned with tasks blocking, definitely or indefinitely, during execution (except around shared resources), because the scheduler preemptively time-slices between threads, allowing some tasks to continue executing even though others may be blocked. This concurrency through multithreading prevents one long-lived task from blocking the execution of a subsequent time-sensitive task.
In the MOS architecture, user-level threads (e.g. T3, T4 and T5) represent the multithreading present in MantisOS. One motive behind the creation of MOS is that traditional multithreaded embedded operating systems such as QNX [38] and VxWorks [21] occupy too large a chunk of memory to execute on micro sensor nodes. Other key factors include its flexibility in the form of cross-platform support [22], with testing across PCs, PDAs and different micro sensor platforms, and its support for remote management of in-situ sensors through dynamic reprogramming and remote login. The MOS system APIs can be classified into the kernel/scheduler, COMM, DEV and NET layers, plus other devices.
2.2.3.1 MOS Kernel
The MOS kernel includes scheduling and synchronization mechanisms with Portable Operating System Interface (POSIX)-like semantics, based on a multi-threaded kernel. The kernel provides counting semaphores and mutual-exclusion semaphores under a round-robin scheduling mechanism [15, 16] for multiple threads at the same priority level; the maximum stack space is specified for each thread in the same address space, allowing a block of data memory to be allocated for the thread's stack [26].
As part of its idle thread, the MOS kernel executes a simple algorithm to determine the CPU power mode: if a thread is running, the CPU is left in active mode, and if none is running it is put into power-save mode. For power-efficient programming, all devices in MOS are initially set to the off state, in which they consume the minimum amount of power possible.
Memory management in MOS uses static allocation of a known memory size at start-up, beginning at the low addresses. At node start-up, the stack pointer is at the high end of memory and the initialization thread's stack occupies the top block; after start-up, the initialization thread becomes the idle thread, keeping the same stack space, while the remaining memory is managed as a heap. Stack space is allocated out of the available heap when a thread is created, and reclaimed when the thread exits. This design makes it easy to detect stack overruns and leaves room for dynamic reprogramming by the application developer.
2.2.3.2 COMM and DEV Layers
MOS hardware devices are divided into two main categories: unbuffered (synchronous) devices, associated with the DEV layer, and buffered (asynchronous) devices that receive data in the COMM layer. Sensors, file systems and the random-number generator are typical examples of synchronous devices, whose calls return only after the operation has completed. Several may exist in a single system, and all are accessible through the same set of read, write, mode (on and off) and ioctl functions, as with UNIX stream functions.
COMM devices are handled separately from DEV-layer devices because they must be able to receive data in the background at times when no application thread is blocked on a receive call. The radio and serial ports are examples of COMM-layer devices. The COMM layer and the DEV layer have similar interfaces: sending is synchronous, and receiving blocks until a packet is present. COMM-layer devices do not receive packets until they are turned on; once turned on, packet reception runs in the background and received packets are buffered. The layer also provides the ability to perform a select, with non-blocking or time-out options, across multiple devices, returning a packet from the selected device.
Some of the interfaces used in programming MOS are listed below [15].

Networking: com_send, com_recv, com_ioctl, com_mode
On-board sensors (ADC): dev_write, dev_read
Visual feedback (LEDs): mos_led_toggle
Scheduler: thread_new
2.2.4 SOS
SOS is a sensor operating system whose kernel implements messaging, dynamic memory, module loading and unloading, and other services. SOS modules are not processes [3, 16, 39, 28]: they are scheduled cooperatively and run without memory protection, although SOS still protects against common bugs using a memory integrity check incorporated into the operating system. Dynamic reconfigurability of the operating system libraries on a sensor node after deployment is one primary motivation and goal of SOS. Another primary goal is to ease programming complexity by providing commonly needed services and increasing memory reuse.
Modules are written in the standard C programming language and implement a message handler in place of the usual main function: a single switch/case block directs messages to their module-specific code. SOS supports compiling unmodified TinyOS modules directly into SOS application code.
Reconfigurability allows SOS to concentrate on three neglected areas of wireless sensor networks: fault tolerance, heterogeneous deployments, and new programming paradigms. Time-critical tasks are improved by moving processing out of interrupt context through priority scheduling.
Fault tolerance is addressed in SOS by the ability to incrementally deploy software modules, or replace them with newer and more stable versions, without physical presence and with minimal interruption of node services.
Heterogeneous deployment of applications on top of homogeneous sensor nodes is achieved by building and configuring specialized applications without fear of overhead or of interaction with other applications, and by loading modules directly onto nodes even after deployment.
As for new programming methodologies: where most applications were once built as a single monolithic kernel [14], newer methodologies divide this monolithic kernel into easier-to-understand components that are combined during compilation. SOS expands on this by keeping the components separate beyond the compilation phase, enabling the construction of true software agents and active networking, and also exhibiting advances in macro-programming, since users no longer focus on programming at the node level (monolithic kernel) [26] but at the application level.
Two applications developed for SOS illustrate this: Visa, which uses distributed Voronoi [14] spatial analysis to calculate the area covered by an individual node, and an application used by motes for augmented recording. Memory integrity checks, RPC mechanisms and the discovery of misbehaving modules are also among the goals set out for SOS.
SOS expects and supports multiple modules executing on its kernel at the same time. It uses modular updates rather than full binary system images, so a node is not forced to reboot after updates, and updates are installed directly into program memory without expensive external flash access. It includes a publish-subscribe scheme for distributing modules within a deployed sensor network, without requiring all nodes to run the same set of modules and kernels. SOS dynamically links modules, uses a priority scheduling scheme and a simple dynamic memory subsystem, supports post-deployment changes through its kernel services, and provides a high-level API, thus reducing the amount of abstraction the programmer must implement.
2.2.4.1 Modules
Modules are independent binaries that implement a task or function; modification of the kernel is necessary only when low-level hardware or resource management needs to change. Careful coupling of SOS modules helps reduce the overhead that would otherwise be incurred. The flow of execution enters a module through two mechanisms. The first is message delivery from the scheduler, implemented via a module-specific handler function; the kernel uses it to deliver the initialization and final messages during module loading and unloading. The second is calls to functions that the module has registered for external use, for operations that need to run synchronously. Such function calls are made available through a function registration and subscription scheme, bypassing the scheduler to provide low-latency communication between modules [3, 40].
2.3 COMPARATIVE OVERVIEW
In this section, we present an overview of related work comparing some of the most commonly employed sensor node operating systems for wireless sensor networks, building on the background of the four operating systems described above. Since their development is an ongoing process, many improvements continue to be made to their source code and performance.
An event-driven, run-to-completion operating system is well suited to highly memory-constrained devices, where it is difficult for a multithreaded system both to fit in such limited memory and to actually support multiple threads of execution in practice. Event-driven execution is also suited to achieving good energy efficiency: when no events need to be handled, the system does not execute and can go to sleep mode. Some researchers have argued against the claim that events are necessary for high concurrency, holding that threads can achieve all of the strengths of events and that improper implementation is the reason threading acquired a bad reputation. Some of the corresponding notions in the two models can be paired as follows [41].
• Event handlers vs. monitors.
• Events accepted by a handler vs. functions exported by a module.
• SendMessage / AwaitReply vs. procedure call or join.
• SendReply vs. return from procedure.
• Waiting for messages vs. waiting on condition variables.
In [1], Levis et al. present the design and motivations of TinyOS. TinyOS is a state-of-the-art operating system for sensor nodes and has been ported to many sensor mote platforms. Several comparisons have been made between TinyOS and other sensor node operating systems.
TinyOS uses a special description language for its event-driven operating environment, composing a system out of smaller components [10, 42] which are statically linked with the kernel into a complete system image. The authors of [11] point out that after this linking, modifying the system is not possible, and propose instead a dynamic structure that allows programs and drivers to be replaced at run-time without re-linking. This is also one of the principles behind the design of SOS, which was in turn criticized in [43]: although dynamic loading makes it possible to update some software modules on individual nodes and to add new modules after deployment, SOS modules are not processes, they are scheduled cooperatively, and they are independent of each other; therefore SOS has no global real-time scheduling and cannot guarantee a real-time schedule for its modules. Levis and Culler have developed Maté [44], a virtual machine for TinyOS devices, similar in spirit to MagnetOS [45]. To provide run-time reprogramming [15, 16, 37], code for the virtual machine, which is designed specifically for the needs of typical sensor network applications, can be downloaded into the system at run-time.
In [13], Adam Dunkels et al. propose the Contiki operating system and note that the advantage of using a virtual machine instead of native machine code is that virtual machine code can be made smaller, reducing the energy cost of transporting the code over the network. They identify a drawback of this approach in the increased energy spent interpreting the code: for long-running programs, the energy saved in transporting the binary is instead spent on the overhead of executing it. They claim that Contiki programs use native code and can therefore be used for all types of programs, including low-level device drivers, without loss of execution efficiency.
Related work has also produced TinyMOS [28], which combines features of TinyOS and MantisOS in a single system; the combination reflects an approach similar to Contiki's. The authors offer a solution providing an evolutionary path that ultimately allows nesC applications to execute in parallel with other system threads, with TinyOS running as a single scheduled thread on top of a multithreaded scheduler.
In [16], Shah Bhatti et al. propose MANTIS OS and claim that its multi-threading offers benefits over TinyOS, which does not support multimodal tasking well and lacks real-time scheduling, since all program execution is performed in tasks that run to completion, making it less suitable for real-time sensor network systems. MantisOS uses a traditional preemptive multi-threaded model of operation that also enables reprogramming of both the entire operating system and parts of the program memory, by downloading a program image onto EEPROM and then burning it into flash ROM. Due to the multi-threaded semantics of MantisOS, every program must have stack space allocated from the system heap, and locking mechanisms must be used to achieve mutual exclusion on shared variables. In contrast, Contiki, which also provides multi-threading through a library, uses an event-based scheduler without preemption, thus avoiding the allocation of multiple stacks and the use of locking mechanisms.
MANTIS [3] implements a lightweight subset of the POSIX threads API targeted at embedded sensor nodes, which introduces context-switching overhead into its concurrency; the same issue affects the multithreading library of Contiki, which in addition limits the number of its concurrent threads to two. SOS addresses this by adopting an event-driven module architecture that supports a comparable amount of concurrency without the context-switching overhead.
3 EVALUATION ENVIRONMENT
3.1 EXPERIMENTAL SETUP
TinyOS versions 1.x and 2.x were installed in different directories on the computer system to allow a side-by-side look at their directory structures, application packages, functionality and file subdirectories [46, 47].
The approach started with the installation of TinyOS 1.x, the most stable version at the time the evaluation began, to become familiar with the TinyOS working environment and functionality. A directory on a different partition was later created to install TinyOS 2.x. The installation was based on an initial installation of Cygwin, a Linux environment for the Windows platform, since the evaluation was to be carried out on Windows. The environment variables were configured to enable communication between the hardware sensor mote (TelosB) and the TinyOS programs, which are written in nesC with a .nc extension. Other third-party tools were also installed, including a current, compatible version of the Java platform needed to make the sensor motes communicate with the sensor operating system, as well as the other TinyOS tools.
TestSerialC is a simple module application, written in nesC, that can be used to test that the TinyOS Java toolchain can communicate with a mote, in this case the TelosB, over the serial port. The Java application sends packets containing an incrementing counter to the serial port at 1 Hz. When the mote application receives a counter packet, it displays the three least significant bits of the counter on the mote's LEDs. Likewise, the mote sends packets to the serial port at 1 Hz.
Figure 3.1: Compilation Process of TinyOS
The following modifications were made to the TestSerialC source code: the libraries of the other operating systems were included and used, making function calls to the files needed to run an application over the serial port. This test shows the flexibility of TinyOS in collaboration with commonly used sensor node operating systems. The process of compiling TestSerial follows the TinyOS compilation process depicted in Fig 3.1, with the libraries of the other operating systems placed in the kernel folder shown.
Figure 3.2: Compilation process of TestSerial Application on all OS
[Code fragments from the modified TestSerial application, showing each operating system's memory-management calls, e.g. mem_init(((uint8_t*)&TOS_NODE_ID) + 0x600, 60 * sizeof(message_t)) for TinyOS and sos_blk_mem_free for SOS.]
3.2.1 TINYOS 1.X and 2.X
TinyOS uses higher-level programming conventions with a pattern similar to those used in Java; this section looks into TinyOS interfaces and components with respect to operating system reliability. TinyOS has had two stable versions to date, with version 2.0 being the most recent and a substantial improvement over version 1.x.
3.2.1.1 Directory structure in TinyOS
The directory structure of TinyOS consists of various subdirectories of interfaces and components that are wired together to generate the applications and demos that can be programmed into a sensor mote.
Figure 3.3: Directory Structure of TinyOS-1.x
The directory structure of TinyOS-2.x is similar to that of TinyOS-1.x, with small differences in the number of files present in the cygwin folder. The msp430 folder is also created in the "opt" subdirectory.
Figure 3.4: Directory Structure of TinyOS-2.x
Some of the most important folders in the TinyOS directory structure are discussed below.
3.2.1.2 APPS
This is the entire suite of TinyOS applications for the given platforms, in this case the TelosB hardware and a TOSSIM environment for the MicaZ. Tables list each application by name, together with the memory needed to compile it for a TelosB and for a MicaZ under TOSSIM. The evaluation is analyzed based on the memory consumption of each of these applications, and a tabulation of memory usage is given.
The applications in TinyOS-2.x combine several of the applications found in TinyOS-1.x. This combination of several applications into one module helps reduce the memory required: the memory needed to run several applications is smaller when they are integrated into one another than when each is run individually.
A build folder is generated in each apps subdirectory, containing the compiled application that will be run on the sensor mote. This evaluation is based on the program memory, in terms of ROM and RAM consumption, required to execute the various applications.
Figure 3.5: Application Services in TinyOS-1.x
3.2.1.3 TOS
The TOS folder also consists of subdirectories containing the libraries, together with descriptions of what goes inside each folder and its function in the execution stages.
Tos/system/. This folder contains the core TinyOS components, and also the reference implementation of the TinyOS boot sequence.
Tos/interfaces/. This folder contains the hardware-independent abstractions, expressed as interfaces, used in the execution of code. Interfaces are created and modified regularly across TinyOS implementations.
Tos/platforms/. This folder contains code specific to the mote platforms that can be used with TinyOS; it is important since platforms are mostly chip-dependent.
Tos/chips/. Contains the chip-specific code for all the platforms on which TinyOS can be implemented.
Tos/libs/. These are the TinyOS libraries, containing interfaces and components which extend the usefulness of TinyOS. They are not viewed as essential to its operation, but are needed when a particular procedure call is required.
Tos/sensorboards/. This contains the libraries for the sensor boards that TinyOS supports.
3.3 TOOLS.
There are third-party tools that must be installed and properly configured for full functionality of TinyOS before any compilation and execution can take place. Some of the most important third-party tools are described below:
3.3.1 Java
A Java runtime environment must be installed and properly configured. It is required for communication between the sensor mote and the operating system.
3.3.2 Java COMM 2.0
JavaComm is a Java extension API that facilitates developing platform-independent communications applications for TinyOS. The JavaComm API (configured through the javax.comm.properties file) gives applications access to RS-232 hardware (serial ports/USB), so that the operating system can communicate with the sensor mote, including hardware and software flow-control options. A working JavaComm installation allows the serial forwarder to listen to commands sent from the PC to the sensor mote. The comm.jar file must also be properly configured in the environment variables before communication can be established. This allows for port mapping and configuration, determining the baud rate, processing speed, stop bits and parity of the transmitted signals.
3.3.3 Graphviz
Graphviz is open-source graph drawing software from AT&T Labs, providing general-purpose graph algorithms such as transitive reduction in the context of graph drawing. It is a graph visualization tool installed alongside TinyOS to generate the graphical wiring diagrams produced when creating the nesC documentation. The version that currently works with TinyOS is version 1.10. It must also be included in the PATH environment variable.
3.3.4 nesC Convention
The components in TinyOS are written in nesC, using a naming convention similar to that of the Java language. nesC provides the compiler for a C-based language designed to support TinyOS.

CHAPTER 3. EVALUATION ENVIRONMENT

The wiring in TinyOS is expressed in nesC files, and all nesC files must have a .nc extension. The nesC compiler requires that the filename match the interface or component name, implying that a related interface and component must have the same name.
3.3.5 AVR / MSP GCC Compiler
GCC is a compiler for the microcontrollers. The Atmel and Texas Instruments microcontrollers have different compilers that translate code from the TinyOS operating system for the sensor mote.
Avr-gcc is gcc for the AVR architecture of the Atmel ATmega128 microcontrollers used in the Micaz family of sensor motes. A customized version is used: avr-gcc-3.3 for TinyOS-1.x and avr-gcc-3.4.3-1 for TinyOS-2.x. The path must also be included in the environment variables by adding it to PATH in the system properties on a Windows platform.
Msp-gcc is gcc for the MSP architecture of the Texas Instruments MSP430 family of low-power microcontrollers used in the Telosb mote. A customized version, msp430tools-gcc-3.2.3-20050607, was installed for TinyOS-2.x; signed and unsigned integers of 8, 16, 32 and 64 bit lengths are supported. It must also be included in the PATH environment variable before communication can be established with the hardware mote.
3.3.6 Cygwin
Cygwin is a Linux-like environment for Windows. It is installed from a downloaded local folder and run from the Cygwin installation files, as in Fig. 3.3 and Fig. 3.4 showing the directory structure of TinyOS versions 1.x and 2.x. Cygwin must also be configured in the environment variables, with all the subdirectories included, for TinyOS to function. The Cygwin interface used to determine the program memory of the applications is shown in Fig. 3.7.
3.3.7 Binutils.
Binutils is a collection of binary utilities, including the GNU assembler, a disassembler and object file listings of symbols and operators. Binutils also converts addresses into filenames and line numbers, provides utilities for the creation, modification and extraction of files from archives, and displays profile information. Table 3.1 shows the versions installed with the two TinyOS installation setups.
3.3.8 LIBC
Libc is a collection of the C library functions, including the string-manipulation functions, read and write access, input and output definitions, bit-level porting, interrupt, error and signal handling, speed-optimization libraries and size reduction. It is needed for use with GCC on both the Atmel ATmega128 and the Texas Instruments MSP430 microcontrollers.

Figure 3.7: The Cygwin Interface of the Blink Application
TinyOS-1.x                         TinyOS-2.x
avr-libc-20030512cvs-1w            avr-libc-1.2.3-1
avarice-2.0.20030825cvs-1w         avarice-2.4-1
nesc-1.1.2-1                       nesc-1.2.7b-1
tinyos-1.1.11Feb2005cvs-3          tinyos-2.0.0-3
tinyos-tools-1.1.0-1               tinyos-tools-1.2.3-1
avr-binutils-2.13.2.1-1w           avr-binutils-2.15tinyos-3
avr-gcc-3.3tinyos-1w               avr-gcc-3.4.3-1
avr-insight-pre6.0cvs.tinyos-1w    avr-insight-6.3-1
tinyos-tools-1.1.0-1               make-3.80tinyos-1
task-tinydb-1.1.3July2004cvs-1     msp430tools-base-0.1-20050607
mspgcc-win32tinyos-20041204-2      msp430tools-binutils-2.16-20050607
graphviz-1.10-1.i386.rpm           msp430tools-libc-20050308cvs-20050608

Table 3.1: Tool versions installed with the TinyOS-1.x and TinyOS-2.x setups
3.3.9 Python-tools
This is needed for the in-system programming of the MSP430 microcontroller, which includes the BSL necessary for installing onto the sensor mote from a Cygwin window interface, and the parallel-port JTAG interface, an in-circuit emulator with on-chip emulation facilities. The JTAG can be accessed through a Flash Emulation Tool (FET) attached to a PC's parallel port. The Python extension is needed in the hardware independent layer (HIL), which is part of the new hardware abstraction architecture introduced in TinyOS-2.x.
3.3.10 Base
The Base system is a fundamental tool needed whenever any AVR or MSP430 tools package is installed. It coordinates the installed files for communication with the microcontroller.
3.3.11 Avarice
Avarice is a program which interfaces with the GNU debugger, gdb; it executes code in a controlled manner and prints the data retrieved. Avarice uses the gdb serial debug protocol, over a socket connected to gdb, to send commands that set and remove breakpoints and read and write memory. Avarice acts as an intermediary between avr-gdb and the AVR JTAG hardware, allowing AVR code to be debugged as it runs in-system under the TinyOS operating system. The AVR JTAG has not been added to the TinyOS tools because the interface to gdb is not yet available.
3.3.12 Insight
Avr-insight is a GNU debugger for AVR-based microcontroller sensor motes; it acts as a debugger, simulator and emulator. GDB, the GNU debugger, makes it possible to debug programs written in any programming language supported on the AVR microcontroller platform. The MSP430 also has a gdb which performs the same function as avr-insight, but specific to the MSP430.
3.4 PLATFORM
3.4.1 TEST-BED: Telosb
The overall evaluation was carried out on two different microcontrollers: the AVR microcontroller with the Chipcon CC1000 radio, on the TOSSIM simulation platform for TinyOS, and the TI MSP430 microcontroller of the Telosb mote, with a Chipcon CC2420 radio. The Telosb is shown in Fig. 2.3.
Telosb is a USB mote with an IEEE 802.15.4/ZigBee-compliant transceiver operating in the 2.4 GHz ISM band; its low current consumption allows a fast wakeup from sleep. On the Telosb mote a bus is shared between the radio and the storage subsystems, without a mechanism to prevent conflicts once the system boots; applications must therefore manually interleave the subsystems or they will fail [27]. Hence, to implement a platform on which components of these four operating systems can be adequately tested, only the serial communication test can be performed with similar libraries.
The application folder of TinyOS contains a number of demo applications which can be compiled and programmed onto the sensor mote. Of all these applications, only the TestSerialC module could be used in conjunction with the other operating systems, because its Java implementation is equally used by all of them. The amount of program memory and flash memory used in compiling each of these applications was recorded; since TinyOS uses static memory allocation performed at compile time, the required memory consumption can be determined from the compiled applications, as programming the executable files onto TinyOS consumes no further memory during execution.
3.4.2 SIMULATION: TOSSIM
TOSSIM [48] is a discrete event simulator designed to simulate the TinyOS component-based architecture for wireless sensor networks. TOSSIM aims to provide high-fidelity simulation of TinyOS application execution by providing a hardware resource abstraction layer in place of the real-world hardware. TOSSIM simulates TinyOS behaviour at a very low (bit) level in order to capture all system interrupts. Its timing is not fully accurate, however: while TOSSIM precisely times interrupts, it does not attempt to model execution time.
TOSSIM is based on TinyOS version 1.x, and the features of the AVR ATmega128 microcontroller have been pre-programmed into the simulator to act as a virtual hardware abstraction for the Micaz. TOSSIM builds directly from the TinyOS components, as in the Surge application, having the capability to wire components together using all the required interfaces. TOSSIM has been designed with four main goals:
• Bridging: to compile application code to TOSSIM for specific hardware as needed, with easy deployment. The Micaz mote platform is currently the only hardware abstraction that the current version of TOSSIM supports.

• Scalability: for network-intensive applications with a smaller memory map.

• Fidelity: easy debugging of TinyOS radio stack problems that went undetected during testbed deployment.

• Completeness: using the wiring of Surge to evaluate a complex interaction scenario.
4 RESULTS
Considerable work on sensor node operating systems [49, 50, 51, 52, 22] has already been done by other research organizations; a comparison of the operating systems is shown in Table 4.1.
4.1 THEORETICAL RESULT
Execution Model:     Event Driven (Components) / Threads / Modules
Code Size:           Low '+' / Medium / Medium / Highest '-'
Ease of Learning:    More to learn (nesC add-on) / Less to learn / Less to learn / Less to learn
Memory Protection:   None / None / None / None
Reconfigurability:   X X X
Real-time Guarantee: X X
Low Power Mode:      X X X
Fault Tolerance:     X X X
Multitasking:        X X X
Priority Scheduling: X X X
Context Switching:   X
Table 4.1: Comparative overview features of Sensor node operating systems
Table 4.1 shows the theoretical results obtained from the features of the four sensor node operating systems. The features considered are those commonly required of a sensor node operating system.
Execution Model: The execution model of TinyOS is event driven, with a high level of concurrency that can be handled in a very small amount of space; its task scheduling pattern also helps reduce wasted power and CPU cycles. A stack-based threaded approach would require reserving stack space for each execution context; avoiding this reduces context-switching overhead in TinyOS.
Code Size: The code size of TinyOS is regarded as low, i.e. reduced, because fewer library files are needed to execute a simple application than in the other sensor node operating systems: execution is determined at compile time, and nesC makes the code smaller and easier to compile. The event-driven model also has lower resource requirements. This can be a plus for a programmer, as fewer files have to be used.
Memory Protection: A lack of memory protection in an operating system can cause severe performance degradation: a small programming error in an application can easily corrupt the state of the operating system and of other software components on the node.
Reconfigurability: TinyOS lacks reprogrammability, because code loaded onto a mote cannot be updated at run time; the code has to be recompiled afterwards. It has good reconfigurability, however: its efficient modularity, combining the scheduler and the graph of components into one compiled executable, makes it highly configurable, with diversity in its applications and design, supporting reconfigurable hardware and software-hardware co-design.
Real-Time Guarantee: TinyOS lacks real-time scheduling, as urgent tasks may have to wait for non-urgent ones; this also leaves it without adequate support for multitasking. Interrupts may preempt tasks, but this can limit hardware parallelism when the CPU is hit by many interrupts. A real-time operation has added time constraints, meaning the action should be completed within a certain period of time, considering its timing behaviour, deadline, worst-case execution time and resource usage.
Fault Tolerance: Fault tolerance is also known to be low, because TinyOS lacks kernel protection and relies on compile-time analysis; while this feature is good in the other operating systems, there it introduces overhead at the user boundary and in interrupt handling. Two potential failure modes are attempting to deliver a scheduled message to a module that does not exist on the node, and delivering a message to a handler that is unable to handle the message. The second problem has been solved in the newly released TinyOS 2.x. One of the long-term goals of the nesC language is to deal with failure and uncertainty among varying numbers of motes.
Learning: TinyOS requires more learning because of the inclusion of nesC, which means the programmer must understand the basics of both the C programming language and nesC, while the other operating systems have a clean and simple programming model using C alone.
Power: The power-saving capability of TinyOS is high compared to the other operating systems, because of its ability to spend most of the time in sleep mode and its strong wake-up response time, giving low power consumption.
Priority Scheduling: Priority scheduling is not very visible in TinyOS because of the way it handles its tasks, but it is used in the other operating systems to move processing out of interrupt context and to provide improved performance for time-critical tasks.
Context Switching: Context switching is the process of looking up the new process's identity in the list of processes, storing the id in an internal variable and calling the event handler of the new process, with timing critical to performance. It is well handled by TinyOS.
4.2 EXPERIMENTAL RESULT
The data shown below are the program-memory consumption results obtained from compiling the application source codes in TinyOS-1.x and TinyOS-2.x. The results are tabulated as the amount of ROM, the code size in bytes, and RAM, the data size in bytes, consumed at compile time by the TinyOS applications. For simplicity, the code size is represented as ROM and the data size as RAM.
The results have been derived by compiling the application source codes for a TOSSIM environment and for the Telosb sensor mote.

Applications 1.x   ROM (Telosb)  RAM (Telosb)  ROM (TOSSIM)  RAM (TOSSIM)
Blink              2730          45            1674          48
BlinkTask          2726          45            1700          49
CntToLeds          2868          47            1828          50
CntToRfm           11598         373           9908          390
CntToLedsRfm       11868         375           10094         391
Generic Base       9212          310           7846          339
Oscilloscope       9062          349           7066          344
OscilloscopeRF     15038         472           11742         490

Table 4.2: Program memory size of Applications in TinyOS-1.x in Test Environment (a)

Blink is a simple application that starts a timer and toggles the LEDs whenever it fires, testing that the boot sequence and millisecond timers are working properly, blinking at 1 Hz, 2 Hz and 4 Hz respectively. The interface of the Blink application built for the Telosb hardware is shown in the Cygwin interface (Fig. 3.7).
BlinkTask is a simple application that toggles the LEDs on the mote on every clock interrupt. The way the event is handled is the major difference between the Blink and BlinkTask operations: BlinkTask posts a task to toggle the LED, while Blink toggles the LED directly in the event handler.
CntToLeds A counter is incremented by a periodic timer, and the bottom three bits of the counter value are displayed by flashing the LEDs, from the least significant bit to the most significant bit. This name is used in TinyOS-1.x, while in TinyOS-2.x it is referred to as RadioCountToLeds. It provides a simple test of TinyOS networking.
CntToRfm This application maintains a counter on a 4 Hz timer and sends out the value of the counter in an active message packet on each increment.
CntToLedsAndRfm This is similar in operation to the CntToLeds application, but additionally sends out each counter value in an active message packet.
GenericBase is a PC-to-sensor-network bridge that checks CRCs by default. Packets received from the PC over the UART are sent out on the radio, and packets received from the radio are sent out on the UART. This allows a PC to monitor network traffic and inject packets into a network. In TinyOS-2.x it is known as BaseStation.
Oscilloscope is a simple data-collection demo. Oscilloscope periodically samples the default sensor and broadcasts a message after every 10 readings.
OscilloscopeRF This application uses the ADC to sample several values from a temperature sensor and sends them in a message over the radio.
Applications 1.x   ROM (Telosb)  RAM (Telosb)  ROM (TOSSIM)  RAM (TOSSIM)
RfmToLeds          11302         315           9496          331
SecureTOSBase      11028         1403          10198         1451
Sense              5380          100           3664          101
SenseTask          5918          120           3760          118
SenseToLeds        5926          102           3730          103
SenseToRfm         14656         428           11526         443
TOSBase            11028         1403          10198         1451
TransparentBase    10942         565           9950          601

Table 4.3: Program memory size of Applications in TinyOS-1.x in Test Environment (b)
Sense This application periodically samples the default sensor and displays bits of the reading on the node's LEDs: the highest three bits of the raw ADC light reading are written to the LEDs, with different LEDs indicating the most significant and least significant bits.
TOSBase is an application that acts as a simple bridge between the serial and radio links in TinyOS-1.x. TOSBase acknowledges a message arriving over the serial link only if the message is successfully enqueued for delivery to the radio link.
SenseTask This application periodically samples the photo sensor, posts a task to compute the average of recent raw samples, and displays the highest three bits of the average on the LEDs.
EEPROM(Page) A low-level interface application which gives direct access to per-page read, write and erase operations.
EEPROM(Byte) This is a high-level, memory-like interface with read, write and logging operations. It allows arbitrary rewrites of sections of the flash.
Applications            ROM (TOSSIM)  RAM (TOSSIM)
Ident                   14124         461
MicaSBTest1             13772         549
MicaSBTest2             4554          111
TestMatchBox(Timing)    18050         879
TestUart                1964          49
TestTinyViz             9924          403
TestTinyAlloc           4610          423
TaskApp                 60796         2955
Surge                   17390         1949
SimpleCmd               9598          379
SenseLightToLog         15140         575
TestMatchBox(Remote)    20892         652
MicaHWVerify            10812         393
TestSounder             1674          49
TestEEPROM(byte)        14716         812
TestEEPROM(bytespeed)   7684          518
TestEEPROM(EEPROM)      7994          275
TestEEPROM(Page)        8374          431
TestEEPROM(Pagespeed)   5496          480
TestWDT                 2232          67
TestDBA                 61716         2825

Table 4.4: Program memory size of Applications in TinyOS-1.x on Simulation
Matchbox A simple file system with sequential-only files
SenseLightToLog This application implements the Sensing and StdControl interfaces. When a command to start sensing is received, it periodically samples the light sensor for N samples and stores them in the EEPROM. Once the N samples have been collected, the timer is turned off and the sense-done event is signalled.
SimpleCmd module demonstrates a simple command interpreter that receives command messages over its RF interface. A received message triggers a command-interpreter task to execute the command; when it finishes executing, the component signals the upper layers with the received message and a status indicating whether the message should still be processed. SimpleCmd supports four types of commands:
(i) led on, (ii) led off, (iii) radio quieter, (iv) radio louder.

Surge is an example application that uses MultiHop ad-hoc routing. Each Surge node takes light readings and forwards them to a base station. A node can also respond to broadcast commands from the base.
TASKApp This is the TinyOS application for the Tiny Application Sensor Kit (TASK).
TestTinyAlloc This application tests the TinyAlloc dynamic memory allocator by allocating three chunks of memory, freeing one of them, reallocating (resizing) another, then compacting the allocated chunks and checking that the data has not been corrupted.
TestUART is a test application for the UART debugger component; it provides an interface for writing messages to the screen during runtime. The application prints an incrementing sequence of numbers on the display. The UART only works when set to 9600 bps.
TestWDT The watchdog test application has three states after being properly activated: (i) reset periodically, (ii) not reset by the application, (iii) repeated timer.
Ident is used to identify the group ID of a sensor mote on a network by setting a string of up to 15 characters in the set-ID option.
MicaSBVerify contains other internal applications to test out the mica sensor board.
TestSounder is a simple application that buzzes the sounder periodically; a Snooze application can also be employed to put the mote into a low-power state for a given period of time.
MicaSBTest1 is an application that tests the magnetometer, accelerometer and temperature sensor. It demonstrates how to access the data from each individual sensor, how to perform real-time "calibration", and how to filter and process sensory data for the magnetometer.
MicaSBTest2 tests out light, microphone, and sounder. Momentarily covering the light sensor will trigger a single chirp on t