
On Testing Wireless Sensor Networks

Tomasz Surmacz, Bartosz Wojciechowski, Maciej Nikodem, and Mariusz Słabicki

Abstract Testing wireless sensor networks (WSNs) is not a trivial task, due to the massively-parallel communication between many independently running processors. The distributed mode of operation does not allow step-by-step execution and typical debugging, so other techniques have to be used, such as detailed packet logging and analysis. We provide a two-way WSN-to-TCP proxy architecture which extends the typical BaseStation software with packet sniffing and packet sending capabilities. This allows writing and executing WSN test scenarios, with automatic test assessment, using typical client-server applications written in any programming or scripting language. An example of such protocol testing is also shown.

1 Introduction

The dominating application of Wireless Sensor Networks is environment monitoring. A typical setup used for monitoring consists of a WSN, a data sink, and possibly an off-site server that is used to collect, analyse and provide the data to involved parties. The WSN in turn consists of a large number of nodes that are resource-constrained, i.e. have low performance, small storage capacity, and are usually battery-powered. The nodes in the network serve two purposes simultaneously – they collect data from the sensors they are equipped with and take part in routing messages to the data sink or Base Station (BS). Since nodes are equipped with low-power radio transceivers and small-gain antennas (between 0 and 5 dBi), their effective communication range is usually very short. Therefore they usually communicate in a multi-hop fashion. The many layers of WSN-based monitoring systems make their behavior hard to understand, develop and test. It usually requires deep understanding of all levels, from sensor hardware and node software

Institute of Computer Engineering, Control and Robotics, Wroclaw University of Technology, e-mail: <[email protected]>


through the network algorithms used in the communication protocol stack, up to the system and application level. Even bridging the gap between network engineers and the experts in a given application domain requires a lot of effort [1].

In real conditions (outside the lab) various problems may occur: hardware malfunctions, programming bugs, or software incompatibilities when implicit rules are not clear from the API of a module or a layer of the software stack [5]. This necessitates strict testing of the components of WSN networks at every level, and performing tests of whole systems that are extensive both in time and network size. For example, proper network behavior also means that the power consumption in different scenarios stays within certain limits, and it takes time to test this. Due to the presence of many physical effects on radio transmission, such as fading, double- and multi-path reflections, and interference, the operating conditions of a WSN are hard to describe and to simulate or recreate in a laboratory. Therefore, creating realistic, controllable and repeatable tests is difficult [4].

Testing embedded software (e.g. for sensor nodes) poses another type of problem due to the lack of input/output devices and the limited possibilities of error logging. One option is to use a JTAG interface to debug a single device, but such an approach is not viable when debugging distributed network algorithms. Debugging protocols is even harder, and does not scale, as many devices would have to run in testing mode simultaneously. Also, nodes usually communicate in an event-driven or asynchronous manner, and any test should recreate and check time dependencies.

There is no standard suite of benchmark applications for WSN-based systems, even though there have been attempts at creating one. In [2] the authors advocate the creation of a standardized benchmark suite for TinyOS-compatible WSN nodes. They present sample benchmark results regarding performance and power consumption of different hardware components and call for further work in this area. Such a standardized benchmark suite could be used to aid the development of future hardware. Nazhandali et al. introduce new metrics in [7] that can be used to evaluate and compare wireless sensor systems, as well as a set of applications representative of typical workloads in WSNs. Predictions of how a large-scale WSN deployment will work can be made through simulations. This provides an opportunity to eliminate some errors in the early stages of implementation. However, in some cases even the results of detailed simulations may stand in contradiction to real physical deployments [4], as it is difficult to simulate wave propagation, and hence exact network behavior, in sufficient detail.

In this work we describe our experiences with testing applications using TelosB nodes running the TinyOS operating system [6]. Each node is equipped with 3 LEDs that can be used in simple debugging scenarios, e.g. to indicate sending or reception of a packet, a sensor being turned on, or a timer firing. However, these methods are useless for monitoring the whole network. Moving from simple experiments that test the properties of point-to-point communication [8] towards complete networks results in increased software complexity. This increased complexity of a complete WSN testbed has been described in the literature before. For example, the authors of [5] point to software complexity as one of the key reasons for not reaching the goals of their experiment. They also advocate applying a rigorous software engineering approach to manage the growth in software complexity.

Operation of a WSN is real-time in the sense that communication is usually asynchronous, with time dependencies affecting the behavior of the whole network. Therefore it is not easy to test the proper operation of the whole network. One approach to this problem is to use traces of network communication, gathered with a packet sniffer, to build test scenarios. This recorded communication is then injected into the network and the new logs are analysed. This should result in much more accurate testing conditions.

To summarize, WSN testing can be done at different levels. At each level a different aspect is stressed:

1. node hardware – correct operation in a range of physical environmental conditions (e.g. temperature, humidity), performance, and energy consumption measured with standardized benchmarks (including sleep modes),

2. node software – proper operation in response to any message (including erroneous and/or damaged packets),

3. network software – proper operation of the network protocol stack subject to proper and erroneous messages and varying amounts of communication,

4. system level – the network and its supporting infrastructure, including BaseStation resiliency and the correctness of the data analysis software.

2 Testing environment architecture

For the testing setup we have developed a sniffer-basestation architecture, where an embedded system running Linux, with a Wireless Sensor Network mote attached through a USB port, acts as both the BaseStation and the network observation point. It can be a laptop computer running any available Linux distribution, or an embedded system such as a BeagleBoard or Raspberry Pi with its native Linux version.

The core of our architecture consists of two programs – TelosBaseStation and NetServ (Fig. 1). The first is a TinyOS application running on CC2420-equipped motes, such as the TelosB or XM1000 platforms. The BaseStation receives all packets from the radio on a specified channel in so-called promiscuous mode (i.e. sniffing all packets that can be heard on the WSN, not only those which were broadcast or addressed to the BaseStation). The received packets are timestamped and queued, then resent through the USB port. On the other end of this USB connection, the NetServ program receives these packets and makes them available in various ways: it stores them locally (acting as the packet sink), but also acts as a TCP proxy, offering them to clients in many formats. Multiple clients can connect simultaneously to get these messages either in binary form (i.e. through a transparent USB-TCP proxy), as a PCAP-compatible [3] timestamped packet stream (which can be used by Wireshark or other packet monitoring tools), or as an ASCII stream in hexadecimal form. All these formats allow both remote monitoring and remote data logging, which supplements the local storage facilities.

Running the BaseStation software in promiscuous mode gives a better view of what is happening in the network. In the BaseStation-only mode of operation, unicast transmissions are usually used – packets addressed to the BaseStation node can be found in the incoming sniffer stream, and for logging purposes all other packets can be filtered out. However, these extra packets can provide information about how messages are routed in the network, or very detailed data about the transmission power with which a message must be sent in order to reach other nodes (the BaseStation or the next hop towards the BaseStation). For even better insight into the network operation, more than one sniffing station can be installed. As all received packets are timestamped and marked with a tag unique to the sniffing station, all these packets can later be analyzed together. The NetServ program can connect simultaneously to USB and TCP sources of packets (i.e. the local sniffing node and remote NetServ instances available through TCP), so all these sniffing points can be made available in one place in real time.
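As an illustration, a minimal monitoring client for the ASCII-hex stream can be written in a few lines of Python. This is only a sketch: the host name, port number, and line-oriented output format are assumptions made for the example, as the actual values depend on the local NetServ configuration.

    import socket
    import time

    # Assumed NetServ ASCII-hex endpoint -- host and port depend on the
    # local configuration.
    NETSERV_HOST = "raspberrypi.local"
    NETSERV_PORT = 9002

    def log_packets(logfile="wsn-packets.log"):
        """Append every hex-encoded packet received from NetServ to a
        log file, prefixed with a local timestamp."""
        with socket.create_connection((NETSERV_HOST, NETSERV_PORT)) as sock, \
             open(logfile, "a") as log:
            buf = b""
            while True:
                data = sock.recv(4096)
                if not data:
                    break  # NetServ closed the connection
                buf += data
                while b"\n" in buf:
                    line, buf = buf.split(b"\n", 1)
                    stamp = time.strftime("%H:%M:%S")
                    log.write("%s %s\n" % (stamp, line.decode("ascii", "replace")))

    if __name__ == "__main__":
        log_packets()

Several such clients, one per sniffing station, can run side by side and write separate logs that are merged later using the embedded timestamps.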

Additionally, the BaseStation software provides a reverse channel through which clients can send WSN messages to the BaseStation. These messages can instruct the TelosBaseStation software to change its communication parameters (e.g. the communication channel or transmit power), or can be sent on to the WSN. In this function the BaseStation software again acts as a proxy, allowing any packet to be sent without checking or analyzing its contents – the only modification made is calculating the correct CRC at the end of the packet (which simplifies testing when packets are created manually from client connections, by typing them in hex form or by copying and pasting from predefined examples). The typical usage, however, involves connecting to NetServ from a script written in Perl or Python, sending messages, and then observing the network response in the same script.
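As a point of reference for the CRC calculation mentioned above: the CC2420 radio used by our motes transmits IEEE 802.15.4 frames, whose two-byte frame check sequence is the ITU-T CRC-16 (polynomial x^16 + x^12 + x^5 + 1, zero initial value, bits processed LSB-first). The Python function below is an illustrative implementation of that standard checksum, not the actual TelosBaseStation code.

    def crc16_802154(payload: bytes) -> int:
        """ITU-T CRC-16 as used for the IEEE 802.15.4 frame check
        sequence (reflected polynomial 0x8408, initial value 0x0000)."""
        crc = 0x0000
        for byte in payload:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
        return crc

    # Example: append the FCS to a raw frame before injecting it.
    # (The byte order of the appended FCS is assumed little-endian here;
    # on real hardware the radio appends the FCS itself.)
    frame = bytes.fromhex("0e000c0105ffff002f000b")
    frame += crc16_802154(frame).to_bytes(2, "little")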

[Figure: the WSN and a sniffing mote running TelosBaseStation are connected over USB to NetServ, which serves TCP connections for logging, monitoring, testing and programming clients]

Fig. 1: Testing setup


[Figure: an XML packet description is transformed by XSLT processing into nesC header files (structures) for the nesC TinyOS program source and TinyOS application, a data parser, and a Wireshark packet dissector plugin]

Fig. 2: Workflow of WSN protocol development using a formalized description

This allows implementing automated test suites based on "send this packet", "expect such response packet(s)" scenarios. In a similar manner, a packet replay capability can be implemented: an external script can log a realistic communication scenario sniffed with the TelosBaseStation program, and then recreate this communication many times to provide a realistic and repeatable communication stress workload.
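A sketch of such a send/expect scenario in Python is shown below. The TCP endpoint, the newline-terminated hex line format, and the reuse of the HELO frame from Fig. 4 as the injected packet are assumptions of the example, not fixed properties of NetServ.

    import socket
    import time

    NETSERV = ("raspberrypi.local", 9002)  # assumed NetServ ASCII-hex endpoint

    def send_and_expect(request_hex, predicate, timeout=5.0):
        """Inject one packet through the reverse channel and return True
        if any sniffed packet satisfies the predicate before the timeout."""
        with socket.create_connection(NETSERV, timeout=timeout) as sock:
            sock.sendall((request_hex + "\n").encode("ascii"))
            deadline = time.monotonic() + timeout
            buf = b""
            while time.monotonic() < deadline:
                try:
                    data = sock.recv(4096)
                except socket.timeout:
                    break
                if not data:
                    break
                buf += data
                for line in buf.split(b"\n")[:-1]:
                    if predicate(line.decode("ascii", "replace").strip()):
                        return True
                buf = buf.rsplit(b"\n", 1)[-1]
        return False

    # Example: resend the HELO frame captured in Fig. 4 and expect any reply.
    helo = "7e 44 55 00 ff ff 00 00 0e 00 0c 01 05 ff ff 00 2f 00 0c 00 00 00 00 03 00 7d 5d 34 7e"
    if send_and_expect(helo, lambda pkt: len(pkt) > 0):
        print("network responded")

Replay scripts follow the same pattern: instead of a single hand-written frame, they iterate over frames previously recorded from the sniffer stream.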

3 Formalized methods of protocol description

Various tools can be used for observing the WSN traffic, starting with showing a raw packet stream, through parsing this stream with custom-made interpretation scripts, to using specialized packet-inspection tools such as tshark or Wireshark.

In order to facilitate fast development of communication and control protocols, we have adopted a formalized way of specifying the data sent over the WSN. The format of all packets is specified in the form of an XML file describing the fields and variants of the messages to be sent. From this "master" file, various resulting formats can be generated automatically (Fig. 2), including header files for nesC TinyOS programs, packet interpretation scripts, and Wireshark dissector code. This simplifies investigating the data captured from the air, allows for thorough analysis of how the network behaves, and also prevents many bugs that stem from software incompatibilities.
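To give the flavour of this approach, the sketch below generates a nesC-style struct from a toy XML packet description. The XML schema and field names are invented for illustration and do not reproduce our actual description format, and in our toolchain this generation step is performed with XSLT stylesheets rather than Python.

    import xml.etree.ElementTree as ET

    # A toy packet description -- the element and attribute names are
    # illustrative only, not our actual XML schema.
    DESCRIPTION = """
    <packet name="helo_msg">
      <field name="src"  type="nx_uint16_t"/>
      <field name="seq"  type="nx_uint16_t"/>
      <field name="hops" type="nx_uint8_t"/>
    </packet>
    """

    def generate_nesc_struct(xml_text: str) -> str:
        """Emit a nesC network-struct declaration from the description."""
        packet = ET.fromstring(xml_text)
        lines = ["typedef nx_struct %s {" % packet.get("name")]
        for field in packet.findall("field"):
            lines.append("  %s %s;" % (field.get("type"), field.get("name")))
        lines.append("} %s_t;" % packet.get("name"))
        return "\n".join(lines)

    print(generate_nesc_struct(DESCRIPTION))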

Based on the XML packet description it is also possible to generate test cases for network testing, or to simplify the needed tools. So far we have mostly used Python scripts for interacting with the NetServ program, in order to send messages to the WSN and observe the network response. Examples of such tests are shown in the next section. The WSN traffic is made available by NetServ in different forms – a simple byte stream, PCAP-timestamped binary data, or ASCII data in hexadecimal form. Testing scripts may use these data streams for automated verification of the tests (with compatibility with our XML-based data description as one of the criteria). The PCAP-formatted stream may be connected to the Wireshark network analyzer, allowing extended analysis of captured or live data.
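A script can also archive sniffed traffic in PCAP form for later inspection in Wireshark. NetServ already emits a PCAP stream itself, so the sketch below only illustrates the file format, under the assumption that raw 802.15.4 frames are available as bytes.

    import struct
    import time

    LINKTYPE_IEEE802_15_4_WITHFCS = 195  # pcap link type for 802.15.4 frames

    def write_pcap(frames, path="capture.pcap"):
        """Write raw 802.15.4 frames (as bytes) to a classic pcap file."""
        with open(path, "wb") as f:
            # Global header: magic, version 2.4, tz offset, sigfigs,
            # snaplen, link type.
            f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535,
                                LINKTYPE_IEEE802_15_4_WITHFCS))
            for frame in frames:
                now = time.time()
                sec, usec = int(now), int((now % 1) * 1_000_000)
                f.write(struct.pack("<IIII", sec, usec, len(frame), len(frame)))
                f.write(frame)

    # Example: store one packet (here, the RESET frame from Fig. 4).
    pkt = bytes.fromhex("7e445500ffff00000e000c0105ffff002f000b000000000300392d7e")
    write_pcap([pkt])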


4 Test cases

For implementing the BaseStation and serving the captured data with NetServ we have used Raspberry Pi embedded systems. Even though a typical laptop or desktop computer with a Linux operating system could be used for that purpose, a self-contained small system like a BeagleBoard or Raspberry Pi is much better suited to a real WSN deployment, where it acts as a typical packet sink. As it does not need a keyboard or display to function, it can simply be plugged into the mains supply and work, collecting packets. When equipped with a WiFi dongle, it can immediately serve the collected data online. This is also useful in network testing scenarios, where various observation points can be created by placing several Raspberry Pi computers at different locations to observe the WSN radio traffic (Fig. 1).

Using this architecture we have performed numerous tests of our protocols. The most important ones were the tests of network self-organization, in which the network was switched back and forth between non-organized broadcast-mode operation and the organized mode with routing enabled. Two kinds of control messages were used for this purpose:

• RESET – Upon receiving a RESET message, a node sets an internal timer for atime period specified in the message payload. After this delay the node reboots.The delay is added to prevent multiple reboots if the node received the sameRESET message several times, as it was retransmitted over the whole network.To test the proper functioning of this routine, we created a scenario where RESETmessages were injected into the network with delay parameters varying from 1 msto 60 s. A misbehaviour of the code would manifest itself in the form of double

repeat {send HELO messagecollect traffic for n secondssend HELO message againcollect more trafficif (broadcast packets found), abortif (no. of hops > threshold ), abort // routing loops foundif (bi-birectional paths ), abort // routing loops found

send RESET messagecollect traffic for n secondsif (unicast packets found), abortif (duplicate transmissions found), report multiple resets and

abort}

Fig. 3: Testing algorithm for HELO/RESET scenario

On Testing Wireless Sensor Networks 7

resets, that is – two or more messages with the same source address and sequencenumbers would be seen if the reset was performed multiple times. Also, no unicastpackets should be observed after the reset.

• HELO – a packet of type HELO is used to let other nodes know about theirneighbours. This is necessary to construct a routing tree in each node. Each nodeis supposed to retransmit every HELO packet it receives, but only once (sequencenumbers are used to detect duplicates) and after increasing the number of hopsparameter. After exchanging a number of HELO packets, a routing tree shouldbe created. Therefore, the test consists of sending a HELO packet from the BSand analysing communication after a set period of time. All the messages shouldfrom thereon i) be sent as unicasts, ii) be targeted at the BS and iii) thereshould be no routing loops.

The automated tests have been performed using algorithm specified in Fig. 3. Sam-ple packets received in these tests are shown in Fig. 4. This example shows networkbehaviour for two consecutive RESET-HELO-HELO tests. Line 1 shows the RE-SET packet sent from the BaseStation running on node 47. Therefore the followingpackets from nodes 7 and 33 (lines 3-5) were sent as broadcasts. That is why thepacket with seq no 1 was received twice – directly from the originating node andretransmitted. However, the first packet from node 34 contained sequence numberequal 3 (line 7), which means that packets with sequence numbers 1 and 2 werelost. After the HELO packet from the Base Station (line 11) the network startedsetting the routing tree. Lines 13-15 show packets which should be routed to theBase Station. Unfortunately there is a routing loop between nodes 7 and 33. If notfor sniffing, such messages would never reach the Base Station. After another HELOpacket in line 17 both nodes 7 and 33 correct their routes to the Base Station.

When the RESET-HELO-HELO test scenario is repeated (lines 23-39), nodes 7 and 33 properly establish their routing tables, but node 34 apparently did not hear the first HELO packet and still sends broadcasts. After another HELO, all nodes send messages directly to the Base Station.

Although the above description shows a detailed "manual" analysis of network behaviour, automatic assessment can also detect protocol violations, as described in the testing algorithm (routing loops, wrong types of packets at a certain stage of the protocol, assertions on particular field values, etc.). Such automated tests can be run several times to check whether the communication protocols are resistant to errors such as lost or duplicate packets, packet collisions, or unfortunate timings.
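As an illustration of such automatic assessment, the sketch below applies two of the checks from Fig. 3 – duplicate-transmission detection and routing-loop detection – to parsed packet records. The record format is modelled on the annotated log of Fig. 4; the parsing itself is assumed to happen elsewhere.

    def find_duplicate_transmissions(packets):
        """Flag (source, sequence-number) pairs seen more than once --
        after a RESET this is the 'double reset' symptom described above."""
        seen, duplicates = set(), []
        for pkt in packets:
            key = (pkt["from"], pkt["seq"])
            if key in seen:
                duplicates.append(key)
            seen.add(key)
        return duplicates

    def find_bidirectional_links(packets):
        """Detect the bi-directional routing loops visible in Fig. 4,
        where node a routes via b while b routes via a."""
        next_hop = {pkt["from"]: pkt["via"] for pkt in packets}
        return sorted({tuple(sorted((a, b))) for a, b in next_hop.items()
                       if a != b and next_hop.get(b) == a})

    # Records modelled on lines 13-15 of Fig. 4 (fields: from, via, seq).
    log = [
        {"from": 34, "via": 33, "seq": 9},
        {"from": 33, "via": 7,  "seq": 9},
        {"from": 7,  "via": 33, "seq": 9},
    ]
    print(find_bidirectional_links(log))      # -> [(7, 33)]: a routing loop
    print(find_duplicate_transmissions(log))  # -> []: no double resets here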

5 Future work

From the testing perspective, our toolchain has the functionality to:

• set up a testing network,
• capture WSN messages on a PC and analyze them,
• inject prepared messages into the WSN through the gateway,
• perform automated tests by executing appropriate test scenarios.

Unfortunately, this may still not be enough to accurately observe the whole network, because the wireless links are unreliable. If we do not receive an expected packet, we cannot be sure why: software and protocol errors, or a high level of radio noise, may be the reason. Therefore it is necessary to develop methods which can log all activities on each node, and then analyze the behaviour of the whole network after the test.

 1  15:28:12.668 SEND RESET from=47, to=*, seq=11, delay=768ms, (7e 44 55 00 ff ff 00 00 0e 00 0c 01 05 ff ff 00 2f 00 0b 00 00 00 00 03 00 39 2d 7e)
 2  # 2 messages from node 33, one of them through node 7
 3  15:29:02.702 from=33, to=*, via=7, seq=1, hops=2, photo=8293.15, path=[7,33]
 4  02.720 from=33, to=*, via=33, seq=1, hops=1, photo=8293.15, path=[33]
 5  02.813 from=7, to=*, via=7, seq=1, hops=1, photo=8426.67, path=[7]
 6  # first packet coming from node 34
 7  06.597 from=34, to=*, via=7, seq=3, hops=2, path=[7,34], photo=255.58
 8  # some strange paths
 9  16.716 from=34, to=*, via=7, seq=8, hops=3, path=[7,33,34], photo=137.33
10  16.736 from=7, to=*, via=33, seq=8, hops=2, path=[33,7], photo=8934.02
11  17.008 SEND HELO from=47, to=*, seq=12, (bin: 7e 44 55 00 ff ff 00 00 0e 00 0c 01 05 ff ff 00 2f 00 0c 00 00 00 00 03 00 7d 5d 34 7e)
12  # unicast messages, but routing tree is not ok.
13  18.398 from=34, to=47, via=33, seq=9, hops=2, path=[33,34], photo=244.14
14  18.416 from=33, to=47, via=7, seq=9, hops=3, path=[7,34,33], photo=8319.85
15  18.474 from=7, to=47, via=33, seq=9, hops=2, path=[33,7], photo=8888.24
16  # another HELO
17  12.020 SEND HELO from=47, to=*, seq=13, (7e 44 55 00 ff ff 00 00 0e 00 0c 01 05 ff ff 00 2f 00 0d 00 00 00 00 03 00 1c 8c 7e)
18  12.955 from=33, to=47, via=33, seq=37, hops=1, path=[33], photo=8316.04
19  13.001 from=7, to=47, via=7, seq=37, hops=1, path=[7], photo=8937.84
20  14.920 from=33, to=47, via=33, seq=38, hops=1, path=[33], photo=8319.85
21  15.077 from=7, to=47, via=7, seq=38, hops=1, path=[7], photo=8899.69
22  # and 34 is somehow gone? No sign of it until RESET
23  43.439 SEND RESET from=47, to=*, seq=14, delay=1000ms, (7e 44 55 00 ff ff 00 00 0e 00 0c 01 08 ff ff 00 2f 00 0e 00 00 00 00 03 e8 2c 63 7e)
24  # broadcast
25  45.257 from=7, to=*, via=33, seq=1, hops=2, photo=8361.82, path=[33,7]
26  45.284 from=33, to=*, via=33, seq=1, hops=1, photo=7667.54, path=[33]
27  49.172 from=34, to=*, via=7, seq=3, hops=2, path=[7,34], photo=251.77
28  51.596 SEND HELO from=47, to=*, seq=15, (7e 44 55 00 ff ff 00 00 0e 00 0c 01 05 ff ff 00 2f 00 0f 00 00 00 00 03 00 ff ec 7e)
29  # 34 did not get the HELO message
30  53.007 from=34, to=*, via=33, seq=5, hops=2, path=[33,34], photo=205.99
31  54.955 from=33, to=47, via=33, seq=6, hops=1, path=[33], photo=8251.19
32  54.977 from=7, to=47, via=33, seq=6, hops=2, path=[33,7], photo=8857.73
33  ...
34  # another HELO
35  15:30:32.947 SEND HELO from=47, to=*, seq=16, (7e 44 55 00 ff ff 00 00 0e 00 0c 01 05 ff ff 00 2f 00 10 00 00 00 00 03 00 6d 51 7e)
36  # and each node sends directly to BS (47)
37  34.065 from=33, to=47, via=33, seq=26, hops=1, path=[33], photo=8285.52
38  34.083 from=7, to=47, via=7, seq=26, hops=1, path=[7], photo=8831.02
39  34.097 from=34, to=47, via=34, seq=26, hops=1, path=[34], photo=469.21

Fig. 4: Sample packets observed by the testing script


We intend to develop two solutions for this problem:

1. prepare a meta-protocol which can be used for sending debugging information, e.g. routing tables, packet reception rates, RSSI values, etc.,

2. gather all debug information in the node's Flash memory and then, after the experiment, collect the data via a serial or radio connection.

Both methods have some disadvantages. The first one may interfere with normal network operation and the protocols being tested, as the same underlying communications stack must be used for sending and receiving radio messages. The debugging messages may also be lost, and implementing a special reliable protocol with no impact on the other messages sent through the network seems hard, if possible at all. The second method can be implemented only on Flash-equipped WSN motes, and requires direct access to each node after the test if serial/USB transmission is to be used. However, the log transfers are intended to happen after the testing has ended, so radio transmission can also be used, with reliable protocols using acknowledgements and retransmissions while asking nodes one by one to dump their debugging logs over the radio. On the other hand, some nodes may still need manual intervention if they have exhausted their power supplies during the test.

6 Conclusions

When developing communication protocols for WSNs, it is hard to effectively test the proper operation of the whole network. The system that we have developed extends the functions of a typical BaseStation node by allowing us to send arbitrary messages from the BS and to observe the network response by capturing and analyzing the packets sent between WSN nodes. Test scenarios can be programmed in Python (or any other scripting/programming language) and use the TCP client-server model to interact with the network. Automated assessment of repeated test results is possible, and the packet trail of network operation can be used for detailed post-factum analysis of the behaviour of network nodes. To make this trail complete, we intend to extend our system to provide local logging of node operation in the nodes' Flash memories. After an experiment has ended, this data can be downloaded through a reliable communications link. The BaseStation-NetServ architecture of our system allows both logging such trails and providing real-time access to WSN messages through TCP client-server connections. The Wireshark packet analyzer can be used as a powerful tool for online or post-factum analysis of captured packets.

Acknowledgement: This work was supported by National Science Centre grantno. N 516 483740.


References

[1] Berezowski K (2012) The landscape of wireless sensing in greenhouse monitoring and control. International Journal of Wireless & Mobile Networks (IJWMN) 4(4):141–154

[2] Hempstead M, Welsh M, Brooks D (2004) TinyBench: the case for a standardized benchmark suite for TinyOS based wireless sensor network devices. In: Local Computer Networks, 2004. 29th Annual IEEE International Conference on, pp 585–586, DOI 10.1109/LCN.2004.129

[3] Jacobson V, Leres C, McCanne S (2009) libpcap: Packet capture library. Lawrence Berkeley Laboratory, Berkeley, CA

[4] Langendoen K (2006) Apples, Oranges, and Testbeds. In: Mobile Adhoc and Sensor Systems (MASS), 2006 IEEE International Conference on, pp 387–396, DOI 10.1109/MOBHOC.2006.278578

[5] Langendoen K, Baggio A, Visser O (2006) Murphy loves potatoes: experiences from a pilot sensor network deployment in precision agriculture. In: Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. 20th International, DOI 10.1109/IPDPS.2006.1639412

[6] Levis P, Madden S, Polastre J, Szewczyk R, Woo A, Gay D, Hill J, Welsh M, Brewer E, Culler D (2004) TinyOS: An operating system for sensor networks. In: Ambient Intelligence, Springer Verlag

[7] Nazhandali L, Minuth M, Austin T (2005) SenseBench: toward an accurate evaluation of sensor network processors. In: Workload Characterization Symposium, 2005. Proceedings of the IEEE International, pp 197–203, DOI 10.1109/IISWC.2005.1526017

[8] Słabicki M, Wojciechowski B, Surmacz T (2012) Realistic model of radio communication in wireless sensor networks. In: Computer Networks, Communications in Computer and Information Science, vol 291, Springer Berlin Heidelberg, pp 334–343