A Blueprint for a Manageable and Affordable Wireless Testbed: Design, Pitfalls and Lessons Learned
Ioannis Broustis, Jakob Eriksson, Srikanth V. Krishnamurthy, Michalis FaloutsosDepartment of Computer Science and Engineering
University of California, Riverside
{broustis, jeriksson, krish, michalis}@cs.ucr.edu
TRIDENTCOM 2007
Motivating factors for our architectural design
Functional requirements• Tune basic wireless network parameters, implement functionalities
Hardware requirements• Easily extend/update the testbed with new technologies, compatibility
Software requirements• Easily perform s/w configurations and updates uniformly for all devices
Efficiency and social implications• Non-intrusive deployment, limited interference from/to co-located wireless networks
Cost constraints• Low cost, without compromising the capabilities
Manageability• Remote network configurations, update distributions, log gathering
In this paper…
We justify our architectural design choices• Diskless nodes• PoE• Linux NFS boot
We present how we manage our wireless testbed• Central server
Provides the Linux image and drivers for the nodes
Full access to all aspects of the network through this server
We discuss some pitfalls and mistakes to avoid• Transmission power and sensing threshold• Deployment issues
Deployment
31 nodes, deployed on the 3rd floor of the CS building @ UCR
• Deployed in labs, offices, corridors• Both short and long links maintained
Hardware for the nodes
• Remote access through Ethernet interface• Low cost• Silent, small size
Soekris net4826• 266 MHz CPU• 64-256 MB SDRAM• 10/100 Mbit Ethernet port• 2 miniPCI slots• On-board 128 MB Compact Flash• Serial port
Wireless cards: EMP 8602-6g, a/b/g• Atheros-based chipset• MadWifi driver• 5-dBi dual-band antennae
Testbed management from a central location
Central server• A simple desktop PC (Pentium 4 @ 1.8 GHz, 1 GB of memory)• Two Ethernet interfaces (one for the Internet, one for the testbed)
Server connected to nodes through a set of switches• Remote access from the server (only) to each individual node
Through secure shell connection (ssh)
PoE (Power over Ethernet)• Our switches (D-Link DES-1526) support PoE, as per the IEEE 802.3af standard• The nodes are powered directly from the switches
We can power each node on/off remotely, from the server, by (de)activating PoE on each port of the switch :-) Very useful when nodes hang
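The remote power-cycling above could be scripted on the server, e.g. via SNMP writes to the switch. A minimal sketch: the PoE-control OID, enable/disable values, and write community below are hypothetical placeholders (the real values come from the DES-1526 MIB); the `snmpset` commands are only built here, not executed.

```python
# Sketch: power-cycle a node by toggling PoE on its switch port via SNMP.
SWITCH_IP = "10.0.0.253"          # first testbed switch (per the addressing slide)
COMMUNITY = "private"             # hypothetical SNMP write community
POE_OID = "1.3.6.1.4.1.9999"      # placeholder OID -- NOT the real D-Link value

def poe_command(port: int, on: bool) -> list[str]:
    """Build the snmpset command that (de)activates PoE on one switch port."""
    value = "1" if on else "2"    # hypothetical enable/disable integer values
    return ["snmpset", "-v2c", "-c", COMMUNITY, SWITCH_IP,
            f"{POE_OID}.{port}", "i", value]

def power_cycle(port: int) -> list[list[str]]:
    """Power a node off, then back on -- useful when a node hangs."""
    return [poe_command(port, on=False), poe_command(port, on=True)]
```

In a real driver each command list would be handed to `subprocess.run`.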
Overall connectivity
[Figure: overall connectivity diagram]
OS boot for nodes
Main software requirements: • Secure• Easily configurable• Lightweight, due to low CPU/memory of the Soekris boards
Linux, mounted over NFS• Whenever a node is turned on (PoE is activated), it loads a Debian Linux from the central NFS/BOOTP/TFTP server
BOOTP for IP assignment (similar to DHCP)
Update kernels/modules centrally and reboot the nodes for them to pick up the updates
Only the kernel and modules are loaded -- minimal memory demands
All required files are loaded in memory; no need to read/write anything locally on the nodes!
No disk = lower cost + lower probability of malfunction
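The netboot setup above could be captured in a small bootptab generator on the server. A sketch: `ha`/`ip`/`bf`/`rp` are standard bootptab tags (hardware address, IP, boot file, NFS root path), but the kernel name and export path below are illustrative, not the testbed's real ones.

```python
# Sketch: generate a per-node bootptab entry so a Soekris board netboots a
# kernel over TFTP and mounts its root filesystem over NFS.
def bootptab_entry(node_id: int, mac: str,
                   root: str = "/exports/debian",        # illustrative NFS export
                   kernel: str = "vmlinuz-soekris") -> str:  # illustrative kernel
    """One entry: wired IP 10.0.0.<id>, TFTP boot file, NFS root path."""
    return (f"node-{node_id}:"
            f"ha={mac.replace(':', '')}:"   # hardware (MAC) address, no colons
            f"ip=10.0.0.{node_id}:"         # static wired IP per the naming scheme
            f"bf={kernel}:"                 # boot file served over TFTP
            f"rp={root}")                   # root path mounted over NFS
```

Pointing `rp=` at a different export is how a node can be switched to another researcher's distro before a reboot.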
Performing and managing experiments
All experiments are controlled by the central server• Server opens an ssh session to a node through the wired interface• Initiates an iperf traffic experiment through this session on the wireless interface• Closes the ssh session after the end of the experiment
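The experiment flow above can be sketched as a command builder on the server: ssh in over the wired (10.0.0.x) interface, drive iperf traffic toward the destination's wireless (192.168.1.x) interface. The commands are only built here, not executed, and the iperf flag choices are illustrative.

```python
# Sketch: build the ssh/iperf commands for one server-controlled experiment.
def iperf_experiment(src: int, dst: int, secs: int = 30) -> list[list[str]]:
    """ssh commands: start an iperf server on dst, then send traffic from src."""
    # Management traffic goes over the wired segment (10.0.0.x) ...
    server = ["ssh", f"10.0.0.{dst}", "iperf", "-s", "-D"]   # daemonized server
    # ... while the measured traffic crosses the wireless segment (192.168.1.x).
    client = ["ssh", f"10.0.0.{src}", "iperf",
              "-c", f"192.168.1.{dst}", "-t", str(secs)]
    return [server, client]
```

A real driver would pass each list to `subprocess.run` and kill the remote iperf server afterwards.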
Different Linux distros can be used on different nodes• Each researcher maintains his/her own Linux distro version on the server
The bootp config file is modified before rebooting the testbed
At reboot, nodes load the distro pointed to by bootp
• Some nodes may boot a different distro than others
E.g., some nodes may be configured as the APs, while others as the clients (as long as the WiFi card supports both AP and client drivers)
Or experiments may be run in parallel by different researchers, on different nodes, on different channels, etc.
IP addressing and naming
Server: 10.0.0.1
Switches: 10.0.0.253 - 254
Nodes: static IP assignment• Wired segment: 10.0.0.11 and up• Wireless segment: 192.168.1.11 and up
Node name corresponds to last 8 bits of IP• Node-31 has Ethernet IP 10.0.0.31 and wireless IP 192.168.1.31• Easier to identify/remember nodes, and set-up experiments
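The naming convention above can be expressed as a small helper: the node name carries the last octet, which is shared by the wired and wireless segments.

```python
# Sketch of the addressing convention: node-<x> has Ethernet IP 10.0.0.<x>
# and wireless IP 192.168.1.<x>.
def addresses(name: str) -> tuple[str, str]:
    """Map a node name like 'node-31' to its (Ethernet IP, wireless IP)."""
    last_octet = int(name.split("-")[1])
    assert 11 <= last_octet <= 254, "node addressing starts at .11"
    return f"10.0.0.{last_octet}", f"192.168.1.{last_octet}"
```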
Pitfall: Placing nodes close with high power…
… is not efficient in terms of achievable throughput• Experiment:
2 nodes, 3 m apart, Tx power = 15 dBm
Fully-saturated TCP and UDP traffic from one node to the other
• We observed that the achieved throughput was too low
We kept increasing the distance between the nodes and observed that the throughput increased, up to a distance of 10 m
For distance = 3 m, the maximum throughput was achieved at Tx power = 1 dBm
• We observed similar behavior with 3 different wireless cards, on all channels and in both frequency bands
• This probably happens because the receiver's A/D converter cannot compensate for such a strong signal
Pitfall: Transmitting with maximum allowable power…
… is not always the best way to go, for some wireless cards• Experiments with a large number of links, both short and long
• Example: links of node 20 to all of its neighbors
Only one link activated at a time
• Max supported power: 18 dBm• Max throughput at 16 dBm, and it drops for higher power!
• Note that this is not the case for all cards
Observed with the EMP 8602-6g
Not with the Intel 2915
Conclusions
We have designed and deployed a manageable and affordable wireless testbed• PoE support for (de)activating nodes remotely• NFS, to avoid storing data locally, and managing updates easily• Silent, and small-size nodes, not to disturb people• Linux-based network, to have access to most aspects of the S/W• Manageable, through remote access to a central server
• Some companies and universities have already adopted our architectural decisions (Intel Research, UC Berkeley, etc.)
Questions?
Thanks :-)
http://networks.cs.ucr.edu/testbed