
Fleye on the Car: Big Data meets the Internet Of Things

ABSTRACT
Using modern vision algorithms, collision avoidance technologies such as the popular Mobileye [11, 14] are able to “interpret” a scene in real time and provide drivers with an immediate evaluation based on its analysis. However, such technologies are based on cameras inside or on the car, which are limited to short-range warnings and cannot detect collisions, empty parking slots, traffic jams, or anything else outside the camera’s view, which is similar to the driver’s view.

We propose an open-code system, called Fleye, that consists of an autonomous drone (nano quadrotor) that carries a radio camera and flies a few meters above and outside the car. The streaming video is transmitted in real time from the quadcopter to Amazon’s EC2 cloud, together with information about the driver, the drone, and the car’s state. Computer vision algorithms, such as image stabilization, combined with existing image and traffic databases of the area, can then process the video on the cloud. The output is then transmitted to the “smart glasses” of the driver. The control of the drone, as well as the sensor data collection from the driver, is done by a low-cost (<$30) mini-computer.

Most of the computations are done on the Internet’s cloud, which provides great potential for networking the cars and Fleye sensors. Such a sensor network may significantly reduce the number of accidents, ease traffic congestion, and thus improve the safety and quality of people’s lives.

1. MOTIVATION
An autonomous car is an autonomous vehicle capable of fulfilling the transportation capabilities of a traditional car. As an autonomous vehicle, it is capable of sensing its environment and navigating without human input. Robotic cars exist mainly as prototypes and demonstration systems. In contrast, vision-based Advanced Driver Assistance Systems (ADAS) such as Mobileye [11, 14] hold about 80% of the vision-system market today, with sights on 237 car models from 20 manufacturers by 2016 [5]. Such systems provide warnings for collision prevention and mitigation, and offer a wide range of driver safety solutions combining artificial-vision image processing, multiple technological applications, and information technology.

However, both autonomous and driver assistance systems are based on cameras that are positioned inside or on the car itself. As such, their field of view is not very different from that of the driver, and they are less efficient if the driver is already aware of the surrounding objects and possible hazards. In this paper we propose extending both the autonomous and assistance systems with sensors that collect different types of data: cameras that are outside the car and are carried by drones, and wearable devices that collect information about the driver.

1.1 Suggested Solution
We propose a system, Fleye, which provides the driver with real-time streaming video from the view of a quadcopter hovering above the car. The video is sent to the cloud together with other sensor data from the driver and the car. The output video from the cloud is then projected onto the smart glasses of the driver. The driver can also control the quadcopter using a dictionary of voice commands. We chose a quadcopter that is very low-cost, small, safe, and legally available from popular toy stores such as Amazon and Toys-R-Us. This is why a significant part of our code and hardware deals with turning this remote-controlled robot into an autonomous one.

Applications of such a system include: scouting traffic (for example, to avoid traffic jams), taking landscape pictures, detecting obstacles on the road ahead, and locating empty parking spaces in the street or in parking lots. Such a system may also save the driver’s life if, for example, it detects a crossing car around the corner (out of the driver’s view) that went through a red light and is about to hit the driver.

In order to obtain a sensor network from multiple cars, our system is based on the Internet and high-performance computation on the cloud, instead of an isolated computer inside each car. There is great potential for such networking of cars and Fleye sensors to significantly reduce the number of accidents and ease traffic congestion.

1.2 Our contribution
Our main contribution is to build an open platform for the Fleye system that is based on low-cost Internet of Things (IoT) hardware and on algorithms that run on the cloud. Some of its sub-systems may be used independently for other purposes. For example, our Arduino code can be used to turn most popular remote-controlled toy robots into autonomous, low-cost (<$30), and safe robots; our tracking algorithm may be used for tracking more general known objects; and the code that transmits data from the IoT board to the cloud is very general.

The hardware of the system consists of:

• A modern wearable board that runs Linux and transmits the data for further computation on the cloud.

• A commercial remote-controlled quadcopter (toy robot), which we turned into an autonomous robot using a controller (Arduino) that imitates the transmitted signal.

• An 8-gram camera with a transmitter that is mounted on the robot.

• A receiver that gets the analog video.

• A video grabber that converts it to digital video.

• Smart glasses that download the processed video and other data from the cloud through an http web address and present it to the driver.

• A pair of cameras for tracking the quadcopter.

The cost of each item above, except for the smart glasses, is roughly $40. We used Amazon’s Free Usage Tier, which offers 720 hours for free. For optimal results we used a pair of OptiTrack’s cheapest cameras, but our tracking code can also be used with popular web-cameras. The smart glasses cost about $1000, but other devices can be used to alert the driver, for example the car’s radio or other devices that can be connected to the mini-computer in the car. Experts also expect that such wearable glasses will cost less than $299 in the near future [2].

The open software that we developed for our system can be freely downloaded from our web site (removed for anonymity reasons; it will be published in the final version). It can be used for reproducing the experiments and for adding more capabilities and computer vision algorithms. We hope that future contributions of our group and the community will turn it into a popular and free application for both the safety and quality of people’s lives.

The software that we developed includes:

• A novel tracking algorithm and its implementation for localizing the quadcopter using regular web-cameras.

• A PID controller that stabilizes the quadcopter (hovering) at the desired position.

• Signal generation code that runs on the Arduino and sends the desired PPM signals to the quadcopter’s remote control.

• A computer vision algorithm and its implementation code for stabilizing the input video from the quadcopter, based on OpenCV [10]. The output (stable) video stream is uploaded to an http address that is downloaded by the smart glasses in real time.

• Communication code for sending the data from the wearable board to the cloud in real time.

• Code for adding more voice recognition commands to our system, based on the Sphinx [12] system.

Figure 1: The Ladybird quadcopter [7].

Our system does not contain algorithms for traffic analysis, such as finding empty parking places, nor an entire solution to the coordination problem. These algorithms are beyond the scope of this paper and already exist in commercial products (e.g., [11, 14]) or open software (usually based on the OpenCV [10] library).

Our paper focuses on the hardware and software that is missing in order to apply such algorithms to the Fleye system. It demonstrates a system that enables the flying robot to track the car and provide visual feedback outside the normal field of view of the car. We provide the sample code for stabilizing the video image above to demonstrate how new computer vision algorithms can be plugged into the system using our communication software. We intend to add more computer vision algorithms in the future and hope that the computer vision community will also contribute to this first initial version of the system.

2. SYSTEM OVERVIEW

2.1 Components
The hardware of our system consists of the following commercial products (see Fig. 1, 7 and 11):

• Intel’s Galileo: a wearable mini-computer (an “Internet of Things” board), similar to the Raspberry Pi.

• Walkera’s Ladybird: a small toy drone (quadcopter). It has no sensors or data transmitters, only a receiver.

• Transmitter: a Walkera DEVO 7E transmitter for sending commands to the quadcopter. It comes in the same package as the quadcopter.

• Arduino UNO R3: a cheap microcontroller board that sends PPM signals to the remote control of the quadcopter and thereby controls it.


Figure 2: The Fleye system during a live video stream from the autonomous quadcopter to the smart glasses of the driver.

Figure 3: A snapshot from the video that was downloaded to the smart glasses of the driver in Fig. 2. It consists of the post-processed, merged, and stabilized video that was computed on Amazon’s cloud based on two real-time inputs: (a) the camera on the quadcopter, and (b) the sensors inside the car. The man in this snapshot took the photo in Fig. 2 at the same time, using his iPhone HD camera.


• TX5805: an analog, low-weight FPV (first-person view) radio camera that is mounted on the quadcopter and sends a 5.8 GHz video signal. Shipped together with the Ladybird FPV version.

• RC5805: a 5.8 GHz video receiver that gets the analog streaming video from the TX5805.

• Diamond VC500: a video grabber that converts the analog composite video from the RC5805 to digital video over USB.

• Amazon’s web services: for running computer vision algorithms on the EC2 cloud.

• Cameras (OptiTrack Flex 3 or Logitech C920 webcam): for tracking the quadcopter.

• VUZIX M100: smart glasses for the driver.

2.2 Hardware Setup
The setup of our system is sketched in Fig. 12.

Camera on the quadcopter. The analog camera, carried by the quadcopter, is connected to a video-transmitter unit which transmits the video to the video receiver via a 5.8 GHz radio signal. The video receiver is connected to an analog-to-digital converter (video capture device), which outputs digital video to the IoT board via a USB connection. The video is uploaded to the Internet, via the computation unit, to the cloud (Amazon EC2).

Cameras on the car. We use a pair of cameras that are mounted on the front of the car and pointed up, to make sure that the quadcopter is always in their field of view; see Fig. 2. In our experiments we used OptiTrack’s cheapest camera, the Flex 3, but we also tested the algorithm with a regular web-cam (Logitech C920). One of the challenges that we had to deal with is that the quadcopter must already be in the field of view of the cameras in its initial position, before hovering. Hence, we used a simple improvised plastic stick that was glued to the car and carried the robot; see Fig. 13. When the system is activated, the quadcopter immediately leaves this device. Other solutions, which would allow us to send the quadcopter much higher, include hanging it from the ceiling of the parking lot, or using another pair of cameras that point to a lower position on the car during the initialization of the system.

In the car there are the Galileo board and the smart glasses that the driver wears. The Galileo collects information from the driver and the car, such as the temperature and the speed of the vehicle, as well as the video stream from the video capture device, and uploads them as explained above. The glasses download the streaming video and present it to the driver.

3. DATA FLOW

Control loop. In order to make the quadcopter hover autonomously at the desired position, we run a loop that consists of the following parts: (i) measure the 6 degrees of freedom (6DOF) of the quadcopter, i.e., its orientation and position, using tracking and localization algorithms; (ii) decide where the quadrotor should move, using the PID controller; (iii) produce the signal that imitates the remote control and send it to the quadcopter.
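To make the data flow concrete, the following minimal Python sketch shows the shape of this loop. The `tracker`, `pid`, and `link` objects are hypothetical placeholders for the tracking sub-system (Section 4), the PID controller (Section 6), and the Arduino link (Section 5), and the 20 Hz rate is only illustrative.

```python
import time

def control_loop(tracker, pid, link, target_pose, rate_hz=20):
    """Hover loop sketch: track the 6DOF pose, compute control commands,
    and forward them to the quadcopter via the Arduino link."""
    period = 1.0 / rate_hz
    while True:
        start = time.time()
        pose = tracker.estimate_pose()             # (i) x, y, z, pitch, yaw, roll
        commands = pid.update(pose, target_pose)   # (ii) roll, pitch, yaw, throttle
        link.send(commands)                        # (iii) Arduino converts these to PPM
        # keep an approximately constant update rate
        time.sleep(max(0.0, period - (time.time() - start)))
```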

Figure 4: Arduino Uno R3 controller [1].

Figure 5: Easycap Video Grabber [3].

Figure 6: Intel’s Galileo IoT board [4].

Figure 7: Remote control DEVO 7E (transmitter) of the quadcopter [8].


Figure 12: The hardware setup of Fleye.

Figure 14: Data flow of the Fleye system.


Figure 8: The base shield for connecting sensors to the Galileo board.

Figure 9: An example sensor: the barometer.

Figure 10: The analog video receiver RC5805 [3].

Figure 11: The wearable smart glasses of Vuzix [6].

Figure 13: The plastic device that we used to hold the robot before activating the system, so that the tracking camera could locate it.

Video capture. For the video capture we used the TX5805 camera to capture the Ladybird’s view and received the video directly on a computer using a USB video grabber that is connected to the RF video receiver module. The capture card captures the video and uploads it frame by frame to Amazon’s EC2 cloud. Each frame image (a jpg file) was uploaded by an OpenCV application and the cloud CLI (Command Line Interface) directly to S3, Amazon’s storage service.
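A minimal Python sketch of this capture-and-upload step is shown below. It assumes the video grabber is exposed as a standard capture device; the S3 bucket name and key prefix are hypothetical placeholders.

```python
import subprocess
import cv2

# Grab frames from the USB video grabber and push each one to S3 via the AWS CLI.
BUCKET = "s3://fleye-demo-bucket/frames/"   # hypothetical bucket

cap = cv2.VideoCapture(0)                   # the video grabber appears as a capture device
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    path = "frame_%06d.jpg" % frame_id
    cv2.imwrite(path, frame)                # encode the frame as a jpg file
    # 'aws s3 cp' copies the local file to the S3 storage service
    subprocess.run(["aws", "s3", "cp", path, BUCKET + path], check=False)
    frame_id += 1
```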

Sensors. For demonstration, we connected a pair of sensors inside the car to the Galileo: a temperature sensor and a barometer. Samples from each of these two sensors were uploaded to the web via the Galileo board every second by a Python script that sends the information as URL arguments.
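The upload script is essentially a loop of HTTP requests carrying the samples as URL arguments. The sketch below illustrates the idea with Python's standard library; the endpoint URL, parameter names, and the sensor-reading stubs are assumptions for illustration.

```python
import time
import urllib.parse
import urllib.request

SERVER = "http://example-ec2-host:3000/sensors"   # hypothetical endpoint

def read_temperature():
    return 25.0    # placeholder for the actual temperature sensor driver

def read_pressure():
    return 1013.0  # placeholder for the actual barometer driver

while True:
    params = urllib.parse.urlencode({
        "temperature": read_temperature(),
        "pressure": read_pressure(),
    })
    # send the samples as URL arguments, once per second
    urllib.request.urlopen(SERVER + "?" + params, timeout=2)
    time.sleep(1)
```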

Cloud computation. The server is a virtual machine (the EC2 service) running two nodeJS servers. One server receives the sensor samples and stores them. The second server runs on port 80 and creates an HTML page that contains the last frame received from the Ladybird’s camera. The frame is refreshed using Ajax and shown on a canvas (after the image file has been saved locally at the server). The Galileo parameters are also shown on the same canvas, using jQuery animated gauges.
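The two servers in our prototype are written in nodeJS. Purely as an illustration of their roles, the following Python/Flask sketch implements equivalent endpoints; the route names and the local frame path are hypothetical.

```python
from flask import Flask, request, send_file

app = Flask(__name__)
latest_samples = []

@app.route("/sensors")
def store_sample():
    # sensor samples arrive as URL arguments and are simply stored
    latest_samples.append(dict(request.args))
    return "ok"

@app.route("/frame.jpg")
def last_frame():
    # the most recent frame is assumed to be saved locally by the upload pipeline
    return send_file("latest_frame.jpg", mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```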

Smart glasses. The input information from the sensors and the unstable input video stream are combined into a stabilized video augmented with information from the sensors; see Fig. 3, 8, and 9. The resulting video is uploaded to the Internet and downloaded by the smart glasses via their Internet connection. The final video is then displayed to the driver.

4. LOCALIZATION AND TRACKING
Tracking algorithm. We implemented a novel tracking and localization algorithm that identifies the position and orientation of our quadcopter based on an RGB input image from a regular web-camera. We assume that the quadcopter is equipped with known colored markers. The output of the algorithm is the 6 degrees of freedom (x, y, z, pitch, yaw, roll). These coordinates are then compared by the PID controller to the desired position of the robot. A new command is then sent to the quadcopter, and the tracking algorithm is called again in a loop.

We can obtain the 6DOF of the quadcopter at a high frame rate (FPS) with high accuracy as follows. In order to support fast and robust pose estimation, we use a specific model-based approach for tracking. We model the colored markers on the quad as being rigidly transformed and then projected onto the camera. In order to allow low-latency tracking, we separate the problem into two phases: detecting blobs in the image, and tracking the quad using the detected blobs as observations. We note that neither of these phases requires high-level software packages, and yet both are highly parallelizable, making them suitable for a range of platforms.

Phase I. In phase one we detect points in the image that are close to the predicted marker colors and are of high uniformity. This is done by computing the mean and variance of the neighborhood of each pixel, and saving local minima of the variance that are close to the predicted colors.
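The following OpenCV sketch illustrates Phase I. The window size, color tolerance, and the use of a box filter for the local statistics are assumptions for illustration rather than the exact parameters of the detector.

```python
import cv2
import numpy as np

def detect_blob_candidates(img_bgr, marker_colors, color_tol=40.0, win=7):
    """Simplified Phase I sketch: find pixels whose local neighborhood is both
    uniform (low variance) and close to one of the predicted marker colors."""
    img = img_bgr.astype(np.float32)
    mean = cv2.blur(img, (win, win))                  # local mean per channel
    sq_mean = cv2.blur(img * img, (win, win))
    var = (sq_mean - mean * mean).sum(axis=2)         # total local variance

    candidates = []
    for color in marker_colors:                       # predicted BGR marker colors
        dist = np.linalg.norm(mean - np.float32(color), axis=2)
        mask = dist < color_tol
        # erosion computes the local minimum, so equality marks local variance minima
        local_min = var == cv2.erode(var, np.ones((win, win), np.uint8))
        ys, xs = np.where(mask & local_min)
        candidates.extend(zip(xs.tolist(), ys.tolist()))
    return candidates
```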

Phase II. In this phase we use a particle filter [13] in the pose space $\theta_{\text{pose}} \in SE(3)$. Each point in the target model has a color and a position associated with it. Positions are projected according to the pose $\theta_{\text{pose}}$ and the camera intrinsic parameters, which are known. For the colors we assume a linear intensity transformation described by $\theta_{\text{intensity}}$. Particles are sampled in the Lie algebra in order to propose updates, and are evaluated according to

$$P(\theta) = \prod_i \exp\left( -\frac{\bigl(c_i(\theta_{\text{intensity}}) - c^{\text{meas}}_{j(i)}\bigr)^2}{\sigma_i^2} - \frac{\bigl(\mathrm{proj}(x_i;\theta_{\text{pose}}) - x^{\text{meas}}_{j(i)}\bigr)^2}{\sigma_{\text{var}}^2} \right), \qquad (1)$$

where $\mathrm{proj}(x_i;\theta_{\text{pose}})$ denotes the projection of each model point to the camera plane after a rigid transformation set by $\theta_{\text{pose}}$, and $c_i(\theta_{\text{intensity}})$ denotes the color transformation of color $i$. $c^{\text{meas}}_{j(i)}$ and $x^{\text{meas}}_{j(i)}$ denote the color and location of the blob nearest in spatial-color space to predicted point $i$. In order to speed up the search we use a standard approximate nearest-neighbors search tree [9].
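As an illustration of Eq. (1), the sketch below evaluates a single particle in log space, given arrays of the projected model points, their transformed colors, and the colors and locations of the blobs already associated to them by the nearest-neighbor search. The sigma values are placeholders, not the values used in the implementation.

```python
import numpy as np

def particle_log_weight(proj_points, pred_colors, meas_points, meas_colors,
                        sigma_pos=4.0, sigma_color=20.0):
    """Log of Eq. (1) for one particle: sum of squared color and position errors
    between predicted model points and their associated measured blobs."""
    color_err = np.sum((pred_colors - meas_colors) ** 2, axis=1) / sigma_color ** 2
    pos_err = np.sum((proj_points - meas_points) ** 2, axis=1) / sigma_pos ** 2
    return -np.sum(color_err + pos_err)   # log of the product of exponentials
```

Exponentiating and normalizing these log-weights over all particles yields the resampling weights of the filter.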

In our experiments we used 600 particles to track the target. An example output of the blob detector can be seen in Figure 15 (left), and a demonstration of the particle cloud and the resulting estimate is shown in Figure 15 (right).

5. AUTONOMOUS QUADCOPTER
In order to implement the link from the IoT board to the remote control, we require a responsive, low-latency solution. The remote control takes input from a human in the form of joystick commands, or alternatively in the form of a pulse position modulation (PPM) signal, which is an analog pulse train that mimics human joystick control. In this work we implement an interrupt-driven PPM controller using an Arduino Uno (which uses a 16 MHz ATmega328 microcontroller).

The input PPM signal is a 33.3 kHz, 8-channel, inverted pulse train with a pulse width of 400 µs and channel values between 600 and 1500 µs. This was determined by reading a trainer signal from the DEVO 7E controller with an oscilloscope (see Figure 16). The first four channels of the PPM signal provide the inputs to the remote control.

The timing of the PPM signal is crucial, and the controller must output a constant pulse train at 33.3 kHz without disruption. Any slight interruption to this signal causes a break in the communication link and disables the RC. This presents a challenge, since the controller must also receive input asynchronously from the IoT board, and we cannot, in general, give guarantees about the timing of this module. The solution was to implement a loop-driven input polling routine which populates a register buffer asynchronously as it receives such input from the IoT board; the PPM signal generator was implemented as an interrupt routine triggered by a prescaler value applied to the microcontroller clock frequency.

The end result is a robust PPM controller that receives input asynchronously and constantly outputs a stable PPM train, even during transition states where the register buffer is being overwritten. Further, this solution scales with the number of interrupt ports on the microcontroller: for example, using an ATmega2560 microcontroller we are able to facilitate 6 independent PPM ports (and thus up to 6 Flying Eyes) with the same hardware.
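On the IoT-board side, the asynchronous updates can be as simple as writing the four channel values over a serial link. The sketch below uses pyserial; the device path, baud rate, and comma-separated framing are assumptions for illustration, while the actual register-buffer handling lives on the Arduino.

```python
import serial  # pyserial

# Open the serial link to the Arduino that generates the PPM train.
arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)

def send_channels(roll, pitch, yaw, throttle):
    """Send the four channel values (in microseconds, within the 600-1500 us
    range observed on the DEVO 7E trainer signal) to the Arduino."""
    line = "%d,%d,%d,%d\n" % (roll, pitch, yaw, throttle)
    arduino.write(line.encode("ascii"))
```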

6. THE CONTROLLER
Given the current vehicle position $x \in \mathbb{R}^3$ and the desired vehicle position $\bar{x} \in \mathbb{R}^3$ from the tracking sub-system, we compute the desired roll $u_\psi$, pitch $u_\theta$, yaw $u_\phi$, and throttle $u_t$ control outputs using four separate PID loops:

$$e = x - \bar{x} \qquad (2)$$

$$u_\theta = k_{x,i}\int e_x + k_{x,p}\,e_x + k_{x,d}\,\frac{d}{dt}e_x \qquad (3)$$

$$u_\psi = k_{y,i}\int e_y + k_{y,p}\,e_y + k_{y,d}\,\frac{d}{dt}e_y \qquad (4)$$

$$u_\phi = k_{\phi,i}\int e_\phi + k_{\phi,p}\,e_\phi + k_{\phi,d}\,\frac{d}{dt}e_\phi \qquad (5)$$

$$u_z = k_{z,i}\int e_z + k_{z,p}\,e_z + k_{z,d}\,\frac{d}{dt}e_z \qquad (6)$$

$$u_t = \frac{1}{\cos(\theta)\cos(\psi)}\,(u_0 + u_z) \qquad (7)$$

The final throttle output is computed by adding a gravity compensation value $u_0$ that accounts for the weight of the lifted vehicle, and by scaling the thrust according to the pitch and roll of the quadcopter.

The yaw control output $u_\phi$ can be set to 0 if yaw control is not desired or necessary.

Finally, all outputs $u_\theta, u_\psi, u_\phi, u_t$ are capped to remain within the physically possible values.
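A direct Python transcription of Eqs. (2)-(7) is sketched below; the gains, the output limit, and the discrete integral/derivative approximations are illustrative and would need to be re-tuned for a real vehicle.

```python
import math

class AxisPID:
    """One of the PID loops in Eqs. (3)-(6); gains here are illustrative only."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.ki * self.integral + self.kp * err + self.kd * deriv

def controller_step(x, x_des, yaw_err, pids, u0, pitch, roll, dt, limit=1.0):
    """Compute (pitch, roll, yaw, throttle) outputs as in Eqs. (2)-(7).
    `pids` holds one AxisPID per axis: "x", "y", "z", and "yaw"."""
    ex, ey, ez = x[0] - x_des[0], x[1] - x_des[1], x[2] - x_des[2]     # Eq. (2)
    u_theta = pids["x"].update(ex, dt)                                 # pitch, Eq. (3)
    u_psi = pids["y"].update(ey, dt)                                   # roll, Eq. (4)
    u_phi = pids["yaw"].update(yaw_err, dt)                            # yaw, Eq. (5)
    u_z = pids["z"].update(ez, dt)                                     # vertical, Eq. (6)
    u_t = (u0 + u_z) / (math.cos(pitch) * math.cos(roll))              # Eq. (7)
    cap = lambda u: max(-limit, min(limit, u))                         # cap all outputs
    return cap(u_theta), cap(u_psi), cap(u_phi), cap(u_t)
```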

7. CLOUD PROCESSING
The streaming video data from the camera on the hovering quadcopter, the cameras on the car, the glasses, and the sensors on the Galileo board are transmitted to Amazon’s EC2 cloud for high-performance computation. The result is a processed video image, possibly with additional markers and text, that is uploaded in real time to an http address. The glasses project this http content to the driver.


Figure 15: Tests on the tracking module. Left: Phase I results; black circles denote detected color blob candidates. Right: Phase II results; the red crosses denote the estimated mean pose from the particle filter, and the blue dots demonstrate the projection of different pose hypotheses.

Figure 16: A PPM signal that is transmitted from the Arduino to the remote control.


As a sample application, we provide an algorithm and its implementation that runs on the cloud and stabilizes the input video from the tilting quadcopter. The algorithm also gets sensor data, such as temperature, from the Galileo board and adds it to the output video. We give more details in this section.

Due to various reasons, such as wind and limited hovering accuracy, the video captured from the quadcopter’s camera is unstable and shaky. Therefore, instead of directly displaying the video on the smart glasses, we run video stabilization code and present the output video stream to the driver for a better experience.
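The paper does not prescribe a particular stabilization method; the following OpenCV sketch shows one common approach that could fill this role: estimate a rigid motion between consecutive frames from sparse optical flow, low-pass filter the accumulated trajectory, and warp each frame by the difference.

```python
import cv2
import numpy as np

def stabilize(frames, smoothing=0.9):
    """Yield stabilized frames; `frames` is any iterable of BGR images."""
    prev_gray = None
    dx = dy = da = 0.0            # accumulated camera trajectory
    sx = sy = sa = 0.0            # smoothed trajectory
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=20)
            if pts is not None and len(pts) >= 3:
                nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
                good = status.ravel() == 1
                if good.sum() >= 3:
                    m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
                    if m is not None:
                        dx += m[0, 2]
                        dy += m[1, 2]
                        da += np.arctan2(m[1, 0], m[0, 0])
        # low-pass filter the trajectory and warp the frame by the difference
        sx = smoothing * sx + (1 - smoothing) * dx
        sy = smoothing * sy + (1 - smoothing) * dy
        sa = smoothing * sa + (1 - smoothing) * da
        cos_a, sin_a = np.cos(sa - da), np.sin(sa - da)
        correction = np.float32([[cos_a, -sin_a, sx - dx],
                                 [sin_a,  cos_a, sy - dy]])
        h, w = frame.shape[:2]
        yield cv2.warpAffine(frame, correction, (w, h))
        prev_gray = gray
```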

8. EXPERIMENTS
In Fig. 17 we show a summary of the experiments that we performed to test the stability of the quadcopter with respect to a given computer vision algorithm, by hovering it over a given point on the ground. The timing includes the radio transmission time, the tracking time, the video capture, and the PID controller. The y-axis shows the average error in meters from the quadcopter to the target position that it is supposed to hover at. The x-axis shows the number of updates per second to the quadcopter position in the control loop of tracking and updating the current position.

An artificial empty loop was added to control the number of iterations per second; in practice, this loop is replaced by the computer vision algorithm that analyzes the given video image. The minimum update rate was roughly 10 iterations per second; a longer delay in the loop caused the quadcopter to leave the camera’s field of view. When the delay was decreased too much (around 90 iterations per second), the error actually got larger. This is because our PID controller was calibrated for 20 iterations per second, which is roughly the time needed by our tracking algorithm. For more, or different, computer vision algorithms, it would thus help to re-calibrate our PID controller.

Additional videos were removed due to issues of anonymity of the authors. These videos will be published together with a final version of this paper.

9. REFERENCES
[1] http://arduino.cc/en/main/arduinoboarduno.

[2] https://gigaom.com/2013/08/08/why-google-glass-costs-1500-now-and-will-likely-be-around-299-later/.

[3] http://www.amazon.com/easycap-audio-video-capture-adapter/dp/b0019sssmy.

[4] http://www.intel.com/content/www/us/en/do-it-yourself/galileo-maker-quark-board.html.

[5] http://www.marketwatch.com/story/mobileye-shares-rally-as-firms-initiate-positive-coverage-2014-08-26.

[6] http://www.vuzix.com/consumer/products m100/.

[7] http://www.walkera.com/en/showgoods.php?id=1652.

[8] http://www.walkera.com/en/showgoods.php?id=450.

[9] S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. J. ACM, 45(6):891–923, Nov. 1998.

[10] G. Bradski and A. Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library. O’Reilly Media, Inc., 2008.

[11] E. Dagan, O. Mano, G. P. Stein, and A. Shashua. Forward collision warning with a single camera. In Intelligent Vehicles Symposium, 2004 IEEE, pages 37–42. IEEE, 2004.

Figure 17: Stability tests on a hovering Ladybird quadcopter (x-axis: iterations per second; y-axis: average error in meters).


[12] K.-F. Lee. Automatic Speech Recognition: The Development of the Sphinx Recognition System, volume 62. Springer, 1989.

[13] B. Ristic, S. Arulampalam, and N. Gordon. Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House, 2004.

[14] G. Stein. System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images, Sept. 26, 2006. US Patent 7,113,867.

[15] J. Ueki, J. Mori, Y. Nakamura, Y. Horii, and H. Okada. Development of vehicular-collision avoidance support system by inter-vehicle communications - VCASS. In Vehicular Technology Conference, 2004. VTC 2004-Spring. 2004 IEEE 59th, volume 5, pages 2940–2945, May 2004.