DEVICE AWARE 3D LIGHT PAINTING
by Nazerke Safina, BSc Computer Science April 2013
Supervisors: Prof Jan Kautz
This report is submitted as part requirement for the BSc Degree in Computer Science at UCL. It is substantially the result of my own work except where explicitly indicated in the text.
The report may be freely copied and distributed provided the source is explicitly acknowledged.
Abstract

The project was to develop an Android application for light painting photography. The application extrudes 3D models while a picture is taken in a long exposure. This results in a photograph of 3D objects hovering in the air, creating a futuristic effect. The first part was recreating a few similar existing applications. The second part of the project is research-based, exploring the possibility of using the phone's sensors to sense device motion and incorporating this knowledge into the application. It particularly focuses on two Android sensors: the rotation vector sensor and the linear acceleration sensor. The research part describes many choices made during development, including the rotation representation (Euler angles vs quaternions), the accelerometer choice (acceleration sensor vs linear acceleration sensor), etc. It also summarises some of the relevant analysis made in this field, such as accelerometer noise and filtering. The research part of the project was challenging, as the phone's accelerometer sensor had not previously been used for finding the position of the device.

The project delivered a light painting application with rotation-sensing capabilities, and also made substantial research into the double integration of the acceleration obtained from the MEMS sensors incorporated in today's smartphones.
Table of contents

1. Introduction
   1.1. Background information
   1.2. Problem statement
   1.3. Project's aim and goals
   1.4. Organization of the report
2. Related work
   2.1. Outline of the sources of information and related work
   2.2. Technologies Used
3. Design and implementation
   3.1. Loading a model into the application and fitting it into the screen
   3.2. Light painting iteration 1. Slicing 3D geometry
   3.3. Camera settings for light painting
   3.4. ImageAverager or if you are too poor for a DSLR
   3.5. Light painting iteration 2. Device-aware rotation of the geometry
   3.6. Light painting iteration 3. Device-aware extrusion of the geometry
   3.7. GraphHelper
4. Digital Signal Processing or What I would do if I had more time
5. Conclusion and evaluation
   5.1. Future work
   5.2. Evaluation
Chapter 1. Introduction
1.1 Background information
Light painting is an old photography technique of taking photos of light
traces in a dark location, using a long exposure and a light source, such as a
torch, as a real-world paintbrush. The technique was developed for research
purposes as part of a national program of psychic and physical regeneration
and military preparedness, and was used to study the movements of the human
body, especially those of gymnasts and soldiers.[1] The first known such light
painting photograph, called Pathological walk from in front, was taken by
attaching incandescent bulbs to the joints of a person. Like many other
military inventions, light painting has since been adapted for more peaceful
purposes by ordinary photographers as an artistic endeavour.
With the emergence of smartphones and the mass development of applications,
the idea migrated to the app markets of Android phones, iPhones and iPads.
Most such applications use the device as a simple torch, i.e. display a circle
of varying colour and size, or draw holographic 2D or 3D texts.
The most advanced of such applications is Holographium, currently
available in the App Store. It has many features, such as importing logos,
icons and photos in PNG, BMP, JPG and GIF formats and converting them to 3D
objects. It also offers user-definable extrusion time and depth. However, the
idea of my application came from Making Future Magic by the Dentsu London
studio.
1.2. Problem statement

In order to take a good light painting photo it is important to balance the
exposure time, the speed of the extrusion and the screen angle. That's why it
normally takes many attempts to get it right; most photos come out squashed
or blurred, with intrusive haziness. While the exposure time depends on the
settings of the camera, the extrusion speed and the screen angle depend on
the person dragging the phone. It should be possible to adjust the 3D model
being extruded according to the movement and rotation of the device, and so
make the extrusion process independent of the human factor.
1.3. Project's aim and goals

My project's aim is to take advantage of computer graphics techniques and
the newly emerging area of Android sensor programming to take the good old
light painting to the next level. The latter, the use of phone sensors, is
what makes my application different from the available applications I've
described, and what makes it both a challenging and an interesting project.

In order to carry out the project, I first had to learn many technologies,
such as Android programming, especially its OpenGL ES (OpenGL for Embedded
Systems) API for displaying high-end, animated graphics, and Android sensor
programming, which requires understanding which physical values the sensors
measure, what the physical intuition behind them is and how to interpret
them. As well as these programming interfaces, it was also necessary to
acquire theoretical knowledge and computer graphics concepts such as camera
position, bounding boxes, projection, transformation matrices, and rotation
mathematics such as Euler angles and quaternions. Of course, I didn't learn
everything, but in order to gain the necessary knowledge I had to cover a
much wider range than was strictly necessary.

I've carried out my project in an incremental and iterative fashion,
developing one goal at each iteration, with each iteration building on top of
the existing functionality and as such enhancing the application's
capabilities.
These goals were set in the beginning:
1. Parse a model object from an .obj file and load it onto the screen.
2. Make it possible to load any object so that the object's centre is in the
   centre of the phone screen and the object fits into the screen.
3. Move a small virtual window along the model in order to get slices, or
   cross-sections, of the model, similar to the CAT scans used in medical
   imaging. These slices are rendered in the application so that later, while
   dragging the phone in front of the camera and taking a photo in a long
   exposure, the slices are extruded and the composition of these 2D slices
   forms a 3D image in the photo. This is very similar to stop motion, where
   hundreds of photos of still objects in different positions form the
   illusion of movement.
4. Add some functionality to the user interface, like buttons to increase
   and decrease the near and far planes of the window, i.e. to specify the
   thickness of the 3D painting.
To make the process independent of the human factor, it should be possible
to sense the device in its environment and adjust the 3D model accordingly.
Therefore the following goals were set as well:
5. Find the appropriate sensor to sense how much the phone has been rotated,
   i.e. what the angle of rotation of the phone is, and use this information
   to keep the 3D model stable in its original orientation, independent of
   the rotation of the phone.
6. Use the appropriate sensor to extrude slices as quickly as the phone is
   dragged.
Goals 5 and 6 are where the capabilities of the Android sensors were to be
explored and used to find the rotation angle and the speed of the device's
movement. This is the research part of the project, which aims to find out
whether this is possible, whether the sensors are accurate enough and whether
they are fit for purpose.
The report is organised in a sequential fashion, building up the application
as it advances.
1.4 Organization of the report
Chapter 2. Related work
In this chapter I will outline and reference the sources of information: the
research papers, source code and books which assisted my project. I will also
list the software tools I used during development.
Chapter 3. Design and implementation
This chapter is the core of the report, where I'll describe the development
process of each iteration and show the deliverables of each. I'll also go
through the little helper applications which I've built to assist my research.
Chapter 4. Digital Signal Processing or What I would do if I had more time
This chapter talks about the types of noise in the accelerometer sensor that
constrain the realisation of some ideas, and also describes different
filtering techniques that might be used to eliminate this noise.
Chapter 5. Conclusion and evaluation.
This is the capstone chapter, where I will sum up the project, its
derivations and the lessons learned from it, and will provide an evaluation
of the achievement of the goals, as well as future work ideas.
Chapter 2. Related work
2.1. Outline of the sources of information and related work
Android's developer site is the primary place where I learned to program
Android applications.
OpenGL ES 2.0 Programming Guide by Aaftab Munshi, Dan Ginsburg and Dave
Shreiner is where I learned the basics of OpenGL ES.
Android Sensor Programming by Greg Milette and Adam Stroud was of great
help. Part 2 of this book, Inferring information from physical sensors, gave
me the extensive base necessary to deal with the sensor programming part of
my project.
Implementing Positioning Algorithms Using Accelerometers, a report by Kurt
Seifert and Oscar Camacho, describes a positioning algorithm using the
MMA7260QT 3-axis accelerometer with adjustable sensitivity from 1.5 g to 6 g.
Although this industrial accelerometer is of better quality, in terms of
noise and sensitivity, than the micro-machined electromechanical systems
(MEMS) accelerometers (usually with a maximum range of 2 g) used in current
cell phones, it gives ground to build on.
An introduction to inertial navigation, a technical report by Oliver J.
Woodman, researches the error characteristics of inertial systems. It focuses
on strapdown systems based on MEMS devices and, particularly usefully,
describes MEMS accelerometer error characteristics such as constant bias,
velocity random walk, bias stability, temperature effects and calibration
errors.
2.2 Technologies Used
This is a brief summary of technologies used in the project, how and where
they were used.
Android SDK (Software Development Kit) provides the API libraries and
developer tools necessary to build, test and debug apps for Android,
including the Eclipse IDE and the Android Development Tools.
Eclipse IDE is a software development environment with extensible plug-
in system for customising its base workspace.
ADT (Android Development Tools) is a plugin for Eclipse that provides a
suite of Android specific tools for project creation, building, packaging,
installation, debugging as well as Java programming language and XML
editors and importantly, integrated documentation for Android framework
APIs.
OpenGL ES is an API for programming advanced 3D graphics in
embedded devices such as smartphones, consoles, vehicles, etc. It is an
OpenGL API modified to meet the needs and constraints of the
embedded devices such as limited processing capabilities and memory
availability, low memory bandwidth, power consumption factor, and lack
of floating-point hardware.
Rajawali is an OpenGL ES 2.0 based 3D framework for Android, built on
top of the OpenGL ES 2.0 API. From its many functionalities I have used the
.obj file importer, frustum culling and quaternion-based rotations.
MeshLab is a system for the processing and editing of unstructured 3D
triangular meshes. The system helps to process big, unstructured models
by providing a set of tools for editing, cleaning, healing, inspecting,
rendering and converting this kind of meshes. I downloaded many open
source .obj files from different web sites to load them into my
application. Preprocessing those files in MeshLab before using in my
application helped to remove duplicated, unreferenced vertices, null
faces and small isolated components as well as provided coherent normal
unification, flipping, erasing of non manifold faces and automatic filling of
holes.
Octave is a high-level programming language primarily intended for
numerical computations. It has adopted gnuplot, a graphing utility, for
data visualisation. Octave was used in this project to plot graphs.
Chapter 3. Design and implementation
3.1. Loading a model into the application and fitting it into the screen

In my application I've used models defined in the .obj geometry definition
file format. This is a simple data format that represents the essential
information about 3D geometry: the position of each vertex, the UV position
of each texture coordinate vertex, the faces and the normals. This
information takes hundreds of lines and looks similar to this:
1. v 3.268430 -28.849100 -135.483398 0.752941 0.752941 0.752941
2. vn 1.234372 -4.395962 -4.233196
3. v 4.963750 -28.260300 -135.839600 0.752941 0.752941 0.752941
4. vn 1.893831 -3.827220 -4.498524
...
1178. f 3971//3971 3877//3877 3788//3788
1179. f 2489//2489 2608//2608 2447//2447
1180. f 3686//3686 3472//3472 3473//3473
...
Models are loaded into the application from this file by parsing each line
and saving the information into ArrayLists, or a similar data structure, of
vertices, normals, texture coordinates, colours and indices. This
information, after being sorted into lists, is ready to be drawn using a
typical OpenGL ES program. For this purpose I've used Rajawali, an open
source 3D framework for Android by Dennis Ippel. It has file parsing
functionality which can import geometry definition file formats such as
.obj, .md2, .3ds and .fbx, and a shader rendering class using OpenGL ES,
which is precisely what is needed.
Models come in different sizes and are located at different coordinates in
camera space. When a model is loaded, it would be nice if it were
automatically placed in the centre of the screen and fitted to it, instead of
manually changing the camera position, the far and near planes or other
settings for every model.
To position a model object at the centre of the screen, it has to be placed
at the centre of the bounding box. To deal with the bounding box I've used
Rajawali's BoundingBox class. The centre of the box can be found as the
midpoint of the maximum and minimum points of the bounding box. Once this
point is found, the model's position can simply be reset.
Geometry3D anObject = mObjectGroup.getGeometry();
BoundingBox aBox = anObject.getBoundingBox();
Number3D min = aBox.getMin();
Number3D max = aBox.getMax();
float averageX = (min.x + max.x) / 2;
float averageY = (min.y + max.y) / 2;
float averageZ = (min.z + max.z) / 2;
mObjectGroup.setPosition(-averageX, -averageY, -averageZ);
To fit the model into the screen it has to be scaled to fit into the unit
bounding box. For that, the scale factor, i.e. how much the model should be
scaled by, needs to be calculated. The amount by which the unit bounding box
is bigger than the bounding box of the model is the scaling factor. The
depth, width and height of the unit box are all equal to 2 (from -1 to 1).
float bbWidthx = max.x - min.x;
float bbHeighty = max.y - min.y;
float bbDepthz = max.z - min.z;
float scaleFactorx = 2 / bbWidthx;
float scaleFactory = 2 / bbHeighty;
float scaleFactorz = 2 / bbDepthz;
float maxScale = Math.min(Math.min(scaleFactory, scaleFactorz), scaleFactorx);
mObjectGroup.setScale(maxScale, maxScale, maxScale);
3.2. Light painting iteration 1. Slicing 3D geometry
Once we are able to load a model into the application and fit it into the screen, a simple light painting solution can be developed. A moving window has to be developed which moves along the model and thereby creates 2D slices. This sequence of images demonstrates the movement of such a window.
Figure 1. Sliding window[2]
Achieving this effect is nothing more than moving the Znear and Zfar clipping
planes along the Z axis. By specifying the distance between Znear and Zfar,
the slice thickness can be defined. The starting point of the window is the
initial near plane. The window stops when it reaches some maximum far limit.
Figure 3 illustrates this.
float initialFar = 4.2f;
float far = initialFar;
float initialNear = 3.1f;
float near = initialNear;
float increment = 0.05f / 3;
float farLimit = 6.7f;

public void onDrawFrame(GL10 glUnused) {
    // The body is truncated in the source; presumably the window slides by
    // advancing both planes by the increment until the far limit is reached:
    if (far < farLimit) {
        near += increment;
        far += increment;
    }
    ...
}
The increment describes how fast the window should move: a large increment
gives quick movement and a small increment gives slow movement. The increment
and other specifiable variables are incorporated into the UI as in Figure 2.
OpenGL's onDrawFrame() draws the current frame. The frame rate (FPS, frames
per second) can be specified, which also allows control of how quickly the
window moves: the larger the frame rate, the quicker the window moves.
Figure 3. Illustration of initialFar and initialNear and sliding window
This is enough for the first working light painting iteration: a photo of a
3D model hovering in the air can be taken. However, the application doesn't
sense the device's rotation or its speed, and therefore requires the holder
to drag the phone in a very stable manner and at a constant speed. If the
phone is rotated, the resulting image will be distorted, because the model
being extruded doesn't change its orientation with the rotation of the phone.
This can be seen in Fig. 4. Dragging the phone in a stable manner, however,
results in the kind of images shown in Fig. 5.
Figure 4. Distorted image as a result of the device rotation to the right
Figure 5. Light painting while holding the phone straight and stable, without any rotation
3.3. Camera settings for light painting

To take a light painting picture a camera with these settings is needed: ISO
in the range 100-600, shutter speed from 4 to 8 seconds, manual focus and a
timer if a release cable isn't at hand. The camera has to be absolutely
stable throughout the whole process, because while it is taking a picture for
4 seconds, or whatever the shutter speed was set to, the phone is extruding
the light and the shot needs to be jitter-free. A tripod is essential for
this reason. The darker the room, the better the quality of the resulting
picture.
3.4. ImageAverager or if you are too poor for a DSLR

At first I didn't have a proper camera, so I used my laptop's webcam to
record a video while I was dragging the phone. The video is then saved as a
sequence of images to be averaged later in an application I have written for
this purpose.
The code in Code listing 1 takes a sequence of images stored in a list as
input, averages the red, green and blue components of all pixels separately,
creates a new raster using these averaged samples, then sets this raster data
on a newly created BufferedImage. The buffered image is then saved as a JPEG
file.
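Since Code listing 1 is not reproduced in this excerpt, the following is a minimal sketch of the averaging step described above (class and method names are mine, not from the report): each colour channel is summed over all frames and divided by the frame count, pixel by pixel.

```java
import java.awt.image.BufferedImage;
import java.util.List;

public class ImageAverager {

    // Averages the RGB channels of a list of equally sized frames.
    public static BufferedImage average(List<BufferedImage> frames) {
        int w = frames.get(0).getWidth();
        int h = frames.get(0).getHeight();
        int n = frames.size();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                long r = 0, g = 0, b = 0;
                for (BufferedImage frame : frames) {
                    int rgb = frame.getRGB(x, y);
                    r += (rgb >> 16) & 0xFF; // red component
                    g += (rgb >> 8) & 0xFF;  // green component
                    b += rgb & 0xFF;         // blue component
                }
                // recombine the averaged channels into one RGB pixel
                out.setRGB(x, y, (int) (r / n) << 16 | (int) (g / n) << 8 | (int) (b / n));
            }
        }
        return out;
    }
}
```

The averaged image could then be written out with ImageIO.write(result, "jpg", file), matching the JPEG output mentioned above.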
Figure 6. Averaged images
3.5. Light painting iteration 2. Device-aware rotation of the geometry
Now we can add rotation capabilities to the 3D model. That is, rotating the
object in the opposite direction to the rotation of the device. That is, if the
phone is turned left, turn the object to the right by the same amount of angle,
if the device is turned right, turn the object to the left, etc, so that the model s
orientation doesnt change. Device can rotate around X, Y and Z axes.
Therefore the object has to be able to rotate in 3 axes accordingly. Its also
good idea to add touch event, so that model starts rotating and window starts
moving only after the touch, giving a holder and the camera opportunity to act
in sync.
To find the angle the phone is rotated by, we can use Android's inertial
sensors in combination: the accelerometer, linear accelerometer, magnetometer
and gravity sensor. The rotation vector sensor can also be used for this
purpose. It is a synthetic sensor that combines the raw data of multiple raw
sensors, possibly the accelerometer, the magnetometer and, if available, the
gyroscope, and modifies some of that raw sensor data to output values
corresponding to the device coordinate system. The output is angular
quantities around the axes. Rotation about any given axis is described in
terms of the number of degrees of difference between the device's coordinate
frame and the Earth's coordinate frame. These can be measured in degrees or
quaternions, so the output can be in 3-vector, rotation matrix or quaternion
form.
Figure 7. Rotations around different axes.[3]
I have used the rotation vector sensor, as all those options give
approximately the same readings, but it is easier to extract data from the
rotation vector sensor and process it straight away. Synthetic sensors are
usually easier to work with, as they were developed with the aim of making
consumption of the data easier and more obvious. They sometimes modify raw
data before outputting it.
Another reason I have chosen the rotation vector sensor is that its output is
in a form similar to a quaternion, which is an alternative representation of
a rotation. To get a true normalised quaternion from the output of this
sensor it is recommended to use SensorManager.getQuaternionFromVector().[4]
The SensorManager also provides getRotationMatrixFromVector() for a simple
Euler representation of a rotation. The problem with the Euler representation
of a rotation is that it suffers from gimbal lock. Gimbal lock occurs in
physical sensor gimbals when two gimbals in a three-gimbal system are driven
into a parallel configuration, i.e. when two axes in a three-axis system
align, causing one of the degrees of freedom to be lost and locking the
system into rotation in a degenerate 2D system. It is still possible to
rotate around all three axes, but because of the parallel orientation of two
of the gimbal axes there is no gimbal available to accommodate rotation along
one axis. This takes effect when the phone is upright and the pitch is plus
or minus 90 degrees. Sudden inaccurate changes occur when the phone is
rotated from lying flat on a surface to an upright position.
Once the rotation of the phone is found and output as a quaternion, it is
possible to use it straight away to rotate the object. The Rajawali library
has a useful class, Quaternion, for quaternion processing. Inverting the
quaternion gives the quaternion of the opposite rotation:
case Sensor.TYPE_ROTATION_VECTOR:
    SensorManager.getQuaternionFromVector(Q, event.values);
    quatFinal = new Quaternion(Q[0], Q[1], Q[2], Q[3]);
    quatFinal = quatFinal.inverse();
    mRenderer.setQuat(quatFinal);
Figure 8. Inverting results in an orientation equal in magnitude to the inverted angle but in the opposite direction
When the models are loaded they might be in inconvenient positions, and one
would want to rotate them first before starting to use the application's
primary functionality. For example, the model might initially be loaded as in
Figure 9. It would be nice if we could rotate it, programmatically or even
through the UI, before starting the light extrusion, so that it, for example,
faces us as in Figure 2. This would be regarded as the offset orientation.
Figure 9. Inconveniently facing monkey
Programmatically, it is enough to include this line in the renderer's
constructor:

mObjectGroup.rotateAround(Number3D.getAxisVector(Axis.X), 90);

Different combinations of axis and angle parameters are possible. But the
last line of the previous code, mRenderer.setQuat(quatFinal);, resets the
orientation of the model to the orientation defined by the quatFinal
quaternion, which is derived from the rotation vector sensor readings. So a
different approach is needed to address this problem, rather than the
quaternion inverse.
When the phone is tapped, rotation of the object is enabled and the sensor
readings at that moment are saved as the initial orientation. The next time
the sensor senses a tiny rotation, it gives readings for the new orientation;
the difference between the old orientation (the initial orientation) and the
new orientation is found, and the new orientation is then saved as the old
orientation. In this way, each consecutive time the sensor delivers new
readings, the delivered readings are saved as the new orientation, while the
current new orientation is saved as the old orientation. The negative
difference between the quaternions is then applied to the offset quaternion.
The resulting quaternion is the final orientation. The offset quaternion is
the quaternion of the offset orientation.
Figure 10. offsetQuaternion (old orientation - new orientation) = final orientation. offsetQuaternion is
the result of mObjectGroup.rotateAround(Number3D.getAxisVector(Axis.X), 90);
public boolean onTouchEvent(MotionEvent event) {
    startSensing = true;
    firstTimeAfterTap = 1;
    quatInit = quatFinal;
    return super.onTouchEvent(event);
}

case Sensor.TYPE_ROTATION_VECTOR:
    SensorManager.getQuaternionFromVector(Q, event.values);
    Quaternion forLaterUse = new Quaternion(Q[0], Q[1], Q[2], Q[3]);
    quatFinal = new Quaternion(Q[0], Q[1], Q[2], Q[3]);
    if (startSensing == true) {
        if (afterTap == 1) {
            offsetQuaternion = mRenderer.getInitialOrientation();
        } else if (afterTap > 1) {
            Quaternion differenceOfRotation = findDifferenceOfRotation(quatInit, quatFinal);
            differenceOfRotation.inverseSelf();
            offsetQuaternion.multiply(differenceOfRotation);
        }
        quatInit = forLaterUse;
        afterTap++;
        mRenderer.setQuat(offsetQuaternion);
    }

private Quaternion findDifferenceOfRotation(Quaternion initialOrientation, Quaternion finalOrientation) {
    initialOrientation.inverseSelf();
    finalOrientation.multiply(initialOrientation);
    return finalOrientation;
}
3.6. Light painting iteration 3. Device-aware extrusion of the geometry

In light painting iteration 1 the sliding window was developed, where Znear and Zfar are moved by incrementing them by some constant amount. This amount is constant during extrusion; therefore, to get a full, undistorted, realistic image of a 3D model, it requires the holder to drag the phone at a constant speed. The image below shows what happens if that is not the case, i.e. if the phone's speed varies during light painting.
Figure 11. Phone dragged too fast or too slow
It is possible to take a good picture with just the first iteration of light painting, but it requires many trials to get it right. Holographium, the light painting application on the market, has features to help the user control their speed and get an undistorted result. For example, the extrusion distance shows the distance over which the user must drag the device to get an undistorted result, or the duration of the extrusion can be set before rendering. One-second interval sounds and 50%-done sounds are also included to warn the user to slow down or speed up the dragging of the phone.

In my project, I tried to explore the idea of sensing the device's position relative to its starting point, to eliminate the need to control the stability of the phone's speed in the process. So instead of moving the near and far planes by the same amount during extrusion, a solution where this amount depends on the device's displacement can be proposed. By displacement I mean the difference between the device's new position and its old position. As the phone changes its position, its new and old positions are updated accordingly. If the device is dragged slowly the increment will be small, and therefore the window will slide more slowly; if it is dragged more quickly, the window speed will increase accordingly. So, instead of float increment = 0.05f / 3; it is improved to be float increment = distance;.

The window can be imagined to slide as the device is dragged. If the device stops moving, the sliding window will also stop, because increment = old position - new position = 0. But this holds only if the distance the phone travels can be found accurately. Otherwise, some alpha can be introduced to compensate for the inaccuracy, such that float increment = alpha * distance;, where the alpha value has to be chosen by testing different alpha values, until the increments are close to the distances the phone travels and the 3D image is clear.
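The displacement-driven increment described above can be sketched as a small helper (a hypothetical class of mine, not code from the report), where alpha is the experimentally tuned compensation factor:

```java
// Hypothetical sketch: the sliding-window increment is derived from the
// device's displacement since the previous frame, scaled by alpha.
public class WindowIncrement {

    private final float alpha; // compensation factor, tuned by trial runs
    private float oldPosition; // device position at the previous frame

    public WindowIncrement(float alpha, float startPosition) {
        this.alpha = alpha;
        this.oldPosition = startPosition;
    }

    // Called once per frame with the latest sensed device position.
    public float next(float newPosition) {
        float distance = newPosition - oldPosition; // displacement since last frame
        oldPosition = newPosition;
        return alpha * distance; // zero when the device is not moving
    }
}
```

With alpha = 1 the window advances exactly as far as the device moved; a stationary device yields a zero increment, so the window stops with it.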
Now, mathematically, distance can be found by integrating velocity, and velocity can be found by integrating acceleration. There are two sensors available in most phones that can give accelerometer values.
The accelerometer sensor gives raw accelerometer data. When the device is still on a table and not accelerating, the accelerometer sensor reads a magnitude of approximately g = 9.81 m/s^2. Similarly, when the device is in free fall, accelerating towards the ground at 9.81 m/s^2, its accelerometer reads a magnitude of 0 m/s^2. Consequently, in order to measure the real acceleration of the device, the contribution of the force of gravity must be eliminated by applying a high-pass filter to the output of the accelerometer sensor. A high-pass filter attenuates the slowly varying background and emphasizes the higher-frequency components.

Another sensor that can be used to fulfil our need is the linear accelerometer. It is recommended to use this sensor if all we need is to filter out the constant downward gravity component of the accelerometer data and keep the higher-frequency changes. Greg Milette et al. explain: "The Linear Acceleration sensor is a synthetic sensor that consists of high-pass filtered accelerometer data, and has already been optimized for the particular hardware sensors in each device that will run your app."[4]

Theoretically this is realistic, but innate errors present in the accelerometer sensor, such as zero offset and drift, make simple integration give poor results; over time it quickly explodes to large, physically unrealistic numbers even with no actual movement, because the offset and drift are accumulated under the integral each time the integration is calculated. According to Greg Milette et al., "The accumulated offset and drift is one reason why you can't measure distance by double integrating".[5] One more reason they point out in the same book is that since constant non-zero velocity and zero velocity produce the same (zero) acceleration, they won't contribute to the double integration, and as a result the calculated distance is meaningless. Even if it is assumed that in this application the phone has non-constant, non-zero velocity, the drift and offset require heavy processing to remove.
The positioning algorithm I have used in my application can be described with this flow chart:
Figure 12. Positioning algorithm
When the device is flat on a table, the accelerometer shows non-zero, non-constant readings. If the output signal is not zero when the measured property is zero, the sensor is said to have an offset, or bias. The graph in Figure 13 shows the raw accelerometer data obtained when the phone was dragged along the z axis of the device coordinate space for 15 cm, as in Figure 12, and then held still for a while in its final position. The graph shows how the device was motionless until the 10th second, started its movement at that time, was dragged for about 5 seconds and then stopped. At the moment it starts moving it accelerates by some amount, but from the moment it starts stopping until it stops, the deceleration is not equal in magnitude to the acceleration. This will affect the integration error later in the process.
Figure 12. Dragging the phone along its z axis
Figure 13. Raw accelerometer data
The offset can't be completely eliminated, as its value is not constant or predictable. However, it is possible to suppress its influence by performing calibration. In my application, I've decided to store the first one hundred readings of the accelerometer sensor, when it is not undergoing any acceleration, in a list and average them in order to get the calibrated value. However, observations showed that the first twenty-five or so readings aren't even closely related to physical reality, so they were ignored and readings 26 to 126 were used for calibration. The long-term average is commonly used to calibrate readings: the more output is averaged first, the better the calibration that can be achieved. Once the calibrated value is known, it is subtracted from each subsequent reading.
As noted above, the influence of the offset can only be weakened, because of its fluctuating nature. Any residual bias therefore causes an error in position which grows quadratically with time.
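The quadratic growth follows directly from integrating a residual bias $\epsilon$ twice:

```latex
v_{err}(t) = \int_0^t \epsilon \, d\tau = \epsilon t,
\qquad
s_{err}(t) = \int_0^t \epsilon \tau \, d\tau = \tfrac{1}{2} \epsilon t^2
```

As a worked example, a residual bias of just 0.05 m/s^2 left after calibration would, after 10 seconds, accumulate into a position error of 0.5 * 0.05 * 10^2 = 2.5 m.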
While the application reads the first 125 sensor outputs for calibration, the phone needs to be still for several seconds. A countdown could be implemented in the UI to keep the user informed about the calibration process. Drift in the accelerometer is caused by a small DC bias in the signal. To resolve the problem of drift, a high-pass filter was applied. There are a number of possible filters that could be used; they are discussed in Chapter 4 of this report. Even after the filtering, a no-movement condition won't show zero acceleration. Minor errors in acceleration can be interpreted as a non-zero velocity, because samples not equal to zero are being summed. This velocity indicates a continuous-movement condition and hence an unstable position for the device. To discriminate between valid and invalid data in the no-movement condition, a filtering window was implemented. The method below high-pass filters the calibrated data using weighted smoothing and sets anything in the range [-0.02, 0.02] to zero.
private float[] highPass(float x, float y, float z) {
    float[] filteredValues = new float[3];
    // isolate the slowly varying (gravity) component
    lowFrequency[0] = ALPHA * lowFrequency[0] + (1 - ALPHA) * x;
    lowFrequency[1] = ALPHA * lowFrequency[1] + (1 - ALPHA) * y;
    lowFrequency[2] = ALPHA * lowFrequency[2] + (1 - ALPHA) * z;
    // subtract it to keep only the high-frequency changes
    filteredValues[0] = x - lowFrequency[0];
    filteredValues[1] = y - lowFrequency[1];
    filteredValues[2] = z - lowFrequency[2];
    // window filter: treat anything in [-0.02, 0.02] as no movement
    if (filteredValues[0] <= 0.02f && filteredValues[0] >= -0.02f) { filteredValues[0] = 0.0f; }
    if (filteredValues[1] <= 0.02f && filteredValues[1] >= -0.02f) { filteredValues[1] = 0.0f; }
    if (filteredValues[2] <= 0.02f && filteredValues[2] >= -0.02f) { filteredValues[2] = 0.0f; }
    return filteredValues;
}

After the high-pass filter and filtering window the accelerometer data looks like this:
Now that the accelerometer data is more or less filtered, it can be used to find velocity and, further, displacement. The first integration of acceleration is v = v0 + a*t, where t is the time interval. Android's SensorEvent data structure contains a timestamp, the time in nanoseconds at which the event happened, along with the other information passed to an app when a hardware sensor has information to report. Using this timestamp, it is possible to know the time interval at which the accelerometer sensor delivers new data, i.e. the time interval between sensor events.
float currentTimestamp = event.timestamp; // nanoseconds
float timeInterval = (currentTimestamp - timestampHolder) / 1000000000; // seconds
timeValues.add(tempHolder);
if (timestampHolder != 0.0f) { tempHolder += timeInterval; }
timestampHolder = currentTimestamp;

The velocity is found by calculating its x, y, z components separately and taking the square root of the sum of the squared components:
v = sqrt(vx^2+vy^2+vz^2)
// find v = v0 + a*t in three dimensions
float velocityX = findXVelocity(timeInterval, accelVals[0]);
float velocityY = findYVelocity(timeInterval, accelVals[1]);
float velocityZ = findZVelocity(timeInterval, accelVals[2]);
finalSpeed = velocityX * velocityX + velocityY * velocityY + velocityZ * velocityZ;
finalSpeed = (float) Math.sqrt(finalSpeed);

float initialVelocityX = 0.0f;

private float findXVelocity(float timeInterval, float f) {
    float finalVelocity = initialVelocityX + f * timeInterval;
    initialVelocityX = finalVelocity;
    return finalVelocity;
}
The graph shows the obtained result. As can be seen from the graph, when the device stops its motion the velocity doesn't go back to 0. This is because the accelerometer output, even filtered, won't be exactly zero, and therefore v0 accumulates error.

The second integration, d = d0 + v0*t, gives the displacement. It was also calculated by finding the three components separately:

d = sqrt(dx^2 + dy^2 + dz^2)
// find d = d0 + v0*t in three dimensions
float positionX = findXPosition(timeInterval, velocityX, accelVals[0]);
float positionY = findYPosition(timeInterval, velocityY, accelVals[1]);
float positionZ = findZPosition(timeInterval, velocityZ, accelVals[2]);
float finalPosition = positionX * positionX + positionY * positionY + positionZ * positionZ;
finalPosition = (float) Math.sqrt(finalPosition);

float initPosition = 0.0f;

private float findXPosition(float timeInterval, float velocityX, float acceleration) {
    float position = initPosition + velocityX * timeInterval;
    initPosition = position;
    return position;
}
The resulting graph is below. The fact that the velocity is never zero when the phone comes to rest after the movement causes even more error in the second integration, as shown in the graph. There are other factors that cause the position to wander off. Woodman showed that the accumulated error in position due to a constant bias epsilon is epsilon*t^2/2, where t is the time of integration. [6]

These graphs describe the case where the phone is dragged along only one axis, Z; the case where the phone is dragged making angles with all three axes is described in Appendix 3.
There are many sources of noise that make accelerometer sensors very unreliable for double integration. In the technical report "An introduction to inertial navigation", Woodman analyzes these intrinsic noises and gives them numeric characteristics. [6]

Accelerometer noise comes from electronic noise in the circuitry that converts motion into a voltage signal, and from mechanical noise in the sensor itself. MEMS accelerometers consist of small moving parts which are subject to the mechanical noise that results from molecular agitation, generating thermo-mechanical, or white, noise. [7] Integrating accelerometer output containing white noise results in a velocity random walk. Woodman analyzes the effect this white noise has on the calculated position by double integrating the white noise and finding the variance. The analysis shows that accelerometer white noise creates a second-order random walk in position, with a standard deviation that grows proportionally to t^(3/2).
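Woodman's white-noise result can be summarised as follows (stated here from [6]; sigma is the standard deviation of the accelerometer white noise and delta t the sampling interval):

```latex
\sigma_v(t) = \sigma \sqrt{\delta t \cdot t},
\qquad
\sigma_s(t) \approx \sigma \, t^{3/2} \sqrt{\frac{\delta t}{3}}
```

So the uncertainty in velocity grows with the square root of the integration time, while the uncertainty in position grows proportionally to t^(3/2).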
There are many sources of an accelerometer's electronic noise: shot noise, Johnson noise, flicker noise and so forth. Flicker noise causes the bias to stray over time. Woodman's analysis shows that flicker noise creates a second-order random walk in velocity whose uncertainty grows proportionally to t^(3/2), and a third-order random walk in position which grows proportionally to t^(5/2).
Furthermore, the analysis reveals that the gyro's white noise also influences the noisiness of the acceleration. Woodman says: "Errors in the angular velocity signals also cause drift in the calculated position, since the rotation matrix obtained from the attitude algorithm is used to project the acceleration signals into global coordinates." An error in orientation causes an incorrect projection of the acceleration signals onto the global axes, with the result that the accelerations of the device are integrated in the wrong direction. Moreover, the acceleration due to gravity can no longer be correctly removed.
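The effect of an orientation error on the projection can be illustrated with a simplified two-dimensional sketch (this is illustrative code, not part of the application): a heading error of theta leaks a fraction sin(theta) of the true acceleration onto the wrong global axis.

```java
public class ProjectionError {
    // Projects a body-frame acceleration (ax, ay) into the global frame
    // using a heading estimate that is wrong by errorDeg degrees.
    public static double[] project(double ax, double ay, double errorDeg) {
        double e = Math.toRadians(errorDeg);
        return new double[] {
            ax * Math.cos(e) - ay * Math.sin(e), // global x component
            ax * Math.sin(e) + ay * Math.cos(e)  // global y component
        };
    }
}
```

With 1 m/s^2 of true acceleration along x and only a 5-degree heading error, roughly sin(5 deg), i.e. about 0.09 m/s^2, is wrongly attributed to the y axis and is then double integrated in the wrong direction.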
3.7. GraphHelper
I developed the helper class GraphHelper to collect the necessary data and write it to a file in the external storage of the phone. Once the time interval, velocities and positions are calculated, this method is called:

saveDataToPlotGraph(timeInterval, raw, offset, highPassed, velocityX, velocityY, velocityZ, finalSpeed, positionX, positionY, positionZ, finalPosition);

It organizes and saves the data in value-timeInterval pairs, to make it easier for Octave to read.
...
// all we need to plot the graph of acceleration_X
float[] time_ax = {timeInterval, raw[0]};
rawAx.add(time_ax);
float[] time_calibrated_ax = {timeInterval, raw[0] - offset[0]};
calibratedAx.add(time_calibrated_ax);
float[] highpassed_ax = {timeInterval, highPassed[0]};
highpassedAx.add(highpassed_ax);
When the application pauses, all the accumulated data are written into a file:
protected void onPause() {
    super.onPause();
    mSensorManager.unregisterListener(this, mRotVectSensor);
    mSensorManager.unregisterListener(this, mLinearAccelSensor);
    writeDataToFile();
}

GraphHelper graphHelper;

private void writeDataToFile() {
    graphHelper = new GraphHelper(getApplication());
    graphHelper.writeToFile("rawAx.txt", rawAx);
    graphHelper.writeToFile("calibratedAx.txt", calibratedAx);
    graphHelper.writeToFile("highpassedAx.txt", highpassedAx);
    ...
}
In the GraphHelper class, the writeToFile() method does the actual writing to a file in external storage:

public void writeToFile(String filename, ArrayList data) {
    if (isExternalStorageAvailable() && !isExternalStorageReadOnly()) {
        myExternalFile = new File(directory, filename);
        try {
            outputStream = new FileOutputStream(myExternalFile);
            writer = new PrintWriter(outputStream);
            for (int i = 0; i < data.size(); i++) {
                toWrite = data.get(i);
                writer.print(toWrite);
            }
            writer.close();
            outputStream.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Figure 14. The screenshot of the folder holding all the saved files and the content (time interval - acceleration) of one of those files.
Chapter 4. Digital Signal Processing or
What I would do if I had more time
There are many advanced filtering techniques that could be tried in order to get accurate position data by double integrating the acceleration. The most recommended of these is Kalman filtering. It is said to provide excellent signal-processing results, but is known for being complicated. With a set of prior knowledge about the signal, a Kalman filter algorithm can provide an accurate estimate of a signal's true value. Moreover, Kalman filters can be applied to any signal from any sensor.
Slifka, in his thesis, explores and experiments with different filtering techniques that can be applied to acceleration data, such as the Finite Impulse Response (FIR) filter, the Infinite Impulse Response (IIR) filter and the Fast Fourier Transform (FFT). [8] The IIR and FFT approaches can't be used in real time and are therefore not options for this project. The FIR filter can be used in real time, as it is a non-recursive method, but it has a long delay time, which is also a big drawback for this application.
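To illustrate the delay trade-off (a generic example, not code from Slifka's thesis): the simplest FIR filter is a moving average, whose delay grows with the window length.

```java
public class MovingAverageFir {
    private final float[] taps;
    private int index = 0;
    private int count = 0; // how many samples have been seen, capped at taps.length

    public MovingAverageFir(int length) {
        taps = new float[length];
    }

    // Each output is the mean of the most recent inputs. A window of N taps
    // has a group delay of (N - 1) / 2 samples, which is why wide windows
    // respond slowly - the drawback mentioned above.
    public float filter(float input) {
        taps[index] = input;
        index = (index + 1) % taps.length;
        if (count < taps.length) count++;
        float sum = 0.0f;
        for (int i = 0; i < count; i++) sum += taps[i];
        return sum / count;
    }
}
```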
Most filtering methods require accumulating the data beforehand and processing it only after enough data has been acquired, i.e. they are not real-time filters. This left me with the impression that, given more time, I would learn the Kalman filtering algorithm and try to apply it to my application. It requires background knowledge in probability, especially Bayes' law, after which the scenario for fitting the sensor data into the Kalman filter would need to be implemented.
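As a flavour of what such an implementation would involve, here is a minimal one-dimensional Kalman filter sketch (illustrative only; the full application would need a state vector covering position, velocity and acceleration, plus a motion model):

```java
public class Kalman1D {
    private double x;       // state estimate
    private double p;       // variance of the estimate
    private final double q; // process noise variance
    private final double r; // measurement noise variance

    public Kalman1D(double initialEstimate, double initialVariance,
                    double processNoise, double measurementNoise) {
        x = initialEstimate;
        p = initialVariance;
        q = processNoise;
        r = measurementNoise;
    }

    // One predict-correct cycle for a constant-state model.
    public double update(double measurement) {
        p += q;                     // predict: uncertainty grows by process noise
        double k = p / (p + r);     // Kalman gain: how much to trust the measurement
        x += k * (measurement - x); // correct the estimate towards the measurement
        p *= (1 - k);               // the correction reduces the uncertainty
        return x;
    }
}
```

Repeated updates pull the estimate towards the noisy measurements while weighting them by the configured noise variances; this is the predict-correct structure that prior knowledge of the signal plugs into.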
Chapter 5. Conclusion and evaluation
5.1. Future work
Future work will explore filtering mechanisms, or additional sensors that, coupled with the accelerometer, would give more accurate position results. The best place to start would be Kalman filters, which are known to be effective and to produce good estimates from noisy data. One way or another, the goal of the future work is to implement a better positioning algorithm, i.e. a way to calculate the accurate position of the phone in space with respect to its starting point.
5.2. Evaluation
The first iteration of the project developed an application which can transform any .obj model to fit the screen after it has been loaded. It also accomplished the task of a sliding window for getting cross-sections of the model object. It provided a UI for adjusting parameters, making it easy to experiment with the speed, thickness and the starting and finishing z points of the sliding window; being able to try different settings is extremely helpful. Android application development essentials, OpenGL ES, and basic computer graphics theory such as camera positioning and bounding boxes were learned during the first iteration of the project. The second iteration enhanced the first by integrating device-aware rotational abilities into the model object. Much knowledge about Android's rotation vector sensor and different representations of rotation, such as Euler angles and quaternions, was gained during the development. The first and second iterations of the project were successfully accomplished and make a good working application, which can be used for light painting with the constraint that the holder has to drag the phone at a constant speed.
While the result of the first iteration, extruding slices of a 3D image, resembles existing light painting applications, the second iteration, where the rotation sensor is used to sense the device's rotation and incorporate the sensor's output into the 3D model, is a novelty that improves on the technology. Users can now rotate the device without fearing distorted results caused by the device's rotation.
The third iteration of the project, although it didn't reach its goal of obtaining the phone's accurate displacement by double integrating accelerometer data, examined the theoretical basis of double integration and showed that it is impractical to use acceleration alone for positioning, or for inertial navigation systems generally: the data has to be heavily processed with advanced filtering technologies. The results of each step of the double integration were plotted to visualise the obtained data. Existing noise measurements of the accelerometer sensor and ideas about filtering technologies were researched. A considerable number of experiments were carried out in order to reach an accurate representation of the accelerometer output, which by its nature senses the tiniest environmental vibrations and the natural shaking of the holding hand. A great deal of knowledge about Android's inertial sensors was gained during the development of the third iteration of the project.
Overall, considerable effort was put into the project within the given time constraints, and much knowledge and many skills were gained during the development and the research.
Appendix 1. System manual
The Android SDK provides the API libraries and developer tools necessary to build, test, and debug apps for Android. With a single download, the ADT Bundle includes everything you need to begin developing apps:
Eclipse + ADT plugin
Android SDK Tools
Android Platform-tools
The latest Android platform
The latest Android system image for the emulator
Once you have downloaded the Android SDK, import the code into the workspace: go to File -> Import -> Existing Android Project, as shown in the screenshots. This opens a window where you need to point to the project's location. Once you have chosen the project, click Finish.
Once it's created, the Rajawali external library needs to be downloaded from https://github.com/MasDennis/Rajawali and imported in the same way as described above. Once Rajawali is in the workspace as well, it should be included as an external library of the LightPainter3D project. Right-clicking the project -> Properties -> Android brings up the following window, where the Rajawali library needs to be added in the Library section.
Now it's ready to be run on an Android device. In the phone's settings USB debugging must be enabled (usually Settings -> Developers -> USB debugging, but this might differ from phone to phone) and the phone should then be connected to the computer via USB. Once connected, run the project as an Android application.
The Android device chooser window should appear. If it doesn't, then in Eclipse:
go to the Run menu -> Run Configurations;
right-click on Android Application and click New;
fill in the corresponding details, such as the project name, under the Android tab;
then, under the Target tab, select 'Launch on all compatible devices' and choose active devices from the drop-down list;
save the configuration and run it, either by clicking the 'Run' button on the bottom right side of the window or by closing the window and running again.
Appendix 2. User manual
For this light painting application you need a camera with adjustable shutter speed and ISO range, as well as manual focus. A release cable or the camera's timer is useful so that both you and the application can get prepared. The shutter speed can be around 6-10 seconds, and the ISO 200-400. You might want to experiment with different settings, as this will create different effects. The photo needs to be taken in a dark location, and the camera must not shake during the process. A tripod is helpful for this purpose.
The application allows you to define the thickness of the slice and the near and far planes, as well as the frames per second, which determines how quick the extrusion is. In my experience, lower fps and thinner slices create photos of better quality. The photos I've taken were extruded from slices of thickness 0.1 at 20 fps. These are only suggestions; light painting is a creative process!

Different models have different sizes, therefore the back or front of the object might fall outside the frustum. The Znear and Zfar regulators are provided to widen or narrow the frustum.
When the camera is set and you are ready, the photo can be taken. While it is being taken, you'll need to drag the phone in front of the camera at a constant speed. You are not constrained by the occasional rotations of the device that can happen while you are dragging the phone in the air: the application senses the rotation and rotates the object accordingly to keep it in place, therefore you have more freedom than with other applications of this kind.
Appendix 3. More results in graphs
Here are the results obtained from dragging the phone in the air for 30 cm, this time making angles with all three axes, like so:
Here is the highpass filtered accelerometer data:
Here is the first integration of the acceleration:
Here is the second integration of the acceleration:
Appendix 4. Code listing. Image averaging method
public static BufferedImage average(ArrayList<BufferedImage> images) {
    int n = images.size();
    // Assuming that all images have the same dimensions
    int w = images.get(0).getWidth();
    int h = images.get(0).getHeight();
    BufferedImage average =
        new BufferedImage(w, h, BufferedImage.TYPE_3BYTE_BGR);
    WritableRaster raster =
        average.getRaster().createCompatibleWritableRaster();
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            double sumRed = 0.0;
            double sumGreen = 0.0;
            double sumBlue = 0.0;
            // accumulate each colour channel over all images at this pixel
            for (int i = 0; i < n; ++i) {
                Color c = new Color(images.get(i).getRGB(x, y));
                sumRed += c.getRed();
                sumGreen += c.getGreen();
                sumBlue += c.getBlue();
            }
            raster.setSample(x, y, 0, sumRed / n);
            raster.setSample(x, y, 1, sumGreen / n);
            raster.setSample(x, y, 2, sumBlue / n);
        }
    }
    average.setData(raster);
    return average;
}
BIBLIOGRAPHY

[1] M. Mulcahy, Chronophotography, Building Better Humans, http://cinemathequefroncaise.com/Chapter1-1/Figure_B01_01_Station.html
[2] Sliding window screenshots, http://thequietvoid.com/client/objloader/examples/OBJLoader_FaceLocation_MATTD/applet/index.html
[3] Device orientation screenshots, https://developer.mozilla.org/en-US/docs/DOM/Orientation_and_motion_data_explained
[4] Milette G., Stroud A., Professional Android Sensor Programming, Chapter 2
[5] Milette G., Stroud A., Professional Android Sensor Programming, Chapter 6
[6] Woodman, O. (2007), An introduction to inertial navigation
[7] Miller S., et al., Noise measurement
[8] Slifka, D. (2004), An accelerometer based approach to measuring displacement of a vehicle body
Munshi A. et al., OpenGL ES 2.0 Programming Guide
Seifert K. et al. (2007), Implementing Positioning Algorithms Using Accelerometers
Ribeiro J.G.T. et al., New improvements in the digital double integration filtering method to measure displacements using accelerometers, pp. 538-542
Ribeiro J.G.T. et al., Using the FFT-DDI method to measure displacements with piezoelectric, resistive and ICP accelerometers
Android: computing speed and distance using accelerometer, http://maephv.blogspot.co.uk/2011/10/android-computing-speed-and-distance.html
Slater M. et al. (2002), Computer Graphics and Virtual Environments