DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2018

Improving robotic vacuum cleaners

Minimising the time needed for complete dust removal

ANDREAS GYLLING

EMIL ELMARSSON

KTH SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE
Improving robotic vacuum cleaners
Minimising the time needed for complete dust removal
Andreas Gylling Emil Elmarsson
Supervisor: Jana Tumova
Examiner: Örjan Ekeberg
KTH
School of Electrical Engineering and Computer Science
June 1, 2018
Abstract
The purpose of this study was to examine the cleaning efficiency of an au-
tonomous vacuum cleaner robot; namely, reducing the cleaning time needed
in an empty room. To do this we explored how the path planning could be
improved upon given access to a dust map that would allow for more sophis-
ticated algorithms depending on the state of the room. The approach we
employed in order to compare different preprogrammed path patterns and
our own greedy heuristic was to create a simulation environment in Unity3d.
In this environment we could create a two-dimensional plane representing the
length and width of a room of a size of our choosing. This plane was
then subdivided into square cells, discretising the environment; this grid
represented the dust map of the room.
rooms with different dimensions in order to examine how different strategies’
efficiency developed in relation to each other. Employing an algorithm like
our greedy heuristic after an initial zigzag sweep resulted in a significant im-
provement in comparison to a robot that is restricted to template patterns
only. Future work could involve finding the optimised solution for our heuris-
tic in order to make full use of the dust map and thereby achieve minimal
cleaning time for the robot.
Improving robotic vacuum cleaners

Minimising the time needed for complete cleanliness

Summary

The purpose of this study was to examine the cleaning efficiency of an
autonomous robotic vacuum cleaner and to minimise the time required to clean
an empty room. We examined how the robot could plan its paths better by
being given access to a dust map, that is, a grid that keeps track of the
dust distribution in a room. This dust map enables more sophisticated
algorithms that depend on the state of the room. We compared different
movement patterns, such as the zigzag, the spiral and the wall-to-wall,
with a heuristic of our own. A simple open room was simulated in Unity3d,
where we tested the different algorithms. The room consisted of a
two-dimensional plane representing the length and width of the room, which
could be adjusted as we pleased. This plane was then subdivided into a grid
representing our discretised dust map of the room. The tests were conducted
in rooms of different dimensions in order to see how the efficiency of the
different cleaning strategies developed in relation to each other. Using an
algorithm such as our heuristic after a first sweep with a zigzag pattern
resulted in a significant improvement compared to a robot that is bound to
predefined movement patterns. Future studies could attempt to find the
optimal solution to our heuristic, so that the dust map can be fully
exploited, thereby achieving minimal cleaning time for the robot.
Keywords
autonomous robotic vacuum cleaner, dust map, path planning, coverage
Contents

1 Introduction
  1.1 Purpose
  1.2 Problem statement
  1.3 Assumptions
  1.4 Terms and definitions
2 Background
  2.1 Cell decomposition
  2.2 Path planning
  2.3 Template patterns
  2.4 PPCR algorithms
  2.5 Related work
    2.5.1 Algorithm 1: The pattern method
    2.5.2 Algorithm 3: The genetic method
    2.5.3 TSP-based: Hess' algorithm
    2.5.4 Algorithm 2: The greedy method
3 Methods
  3.1 Implemented path planning algorithms
    3.1.1 Zigzag
    3.1.2 Spiral
    3.1.3 Wall-to-wall
    3.1.4 Greedy PPCR algorithm
    3.1.5 Greedy heuristic
  3.2 Cleaning strategies
    3.2.1 Strategy 1: Pattern-based strategy
    3.2.2 Strategy 2: Proposed greedy heuristic after an initial sweep
    3.2.3 Strategy 3: Greedy strategy
    3.2.4 Zigzag's direction
  3.3 Building the test system
  3.4 Benchmarks
4 Results
  4.1 Obstacle free room with more dust along the walls
    4.1.1 Time
    4.1.2 Turns
    4.1.3 Distance
    4.1.4 Discussion
  4.2 Obstacle free room with more dust at the centre of the room
    4.2.1 Time
    4.2.2 Turns
    4.2.3 Distance
    4.2.4 Discussion
  4.3 Obstacle free room with uniform dust distribution
    4.3.1 Time
    4.3.2 Turns
    4.3.3 Distance
    4.3.4 Discussion
  4.4 Consequence of zigzag choosing different ways
    4.4.1 Turns
    4.4.2 Time
    4.4.3 Discussion
  4.5 Room with obstacles
5 Discussion
  5.1 The obstacle free room
  5.2 The room with obstacles
  5.3 Limitations and possible further improvements of the heuristic
  5.4 Reflection on the hypotheses
  5.5 General matters
6 Conclusion
1 Introduction
Path planning of autonomous robots has been a subject of interest since the
beginning of robotics. The goal is to create an algorithm for the robot in
order to manoeuvre within an environment and perform some tasks without
human assistance. The task could range from navigating through a maze and
finding the exit, steering a car on a highway, mowing the lawn, to vacuum
cleaning a living room. We will focus on the latter; namely autonomous
vacuum cleaner robots. More specifically, we will aim for the robot to try to
attain a completely clean room in reasonable time.
Modern robotic vacuum cleaners use multiple types of optical sensors, ul-
trasound, proximity sensors, lasers, magnets and/or cameras to navigate in
their environment. These sensors are then used in combination with different
predetermined path planning algorithms such as wall-to-wall, zigzag, random
walk and spiral patterns. These could be executed separately or combined in
order to ensure that maximum dust removal is achieved (Hasan et al. 2014).
To monitor and assess the cleanliness of the environment the robots could
use a so-called dust map which tracks the dust level in the entire room (Lee
& Banerjee 2015).
Lee & Banerjee state that there are three different strategies that the robotic
vacuum cleaner could employ for its path planning. These are the follow-
ing:
1. Using a dust map which divides the room into cells and measures the
amount of dust in each one. Algorithms that use this map can then
generate efficient paths depending on which cell is still dirty.
2. A template based strategy which uses predefined path patterns (wall-
to-wall, zigzag, etc.)
3. AI based methods. These methods employ several learning techniques for
generating adaptive paths. In order to do this, information needs to be
gathered continuously from the environment.
The focus of our study will be on combining strategies 1 and 2, since we have
chosen to adopt a static environment, i.e. an environment in which there are
no moving people or furniture. In such environments there is no need for
adaptive paths.
1.1 Purpose
The purpose of this case study is to examine the cleaning efficiency of an
autonomous vacuum cleaner robot. Given access to a dust map, how would
the robot determine an effective cleaning path? Could the robot reduce the
cleaning time and consequently decrease the power consumption by combin-
ing several preset patterns? Are there certain environments where such a
combination would be particularly advantageous? How could we improve
upon modern robotic vacuum cleaners in order to reduce the time needed to
clean a room?
1.2 Problem statement
Our main focus will be on determining which path planning algorithms pro-
duce the most efficient result in an open room without obstacles. However,
we will also analyse how the efficiency is affected when obstacles are present
in the room. We wish to find which advantages and disadvantages there are
between different preprogrammed algorithms, assuming the environment is
given and set. For simplicity, we assume that the dust map is given. It can be
obtained by performing a first full-coverage sweep while simultaneously using
sensors to detect the amount of dust remaining under the robot. From now
on, our attention will be restricted to planning after the first sweep in order
to minimise the cleaning time. We aim to investigate the following:
• Given that the environment is known and static and with the help of
a dust map; how should the robot move in the room’s current state in
order to minimise the time needed for complete dust removal?
• How does the efficiency of the robot differ between different environ-
ments; namely, an empty room and one filled with obstacles?
In particular, we are interested in the following hypotheses which are based
on intuition:
• H1: As more dust is generally accumulated along the walls and corners
of a room, following the walls of the room after a first sweep will reduce
the cleaning time as the robot only has to clean dirty cells.
• H2: Moving towards the nearest uncleaned cell after the initial full
sweep will reduce the required cleaning time compared to predefined
patterns.
• H3: Employing a dust map algorithm will outperform template pat-
terns no matter the state of the room.
1.3 Assumptions
We intend to simulate a vacuum cleaner robot and its environment with
Unity3d (Unity Technologies 2018), which is an engine for making visual ap-
plications, mostly focused on games. These are the assumptions we make:
• Omniscience - It is assumed that the robot knows everything about
its environment, in the sense that it has access to a map of its surround-
ings as well as of the dust distribution and concentration. Mapping is
beyond the scope of this thesis. Hence, any potential inaccurate sensors
which might appear in real robotic vacuum cleaners are disregarded; we
assume that the robot can measure its position accurately. One way for
robots to map an environment previously unknown to them is SLAM
(Simultaneous Localisation And Mapping). SLAM maps while navi-
gating through the environment and determining the absolute position
of the robot. For example, a robot could accomplish this by using ul-
trasonic sensors or laser scanners. These would scan the environment
to know where the obstacles and other terrain are. These obstacles
are usually called landmarks and are used to tell where the robot is in
relation to the room. As more observations are made, the uncertainty
of the landmarks’ position and the measured distances between them
decreases. This means that the position of the robot can be assessed
more accurately. A difficulty that arises is merging the different scans
into one three-dimensional model, as they are taken from different posi-
tions and as the environment might be changing dynamically while the
robot is moving. Another issue is that the sensors might be defective,
so the estimations of the robot might not align with reality very well,
which could result in the robot thinking it is somewhere it actually is
not (Durrant-Whyte & Bailey 2006; Gansari & Buiu 2014).
• 2D-plane - For the sake of simplicity the environment is assumed to
be constituted by one single floor and represented by a two-dimensional
plane. There is no third dimension involved. We would therefore not
be able to go under furniture like a real robot would; there is either
"infinite" height or none at all. Some robots have cliff sensors for
detecting ledges so as not to fall, down a flight of stairs for example,
but that is beyond the scope of this thesis.
• Cell decomposition - We assume that the environment is composed
of square cells, like a chessboard, where each cell contains a dust level.
(See section 2.1 for detailed description.)
• Static environment - We also assume that the environment is static,
meaning the environment will not change, through moving obstacles for
example.
1.4 Terms and definitions
Table 1: Terms and their definitions.

Dust map: Data structure that keeps track of the amount of dust in each cell
of a subdivided environment plane.

PPCR: Path Planning Coverage Region.

Sweep: The robot has performed a sweep once it has traversed and cleaned all
cells of the environment once. Note that a successful sweep does not
necessarily mean that the room has been cleaned; several sweeps may be needed
to completely clean the room.
2 Background
2.1 Cell decomposition
Lee & Banerjee, Yakoubi & Laskri, Gajjar et al. describe cell decomposition
as a method for managing the environment of the vacuum cleaner as a grid.
The grid is generated by subdividing the two-dimensional plane, in other
words the current room, into smaller square cells whose side length equals
the robot's diameter. The benefit of this strategy is that it
facilitates the robot’s computational work because the direction of the robot
can be decided by analysing the neighbouring cells. This is an advantage as
the robot knows where it is and which areas have been covered, instead of
relying on letting the robot go on in random predefined patterns and hoping
for complete coverage. Each cell stores an integer representing how much dust
it contains and a flag indicating whether it is an obstacle (see Table 2).
The predefined pattern algorithms only ever move in four directions (up,
right, down and left) and do not analyse the neighbouring cells. This sets
them apart from the dust map algorithms, which can move directly to any of
the robot's eight neighbouring cells.
Table 2: This table is an example of cell decomposition. It represents a
randomly generated dust map for a 10 x 10 obstacle-free room. A cell with the
value "obstacle" would represent an obstacle, had there been any.
10 10 5 4 10 9 17 13 9 7
15 4 2 2 6 8 4 9 7 9
9 2 2 3 7 9 3 9 8 10
7 2 4 1 3 1 2 6 6 8
12 2 3 8 8 3 9 6 2 14
12 2 6 2 8 2 3 5 1 5
12 8 7 7 5 5 5 6 9 8
5 5 7 3 5 4 3 5 1 11
8 6 1 9 6 8 9 1 4 14
12 11 7 10 12 13 7 11 17 15
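The cell decomposition and dust map described above can be sketched as a small data structure. The following Python snippet is an illustrative sketch only, not the thesis' Unity3d implementation; the class name, method names and the obstacle sentinel are our own assumptions.

```python
import random

class DustMap:
    """Grid of square cells; each cell holds a dust level or marks an obstacle."""
    OBSTACLE = -1  # sentinel value for obstacle cells

    def __init__(self, width, height, max_dust=17):
        self.width, self.height = width, height
        # Random dust levels, as in the example dust map of Table 2.
        self.cells = [[random.randint(1, max_dust) for _ in range(width)]
                      for _ in range(height)]

    def is_obstacle(self, x, y):
        return self.cells[y][x] == self.OBSTACLE

    def dust(self, x, y):
        return 0 if self.is_obstacle(x, y) else self.cells[y][x]

    def clean(self, x, y):
        """One pass over a cell removes its dust entirely in this simple model."""
        if not self.is_obstacle(x, y):
            self.cells[y][x] = 0

    def is_clean(self):
        return all(c in (0, self.OBSTACLE) for row in self.cells for c in row)

dust_map = DustMap(10, 10)  # a 10 x 10 obstacle-free room, as in Table 2
```

A sweep then amounts to calling `clean` on every traversed cell; `is_clean` corresponds to the room being completely free of dust.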
2.2 Path planning
One of the key problems for autonomous robots is finding a path that sat-
isfies a goal specification. The most basic constraint is simply moving from
one starting point to a finishing point without hitting any obstacles. Path
planning becomes more difficult as the goal specification and the constraints
grow in complexity, meaning more is expected from the robot. In the example
of the vacuum cleaner, it needs a Path Planning Coverage Region algorithm
(see section 2.4) to keep track of the cleanliness as it goes along. It is not only
about finding a path between two points anymore as the goal specification is
not satisfied until all of the dust has been vacuumed; hence, finding a general
algorithm proves more difficult. Parameters such as path length, travel time
and number of turns are evaluated when optimising the path planner (Moll
et al. 2015, Gajjar et al. 2017).
2.3 Template patterns
Several template patterns, some of which were mentioned in section 1, have
been suggested for path planning (Gajjar et al. 2017, Hasan et al. 2014):
• Random walk - The robot moves in a straight line until an obstacle is
hit. It then rotates by a random angle and moves forward again. This can
be repeated endlessly until the room is cleaned, or be exchanged for
another algorithm.
• Wall-To-Wall - Moves along the walls of the room.
• Zig-Zag - Moves in a straight line, then one cell to the side and lastly
it goes back in parallel with the previous direction. This is said to be
the fastest motion for one full sweep of an empty room (Hasan et al.
2014).
• Spiral - Moves outward or inward in a spiral motion. The robot continues
spiralling until an obstacle is met, after which it stops. As we are
working with a rectangular environment, we implement square spirals rather
than circular ones, meaning the robot moves in straight lines and turns
90 degrees; otherwise we could not guarantee a full sweep of each cell.
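As an illustration, the random walk pattern can be sketched on the discretised grid. This is our own simplified sketch: on a grid, the "random degree" of rotation is restricted to the four axis-aligned directions, and the function name and parameters are hypothetical.

```python
import random

def random_walk(grid_w, grid_h, start, steps):
    """Move straight until a wall is hit, then pick a new random direction.

    Returns the list of visited cells.
    """
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = start
    dx, dy = random.choice(directions)
    visited = [(x, y)]
    for _ in range(steps):
        nx, ny = x + dx, y + dy
        if 0 <= nx < grid_w and 0 <= ny < grid_h:
            x, y = nx, ny
            visited.append((x, y))
        else:  # wall hit: rotate to a random new direction
            dx, dy = random.choice(directions)
    return visited

path = random_walk(10, 10, (0, 0), steps=200)
```

The lack of any coverage guarantee is visible here: nothing forces `visited` to ever contain every cell, which is why the pattern is usually combined with, or replaced by, another algorithm.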
2.4 PPCR algorithms
A path plan where the entire accessible part of the environment is sweeped
by the vacuum cleaner robot is known as the ”Path Planning Coverage Re-
gion” algorithm (Yakoubi & Laskri 2016). When the environment has been
mapped, the robot can combine its knowledge with a particular algorithm
in order to ensure a PPCR. Naturally, problems arise if the area were to
be changed dynamically, as the robot never could be sure of whether its
current mapping is up-to-date or not. This however, is not an issue in our
implementation as we assume a static environment.
A wide variety of the robots available today use only predefined patterns
(Hasan et al. 2014). One problem with this is that full coverage is only
guaranteed in an obstacle-free room. Another problem is that the robot will
likely move over already cleaned cells, as it does not employ the dust map
or analyse the neighbouring cells. Over time, this causes an abundance of
redundant paths (Lee & Banerjee 2015). An efficient PPCR trajectory minimises
the number of turns and of revisited clean cells (Yakoubi
& Laskri 2016).
Previous research tends to optimise the robot's cleaning time by reducing
redundant paths (Yakoubi & Laskri 2016, Gajjar et al. 2017). However, it
assumes that once a cell has been swept it is considered clean. The
optimisation is therefore only about finding the fastest PPCR, i.e. the one
with the shortest distance travelled. While this has its benefits for speeding
up the full sweep, it assumes that the dust is evenly distributed on the floor
and that one sweep is enough for the robot to collect the dust. In the real
world, dust would likely be amassed in the corners and around obstacles. The
suction would also gradually degrade as more dust is collected, and the
robot's dust container would eventually have to be emptied. A consequence of
this could be that the robot does not clean the room completely after one
sweep. This is the benefit of the dust map; the robot would know specifi-
cally where the remaining dust is and it can adapt its movement pattern in
accordance with that information (Lee & Banerjee 2015).
2.5 Related work
Once the environment has been divided into cells it is time to find an efficient
path for traversing and cleaning them. In other words the algorithm of choice
has to build the PPCR.
2.5.1 Algorithm 1: The pattern method
An example of obtaining a PPCR is presented by Kang et al. They use
predefined template patterns such as zigzag and/or spiral paths. These pat-
terns are favourable because they are not very complex and creating a path
out of them does not put such a heavy computational burden on the robot.
The environment consists of areas composed of numerous cells/squares. The
environment gets divided into areas through a scan line which scans the area
column by column from left to right. If the scan line detects any changes
from the last scan line, new areas are created (see Figure 1). The areas are
kept in three lists: OPEN, CLOSED and PRIORITY. The OPEN list stores
areas yet to be covered; the PRIORITY list does likewise, but its elements
are prioritised, unlike those of the OPEN list. That priority is assigned to
areas which are isolated, i.e. have no uncleaned neighbours left. The CLOSED
list contains the areas for which a path has already been calculated, which is
where all areas eventually will have been put once the whole grid has been
cleaned. In each step of the algorithm the area that is the closest to the robot
and has the highest priority is chosen, for which the optimal path then will
be calculated by combining pieces of the predefined template paths.
Unlike the dust map, where each cell is treated separately, this method
treats groups of cells as areas rather than as individual cells. The areas
are determined by the obstacles present in the room, and the algorithm cleans
the areas one by one. We disregarded this algorithm because it deviates too
much from our implementation of a dust map algorithm.
Figure 1: Decomposing the room into rectangular areas when encountering
obstacles. Source: (Kang et al. 2007)
2.5.2 Algorithm 3: The genetic method
A more complex approach is proposed by Yakoubi & Laskri. They apply
a genetic algorithm to generate the PPCR. They treat the cells of the grid
as genes and form chromosomes by combining a number of genes. These
chromosomes will then represent the path that the robot could possibly take.
Starting from each of the eight possible directions from the robot’s current
position, different chromosomes will be generated by randomly choosing the
next neighbouring gene. This initial calculation will have generated eight
different paths, that are either straight lines or zigzagging motions. Each
and every one of these paths will be evaluated by the amount of coverage;
how many genes have already been cleaned and how many obstacles are in the
way? The ideal choice is the path in which no cell has been cleaned and which
has no obstacle in the way. Their idea for improving this genetic algorithm
in order to obtain more efficient paths is either a crossover point
between two paths or mutating the chromosome by randomising a completely
new neighbour. The crossover method looks for a crossover point that
differs by at most one in the two compared chromosomes and then swaps that
gene, along with its tail, between the two. Which of the techniques is
chosen is determined by probability. If one of these methods is employed,
the new child paths are re-evaluated and, if an improvement is made, the best
child path determines the path that the robot shall take.
The problem with this implementation is that full coverage can be obtained
very late as probability will control the path of the robot, meaning it can
leave cells uncleaned for a long time if their paths are not chosen. It does
not use a map in a way that is suitable for this study.
2.5.3 TSP-based: Hess’ algorithm
Hess et al. use a tessellated environment, which essentially is an environment
decomposed into grid cells, similar to ours. Their focus is two-fold: firstly
they introduce a model which estimates the dust distribution over time and
secondly they discuss how this estimation of a dust distribution map can be
used to generate efficient cleaning policies. For the generation of paths, they
propose two policies. The first one is meant to guarantee that the dirtiest cell
in the whole room will be cleaner than a threshold defined by the user. The
second policy is likewise supposed to minimise the dirt level of the dirtiest
cell, but within a cleaning cycle, whose maximum duration is defined by
the user. They apply weights to their cells, which increase or decrease the
urgency of cleaning them. These weights are also user-defined, for example
after having cooked dinner a user might want the robot to focus on cleaning
the kitchen. To generate the paths they model each cell as a node in a graph
with according edges to adjacent nodes (cells). The problem is modelled as a
travelling salesman problem (TSP). For the first policy, they apply a state-of-
the-art TSP-solver to generate the shortest collision-free path that traverses
all cells which exceed the user-defined dust threshold. For the second policy,
they apply the TSP-solver iteratively. Each iteration represents a cleaning
cycle and the cleaning is continued until the robot has reached below the
user-defined maximum dirt level.
The efficiency of the implementation varies. When applying the TSP-solver
iteratively, great efficiency is achieved. However, with greater efficiency
comes the drawback of longer computation times. The computation times
with their different policies could range from one second to several minutes.
We did not think their algorithm could guarantee complete cleanliness in real
time.
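Hess et al.'s TSP formulation can be illustrated with a far simpler stand-in: a nearest-neighbour tour over the cells exceeding the dust threshold. This sketch is our own approximation, not their state-of-the-art solver, and it carries no optimality guarantee.

```python
def nearest_neighbour_tour(dirty_cells, start):
    """Greedy stand-in for a TSP solver: repeatedly visit the closest
    remaining dirty cell (Manhattan distance on the grid)."""
    remaining = set(dirty_cells)
    pos, tour = start, []
    while remaining:
        nxt = min(remaining,
                  key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1]))
        remaining.remove(nxt)
        tour.append(nxt)
        pos = nxt
    return tour

# Cells whose dust level exceeds a user-defined threshold:
tour = nearest_neighbour_tour([(5, 5), (1, 0), (0, 2), (5, 4)], start=(0, 0))
# -> [(1, 0), (0, 2), (5, 4), (5, 5)]
```

A real TSP solver would typically shorten such a tour further; the nearest-neighbour rule merely shows the "visit only the dirty cells" idea behind their policies.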
2.5.4 Algorithm 2: The greedy method
Another way of generating a PPCR is by working with the 8 cells surround-
ing the robot (Gajjar et al. 2017). At the beginning, each cell will have
another integer value, apart from the dust value. This value represents how
many obstacles surround the cell. The walls are also considered an obstacle.
After this the path can be decided by path cost evaluation with the formula
CoverageCost = S − U where S is the number of surrounding cells and U
the number of uncovered cells. The cell with the highest coverage cost lower
than 8 is chosen and the neighbouring cells are incremented by 1, since they
now have one less uncovered neighbour. In Gajjar et al.'s version, cells
lying in the direction the robot is already facing are not prioritised when
two or more cells have the same coverage cost. If an already visited cell
is chosen again, the robot will backtrack until the next unvisited cell is found.
If it keeps backtracking and does not find a new empty cell, no complete
path is available. Otherwise the steps are simply repeated until every cell is
visited.
This algorithm guarantees full coverage and makes use of the dust map with
its coverage cost in the same way the robot uses it to track the dust level in
its neighbouring cells. This is the algorithm that we will implement in our
test system.
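The coverage-cost selection described above can be sketched as follows. This is our own illustrative reading of the method (the tie-break on turning angle and the backtracking step are left out for brevity); the function names are hypothetical.

```python
def coverage_cost(cell, covered, obstacles, w, h):
    """CoverageCost = S - U, where S counts the surrounding cells (walls and
    obstacles included) and U counts the surrounding uncovered cells."""
    x, y = cell
    s = u = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            s += 1  # walls (out-of-bounds neighbours) still count towards S
            if (0 <= nx < w and 0 <= ny < h
                    and (nx, ny) not in obstacles and (nx, ny) not in covered):
                u += 1
    return s - u

def pick_next(pos, covered, obstacles, w, h):
    """Choose the uncovered neighbour with the highest coverage cost below 8."""
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            n = (pos[0] + dx, pos[1] + dy)
            if n == pos or not (0 <= n[0] < w and 0 <= n[1] < h):
                continue
            if n in obstacles or n in covered:
                continue
            cost = coverage_cost(n, covered, obstacles, w, h)
            if cost < 8:
                candidates.append((cost, n))
    return max(candidates)[1] if candidates else None  # None -> backtrack
```

Because walls count towards S but never towards U, cells along walls and in corners get higher coverage costs and are visited earlier, which matches the intent described above.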
3 Methods
The problem we have decided to focus on is how to reduce the cleaning time.
The key is the state after the first sweep; in our tests, the initial zigzag
sweep is considered the first sweep. The reason is that the first sweep will
clean most of the debris in the room, leaving the remaining dirty cells
scattered. In a real-world scenario, while the robot conducts the
first sweep it will be able to scan every cell and thereby generate a remaining
dust map that could be used in order to create efficient algorithms to proceed
with.
The purpose of the testing suite we implemented was to compare different
strategies across different room dimensions and dust maps. Three
different cleaning strategies were compared to each other. More details
regarding the testing are discussed in section 3.4.
3.1 Implemented path planning algorithms
In this section we discuss the path planning algorithms that we have imple-
mented.
3.1.1 Zigzag
Firstly, we made sure that the zigzag moved in the direction of the farthest
side of the room, as this requires fewer turns than meandering along the
shorter side. Turning is more costly than moving straight, so the aim should
be to minimise the number of turns. We conducted tests in which each
90-degree turn took one time unit; these confirmed that minimising the number
of turns is preferable in order to reduce time.
The robot kept doing the zigzag pattern until full coverage was achieved. In
Figure 2 you can see a demonstration of this pattern.
Figure 2: An example of what a zigzag pattern could look like.
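The zigzag (boustrophedon) pattern over an obstacle-free grid can be generated as below. This is a sketch of the pattern only, not the thesis' Unity3d code; here the long straight legs run along the y-axis, and the function name is our own.

```python
def zigzag_path(width, height):
    """Boustrophedon path: a straight leg down a column, one cell sideways,
    then a straight leg back up, until every cell has been visited once."""
    path = []
    for x in range(width):
        ys = range(height) if x % 2 == 0 else range(height - 1, -1, -1)
        path.extend((x, y) for y in ys)
    return path

path = zigzag_path(3, 4)
# Visits all 12 cells exactly once, turning only at the ends of each column.
```

For a rectangular room the long legs should follow the longer side, since each pair of columns costs two turns regardless of the leg length.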
3.1.2 Spiral
Two variants of this pattern were implemented. The spiral could either be
coiled inward or outward. These patterns are essentially similar, although the
inward one would preferably start in a corner of the room, while the outward
one would start in the middle of the room to achieve maximum coverage
before the robot has to stop. We constructed the algorithm so that it stops
completely when encountering an obstacle or a wall. The spiral will take
an integer as a parameter which specifies the length of the last and longest
arm of the spiral, i.e. it will continue until that spiral arm length has been
achieved. In Figure 3 you can see a demonstration of this pattern.
Figure 3: An example of what a spiral pattern could look like.
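An outward square spiral on the grid can be sketched as follows. This is our own illustrative code; the `max_arm` parameter mirrors the integer argument described above that specifies the length of the last and longest spiral arm.

```python
def outward_spiral(start, max_arm):
    """Square spiral moving in straight arms with 90-degree turns; the arm
    length grows by one after every second turn, and the walk stops as soon
    as an arm of max_arm cells has been completed."""
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right, up, left, down
    x, y = start
    path = [(x, y)]
    arm, turn = 1, 0
    while True:
        dx, dy = directions[turn % 4]
        for _ in range(arm):
            x, y = x + dx, y + dy
            path.append((x, y))
        if arm == max_arm:
            return path
        turn += 1
        if turn % 2 == 0:  # arm length grows every second turn
            arm += 1

path = outward_spiral((0, 0), 3)
```

The inward variant is the same walk traversed in reverse, starting from a corner; in the simulation the walk would additionally stop early when an obstacle or wall is met.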
3.1.3 Wall-to-wall
Dust tends to accumulate along the walls and in the corners, which other
patterns might find hard to reach in short time. Therefore this pattern
is efficient in combination with other template patterns for achieving a fully
cleaned room. It does so without revisiting as many clean cells as the zigzag
and spiral would (Hasan et al. 2014). In Figure 4 you
can see a demonstration of this pattern.
Figure 4: An example of what a wall-to-wall pattern could look like.
3.1.4 Greedy PPCR algorithm
Algorithm 2 is the last algorithm described in section 2.5. After a cell
had been visited and cleaned, the adjacent cells would be updated. This
update consists of increasing each such cell's surrounding-obstacle count by 1.
The robot would then choose the cell with the highest surrounding obstacle
count below 8 to be visited next. This process was repeated until all the
cells had been visited. If two cells had the same value, the one requiring
the least turning angle was chosen; this was our improvement over the
Gajjar et al. (2017) version. If no adjacent cell had a value below 8, the
robot would backtrack until it found an empty cell.
3.1.5 Greedy heuristic
In order to answer hypothesis H2 we developed a greedy heuristic that moved
to the nearest uncleaned cell as a minor goal specification. It scans a
square area extending radius + 1 cells from the robot's current position in
order to find the nearest uncleaned cell, with the radius initially set
to 0. If no cell is found within that perimeter, the radius is incremented
by 1 until a cell is found or not a single dusty cell remains. If several
dusty cells are found within the square and one of them is either in front
of or behind the robot, it is prioritised: the robot will not have to turn,
but can simply move forward or backward, which saves energy and time.
radius = 0;
while radius within room do
Find closest dirty cell radius cells from the robot;
if No dirty cells found then
radius++;
else
Move to cell;
Clean cell;
radius = 0;
end
end

Algorithm 1: The greedy heuristic's pseudo code
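The pseudo code above can be sketched as an executable model. The dust map representation, the tie-breaking and the helper names are our own assumptions, and the front/behind prioritisation is omitted for brevity:

```python
def nearest_dirty(dust, pos):
    """Expanding square search: return the closest dirty cell, or None.

    dust is a dict {(x, y): dust_level}; pos is the robot's (x, y).
    Ties within a ring are broken arbitrarily in this sketch.
    """
    max_radius = max(abs(x - pos[0]) + abs(y - pos[1]) for x, y in dust) if dust else 0
    radius = 0
    while radius <= max_radius:          # "radius within room"
        ring = [
            (x, y) for (x, y), level in dust.items()
            if level > 0 and max(abs(x - pos[0]), abs(y - pos[1])) == radius]
        if ring:
            return ring[0]
        radius += 1                      # nothing found: widen the search
    return None                          # room is fully clean

def run_greedy(dust, pos, rate=5):
    """Repeatedly move to the nearest dirty cell and clean it (rate dust/visit)."""
    path = []
    while (target := nearest_dirty(dust, pos)) is not None:
        pos = target
        dust[pos] = max(0, dust[pos] - rate)
        path.append(pos)
    return path
```

Note that a cell with more dust than the cleaning rate is found again at radius 0 and revisited until clean, mirroring the "radius = 0" reset in the pseudo code.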
3.2 Cleaning strategies
The strategies listed below are the different cleaning strategies used by the
robot. They were compared to each other and that comparison constitutes
the results of the simulations.
3.2.1 Strategy 1: pattern-based strategy
Strategy 1 consisted of performing a zigzag pattern followed by a wall-to-wall
and finished off with an inward spiral pattern. This process was repeated
until the room had been cleaned entirely (see Algorithm 2).
We opted to perform a spiral after the wall-to-wall, instead of simply
repeating the zigzag pattern, so that different cells of the room are
reached at different speeds: if only a few dirty cells remain somewhere,
they may be reached faster by the spiral.
while room is dirty do
if room is dirty then
do zigzag;
end
if room is dirty then
do wall-to-wall;
end
if room is dirty then
do inward spiral;
end
end

Algorithm 2: Strategy 1's pseudo code
3.2.2 Strategy 2: Proposed greedy heuristic after an initial sweep
Strategy 2 only performed an initial zigzag pattern as a first sweep followed
by the greedy heuristic until no more dirty cells were found (see Algorithm 3).
if room is dirty then
do zigzag;
while room is dirty do
greedy heuristic;
end
end

Algorithm 3: Strategy 2's pseudo code
3.2.3 Strategy 3: Greedy strategy
The third and last strategy consisted of the heuristic only. The strategy
relies on the dust map being known even before the first sweep, which will
be discussed later (see Algorithm 4).
while room is dirty do
greedy heuristic;
end

Algorithm 4: Strategy 3's pseudo code
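The three strategies can be summarised in one driver. This is a sketch; the room interface (is_dirty, zigzag, wall_to_wall, inward_spiral, greedy_step) is our own assumption standing in for the patterns and heuristic of sections 3.1-3.2:

```python
def run_strategy(strategy, room):
    """Dispatch the three cleaning strategies of section 3.2 (a sketch).

    room must offer: is_dirty(), zigzag(), wall_to_wall(), inward_spiral(),
    greedy_step() -- these names are our own, not the thesis's.
    """
    if strategy == 1:                      # repeat zigzag -> wall-to-wall -> spiral
        while room.is_dirty():
            for sweep in (room.zigzag, room.wall_to_wall, room.inward_spiral):
                if room.is_dirty():
                    sweep()
    elif strategy == 2:                    # one zigzag sweep, then the heuristic
        if room.is_dirty():
            room.zigzag()
            while room.is_dirty():
                room.greedy_step()
    else:                                  # strategy 3: heuristic only (known map)
        while room.is_dirty():
            room.greedy_step()
```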
3.2.4 Zigzag’s direction
The zigzag pattern was also evaluated for how its efficiency differed
depending on which wall it performed its S-patterns along, and how making
the wrong choice could increase the overall time needed to complete the
pattern. In this test the following rooms were used: 10 x 20, 10 x 50,
10 x 80 and 10 x 100.
3.3 Building the test system
Our testing suite was built in Unity3d with C# (Unity Technologies 2018).
Unity3d is a platform that makes it easy to focus quickly on what is
important, which in our case is the implementation of the different path
algorithms; programming an intricate visual rendering and coordinate system
from scratch is not within the scope of this study.
The room itself was generated from two integer values, width and length.
Each cell object is represented by a 1x1 square (the same length as the
robot's diameter). A cell contains several data fields, such as its dust
value and the number of adjacent obstacles (used by the PPCR algorithm
described in section 2.5.4). There is also a field for whether the cell is
an obstacle, which is generated at run time to fill out the grid.
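For illustration, the cell record described above could look like the following sketch; the field names are our own, not those of the C# implementation:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """One 1x1 grid cell of the dust map (field names are our own sketch)."""
    x: int
    y: int
    dust: int = 0                 # remaining dust value
    adjacent_obstacles: int = 0   # surrounding-obstacle count for the PPCR algorithm
    is_obstacle: bool = False     # generated at run time when filling the grid

    @property
    def clean(self) -> bool:
        """A cell counts as clean when all its dust is gone."""
        return self.dust == 0 and not self.is_obstacle
```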
As mentioned in section 2.4, the dust will most likely not form a uniform
layer. The main dust map generation is therefore performed by randomising an
integer between 1 and 10 for every cell that is not near an obstacle. An
additional random number is added to the initial dust values of the cells
immediately adjacent to an obstacle or a wall, which means the robot will
most likely have to spend more time on these areas (see Table 2 for an
example). The robot cleans at a rate of five dust per cell visit. Cells
light up green once they have been completely cleaned.
In the second room scenario, obstacles were placed inside the room. An
obstacle is simply one square cell in the grid that is unavailable to the
robot. The aim of this test was to analyse how the efficiency of the robot's
initial sweep was affected compared to an empty room of the same dimensions.
3.4 Benchmarks
The testing for the obstacle free rooms was conducted in the dimensions
(width x length) 10x10, 10x20, 10x50, 10x80 and 10x100 with different dust
map generations. Each strategy (see section 3.2) was iterated 100 times in
each room, with a newly generated dust map for each iteration. The starting
position of the robot was determined by the last cell it cleaned during the
previous test iteration; in other words, the randomly generated dust map
decided where the robot would stop, depending on which strategy was used.
For the obstacle room, obstacles with the dimension 1x1 were placed at
random positions throughout the room. They were placed successively so that
we could see what effect they would have on the time, distance and number of
turns needed to clean the room. If the program read a cell of the dust map
as “Obstacle” instead of an integer, an obstacle was placed at those
coordinates.
The different strategies were compared by their respective times to
complete, lengths travelled and number of turns.
There were three different types of dust map generation, in which all the
cleaning strategies were compared to each other. The reason for this was to
see whether one strategy outperformed the others no matter the dust
distribution. The dust maps were the following:
1. More dust gathered at the walls of the room.
cell(x) =
random(1,10)+random(1,10), if x is adjacent to a wall
random(1,10), otherwise
2. More dust gathered at the centre of the room (the inverse of the map
above).
cell(x) =
random(1,10)+random(1,10), if x is not adjacent to a wall
random(1,10), otherwise
3. Uniform distribution across all cells with dust value 1. (Even though
this scenario is unlikely, the purpose is theoretical.)
cell(x) = 1
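The three generators could be sketched as follows, modelling random(1,10) with Python's random.randint; the function and helper names are our own assumptions:

```python
import random

def is_wall_adjacent(x, y, width, length):
    """True for cells on the border of a width x length grid."""
    return x in (0, width - 1) or y in (0, length - 1)

def dust_map(width, length, variant):
    """Generate the three dust maps listed above (a sketch).

    variant 1: extra dust along the walls
    variant 2: extra dust away from the walls (the inverse)
    variant 3: uniform dust value of 1
    """
    dust = {}
    for x in range(width):
        for y in range(length):
            if variant == 3:
                dust[(x, y)] = 1
            else:
                # variant 1 boosts wall-adjacent cells, variant 2 the others
                extra = is_wall_adjacent(x, y, width, length) == (variant == 1)
                dust[(x, y)] = random.randint(1, 10) + (
                    random.randint(1, 10) if extra else 0)
    return dust
```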
The tests conducted in the room with obstacles only involve the first sweep;
the reason for this is discussed in the discussion chapter (see Section 5).
Figure 5: Visual representation of a 10x10 obstacle free room. A green cell
is free of dust. Grey ones still have dust remaining. The black number
represents the dust level in the cell. The blue circle represents the robot.
(See 3.3 for more information.)
4 Results
In the graphs presented in this chapter we compare the time needed to clean
the room, the number of turns and the distance travelled by the robot. The
metrics are the following:
• Time: Each operation took a certain number of time units to execute in
the test system. For example, we decided that one 90 degree turn takes 1
time unit, the same time as moving forward one cell in a straight line.
This was our decision and is not necessarily a property of the robotic
vacuum cleaners on the market. Instead of clocking the real simulated time
for the computer to fully clean the room, we measured the number of time
units the operations required, which gave a more accurate comparison. A
lower value of the Time variable means a faster algorithm for cleaning the
entire room.
• Turns: One turn is registered when the robot has performed a 90 degree
rotation from its current position. A reversing operation is not equal to
two turning operations followed by going forward; instead the robot simply
reverses in a straight line, so no turning operation is registered. For
other rotations, such as 45 degrees, the following formula was used when
adding to the Turns variable: angle/90.
• Distance: The distance logged as the robot moves between cells was
obtained through Unity3d's Vector3.Distance method, which calculated the
distance between the robot's last position and its new one. Because the
cell coordinates are calculated within their parent object (the floor
object in Unity3d), a distance of 1 does not correspond to moving across
one cell; this is due to the way Unity3d handles its objects in the
project. Distance works in a similar way to Time: we are talking about
distance units, not metres or cells. It is a relative measurement, and a
lower value of the Distance variable means less distance travelled by the
robot.
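The bookkeeping for the three metrics can be sketched as follows. This is a simplified model in which Unity3d's Vector3.Distance is approximated with plain Euclidean distance, and all names are our own:

```python
import math

class Metrics:
    """Accumulate Time, Turns and Distance as defined above (a sketch)."""

    def __init__(self):
        self.time = 0.0      # time units: one 90-degree turn = one cell forward = 1
        self.turns = 0.0     # 90-degree-turn equivalents
        self.distance = 0.0  # relative distance units

    def turn(self, angle_degrees):
        """A rotation adds angle/90 turns; reversing adds nothing (angle 0)."""
        self.turns += angle_degrees / 90
        self.time += angle_degrees / 90   # one 90-degree turn costs 1 time unit

    def move(self, old, new):
        """Straight move between two positions (Euclidean, like Vector3.Distance)."""
        d = math.dist(old, new)
        self.distance += d
        self.time += d                    # moving one unit costs 1 time unit
```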
4.1 Obstacle free room with more dust along the walls
This section presents the results of having a dust map generator which allows
for a higher dust level in every cell adjacent to a wall. We let the simulation
run 100 times for each dimension of the room and each strategy, with a
new randomly generated dust map for every iteration. The data results are
presented as an average value with a standard deviation in the graphs and
tables below.
4.1.1 Time
Figure 6 represents the time it took for the robot to clean the room for the
different strategies with the data from Table 3.
Figure 6: This graph shows how the three different strategies’ time variable
develops depending on the length of the room (x-axis), with a room width of
10. The blue dots represent Strategy 1, the orange ones Strategy 2 and the
purple crosses Strategy 3 (see 3.2).
Table 3: Time results by using different strategies (see section 3.2) in different
dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 3.66 9.02 31.35 64.00 102.79
S.D 0.22 1.59 7.39 16.55 29.14
Strategy 2 Avg 2.72 5.02 12.12 19.20 26.87
S.D 0.12 0.18 0.39 0.51 0.63
Strategy 3 Avg 2.65 5.0 12.21 19.41 24.10
S.D 0.13 0.21 0.34 0.56 0.45
4.1.2 Turns
Figure 7 shows the number of turns the robot needed to clean the room for
the different strategies, with the data from Table 4.
Figure 7: This graph shows how the three different strategies’ turn variable
develops depending on the length of the room (x-axis), with a room width of
10. The blue dots represent Strategy 1, the orange ones Strategy 2 and the
purple crosses Strategy 3 (see 3.2).
Table 4: Turn results by using different strategies (see section 3.2) in different
dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 56.81 83.74 156.86 228.17 311.2
S.D 2.80 13.86 38.35 62.62 92.68
Strategy 2 Avg 52.09 76.10 158.32 240.57 296.06
S.D 2.80 13.86 38.35 62.62 92.68
Strategy 3 Avg 48.53 85.48 181.60 277.66 329.5
S.D 4.55 8.35 18.38 22.73 22.60
4.1.3 Distance
Figure 8 shows the distance the robot needed to travel to clean the room for
the different strategies, with the data from Table 5.
Figure 8: This graph shows how the three different strategies' distance
variable develops depending on the length of the room (x-axis), with a room
width of 10. The blue dots represent Strategy 1, the orange ones Strategy 2
and the purple crosses Strategy 3 (see 3.2).
Table 5: Distance results by using different strategies (see section 3.2) in
different dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 4.63 10.00 32.33 64.98 103.76
S.D 0.23 1.59 7.40 16.55 29.14
Strategy 2 Avg 3.43 5.66 12.79 19.81 24.51
S.D 0.36 0.41 0.52 0.58 0.72
Strategy 3 Avg 3.25 5.66 12.93 20.02 24.75
S.D 0.37 0.43 0.46 0.59 0.55
4.1.4 Discussion
Drastic improvements can be seen with our simple adjustment to the cleaning
strategy of using the heuristic after the initial zigzag sweep (Strategy 1
compared to Strategy 2). This also verifies our second hypothesis, which
stated that “moving towards the nearest uncleaned cell after the initial
full sweep will reduce the required cleaning time”. Strategy 2 reduced the
time needed to clean the entire room by roughly 26% in the smallest room and
74% in the 10x100 room (see Figure 6) compared to Strategy 1. Almost the
same numbers apply to the distance variable between these two strategies.
Both Strategy 2 and Strategy 3 resulted in close to identical improvements
when compared against Strategy 1; the reason we nevertheless suggest using
Strategy 2 is discussed in section 5. Note as well that the improvements do
not stem from a smaller number of turns, as the turn counts remained almost
the same no matter the strategy. The only significant difference was the
standard deviation of the first strategy, which is highly dependent on the
starting position of the robot: a good starting position gives better
results and vice versa.
4.2 Obstacle free room with more dust at the centre
of the room
This section presents the results of having a dust map generator which allows
for a higher dust level in every cell not adjacent to a wall. We let the
simulation run 100 times for each dimension of the room and each strategy,
with a new randomly generated dust map for every iteration. The data
results are presented as an average value with a standard deviation in the
graphs and tables below.
4.2.1 Time
Figure 9 represents the time it took for the robot to clean the room for the
different strategies with the data from Table 6.
Figure 9: This graph shows how the three different strategies’ time variable
develops depending on the length of the room (x-axis), with a room width of
10. The blue dots represent Strategy 1, the orange ones Strategy 2 and the
purple crosses Strategy 3 (see 3.2).
Table 6: Time results by using different strategies (see section 3.2) in different
dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 5.58 19.10 77.52 165.03 243.23
S.D 0.48 2.41 9.25 21.30 31.06
Strategy 2 Avg 3.44 6.70 16.80 26.98 33.57
S.D 0.13 0.20 0.39 0.50 0.55
Strategy 3 Avg 3.32 6.70 16.77 26.94 33.72
S.D 0.13 0.19 0.39 0.50 0.69
4.2.2 Turns
Figure 10 shows the number of turns the robot needed to clean the room for
the different strategies, with the data from Table 7.
Figure 10: This graph shows how the three different strategies’ turn variable
develops depending on the length of the room (x-axis), with a room width of
10. The blue dots represent Strategy 1, the orange ones Strategy 2 and the
purple crosses Strategy 3 (see 3.2).
Table 7: Turn results by using different strategies (see section 3.2) in different
dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 86.36 179.70 398.35 601.2 744.48
S.D 6.28 23.14 47.93 78.02 95.12
Strategy 2 Avg 68.09 114.77 258.67 405.44 499.56
S.D 4.49 7.55 12.26 17.13 16.55
Strategy 3 Avg 58.59 113.36 264.31 412.62 511.48
S.D 5.97 8.27 13.06 17.11 24.54
4.2.3 Distance
Figure 11 shows the distance the robot needed to travel to clean the room
for the different strategies, with the data from Table 8.
Figure 11: This graph shows how the three different strategies’ distance
variable develops depending on the length of the room (x-axis), with a room
width of 10. The blue dots represent Strategy 1, the orange ones Strategy 2
and the purple crosses Strategy 3 (see 3.2).
Table 8: Distance results by using different strategies (see section 3.2) in
different dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 6.55 20.07 78.50 166.01 244.21
S.D 0.48 2.41 9.25 21.30 31.06
Strategy 2 Avg 3.99 7.23 17.31 27.42 34.05
S.D 0.38 0.45 0.49 0.58 0.63
Strategy 3 Avg 3.84 7.30 17.22 27.42 34.20
S.D 0.36 0.44 0.53 0.64 0.78
4.2.4 Discussion
The amount of time needed, the number of turns conducted and the distance
covered were generally higher for this room with the dust in the centre,
compared to the other tests where the dirtiest cells were along the walls.
This is to be expected, since there are more dirty cells to be cleaned
repeatedly in the middle of the room than along the walls. As for the
cleaning time, Strategy 2 managed to reduce the time compared to Strategy 1
by 38% in the smallest room and 87% in the largest room. Compared to the
first dust map (see section 4.1), Strategy 2 performed 12 percentage points
better relative to Strategy 1 in this environment. As for the number of
turns, a 21% improvement was found in the smallest room and 33% in the
largest room. The total distance travelled was reduced by 39% in the
smallest room and 86% in the largest one. Roughly the same improvement as
with Strategy 2 was found when using Strategy 3 compared to Strategy 1 in
every category except the number of turns in the smaller rooms. The reason
why both Strategy 2 and Strategy 3, which employ the implemented heuristic,
perform even better in this scenario is that choosing a non-optimal cell is
not as costly: the robot does not have to traverse the entire room back and
forth, as the outer cells are completely cleaned before the cells in the
middle (see 5.3).
4.3 Obstacle free room with uniform dust distribution
This section presents the results of having a dust map generator where every
cell contained a dust value of one. This meant the robot only had to pass
each cell once in order to clean it. We let the simulation run 100 times
for each dimension of the room and each strategy, with a newly generated
dust map for every iteration. The data results are presented as an average
value with a standard deviation in the graphs and tables below.
4.3.1 Time
Figure 12 represents the time it took for the robot to clean the room for the
different strategies with the data from Table 9.
Figure 12: This graph shows how the three different strategies’ time variable
develops depending on the length of the room (x-axis), with a room width of
10. The blue dots represent Strategy 1, the orange ones Strategy 2 and the
purple crosses Strategy 3 (see 3.2).
Table 9: Time results by using different strategies (see section 3.2) in different
dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 1.28 2.28 5.28 8.28 10.28
S.D 0 0 0 0 0
Strategy 2 Avg 1.28 2.28 5.3 8.3 10.3
S.D 0 0 0 0 0
Strategy 3 Avg 1.22 2.68 5.72 8.78 10.78
S.D 0.03 0.08 0.08 0.09 0.10
4.3.2 Turns
Figure 13 shows the number of turns the robot needed to clean the room for
the different strategies, with the data from Table 10.
Figure 13: This graph shows how the three different strategies’ turn variable
develops depending on the length of the room (x-axis), with a room width of
10. The blue dots represent Strategy 1, the orange ones Strategy 2 and the
purple crosses Strategy 3 (see 3.2).
Table 10: Turn results by using different strategies (see section 3.2) in dif-
ferent dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 20 20 20 20 20
S.D 0 0 0 0 0
Strategy 2 Avg 20 20 20 20 20
S.D 0 0 0 0 0
Strategy 3 Avg 18.77 44.27 44.12 44.27 44.27
S.D 1.03 3.80 4.07 3.8 3.8
4.3.3 Distance
Figure 14 shows the distance the robot needed to travel to clean the room
for the different strategies, with the data from Table 11.
Figure 14: This graph shows how the three different strategies’ distance
variable develops depending on the length of the room (x-axis), with a room
width of 10. The blue dots represent Strategy 1, the orange ones Strategy 2
and the purple crosses Strategy 3 (see 3.2).
Table 11: Distance results by using different strategies (see section 3.2) in
different dimensions of the room.
Length 10 20 50 80 100
Strategy 1 Avg 2.26 3.26 6.26 9.26 11.26
S.D 0 0 0 0 0
Strategy 2 Avg 2.29 3.27 6.29 9.29 11.29
S.D 0 0 0 0 0
Strategy 3 Avg 1.82 3.67 6.71 9.77 11.77
S.D 0.07 0.08 0.08 0.09 0.1
4.3.4 Discussion
Strategy 1 and 2 fared equally well in all three metrics (see section 4),
which is reasonable since they both employ an initial zigzag pattern. In
this case that pattern was sufficient to clean everything, because every
cell had a dust value of only one and the robot cleans all of it in one
sweep; Strategy 2 therefore never has to continue with the greedy heuristic
(see section 3.1.5). Strategy 3, however, differed in the number of turns
conducted: a 6% reduction compared to Strategy 1 in the smallest room and a
121% increase for all of the other room sizes. The distance metric also
differed somewhat: a 19% reduction compared to Strategy 1 in the smallest
room and a 5% increase in the largest room. These differences can be
attributed to the fact that the heuristic just takes the closest tile. If
several tiles are equally close, it will leave some of them behind and have
to revisit them later, which can mean a lot of going back and forth,
resulting in redundant turning and traversing.
4.4 Consequence of zigzag choosing different ways
As the zigzag is essential for the second cleaning strategy (see 3.2), it is
important that it remains as efficient as possible in an empty room. The
correct way is to follow the longest side of the room, not the shortest.
4.4.1 Turns
Figure 15 shows the amount of turns needed for the zigzag to perform a full
sweep depending on which direction it initially follows. The data is shown
in Table 12.
Figure 15: This graph shows how the zigzag's turn variable develops
depending on the length of the room (x-axis) and which direction it conducts
the movement.
Table 12: The table below shows the turn data visualised in Figure 15.
Length 20 50 80 100
Correct Way 18 18 18 18
Wrong Way 40 100 160 200
We can see that the number of turns is always two times the length of the
room if the robot chooses the wrong direction to perform the zigzag, while
the correct way is unaffected by the room's length.
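The observed relationship can be written down directly. This sketch is inferred from Table 12, where the fixed 18 turns for the width-10 rooms suggests 2 × (width − 1) for the correct direction:

```python
def zigzag_turns(width, length, correct_direction):
    """Turn counts matching Table 12 for a width x length room.

    Sweeping along the long side needs a fixed 2 * (width - 1) turns
    (18 for the width-10 rooms tested), independent of length; sweeping
    along the short side ("wrong way") needs 2 * length turns.
    """
    return 2 * (width - 1) if correct_direction else 2 * length
```

For the 10 x 100 room this gives 18 versus 200 turns, matching the last column of Table 12.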
4.4.2 Time
Figure 16 shows the time needed for the zigzag to perform a full sweep
depending on which direction it initially follows. The data is shown in Table
13.
Figure 16: This graph shows how the zigzag's time variable develops
depending on the length of the room (x-axis) and in which direction it
conducts the movement.
Table 13: The table below shows the time data visualised in Figure 16.
Length 20 50 80 100
Correct Way 1.98 4.68 7.38 9.18
Wrong Way 2.4 6 9.6 12
The consequence in time was that the wrong way was 21% slower in the
smallest room and 31% slower in the largest room.
4.4.3 Discussion
The difference in time needed and number of turns conducted when the zigzag
chooses the wrong initial direction was significant. This was, however,
expected, as following the shorter wall results in more turns if a full
sweep is to be completed. Observe that, even though choosing the wrong way
will always be more time costly, the two lines would not diverge the way
they do if a turning operation took less time. We decided that a 90 degree
turn should take one time unit; the gap between the lines would have been
smaller had we chosen differently.
4.5 Room with obstacles
As we mentioned in section 1.2, the main focus of this study was on obstacle
free rooms. However, out of curiosity we wanted to analyse how the results
of an efficient PPCR algorithm, such as Algorithm 2 in section 2.5.4, would
compare against the optimal results of an obstacle free room. The comparison
involved only the first sweep, as the remaining dusty cells would be
optimally cleaned up by the heuristic, as evaluated in the obstacle free
room.
In the room with obstacles (see Figure 17), a black cell represents an
obstacle that is unavailable to the robot, and Table 14 gives the base
values compared against the data in Table 15. The obstacles were randomly
placed upon generating the dust map and remained static throughout the
sweep.
Table 14: This table shows the base case for the zigzag pattern performing a
full sweep of a 10x10 obstacle free room.
Turns 20
Time 1.2
Distance 2.16
Table 15: This table shows the difference in efficiency for the first sweep
between a 10x10 room and a room with a varying number of obstacles scattered
around.
#Obstacles 0 1 2 3 4 5 6 7 8 9 10
Turns 40 43 43 52.5 51.5 50 51.5 52 50 57 55
Time 1.37 1.4 1.39 1.51 1.49 1.46 1.44 1.43 1.41 1.47 1.44
Distance 2.35 2.36 2.35 2.49 2.47 2.83 2.83 2.41 2.8 2.44 2.41
Across the tests conducted there is no strong correlation between the
variable values and the number of obstacles, other than that the values stay
fairly consistent. Note that Algorithm 2 never had to perform any
backtracking, which would have increased the variable values.
Figure 17: Visual representation of an obstacle room. A green cell is free
of dust. Grey ones still have dust remaining. A black cell represents an
obstacle. The blue circle represents the robot. The black numbers represent
the remaining dust in each cell. The red number is the surrounding obstacle
value used by algorithm 2 (see 3.1.4).
5 Discussion
5.1 The obstacle free room
The results of the different cleaning strategies were collected in three
different types of dust maps. The overall results proved that employing the
heuristic reduces the cleaning time. Both Strategy 2 and Strategy 3 (see
3.2) had very similar results when compared to Strategy 1. However, the
tests where only the heuristic (Strategy 3) was employed require a known
dust map. This strategy would not be applicable in a real life scenario,
because the robot has not made an initial sweep of the dust level. An
assumed map could be generated, given that data has been collected over
time. The zigzag pattern, however, scans the cells while covering the entire
room, without needing the dust map itself. Once the zigzag has completed, a
dust map is complete and up to date for the heuristic's disposal. Therefore
we would opt for using Strategy 2: the efficiency of the two is almost
identical, but Strategy 2 has the advantage of giving us the correct dust
map. The results of the heuristic prove our second hypothesis H2.
Depending on the battery capacity, meaning how long the robot can clean
without having to stop and recharge, the heuristic approach would
theoretically reduce the total time needed by a significant margin. This
could of course vary, because both the number of turns and the distance
travelled put strain on the motor, which consumes power. However,
considering the major improvement in distance travelled as well as time
needed, the assumption seems reasonable.
5.2 The room with obstacles
The purpose of the room with obstacles was to illustrate the additional time
needed for the first sweep when the complexity rises, which in turn affects
the energy consumption (see Table 15). Energy consumption will affect the
total cleaning time if the robot has to recharge. We scattered different
numbers of obstacles around the room, which means there were a number of
cells the PPCR algorithm did not have to cross. In theory this should lower
the distance travelled compared to the zigzag in the obstacle free room.
However, this was not the case for PPCR Algorithm 2: it moves in a diagonal
snakelike pattern, so many turns had to be conducted, and as a consequence
the time increases as well. PPCR Algorithm 2 is efficient in its
environment; we are merely highlighting that all variables increase when the
complexity of the room increases. Template patterns are extremely effective
in an obstacle free room, but the room has to remain that way for these
patterns' efficiency to hold. Otherwise more sophisticated algorithms are
needed, like the PPCR Algorithm 2 that we implemented.
The only difference for our own heuristic implementation in the room with
obstacles is that the logic of the code would have to be extended to move
around obstacles; for example, the robot should not be able to move
diagonally past an obstacle. However, in the obstacle free room we have
already proven the benefits of the heuristic with the use of a dust map when
cleaning up the remaining uncleaned cells. We therefore opted not to
implement this logic, due to the short planning horizon of this study. It
would arguably only give the same results as the obstacle free room, meaning
the heuristic would outperform the template patterns when cleaning up the
remaining dirty cells.
5.3 Limitations and possible further improvements of
the heuristic
The greedy heuristic could be optimised further. Its current approach is
greedy in that it does not know whether selecting another cell would have
been a better decision later on. For example, imagine there is one dusty
cell to the right of the robot and one to the left, as well as two more
further to the left. The robot will choose the left side, as this is the
cell checked before the right one, meaning it will continue to the left and
then lastly have to go all the way back for the right cell. The heuristic
cannot predict which of the two would be the better choice for minimising
the total distance travelled. This could possibly be mitigated on average by
selecting the direction at random from multiple equally desirable choices.
Finding an optimised dust map algorithm would improve the efficiency of
using a dust map even further. We have demonstrated the efficiency of the
dust map technology, which will help achieve minimal cleaning time.
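The suggested mitigation, selecting at random among equally desirable choices, is a small change to the cell selection; a sketch with our own names:

```python
import random

def pick_among_closest(candidates):
    """candidates: list of (distance, cell) pairs for the dirty cells found.

    Instead of always taking the first closest cell encountered, break
    ties at the minimum distance uniformly at random.
    """
    best = min(distance for distance, _ in candidates)
    return random.choice([cell for distance, cell in candidates if distance == best])
```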
Our heuristic always selected the closest dirty cell and cleaned it upon
arrival. When the robot travelled diagonally in the grid it would partially
overlap several squares on its way, which raises the question of how the
dust level should be affected in those squares if they were not already
clean. One could probably calculate how much the robot overlapped each
square, remove a percentage of the dust based on how great the overlap was,
and represent the dust with floating-point numbers instead. This could prove
problematic, though, because in the predefined patterns our robot moved one
tile at a time and did not clean anything until it reached its goal, as
opposed to a robot in a real environment, which would clean continuously as
it went along. The dust values of the cells in our simulation just
represented how much dust each cell contained; there was no locality to the
dust other than that it was within a specific cell. It could be presumed
that the dust was evenly distributed within a cell, but not having any
explicit locality means a cell could be completely cleaned simply by
scraping its corners enough times, even though in reality those corners
would already have been cleaned and the dust would still remain in the
middle. To combat this we could rework the dust map representation, for
example by having each pixel represent a cell. The cells would then be
smaller than the robot itself, and the robot would move continuously, not
cell by cell, cleaning every dust pixel it encounters. This would also be a
more accurate representation of a real world vacuum cleaner environment.
5.4 Reflection on the hypotheses
As for the first hypothesis H1, we can conclude that following a wall after
the initial sweep is not necessarily a flawless move. If even one cell along
the wall had been completely cleaned during the initial sweep, traversing it
again with the wall-to-wall pattern would already be redundant; our greedy
heuristic does the same job without the redundant cell crossings, and in the
worst case it simply does the same job as wall-to-wall. To summarise, using
a targeted dust map algorithm, such as our greedy heuristic, is a better
choice than performing a wall-to-wall pattern after an initial sweep.
The second hypothesis H2 was discussed in section 5.1, where it was
concluded that “moving towards the nearest uncleaned cell after the initial
full sweep” did in fact “reduce the required cleaning time compared to
predefined patterns”.
The third hypothesis H3 stated that “employing a targeted dust map algorithm
will outperform template patterns no matter the state of the room”. We can
conclude that this is not the case: in the uniform dust map (see 4.3), the
heuristic performed worse than the zigzag pattern. However, in the other
obstacle free dust scenarios, which are more likely, the targeted algorithm
(the heuristic) proved to be a major improvement over the template patterns.
The heuristic would have been just as good as the zigzag in the uniform case
had it been optimal, as it would then have mimicked the zigzag path. So a
statement that does apply is “employing an optimal targeted dust map
algorithm will in its worst case perform just as well as the template
patterns, no matter the state of the room”.
Template patterns such as the zigzag and the spiral are very effective in
environments where they can move freely, that is, open areas. However, with
access to a dust map they quickly become inferior, as seen in the results.
The reason is simply that the implemented algorithms that make use of the
dust map can mimic these patterns, possibly without cleaning redundant
cells.
We were interested in the following questions:
"Given that the environment is known and static and with the help of a
dust map; how should the robot move in the room's current state in order
to minimise the time needed for complete dust removal?" We have given
one greedy solution to this problem: after the initial sweep, find the closest
uncleaned cell and move straight towards it. An optimised solution can of
course be found, as described in section 5.3.
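The greedy solution above can be sketched as follows (a simplified illustration under our own assumptions, not the thesis's Unity code: the dust map is a dictionary from grid cells to dust levels, and movement is one 8-connected grid step per time unit):

```python
from math import hypot

def greedy_clean(dust_map, start):
    """Greedy heuristic sketch: repeatedly move straight towards the
    nearest still-dirty cell, cleaning every cell crossed on the way.
    Returns the number of steps taken until the map is clean."""
    dirty = {cell for cell, dust in dust_map.items() if dust > 0}
    pos, steps = start, 0
    dirty.discard(pos)  # the starting cell is cleaned immediately
    while dirty:
        # Pick the closest dirty cell by straight-line distance.
        goal = min(dirty, key=lambda c: hypot(c[0] - pos[0], c[1] - pos[1]))
        while pos != goal:
            # Step one cell towards the goal on each axis that differs.
            pos = (pos[0] + (goal[0] > pos[0]) - (goal[0] < pos[0]),
                   pos[1] + (goal[1] > pos[1]) - (goal[1] < pos[1]))
            steps += 1
            dirty.discard(pos)  # cells crossed en route are cleaned too
    return steps
```

The short planning horizon is visible here: each `min` call looks only one goal ahead, which is why an optimised multi-step planner, as discussed in section 5.3, could do better.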
"How does the efficiency of the robot differ between an empty room and one
with obstacles present?" As seen in section 4.5, the cleaning time clearly
increases with obstacles in the room. Note, however, that this does not mean
that the PPCR algorithms are inefficient. It is simply a reminder that once
the robot works in a room with obstacles, which is a likely real-life scenario,
more sophisticated algorithms have to be used, as template patterns will
struggle to perform as efficiently as the dust map algorithms.
5.5 General matters
Normally, when one vacuums an area manually, a sweeping stroke is repeated
over the same area until it has been cleaned. The reason this pattern
is not mimicked by the robot is simply a matter of reducing redundancy.
Going back and forth will most likely force the robot to cross an
already cleaned cell when it is ready to move on to another area. As seen in
the tests, going over already cleaned cells is costly and thereby increases
the time required to clean the room.
We previously mentioned that the first hypothesis ended up being falsified.
However, if the robot never had access to a dust map, meaning a cheaper
robot with fewer sensors, this approach would have been an excellent way to
achieve maximum dust removal in a short period of time, since a higher
concentration of dust collects near the walls and only a proximity sensor
would be needed. Using our heuristic instead requires sensors for detecting
the dust level as the robot moves along, as well as knowledge of its current
position in the room and of where the next goal specification is located. This
in turn requires more memory for storing all of this data, which implies
a more expensive robot. In the end, this study focuses on improving
efficiency with a particular technology, the dust map, which makes
wall-to-wall and other predefined patterns less desirable, but they
are worth keeping in mind for cheaper robots.
6 Conclusion
Creating path planning algorithms for an autonomous robotic vacuum cleaner
cannot be restricted to sweeping the entire room and potentially repeating
the process, for two reasons. First, one sweep is no guarantee of absolute
cleanliness, which was the basis for the dust map. Second, repeating the
full sweep of the entire room would create significant redundancy and thus
introduce inefficiency. As the results show, there is no need to clean
already cleaned cells.
The utilisation of dust maps allows for targeted algorithms: algorithms
that set up minor goal specifications as the robot traverses the room,
thereby minimising the cleaning time needed to fully clean the
room. We have proposed a greedy heuristic in this study, which was, how-
ever, not fully optimised, as it accounted for a very short planning horizon.
Yet it still proved to be a far more efficient way to minimise the cleaning
time than limiting the robot to predefined patterns in an obstacle-free
room scenario.
In general, obstacles drastically increase the power consumption, as the
robot has to turn and move around them. This is why an efficient PPCR
algorithm is needed for creating an optimised path for the initial sweep.
The downside to using the dust map technology is the need for accurate
sensors that cheaper robots would not be equipped with. However, more
efficient robots will in turn save energy and compensate for the higher cost
of the equipment.
The next step would be to further optimise the heuristic, so that the dust
map is used to its full potential. We strongly believe that equipping the
robot with a dust map is the key to making robotic vacuum cleaners efficient.
With it, they can focus on areas that need cleaning and thereby minimise the
cleaning time needed.
References
Durrant-Whyte, H. & Bailey, T. (2006), ‘Simultaneous localization and map-
ping: Part I’, IEEE Robotics & Automation Magazine 13(2), 99–110.
Gajjar, S., Bhadani, J., Dutta, P. & Rastogi, N. (2017), Complete coverage
path planning algorithm for known 2D environment, in ‘2017 2nd IEEE In-
ternational Conference on Recent Trends in Electronics, Information &
Communication Technology (RTEICT)’, pp. 963–967.
Gansari, M. & Buiu, C. (2014), Building a SLAM capable heterogeneous multi-
robot system with a Kinect sensor, in ‘Proceedings of the 2014 6th Inter-
national Conference on Electronics, Computers and Artificial Intelligence
(ECAI)’, pp. 85–89.
Hasan, K. M., Abdullah-Al-Nahid & Reza, K. J. (2014), Path planning algo-
rithm development for autonomous vacuum cleaner robots, in ‘2014 Inter-
national Conference on Informatics, Electronics & Vision (ICIEV)’, pp. 1–6.
Hess, J., Beinhofer, M., Kuhner, D., Ruchti, P. & Burgard, W. (2013),
Poisson-driven dirt maps for efficient robot cleaning, in ‘2013 IEEE In-
ternational Conference on Robotics and Automation’, pp. 2245–2250.
Kang, J. W., Kim, S. J., Chung, M. J., Myung, H., Park, J. H. & Bang,
S. W. (2007), Path planning for complete and efficient coverage operation
of mobile robots, in ‘2007 International Conference on Mechatronics and
Automation’, pp. 2126–2131.
Lee, H. & Banerjee, A. (2015), ‘Intelligent scheduling and motion control for
household vacuum cleaning robot system using simulation based optimiza-
tion’, pp. 1163–1171.
Moll, M., Sucan, I. A. & Kavraki, L. E. (2015), ‘Benchmarking motion plan-
ning algorithms: An extensible infrastructure for analysis and visualiza-
tion’, IEEE Robotics & Automation Magazine 22(3), 96–102.
Unity Technologies (2018), https://unity3d.com/. Accessed: 2018-05-02.
Yakoubi, M. A. & Laskri, M. T. (2016), ‘The path planning of cleaner robot
for coverage region using genetic algorithms’, Journal of Innovation in
Digital Ecosystems 3(1), 37–43. Special issue on Pattern Analysis and
Intelligent Systems, with revised selected papers of the PAIS conference.
URL: http://www.sciencedirect.com/science/article/pii/S2352664516300050