UNIVERSITY OF CALIFORNIA,
IRVINE
A simulator and assist-as-needed control strategy for learning to drive a power wheelchair
THESIS
submitted in partial satisfaction of the requirements
for the degree of
MASTER OF SCIENCE
in Mechanical and Aerospace Engineering
by
Laura Marchal Crespo
Thesis Committee:
Professor David J. Reinkensmeyer, Chair
Professor Faryar Jabbari
Professor James E. Bobrow
2006
© 2006 Laura Marchal Crespo
The thesis of Laura Marchal Crespo is approved:
Committee Chair
University of California, Irvine
2006
to Sergio
Contents
LIST OF FIGURES vii
ACKNOWLEDGMENTS x
ABSTRACT OF THE THESIS xi
1 Introduction 1
2 Literature Review 4
2.1 Smart wheelchairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Assist-as-needed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Virtual environments and haptics overview . . . . . . . . . . . . . . . 12
2.4 Line following . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Preliminary Studies 19
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4 Simulation-training program 32
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2 Nonholonomic power wheelchair dynamics . . . . . . . . . . . . . 34
4.3 Control interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3.1 Control interfaces overview . . . . . . . . . . . . . . . . . . . . 39
4.3.2 Steering wheel interface . . . . . . . . . . . . . . . . . . . . . 40
4.4 Input control law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.5 Virtual Reality World . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.5.1 Virtual Reality overview . . . . . . . . . . . . . . . . . . . . . 48
4.5.2 V-Realm Builder 2.0 . . . . . . . . . . . . . . . . . . . . . . . 50
4.5.3 Virtual Reality toolbox in Simulink . . . . . . . . . . . . . . . 53
5 Assist-as-needed controller 56
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2 Look-ahead performance variables . . . . . . . . . . . . . . . . . . . . 58
5.3 Assistance-as-needed. Reducing the controller coefficients as needed. . 66
5.4 Natural and machine variability . . . . . . . . . . . . . . . . . . . . . 68
6 Results 77
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6.2 Assistance increases performance . . . . . . . . . . . . . . . . . . . . 78
6.3 Assist-as-needed controller allows learning with small errors . . . . . 79
7 Conclusion 85
7.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Bibliography 88
A Visual perturbation experiment 94
A.1 exrotation.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.2 rotation.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
B Simulink training program 100
B.1 Main body diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
B.2 Non-holonomic dynamics & input law block . . . . . . . . . . . . . 102
B.2.1 Error calculation subsystem . . . . . . . . . . . . . . . . . . . 103
B.2.2 Filter machine variability in corners . . . . . . . . . . . . . . . 103
B.2.3 Filter machine variability in corners . . . . . . . . . . . . . . . 104
B.2.4 long velocity. Relation block between longitudinal and angular velocities . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
B.2.5 Input control law . . . . . . . . . . . . . . . . . . . . . . . . . 105
B.2.6 Torque. Torque calculation from linear forces. . . . . . . . . . 106
B.2.7 Non-holonomic dynamic equations . . . . . . . . . . . . . 106
B.3 Preparation graphics block . . . . . . . . . . . . . . . . . . . . . . . . 107
B.4 Assistance algorithm block . . . . . . . . . . . . . . . . . . . . . . . . 107
B.4.1 joystick. Assistance algorithm code . . . . . . . . . . . . . . . 108
List of Figures
2.1 Assist-as-needed. Trainers provide manual assistance to the child's hand to move the joystick in the desired direction, but encourage learning by reducing the assistance as the child improves his skills. . . 10
2.2 Craig Reynolds look-ahead guidance. . . . . . . . . . . . . . . . . . . 15
3.1 Left: path used in the experiment. Right: original path and 45 degree rotated path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Coordinates of the center and corners of a screen of 640x480 pixels. . 24
3.3 Left: the position of the cursor in blue and the path the cursor should follow in black. Right: error made when moving the cursor in a trial. . 25
3.4 Tracking error recorded in 20 successful trials. . . . . . . . . . . . . . 26
3.5 Tracking error and assistance recorded in 46 successful trials. . . . . . 27
3.6 Maximum error for each trial and assistance along trials. . . . . . . . 28
3.7 Moving average tracking error for the without-assistance experiment (solid) and the with-assistance experiment (dashed). . . . . . . . . . 29
3.8 Tracking error and assistance recorded in 20 successful trials (fR = 0.999, gR = 0.00005). . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.1 Wheelchair geometry. . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2 Impulse Stick Force Feedback Joystick from Immersion. . . . . . . . . 39
4.3 Wheelchair response using Position Control (left) and Velocity Control (right). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.4 Logitech MOMO Racing Force feedback wheel and pedal layout. . . . 41
4.5 Joystick Input block in the Virtual Reality Toolbox in Simulink. . . . 42
4.6 Absolute reference and wheelchair reference. . . . . . . . . . . . . . . 43
4.7 Absolute point of view (left) and relative point of view (right). . . . . 44
vii
4.8 Straight lines define the curves in the black line. . . . . . . . . . . . . 48
4.9 Codification of the black line in a *.dat file. . . . . . . . . . . . . . . 49
4.10 Example of a real world design in V-Realm Builder 2.0. . . . . . . . . 50
4.11 Power Wheelchair modeled in Solid Works. . . . . . . . . . . . . . . . 51
4.12 Mobile point of view. . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.13 Virtual Reality Sink block in Simulink. . . . . . . . . . . . . . . . . . 53
5.1 Representation of effect of point of view. . . . . . . . . . . . . . . . . 59
5.2 A: The angle between the distance error and the wheelchair is 90 degrees; the assistance over the driver's hands is large. B: The angle between the distance error and the wheelchair is less than 90 degrees; the assistance over the driver's hands is smaller. C: The angle between the distance error and the wheelchair is 0; there is no assistance over the driver's hands. . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.3 Path to follow (dashed) and path taken (solid) by the wheelchair when the wheelchair moves autonomously without interaction with the driver. 61
5.4 The shape of the control signal in a trial. The solid line represents the force applied by the steering wheel on the driver's hands. The value of the derivative of the orientation angle is shown as a dash-dot line, the look-ahead distance error as a dotted line, and the look-ahead direction error as a dashed line. . . . . . . . . . . . . . . . . . . . . 62
5.5 Left: Direction error versus distance in a trial. Right: Distance error versus distance in a trial. . . . . . . . . . . . . . . . . . . . . . . . . 64
5.6 Look-ahead direction error (dashed line) and real direction error (solid line) versus distance. . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.7 Deadband example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.8 Real direction error (dashed line), deadband-filtered error (thick solid line), and derivative of the look-ahead direction error (thin solid line) versus distance in a simple trial. . . . . . . . . . . . . . . . . . . . . 70
5.9 Real distance error (dashed line), filtered error (thick solid line), and derivative of the look-ahead direction error (thin solid line) versus distance in a simple trial. . . . . . . . . . . . . . . . . . . . . . . . . 71
5.10 The coefficient curve can have positive slope when large errors are made. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.11 (a) Path to follow (solid line) and real path followed by the driver (dashed line). (b) Distance error (dashed line), filtered distance error (thick solid line), and derivative of the look-ahead direction error versus distance. (c) Direction error (dashed line), filtered direction error (thick solid line), and derivative of the look-ahead direction error with respect to distance. (d) Coefficients of the assist-as-needed controller with respect to the distance driven. . . . . . . . . . . . . . 73
5.12 Filtered distance error and assistance with respect to distance driven. 74
5.13 Assistance coefficients affected by alpha (solid line) and not affected (dashed line). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.14 Mean distance error and factor alpha vs. trial. . . . . . . . . . . . . . 76
6.1 Distance error made with assistance (solid line) and without assistance (dashed line) vs. distance. . . . . . . . . . . . . . . . . . . . . 79
6.2 Subject running the second experiment. . . . . . . . . . . . . . . . . . 81
6.3 Top: Maximum error vs. trial. Trials marked with an asterisk are without assistance; trials marked with circles are with assistance. Bottom: distance error and assistance vs. total distance. . . . . . . . . . 82
6.4 Force and assistance coefficient vs. distance. . . . . . . . . . . . . . . 84
B.1 Simulink main body diagram. . . . . . . . . . . . . . . . . . . . . . . 101
B.2 Nonholonomic dynamics & input law block . . . . . . . . . . . . . . 102
B.3 Error calculation subsystem . . . . . . . . . . . . . . . . . . . . . . . 103
B.4 Filter machine variability in corners. . . . . . . . . . . . . . . . . . . 103
B.5 Input control law. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
B.6 Preparation graphics block. . . . . . . . . . . . . . . . . . . . . . . . 107
B.7 Assistance algorithm block. . . . . . . . . . . . . . . . . . . . . . . . 107
Acknowledgments
I would like to thank my advisor, Dr. David J. Reinkensmeyer, for welcoming me
into the Biomechatronics lab, for his advice and great motivation, and for making
the transition from the pure mechanics field to the amazing biomedical world so smooth.

In addition, my sincere appreciation goes to the Balsells Fellowship Program for
offering me the opportunity to come to the University of California, Irvine. Special
thanks to Professor Roger Rangel for his support and great advice.

Thanks to the members of the Biomechatronics lab for always being ready to be a
subject in any of my experiments (although some of them were quite tedious). Thanks
to the rest of the committee members, Professor James E. Bobrow and Professor
Faryar Jabbari.

And finally, special thanks to all my family and friends, especially to my partner
Sergio. Thanks for listening to me during my long dissertations about this thesis.
The effort in this thesis is nothing in comparison to the effort you made to come
and stay with me in the United States.
Abstract of the Thesis
A simulator and assist-as-needed control strategy for learning
to drive a power wheelchair
by
Laura Marchal Crespo
Master of Science in Mechanical and Aerospace Engineering
University of California, Irvine, 2006
Professor David J. Reinkensmeyer, Chair
Independent mobility is crucial for children's cognitive, social, and physical
development. Unfortunately, children with severe disabilities often do not learn how
to drive a power wheelchair because of the high cost of the training and the limited
number of specialized therapists.
The goal of this project is to design a simulation-training program that allows
disabled children to learn how to drive a power wheelchair in a safe environment
with less supervision.
In conventional driver's training, rehabilitation therapists walk with the power
wheelchair and provide manual assistance to the child's hand to move the joystick
in the desired direction, but encourage learning by reducing the assistance as the
child improves his skills. Our approach is based on this so-called "assist-as-needed"
therapy. A force feedback joystick is used to apply forces that help the chair follow
a black line on the floor. This force depends on the error made by the driver
and is reduced as the driver learns how to use the joystick to move the wheelchair
along the desired path.
This thesis demonstrates in a preliminary experiment that assistance-as-needed
can reduce large steering errors that could discourage and be dangerous for a child,
while still allowing learning to occur. We describe the design of a driving simulator
that can be used as a tool to study power wheelchair training, and a control
algorithm for a force feedback joystick that can robotically assist in steering the
wheelchair through the simulator.
Chapter 1
Introduction
According to the United Cerebral Palsy Association [1], about 764,000 people in
the United States are affected by one or more symptoms of cerebral palsy (CP).
Nowadays about 8,000 babies and infants are diagnosed with CP per year, and about
1,200-1,500 preschool children are diagnosed each year.
Cerebral palsy refers to a group of disorders that affect a person's ability to move
and to maintain balance and posture. People with CP have damage to the part
of the brain that controls muscle tone. The brain damage usually occurs during
fetal development or shortly after birth. Depending on the brain areas damaged,
the symptoms can vary from muscle tightness to spasticity, involuntary movement,
disturbances in gait or mobility, difficulty in swallowing, and problems with speech.
Seventy percent of children with CP exhibit other disabilities, such as seizure disorders,
vision impairment, hearing loss, and especially mental retardation [2].
Mental retardation is the most common developmental disorder [3] in the United
States. Mental retardation is characterized by a low intelligence quotient (IQ) and
by limitations in daily life, such as communication and sociability, self-care,
and getting along in social situations and school activities. Children with mental
retardation can and do learn new skills, but more slowly than children without
developmental disorders.
There is no cure for CP, nor for mental retardation, but training is important
in helping the child achieve maximum potential and development. This training,
or management, must be started as soon as the disability is diagnosed in order to
reduce the effects of developmental disorders.
Several studies [4] [5] have demonstrated that independent mobility is crucial
for children's cognitive, emotional, and psychosocial development. Independent
mobility increases the number of self-initiated movements and the communication
with adults. Most importantly, powered mobility provides great motivation in child
learning.
For children with physical disabilities, independent mobility is difficult to achieve,
especially at a young age. Manual wheelchairs and power wheelchairs are a great help
for physically disabled children, since they become a tool for exploration, locomotion,
and play. Unfortunately, many children with disabilities do not achieve independent
mobility, especially at a young age, when this stimulus of mobility particularly
influences development.
Many disabled children can learn how to drive a power wheelchair after intensive
practice with the help of specialized therapists. The training is labor-intensive and
requires highly skilled therapists. For these children, it is the lack of access
to effective training programs that limits learning and thus independent mobility.
Access to training is limited by the availability of trained therapists and by the
nature of the management, which makes the training process very expensive. A trained
therapist has to be present at all times, since driving a power wheelchair can
be dangerous because of the large errors children make when they first command a
wheelchair.
The goal of this thesis project is to take a first step in introducing a new,
appropriate rehabilitation technology to train disabled children to drive a power
wheelchair. The main idea of our approach is to design a smart wheelchair able
to teach disabled children how to steer a joystick to navigate safely and without
professional therapist supervision, but in a controlled clinical environment. Such a
smart wheelchair would make the training more accessible and cheaper, since the
child would be able to train without supervision, and thus without the extra cost
that a full-time therapist would introduce.
Chapter 2
Literature Review
2.1 Smart wheelchairs
In June 2000, a clinical survey on the adequacy of power wheelchair control
for persons with severe disabilities [6] showed that 40% of patients have problems
with steering and maneuvering tasks with power wheelchairs. Clinicians reported
that 44-49% of their patients who are unable to operate a power wheelchair would
benefit from the introduction of computer-controlled navigation, or a “smart
wheelchair”.
A “smart wheelchair” is a commercial power wheelchair with a controller and
a set of sensors that helps people with severe disabilities drive safely for long
periods of time and that interacts effectively with the user. Some groups have
developed smart wheelchairs that help patients who cannot drive a normal power
wheelchair increase their mobility and gain more independence. One of the first
smart wheelchairs was developed at the IBM T.J. Watson Research Center, the
headquarters of the IBM Research Division. The wheelchair was named Mr. Ed and
was implemented by Connell and Viola [7]. The system is a chair mounted on top
of a robot to make it mobile. The user could control the robot using a joystick and
also delegate control to the system to perform functions such as avoiding obstacles
or following other moving objects. The input to the robot comes from bumper
switches at the front and rear of the robot, eight infrared proximity sensors for
local navigation, and two sonar sensors at the front of the robot for following objects.
A more recent example of a robotic wheelchair is Wheelesley [8]. Wheelesley
was built by the KISS Institute for Practical Robotics (KIPR), a private non-profit
community-based organization that works with all ages to provide improved learning
and skills development through the application of technology, particularly robotics.
Wheelesley was derived from TinMan I and TinMan II [9], also built by the KISS
Institute. This wheelchair is a semiautonomous system: it allows the user to give
higher-level commands (directing the wheelchair to a desired location with commands
such as “forward” or “right”), while the robot executes low-level commands (avoiding
obstacles and keeping the chair centered in a hallway using the sensors arranged on
the chair). One of the basic ideas of Wheelesley is that it is a reactive system; it
does not use maps for navigation. One advantage of this strategy is that
users are not limited to one particular location by the need for maps or environment
modifications, so the system is useful in both indoor and outdoor environments.
Another smart wheelchair developed in recent years can be found at the University
of Michigan [10]. NavChair shares vehicle control decisions with the wheelchair
operator regarding obstacle avoidance, door passage, maintenance of a straight path,
and other aspects of wheelchair navigation, in order to reduce the motor and cognitive
requirements for operating a power wheelchair. NavChair introduces new navigation
assistance routines that improve the efficiency of the wheelchair: the Minimum
Vector Field Histogram (MVFH) and the Vector Force Field (VFF).
Private companies such as Applied AI Systems have developed commercial smart
wheelchairs. AAI is a research and development company specializing in intelligent
mobile robots and their applications. They develop and market intelligent mobile
robots to universities, government, and research institutions around the world. In
1998, the Tao project became a reality [9]. Tao incorporated functional modes
including basic collision avoidance, passage through a narrow corridor, entry through
a narrow doorway, maneuvering in a tight corner, and a new navigation function:
landmark-based navigation. To perform this last function, two CCD color cameras
on board the chair are used to identify landmarks in the environment, so that the
chair can travel from its present location to a given destination by following them.
AAI is commercializing its latest wheelchair, the Tao-7 (2005), which allows
safe navigation in indoor and outdoor environments.
The TIDE program in Spain developed TetraNauta [11]. This system provides
autonomous navigation by following landmarks in the environment. The patients
indicate where they want to go, and the chair controller performs the navigation
automatically. The user can recover manual control whenever he or she desires to do so.
Although there are some disadvantages, such as the fact that the chair can only follow
the proposed drive path, so the driver does not have the freedom to choose the path
to follow, the system results in a cheap wheelchair that helps solve the mobility
problems of disabled people, since the detector used is inexpensive (a standard
black and white video camera).
Other recent smart wheelchair research projects focus on improving reactive
navigation control and on developing new systems that can be added to a variety of
commercial power wheelchairs with minimal modification.
An example of improving reactive navigation control is Robchair [12], developed
in Coimbra, Portugal. Robchair is a smart wheelchair that runs under Reactive
Shared-Control (enabling semi-autonomous navigation of a wheelchair in unknown
and dynamic environments) based on a fuzzy-logic controller.
Most of the systems cited above are reactive; a map of the place where the
subject is navigating is not necessary. Such an approach is appropriate only
for people who can learn how to use the interface to communicate with the chair
controller and specify the steps needed to move. These previously developed smart
wheelchairs allow handicapped subjects to increase their mobility, but they assume
that the user's ability to control the wheelchair remains fixed. In other words, this
previous work compensates for disabilities instead of seeking to improve motor skills
so that the user can learn to drive a conventional and cheaper power wheelchair.
An exception to this purely compensatory approach appears to be research by
the CALL Centre (Communication Aids for Language and Learning), a small unit
within the Department of Educational Studies in the Faculty of Education at The
University of Edinburgh. The CALL Centre provides specialist expertise in
technology for children who have speech, communication, and/or writing difficulties in
schools across Scotland. This institution has developed a smart wheelchair [13] that
improves mobility in children who cannot control a usual power chair because of
perceptual or cognitive problems. The goal of the wheelchair is not only to improve
the mobility of disabled children, but also to study how mobility affects their cognitive,
social, and physical development. In order to study this, the CALL Centre
introduced twelve Smart Wheelchairs into three Edinburgh-based special schools
and reported trials by children of all ages with various special needs.
The CALL Centre smart wheelchair can be driven using single or multiple switches,
a scanning direction selector, a proportional joystick, a communication aid, or
a laptop computer. Bumpers protect the pilot and the environment: on collision,
the chair stops and takes avoiding action. A track follower lets the chair follow
lines on the floor from room to room or through tight situations like doorways. The
line follower is used when a child has learning difficulties and driving a reactive
wheelchair is too complicated: such children do not understand the sequences of
movements that take them from one place to another and are not able to avoid
obstacles. With a line follower, the child only has to give the order to go. Another
use is to motivate a child who has no motivation for a simple chair, perhaps because
he does not associate the chair with going somewhere interesting. The line follower
makes fewer demands on children's physical, cognitive, and perceptual skills and can
be used by more severely disabled children. With a line follower, adult supervision
can be reduced. There is the danger that, by only following a line, the child will not
have a chance to learn new skills. Usually the line follower is a preliminary step
toward a reactive smart chair and can end with the normal use of a commercial wheelchair.
Some rehabilitation therapists [14] prefer not to use a line follower because of the
risk of not developing new skills. Such training uses a power wheelchair (e.g., the
Entra Tiro [15]) and requires intensive professional supervision, so the cost of the
training becomes greater. This kind of teaching is based on manually guiding the
child's hand on the joystick, teaching the child to move the hand according to the
direction to follow.
Another exception to the compensatory approach is provided by the Spanish
institution IAI-CSIC (Instituto de Automatica Industrial, Spanish National Council
for Science Research), which has recently developed an example of a specialized
smart wheelchair for disabled children. The PALMA project [2] is an assistive
robotic vehicle for disabled children that focuses on teaching them how to drive
a power wheelchair. The PALMA power wheelchair is a smart wheelchair with
a reactive navigation system that helps the child drive safely by avoiding obstacles.
The power wheelchair is provided with an ultrasonic belt for obstacle detection.
There are a total of 8 ultrasonic detectors around the wheelchair to assure system
safety, as well as a maximum speed of 0.8 m/s, the maximum speed at which the
obstacle detection system can avoid impact.
The smart power wheelchair has the appearance of a toy, which makes it very
attractive to children. The goal of the PALMA project is to teach disabled
children aged from 3 to 7 years old how to drive a simple power wheelchair. The
interface between the child and the power wheelchair can be changed depending on
the child's disability. The research group designed a simple board with four buttons
to go forward, backward, to the right, and to the left, and an extra button for
stopping. They also considered the possibility of a joystick interface. The difficulty
level of driving differs depending on the child's skills. The power wheelchair has
different levels of autonomy, from higher to lower vehicle autonomy. The professional
educators working on the project defined six difficulty levels, which require more
intervention by the child as the level becomes higher. The educator decides when
to upgrade the level by analyzing the ability of the child and the learning process.
The research group ran a clinical validation of the PALMA wheelchair, and the
results were very promising. Six children from 3 to 7 years old participated in
the study. All six had different mental retardation levels, and none of them had
experience with powered mobility. All the children learned how to drive the power
wheelchair up to level 4 (levels go from 1 to 6) in only 6 training sessions, where each
session was 15 minutes long. After these promising results, the authors launched a
project to industrialize the wheelchair for the rehabilitation market.
The key idea of the present project is to develop a line-follower smart wheelchair,
a safe and intelligent system that has the advantage of not requiring direct
supervision from a therapist, but that also does not remain a simple transportation
tool, since the errorless control provided to the child gradually decreases as he
learns how to drive the power wheelchair.
2.2 Assist-as-needed
Errorless learning is a teaching technique focused on reducing the errors made
by a subject when he or she is learning a new skill. This method of errorless
learning was first applied by Terrace [16], who taught pigeons to distinguish between
a red and a green key. Errorless teaching is the opposite of errorful teaching. The
errorful method involves the learner in trial-and-error guesses, creating errors that
can affect the learning process of some subjects, such as amnesic patients or children
with learning difficulties. The errorless learning technique consists of presenting the
correct information or procedure so that erroneous answers are minimized [17].
The errorless teaching process is helpful since it increases the chances of learning,
avoiding large errors that can discourage and be dangerous to the patient. The
assistance-as-needed technique for wheelchair driver's training is based on the errorless
teaching process. The therapist helps the patient make a movement, minimizing
the steering error by applying only the force needed to finish the movement (figure 2.1),
but encourages learning by reducing the assistance as the patient improves his or her
skills.
Figure 2.1: Assist-as-needed. Trainers provide manual assistance to the child's hand to move the joystick in the desired direction, but encourage learning by reducing the assistance as the child improves his skills.
An assistance-as-needed robotic device has been developed recently at UC
Irvine [18]. The aim of the robotic device is to work as a physical rehabilitator
that teaches patients how to walk after a neurologic injury. The authors express the
goal of assistance-as-needed as an optimization problem that minimizes a cost
function defined as a weighted sum of the assistance and the error, Eqn. (2.1).
Minimizing this cost function requires minimizing the amount of assistance applied
while also reducing the error made by the subject.
J = (1/2) · (x_{i+1} − x_d)^2 + (λ_R/2) · (R_{i+1})^2    (2.1)
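To make the trade-off in Eqn. (2.1) concrete, the following sketch evaluates the cost for sample values. It is illustrative only: the function name `cost` and the numeric values are our own assumptions, not taken from the implementation in [18].

```python
# Sketch of the assist-as-needed cost of Eqn. (2.1).
# All names and numeric values are illustrative assumptions.

def cost(x_next, x_d, r_next, lambda_r):
    """Weighted sum of squared tracking error and squared assistance."""
    tracking_term = 0.5 * (x_next - x_d) ** 2
    assistance_term = 0.5 * lambda_r * r_next ** 2
    return tracking_term + assistance_term

# A larger weight lambda_r penalizes assistance more heavily, so a
# minimizer favors tolerating small errors over applying more help.
print(cost(x_next=1.2, x_d=1.0, r_next=0.5, lambda_r=2.0))
```

With λ_R large, the assistance term dominates and the optimal controller gives only as much help as the error justifies.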
The controller introduced by the authors has a form parallel to the neural-learning
law identified for internal model formation in [19], where the output force
from muscular activity on the (i+1)th movement is proportional to the previous
output and the error made in the previous movement.
u_{i+1} = f_H · u_i − g_H · (x_i − x_d)    (2.2)
where u is the value of the force generated by the motor system, f_H is the
forgetting factor (0 < f_H < 1), g_H is the learning gain, x_i is the position in the
ith iteration, and x_d is the desired position defined by the path. Note that under this
definition of the internal model, the larger the error made by the subject, the
larger the response from the nervous system, and the faster the subject learns.
The assist-as-needed controller used in [18] has the mathematical form:

G_{i+1} = f_R · G_i + g_R · |x_i − x_d|    (2.3)
where G is the amount of assistance applied, f_R is the forgetting factor that
governs the forgetting process (0 < f_R < 1), g_R is the error-based feedback
gain, x_i is the performance variable in the ith iteration, and x_d is the desired value of
the performance variable. The assistive algorithm is thus based on the error and the
previous assistance value, so that, thanks to the forgetting factor, the assistance is
reduced as the procedure is repeated, provided the error is small enough.
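The interplay of the two update laws can be sketched with a toy one-dimensional simulation. The "plant" x = u + G and all gain values below are illustrative assumptions (the published controller acts on steering performance variables, not on this toy system):

```python
# Toy simulation of the human internal-model law (Eqn. 2.2) and the
# assist-as-needed gain law (Eqn. 2.3). The one-dimensional plant
# x = u + G and all constants are illustrative assumptions.

f_H, g_H = 0.9, 0.5   # human forgetting factor and learning gain
f_R, g_R = 0.8, 0.1   # robot forgetting factor and error feedback gain
x_d = 1.0             # desired value of the performance variable

u, G, x = 0.0, 0.0, 0.0
for i in range(50):
    x = u + G                         # plant: human force plus assistance
    u = f_H * u - g_H * (x - x_d)     # human adapts from the last error
    G = f_R * G + g_R * abs(x - x_d)  # assistance tracks the error, then decays

print(round(x, 3), round(G, 3))
```

Early in the run the error is large, so the assistance G grows; as the human term u adapts and the error shrinks, the forgetting factor f_R makes the assistance decay toward a small value.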
2.3 Virtual environments and haptics overview
In recent years the number of research groups working in virtual reality and haptic
devices as tools to improve motor rehabilitation has grown considerably. A virtual
environment (virtual reality, VR) is created through software and hardware to simulate a real environment. A haptic device is an interface device that can apply force to a subject.
Virtual reality is a powerful tool for motor rehabilitation [20]. VR provides the three main elements necessary for successful motor learning: repetitive practice, performance feedback, and motivation. Repetitive practice is carried out through an input device (haptic or not), performance feedback is given through real-time graphics on the screen, and motivation is generated by the video-game character of the virtual environment program. VR has several advantages over practice in the real world. First, it is possible to simplify the task, making it easier at the beginning of training. Virtual training is always safer than training in the real world, and it can be made entertaining, since the task is presented to the subject in a game format that makes the subject more engaged.
Several studies ( [21] [22] ) have shown that tasks learned by healthy subjects through VR simulation transfer to the real world. Recent studies show that disabled subjects can also learn new skills using VR and transfer them to the real world. Virtual reality training has proved useful for disabled people, including stroke survivors and Parkinson patients.
An interesting experiment by Webster et al. [23] shows that patients with stroke and unilateral neglect syndrome can learn to steer a conventional wheelchair using VR and transfer the learned skills to the real world. The VR simulation program is a simple non-immersive VR: patients view a virtual wheelchair on the computer screen and drive it through the virtual environment. They navigate through the VR room avoiding obstacles, using as interface either a standard hand controller or a wheelchair simulator once the patients are strong enough to propel themselves. The wheelchair simulator consisted of two pedals to turn left and right and a right-sided wheel to move forward or backward.
Several studies show that subjects learn novel movement trajectories better when haptic guidance is applied. At the University of California, Irvine, an experiment was run to show how haptic demonstration can improve performance when tracking a curved trajectory in three dimensions [24]. The experiment used a Phantom device that created a virtual channel along the desired path, so that a force was generated when the subject left the desired curve. The results demonstrate that the subjects improved their ability to follow the path when training with the haptic device.
Several research groups are working on introducing force feedback joysticks into the control of power wheelchairs. An experiment to evaluate whether force feedback enhances the performance of experienced power wheelchair users was run at the University of Pittsburgh [25]. A virtual reality system was created to evaluate the use of a force feedback joystick when navigating in an unknown environment. When the chair was in danger of colliding with a virtual obstacle, the feedback joystick applied a force to the driver's hand to steer away from the obstacle. The results showed that the performance of experienced drivers increased when the assistance was activated.
At the University of Metz, in France, a similar study was performed with healthy subjects [26]. A commercial force feedback joystick was used with a simulation program to evaluate how the performance of the driver changes when assistance is applied. The results show that, when navigating in an unknown environment, there are fewer collisions when the force feedback assistance is activated.
A recent work [27] shows how predictive haptic guidance helps improve performance in dynamic tasks such as driving. The authors show that predictive haptic assistance can improve driving performance compared to no assistance and to a standard feedback controller. They used a force feedback steering wheel with a simulation program to show that predictive haptic guidance substantially reduces the error made by a driver when performing a novel movement.
2.4 Line following
Mobile robots have become very popular in recent years and have been widely studied. One of the problems of mobile robotics that researchers have sought to solve is autonomy: an autonomous robot is able to navigate without communication with an operator. Line following has become one of the cheapest and most feasible ways to operate a robot autonomously. There are different kinds of mobile robots. In our study we focus on nonholonomic mobile robots, although the literature describes many other types, such as car-like robots [28] or hexapod robots such as RHex [29].
Line following consists of a predetermined path that the mobile robot has to follow autonomously, choosing the best steering signal to navigate smoothly. There are different ways to define the path to be followed: it can be a path stored in the robot's memory, a white or yellow line on the floor that the robot "sees" through a vision system, or a magnetic wire under the floor that the robot can detect. Since the introduction of CCD cameras, the cheapest and easiest way to run a line-follower mobile robot is to use a white line on the floor [30].
There are several studies on strategies for an accurate and optimal line-follower algorithm. The first control laws were based on conventional feedback control. Most of the controllers were PID controllers that used the direction error and position error to obtain good, smooth movement [31] [32]. For these linear or nonlinear feedback systems, Lyapunov analysis ensures global asymptotic stability of the closed-loop system. Such feedback systems work well when the speed is not large. To maintain smoothness over a wide range of speeds, [33] introduces feedforward control using look-ahead vision points, together with feedback based on independent vision points. These vision points vary independently for the feedback and feedforward terms depending on the speed: for feedforward, as the speed increases the range of the vision point is shortened, and as the speed decreases the range is lengthened. Feedforward makes it possible to predict the steering corrections and thereby minimize them.
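A minimal sketch of such a PID-style line follower on a unicycle model follows. This is our own illustration, not the controller of [31] or [32]; the gains kp, kd, kh and the simulation parameters are hand-picked assumptions.

```python
import math

def pid_steering(lateral_err, heading_err, lateral_err_prev, dt,
                 kp=2.0, kd=0.5, kh=1.5):
    """Steering-rate command built from the position error (P and D
    terms on the lateral offset) and the direction error (heading)."""
    d_err = (lateral_err - lateral_err_prev) / dt
    return -(kp * lateral_err + kd * d_err + kh * heading_err)

# Simulate a unicycle robot following the line y = 0.
x, y, theta = 0.0, 0.5, 0.3     # start offset from and misaligned with the line
v, dt = 1.0, 0.02               # constant forward speed, integration step
prev = y
for _ in range(500):
    omega = pid_steering(y, theta, prev, dt)
    prev = y
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
print(abs(y))  # the robot has converged onto the line
```

Linearizing about the line gives y'' + 2y' + 2y = 0 with these gains, a well-damped stable system, which is why the offset dies out smoothly at this moderate speed.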
In the virtual reality and video game industries we can find several works on steering behaviors for autonomous agents. In particular, C.W. Reynolds [34] developed a control idea for following a known path using a look-ahead approach. He defined a linear look-ahead position predictor that depends on the linear velocity of the vehicle and compared the predicted position with the desired position on the path at that time, as depicted in figure 2.2.
Figure 2.2: Craig Reynolds look-ahead guidance.
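The idea in figure 2.2 can be sketched as follows. This is a loose reconstruction under our own assumptions (the path represented as a waypoint list, a fixed look-ahead time, Python 3.8+ for `math.dist`), not Reynolds's actual algorithm or code.

```python
import math

def seek_steering(pos, vel, path, lookahead_t=0.5):
    """Reynolds-style path following: predict the future position
    linearly from the current velocity, project it onto the path,
    and return the heading correction toward that target."""
    # linear look-ahead prediction of where the vehicle will be
    future = (pos[0] + vel[0] * lookahead_t, pos[1] + vel[1] * lookahead_t)
    # nearest point of the path (given as a list of waypoints)
    target = min(path, key=lambda p: math.dist(p, future))
    # desired heading correction from the predicted position to the target
    return math.atan2(target[1] - future[1], target[0] - future[0])

path = [(i * 0.1, 0.0) for i in range(100)]     # straight line y = 0
angle = seek_steering(pos=(0.0, 0.5), vel=(1.0, 0.0), path=path)
print(angle)  # negative: steer downward, back toward the line
```

Because the correction is computed at the predicted position rather than the current one, the vehicle begins turning before the deviation grows, which is what makes the look-ahead approach smooth.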
Although PID controllers work well, most such control systems ignore uncertainties and require complete knowledge of the dynamics. New control strategies have been introduced to address these problems; Fuzzy Logic control is the most widely used today.
Fuzzy Logic control is a simple control approach based on human behavior. Unlike classical logic, which is based on "true" and "false", Fuzzy Logic works with degrees of truth. The system is described using human language, and the inputs are processed using language rules of the form IF...THEN. Fuzzy systems do not need complete knowledge of the dynamics of the mobile robot, so the complexity of the control is reduced.
An example of mobile robot navigation using Fuzzy Logic can be found in [35]. This paper introduces cruise-control speed into the fuzzy algorithm. The main idea is to control the robot's speed depending on the complexity of the line: on a straight road the maximum speed is possible, while on a narrow and complex road the speed must be reduced.
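As a toy illustration of such a speed rule base, the following sketch maps line complexity (represented here by curvature) to a cruise speed. The membership breakpoints, speeds, and rule set are invented for illustration, not taken from [35].

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(curvature, v_max=1.5, v_mid=0.8, v_min=0.3):
    """Cruise speed from line complexity, via three IF...THEN rules:
    IF curvature is LOW  THEN speed is FAST
    IF curvature is MID  THEN speed is MEDIUM
    IF curvature is HIGH THEN speed is SLOW
    Defuzzified as a weighted average of singleton outputs."""
    low = tri(curvature, -0.5, 0.0, 0.5)
    mid = tri(curvature, 0.2, 0.5, 0.8)
    high = tri(curvature, 0.5, 1.0, 2.0)
    w = low + mid + high
    if w == 0.0:
        return v_min          # outside all rules: be conservative
    return (low * v_max + mid * v_mid + high * v_min) / w

print(fuzzy_speed(0.0))   # straight line: maximum speed
print(fuzzy_speed(1.0))   # sharp curve: reduced speed
```

Between the rule peaks the output blends smoothly, which is the practical appeal of fuzzy control here: no vehicle model is needed, only a handful of human-readable rules.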
Although Fuzzy Logic is a powerful and simple control strategy, it also has disadvantages. The generation of the IF...THEN rules can become very complex depending on the number of inputs to the system. In [36] the authors introduce an algorithm that searches for the fuzzy rules of an optimal navigation, so that expert knowledge is not always necessary to run the application. Another problem is that, although fuzzy control removes the need for perfect knowledge of the vehicle dynamics, the problem of uncertainties remains. Type-2 fuzzy logic systems minimize the effects of uncertainties that affect an ordinary fuzzy logic system, although they make the system more difficult to understand and manipulate. Type-2 fuzzy logic reduces these uncertainties because the membership functions are not crisp but fuzzy [37]. Some applications of type-2 fuzzy control to mobile robots can be found in the literature. In [38] the authors describe an obstacle-avoidance robot and a corridor follower with infrared sensors. They measured the time needed to finish a circuit with "true"/"false" logic, type-1 fuzzy, and type-2 fuzzy control. The time needed to finish the circuit was always shorter with type-2 fuzzy than with type-1 fuzzy or classic logic.
In addition to Fuzzy Logic, neural networks have been applied to line following. Neural networks are control strategies modeled on the neurological functions of the brain; they use a training data set to create rules capable of making behavioral predictions. This kind of control solves both the problem of uncertainties and the need for complete knowledge of the dynamics. Several applications to mobile robots can be found in the literature. In [39] the authors describe a neural network capable of learning the entire nonholonomic mobile robot dynamics online; the neural network controller successfully deals with bounded disturbances and unmodeled dynamics in the mobile robot. In [40] the authors designed an intelligent vehicle that can follow a white line on the floor with an omnidirectional camera. The authors developed a steering algorithm based on human driving behavior, the Human Imitating Steering Control Algorithm, using neural networks. This algorithm allows a faster and more accurate response, since the nonholonomic dynamics do not have to be solved, considerably reducing the computational time.
Fuzzy Logic and neural networks can also work together, combining the strengths of the two controllers. Fuzzy Logic rules and membership functions must normally be tuned by an expert to obtain a good system response. So-called neuro-fuzzy control is defined as a controller that automatically tunes the fuzzy control rules and membership functions using the learning ability of neural networks [41]. In [42] the effectiveness of neuro-fuzzy control applied to mobile robot navigation is illustrated: a nonholonomic mobile robot has to track a defined path, and it is shown that as more trials are performed the response of the robot improves, until the controller finally achieves accurate and smooth movement.
Line-follower algorithms appear in many research fields. An example is the AGV (Automated Guided Vehicle) field, where vehicles have traditionally been guided by magnetic wires. In [30] the authors assert that white or yellow tape with a CCD camera is cheaper than a magnetic band, because of the low cost of both the material and its maintenance. An AGV guided with CCD cameras cannot run fast, because image acquisition and processing are slow. To increase the traveling speed of the AGV, they introduce optical fibers as a guidance method, similar to magnetic tape but cheaper. The AGV carries a CCD camera and four groups of fiber sensors. The camera has a wide field of view and is used for speed selection and higher-level control strategy, such as obstacle recognition. The optical fiber sensors provide the position and direction errors and are used for steering control. In this particular case a PID controller is used for steering and fuzzy control for speed selection.
The University of Seville, in Spain, has designed a power wheelchair that follows a tape on the floor using a cheap CCD camera. The aim of the project is to develop a controller that lets people with severe disabilities navigate autonomously in indoor environments. Although many of today's "smart" power wheelchairs are based on reactive navigation, the authors chose a line follower because they were looking for a cheap system [11]. Another example of line following in smart wheelchairs is the power wheelchair developed at the CALL Centre, described earlier. This institution has developed a smart wheelchair [13] that improves mobility for children who cannot control a usual power chair because of perceptual or cognitive problems. The smart wheelchair can run in reactive mode, but when the child's disability is severe a line-following mode can be selected; in this case a CCD camera tracks the landmarks on the floor.
A very active research group in autonomous vehicles is the American National Automated Highway System Consortium (NAHSC). The goal of the NAHSC is to develop driver assistance technologies for motor vehicles such as cars and buses in order to make highways safer and more efficient. The NAHSC works with universities and private companies on new technologies for safe, autonomous cars. An example of the use of line following can be seen in the NavCar at Carnegie Mellon University. NavCar [43] is a semiautonomous car that integrates the Automotive Run-Off-Road Avoidance system (AURORA), which warns the driver when the car is too close to the road line. Many highway accidents are caused by incorrect lane changes. The hardware consists of a vision system, a color camera mounted on the side of the car, that detects the line; knowing the velocity of the car, the system calculates the time to cross the line and alerts the driver of the danger. The system is calibrated to recognize the line automatically, regardless of the weather and the kind of line at any given moment.
Another example of line-following control can be seen in a new rapid bus in Las Vegas. Maxride, a middle way between a conventional bus and a rail vehicle, operates in North Las Vegas. It is a new generation of urban transportation, equipped with a camera that tracks two dashed white lines defining the path to follow. The system is not fully automatic, since the driver has to accelerate and brake, but the bus parks by itself.
Chapter 3
Preliminary Studies
3.1 Introduction
The hypothesis of these preliminary studies is that subjects can learn to move in a disturbed visual field without making large errors when assistance-as-needed is applied. The experiment is based on steering a mouse cursor on the computer screen along a trajectory divided into horizontal and vertical sections. No force is applied to the subject's hand; there is only a visual disturbance that rotates the displayed cursor position 45 degrees clockwise with respect to the actual mouse movement.
The motivation of our experiment is to study how humans adapt to a novel environment with assistance-as-needed while following a circuit. This kind of driving training could be used to teach disabled persons, for example children with severe and multiple disabilities who cannot use ordinary power wheelchairs. Reducing the error made is important when teaching a disabled child to drive a power wheelchair along a black line on the floor: a large error, defined as the distance of the wheelchair from the black line, can be dangerous and discourage the child.
Many studies have shown how humans adapt to novel environments during reaching [44] and walking [45] [19]. There are also studies on how the interaction of visual and proprioceptive feedback improves adaptation during reaching [46]. Recent studies show how assistance-as-needed can help subjects learn a new task without making large kinematic errors [18].
We say that a subject has learned to drive under a visual disturbance field when the error made while driving the cursor on the screen is less than a maximum value that is considered normal.
3.2 Methods
This experiment was implemented using Cogent 2000, developed by the Cogent 2000 team at the FIL and the ICN, and Cogent Graphics, developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience. Cogent 2000 is a freely distributed Matlab toolbox for presenting stimuli and recording responses with precise timing. Cogent 2000 (now renamed Cogent) provides utilities for the manipulation of sound, visual stimulation, keyboard recording, mouse position recording, joystick inputs, the serial port, the parallel port, subject responses, and physiological monitoring hardware from within Matlab. Cogent 2000 was used in this experiment to present images on the desktop screen and to record the mouse path performed by the subject.
A path formed by vertical and horizontal black line sections is displayed on the computer screen (figure 3.1, left).
Figure 3.1: Left: path used in the experiment. Right: original path and 45-degree rotated path.
The subject had to perform two different experiments: the first under visual perturbation alone and the second with visual perturbation and assistance-as-needed. The experiments consist of following the black circuit on the screen with the mouse cursor. The subject starts at the arrow position at the bottom of the screen and finishes at the box at the top. Each trial finishes automatically when, after tracing all the horizontal and vertical lines in order, the box is reached. The subject can see the path he/she is tracing, since visual feedback is provided, and has to try to make the smallest possible error in each trial. The error is computed as the absolute value of the difference between the actual position and the desired position (on the line). The tracking error for each position is computed, along with the maximum error for each circuit, defined as the magnitude of the maximum deviation from the desired path in a trial.
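These two error measures can be computed directly. A small sketch follows, in which the desired path is assumed to be sampled as a list of pixel points; this representation and the function names are our own, not Cogent's.

```python
import math

def tracking_errors(trajectory, path):
    """Per-sample tracking error: for each recorded cursor position,
    the distance to the nearest sampled point of the desired path."""
    return [min(math.dist(p, q) for q in path) for p in trajectory]

def max_trial_error(trajectory, path):
    """Magnitude of the maximum deviation from the path in one trial."""
    return max(tracking_errors(trajectory, path))

# desired path: a vertical segment x = 0 sampled in pixels
path = [(0, y) for y in range(0, 101)]
traj = [(5, 10), (12, 40), (3, 90)]     # recorded cursor samples
print(tracking_errors(traj, path))      # [5.0, 12.0, 3.0]
print(max_trial_error(traj, path))      # 12.0
```

For the axis-aligned segments used here the nearest-point distance reduces to the horizontal or vertical offset from the line, matching the error definition above.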
The visual perturbation is defined as a 45-degree clockwise rotation of the actual position with respect to the initial point (blue arrow in figure 3.1). When the subject tries to follow a vertical or horizontal straight line on the computer screen using the mouse as input device, he/she sees the trajectory rotated on the screen (figure 3.1, right, green line). The position used to compute the performance error is the position shown on the screen, since this is the one the performer perceives as his/her error and tries to reduce.
The assistance is defined as a reduction of the visual rotation perturbation, so that the net perturbation angle affecting the subject is reduced. We provided assistance so that the task was easier at the beginning, avoiding the large errors that a large visual rotation perturbation would cause. Instead of a full 45-degree visual perturbation, the assistance reduces this angle, making the task easier and more comfortable for the subject. We base the value of the visual assistance on the model of robotic assistance as an adaptive algorithm described in [18]. The assistance algorithm is defined as:
Gi+1 = fR ·Gi + gR · |xi − xd| (3.1)
Where G is the value of the rotation assistance (counterclockwise visual rotation angle), fR is the forgetting factor, gR is the learning gain, xi is the position in the ith iteration and xd is the desired position defined by the path. The assistive algorithm is iterative, basing the next value of the visual assistance on the current error and the current assistance value. Note that fR must be less than 1, so that if the error is small enough the value of the assistance decreases as more of the path is followed (as more tasks are finished). As the subject learns how to steer the mouse, the assisting counterclockwise angle is reduced and the total visual perturbation increases, until the visual assistance becomes zero.
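In code, the net rotation applied to the mouse displacement can be sketched as follows; the function and argument names are our own, not those of the experiment software.

```python
import math

def displayed_position(dx, dy, assistance_deg, perturbation_deg=45.0):
    """Map an actual mouse displacement (dx, dy) to the displacement
    drawn on the screen. The clockwise perturbation is partially
    cancelled by the counterclockwise assistance angle G, so the net
    rotation seen by the subject is (perturbation - G) clockwise."""
    net = math.radians(perturbation_deg - assistance_deg)
    # clockwise rotation of (dx, dy) by the net angle
    sx = dx * math.cos(net) + dy * math.sin(net)
    sy = -dx * math.sin(net) + dy * math.cos(net)
    return sx, sy

# full assistance (G = 45): no net perturbation is perceived
print(displayed_position(0.0, 1.0, assistance_deg=45.0))   # (0.0, 1.0)
# no assistance (G = 0): the full 45-degree clockwise rotation applies
print(displayed_position(0.0, 1.0, assistance_deg=0.0))
```

At the start of the experiment G = 45 and the mapping is the identity; as G decays under the update law, the net clockwise rotation grows toward the full 45 degrees.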
The assistive algorithm is similar to the neural learning algorithm identified in [19], where the output force from muscular activity on the (i+1)th movement is proportional to the previous output and the error made in the previous movement.
ui+1 = fH · ui − gH · (xi − xd) (3.2)
Where u is the force generated by the motor system, fH is the forgetting factor (fH < 1), gH is the learning gain, xi is the position in the ith iteration and xd is the desired position defined by the path. Note that for this definition of the internal model, the larger the error made by the subject, the larger the response from the nervous system.
The subject who performed the experiments is a 26-year-old right-handed healthy male. Two experiments were performed. The first consisted of 20 successful trials, where a successful trial is defined as the completion of the circuit in between 20 and 50 seconds; a trial completed in less or more time is not considered valid. There was a 10-second break between trials to allow the subject to rest the hand muscles and eyes. During this short break the subject could move the hand freely and look away from the computer screen, but was not allowed to move the mouse while looking at the screen.
The second experiment was performed with assistance 3 hours after completing the first. In the second experiment the subject had to finish enough successful trials to reduce the assistance to zero. The assistance was defined as in equation (3.1) with constant values fR = 0.9999, gR = 0.000007. Note that the forgetting factor is less than 1 and the learning gain is small: we are interested in a slow reduction of the assistance, so that the subject can learn the new rotation perturbation continuously, without large changes in the perturbation during the task and without the large errors that sudden changes in the assistance can cause.
The initial value of the assistance was a 45-degree counterclockwise rotation, so that the subject perceived no visual perturbation when the experiment started. As the experiment progresses, the assistance is reduced following the error-based assistance algorithm defined in equation (3.1), until the assistance becomes zero and the subject feels only the 45-degree clockwise visual perturbation. We recorded the tracking error, the maximum error for each trial, and the value of the assistance at each position.
Cogent 2000 works in pixels by default, since it was created to work with graphics on computer screens. The position and error are expressed in pixels to simplify the calculations, although it is possible to use visual angle as the unit. The coordinate system is Cartesian (x, y), with the origin (0,0) at the center of the computer screen; the x-axis increases to the right and the y-axis increases upwards. The screen is 640 pixels wide and 480 pixels high. The coordinates of the center and corners of the screen in pixels are shown in figure 3.2.
Figure 3.2: Coordinates of the center and corners of a screen of 640x480 pixels.
3.3 Results
The tracking error, maximum error in each trial, and the trajectory of the mouse
on the screen were recorded for each trial for both the first and second experiment.
In the first experiment the subject has to follow the line under visual perturbation
without assistance during 20 trials. In figure 3.3 the trajectory of the cursor and the
tracking errors in a single initial trial are showed. The graphic of the path followed
by the subject in one trial (blue line) and the path the subject should follow (black
line) are showed on the left of figure 3.3. The tracking error versus the amount of
distance in pixels steered during the same trial is showed on the right of figure 3.3.
Figure 3.3: Left: the position of the cursor in blue and the path the cursor should follow in black. Right: error made when moving the cursor in a trial.
From figure 3.3 we note that the error is large at the beginning of the trial, and that as the subject moves along the path he learns how to move under the visual perturbation. The learning process causes the error to decrease, since the subject learns how to counteract the visual perturbation, improving his performance and making smaller errors. Note that the largest errors occur when the path changes direction from vertical to horizontal or vice versa. Along each straight line the subject learns how to perform under the perturbation, and he also learns how to change direction with smaller error.

The tracking error recorded for the 20 successful trials with a constant visual rotation perturbation is shown in figure 3.4.
Figure 3.4: Tracking error recorded in 20 successful trials.
From figure 3.4 we can see that the error decreases as the average distance moved along consecutive paths increases. We can say that an internal model has been formed, since aftereffects can be observed after using the mouse with the visual rotation perturbation. The errors at the beginning of the experiment are large, exceeding 30 pixels. The goal of assistance-as-needed is to eliminate these large errors at the beginning of training.
In the second experiment, with assistance, 46 trials were needed to reduce the assistance to zero counterclockwise rotation; after the assistance was eliminated, only the 45-degree clockwise rotation visual perturbation remained. The tracking error, the magnitude of the maximum error for each trial, and the assistance were measured throughout the experiment. Figure 3.5 shows the evolution of the assistance, measured in degrees (thick line), and the tracking error in pixels (thin line) as the average distance in pixels moved along consecutive paths increases.
We can see that the error is almost constant across the trials; there are no large errors at the beginning of the task, as in the previous experiment, nor when the assistance is totally removed.

Figure 3.5: Tracking error and assistance recorded in 46 successful trials.

The error remains constant, below 10 pixels, as
at the end of the first experiment. There are two peaks, at positions 0.2x10^4 and 6.4x10^4; these large errors were the result of accidents. An obstacle on the table where the subject was moving the mouse caused the first peak, and a malfunction of the mouse caused the second.
The results from the first experiment, without assistance, and the second experiment, with assistance-as-needed, are compared in figure 3.6. The magnitude of the maximum error made in each trial with assistance-as-needed (dashed line) and without assistance (solid line) is plotted on the right of figure 3.6. The maximum errors for the first 10 trials without assistance are larger than the maximum errors made when assistance-as-needed was applied: the maximum errors in the first trials of the first experiment exceed 30 pixels, three times larger than the errors with assistance, which remain under 10 pixels. The experiment without assistance forms an internal model faster; it takes only 10 trials to reach an acceptable error under the 45-degree clockwise visual rotation perturbation, whereas the experiment with assistance needed 46 trials. Although learning is faster without assistance than with assistance, the maximum errors are three times larger.
Figure 3.6: Maximum error for each trial and assistance along trials.
The assistance in the second experiment affects not only the maximum error in each trial and the time to learn the new task, but also the tracking error. Figure 3.7 plots the smoothed tracking error versus the average distance in pixels moved along consecutive paths for the experiments with assistance (dashed line) and without assistance (solid line). The tracking error when assistance is provided is smaller than the tracking error without assistance. The tracking error in the experiment with assistance is almost constant over the 46 trials that the subject needs to learn the task under visual perturbation, whereas the tracking error in the first trials of the first experiment decreases as the subject learns how to correct for the visual perturbation.
Figure 3.7: Moving-average tracking error for the experiment without assistance (solid) and with assistance (dashed).
It is important to choose the values of the forgetting factor fR and gain gR correctly to obtain a slow but efficient assistance curve. We performed another experiment, with a 25-year-old right-handed female, changing the values of these parameters to fR = 0.999, gR = 0.00005. Figure 3.8 plots the evolution of the assistance (dashed line) and the tracking error (solid line) as the average distance moved along consecutive paths, in pixels, increases. We stopped the experiment after 20 trials, when the value of the assistance was a 5-degree counterclockwise rotation. We can see that the assistance-as-needed curve decreases faster than in the second experiment. The errors at the beginning of the experiment were less than 5 pixels, but as the assistance was reduced the errors grew to more than 10 pixels, twice the initial values. The subject can learn how to correct the errors generated by the visual perturbation, but since the assistance-as-needed curve decreases too fast, the errors made are large.
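The difference between the two parameter sets can be reproduced with a small simulation of equation (3.1). The assumed number of error samples per trial and the constant 5-pixel error below are our own stand-ins for the recorded data, chosen only to contrast the decay rates.

```python
def assistance_trace(f_R, g_R, errors, G0=45.0):
    """Assistance angle over time under the update law of Eqn. (3.1),
    driven by a given sequence of tracking errors (in pixels)."""
    G, trace = G0, [G0]
    for e in errors:
        G = f_R * G + g_R * e
        trace.append(G)
    return trace

# assume ~1000 error samples per trial, 20 trials, constant 5-pixel error
errors = [5.0] * 20000
slow = assistance_trace(0.9999, 0.000007, errors)   # first parameter set
fast = assistance_trace(0.999, 0.00005, errors)     # parameters of this test
print(slow[-1], fast[-1])
# the slow setting still retains several degrees of assistance, while the
# fast setting has already collapsed to its small steady-state value
```

The decay is essentially geometric in fR, so reducing the forgetting factor from 0.9999 to 0.999 makes the assistance vanish roughly ten times faster, which is consistent with the larger errors observed in this experiment.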
Figure 3.8: Tracking error and assistance recorded in 20 successful trials (fR = 0.999, gR = 0.00005).
3.4 Discussion
We hypothesized that subjects can learn to move in a disturbed visual field without making large errors when assistance-as-needed is applied, and we have shown that this hypothesis holds. It is possible to learn how to steer the mouse cursor under an impairment defined as a clockwise visual rotation without making large errors if we assist only as needed. When learning without assistance, large errors are made. For the task of driving a wheelchair these errors can be dangerous, for example when a disabled child with perceptual or cognitive problems is learning to drive a power wheelchair; in that case large errors can endanger the child if there is no supervision by a rehabilitation therapist.
The learning process with assistance is slow, since large errors are not allowed. The learning of the internal model depends on the error made, as shown in equation (3.2); if large errors are not made, the subject needs more time to learn how to move in the new environment. Some studies show that large errors can enhance the learning process, since internal models are error-based [19] [47].
The good results of this experiment encourage us to continue the study by implementing a force-feedback joystick that simulates the joystick that power wheelchairs commonly use to control the direction and velocity of the chair. The next step would be to replace the mouse with the joystick as the interface between the subject and the computer. The joystick would not only work as an input device, but would also generate forces that bring the chair back to the line when the error exceeds a maximum safe value. The subject would feel the force generated by the force-feedback joystick, learning how to move the joystick to follow the line. These assistive forces would be decreased as the individual learns to drive without making large errors.
Chapter 4
Simulation-training program
4.1 Introduction
In the preliminary experiment we showed that a subject could learn to steer a mouse under a visual-motor perturbation with a limited error when assistance-as-needed is applied. After these good results we developed a more realistic simulation-training program, designed using Simulink and the Virtual Reality Toolbox in Matlab.
The goal of this new program was to provide a more realistic environment and more realistic wheelchair dynamics. The simulation consists of a circuit on the floor, defined by a black line that the subject has to follow. The environment is modeled after a house with doors, corridors, and furniture, and was designed using the program V-Realm Builder 2.0. The subject moves the wheelchair through the virtual environment from the point of view of a person sitting on the wheelchair, using a force-feedback steering wheel.
We use a commercial force-feedback steering wheel instead of a force-feedback joystick, since the low-cost joysticks on the market are too weak for our application. The steering wheel is a conventional game controller that can be found in any game store for a low price (under $100). The assistance is applied as a force on the subject's hands, so that the steering wheel corrects the subject's trajectory, guiding him back to the line when he makes an error. This is analogous to the technique that therapists follow when they teach young children how to drive a power wheelchair. The force on the hands depends on the error made and the patient's skill, so that the assistance-as-needed always encourages the driver to learn to drive.
4.2 Nonholonomic power wheelchair dynamics
In the preliminary experiment we did not take into account the dynamics of the wheelchair. The mouse cursor moved on the computer screen without any restriction: if the subject changed the direction of movement with the mouse, the cursor immediately changed its direction of movement. These dynamics are different from those of a wheelchair.
A nonholonomic power wheelchair can be described with a dynamic model of the system. The power wheelchair has two independent motors, on the right and left wheels, that apply independent torques on the wheel axes. The wheelchair steers right and left by driving the motors at different torques: to turn right, the motor on the left wheel generates more torque than the one on the right wheel, so the left wheel rolls faster and the wheelchair turns to the right. The torque difference determines how fast the chair turns.
Due to the nonholonomic dynamics of the power wheelchair, changes of direction are slower than with a model that treats the wheelchair as a simple point mass without any restriction or constraint. A nonholonomic mobile robot has two important constraints that limit its free movement: the conditions of pure rolling and non-slipping, and the condition that the robot can move only in the direction normal to the driving wheel axis.
The wheelchair geometry used in this study follows the scheme shown in figure 4.1. The fixed frame is defined by XOY, and X_B Y_B defines the body frame that moves with the wheelchair.
Five generalized coordinates describe the configuration of the robot:

q = [ x  y  θ  φ_r  φ_l ]

where x and y are the Cartesian coordinates of C (the center of gravity), θ is the orientation of the robot with respect to the horizontal in the fixed frame, and φ_r and φ_l are the rotation angles of the right and left driving wheels measured with respect to X_B.
The dynamic equations of a nonholonomic power wheelchair are described in equation 4.1 [48].

Figure 4.1: Wheelchair geometry.

M(q) · q̈ + C(q, q̇) · q̇ = E(q) · τ − Aᵀ(q) · λ    (4.1)

where M(q) is an n × n symmetric, positive definite matrix, C(q, q̇) is the centripetal and Coriolis matrix, E(q) is the n × r input transformation matrix, τ is the r × 1 input vector of torques on the wheel axes, A(q) is the m × n Jacobian matrix associated with the nonholonomic constraints, and λ is the m × 1 vector of Lagrange multipliers.
This kind of mobile robot is subject to the conditions of pure rolling and non-slipping, and to the condition that it can only move in the direction normal to the driving wheel axis (X_B). These constraints are defined by the equations:

ẋ · cos θ + ẏ · sin θ + R·θ̇ = r · φ̇_r
ẋ · cos θ + ẏ · sin θ − R·θ̇ = r · φ̇_l
ẏ · cos θ − ẋ · sin θ = 0
From these equations we obtain the Jacobian matrix A(q):

A(q) = [  sin θ   −cos θ    0     0     0
          cos θ    sin θ    R    −r     0
          cos θ    sin θ   −R     0    −r  ]
The kinematic constraint equations can be expressed as:

A(q) · q̇ = 0    (4.2)

Let S(q) be an n × (n − m) full-rank matrix whose columns span the null space of A(q). We can write, for example, the S(q) matrix in the form:

S(q) = [  (r/2)·cos θ    (r/2)·cos θ
          (r/2)·sin θ    (r/2)·sin θ
           r/(2R)        −r/(2R)
           1               0
           0               1  ]
so that:

A(q) · S(q) = 0    (4.3)

Using equations (4.2) and (4.3):

A(q) · q̇ = 0_{3×1}
A(q) · S(q) = 0_{3×2}

Suppose a velocity vector v(t) such that:

A(q) · S(q) · v(t) = 0_{3×2} · v(t) = 0_{3×1}
A(q) · q̇ = 0_{3×1} = A(q) · S(q) · v(t)
q̇ = S(q) · v(t)
q̈ = Ṡ(q) · v(t) + S(q) · v̇(t)    (4.4)

where v(t) = [ φ̇_r  φ̇_l ]ᵀ is the vector of angular velocities of the right and left wheels.
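The null-space property A(q) · S(q) = 0 can be checked numerically. The sketch below builds both matrices in pure Python and verifies that their product vanishes for an arbitrary orientation; the wheel radius r and half-axle length R are assumed example values:

```python
import math

def A(theta, R, r):
    # Jacobian of the nonholonomic constraints (one row per constraint).
    s, c = math.sin(theta), math.cos(theta)
    return [[s, -c, 0.0, 0.0, 0.0],
            [c,  s,  R, -r,  0.0],
            [c,  s, -R,  0.0, -r]]

def S(theta, R, r):
    # Columns span the null space of A, so that q_dot = S(q) * v.
    s, c = math.sin(theta), math.cos(theta)
    return [[r / 2 * c,  r / 2 * c],
            [r / 2 * s,  r / 2 * s],
            [r / (2 * R), -r / (2 * R)],
            [1.0, 0.0],
            [0.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

theta, R, r = 0.7, 0.28, 0.15   # assumed geometry (radians, meters)
AS = matmul(A(theta, R, r), S(theta, R, r))
print(all(abs(e) < 1e-12 for row in AS for e in row))  # True: A(q)S(q) = 0
```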
Substituting equation (4.4) into equation (4.1):

M(q) · (Ṡ(q) · v(t) + S(q) · v̇(t)) + C(q, q̇) · S(q) · v(t) = E(q) · τ − Aᵀ(q) · λ

Multiplying the equation above by Sᵀ(q):

Sᵀ(q) · M(q) · (Ṡ(q) · v(t) + S(q) · v̇(t)) + Sᵀ(q) · C(q, q̇) · S(q) · v(t) = Sᵀ(q) · E(q) · τ − Sᵀ(q) · Aᵀ(q) · λ

Since Sᵀ(q) · Aᵀ(q) = 0 we can rewrite the equation above as:

M̄ · v̇ + C̄ · v = B̄ · τ    (4.5)

where

M̄ = Sᵀ · M · S
C̄ = Sᵀ · (M · Ṡ + C · S)
B̄ = Sᵀ · E
It can be shown that these matrices have the form:

M̄ = [  r²·(m·R² + I)/(4R²) + I_w      r²·(m·R² − I)/(4R²)
        r²·(m·R² − I)/(4R²)            r²·(m·R² + I)/(4R²) + I_w  ]

C̄ = [   0                           r²·m_c·d·θ̇/(2R)
        −r²·m_c·d·θ̇/(2R)             0               ]

B̄ = [ 1   0
       0   1 ]

where τ = [ τ_r  τ_l ]ᵀ are the torques applied on the right and left wheels, m = m_c + 2m_w, I = m_c·d² + 2m_w·R² + I_c + 2I_m, m_c is the mass of the mobile robot platform, m_w is the mass of each wheel with its motor, I_c is the moment of inertia of the platform about the vertical axis through C, I_w is the moment of inertia of each wheel and motor about the wheel axis, and I_m is the moment of inertia of each wheel and motor about the wheel diameter.
Note that equation (4.5) is expressed in terms of the angular velocities and accelerations of the wheels and the independent torques applied on each wheel axis. In order to obtain the wheelchair trajectory and orientation, we use the relationship between the wheel angular velocities and the generalized velocities expressed in equation (4.4).
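The reconstruction of the pose from wheel angular velocities can be sketched by Euler-integrating the kinematics q̇ = S(q) · v; the wheel radius r, half-axle length R, and time step below are assumed values:

```python
import math

def step(x, y, theta, w_r, w_l, r=0.15, R=0.28, dt=0.01):
    """Euler-integrate q_dot = S(q) v for one time step: the wheel
    angular velocities (w_r, w_l) drive the pose (x, y, theta)."""
    v = r * (w_r + w_l) / 2.0            # forward speed along X_B
    omega = r * (w_r - w_l) / (2.0 * R)  # yaw rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Equal wheel speeds -> the chair moves straight along its heading.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):                     # 1 second at dt = 0.01
    x, y, th = step(x, y, th, 2.0, 2.0)
print(round(x, 3), round(y, 3), round(th, 3))  # 0.3 0.0 0.0
```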
4.3 Control interfaces
4.3.1 Control interfaces overview
Two different interface devices were studied in this project. The first was a commercial force-feedback joystick. After some experiments, we realized that the joystick was too weak to accurately move the hand. The second was a commercial force-feedback steering wheel. Current commercial steering wheels can apply torques of up to 2 Nm. Because of this higher force capability, we decided to move to a steering wheel and avoid the problem of a weak interface.
There are more robust joysticks on the market that can generate more force on the user's hand. For example, the Impulse Stick Force Feedback Joystick from Immersion, shown in figure 4.2, can create a maximum of 14.5 N on each axis. These arcade game controllers are designed for tough applications, including location-based entertainment and industrial control. However, the price of the standard configured unit is $3,495, so we decided to use the lower-cost steering wheel to test the concept.
Figure 4.2: Impulse Stick Force Feedback Joystick from Immersion.
Most commercial power wheelchairs are driven using a joystick interface: the subject commands the desired direction and velocity of the wheelchair through the joystick, and the wheelchair interprets the joystick input with a velocity control law.
There are two ways to control the movement with a joystick: position control and velocity control [49] (figure 4.3). With position control the joystick position determines the wheelchair position, while with velocity control the joystick position determines the direction and velocity of the wheelchair: the more the joystick is inclined, the faster the wheelchair moves in the direction of the joystick deflection.
Figure 4.3: Wheelchair response using Position Control (left) and Velocity Control(right).
Velocity control allows the user to remain stationary when the joystick is released to the center position, since the velocity there is zero. As we expect to move large distances, velocity control is more appropriate. All commercial power wheelchairs equipped with a joystick interface use velocity control to navigate, so we selected velocity control to make the simulation program more realistic.
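The velocity-control mapping can be sketched as follows: the joystick deflection sets the commanded velocity directly, so a centered (released) stick commands zero velocity and the chair stays stationary. The scale factor v_max is an assumed parameter:

```python
def velocity_command(jx, jy, v_max=1.0):
    """Velocity control: the joystick deflection (jx, jy), each in
    [-1, 1], commands the direction and magnitude of the velocity;
    a centered stick commands zero velocity."""
    return (v_max * jx, v_max * jy)

print(velocity_command(0.0, 0.0))   # released stick -> (0.0, 0.0)
print(velocity_command(0.5, -0.5))  # half deflection -> (0.5, -0.5)
```

Under position control the same deflection would instead set a target position, which is why a released stick would not stop the chair; this is the distinction figure 4.3 illustrates.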
4.3.2 Steering wheel interface
The steering wheel selected for the experiment was a Logitech MOMO Racing Force feedback wheel. The MOMO Racing wheel is a low-cost commercial USB steering wheel, used mainly in PC car racing games, and it comes with pedals that simulate the brake and accelerator of a car. The MOMO Racing wheel is one of the commercial steering wheels that applies the most torque to the driver's hands; this is the main reason for choosing it over other, cheaper steering wheels on the market. The steering wheel can apply torques of more than 2 Nm [50] and has a span of 270 degrees. The wheel is comfortable, ergonomic, and easy to drive, although the pedals are not very realistic.
Figure 4.4: Logitech MOMO Racing Force feedback wheel and pedal layout.
Figure 4.4 shows the layout of the steering wheel used in this project. We only used the steering wheel, the brake, and the accelerator in the experiment. The other buttons were programmed to stop the simulation, and the stick shifter was disabled.
The driver commands the desired direction of the wheelchair by turning the steering wheel, and the desired velocity using the accelerator and brake. Contrary to the joystick velocity law, the accelerator and brake command acceleration and deceleration, so it is necessary to add an integrator at the input of the system to convert them to a velocity command.
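A sketch of this integrator stage, with assumed pedal gains and speed limits: the pedal positions command acceleration and deceleration, and integrating them over time yields the velocity command passed to the rest of the system:

```python
def integrate_speed(speed, accel_pedal, brake_pedal, dt,
                    a_max=0.8, b_max=1.2, v_max=1.5):
    """The pedals command acceleration/deceleration, so an integrator
    at the system input converts them into a velocity command.
    Gains and limits are assumed illustration values."""
    speed += (a_max * accel_pedal - b_max * brake_pedal) * dt
    return min(max(speed, 0.0), v_max)   # clamp: no reverse, capped speed

v = 0.0
for _ in range(100):                     # hold the accelerator for 1 s
    v = integrate_speed(v, 1.0, 0.0, 0.01)
print(round(v, 3))  # 0.8
```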
The direction and velocity commanded by the user through the joystick or steering wheel are captured using the Simulink block Joystick Input in the Virtual Reality Toolbox. This block reads the axes, buttons, and point-of-view information from the input interface and reflects it in the block outputs. The block also supports force-feedback devices, so the interface applies forces on its axes depending on the force input applied to the Joystick Input block. When working with the joystick, the force input is a vector of two elements, since we can apply forces on the x- and y-axes separately. When working with the steering wheel, the force input is a single element, since force is only applied to the wheel, not the pedals.
Figure 4.5: Joystick Input block in the Virtual Reality Toolbox in Simulink.
4.4 Input control law
The steering of the wheelchair can be referenced to two different frames, an absolute frame and a mobile frame. The absolute frame is defined by XOY in figure 4.6. The mobile frame, X_B Y_B, is centered at the midpoint of the axle that passes through the centers of the wheels, and moves with the vehicle. The orientation of the mobile frame with respect to the absolute frame is defined by the angle θ between X_B and X.
Figure 4.6: Absolute reference and wheelchair reference.
When two frames with relative velocity between them are defined, two different points of view can also be defined. A point of view situated in the fixed frame sees the body frame move along a fixed path. A point of view situated in the body frame sees the fixed frame move with respect to the wheelchair, while the body of the wheelchair remains fixed.
When driving, we must choose between an absolute and a relative point of view. An absolute point of view drives the wheelchair with the viewpoint fixed in the absolute reference (figure 4.7 left). A relative point of view drives with the viewpoint fixed in the mobile reference (figure 4.7 right).
Figure 4.7: Absolute point of view (left) and relative point of view (right).
The point of view is important for processing the movement of the steering wheel. The wheel angle and the longitudinal velocity command are given relative to the absolute reference, but in order to simulate the driving process we are interested in a relative point of view fixed to the wheelchair. The rotation between the fixed and mobile references depends on the angle θ between the wheelchair and the fixed horizontal:

R_j = [ cos θ   −sin θ
        sin θ    cos θ ]

The velocities commanded in the absolute reference by the interface, ẋ_j and ẏ_j, are transformed to the relative frame using the rotation matrix R_j:

[ ẋ_jde ]          [ ẋ_j ]
[ ẏ_jde ]  = R_j · [ ẏ_j ]
The desired velocity relative to the body frame commanded through the steering wheel is used to control the position of the power wheelchair from the realistic point of view of a person seated at the center of the body frame.
We created a feedback controller to control the wheelchair navigation. We compare the actual velocity direction and magnitude of the wheelchair with the desired velocity commanded through the steering wheel, in order to make the velocity error as small as possible. The variables used in this controller are not only the longitudinal and transverse velocities of the vehicle, ẋ and ẏ, but also the angular velocity of the vehicle θ̇. We define the control law as:

F_x = −B(ẋ − ẋ_jde)
F_y = −B(ẏ − ẏ_jde)
F_θ = −B(θ̇ − θ̇_de)
    (4.6)
We include the angular velocity of the vehicle in the control law because we are not interested in fast changes of orientation, which generate oscillations and shaking behavior. Although the wheelchair can follow the trajectory commanded through the steering wheel using only the longitudinal and transverse velocity terms, it then oscillates along the path, showing unnatural and uncomfortable behavior. To avoid oscillations as the wheelchair navigates, we set the desired angular velocity to zero (θ̇_de = 0). In this way we obtain accurate and smooth movement of the power wheelchair.
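The control law of equation 4.6 can be sketched directly; the gain B is an assumed value, and the desired angular velocity is fixed at zero as described above:

```python
def control_forces(xdot, ydot, thetadot, xjde, yjde, B=5.0):
    """Viscous velocity feedback (eq. 4.6): forces proportional to
    the velocity error; the desired angular velocity theta_de is
    zero to damp oscillations. The gain B is an assumed value."""
    Fx = -B * (xdot - xjde)
    Fy = -B * (ydot - yjde)
    Ftheta = -B * (thetadot - 0.0)   # theta_dot_de = 0
    return Fx, Fy, Ftheta

# Matching the commanded velocity but yawing at 0.2 rad/s:
# only the damping torque is nonzero.
print(control_forces(1.0, 0.0, 0.2, 1.0, 0.0))
```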
Note that the control law above is expressed in forces, whereas the dynamics of equation 4.8 are expressed in torques. We need a relationship between forces and torques to be able to control the wheelchair movement.

F_x = −B(ẋ − ẋ_jde)
F_y = −B(ẏ − ẏ_jde)
F_θ = −B(θ̇ − θ̇_de)
    (4.7)

M̄ · v̇ + C̄ · v = B̄ · τ    (4.8)
The relationship between the derivatives of the generalized coordinates and the angular velocities of the wheels is defined by S(q):

q̇ = S(q) · v(t):

[ ẋ   ]     [ (r/2)·cos θ    (r/2)·cos θ ]
[ ẏ   ]     [ (r/2)·sin θ    (r/2)·sin θ ]   [ φ̇_r ]
[ θ̇   ]  =  [  r/(2R)        −r/(2R)     ] · [ φ̇_l ]
[ φ̇_r ]     [  1               0         ]
[ φ̇_l ]     [  0               1         ]
Since the upper 2 × 2 block of S(q) is singular, we cannot obtain a direct relation between [ ẋ  ẏ ] and [ φ̇_r  φ̇_l ], which would give us a relationship between force and torque.
We therefore define a new relationship between the forces along the x, y, and z axes and the torques applied on the wheel axes, of the form:

τ = α · T · R_f · [ F_x  F_y ]ᵀ + γ · [ F_θ/R   −F_θ/R ]ᵀ    (4.9)

where F_x is the force in the x direction, F_y is the force in the y direction, F_θ is the torque about the z axis as defined in equation 4.7, α and γ are constants, and R is the distance from the center of the body frame to each wheel.
The matrices T and R_f are defined as follows:

T = [ a    b
      b   −a ]

R_f = [  cos θ    sin θ
        −sin θ    cos θ ]
The matrix R_f projects the horizontal force F_x and the vertical force F_y into the body frame. The matrix T generates a difference between the torques applied to each wheel so that the wheelchair can steer. For example, if the total control force makes the robot turn right, the left wheel has to roll faster than the right wheel; in other words, the torque applied on the left wheel has to be larger than the torque applied on the right wheel. Conversely, if the total force makes the wheelchair turn left, the torque on the right wheel has to be larger than the torque on the left wheel. The constants a and b in the T matrix tune the sensitivity of the wheelchair along the X_B and Y_B axes: if the ratio a/b is greater than one, the wheelchair is less sensitive in turning than in going straight; if a/b is less than one, it is more sensitive in turning than in going straight.
As equation 4.9 shows, the wheel torques also depend on the z-axis torque F_θ defined in equation 4.7. The torque F_θ is transformed into forces at the centers of the wheels using the relationship Force = Torque/distance. When the applied z-torque F_θ is positive, the torque on the right wheel is increased by a magnitude that depends on F_θ, and the torque on the left wheel is reduced by the same magnitude.
Once we obtain the control torques, we substitute them into the equation of motion 4.8.
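The force-to-torque map of equation 4.9 can be sketched as below; the values of a, b, α, γ, and R are assumed for illustration. A pure z-torque input produces the antisymmetric wheel-torque pair described above:

```python
import math

def wheel_torques(Fx, Fy, Ftheta, theta, a=1.0, b=0.5,
                  alpha=1.0, gamma=1.0, R=0.28):
    """Force-to-torque map of eq. 4.9: rotate (Fx, Fy) into the body
    frame with R_f, split it across the wheels through T, and add the
    yaw torque as an antisymmetric pair +/- Ftheta/R.
    The parameter values are assumed for illustration."""
    s, c = math.sin(theta), math.cos(theta)
    fxb = c * Fx + s * Fy            # R_f projects into the body frame
    fyb = -s * Fx + c * Fy
    tau_r = alpha * (a * fxb + b * fyb) + gamma * Ftheta / R
    tau_l = alpha * (b * fxb - a * fyb) - gamma * Ftheta / R
    return tau_r, tau_l

# A pure yaw torque yields equal and opposite wheel torques.
tr, tl = wheel_torques(0.0, 0.0, 0.28, theta=0.0)
print(round(tr, 3), round(tl, 3))  # 1.0 -1.0
```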
4.5 Virtual Reality World
4.5.1 Virtual Reality overview
We created a virtual reality world in which to run our simulation. The world is presented as a house with furnished rooms, doors, and corridors, where the wheelchair has to navigate following a black line on the floor using a CCD camera. The black line is 0.5 meters wide and is composed of straight segments. Corners are also formed by straight segments, with a maximum angle of 30 degrees between consecutive segments, to avoid the wheelchair losing the line at high speeds (figure 4.8). The acquisition and processing of images from a CCD is slow, so the changes of line direction cannot be large when running at high speed; otherwise the wheelchair would lose the path it has to follow.
Figure 4.8: Straight lines define the curves in the black line.
The black line is defined as a series of coordinates of the important points that form the line; important points are, for example, the initial or final point of a line segment. The information for these points is saved in a *.dat file, and each circuit has its own *.dat file storing the important points of its path. Figure 4.9 shows the codification of the black line in a *.dat file. Each row corresponds to a different point: the first column is the index of the point, the second column is its x coordinate, the third column is its y coordinate, and the last column is the direction of the line that follows that point.
Figure 4.9: Codification of the black line in a *.dat file.
We need a more accurate description of the black line in order to calculate errors and distances driven. Using the *.dat file we generate a new matrix, named lineb, by interpolating the coordinates and directions of the important points of the line. The result is a large matrix with three columns: the x coordinate of each point, its y coordinate, and the direction of the line that goes through the point.
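A sketch of this interpolation step, assuming the important points have already been read from the *.dat file into (x, y, direction) triples; the interpolation spacing is an assumed parameter:

```python
import math

def build_lineb(points, step=0.05):
    """Linearly interpolate the important points (x, y, direction)
    of the path into a dense three-column list, like the lineb
    matrix. Each interpolated point keeps the direction of the
    segment that follows its starting important point."""
    lineb = []
    for (x0, y0, d0), (x1, y1, _) in zip(points, points[1:]):
        n = max(1, round(math.hypot(x1 - x0, y1 - y0) / step))
        for k in range(n):
            t = k / n
            lineb.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), d0))
    lineb.append(points[-1])
    return lineb

# Two 1-meter segments of a hypothetical circuit (direction in rad).
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, math.pi / 2)]
lineb = build_lineb(pts)
print(len(lineb))  # 41 rows: 20 per segment plus the final point
```

Note that the interpolated points are evenly spaced only within each segment, which is why, as discussed in section 5.2, look-ahead distances have to be recomputed for each nearest point.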
We created the virtual reality world using the program V-Realm Builder 2.0. With this program we designed the graphics of the environment and defined the different points of view of the driver. The main point of view of the simulation is that of a person seated on the wheelchair, and it moves as the wheelchair moves. In order to link the virtual reality world to our model we used the Virtual Reality Toolbox in Simulink.
4.5.2 V-Realm Builder 2.0
The virtual reality world was created using the program V-Realm Builder 2.0. This program is a three-dimensional package for the design of 3D objects and virtual worlds. It is an easy tool, but it cannot create objects as realistic and complex as other products on the market: V-Realm Builder 2.0 was created to design low-detail objects that minimize file size, so that virtual worlds can be run over the Internet in real time. Figure 4.10 shows an example of a virtual world designed in V-Realm Builder 2.0.
Figure 4.10: Example of a virtual world designed in V-Realm Builder 2.0.
To create a virtual world, the first step is to create the objects that will be mobile in the simulation. In this case we designed a power wheelchair in Solid Works and exported the object to V-Realm Builder. Because of the limitations in creating complex objects in V-Realm Builder, it is better to design realistic and complex objects in a more powerful design program such as Solid Works. Solid Works has the advantage that designed objects can be saved directly as *.WRL files (VRML, Virtual Reality Modeling Language), which can be imported into V-Realm Builder.
The wheelchair designed in Solid Works (figure 4.11) was separated into three blocks that move in different ways in the virtual world. The first block consists of the body of the power wheelchair, the seat, and the wheel motors; the right wheel (tire and rim) forms the second block, and the left wheel the third. For these three blocks to move independently, they have to be defined separately in V-Realm Builder with different names.
Figure 4.11: Power Wheelchair modeled in Solid Works.
The movement seen in the virtual reality is that of the wheelchair body together with the right and left wheels, which rotate independently with angular velocities relative to the body frame. When turning to the right, the left wheel has a larger angular velocity with respect to the body frame than the right wheel.
Once the mobile parts are defined, the next step is to define the fixed environment. In our simulation we created a house-like environment with rooms, corridors, doors, and furniture. We wanted to create a typical indoor environment, where the floor is flat and the wheelchair driver has to go through narrow doors and corridors to move from one room to the next. A black line on the floor marks the path the driver has to follow. The black line is situated on a well-defined road 2 meters wide. The road serves two goals: to define the safety zone for the wheelchair, and to create a sensation of movement, since it is textured with a parquet floor pattern designed in Solid Works.
It is possible to define different points of view when running the simulation in V-Realm Builder 2.0; we can choose an absolute point of view fixed in the absolute frame, or a point of view that moves with the wheelchair. In our case, we defined a mobile point of view fixed to the wheelchair body. The point of view is situated 1.5 meters up the z-axis of the mobile frame and makes an angle of 45 degrees with the vertical (figure 4.12). The goal of the mobile point of view is to simulate the usual point of view of a person seated in the wheelchair who is looking at the black line on the floor in order to drive accurately. The driver fixes the sight on a point ahead of the current position, so the visibility of points farther away from this look-ahead point is reduced.
Figure 4.12: Mobile point of view.
4.5.3 Virtual Reality toolbox in Simulink
Simulink provides a viewer as the default method for viewing virtual worlds. The Virtual Reality Toolbox in Simulink includes a block named Virtual Reality Sink (VR Sink) that writes data from the Simulink model to the virtual world created with V-Realm Builder (figure 4.13).
Figure 4.13: Virtual Reality Sink block in Simulink.
The inputs of the VR Sink block are the movement values that we want to apply to the virtual world. In our case we are interested in seeing how the wheelchair moves through the virtual environment, so we need to supply the position and orientation of the wheelchair body and of the right and left wheels. This requires six inputs to the VR Sink: one for the wheelchair position, one for the wheelchair orientation, and, in the same way, a pair of inputs for the position and orientation of the right wheel, and a pair for the left wheel.
From the equations of motion defined in section 4.2 we know the position and orientation of the wheelchair body and the rotation angles of the wheels with respect to the longitudinal axis of the body frame. To obtain the absolute positions and orientations of the wheels we must transform this information.
The absolute positions of the wheelchair wheels can be expressed as:

x_wl = x_body + R·sin θ
y_wl = y_body − R·cos θ
x_wr = x_body − R·sin θ
y_wr = y_body + R·cos θ

where x_body and y_body are the X and Y coordinates of the wheelchair body, and R is the distance from the center of the body frame to each wheel.
The absolute orientation of each wheel must be supplied as a rotation axis in vector form together with a rotation angle, since this is how VR Sink represents rotations. As each wheel undergoes two rotations, the rotation θ of the wheelchair and the rotation φ_r or φ_l of the wheel about X_B, we have to express the two combined rotations as a single axis and angle.
We define the θ, φ_r, and φ_l rotation matrices as:

R_θ  = [ cos θ    0   −sin θ
         0        1    0
         sin θ    0    cos θ ]

R_φl = [ cos φ_l   −sin φ_l   0
         sin φ_l    cos φ_l   0
         0          0         1 ]

R_φr = [ cos φ_r   −sin φ_r   0
         sin φ_r    cos φ_r   0
         0          0         1 ]
The rotation matrix of the left wheel, composed of R_θ and R_φl, is R_l = R_θ · R_φl. Since R_l is an orthogonal rotation matrix, we can obtain the rotation angle from the trace property:

Tr(R_l) = 2·cos(w_l) + 1  →  w_l = arccos((Tr(R_l) − 1) / 2)

If the rotation angle is zero, the rotation can be about any vector. If it is larger than zero, we can use the following property to obtain the rotation axis:

V_lx = (R_l(3,2) − R_l(2,3)) / (2·sin w_l)
V_ly = (R_l(1,3) − R_l(3,1)) / (2·sin w_l)
V_lz = (R_l(2,1) − R_l(1,2)) / (2·sin w_l)

We proceed in the same way for the right wheel, using the rotation angle φ_r.
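The axis-angle extraction can be sketched as below, using the trace identity and the antisymmetric-part formulas above (with matrix indices shifted to zero-based):

```python
import math

def axis_angle(Rm):
    """Recover the rotation angle and axis (the form VR Sink expects)
    from a 3x3 orthogonal rotation matrix via Tr(R) = 2 cos(w) + 1."""
    tr = Rm[0][0] + Rm[1][1] + Rm[2][2]
    w = math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
    if w == 0.0:
        return 0.0, (1.0, 0.0, 0.0)     # any axis works for w = 0
    s = 2.0 * math.sin(w)
    return w, ((Rm[2][1] - Rm[1][2]) / s,
               (Rm[0][2] - Rm[2][0]) / s,
               (Rm[1][0] - Rm[0][1]) / s)

# Rotation of pi/3 about the z-axis: the axis should be (0, 0, 1).
c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
w, v = axis_angle([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(round(w, 3))  # 1.047
```

This sketch does not handle the degenerate case w = π, where sin(w) vanishes and the axis must be taken from the diagonal instead; the thesis's wheel rotations stay away from that case only if angles are kept in a safe range.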
Chapter 5
Assist-as-needed controller
5.1 Introduction
The goal of the simulation program is to apply assistance as needed so that the driver steers the wheelchair smoothly along a black line on the floor without making large errors. The assistance is the force that the force-feedback steering wheel applies on the driver's hands.
The assistance-as-needed must satisfy two important premises. First, the assistance has to be reduced as the driver learns to steer without making large errors. Second, the assistance has to be proportional to the error: as the assistance is reduced the driver is freer to drive with larger errors, but there will always be a greater force for greater errors, keeping the power wheelchair safe.
The force that the steering wheel applies on the driver's hands depends on the distance error, the direction error, and the derivative of the wheelchair orientation angle. The distance error is defined as the distance between the center of the wheelchair and the black line. The direction error is defined as the angle between the wheelchair direction and the direction of the black line. The derivative of the wheelchair orientation angle is defined as the rate of change of the orientation of the wheelchair.
The performance variables of the system are the direction and distance errors. To obtain the distance error, we search the black line matrix lineb for the point nearest to the current position of the wheelchair. The distance error is the magnitude of the projection of the vector from that nearest point to the center of the wheelchair onto the direction perpendicular to the black line; the third column of lineb supplies the line direction used in this projection. The direction error is the difference between the current direction of the wheelchair and the direction of the line through the nearest point of the black line.
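A sketch of this error computation, assuming lineb is a list of (x, y, direction) rows as described above:

```python
import math

def tracking_errors(x, y, heading, lineb):
    """Find the nearest point of the interpolated line, then return
    the distance error (position offset projected onto the line's
    normal) and the direction error (heading minus line direction)."""
    px, py, d = min(lineb, key=lambda p: math.hypot(p[0] - x, p[1] - y))
    # Project the offset onto the unit normal of the line direction d.
    dist_err = abs(-(x - px) * math.sin(d) + (y - py) * math.cos(d))
    dir_err = heading - d
    return dist_err, dir_err

# Horizontal line along y = 0; the chair sits 0.4 m above it,
# heading 0.1 rad off the line direction.
lineb = [(0.1 * i, 0.0, 0.0) for i in range(50)]
de, he = tracking_errors(2.0, 0.4, 0.1, lineb)
print(round(de, 3), round(he, 3))  # 0.4 0.1
```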
5.2 Look-ahead performance variables
In section 4.5.2 we saw the importance of the point of view. This idea is also important when designing the assist-as-needed controller. Because the driver is steering a nonholonomic vehicle, she/he has to start turning before the center of mass reaches a point on the track where the direction changes; otherwise the direction and distance errors become large. The driving action therefore depends on what we see ahead of us, rather than on the nearest point of the black line. Note also that the look-ahead distance used when driving depends on the speed of the vehicle: less distance is needed when the velocity is slow, and we have to turn well in advance of the curve when the velocity is fast.
We translate this look-ahead idea to the controller. Instead of the usual feedback on the distance and direction errors with respect to the nearest point on the line, we use the distance and direction errors with respect to a point a determined distance ahead of the nearest point on the black line. In figure 5.1, the wheelchair uses the position of the point Fa and the direction of the line through Fa instead of the point Fn; the velocity-dependent distance ahead of the nearest point is denoted d.
The matrix lineb is created by interpolating the principal points of the path, and the interpolation does not guarantee equal spacing between consecutive points. The point a given distance ahead of the nearest point therefore has to be computed separately for each nearest point.
We use the look-ahead distance error, the look-ahead direction error, and the derivative of the wheelchair orientation angle in our controller. The look-ahead distance error is defined as the Euclidean distance between the point a determined distance ahead of the nearest point to the wheelchair on the black line and the actual position of the wheelchair. The look-ahead direction error is the difference between the actual orientation of the wheelchair and the direction of the line that goes through the point a determined distance ahead of the nearest point on the black line. The derivative of the wheelchair orientation angle is defined as the rate of change of the orientation angle of the wheelchair at the actual position.

Figure 5.1: Representation of the effect of the point of view.
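The three quantities above can be sketched in code. The following fragment is an illustrative sketch only (not the thesis implementation), assuming the black line is stored as an ordered list of interpolated points and that the look-ahead distance d grows linearly with speed through a hypothetical gain k_look:

```python
import math

def look_ahead_errors(path, pos, theta, v, k_look=0.5):
    """Illustrative sketch: compute the look-ahead distance and
    direction errors used by the controller.

    path   : ordered list of (x, y) points interpolated along the black line
    pos    : (x, y) position of the wheelchair center
    theta  : wheelchair orientation in radians
    v      : current speed; the look-ahead distance d grows with it
    k_look : hypothetical gain mapping speed to look-ahead distance
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # nearest point on the black line to the wheelchair
    i_near = min(range(len(path)), key=lambda i: dist(path[i], pos))

    # walk a speed-dependent distance d ahead along the polyline;
    # segment lengths need not be equal, so accumulate arc length
    d = k_look * v
    i, acc = i_near, 0.0
    while i < len(path) - 1 and acc < d:
        acc += dist(path[i], path[i + 1])
        i += 1
    ahead = path[i]

    # look-ahead distance error: Euclidean distance to the ahead point
    e_dis = dist(ahead, pos)

    # look-ahead direction error: wheelchair heading minus the direction
    # of the line through the ahead point, wrapped to (-pi, pi]
    p0, p1 = path[max(i - 1, 0)], path[min(i + 1, len(path) - 1)]
    line_dir = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    e_ang = math.atan2(math.sin(theta - line_dir), math.cos(theta - line_dir))
    return e_dis, e_ang
```

Because the polyline is walked by accumulated arc length, the sketch handles the unequal point spacing produced by the interpolation method described above.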
Using the look-ahead error values in our controller not only reduces the error made when driving, but also compensates for the delay of the steering wheel. The response of the steering wheel is slower than the response of a usual joystick, since in order to apply a 90-degree turn to the wheelchair it is necessary to turn the wheel 135 degrees. This is not typical for a car, where the ratio between the angle turned by the steering wheel and the angle turned by the car is bigger.
The look-ahead distance error used in the controller is dependent on the orientation of the wheelchair: the assistance related to the look-ahead distance error depends on the angle between the look-ahead distance error vector and the wheelchair orientation.
In figure 5.2 three different possibilities of wheelchair orientation are represented. In all three configurations the look-ahead distance error is invariant. In the figure on the left (A), the angle between the distance error vector and the wheelchair is 90 degrees, and the assistance applied to the driver's hands is large, since the direction of the wheelchair is perpendicular to the direction needed to come back to the line. In the figure at center (B), the angle between the distance error vector and the wheelchair is θ, smaller than in the figure on the left, and the assistance applied to the driver's hands is smaller than in the first case. In figure 5.2 on the right (C), the angle between the distance error vector and the wheelchair is 0: there is no assistance, since the driver is already heading in the direction necessary to correct the error.
Figure 5.2: A: Angle between distance error and wheelchair is 90 degrees; the assistance over the driver's hands is large. B: Angle between distance error and wheelchair is less than 90 degrees; the assistance over the driver's hands is smaller. C: Angle between distance error and wheelchair is 0; there is no assistance over the driver's hands.
The assistance caused by the look-ahead distance error is therefore dependent not only on the magnitude of the error, but also on the angle between the direction of the wheelchair and the direction of the look-ahead distance error vector. The assistance related to the look-ahead distance error depends on the projection of the look-ahead distance error vector on the y-axis of the body frame. The rotation matrix depends on the orientation angle of the wheelchair θ and has the form:
$$R_{ass} = \begin{bmatrix} \sin\theta & \cos\theta \\ -\cos\theta & \sin\theta \end{bmatrix}$$
We are only interested in the assistance in the YB direction (turning right or left), since the assistance is applied only through the steering wheel. The accelerator and brake pedals are not affected by the assistance; their use is up to the driver. We defined a safe span of velocities, so that the driver can increase or reduce the speed of the power wheelchair using the brake and accelerator, but can never go faster than a safe maximum.

$$\begin{bmatrix} e_{dis,Y_B} \\ e_{dis,X_B} \end{bmatrix} = \begin{bmatrix} \sin\theta & \cos\theta \\ -\cos\theta & \sin\theta \end{bmatrix} \cdot \begin{bmatrix} e_{dis,x} \\ e_{dis,y} \end{bmatrix}$$
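Written out as a sketch, the body-frame projection is a direct transcription of the rotation matrix above (the helper name is hypothetical):

```python
import math

def distance_error_body(e_dis_x, e_dis_y, theta):
    """Sketch: rotate the world-frame look-ahead distance error vector
    into the body frame using the rotation matrix from the text, and
    return the (Y_B, X_B) components. Only the Y_B component (turning
    right or left) is used to drive the steering wheel.
    """
    s, c = math.sin(theta), math.cos(theta)
    e_yb = s * e_dis_x + c * e_dis_y   # first row:  [ sin(theta)  cos(theta) ]
    e_xb = -c * e_dis_x + s * e_dis_y  # second row: [ -cos(theta) sin(theta) ]
    return e_yb, e_xb
```

The X_B component is computed for completeness but discarded by the controller, since the steering wheel cannot act along the wheelchair's direction of travel.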
The assistance is also dependent on the look-ahead direction error, defined as the difference between the actual orientation of the wheelchair and the direction of the line that goes through the point a determined distance ahead of the nearest point on the black line. This dependency is very important for turning accurately in the curves. The path is composed of straight lines, and a curve is defined as the junction of two consecutive lines with an angle of at most 30 degrees between them. Because of the nonholonomic dynamics of the vehicle, it is not possible to turn the wheelchair without error unless we stop the wheelchair at the junction of two consecutive lines. In order to reduce this error the driver has to start turning the wheelchair before the center of the vehicle reaches the curve, as seen in figure 5.3. This anticipatory reaction is driven by the look-ahead direction error in the controller, and is based on the human reaction when driving. One of the most important goals of the described controller is to obtain a system that behaves as humanly as possible, in order to teach an appropriate driving technique.
Figure 5.3: Path to follow (dashed) and path taken (solid) by the wheelchair when the wheelchair moves autonomously without interaction with the driver.
The controller also depends on the derivative of the orientation angle of the vehicle, defined as the rate of change in the direction of the wheelchair. The function of this derivative term is to reduce the oscillations of the system: we observed that a controller using only the look-ahead distance error and the look-ahead direction error could not suppress the oscillation of the system. The derivative of the orientation angle introduces impedance in the steering wheel and makes the movement smoother.

The final controller used in this work has the form:

$$F_{ass} = K_d \cdot e_{dis,Y_B} + K_a \cdot e_{ang} + B_a \cdot \dot{\theta}$$
The shape of the control signal along a path is shown in figure 5.4. In this trial the controller works autonomously, without interaction with the driver.
Figure 5.4: The shape of the control signal in a trial. The solid line represents the force applied by the steering wheel on the driver's hands. The derivative of the orientation angle is shown as a dash-dot line, the look-ahead distance error as a dotted line, and the look-ahead direction error as a dashed line.
We can see in figure 5.4 that the derivative of the orientation angle of the wheelchair (dash-dot line) has the opposite sign to the look-ahead distance error and the look-ahead direction error. This difference in sign makes the force on the driver's hands decrease when making a fast change of direction. It is useful for reducing the oscillations of the vehicle, which are critical mainly after curves or after correcting large errors, but it affects the system negatively when turning in curves. Although the look-ahead distance error and the look-ahead direction error are hardly ever zero at the same time in curves, the derivative of the orientation angle can make the resulting force zero or even reverse its direction. When this happens, the driver feels an undesirable peak of force that makes him turn in the opposite direction. In order to avoid this non-human behavior we introduce saturation on the orientation derivative signal, which assigns a maximum value to the derivative component of the controller.
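The saturated control law can be sketched as follows. This is an illustrative fragment, not the thesis code; the saturation limit dtheta_max is a hypothetical example value:

```python
def assistance_force(e_dis_yb, e_ang, theta_dot, Kd, Ka, Ba, dtheta_max=1.0):
    """Sketch of the final control law with the saturation described
    above: F_ass = Kd*e_disYB + Ka*e_ang + Ba*theta_dot, where the
    derivative-of-orientation term is clipped to a maximum magnitude
    (dtheta_max, a hypothetical limit) so that it cannot produce the
    undesirable reversed force peak in curves.
    """
    theta_dot_sat = max(-dtheta_max, min(dtheta_max, theta_dot))
    return Kd * e_dis_yb + Ka * e_ang + Ba * theta_dot_sat
```

Clipping only the derivative term preserves its damping effect on oscillations while preventing it from dominating the error terms during fast direction changes.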
When driving autonomously, most of the large errors are made in curves. These errors cannot be canceled, since they originate in the nonholonomic dynamics of the system; it is only possible to reduce them to a minimum value that we consider acceptable. Figure 5.5 shows the two performance variables of the system, the direction error and the distance error, in a trial when driving autonomously. The maximum errors of the system are of the order of 0.3 radians for the direction error and 0.1 meters for the distance error, and these maxima always occur in curves. The errors made on straight lines are on the order of 100 times smaller. The errors made on straight lines could be canceled by increasing the coefficient of the orientation derivative in the controller, since this increases the impedance of the steering wheel; but, as explained before, increasing the derivative coefficient of the controller makes the system behave less humanly in curves.
Figure 5.5: Left: Direction error versus distance in a trial. Right: Distance error versus distance in a trial.

Using a look-ahead controller reduces the errors made by the system when driving autonomously. Figure 5.6 shows the look-ahead direction error and the real direction error in a trial in which the steering wheel moves autonomously. The look-ahead direction errors are roughly twice as large as the real direction errors made. A recent study [27] shows that with a look-ahead controller the error is reduced not only when the mobile robot moves autonomously, but also when the driver uses a haptic interface to navigate in a simulation program. The research group investigated how an assistive haptic interface, similar to a steering wheel, helps the driver reduce the errors made when driving under two different kinds of controllers, a look-ahead controller and a usual feedback controller. They demonstrated that an assistive haptic interface driven by a look-ahead controller drastically reduces the errors made compared to a usual feedback controller, consistent with the present results.
Figure 5.6: Look-ahead direction error (dashed line) and real direction error (solid line) versus distance.
5.3 Assistance-as-needed. Reducing the controller
coefficients as needed.
The assistance is defined as a force that the steering wheel applies on the subject's hands. Clearly, the larger the error, the larger the force; but note that for equal errors, if the PD coefficients (Kd, Ka, and Ba) are larger, the force is also larger. We want a safe controller that helps the subject learn how to drive a wheelchair without making large errors. In order to control the force applied by the steering wheel, we vary the values of the PD constants. In this way, as the driver learns how to steer, the controller reduces the values of the PD constants, allowing more freedom (more error); the steering wheel then applies less force for the same error value.
An important requirement of the controller is safety. By varying the PD coefficients we avoid dangerously large errors, since the force is directly dependent on the error. The controller gives more freedom around the black line as the driver learns, but always limits large errors that could be dangerous or discouraging to the subject.
We base the value of the assistance on the model of robot assistance as an adaptive algorithm described in [18]. The assistance algorithm is defined as:

$$G_{i+1} = f_R \cdot G_i + g_R \cdot |x_i - x_d| \qquad (5.1)$$

where G is the value of the PD constant in the assistance controller, fR is the robot forgetting factor, gR is the robot learning gain, xi is the position or direction in the ith iteration, and xd is the desired position or direction defined by the path. The assistive algorithm is based on the error and the previous assistance value. Note that fR must be less than 1 in order to decrease the value of the assistance as more of the path is followed.
The assistive algorithm is similar to the neural learning algorithm identified in [19], where the output force from muscular activity on the (i+1)th movement is proportional to the previous output and the error made in the previous movement:

$$u_{i+1} = f_H \cdot u_i - g_H \cdot (x_i - x_d) \qquad (5.2)$$
where u is the value of the force generated by the motor system, fH is the forgetting factor (fH < 1), gH is the learning gain, xi is the position in the ith iteration, and xd is the desired position defined by the path. Note that under this definition of the internal model, the larger the error made by the subject, the larger the response from the nervous system.
Each controller coefficient is calculated through the assistance algorithm using the error of the performance variable (direction error or distance error) related to that coefficient. The coefficient of the look-ahead distance error is updated using the distance error, defined as the distance between the center of the wheelchair and the closest point to the wheelchair on the black line. The coefficients of the look-ahead direction error and of the derivative of the orientation angle are updated using the direction error, defined as the angle between the wheelchair direction and the direction of the line at the closest point on the black line. Note that in the assistance algorithm we work with real errors instead of look-ahead errors, since we want the assistance to be related to the real performance of the driver at each moment.
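The per-coefficient update of equation (5.1) can be sketched as follows. The numeric values of f_R and g_R here are hypothetical examples, not the ones used in the thesis:

```python
def update_gains(gains, e_dist, e_dir, f_R=0.99, g_R=0.1):
    """Sketch of the assist-as-needed gain update of Eq. (5.1),
    G_{i+1} = f_R * G_i + g_R * |x_i - x_d|, applied per coefficient.
    The distance error drives Kd; the direction error drives Ka and
    Ba. f_R must be less than 1 so that the assistance decays while
    the driver performs well; g_R raises it again when errors grow.

    gains : dict with keys 'Kd', 'Ka', 'Ba'
    """
    return {
        "Kd": f_R * gains["Kd"] + g_R * abs(e_dist),
        "Ka": f_R * gains["Ka"] + g_R * abs(e_dir),
        "Ba": f_R * gains["Ba"] + g_R * abs(e_dir),
    }
```

With zero error the gains decay geometrically toward zero; with a persistent error each gain settles at roughly g_R·|e|/(1−f_R), which is the built-in floor discussed in the next section.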
5.4 Natural and machine variability
As the subject learns how to drive the wheelchair without making large errors, the assistance is reduced by the action of the forgetting factor. We have seen that there is always an error made in curves caused by the nonholonomic dynamics of the wheelchair. These errors are also present when the wheelchair is driven autonomously, and they affect the operation of the assistance algorithm, since the system considers them errors made by the driver. Even if the driver drove perfectly, without any error, the assistance would not decrease, because the system would still detect the error caused by the nonholonomic dynamics of the wheelchair.
Figure 5.7: Deadband example.
In order to separate the errors made by the driver from the errors made by the system itself, we create a dead zone at each curve. We use the derivative of the look-ahead direction error to detect when the wheelchair is negotiating a curve; we could not use distance or time for this purpose, since they are different in each trial. The look-ahead direction error changes drastically when a curve comes into view, and the steering movements to correct the error are likewise quick changes in the direction of the vehicle. These quick corrections of the direction make the value of the derivative of the look-ahead direction error larger than a threshold. Exceeding the threshold indicates that there is a corner, and that it is necessary to apply a deadband as shown in figure 5.7 to remove the intrinsic error generated by the nonholonomic dynamics of the vehicle from the total error.
The following figures show how the derivative of the look-ahead direction error (thin solid line) exceeds a threshold value when approaching a corner. When the derivative error is larger than this value we subtract an empirically determined value from the direction error and the position error. The empirically determined value is a constant error magnitude that we consider normal and associated with the nonholonomic dynamics of the wheelchair. It is different for the direction error and the distance error, and it is used to create a dead zone in the error signal. If the value of the error in the curves is larger than the error associated with the dynamics of the vehicle, the filtered error is the real error minus the associated error.
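The dead zone can be sketched as a small filter over error magnitudes. This is an illustrative simplification; the threshold and natural-error values below are hypothetical:

```python
def deadband_filter(e_raw, d_lookahead_dir, threshold, e_natural):
    """Sketch of the curve dead zone (over error magnitudes). When
    the derivative of the look-ahead direction error exceeds the
    threshold, a curve is being negotiated, and the empirically
    determined natural error e_natural (attributed to the
    nonholonomic dynamics) is subtracted from the raw error; the
    result is clamped at zero so the filtered error is never
    negative. Outside curves the error passes through unchanged.
    """
    if abs(d_lookahead_dir) > threshold:
        return max(abs(e_raw) - e_natural, 0.0)
    return abs(e_raw)
```

Applied separately to the direction and distance errors (each with its own e_natural), this leaves driver-made errors intact on straight lines while removing the unavoidable curve error from the signal fed to the assistance algorithm.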
In figure 5.8 the real direction error (dashed line) and the deadband-filtered direction error (thick solid line) are represented for a simple trial with driver and steering wheel interaction. The large direction errors made in curves because of the dynamics of the wheelchair are removed, and only the errors created by the driver appear in the filtered error. In this example we forced some errors to show that the filter does not eliminate the errors created by the driver. Note that on straight lines the total error and the filtered error coincide.
In figure 5.9 the real distance error (dashed line) and the filtered distance error (thick solid line) are represented. Most of the errors made in curves because of the dynamics of the vehicle are filtered out. The errors made on straight lines, although larger than the maximum error associated with the nonholonomic dynamics of the wheelchair, coincide with the filtered error signal.
The deadband-filtered direction and distance errors are used in the assistance algorithm instead of the real errors in order to avoid the undesirable effect of the nonholonomic-dynamics errors. They are also used as the performance variables of the system.
Figure 5.8: Real direction error (dashed line), deadband-filtered error (thick solid line), and derivative of the look-ahead direction error (thin solid line) versus distance in a simple trial.

Figure 5.11 shows the results for a driver performing a circuit while the steering wheel assists the driver as needed. The driver forced some errors to show how the system reacts. In figure 5.11 (a) the path to follow is plotted as a solid line and the real path followed by the driver as a dashed line. We can see how natural the movement is, and the absence of large oscillations after the curves. In figure 5.11 (d) the coefficients of the assist-as-needed controller are plotted with respect to the distance driven. Note that the assistance is reduced as the driver advances along the line (thanks to the forgetting factor).

The relationship between the real error made by the driver and the assistance coefficient is shown in figure 5.10. As long as the distance error remains small, the value of the coefficient related to the distance error decreases. If we force large distance errors for a long time, the coefficient slope becomes positive.
Figure 5.9: Real distance error (dashed line), filtered error (thick solid line), and derivative of the look-ahead direction error (thin solid line) versus distance in a simple trial.
Figure 5.10: The coefficient curve can have positive slope when large errors are made.
Figure 5.11: (a) Path to follow (solid line) and real path followed by the driver (dashed line). (b) Distance error (dashed line), filtered distance error (thick solid line), and derivative of the look-ahead direction error versus distance. (c) Direction error (dashed line), filtered direction error (thick solid line), and derivative of the look-ahead direction error versus distance. (d) Coefficients of the assist-as-needed controller with respect to the distance driven.
The assist-as-needed algorithm depends on the error made by the driver. We have eliminated the errors caused by the dynamics of the wheelchair, but there is a natural variability in the driver's performance that affects the final value of the assistance. As more of the path is driven while making small errors, the assistance becomes smaller thanks to the forgetting factor, but it never reaches zero because of the learning gain. Figure 5.12 plots the assistance and the filtered distance error with respect to the distance driven. In this example, for distances driven larger than 800 meters the values of the coefficients of the assist-as-needed controller are constant and different from zero, and so is the mean of the filtered distance error.
Figure 5.12: Filtered distance error and assistance with respect to distance driven.

The final constant value of the assistance coefficients, and the amount of path that the driver has to follow in order to reach a stable value, vary with each driver. We are interested in reducing the assistance all the way to zero. In order to reach this aim we have to force the assistance to decrease once it arrives at a stable value. A stable value of assistance means a stable value of the mean of the error.
We force the assistance (the force on the driver's hands) to decrease by a factor alpha when the difference between the means of the assistance coefficients in two consecutive trials is less than a threshold. The assistance force is defined by the control law:

$$F_{ass} = K_d \cdot e_{dis,Y_B} + K_a \cdot e_{ang} + B_a \cdot \dot{\theta}$$

When the mean assistance in two consecutive trials does not vary substantially, we force the assistance on the driver's hands to be reduced by the factor alpha:

$$F = \alpha \cdot F_{ass}$$

The factor alpha is fixed within each trial, and varies from trial to trial only when the difference between the means of the assistance coefficients in two consecutive trials is less than the threshold:

$$\alpha_{i+1} = f_\alpha \cdot \alpha_i + g_\alpha \cdot |e_{nat} - e_{mean,i}|$$
where fα is the forgetting factor of alpha, gα is the learning gain of alpha, e_nat is the maximum error that we consider natural and are not interested in driving to zero, and e_mean,i is the mean error of the ith trial. The mean error is calculated using the deadband-filtered distance error.
Each trial has a length of approximately 90 meters, depending on the driver's skill. We force the assistance force to decrease by the factor alpha, but we still calculate the assistance coefficients using the assistance algorithm; the assistance coefficients are then used as performance variables. If the difference between the mean of the assistance coefficients in the next trial and the previous mean is still less than the threshold, and the mean error is not large with respect to the natural error allowed, the alpha coefficient is decreased again, meaning that the assistance force will decrease further. The program proceeds until the force on the driver's hands is zero (alpha = 0), and the simulation is then stopped. If the difference between two consecutive means of the assistance coefficients is larger than the threshold, the next trial starts with the assistance force determined by the current assistance coefficients.
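The trial-to-trial alpha update described above can be sketched as follows. All the numeric values (f_a, g_a, the gain threshold, and e_nat) are hypothetical examples:

```python
def update_alpha(alpha, mean_gain_prev, mean_gain_curr, e_mean,
                 f_a=0.8, g_a=0.5, gain_threshold=0.05, e_nat=0.05):
    """Sketch of the trial-to-trial alpha update. Alpha changes only
    when the means of the assistance coefficients in two consecutive
    trials differ by less than a threshold (i.e., the assistance has
    stabilized), and then follows
        alpha_{i+1} = f_a * alpha_i + g_a * |e_nat - e_mean_i|,
    where e_mean_i is the mean deadband-filtered error of trial i.
    """
    if abs(mean_gain_curr - mean_gain_prev) < gain_threshold:
        alpha = f_a * alpha + g_a * abs(e_nat - e_mean)
    return alpha
```

When the mean error stays near the natural level, the second term vanishes and alpha decays geometrically toward zero, so the force felt by the driver fades even though the underlying coefficients have plateaued.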
Figure 5.13 shows an example of how the program eliminates the natural variability. The dashed line represents the distance assistance coefficient unaffected by the alpha factor, and the solid line represents the distance assistance coefficient multiplied by alpha. Note that the solid line decreases to zero while the dashed line remains almost constant at a value different from zero: the assistance coefficients that generate the assistance force are driven to zero by the factor alpha. Note also that the curve of the real coefficient (affected by alpha) is not smooth after trial 12; we can see how the coefficient varies from trial to trial as the value of the factor alpha changes.
In figure 5.14 the value of the factor alpha and the distance error (in dm) are represented. Note that the alpha factor does not start diminishing until trial 12, when the mean values of the distance assistance coefficient in trials 10 and 11 become almost the same. Note also that alpha's rate of change is constant until trial 19. During that trial the slope of the curve changes slightly, because the mean value of the error is greater than the maximum error that we consider natural (0.05 m), which makes the alpha value decrease less than in previous trials.
Figure 5.13: Assistance coefficients affected by alpha (solid line) and unaffected (dashed line).
Figure 5.14: Mean distance error and factor alpha vs. trial.
Chapter 6
Results
6.1 Introduction
In this chapter we describe a series of experiments that evaluated the hypothesis that it is possible to learn how to drive a power wheelchair without making large errors using an assist-as-needed steering wheel.

Specifically, we ran two different experiments. The first shows that the error made when driving is reduced drastically when assistance is applied. The second demonstrates our hypothesis that the driver learns how to steer the power wheelchair when assistance-as-needed is applied through the steering wheel.
6.2 Assistance increases performance
When the assistance is applied, the error made by the driver is reduced drastically. To show this, we ran an experiment with a healthy subject, a 28-year-old right-handed male. The subject had to run the training simulation first with constant assistance and then without assistance. The order of the trials is important, to avoid the possibility that the difference in error could be due to the formation of an internal model of driving rather than to the assistance.
Figure 6.1 shows the distance errors that resulted from the experiment. The errors shown are real distance errors, so the error caused by the nonholonomic dynamics of the power wheelchair is not removed. The errors made without assistance are substantially larger than the errors made when assistance is applied.
The most critical situation occurs when the driver has to turn at the first curve (about 20 m from the starting point). The error made without assistance reaches a maximum of 1.5 meters, a large error considering that the width of the corridor is 4 meters and the width of the wheelchair is 0.5 meters. Note that the error made between curves (between peaks in the graph) is also substantially larger when driving without assistance than when driving with assistance.
Note that since the driver steered the wheelchair first with assistance, it is possible that he learned how to drive the wheelchair during this first trial. However, even with some idea of how to steer the wheelchair, he still made large errors when assistance was removed, mostly in curves.
This experiment showed that steering the power wheelchair with assistance substantially improves task performance, with maximum distance errors of 13 cm by the end of the experiment.
Figure 6.1: Distance error with assistance (solid line) and without assistance (dashed line) vs. distance.
6.3 Assist-as-needed controller allows learning with
small errors
A second experiment was run to show that the driver learns how to steer without making large errors when assistance-as-needed is applied.

The experiment consisted of 17 consecutive trials (17 simulation runs), with a 10-second break between trials. In each trial, assistance, no assistance, or assistance-as-needed was provided. The experiment protocol was as follows:

Trial 1: drive with assistance.

Trial 2: drive without assistance.

Trials 3-13: drive with assistance-as-needed.
Trials 14-17: drive without assistance.
The experiment was performed with a healthy 28-year-old right-handed male. We did not inform him which trials were with and without assistance, and instructed him to concentrate on steering the wheelchair as best he could. The subject had to drive with both hands on the steering wheel in a comfortable position (hands at ten minutes to two; see figure 6.2). The driver was able to move his hands freely and look around during the 10-second breaks, and could also stop the simulation if necessary.
The full experiment took about 20 minutes, although this value can change depending on the driver's skill and the maximum allowed velocity of the wheelchair. The experiment was carried out at a constant velocity of 6 m/s. The driver did not have to operate the accelerator and brake pedals, since the system accelerates by itself until reaching the maximum of 6 m/s.
For each trial we measured the tracking distance error, the direction error, the filtered errors, the distance driven, the assistance coefficient values along the trial, the total force on the driver's hands, and the maximum error.
Figure 6.2: Subject running the second experiment.
Figure 6.3 plots the maximum error vs. trial, and the distance error and assistance vs. total distance driven in the experiment. The trials marked with asterisks are without assistance, and the trials marked with circles are with assistance.
Figure 6.3: Top: Maximum error vs. trial. Trials with asterisks are without assistance, and trials with circles are with assistance. Bottom: distance error and assistance vs. total distance.
In the section above we showed that the assistance substantially increases task performance. Figure 6.3 confirms this result. The first and third trials are with assistance, and the second is without assistance. Note that the maximum error in trial two is on the order of three times the error in the first trial.
In trials 3-13 the assistance-as-needed is on, and it adapts to the subject's learning process. The values of the controller coefficients (the assistance) decrease as the driver learns how to steer the wheelchair. Note that the error is small compared to the error made without assistance (trial 2), and remains almost constant as the assistance decreases. No large errors are made while the assistance-as-needed is on.
In trial 14 the assistance is removed, and it remains off until the end of the experiment. We can see that the maximum error increases slightly at trial 14, then decreases again to a value close to that before the assistance was removed. The maximum error in trial 14 is substantially smaller than the error in trial 2 (without assistance), demonstrating that learning happened during assistance.
Figure 6.4 plots the total force on the driver's hands. Note that the force is reduced as the assistance coefficients are reduced, but there is still some assistance force at trial 13, the last trial with assistance.
Figure 6.4: Force and assistance coefficient vs. distance.
Chapter 7
Conclusion
7.1 Summary
In preliminary studies we showed that a subject can learn to steer a mouse cursor along a path on the screen, under an impairment defined as a visual clockwise rotation, without making large errors if we assist only as needed. The assistance in the preliminary studies was defined so as to diminish the visual perturbation. The subject learned how to steer with only visual help, since no force was applied to the subject's hand. The error remained small during the learning process thanks to the slowly diminishing assistance.
In the second study we confirmed the hypothesis that a subject can learn how to drive a power wheelchair without making large errors when assistance-as-needed is applied through a force feedback steering wheel. In this second case it was unnecessary to impose a simulated impairment, because the high speed of the wheelchair made the task difficult enough. The assistance was defined as a torque applied to the subject's hands through the force feedback steering wheel, helping the subject drive the power wheelchair along a black line on the floor without making large errors. The error remained small when the assistance was applied, and the subject learned how to drive without making large errors after several trials with assistance-as-needed.
Both experiments showed that assistance-as-needed, in both its visual and haptic
forms, is useful for teaching a new complex task, such as steering a mouse cursor
or a virtual power wheelchair, without making large errors that could be dangerous
in some tasks or discourage the subject.
To test our hypothesis in the second study we developed a power wheelchair
driving simulator that can also be used as a tool to study power wheelchair
training. It has been shown previously [20] that tasks learned by healthy and
disabled subjects using virtual reality simulation programs can transfer to the
real world. A virtual environment for learning to drive a power wheelchair has the
advantage of being safer and potentially more entertaining than the real world.
We also developed a control algorithm for a force feedback steering wheel that
robotically assists in steering the wheelchair through the simulator. We
introduced the "look-ahead" idea in the controller and showed that the force
feedback steering wheel and the driver cooperate without fighting each other. This
result was recently and independently shown in [27].
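A minimal sketch of this look-ahead error computation, following the error function of Appendix B.2.3 (the sign conventions here are illustrative, not necessarily those of the Simulink block):

```python
import math

def lookahead_errors(path, x, y, th, d_look=1.42):
    """Compute line-following errors in a look-ahead fashion.

    `path` is a list of (x, y, heading_deg) samples along the line;
    `d_look` is the look-ahead distance (1.42 m in the simulator).
    Returns (lateral error at the nearest point, look-ahead lateral
    error, look-ahead heading error).
    """
    # Nearest path sample to the current position.
    j = min(range(len(path)),
            key=lambda i: (x - path[i][0])**2 + (y - path[i][1])**2)

    # Walk forward along the path until d_look metres have accumulated.
    k, d = j, 0.0
    while d < d_look and k < len(path) - 1:
        d += math.hypot(path[k + 1][0] - path[k][0],
                        path[k + 1][1] - path[k][1])
        k += 1

    def lateral(i):
        # Signed lateral offset of (x, y) in the frame of path sample i.
        a = math.radians(path[i][2])
        dx, dy = x - path[i][0], y - path[i][1]
        return -math.sin(a) * dx + math.cos(a) * dy

    heading_err = th - math.radians(path[k][2])
    return lateral(j), lateral(k), heading_err
```

Feeding the look-ahead lateral and heading errors, rather than only the instantaneous ones, into the steering-wheel force is what lets the controller anticipate the line instead of fighting the driver.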
These results are a first step in the development of a Smart Wheelchair that
increases the quality of life of disabled children and their families by allowing
the children to learn to drive without the need for continuous supervision.
7.2 Future Work
This research provides a foundation for future work in the short and long term.
In the short term we plan to design a new experiment that compares how well
human subjects learn to drive across three randomized training groups: a group
that receives assistance-as-needed, a control group that receives no assistance,
and a second control group that receives fixed assistance. So far we have shown
that subjects learn when assistance-as-needed is applied, but we cannot discard
the possibility that subjects would also learn with fixed assistance. This
possibility can only be tested with a controlled experiment with many subjects.
Another possible direction for future research is the study of how to choose
the forgetting factor (fr) and learning gain (gr) in the learning algorithm. We
showed that, depending on these two values, the learning curve and the subject's
performance change considerably. We want learning to happen as quickly as
possible, but errors to remain small. Another interesting question is whether
these values vary between subjects.
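The coefficient update implemented in the simulator's assistance block (Appendix B.4.1) has the form below; fr = 0.99986 and gr = 0.002 are the values used there for the stiffness coefficient:

```python
def update_gain(k, error, fr=0.99986, gr=0.002):
    """Assist-as-needed coefficient update: forget a little on every
    step, grow in proportion to the current tracking error."""
    return fr * k + gr * abs(error)
```

With zero error the coefficient decays exponentially, so assistance fades as performance improves; with a persistent error e it settles near gr·|e|/(1 − fr), so a struggling subject keeps receiving help. The trade-off discussed above is exactly the choice of how fast the first effect acts relative to the second.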
In the long term we plan to transfer the assistance-as-needed algorithm to a
commercial powered wheelchair by integrating the force feedback joystick with the
wheelchair and developing an optical line-following system that accurately senses
the line tracking error in a look-ahead fashion. We have already submitted a
proposal to build the real smart wheelchair, which would be the subject of my
Ph.D. work.
To conclude, a similar assist-as-needed approach might be useful for learning
how to drive other vehicles, such as motorcycles or airplanes.
Bibliography
[1] “Cerebral palsy fact sheet”. United Cerebral Palsy. URL
http://www.ucp.org/ucp_generaldoc.cfm.
[2] Ceres, R., Pons, J. L., Calderon, L., Jimenez, A. R., and Azevedo, L., 2005.
“A robotic vehicle for disabled children”. IEEE Engineering in Medicine and
Biology Magazine, 24(6), November/December, pp. 55–66.
[3] “Mental retardation”. Department of Health and Human Ser-
vices, Centers for Disease Control and Prevention. URL
http://www.cdc.gov/ncbddd/dd/ddmr.htm.
[4] Deitz, J., Swinth, Y., and White, 2002. “Powered mobility and preschoolers
with complex developmental delays”. American Journal of Occupational Ther-
apy, 56, pp. 86–96.
[5] Sullivan, M., and Lewis, M., 1993. “Contingency, means-end skills, and the use
of technology in infant intervention”. Infants and Young Children(5), pp. 58–77.
[6] Fehr, L., Langbein, W. E., and Skaar, S. B., 2000. “Adequacy of power
wheelchair control interfaces for persons with severe disabilities: A clinical sur-
vey”. Journal of Rehabilitation Research and Development, 37(3), May/June,
pp. 353–360.
[7] Connell, J., and Viola, P., 1990. “Cooperative control of a semi-autonomous
mobile robot”. Robotics and Automation, 2, May, pp. 1118–1121.
[8] Yanco, H. A., 1998. “Wheelesley: A robotic wheelchair system: Indoor navi-
gation and user interface”. Assistive Technology and AI, 1458, pp. 256–268.
[9] Gomi, T., and Griffith, A., 1998. “Developing intelligent wheelchairs for the
handicapped”. Assistive Technology and AI, 1458, p. 150.
[10] Simpson, R. C., Levine, S. P., Bell, D. A., Jaros, L. A., Koren, Y., and
Borenstein, J., 1998. “Navchair: An assistive wheelchair navigation system with
automatic adaptation”. Assistive Technology and AI, 1458, pp. 235–255.
[11] Balcells, A. C., and González, J. A., 1998. “Tetranauta: A wheelchair controller
for users with very severe mobility restrictions”. TIDE, July.
[12] Pires, G., and Nunes, U., 2002. “A wheelchair steered through voice commands
and assisted by a reactive fuzzy-logic controller”. Journal of Intelligent and
Robotic Systems, 34(3), pp. 301–314.
[13] Nisbet, P., Craig, J., Odor, J., and Aitken, S., 1996. “‘Smart’ wheelchairs for
mobility training”. Technology and Disability, 5, pp. 49–62.
[14] Nilsson, L. “Driving to learn”. URL
http://www.lisbethnilsson.bd.se/method.htm.
[15] Permobil. “Entra tiro, the training tool”. URL
http://www.lisbethnilsson.bd.se/entratiro_eng.htm.
[16] Terrace, H., 1963. “Discrimination learning with and without errors”. Journal
of Experimental Analysis of Behavior, 6, pp. 1–27.
[17] Page, M., Wilson, B. A., Shiel, A., Carter, G., and Norris, D., 2006. “What
is the locus of the errorless-learning advantage?”. Neuropsychologia, 44(1),
pp. 90–100.
[18] Emken, J. L., Bobrow, J. E., and Reinkensmeyer, D. J., 2005. “Robotic move-
ment training as an optimization problem: designing a controller that assists
only as needed”. Rehabilitation Robotics, 2005. ICORR 2005. 9th International
Conference, pp. 307–312.
[19] Emken, J. L., and Reinkensmeyer, D. J. “Robot-enhanced motor learning:
Accelerating internal model formation during locomotion by transient dynamic
amplification”. IEEE Trans. Neural Systems & Rehabil Eng, 13.
[20] Holden, M. K., 2005. “Virtual environments for motor rehabilitation: Review”.
CyberPsychology & Behavior, 8(3), pp. 187–211.
[21] Rose, F., Attree, E., Brooks, B., Parslow, D., Penn, P., and Ambihaipahan,
N., 2000. “Training in virtual environments: transfer to real world tasks and
equivalence to real task training”. Ergonomics, 43(4), pp. 494–511.
[22] Brooks, B. M., Mcneil, J. E., Rose, F., Attree, E. A., and Leadbetter, A. G.,
2000. “Route learning in a case of amnesia: A preliminary investigation into the
efficacy of training in a virtual environment”. Ergonomics, 43(4), pp. 494–511.
[23] Webster, J. S., McFarland, P. T., Rapport, L. J., Morrill, B., Roades, L. A.,
and Abadee, P. S., 2001. “Computer-assisted training for improving wheelchair
mobility in unilateral neglect patients”. Archives of Physical Medicine and
Rehabilitation, 82(6), pp. 769–775.
[24] Liu, J., Emken, J., Cramer, S., and Reinkensmeyer, D. “Learning to perform a
novel movement pattern using haptic guidance: Slow learning, rapid forgetting,
and attractor paths”. Proceedings of the 2005 IEEE 9th International Conference
on Rehabilitation Robotics, June-July.
[25] Protho, J. L., LoPresti, E. F., and Brienza, D. M., 2000. “An evaluation of
an obstacle avoidance force feedback joystick”. The Proceedings of the Annual
RESNA Conference, Orlando, FL, June-July, pp. 447–449.
[26] Fattouh, A., Sahnoun, M., and Bourhis, G., 2004. “Force feedback joystick
control of a powered wheelchair: Preliminary study”. 2004 IEEE International
Conference on Systems, Man and Cybernetics, 3, Oct., pp. 2640 – 2645.
[27] Forsyth, B. A., and MacLean, K. E., 2006. “Predictive haptic guidance:
Intelligent user assistance for the control of dynamic tasks”. IEEE Transactions on
Visualization and Computer Graphics, 12(1), January-February, pp. 103–113.
[28] Rezaei, S., Guivant, J., and Nebot, E. M., 2003. “Car-like robot path following
in large unstructured environments”. Proceedings of the 2003 IEEE/RSJ Intl.
Conference on Intelligent Robots and Systems, Las Vegas, Nevada, October.
[29] Skaff, S., Kantor, G., Maiwand, D., and Rizzi, A. A., 2003. “Inertial navigation
and visual line following for a dynamical hexapod robot”. Proceedings of the
2003 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, Las Vegas,
Nevada, October.
[30] Fang, Q., and Xie, C., 2004. “A study on intelligent path following and control
for vision-based automated guided vehicle”. Proceedings of the 5th World
Congress on Intelligent Control and Automation, Hangzhou, P.R. China, June.
[31] del Rio, F. D., Jimenez, G., Sevillano, J. L., Vicente, S., and Civit-Balcells,
A., 1999. “A generalization of path following for mobile robots”. Proceedings
of the 1999 IEEE International Conference on Robotics & Automation Detroit,
Michigan, May.
[32] Tayebi, A., and Rachid, A., 1996. “Path following control law for an indus-
trial mobile robot”. Proceedings of the 1996 IEEE International Conference on
Control Applications Dearborn, MI, September.
[33] Hayakawa, Y., W. R., Kimura, T., and Naito, G., 2004. “Driver-compatible
steering system for wide speed-range path following”. IEEE/ASME Transac-
tions on mechatronics, 9(3), September, pp. 544– 552.
[34] Reynolds, C., 1999. “Steering behaviors for autonomous characters”. Proc.
Game Developers Conf, March, pp. 763–782.
[35] Antonelli, G., and Chiaverini, S., 2004. “Experiments of fuzzy lane following
for mobile robots”. Proceedings of the 2004 American Control Conference, Boston,
Massachusetts, June-July.
[36] Abdessemed, F., Benmahamme, K., and Monacelli, E., 2004. “A fuzzy-based re-
active controller for a non-holonomic mobile robot”. Robotics and Autonomous
Systems, 47, May, pp. 31–46.
[37] Mendel, J. M., and Bob John, R. I., 2002. “Type-2 fuzzy sets made simple”.
IEEE transactions on fuzzy systems, 10(2), April, pp. 117–127.
[38] Phokharatkul, P., and Phaiboon, S., 2004. “Mobile robot control using type-2
fuzzy logic system”. Proceedings of the 2004 IEEE Conference on Robotics,
Automation and Mechatronics, Singapore, December.
[39] Fierro, R., and Lewis, F. L., 1998. “Control of a nonholonomic mobile robot
using neural networks”. IEEE Transactions on Neural Networks, 9(4), July,
pp. 589–600.
[40] Youchun, X., Rongben, W., Libing, and Shouwen, J., 2000. “A vision navigation
algorithm based on linear lane model”. Proceedings of the IEEE Intelligent
Vehicles Symposium 2000 Dearborn, MI, October.
[41] Horikawa, S., Furuhashi, T., and Uchikawa, Y., 1992. “On fuzzy modeling using
fuzzy neural networks with the back-propagation algorithm”. IEEE Transac-
tions on Neural Networks, 3(5), September, pp. 801–806.
[42] Watanabe, K., Tang, J., Nakamura, M., Koga, S., and Fukuda, T., 1996. “A
fuzzy-gaussian neural network and its application to mobile robot control”.
IEEE Transactions on Control Systems Technology, 4(2), March, pp. 193–199.
[43] Chen, M., Jochem, T., and Pomerleau, D., 1995. “Aurora: A vision-based
roadway departure warning system”. IEEE Conference on Intelligent Robots
and Systems, Human Robot Interaction and Cooperative Robots, 1, August,
pp. 243–248.
[44] Scheidt, R. A., Dingwell, J. B., and Mussa-Ivaldi, F. A., 2001. “Learning to
move amid uncertainty”. Journal of Neurophysiology, 86, pp. 971–985.
[45] Reinkensmeyer, D. J., Emken, J. L., Liu, J., and Bobrow, J. E., 2004. “The
nervous system appears to minimize a weighted sum of kinematic error, force, and
change in force when adapting to viscous environments during reaching and
stepping”. Advances in Computational Motor Control III, October.
[46] Scheidt, R. A., Conditt, M. A., Secco, E. L., and Mussa-Ivaldi, F. A., 2005.
“Interaction of visual and proprioceptive feedback during adaptation of human
reaching movements”. Journal Neurophysiology, 93, January, pp. 3200 – 3213.
[47] Wei, Y., Bajaj, P., Scheidt, R., and Patton, J., 2005. “A real-time
haptic/graphic demonstration of how error augmentation can enhance learning”.
IEEE International Conference on Robotics and Automation, Barcelona, Spain,
April.
[48] Hu, T., Yang, S. X., Wang, F., and Mittal, G. S., 2002. “A neural network
controller for a nonholonomic mobile robot with unknown robot parameters”.
IEEE International Conference on Robotics & Automation, Washington, DC,
May.
[49] Romich, B., Hill, K., LoPresti, E., and Spaeth, D., 2001. “Integration of
electronic external devices for powered mobility systems”. RESNA.
[50] Johnson, M. J., Trickey, M., Brauer, E., and Feng, X., 2004. “Theradrive:
A new stroke therapy concept for home-based, computer-assisted motivating
rehabilitation”. Proceedings of the 26th Annual International Conference of the
IEEE EMBS. San Francisco, CA, USA, September.
Appendix A
Visual perturbation experiment
A.1 exrotation.m
%visual perturbation experiment
m=0;
phi=45*pi/180;   %initial assistance value in radians
clear error recorr time phit maxerror
error=0; recorr=0; phit=45;

for i=1:100
    rotation
    if t>20 && t<50   %we are only interested in trials
                      %done between 20 and 50 seconds
        time(i)=t;                %time needed to finish the ith trial
        maxerrors(i)=maxerror;    %maximum error in the ith trial
        phit=[phit;a(:,7)];       %visual perturbation in degrees along the ith trial
        error=[error;a(:,3)];     %error in pixels along the ith trial
        recorr=[recorr;a(:,8)];   %distance steered in pixels along the ith trial
        i=i
    else
    end
    pause(10)   %ten second break
    if phi<0
        break;
    end
end

figure(4)
plot(recorr,error)
hold on
plot(recorr,phit*180/pi);
A.2 rotation.m
%cogent program for visual perturbation

% Configure mouse in polling mode with 1ms interval
recycle('on')
delete('rotation.res')
config_mouse(10);
config_display(1,1,[1 1 1],[1 0 1],'Helvetica',20);
config_results('rotation.res');
config_log('rotation.log');
config_data('caminolight2.dat');

start_cogent;

% Define variable map to contain current information about mouse
map = getmousemap;
%define initial position
% x = -0;
% y = -230;

%get number of trajectories of the way from the data file
numtraj=countdatarows;
%load data from data file
data=load('caminolight2.dat');

f=0.9999;    %forgetting factor
g=0.000007;  %gain

xm=data(1,2);   %x coordinate mouse position (pixels)
ym=data(1,3);   %y coordinate mouse position (pixels)
coord=[xm ym]';

clear p E v;
cla
clearpict(2);

%define impairment rotation
the=-45*pi/180;
Ri=[cos(the) -sin(the); sin(the) cos(the)];

loadpict('camino.bmp',2);   % Draw line picture in fix picture

j=2;
tic
while j<=numtraj

    %charge car in the buffer
    preparestring('O', 2, coord(1), coord(2));

    %draw picture (way and car) on the screen
    drawpict(2);

    % Update mouse map using readmouse.
    readmouse;

    % Update coordinates for text (configurable to input perturbation movements)
    xi=sum(getmouse(map.X));
    yi=sum(getmouse(map.Y));

    xm = xm + xi;
    ym = ym - yi;

    %define assistance rotation
    Ra=[cos(phi) -sin(phi); sin(phi) cos(phi)];

    %total rotation
    R=Ra*Ri;

    coord=(eye(2)-R)*[data(1,2) data(1,3)]'+R*[xm ym]';
    x=coord(1);
    y=coord(2);

    %update camino coordinates
    xx=data(j,2);
    yy=data(j,3);

    if data(j,1)==1   %upward vertical
        if y<data(j,5)   %yy
            if j>2
                if (x-xx)^2>(y-data(j-1,3))^2 & pasatrajecto==0
                    e=sqrt((y-data(j-1,3))^2);
                    m=m+abs(xi);
                else
                    e=sqrt((x-xx)^2);
                    pasatrajecto=1;
                    m=m+abs(yi);
                end
            else
                e=sqrt((x-xx)^2);
                m=m+abs(yi);
            end
            d=(x-xx);
            xt=xx;
            yt=y;
            %count=count+1;
            preparestring('vertical upward',2,-250,190-j*10);
        else
            %save iteration number before jumping to the next trajectory
            p(j)=m;
            j=j+1;   %if this trajectory is finished, jump to the next one
            pasatrajecto=0;
        end
    elseif data(j,1)==2   %downward vertical
        if y>data(j,5)   %yy
            if j>2
                if (x-xx)^2>(y-data(j-1,3))^2 & pasatrajecto==0
                    e=sqrt((y-data(j-1,3))^2);
                    m=m+abs(xi);
                else
                    e=sqrt((x-xx)^2);
                    pasatrajecto=1;
                    m=m+abs(yi);
                end
            else
                e=sqrt((x-xx)^2);
                m=m+abs(yi);
            end
            d=(x-xx);
            xt=xx;
            yt=y;
            preparestring('vertical downward',2,-250,190-j*10);
        else
            p(j)=m;
            j=j+1;
            pasatrajecto=0;
        end
    elseif data(j,1)==3   %horizontal right
        if x<data(j,4)   %xx
            if j>2
                if (y-yy)^2>(x-data(j-1,2))^2 & pasatrajecto==0
                    e=sqrt((x-data(j-1,2))^2);
                    m=m+abs(yi);
                else
                    e=sqrt((y-yy)^2);
                    m=m+abs(xi);
                end
            else
                e=sqrt((y-yy)^2);
                m=m+abs(xi);
            end
            d=(y-yy);
            xt=x;
            yt=yy;
            preparestring('horizontal right',2,-250,190-j*10);
        else
            p(j)=m;
            j=j+1;
            pasatrajecto=0;
        end
    elseif data(j,1)==4   %horizontal left
        if x>data(j,4)   %xx
            if j>2
                if (y-yy)^2>(x-data(j-1,2))^2 & pasatrajecto==0
                    e=sqrt((x-data(j-1,2))^2);
                    m=m+abs(yi);
                else
                    e=sqrt((y-yy)^2);
                    m=m+abs(xi);
                end
            else
                e=sqrt((y-yy)^2);
                m=m+abs(xi);
            end
            d=(y-yy);
            xt=x;
            yt=yy;
            preparestring('horizontal left',2,-250,190-j*10);
        else
            p(j)=m;
            j=j+1;
            pasatrajecto=0;
        end
    elseif data(j,1)==5   %no more trajectories
        break;
    else
        fprintf('\nNo correct circuit.\n\n');
        break
    end

    %update the assistance value for this iteration
    if xi~=0 & yi~=0
        phi=f*phi-g*e;
    else
        phi=phi;
    end

    %load results of this iteration to results file
    addresults(x,y,e,d,xt,yt,phi,m)

end

toc
t=toc;

stop_cogent;

a=load('rotation.res');

maxerror=max(a(:,3));

figure(1)
plot(a(:,1),a(:,2));
hold on;
plot([0;data(:,2)],[-203;data(:,3)],'k');
hold off;
title('position car')

figure(2)
plot(a(:,8),a(:,3));
hold on;
plot(a(:,8),a(:,7)*pi/180);
hold off;
set(gca,'xtick',p);
grid;
title('error')

figure(3)
plot(a(:,4));
set(gca,'xtick',p);
grid
title('relative displacement')
Appendix B
Simulink training program
B.1 Main body diagram
Figure B.1: Simulink main body diagram.
B.2 Nonholonomic dynamics & input law block
Figure B.2: Nonholonomic dynamics & input law block.
B.2.1 Error calculation subsystem
Figure B.3: Error calculation subsystem.
B.2.2 Filter machine variability in corners
Figure B.4: Filter machine variability in corners.
B.2.3 error. Direction, distance, and look-ahead error calculation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% function to calculate the direction & distance errors,         %
% look-ahead errors, and total distance                          %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [e, d, assx, assv, angleerror, angle, velo] = error(lin, xd, yd, x, x0, y, y0, th)

i=1; emin=100; j=1;
%calculating the nearest point on the line
while i<length(lin(:,1))
    error=sqrt((x-lin(i,1))^2+(y-lin(i,2))^2);
    if error<emin
        emin=error;
        j=i;
    else
        emin=emin;
    end
    i=i+1;
end

%calculating the distance error
anglee=lin(j,3)*pi/180;
Ranglee=[sin(anglee) cos(anglee); -cos(anglee) sin(anglee)];
errproj=Ranglee*[x-lin(j,1); y-lin(j,2)];
e=abs(errproj(1));

d=sqrt((x-x0)^2+(y-y0)^2);   %total distance driven
velo=(xd^2+yd^2)^(1/2);

%calculating the point on the line a dfront distance ahead
dfront=0; k=j;
while dfront<1.42 && k<length(lin(:,1))
    dfront=dfront+((lin(k+1,1)-lin(k,1))^2+(lin(k+1,2)-lin(k,2))^2)^(1/2);
    k=k+1;
end

angle=th+4*pi-lin(k,3)*pi/180;        %look-ahead direction error
angleerror=th+4*pi-lin(j,3)*pi/180;   %direction error

%look-ahead distance error
assxb=(x-lin(k,1));
assyb=-(y-lin(k,2));
angli=th+4*pi+pi/2;
Rass=[cos(angli) -sin(angli); sin(angli) cos(angli)];
ass=Rass*[assxb assyb]';
assx=ass(1);

assvb=Rass*[xd yd]';
assv=assvb(1);
B.2.4 long velocity. Relation block between longitudinal and angular velocities
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% this block relates wheel velocities phir & phil           %
% with longitudinal velocities xdot & ydot                  %
% and direction velocity thdot                              %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [xdot,ydot,thdot] = velo(th,rd,R,phirdot,phildot)
% This block supports an embeddable subset of the MATLAB language.
% See the help menu for details.

S=[rd/2*cos(th)  rd/2*cos(th);
   rd/2*sin(th)  rd/2*sin(th);
   rd/(2*R)     -rd/(2*R)];
v=[phirdot phildot]';

xdot = S(1,:)*v;
ydot = S(2,:)*v;
thdot= S(3,:)*v;
B.2.5 Input control law
Figure B.5: Input control law.
B.2.6 Torque. Torque calculation from linear forces.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function to calculate torques from linear forces           %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [torquer,torquel] = tr(forcex,forcey,forceth,th,R)
% This block supports an embeddable subset of the MATLAB language.
% See the help menu for details.

T=[1 0.93; 1 -0.93];
Rf=[cos(th) sin(th); -sin(th) cos(th)];
torque=T*Rf*1.5*[forcex forcey]'+1.5*[forceth/R; -forceth/R];

torquer=torque(1);
torquel=torque(2);
B.2.7 Nonholonomic dynamic equations

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Nonholonomic dynamic equations         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [phirddot, philddot] = vd(torquer,torquel,thdot,rd,m,R,I,Iw,mc,d,phirdot,phildot)
% This block supports an embeddable subset of the MATLAB language.
% See the help menu for details.

M=[(rd^2*(m*R^2+I)/(4*R^2))+Iw   rd^2*(m*R^2-I)/(4*R^2);
   rd^2*(m*R^2-I)/(4*R^2)       (rd^2*(m*R^2+I)/(4*R^2))+Iw];

C=[0                        rd^2*mc*d*thdot/(2*R);
   -rd^2*mc*d*thdot/(2*R)   0];

Bt=eye(2,2);
v=[phirdot phildot]';
vdot = inv(M)*(Bt*[torquer torquel]'-C*v);
phirddot=vdot(1);
philddot=vdot(2);
B.3 Preparation graphics block
Figure B.6: Preparation graphics block.
B.4 Assistance algorithm block
Figure B.7: Assistance algorithm block.
B.4.1 joystick. Assistance algorithm code
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% assistance coefficients update        %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [fx,fy,kj2,ka2,Ba2,Bj2] = forjoy(assx,assv,angle,angleerror,thdot,e,kj,ka,Ba,Bj)
% This block supports an embeddable subset of the MATLAB language.
% See the help menu for details.

kj2=0.99986*kj+0.002*e;
ka2=0.99986*ka+0.002*abs(angleerror);
Ba2=0.99987*Ba+0.0002*abs(angleerror);
Bj2=0;

assy=0;

fx=kj2*assx+ka2*angle+Ba2*thdot-Bj2*assv;
fy=kj2*assy;