Gesture Control in a Virtual Environment
Zishuo CHENG
29 May 2015
A report submitted for the degree of Master of Computing at the
Australian National University
Supervisor: Prof. Tom Gedeon,
Martin Henschke
COMP8715: Computing Project
Australian National University Semester 1, 2015
Acknowledgement
I would like to sincerely thank my supervisors, Professor Tom Gedeon and PhD student
Martin Henschke, for their constant guidance and kind assistance throughout the research
process. Their expertise, patience, enthusiasm and friendship greatly encouraged me.
Abstract
In recent years, gesture recognition has gained increasing popularity in the field of
human-machine interaction. Vision-based gesture recognition and myoelectric
recognition are the two main solutions in this area. Myoelectric controllers collect
electromyography (EMG) signals from the user’s skin as input. The MYO armband is a
new wearable device launched by Thalmic Lab in 2014; it accomplishes gesture control
by detecting motion and muscle activity. In contrast, vision-based devices aim to
achieve gesture recognition by means of computer vision. The Kinect is a line of motion
sensing input devices released by Microsoft in 2010 which recognises the user’s motion
through cameras. Since both methods have their own advantages and drawbacks, this
project assesses the performance of the MYO armband and the Kinect in virtual control.
The results are analysed with the aim of refining the user experience.
Keywords: MYO armband, Kinect, electromyography signal, vision-based, gesture
recognition, Human-computer Interaction
List of Abbreviations
EMG Electromyography
HCI Human-computer Interaction
IMU Inertial Measurement Unit
GUI Graphical User Interface
NUI Natural User Interface
SDK Software Development Kit
Table of Contents
Acknowledgement..………………………………………………………………………………………………….1
Abstract..………………………………………………………………………………………………………………….2
List of Abbreviations…………………………………………………………………………………………………2
List of Figures……………………………………………………………………………………………………………5
List of Tables………………………………………………………………………………………………………….…5
1. Introduction…………………………………………………………………………………………………….……6
1.1 Overview………………………………………………………………………………………………...6
1.2 Motivation……………………………………………………………………………………………...6
1.3 Objectives……………………………………………………………………………………………….7
1.4 Contributions………………………………………………………………………………………….7
1.5 Report Outline………………………………………………………………………………………..7
2. Background………………………………………………………………………………………………………….8
2.1 MYO armband………………………………………………………………………………………..8
2.2 Kinect sensor…………………………………………………………………………………………10
3. Methodology………………………………………………………………………………………………………13
3.1 Assessment on User-friendliness…………………………………………………………..13
3.1.1 Training Subjects…………………………………………………………………….13
3.1.2 Evaluating Degree of Proficiency…………………………………………….14
3.1.3 Evaluating User-friendliness……………………………………………………15
3.2 Assessment on Navigation…………………………………………………………………….17
3.2.1 Setting up the Virtual Environment…………………………………………17
3.2.1.1 Virtual Environment Description………………………………17
3.2.1.2 Settings of the Tested Devices………………………………….18
3.2.2 Experimental Data Collection………………………………………………….19
3.2.2.1 Navigation Data and Time………………………………………..20
3.2.2.2 Error Rate…………………………………………………………………21
3.2.2.3 Subjective Evaluation……………………………………………….21
3.3 Assessment on Precise Manipulation…………………………………………………….21
3.3.1 Setting up the Virtual Environment…………………………………………21
3.3.1.1 Virtual Environment Description………………………………21
3.3.1.2 Settings of the Tested Devices……………………………….…22
3.3.2 Experimental Data Collection………………………………………………….23
3.3.2.1 Moving Range of Arm……………………………………………….23
3.3.2.2 Interaction Events and Time…………………………………….25
3.3.2.3 Error Rate…………………………………………………………………26
3.3.2.4 Subjective Evaluation……………………………………………….26
3.4 Assessment on Other General Aspects………………………………………………26
3.5 Devices Specification and Experimental Regulation…………………………..27
4. Result Analysis…………………………………………………………………………………………………….28
4.1 Result Analysis of Experiment 1…………………………………………………………….28
4.1.1 Evaluation of Proficiency Test…………………………………………………28
4.1.2 Evaluation of Training Time…………………………………………………….29
4.1.3 Evaluation of User-friendliness……………………………………………….30
4.2 Result Analysis of Experiment 2…………………………………………………………….30
4.2.1 Evaluation of the Number of Gestures…………………………………….30
4.2.2 Evaluation of Error Rate………………………………………………………….31
4.2.3 Evaluation of Completion Time……………………………………………….31
4.2.4 Self-Evaluation………………………………………………………………………..32
4.3 Result Analysis of Experiment 3…………………………………………………………….32
4.3.1 Evaluation of Moving Range……………………………………………………32
4.3.2 Evaluation of Error Rate………………………………………………………….32
4.3.3 Evaluation of Completion Time……………………………………………….33
4.3.4 Self-Evaluation………………………………………………………………………..33
4.4 Analysis of Other Relevant Data…………………………………………………………….34
5. Conclusion and Future Improvement………………………………………………………………….34
5.1 Conclusion…………………………………………………………………………………………….35
5.2 Future Improvement……………………………………………………………………………..36
References……………………………………………………………………………………………………….37
Appendix A……………………………………………………………………………………………………………..38
Appendix B……………………………………………………………………………………………………………..39
List of Figures
Figure 1: MYO armband with 8 EMG sensors [credit: Thalmic Lab]……………………….9
Figure 2: MYO Keyboard Mapper [credit: MYO Application Manager] …………………10
Figure 3: A Kinect Sensor [credit: Microsoft]………………………………………………….………11
Figure 4: Skeleton Position and Tracking State of Kinect Sensor [credit: Microsoft
Developer Network] ……………………………………………………………………………………………….12
Figure 5: Graph for the Test of Degree of Proficiency of Cursor Control …………………15
Figure 6: Flow Chart of Experiment 1……………………………………………………………………..16
Figure 7: 3D Demo of the Virtual Maze in Experiment 2…………………………………………16
Figure 8: One of the Shortest Paths in Experiment 2…………………………………………….….17
Figure 9: 3D Scene for Experiment 3……………………………………………………………………….22
Figure 10: Euler Angles in 3D Euclidean Space [credit: Wikipedia, Euler Angles] ….23
List of Tables
Table 1: Interaction Event Mapper of MYO in Experiment 2…………………………………..17
Table 2: Interaction Event Mapper of Kinect in Experiment 2………………………………….18
Table 3: Interaction Event Mapper of MYO in Experiment 3…………………..………………21
Table 4: Error Rate & Incorrect Gesture for Proficiency Test of MYO armband………28
Table 5: Completion Time in Cursor Control Test…………..…………………………….……28
Table 6: Total Training Time for MYO armband and Kinect sensor…………………………28
Table 7: Subject’s Rate for the User-friendliness of MYO and Kinect……………………..29
Table 8: The Number of Gestures Performed in Experiment 2…………………………………29
Table 9: Error Rate in Experiment 2…………………………………………………..……………………30
Table 10: Completion Time in Experiment 2…………………………………….………………………31
Table 11: Subject’s Self-Evaluation of the Performance in Experiment 2…………..……31
Table 12: Range of Pitch Angle in Experiment 3………………………………………………………32
Table 13: Range of Yaw Angle in Experiment 3………………………………………………………32
Table 14: Time Spent in Experiment 3…………….………………………………………………………33
Table 15: Subject’s Self-Evaluation of the Performance in Experiment 3…………..……33
Chapter 1
Introduction
1.1 Overview
In recent years, traditional input devices such as the keyboard and mouse have been
losing popularity due to their lack of flexibility and freedom. Compared to the
traditional graphical user interface (GUI), a natural user interface (NUI) enables
human-machine interaction via common human behaviours such as gesture, voice,
facial expression and eye movement. The concept of the NUI was developed
by Steve Mann in the 1990s [1]. In the last two decades, developers have made a variety
of attempts to improve the user experience by applying NUIs. Nowadays, NUIs as discussed
in [2] are increasingly becoming an important part of contemporary human-machine
interaction.
Electromyography (EMG) signal recognition plays an important role in NUIs. EMG is
a technique for monitoring the electrical activity produced by skeletal muscles [3]. In
recent years, a variety of wearable EMG devices have been released by numerous
developers, such as the MYO armband, the Jawbone and several types of smartwatch. When
muscle cells are electrically or neurologically activated, these devices monitor the
electric potential generated by the muscle cells in order to analyse the biomechanics of
human movement.
Vision-based pattern recognition is another significant part of NUI study, and has
been investigated since the end of the 20th century [4]. By using cameras to capture
specific motions and patterns, vision-based devices can recognise the messages that
humans attempt to convey. There are many innovations in this area, such as the Kinect
and the Leap Motion. Generally speaking, most vision-based devices perform gesture
recognition by monitoring and analysing motion, depth, colour, shape and
appearance [5].
1.2 Motivation
Even though EMG signal recognition and vision-based pattern recognition have been
studied for many years, they are still far from breaking the dominance of the traditional
GUI based on the keyboard and mouse [6]. Moreover, each has its own problems,
which form the bottleneck of its development. Given these defects, this project
chooses the MYO armband and the Kinect as typical examples of EMG signal recognition
and vision-based recognition, and attempts to assess their performance in a virtual
environment in order to identify the specific aspects that need to be improved by
developers in the future. Moreover, the project also aims to summarise some valuable
lessons for human-machine interaction.
1.3 Objectives
The objectives of this project are to evaluate the performance of the MYO armband and
the Kinect in terms of gesture control, to investigate the user experience of these two
devices, and to identify improvements that could be made in the
development of EMG-based and vision-based HCI.
1.4 Contributions
Firstly, the project set up a 3D maze as the virtual environment to support the evaluation
of gesture control. Secondly, the project used the Software Development Kits (SDKs) of
the MYO armband and the Kinect sensor to connect the devices to the virtual environment.
Thirdly, three HCI experiments were conducted in the project. Last but not least, the
project evaluated the experimental data and summarised some lessons for EMG signal
recognition and vision-based pattern recognition.
1.5 Report Outline
This project report is divided into five chapters. After this introduction, Chapter 2
introduces the background of MYO armband and Kinect sensor including the features
and limitations of them. In Chapter 3, the research methodology is explained in details.
It introduces the three experiments held in this project. Chapter 4 aims to analyse and
discuss the experimental data from various dimensions. Lastly, the final conclusion and
future improvement are discussed in Chapter5.
Chapter 2
Background
The project selects the MYO armband and the Kinect as typical devices in the areas of
EMG signal recognition and vision-based pattern recognition respectively. By evaluating
the performance of these two devices, the researcher can identify the advantages and
defects of these two approaches to gesture control. Thus, in this chapter, the features,
specifications and limitations of the MYO armband and the Kinect are explained in more
detail.
2.1 MYO armband
The MYO armband is a wearable electromyography forearm band which was developed by
Thalmic Lab in 2013 [4]. The original aim of this equipment is to provide touch-free
control of technology with gestures and motion. In 2014, Thalmic Lab
released the first shipment of the first-generation product [4]. The armband allows users
to control technology wirelessly by detecting the electrical activity in the user’s
muscles and the motion of the user’s arm.
One of the main features of the MYO armband is that the band reads electromyography
signals from the skeletal muscles and uses them as the input commands for the
corresponding gesture-control events. As Figure 1 shows, the armband has 8 medical-grade
sensors which monitor EMG activity on the surface of the user’s skin. To
monitor spatial data about the movement and orientation of the user’s arm, the
armband adopts a 9-axis Inertial Measurement Unit (IMU) which includes a 3-axis
gyroscope, a 3-axis accelerometer and a 3-axis magnetometer [12]. Through the sensors
and the IMU, the armband can recognise the user’s gestures and track the motion of the
user’s arm. Moreover, the armband uses Bluetooth 4.0 as the information channel to
transmit the recognised signals to the paired device.
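As a rough illustration of how IMU orientation can drive an on-screen cursor (as in the experiments described later), the sketch below maps yaw and pitch angles linearly to pixel coordinates. This is not the MYO SDK’s API; the screen size, angle ranges and function name are illustrative assumptions, and the sketch is written in Python for brevity (the project’s own code is in C#).

```python
# Illustrative sketch: mapping IMU orientation (yaw/pitch, in radians,
# relative to a neutral pose) to screen coordinates. The linear mapping,
# screen size and angle ranges are assumptions, not MYO SDK behaviour.
import math

SCREEN_W, SCREEN_H = 1240, 660
YAW_RANGE = math.radians(40)     # assumed comfortable horizontal arm sweep
PITCH_RANGE = math.radians(30)   # assumed comfortable vertical arm sweep

def cursor_from_orientation(yaw, pitch):
    """Scale yaw/pitch about the neutral pose to pixels, clamped to the screen."""
    x = (yaw / YAW_RANGE + 0.5) * SCREEN_W
    y = (0.5 - pitch / PITCH_RANGE) * SCREEN_H
    return (min(max(int(x), 0), SCREEN_W - 1),
            min(max(int(y), 0), SCREEN_H - 1))
```

With this mapping the neutral pose lands in the centre of the screen, and sweeping beyond the assumed range simply pins the cursor to the border.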
Figure 1: MYO armband with 8 EMG sensors [credit: Thalmic Lab]
Another feature of the MYO armband is its open application program interfaces (APIs)
and free SDK. Thanks to this feature, more developers can build solutions for
various uses such as home automation, drones, computer games and virtual reality.
Thalmic Lab has released more than 10 versions of the SDK since the initial version,
Alpha 1, was released in 2013. According to the change log in [10], numerous new
features were added to the SDK in each update to make the development environment more
powerful. In Beta release 2, gesture data collection was added, so developers can collect
and analyse gesture data in order to help improve the accuracy of gesture recognition.
In the latest version, 0.8.1, a new function called mediaKey() was added to the SDK,
which allows applications to send media-key events to the system. So far, the MYO SDK
has become a mature development environment with plenty of well-constructed functions.
Nevertheless, there are a few drawbacks in the current generation of the MYO armband.
First of all, the poses that can be recognised by the band are limited. In the developer
blog [10], Thalmic Lab announced that the armband can recognise 5 pre-set gestures:
fist, wave left, wave right, fingers spread and double tap. By setting up the
connection through Bluetooth 4.0, users are able to map each gesture to a particular
input event in order to interact with the paired device. On the one hand, the developers
of the armband tend to simplify human-machine interaction, and using only
5 gestures to interact with the environment is a user-friendly design which largely
reduces the operational complexity. On the other hand, this design places some
restrictions on application development. Secondly, the accuracy of gesture recognition
is not satisfactory, especially in complex interactions. When a user aims to perform
a complicated task with a combination of several gestures, the armband is not sensitive
enough to detect the quick changes between the user’s gestures.
Figure 2: MYO Keyboard Mapper [credit: MYO Application Manager]
2.2 Kinect sensor
The Kinect is a line of motion sensing input devices released by Microsoft in 2010. The
first-generation Kinect was designed for HCI in the video games listed in the Xbox
360 store. Since its release, the Kinect sensor has attracted the attention of numerous
researchers because of its ability to perform vision-based gesture recognition [4].
Nowadays, the Kinect is used not only for entertainment but also for other purposes such
as model building and HCI research. In later chapters of this report, numerous parts
of the HCI experiments are designed based on the product characteristics discussed
in the following paragraphs.
One of the key characteristics of the Kinect sensor is its use of 3 cameras to
implement pattern recognition. As Figure 3 shows, a Kinect sensor consists of an RGB
camera, a 3D depth sensor pair, a built-in motor and a multi-array microphone. The
RGB camera is a traditional camera which generates high-resolution colour
images in real time. As mentioned in [13], the depth sensor is composed of an infra-red
(IR) projector and a monochrome complementary metal–oxide–semiconductor (CMOS)
sensor. By projecting an IR pattern and measuring how it returns to the sensor, a depth
map can be computed. The video streams of both the RGB camera and the depth sensor use
the same video graphics array (VGA) resolution (640 × 480 pixels). Each pixel in the
RGB viewer corresponds to a particular pixel in the depth viewer. Based on this working
principle, the Kinect sensor is able to provide the depth, colour and 3D information of
the objects it captures.
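Given this pixel-level correspondence, each depth pixel can be back-projected to a 3D point with the standard pinhole camera model. The sketch below shows the idea in Python for brevity; the intrinsic values (focal lengths and principal point) are illustrative assumptions, not the Kinect’s calibrated constants.

```python
# Illustrative sketch: back-projecting a depth pixel (u, v) with depth z
# (metres) to a camera-space 3D point via the pinhole model. The intrinsics
# below are assumptions for a 640x480 image, not calibrated Kinect values.

FX, FY = 580.0, 580.0   # assumed focal lengths in pixels
CX, CY = 320.0, 240.0   # assumed principal point (image centre)

def depth_to_point(u, v, z):
    """Pinhole back-projection: pixel coordinates + depth -> (X, Y, Z)."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)
```

A point at the image centre projects straight down the optical axis, which is a quick sanity check for the mapping.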
Another characteristic of the Kinect sensor is its unique skeletal tracking system. As
Figure 4 illustrates, the Kinect predicts the 3-dimensional positions of 20 joints of the
human body from a single depth image [7]. Through this system, the Kinect is able to
estimate body parts invariantly to pose, body shape, appearance, etc. The system allows
developers to use the corresponding built-in functions in the Kinect SDK to retrieve
motions and poses in real time. Thus, it not only provides a powerful development
environment for application developers, but also enhances the user experience of Kinect
applications.
The SDK is the third characteristic that has enabled the Kinect to gain popularity.
Similar to the MYO armband, the Kinect also has a non-commercial SDK, released by
Microsoft in 2011. With each updated version, Microsoft adds more useful functions and
features and keeps optimising the development environment. For example, the latest
version, SDK 2.0, released in October 2014, supports a wider horizontal and vertical
field of view for depth and colour. For the skeletal tracking system, Microsoft increased
the number of joints that can be recognised from 20 to 25. Moreover, some new gestures,
such as the open and closed hand gestures, were also added to the SDK.
However, the Kinect sensor also has its own defects. Firstly, although Microsoft keeps
improving the SDK, the depth sensor still has a limited sensing range, from 0.4 meters
to 4 meters, and the calibration function performs differently depending on the distance
between the object and the Kinect sensor. According to the research in [7], to achieve
the best performance, the Kinect sensor should be located within a 30 cm × 30 cm square
at a distance of between 1.45 and 1.75 meters from the user. Secondly, the depth images
measured by the Kinect sensor are not reliable enough: they can be corrupted by noise
such as lighting and background interference.
Figure 3: A Kinect Sensor [credit: Microsoft]
Figure 4: Skeleton Position and Tracking State of Kinect Sensor [credit: Microsoft
Developer Network]
Chapter 3
Methodology
This chapter introduces the details of the three HCI experiments conducted in this
project. The main purpose of this phase is to design the experimental methodology in
order to investigate the performance and user experience of the MYO armband and the
Kinect sensor in the area of gesture control. The chapter contains five sections.
Section 3.1 describes the first experiment in detail; this experiment aims to help
volunteers become familiar with the use of the MYO armband and the Kinect sensor and
to evaluate their user-friendliness. Section 3.2 introduces the second experiment, in
which a virtual environment is implemented in order to investigate the navigation
performance of the devices. Section 3.3 explains the third experiment, which also sets
up a virtual environment, this time to assess the precise-manipulation performance of
each device. Section 3.4 covers other general points investigated in the experimental
questionnaire. Lastly, Section 3.5 introduces the specifications of the experimental
devices and the rules.
3.1 Assessment on User-friendliness
This section introduces Experiment 1. There are two purposes for this experiment.
Firstly, since both the MYO armband and the Kinect sensor require special gestures to
interact with the virtual environment, it is important to train subjects to be familiar
with the use of these two devices before the experiments that evaluate their performance
in virtual control. Secondly, if subjects are novice users of the MYO armband and the
Kinect sensor, this is a good chance to investigate the user-friendliness of the devices.
The process of this experiment is shown in Figure 6.
3.1.1 Training Subjects
There are two phases in this experiment. Each subject is first required to
learn the use of the MYO armband. At the beginning of this phase, a demo video
about using the MYO armband is shown to the subject. The contents of the demo video
include wearing the armband, performing the sync gesture, using the IMU to track arm
motion, and performing the five pre-set gestures: fist, fingers spread, wave left,
wave right and double tap. After the demo video, subjects are asked to
try the armband by themselves. Each subject needs to put on the armband
and sync it with the paired experimental computer. After syncing
successfully, they need to perform the five gestures and use their arm to control the
cursor on the screen of the paired computer.
The second phase of this experiment trains the subject in the use of the Kinect sensor.
A demo video is also shown to each subject, covering activating the sensor, calibrating
pattern recognition and tracking arm motion. Similar to the first phase, subjects are
asked to activate the Kinect sensor and complete the calibration task by themselves.
After this, they are also required to use their arm to control the cursor on the screen
of the paired computer.
3.1.2 Evaluating Degree of Proficiency
Since one of the purpose of this experiment is to train user to use the tested devices,
therefore evaluating the degree of user’s proficiency is meaningful and important. In
this experiment, only if the subject’s degree of proficiency is acceptable, he/she is
allowed to do the Experiment 2 and 3. A program is implemented to assess each
subject’s degree of proficiency when they are using the devices to do the test.
To evaluate a subject’s proficiency with the MYO armband, two aspects are
monitored and assessed by the program. Firstly, the program selects one of the five
gestures at random and then displays the name of the chosen gesture on the
screen. The program repeats this task ten times, selecting each gesture twice. Subjects
should perform the same gesture as shown on the screen; an error is counted if the
subject performs a different gesture. Secondly, the program generates a graph
(1240×660 pixels) as shown in Figure 5. There are five red points, located at 15×330,
1225×330, 620×15, 620×645 and 620×330 respectively. When the graph is displayed on
the screen, the cursor is reset to the point 0×0 on the graph. Subjects are
asked to use the MYO armband to steer the cursor to all five points within one
minute; a failure is counted if the time runs out. Across these two tests, the subject
is assessed as qualified only if he/she completes the first test with an error rate
below 20% and completes the second test within 1 minute. If the subject is not
qualified, he/she is required to redo the failed part until it is passed.
Similar to the evaluation of a subject’s proficiency with the MYO armband, the program
uses the same graph to assess the subject’s proficiency with the Kinect sensor. Since
manipulating the Kinect sensor does not require performing any specific gestures, there
is no gesture test in this evaluation. A subject is therefore considered qualified if
he/she can complete the cursor control test within 1 minute; if the subject fails,
he/she redoes the test until it is passed.
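The qualification rule above can be summarised in a short sketch (Python for brevity; the function names are illustrative, not those of the experimental program, which was written in C#):

```python
# Illustrative sketch of the Experiment 1 qualification rule: the gesture
# test passes with an error rate below 20% over 10 prompts, and the cursor
# test passes if all five targets are reached within 60 seconds.

def gesture_test_passed(errors, trials=10):
    """Error rate must be strictly below 20%."""
    return errors / trials < 0.20

def cursor_test_passed(elapsed_seconds, limit=60.0):
    return elapsed_seconds <= limit

def qualified(errors, elapsed_seconds, needs_gesture_test=True):
    """Kinect subjects skip the gesture test (needs_gesture_test=False)."""
    ok = cursor_test_passed(elapsed_seconds)
    if needs_gesture_test:
        ok = ok and gesture_test_passed(errors)
    return ok
```

Note that 2 errors out of 10 prompts is exactly 20% and therefore fails the strict "less than 20%" criterion.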
Figure 5: Graph for the Test of Degree of Proficiency of Cursor Control
3.1.3 Evaluating User-friendliness
To evaluate user-friendliness, four aspects are taken into account. Firstly, for
each experimental device, a timer starts after the demo video is shown and stops once
the subject is proficient at manipulating the device. This time record (named
‘TotalTime’) shows how long a novice user spends getting familiar with the operation
of each device. Secondly, the time each subject takes in the cursor control test is
also recorded (named ‘CursorControlTime’). Thirdly, for the MYO armband, the error
rate of its first test is recorded as ‘ErrorRate’. Lastly, when a subject passes all
the training tests, he/she is asked to give a subjective evaluation of the
user-friendliness of the MYO armband and the Kinect sensor. The question for this
aspect is “Do you think the MYO armband/Kinect sensor is user-friendly?”. There are
five options to choose from: strongly agree, agree, uncertain, disagree and
strongly disagree.
3.2 Assessment on Navigation
This section introduces Experiment 2. The purpose of this experiment is to test the
performance of the MYO armband and the Kinect sensor in navigation, and to compare
them with a traditional input device. A virtual maze is set up in this experiment to
support the evaluation. Moreover, to make the data analysis more convenient, the
interaction events of each tested device (i.e. MYO armband, Kinect sensor and keyboard)
have been pre-set rather than being customised. Therefore, all subjects need to use the
same input commands to interact with the virtual environment, and are not allowed to
set the interaction events according to their personal preferences.
3.2.1 Setting up the Virtual Environment
This sub-section introduces the details of the virtual environment used in Experiment 2
and the settings of the three tested devices. The virtual environment is a 3D maze;
subjects are required to use the keyboard, the MYO armband and the Kinect sensor to
move from the starting point to the specified destination.
3.2.1.1 Virtual Environment Description
The virtual environment used throughout this project is a 3-dimensional maze written in
C#. The virtual maze consists of 209 objects, each mapped to a corresponding
2-dimensional texture image. To enhance the sense of virtual reality, the player views
the maze from a first-person perspective. As Figure 7 shows, the structure of the maze
is not complicated: it contains 4 rooms, 5 straight halls, 3 square halls and 2 stair
halls, and each part of the maze is used for a different testing purpose. In this
experiment, the starting position is set at a corner of Room 1. To save time, the camera
can be switched to this starting point by pressing key ‘1’ on the keyboard, so the
researcher presses key ‘1’ when a subject is about to take the test. One of the shortest
paths is shown in Figure 8 and is considered the expected path in this experiment. Each
subject is asked to try their best to trace this shortest path.
Figure 7: 3D Demo of the Virtual Maze in Experiment 2
Figure 8: One of the Shortest Paths in Experiment 2
There are four interaction events set in this navigation task: moving forward, moving
backward, turning left and turning right. It is important to note that when turning
left/right occurs, the camera rotates to the left/right rather than shifting
horizontally. Therefore, if users want to move to the left/right, they need to turn the
camera to the left/right first, and then move forward in the new direction.
3.2.1.2 Settings of the Tested Devices
The three tested devices in this experiment are MYO armband, Kinect sensor and
keyboard. For each of the device, the interaction events mentioned in the previous sub-
section are mapped into the corresponding gestures or keys. Moreover, since MYO
armband and Kinect sensor cannot be directly connected with the virtual environment,
it is necessary to build a connector in the code of the maze. In the process of building
the connectors, the MYO SDK 0.8.1 and Kinect SDK 1.9 was used. The settings of the
three devices is explained as below.
Firstly, the settings of the MYO armband are shown in Table 1. An ‘Unlock’ event is set
in MYO mode in order to reduce misuse: unless the subject performs a double tap to
unlock the armband, the other four gestures are not detected. It is important to note
that the experiment does not adopt a Finite State Machine (FSM) as the mathematical
model; therefore, users need to hold a gesture in order to keep the corresponding event
continuing.
Gesture Interaction Event
Fist Move Forward
Fingers Spread Move Backward
Wave Left Turn Left
Wave Right Turn Right
Double Tap Unlock
Table 1: Interaction Event Mapper of MYO in Experiment 2
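The unlock rule and the hold-to-continue behaviour described above can be sketched as follows (Python for brevity; the class and event names are illustrative, not those of the project’s C# connector):

```python
# Illustrative sketch of the Table 1 mapping with the unlock rule: gestures
# other than the double tap are ignored while the armband is locked, and an
# event continues only while its gesture is held ('rest' stops it).

GESTURE_EVENTS = {
    "fist": ("MOVE", "FORWARD"),
    "fingers spread": ("MOVE", "BACKWARD"),
    "wave left": ("TURN", "LEFT"),
    "wave right": ("TURN", "RIGHT"),
}

class MyoMapper:
    def __init__(self):
        self.unlocked = False

    def on_gesture(self, gesture):
        """Return the active (movement, direction) event, or None."""
        if gesture == "double tap":
            self.unlocked = True
            return None
        if not self.unlocked or gesture == "rest":
            return None
        return GESTURE_EVENTS.get(gesture)
```

Because no FSM is used, the caller is expected to invoke on_gesture repeatedly while a gesture is held, so releasing to ‘rest’ naturally stops the movement.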
Secondly, because the version of the Kinect SDK used in this experiment does not
support hand gesture recognition, the Kinect sensor still needs to be used together
with a mouse. While standing in front of the Kinect sensor’s cameras, subjects are
required to hold a mouse in their right hand. After Kinect mode is launched, the
vision-based sensor tracks the subject’s right shoulder, elbow and hand, so the subject
is able to control the cursor on the screen by moving his/her right hand. The
interaction event mapping is shown in Table 2. The cursor is constrained within the
frame: it is forced to stay at a border if the user tries to move it out of the frame.
When the cursor is located on a border of the frame, the corresponding arrow is
displayed. If the user then holds both the left and right buttons of the mouse in
his/her right hand, the player moves toward or turns in the corresponding direction.
Cursor Position Arrow Interaction Event
Cursor.X≤ 0 ← Turn Right
Cursor.X≥ Width → Turn Left
Cursor.Y≤ 0 ↑ Move Forward
Cursor.Y≥ Height ↓ Move Backward
Table 2: Interaction Event Mapper of Kinect in Experiment 2
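The clamping and border rules of Table 2 can be sketched as follows (Python for brevity; the frame size is an illustrative assumption, and the mirrored left/right mapping follows the table):

```python
# Illustrative sketch of the Table 2 rule: the cursor is clamped to the
# frame, and a border position selects the interaction event. Note the
# mirrored left/right mapping, exactly as listed in the table.

WIDTH, HEIGHT = 640, 480   # assumed frame size, not from the report

def border_event(x, y):
    """Clamp (x, y) to the frame and return the event at the border, if any."""
    x = min(max(x, 0), WIDTH)
    y = min(max(y, 0), HEIGHT)
    if x <= 0:
        return "TURN RIGHT"      # left border, arrow <-
    if x >= WIDTH:
        return "TURN LEFT"       # right border, arrow ->
    if y <= 0:
        return "MOVE FORWARD"    # top border, arrow up
    if y >= HEIGHT:
        return "MOVE BACKWARD"   # bottom border, arrow down
    return None
```

In the experiment itself the returned event only fires while both mouse buttons are held, which this sketch leaves to the caller.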
Thirdly, the keyboard setting follows the convention of most 3D games: key ‘W’ maps to
moving forward, key ‘S’ to moving backward, key ‘A’ to turning left and key ‘D’ to
turning right.
3.2.2 Experimental Data Collection
This sub-section introduces the types of data collected in Experiment 2 and the method
used to collect them. The following types of data are considered meaningful for
evaluating the performance of the tested devices in the navigation task.
3.2.2.1 Navigation Data and Time
Firstly, when a moving or turning event is first triggered, a clock function is
activated and keeps counting until the virtual maze program is closed; this gives the
subject’s completion time for the task.
Moreover, the clock function is reactivated every 0.02 seconds. Each time the clock
function is activated, the experimental program records a piece of navigation data.
Each piece of navigation data includes the type of movement (move/turn), the direction
(forward/backward/left/right) and the corresponding time.
Lastly, in MYO mode, the navigation data also includes the status of the armband
(locked/unlocked), the hand the armband is worn on (R/L) and the gesture currently
performed (rest/fist/fingers spread/wave in/wave out/double tap). It is important to
note that subjects are allowed to use either their right or left arm to perform this
task, so two gestures have different names from those shown in Table 1. The MYO armband
is able to recognise which hand the user is using. Therefore, if the subject uses the
right hand, the ‘Wave Left’ gesture in Table 1 is recorded as ‘wave in’ and ‘Wave
Right’ as ‘wave out’; if the subject uses the left hand, ‘Wave Left’ is recorded as
‘wave out’ and ‘Wave Right’ as ‘wave in’.
Pseudo Code of Collecting Navigation Data
InputMode = {KEYBOARD, MYO, KINECT}
MYO = (Status, Hand, Gesture)
Status = {unlock, lock}
Hand = {L, R}
Gesture = {rest, fist, fingers spread, wave in, wave out, double tap}
Event = (Movement, Direction)
Movement = {MOVE, TURN}
Direction = {FORWARD, BACKWARD, LEFT, RIGHT}

Clock clock = new Clock
while virtual maze is launched
    switch InputMode
        case KEYBOARD:
            StreamWriter file = new StreamWriter("Keyboard_Ex2_NavigationData.txt")
            if Event is triggered
                if triggerTime ≥ 0.02 sec
                    file.write(clock.elapsedTime() + Event.Movement + Event.Direction)
                    triggerTime.Clear()
                EndIf
            EndIf
            Break
        case KINECT:
            StreamWriter file = new StreamWriter("Kinect_Ex2_NavigationData.txt")
            if Event is triggered
                if triggerTime ≥ 0.02 sec
                    file.write(clock.elapsedTime() + Event.Movement + Event.Direction)
                    triggerTime.Clear()
                EndIf
            EndIf
            Break
        case MYO:
            StreamWriter file = new StreamWriter("MYO_Ex2_NavigationData.txt")
            if Event is triggered
                if triggerTime ≥ 0.02 sec
                    file.write(clock.elapsedTime() + Event.Movement + Event.Direction + MYO.Status + MYO.Hand + MYO.Gesture)
                    triggerTime.Clear()
                EndIf
            EndIf
            Break
    EndSwitch
EndWhile
*Note: The settings of the interaction events are contained in the virtual maze which is not listed in this pseudo code.
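The 0.02-second sampling loop described above can be sketched in Python. This is a hypothetical illustration of the logging logic, with event_stream and get_elapsed standing in for the real input and clock hooks of the experimental program:

```python
import time

SAMPLE_PERIOD = 0.02  # seconds between samples, matching the thesis setup

def sample_navigation(event_stream, get_elapsed, samples):
    """Collect one navigation record per sample period while an event is
    active. event_stream yields (movement, direction) tuples, or None when
    the subject is idle; get_elapsed returns seconds since the clock was
    started. This is a hypothetical sketch of the logging loop, not the
    experimental code; a caller would write each record to a file such as
    "MYO_Ex2_NavigationData.txt".
    """
    records = []
    for _ in range(samples):
        event = next(event_stream)
        if event is not None:
            movement, direction = event
            records.append((round(get_elapsed(), 2), movement, direction))
        time.sleep(SAMPLE_PERIOD)  # wait for the next 0.02 s tick
    return records
```

Idle ticks produce no record, which mirrors the pseudocode: data is written only while an interaction event is triggered.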
3.2.2.2 Error Rate
The error rate can also be described as the recognition error. For example, if a subject
performs the ‘Fist’ gesture in MYO mode but the armband recognises it as ‘Double Tap’,
a recognition error is counted. Because the devices cannot detect and correct such errors
by themselves, a camera is used to record a video of each subject performing the task,
and the researcher reviews the video to identify recognition errors. Whenever a gesture
performed by the subject produces the wrong feedback in the virtual environment, a
recognition error is counted.
3.2.2.3 Subjective Evaluation
After completing this test, subjects are asked to give a subjective evaluation of their
performance with each tested device used in this experiment. There are five grades to
choose from: ‘Excellent’, ‘Good’, ‘Average’, ‘Poor’ and ‘Very Poor’. Moreover, they
are asked to choose their favourite device for this task and to list the reasons for their
choice.
3.3 Assessment on Precise Manipulation
This section introduces Experiment 3. The purpose of this experiment is to test the
performance of the MYO armband and the Kinect sensor in precise manipulation, and
to compare them with a traditional input device. As in Experiment 2, subjects perform
the precise manipulation in a virtual environment, and the interaction events of each
tested device (i.e. MYO armband, Kinect sensor and mouse) have been pre-set. The task
in this experiment is to use the tested device to pick up the keys generated on the screen
and to use them to open the corresponding doors.
3.3.1 Setting up the Virtual Environment
This sub-section introduces the details of the virtual environment used in Experiment 3
and the settings of the three tested devices. The virtual environment is a 3D scene.
Subjects are required to use the mouse, the MYO armband and the Kinect sensor to
select the keys and drag them to the corresponding doors. Compared to Experiment 2,
although fewer interaction events are set, this experiment requires subjects to control
the cursor precisely and to perform the gestures more proficiently.
3.3.1.1 Virtual Environment Description
The virtual environment used in this experiment is a square hall located in the 3-
dimensional maze introduced in Experiment 2. It can be considered a scene because
users are not allowed to move around. As in Experiment 2, the scene uses a first-person
perspective. As Figure 9 shows, two keys are generated one after the other. Subjects are
asked to drag each key to the corresponding lock in order to open the door. After the
first door is opened, the first key disappears automatically and the second key is
displayed on the screen. To save time in this experiment, after launching the virtual
maze the researcher presses key ‘2’ to switch the camera to this scene.
There are three interaction events in this precise manipulation task: controlling the
cursor, selecting a key, and grabbing a key. As in Experiment 2, the experiment does
not use a Finite State Machine (FSM) as the mathematical model, so users need to hold
the input command if they want the corresponding interaction event to continue.
3.3.1.2 Settings of the Tested Devices
The three tested devices in this experiment are the MYO armband, the Kinect sensor
and the mouse. For each device, the interaction events mentioned in the previous sub-
section are mapped to the corresponding gestures or keys.
Firstly, the settings of the MYO armband are shown in Table 3. As in Experiment 2, an
‘Unlock’ event is set in MYO mode to reduce misuse. However, to simplify the
manipulation, the ‘Unlock’ event shares its input gesture with the ‘Toggle Mouse’
event: when the user performs the ‘Fingers Spread’ gesture, the MYO armband is
unlocked and the user can control the cursor with the arm. Moreover, to keep an event
continuing, users should hold the gesture until they want the event to stop. While the
‘Grab’ event continues, users can drag the key by moving the cursor.
Gesture Interaction Event
Fist Grab
Fingers Spread Unlock and Toggle Mouse
Double Tap Select
Table 3: Interaction Event Mapper of MYO in Experiment 3
Secondly, similar to the setting in Experiment 2, Kinect mode also uses the mouse to
trigger the interaction events, but the cursor is tracked by the vision-based sensor of the
Kinect instead of the mouse. Thus, when users use their right hand to place the cursor
on the handle of the key, they can press the left mouse button to trigger the ‘Select’
event. They can then hold both the left and right mouse buttons to trigger the ‘Grab’
event in order to drag the key to its corresponding lock.
Thirdly, the mouse is set up in the conventional sense. When the left button is pressed
and the cursor is on the handle of the key, the key is selected. If both the left and right
buttons are then held, the key is dragged as the cursor moves.
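The select-then-grab behaviour shared by the three devices can be sketched as a small update routine. The class, function and tolerance value below are illustrative assumptions, not the experimental code:

```python
class Key3D:
    """Hypothetical sketch of the key object in the Experiment 3 scene."""
    def __init__(self, handle_pos):
        self.handle_pos = handle_pos   # (x, y) of the key's handle
        self.selected = False
        self.held = False

def update(key, cursor, left_down, right_down, tolerance=10):
    """Apply the select/grab rules: select while the left button is held
    over the handle; grab (drag) while both buttons stay held."""
    over = (abs(cursor[0] - key.handle_pos[0]) <= tolerance and
            abs(cursor[1] - key.handle_pos[1]) <= tolerance)
    # Selection continues while the left button is held, either over the
    # handle or while the key is already grabbed.
    key.selected = left_down and (over or key.held)
    key.held = key.selected and right_down
    if key.held:
        key.handle_pos = cursor  # the key follows the cursor while grabbed
```

Releasing either button ends the corresponding event, which matches the hold-to-continue design (no FSM) described above.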
Figure 9: 3D Scene for Experiment 3
3.3.2 Experimental Data Collection
This sub-section introduces the types of data collected in Experiment 3 and the method
used in data collection. The following types of data are considered to be meaningful for
the evaluation of the performance of the tested devices in precise manipulation.
3.3.2.1 Moving Range of Arm
Since subjects need to use the motion of their arms to control the cursor on the screen
when they are using MYO armband and Kinect sensor to perform the task in this
experiment, therefore monitoring the moving range of subjects’ arms is meaningful to
the evaluation. To calculate the moving range, the Euler Angle in 3-dementional
Euclidean space is used. Euler Angle uses 3 angles to describe the orientation of a rigid
body in 3-demntional Euclidean space [8]. The angle α, β, γ shown in Figure 10 respect
to the parameter yaw, roll and pitch used in this experimental code. Since subjects do
not need to roll their wrists in this test, therefore parameter roll is not taken account
into the evaluation. However, to ensure the data integrity, roll angle is still collected in
the experiment data. In [8], the researchers built a device to emulate upper body motion
in a virtual 3D environment and used tri-axial accelerometers to detect human motions,
which is similar to the idea of Experiment 3. The measurement method used in [8] is
also reasonable to be applied into this experiment. That is, since each user has different
Page 24
height and length of arm, it is hard to compare the Euler angles among numerous
subjects. Therefore, the Euler angles in radian need to be converted to a scale in order
to make the evaluation more reasonable and convincing. By using the formula provide
by Thalmic Lab in [10], the angles in this experiment can be converted into a degree
from 0 to 18.
Scale_roll = (Angle_roll + π) / (2π) × 18
Scale_pitch = (Angle_pitch + π/2) / π × 18
Scale_yaw = (Angle_yaw + π) / (2π) × 18
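The three conversions can be written as a single helper, since each maps an angle from its native range onto the 0–18 scale. A sketch in Python (function names are illustrative):

```python
import math

def to_scale(angle, half_range):
    """Map an angle in radians from [-half_range, +half_range] onto the
    0-18 scale used to compare subjects of different heights and arm
    lengths. A hypothetical sketch of the conversion, not the thesis code."""
    return (angle + half_range) / (2 * half_range) * 18

def roll_scale(angle):   # roll spans [-pi, pi]
    return to_scale(angle, math.pi)

def pitch_scale(angle):  # pitch spans [-pi/2, pi/2]
    return to_scale(angle, math.pi / 2)

def yaw_scale(angle):    # yaw spans [-pi, pi]
    return to_scale(angle, math.pi)
```

A centred arm (angle 0) lands at 9, the midpoint of the scale, and the two extremes of each range land at 0 and 18.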
In MYO SDK 0.8.1, the developers use a quaternion to calculate the roll, pitch and yaw
angles. The parameters of the quaternion are x, y, z and w: the component w represents
the scalar part of the quaternion, and x, y and z represent its vector part [9]. The roll,
pitch and yaw angles are calculated with the formulas provided by the developers in
[10]. After the angles are calculated, the formulas above convert each angle from
radians to the common scale.
Angle_roll = atan2(2(wx + yz), 1 − 2(x² + y²))
Angle_pitch = asin(max(−1, min(1, 2(wy − zx))))
Angle_yaw = atan2(2(wz + xy), 1 − 2(y² + z²))
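A direct transcription of the standard quaternion-to-Euler formulas into Python, assuming a unit quaternion (w, x, y, z) as delivered by the SDK (a sketch for illustration, not the experimental code):

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w scalar, x/y/z vector) to roll, pitch
    and yaw in radians. The asin argument is clamped to [-1, 1] to guard
    against floating-point drift, as in the SDK formulas."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```

For example, a pure rotation of 90° about the vertical axis yields yaw = π/2 with zero roll and pitch.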
For the Kinect SDK in [11], unfortunately, the Euler-angle library can only be used to
track the pose of the head rather than the hand. However, since wearing the MYO
armband does not affect the pattern recognition of the Kinect sensor, subjects are asked
to wear the MYO armband so that the Euler angles of their arms can be calculated while
the virtual maze is in Kinect mode.
Figure 10: Euler Angles in 3D Euclidean Space [credit: Wikipedia, Euler Angles]
3.3.2.2 Interaction Events and Time
Firstly, as in sub-section 3.2.2.1, when the subject triggers an interaction event, a clock
function is activated and keeps counting until the virtual-maze program is closed. The
clock function is reactivated every 0.02 seconds. Each time the clock function fires, the
experimental program also records the interaction data, which includes the status of the
key (held/ not held), the type of event (select/ grab) and the corresponding time. In
addition, in MYO and Kinect mode, the scaled values of the three Euler angles are
recorded. Lastly, in MYO mode, the interaction data also includes the status of the
armband (locked/ unlocked), the hand the armband is worn on (R/ L) and the gesture
currently being performed (rest/ fist/ fingers spread/ double tap). Since the ‘Wave Left’
and ‘Wave Right’ gestures are not mapped to any interaction event in this experiment,
the MYO armband gives no feedback for these two gestures.
Pseudo Code of Collecting Euler Angle and Interaction Data
InputMode = {MOUSE, MYO, KINECT}
MOUSE = CursorPosition
MYO = (Status, Hand, Gesture, EulerAngle, CursorPosition)
KINECT = (EulerAngle, CursorPosition)
Status = {unlock, lock}
Hand = {L, R}
Gesture = {rest, fist, fingers spread, double tap}
EulerAngle = (rollScale, pitchScale, yawScale)
CursorPosition = (X, Y)
Event = {SELECT, GRAB}
Key = {held, not held}

Clock clock = new Clock
while virtual maze is launched
    switch InputMode
        case MOUSE:
            StreamWriter file = new StreamWriter("MOUSE_Ex3_InteractionData.txt")
            if Event is triggered
                if triggerTime ≥ 0.02 sec
                    file.write(clock.elapsedTime() + Key + Event + CursorPosition.X + CursorPosition.Y)
                    triggerTime.Clear()
                EndIf
            EndIf
            Break
        case KINECT:
            StreamWriter file = new StreamWriter("Kinect_Ex3_InteractionData.txt")
            if Event is triggered
                if triggerTime ≥ 0.02 sec
                    file.write(clock.elapsedTime() + Key + Event + CursorPosition.X + CursorPosition.Y + EulerAngle.rollScale + EulerAngle.pitchScale + EulerAngle.yawScale)
                    triggerTime.Clear()
                EndIf
            EndIf
            Break
        case MYO:
            StreamWriter file = new StreamWriter("MYO_Ex3_InteractionData.txt")
            if Event is triggered
                if triggerTime ≥ 0.02 sec
                    file.write(clock.elapsedTime() + Key + Event + MYO.Status + MYO.Hand + MYO.Gesture + CursorPosition.X + CursorPosition.Y + EulerAngle.rollScale + EulerAngle.pitchScale + EulerAngle.yawScale)
                    triggerTime.Clear()
                EndIf
            EndIf
            Break
    EndSwitch
EndWhile
*Note: The settings of the interaction events are contained in the virtual maze which is not listed in this pseudo code.
3.3.2.3 Error Rate
As in Experiment 2, the error rate in this experiment indicates the recognition errors of
the tested devices. The errors are identified in the same way: a camera records a video
of each subject, and the video is reviewed to identify the errors.
3.3.2.4 Subjective Evaluation
After completing this test, subjects are asked to give a subjective evaluation of their
performance with each device used in Experiment 3. As in Experiment 2, the grades to
choose from are ‘Excellent’, ‘Good’, ‘Average’, ‘Poor’ and ‘Very Poor’. Moreover,
they are asked to choose their favourite device for precise manipulation and to list the
reasons for their choice.
3.4 Assessment on Other General Aspects
Some other questions are listed in the questionnaire. Before doing the experiments,
subjects fill out their name, gender, date of birth and contact number, and answer some
pre-experiment questions, including “How many years have you used a computer with
keyboard and mouse?”, “Have you used any other NUI input device before?” and
“Have you used the MYO armband or Kinect sensor before?”. These two parts aim to
investigate the subjects’ backgrounds and provide more dimensions for the data
evaluation in the next chapter.
Apart from that, after completing all three experiments, the subjects are asked to give a
subjective assessment of the overall performance of the MYO armband and the Kinect
sensor. They are also asked whether they would be willing to use the MYO armband or
the Kinect sensor to replace the mouse and keyboard in the future. The post-experiment
questions aim to investigate the user experience from the subjects’ perspective, which
may provide a different view from the evaluation based on the data collected by the
experimental program.
3.5 Devices Specification and Experimental Regulations
The computer used in the three experiments is an Asus F550CC; its product
specification is shown in Appendix A. When subjects use the MYO armband or the
Kinect sensor to perform a task, they are required to stand at a distance of approximately
1.5 metres from the computer screen. Moreover, no barrier is allowed to block the
subject’s view, arm or the lens of the Kinect sensor. Lastly, the experiments follow the
National Statement on Ethical Conduct in Research Involving Humans.
Chapter 4
Result Analysis
This chapter discusses the experimental data collected in the three HCI experiments
introduced in the previous chapter. The main purpose of this phase is to assess the
performance and user experience of the MYO armband and the Kinect based on the
experimental data. Due to the time constraint, little time was left after setting up the
virtual environment; moreover, because the three experiments take each subject more
than one hour on average, only five subjects have taken part in the experiments so far.
The data analysis in this chapter is based on the data set of these five subjects. However,
as the environment and the device connections have been built, later research can
continue from the results of this project.
The subjects who took part in the experiments consist of 1 female and 4 males, aged
from 22 to 26. All of the subjects are novice users of the MYO armband, although one
of them had used a Kinect sensor for about an hour for entertainment. Moreover, all of
them have used a keyboard and mouse for more than 10 years, so they can be considered
expert users of traditional input devices. Lastly, during the three experiments the four
male subjects, all right-handed, used their right hand to hold the mouse and to wear the
MYO armband. The female subject wore the MYO armband on her left hand but used
her right hand to hold the mouse.
4.1 Result Analysis of Experiment 1
This section explains the results of Experiment 1. The result analysis is based on three
aspects including the result of proficiency test, the total training time for each subject
and their first impression of MYO armband and Kinect sensor.
4.1.1 Evaluation of Proficiency Test
As mentioned in the previous chapter, two types of data were collected in this test. For
the proficiency test of the MYO armband, the error rate (i.e. ‘ErrorRate’) of performing
the five pre-set gestures and the completion time (i.e. ‘CursorControlTime’) of the
cursor control test were collected. For the proficiency test of the Kinect sensor, only
‘CursorControlTime’ was collected. Tables 4 and 5 show the results of the proficiency
test.
Table 4 shows that no subject failed the test of performing the five pre-set gestures.
However, the error rate is not satisfactory, because two of the subjects completed the
task with a 20% error rate, the maximum acceptable value.
Moreover, 3 subjects made mistakes in performing the ‘Wave In’ gesture. This does not
necessarily mean that they are unfamiliar with this gesture: the data from the later
experiments show that the recognition accuracy of ‘Wave In’ is much lower than that
of the other four gestures.
Table 5 shows that all of the subjects spent less time on this task when using the MYO
armband, which suggests that the MYO armband performs better in cursor control. This
conjecture is supported by the data collected in Experiment 3. It is also important to
note that Subject 5 spent 58.88 seconds on the Kinect cursor control test, which is much
more than the time the other subjects spent. Even though it is still within the tolerance
range, it strengthens the conclusion that the Kinect performs worse in cursor control.
Subject No Total Training Time Error Rate Incorrect Gesture
1 1 20% Fingers Spread, Wave Out
2 1 20% Double Tap, Wave In
3 1 0% N/A
4 1 10% Wave In
5 1 10% Wave In
Table 4: Error Rate & Incorrect Gesture for Proficiency Test of MYO armband
Subject No CursorTimeMYO CursorTimeKinect
1 12.39 sec 21.35 sec
2 17.61 sec 19.78 sec
3 14.41 sec 21.43 sec
4 7.88 sec 26.18 sec
5 15.11 sec 58.88 sec
Average Time 13.48 sec 29.52 sec
Table 5: Completion Time in Cursor Control Test
4.1.2 Evaluation of Training Time
The total training time of each subject is shown in Table 6. Subjects clearly spent less
time on the training for the Kinect sensor. This is easily explained: the training for the
MYO consists of two tests, whereas the training for the Kinect contains only one, so
the average training time for the Kinect is much lower. This suggests that the MYO has
lower user-friendliness due to the longer training time, which matches the subjects’
subjective evaluations of the user-friendliness of the MYO armband and the Kinect
sensor.
Subject No TrainingTimeMYO TrainingTimeKinect
1 124.85 sec 38.70 sec
2 119.95 sec 63.42 sec
3 97.23 sec 82.74 sec
4 92.45 sec 81.51 sec
5 118.95 sec 79.58 sec
Average Time 110.69 sec 69.19 sec
Table 6: Total Training Time for MYO armband and Kinect sensor
4.1.3 Evaluation of User-friendliness
Table 7 shows the subjects’ evaluation of user-friendliness. The mode value for the
MYO is 3, while that for the Kinect is 4. Even though other factors, such as personal
interest, could affect their choices, it can still be concluded that the Kinect is more user-
friendly because it requires less training. Moreover, the result in 4.1.1 may strengthen
this point of view: four of the subjects made mistakes in the gesture-performing test,
and three of them failed in performing the ‘Wave In’ gesture, so this failure experience
could have caused a negative impression of the MYO armband.
Subject No MYO Kinect
1 3 (Uncertain) 2 (Disagree)
2 3 (Uncertain) 4 (Agree)
3 3 (Uncertain) 5 (Strongly Agree)
4 3 (Uncertain) 4 (Agree)
5 2 (Disagree) 3 (Uncertain)
Mode 3 (Uncertain) 4 (Agree)
Table 7: Subject’s Rate for the User-friendliness of MYO and Kinect
4.2 Result Analysis of Experiment 2
This section explains the results of Experiment 2. The analysis covers four aspects: the
number of gestures used in the task, the error rate, the time spent with each device, and
the subjects’ self-evaluation of their performance in Experiment 2.
4.2.1 Evaluation of the Number of Gestures
The total number of gestures that each subject performed with each device is shown in
Table 8. According to the shortest path listed in Figure 8 in Chapter 3, the expected
value for this task is 8. The table shows that subjects performed fewer gestures with the
keyboard than with the MYO or the Kinect; the largest number of gestures was
performed with the MYO armband. The reason for this is explained in the next sub-
section.
Subject No MYO Kinect Keyboard
1 16 13 11
2 14 11 10
3 10 10 15
4 16 12 11
5 13 17 11
Average Value 14 13 12
Table 8: The Number of Gestures Performed in Experiment 2
4.2.2 Evaluation of Error Rate
Firstly, the path that all of the subjects walked through using the keyboard generally
matches the expected path shown in Figure 8 in Chapter 3, and as Table 9 shows, the
error rate of the keyboard is 0. However, it is interesting to note that even though the
keyboard produced no errors in this task, the average number of gestures is still four
more than the expected value. This is because subjects need a period of reaction time
to receive the feedback from the screen, so they often turned through a larger angle or
moved further than they actually intended and had to adjust their direction, which
triggered more gestures.
Secondly, the error rate of the Kinect is 4.71%, which could be considered an outlier to
some extent. Subject 5 used many more gestures than the other subjects; the other four
subjects encountered no recognition errors in this task, whereas he had four. The reason
was mentioned by him in the questionnaire: because he set a very large range when
calibrating the Kinect, he was sometimes unable to move the cursor to the frame border
with his arm in order to move and turn.
Thirdly, the error rate of the MYO armband is 16.68%, which is not satisfactory. The
subjects had to adjust their direction to correct the feedback produced by incorrect
gesture recognition. Therefore, when using the MYO armband, the subjects also
performed more gestures in total than with the other two devices.
Subject No MYO Kinect Keyboard
1 3/16 0 0
2 2/14 0 0
3 1/10 0 0
4 4/16 0 0
5 2/13 4/17 0
Average Rate 16.68% 4.71% 0
Table 9: Error Rate in Experiment 2
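The average rates in Table 9 are the means of the per-subject error ratios. For instance, the MYO figure can be recomputed from the table rows as follows (a sanity-check sketch, not the analysis code):

```python
from fractions import Fraction

def average_error_rate(errors_and_totals):
    """Average the per-subject error rates (errors / gestures performed),
    as in Table 9. Exact fractions avoid rounding drift in the mean."""
    rates = [Fraction(e, t) for e, t in errors_and_totals]
    return float(sum(rates) / len(rates))

# Per-subject MYO rows from Table 9: (errors, total gestures).
myo = [(3, 16), (2, 14), (1, 10), (4, 16), (2, 13)]
print(round(average_error_rate(myo) * 100, 2))  # -> 16.68
```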
4.2.3 Evaluation of Completion Time
Table 10 shows the time each subject spent on the navigation task in Experiment 2 with
each device, and it matches the results in Tables 8 and 9. Due to the larger number of
gestures and the high error rate, using the MYO armband cost the subjects much more
time to complete the navigation task. Moreover, even though the Kinect’s error rate is
low and its average gesture count in Table 8 is only 1 more than the keyboard’s, the
average time of using the Kinect is still roughly 40 seconds more than that of the
keyboard. This is because pressing keys on the keyboard takes far less time than moving
the cursor to the borders of the frame.
Subject No MYO Kinect Keyboard
1 82.96 sec 61.06 sec 7.56 sec
2 73.77 sec 51.20 sec 10.78 sec
3 52.82 sec 33.85 sec 16.78 sec
4 92.01 sec 40.99 sec 9.35 sec
5 70.85 sec 63.08 sec 10.14 sec
Average Time 74.48 sec 50.04 sec 10.92 sec
Table 10: Completion Time in Experiment 2
4.2.4 Self-Evaluation
Table 11 shows the subjects’ self-evaluation of their performance with each tested
device in this experiment. This result matches the analyses of Tables 8, 9 and 10. From
these four tables it can be summarised that the keyboard performs best in the navigation
task, and that the Kinect performs better than the MYO because of the shorter
completion time and lower error rate.
Moreover, the comments written by the subjects provide some new viewpoints on their
user experience. Subject 3 felt dizzy when he used the MYO for the navigation task in
the 3D virtual maze; this symptom might be caused by the high error rate of the MYO
armband. In addition, Subject 5 felt nervous when the MYO armband vibrated. By the
working principle of the MYO armband, when it detects a pre-set gesture it vibrates to
give feedback to the user. However, in a task that requires frequent gesture changes,
the motor keeps vibrating, which may reduce the user experience to some extent.
Subject No MYO Kinect Keyboard
1 2 (Poor) 3 (Average) 4 (Good)
2 3 (Average) 3 (Average) 5 (Excellent)
3 3 (Average) 4 (Good) 5 (Excellent)
4 2 (Poor) 4 (Good) 5 (Excellent)
5 3 (Average) 2 (Poor) 4 (Good)
Mode 3 (Average) 3(Average)/4(Good) 5 (Excellent)
Table 11: Subject’s Self-Evaluation of the Performance in Experiment 2
4.3 Result Analysis of Experiment 3
This section explains the results of Experiment 3. The analysis covers four aspects: the
moving range of the subjects’ arms, the error rate, the time spent with each device, and
the subjects’ self-evaluation of their performance in Experiment 3.
4.3.1 Evaluation of Moving Range
In this evaluation, the pitch and yaw angles have been converted to a scale from 0 to 18.
The range of each angle is the difference between its minimum and maximum values.
From Tables 12 and 13, it can be concluded that the Kinect requires a much larger range
of pitch and yaw than the MYO armband.
Subject No (MYOmin, MYOmax) MYORange (Kinectmin, Kinectmax) KinectRange
1 (5, 9) 4 (3, 13) 10
2 (6, 9) 3 (2, 11) 9
3 (3, 9) 6 (1, 10) 9
4 (4, 9) 5 (2, 12) 10
5 (4, 9) 5 (2, 10) 8
Average Range 4.6 9.2
Table 12: Range of Pitch Angle in Experiment 3
Subject No (MYOmin, MYOmax) MYORange (Kinectmin, Kinectmax) KinectRange
1 (8, 10) 2 (7, 13) 6
2 (8, 10) 2 (6, 13) 7
3 (7, 13) 6 (7, 14) 7
4 (8, 12) 4 (7, 13) 6
5 (8, 13) 5 (7, 14) 7
Average Range 3.8 6.6
Table 13: Range of Yaw Angle in Experiment 3
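Each range in Tables 12 and 13 is simply the spread of the scaled angle over a trial. A sketch with illustrative sample values (not real subject data):

```python
def scale_range(samples):
    """Moving range of a scaled Euler angle over a trial: max minus min,
    as used in Tables 12 and 13. samples holds the 0-18 scale values
    recorded every 0.02 s; the values below are illustrative only."""
    return max(samples) - min(samples)

# e.g. a subject whose pitch scale wandered between 5 and 9:
print(scale_range([7, 5, 8, 9, 6]))  # -> 4
```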
4.3.2 Evaluation of Error Rate
Unlike in Experiment 2, the error rates of all three devices are approximately 0. It is
unsurprising that the error rates of the mouse and the Kinect are 0; however, the MYO
armband performed far better than in Experiment 2. This result could be explained in
two ways. Firstly, only three gestures (‘Fist’, ‘Double Tap’ and ‘Fingers Spread’) are
used in this experiment, and these three gestures have good recognition rates, so the
performance improves dramatically. Secondly, it is also possible that the MYO
armband is better at monitoring a sustained gesture and worse at recognising a sequence
of gestures in a short time.
4.3.3 Evaluation of Completion Time
The time each subject spent on the precise manipulation task in Experiment 3 with each
device is listed in Table 14. The result is positively correlated with the moving range
of the subjects’ arms: a larger moving range appears to increase the time spent on this
task. Since the mouse was used on a mouse pad (22 cm × 18 cm) during the experiment,
its moving range is the smallest. And since both the average pitch and yaw ranges of
the MYO are smaller than those of the Kinect, the average time of using the MYO is
also shorter.
Subject No MYO Kinect Mouse
1 20.49 sec 21.38 sec 18.06 sec
2 19.89 sec 28.41 sec 12.10 sec
3 30.26 sec 30.53 sec 19.11 sec
4 26.67 sec 34.06 sec 19.64 sec
5 27.86 sec 27.17 sec 22.09 sec
Average Time 25.03 sec 28.31 sec 18.20 sec
Table 14: Time Spent in Experiment 3
4.3.4 Self-Evaluation
Table 15 shows the subjects’ self-evaluation of their performance with each tested
device in precise manipulation. The result matches the previous analyses as well: most
subjects thought that their performance with the MYO was better than with the Kinect,
but, as in Experiment 2, the traditional input device still performed best among the three
tested devices.
Moreover, Subject 5 mentioned that the reaction speed of the Kinect sensor is slower
than the MYO’s. Similarly, Subject 1 commented that it was hard for him to control the
cursor in Kinect mode. Their comments imply that the Kinect sensor is less sensitive in
motion tracking: compared with the depth sensor in the Kinect, the 9-axis IMU can
monitor the motion of the user’s arm much more easily. This could be a competitive
advantage of EMG recognition devices over vision-based devices.
Subject No MYO Kinect Mouse
1 4 (Good) 3(Average) 5 (Excellent)
2 4 (Good) 3 (Average) 5 (Excellent)
3 3 (Average) 4 (Good) 5 (Excellent)
4 4 (Good) 3 (Average) 5 (Excellent)
5 3 (Average) 3 (Average) 5 (Excellent)
Mode 4 (Good) 3 (Average) 5 (Excellent)
Table 15: Subject’s Self-Evaluation of the Performance in Experiment 3
4.4 Analysis of Other Relevant Data
Apart from the results analysed above, the following points are also worth noting.
Firstly, at the end of Experiments 2 and 3 the subjects were asked which device they
would want to use for the navigation task and for precise manipulation. Although the
answers were keyboard and mouse, the reasons for their choices were not only the
better performance but also their strong habits of using traditional input devices.
Moreover, in the evaluation of overall performance, the rating of the Kinect is slightly
higher than that of the MYO. This reveals that even though the MYO performs better
in precise manipulation, its high error rate in the first part left the subjects strongly
dissatisfied. Last but not least, some of the subjects are willing to use the MYO or the
Kinect in the future, but only for entertainment purposes.
Chapter 5
Conclusion and Future Improvement
5.1 Conclusion
In this project, a virtual environment was developed in Visual Studio 2013 to support
the evaluation of the gesture control performance of EMG and vision-based devices.
The project chose the MYO armband released by Thalmic Lab and the Kinect sensor
by Microsoft as the representative EMG and vision-based devices to be assessed. To
enable the HCI experiments, the connection between the devices and the virtual maze
was also developed using the SDK released by each device’s developer.
To evaluate gesture control performance, three experiments with different purposes
were held. Some of the experimental data were collected by the experimental code, and
the rest were recorded in the questionnaire.
In summary, the following lessons can be drawn from the experiments. Firstly, the
Kinect sensor is more user-friendly than the MYO armband because it does not need to
train users on specific gestures. Moreover, the Kinect sensor performs better in the
navigation task, while the MYO armband is better at supporting precise manipulation.
However, both have limitations. As a typical EMG device, the MYO armband has a
high error rate when users change gestures frequently; the lower sensitivity of the
Kinect shows that vision-based devices have slower reaction speeds than EMG devices
in motion tracking. Last but not least, compared to traditional devices, both EMG and
vision-based devices show worse performance and a poorer user experience. Hence,
NUI devices still have a long way to go to break the dominance of traditional input
devices.
5.2 Future Improvement
First of all, due to time constraints, the diversity of the subjects was insufficient.
Therefore, one item of future work should be recruiting more subjects from diverse
backgrounds. For example, it would be valuable to investigate the user experience of
young children who do not have a long history of using a mouse and keyboard.
Secondly, the latest version of the Kinect sensor has added a library to recognise hand-open
and hand-close gestures. Therefore, the user experience of the Kinect could be enhanced if
this gesture recognition function were added to the virtual maze in the future.
Finally, as the SDKs of these two devices become increasingly powerful, more
parameters of the Kinect sensor and the MYO armband could be monitored in future
experiments in order to improve their performance.
References
[1] Mann, S., 2001. Intelligent Image Processing. John Wiley & Sons, Inc.
[2] Petrovski, A., 2014. Investigation of natural user interfaces and their application in
gesture-driven human-computer interaction. In: Information and Communication
Technology, Electronics and Microelectronics (MIPRO), 2014 37th International
Convention on, Opatija, 26-30 May 2014. IEEE, pp. 788-794.
[3] Gary, K., 2004. Research Methods in Biomechanics. 2nd ed. Champaign, IL:
Human Kinetics Publishers.
[4] Doumanoglou, A., Asteriadis, S., Alexiadis, D., Zarpalas, D. and Daras, P., 2013. A
Dataset of Kinect-based 3D Scans. In: IVMSP Workshop, 2013 IEEE 11th,
doi:10.1109/IVMSPW.2013.6611937, pp. 1-4.
[5] Chen, W., Lin, Y. and Yang, S., 2010. A generic framework for the design of
visual-based gesture control interface. In: Industrial Electronics and Applications
(ICIEA), 2010 the 5th IEEE Conference, Taichung, Taiwan, 15-17 June 2010. IEEE,
pp. 1522-1525.
[6] Lee, B., Isenberg, P., Riche, N.H. and Carpendale, S., 2012. Beyond Mouse and
Keyboard: Expanding Design Considerations for Information Visualization
Interactions. IEEE Transactions on Visualization and Computer Graphics, 18(12),
pp. 2689-2698.
[7] Tao, G., Archambault, P.S. and Levin, M.F., 2013. Evaluation of Kinect Skeletal
Tracking in a Virtual Reality Rehabilitation System for Upper Limb Hemiparesis.
In: Virtual Rehabilitation (ICVR), 2013 International Conference,
doi:10.1109/ICVR.2013.6662084, pp. 164-165.
[8] Abdoli-Eramaki, M. and Krishnan, S., 2012. Human Motion Capture Using Tri-
Axial Accelerometers. Multidisciplinary Engineering, [Online], 100(10), pp. 1-49.
[9] Thalmic Lab, 2015. MYO SDK Sample Code. [Online] Available at:
https://developer.thalmic.com/docs/api_reference/platform/hello-myo_8cpp-
example.html [Accessed 29 May 2015].
[10] Thalmic Lab, 2015. Myo SDK. [Online] Available at:
https://developer.thalmic.com/docs/api_reference/platform/index.html [Accessed 29
May 2015].
[11] Microsoft Corporation, 2014. Kinect for Windows SDK. [Online] Available at:
http://msdn.microsoft.com/en-us/library/hh855347.aspx [Accessed 29 May 2015].
[12] Peters, T., 2014. An Assessment of Single-Channel EMG Sensing for Gestural
Input. Dartmouth Computer Science Technical Report, 2015(767), pp. 1-14.
[13] DiFilippo, N.M. and Nicholas, M., 2015. Characterization of Different Microsoft
Kinect Sensor Models. IEEE Sensors Journal, PP(99), p. 1.
Appendix A: Experimental Devices Specification
1. Experimental Computer:
Product Name: ASUS F550CC
Operating System: Windows 8.1
Processor: Intel(R) Core(TM) i7-3537U
RAM: 12.0 GB
Graphic Card: NVidia® GeForce™ GT 720M 2GB
USB Ports: 1 × USB 2.0 and 1 × USB 3.0
2. Experimental Keyboard: ASUS 348mm keyboard with 19mm full size key pitch,
integrated Numeric keypad
3. Experimental Mouse:
Product Name: Logitech Wireless Mouse M235
Mouse Dimensions (height x width x depth): 55 mm x 95 mm x 38.6 mm
Mouse Weight (including battery): 84 g (2.96 oz)
Sensor technology: Advanced Optical Tracking
Sensor Resolution: 1000 dpi
Number of buttons: 3 (Left Button, Right Button, Scroll Wheel)
Wireless operating distance: Approx. 10m*
Wireless technology: Advanced 2.4 GHz wireless connectivity
4. MYO armband:
Size: Expandable between 7.5 - 13 inches (19 - 34 cm) forearm circumference
Weight: 93 grams
Thickness: 0.45 inches
Sensors: Medical-grade stainless steel EMG sensors; highly sensitive nine-axis
IMU containing a three-axis gyroscope, three-axis accelerometer and three-axis
magnetometer
Processor: ARM Cortex M4 Processor
Haptic Feedback: Short, Medium, Long Vibrations
5. Kinect Sensor for Xbox 360:
Viewing angle: 43° vertical by 57° horizontal field of view
Vertical tilt range: ±27°
Frame rate (depth and colour stream): 30 frames per second (FPS)
Appendix B: Experimental Survey
Experimental Survey for Gesture Control in
Virtual Environment
Details of Subject
Subject Number: ___________ Full Name: _______________________________
Gender: Male / Female Date of Birth (dd/mm/yyyy): ____/____/______
Contact Number: ______________________________________________________
Questionnaire
1. How many years have you used a computer with a keyboard and mouse?
2. Have you used any other NUI input device before? If so, please list it here.
3. Have you used the MYO armband before? If so, please state how long you have used it.
4. Have you used the Kinect sensor before? If so, please state how long you have used it.
5. Do you think the MYO armband is user-friendly?
1 2 3 4 5
Strongly Disagree Disagree Uncertain Agree Strongly Agree
6. Do you think the Kinect sensor is user-friendly?
1 2 3 4 5
Strongly Disagree Disagree Uncertain Agree Strongly Agree
7. Please rate your performance using the MYO armband in Experiment 2
1 2 3 4 5
Very Poor Poor Average Good Excellent
Any comments in this part?
_____________________________________________________________________
_____________________________________________________________________
8. Please rate your performance using the Kinect sensor in Experiment 2
1 2 3 4 5
Very Poor Poor Average Good Excellent
Any comments in this part?
_____________________________________________________________________
_____________________________________________________________________
9. Please rate your performance using the keyboard in Experiment 2
1 2 3 4 5
Very Poor Poor Average Good Excellent
Any comments in this part?
_____________________________________________________________________
_____________________________________________________________________
10. Which one (MYO/Kinect/Keyboard) would you prefer to use for the navigation
task? And why?
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
11. Please rate your performance using the MYO armband in Experiment 3
1 2 3 4 5
Very Poor Poor Average Good Excellent
Any comments in this part?
_____________________________________________________________________
_____________________________________________________________________
12. Please rate your performance using the Kinect sensor in Experiment 3
1 2 3 4 5
Very Poor Poor Average Good Excellent
Any comments in this part?
_____________________________________________________________________
_____________________________________________________________________
13. Please rate your performance using the mouse in Experiment 3
1 2 3 4 5
Very Poor Poor Average Good Excellent
Any comments in this part?
_____________________________________________________________________
_____________________________________________________________________
14. Which one (MYO/Kinect/Mouse) would you prefer to use for precise
manipulation like the task in Experiment 3? And why?
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
15. Please rate the overall performance of the MYO armband
1 2 3 4 5
Very Poor Poor Average Good Excellent
16. Please rate the overall performance of the Kinect sensor
1 2 3 4 5
Very Poor Poor Average Good Excellent
17. Would you be willing to use the MYO armband to replace the mouse and
keyboard in the future?
1 2 3 4 5
Strongly Disagree Disagree Uncertain Agree Strongly Agree
Please write down the reason for your choice:
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
18. Would you be willing to use the Kinect sensor to replace the mouse and
keyboard in the future?
1 2 3 4 5
Strongly Disagree Disagree Uncertain Agree Strongly Agree
Please write down the reason for your choice:
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________