A Thesis presented for the degree of
Master of Science in Computer Science
MeMoML
MeModules Markup Language
Towards a Meta-Language for a Tangible User Interface (TUI) Toolkit
David Bächler
DIVA research group
Department of Informatics
University of Fribourg
Information and Multimedia
Systems group
University of Applied Sciences
of Western Switzerland, Fribourg
September 2006
Supervisors:
Elena Mugellini
(senior assistant, University of Applied Sciences of Western Switzerland)
Omar Abou Khaled
(professor, University of Applied Sciences of Western Switzerland)
Rolf Ingold
(professor, University of Fribourg)
I may not have gone where I intended to go,
but I think I have ended up where I needed to be.
Douglas Adams
MeMoML - MeModules Markup Language
Abstract
The MeMoML master's project takes place in the framework of the MeModules project¹, which is about designing and implementing a system for creating and managing tangible shortcuts to multimedia information. The project focuses on two main goals: (a) the control of devices in everyday life and (b) the categorization of information in order to improve information access and retrieval.
MeModules are tangible links (physical objects) between the human memory and reachable information. Every tangible link needs to be described via a predefined scenario which involves three main parts: 1) the MeModule, i.e. the tangible link; 2) the Player, i.e. the electronic device where the information is played; and 3) the Result, i.e. what the user perceives as a result of the interaction between the MeModule and the Player, hence the data and its action.
In order to offer a flexible, device-independent and easy-to-use language for describing scenarios, a formal markup language, called MeMoML, is proposed. The MeMoML-GUI allows users to model scenarios via drag and drop without directly touching the XML code. Further, there is a dedicated MeModules engine (MeMoEngine), which identifies objects and executes the actions described in the scenario.
Keywords: Tangible User Interface (TUI), XML, Toolkit, MeMoML, MeModules
¹ MeModules: http://www.memodules.ch
Acknowledgments
I would like to thank:
Elena Mugellini (senior assistant, University of Applied Sciences, Fribourg)
It was a pleasure to work closely with her.
Omar Abou Khaled (professor, University of Applied Sciences, Fribourg; co-leader
of the Multimedia and Information systems group)
From the first day on, I could always count on his support.
Rolf Ingold (professor, University of Fribourg; leader of the DIVA research group)
Officially responsible for this master's project, he agreed to the project being carried out at the University of Applied Sciences.
Denis Lalanne (senior assistant, University of Fribourg)
We had interesting discussions and an interesting tangible interfaces seminar.
Sandro Gerardi (software engineer, University of Applied Sciences, Fribourg)
We had long discussions about the integration with another sub-project. He will continue working with the code.
Other supporters:
Paul Naggear (trainee, University of Applied Sciences, Fribourg)
Khristo EL Soury (trainee, University of Applied Sciences, Fribourg)
Bruno Dumas (assistant, University of Fribourg)
Florian Evequoz (assistant, University of Fribourg)
My family
My colleagues
Terms and expressions
AR: Augmented reality
GUI: graphical user interface
MeMoML: MeModules Markup Language
MeMoML Environment: contains the different tagged objects and devices that can perform actions on data
MeMoML Scenario: the main component of MeMoML; the objects are grouped
into scenarios which form the environment (all possible events)
MeMo - MeModules: Memory modules, physical shortcuts to (multimedia)
information
MeMoEngine: interprets the MeMoML document and executes the defined actions
MeMoML-GUI: graphical user interface to automatically generate a MeMoML document
MeMoML-GUI MeModule: source; the initial physical object with an RFID tag
MeMoML-GUI Player: device; the physical object that can show, play, etc. data
MeMoML-GUI Result: target + action; the chosen action for the targeted data
RFID - Radio Frequency IDentification: objects can be identified by an RFID tag that contains specific information (an ID number)
RFID-Tag: a kind of sticker with an ID
TouchMe: a separate project; like an extended MeMoEngine
TUI: tangible user interface
UbiComp: ubiquitous computing; computers are everywhere in our daily life
VR: Virtual reality
WYSIWYG: what you see is what you get
Contents
Abstract ii
Acknowledgments iv
1 Introduction 1
1.1 Digital overload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Tangible User Interfaces (TUIs) . . . . . . . . . . . . . . . . . . . . . 1
1.3 MeModules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3.1 MeMoML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Content overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 The MeModules Project 4
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Tangible user interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3.1 About TUIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3.2 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3.3 Classification . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Three principles of MeModules . . . . . . . . . . . . . . . . . . . . . 8
2.5 Main functioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Markup language for tangible user interfaces . . . . . . . . . . . . . . 11
2.6.1 Heterogeneous environments . . . . . . . . . . . . . . . . . . . 11
2.6.2 Multitude of parameters for devices . . . . . . . . . . . . . . . 12
2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3 State of the art 13
3.1 Markup Languages for TUIs . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 TouchMe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.2 TUIML (TUIMS) . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.3 UserML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.4 MRIML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.1.5 PML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Multimodal markup languages . . . . . . . . . . . . . . . . . . . . . . 20
3.2.1 EMMA (W3C) . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.2 M3L . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2.3 Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4 MeModules Markup Language (MeMoML) 26
4.1 Document Engineering modeling approach . . . . . . . . . . . . . . . 26
4.2 Definition of MeMoML . . . . . . . . . . . . . . . . . . . . . . . . 27
4.3 Definition of scenarios . . . . . . . . . . . . . . . . . . . . . . 27
4.4 MeMoML model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4.1 MeMoEnvironment . . . . . . . . . . . . . . . . . . . . . . . . 28
4.4.2 MeMoConfig . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.4.3 Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4.4 Target and source . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.5 Scenario examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.5.1 Sample scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.5.2 Sample scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.6 Modeling process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.6.1 At the beginning . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.6.2 First versions . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.6.2.1 Configuration file . . . . . . . . . . . . . . . . . . . 35
4.6.2.2 Object, communication and action part . . . . . . . 35
4.6.3 Improved version . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.6.4 During the implementation phase of the MeMo-GUI and the
MeMoEngine . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.6.5 During the integration phase with the TouchMe project . . . . 36
4.7 Evolution of MeMoML . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.7.1 Scenario equation . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.7.2 MeMoML improvements . . . . . . . . . . . . . . . . . . . . . 38
4.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5 MeMoML: Implementation 42
5.1 MeMoML-GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.1.1 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.1.2 Thoughts about the user interface . . . . . . . . . . . . . . . . 43
5.2 Demo-Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.2.1 What the user sees . . . . . . . . . . . . . . . . . . . . . . . . 44
5.2.2 What is created in the background . . . . . . . . . . . . . . . 46
5.3 MeMoML-GUI evolution . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.4 MeMoEngine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.4.2 Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.4.3 Communication process . . . . . . . . . . . . . . . . . . . . . 48
5.4.4 Description of configuration . . . . . . . . . . . . . . . . . 48
5.4.5 Functioning of the MeMoEngine . . . . . . . . . . . . . . . . . 48
5.4.6 MeModules console . . . . . . . . . . . . . . . . . . . . . . . . 49
5.5 UML diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.5.1 Use case diagram . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.5.2 Sequence diagram . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.5.3 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.5.4 The central package impl . . . . . . . . . . . . . . . . . . . . . 54
5.6 Used Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6 Conclusions and future work 56
6.1 The MeMoML project . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.2 Possible future work and extensions . . . . . . . . . . . . . . . . . . . 57
6.2.1 Integration with other software . . . . . . . . . . . . . . . . . 57
6.2.2 MeMoML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.2.3 The MeMo-GUI . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.2.4 The MeMoEngine . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.3 The MeModules project in general . . . . . . . . . . . . . . . . . . . . 59
6.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Bibliography 61
Appendix 67
A Auxiliary documents 67
A.1 Detailed UML diagrams . . . . . . . . . . . . . . . . . . . . . . . . . 68
A.1.1 Class diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 68
A.1.2 Sequence diagram . . . . . . . . . . . . . . . . . . . . . . . . 69
A.2 CD-ROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
A.3 License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
List of Figures
2.1 Logo MeModules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 MeModules research axes . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 MeModules principles . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 The MeModules idea . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1 TouchMe logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Functioning of TouchMe . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3 TUIMS architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.4 TUIMS graphical editor . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.5 UserML used in an editing tool . . . . . . . . . . . . . . . . . . . . . 18
3.6 Usage of a MRIML document . . . . . . . . . . . . . . . . . . . . . . 19
3.7 PML; room interacting with Little Red Riding Hood . . . . . . . . . 20
3.8 EMMA in the multimodal framework . . . . . . . . . . . . . . . . . . 21
3.9 SmartKom logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.10 How SmartKom works . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 MeMoEnvironment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2 MeMoConfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3 SimpleScenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4 Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.5 Data part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.6 Contact part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.7 Application part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.8 MeMoML v2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.9 MeMoScenario v2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.10 MeMoEnvironment v2 . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.1 GUI prototype with freely placeable objects . . . . . . . . . . . . 43
5.2 GUI prototype with line-connected objects . . . . . . . . . . . . . . . 43
5.3 MeMoGUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4 Part of a MeMoML file . . . . . . . . . . . . . . . . . . . . . . . 46
5.5 Puzzle user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.6 MeModules console v1 . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.7 MeModules console v2 . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.8 Use case diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.9 Sequence diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.10 UML overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Chapter 1
Introduction
1.1 Digital overload
We face an information overload in our daily lives, and the amount of data keeps increasing. Information is becoming more and more virtual, and thus people often experience the "lost in infospace" effect. Access to digital data is in general not done in a very natural manner. What we often miss are tangible, physical shortcuts (reminders) to our information, like used books on our shelves [27, 33].
1.2 Tangible User Interfaces (TUIs)
The last decade has seen a lot of new research aimed at fusing the physical and digital worlds. This work has led to the development of a collection of interfaces allowing users to take advantage of their own skills and to interact collaboratively with augmented physical objects in order to access and manipulate digital information. These interfaces are referred to as Tangible User Interfaces (TUIs).
Interaction with TUIs is based on users' existing skills of interaction with the real world. This offers the promise of interfaces that are quicker to learn and easier to use. However, these interfaces are currently more challenging to build than traditional user interfaces.
1.3 MeModules
The MeModules project tries to design and implement a system for creating and managing tangible shortcuts or reminders to (multimedia) information. The traditional way of communicating with computers should be changed, or at least enhanced: from the traditional graphical user interface (GUI) towards a tangible user interface (TUI). The interaction with electronic devices should become more natural than is currently the case. Instead of us adapting to the devices, the project tries to make the interaction with these electronic devices more natural.
1.3.1 MeMoML
Why do we need yet another markup language? If we are working with tangible interfaces, we must precisely define the environment that we are dealing with. The information has to be structured because different applications should speak the same "language". Information should be shared in an XML-like manner, hence we use a document engineering approach [8].

All shortcuts to information and actions have to be described, and the physical devices must be configured. All this should be done in a formal way. Therefore we need a kind of intermediary markup (or meta-) language to describe such scenarios. Every tangible link is described via a predefined scenario. This scenario consists of physical objects, communication processes and targeted actions. This is done in an XML-like manner.
The field of markup languages for tangible user interfaces is still a rather new area. MeMoML should be a flexible, device-independent and easy-to-use language (an XML dialect) for describing such scenarios, while at the same time being as general as possible.
1.4 Content overview
Chapter 1 contains a short overview of the different elements of the document. In chapter 2, the reader finds information about the MeModules project in general, as MeMoML is just a part of the bigger project. Chapter 3 is about the state of the art of markup languages for tangible interfaces. The MeModules Markup Language (MeMoML) is described in detail in chapter 4; this is the main part of the thesis. In chapter 5, the implementation of MeMoML is explained, i.e. the software that was developed to create and show MeMoML files. Chapter 6 finally contains the conclusions of the work: a conclusion to the MeMoML project, possible future work and extensions of MeMoML, as well as a short conclusion to the MeModules project in general.
Chapter 2
The MeModules Project
This chapter gives a general overview of the MeModules [17] project. The main goals and functioning are described, and some information about tangible user interfaces in general is given.
2.1 Introduction
The MeModules project [17] is about designing and implementing a system for creating and managing tangible shortcuts to multimedia information. It is a two-year project of the University of Applied Sciences of Western Switzerland [34] and the University of Fribourg [38] (plus partners).
Figure 2.1: Logo MeModules
This project focuses on two main goals: (a) information categorization in order to improve information access and retrieval, and (b) the control of devices in everyday life. MeModules are tangible shortcuts or reminders to multimedia information. The physical objects are associations to electronic data.
2.2 Goals
We face an information overload in our daily lives, and the amount of data keeps increasing. Information is becoming more and more virtual, and thus people often experience the "lost in info space" effect. What we often miss are tangible shortcuts (reminders) to our information, like used books on our shelves [27, 33].
The MeModules project tries to design and implement a system for creating and managing tangible shortcuts or reminders to (multimedia) information. The traditional way of communicating with computers should be changed, or at least enhanced: from the traditional graphical user interface (GUI) towards a tangible user interface (TUI). The interaction with electronic devices should become more natural than is currently the case. Instead of us adapting to the devices, the project tries to make the interaction with these electronic devices more natural.
Information should also be categorized in order to improve information access and retrieval. When we access data, we have to think about how to model, store and retrieve it. Another aspect of the MeModules project is the user-centered approach: the user's needs and wishes are studied in order to deliver suitable and usable MeModules. Finally, adapted smart sensors are needed in order to deliver information in a user-adapted form. All devices of everyday life should be easily accessible.
Hence, there are three research axes: information management, user-centered design, and smart sensor design. Figure 2.2 shows more details on the research axes of the MeModules project.
Figure 2.2: MeModules research axes
2.3 Tangible user interfaces
Tangible user interfaces are human-computer interfaces that one can touch and feel. Manipulating data gives a physical, or at least visible, feedback. The user manipulates physical objects.
2.3.1 About TUIs
The last decade has seen a lot of new research aimed at fusing the physical and digital worlds. This work has led to the development of a collection of interfaces allowing users to take advantage of their own skills and to interact collaboratively with augmented physical objects in order to access and manipulate digital information. These interfaces are referred to as Tangible User Interfaces (TUIs).

Interaction with TUIs is based on users' existing skills of interaction with the real world. This offers the promise of interfaces that are quicker to learn and easier to use. However, these interfaces are currently more challenging to build than traditional user interfaces.
2.3.2 Challenges
The following are a number of conceptual, methodological and technical challenges that TUI developers face. A more comprehensive discussion of these challenges can be found in [27].
• Interlinked Virtual and Physical Worlds:
While graphical user interfaces rely only on virtual objects, tangible user interfaces make use of both virtual and physical objects, which coexist and exchange information with each other.
• Continuous and Distributed Interaction:
TUIs provide users with a set of physical objects with which they can interact in a discrete or continuous fashion. In addition, multiple users can simultaneously interact with multiple physical objects. In existing user interface paradigms, each interactive component encapsulates its behavior. However, the behavior of a physical object in a TUI may change in different contexts of use.
• No Standard Input/Output Devices:
Currently there are no standard input or output devices for accomplishing a given task in a TUI. Each technology currently requires a different set of physical devices and code instructions.
• Early Feedback:
TUIs use novel hardware that may not be available early in the design process. Thus, a rapid prototype to simulate the functionality and the hardware is needed. However, building a proof-of-concept prototype using available technology may require rewriting the TUI software when the actual deployment technology is selected.
2.3.3 Classification
TUIs can be classi�ed into the following categories:
• Tangibles, souvenirs and memories
Shortcuts to (multimedia) information
• Media and Tangibles
Interactive media manipulation
• Storytelling and Tangibles
Support and fix ideas; human factors in computing systems
• Tangible visualizations
Manipulating graphic information
• Creativity, art and Tangibles
Just for fun
MeModules belong to the first category. There are other, more detailed classifications; this one was taken from Denis Lalanne's introduction [14] to the 2005/06 seminar on Tangible User Interfaces at the University of Fribourg. More information is available on the site of the seminar [15].
2.4 Three principles of MeModules
As human beings, we tend to make associations in our brains and with icons. Different images and souvenirs in our brain are connected together in a certain order. An image can bring back all these "thoughts". At the same time, these souvenirs are also connected to physical items: when we see a certain seashell, we remember the holidays on which we picked up this seashell. The physical reminder is like an anchor in the real world. Finally, when we have to solve more complex problems, we often do so by manipulating real physical things and splitting up the complex problem into smaller problems.
A summary of the three human characteristics that are also the three main principles of MeModules:

• Associative memory
We need cerebral connectors: in our mind, images are always connected to memories.

• Physical reminders
We need to anchor information in the real world: a tangible object has some meaning to us because it is connected with the associative memory.

• Action materialization
We manipulate to solve complex problems: we are used to moving and combining graspable objects to create something or start an action.
Figure 2.3: MeModules principles
Furthermore, there are a lot of physical and psychological factors. All these aspects, and not only technological ones, are also studied in the MeModules project.
2.5 Main functioning
MeModules are tangible links between the human memory and reachable information. The physical reminders are tiny objects with RFID tags that are links to information sources. These sources can be further accessed by several electronic devices.
Every tangible link needs to be described via a predefined scenario which involves three main parts: 1) the MeModule, i.e. the tangible link; 2) the Player, i.e. the electronic device where the information is played; and 3) the Result, i.e. what the user perceives as a result of the interaction between the MeModule and the Player, hence the data and its action.
Figure 2.4: The MeModules idea
Figure 2.4 explains the transfer of information from the associative memory via a physical reminder to the real world: in our memory, the last holidays are associated with some pictures (associative memory). The physical reminder is a seashell that we brought back from the holidays; it now represents the holiday pictures (source). And finally, by manipulating the seashell, we can see our pictures on a certain device (beamer). The action (show pictures) was preconfigured for this specific target.
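As an illustration, the seashell scenario above could be written down along the following lines. This is only a sketch: the element and attribute names used here (Scenario, MeModule, Player, Result, Data, rfid, action) are assumptions made for illustration and not necessarily the actual MeMoML vocabulary, which is defined in chapter 4.

```xml
<!-- Hypothetical sketch of the seashell scenario.
     All element and attribute names are illustrative assumptions. -->
<Scenario name="HolidayPictures">
  <!-- MeModule: the tagged physical object (source) -->
  <MeModule rfid="0xA1B2C3" description="seashell from the holidays"/>
  <!-- Player: the electronic device where the information is played -->
  <Player type="beamer" id="livingRoomBeamer"/>
  <!-- Result: the targeted data and the action performed on it -->
  <Result action="show">
    <Data type="pictures" location="file:///media/holidays/"/>
  </Result>
</Scenario>
```

The three main parts of a scenario map directly onto three child elements, which is the kind of structure the MeMoML-GUI and the MeMoEngine exchange.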
2.6 Markup language for tangible user interfaces
Why do we need yet another markup language? If we are working with tangible interfaces, we must precisely define the environment that we are dealing with. The information has to be structured because different applications should speak the same "language".

All shortcuts to information and actions have to be described, and the physical devices must be configured. All this should be done in a formal way. Therefore we need a kind of intermediary markup (or meta-) language to describe such scenarios. Every tangible link is described via a predefined scenario. This scenario consists of physical objects, communication processes and targeted actions. This is done in an XML-like manner.
The field of markup languages for tangible user interfaces is still a rather new area. MeMoML should be a flexible, device-independent and easy-to-use language (an XML dialect) for describing such scenarios, while at the same time being as general as possible.
2.6.1 Heterogeneous environments
In an environment, all devices must be formally described. We have to deal with very different locations and devices (e.g. beamer, mobile phone, MP3 player, etc.). Each device has different functions and communication capabilities. Nevertheless, all this heterogeneity should be formalized and therefore accessible in a standardized way.
2.6.2 Multitude of parameters for devices
In general, there are a multitude of particular ways of accessing devices. A device can also have different functions (e.g. a mobile phone could also play music). Access to all these devices in an environment should be standardized. Since each device has extremely different attributes and parameters, all these different aspects must be formally described. Each device should always be accessible in the same manner.
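A formal device description might, hypothetically, look as follows. The MeMoConfig element does appear in the MeMoML model (section 4.4.2), but its internal structure as shown here, and all other tag and attribute names, are assumptions made purely for illustration:

```xml
<!-- Hypothetical device configuration; the structure inside
     MeMoConfig and all names shown are illustrative assumptions. -->
<MeMoConfig>
  <Device id="beamer1" type="beamer">
    <Function name="show" dataTypes="pictures video"/>
    <Communication protocol="tcp" address="192.168.1.20" port="4444"/>
  </Device>
  <Device id="phone1" type="mobilePhone">
    <!-- one device may expose several functions -->
    <Function name="ring"/>
    <Function name="play" dataTypes="music"/>
    <Communication protocol="bluetooth" address="00:11:22:33:44:55"/>
  </Device>
</MeMoConfig>
```

In a structure of this kind, each device is reached through the same formal interface regardless of its actual communication channel, which is exactly the standardization argued for above.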
2.7 Conclusion
The MeModules project aims to facilitate the interaction with computers. The human-computer interface should become more natural, as associations to data can be made through a physical reminder. The overload of information could be reduced by introducing natural associations to data; the computer should work a little bit more like the human brain. But the project also covers psychological and physical aspects.
Every tangible link needs to be described via a predefined scenario which involves three main parts: 1) the MeModule, i.e. the tangible link; 2) the Player, i.e. the electronic device where the information is played; and 3) the Result, i.e. what the user perceives as a result of the interaction between the MeModule and the Player, hence the data and its action.
A markup language should be able to describe such scenarios for tangible user interfaces. It need not be complete, but at the same time it should not be too complex, and it should be easily extensible. The created file will be used for exchanging data between different parts of the software.
Chapter 3
State of the art
This chapter provides a collection of different work in the field of markup languages for tangible user interfaces. It gives an overview of existing work in that field. A selection of markup languages is presented in more detail. The last part also describes some markup languages for multimodal environments in general.
3.1 Markup Languages for TUIs
There is not yet a lot of existing work in the field of specific meta- or markup languages for tangible user interfaces. It is a very new area: most of the few projects are not older than three years or are still ongoing. One will find much more work on markup languages for ubiquitous or multimodal projects. There are, however, a lot of projects that deal with tangible user interfaces in general; most of them just do not use a meta-language.
3.1.1 TouchMe
TouchMe [26] was the diploma project of Andy Gonzalez at the University of Applied Sciences, Fribourg (2005). It was a preliminary project for MeModules. Andy developed a Java framework for accessing numerical data via RFID (objects). Applications and actions can be started, and data can be accessed, with an RFID reader. Data is stored in an XML file. This was the first project in the sense of the MeModules project.
Figure 3.1: TouchMe logo
TouchMe is still under development for the MeModules project by another team and is now called the MeModules console; see chapter 5 for further explanations. TouchMe focused more on the hardware aspect of RFID. The main goal was to have a first application that can associate an RFID tag with an application or data.
Andy also took the first steps towards a categorization and representation of objects in an XML schema. MeMoML took this framework as a starting point; some parts of the code and the XML schema could be reused in a modified form.
Figure 3.2: Functioning of TouchMe
3.1.2 TUIML (TUIMS)
TUIMS [16, 28, 29] is a project at Tufts University (USA) dating from 2005. TUIMS proposes a new class of software tools for TUIs: the Tangible User Interface Management System (TUIMS). Unlike existing physical toolkits, which provide support and abstractions for a specific set of sensing mechanisms, the TUIMS provides a higher-level model that is aimed at capturing the essence of the tangible interaction. It allows developers to easily specify a TUI using a specialized high-level description language (TUIDL). This technology-independent specification can then be translated into a program controlling a set of physical objects in a specific target technology.
TUIML (Tangible User Interface Markup Language) is a high-level description language for TUIs. The TUIML design provides support for the entire life cycle of a TUI while using XML as an underlying technology. TUIML predefines five basic modules: Task, Domain, Representation, TAC [27] and Control, each describing a different aspect of the TUI system. The task and domain modules describe the semantics of the TUI system. The representation and TAC modules describe the syntax of the TUI system. The representation module defines a set of logical physical objects and their properties. The TAC component defines the context for interaction
actions performed upon these logical physical objects and determines which seman-
tic functions are invoked as a result of an interaction action. The control component
3.1. Markup Languages for TUIs 16
Figure 3.3: TUIMS architecture
describes the control �ow of the tangible interaction and the TUIs initial state.
The design of the TUIMS architecture is intended to facilitate the development of
technologically portable TUI systems and to allow TUI developers to extend the
number of input and output technologies supported by the TUIMS. The modeling
tools are aimed at assisting developers in building the model while hiding the syntax
of the modeling language. They provide a convenient way to specify the interface
and to access existing specifications. The implementation tools translate the TUIML
specification into programming language source code, thus assisting the TUI
developer in the implementation process.
Figure 3.4: TUIMS graphical editor
This project is the most similar to the MeMoML project. TUIMS is a genuinely
TUI-focused project (not a general ubiquitous-computing one). Its authors also
propose a new markup language for TUIs (TUIML) and a GUI to interact with the
system and facilitate manipulations.
3.1.3 UserML
UserML (User Modeling Markup Language) is a markup language for ubiquitous computing [10].
UserML has a modularized approach in which several modules are connected via
identifiers (IDs) and references to identifiers (IDREFs). Using XML as the knowledge
representation language has the advantage that it can be used directly in the Internet
environment. With this method, the tree structure of XML can be extended to
represent graph structures. The work focuses on the content level, i.e. which
information will be sent, not how it will be sent. The level of how to send XML
messages between different user model applications, sensors, smart objects and so on
will be solved by the Web Service Architecture of the World Wide Web Consortium,
where the interaction between so-called service requestors and service providers is
defined.
The UserML approach distinguishes two different levels. On the first level, a simple
XML structure for all entries of the partial user model is defined. These UserData
elements consist of the elements category, range and value. On the second level,
there is the ontology that defines the categories.
Figure 3.5: UserML used in an editing tool
The advantage of this approach is that different ontologies can be used with the
same UserML tools. Thus different user modeling applications could use the same
framework and keep their individual user model elements.
A User Model Editor transforms UserML into XForms with XSLT. XForms documents
can be interpreted by web browsers or mobile devices with a Java VM. UserML
can also be generated from a database.
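The two-level structure of a UserData entry (category, range, value) can be illustrated with a small sketch. Note that the element and attribute names below are reconstructed from the description above for illustration only; they are not taken from the actual UserML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical UserML fragment following the category/range/value structure.
USERML_SNIPPET = """
<UserModel>
  <UserData id="u1">
    <category>physiology.heartrate</category>
    <range>40-240</range>
    <value>72</value>
  </UserData>
</UserModel>
"""

def read_user_data(xml_text):
    """Collect the (category, range, value) entries of a UserML-like document."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in entry}
            for entry in root.findall("UserData")]

entries = read_user_data(USERML_SNIPPET)
print(entries[0]["category"], entries[0]["value"])  # physiology.heartrate 72
```

A tool built this way stays ontology-neutral: only the content of the category element changes when a different ontology is plugged in, which is exactly the advantage claimed above.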
3.1.4 MRIML
Mixed Reality Interface Markup Language (MRIML) is part of the Framework for
Multimodal VR and AR User Interfaces [6]. The framework is presented by the
Fraunhofer Institute for Applied Information Technology (FIT).
User Interface Definition Languages (UIDLs) try to automate the generation of
user interface code, based on a common representation for different devices such as
desktop and mobile PCs, PDAs, and mobile phones. UIDLs are meta-languages
based on text-based descriptions, e.g. XML. The aim is a simple way to define user
interfaces by specifying their attributes within a single document. MRIML is a
vocabulary created especially to support VR and AR user interfaces. There is also a
component for the automated generation of specific user interfaces directly from
MRIML documents.
Figure 3.6: Usage of a MRIML document
Using MRIML requires defining the user interface elements within a UIDL document
using the MRIML vocabulary. After the document is complete (valid XML and
MRIML), it is passed to the rendering unit, together with the vocabulary itself. The
rendering unit consists of a set of individual renderers for each target platform. After
parsing the document, the individual renderers create the user interfaces for the
individual platforms according to the overall description. The main benefit of MRIML
is its ability to use a single user interface description for both graphical and AR user
interfaces without the necessity of programming.
3.1.5 PML
The Physical Markup Language (PML) [20, 21] has been developed by Philips. PML
is a markup language to describe physical environments and the objects within them,
as well as their relationships to persons, to each other and to the surrounding space:
"Your room is the browser." The Philips vision of Ambient Intelligence is very
user-centric; it seems to be much less about the technology and more about putting
the person in control. PML provides a common language for the creation, distribution
and sharing of new experiences (interaction with the environment). These can be
used to enhance new and existing content.
Figure 3.7: PML; room interacting with Little Red Riding Hood
Within a location, the devices controlled by the PML language act as parts of a
browser. Together they render the experience. Each device contains a component
that interprets the PML related to the device's capabilities. The key challenge for
the language is to provide the tools to capture all aspects of a real-world experience
in such a way that they can be easily authored and communicated. The language
is XML compliant.
3.2 Multimodal markup languages
Tangible interfaces can be considered as one among the different modalities that
can be used to interact with a personal computer. For the sake of completeness, this
section provides a short overview of the most important multimodal languages.
3.2.1 EMMA (W3C)
EMMA (Extensible MultiModal Annotation Markup Language) [46] allows multiple
ways for the user to interact with an application. It was developed as a standard
for multimodal applications used on the World Wide Web.
This markup language represents user input. It is intended for use by systems that
provide semantic interpretations for a variety of inputs, including, but not necessarily
limited to, speech, natural language text, GUI and ink input. EMMA serves as a
vehicle for transmitting a user's intentions throughout applications in a standardized
way.
Figure 3.8: EMMA in the multimodal framework
The main components of an interpreted user input in EMMA are the instance data,
an optional data model, and the meta-data annotations that may be applied to that
input.
The first steps towards EMMA date from 2004 and the latest working draft from
September 2005. It is still under development.
3.2.2 M3L
M3L is a markup language that has been developed for the SmartKom [44] project.
SmartKom is a large European project dealing with multimodal interfaces.
Figure 3.9: SmartKom logo
M3L is designed as a complete XML language that covers all data interfaces within
this complex multimodal dialog system. Instead of using several quite different XML
languages for the various data types, there is an integrated and coherent language
specification, which includes all sub-structures that may occur on the different pools.
In order to make the specification process manageable and to provide a thematic
organization, the M3L language definition has been decomposed into about 40 schema
specifications. The basic data flow from user input to system output continuously
adds further processing results, so that the representational structure is refined step
by step. The ontology that is used as a foundation for representing domain and
application knowledge is coded in the ontology language OIL.
Figure 3.10: How SmartKom works
The tool OIL2XSD transforms an ontology written in OIL into an M3L-compatible
XML Schema definition.
3.2.3 Others
The following list mentions only a small selection of other existing multimodal
markup languages and does not aim to be complete. These are the languages that
were found during the search for markup languages for TUIs.
• InkML (Ink Markup Language) [47]
The Ink Markup Language serves as the data format for representing ink entered
with an electronic pen or stylus. The markup allows the input and processing of
handwriting, gestures, sketches, music and other notational languages in Web-based
(and non-Web-based) applications. It provides a common format for the exchange
of ink data between components such as handwriting and gesture recognizers,
signature verifiers, and other ink-aware modules.
• VoiceXML (Voice Extensible Markup Language) [42,49]
VoiceXML is designed for creating audio dialogs that feature synthesized speech,
digitized audio, recognition of spoken and DTMF key input, recording of spoken
input, telephony, and mixed-initiative conversations. Its major goal is to bring the
advantages of web-based development and content delivery to interactive voice
response applications.
• XHTML+Voice (X+V) [43,50]
X+V incorporates a subset of VoiceXML, a fully standardized and complete
markup language. X+V uses the most essential elements of VoiceXML, applying
them to the specific task of speech-enabling application interfaces.
• SALT (Speech Application Language Tags) [24]
SALT extends existing Web markup languages to enable multimodal and telephony
access to the Web.
• CCXML (Call Control Extensible Markup Language) [45]
CCXML is designed to provide telephony call control support for dialog systems,
such as VoiceXML.
• SCXML (State Chart Extensible Markup Language) [48]
SCXML provides a generic state-machine-based execution environment based on
CCXML and Harel State Tables. SCXML is able to describe complex state machines.
• MPML (Multimodal Presentation Markup Language) [18]
MPML is an XML-based language developed to enable the description of
multimodal presentations using character agents in an easier way.
• SABLE (A Synthesis Markup Language) [23]
The SABLE specification is an initiative to establish a standard system for
marking up text input to speech synthesizers.
• AIML (Artificial Intelligence Markup Language) [1]
The goal of AIML is to enable pattern-based, stimulus-response knowledge
content to be served, received and processed on the Web and offline in the
manner that is presently possible with HTML and XML. It is used for ALICE
[2] chat robots.
• CUIML (Cooperative User Interfaces Markup Language) [7]
CUIML was developed as part of the DWARF project. The goal of DWARF
is the development of a framework for augmented reality applications running
on wearable computers.
• XIML (Extensible Interface Markup Language) [22, 51]
XIML provides a universal specification for interaction data and knowledge
for user interfaces.
• SML (Service Modeling Language) [30]
SML is used to model complex IT services and systems, including their
structure, constraints, policies, and best practices. SML is based on a profile of
XML Schema and Schematron and should simplify IT management. It is a
very new proposal from a consortium of big companies such as Microsoft, BEA,
IBM, BMC, Cisco, Intel, HP, Dell and Sun.
3.3 Conclusion
TUIs are no longer an extremely new research field, and there are different projects
and papers on tangible user interfaces in general. But most of them do not include
a meta-language for TUIs. More common are markup languages for multimodal
or ubiquitous environments in general. They are, however, often designed for the
special purpose of a particular system and not specifically for TUIs.
According to the state of the art, a simple markup language just for tangible
user interfaces is missing. TUIML (see section 3.1.2) is the project most similar to
MeModules, but this language is too complex for our needs. We just need a simple
and flexible way of storing and exchanging data between the different components
of a TUI toolkit. The document engineering approach [8] uses (XML) documents
that serve as a medium to transmit data between different programs or processes.
This way the TUI toolkit can be much more modular: each component can be
developed independently while the communication is done via the newly created
markup language.
For storing and accessing all the elements of the TUI in a standardized way, we need
a new markup language. MeMoML (MeModules Markup Language) is designed
for that purpose. It is an XML-based meta-language that offers a flexible,
device-independent and easy-to-use way of describing scenarios. The MeMoML-GUI
allows scenarios to be modeled using drag and drop without touching the MeMoML
code directly. The MeMoEngine finally identifies objects and executes the actions
that were described in the scenario.
Chapter 4
MeModules Markup Language
(MeMoML)
This chapter describes MeMoML, the newly developed markup language for the
MeModules [17] project. The first part introduces the concept of MeMoML, while
the second part describes the XML-based model in detail. Finally, two concrete
scenarios are created.
4.1 Document Engineering modeling approach
MeMoML is based on a Document Engineering modeling approach [8], which is
evolving as a new scientific discipline for specifying, designing, and implementing
systems belonging to very different business domains. Document Engineering uses
XML technology. Much of the business transacted on the Web today takes place
through information exchanges made possible by using documents as interfaces.
Document engineering is needed to analyze, design, and implement these information
exchanges. An (XML) document serves as a medium to transmit data between
different programs or processes.
4.2 De�nition of MeMoML
MeModules Markup Language (MeMoML) is a formal meta-language that describes
all the MeModules (physical objects), the players (electronic devices) and the results
(the data plus the chosen action) with their specific parameters. MeMoML is an
XML dialect.
4.3 De�nition of scenarios
Scenarios are the main components of MeMoML. The objects are grouped into
scenarios, which form the environment (all possible events). A scenario is a concrete
situation that consists of three parts: 1) the MeModules, i.e. the tangible links
(source), 2) the Players, i.e. the devices where the information is played (target), and
3) the Results, i.e. what the user perceives as a result of the interaction between the
MeModule and the Player (data + action). An environment consists of all scenarios
and represents all resulting possibilities for events.
4.4 MeMoML model
A MeMoEnvironment consists of two different parts:
• The MeMoConfig describes the different devices and their configuration
details.
• The scenario part consists of one or more (sub-)scenarios.
A scenario describes the source and the interaction with the source. In the
source we have data, contact or application elements and their different
parameters. The interaction defines the devices and their possible actions.
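The structure described above can be sketched as a minimal MeMoML-style document. The tag and attribute names below follow the described structure (MeMoConfig plus scenario, a scenario split into source and interaction) but are simplified guesses, not the exact elements of the real MeMoML schema.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical MeMoEnvironment document.
MEMO_ENV = """
<MeMoEnvironment>
  <MeMoConfig>
    <device name="beamer" location="living room">
      <communication channel="usb"/>
    </device>
  </MeMoConfig>
  <scenario>
    <simpleScenario>
      <source><data path="/photos/greece" type="pictures"/></source>
      <interaction>
        <target><device name="beamer"/></target>
        <action name="show"/>
      </interaction>
    </simpleScenario>
  </scenario>
</MeMoEnvironment>
"""

root = ET.fromstring(MEMO_ENV)
# The engine-side view: which action runs on which device for each scenario.
for sc in root.iter("simpleScenario"):
    data = sc.find("source/data").attrib
    device = sc.find("interaction/target/device").get("name")
    action = sc.find("interaction/action").get("name")
    print(f"{action} {data['type']} from {data['path']} on {device}")
```

The device configuration lives once in MeMoConfig, while each simpleScenario only names a device and an action, mirroring the separation of configuration and scenarios described in the text.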
4.4.1 MeMoEnvironment
A MeMoEnvironment simply consists of a MeMoConfig part and a scenario part.
Figure 4.1: MeMoEnvironment
4.4.2 MeMoConfig
A device is a physical object (e.g. beamer, mobile phone, CD player). Such a device
has some possibilities to play or show given data. There are also different
communication channels and restrictions for each device. All devices have to be formally
specified in the MeMoConfig part of the XML document, as shown in figure 4.2.
Figure 4.2: MeMoConfig
4.4.3 Scenario
The scenario part is the main part. In this section a scenario (simpleScenario), or a
combination of scenarios (complexScenario), can be stored in a certain structure.
It contains Objects, Data, MeModules, Players and Results.
Figure 4.3: SimpleScenario
We divided a simple scenario into source and the interaction with the source.
The source has an RFID tag. It can be data, a contact or an application. Data means
e.g. pictures, music files, etc. with all their parameters and restrictions. Contact
covers persons or groups with their details. Application, finally, means a type of
application with all its parameters. Figure 4.3 shows the elements of a
simpleScenario.
The interaction contains two items: action and target. The target has the same
ObjectType as the source. The action defines the interaction with the target. See
figure 4.4 for details.
Figure 4.4: Action
4.4.4 Target and source
Target and source are both of type ObjectType, which contains a data, a contact
and an application part.
In the data part, the path to data on the computer together with its type is stored.
Figure 4.5: Data part
The contact part can contain information about persons and the groups they belong
to.
Figure 4.6: Contact part
Information about applications is stored in the application part. This includes the
type of the software application and where it can be found.
Figure 4.7: Application part
4.5 Scenario examples
4.5.1 Sample scenario 1
From my holidays in Greece I have a seashell, which I associate with my holidays.
In the system I can now associate this physical object with some data (holiday
pictures). If I put the seashell in front of the beamer, I want to see my holiday
pictures.
At the same time, I associate a special CD cover with Greek music.
If I now combine the two (sub-)scenarios, I get a slide show with music.
• MeModule (source): seashell; CD cover
• Player (device): beamer; CD player
• Result (target: data+action): show pictures, play music
If the two MeModules of the sub-scenarios are combined: play slide-show
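The scenario above can be captured as a small in-memory table. The field names mirror the MeModule/Player/Result terminology of the text, but the encoding itself is purely illustrative, not the MeMoML representation.

```python
# Hypothetical encoding of sample scenario 1 as two sub-scenarios.
SCENARIOS = [
    {"memodule": "seashell", "player": "beamer",
     "data": "holiday pictures", "action": "show"},
    {"memodule": "CD cover", "player": "CD player",
     "data": "Greek music", "action": "play"},
]

def combine(sub_scenarios):
    """Combining sub-scenarios yields the composite result (a slide show with music)."""
    return " + ".join(f"{s['action']} {s['data']} on {s['player']}"
                      for s in sub_scenarios)

print(combine(SCENARIOS))
# show holiday pictures on beamer + play Greek music on CD player
```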
4.5.2 Sample scenario 2
I want to send all holiday pictures to a friend. The seashell stands for the holiday
pictures and a business card indicates my friend's email address. The pictures are
sent to my friend.
• MeModule (source): seashell; business card
• Player (device): beamer; PC
• Result (target: data+action): show pictures; send pictures
If the two MeModules are combined: send pictures to address
4.6 Modeling process
This section describes the modeling process of MeMoML. It shows that certain
changes were made in different phases of the project. This document does not reflect
all discussions and all minor changes; it just describes some major changes made
during the modeling process.
4.6.1 At the beginning
The modeling of the MeModules Markup Language (MeMoML) was a longer process.
Over several weeks, different versions were discussed in the team. The starting point
was the XML schema of the TouchMe [26] project, in which the first steps towards a
formal categorization of data and devices had already been taken. This schema has
been developed, adapted and extended. The resulting schema tries to be as general
as possible; further extensions remain possible.
The most complex task was to formally describe a scenario. The description of a
scenario must be very flexible, but at the same time very precise.
4.6.2 First versions
4.6.2.1 Configuration file
There has to be a separate configuration file for each device because of the
heterogeneity of the environment. A device needs a general description, a location and
communication capability parameters.
4.6.2.2 Object, communication and action part
The scenario was described in a separate file. A TAGObject contained an object, a
communication and an action part. An object contained data, contact and
application information.
4.6.3 Improved version
We realized that the separation of object, communication and action was not
optimal. Objects and devices should be separated. This led us to the next version: we
separated source and interaction. In the interaction, the action was connected with
the target. The object of v1 became the source; the target and action of v1 were
merged into the interaction.
The underlying idea is that in real life we have data (source) and interaction with
that data.
The name TAGObject disappeared and was replaced by simpleScenario, because a
scenario can be composed of several simpleScenarios.
4.6.4 During the implementation phase of the MeMoML-GUI
and the MeMoEngine
Several mistakes in reasoning were detected during the implementation phase of
the software. There was some redundant data that made the schema very big and
complex. These elements were safely deleted.
Source and target were separated, but both are still the same kind of object
(ObjectType). The configuration was separated from the scenario tag because it is
independent of the scenarios. The communication part was also moved out to the
new configuration part.
4.6.5 During the integration phase with the TouchMe project
When we tried to convert a TouchMe XML document into a MeMoML XML
document, we discovered certain weak points in the MeMoML design. The integration
of two systems is always a challenging task; this is also the phase in which the weak
points are discovered. There are now different people working with MeMoML who
were not involved in the first phase of the project. Each one has new ideas for
improvements. This is a great opportunity to eliminate errors that were made and
to take the project to a new level. But at the same time it is a critical point, because
not every idea can be realized; the original idea and mission have to be preserved.
4.7 Evolution of MeMoML
Two trainees, Paul Naggear and Khrysto El Soury, worked on MeMoML for two
months. Thanks to the former basic work done on MeMoML, they could propose
an improved version of it. The group could progress much faster because they built
on existing work.
4.7.1 Scenario equation
In order to formally define the meaning of a "scenario", they introduced an equation
that describes it. Some former designations were changed or adapted.
$$\sum_{1}^{n} S + TI \;\rightarrow\; \sum_{1}^{n} \Big\{\, A \bigcup_{i=1}^{n} C_i \text{ on } P \text{ with } Ap\Big(\bigcup_{i=1}^{n} Pm_i\Big) \Big\}$$
S: Source (physical object with an RFID tag)
TI: Tag Interpreter (reads a tagged object and launches the scenario)
A: Action (the chosen action)
C: MeMoCluster (virtual directory containing data resources)
P: Peripheral (output of the operations)
Ap: Application (executes tasks)
Pm: Parameter (all additional parameters for the application)
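The symbols of the scenario equation can be given concrete shape in code. The class and field names below are an illustrative mapping of S, TI, A, C, P, Ap and Pm onto simple types; they are a sketch, not part of the official MeMoML specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MeMoCluster:              # C: virtual directory of data resources
    resources: List[str]

@dataclass
class Scenario:
    source: str                 # S: physical object with an RFID tag
    action: str                 # A: the chosen action
    clusters: List[MeMoCluster] # union of the C_i
    peripheral: str             # P: output of the operations
    application: str            # Ap: executes the tasks
    parameters: List[str] = field(default_factory=list)  # union of the Pm_i

def launch(tag_id: str, registry: Dict[str, Scenario]):
    """TI, the tag interpreter: reads a tagged object and launches its scenario."""
    return registry.get(tag_id)

registry = {"0xA1": Scenario("seashell", "show",
                             [MeMoCluster(["greece1.jpg", "greece2.jpg"])],
                             "beamer", "image viewer")}
print(launch("0xA1", registry).action)  # show
```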
4.7.2 MeMoML improvements
The group revised the first MeMoML proposal. They mainly introduced a new layer:
everything that was in the scenario part before is now in the configuration part. Each
element in the environment part now has its own ID, and the scenario part only
contains links to the item IDs in the environment part.
This change allows a clearer separation between the environment and the scenarios.
It provides more flexibility and avoids redundancy: each element has to be defined
only once and can then be used several times. The changed designations are
explained in section 4.7.1.
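The ID/IDREF layering described above can be sketched as follows. The element names are hypothetical stand-ins for the v2 schema: elements are declared once in an environment part, and scenarios only reference their IDs.

```python
import xml.etree.ElementTree as ET

# Hypothetical MeMoML-v2-style document: each element is defined once with an
# id in <environment>; <scenario> only holds references to those ids.
DOC = """
<MeMoML>
  <environment>
    <object id="o1" name="seashell"/>
    <device id="d1" name="beamer"/>
  </environment>
  <scenario>
    <link source="o1" target="d1" action="show"/>
  </scenario>
</MeMoML>
"""

root = ET.fromstring(DOC)
# Build an id -> element index once; every reference resolves through it, so
# the same element can be reused by many scenarios without redundancy.
index = {el.get("id"): el for el in root.find("environment")}
link = root.find("scenario/link")
source = index[link.get("source")].get("name")
target = index[link.get("target")].get("name")
print(source, "->", target)  # seashell -> beamer
```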
Figure 4.8: MeMoML v2
The following two figures show the details of scenario and environment. Scenario is
the newly introduced layer that contains references to the IDs described in the
environment.
Figure 4.9: MeMoScenario v2
Figure 4.10: MeMoEnvironment v2
4.8 Conclusion
There is a lot of work behind the current proposal of MeMoML. During the
development phase, different working drafts were made. MeMoML has evolved in different
steps and during different phases of the whole project, with many changes during
the modeling process. The model has now become quite big and complex, as it
should be as general as possible. Scenarios form the environment that represents
all possible events. The two main elements of a scenario are the source and the
interaction with the source. The source represents the data; the interaction is split
into the device and the action that this device will perform.
It is just a proposal for a markup language for TUIs and will probably continue to
evolve in the near future. Its future use in software applications will bring up new
ideas for representing or accessing the data.
MeMoML could evolve into a language that covers more multimodality than it does
now, as the MeModules project has different multimodal aspects. The Document
Engineering modeling approach will become even more important in the future, as
more and more processes will be exposed as (web) services and the exchange of
information will be done via XML files.
Chapter 5
MeMoML: Implementation
This chapter shows the implementation details of the software. The functioning of
the MeMoML-GUI and the MeMoEngine is explained, and the technical details are
shown in UML diagrams. Finally, the main parts of the GUI are described.
5.1 MeMoML-GUI
The MeMoML-GUI was developed before the MeMoEngine. A visual tool for editing
scenarios is important for the end user, but the main focus was on the MeMoML
language, and the GUI served more as a proof of concept.
5.1.1 Goals
The user should have the possibility to visually configure a scenario with tangible
interfaces. The configuration is saved in an XML file that should not be exposed to
the end user. The interaction is done via drag and drop in a GUI, which should be
intuitive and logical.
The parameters of devices, data and applications can be edited in pop-up menus for
each item.
The MeMoML-GUI should facilitate the interaction with the MeMoML language.
The MeMoML code is created "behind the scenes".
5.1.2 Thoughts about the user interface
At the beginning we wanted to have one big canvas (figure 5.1) and to connect all
elements of a scenario with lines (figure 5.2). The elements should be freely movable.
This design was extremely flexible.
Figure 5.1: GUI prototype with freely placeable objects
Figure 5.2: GUI prototype with line-connected objects
Finally, we adapted the user interface in order to simplify the interaction with the
program. The user can still place the icons with drag and drop, but only at specific
targets. The targets are divided into three zones (see figure 5.3):
• MeModules: the tangible object with its attributes
• Player: the physical device that shows or plays something
• Results: the data and the action that has been chosen
This should especially clarify the difference between object and device. A mobile
phone, for example, can be a tangible object and a device at the same time: it can
be a shortcut to information and the medium to show or play data.
5.2 Demo-Application
The GUI (visual editor) allows the user to easily describe the interaction scenarios via
a graphical interface. A MeMoML file is created when the user saves his scenario.
5.2.1 What the user sees
The interface is composed of three main regions (MeModules - lilac, Player - light
blue, Results - light green) which correspond to the three main parts of the scenario:
source, device and interaction (target + action)1.
The user can drag and drop from the right and left panels within the three regions:
either objects to MeModules and Players, or data to Results. The MeModules part
represents the physical objects, the MeModules, which are identified by an RFID
code. The Players are devices that show or play something. The Results part holds
the data and the action that is associated with the data object. The properties of
all objects can be adapted via a context menu.
1See the following figure
Figure 5.3: MeMoGUI
All icons are described by a type: for instance, an icon representing photos belongs
to the information type, which means that it can be moved neither into the
MeModules region nor into the Player region. In contrast, the icon representing
a mobile phone, which can be both a MeModule and a Player, belongs to both
the MeModules type and the Player type and can be moved into both regions. The
result describes a full scenario that can be saved, but also opened and edited.
5.2.2 What is created in the background
A part of a MeMoML file is shown in the next figure.
Figure 5.4: Part of a MeMoML file
The resulting XML file is then validated against the MeMoML model.
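The validation step could look roughly like the sketch below. Full XML Schema validation requires an external library (e.g. lxml or xmlschema), so this is only a minimal well-formedness and structure check of the kind the GUI could run before saving; the element names are illustrative, not the real schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical required structure: a MeMoEnvironment must contain a
# MeMoConfig part and a scenario part.
REQUIRED_CHILDREN = {"MeMoEnvironment": {"MeMoConfig", "scenario"}}

def structurally_valid(xml_text):
    try:
        root = ET.fromstring(xml_text)   # well-formedness check
    except ET.ParseError:
        return False
    required = REQUIRED_CHILDREN.get(root.tag, set())
    present = {child.tag for child in root}
    return required <= present           # all required parts are present

ok = structurally_valid("<MeMoEnvironment><MeMoConfig/><scenario/></MeMoEnvironment>")
bad = structurally_valid("<MeMoEnvironment><scenario/></MeMoEnvironment>")
print(ok, bad)  # True False
```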
5.3 MeMoML-GUI evolution
The previously mentioned group of Paul Naggear and Khrysto El Soury (see chapter
four) also worked on improving the graphical interface, taking a different
technological approach.
The first user interface with line-connected elements2 was too flexible, while the GUI
with the five columns3 has some restrictions and is maybe not as intuitive for the
end user as we hoped.
2See figure 5.2
3See figure 5.6
The new interface is a kind of puzzle; in fact, it is a compromise between the former
two interfaces. Furthermore, the GUI was developed with the Windows Presentation
Foundation (WPF) on .NET 3.0. Java is pretty heavy for designing graphical
interfaces; the WPF framework allows the GUI to be developed faster and, especially,
the user interface to be adapted in a much easier way. But, of course, the program is
no longer platform independent.
Figure 5.5: Puzzle user interface
5.4 MeMoEngine
5.4.1 Introduction
Parts of the MeMoEngine were taken from the improved version of the TouchMe
project. But the MeMoEngine only simulates inputs from RFID tags and only
enables some basic functions; there is no hardware reader connected to the software.
TouchMe is much more sophisticated in terms of input and output with real
hardware. That project has been developed as a parallel project and is now called the
MeModules console. The MeModules console could be combined with the MeMoML
project; the only thing that had to be changed was the handling of the XML file
for storing data in a MeMoML manner.
5.4.2 Goals
The MeMoEngine is the part of the software that interprets the XML file generated
by the MeMoML-GUI. It executes an environment with tangible interfaces: the
MeMoEngine identifies objects, ensures the communication processes, and executes
the actions described in the scenario. It handles the formally described scenario.
5.4.3 Communication process
On the physical level, several devices are attached to a PC. The software does not
interact directly with the devices; it communicates with a software layer. This
software layer must be present in order to ensure the communication process from
the PC to the hardware device and vice versa.
In the MeModules console, Phidgets [19] are used to make the link between software
and hardware. The core hardware is an RFID tag reader.
5.4.4 Description of the configuration
The configuration of all devices is described in the MeMoML file that also stores
the scenarios. In this file each device is formally described, including information
about the location (path) and the communication capabilities.
5.4.5 Functioning of the MeMoEngine
The MeMoEngine parses the stored data of a MeMoML document to identify objects
and execute a scenario. The tag reader gives an ID number to the system. The
system then decides what to do according to the settings that were formerly described
as a scenario in a MeMoML file. In our system, the tag reader is not integrated;
hence the RFID numbers have to be simulated.
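The engine's lookup step can be sketched as follows, with the RFID reader replaced by a simulated tag ID as described above. The scenario table, tag IDs and action strings are made up for illustration.

```python
# Hypothetical scenario table: tag ID -> (action, data, device), as it might
# be loaded from a parsed MeMoML file.
SCENARIO_TABLE = {
    "04AF3C": ("show", "holiday pictures", "beamer"),
    "09B771": ("play", "Greek music", "CD player"),
}

def on_tag_read(tag_id):
    """Decide what to do for a (simulated) RFID tag, per the stored scenario."""
    if tag_id not in SCENARIO_TABLE:
        return "unknown tag: " + tag_id
    action, data, device = SCENARIO_TABLE[tag_id]
    return f"{action} {data} on {device}"

print(on_tag_read("04AF3C"))   # show holiday pictures on beamer
print(on_tag_read("FFFFFF"))   # unknown tag: FFFFFF
```

With a real reader attached (as in the MeModules console), only the source of `tag_id` would change; the lookup and dispatch logic stays the same.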
5.4.6 MeModules console
The former TouchMe project has evolved into a MeModules console. Phidgets
[19] are integrated into a box. The user can now interact with this console and open
applications, open Skype, listen to music, view pictures and videos, etc. The first
console, a little black box that did not look very professional, served as a proof of
concept.
Figure 5.6: MeModules console v1
The second generation looks much more professional. The interface changed slightly,
and the underlying software can now also read and store MeMoML files.
Figure 5.7: MeModules console v2
5.5 UML diagrams
5.5.1 Use case diagram
All use cases of the MeMoML-GUI are described in the use case diagram (see figure
5.8). These are all the actions a user can perform. "Include" means that a use
case necessarily requires another use case. "Extend" describes a use case that
continues the behavior of the base use case; conceptually, the extending use case
inserts additional actions into the base use case.
A user can open an existing scenario or create a new one; a new scenario must then
be edited. In both cases it is possible to save changes and run the scenario.
According to the three item types MeModule, Player and Result, a different pop-up
menu opens when the user edits the corresponding item of the scenario, but each case
includes the possibilities of adding, removing and editing the item.
Figure 5.8: Use case diagram
5.5.2 Sequence diagram
This sequence diagram (see figure 5.9) shows the open, edit, save and run functions
of the software. It describes the GUI elements the user interacts with. The pop-up
items are used to change the parameters of the items.
Figure 5.9: Sequence diagram
A sequence diagram with more details on the involved classes can be found in
appendix A.1.2.
5.5.3 Overview
The class diagram shows all classes and packages of the software with their
dependencies. The most important package, "impl", is explained in subsection 5.5.4.
A more detailed diagram can be found in appendix A.1.1.
Figure 5.10: UML overview
5.5.4 The central package impl
The package impl contains some central classes that need some explanation.
MeMoData is a factory class: it is called at the start of the program and
instantiates most of the other classes.
The class DataTransfer is the central class for exchanging data between different
parts of the program. Each class that needs to exchange data with other classes
does so through DataTransfer.
The data of the different items is stored in DTPictures (Data Transfer Pictures);
each item with an icon is an instance of DTPicture. The PictureTransferHandler
manages the transfer of information from one picture to another.
This architecture makes it easier to add new parts to the software.
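The exchange mechanism can be sketched as a simple mediator: every part of the program publishes and fetches shared data through one access point. This is an illustrative reconstruction with invented method names, not the project's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative mediator in the spirit of DataTransfer; method names invented.
public class DataTransferSketch {
    private final Map<String, Object> shared = new HashMap<>();

    // e.g. the GUI publishes the item the user just edited
    public void publish(String key, Object value) {
        shared.put(key, value);
    }

    // e.g. the XML writer later fetches it for serialization
    public Object fetch(String key) {
        return shared.get(key);
    }

    public static void main(String[] args) {
        DataTransferSketch dt = new DataTransferSketch();
        dt.publish("memodule.icon", "ring.png");
        System.out.println(dt.fetch("memodule.icon")); // ring.png
    }
}
```

The benefit of such a central access point is that new parts of the software only need to know this one class.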
5.6 Used Software
The application was written in Java with the development version of Sun's NetBeans
[31] IDE 6.0; the UML diagrams were generated with the NetBeans IDE 5.5 preview.
With its integrated GUI builder, NetBeans was especially handy for designing the
user interface. NetBeans uses Ant [3] as its Java project system, which means that
the program can be compiled even without NetBeans.
The application was compiled with the Java [32] 5 compiler.
For the XML reading and writing, JDOM [13] and SAX [25] were used. JDOM works
very well with SAX: SAX is an almost ideal event model for building a JDOM tree
during parsing, and once the tree is complete, JDOM makes it easy to walk it. Since
SAX is fast and memory-efficient, it adds little overhead to JDOM programs.
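To illustrate the event model, the sketch below uses the SAX API that ships with the JDK rather than JDOM, and an invented element structure, since the real MeMoML schema is more complex. The parser fires a callback for each element it encounters, which is exactly the stream of events JDOM consumes to build its tree:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SaxSketch {

    // Collects the element names seen while parsing, in document order.
    static class NameCollector extends DefaultHandler {
        final StringBuilder names = new StringBuilder();
        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if (names.length() > 0) names.append(',');
            names.append(qName);
        }
    }

    public static String elementNames(String xml) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        NameCollector handler = new NameCollector();
        parser.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)), handler);
        return handler.names.toString();
    }

    public static void main(String[] args) throws Exception {
        // Element names here are invented for illustration, not actual MeMoML.
        String xml = "<scenario><memodule/><player/><result/></scenario>";
        System.out.println(elementNames(xml)); // scenario,memodule,player,result
    }
}
```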
5.7 Conclusion
The GUI has evolved in several steps, and some initial ideas were changed during the
implementation phase. The end user should not have too much flexibility, but at the
same time there should not be too many restrictions. The idea of freely placeable
objects connected by lines was too confusing for the users because it was extremely
flexible; there were not enough restrictions. The current interface has constraints
that support and facilitate the user interaction. The puzzle approach is a kind of
compromise between the two former GUIs.
Java is a bit heavyweight for graphical applications, and rapidly changing
parameters of the interface is not simple. With the newer technology of WPF on
.NET 3.0 it will be possible to make such changes in a very short time, but the
program is then limited to Windows.
The GUI should be simple and self-explanatory. We realized that the visual editor
is very important. Further research has to be done with end users in order to
arrive at an interface that they like to use.
On the architectural side, the software has some classes that should facilitate
future extensions. The package impl contains central classes for the exchange of
data and serves as a central access point for new extensions.
Chapter 6
Conclusions and future work
This chapter concludes the thesis. Some conclusions for the MeMoML project and for
the MeModules project in general are drawn, and possible future steps are mentioned.
Some of these steps have already been taken by another team of the MeModules project.
6.1 The MeMoML project
The MeModules Markup Language is an important part of the MeModules project.
Designing a meta-language was a complex and very interesting task. But the end
user should not interact directly with MeMoML; the user just wants a simple GUI.
That is why the MeMoML-GUI is even more important than we thought at the
beginning. Designing a scenario with tangible interfaces has to be as simple as
possible: since the aim is to simplify the access to data, the definition of such
a scenario has to be simple too.
The GUI has evolved in several steps and is now fairly simple, but further usability
tests should be done with real users in order to validate the concept.
6.2 Possible future work and extensions
6.2.1 Integration with other software
In a first step, as described in the MeMoEngine part, MeMoML and the MeMoML-GUI
have been combined with the MeModules console. Integrating different projects is
always a challenging task because different people and groups are involved. The
resulting software will probably be extended with other parts that were developed
in the MeModules project (e.g. special visualization of data).
6.2.2 MeMoML
MeMoML is a core component of the MeModules project. The meta-language will be
used for several sub-projects that deal with tangible interfaces.
During the implementation phase of the software we realized that MeMoML has become
fairly complex. It is difficult for external persons not involved in the project
to understand the structure of MeMoML, and even for simple scenarios a big and
complex XML file is generated. This is a point that could be improved in the
future, but simplifying the schemas without losing any generality or flexibility
is not that easy.
There should also be a possibility to describe applications, data, etc. in the
configuration part of MeMoML; otherwise one has to redefine the same items again
and again for each (sub-)scenario. We encountered this problem during the
implementation phase of the software. The revision of MeMoML goes in that
direction and has introduced a new layer for scenarios that contains references
to the items in the configuration part.
The whole schema should probably be a little less complex; there may still be
redundant data that could be eliminated.
The MeModules Markup Language is still evolving and will be adapted and improved
in further projects as needed.
6.2.3 The MeMoML-GUI
The GUI probably has to be adapted and improved in order to be as easy as possible
to use. People who know nothing about MeModules or MeMoML should be able to use
it. It may be useful to write an easy-to-understand user manual to support the end
user. Future projects will probably focus a bit more on the user interaction aspect.
The puzzle approach resembles the first version of the GUI, with its items connected
by lines. That version was too flexible, and the version with the three different
columns was maybe too restrictive. The puzzle approach seems to be a good compromise.
Creating a GUI with Java can be fairly complex, and making changes takes some time.
Other graphical frameworks are currently being evaluated (the puzzle interface was
done with WPF on .NET 3.0). Making changes and adaptations to the GUI should be
easy and fast, and such frameworks probably offer more graphical possibilities
than Java.
6.2.4 The MeMoEngine
The MeMoEngine is not very sophisticated; it performs only some basic functions,
because we did not want to do the same work twice. As the TouchMe project has been
improved in parallel with the MeMoML project, the resulting MeModules console
could serve as a better MeMoEngine. For the integration, the file format of the
TouchMe XML document could be changed to the MeMoML format. The hardware
integration in the MeModules console is already done.
6.3 The MeModules project in general
MeModules is a very interesting project. Improving and extending the way people
interact with computers and access information is an important task for the future,
because more and more people will work and interact with computers. But the
interaction should be natural and easy. Facilitating the access to (multimedia)
information is a challenging task, and there are also important psychological and
physical aspects that are treated in the MeModules project.
On the other hand, it is not easy to change the manner in which we already interact
with computers. Computer scientists should always be aware that "normal" people
work with computers in a different manner than they do. In the best case, the
machine should adapt to the needs of the person, and not vice versa.
6.4 Conclusion
The MeModules project is very interesting as it deals with possible future
interfaces. MeMoML is just a small part of it, but it will be useful for the whole
project. It was interesting to be part of a bigger team and to see how the
different sub-projects evolve; all these little projects are part of one big
project. The integration of all projects into a toolkit will be a challenging task.
The TUI toolkit is just one little step towards a new way of interacting with
computers. In my opinion, Tangible User Interfaces will become more and more
important. As the computer physically disappears from our desks and hides in the
environment, we will need other ways of interacting with it. In the (near?) future
we will not even realize that we are interacting with a machine, as this will
happen in a very natural way.
But the social aspects of such ambient environments are important. The technology
should be accepted by the normal user, and in order to be accepted it has to add
some new value and be very simple, as one of the goals of the project is to reduce
complexity. New ways of interacting with computers may lead to uses that we cannot
even imagine yet.
Bibliography
[1] AIML. Artificial Intelligence Markup Language. Web: http://www.alicebot.
org/TR/2005/WD-aiml/ (visited: January 2006).
[2] A.L.I.C.E (Artificial Linguistic Internet Computer Entity) chat robot. Web:
http://www.alicebot.org/ (visited: January 2006).
[3] Apache ANT. Web: http://ant.apache.org/ (visited: March 2006).
[4] Ali M.F. et al.: Building multiplatform user interfaces using UIML. in Computer
Aided Design of User Interfaces (CADUI'02). 2002. Web: http://arxiv.org/
pdf/cs.HC/0111024 (visited: December 2005). (PDF on CD).
[5] Barrett R, Maglio P.: Informative Things: How to attach information to the real
world. In Proceedings of the ACM Symposium on User Interface Software and Technology
(UIST '98). 1998. Web: http://www.almaden.ibm.com/cs/people/pmaglio/
pubs/infothings2.pdf#search=%22How%20to%20attach%20information%
20to%20the%20real%20world%22 (visited: December 2005). (PDF on CD).
[6] Broll W., Lindt I., Ohlenburg J., Linder A.: A Framework for Realiz-
ing Multi-Modal VR and AR User Interfaces. Fraunhofer Institut für Ange-
wandte Forschung. 2005. Web: http://www.fit.fraunhofer.de/gebiete/
mixed-reality/publications/broll05.pdf (visited: December 2005). (PDF
on CD).
[7] CUIML. Web: http://wwwbruegge.in.tum.de/publications/includes/pub/
sandor01cuiml/sandor2001cuiml.pdf (visited: January 2006).
[8] Glushko R. J. and McGrath T. Document Engineering: Analyzing and Designing
Documents for Business Informatics and Web Services. MIT Press. 2005.
[9] Hasler foundation. Web: http://www.haslerstiftung.ch/ (visited: December
2005).
[10] Heckmann D., Krueger A.: A User Modeling Markup Language (UserML) for
Ubiquitous Computing. In 9th International Conference of User Modeling 2003.
UM 2003 Johnstown, USA. pp. 393-397. 2003. (PDF on CD)
[11] Ishii H., Ullmer B.: Tangible Bits: Towards Seamless Interfaces between Peo-
ple, Bits and Atoms. in Proceedings of the ACM Human Factors in Computing
Systems (CHI 97). Atlanta, USA. 1997.
[12] Jacob R.J.K., Deligiannidis L., Morrison S.: A Software Model and
Specification Language for Non-WIMP User Interfaces. in ACM Transactions
on Computer-Human Interaction (TOCHI), 6(1): pp. 1-46. 1999. Web:
http://www.cs.tufts.edu/~jacob/papers/tochi.pmiw.pdf (visited: Decem-
ber 2005). (PDF on CD)
[13] JDOM. Java Document Model. Web: http://www.jdom.org/ (visited: April
2006).
[14] Lalanne Denis: Introduction to TUI seminar. DIVA group. Department of
Informatics. University of Fribourg. Web: http://diuf.unifr.ch/people/
lalanned/Seminar/Seminar0506/TangibleInterfaces.ppt (visited: January
2006). (PPT on CD)
[15] Lalanne Denis: Site of Seminar on tangible user interfaces. DIVA group. De-
partment of Informatics. University of Fribourg. Web: http://diuf.unifr.ch/
people/lalanned/seminar0506.htm (visited: January 2006).
[16] Leland N. , Shaer O., Jacob R.J.K., TUIMS: Laying the Foundations for a
Tangible User Interface Management System, Pervasive 2004 Conference Work-
shop on Toolkit Support for Interaction in the Physical World (2004). Web:
http://www.eecs.tufts.edu/~oshaer/TUIMS.pdf (visited: December 2005).
(PDF on CD).
[17] MeModules. Web: http://www.memodules.ch (visited: August 2006).
[18] MPML. Multimodal Presentation Markup Language. Web: http://www.miv.
t.u-tokyo.ac.jp/MPML/en/ (visited: December 2005).
[19] Phidgets. Web: http://www.phidgets.com (visited: December 2005).
[20] Philips Research. PML - physical markup language. Web: http://www.
research.philips.com/technologies/syst_softw/pml/downloads/pml.pdf
(visited December 2005). (PDF on CD)
[21] PML. A geekier version of Little Red Riding Hood. Web: http://www.
we-make-money-not-art.com/archives/007862.php (visited: June 2006).
[22] Puerta A., Eisenstein J.: XIML: A Common Representation for Interaction
Data. IUI2002: Sixth International Conference on Intelligent User Interfaces.
2002. Web: http://www.iuiconf.org/02pdf/2002-002-0043.pdf#search=
%22XIML%3A%20A%20Common%20Representation%20for%20Interaction%
20Data%22 (visited: December 2005). (PDF on CD).
[23] SABLE. Speech Synthesis Markup Language. Web: http://www.bell-labs.
com/project/tts/sable.html (visited: January 2005).
[24] SALT. Speech Application Language Tags. Web: http://www.saltforum.
org/ (visited: January 2006).
[25] SAX. Simple API for XML. Web: http://www.saxproject.org/ (visited:
April 2006).
[26] Scheurer R., Lalanne D., Gonzalez A.: TouchMe. Interfaces tangibles pour
accéder à des albums photos et disques musicaux. Travail de diplôme 2005.
University of Applied Sciences, Fribourg. 2005. (PDF on CD).
[27] Shaer, O., Leland N., Calvillo-Gamez E.H., and Jacob R.J.K.: The TAC
Paradigm: Specifying Tangible User Interfaces. in: Personal and Ubiquitous
Computing. pp. 359-369. 2004.
[28] Shaer O., Jacob R.J.K.: Toward a Software Model and a Specification Language
for Next-Generation User Interfaces, Proc. ACM CHI 2005 Workshop on The
Future of User Interface Software Tools (2005). Web: http://hci.stanford.
edu/srk/chi05-ui-tools/Shaer.pdf (visited: December 2005). (PDF on CD).
[29] Shaer O., Jacob R.J.K.: TUIMS: Laying the Foundations for a Tangible User
Interface Management System. Report. 2005. Web: http://www.cs.tufts.edu/
tech_reports/reports/2005-2/report.pdf (visited: December 2005). (PDF
on CD).
[30] SML. Service Modeling Language. Web: http://www.microsoft.com/
windowsserversystem/dsi/serviceml.mspx (visited: August 2006).
[31] Sun's NetBeans IDE. Web: http://www.netbeans.org (visited: April 2006).
[32] Sun's Java Homepage. Web: http://java.sun.com/ (visited: May 2006).
[33] Ullmer, B.: Tangible Interfaces for manipulating aggregates of digital informa-
tion. PhD thesis, Massachusetts Institute of Technology. 2002. (PDF on CD)
[34] University of Applied Sciences, Fribourg. Web: http://www.eif.ch (visited:
August 2006).
[35] University of Applied Sciences Fribourg. Information and Multimedia Sys-
tems group. Web: http://www.eif.ch/fr/rad/institut-tic/groupes_de_
competences/xsim/presentation.jsp (visited: June 2006).
[36] University of Applied Sciences, Geneva. Web: http://www.hesge.ch/heg/
(visited: March 2006).
[37] University of Applied Sciences, Wallis. Web: http://www.hevs.ch/ (visited:
March 2006).
[38] University of Fribourg. Web: http://www.unifr.ch (visited: August 2006).
[39] University of Fribourg. Informatics Department. Web: http://diuf.unifr.ch
(visited: August 2006).
[40] University of Fribourg. Informatics Department. DIVA (Document, Image and
Voice Analysis) research group. Web: http://diuf.unifr.ch/diva (visited:
August 2006).
[41] University of Siena. Web: http://www.unisi.it/ (visited: March 2006).
[42] VoiceXML Forum. Web: http://www.voicexml.org/ (visited: January 2006).
[43] VoiceXML Forum. X+V. XHTML + Voice. Web: http://www.voicexml.org/
specs/multimodal/x+v/ (visited: January 2006).
[44] Wahlster W., Reithinger N., Blocher A.: SmartKom: Towards Multimodal
Dialogues with Anthropomorphic Interface Agents. In Wolf, G., Klein, G.
(eds.), Proceedings of International Status Conference "Human-Computer Inter-
action", DLR, Berlin, Germany. pp. 23-34. 2001. Web: http://www.dfki.de/
~wahlster/Publications/MTI-SmartKom.pdf (visited: December 2005). (PDF
on CD)
[45] W3C: CCXML. Call Control eXtensible Markup Language. Web: http://www.
w3.org/TR/ccxml/ (visited: January 2006).
[46] W3C: EMMA. Extensible MultiModal Annotation Markup Language. Web:
http://www.w3.org/TR/emma/ (visited: January 2006).
[47] W3C: InkML. Ink Markup Language. Web: http://www.w3.org/TR/InkML/
(visited: January 2006).
[48] W3C: SCXML. State Chart Extensible Markup Language. Web: http://www.
w3.org/TR/scxml/ (visited: January 2006).
[49] W3C: VoiceXML. Web: http://www.w3.org/TR/voicexml/ (visited: January
2006).
[50] W3C. XHTML+Voice. X+V. Web: http://www.w3.org/TR/xhtml+voice/
(visited: January 2006).
[51] XIML. Extensible Interface Markup Language. Web: http://www.ximl.org/
(visited: December 2005).
Appendix A
Auxiliary documents
A.1 Detailed UML diagrams
A.1.1 Class diagram
This class diagram shows all classes and packages and how they are connected.
A.1.2 Sequence diagram
This sequence diagram shows all the involved classes for opening, editing, running
and saving scenarios.
A.2 CD-ROM
Contents of the CD-ROM:
• Master's Thesis (PDF)
• Separate abstract (PDF and HTML)
• Two Presentations (PPT)
• XML schemas of MeMoML
• Source code
• Binaries
• Required libraries
• JavaDoc
• Downloaded papers (PDF)
A.3 License
Copyright (c) 2006 David Bächler
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.2 or any later version
published by the Free Software Foundation, with no Invariant Sections, no Front-
Cover Texts, and no Back-Cover Texts.
The GNU Free Documentation License can be read from http://www.gnu.org/
licenses/licenses.html#FDL.