
Evaluating geometrical properties of virtual shapes using interactive sonification

Miguel Alonso¹, Simon Shelley¹, Dik Hermes¹
¹Human Technology Interaction, Eindhoven University of Technology,
P.O. Box 513, NL 5600 MB, Eindhoven, The Netherlands
Phone: +31 40 247 2888, Fax: +31 40 244 9875, E-mail: [email protected]

Armin Kohlrausch¹,²
²Digital Signal Processing Group, Philips Research Europe-Eindhoven,
High Tech Campus 36, NL 5656 AE, Eindhoven, The Netherlands
E-mail: [email protected]

Abstract—This paper presents some of our research on the use of sound in a multimodal interface. The aim of this interface is to support product design, where the designer is able to physically interact with a virtual object. The requirements of the system include the interactive sonification of geometrical data relating to the virtual object. In this paper we present three alternative sonification approaches designed to satisfy this requirement. We also outline a user evaluation strategy aimed at measuring the performance and added value of the different sonification approaches.

Index Terms—sonification, sound synthesis, modal synthesis, virtual objects, haptics.

I. INTRODUCTION

A research topic that has recently attracted much attention in the area of product design is offering designers the opportunity to exploit their abilities to create and manipulate virtual objects using multi-sensory interaction. In such a system, a number of sensory modalities can be used simultaneously to communicate information about the product to the user. In addition, the product designer is ideally able to manipulate the object of interest using a hands-on, intuitive approach.

From the user's point of view, a significant advantage of using a Virtual Environment (VE) is the ability to communicate information that is generally not perceived in the real world. So far, this research has largely concentrated on the visual and haptic sensory modalities, and few attempts have been made to incorporate sound into the process.

The research described in this article is being carried out as part of the SATIN project (Sound And Tangible Interfaces for Novel product design), which consists of an augmented reality environment where the user can manipulate virtual 3-D objects [1]. The user will be able to see the object and to both explore and modify its shape through the use of touch. Currently, user interaction in the project is limited to a 2-D cross-sectional slice of the virtual object's surface; however, the user is free to select this slice from any part of the 3-D object. An example of a cross-section of a 3-D virtual object is illustrated in Figure 1.

Fig. 1. Example of a virtual 3-D object with a 2-D cross-sectional slice highlighted.

A specific requirement of the SATIN project is the presentation to the user of data relating to the shape and curvature of the surface of the interactive object. This article focuses on this particular aspect. Shape and curvature data are of special interest to designers when considering the aesthetic quality of the object's surface. In industrial design, for example, a smooth surface that is continuous in terms of curvature is highly desirable, because discontinuities in curvature result in a disconnected appearance in the light reflected from the surface. Designers refer to such surfaces as Class A surfaces and consider them aesthetically appealing [2].

Sonification is considered an advantageous solution for presenting geometrical data, as previously shown in [3]. There are other potential media for communicating this information, such as visualization and haptic devices. These media, however, are chiefly employed in the SATIN project to present the shape of the virtual object to the user. Sound therefore provides a useful medium to display and highlight a wider range of geometrical properties, many of which are not normally detectable by touch or vision.

This paper is organized as follows. Section II presents the principles and motivation for the use of sonification in the SATIN project. In Section III we propose and describe three different sonification strategies. In Section IV a procedure for their evaluation is presented.

II. THEORETICAL BACKGROUND

The representation of geometrical data using audio signals is a form of sonification. Sonification is defined by the board of the International Community for Auditory Display (ICAD) as “the use of non-speech audio to convey information. More specifically, sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation” [4]. In the context of the SATIN project, the sonification depends on the interaction between the user and the virtual object; this type of sonification is referred to as interactive sonification [5], [6]. The sound that is produced may therefore depend both on the geometrical parameter value to be sonified and on the nature of the user interaction. In general, a mathematical relationship, referred to as a mapping, is required between the geometrical data to be sonified and some specific property, or properties, of the sound. Such mappings result in sounds that are not naturally produced by the equivalent real-world objects. Owing to their abstract nature, we characterize such sounds as symbolic [7].

Although in the SATIN project the user will be able to select from a larger number of geometrical features to be sonified, for the purpose of this article we focus only on the following properties.

• Curve shape. Since we only consider a 2-D cross-sectional slice of the virtual object's surface, the curve shape is a plane curve C represented by a two-dimensional function, such as the one highlighted in Figure 1.

• Curvature. For a plane curve C, the curvature at a point p has a magnitude equal to the reciprocal of the radius of the osculating circle, i.e. the circle that shares a common tangent with the curve at the point of contact. The curvature is a vector that points to the centre of the osculating circle. For a plane curve given explicitly as C = f(p), the curvature is given by:

    K = \frac{d^2C/dp^2}{\left(1 + \left[dC/dp\right]^2\right)^{3/2}}.   (1)

  In this project we focus only on the sonification of the curvature's magnitude; however, we also consider whether the curve is concave (K > 0) or convex (K < 0). An optional specific sonification will also be provided at points where the curvature changes sign. These are referred to as inflection points, and they coincide with the roots of the second derivative of the curve shape. (A discrete approximation of Eq. (1) is sketched after this list.)

• Tangency. This is defined as the angle between the tangent line at a given point p on the plane curve C and a reference direction. We consider the tangent to be the first derivative of the curve shape as a function of position, i.e. dC/dp.

• Discontinuities in curve shape, curvature or tangency. Every point on the plane curve C at which any of the three previously mentioned geometrical properties steps or jumps abruptly from one value to another is considered to represent a discontinuity. Formally, a discontinuity is in this case a point at which the limits from the left and the right both exist but are not equal to each other.

Fig. 2. Modes of user interaction with the virtual model: (a) tapping, (b) sustained contact, (c) sliding along the surface.
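To make the curvature definition concrete, the following minimal C sketch (our illustration, not part of the SATIN implementation; the function name and the uniform sample spacing h are assumptions) estimates Eq. (1) at an interior sample of a uniformly sampled profile using central finite differences:

    #include <math.h>

    /* Hypothetical sketch: curvature of Eq. (1) at interior sample i of a
       uniformly sampled profile C[], with sample spacing h along p. */
    double curvature(const double *C, int i, double h)
    {
        double d1 = (C[i + 1] - C[i - 1]) / (2.0 * h);            /* dC/dp   */
        double d2 = (C[i + 1] - 2.0 * C[i] + C[i - 1]) / (h * h); /* d2C/dp2 */
        return d2 / pow(1.0 + d1 * d1, 1.5);                      /* Eq. (1) */
    }

The sign of the returned value then distinguishes concave (K > 0) from convex (K < 0) regions, and its zero crossings locate the inflection points.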

For the sake of clarity, curve shape, curvature and tangency are considered continuous geometrical parameters, whereas discontinuities and inflection points are referred to as discrete geometrical parameters. Currently, we only consider the sonification of one continuous geometrical parameter at a time, and the user is able to select the parameter of interest. On the other hand, it is possible in our application to sonify multiple discrete geometrical parameters simultaneously, as well as continuous and discrete geometrical parameters at the same time. Again, the discrete geometrical parameters that are to be rendered audible can be selected according to user preference.

During the sonification of these geometrical properties, three different modes of haptic interaction are considered, as illustrated in Figure 2. These modes are used to evaluate and explore the shape of the virtual object. The first involves the impact between the user's hand or finger and the model at a certain point, for example by tapping the object, as shown in Figure 2(a). The second, Figure 2(b), involves sustained contact with the object without movement. Finally, the third type of interaction, Figure 2(c), involves sliding a hand or finger over the object. These modes of interaction are an important consideration when designing the sonification process, as they directly influence the sound that is produced. For example, if the user taps the haptic interface at a given point, the sound produced may differ from that produced if sustained contact with the surface is made at the same point.

The SATIN project consists of a consortium of nine partners, spread across a range of European countries. Different tasks within the project have been assigned to each partner, and the sonification module is being developed in parallel with the other system components [1]. As the final haptic interface is still in development, we are currently using a digital drawing tablet (sometimes referred to as a Wacom tablet) as an alternative input device. Contact between the drawing tablet pen and the tablet is analogous to contact between the user's finger and the haptic device in the SATIN project. The position of contact along the x-axis of the drawing tablet is equivalent to the position of finger contact along the length of the selected cross-sectional profile represented by the haptic interface. To ensure future compatibility, the resolution of the drawing tablet has been matched to the specifications of the haptic device.

The Max/MSP environment is used for the development and prototyping of the audio feedback module. Max/MSP is a graphical development environment for music, sound synthesis and multimedia. The program is highly modular, with a library of existing routines in the form of shared libraries. In addition, an Application Program Interface (API) allows third-party development of new routines, called external objects. The sound is output via a stereo speaker setup.

III. SONIFICATION APPROACHES

The selection of sound synthesis approaches and mapping strategies to sonify the geometrical data of interest has a significant effect on how the information is perceived by the user. In practice there is an unlimited range of options for sonification, so there is a large degree of freedom in the development of the sounds. At this stage we have implemented a number of sonification strategies based on the sound synthesis approaches described below.

In each of these approaches, the sound that is produced depends on the position of contact between the user's finger and the haptic device, the geometrical parameter associated with that position, and the mode of interaction. If no contact is made, no sound is produced.

A. Basic synthesis

This is a straightforward, empirical sound synthesis approach that also served as a proof of concept during the early stages of development. The first step in this approach is to select a carrier sound. Ideally, this carrier should have a single discernible fundamental frequency, or centre frequency, associated with it. In the simplest case a pure tone can be used, or other basic signals such as a periodic sawtooth waveform or narrow-band noise. For this approach, the sound that is produced depends only on whether or not there is contact between the user's finger and the haptic interface, and not on the specific modes of interaction presented in Figure 2.

The value of the continuous geometrical parameter of interest is mapped to the frequency associated with the carrier sound. The mapping is implemented in such a way that the minimum absolute value of the geometrical parameter is mapped to a minimum frequency and the maximum absolute value found in the dataset is mapped to a maximum frequency. Additionally, linear changes in the geometrical parameter of interest are mapped to logarithmic changes in the carrier frequency. Frequency is a good choice for this mapping because humans perceive frequency changes with relatively high resolution: typically, frequency changes can be detected with an accuracy of up to 0.3% [8]. In addition, the use of frequency as the relevant mapping parameter has the advantage that its perception is generally not dependent on the acoustic characteristics of the space surrounding the system. The minimum and maximum frequency values should be chosen to give a wide frequency range, well within the human audible range. During operation, the user is able to select the parameters of this sonification strategy, such as the minimum frequency value, the frequency range and the carrier signal, according to his/her preference. This is done using a simple Graphical User Interface (GUI), as depicted in Figure 3.

Fig. 3. GUI for basic synthesis sonification.
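As an illustration of this mapping, the following C sketch (ours; the function name and parameter names are assumptions, not part of the described system) converts a parameter value into a carrier frequency such that equal parameter steps give equal frequency ratios:

    #include <math.h>

    /* Hypothetical sketch: map |value| in [v_min, v_max] onto a logarithmic
       frequency scale between f_min and f_max (all in Hz), so that linear
       parameter changes produce logarithmic frequency changes. Assumes
       v_max > v_min and f_min > 0. */
    double map_to_frequency(double value, double v_min, double v_max,
                            double f_min, double f_max)
    {
        double t = (fabs(value) - v_min) / (v_max - v_min); /* normalize to [0,1] */
        return f_min * pow(f_max / f_min, t);
    }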

It is also possible to apply a modulation to the carrier sound using a low-frequency oscillator, in the form of either amplitude modulation (AM) or frequency modulation (FM). The frequency of this oscillator can also be mapped to the continuous geometrical parameter of interest. Such a mapping has the advantage that a parameter value of 0 can be directly mapped to a modulation frequency of 0 Hz. This results in no audible modulation and allows the user to recognize a null parameter value with relative ease. Other mappings are possible, depending on the choice of sound carrier; for example, in the case of narrow-band noise, the bandwidth of the noise can be used.
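A minimal sketch of the AM variant (our illustration; the raised-cosine modulator with full depth is an assumption) shows why a null parameter value is easy to recognize: a 0 Hz modulator leaves the carrier untouched.

    #include <math.h>

    /* Hypothetical sketch: amplitude modulation by a low-frequency oscillator
       whose rate is mapped from the geometrical parameter. At mod_rate_hz == 0
       the modulation factor is exactly 1, i.e. no audible modulation. */
    double am_sample(double carrier_sample, double mod_rate_hz, double t)
    {
        return carrier_sample * (0.5 + 0.5 * cos(2.0 * M_PI * mod_rate_hz * t));
    }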

In the case of curvature data, it is necessary to inform the user whether the curvature is positive or negative. Here, the sign of the curvature is mapped to the stereo panning of the sound output in the following way. If the data are negative, the output is weighted towards the left channel. More precisely, if the absolute value of the negative data exceeds 20% of the maximum absolute value of the curvature data, the sound is output entirely in the left channel, and no output is heard in the right channel. Similarly, if the data are positive, the output is weighted towards the right channel, and if the value exceeds 20% of the maximum absolute value of the dataset, the sound is output entirely in the right channel. Between these two reference points, the output is linearly cross-faded between the two channels, and a curvature value of 0 is mapped to a sound reproduced midway between the two loudspeakers. This mapping for curvature is applied in all sonification strategies described in this paper.
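The panning rule can be summarized by the following C sketch (ours; the function and gain names are assumptions), which saturates at ±20% of the maximum absolute curvature and cross-fades linearly in between:

    /* Hypothetical sketch: curvature k steers the stereo balance. Output is
       fully left below -0.2*k_abs_max, fully right above +0.2*k_abs_max, and
       linearly cross-faded in between; k == 0 sits exactly in the middle. */
    void pan_gains(double k, double k_abs_max, double *gain_l, double *gain_r)
    {
        double limit = 0.2 * k_abs_max;
        double t; /* 0 = full left, 1 = full right */
        if (k <= -limit)
            t = 0.0;
        else if (k >= limit)
            t = 1.0;
        else
            t = (k + limit) / (2.0 * limit);
        *gain_l = 1.0 - t;
        *gain_r = t;
    }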

To signify discontinuities in the curvature data, two different modulations are applied to the sound. When the finger moves across a discontinuity position on the surface, the amplitude of the audio output is rapidly increased and then decreased, resulting in a click-like sound played in synchrony with the finger's movement across the discontinuity. This sound is easily distinguished from the pure tones associated with ordinary curvature values and gives the user initial feedback about the presence and approximate position of the discontinuity. To allow precise exploration of the position of the discontinuity, a specific sound is reproduced if the finger rests on the exact position of the discontinuity. This sound is made by rapidly alternating between the two frequencies that correspond to the curvature values on either side of the discontinuity. Discontinuity positions are thus the only points along the surface at which the reproduced sound is not stable over time, but is frequency-modulated. This modulation continues until the finger moves away from the spatial position of the discontinuity.
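The resting-on-discontinuity cue can be sketched as follows (our illustration; the alternation rate and the phase-accumulator oscillator are assumptions, since the paper does not specify how the alternation is generated):

    #include <math.h>

    /* Hypothetical sketch: a tone that alternates between the two carrier
       frequencies corresponding to the curvature values on either side of the
       discontinuity. A phase accumulator keeps the frequency switch click-free. */
    typedef struct { double phase; } AltOsc;

    double alt_osc_tick(AltOsc *o, double f_left, double f_right,
                        double alt_rate_hz, double t, double fs)
    {
        double f = (fmod(t * alt_rate_hz, 1.0) < 0.5) ? f_left : f_right;
        o->phase += 2.0 * M_PI * f / fs; /* advance phase continuously */
        return sin(o->phase);
    }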

B. Modal synthesis

Modal synthesis is a physical modeling technique for sound rendering, with theoretical roots in modal analysis [9]. The aim of modal synthesis is to mimic the dynamic properties of an elastic structure in terms of its characteristic modes of vibration. A physical structure has an inherent set of modes of vibration, determined by its material, its dimensions and the conditions at its boundaries. Each mode can be defined by its resonant frequency, damping factor and mode shape (eigenfunction). In modal analysis, the goal is to decouple the equations of motion of a structure, so that they can be solved separately and the individual modes can be calculated. By adding together the respective modal responses, the frequency response of the entire structure can be found [9].

The modal synthesis implementation presented here is based on research by van den Doel and Pai [10]. In this approach, a modal model M with N modes at a specific location on a physical object is described as M = {f_M, d_M, a_M}. In this definition, f_M, d_M and a_M are vectors of length N containing the modal frequencies (in hertz), the angular decay rates (in hertz) and the amplitude coefficients, respectively. The impulse response y(t) of M at the position of interest on the object is

    y(t) = \sum_{n=1}^{N} a_{M_n} e^{-d_{M_n} t} \sin(2\pi f_{M_n} t)   (2)

for t ≥ 0, and zero for t < 0; y(t) corresponds to the audio signal as a function of time. In [10] the authors describe different methods to obtain the modal data from real physical objects. Formulating these data is a non-trivial operation; however, the authors have provided a wide range of models whose modal parameters have been pre-calculated according to the techniques presented in their paper.

In practice, Eq. (2) is implemented as a bank of second-order resonant band-pass filters. We have integrated the modal synthesis method in this way with the Max/MSP sound platform, in the form of an external library written in the programming language C. This library is able to load any modal synthesis model file whose format is compatible with that defined in [10].
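For illustration, Eq. (2) can also be rendered by direct additive evaluation, as in the C sketch below (ours; the real-time implementation described above instead uses the resonant filter bank):

    #include <math.h>

    /* Hypothetical sketch: offline rendering of the impulse response of
       Eq. (2) from the modal data (f, d, a) of N modes, at sample rate fs. */
    void render_modes(const double *f, const double *d, const double *a,
                      int N, double fs, double *out, int n_samples)
    {
        for (int i = 0; i < n_samples; i++) {
            double t = i / fs;
            double y = 0.0;
            for (int n = 0; n < N; n++)
                y += a[n] * exp(-d[n] * t) * sin(2.0 * M_PI * f[n] * t);
            out[i] = y;
        }
    }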

In our implementation, to communicate continuous geometrical data we map the magnitude of the parameter of interest to the frequency scaling factor of the modal synthesis model. This modification replaces f_M in Eq. (2) with f_S = α f_M, where α is the frequency scaling factor, which is directly proportional to the geometrical value of interest. The relationship between the scaling factor and the geometrical parameter is chosen in such a way that the spectral response of the scaled model remains within the human audible frequency range. The perceptual effect of frequency scaling in this way is the impression that, as the scaling value α increases, the modeled object M decreases proportionally in size. In order to investigate this mapping strategy we loaded a selection of the models proposed in [10]. For the case of curvature sonification, we also experimented with the model of a circular metal plate. The motivation to use this particular model stems from the fact that curvature is the reciprocal of the radius of the osculating circle of the curve at the point of interest, as mentioned in Section II. Since this model was not available in the model library, we implemented it by calculating the modal frequencies from the solution of the wave equation for a circular plate [11].
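A sketch of the scaling choice (ours; the proportionality constants and the 16 kHz ceiling are illustrative assumptions, not values from the paper) is:

    /* Hypothetical sketch: choose the frequency scaling factor alpha in
       proportion to the geometrical value, clamped so that the highest scaled
       mode f_S = alpha * f_M stays within the audible range. */
    double choose_alpha(double value, double v_max, double f_model_top_hz)
    {
        double alpha = 1.0 + 3.0 * (value / v_max);  /* illustrative mapping  */
        double alpha_max = 16000.0 / f_model_top_hz; /* keep spectrum audible */
        return (alpha < alpha_max) ? alpha : alpha_max;
    }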

The sonification application has been developed with a GUI that allows the user to control the frequency scaling and to scale the damping and amplitude coefficients of the modal synthesis model. This GUI gives the user the ability to explore the effects of modifying the model parameters and to tune them according to his/her preferences.

In practice, there are two important considerations in the modal synthesis approach described here. The first is the nature of the signal used to excite the band-pass resonant filters in the model implementation, and the second is how the application reacts to the different modes of interaction shown in Figure 2. In our application, the nature of the excitation signal is influenced by the interaction mode as follows. If the user taps the haptic interface, as illustrated in Figure 2(a), the model is excited by a short impulse. In the case of sustained contact with the haptic interface without movement of the finger (Figure 2(b)), the model is excited at the initial moment of contact, as in the previous case. Following this initial excitation, the behaviour depends on a preset defined by the user: either no further excitation signal is produced, resulting in no sound, or a continuous excitation signal with constant parameters is produced, consisting of user-definable colored noise. Finally, if the user's finger slides over the haptic interface (Figure 2(c)), the excitation signal also consists of continuous user-definable colored noise. The user is also able to select an option where the speed of the finger is mapped to the amplitude of this noise carrier.

Concerning the sonification of discrete geometrical parameters, for example discontinuities and inflection points, a modal synthesis model different from that used for the continuous geometrical parameter currently being sonified is employed. Ideally, the model should be chosen so that its characteristic sound can be easily distinguished from that of its counterpart used for the continuous geometrical data. For example, to sonify discrete geometrical parameters we have used bell, wooden table or glass models. The only excitation signal used for this model is a short impulse, which is triggered when the user comes into contact with the point where the discrete geometrical parameter of interest is located. The model settings for these parameters can also be modified by the user through the GUI.

C. Wavetable Sampling Synthesis

Wavetable sampling synthesis [12] is an alternative approach to sound generation that can be used for the sonification of geometrical parameters, as described previously. To demonstrate how this approach can be used, we have developed two alternative sonification strategies. As for the other sonification approaches described here, a GUI is provided to allow the user to tune parameters relating to the synthesis and mapping strategies involved in the process.

For the first sonification strategy, a continuous sound is produced while the user remains in contact with the haptic device. This continuous sound is made up of a pre-prepared sample, or wavetable, that is repeatedly played back without interruption, i.e. in a loop. To demonstrate this technique, we chose a recording of a car engine as the sample. The sound is designed in such a way that, when played back in a repeating loop, no audible interruption is heard, and the result resembles a car engine running continuously. During the user's exploration, the continuous geometrical parameter of interest is mapped to the playback speed of the sample. As in the sonification approaches described previously, this mapping is designed in such a way that the resulting spectral response remains within the human audible frequency range.
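A looped, variable-rate wavetable reader of the kind this strategy requires can be sketched in C as follows (our illustration; the struct and function names are assumptions):

    /* Hypothetical sketch: looped wavetable playback in which the geometrical
       parameter controls the playback rate `speed` (1.0 = original pitch).
       Linear interpolation smooths fractional read positions; assumes
       0 < speed < length. */
    typedef struct {
        const float *table; /* looped sample, e.g. the engine recording */
        int length;
        double pos;         /* fractional read position */
    } Wavetable;

    float wavetable_tick(Wavetable *w, double speed)
    {
        int i = (int)w->pos;
        double frac = w->pos - i;
        float s = (float)((1.0 - frac) * w->table[i]
                          + frac * w->table[(i + 1) % w->length]);
        w->pos += speed;
        if (w->pos >= w->length)
            w->pos -= w->length; /* wrap for a seamless loop */
        return s;
    }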

For the second sonification strategy, the continuous geometrical data of interest are represented using a slightly different method. In this case, a pre-prepared audio sample is chosen that is relatively short and impulse-like; for our demonstration, we use a recording of a table tennis ball bouncing on a table. While the user's finger remains in contact with the haptic device, this sample is played back repeatedly, with a period of silence between each repetition. Although the sample playback speed remains constant, the continuous geometrical parameter of interest is mapped in such a way that an increase in the data value results in a decrease in the length of the silence between repetitions. This has the effect of increasing the repetition rate of the sample without changing its pitch.
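The repetition-rate mapping can be sketched as follows (ours; the linear interpolation between a longest and shortest gap is an assumption, as the paper only states that larger values shorten the silence):

    /* Hypothetical sketch: length of the silence (in seconds) inserted between
       repetitions of the impulse-like sample. Larger parameter values give a
       shorter gap, i.e. a higher repetition rate, without any pitch change. */
    double silence_gap(double value, double v_min, double v_max,
                       double gap_max_s, double gap_min_s)
    {
        double t = (value - v_min) / (v_max - v_min); /* normalize to [0,1] */
        return gap_max_s + t * (gap_min_s - gap_max_s);
    }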

Discrete geometrical parameters can also be indicated using sample-based synthesis, where different parameters of interest are represented by different audio samples. The samples used in this case should also have characteristic sounds that can be easily distinguished from the sounds used simultaneously to sonify other data. These samples are played back only once, and are not repeated continuously. They are triggered when the user comes into contact with the point where the discrete geometrical parameter of interest is located. As an example, in our demonstration we use the recorded sound of a car horn to indicate a discontinuity and the recorded sound of a car changing gear to indicate an inflection point.

IV. EXPERIMENTAL USER EVALUATION OF SOUND FEEDBACK

For the experimental evaluation of the sonification application and the usability of the different sonification approaches, we propose to measure the efficiency, effectiveness and satisfaction of the users. In the evaluation we will first generate a number of different synthetic curve shapes (the other geometrical properties of interest are calculated as a by-product). These shapes, along with their geometrical properties, will be rendered audible using a pre-defined set of sonification strategies based on the methods described in Section III. The users' ability to derive a correct mental image of the curve shape and its related geometrical properties will then be used as a measure of the usability of the system. In the experiments, the users will be asked to touch and explore the haptic interface, and to listen to the sound produced. The participants will be mathematically well-trained subjects who have a well-developed idea of a mathematical function. They should at least have followed elementary courses in calculus, so that they are familiar with concepts such as discontinuity, zero crossings, and positive and negative functions. The usability of the system will be determined according to three criteria. The first is the number of errors made by the users, and the second is the time it takes to perform the task. Finally, for each sonification approach the users will be asked to rate on a scale from 1 to 5 the naturalness, the intuitiveness, and the difficulty of the task.

V. CONCLUSIONS

In this paper, we have presented the theoretical background and motivation for the use of sonification in the context of the SATIN project [1]. This project consists of a multimodal augmented reality environment where the user can interact with virtual 3-D objects, and sonification is considered a complementary modality for evaluating a number of geometrical properties of the objects, as detailed in Section II. An advantage of adopting a sonification approach is that it can help the user to perceive geometrical properties of shapes, such as curvature and tangency, that would otherwise be hardly detectable using vision or touch. In Section III, we proposed three different approaches to the sonification problem. The first is based on relatively simple sound synthesis techniques. The second involves a physical modeling method known as modal synthesis. The final approach employs pre-prepared sounds using the technique known as wavetable sampling synthesis. All of the proposed sonification strategies have been developed with the fundamentals of human hearing in mind; in addition, the applications have been designed with a GUI that allows the user to adjust sonification parameters according to his/her preferences. We have also shown how sound can be useful in multimodal interfaces as a supportive and complementary tool. Further testing is required to determine the improvement in performance over purely graphical and haptic environments. A user evaluation strategy has been outlined in order to measure the added value of sound in the SATIN interface, and also to compare different sonification techniques.

ACKNOWLEDGMENTS

The research work presented in this paper has been supported by the European Commission under the project FP6-IST-5-054525 SATIN (Sound And Tangible Interfaces for Novel product design).

REFERENCES

[1] (2006, October) Sound and tangible interfaces for novel product design. [Online]. Available: http://www.satin-project.eu/
[2] C. A. Catalano, B. Falcidieno, F. Giannini, and M. Monti, “A survey of computer-aided modeling tools for aesthetic design,” Journal of Computing and Information Science in Engineering, vol. 2, pp. 11–20, March 2002.
[3] R. Minghim and A. R. Forrest, “An illustrated analysis of sonification for scientific visualisation,” in Proc. 6th IEEE Visualization Conference, 1995, pp. 110–117.
[4] (1997) Sonification report: Status of the field and research agenda. [Online]. Available: http://www.icad.org/websiteV2.0/References/nsf.html/
[5] T. Hermann, “Sonification for exploratory data analysis,” Ph.D. dissertation, Bielefeld University, Germany, 2002.
[6] T. Hermann and A. Hunt, “An introduction to interactive sonification,” IEEE Multimedia, vol. 12, no. 2, pp. 20–24, 2005.
[7] S. Shelley, M. Alonso, D. Hermes, and A. Kohlrausch, “On the use of sound for representing geometrical information of virtual objects,” in Proceedings of the 14th International Conference on Auditory Display, Paris, France, July 2008, p. P12.
[8] C. C. Wier, W. Jesteadt, and D. M. Green, “Frequency discrimination as a function of frequency and sensation level,” Journal of the Acoustical Society of America, vol. 61, pp. 178–183, 1977.
[9] T. D. Rossing and N. H. Fletcher, Principles of Vibration and Sound, 2nd ed. Springer, 2004.
[10] K. van den Doel and D. K. Pai, “Modal synthesis for vibrating objects,” in Audio Anecdotes: Tools, Tips, and Techniques for Digital Audio. A K Peters, 2004.
[11] A. W. Leissa, Vibration of Plates. Washington, D.C.: NASA SP-160, NASA, 1969.
[12] D. C. Massie, “Wavetable sampling synthesis,” in Applications of Digital Signal Processing to Audio and Acoustics. Kluwer Academic Publishers, 1998, pp. 311–342.