

9th International Conference on Music Perception and Cognition

6th Triennial Conference of the European Society for the Cognitive Sciences of Music

Abstracts

Edited by Mario Baroni, Anna Rita Addessi, Roberto Caterina, Marco Costa

Alma Mater Studiorum University of Bologna

Italy

August 22 – 26, 2006


Typesetting and editorial assistance: Leonardo Corazza

Front cover: graphical elaboration from Édouard Manet, “Le Fifre”, 1866, Paris: Musée d’Orsay


Contents

I Tuesday, August 22nd 2006 35

1 Keynote Lecture I 37

1.1 The nature of music from a neuropsychologist’s perspective . . . . . . . . . . . . 37

2 Symposium: Musical time, narrative and voice: Exploring the origins of human musicality I 39

2.1 Narrative, splintered temporalities and the unconscious in 20th century music . . 40

2.2 Musical motives of infant communication: Expressions of human rhythm and sympathy make dialogues of purpose in experience . . . . . . . . . . . . . . . . 40

2.3 Proto-music is the food of love: An ethological view of sources of emotion in mother-infant interaction . . . . . . . . . . . . . . . . . . . . . . . . 41

2.4 Music, meaning, and narratives of human evolution . . . . . . . . . . . . . . . . 42

3 Rhythm I 43

3.1 Synchronization of perceptual onsets of performed bass notes . . . . . . . . . . . 43

3.2 The correlation between a singing style where singing starts slightly after the accompaniment and a groovy, natural impression: The case of Japanese pop diva Namie Amuro . . . . . . . . . . . . . . . . . . . . . . . . 44

3.3 Groove within a jazz rhythm section: A study of its production and control . . . . 45

3.4 The influence of tempo on movement and timing in rhythm performance . . . . . 45

3.5 Analysis of local and global timing and pitch change in ordinary melodies . . . . 46

3.6 Selecting among computational models of music cognition: Towards a measure of surprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47


4 Timbre I 49

4.1 Determining the Euclidean distance between two steady-state sounds . . . . . . . 49

4.2 The relevance of source properties to timbre perception: Evidence from previous studies on musical instrument tones . . . . . . . . . . . . . . . . . . 50

4.3 Effect of critical band data reduction of musical instrument sounds . . . . . . . . 51

4.4 Investigating piano timbre: Relating verbal description and vocal imitation to gesture, register, dynamics and articulation . . . . . . . . . . . . . . . . 52

4.5 Timbral description of musical instruments . . . . . . . . . . . . . . . . . . . . 53

4.6 Comparing real-time and retrospective perceptions of segmentation in computer-generated music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

5 Symposium: Tempo and Memory 55

5.1 Cues for tempo preference and tempo memory of imagined compositions . . . . 55

5.2 The effect of exposure and expertise on timing judgments: Preliminary results . . 57

5.3 Audio-vision: Visual input drives perceived music tempo . . . . . . . . . . . . . 58

5.4 Memory representations of musical tempo: Stable or adaptive? . . . . . . . . . . 59

6 Neuroscience I 61

6.1 Psychophysiological investigation of emotional states evoked by music . . . . . . 61

6.2 Investigation of brain activation while listening to and playing music using fNIRS 62

6.3 Neural correlates of musically-induced emotions: An fMRI-study . . . . . . . . 63

6.4 The use of music in everyday life as a personality dependent cognitive emotional modulation-strategy for health . . . . . . . . . . . . . . . . . . . . 63

6.5 Probing the representation of melody, an ERP study . . . . . . . . . . . . . . . . 64

6.6 A natural high: The physiological and psychological effects of music listening on former ecstasy users . . . . . . . . . . . . . . . . . . . . . . . . 65

7 Social Psychology 67

7.1 Self-representation of bimusical Khanty . . . . . . . . . . . . . . . . . . . . . . 67

7.2 Music and identity of Brazilian Dekasegi children and adults living in Japan . . . 68

7.3 New roles of music in contemporary advertising . . . . . . . . . . . . . . . . . . 69

7.4 Analysing music and social interaction: How adolescents talk about musical role models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

7.5 Choosing music in the Internet era . . . . . . . . . . . . . . . . . . . . . . . . . 70

7.6 Shared Soundscapes: A social environment for collective music creation . . . . . 71


8 Poster Session I 73

8.1 An “of intellect sensation” at the root of the thought about artistic gesture . . . . 73

8.2 Comparative analysis of the emotional intensity when performing different styles. A study of cases . . . . . . . . . . . . . . . . . . . . . . . . . . 74

8.3 Representation of musician’s body: The part of emotion . . . . . . . . . . . . . . 75

8.4 The functional neuroanatomy of temporal structure in music and language . . . . 75

8.5 A test of the role of positive and negative affect in the prediction of Performance Anxiety severity in a sample of piano students . . . . . . . . . . . . 76

8.6 Coping strategies for performance anxiety in musicians . . . . . . . . . . . . . . 77

8.7 Perception of structural boundaries in popular music . . . . . . . . . . . . . . . 78

8.8 The role of emotions in music creativity . . . . . . . . . . . . . . . . . . . . . . 79

8.9 Analysis of stylistic rules of folk music from the standpoint of the rules of excitement 80

8.10 Memory for lyrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

8.11 Expressiveness in music: The movements of the performer and their effects on the listener . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

8.12 Markov processes and computer aided music composition in real-time . . . . . . 82

8.13 Online measurement of emotional musical experiences using internet-based methods - an exploratory approach . . . . . . . . . . . . . . . . . . . . . 83

8.14 The role of emotions for musical long-term memory . . . . . . . . . . . . . . . . 84

8.15 Children’s improvisations: The development of musical language . . . . . . . . . 85

8.16 The sound of suspense: An analysis of music in Alfred Hitchcock films . . . . . 85

8.17 Using modern popular songs: Enhancement of emotional perception when developing sociocultural awareness of foreign language students . . . . . . . . 86

8.18 Time perception of unimodal or crossmodal auditory-visual events in normal subjects: Effects of aging and of musical education . . . . . . . . . . . . 87

8.19 How far is music universal? An intercultural comparison . . . . . . . . . . . . . 88

8.20 The effect of music on subjective emotional state and psychophysiological response in singers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

8.21 The evolution of musical styles in a society of software agents . . . . . . . . . . 89

8.22 Unobtrusive practise tools for pianists . . . . . . . . . . . . . . . . . . . . . . . 90

8.23 What type of BGM is appropriate for a residential space? A study of relation between a “healing music” and “healing space” . . . . . . . . . . . . . . 90


8.24 How the brain learns to like - an imaging study of music perception . . . . . . . 91

8.25 Effects of identification and musical quality on emotional responses to music . . 92

8.26 The cognition of intended emotions for a drum performance: Differences and similarities between hearing-impaired people and people with normal hearing ability 93

8.27 Compactness and Convexity as models for the preferred intonation of chords . . 94

8.28 Affective characters of music and listeners’ emotional responses to music . . . . 95

8.29 Agent-based melody generation model according to cognitive and bodily features: Toward composition of Japanese traditional pentatonic music . . . . . . 96

8.30 Fetal and neonatal musical brain processes . . . . . . . . . . . . . . . . . . . . . 97

8.31 A comparison of the effects of music and speech prosody on three dimensions of affective experience . . . . . . . . . . . . . . . . . . . . . . . . . 97

8.32 Influences of musical training and development on neurophysiological correlates of music and speech perception in children . . . . . . . . . . . . . . 98

8.33 A survey study of emotional reactions to music in everyday life . . . . . . . . . . 99

8.34 The effects of pre-existing moods on the emotional responses to music . . . . . . 100

8.35 Emotion-relevant characteristics of temperament and the perceived magnitude of tempo and loudness of music . . . . . . . . . . . . . . . . . . . . . 100

8.36 Constructing a support system for self-learning playing the piano at the beginning stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

8.37 Dissimilarity measures and emotional responses to music . . . . . . . . . . . . . 102

8.38 Historiometric analysis of Clara Schumann’s collection of recital notes: Life-span development, mobility, and repertoire . . . . . . . . . . . . . . . 103

8.39 Melodic intervals as reflected in body movement . . . . . . . . . . . . . . . . . 104

8.40 A foreign sound to your ear: A comparison of American vs. German-speaking Bob Dylan fans . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

8.41 Musical qualities in the infant cry . . . . . . . . . . . . . . . . . . . . . . . . . 106

8.42 A descriptive analysis of university music performance teachers’ sound-level exposures during a typical day of teaching, performing, and rehearsing . . . . 107

8.43 On machine arrangement for smaller wind-orchestras based on scores for standard wind orchestras . . . . . . . . . . . . . . . . . . . . . . . . . . 107

8.44 Processing of self-made and induced errors in musicians . . . . . . . . . . . . . 109

8.45 Imaging the music of Arvo Pärt: The importance of context . . . . . . . . . . . . 109

8.46 Quotation in jazz improvisation: A database and some examples . . . . . . . . . 110


8.47 Tension and narrativity in tonal music . . . . . . . . . . . . . . . . . . . . . . . 111

8.48 Recognition of similarity relationships between time-stretched spectral structures 111

8.49 Cognitive and spatial abilities related to young children’s development of absolute pitch perception . . . . . . . . . . . . . . . . . . . . . . . . . 112

8.50 Mathematical framework for analyzing qualitative information flow in processes of teaching musical renderings . . . . . . . . . . . . . . . . . . . . 112

8.51 Strong emotions in music and their psychophysiological correlates . . . . . . . . 113

8.52 The interplay between music and language processing in song recognition . . . . 114

8.53 Musical creativity and the teacher. An examination of data from an investigation of Secondary school music teachers’ perceptions of creativity . . . . . 115

8.54 Rhythmic grouping construction in the first movement of Paul Hindemith’s flute and piano sonata performance . . . . . . . . . . . . . . . . . . . . 116

8.55 Convergences between learning to play the piano and motorics . . . . . . . . . . 116

8.56 Music, consciousness and unconscious . . . . . . . . . . . . . . . . . . . . . . . 117

8.57 Cultural aspects of music perception . . . . . . . . . . . . . . . . . . . . . . . . 118

8.58 The unusual symmetry of musicians: Musicians show no hemispheric dominance for visuospatial processing . . . . . . . . . . . . . . . . . . . . . . 119

8.59 A study on the effect of musical imagery on spontaneous otoacoustic emissions in musicians and non-musicians . . . . . . . . . . . . . . . . . . . . 120

8.60 Musical structure processing in autistic spectrum disorder . . . . . . . . . . . . . 121

8.61 Preparing a musical performance: Rehearsal techniques . . . . . . . . . . . . . . 122

8.62 Some considerations on algorithmic music and madrigals of Gesualdo da Venosa 122

8.63 The algorithmic music and the Musical Offering of J.S.Bach . . . . . . . . . . . 123

8.64 Facial emotional expression recognition after bilateral amygdala damage: The use of a sonorous musical context . . . . . . . . . . . . . . . . . . . . 123

8.65 Processing of music syntactic information in brain lesioned patients - an ERP study 124

8.66 Single time-controlled fractional noise algorithm . . . . . . . . . . . . . . . . . 125

8.67 Development of musical preferences of elementary school children . . . . . . . . 126

8.68 Music perception understanding is a prerequisite to implementing computer aided musical analyses . . . . . . . . . . . . . . . . . . . . . . . . . . 127

8.69 Why do mothers sing to their children? . . . . . . . . . . . . . . . . . . . . . . . 127

8.70 Timbrescape: A musical timbre and structure visualization method using tristimulus data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128


8.71 Drumming it in: Towards a greater awareness of individual learning styles in instrumental teaching . . . . . . . . . . . . . . . . . . . . . . . . . . 129

8.72 Unveiling power of music: Socio-historical and psychological perspectives . . . 129

8.73 Online investigation of affective priming of words and chords . . . . . . . . . . . 130

8.74 Algorithmic prediction of music complexity judgements . . . . . . . . . . . . . 131

8.75 The sun, the moon and the self-taught musician . . . . . . . . . . . . . . . . . . 132

8.76 Multi-agent composition model based on sensory consonance theory . . . . . . . 132

8.77 Musical interpretation and the content of music . . . . . . . . . . . . . . . . . . 133

8.78 The Ethiopian lyre Bagana: An instrument for emotion . . . . . . . . . . . . . . 134

8.79 The relationship between narcissism and music performance anxiety . . . . . . . 135

8.80 A comparison of the effects of music and physical exercise on spatial-temporal performance in adolescent males . . . . . . . . . . . . . . . . . . . . 135

8.81 Automatic emotion classification of musical segments . . . . . . . . . . . . . . . 136

8.82 Effects of state anxiety on performance in pianists: Relationship between competitive state anxiety inventory-2 subscales and piano performance . . . . 137

8.83 The recognition of emotional qualities expressed in music . . . . . . . . . . . . . 138

9 Symposium: Gesture, anticipation and expression: The origins of human musicality II 139

9.1 Musicality and the human capacity for culture . . . . . . . . . . . . . . . . . . . 140

9.2 Trajectories of expression in musical interaction and the need for common ground 140

9.3 Meaning in motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

9.4 The “attunement” between children and a musical machine . . . . . . . . . . . . 141

10 Education I 143

10.1 Music students at a UK conservatoire: Identity and learning . . . . . . . . . . . . 143

10.2 An exploration of the formation of primary school student-teachers’ beliefs regarding creative music making . . . . . . . . . . . . . . . . . . . . . 144

10.3 External representations in teaching and learning of music . . . . . . . . . . . . 145

10.4 Conceptions of musical ability . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

10.5 University to career transitions: Finding a musical identity . . . . . . . . . . . . 146

10.6 Children’s practice of computer-based composition as a form of play . . . . . . . 147


11 Emotion I 149

11.1 Walking and playing: What’s the origin of emotional expressiveness in music? . . 149

11.2 What music appears in musical peak experiences? . . . . . . . . . . . . . . . . . 150

11.3 Peak experience in music: A case study with listeners and performers . . . . . . 151

11.4 Strong experiences in music, their categorisation and connection with personality traits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

11.5 Quantification of Gabrielsson’s relationships between felt and expressed emotion 153

11.6 Emotional communication mediated by two different expression forms: Drawings and music performances . . . . . . . . . . . . . . . . . . . . . . . 153

12 Symposium: Acquired Musical Disorders 155

12.1 Harmonic priming in an amusic patient: The power of implicit tasks . . . . . . . 156

12.2 Dystimbria: A distinct musical syndrome? . . . . . . . . . . . . . . . . . . . . . 156

12.3 Focal dystonia in musicians: An acquired musical disorder? . . . . . . . . . . . 157

12.4 Music reading deficiencies and the brain . . . . . . . . . . . . . . . . . . . . . . 158

13 Rhythm II 161

13.1 Towards a style-specific basis for computational beat tracking . . . . . . . . . . . 161

13.2 Metrical interpretation modulates brain responses to rhythmic sequences . . . . . 162

13.3 Anticipatory behaviour in music: Towards a new approach to musical synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

13.4 An intercultural study on tempo perception in Japanese court music gagaku . . . 164

13.5 An investigation of pre-schoolers’ corporeal synchronization with music . . . . . 164

13.6 A non-human animal can drum a steady beat on a musical instrument . . . . . . 165

14 Demonstrations I 167

14.1 The continuous response measurement apparatus (CReMA) . . . . . . . . . . . . 167

14.2 The application to implement artificial internal hearing . . . . . . . . . . . . . . 168


15 Pitch I 169

15.1 Harmonizing a tonal melody at the age of 6-15 years . . . . . . . . . . . . . . . 169

15.2 The effect of musical training and tonal language experience on the perception of speech and nonspeech pitch and musical memory . . . . . . . . . . . 170

15.3 The expressive intonation in violin performance . . . . . . . . . . . . . . . . . . 171

15.4 Are early-blind individuals more musical than late-blind? . . . . . . . . . . . . . 172

16 Education II 173

16.1 The effect of background music on the interpretation of a story in 5 year old children 173

16.2 Interpretation of the emotional content of a musical performance by 3 to 6-year-old children . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

16.3 Music lessons and emotional intelligence . . . . . . . . . . . . . . . . . . . . . 175

16.4 Children’s perception of some basic sound parameters . . . . . . . . . . . . . . . 176

17 Performance I 177

17.1 A methodology for the study and modeling of choral intonation practices . . . . 177

17.2 Pop-E: A performance rendering system for the ensemble music that considered group expression . . . . . . . . . . . . . . . . . . . . . . . . . . 178

17.3 A comparative study of air support in the trumpet, horn, trombone and tuba . . . 179

17.4 The visual impact of specific body parts on perceived conducting expressiveness . 180

18 Music Therapy I 181

18.1 Focus on music: Reporting on initial research into the musical interests, abilities, experiences and opportunities of visually impaired children with septo-optic dysplasia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

18.2 Empowering musical rituals as a way to promote health . . . . . . . . . . . . . . 182

18.3 Playing with autism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

18.4 An investigation of the effects of music in the treatment of patients with dementia 184

19 Rhythm III 187

19.1 Timing in sequences: Development of a visuospatial representation method and evidence for rhythmic categorical perception . . . . . . . . . . . . . . 187

19.2 Groove microtiming deviations as phase shifts . . . . . . . . . . . . . . . . . . . 188

19.3 Detecting changes in timing: Evidence for two modes of listening . . . . . . . . 188


20 Workshops 191

20.1 An introduction to musicians’ gestures: Recording, analyzing, and reporting on the “body language” of musicians . . . . . . . . . . . . . . . . . . 191

20.2 Sight-reading strategy: A cognitive psychology approach . . . . . . . . . . . . . 192

II Wednesday, August 23rd 2006 193

21 Symposium: The influence of preference upon music perception 195

21.1 An investigation of the effects of post-operative music listening in hospital settings 196

21.2 The effects of preferred music listening on pain . . . . . . . . . . . . . . . . . . 196

21.3 A qualitative analysis of everyday uses of preferred music across the life span . . 197

21.4 The effects of preferred music on driving game performance . . . . . . . . . . . 198

21.5 The effects of preferred music listening in college students and in renal failure patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

22 Education III 201

22.1 Children’s responses to 20th century “art” music, in Brazil and Portugal . . . . . 201

22.2 The early development of three musically highly gifted children . . . . . . . . . 202

22.3 Creating original operas with special needs students . . . . . . . . . . . . . . . . 203

22.4 Age differences in listening to music while studying . . . . . . . . . . . . . . . . 204

22.5 Instrumental lessons: What do they expect? The role of gender in pupil/teacher interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

22.6 Some trends in Estonian music education in the 21st century . . . . . . . . . 205

23 Pitch II 207

23.1 Scaled harmonic implication and its realization: Searching for a unified cognitive theory of music . . . . . . . . . . . . . . . . . . . . . . . . . 207

23.2 Relative pitch learning: The advantages of active training and Asian ethnicity . . 208

23.3 Melodic expectancy in contemporary music composition. Revising and extending the Implication-Realization model . . . . . . . . . . . . . . . . . . 209

23.4 Influence of tonal context on tone processing in melodies . . . . . . . . . . . . . 210

23.5 Priming by non-diatonic chords: The case of the Neapolitan chord . . . . . . . . 211

23.6 Singing along with music to explore tonality . . . . . . . . . . . . . . . . . . . . 212


24 Symposium: Infants’ experience and expression of musical rhythm 213

24.1 Infants’ experience of rhythm: Contributions from nature and nurture . . . . . . 213

24.2 Investigating the rhythms and vocal expressions of infant musicality, in Crete, Japan and Scotland . . . . . . . . . . . . . . . . . . . . . . . . . 214

24.3 Do infants dance to music? A study of spontaneous rhythmic expressions in infancy 215

24.4 Japanese home environments and infants’ spontaneous responses to music: Initial reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216

25 Emotion II 219

25.1 How does music induce emotions in listeners? The AMUSE model . . . . . . . . 219

25.2 Cross-cultural approach to emotions in choir singing . . . . . . . . . . . . . . . 219

25.3 Effects of mode, consonance, and register in a picture- and word-evaluation affective priming task . . . . . . . . . . . . . . . . . . . . . . . . . 220

25.4 Singing, music listening, and music lessons on children’s sensitivity to emotional cues in speech . . . . . . . . . . . . . . . . . . . . . . . . . . 221

25.5 Induction of anxiety with music: Is it related with attentional and memory biases toward threatening images? . . . . . . . . . . . . . . . . . . . . 222

25.6 Decoding emotion in music and speech: A developmental perspective . . . . . . 223

26 Musical Style 225

26.1 A timed Blindfold Test: Identifying the jazz masters in short time spans . . . . . 225

26.2 Parameters distinguishing baroque and romantic performance . . . . . . . . . . . 226

26.3 Style processing: An empirical research on Corelli’s style . . . . . . . . . . . . . 227

26.4 An exploration of musical style from human and connectionist perspectives . . . 228

27 Perception I 229

27.1 Infants’ perception of timbre in music . . . . . . . . . . . . . . . . . . . 229

27.2 From sound to symbol: Connecting perceptual knowledge and conceptual understanding in music theory . . . . . . . . . . . . . . . . . . . . . . . 230

27.3 Notational audiation is the perception of the “mind’s voice” . . . . . . . . . . . . 231

27.4 A criterion-related validity test of selected indicators of musical sophistication using expert ratings . . . . . . . . . . . . . . . . . . . . . . . 232


28 Rhythm IV 233

28.1 The sonic illusion of metrical consistency in recent minimalist composition . . . 233

28.2 Keeping the tempo and perceiving the beat . . . . . . . . . . . . . . . . . . . . . 234

28.3 Captured by music, less by speech . . . . . . . . . . . . . . . . . . . . . . . . . 235

28.4 The counterpart of time-shrinking in playing regular sounding triplets of tones on the alto recorder . . . . . . . . . . . . . . . . . . . . . . . . . 235

29 Education IV 237

29.1 Assessment criteria of composition: A student perspective . . . . . . . . . . . . 237

29.2 Dramatising the score. An action research investigation of the use of Mozart’s Magic Flute as performance guide for his Clarinet Concerto . . . . . . . 238

29.3 Machine arrangement in a modern Jazz-style for a given melody . . . . . . . . . 239

29.4 Technological instruments for music learning . . . . . . . . . . . . . . . . . . . 240

30 Musical Meaning I 241

30.1 “Nearly Stationary”: The use of silence in Cage’s String Quartet in four parts . . 241

30.2 A method for recognising the melody in a polyphonic symbolic score . . . . . . 242

30.3 How the timing between notes can impact musical meaning . . . . . . . . . . . . 242

30.4 Influence of expressive music on the perception of short text messages . . . . . . 244

31 Neuroscience II 245

31.1 Neural mechanisms underlying multisensory processing in conductors . . . . . . 245

31.2 Automatic pitch processing in different kinds of musicians . . . . . . . . . . . . 246

31.3 The actions behind the scenes: What happens in your brain when you listen to music you know how to play? . . . . . . . . . . . . . . . . . . . . . 247

31.4 The brain in concert - activation during actual and imagined singing in professionals 247

32 Symposium: Musical communication 249

32.1 Musical communication in a social world . . . . . . . . . . . . . . . . . . . . . 250

32.2 Imitation and elaboration: Processes in young children’s improvisation . . . . . . 250

32.3 Bodily acts: Vocal performance re-considered . . . . . . . . . . . . . . . . . . . 251

32.4 Talking about music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252


33 Perception II 255

33.1 Similarity measures for tonal models . . . . . . . . . . . . . . . . . . . . . . . . 255

33.2 An interval cycle-based model of pitch attraction . . . . . . . . . . . . . . . . . 255

33.3 Evaluation and applications of tonal profiles for automatic music tonality description 256

33.4 A comparison between the temporal and pattern approach to virtual pitch applied to the root detection of chords . . . . . . . . . . . . . . . . . . 257

33.5 The geometry of musical chords . . . . . . . . . . . . . . . . . . . . . . . . . . 258

34 Cognition I 261

34.1 Music beyond the score: Its meaningful mathematics, modeling, performing - and the flow of spoken language . . . . . . . . . . . . . . . . . . . . 261

34.2 Motor-mimetic images of musical sound . . . . . . . . . . . . . . . . . . . . . . 262

34.3 Exact measures of musical structure for predicting memory for melodies . . . . . 263

34.4 An enculturation effect in music memory performance . . . . . . . . . . . . . . 264

34.5 Application of a new method for consistency assessment and grouping of listeners’ real-time identification of musical phrase parts . . . . . . . . . 265

34.6 Training a classifier to detect instantaneous musical cognitive states . . . . . . . 266

35 Symposium: Music and imagery: Stimulation through simulation 269

35.1 Studying musical imagery: Context and intentionality . . . . . . . . . . . . . . . 270

35.2 Emotion in real and imagined music: Same or different? . . . . . . . . . . . . . 270

35.3 Anticipatory auditory images and temporal precision in music-like performance . 271

36 Neuroscience III 273

36.1 Can amusics perceive harmony? . . . . . . . . . . . . . . . . . . . . . . . . . . 273

36.2 Sounds of intent: Initial mapping of musical behaviours and development in profoundly disabled children . . . . . . . . . . . . . . . . . . . . . . 274

36.3 Imaging the neurocognitive components of absolute pitch . . . . . . . . . . . . . 275

36.4 Music perception and production in a severe case of congenital amusia . . . . . . 276

36.5 Train the brain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277

36.6 Characteristics of harmonic patterns and their contribution to the stylistic ideal: An ERP study . . . . . . . . . . . . . . . . . . . . . . . . . . 278


37 Symposium: European Teachers and Music Education - (EuroTEAM) 279

37.1 Music teachers as researchers - a meta-analysis of Scandinavian research on music education . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280

37.2 From the training to the job: The beginning years as a music teacher . . . . . . . 280

37.3 University students, music teachers and social representations of music . . . . . . 281

37.4 Music education and music teacher training in Poland: The paradox . . . . . . . 282

38 Perception III 283

38.1 Effect of carrier on the pitch of long-duration vibrato tones . . . . . . . . . . . . 283

38.2 The information dynamics of melodic boundary detection . . . . . . . . . . . . . 284

38.3 Cross-domain mapping and the experience of the underlying voice-leading . . . . 285

38.4 Beethoven’s last piano sonata and those who chase crocodiles: Cross-domain mappings of auditory pitch in a musical context . . . . . . . . . . . . . 286

38.5 Zone of proximal development, mediation and melodic graphic representation . . 287

39 Music Therapy II 289

39.1 The effect of hypnotic induction on music listening experience of high and low musical involvers . . . . . . . . . . . . . . . . . . . . . . . . . . 289

39.2 Diagnosing level of mental retardation from music therapy improvisations: A computational approach . . . . . . . . . . . . . . . . . . . . . . . . 290

39.3 A study of relational communication process in music therapy . . . . . . . . . . 291

39.4 Music therapy and aggression in 50 children with mild mental handicap: A clinical trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292

39.5 The “Virtual Music Maker”: An interactive human-computer interface for physical rehabilitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293

39.6 Clinical evaluation of the treatment of high blood pressure with receptive music therapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293

40 Symposium: Longitudinal case studies of preparation for music performance 295

40.1 An overview of the longitudinal case study method for studying musicians’ practice 296

40.2 Shared performance cues: Predictors of expert individual practice and ensemble rehearsal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297

40.3 Action, thought, and self in cello performance . . . . . . . . . . . . . . . . . . . 297

40.4 Variability and automaticity in highly practiced performance . . . . . . . . . . . 298

41 Rhythm V 301

41.1 Cognitive and affective judgements of syncopated themes . . . . . . . . . . . . . 301

41.2 Investigating computational models of perceptual attack time . . . . . . . . . . . 302

41.3 Computer analysis of performance timing . . . . . . . . . . . . . . . . . . . . . 303

41.4 Beat induction with an autocorrelation phase matrix . . . . . . . . . . . . . . . . 303

41.5 Perceiving and cognizing the mathematical processes in music composition in the 20th century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304

41.6 A time-based approach to musical pattern discovery in polyphonic music . . . . 305

42 Demonstrations II 307

42.1 Digital representation of musical vibrato . . . . . . . . . . . . . . . . . . . . . . 307

42.2 PracticeSpace: A platform for real-time visual feedback in music instruction . . . 308

III Thursday, August 24th 2006 311

43 Symposium: Music in everyday life: A lifespan approach 313

43.1 Toddlers’ musical worlds: Musical engagement in 3.5 year olds . . . . . . . . . . 314

43.2 Music preference, music listening, and mood regulation in preadolescence . . . . 314

43.3 Differences in adolescents’ use of music in mood regulation . . . . . . . . . . . 315

43.4 Music preferences in adulthood: Why do we like the music we do? . . . . . . . . 316

44 Development 319

44.1 Lullaby and good night . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319

44.2 To sing or not to sing: Communication in early social interactions . . . . . . . . 320

44.3 Understanding performance anxiety in the adolescent musician: Approaches to instrumental learning and performance . . . . . . . . . . . . . . . . . . . . . . 321

44.4 Seeing into the future? Predicting achievement at a conservatoire . . . . . . . . . 321

44.5 “Anything Goes”: A case-study of extra-curricular musical participation in an English secondary school . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322

44.6 The effect of real-time visual feedback on the training of expressive performance skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323

45 Cognition and computational 325

45.1 Melodic perception in multi-voice music: The perceptual role of contour . . . . 325

45.2 Towards better automatic genre classifiers by means of understanding human decisions on genre discrimination . . . . . . . . . . . . . . . . . . . . . . . . . 326

45.3 “Voice” separation: Theoretical, perceptual and computational perspectives . . . 327

45.4 The evolution of melodic complexity in the music of Charles Parker . . . . . . . 328

45.5 A user-dependent approach to the perception of high-level semantics of music . . 328

45.6 Acquiring new musical grammars - a statistical learning approach . . . . . . . . 329

46 Symposium: Similarity perception I 331

46.1 Similarity relations in listening to music: How do they come into play . . . . . 331

46.2 Growing oranges on Mozart’s apple tree: Intra-opus coherence and aesthetic judgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332

46.3 Approaches for content-based retrieval of symbolically encoded polyphonic music 333

46.4 Transportation distances and human perception of melodic similarity . . . . . . . 333

47 Musical Meaning II 335

47.1 The Hausdorff metric in the Melody Space: A new approach to Melodic Similarity 335

47.2 Body and mind in the pianist’s performance . . . . . . . . . . . . . . . . . . . . 336

47.3 Personality correlates and the structure of music preferences . . . . . . . . . . . 336

47.4 Musical communication and musical comprehension: A perspective from pragmatics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

47.5 Sense and meaning in music: The correspondences hypothesis . . . . . . . . . . 338

47.6 Musical meaning: Imitation and empathy . . . . . . . . . . . . . . . . . . . . . 339

48 Symposium: Music in infancy: The importance of context 341

48.1 The role of context on the perception of tone chroma in infants and adults . . . . 342

48.2 Variability in infants’ responses to music . . . . . . . . . . . . . . . . . . . . . . 342

48.3 Perception of rhythm and meter in infancy . . . . . . . . . . . . . . . . . . . . . 343

49 Education V 345

49.1 Gender effects in young musicians’ mastery-oriented achievement behavior and their interaction with teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

49.2 Musical transfer-effects revisited: Preliminary results from a study among 21 primary schools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346

49.3 The singing lesson: Learning and non-verbal languages . . . . . . . . . . . . . 347

49.4 Early acquisition of musical aural skills . . . . . . . . . . . . . . . . . . . . . . 347

49.5 A longitudinal study of young female professional singers . . . . . . . . . . . . 348

50 Emotion III 351

50.1 Emotion and meaning in music fifty years later: Delayed realization of some of Leonard Meyer’s implications . . . . . . . . . . . . . . . . . . . . . . . . . . . 351

50.2 Regulation of emotions by listening to music in emotional situations . . . . . . . 352

50.3 Music-listening practices in workplace settings in the UK . . . . . . . . . . . . . 353

50.4 A cross-cultural study of the perception of emotions in music: Effects of rhythm and pitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

50.5 Spatio-temporal connectionist models to study the dynamics of music perception and emotional experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354

51 Symposium: Similarity perception II 357

51.1 Melodic and contextual similarity of folk song phrases . . . . . . . . . . . . . . 357

51.2 Similarity perception of melodies and the role of accent patterns . . . . . . . . . 358

51.3 Melodic identification strategy for automated pattern extraction . . . . . . . . . . 359

51.4 Melodic similarity as determinant of melody structure . . . . . . . . . . . . . . . 360

52 Musical Meaning III 361

52.1 Musical tension/release patterns and auditory roughness profiles in an improvisation on the Middle-Eastern mijwiz . . . . . . . . . . . . . . . . . . . . . . . . 361

52.2 Some factors affecting the successful use of digital sounds within a hybrid pipe-digital organ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362

52.3 Broadening musical perception by AKWEDs technique visualisation . . . . . . . 363

52.4 The effects of non-musical components on the ratings of performance quality . . 364

52.5 Music, movement and marimba: An investigation of the role of movement andgesture in communicating musical expression to an audience . . . . . . . . . . . 364

IV Friday, August 25th 2006 367

53 Symposium: Music and media 369

53.1 How music influences absorption in film and video: The Congruence-Associationist Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370

53.2 Perception of opening credit music in Hollywood feature films . . . . . . . . . . 370

53.3 Perception of iconicity in musical/visual patterns . . . . . . . . . . . . . . . . . 371

53.4 Facial expressions of pitch structure . . . . . . . . . . . . . . . . . . . . . . . . 372

54 Rhythm VI 375

54.1 Perceptual motion: Expectancies in movement perception and action . . . . . . . 375

54.2 Perception and production of short western musical rhythms . . . . . . . . . . . 376

54.3 Co-operative tapping and collective time-keeping - differences of timing accuracyin duet performance with human or computer partner . . . . . . . . . . . . . . . 377

54.4 Trends in/over time: Rhythm in speech and music in 19th-century art song . . . . 378

54.5 Polyrhythmic communicational devices appear as language in the brains of musicians . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

54.6 Measuring swing in Irish traditional fiddle performances . . . . . . . . . . . . . 379

55 Memory 381

55.1 Testing the influence of the group for the memorisation of repertoire in Trinidad and Tobago steelbands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381

55.2 Footprints of musical phrase structure in listeners’ responses to different performances of the same pieces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382

55.3 Working memory for music and language in normal and specific language impaired children . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383

55.4 Predicting memorization efficiency through compositional characteristics . . . . 384

55.5 Music and memory in advertising: Music as a device of implicit learning and recall 384

55.6 Language use in autobiographical memory for musical experiences . . . . . . . . 385

56 Symposium: Interactive reflexive musical systems for music education 387

56.1 Interactive reflexive musical systems . . . . . . . . . . . . . . . . . . . . . . . . 388

56.2 Pedagogical perspectives of the Interactive reflexive musical systems . . . . . . . 389

56.3 Interactive music technologies in early childhood music education . . . . . . . . 389

56.4 London Symphony Orchestra discovery music technology unit . . . . . . . . . . 390

56.5 Rhythm interaction modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

57 Perception and Cognition 393

57.1 Reawakening the study of musical syntactic processing in aphasia . . . . . . . . 393

57.2 Perception of music by patients with cochlear implants . . . . . . . . . . . . . . 394

57.3 Cognitive bases of musical and mathematical competence: Shared or independent representations? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395

57.4 Enhanced information processing speed in musicians compared to non-musicians 396

57.5 Individual differences in music perception . . . . . . . . . . . . . . . . . . . . . 396

57.6 Long-term effects of music instruction on cognitive abilities . . . . . . . . . . . 397

58 Pitch III 399

58.1 Tipping the (Fourier) balances: A geometric approach to representing pitch structure in non-tonal music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399

58.2 The perception of non-adjacent harmonic relations . . . . . . . . . . . . . . . . 400

58.3 Vertical and horizontal dimensions in the spatial representation of pitch height . . 401

58.4 Setting words to music: An empirical investigation concerning effects of phoneme on the experience of interval size . . . . . . . . . . . . . . . . . . . . . . . . . 402

59 Education VI 405

59.1 Aspects of musical movement representation in early childhood music education 405

59.2 Refining a model of creative thinking in music: A basis for encouraging students to make aesthetic decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406

59.3 Effects of different teaching styles on the development of musical creativity . . . 406

59.4 The effects of the “musicogram” on musical perception and learning . . . . . . . 407

60 Perception IV 409

60.1 The perception of accents in pop music melodies . . . . . . . . . . . . . . . . . 409

60.2 Popular music genre as cognitive schema . . . . . . . . . . . . . . . . . . . . . 410

60.3 Jazz, blues, and the language of harmony . . . . . . . . . . . . . . . . . . . . . 410

60.4 Does Musical Melodic Intelligence enhance the perception of Mandarin lexical tones? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411

61 Perception V 413

61.1 A large-scale survey regarding listeners’ tastes to sung performances . . . . . . . 413

61.2 German fricatives /x/ & /ç/ in singing: Testing a training model . . . . . . . . . 414

61.3 Muugle: A framework for the comparison of Music Information Retrieval methods 415

61.4 Does melodic accent shape the melody contour in Estonian folk songs? . . . . . 416

62 Ethnomusicology 417

62.1 A look at the adaptation and cognitive process in Ghantu performance as practiced by the Gurungs of Nepal . . . . . . . . . . . . . . . . . . . . . . . . . . . 417

62.2 A comparison of Western classical and vernacular musicians’ ear playing abilities 418

62.3 The Aksak rhythm: Structural aspects versus cultural dimensions . . . . . . . . . 419

62.4 Perception, effect and the power of words. An overview on song-induced healingprocesses in eastern Amazonia . . . . . . . . . . . . . . . . . . . . . . . . . . . 420

63 Poster session II 421

63.1 Rhythm sensibility: A dichotomy . . . . . . . . . . . . . . . . . . . . . . . . . 421

63.2 Music as communication: The listening pyramid . . . . . . . . . . . . . . . . . 421

63.3 The history of European music as a passage from classical physics to quantum mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422

63.4 Reconstructing “Incontri di fasce sonore” by Franco Evangelisti . . . . . . . . . 423

63.5 Dual-task and psychophysiological measurement of attention during music processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424

63.6 Integration of non-diatonic chords into diatonic sequences: Results from scrambling sequences with secondary dominant chords . . . . . . . . . . . . . . . . . 425

63.7 The singing difficulty in dotted rhythm: Towards an understanding of the influence of mother tongue on young children’s musical behaviour . . . . . . . . . . 426

63.8 Time series analysis as a method to characterize musical structures . . . . . . . . 426

63.9 The influence of musical expertise on music appreciation . . . . . . . . . . . . . 427

63.10 Changing the pacing stimulus intensity does not affect sensorimotor synchronization . . . 428

63.11 The blue chords in rock music: Some possible meanings . . . . . . . . . . . . 429

63.12 Facial expressions and piano performance . . . . . . . . . . . . . . . . . . . . 430

63.13 Musical chord categorization in the brains of theorists and instrumentalists . . . 431

63.14 A computational approach for measuring articulation . . . . . . . . . . . . . . 431

63.15 Where is the “one”? Tapping on the perceived beginning of repeating rhythmical sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432

63.16 The role of pitch interval magnitude in melodic contour tasks . . . . . . . . . . 433

63.17 Historical development of expertise in jazz double-bass players: Increased technical performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434

63.18 Singing performance as a motivation to involve pupils in singing activities . . . 435

63.19 Relative sensitivity to melodic and phonetic strings changes between 8 and 11 months . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435

63.20 Can rhythm help children with reading and writing difficulties? . . . . . . . . . 436

63.21 Exploring the relation between the singing activity, the personality of singers, and their state and trait anger levels . . . . . . . . . . . . . . . . . . . . . . . . 437

63.22 Investigating the association between personality, anger and the singing activity . 438

63.23 Does information or involvement increase reported enjoyment of classical music? 439

63.24 Musicality of mother-infant vocal interactions . . . . . . . . . . . . . . . . . . 440

63.25 Rhythm-melody interaction: Is rhythmic reproduction affected by melodic complexity? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440

63.26 New technologies for new music education: The Continuator in a classroom setting 441

63.27 Effects of articulation styles on perception of modulated tempos in violin excerpts 442

63.28 The music education of seven cantons of French-speaking Switzerland: A comparative study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

63.29 Semantics and rhetoric in the “mute” recitatives and lieder by P. Hindemith . . . 443

63.30 What kinds of lyrics are more communicative to patients with autism? . . . . . 444

63.31 Nonmusicians might not know what is in or out of tune . . . . . . . . . . . . . 445

63.32 Latent absolute pitch: An ordinary ability? . . . . . . . . . . . . . . . . . . . . 446

63.33 An analysis of “Successful Sight Singing, A Creative Step By Step Approach” for principles of comprehensive musicianship . . . . . . . . . . . . . . . . . . . 447

63.34 The tone-melody interface in popular songs written in tone languages . . . . . 448

63.35 Relationships of dynamics, rhythm performance, and other elements of music with overall ratings of wind band performances . . . . . . . . . . . . . . . . . 449

63.36 Musical impression and contribution of structural factors . . . . . . . . . . . . 450

63.37 ERD/ERS analysis of EEG reveals differences between musicians and non-musicians during discrimination of pitches . . . . . . . . . . . . . . . . . . . . . . . . . 450

63.38 Hearing colors: The role of visual cues in pitch recognition and encoding . . . 451

63.39 Temporal attention in short melodies . . . . . . . . . . . . . . . . . . . . . . . 452

63.40 Temporal stability in repeated music listening tasks . . . . . . . . . . . . . . . 453

63.41 Absolute Pitch in transposed-instrument performers: A case study . . . . . . . 453

63.42 Preverbal interaction skills and intonation: The role of musical elements . . . . 454

63.43 Interactions between phonemes and melody in the perception of natural and synthesized sung syllables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455

63.44 Image and imagination in music perception . . . . . . . . . . . . . . . . . . . 456

63.45 Metric structure, performer intention, and referent level . . . . . . . . . . . . . 457

63.46 Developmental changes in auditory tempo sensitivity: Support for an age-specific entrainment region hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . 458

63.47 Linguistic and musical stress in Russian folk songs . . . . . . . . . . . . . . . 459

63.48 A study of beliefs about teaching, learning and performing music improvisation . 459

63.49 How long does a piece of music last? A study of music tempo, duration and counting during listening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460

63.50 Music Performance Anxiety among professional musicians and music students: A self-report study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462

63.51 Study of timbre as music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463

63.52 “Perfect tempo” and the interpretation of metronome numbers . . . . . . . . . 463

63.53 Short- and long-term musical training influence pitch processing in music and language: Event-related brain potential studies of children . . . . . . . . . . . . 464

63.54 Different effects of auditory stimuli on human autonomic cardiovascular rhythms 465

63.55 A proposal on a learning system to realize a maestro’s favorite SPL balance in a chorus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466

63.56 How does a child acquire the expression of Auftakt? . . . . . . . . . . . . . . 467

63.57 The structure of fluctuated two tonics in Western-Japanese mixed music: Why has some mixed music been accepted, whereas other music was rejected? . . . . 468

63.58 Subjective evaluation of common singing skills using the rank ordering method . 469

63.59 Assessing the role of melodic and rhythmic factors in structuring musical salience 470

63.60 Muscular tensions in musical performance: A set of measurements . . . . . . . 471

63.61 Digital pulse forming in wind instrument synthesis . . . . . . . . . . . . . . . 472

63.62 Collateral and negative effects of sounds and music perception in normal subjects and in psychiatric patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473

63.63 Music as temporal art: Links between music and time . . . . . . . . . . . . . . 473

63.64 Changing the plot: Music can establish the narrative in film . . . . . . . . . . . 474

63.65 Adults hear the rhythm they feel through active and passive body movement . . 475

63.66 Tonal pitch memory in infants . . . . . . . . . . . . . . . . . . . . . . . . . . 475

63.67 Playing with sounds: Intervention on bullying at school . . . . . . . . . . . . . 476

63.68 Automatic characterization and generation of expressive ornaments from bassoon audio recordings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477

63.69 The perception of local and global timing in simple melodies . . . . . . . . . . 478

63.70 Investigating performance practice by piano students in Bahia (Brazil) . . . . . 479

63.71 A system yielding the optimum chord-form sequence on the guitar . . . . . . . 480

63.72 Visual gestures: Perceptual costs and benefits in the performance of live music . 481

63.73 Hearing reductions: A perceptual examination of a Schenkerian analysis of the Aria from Brahms’ Variations and Fugue on a Theme by Handel (Op. 24) . . . . 481

63.74 Pitch and tempo precision in the reproduction of familiar songs . . . . . . . . . 482

63.75 Hand-clapping songs: A natural ecological medium for child development . . . 483

63.76 Musicality, musical expertise, and pitch encoding: An MEG study . . . . . . . 484

63.77 Integrated supervision in music therapy: Reading, analysis and testing of the cognitive, emotional and affective-relational aspects of the music-therapeutic process . 485

63.78 Perception of temporal structures in melodies with glides: A basic discrimination of isochrony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486

63.79 Investigating psychomotor learning of adult students using piano trill performance as an outcome measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486

63.80 Natural sound as an object of perception . . . . . . . . . . . . . . . . . . . . . 487

63.81 SRA: An online tool for spectral and roughness analysis of sound signals . . . 488

63.82 Types of metrical patterns in Serbian folk music . . . . . . . . . . . . . . . . . 489

63.83 The cultural significance of Absolute Pitch . . . . . . . . . . . . . . . . . . . . 490

63.84 Social implications of iPod use on a large university campus . . . . . . . . . . 491

63.85 Musical and general absolute pitch in humans . . . . . . . . . . . . . . . . . . 491

63.86 You are the music while the music lasts*: An investigation into the perceptual segregation of dichotically-embedded pitch (*T. S. Eliot) . . . . . . . . . . . . 492

63.87 Music in working memory? Examining the effect of pitch proximity on the recall performance of non-musicians . . . . . . . . . . . . . . . . . . . . . . . . . . 493

64 Symposium: Music psychology pedagogy 495

64.1 Suitability and usefulness of available books on music psychology for teaching in different institutional contexts . . . . . . . . . . . . . . . . . . . . . . . . . 495

64.2 Integrating educational technology tools and online learning environments into acourse on Psychoacoustics and Music Cognition . . . . . . . . . . . . . . . . . . 496

64.3 Structuring the argument of a theoretical paper: A guideline and its reception by advanced undergraduate musicologists . . . . . . . . . . . . . . . . . . . . . . 497

64.4 Teaching music psychology to psychologists and to musicians: Differences in content and method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498

65 Education VII 501

65.1 Development of musical preferences in adulthood . . . . . . . . . . . . . . . . . 501

65.2 Effects of auditory feedback in the practice phase of imitating a piano performance 502

65.3 Quality piano instruction affects at-risk elementary school children’s cognitive abilities and self-esteem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502

65.4 Teacher - student relationship in Western classical singing lessons . . . . . . . . 503

65.5 BoomTown Music - a co-creating way to learn music within formal music education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504

65.6 Environment, motivation and practice as factors of instrumental performance success in elementary music education . . . . . . . . . . . . . . . . . . . . . . . . 505

66 Rhythm VII 507

66.1 A hierarchy of rhythm performance patterns for first-grade children (ages six and seven) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507

66.2 The effect of tempo on the perception of anacruses . . . . . . . . . . . . . . . . 508

66.3 Detecting subjective rhythmic attending in ERP . . . . . . . . . . . . . . . . . . 508

66.4 Context effects on the experience of musical silence . . . . . . . . . . . . . . . . 509

67 Symposium: The ecology of flow experience: Cognitive, social, and pedagogical observations of children’s music making 511

67.1 A community of learners: Young music-makers scaffolding flow experience . . . 512

67.2 Young children’s musical experiences with a flow machine . . . . . . . . . . . . 513

67.3 Engaging classrooms: Flow indicators as tools for pedagogical transformation . . 514

67.4 In the mood: Exploring flow states in musicians . . . . . . . . . . . . . . . . . . 515

68 Pitch IV 517

68.1 Are scale degree “qualia” a consequence of statistical learning? . . . . . . . . . . 517

68.2 Functional differences between the tonotopic and periodic cues in discrimination of melodic pitch intervals under a diatonic context . . . . . . . . . . . . . . . . 518

68.3 Judgments of distance between trichords . . . . . . . . . . . . . . . . . . . . . . 519

68.4 The effect of categorization training on pitch contour judgments in Shepard tones 520

68.5 Musically Puzzling: Sensitivity to global harmonic and thematic relationships . . 520

68.6 Context affects chord discrimination . . . . . . . . . . . . . . . . . . . . . . . . 521

69 Performance II 523

69.1 The advantage of being non-right-handed in a piano performance task (sight reading) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523

69.2 Emphasizing voices in polyphonic organ music: Issues of expressive performance on an instrument with fixed tone intensity . . . . . . . . . . . . . . . . . . . . . 524

69.3 Variation in expressive physical gestures of clarinetists . . . . . . . . . . . . . . 524

69.4 The effects of concert dress and physical appearance on perceptions of female solo performers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525

70 Education VIII 527

70.1 Notational strategies as representational tools for sense-making in music listening tasks: Limitations and possibilities . . . . . . . . . . . . . . . . . . . . . . . . 527

70.2 Vocal creativity: How to analyse children’s song making processes and their developmental qualities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528

70.3 Organists’ strategies for musical learning and performance in improvisation . . . 529

70.4 Preschool children’s peer teaching: A case study of interactive operation . . . . 530

71 Emotion IV 533

71.1 Implicit measures of musical emotion . . . . . . . . . . . . . . . . . . . . . . . 533

71.2 Explaining the total sonority and affective valence of chords . . . . . . . . . . . 533

71.3 Principles for expressing emotional content in turntable scratching . . . . . . . . 534

71.4 How do musicians deal with their medical problems? . . . . . . . . . . . . . . . 535

72 Perception VI 537

72.1 Memory improvement while hearing music: Effects of musical structure . . . . . 537

72.2 Mapping temporal expectancies for different rhythmical surfaces: The role of metrical structure and phenomenal accents . . . . . . . . . . . . . . . . . . . . . . 538

72.3 The effect of pitch, tempo and proportional pitch and tempo manipulation on identification of iconic television themes . . . . . . . . . . . . . . . . . . . . . . . 538

72.4 The automaticity of music reading . . . . . . . . . . . . . . . . . . . . . . . . . 539

V Saturday, August 26th 2006 541

73 Symposium: For an anthropology of musical language as a form of human communication 543

73.1 Music is a celebration of mimesis: Demonstrating our capacity to create, share and communicate gestures of emotive/expressive value . . . . . . . . . . . . . . 544

73.2 How sound can make sense by being organized in time . . . . . . . . . . . . . . 544

73.3 A perceptual approach to improvised modes: Cognitive ethnomusicology and an intercultural comparison of musical listening . . . . . . . . . . . . . . . . . . . 545

74 Education IX 547

74.1 Interest in music and shift toward other fields in children aged 1.6-4 . . . . . . . 547

74.2 Cyclic-harmonic cognitive representations of rhythm: Implications for music education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548

74.3 Music education and critical thinking in early adolescence. A Synectic Literacyintervention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549

74.4 Mapping musical development in Brazil: Children’s musical practices in Maranhão and Pará . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550

74.5 Everyday music among under two-year-olds . . . . . . . . . . . . . . . . . . . . 550

74.6 Preschool children’s self-initiated movement responses to music in naturalistic settings: A case study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551

75 Pitch V 553

75.1 Music complexity measures predicting the listening experience . . . . . . . . . . 553

75.2 Pseudo-Greek modes in traditional music as a result of misperception . . . . . 554

75.3 Pitch spelling using compactness . . . . . . . . . . . . . . . . . . . . . . . . . . 555

75.4 On the human capability and acoustic cues for discriminating the singing and the speaking voices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556

75.5 Pitch perception of sounds with different timbre . . . . . . . . . . . . . . . . . . 557

75.6 Toward realizing automatic evaluation of playing scales on the piano . . . . . . . 557

76 Cognition II 559

76.1 Improving algorithmic music composition with machine learning . . . . . . . . . 559

76.2 More about music, language and meaning: The follow-up of Koelsch et al. (2004) 560

76.3 Emergence of harmonic progression using multi-agent composition model . . . . 560

76.4 Measuring melodic redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . 561

76.5 Testing Lerdahl’s Tonal Space Theory - listener’s preferences of performed tonal music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562

76.6 An acoustical and cognitive approach to the semiotics of sound objects . . . . . . 563

77 Timbre and Perception 565

77.1 Violin portamento: An analysis of its use by master violinists in selected nineteenth-century concerti . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565

77.2 Perceptual correlates of violin acoustics . . . . . . . . . . . . . . . . . . . . . . 566

77.3 Construction of a harmonic phrase . . . . . . . . . . . . . . . . . . . . . . . . . 567

77.4 Establishing an empirical profile of self-defined “tone deafness” . . . . . . . . . 568

77.5 Tonal function modulates speed of visual processing . . . . . . . . . . . . . . . 569

77.6 Music and speech as languages of sound . . . . . . . . . . . . . . . . . . . . . . 569

78 Keynote Lecture II 571

78.1 Ethnomusicological research on the organization of musical time . . . . . . . . . 571

Thematic Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585


Conference Committees and Sponsors

Organizers
• Mario Baroni, Department of Music and Performing Arts, University of Bologna, Italy
• Anna Rita Addessi, Department of Music and Performing Arts, University of Bologna, Italy
• Roberto Caterina, Department of Psychology, University of Bologna, Italy
• Marco Costa, Department of Psychology, University of Bologna, Italy

Advisory Committee
• Mari Riess Jones, President of SMPC
• Sun-hee Chang, Chair of KSMPC
• Eugenia Costa-Giomi, co-organizer for ICMPC4
• Irène Deliège, organizer for ICMPC3
• Roger Kendall, organizer for ICMPC2
• Shin-ichiro Iwamiya, President of JSMPC
• Scott Lipscomb, organizer for ICMPC8
• Orlando Musumeci, President of SACCOM
• Susan O’Neill, co-organizer for ICMPC6
• Kengo Ohgushi, President of APSCOM and organizer for ICMPC1
• Bruce Pennycook, co-organizer for ICMPC4
• John Sloboda, co-organizer for ICMPC6
• Kate Stevens, President of AMPS and organizer for ICMPC7
• Suk Won Yi, organizer for ICMPC5

Scientific Committee
• Lola Cuddy, Queen’s University, Canada
• Irène Deliège, University of Liège, Belgium
• Alf Gabrielsson, Uppsala University, Sweden
• Michel Imberty, University of Nanterre, France


• Carol Krumhansl, Cornell University, USA
• Gary McPherson, University of Illinois at Urbana-Champaign, USA
• Kengo Ohgushi, Kyoto City University of Arts, Japan
• Richard Parncutt, University of Graz, Austria
• John Sloboda, University of Keele, UK

Scientific Advisory Board
• Mayumi Adachi, Hokkaido University, Japan
• Rita Aiello, New York University, USA
• Christina Anagnostopoulou, Queen’s University, UK
• Margaret Barrett, University of Tasmania, Australia
• Mireille Besson, INCM-CNRS Marseille, France
• Emmanuel Bigand, University of Bourgogne, France
• Leslie Bunt, University of the West of England, UK
• Antonio Camurri, University of Genova, Italy
• Hyun Ju Chong, Ewha Womans University, South Korea
• Eric Clarke, University of Sheffield, UK
• Ian Cross, University of Cambridge, UK
• Lori Custodero, Columbia University, USA
• Jane Davidson, University of Sheffield, UK, and University of Western Australia, Australia
• Peter Desain, Nijmegen Institute for Cognition and Information, The Netherlands
• Göran Folkestad, Lund University, Sweden
• Bob Gjerdingen, Northwestern University, USA
• Wilfried Gruhn, University of Freiburg, Germany
• Susan Hallam, University of London, UK
• David J. Hargreaves, Roehampton University, UK
• Yuzuru Hiraga, University of Tsukuba, Japan
• Henkjan Honing, University of Amsterdam, The Netherlands
• David Huron, Ohio State University, USA
• Patrik Juslin, Uppsala University, Sweden
• Youn Kim, Seoul National University, South Korea
• Reinhard Kopiez, University of Hannover, Germany
• Alexandra Lamont, Keele University, UK
• Marc Leman, Ghent University, Belgium
• Luisa Lopez, University of Rome “Tor Vergata”, Italy
• Jukka Louhivuori, University of Jyväskylä, Finland
• Yoshitaka Nakajima, Kyushu University, Japan
• Marta Olivetti Belardinelli, University of Rome “La Sapienza”, Italy
• Bengt Olsson, Göteborg University, Sweden
• Francois Pachet, CSL-SONY, Paris, France
• Caroline Palmer, Ohio State University, USA
• Isabelle Peretz, University of Montréal, Canada
• Dirk-Jan Povel, University of Nijmegen, The Netherlands
• Andrzej Rakowski, Academy of Music “F. Chopin”, Poland
• Bruno Repp, Haskins Laboratories, USA
• Petri Toiviainen, University of Jyväskylä, Finland


• Sandra Trehub, University of Toronto at Mississauga, Canada
• Colwyn Trevarthen, University of Edinburgh, UK
• Graham Welch, University of London, UK
• Susan Young, University of Exeter, UK
• Gianni Zanarini, University of Bologna, Italy
• Marcel Zentner, University of Geneva, Switzerland

Sponsors
• Society for Music Perception and Cognition (SMPC)
• European Society for the Cognitive Sciences of Music (ESCOM)
• Asia-Pacific Society for the Cognitive Sciences of Music (APSCOM)
• Australian Music & Psychology Society (AMPS)
• Japanese Society for Music Perception and Cognition (JSMPC)
• Korean Society for Music Perception and Cognition (KSMPC)
• Argentine Society for the Cognitive Sciences of Music (SACCOM)
• International Society for Music Education (ISME)
• Society for Education, Music and Psychology Research (SEMPRE)
• Department of Music and Performing Arts, University of Bologna
• Department of Psychology, University of Bologna
• Faculty of Education, University of Bologna
• Faculty of Psychology, University of Bologna


Part I

Tuesday, August 22nd 2006


1 Keynote Lecture I

1.1 The nature of music from a neuropsychologist’s perspective

Isabelle Peretz

International Laboratory for Brain, Music and Sound Research (BRAMS), University of Montreal, Montreal, Québec, Canada

Music is generally regarded as a refined product of human culture. Such a perspective has led most cognitive scientists to characterize music as the product of a general-purpose learning system. In a sense, contemporary composers and ethnomusicologists reinforce this cultural perspective on music. Modern composers argue that musical preferences are culture-specific and can be modified by exposure alone. Musicologists typically study music as a social construct that varies from culture to culture, rejecting cross-cultural quests for universals underlying the diversity. Yet, common principles may underlie the world’s diverse musical cultures. These principles may also be guided by innate mechanisms and be associated with dedicated neural networks. In other words, music might be in our nature. The consideration of music as a biological function rather than as a cultural invention is relatively recent and hence is far from established. The objective of my talk is to present the latest findings in the field of neuropsychology that are directed against an exclusively cultural perspective on music. I will mainly focus my presentation on the condition of tone-deafness and review all the available evidence regarding the domain-specificity of the disorder, its heredity, and its neural basis.

[email protected]


2 Symposium: Musical time, narrative and voice: Exploring the origins of human musicality I

Convenor: Michel Imberty

Narrative time is undoubtedly an essential aspect of human experience. Narrative expresses the temporal flow of human existence. Time in a sense becomes human through narration, whether it be verbal or nonverbal. Does narrative play a central role in organizing musical experience? More generally, is the structure of experience proto-narrative by nature?

A linguistic narrative can be defined structurally with regard to its paradigmatic features and with regard to a definite syntactic order which generates an irreversible fictional time. What matters is the simple continuous line of successive episodes progressing from a past to an as yet undetermined future, buffered by an arch-like structure grounding in its conclusion, or end-point, the unpredictable meanders of the intrigue. Without this link which transcends the episodes and events of the story, there can be no real narrative. Its moment-to-moment directionality, which is experienced as an undetermined expectation of what will happen, is essential for grasping the story. Narrative coherence resides in the dynamic entrainment and the persuasive hold on the imagination generated by the rhythms of linked episodes and narrative time.

The voice and vocal exchanges between mothers and babies build proto-narrative structures. Voice itself, and its intonation contours which are not yet language, sets in motion the lines of our personal and intimate histories that we later learn to tell with words. This musical vocality is the foundation of narrative recounting based on language.

It is then no great surprise that we find proto-narrative structures everywhere, in all fields of human activity and particularly in musical activity: proto-narrative can be defined as an oriented temporality which organises emotional and cognitive states, and music may then appear as a privileged form of this temporalising of feelings made self-conscious. The extraordinary expressive powers of music are connected with these universal pre-linguistic roots of the experience of art.


2.1 Narrative, splintered temporalities and the unconscious in 20th century music

Michel Imberty

University of Paris X Nanterre, France

Narrative structures the human experience of time. But does it also organise musical experience? In literature, we can identify the ways in which the experience of time is structured, a structuring which is anterior to the story itself and which can be called proto-narrative. This structure organises the coherence and unfolding of the narrative through alternating successions of tensions and releases, repetitions and variations, full and dense moments and empty ones, expectations, surprises and satisfactions. According to J. Kramer, if music is experienced fundamentally as a succession of “moments-after-moments”, this succession also creates the various kinds of continuities and discontinuities of musical time. The most common one, the one which corresponds to the immediate feelings of any listener but also to the most general description of the music, is that of linear continuity. In other words, it is the experience of something unfolding. Audible events occur in time, within a flux which makes them follow each other in a more or less homogeneous fashion. However, during the 20th century this directional linearity has gradually lost importance, and it is clear that Schoenberg introduced a discontinuity which struck this linearity. But other composers also did away with this proto-narrative linearity: for example, “Jeux” by Debussy contains many passages in which certain events emerge whose origin and direction appear to belong not so much to the score as to a virtual musical time. We must then try to understand how music can at times convey a feeling of strong directionality implying a proto-narrative structure with an oriented line of dramatic tension, and at other times can on the contrary foster feelings of a (dis-oriented) non-directional or poly-directional fragmentation implying a discontinuity of the temporal flux and the superimposition of numerous lines of dramatic tension which do not have the same evolution or ending. We will try to make sense of these splintered forms of time through psychoanalytic concepts.

Key words: Narrative structure, Continuity / discontinuity, Experience of time

[email protected]

2.2 Musical motives of infant communication: Expressions of human rhythm and sympathy make dialogues of purpose in experience

Colwyn Trevarthen

University of Edinburgh, UK

Infants, like adults, move with rhythmic gestures that express innate emotion. They are ready at birth to take turns in musical “dialogue”, attracted to engagement with human gestures. They are sympathetic to emotion – resonating to the impulses in movement, imitating, seeking engagement in protoconversations or playful duets of agency. Their feelings about contacts and relationships are expressed in the tensions and contours of muscular energy in their vocalizations and gestures, and how these are regulated in the rhythms, phrases and narratives of performance. These changes of expressions can be measured precisely, and the laws of their production determined. Micro-analysis of vocal exchanges between adult caregivers and infants, even those prematurely born and still before term, proves the infant is seeking interaction with the contingent expressive gestures of a partner to regulate mutual experience. The behaviour in a protoconversation with the mother or with her performance of a baby song has the pulse and dynamic characteristics of an improvised musical duet that tells a story. Adult and infant sense one another through all modalities, harmonizing together the rise and fall of effort, anticipating and receiving the climax and resolution.

Malloch’s theory of the Communicative Musicality of vocal interplay between infants and adults defines the measures of “pulse”, “quality” and “narrative” revealed by meticulous acoustic analysis. We have used this model to chart the development of varieties and conventions of expression in infant games over the first year, and to investigate distortions when normal sympathy of companionship breaks down. This work helps explain how the expressive sequences of proto-musical narratives carry meaning, and how, through improvisation, they lead to mastery of many kinds of language, many arts and many technical skills.

All musical narratives, even the most arcane, appeal somehow to the polyrhythmic human sense of agency with a versatile body, and to the playful engagement of sympathy between different wills. This sympathy can evoke any and all of the emotions, every feeling for the experience of life shared through time.

Key words: Expressive gesture, Communicative musicality, Narrativity

[email protected]

2.3 Proto-music is the food of love: An ethological view of sources of emotion in mother-infant interaction

Ellen Dissanayake

University of Washington, USA

Psychologists of music are increasingly aware of the importance of mother-infant interaction to the development of music as expression and experience. An ethological view of mother-infant interaction contributes additional insight into the essential importance of music to human nature. The musical (or proto-musical) elements of mother-infant interaction and their psychobiological effects on both partners are evolutionary solutions to a very real anatomical problem that affected the bipedal genus Homo from at least two million years ago. As the result of increasing brain size along with a reshaped birth canal, infants had to be born increasingly prematurely and remained helpless for much longer than other primate young. For infants’ very survival, it was necessary that mothers care willingly for their demanding offspring. Mother-infant interaction, building upon primate precursor affinitive behaviors, was the “solution.” It makes use of features that characterize ritualized behaviors in other animals: affiliative vocal, visual, and gestural signals are formalized, exaggerated, repeated, and elaborated, as well as mutually influenced within a shared temporal framework. Sensitivity to these “proto-musical” features facilitates emotional responses to succession or unfolding, anticipation, tension and relaxation, repetition and variation, turn-taking, synchronization, and shared unity or conjoinment, which generate love (mutuality and bonding) in both partners as well as eventually making possible more complex musical emotion.

Key words: Mother-infant interaction, Music emotion, Human development

[email protected]

2.4 Music, meaning, and narratives of human evolution

Elisabeth Tolbert

Johns Hopkins University, USA

In this paper, I will investigate how implicit assumptions about music are woven into contemporary narratives of human cognitive evolution. Although music is often tangential to the larger evolutionary debates, it is nevertheless frequently offered as “proof” of the evolutionary continuity between humans and other animals, sometimes in the guise of a song-like “protolanguage” posited as the “missing link” between animal communication and human language proper. Not surprisingly, the implicit musicality of “protolanguage” and other hypothetical intermediate forms is conceptualized in terms of contemporary Western understandings of music, in particular, as an expression of unmediated, non-symbolic meaning.

These implicit ideas about music are problematic for developing an evolutionarily plausible theory of musical meaning, yet paradoxically they also reveal clues about music’s evolutionary origins. The apparent immediacy of musical meaning is merely an artifact of music’s immediate sonic presence; musical sounds seem to imply unmediated bodily gestures indicative of social motivations and intentions, although they are not literally so. In other words, the immediacy of musical sound is implicitly conflated with the idea that music has unmediated, “natural,” meaning. Although this conflation of sonic presence and meaning has been thoroughly debunked by scholars such as musicologists and ethnomusicologists, who generally posit a non-natural, culturally determined relationship between musical sounds and meanings, an examination of the idea of “unmediated” musical meaning as used in evolutionary narratives provokes fundamental questions about how humans create narrative from supposedly “unmediated” bodily experience. As narrative itself depends upon the social ability to interpret events from multiple viewpoints, i.e., to mediate social meanings, it is hypothesized that these same social abilities are used to mediate the so-called “unmediated” sounds of music, pointing to the origins of musicality in the broader evolutionary context of the social bases of human symbolic thought.

Key words: Cognitive evolution, Narrative, Human symbolic thought

[email protected]


3 Rhythm I

3.1 Synchronization of perceptual onsets of performed bass notes

Ives Chor, Richard Ashley

Northwestern University, USA

The physical onset and the perceptual onset of a tone are not identical. Musicians intending to play “in time” (temporally synchronized with a reference, such as a metronome) must initiate a note early, preceding the reference by a time interval equal to the instrument’s rise time; such issues are common in jazz, where rhythm instruments like bass and drums should be in close alignment.

This study focuses on the temporal alignment of single-instrument (acoustic or electric bass) musical excerpts with an isochronous reference. The goal is to determine empirically how an instrument’s waveform aligns with the reference, and to see whether this validates theoretical models of perceptual onset as a function of waveform. By extension, the study aims to find how musicians control their motor behavior to produce a desired perceptual onset.

The materials studied are recordings of jazz bassists performing walking bass lines. The experimental portion of the study consists of an analysis of existing recordings and a listening experiment. In the analysis of recordings, bass notes are examined for their temporal alignment with a reference point in the music, typically a hi-hat or ride cymbal. In the listening experiment, participants are presented with recorded excerpts of a walking bass line overlaid with a click track temporally offset by varying amounts, and asked to judge whether the bass notes are early, synchronized, or late with respect to the click track.

To account for differences in “feel” - individual tendencies to precede or follow the beat by small but relatively constant time intervals - we use recordings made by musicians who perform professionally on both acoustic double-bass and electric bass guitar. By comparing, across the two instruments, a single musician’s temporal placement of notes with respect to a reference, the study aims to isolate the perceptual onset as a function of instrument waveform while controlling for individual differences in feel.

Data collection is currently underway. Results of the study will be complete by mid-March, and presented at the conference.

Key words: Timing, Synchrony, Perceptual onset

[email protected]


3.2 The correlation between a singing style where singing starts slightly after the accompaniment and a groovy, natural impression: The case of Japanese pop diva Namie Amuro

Masashi Yamada

College of Informatics and Human Communication, Kanazawa Institute of Technology, Japan

Background
Namie Amuro is the most widely recognized diva of 90’s Japanese popular music. Her singing style is recognized as “groovy”, and critics say that her style is deeply influenced by the after-beat singing found in Hip-Hop or Rap music. In a previous study, asynchrony between the onsets of the accompaniment and singing was determined in “Never End”, one of Amuro’s best-known works. The results showed that the asynchrony between the accompaniment and the singing was within 50 ms. However, for the initial notes in several phrases the onset of singing was delayed 70-90 ms from that of the accompaniment.

Aims
The aim of the present study is to clarify the effect of Amuro’s singing style on the perceptual impression in cases where she starts singing 70-90 ms after the accompaniment for the initial notes of phrases.

Method
Performances of an excerpt from Never End were synthesized with various timings, in which brass sounds replaced the singing (melody), and perceptual experiments were conducted using Scheffé’s paired comparison method. In Experiment 1, for the initial notes of the phrases, the onsets of the melody were set at various timings relative to the accompaniment. In Experiment 2, a 70-ms delay of the melody was applied to various notes in the phrases. The perceptual impressions of these performances were estimated on three scales: “naturalness”, “degree of groove”, and “synchrony between the melody and the accompaniment”.

Results
The results showed that for the initial notes of the phrases, a performance in which a 70-ms delay was applied to the melody was perceived as “not just synchronous”, but “groovy” and “natural”. A performance with a delay larger than 90 ms, as well as a performance in which the melody began prior to the accompaniment, was perceived as “asynchronous”, “unnatural” and “not groovy”. A performance in which a 70-ms delay was applied to notes other than the initial notes also sounded “asynchronous”, “unnatural”, and “not groovy”.

Conclusions
Namie Amuro is empirically aware of such effects and applies a delayed singing technique to the initial note of phrases.

Key words: Timing, Singing, Groove

[email protected]


3.3 Groove within a jazz rhythm section: A study of its production and control

Mark Doffman

The Open University, UK

Jazz musicians cite “groove” as one of the key components of successful performance. While musicians appear to play in synchrony when grooving, at micro-timing levels groove can be shown to be a dynamic state; rhythm sections play “with” the time as well as play “in” time by pushing ahead of and pulling back from one another. Beyond tapping studies in the psychology literature, there has been relatively little “real world” examination of musicians’ quasi-synchronous playing.

This case study, conducted at the University of Sheffield (2005), isolates some particular elements of groove for further exploration. Its starting point is the manipulation of micro-timing by musicians; what has been termed “participatory discrepancies”. Given that these expressive deviations usually occur around or below recognised thresholds for event detection, how well do musicians translate their intentions to manipulate time into their performance, and do any consistent patterns emerge from such “playing with time”?

To answer these questions, quantitative data were collected from a series of semi-controlled performances by jazz musicians. Performance data came from audio recordings of a bassist and drummer playing two choruses of a twelve-bar blues. Performances were played in three intentional conditions - normal playing, bass pushing ahead and drums pushing ahead; 15 performances in total. Sound wave analysis was conducted with a commercially available sound editing package, and the timing data were then put through statistical software.

Results of timing data from the performances showed 1] that musicians were able to manipulate their playing according to intention to a level of significance, 2] that there were consistent and significant effects on synchrony between players through playing intention, 3] that metrical structure also revealed an impact on synchrony between players, and 4] a consistent pattern of voice leading between ride cymbal and bass.

The results of this study suggest some correspondence with the descriptions of groove by musicians; the results also seem to support more recent embodied explanations for the fine levels of sensori-motor control evident in this study. It is recognised that more research is needed to understand the control of dynamic patterns in groove, but this study may contribute towards that end.

Key words: Jazz, Micro-timing, Performance

[email protected]

3.4 The influence of tempo on movement and timing in rhythm performance

Carl Haakon Waadeland

Department of Music, Norway


Background
An often used strategy in musical training is to rehearse a musical phrase at a slow tempo and gradually increase the tempo until the musician is able to play the phrase at the intended tempo in a comfortable way. If the difference between the initial slow tempo and the intended tempo for performance is large, this transition in tempo might require quite different movement strategies that are likely to have implications for the rhythmic timing of the performance.

Aims
A focus in this study is to investigate to what extent different tempi impose playing conditions that have noticeable effects on movement and timing in rhythm performance. To exemplify this we present an experiment where we study jazz drummers’ performances of swing grooves in tempi ranging from 60 to 300 beats per minute, and ask how these various tempo conditions are reflected in: a) movement patterns, b) swing ratios (rhythmic subdivision), and c) dynamics.

Method
The subjects participating in this experiment are drummers acquainted with jazz drumming. They were asked to use one drumstick to play swing grooves on a force plate (Kistler). For comparison they also performed the same grooves on a cymbal. The equipment used for measurement of movements is well known within empirical movement science (ProReflex camera system).

Results
This experiment demonstrates that the movement patterns of swing performances in slow, medium and fast tempo are fundamentally different. The differences are displayed both in the interrelations between movements of drumstick, hand and wrist, and also in global movement characteristics shown in spectral analysis of the movement trajectories. The influence of tempo on timing is most noticeable in relation to the swing ratio, but tempo also has an effect on the distribution of accents in the performance.

Conclusions
A performance of a swing groove in fast tempo is quite different from a speeded-up performance in slow tempo. This observation is likely to be valid for rhythm performances extending beyond swing grooves, and should be taken into account in musical training. Moreover, this result is of interest for the construction of models of rhythm performance that claim validity across tempo transitions.

Key words: Swing, Tempo, Movement

[email protected]

3.5 Analysis of local and global timing and pitch change in ordinary melodies

Roger Watt, Sandra Quinn

Department of Psychology, University of Stirling, UK

It is well known that large melodic pitch changes occur less frequently than small ones. We report further statistical effects in the temporal distribution of different sizes of pitch change. We use 3,000 ordinary western melodies, each with a prescribed tempo, so that note timings could be given in seconds. Our analysis concerns the temporal distribution of the various sizes of successive pitch changes from -12 to +12 semitones.


1. The distribution of occurrences of each size of pitch change is uniform across time in the melodies, after the first second.

2. There is an inverse relationship between the mean (absolute) pitch change size and the mean time interval between successive notes: melodies with larger pitch changes tend to be faster.

3. The distributions of time intervals between successive occurrences of each pitch change size were constructed. These were modelled by the gamma distribution with 2 independent parameters, M (mean) and G (shape). a) As expected, M varies inversely with the frequency of that size of pitch change. b) However, the shape of the distribution, G, also depends on pitch change size. The value of (G - 1) can be interpreted as the number of other events that must occur before a repeat of the same type might occur. For pitch changes of 0 to 5 and 7 semitones, G is close to 1.0; for 8, 9 and 12 semitones, G is close to 2.4; for 6, 10 and 11 semitones, G is close to 4.

4. For each melody, we construct a function showing the log of the reciprocal of the frequency of the most recent pitch change varying over time. This function shows the temporal rise and fall in the likelihood of the melody. Fourier analysis of these functions shows a regular pattern of coherent variability with a period of between 2 and 10 seconds. Low-likelihood portions of a melody are balanced by higher-likelihood ones over a time scale of a few seconds.

We discuss these findings in terms of multi-scale approaches to the perception of timing relations in music.

Key words: Timing, Melody, Pitch change

[email protected]

3.6 Selecting among computational models of music cognition: Towards a measure of surprise

Henkjan Honing

University of Amsterdam, The Netherlands

BackgroundHow should we select among computational models of cognition? This question has recentlyattracted much discussion (e.g., Roberts & Pashler, 2000; Pitt & Myung, 2002). While the mostcommon way of evaluating a computational model is to see whether it shows a good fit with theempirical data, the discussion addresses problems that might arise with the assumption that this isactually strong evidence for the validity of a model.AimsHowever, the aim of this paper is not to add to this lively debate in a philosophical or methodolog-ical sense. Instead, it will focus on a specific problem from music cognition, i.e., modeling thetemporal aspects of music. It presents a case study on how one can select between one and anothercomputational model, informed by the methodological discussion mentioned above.Main ContributionTwo families of computational models will be compared (cf. Honing, 2005). The first takes akinematic approach (K-model) to the modeling of expressive timing in music performance: whattiming patterns are commonly found in music performance and how do they conform to the lawsof physical motion. This approach will be contrasted with a perceptual approach (P-model) thatpredicts the amount of expressive freedom a performer has in the interpretation of a rhythmic


fragment. The two approaches will be compared using three different model selection criteria: 1) goodness-of-fit (GOF), 2) model simplicity, and 3) the degree of surprise in the predictions.
Implications
While both models fit the empirical data (cf. Friberg & Sundberg, 1999) equally well, in the light of what counts as strong evidence for a model's validity (namely, that it makes limited-range, non-smooth, and relatively surprising predictions), the perception-based model is preferred over the kinematic model.
References
Friberg, A., & Sundberg, J. (1999). Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners. Journal of the Acoustical Society of America, 105, 1469-1484.
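The goodness-of-fit criterion can be made concrete with a generic sketch (this is illustration only, not the K- or P-model themselves; the model names and data below are hypothetical):

```python
def mean_squared_error(predicted, observed):
    """Goodness-of-fit (GOF) as mean squared error between a model's
    predicted timing values and the empirical data."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

def select_by_gof(models, observed):
    """Return the name of the best-fitting model. As the abstract argues,
    GOF alone is weak evidence: a flexible model can fit well without
    making limited-range, surprising predictions."""
    return min(models, key=lambda name: mean_squared_error(models[name], observed))
```

Model simplicity and surprise are harder to operationalize, which is exactly why selection by fit alone is questioned in the text.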

Honing, H. (2005). Is there a perception-based alternative to kinematic models of tempo rubato? Music Perception, 23(1), 79-85.

Pitt, M. A., & Myung, I. J. (2002). When a good fit can be bad. Trends in Cognitive Sciences, 6(10), 421-425.

Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107(2), 358-367.

Key words: Model selection, Theory testing, Expressive timing

[email protected]


Timbre I 4

4.1 Determining the Euclidean distance between two steady-state sounds

Hiroko Terasawa1, Malcolm Slaney2, Jonathan Berger1

1CCRMA, Stanford University, California, USA
2CCRMA, Stanford University and Yahoo Research, Sunnyvale, California, USA

Background
Timbre is a key distinguishing feature in the characterization and classification of musical, speech and environmental sound. However, descriptions of timbre are often self-referential (for example, describing a bowed sul tasto violin sound as “flautando”, an oboe sound as “nasal”, etc.), and existing perceptual models of timbre are not quantifiable.
Aims
Our goal is to find a computationally viable model or representation of timbre that is isomorphic with human perception. We describe this model as a timbre space that is consistent with perception (that is, able to accurately predict the perceptual distance between two sounds) and parsimonious (that is, a model with orthogonal axes, linear in the sense that any sound may be described as lying along a straight line between two other sounds).
Method
To find an appropriate timbre model we explore three representations of timbre: Mel-frequency cepstral coefficients (MFCC), Pollard’s tristimulus model of timbre, and the stabilized wavelet-Mellin transform, as well as a strawman which we call linear frequency coefficients (LFC). We synthesize diverse timbres, and measure the match between each of these signal representations’ coefficients and the perceptual relational judgements of human subjects. We measure the parsimony of each representation by assuming a linear model and fitting the data to a Euclidean model.
Results
We found that a model for timbre based on the MFCC representation accounts for 66% of the perceptual variance, and conclude that a timbre space based on MFCC is a good model for a perceptual timbre space.
Conclusions
In this paper, we articulate a set of criteria for evaluating a timbre space, describe four representations of timbre, and measure subjects’ perceptual distance judgments. The result is interesting because we have shown an objective criterion that describes the quality of a timbre space, and established that MFCC parameters are a good perceptual representation for static sounds. Previous work has demonstrated that MFCC (and other DCT-based models) produce representations that are statistically independent. This work suggests that the auditory system is organized around these statistical independences and that MFCC is a perceptually orthogonal space.
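The distance computation at the heart of such a timbre space is straightforward once coefficients are available; a minimal sketch (assuming MFCC vectors already computed by some front end, which is not shown here):

```python
import math

def timbre_distance(mfcc_a, mfcc_b):
    """Euclidean distance between two steady-state sounds represented as
    MFCC coefficient vectors. In a perceptually consistent timbre space,
    this number should predict judged dissimilarity between the sounds."""
    if len(mfcc_a) != len(mfcc_b):
        raise ValueError("coefficient vectors must have the same length")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(mfcc_a, mfcc_b)))
```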

Key words: Timbre, MFCC, Mellin transform

[email protected]

4.2 The relevance of source properties to timbre perception: Evidence from previous studies on musical instrument tones

Bruno Giordano, Stephen McAdams

CIRMMT, Schulich School of Music, McGill University, Montreal, Québec, Canada

Background
Researchers have pointed out two possible explanations for timbre perception. The first focuses on its acoustical correlates. The second conceives of it as the detection of sound source properties. A more complete approach should consider both of these levels. Most research on the perception of musical timbre has focused on its acoustical correlates. Therefore, little empirical evidence supports a focus on the properties of the sound source: the instrument.

Aims
Published data on timbre perception were reanalyzed. Source distinctions based on the instrument family or on the type of excitation were tested for their ability to account for the data.

Method
Seven datasets from instrument identification studies that reported confusion matrices were considered. Chordophones and four classes of aerophones (air jet, single reed, double reed and lip reed) were assumed to belong to different instrument families. Observed and chance probabilities of correct instrument and family identification, and of within- and between-family confusions, were calculated.

Seventeen timbre spaces, derived from dissimilarity ratings of musical instrument tones, were considered. Two source criteria were considered: family (aerophones, chordophones, idiophones, membranophones) and excitation type (impulsive, continuous, multiple impacts). We tested whether stimuli belonging to the same class, defined by both criteria in isolation or in conjunction, occupied disjoint regions in the spaces.

Results
Correct identification was higher for families than for individual instruments. However, chance probability was also higher for families than for instruments. Most interestingly, within-family confusions were higher than between-family confusions, with chance probabilities following the opposite trend. These results characterized all datasets.

With all timbre spaces, different excitation types occupied disjoint regions. Disjoint regions were occupied by different families in 12 of the 17 spaces, and in 3 additional spaces when one or two data points were removed. Disjoint regions were observed in 2 spaces when excitation type and family were considered jointly.
Conclusions
Sound source properties are reflected in timbre perception data. This might indicate their presence at the representational level, as well as the fact that they define classes of perceptually discriminable stimuli. In any case, they should be considered in future empirical and theoretical investigations of musical timbre.

Key words: Timbre, Source perception, Musical instruments

[email protected]

4.3 Effect of critical band data reduction of musical instrument sounds

James W. Beauchamp1, Andrew B. Horner2, Lydia Ayers2

1School of Music and Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, USA
2Department of Computer Science, Hong Kong University of Science and Technology

Data reduction with critical bands has been used in audio compression schemes such as MP3, where data is reduced by selective bit reductions based on masking considerations. This paper explores the degree to which the timbre of individual harmonic musical sounds can be represented by spectral data reduced by critical bands.

Two methods of critical band data reduction (CBDR) were tested. First, pitch-synchronous time-varying spectrum analysis is applied to several different 2-s duration instrument tones pitched at Eb4 and to seven piano tones pitched at octaves between A0 and A6. With the first method (Tests 1 and 2), partials are partitioned into separate critical bands within which the partial amplitudes are set proportional to the band’s rms amplitude-vs.-time envelope, while preserving the average within-band spectrum, and partial frequencies are set to track an implied fundamental for the band. With the second method (Test 3), instrument spectra are smoothed with a frequency-swept bandpass filter whose bandwidth varies with center frequency according to critical bandwidth. In all cases, resynthesis is accomplished by ordinary time-varying additive synthesis. In addition, in order to test the effect of bandwidth on discrimination, bandwidths are varied by using a critical bandwidth multiplier (CBM).

Forced-choice listening tests with 20 listeners (10 musicians, 10 non-musicians) were conducted to test discrimination between the full resynthesis and instrument sounds processed by CBDR. Psychometric functions giving percent correct vs. CBM clearly showed discrimination as an increasing function of the multiplier. For Test 1 (bassoon, flute, trumpet, harpsichord, harp, marimba, vibraphone, and violin), with CBM varied from 1 to 12, results were highly dependent on instrument. Surprisingly, for five instruments discrimination was 75-85% for CBM = 1, whereas casual listening predicted guessing-level discrimination. For Test 2 (piano tones), discrimination was below 65% for CBM = 1, and the 75% discrimination point was only reached at CBM = 4 (for A0). For Test 3 (bassoon, clarinet, flute, horn, oboe, saxophone, trumpet, and violin), discrimination varied from guessing level to 77% for CBM = 1 and was above 90% for CBM = 2.5. In conclusion, CBDR causes subtle but discriminable changes to musical instrument sounds.
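For reference, critical bandwidth grows with center frequency. The abstract does not state which bandwidth formula was used, but a common analytic approximation (Zwicker & Terhardt), combined with the paper's CBM scaling, might be sketched as:

```python
def critical_bandwidth(fc_hz, cbm=1.0):
    """Critical bandwidth (Hz) at center frequency fc_hz, scaled by a
    critical bandwidth multiplier (CBM) as in the discrimination tests.
    Uses the Zwicker-Terhardt analytic approximation; whether the study
    used this exact formula is an assumption."""
    cb = 25.0 + 75.0 * (1.0 + 1.4 * (fc_hz / 1000.0) ** 2) ** 0.69
    return cbm * cb
```

This yields roughly 100 Hz at low center frequencies and about 160 Hz at 1 kHz, widening further above that.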


[This work was supported by the Hong Kong Research Grants Council’s CERG Project 613505.]

Key words: Critical band, Data reduction, Musical instrument sounds

[email protected]

4.4 Investigating piano timbre: Relating verbal description and vocal imitation to gesture, register, dynamics and articulation

Madeleine Bellemare, Caroline Traube

Laboratoire d’Informatique, Acoustique et Musique (LIAM), Faculté de Musique, Université de Montréal, Québec, Canada

Background
The art of piano playing demands a highly methodical and constant refinement of various gesture parameters in order to obtain a desired sound. Performers often discuss this dimension of their playing, but without necessarily referring to the corresponding gesture parameters directly. Nonetheless, they call upon a vast vocabulary to describe the nature of their sound; examples of adjectives include champagne, metallic, plush, round, and veiled.

Aims
This study aims to explore the verbal description of piano timbre according to pianists of high caliber and its relation to gesture, register and dynamics. It also aims to seek vocal analogies through onomatopoeia.

Method
Two sets of 8 participants were asked to select and define 10 adjectives that best describe piano timbres. They were then asked to provide synonyms and antonyms, describe how each timbre is produced, designate for which register and dynamic levels each adjective is most suitable, vocalize each timbre, and finally organize the adjectives according to similarity and other gesture parameters.

Results
Nearly 100 descriptors were collected. A correlation between proprioceptive gesture and vocabulary can be traced; for example, a round sound entails a slower attack with the soft part of the fingertips. Six descriptors are specific to certain dynamic levels, while others have recommended ranges. Significant timbres include complex timbres (combinations of at least two sonic elements into one resulting sound object), in which articulation (the relation of one note to the next) is of prime importance. When arranged on a plane according to similarity, the descriptors and their corresponding onomatopoeia form a continuum whose poles are analogous to those of the vowel articulatory triangle, illustrating transitions from open and round to spread and closed phonemes.

Conclusions
This research can contribute to the development of more elaborate pedagogical methods. The process of verbalizing the correlation between a given sound and its gesture optimizes practicing, since clarifying proprioceptive sensations develops motor skills. This study can also help to unearth the practical knowledge and understanding of sound that performers develop through years of practice, a knowledge that is shared almost exclusively within the oral context of instrumental teaching.

Key words: Timbre, Piano, Verbalization

[email protected]

4.5 Timbral description of musical instruments

Alastair Disley, David Howard, Andy Hunt

Department of Electronics, Audio Laboratory, The University of York, UK

Background
Musicians intuitively describe timbre using adjectives such as bright or clear. The controls of most synthesis methods rarely have an intuitive relationship with the timbre produced. If musicians were able to control a synthesiser using timbral descriptors, they could create sounds in a more intuitive manner.

Aims
This research aims to establish a set of uncorrelated and consistently used adjectives that can be used to control a future synthesis system. Adjectives will be gathered both from previous studies in this area and from musicians in response to musical stimuli. These will then be refined in a listening test to determine which adjectives are useful in this context.

Method
Many timbral adjectives were selected from previous studies and refined in a pilot experiment. Several new words were suggested by multiple listeners, resulting in a set of fifteen adjectives: bright, clear, warm, thin, harsh, dull, nasal, metallic, wooden, rich, gentle, ringing, pure, percussive and evolving. These were used as rating scales (e.g. bright to not bright) in a listening experiment. Fifty-nine musicians used these adjective scales to describe twelve instrumental samples chosen to explore the timbre space of conventional Western instruments.

Results
The way in which listeners used the timbral adjectives was analysed, with particular attention to consistency of usage, whether good use was made of the full rating scale range, whether adjectives were correlated with others, and how confident listeners were in their judgements. This resulted in the rejection of the adjectives evolving, metallic, pure, rich, and wooden. Opposition was clearly demonstrated between some words, notably bright with dull, and harsh with gentle.

Conclusions
Some timbral adjectives used by musicians are uncorrelated, consistently understood across multiple listeners, and should prove useful for controlling future synthesis systems. The following terms are proposed, subject to certain qualifications:

bright, clear, warm, thin, harsh, dull, nasal, gentle, ringing, and percussive.

Key words: Timbre, Adjectives, Semantics

[email protected]


4.6 Comparing real-time and retrospective perceptions of segmentation in computer-generated music

Freya Bailes, Roger Dean

University of Canberra, SCRG, Australia

Background
Studies of the perception of music have neglected computer-generated music founded on structures of timbre and texture, though recent research examining such music (Bailes & Dean, 2005) has demonstrated the ability of “non-musicians” to detect segmentation in a forced-choice task. This paradigm requires a retrospective judgement subsequent to hearing an entire sound passage. While informative, this kind of task should be compared with the real-time task of indicating when a change in segmentation is heard during listening.
Aims
An experiment explored the perception of segmentation in one- and two-segment passages of computer-generated sound. The aim was to compare participant accuracy and response time in both retrospective and real-time tasks.
Method
36 different sound items comprised either one long segment of sound or sequences of two algorithmically different segments of sound. Fourteen participants, unselected for musical training, were tested individually, hearing the stimuli over headphones. They heard each item twice, and were encouraged to indicate where, if at all, a change in segment occurred by pressing the space bar as soon as possible during listening (real-time task). Between hearings, participants were asked to press “1” or “2” to indicate how many segments they heard (retrospective task). They were free to change their minds at any stage, so that their final response best reflected their overall judgement.
Results
Segments were identified well (91%), and errors in identifying segmentation in the real-time and retrospective tasks were well correlated [r = 0.586; d.f. = 502; p < 0.001]. Key presses were significantly closer to the algorithmic point of segmentation in the second hearing (mean lag of 252 ms) than in the first (lag of 446 ms) [t289 = 10.9; p < 0.00001].
Conclusions
Participants were consistent in their segmentation judgements in real time as well as in a retrospective classification, suggesting that moments of segment change contained enough information to elicit accurate observations in real time. This supports the alternative use of real-time measures in future perceptual experiments using longer and more complex musical sequences, which might be expected to overburden memory in a retrospective task.

Key words: Real-time perception, Segmentation, Computer-generated music

[email protected]


Symposium: Tempo and Memory

5

Convenors: Oliver Vitouch and Henkjan Honing
Discussant: Daniel Levitin

How is musical tempo mentally represented? Are these representations temporally and contextually stable, even in the sense of “absolute tempo”? What are the determinants of tempo preferences, and is there stability of timing across different tempi? This session links to a symposium on “Tempo and Memory” held at the 5th Triennial ESCOM conference (2003) in Hanover (Convenor: Wolfgang Auhagen). It aims to unravel basic cognitive principles of perception, memory, and choice in the tempo domain, and to discuss their interplay in music perception and performance.

Auhagen & Busch investigate factors that influence the accurate imagination of musical tempi. They find that motor programs are helpful, but not mandatory, for developing stable tempo representations, and that rhythmic complexity negatively interacts with stability. Honing & Ladinig present evidence that timing is not relationally invariant across different tempi. They analyze influences of expertise, familiarity, and genre on sensitivity to expressive timing, and conclude that tempo and timing are inseparably intertwined. Vitouch, Sovdat & Höller test the assumption that the visual tempo of a film sequence has cross-modal effects on the perceived/retrieved tempo of the score. They demonstrate tempo transfer effects of this kind for both non-musicians and musicians. Strauss et al. aim to translate a memory adaptation effect known from research on face perception into the auditory domain. They test the influence of tempo-modified (accelerated) pieces on memory for original tempo, and find that this treatment strongly impairs seemingly stable memory representations.

5.1 Cues for tempo preference and tempo memory of imagined compositions

Wolfgang Auhagen, Veronika Busch

Department of Musicology, Martin Luther University, Halle-Wittenberg, Germany


Background
Do stable tempo preferences for imagined compositions require motor programs acquired by playing the compositions (Rötter, 1997), or can they develop independently of those (Auhagen, 2003)? New results from the analysis of Auhagen’s experiment and new data regarding the ability to memorize tempo will be discussed.

Aims
Data analysis focussed on systematic effects in the variation of tempo preferences for imagined compositions. Are these effects specific to subjects, subject groups (expertise), or compositions? A follow-up experiment questions the role of body movements in memorizing preferred tempi.

Method
The scores of eight piano compositions were presented to 10 non-pianists and 47 pianists (composition played or not). In five sessions, subjects imagined each composition in subjectively preferred tempi using an electronic metronome. Subjects answered a questionnaire, and their behaviour during tempo determination was observed. Analysis focussed on intra- and inter-personal dispersion of preferred tempi (Auhagen, 2003) and the development of tempo preferences over the course of the experiment. In a follow-up experiment, experts determined their preferred tempi for the same compositions. They were allowed either any kind of body movement (group 1) or none (group 2).

Results
Pianists who had played the compositions showed the most stable tempo preferences. But independent of any playing experience, general familiarity with a composition also supported stable tempo preferences, although subjects did not recall tempi of specific audio recordings. As physical movements, subjects mainly showed tapping along with the imagined beat, but also playing on a virtual piano and vocalising. Usage of these assisting movements declined over the course of the experiment. Analysis of the questionnaire revealed that rhythmic and melodic components are most important for imagining compositions, whereas rhythmical structure and performance aspects are the main criteria for tempo determination. The composition with the most complex rhythmical structure showed the least stable tempo preferences.

Conclusions
Motor programs appeared helpful, but not mandatory, for establishing stable tempo preferences. In imagining a composition and determining a tempo, rhythmic components were most relevant, which is reflected in subjects’ beat-tapping behaviour and in the relation between rhythmical complexity and stability of tempo preferences. Further analysis will show whether physical embodiment of musical tempo supports tempo memory.

References

Auhagen, W. (2003). Preferred tempi of imagined compositions. In R. Kopiez et al. (Eds.), Proceedings of the 5th Triennial ESCOM Conference, Hanover 2003 (pp. 639-642).

Rötter, G. (1997). Musik und Zeit. Kognitive Reflexion versus rhythmische Interpretation. Frankfurt/M.: Lang.

Key words: Tempo, Memory, Body motion

[email protected]


5.2 The effect of exposure and expertise on timing judgments: Preliminary results

Henkjan Honing, Olivia Ladinig

Music Cognition Group, University of Amsterdam, The Netherlands

Background
Perceptual invariance has been studied and found in several domains of cognition. However, for certain aspects of music (e.g., melody) there is more agreement about perceptual invariance under transformation than for others, such as expressive timing under tempo transformation. Previous studies have shown that experienced listeners can distinguish between an original and a tempo-transformed audio recording by focusing on the expressive timing, which was interpreted as counter-evidence for the relationally invariant timing hypothesis, which predicts that a tempo-transformed performance will sound equally good (Honing, in press a).
Aims
This study tries to disentangle the various factors that might have contributed to this result. A pilot study (Honing, in press b) showed that listeners familiar with the jazz repertoire are slightly more sensitive to expressive timing than listeners familiar with the classical repertoire. Based on these findings, the current study investigates the possible influence of 1) familiarity with a musical genre (exposure) and 2) the level of musical training (expertise) on the judgments made.
Method
Participants were asked to compare pairs of audio recordings, one of which had been tempo-transformed to make the pair similar in tempo. They were instructed to focus on the expressive timing, and to indicate which one of the pair was an original recording. Compared to previous studies, the group of participants was extended with non-musicians to see if the ability to discriminate was simply due to high musical skills. The musical material consisted of audio recordings from the classical, jazz and pop repertoire.
Results
A first analysis of the pilot study shows that expertise has no significant effect on the judgments made, while familiarity (i.e., preferred musical style) did have an effect. Further and more detailed analysis of the judgments made for the other genres by the same respondents, and of the effect of musical training (cross-validations), will be reported at the conference.
Conclusions
The preliminary results suggest that familiarity has a larger effect on discriminating an original from a tempo-transformed performance than musical training. We interpret these results as further evidence for the sensitivity of listeners to timing deviations in music performance, a sensitivity that is modulated by familiarity, but not so much by musical training.
References

Honing, H. (in press a). Evidence for tempo-specific timing in music using a web-based experimental setup. Journal of Experimental Psychology: Human Perception and Performance.

Honing, H. (in press b). Is expressive timing relationally invariant under tempo transformation? Psychology of Music.

Key words: Expressive timing and tempo, Musical expertise, Exposure

[email protected]


5.3 Audio-vision: Visual input drives perceived music tempo

Oliver Vitouch, Sandra Sovdat, Norman Höller

Cognitive Psychology Unit, Department of Psychology, University of Klagenfurt, Austria

Background
Music is known to potentially affect the perception of visual scenes (e.g., Vitouch, 2001), as proficiently demonstrated in the movies. But do films also influence the perception of music? This study investigates cross-modal influences in perception, taking influences of “visual tempo” on perceived/retrieved music tempo as a model.

Aims
Using different film scenes in combination with the same score, the aim is to test whether the visual information has transfer effects on the perception and/or memory of the music. This would speak for holistic effects in the processing of audio-visual (and other multi-media) information.

Method
In two fully independent experiments, n = 80 (Exp. 1, non-musicians and amateur musicians) and m = 80 (Exp. 2, music students) subjects were presented with brief movie scenes. In a between-subjects design, scenes were presented in either original or dubbed versions, each score being combined with two visual scenes of different visual tempo and/or number of shots. After watching the scene, subjects were instructed to specify the tempo of the music from memory using a metronome. Standardized positive/negative deviations from the objective music tempo were used as the dependent variable.

Results
In both experiments, visual tempo significantly influenced the retrieved music tempo. This was also evident in a higher ratio of “alla breve” specifications in the “visually slow” conditions and of “double-time” specifications in the “visually fast” conditions, respectively. As predicted, music students generally gave more accurate estimates, but showed a structurally similar tempo transfer effect. In Exp. 1, with an objective music tempo of 76 bpm, mean tempo estimates were 83 bpm (SD = 19) and 94 bpm (SD = 22) for visually slow vs. fast scenes. In Exp. 2, using music at 76 bpm and 118 bpm, visually slow vs. fast scenes elicited average tempo estimates of -7% vs. +4%. Effects were stable across different types of music and videos.

Conclusions
Both studies demonstrate clear crossover effects: there is not just “musical driving” of film scenes, but also “visual driving” of music perception. The results hint at holistic memory representations of audio-visual material.

References

Vitouch, O. (2001). When your ear sets the stage: Musical context effects in film perception.Psychology of Music, 29, 70-83.

Key words: Tempo perception, Memory for tempo, Cross-modal transfer

[email protected]


5.4 Memory representations of musical tempo: Stable or adaptive?

Sabine Strauss1, Oliver Vitouch1, Olivia Ladinig2, Dorothee Augustin3, Claus-Christian Carbon3, Helmut Leder3

1Cognitive Psychology Unit, Department of Psychology, University of Klagenfurt, Austria
2Music Cognition Group, Department of Musicology and Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands
3General Psychology and Social Psychology, Department of Psychology, University of Vienna, Austria

Background
Long-term memory processes play a central role in the way we represent the world. They provide standards for comparison of familiar content with novel stimuli, but must also allow for adaptive updating of these standards. For instance, human faces are typically assumed to elicit stable memory representations. However, Webster & MacLin (1999) have shown that presenting extremely distorted faces affects the ability to distinguish between original and slightly shifted versions. Carbon & Leder (2005) demonstrated that this adaptation effect also holds for highly familiar faces (celebrities).

Aims
This project tests whether these findings can be generalized to the auditory domain (music, voices). The present study deals with the stability of tempo representations: does the perception of extremely accelerated versions of familiar pieces affect judgments about their original tempo?

Method
Thirty subjects (mean age = 26.5, SD = 7.5) were tested with six TV title themes, pre-selected for familiarity. In the treatment phase, they heard either the “original” or an “extreme” version (digitally accelerated, +30%, no pitch modification) for 15 s. This was followed by 2 s of silence, 3 s of pink noise, and 2 s of silence. In the probe phase, they heard either the “original” or a “shifted” version (+10%) for maximally 30 s. In a yes-no task, subjects decided whether the test stimulus was the original version of the theme. The design was fully balanced, with the treatment (original vs. extreme) x probe (original vs. shifted) x theme combinations resulting in 24 trials.
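The fully balanced design can be enumerated directly; a sketch (labels are illustrative, not the authors' stimulus code):

```python
from itertools import product

def build_trials(themes,
                 treatments=("original", "extreme"),
                 probes=("original", "shifted")):
    """Enumerate the balanced treatment x probe x theme design:
    2 treatments x 2 probes x 6 themes = 24 trials."""
    return [{"treatment": tr, "probe": pr, "theme": th}
            for tr, pr, th in product(treatments, probes, themes)]
```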

Results
A three-way repeated-measures ANOVA revealed a clear-cut treatment x probe interaction (across all themes; p < .001). Correct rejection of a “shifted” probe dropped from 68% in the control condition to 28% in the “extreme” treatment condition.

Conclusions
The results are consistent with Carbon & Leder (2005), and demonstrate the domain-generality of the adaptation effect. They point to flexible updating processes in the formation of long-term memory standards, and are in line with Dudai’s (2004) neurophysiological theory of adaptive memory formation. While memory representations of tempo may be generally stable (Levitin & Cook, 1996), they are strongly susceptible to context information.

References

Carbon, C.-C., & Leder, H. (2005). Face adaptation: Changing stable representations of familiar faces within minutes? Advances in Cognitive Psychology, 1, 1-7.


Dudai, Y. (2004). The neurobiology of consolidations, or, how stable is the engram? Annual Review of Psychology, 55, 51-86.

Levitin, D. J., & Cook, P. R. (1996). Memory for musical tempo: Additional evidence that auditory memory is absolute. Perception & Psychophysics, 58, 927-935.

Webster, M. A., & MacLin, O. H. (1999). Figural aftereffects in the perception of faces. Psychonomic Bulletin and Review, 6, 647-653.

Key words: Tempo, Long-term memory, Adaptation effect

[email protected]


Neuroscience I 6

6.1 Psychophysiological investigation of emotional states evoked by music

Nastja Gloeckner

Forschungsnetz Mensch und Musik, Universität Mozarteum Salzburg, Austria

Background
Exactly how, to what degree, and under what conditions does music intensify or induce emotions? This question was addressed by recording and attempting to explain cognitive and emotional reactions to music, and the associated cerebral and vegetative parameters, in the clearly defined context of a music examination. The results may enable the application of modern vegetative monitoring processes in music therapy. Moreover, a better understanding of emotion-specific patterns of vegetative body functions should facilitate future measurements of emotional states.

Method
12 musicians and 12 non-musicians participated. Each first filled in questionnaires on personality traits, musical preferences and psycho-physiological health. S/he then listened to eight selected pieces of music of 3-4 min duration. Each piece expressed one of the basic emotions well-being, happiness, sadness or anger. Immediately following each composition, s/he rated the music and her/his present emotional, cognitive and muscular state.

We are studying the degree to which the cognitive-emotional process is reflected in the vegetative system, as indexed by variations in skin resistance, skin potential, and muscle activity. EEG measurements of the electrical signal flow to the brain during music listening are being used to identify the contralateral control of the peripheral nervous system by the central nervous system.

The musical and physiological data are being analysed and correlated using time-series analysis, Fourier analysis, and estimation of the textural density of the musical parameters using AlisOnda software.

Results
A preliminary analysis of the data suggests that the four emotional expressions of the musical pieces are rated very similarly by the subjects. The largest differences appear between the two groups (musicians and non-musicians), primarily concerning the dimension "mood" rather than the dimension "activation". The self-ratings show that activating music (expressing happiness or anger) has more influence on emotion, cognition and muscle activity than calming music (expressing well-being or sadness).

The analysis of the physiological data showed that the interhemispheric differences concerning activation found in the EEG are reflected in the vegetative parameters as well.

Further analyses are needed to determine in what way the self-rated psycho-physiological status is reflected in the nervous functions and how the recorded physiological data are related to the musical data.

Outlook
The study aims to improve knowledge of the effects of music on emotion and cognition and of the link between emotions and physiological parameters of the vegetative nervous system.

Key words: Effect of music, Cognition - emotion, Central and vegetative nervous system

[email protected]

6.2 Investigation of brain activation while listening to and playing music using fNIRS

Haruhiro Katayose, Noriko Nagata, Koji Kazai

School of Science and Technology, Kwansei Gakuin University, Japan

Human beings listen to and play music, sometimes to be healed, relaxed or concentrated, and sometimes to raise morale for battles or religious services. Music has the power to move the human mind and spirit. There have been plenty of investigations, mainly psychological, that aim to reveal what in music moves human beings. Recent studies using PET or fMRI have shown brand-new findings on human brain mapping related to musical activities. This paper investigates brain activation measurement using fNIRS (functional near-infrared spectroscopy). fNIRS allows a subject to move to some extent once the attached light-fiber probes are fixed to the subject's head, a desirable property for measuring brain activity while playing music.

We have been conducting two types of studies using fNIRS. One is a study focusing on activeness/concentration toward music. The other is a study featuring Japanese drums, where the effects of "sound and vibration" and "interaction field" are investigated.

As for the first study, we used an original conducting (or tapping) interface called iFP. The users of iFP are able to control the tempi and dynamics of a prescribed expressive performance template, maintaining delicate nuance within a beat. For the experiments, the subjects were instructed to listen, to listen carefully while imagining the next musical progressions, and to play with iFP, while brain activity at the frontal lobe was measured. The experimental results showed that the HbO (oxy-hemoglobin) index at the Fz position was lower when the subjects listened to the music carefully and when they played with the iFP. The decrease was more salient with the iFP. These results corresponded very well to the subjects' introspective reports regarding concentration on the music.

From the study using Japanese drums, we obtained results that suggest 1) beating a real Japanese drum activates several parts of the subject's right frontal lobe more than beating a toy drum-pad, whose sound and vibration are cheap, and 2) a subject's brain is activated more while he/she is in a group session than in a solo session. In the full paper, we would like to present the detailed experimental design and further results.


Key words: Brain activity, fNIRS, Performance

[email protected]

6.3 Neural correlates of musically-induced emotions: An fMRI study

Gunter Kreutz1, Ulrich Ott2, Sina Wehrum3

1 Royal Northern College of Music, Manchester, UK
2 Bender Institute of Neuroimaging, University of Giessen, Germany
3 Department of Psychology, University of Giessen, Germany

Neural correlates of musically-induced emotions have been the subject of several brain imaging studies over the past years. In the present study, selections of instrumental classical and contemporary pieces of music, which were pre-tested to induce basic emotions, were used within an fMRI scanning protocol. Participants (N = 25) provided ratings of felt emotion, arousal, and valence after listening to each of twenty-five excerpts representing "joy", "sadness", "peacefulness", "anger", and "fear". In addition, psychometric measures of mood changes as well as physiological measures were recorded. Music listening, evaluation of stimuli, and the contrast of listening modulated by valence (and arousal) ratings led to significant activations in a number of brain regions, including temporal and occipital cortex, and cerebellum. Significant voxel clusters were found by means of parametric analyses when brain activations were modulated by ratings of valence, happiness, and peacefulness. Additional region-of-interest analyses were conducted using contrasts of listening modulated by subjects' ratings. Again, significant activations were found for valence, happiness, and peacefulness only. They included anterior cingulum, basal ganglia, insula, and nucleus accumbens. These results suggest that positive emotions induced by music listening activate cortical and sub-cortical limbic structures. Implications for future investigations of emotion induction by music stimulation are discussed.

Key words: Emotion, Brain, fMRI

[email protected]

6.4 The use of music in everyday life as a personality-dependent cognitive emotional modulation strategy for health

Richard von Georgi1, Phillip Grant2, Susanne von Georgi3, Stefan Gebhardt1

1 Department of Medical Psychology and Medical Sociology, Justus-Liebig-University, Giessen, and Department of Music Science, Justus-Liebig-University, Giessen, Germany
2 Department of Psychology and Sport Science, Justus-Liebig-University, and Institute of Anatomy and Cell Biology, Justus-Liebig-University, Giessen, Germany
3 Center for Psychiatry, Justus-Liebig-University, Giessen, Germany


Background
Recent studies show music being used situation-dependently in everyday life a) to affect the emotional processing of existing states, b) to modulate attention processing and c) to establish or sustain social connections (e.g. DeNora, 1999; Sloboda & O'Neill, 2001; North et al., 2004). Other studies stress the dependence of musical preference on biologically determined personality traits (e.g. Robinson et al., 1996; Rawlings et al., 2000). Data on the connection of health and personality (e.g. Booth-Kewley & Friedman, 1987; Marsland et al., 2001) and on successful therapeutic implementation of music (Phumdoung & Good, 2003; McDonald et al., 2003) lead to the possible conclusion that the everyday use of music has a vital influence on overall well-being.

Aims
The aim of this paper is to examine whether the use of music as a personality-dependent modulation strategy moderates the supposed direct influence of personality on health.

Method
148 students were asked to fill in the following questionnaires: PANAS (Watson et al., 1988), BIS/BAS (Carver & White, 1995), SKI (von Georgi & Beckmann, 2004), FPI-R (Fahrenberg et al., 2001), the Inventory for the Assessment of Activation- and Arousal-Modulation through Music (IAAM), as well as a questionnaire on general health. The IAAM assesses the functional utilisation of music on the following dimensions: Relaxation (RX), Cognitive Problem Solving (CP), Reduction of negative Activation (RA), Fun Stimulation (FS) and Arousal Modulation (AM) (von Georgi et al., 2004, 2005). Apart from correlation analyses, a structural equation model corresponding to the aim of the paper was tested against alternative models with the program AMOS.

Results
The results show: (a) Musical preferences are significantly correlated with the modulation strategies RA and FS (p<0.05); (b) negative emotionality coincides with health, whereas the personality-dependent modulation strategies RX, CP and RA have a significant opposite positive effect and improve the prediction (p=0.373); (c) variables of positive emotionality or modulation via music are not connected to health (p<0.001).

Conclusions
The empirical data clearly show the development of personality-dependent strategies to modulate negative emotions. These strategies can be measured and have a direct positive influence on physical well-being. Positive activation through music, on the other hand, does not appear to play a vital role. The results are interpretable on the basis of the functional utilisation of music for the modulation of personality-dependent neurophysiological-emotional activation processes.

Key words: Health, Emotion, Personality

[email protected]

6.5 Probing the representation of melody, an ERP study

Rebecca Schaefer, Peter Desain

Music, Mind, Machine Group, Nijmegen Institute of Cognition and Information, Radboud Univer-sity, The Netherlands

Background
Though quite a few studies have been conducted to demonstrate the ERP response to missing notes or unexpected pitches (cf. Besson, Faita & Requin, 1994), very little work has been done on the systematic investigation of the traces of the evolving mental representation of a musical melody. This is a pity, as the question of how the structure of melodies is stored (as a sequence of pitches, as interval jumps, as contour, as pitch functions within the key, or as a combination of these) has been very productive in behavioral studies (cf. Dowling, Tillmann & Ayers, 2001).

Aims
In our experiment we investigated the full ERP traces during listening to and imagining four melodies (Magulis). Grouping the data in different ways over the melodies (contrasting different pitches, interval jumps, harmonic functions, beat level and contour turns), we aim to find more pronounced differences for aspects that are more fundamental to the processing and encoding of melodic structure.

Method
The experiment was conducted at Stanford CLSI, as a part of a larger study. 18 subjects received 36 repetitions of 4 melodic sequences, followed by a segment with a probe tone (not studied here). EEG was recorded with electrodes placed according to the international standard 10/20 system.

Results
All pitches were labeled according to their position in the melody, the interval from the preceding pitch, the beat level, harmonic function, and place in the melodic contour. Preliminary results show much clearer patterns for perception than for imagery, which is likely due to individual strategies in mental imagery. In perception, it appears that specific events (first note, highest metrical level, certain interval jumps) cause a different structure of the traces. It remains to be seen whether this will enable a decomposition of the ERP trace into components linked to the structure coded in a mental representation.

Conclusions
The investigation of the EEG signature of single pitches in the context of a melody will shed more light on the nature of mental representations of musical melodies. The standard decomposition of ERP components (N1, P2, P300, ...) fails when rapidly evolving stimuli are presented, and a clearer view is needed of how traces of the parts and aspects of complex material are combined into the trace of the evolution of an integrated percept.

Key words: Melody perception, Melodic imagery

[email protected]

6.6 A natural high: The physiological and psychological effects of music listening on former Ecstasy users

Claire Connell, Steven M. Demorest

University of Washington, Seattle, USA

Background
The use of music for achieving altered states is a well-established phenomenon in the ethnomusicology literature. More recently, a number of writers have observed that long-time Ecstasy (MDMA) users claim the ability to recreate the Ecstasy state during a rave without taking the drug. Others have suggested that such an experience can be recreated through even a single triggering mechanism (Takahashi, 2004). There is also a debate about possible long-term neurotoxicity of MDMA in humans leading to depression (see Curran, 2000).

Aims
The purpose of this study was to test the psychophysiological responses of Ecstasy users and controls when listening to recordings of rave and rock music. The specific hypotheses were: 1. Ecstasy users will have a physiological response to rave music in the form of an increased heart rate. 2. Ecstasy users may have a significantly more negative psychological response to all music than controls. 3. Ecstasy users will have significantly higher TMD scores on the POMS mood inventory.

Method
9 Ecstasy users (abstinent for at least 1 month) and 10 controls (mean age 24.5) were exposed to two 2-minute examples of live recordings of rave music and two examples of rock music matched for tempo and drive. Physiological responses were measured using a heart rate monitor, and psychological responses were measured using a three-factor semantic differential task. The POMS-Brief mood inventory was administered prior to the music listening.

Results
Ecstasy subjects had a significantly higher increase in heart rate when listening to rave music than controls, but no difference when listening to rock. Ecstasy subjects responded more positively to rave music on the value and color dimensions of the semantic differential task than controls. Ecstasy subjects did not have a significantly higher (toward depression) total mood disturbance score on the POMS-Brief.

Conclusions
These results suggest that former Ecstasy users do experience a different psychophysiological reaction to rave music that may be specifically linked to previous experiences at raves while on Ecstasy. The lack of difference in mood scores calls into question the argument that Ecstasy is neurotoxic in humans, though the sample was small.

Key words: Affective response, Heart rate, Ecstasy (MDMA)

[email protected]


7 Social Psychology

7.1 Self-representation of bimusical Khanty

Triinu Ojamaa

Department for Ethnomusicology, Estonian Literary Museum, Estonia

Background
The Khanty can be characterized as a quasi-assimilated people. They are educated in Russian schools and have therefore become bilingual and bimusical. The middle-aged Khanty are still able to sing in the traditional manner, but they have accepted the superiority of Russian (music) culture. Under these sociopolitical circumstances they try to "modernize" their traditional music. Khanty songs have been exclusively solo songs with quite extensive improvisational freedom on the level of both lyrics and melody. The desired result of the ongoing modernization appears to be ensemble singing in perfect unison. The Khanty regard this as one of the main characteristics of Western vocal music, which should grant their traditional culture higher recognition among the dominant "others".

Aims
The aim of the study is to investigate how the bimusical Khanty implement their double identity via music making.

Method
The Khanty female singers have been observed in a natural setting. They demonstrate their music-making abilities by singing Khanty traditional solo songs, traditional solo songs in chorus, and a Russian children's song learned at school.

Results
The performance of Khanty traditional solo songs showed that the singers knew the traditional style of singing. Their performance of the Russian song demonstrated that they were musical from the point of view of Western music: they were able to sing in unison correctly. The singers' attempt to sing the Khanty song in unison failed, however. The lyrics caused no problems, as they used a textbook that contained lyrics written down in collaboration. But they could not reach consensus in singing an identical melody.

Conclusions
The singers commented that the result of their group singing was an unsatisfactory musical product both 1. in the traditional context, where ensemble singing was not practiced, and 2. in the context of high (Russian) music culture, because their unison was not perfect. The singers stressed the need for further training to reach a performance level that would enable them to represent the Khanty not as culturally inferior but as a people capable of music-making according to the standards of the "higher" culture.

Key words: Bimusicality, Musical behaviour, Social context

[email protected]

7.2 Music and identity of Brazilian Dekasegi children and adults living in Japan

Beatriz Ilari

Federal University of Paraná, Curitiba, Brazil

Background
The Dekasegi movement (in Japanese, "to work away from home") started officially in 1990, when the revised immigration law allowed descendants to legally work in Japan. Despite the large immigration contingent of the last two decades, adaptation in Japan is still very difficult for Brazilians, with the Japanese language being the first barrier. Upon arrival in Japan, many Brazilians face a major identity crisis, as they have Japanese features but are culturally a mix of Brazilian and Japanese. These conflicts, along with all the problems that are inherent to immigration in a foreign country, represent a major challenge for Japanese society. Given the relationship between music and identity, it is important to understand the role of music in the construction of the cultural identity of Dekasegi children and adults living in Japan.

Aims
This study investigated the relationship between music and the cultural identity of Dekasegi children and adults living in Aichi-ken, Japan.

Method
Semi-structured interviews on life stories, immigration and musical experiences were conducted with Dekasegi children (n=11) and adults (n=7) living in Homi-danchi and Kariya-shi. Each interviewee was also asked to sing a favorite song in Portuguese or Japanese.

Results
All interviewees mentioned that they sang or listened to Brazilian music as a way to "stay closer to Brazil" and to preserve their "brasilianidade". Most adults also mentioned a preference for Enka, arguing that it is a Japanese musical style similar to Brazilian Sertanejo. When asked to sing a song of their preference, all participants sang in Portuguese. Interestingly, most children selected patriotic songs to sing.

Conclusions
Interview data suggested that most Dekasegi children and adults used music as a way to reassert their Brazilian identities and as a form of comfort during the everlasting period of adaptation in Japan. Whereas Brazilian music seemed to be used as a means to preserve and nurture "Brazilian identity", Japanese music was incorporated into daily listening habits only if it appeared to be similar to Brazilian music. Participants who were fluent in Japanese were apparently more adapted and integrated into Japanese society, which was also reflected in their preferences for Japanese music.

Key words: Identity, Immigration, Dekasegi movement

[email protected]

7.3 New roles of music in contemporary advertising

Stanislao Smiraglia

Department of Human and Social Sciences, University of Cassino, Italy

Background
Music in contemporary advertising is analyzed in a social-psychological perspective (Hargreaves and North, 1997) by considering how everyday listening to music (Sloboda, 2001) reflects on social representations and brand attitude. Generally, in the advertising literature the stress is on the consumeristic effect, while the cultural fall-out of music is not evaluated in depth. Only rarely are music and advertising considered in a holistic way; generally, music is treated as a mere peripheral factor of persuasion.

Aims
We conceive of advertising not as a simple stimulus but as a polysemic language, and of music in advertising not as a simple addendum. Advertising and music consumption should be referred to the environment and the situation in a very broad sense, as both contribute to the construction of reality. Moreover, we hold that music cognition, brand attitude and social identity processes widely influence each other. In any case, the congruence of the global advertisement (brand, music, endorsement) is particularly relevant (musical fit).

Main Contribution
Referring to the Elaboration Likelihood Model of Persuasion (Petty & Cacioppo, 1983), the author argues that, in the current advertising panorama, music is not a peripheral trigger but a central factor in the probability of elaboration. The analysis of emerging advertising and music trends, viewed as a molar point, implies that music may take on a fundamental and sometimes controversial role in the evolution of brand attitude and beyond. This theoretical view seems to be backed by early data concerning the verbal contents of threads and discussions developed by on-line communities devoted to this topic. We have been collecting and analysing, with quali-quantitative methods (Grounded Theory), a large series of verbal formulations expressed in the context of advertising-focused forums devoted to music. We have been treating these artefacts as social representations of responsiveness to advertising and music stimuli.

Implications
This reinterpretation of the role of music enables the author to examine person-environment interactions across a new range of everyday real-world phenomena. Music in advertising is a relevant element of current material history, and the musical experience should increasingly be analyzed with attention to this emerging cultural and political role.

Key words: Advertising, ELM, Social representations

[email protected]


7.4 Analysing music and social interaction: How adolescents talk about musical role models

Antonia Ivaldi

Royal Northern College of Music, Manchester, UK

Background
Despite the growth of research in discursive psychology and social interaction, there is little research that has examined interaction in a musical context. Conversation analysis is one of the key methodological approaches to analysing social interaction, focusing on how speech is actually produced. Previous research has advocated that conversation analysis addresses real-world issues. Indeed, its application to areas such as mediation, political rhetoric, the design of information technology, and the treatment of speech disorders has already been documented. Such research has suggested patterns in interaction, but what are the patterns in music talk?

Aims
Through analysing adolescents' talk about musical role models, the study aimed to: 1. Identify the interaction practices used by adolescents as they talk about their musical role models; 2. Identify how adolescents categorize role models from classical and popular genres within their talk; 3. Identify how adolescents use talk to negotiate conflict between playing an instrument and maintaining a popular image.

Method
Seven focus groups lasting between 60-90 minutes were conducted, each involving four males and four females between 14-15 years of age. Participants were presented with 19 pictures of famous musicians from classical and popular genres (e.g., Pavarotti, Charlotte Church, Robbie Williams and Britney Spears) and were asked to discuss whether they were familiar or unfamiliar figures, whether they were liked or disliked, and the reasons why.

Results
The analysis explored how the adolescents constructed, and moved in and out of, different social groups and identities as they talked about their role models. In particular, the study highlighted the rhetorical devices used by adolescents when talking about musicians from classical and popular genres, and how, by using such devices, they were able to position themselves in categories associated with wealth, privilege, and popularity.

Conclusions
There is little research that explores interaction in a musical context. This study attempted to address the absence of research in this area, identifying in particular the specific patterns used by adolescents in their talk about musical role models. This study therefore informs both our understanding of music and interaction, and the construction of adolescent social identities.

Key words: Social interaction, Musical role models, Adolescent identity

[email protected]

7.5 Choosing music in the Internet era

Rossana Dalmonte, Marco Russo


University of Trento, Italy

Background
The birth of the Internet and the spread of personal computers modified the perception of music. In particular, digitalized sounds and their transmission through the net gave birth to new forms of music supply via the World Wide Web and streaming [Mari 1999; Merriden 2001; Prato 1995; Silva-Ramello 1999], with sociological and legal after-effects [Darias de las Heras 2003; Di Carlo 2000]. The search for music is often rather casual, but recently the activation of specialized sites has allowed users to find pieces consistent with particular preferences [Matarazzo 2001].

Aims
This study aims to point out the strategies of the users of a music site (in this case CoCoA of the University of Trento) in choosing pieces among all the available items. The site was created in December 2002 and has 84,366 registered members; 2,382,633 pieces have been downloaded and 798,830 compilations created. These data allow us to suppose that they adequately describe the behaviour of internet users searching for classical music.

Method
We examined the logs automatically produced by the server at each entry and how the musical data were found. The principal criteria have been drawn from studies of social psychology [Cardaci 2001; Cavazza 1996; Wallace 2000], especially regarding the quantity of the copied pieces and the entry modalities. Another methodological source was the literature on the psychology of selection [Baron 2000; Hastie-Dawes 2002; Rumiati 1990, 2000; Rumiati-Bonini 1996, 2001; Smith-Shanteau-Johnson 2004; etc.].

Results
The first kind of result defines the profile of the average user of CoCoA; the second describes in general the musical preferences of the users; the third regards the way different pieces are downloaded and combined in a compilation.

Conclusions
The results indicate that the choices of the CoCoA users are greatly influenced by the nature of the site itself, so that the Internet does not appear to be a neutral space. The profile of the average user has precise characteristics, and his choices appear influenced by the modality of presentation of the pieces on the site. Even the combination of different pieces in a compilation is not completely free, and does not always depend on purely musical categories.

Key words: Choice of music, Musical preferences, Music and internet

[email protected]

7.6 Shared Soundscapes: A social environment for collective music creation

Joana Cunha e Costa1, Álvaro Barbosa2, Daniela Coimbra3

1 Research Center for Science and Technology of the Arts (CITAR), Portuguese Catholic University, Porto, Portugal
2 Music Technology Group, Audiovisual Institute, Pompeu Fabra University, Barcelona, Spain
3 Centre for the Study of Music Performance, Royal College of Music, London, UK

Background
The development of internet communication and computer devices has led to the appearance of a new domain of collective music creation. Musicians and computer science engineers are working together in networked music, creating instruments that allow the emergence of what has been titled Shared Soundscapes (Barbosa, 2003). This collaboration brings out a new conceptual space for musical creation (Boden, 1990; 1996) that is being explored by these specialists. This new domain offers two very attractive features from a creativity point of view. On the one hand, its instruments are being built in a user-friendly way on the Internet, which allows any interested person in any place to try, and indeed be able, to create music. On the other hand, and in accordance with l'esprit du temps, it is made to enable collective creation. In order to both understand and explore the real possibilities of this conceptual space, we will apply Csikszentmihalyi's Systemic Perspective of Creativity (1998).

Aims
The present paper aims to characterise the domain of networked music and identify the features that might aid or hinder collective musical creativity. We will do this by using Csikszentmihalyi's concepts of Symbolic Domain, Social Field and Innovative Person (Csikszentmihalyi, 1998; Nakamura and Csikszentmihalyi, 2001). According to this theory, creativity results from the interaction of these three factors. Providing a set of operational concepts for an integrated multidimensional analysis, this may well be a useful tool for assessing why creativity may or may not occur in networked music. Additionally, to facilitate a deeper understanding of this new domain, we will present a case study of a networked music instrument: the Public Sound Objects (PSOs; Barbosa, 2002).

Main Contribution
This paper is a contribution to the literature on the social psychology of musical creativity. More specifically, it presents a contribution to the assessment of the social impact of different technological features on collective creativity in a developing domain.

Implications
These new musical instruments created within the framework of networked music may assume different functional characteristics related to composition or improvisation, communication and social interaction. We expect that from this work we can provide some strategies for exploring these characteristics and thus contribute to the development of this new creative domain.

Key words: Musical creativity, Networked music, Collective creation

[email protected]


8 Poster Session I

8.1 An "of intellect sensation" at the root of the thought about artistic gesture

Véronique Alexandre Journeau

University of Paris IV Sorbonne, France
University of Paris VII Denis Diderot, France

While devoted to his art, a musician cannot miss, now and then, in an unforeseeable manner, the feeling of a rare sensation: the sensation of the "right" gesture. Such a sensation is first of all spontaneous, a coincidence, because that artistic mastery does not seem to come from control by the mind but, on the contrary, appears in a momentary loss of the mind's hold. Therefore, one wishes to cross from an unthought sensation to a "thinking the sensation" position. The aim of this paper is to study the process of carrying out artistic gestures, the passage between instinct and mastery, with references to Greek philosophers for the relation between representation and appropriation, and to Chinese ones for a phenomenological approach and the wordless transmission between master and disciple.

When an artist tries to find it again by way of the power of mind, the "true" sensation becomes "of intellect" (conscious mastery) in the sense of Diogenes the Babylonian. Such a rare sensation is characterized by Li Xiangting, a contemporary Chinese guqin zither musician, as a "particular inspiration appearing in the moments where the musician reaches the supernatural". The Stoics described the dual nature of perception and of logos, and compared notions acquired by learning with pre-notions directly issued from Nature. The Chinese have developed an artistic appreciation by way of metaphors which create mental images, thanks to the representation of ideas through the perception of their principles in Nature.

Experience of such a sensation inspires a circular thought about the intention behind the gesture, which leads us to structure as a triad the relation, traditionally conceived as two-poled, between theory and practice, in a "va-et-vient" (to and fro) between action and perception, perception and conceptualization, and conceptualization and action. A third world, of sensations, imagination and inward language, then takes its full value between the physical world and the notional world.

In conclusion, "perception may be considered as a 'true-conditional' stage culminating in a perceptual judgment" ("The notion of inward language", Jean-Michel Fortis), an aesthetic discernment even.

Key words: Sensation, Thought, Gesture

[email protected]

8.2 Comparative analysis of the emotional intensity when performing different styles. A study of cases

Arantza Almoguera

Universidad Pública de Navarra, Spain
Escuela de Música Joaquín Maya de Pamplona, Spain

Background
Some research shows that music is one of the most effective inducers of intense emotional experiences, but only in the last few years has the study of emotional response to music received the attention it deserves. Moreover, almost all studies focus on the connection between emotion and musical expression and its interaction with the listener's response. Our research is therefore centered on the emotional intensity felt by musicians in the course of performance.

Aims
Our research seeks to probe the performer's subjective experience of emotion during the performance of two works: J.S. Bach's Trio Sonata BWV 1039 (Baroque) and W.A. Mozart's Quartet in D (Classical). Furthermore, we investigate possible agreement among individuals on the passages that moved them most and, starting from these results, make a comparative study of the two styles regarding the likely causes of that emotional intensity.

Method
Given that the performer's subjective world is accessible only verbally, once the performers had played the pieces we conducted individual interviews, which were later analysed together with the sheet music.

Results
In general, the passages in which performers experience the greatest emotional intensity are those in which their own voice takes the leading role. The passages marked on the Baroque score show similar musical characteristics (dissonances, harmonic and rhythmic changes), but they did not coincide across performers. In the Classical score, passages were scarcely marked, and the performers alluded to a relaxed, general feeling of enjoyment rather than to musical climaxes.

Conclusions
One of the aspects that affects performers' experience of emotional intensity is their role within the piece. The musical characteristics and the particular musical density of each style also affect the emotion felt by the performers.

Key words: Emotional intensity, Performance, Emotional response

[email protected]

8.3 Representation of the musician's body: The role of emotion

Aurélie Fraboulet

Laboratoire Psychomuse, Centre de Recherche en Psychologie et Musicologie Systématique, Université Paris X - Nanterre, France

Within the framework of research on the sources of human musicality, we propose to explore the relation between emotional expression and the instrumental movements of musicians.

This study is part of a doctoral thesis on musicians' self-perception of the body. The first results showed that there are social representations of the two musical forms (classical music and jazz) which convey particular and different conceptions of the body. We then asked ourselves about the origins of such conceptions, and our attention turned to musical teaching, and more particularly to the musician's body during training. We observed that the body is considered differently in the musical education of the two musical forms: in classical music education, the instrumental movement is highly defined and precise, whereas we did not observe the same demand in jazz education.

Since the subject of the thesis is musicians' self-perception, our objective was to explore the link between the body, and more particularly its movements, and feelings.

Our hypothesis was that the expression of feeling lives through particular instrumental movements, which come from musical education. That is why we work on two types of instrumental movements: functional movements, which come from musical education, and non-functional movements, which are characteristic of each musician.

The experimental method consisted of a video analysis of pianists' movements during a musical production. We compared the pianists' own subjective analysis of these movements with an analysis by music teachers. The results showed that the expression of emotion lives through the non-functional movements.

This study will enable us to draw conclusions on the relation between a particular conception of the body, conveyed by the social representations of the two musical forms, and the emotional expression of the musicians.

Key words: Social representation of music, Musician’s body, Instrumental movement

[email protected]

8.4 The functional neuroanatomy of temporal structure in music and language

Anjali Bhatara1, Vinod Menon2, Daniel Levitin1

1Department of Psychology, McGill University, Canada
2Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, USA

Background
Music and language are similar in that they are both complex auditory phenomena with a syntax and structure that depend on the temporal sequence of events in a sentence or chord progression. Patel, Gibson, Ratner, Besson, & Holcomb (1998) compared language and music syntax and found similar P600 ERPs in response to complex syntax. Using MEG, Maess, Koelsch, Gunter & Friederici (2001) found that the syntax of music is processed in Brodmann Area (BA) 44, which is part of Broca's area. Levitin & Menon (2003) disrupted the syntax of music by scrambling it, randomly rearranging 250-350 msec chunks. They then played normal and scrambled (scr.) music for participants in an fMRI scanner and found activation in BA 47, which they argued is a neural locus for the processing of temporal structure.

Aims
The aim of the present study is to directly compare the processing of temporal structure in music and language. We used the Levitin & Menon paradigm with music and speech stimuli that were matched for emotion, arousal and attention.

Method
We used fMRI to examine brain responses in 20 right-handed non-musicians as they listened to music, speech, and their scr. counterparts. In Experiment 1, subjects listened to music versus scr. music segments, as in our previous study. In Experiment 2, subjects listened to speech versus scr. speech segments. Brain responses were compared within and across the two experiments.

Results
We found activation for music - scr. music in Brodmann Area (BA) 47, part of the left inferior frontal cortex (LIFC). Our results concur with those of Levitin & Menon (2003). For speech - scr. speech we also found activation in the LIFC, but posterior and superior to that for music, in Brodmann Area 45.

Conclusions
These results suggest that distinct regions of the LIFC may contribute to the processing of temporal structure in music and speech.

Key words: Language, Prefrontal cortex, fMRI

[email protected]

8.5 A test of the role of positive and negative affect in the prediction of Performance Anxiety severity in a sample of piano students

Carolina Bonastre, Roberto Nuevo

Universidad Autónoma de Madrid, Spain

Background
The conceptual distinction between positive and negative affect as independent but correlated dimensions has demonstrated high heuristic value for distinguishing between different anxiety and mood disorders. In this sense, it has been proposed that a general negative affect factor could play an important role in several anxiety disorders. The association between positive and negative affect and Music Performance Anxiety (MPA) has not been well established, although there are descriptive results suggesting that only Negative Affect may be relevant for MPA (e.g., Kaspersen & Götestam, 2002).

Aims
The present work aimed to analyze the fit of a model establishing a relation between positive and negative affect and MPA, in order to test the differential relation between the two dimensions of affect and MPA.

Method
Sixty advanced piano students (61.7% female; mean age = 21.6, SD = 5.9) completed the following questionnaires during collective classes: the Spanish version of the Music Performance Anxiety Inventory for Adolescents (MPAI-A; Osborne & Kenny, 2004); the Penn State Worry Questionnaire - Abbreviated (PSWQ-A; Meyer et al., 1990; Spanish version by Nuevo et al., 2002); and the Positive and Negative Affect Schedule scales (PANAS; Watson et al., 1988; Spanish version by Sandín et al., 1998).

Results
Structural equation modelling was used to test the fit of a model with positive and negative affect predicting the level of MPA. The fit indices indicated an adequate fit (AGFI = .971; RMSEA = .000; CMIN/DF = .341). Only Negative Affect had a significant regression weight. An additional model including trait worry (PSWQ-A) as an intermediate step was also analyzed. Although the fit indices remained good, they worsened slightly, and only the weights between Negative Affect and PSWQ-A and between Negative Affect and MPA were statistically significant, while the weights between PSWQ-A and MPA and between Positive Affect and PSWQ-A were practically irrelevant.

Conclusions
These preliminary results support the potential relevance of Negative Affect in MPA, which could have important implications both for the prevention and treatment of MPA and for the conceptual distinction of MPA as a specific type of anxiety.

Key words: Music performance anxiety, PANAS, Stage fright

[email protected]

8.6 Coping strategies for performance anxiety in musicians

Carolina Bonastre, Roberto Nuevo

Universidad Autónoma de Madrid, Spain

Background
Music Performance Anxiety (MPA) is a frequent and pervasive problem for many professional musicians and music students. According to the main competing models of the development and maintenance of clinical anxiety, how people cope with anxiety could play a central role in its chronification, and the study of adaptive and maladaptive ways of coping with anxiety in specific areas could provide keys for its prevention and treatment. Several recent works (e.g., Fehm & Schmidt, 2005) have described long- and short-term strategies used by musicians to cope with stage fright.

Aims
The present work aimed to analyze the association between different kinds of coping strategies and the level of MPA in a sample of music students.

Method
Sixty advanced piano students (61.7% female; mean age = 21.6, SD = 5.9) completed the Spanish version of the Music Performance Anxiety Inventory for Adolescents (MPAI-A; Osborne & Kenny, 2004) and an abbreviated form of the Spanish version of the COPE inventory (Carver et al., 1989), with specific instructions for music anxiety situations.

Results
Correlations between specific coping strategies and MPA were positive and significant only for use of substances (p < .01), and negative and statistically significant for Restraint Coping and Personal Growth (p < .05). The sample was split into three groups according to MPAI-A score: low-, medium-, and high-MPA. According to post-hoc comparisons, high-MPA subjects had significantly higher scores on use of substances (p < .05), while Distraction and Acceptance were used mostly by low- and medium-MPA subjects. Finally, a stepwise regression analysis was carried out with the different coping strategies regressed on the level of MPA. Only Restraint Coping (with a positive weight) and Use of Substances (with a negative weight) entered the function; R was .453 and R2 was .205.

Discussion
The present results suggest, as expected, that problem- or emotion-focused coping is related to low levels of MPA, whereas dysfunctional coping (like use of drugs) is related to high levels of MPA. Although preliminary and still correlational, these results speculatively suggest important pedagogical and preventive implications.

Key words: Music performance anxiety, COPE, Stage fright

[email protected]

8.7 Perception of structural boundaries in popular music

Michael Bruderer1, Armin Kohlrausch2, Martin McKinney2

1Human-Technology Interaction, Technische Universiteit Eindhoven, The Netherlands
2Philips Research Laboratories Eindhoven, The Netherlands

Music is often structured into sections and phrases, which listeners perceive through various cues. Previous studies on the perception of musical sectioning have examined standard music-theoretical cues, such as those described in Lerdahl and Jackendoff's "A Generative Theory of Tonal Music" (1983), and have dealt mainly with Western classical monophonic excerpts.

There is, however, still insufficient understanding of the cues listeners actually use, what their relative saliences are, and whether they also apply to the perception of section and phrase boundaries in popular polyphonic music. Furthermore, it is not clear how consistent listeners are in their perception of section and phrase boundaries.

The aim of this study is to examine whether listeners with various amounts of musical training show consistent perception of phrase and section structure in Western popular music. A second goal is to examine whether the theory-based rules of Lerdahl and Jackendoff correspond with phrase- and section-boundary perception for subjects listening to popular music.

Our experiment had two parts. In the first part, twenty listeners were asked to segment six popular songs into phrases, sections, or passages by pressing a key while listening to the music. This procedure was repeated three times for each song by each listener. The key presses were analyzed over all subjects, and a histogram was computed to indicate the number of subjects who placed a boundary at a specific point in time. Based on this histogram, a number of congruent boundaries were chosen from each song. In the second part of the experiment, listeners were asked to rate these chosen boundaries for their salience and to describe what aspects of the music signal were essential for the perception of each specific boundary.
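The pooling of key presses into a time histogram and the selection of congruent boundaries can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual analysis code: the bin width, the one-vote-per-bin rule, and the agreement threshold are all assumptions.

```python
from collections import Counter

def boundary_histogram(presses_per_subject, bin_sec=2.0):
    """Pool boundary key-press times (in seconds) from all subjects
    into a coarse time histogram: bin index -> number of subjects."""
    counts = Counter()
    for presses in presses_per_subject:
        # each subject contributes at most one vote per time bin
        for b in {int(t // bin_sec) for t in presses}:
            counts[b] += 1
    return counts

def congruent_boundaries(counts, n_subjects, min_share=0.8):
    """Keep bins that at least `min_share` of the subjects marked."""
    need = min_share * n_subjects
    return sorted(b for b, c in counts.items() if c >= need)

# three subjects marking boundaries in one song (times in seconds)
hist = boundary_histogram([[10.1, 30.4], [10.9, 30.2], [10.5, 55.0]])
print(congruent_boundaries(hist, n_subjects=3))  # bins all three agreed on
```

Bins that survive the threshold would then correspond to the boundaries played back to listeners in the second part for salience ratings.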

The results of the first part show that, for each song, a number of boundaries were indicated by all subjects, i.e. there is consistency between subjects about prominent boundaries. At present, we are comparing conceptual boundaries based on Lerdahl and Jackendoff with perceptual boundaries measured in our experiment, and are analyzing the relation between the frequency of congruent boundary indications and rated salience. These results will be reported at the ICMPC.

Key words: Perception, Cognition, Music structure

[email protected]

8.8 The role of emotions in music creativity

Jan Cedervall

Scandic Medialab, Sweden

Claims have been made to the effect that the emotional content of a piece of music can reveal something about the composer's emotional state during the creative process. This claim is our background puzzle. It may seem quite reasonable if we limit our thinking to some improvisations, but it is perhaps too strong a claim for more elaborate compositions. Perhaps the connection between the emotional state of the creator and the emotional content of the music is not as direct and simple as one might first suspect. Nevertheless, to have good credibility, a claim such as the one discussed ought to be accompanied by an account of the relationships between emotional processes and creative processes.

The aims of this philosophical study have been to give possible answers to questions like: can we find an influence from the state of emotion on the piece of music created? If so, in what way do emotional processes then affect creative processes? What are the relationships between emotions and creativity? Why do emotions affect creativity? Can one create a musical culture without emotions?

The main contribution is possible answers and explanations to the questions above, although the answers are somewhat incomplete. The relation between emotion and creativity is manifold. Emotions not only have a direct effect on creativity, but also have an effect in building the musical experiences that constitute the building blocks on which musical creativity rests. Both the more direct effect on creativity and the effect on musical experience receive possible explanations.

If the answers point us in the right direction, we will understand the creative process better. By picturing the functional role of emotions we may achieve insights into the emotional process itself. In the longer run we may also better understand the relationship between composer and composition. The limits of artificial, emotionless creativity will also appear clearer, which in turn may lead to better and more realistic computational models of musical creativity.

Key words: Creativity, Induction, Emotion

[email protected]

8.9 Analysis of stylistic rules of folk music from the standpoint of the rules of excitement

Dalia Cohen1, Yossi Saporta2

1Hebrew University of Jerusalem, Israel
2Bar-Ilan University, Israel

In the present paper, we define styles of folk music, comparing Israeli songs with Western folk songs. The stylistic rules, formulated by computer, are characterized mainly in very general terms of principles of excitement versus calm.

Many attempts have been made to characterize folk song, including Israeli songs, by the prevalence of certain phenomena, but without relating to the meaning of the rules.

Aims
To examine the styles in light of their stylistic ideals; to deepen our understanding of the relations between the principles governing the rules, the ideal, and cognitive constraints; and to develop a relevant and efficient formulation of the characteristics of style.

Assumptions

• Styles vary in terms of the types of experiences that they evoke, in keeping with aesthetic ideals.

• The principles of organization can be expressed by means of "learned" and "natural" schemata (the latter are familiar to us from outside music as well).

• There is a universal relationship between the principles governing the rules and the types of experiences that represent the ideals.

The principles from which we selected specific realizations:

• Deviations from normative ranges of occurrence

• Curves of change at all levels regarding pitch

• Clarity of the tonality

• Types of meter and rhythm

• Types of forms and structures

Israeli song. Israeli song is relatively new: we know who the composers were, it is written in notation, it is still being composed, and it is packed with declared ideology and emotion.

The material examined. Four groups of folk songs: "classical" Israeli songs; Western folk songs; and two selections of songs by two of the most prominent later Israeli composers. The four groups were compared with one another and with other scholars' findings regarding other repertoires.

Results
The findings indicate that there are significant and interesting differences and similarities among the groups, that there is some "competition" among schemata, and that the Israeli songs can be considered very exciting. We learned about relevant stylistic rules and how to formulate them. These findings reinforce the assumptions concerning the significance of the schemata in the various styles.

Key words: Stylistic rules and ideals, Natural schemata

[email protected]

8.10 Memory for lyrics

Marco Costa, Leonardo Corazza, Pio Enrico Ricci Bitti

Department of Psychology, University of Bologna, Italy

Background
How much do we remember of song lyrics? By far, the human voice is present in the largest part of the music that is sold and listened to; purely instrumental music occupies a very small market sector. Furthermore, examining the Italian charts from 1970 to 2005, 74% of the music sold is actually sung in English rather than Italian, and many listeners do not have a good knowledge of English. Not directly understanding the lyric, therefore, seems not to affect the aesthetic and emotional value of music. Many examples can be found in the history of music in which the meaning of what is sung is not comprehensible, because of the co-occurrence of concomitant voices, as in counterpoint, or because the singer, through vocal ornaments, underlines musical aspects that conflict with understanding the lyric.

Aims
The aim of the study was to verify the number of lyric words of an unpublished song that are remembered at a short interval after it has been listened to.

Method
Participants were 30 university students (16 males and 14 females; mean age 21) without any systematic musical training. Participants listened to two unpublished songs, one with a complex arrangement (four instruments) and one with a simple arrangement (one instrument). The control condition was a recited lyric (without musical accompaniment) of an unpublished song. The presentation order of the two experimental conditions and of the control condition was counterbalanced within subjects. Participants were told that the study focused on aesthetic judgments of unpublished songs and were not aware of the aims. At the end, each participant was asked to evaluate the pleasantness of each song and to report as many words as they remembered of the two songs and of the recited lyric.

Results
Memory scores in the three conditions were significantly different. In the control condition participants correctly reported 12.05% of the lyric. In the experimental condition with simple arrangement the memory score was 4.02% of the lyric, whereas in the experimental condition with complex arrangement the memory score was 3.86%. Neither the complexity of the musical arrangement nor the rated pleasantness of the songs influenced the number of remembered words (r = 0.24).

Conclusions
Memory for lyrics is very poor, even when assessed at a short interval after listening. Whereas the presence of the human voice is very important in determining the pleasantness of a song, the content of what is actually sung seems much less critical. Apart from a few exceptions, lyrics have a poetry-like style, semantically evocative and syntactically loosely structured, which hinders memorization of the text. The results of this study support the hypothesis that much of the evocative power of music originates from vocal affective prosody, and that the semantic counterpart of language is not as important as vocal-prosodic properties.

Key words: Lyrics, Memory, Voice

[email protected]

8.11 Expressiveness in music: The movements of the performer and their effects on the listener

Eugenia Costa-Giomi1, Charlene Ryan2, Marcello Wanderly2

1Center for Music Learning, University of Texas at Austin, USA
2School of Music, McGill University, Canada

Musicians' careers depend largely on the formal and informal evaluations they receive when performing. Research has shown that the evaluation of performance quality is affected by a variety of extraneous factors such as the attractiveness, sex, and race of the performer, and the time of day and order of the performance, for example. The purpose of this investigation was to study the effects of an important visual component of musical performance: the movements of the performer. Performers engage in expressive movements even when asked to avoid them. Although it is known that the audience interprets the movements, it is unclear how they affect listeners' perception of musical expressiveness and preference.

Four clarinetists were recorded playing a 40-second excerpt of a Stravinsky piece, then asked to play it again without moving, and yet again exaggerating their expressive intentions. Musicians (n = 39) were presented with pairs of the audio or audiovisual recordings of the performances and asked to choose the most expressive as well as the most preferred rendition of the piece in each pair. The performances within each pair were always played by the same performer, and all possible combinations of performances were used. The repetition of selected performance pairs made it possible to assess the reliability of the listeners.

The results showed that the movements of the performers magnified the expressiveness of their playing and that the visual information allowed listeners to discriminate better between performances. However, exaggerated movements often produced a decrease in listeners' preference for the musical interpretation, and, for certain performers, judges actually preferred the rendition played without movements. As in previous studies, it is evident that listeners have great difficulty in evaluating performances reliably.

Key words: Musical expressiveness, Movement, Performance evaluation

[email protected]

8.12 Markov processes and computer-aided music composition in real time

Domenico De Simone

Conservatory Santa Cecilia of Rome, Italy

Last year, as part of research on algorithmic composition developed with Maestro Giorgio Nottoli at the Conservatory of Music "Santa Cecilia" in Rome, an interactive methodology for the real-time use of Markov processes was perfected. A composition for euphonium and live electronics entitled "BI[OS]" derives from it and employs "dynamic" Markov matrices: "dynamic" in the sense that the matrices undergo, in real time, continuous consequential mutations through the processing of data from the live input part (pitch, loudness, etc.). These data are given a "weight" that can evolve one or more of the work's initial matrices in a direction directly connected to what the instrumentalist performs, which in turn can modify the performer's own choices of execution, making the human-computer interaction bi-directional. The computer returns the sound of the euphonium, but with some differences due to the action of the Markov matrices. To obtain this result, original C++ software was developed. The paper describes in detail the various phases of this research, and during the presentation I would like to offer some sound examples.
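The core mechanism of a transition matrix that mutates under live input can be sketched as follows. This is a hypothetical Python sketch of the idea described above, not the author's C++ software; the state set, the uniform initialization, and the weighted update rule are illustrative assumptions.

```python
import random

class DynamicMarkovChain:
    """First-order Markov chain whose transition matrix drifts toward
    the transitions the live performer actually plays (a sketch of the
    'dynamic matrix' idea; the update rule is an assumption)."""

    def __init__(self, states, weight=0.1):
        self.weight = weight  # how strongly live input reshapes a row
        n = len(states)
        # start from a uniform transition matrix
        self.matrix = {s: {t: 1.0 / n for t in states} for s in states}

    def observe(self, prev, nxt):
        """Nudge the row for `prev` toward the transition just heard.
        Scaling the row by (1 - w) and adding w keeps it normalized."""
        row = self.matrix[prev]
        for t in row:
            row[t] *= 1.0 - self.weight
        row[nxt] += self.weight

    def step(self, current, rng=random):
        """Sample the next state from the current (evolving) row."""
        r, acc = rng.random(), 0.0
        for t, p in self.matrix[current].items():
            acc += p
            if r < acc:
                return t
        return t  # numerical safety net

chain = DynamicMarkovChain(["C4", "E4", "G4"], weight=0.5)
chain.observe("C4", "G4")  # the performer just played C4 -> G4
print(chain.step("C4"))    # generation is now biased toward G4 after C4
```

In such a scheme, each analysed live event (pitch, loudness, etc.) would call `observe`, while the electronics part repeatedly calls `step`, which is what makes the interaction bi-directional.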

Key words: Markov processes, Computer aided music composition, Real-time

[email protected]

8.13 Online measurement of emotional musical experiences using internet-based methods - an exploratory approach

Hauke Egermann1, Frederik Nagel2, Reinhard Kopiez3, Eckart Altenmüller2

1University of Music and Drama, Hanover, Germany
2Institute of Music Physiology and Musicians' Medicine, Germany
3Institute of Research in Music Education, Germany

Background
Music is able to induce strong emotions, so-called chills (Sloboda, 1990; Grewe et al., submitted a). The continuous measurement of emotions induced by music has been established as a standard method (Schubert, 1999). However, in a previous lab study conducted by our research group using the EMuJoy software for the continuous rating of emotions (Nagel et al., in press), chills turned out to be rare events. Additionally, because of a small sample size, ratings varied highly between subjects. Thus, due to its high potential to increase sample size with moderate effort, web experimenting could be a solution (Reips, 2002).

Aims
Whether or not the findings of Nagel et al. (in press) are replicable is the leading question of this study. As an initial step, the web-based version of the EMuJoy software (called ESeRNet) must therefore be pre-tested. The adequacy of inducing emotions through web-based music transfer will be investigated, as well as the reliability of the measurement and data transfer of online research via the Internet.

Method
Subjects/Stimuli: 30 participants (musicians and non-musicians) took part in the research and listened to 7 pieces of music from a previous study (Grewe et al., submitted b). To assess test-retest reliability, subjects participated in the study twice, with a break of two weeks in between.

Materials: For the continuous measurement of emotions, based on the two-dimensional emotion space of Russell (1980), the Java applet ESeRNet has been embedded in a website (HTML format). Ratings of felt emotions are given on the dimensions of valence and arousal by moving the computer mouse, and chills are reported by pressing the mouse button.
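A minimal sketch of such a mouse-to-emotion-space mapping is given below. The coordinate scheme and sample format are assumptions for illustration, not taken from the actual ESeRNet/EMuJoy implementation.

```python
def to_emotion_series(samples, width, height):
    """Map raw (x, y, button) mouse samples to (valence, arousal, chill)
    triples, with valence and arousal scaled to [-1, 1].
    Assumes x grows rightward (valence) and y grows downward, so the
    arousal axis is flipped: top of the rating area = high arousal."""
    series = []
    for x, y, pressed in samples:
        valence = 2.0 * x / width - 1.0
        arousal = 1.0 - 2.0 * y / height
        series.append((valence, arousal, bool(pressed)))
    return series

# two samples on a 100x100 rating area; the second includes a chill report
print(to_emotion_series([(0, 0, 0), (100, 50, 1)], width=100, height=100))
```

Sampling such triples at a fixed rate during playback yields the continuous valence/arousal time series that is then compared across subjects and sessions.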

Procedure: The online questionnaire contains four parts: a. Instructions (information about the background/technical requirements); b. Warm-up (rating practice using the ESeRNet software); c. Ratings of felt emotions for 7 pieces of music; d. Personal information (questions concerning musical education, a short personality inventory, etc.).

Results
Preliminary results of the pre-testing are being prepared and will be available in April.

Conclusions
The pre-test has shown that reliable measurement of emotions induced by auditory stimulation via the Internet is a promising approach. This could lead to deeper insight into the emotions experienced while listening to music as part of our everyday life.

Key words: Web experiment, Emotion, Internet research

[email protected]

8.14 The role of emotions for musical long-term memory

Susann Eschrich1, Thomas Münte2, Eckart Altenmüller1

1Institute of Music Physiology and Musicians' Medicine, Hanover University of Music and Drama, Germany
2Department of Neuropsychology, Magdeburg University, Germany

Background
Music can elicit strong emotions (Krumhansl, 1997; Panksepp & Bernatzky, 2002) and can be remembered, possibly in connection with these emotions, even years later. However, episodic memory for music rated as highly emotional, compared to less emotional music, has not yet been examined. Episodic memory is defined as the kind of memory which allows one to remember past events of one's own life (Tulving, 1985).

Aims
In this study we investigated whether music rated as emotional is retained better in episodic long-term memory than music rated as less emotional, and examined the influence of musical structure on memory for music.

Method
Twenty nonmusicians participated in this study. As stimuli, 104 pieces of symphonic film music were cut down to 20-30 s in length (first session) and 10 s in length (second session). In a pre-assessment, these pieces were categorized by valence and arousal ratings as well as musical structure. Subjects were divided into two groups. In the first session, both groups listened to 52 target pieces. The emotion group rated the valence and arousal induced in them by each piece on a five-point scale according to the two-dimensional valence-arousal model of emotions by Russell (1980). The detection group estimated the length of each piece. In a second session one week later, recognition of the target pieces was tested. Both groups had to rate valence and arousal.

Results
In both groups, music pieces rated with extreme (low and high) arousal and valence were recognized significantly better than pieces rated with moderate arousal and valence. The emotion group remembered better than the detection group, although emotional ratings did not differ between the two groups. Pieces with moderate to fast tempo, moderate to high loudness and several repetitions of the motive were remembered significantly better.

Conclusions
Music pieces rated as emotional are retained better in memory. Valence and arousal seem to be important for episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval. Moderate to fast tempo, moderate to high loudness and repetition of motives seem to support musical memory formation.

Key words: Episodic memory, Musical memory, Emotions

[email protected]

8.15 Children’s improvisations: The development of musical language

Costa-Giomi Eugenia1, Taylor Don2, Hoplaros George1

1Center for Music Learning, University of Texas at Austin, USA
2School of Music, University of North Texas, USA

Research on young children’s spontaneous vocal improvisations and older children’s instrumental improvisations shows clear developmental changes in their understanding of the musical language. Unfortunately, no studies have focused on children’s vocal improvisations throughout the elementary years. This study investigated the vocal improvisations of children ages 5-10 (n = 33). The children participated in singing activities at school but did not read music. They were given a group lesson on improvisation, after which each student was asked to sing Happy Birthday and then make up songs while looking at the music scores of three unfamiliar children’s tunes. Children rehearsed the lyrics and then recorded the performance of their improvised songs. The performances were transcribed and analyzed in terms of the use of melodic and rhythmic patterns, cadence, musical form, contour, range, pitch and rhythmic variety and repetition, tonal stability, meter, and integration of music and text. The developmental trends found in children’s use of the musical language are discussed.

Key words: Improvisation, Children, Singing

[email protected]

8.16 The sound of suspense: An analysis of music in Alfred Hitchcock films

Nathan Fink


86 Poster Session I

University of Texas at Austin, USA

Music plays a very important role in cinema. In many films by the acclaimed director Alfred Hitchcock, the soundtrack is as important as the film itself. Often called the “Master of Suspense”, Hitchcock is known for creating scenes that keep the audience on the edge of their seats. This project investigated the specific musical techniques used to evoke a feeling of suspense in films by Alfred Hitchcock. Six films that represent a 30-year span of Hitchcock’s career were used in the analysis. Scenes that contained moments of suspense were identified and the music for each of these scenes was analyzed (N=13). The analysis indicates that silence was used most often during moments of suspense in Alfred Hitchcock films. Silence was represented in various forms: abrupt silence within a musical soundtrack, total lack of a musical soundtrack, and complete silence. Findings support Weis’ (1982) argument for the importance of silence as both a formal and thematic element throughout Hitchcock’s films. Results also suggest that silence is effective in creating a feeling of suspense.

Key words: Film music

[email protected]

8.17 Using modern popular songs: Enhancement of emotional perception when developing sociocultural awareness of foreign language students

Anna Fomina

Kharkiv National Pedagogic University, Ukraine

Background
Traditionally, popular songs have been extensively used in teaching foreign languages as a means of raising students’ motivation to study the language, lowering the anxiety level and providing easier and better memorization of new vocabulary items, grammar structures and pronunciation patterns. Recent tendencies in foreign language teaching methodology require the development of not only linguistic but also sociocultural competence. Besides being valuable material for language teaching, specially selected songs are “culture capsules” that carry information about people’s mentality and ways of life. This suggests the use of songs for teaching foreign cultures. A review of publications on teaching methodology shows that the potential of using popular songs for raising sociocultural awareness has been underexploited.

Aims
This study investigates the didactic potential of employing modern popular songs to develop students’ sociocultural awareness, the advantages of this material over traditionally used texts, and the challenges of using songs in class.

Method
The experimental teaching was conducted with 104 participants - 7 groups of 3rd-year students of the Foreign Languages Department, Kharkiv National Pedagogic University, who took the course “Language and Culture”. Different groups worked with various authentic British and American texts on the same topics: experimental groups decoded sociocultural information from selected modern popular songs, while control groups analyzed traditionally used teaching materials.

Results
Modern popular songs proved to be more effective in raising students’ cross-cultural awareness than traditionally used texts. They are a powerful motivator for sociocultural search and analysis, activate students’ emotional perception of the material, and provoke group discussions, thus leading to a deeper understanding of cultural peculiarities and better recall. Furthermore, the results of pre- and post-teaching questionnaires show a significant change in the students’ attitudes towards the use of song material in class, from initially considering it appealing but not appropriate for the university setting (89%) to acknowledging numerous didactic advantages of this material (99%) after the experimental teaching.

Conclusions
The study shows that modern popular songs can be effectively applied to teaching not only foreign languages but also such complex areas as foreign cultures.

Key words: Modern popular songs, Teaching foreign languages, Sociocultural awareness

[email protected]

8.18 Time perception of unimodal or crossmodal auditory-visual events in normal subjects: Effects of aging and of musical education

Francesca Frassinetti, Tamara Gattus, Mariagrazia Benassi

Department of Psychology, University of Bologna, Italy

Previous studies have shown an enhancement of visual perception by crossmodal auditory-visual interaction. One of the characteristics of a visual or an auditory event is its duration. The ability to estimate time has been shown to be a fundamental component of cognition, and brain mechanisms specialized for the encoding of stimulus duration have recently been identified. The first aim of the present study was to verify whether the ability to time short intervals is more accurate when it is based on multiple sources of information, derived from different modalities, than when it is based on only one source of visual or auditory information. Moreover, the effects of normal aging and of musical education on such processes were studied. Eighteen normal volunteers (6 elderly without musical education and 12 young, 6 without and 6 with musical education) participated in two experiments. Both experiments assessed discrimination of unimodal auditory, unimodal visual and crossmodal auditory-visual signal durations ranging from 7 to 63 s. In Experiment 1 a duration bisection task was used, whereas in Experiment 2 a time production task was used. In Experiment 1 the three groups over-estimated the duration of stimuli independently of the modality. Moreover, young participants with musical education were more accurate, in each modality, than the other two groups. Young participants without musical education were less accurate with unimodal visual and auditory stimuli than with crossmodal stimuli, whereas aged participants were less accurate with unimodal auditory and with crossmodal stimuli than with visual stimuli. In Experiment 2 the three groups produced intervals shorter than their objective duration, thus again showing an overestimation of the duration of the stimuli: the longer the duration of the interval, the shorter the produced interval. Young participants without musical education were less accurate than the other two groups, especially with unimodal stimuli, whereas their performance improved with crossmodal stimuli. The results show an over-estimation of auditory and visual signal durations and, more interestingly, an enhancement of the estimation of crossmodal signal durations when the estimation of unimodal signal durations is less accurate. Normal aging and musical education influence these processes.

Key words: Time duration, Crossmodal auditory-visual perception, Musical education

[email protected]

8.19 How far is music universal? An intercultural comparison

Tom Fritz, Daniela Sammler, Stefan Koelsch

Max Planck Institute for Cognitive and Neural Sciences, Germany

A battery of behavioural music experiments was conducted with individuals who had never before been exposed to music of the Western tonal system, in order to distinguish aspects of music that are universally recognized from those that are recognized only in a certain cultural environment. Their performance was compared with that of a Western control group. The test battery included experiments investigating three major fields of music research: the comprehension of (1) chord functions (“music syntax”), (2) meaning associated with music (“music and semantics”), and (3) emotion in music.

Key words: Music ethnology

[email protected]

8.20 The effect of music on subjective emotional state and psychophysiological response in singers

Dario Galati, Tommaso Costa, Elena Rognoni, Anna Pisterzi

University of Turin, Italy

The present study addressed the topic of emotion in music, examining the effect of the emotional content of a melody on the singer’s affective experience. The main goal was to investigate the subjective emotional experience and the psychophysiological variation during the interpretation of different melodies characterized by specific mode, tempo and pitch. According to canons of musicology and past research, the melodies were composed in order to elicit positive and negative emotions. The hypothesis was that the subjective experience of singers who were unaware of the target emotional content of the melodies would be influenced by, and in agreement with, the pleasantness or unpleasantness of the music. Furthermore, the skin conductance level was expected to change specifically with the arousal and pleasantness level of the emotion. We used 10 unpublished melodies (5 positive and 5 negative) written by different professional composers, and 32 professional singers participated in the experiments. The singers, matched for age and sex, were divided equally among the vocal ranges (bass, tenor, contralto, soprano). Results showed that the singers’ self-reported emotional experience was coherent with the affective content of the melodies, indicating that the emotion of the music was transmitted to the singers. The physiological responses exhibited significant variation during both positive and negative musical interpretation. This preliminary study empirically supports the ability of music to influence the emotional state, even when the performer starts from a different or neutral emotional state. Further studies could clarify the role of prosody and of the intrinsic characteristics of the score in emotional responses to music, as well as the emotional transmission to listeners.

Key words: Music, Emotion, Singer

[email protected]

8.21 The evolution of musical styles in a society of software agents

Marcelo Gimenes, Eduardo Miranda, Chris Johnson

University of Plymouth, UK

Background
A growing number of researchers, including D. Horowitz, E.R. Miranda and F. Pachet, are developing computer models to study cultural evolution, including musical evolution. This musicological trend consists of studying music through computational modelling and simulation.

Aims
We propose a computer model that simulates artificial societies which evolve different musical styles through processes of musical interaction. We aim to understand music evolution as a social phenomenon arising from interactions among multiple agents.

Method
We developed a system inspired by real-life situations, especially the ability of human beings to influence each other through the transmission of small musical elements (memes), a paradigm taken from Richard Dawkins’s theory of memes. At the beginning of a given simulation, a number of agents are created. Musical interactions happen whenever these software agents perform any sort of musical activity, such as listening to, practicing or composing pieces of music.

Results
We verified the rationale of the system through a first prototype, implemented with a focus mainly on rhythm analysis and production. Agents were able to evolve their own rhythmic worldview through a number of interactions with different compositions in a controlled environment. In the current version of the system, while maintaining the meme paradigm, we deal with more complex musical structures including pitch, vertical structures and intensity information taken from improvised piano music. Music style evolution can be observed through the analysis of the weights that different musical memes exhibit during the agent’s learning and production stages. Weights represent the relative importance of the memes in the agent’s internal states (“style matrices”).


Conclusions
Experiments are being carried out with different data sources across musical genres and styles. Through these simulations we have been able to demonstrate how exposure to different rhythmic material can ultimately shape the “musical knowledge” of an agent.

Key words: Music evolution, Computational model, Memetics

[email protected]

8.22 Unobtrusive practise tools for pianists

Werner Goebl1, Gerhard Widmer2

1Austrian Research Institute for Artificial Intelligence, Austria
2Department of Computational Perception, University of Linz, Austria

Research on music expression has advanced considerably over the past years, and many computational approaches to music expression have been developed. Still, this knowledge has only rarely found its way into music practice; useful and intuitive applications that musicians are able and willing to use in everyday practice are rare. In order to bridge the gap between research and practice, we propose an intelligent visualisation system that displays performance information in real time while a pianist is practising. It shows the expressive content of certain well-defined features of piano performance, such as the synchronization of chords or the regularity of recurring tone patterns. The most important property of our system is that it contains certain knowledge about what to display, and it then automatically decides what to display and how. The musician does not have to deliberately press any buttons or drag any sliders to get what he or she wants to see. We argue that computational tools can only be sensibly used in practice at the instrument when they are unobtrusive in nature, that is, when the user does not have to switch repeatedly between piano keyboard and mouse or touch pad, each time losing the feel for the instrument. In its current state, the system comprises (1) a chord display that shows onset synchronization and tone balance of chords, (2) a pattern display that shows timing and dynamics deviations of repeated pitch patterns, and (3) a comprehensive piano-roll display that visually models the acoustic decay of the tones and their acoustic behaviour when the pedals are pressed. It processes symbolic information from computer-monitored instruments (MIDI). To identify tone patterns, it applies an autocorrelation routine to the relative pitch information. To display timing deviations, it compares the current note onset with an onset expectation derived from a simple beat-tracking algorithm. First qualitative evaluations with professional pianists confirm the usefulness of the proposed tools.
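The autocorrelation step for identifying repeated tone patterns can be illustrated with a small sketch. This is a hypothetical illustration, not the authors' implementation: the function name, the use of MIDI note numbers, and the argmax-over-lags strategy are all assumptions.

```python
def pattern_period(pitches, max_lag=None):
    """Estimate the repetition period of a pitch sequence (e.g. MIDI note
    numbers) by picking the lag with the highest autocorrelation."""
    n = len(pitches)
    if max_lag is None:
        max_lag = n // 2
    mean = sum(pitches) / n
    dev = [p - mean for p in pitches]
    denom = sum(d * d for d in dev) or 1.0
    best_lag, best_r = 1, float("-inf")
    for lag in range(1, max_lag + 1):
        # normalized correlation of the sequence with itself shifted by `lag`
        r = sum(dev[i] * dev[i + lag] for i in range(n - lag)) / denom
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# A four-note figure repeated eight times is detected as a period of 4:
print(pattern_period([60, 62, 64, 65] * 8))  # → 4
```

Once the period is known, each repetition of the pattern can be compared with the others (or with a beat-tracker's onset expectations) to visualise timing and dynamics deviations.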

Key words: Music practice, Music performance, Visualisation

[email protected]

8.23 What type of BGM is appropriate for a residential space? A study of the relation between “healing music” and “healing space”

Yasuhiro Goto


Faculty of Psychology and Applied Communication, School of Humanities, Hokusei Gakuen Uni-versity, Japan

An experiment was performed in order to investigate the influence of so-called “healing music” as BGM on the evaluation of a “healing space.”

In previous research, the nature of healing music was investigated. The results showed that a tune with a slow tempo, a limited number of timbres, and few changes of tempo and/or pitch was highly rated as healing music. For example, classical music in slow tempo was frequently regarded as healing music.

In this experiment, a “healing space” was actually set up. In a room of about five tatami mats, a white sofa (125 cm (W) x 76 cm (D) x 60 cm (H)), a green elliptic cotton rug (170 cm (major axis) x 120 cm (minor axis)), a yucca (a plant with beautiful leaves), and a wooden table (120 cm (W) x 50 cm (D) x 40 cm (H)) were placed. Moreover, a cushion was put on the sofa and a glass vase was placed on the table.

Two types of BGM were prepared: one had been rated by listeners who did not participate in this experiment as more effective for healing, the other as less effective. Participants were asked to enter the healing space described above and to judge which type of music was more appropriate for the space. They were also asked to rate whether the space felt “healing” or not.

The following results were obtained: 1) music that was more effective for healing was judged more appropriate for the healing space than music that was less effective; 2) the space was rated as more healing when the BGM was the more healing music rather than the less healing music; and 3) feelings expressed by the terms “calm”, “comfortable”, and/or “wide” were also highly rated when healing music was used.

Finally, how to design a healing space was discussed in terms of healing music as BGM.

Key words: BGM, Healing music, Spatial design

[email protected]

8.24 How the brain learns to like - an imaging study of music perception

Anders C. Green1, Peter Vuust3, Andreas Roepstorff3, Hans Stødkilde-Jørgensen2, Klaus B. Bærentsen1

1Department of Psychology, University of Aarhus, Denmark
2MR Research Centre, Aarhus University Hospital, Denmark
3Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Denmark

Background
Behavioural experiments have shown an interesting connection between how well one knows something and how much one likes it: the so-called mere exposure effect (1). This effect has also been found for music perception (2). However, the neural correlates of the phenomenon are not well understood.

Aims
The aim of the study is to find the neural correlates of the mere exposure effect for music perception. It is hypothesised that brain activity during melody perception will depend on how well known the melody is. The actual presentation frequency and the memory rating, as well as the associated brain activity pattern, will correlate with the subjective rating. For seldom-heard melodies, many brain regions should show activation, including higher-level auditory areas, association areas, and areas supporting a heightened attention function, primarily in frontal cortex. Well-known melodies should activate a smaller total extent of areas, since the cognitive burden will have lightened and processing will have become more automatic. Additional brain regions will probably appear for the perception of well-known melodies, e.g. memory areas such as the hippocampus.

Method
Stimuli: 36 specially composed, similar piano melodies of 15 seconds duration, which follow certain musicological guidelines. Keys used are Ab, C, and E; both major and minor modes are utilised. Subjects: 20 non-professional but musically interested right-handed adults, aged 20-35, with equal gender distribution.

Design
Learning phase: 172 presentations (total) of 18 different melodies in three groups with a presentation frequency of 2, 8, and 16 times, respectively; pseudo-randomised. Furthermore, 6 target melodies are used, which the subject must identify by a button press. Scan phase: 24 total presentations of different melodies in four groups: 18 from the learning phase, plus 6 new ones. The subjects’ task is to rate their liking of each melody immediately after its completion. Memory phase: 30 total presentations of different melodies in five groups: 24 from the scan phase, plus 6 new ones. The subjects’ task is to rate their certainty of having heard each melody before.

Scanning
GE MR scanner, TR = 3000 msec. Total length of scan 9 min. Block design; silence and a chromatic scale as baselines. Data analysis: preprocessing (realignment, normalisation, smoothing) using SPM2 (3). Parametric design, mixed-effects model.

Results and Conclusions
Study in progress; results to be presented in the poster.

References

1. Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9, 1-28.

2. Szpunar, K. K., Schellenberg, E. G., & Pliner, P. (2004). Liking and memory for musical stimuli as a function of exposure. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 370-381.

3. http://www.fil.ion.ucl.ac.uk/spm/

Key words: fMRI, Perception, Aesthetic response

[email protected]

8.25 Effects of identification and musical quality on emotional responses to music

Oliver Grewe1, Frederik Nagel1, Reinhard Kopiez2, Eckart Altenmüller1

1Institut für Musikphysiologie und Musikermedizin, Hochschule für Musik und Theater Hannover, Hannover, Germany
2Institut für Musikpädagogische Forschung, Hochschule für Musik und Theater Hannover, Germany


Background
Music can arouse extraordinarily strong emotional responses, up to ecstatic “chill” experiences defined as “goosepimples” and “shivers down the spine”.

Aims
The aim of this study was to examine the effects of identification with, and of the musical quality of, a musical interpretation on emotional reactions. Additionally, distinct musical structures were compared regarding the emotional responses they elicited.

Method
Two versions of the “Confutatis”, “Lacrimosa” and “Rex tremendae” from Mozart’s Requiem (KV 626), sung by two different amateur choirs, were used as stimuli. Additionally, we used the “Tuba mirum” and “Dies irae” in professional versions conducted by Karajan, which were played twice in each experimental session. We asked 52 participants (32 from the two choirs who performed the two versions of the Requiem, 20 from two control choirs) to give continuous self-reports of the intensity of their perceived emotional reactions. Physiological measurements (skin conductance response [SCR], heart rate [HR], breathing rate [BR]) were recorded and synchronized with the other data. After each piece, participants filled in questionnaires regarding their liking of the interpretation, their perceived emotions (on the dimensions valence and arousal, and by choosing adjectives from a list) and bodily reactions.

Results
We hypothesized that participants would show the strongest emotional reactions in response to their “own” version and interpretation of the Requiem. Preliminary results indicate, however, that the musical quality of the performances (as rated by the participants) has a stronger effect on emotional intensity and chill reactions. Several structural elements were found to be emotionally influential. Emotional reactions and chills could only partly be reproduced when the same stimulus was presented twice. Breathing rate had a strong influence on skin conductance response and heart rate.

Conclusions
Musical quality seems to have a stronger influence on emotional reactions than identification with a “personal musical interpretation”. Distinct structural elements show a relation to emotional reaction. The effect of breathing rate needs to be considered when using physiological measurements as indicators of emotional reactions. This work was supported by the DFG (grant no. AL 269-6) and the Center for Systemic Neurosciences Hannover.

Key words: Emotions, Music, Chills

[email protected]

8.26 The cognition of intended emotions for a drum performance: Differences and similarities between hearing-impaired people and people with normal hearing ability

Rumi Hiraga1, Akio Yamasaki2, Nobuko Kato3

1Bunkyo University, Japan
2Shoin Women’s College, Japan
3Tsukuba Institute of Technology, Japan


Background
Having taught computer music to hearing-impaired students for six years, we believe that the hearing-impaired have an interest in music. We therefore set our goal of creating an assistance system in which both the hearing-impaired (HI) and people with normal hearing ability (NH) can play instruments in an ensemble style and communicate emotions through drum performances.

Aims
The goal was to better understand whether these two groups can recognize each other’s basic emotions through drum performances. HIs and NHs listened to performances by HIs, by musically untrained NHs (amateurs), and by professional drummers. The effects of the type of player and listener on the cognition of emotions were then analyzed.

Method
We used drum performances with the intended emotion of joy, sadness, fear, or anger, played by HIs (n=11), amateurs (n=5), and professionals (n=2). Each performance was about 15 seconds long. NH (n=33) and HI subjects (n=10 and n=15 for those who listened to performances by HIs and by the other two types of performers, respectively) listened to the performances and indicated the emotion they felt during each performance.

Results
We used a pair of two-way ANOVAs on the correct rate of recognition of the intended emotion. The factors were (1) emotional intention (4 levels) and hearing ability (2 levels), and (2) emotional intention and type of performer (3 levels). Hearing ability showed a significant difference (p<.05) in the cognition of emotion for the performances by HIs and professionals. HIs could not differentiate between the performances of the different types of performer, while NHs could (p<.05). However, no significant difference between HIs and amateurs was found in a multiple comparison test. The correct rate for the four emotions differed significantly (p<.05) across performer and listener types.

Conclusions
The results indicate that both HIs and NHs understand, at a basic level, performances by HIs and amateurs. This implies that there can be a common representation and understanding of emotion in natural rhythm. In our planned system, we will use performance visualization to remove the significant differences in the correct rate of emotion recognition.

Key words: Emotion, Hearing-impairment, Performance

[email protected]

8.27 Compactness and Convexity as models for the preferred intonation of chords

Aline Honingh

Institute for Logic, Language and Computation, Amsterdam, The Netherlands

Background


Several theories of intonation have been developed (Helmholtz 1863; Euler 1739; Terhardt 1984; Plomp and Levelt 1965; Sethares 1993), but there is still no single, unambiguous theory that tells us how to tune notes and chords when playing music.

Aims

In this paper we focus on the tuning of chords. From the result that all diatonic chords are convex in the Euler lattice representing note names (Honingh and Bod 2005), we hypothesize that these convex sets represent the preferred intonation of these chords in the Euler lattice representing frequency ratios. To test this hypothesis, we use Euler’s Gradus Suavitatis as consonance measure, since it is applicable to all types of sounds and takes frequency ratios as input.

Method

We consider chords as sets of points in the 3-dimensional Euler lattice. Each note name can be represented by multiple positions in the lattice. The syntonic comma (81/80) is the factor that changes the frequency ratio but keeps the note name constant. Multiplying a frequency ratio by this factor means shifting a point in the lattice over a fixed Euclidean distance. A chord consisting of note names therefore has more than one representation in terms of frequency ratios, and the problem is to find the most consonant representation. We created a Mathematica program that varies the coordinates of all elements of a chord such that the compactness and convexity are calculated for all possible representations of the chord.

Results

For existing chords (taken from Piston and DeVoto 1989), the correlation between the most consonant and the most compact chords is 100 percent, and the correlation between the most consonant and convex chords is 94 percent. If virtual chords representing all possible combinations of elements in the Euler lattice are also taken into account, the average result is a correlation of 85 percent between the most consonant and the most compact chords, depending on the number of notes in a chord; the relation to convexity is work in progress.

Conclusions

The compactness-convexity model presents an intuitive view of the consonance of chords, and can be used as a model to quickly find the preferred tuning of a chord. The model matches the performance of Euler’s Gradus function while having the advantage of being simple to use.
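To illustrate the consonance measure used here: Euler's Gradus Suavitatis of an interval p:q (with p, q coprime) is obtained from the prime factorization of p·q, and the syntonic comma shifts a ratio without changing its note name. The sketch below is an illustration of these two standard definitions, not the authors' Mathematica program.

```python
from fractions import Fraction
from math import gcd

def gradus(p, q):
    """Euler's gradus suavitatis of the interval p:q (lower = more consonant).
    For n = p*q with p, q coprime, it is 1 + sum of (f - 1) over the prime
    factors f of n, counted with multiplicity."""
    g = gcd(p, q)
    n = (p // g) * (q // g)
    total, d = 1, 2
    while n > 1:
        while n % d == 0:
            total += d - 1
            n //= d
        d += 1
    return total

SYNTONIC_COMMA = Fraction(81, 80)  # changes the frequency ratio, not the note name

# The just fifth 3:2 is highly consonant; shifting it by a syntonic comma
# yields 243:160, a representation of the "same" note pair that is far less
# consonant -- which is why choosing among comma-shifted representations matters.
print(gradus(3, 2))                        # → 4
shifted = Fraction(3, 2) * SYNTONIC_COMMA  # Fraction(243, 160)
print(gradus(shifted.numerator, shifted.denominator))  # → 20
```

In the compactness model, the comma-shifted representations of a chord correspond to lattice points a fixed Euclidean distance apart, and the most compact configuration tends to coincide with the representation minimizing such gradus values.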

Key words: Intonation, Convexity, Consonance

[email protected]

8.28 Affective characters of music and listeners’ emotional responses to music

Etsuko Hoshino

Ueno Gakuen University, Tokyo, Japan

Some researchers have proposed that listeners’ perception of emotional expression should be distinguished from listeners’ own emotional reactions (Nakamura, 1983, 1984; Gabrielsson, 2001, 2002). Is it possible to make an empirical distinction between emotion perception (perceiving the emotional character of music) and emotion induction (the listener’s emotional response to music)? Forty students majoring in music were asked to listen to phrases from two orchestral pieces. Half of them were instructed to choose, from twelve categories of emotion, the category that fitted the phrase as its emotional character, while the other half chose the emotion aroused by the phrase.

The results showed that, while it was hardly possible to discriminate between what characters those phrases had and what emotions they aroused, there were some differences between the two kinds of “music emotion”. For example, negative arousing emotions (e.g. tension, nervousness, hostility) were chosen more often by the “emotion induction” group than by the “emotion perception” group at the modulation points of each piece.

The same procedure was carried out with another forty students who did not major in music. There were also some differences in emotional responsiveness between musicians and non-musicians. For example, the responses chosen by non-musicians were scattered widely across the emotional categories, while musicians chose from a small circle of categories for the same musical phrase. As the listeners were asked to rate their mood or feelings before listening, the relationship between their own feelings and the results for the two kinds of “music emotion” was discussed from the therapeutic point of view that music can and does have an effect of “feeling adjustment”.

Key words: Emotion in music, Emotional responsiveness, Feeling adjustment of music

[email protected]

8.29 Agent-based melody generation model according to cognitive and bodily features: Toward composition of Japanese traditional pentatonic music

Yuriko Hoteida, Shintaro Suzuki, Takeshi Takenaka, Kanji Ueda

Research into Artifacts, Center for Engineering (RACE), University of Tokyo, Chiba, Japan

This paper is aimed at the generation of monophonic music by single-agent-based computer simulation using reinforcement learning. We focus on how melodic order emerges from cognitive and bodily features. The agent recognizes the pitches and metrical structure of the past four notes, decides the pitch and duration of the next note, and receives the value of its action as a reward. The model uses elements of gestalt theory (e.g. smoothness of pitch changes as good continuation) and bodily features (e.g. vocal constraints) as evaluation methods for the agent’s actions. Additionally, we set some conditions by combining evaluation methods. The agent gets a reward when it decides an action (successive reward) and when it reaches the regulation length of the music (total reward). In the model, the metre is 2/4 and the regulation length is four bars. The agent can move within a range of two octaves and has three choices of duration (half note, quarter note, eighth note).

We focus on Japanese traditional children's songs that consist of five notes (the Japanese minor pentatonic scale), because these songs are well classified according to Japanese mode theory. Asian traditional modal music has been well investigated and is known to have certain melodic rules.

The simulation results show some characteristics consistent with existing Japanese music. We conducted a psychological experiment in which participants evaluated the impression of melodies taken partly from the simulation results under several conditions and partly from existing melodies.
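As a concrete illustration of the kind of agent described above, here is a minimal sketch of a tabular reinforcement-learning melody generator. It is a simplification under stated assumptions: the state is reduced to the previous pitch only (the model uses the past four notes and metrical structure), and the reward weights, vocal-range band, learning rate, and episode count are invented for illustration, not taken from the abstract.

```python
import random

# Assumed setup: two octaves of semitones, three note values, 2/4 time, four bars.
PITCH_RANGE = range(0, 25)       # two octaves
DURATIONS = [0.5, 1.0, 2.0]      # eighth, quarter, half note (in beats)
BARS, BEATS_PER_BAR = 4, 2       # regulation length: four bars of 2/4

def reward(prev_pitch, pitch):
    """Gestalt-inspired successive reward: small pitch steps score higher
    (good continuation); pitches outside a crude vocal band are penalized."""
    step = abs(pitch - prev_pitch)
    r = max(0.0, 1.0 - step / 12.0)
    if pitch < 5 or pitch > 19:  # hypothetical vocal-range constraint
        r -= 0.5
    return r

def generate_melody(q, epsilon=0.1, rng=random):
    """One episode: pick (pitch, duration) actions until four bars are filled,
    updating a tabular value estimate q toward the immediate reward."""
    melody, beats, pitch = [], 0.0, 12  # start in mid-range
    total = BARS * BEATS_PER_BAR
    while beats < total:
        actions = [(p, d) for p in PITCH_RANGE for d in DURATIONS
                   if beats + d <= total]
        if rng.random() < epsilon:                     # explore
            action = rng.choice(actions)
        else:                                          # exploit learned values
            action = max(actions, key=lambda a: q.get((pitch, a), 0.0))
        key = (pitch, action)
        r = reward(pitch, action[0])
        q[key] = q.get(key, 0.0) + 0.1 * (r - q.get(key, 0.0))
        melody.append(action)
        pitch, beats = action[0], beats + action[1]
    return melody

q_table = {}
for _ in range(300):                                   # training episodes
    generate_melody(q_table)
melody = generate_melody(q_table, epsilon=0.0)         # greedy generation
```

With the greedy pass the learned table favours small steps inside the mid-range band, which is the "good continuation" behaviour the evaluation method rewards; the paper's total reward at the end of the piece is omitted here.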

Page 97: Abstract Book

Tuesday, August 22nd 2006 97

Key words: Reinforcement learning, Monophony generation

[email protected]

8.30 Fetal and neonatal musical brain processes

Minna Huotilainen

Helsinki Collegium for Advanced Studies; Cognitive Brain Research Unit, Department of Psychology, University of Helsinki; BioMag Laboratory, Helsinki University Central Hospital, Finland

The human fetus, during at least the last three months of intrauterine life, lives in a rich environment of sound sensations, associated with corresponding somatosensory and hormonal sensations, and possesses an auditory system capable of disentangling very subtle sound features. The sound environment is regulated by the mother: her speaking, singing, and listening habits, together with her voluntary and unconscious reactions to the sounds (dancing, playing, changes of mood, blood pressure, pulse, hormonal levels, etc.). During this period the human fetus has a very sensitive associative memory system, and thus frequent associations, for example a certain musical piece associated with physiological signs of a relaxed and positive mood, are effectively stored. Thus, the newborn infant cannot be considered a starting point or tabula rasa, but rather a still-developing individual already equipped with a set of auditory memories and associations available for regulating reactions to familiar sounds (Moon and Fifer, J Perinatol, 2000, 8). Event-related brain responses (event-related potentials, ERPs, recorded with electroencephalography, EEG, and their magnetic counterparts recorded with magnetoencephalography, MEG) offer a non-invasive, feasible, yet direct measure of the reactions evoked by musical sounds in fetuses and newborn infants (for a recent review, see Anastasiadis et al., Curr Ped Rev, 2005, 1). Recent MEG results from fetuses show that the short-term memory system is functional (Huotilainen et al., NeuroReport, 2005). ERP studies demonstrate the neonatal brain in action, streaming sounds into coherent percepts and learning actively and effectively from exposure (Winkler et al., PNAS, 2004; Cheour et al., Nature Neurosci, 2004). These results, obtained with the fetal MEG and neonatal MEG and ERP techniques and with behavioural methods, are changing our view of the developing human auditory and memory systems. More research is needed on the effect of exposure to music at different ages, since it is possible that the human auditory and memory systems could benefit from musical or other sound exposure in cases of, for example, problems in language acquisition.

Key words: Fetus, Brain, Exposure

[email protected]

8.31 A comparison of the effects of music and speech prosody on three dimensions of affective experience

Gabriela Ilie, William Forde Thompson

University of Toronto, Canada

98 Poster Session I

Background
Research suggests that music and speech prosody communicate emotions in similar ways (Ilie & Thompson, in press; Juslin & Laukka, 2003). Exposure to music can even induce changes in mood and arousal (Husain et al., 2002). In this study, we compared the capacity of music and speech prosody to induce changes in mood valence, energetic arousal, and tension arousal. Loudness, pace, and pitch height were manipulated in samples of music and speech. After exposing listeners to stimuli for eight minutes, affective consequences were assessed.

Aims
Our goal was to (a) confirm that exposure to auditory stimuli can influence mood valence, energetic arousal, and tense arousal; (b) determine the effects of manipulating loudness, pace, and pitch height on these affective dimensions; and (c) compare the effects in music and speech.

Method
Eight music and eight speech samples were manipulated in loudness (loud, soft), pace (fast, slow), and pitch height (high, low). Listeners heard one of the conditions for eight minutes and were assessed for changes in affective experience using the Profile of Mood States and direct measures.

Results
Manipulations influenced affective experience similarly in music and speech. Loudness influenced tension; pace influenced energy; and pitch height influenced mood valence. A few differences in the experiential effects of music and speech were also observed.

Conclusions
Exposure to acoustic qualities induced changes in affective experience similarly whether embedded in music or speech. Comparing these results to an earlier (perceptual) study, experiential effects appear to be more specific than perceptual effects. We propose that music and speech are associated with a common underlying mechanism for linking acoustic qualities with affective experience.

References

Ilie, G. & Thompson, W.F. (in press). A comparison of acoustic cues in music and speech for three dimensions of affect. Music Perception.

Husain, G., Thompson, W.F. & Schellenberg, E.G. (2002). Effects of musical tempo and mode on arousal, mood, and spatial abilities: Re-examination of the "Mozart effect". Music Perception, 20(2), 151-171.

Juslin, P.N. & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129, 770-814.

Key words: Affective experience, Mood and arousal, Music and speech

[email protected]

8.32 Influences of musical training and development on neurophysiological correlates of music and speech perception in children

Sebastian Jentschke, Stefan Koelsch

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Music and language are human universals involving perceptually discrete elements organized in hierarchically structured sequences. Principles (referred to as syntax) are needed to govern the combination of these structural elements into sequences. A violation of expectancies concerning syntactic regularities may be reflected in two ERP components: the ERAN (Early Right Anterior Negativity), which is evoked by a violation of musical regularities, and the ELAN (Early Left Anterior Negativity), which is linked to syntax processing in the language domain. Both ERAN and ELAN are early ERPs reflecting structure-building processes. The ERAN has been shown to be larger in adults with formal musical training than in those without, indicating that more specific representations of musical regularities lead to heightened musical expectancies. Moreover, there is evidence suggesting that both components are generated in comparable brain regions, especially the inferior fronto-lateral cortex. Therefore, it seems plausible to expect transfer effects due to shared processing resources. The aim of this study is to investigate these issues in child development. We conducted two experimental sessions with children of different age groups (7, 9, and 11 years old) who either had musical training or not. In the music experiment, chord sequences ending either with a (regular) tonic or with an (irregular) supertonic were presented, whereas in the language experiment syntactically correct and incorrect sentences were used. We expected the amplitude of the ERAN to vary with age and the amount of musical training. Furthermore, we hypothesized that the ELAN is present at an earlier age and is larger in amplitude in musicians due to transfer effects. Preliminary analyses indicate a difference between musicians and non-musicians (in the group of 11-year-olds) for the ERAN as well as for the ELAN. These differences are less obvious in the younger age groups (most presumably due to less musical training). The results indicate effects of musical training on structure-building processes in the music domain and a transfer to similar processes in the language domain.

Key words: Music, Language, ERP

[email protected]

8.33 A survey study of emotional reactions to music in everyday life

Patrik N. Juslin1, Simon Liljeström1, Petri Laukka1, Daniel Västfjäll2, Lars-Olov Lundqvist3

1Department of Psychology, Uppsala University, Sweden
2Department of Psychology, Göteborg University, Sweden
3Department of Behavioural, Legal, and Social Sciences, Örebro University, Sweden

Empirical studies suggest that people value music largely because of its ability to express and induce emotions. Yet little is known about the conditions under which listeners normally experience emotions to music in everyday life. The aim of this study was to investigate (a) the prevalence of different emotional responses to music, (b) possible psychological mechanisms, (c) specific emotional episodes featuring music, (d) listeners' personal strategies for inducing emotions with music, and (e) selected background variables. A self-administered questionnaire featuring 32 items was sent to a random sample of 1,500 Swedish citizens between the ages of 18 and 65. The results provide unique information about the conditions - including the music, the person, and the situation - under which a listener is most likely to respond emotionally to music, and offer clues to how music may serve different functions in different social contexts.

Key words: Music, Emotion, Everyday life

[email protected]

8.34 The effects of pre-existing moods on the emotional responses to music

Kari Kallinen, Timo Saari, Niklas Ravaja, Mikko Salminen

Knowledge Media Laboratory, Helsinki School of Economics, Finland

In the present paper, we examined the effects of autobiographically induced mood and music on emotional evaluations of, and psychophysiological responses to, music in 48 subjects. Participants listened to music after a mood induction. Both music and induction varied on the dimensions of valence (pleasant - unpleasant) and arousal (high - low). During mood induction and listening to music, psychophysiological responses were measured continuously to assess the physiological arousal (indexed by electrodermal activity) and the valence of the emotional state (indexed by facial muscle activity) of the participant. After listening to music, participants evaluated the music using pictorial scales for valence and arousal. As expected, subjects were in a more positive emotional state while listening to pleasant than to unpleasant music, and also evaluated the music more positively after a pleasant compared to an unpleasant pre-existing mood. As also expected, high-arousal music and pre-existing mood generated both higher physiological arousal and higher arousal ratings compared to low-arousal pre-existing mood and music. We found no support for the principle of mood-congruency, which posits that individuals preferentially process emotional stimuli that are congruent in emotional tone with their current mood state.

Key words: Mood induction, Emotion, Psychophysiological responses

[email protected]

8.35 Emotion-relevant characteristics of temperament and the perceived magnitude of tempo and loudness of music

Joanna Kantor-Martynuska

Department of Psychology, Warsaw School of Social Psychology, Warsaw, Poland
LEAD CNRS, Université de Bourgogne, Dijon, France

Background
The research is based on the assumption that the emotion-relevant characteristics of temperament (perseveration, emotional reactivity, and neuroticism) described in the Regulative Theory of Temperament (Zawadzki and Strelau, 1997) and in Eysenck's theory of personality (1967) are stable factors influencing the representation, i.e. the perceived magnitude, of the tempo and loudness of music.

Aims
The study explored the relationship between individual differences in the above-mentioned characteristics of temperament and the perceived magnitude of the tempo and loudness of music. Spontaneous regulation and estimation of tempo and sound level are regarded as indicators of the perceived stimulatory value of music.

Method
Magnitude production, magnitude estimation, and cross-modal scaling techniques were used. In Experiment 1, subjects adjusted the tempo and loudness of four piano excerpts, with four initial levels of the regulated parameter, to their imagined level of comfort. In two versions of Experiment 2, subjects estimated both parameters on a numerical scale and then scaled them visually with a slider on a line symbolizing the range of all possible magnitudes.

Results
The study reveals a negative relationship between all the emotion-relevant variables studied and the comfortable level of both tempo and loudness (except for perseveration). In addition, perseveration was found to be positively associated with the estimated and visually scaled tempo, and with the estimated loudness of music. In turn, emotional reactivity was positively associated with the estimated and visually scaled loudness.

Conclusions
The study shows that the perceived magnitude of the tempo and loudness of music, the dimensions that form the arousing potential of music, is associated with the emotion-relevant temperamental traits. The association of perceived tempo with the emotion-relevant traits is better documented than that of perceived loudness. Emotional arousability, regarded as common to perseveration, emotional reactivity, and neuroticism, which are highly intercorrelated, together with a tendency to respond with annoyance to auditory stimuli whose arousing potential exceeds the optimum level, may be responsible for the differentiation of responses to the arousing properties of music.

Key words: Emotion-relevant trait, Temperament, Tempo, Loudness

[email protected]

8.36 Constructing a support system for self-learning playing the piano at the beginning stage

Tamaki Kitamura, Masanobu Miura

Department of Media Informatics, Faculty of Science and Technology, Ryukoku University, Japan

Background
The number of people in Japan who wish to acquire the skill of playing the piano has been increasing. Learning to play the piano is a kind of dream for non-pianists across a wide range of ages. However, they often give up practicing even if they have a special interest in being able to play. Possible reasons are that a) they cannot choose appropriate piano exercises by themselves, b) it is difficult for them to be instructed directly by experts because of various kinds of restricted resources, and c) there are no appropriate circumstances for self-learning the piano, and so forth.

Aims
This paper aims at realizing a support system for self-learning the piano at the beginning stage. The system is conceptually designed as a computer-based interactive system, and it is expected that, using the system, non-pianists across a wide range of ages will be able to learn to play the piano without experts' instruction.

Method
An original procedure is designed here, strongly dependent on the famous textbook "Methode Rose", and some ways of producing exercises described in "Bayer" are extracted manually to be included. The proposed system will be implemented by employing this designed procedure. A subsystem is developed that automatically generates basic exercises by observing records of the learner's weak points. Its other facilities are a) analyzing the learner's performance recorded with a MIDI sequencer, b) extracting inaccuracies and/or deviations concerning timing, MIDI velocity, and duration, and c) showing the detected defects to the user on a PC display together with optimum exercises corresponding to the errors.

Results
Developed in this study is an automatic generation system for exercises appropriate for practice at the beginning stage; it can generate a wide variety of exercises, whereas conventional textbooks have a limited number of exercises. Presenting optimum fingerings corresponding to each exercise reduces the difficulty of practicing the piano.

Conclusions
In this study a procedure for practicing the piano is proposed by referring to famous procedures conventionally employed, and a support system for self-learning is partially realized by implementing the proposed procedure.
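The deviation-extraction facility described above can be sketched as a comparison of recorded MIDI events against the reference exercise. The note representation, in-order pairing of score and performance notes, and the timing tolerance below are illustrative assumptions, not details taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float      # seconds
    duration: float   # seconds
    pitch: int        # MIDI note number
    velocity: int     # MIDI velocity (0-127)

def deviations(score, performance, onset_tol=0.05):
    """Pair score and performed notes in order and report per-note errors
    in timing, duration, and velocity; flag notes whose onset deviates
    beyond a (hypothetical) tolerance."""
    report = []
    for ref, played in zip(score, performance):
        report.append({
            "pitch_ok": ref.pitch == played.pitch,
            "timing_err": played.onset - ref.onset,      # + = late, - = early
            "duration_err": played.duration - ref.duration,
            "velocity_err": played.velocity - ref.velocity,
            "flag": abs(played.onset - ref.onset) > onset_tol,
        })
    return report

# Toy data: a two-note exercise and a slightly imperfect performance.
score = [Note(0.0, 0.5, 60, 80), Note(0.5, 0.5, 62, 80)]
perf = [Note(0.02, 0.45, 60, 70), Note(0.62, 0.5, 64, 90)]
result = deviations(score, perf)
```

A real system would also need to align insertions and deletions (extra or missed notes), for which an edit-distance alignment rather than in-order pairing would be appropriate.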

Key words: Piano, MIDI, Self-learning

[email protected]

8.37 Dissimilarity measures and emotional responses to music

Nicholas Knouf

Media Laboratory, Massachusetts Institute of Technology, USA

Background
A common method of studying emotional responses to music is to have listeners describe their internal states using words. Many studies, however, constrain the listener to one or two valenced dimensions. Empirical results, as well as introspective experience, suggest that emotional responses to music are much more elaborate and oftentimes conflicting.

Aims
The goal of this study was to develop a more nuanced method of measuring emotional responses to musical excerpts. We explored self-report by having subjects choose from a subset of descriptors and analyzed the data using dissimilarity metrics.

Method
We presented to musically trained listeners (N=7) twenty-two excerpts (M=53 s) of non-vocal pieces from Western music. Following each excerpt, listeners saw a list of sixteen words (chosen from prior studies) and selected the words that described their emotional experience. Selection of a word then required listeners to rate its strength on a scale from 1.0 to 9.0 (unselected words received a value of 0.0).

We used four dissimilarity metrics to compute the distance between descriptors: 1) non-Euclidean measures such as Manhattan and Hamming distance (where a word's strength was one if selected, zero otherwise); 2) two measures developed to incorporate prior hypotheses: common distance, which only computed the distance between words if they were both selected, and weighted Manhattan distance, where non-common selection of words was weighted less than common selection. This produced a dissimilarity matrix, which was visualized in 2D using classical multi-dimensional scaling (MDS).

Results
Via the MDS solution and across all measures, we observed a segregation of terms into axes of passive to active (calm and relaxed on one end; ecstatic, tense, and angry on the other) and pleasurable to sad. These results fit well with other studies that have found similar distributions of terms, but those studies required subjects to rate all of the words.

Conclusions
While our results are preliminary, they do suggest the utility of the proposed approach. We allowed subjects to choose only those words that were pertinent to their emotional experience. The early results also corroborate prior findings in the literature. Further work will involve the development of more domain-specific dissimilarity measures.
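The four metrics described in the Method can be sketched in a few lines over per-excerpt word-strength vectors (0.0 = unselected, 1.0-9.0 = rated strength). The example vectors and the non-common weight of 0.5 are invented for illustration; the abstract does not state the weight actually used.

```python
def hamming(x, y):
    """Count words selected (strength > 0) in one profile but not the other."""
    return sum((a > 0) != (b > 0) for a, b in zip(x, y))

def manhattan(x, y):
    """Plain Manhattan distance over word strengths."""
    return sum(abs(a - b) for a, b in zip(x, y))

def common_distance(x, y):
    """Distance computed only over words both profiles selected."""
    return sum(abs(a - b) for a, b in zip(x, y) if a > 0 and b > 0)

def weighted_manhattan(x, y, w_noncommon=0.5):
    """Manhattan distance with non-commonly-selected words weighted less
    than commonly selected ones (weight value is an assumption)."""
    return sum(abs(a - b) * (1.0 if (a > 0 and b > 0) else w_noncommon)
               for a, b in zip(x, y))

# Two toy profiles over three descriptor words.
x = [0.0, 3.0, 5.0]
y = [0.0, 0.0, 5.0]
```

Applying each metric pairwise over all descriptors yields the dissimilarity matrix that classical MDS then embeds in 2D.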

Key words: Emotion, Methodology

[email protected]

8.38 Historiometric analysis of Clara Schumann's collection of recital notes: Life-span development, mobility, and repertoire

Reinhard Kopiez1, Andreas C. Lehmann2

1Hochschule für Musik und Theater Hannover, Germany
2Hochschule für Musik Würzburg, Germany

Background
The Zwickau collection of 1,312 concert program leaflets includes all concerts that Clara Schumann (1819-1896) gave between 1828 and 1891. This historically unique collection presents an exhaustive documentation of a performer's career from age nine to age 71. To date, archival data from performing musicians have rarely been investigated by musicologists. Employing an interdisciplinary, quantitative historiometric approach, we analyzed these data descriptively and theoretically against the backdrop of her family situation.

Aims
The most pertinent questions were: (a) how were Clara Schumann's concert activities influenced by her family situation? and (b) how did her repertoire develop, considering her intent to disseminate German music in general and Robert Schumann's music in particular?

Method
The program leaflets were entered into a database and prepared for computer-assisted analysis. 536 solo piano and chamber music pieces with a definite contribution from Clara Schumann were selected from the 20,000 program entries.

Results
Analyses show that the yearly frequency of concerts mirrors her personal circumstances (critical life events). The five most frequently performed composers (Schumann, Chopin, Beethoven, Mendelssohn, and Bach) contributed 72% of all performances, although Clara Schumann performed works by a total of 77 composers. These proportions confirm current bibliometric models and can be found in other creative domains. Furthermore, Clara Schumann performed in 160 cities, but 50% of her concerts were given in merely seven cities (London, Leipzig, Vienna, Berlin, Dresden, Hamburg, and Frankfurt). While displays of virtuosity marked the beginning of her career, her later choice of repertoire was governed by more complex factors.

Conclusions
To summarize, our study revealed that Clara Schumann's artistic career was sensitive to biographical events. Through her enlightened program conception, she increasingly disseminated the works she deemed worthy. However, she could not free herself from the demands of the audience. Her almost exclusively German repertoire probably played an important role in the development of a corresponding canon that influences concert life to this day.

Key words: Life-span development, Historiometry, Repertoire analysis

[email protected]

8.39 Melodic intervals as reflected in body movement

Göran Krantz1, Guy Madison2, Björn Merker2

1R. Steiner University College, Järna, Sweden
2Uppsala University, Sweden

Background
It is often assumed that musical intervals carry meaning beyond that of the simple pitch difference they embody. In earlier studies, we found evidence that melodic intervals tend to be perceived in emotional and aesthetic terms with some degree of consistency. Might such effects extend beyond the perceptual domain into bodily expression as well? Hornbostel, in his essay Melodischer Tanz of 1903, followed by Truslit in 1938, discussed melodic movement in relation to body movement, but research on this topic is rare.

Aims
The aim of the study was to explore whether any consistent patterns of response might emerge when volunteers are asked to perform body movements in response to heard melodic intervals.

Method
Twenty-eight participants were asked to portray their impression of melodic intervals using body movement. Stimuli consisted of the ascending (melodic) intervals of the C-major diatonic scale. To control for possible sequence effects, intervals were presented in a different random order for each participant. The movements were recorded on video and rated by 10 observers on 7 bipolar word pairs: up-down, outwards-inwards, tension-relaxation, repulse-receive, asymmetric-symmetric, straight-round, and gloomy-cheerful.

Results
Results show a systematic relation between interval and body movement. Movements performed to the fifth, sixth, seventh, and octave received high ratings for "up", "outwards", and "cheerful", with a peak for the sixth. The movements rated most "down" and "gloomy" were performed for the major second, those rated most "round" for the major third, and those rated most symmetric for the major sixth. High ratings on "tension" were noted for the major seventh. Some movements seem to reflect the size of the pitch difference across intervals, but this pattern tends to be broken by some of the intervals, suggesting an influence of factors other than the size of the pitch step.

Conclusions
The results extend our earlier studies of the effects of melodic intervals by showing that simple melodic intervals not only tend to receive consistent interpretations in emotional and aesthetic terms, but also lead participants to express them in bodily attitudes and movements that show a measure of consistency across individuals.

Key words: Melodic intervals, Body movement, Cross modal expression

[email protected]

8.40 A foreign sound to your ear: A comparison of American vs. German-speaking Bob Dylan fans

Susanne Kristen1, Stephen Dine-Young2

1University of Würzburg, Germany
2Hanover College, USA

Aims
The study compares the reactions of fans to Bob Dylan's music in the artist's original culture, the United States, vs. fans in Germany, Austria, and Switzerland.

Sample
A total of 84 participants: 41 American participants (4 female & 37 male; age range 20-60 years) and 43 German-speaking participants (4 female & 39 male; age range 15-58).

Method
An internet questionnaire asked respondents about the influence Bob Dylan has had on their everyday life, as well as demographic questions such as their age and the decade when Bob Dylan was most important to them. The open-ended responses to these questions were analyzed using the LIWC2001 computer software. This program, which can use either a German or an American dictionary, analyses the frequency of both grammatical and thematic content in qualitative data.

Results
There was a significant negative correlation in both cultures between people's age and the decade Bob Dylan was most important to them. Significantly more American than German-speaking fans experienced his influence as being life-shaping, and Americans tended to respond to his music in a more emotional manner than his German-speaking fans. In addition, American fans used significantly more "I" statements and other forms of self-reference when talking about his music. In contrast, Europeans reported significantly more negative emotions, sadness, and cognitive reactions than Americans did.

Conclusions
Consistent with previous research, these results suggest that fans are most influenced by the music they encountered in their adolescence. At the same time, at least some artists, such as Dylan, continue to have significance for their audiences as they get older. Compared to fans from a different culture, the approach of fans from the artist's original culture seems to be more emotional/visceral than cognitive/intellectual. Audiences immersed in the same cultural context as the artist may be more likely to respond emotionally, while fans from a different culture form attachments that are less immediate and more intellectually based. In addition, a strong emotional attachment to popular music may be more closely related to "life-changing" consequences than attachments to music that are more cognitive in nature.

Key words: Social psychology of music, Identity development, Cross-cultural research

[email protected]

8.41 Musical qualities in the infant cry

Daniela Lenti Boero, Gianni Nuti

Department of Psychology, University of the Valle d’Aosta, Italy

Introduction
The cry is the first sound emitted by the infant. It is a high-intensity, mammalian-like, and sometimes unpleasant signal, and many studies have demonstrated that it might be among the most important elicitors of child abuse in the first months of life. Recently, a questionnaire completed in Northern Europe by new parents of infants aged 6, 4, and 2 months showed that 5%, 4%, and 2.2% of them, respectively, admitted to having battered or smothered the infant in order to stop him/her crying.

Although the cry is emitted in a wide variety of contexts, and the contextual differentiation of crying has been debated since the earliest studies, most authors now agree that variations in the cry are related to the infant's arousal rather than to different contexts. However, some "musical" characteristics might be more or less aversive than others, according to different levels of arousal.

Method and Results
Twenty full-term normal human newborns (aged one to four days), belonging to normal families, had their cries recorded during pain (a high-arousal situation), hunger (a low-arousal situation), and manipulation stimuli (medium arousal). The musical qualities of the cries were evaluated by an expert musician for each infant in each context. The aim is to relate infant cries to complex musical stylistic and formal models that are nonetheless rooted in the bio-psychological foundations of human relation and communication.

In particular, melodic aspects are analysed: pitch modulation, frequency contours, durations, pause lengths, evolution of intensities, timbre variables, and temporal structures. Finally, parallels are drawn between melodic categories found in infant cries and cultural models with homologous characteristics taken from Western and European musical repertoires.

Key words: Cry, Voice, Emotion

[email protected]

8.42 A descriptive analysis of university music performance teachers' sound-level exposures during a typical day of teaching, performing, and rehearsing

Sandra Mace

Music Research Institute at the University of North Carolina at Greensboro, USA

The purpose of this study was to describe the sound-level exposures of university music performance teachers. Using personal dosimeters, sound-level exposures were measured across two work days. The primary research question was: "Do university music performance teachers experience sound levels that result in dose percentages that meet or exceed standards recommended by the National Institute for Occupational Safety and Health (NIOSH) or the Occupational Safety and Health Administration (OSHA) in group and individual teaching environments during typical work days?" Over the course of a career, sound-level exposure that exceeds recommendations places a person at risk for noise-induced hearing loss (NIHL).

Thirty-seven university music performance teachers wore Cirrus Research doseBadge dosimeters. Dosimeters were placed on a shoulder in a manner that would not interfere with performance on their instruments during teaching. Additionally, the effects of other variables on sound-level averages were examined, including teaching specialization, type of teaching activity, number of students or participants in the teaching activity, and performance level(s) of participants in a teaching activity.

Thirteen music performance teachers (35%) experienced sound levels that resulted in dose percentages exceeding standards recommended by NIOSH for single days measured. Two-day averages showed that 12 music performance teachers (32%) experienced sound levels that resulted in dose percentages exceeding standards recommended by NIOSH. Five music performance teachers (14%) experienced sound levels that resulted in dose percentages exceeding standards recommended by OSHA for single days measured. Two-day averages showed that 2 music performance teachers (5%) experienced sound levels that resulted in dose percentages exceeding standards recommended by OSHA.

Although 35% of the music teachers experienced averaged sound levels that place them at risk for NIHL, only 8% (n = 3) reported using hearing protection (earplugs). Timbral difference was the reason most often cited by participants who explained why they chose not to use hearing protection. Seemingly, some musicians would rather accept hearing loss in the future than timbral differences in the present.
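The dose percentages referred to above follow the standard criterion-level/exchange-rate formula: the daily dose is the sum of actual exposure times divided by the permissible times at each level, expressed as a percentage. A sketch using the published NIOSH (85 dBA criterion, 3-dB exchange rate) and OSHA (90 dBA, 5-dB) parameters; the example exposure segments are invented, not the study's data.

```python
def allowed_hours(level_dba, criterion, exchange_rate):
    """Permissible daily exposure duration at a given A-weighted sound level:
    8 hours at the criterion level, halved for every exchange-rate dB above it."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

def dose_percent(segments, criterion=85.0, exchange_rate=3.0):
    """Daily noise dose as a percentage (100% = full permissible exposure).
    segments: list of (level_dBA, hours) pairs; defaults are NIOSH parameters."""
    return 100.0 * sum(hours / allowed_hours(level, criterion, exchange_rate)
                       for level, hours in segments)

# Hypothetical teaching day: 4 h of ensemble rehearsal at 91 dBA,
# 2 h of individual lessons at 85 dBA.
day = [(91.0, 4.0), (85.0, 2.0)]
niosh_dose = dose_percent(day)                                   # NIOSH criteria
osha_dose = dose_percent(day, criterion=90.0, exchange_rate=5.0) # OSHA criteria
```

Because NIOSH uses a lower criterion level and a stricter exchange rate, the same day yields a much higher NIOSH dose than OSHA dose, which is consistent with more teachers in the study exceeding the NIOSH standard than the OSHA standard.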

Key words: Sound-level exposure, Dosimeters

[email protected]

8.43 On machine arrangement for smaller wind orchestras based on scores for standard wind orchestras

Hiroshi Maekawa1, Norio Emura1, Masanobu Miura2, Masuzo Yanagida1

1Doshisha University, Kyoto, Japan
2Ryukoku University, Shiga, Japan

Background
Recently there are many amateur wind orchestras, but many of them have fewer players or instruments than standard wind orchestras. Arrangements for smaller wind orchestras are therefore required for a proper, or at least tolerable, performance, though arrangement is a demanding task for amateur musicians. A machine arrangement system for wind orchestra is needed to reduce the burden of arrangement.

Aims
To develop a scheme of machine arrangement from scores for standard wind orchestras to scores for smaller formations.

Method
The proposed system generates scores for wind orchestras of smaller formation from scores for standard wind orchestras. The behaviour of the proposed system simulates human arrangement. The process can be divided into the following three phases.

1. Phrase Segmentation. The system divides the score at time points where the mood of the music changes significantly.

2. Part Extraction. The system extracts the most important parts in each phrase, up to the number of players in the target orchestra.

3. Instrument Assignment. The system assigns each part to an instrument available in the target orchestra.

Results
Although the three phases above have been implemented as independent subsystems, they have not yet been integrated into a unified system, so the results reported here are those of each subsystem.

1. Phrase Segmentation. Segmentation by the proposed system and by a human arranger almost coincided for two of the three wind-orchestra pieces examined, but not for the third.

2. Part Extraction. Extraction by the proposed system roughly coincided with that of seven human arrangers experienced in playing wind instruments, for both musical pieces investigated.

3. Instrument Assignment. Assignment by the proposed system perfectly coincided with at least one of the seven human arrangers' results for the two musical pieces noted above.

Conclusions
In the phrase segmentation phase, the proposed system produces some results that differ from those of human arrangers, possibly because the time span used for evaluating synchronization is too short; this defect needs to be addressed. Integration of the three subsystems is essential for constructing the complete arrangement system.
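The part-extraction and instrument-assignment phases can be illustrated with a toy sketch. The abstract does not specify its importance measure or assignment rules, so the note count and instrument-name matching below are hypothetical stand-ins, not the authors' method:

```python
def extract_parts(phrase, n_players):
    """Phase 2 (toy): keep the n_players parts with the most notes in the phrase."""
    return sorted(phrase, key=lambda part: len(phrase[part]), reverse=True)[:n_players]

def assign_instruments(parts, available):
    """Phase 3 (toy): greedily map each extracted part to an unused instrument,
    preferring one whose name matches the part's instrument family."""
    pool = list(available)
    assignment = {}
    for part in parts:
        family = part.split()[0]
        match = next((inst for inst in pool if inst.split()[0] == family), pool[0])
        pool.remove(match)
        assignment[part] = match
    return assignment

# a phrase as {part name: list of note pitches} -- purely illustrative data
phrase = {"flute 1": [72] * 40, "flute 2": [69] * 25, "oboe": [67] * 30, "tuba": [40] * 10}
parts = extract_parts(phrase, 2)                   # ['flute 1', 'oboe']
assign_instruments(parts, ["flute", "clarinet"])   # {'flute 1': 'flute', 'oboe': 'clarinet'}
```

A real system would of course rank parts by musical salience rather than note count, and constrain assignment by register and transposition.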

Key words: Machine arrangement, Wind orchestra, Phrase segmentation

[email protected]

Tuesday, August 22th 2006 109

8.44 Processing of self-made and induced errors in musicians

Clemens Maidhof, Martina Rieger, Stefan Koelsch

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Musicians can be seen as motor experts with pronounced associations between an action and the perceivable consequence of that action. Because of the complexity of a musical act, constant monitoring of one's own actions via auditory, visual and tactile feedback is of crucial importance and can be used to detect and possibly correct errors. The present study investigates the neurophysiological correlates of error processing in musicians. In the action condition, 12 pianists had to play fast tone sequences bimanually; in the perception condition, the task was to listen to those sequences. The electroencephalogram was recorded during both conditions. Event-related brain potentials were analyzed for induced errors (random manipulation of the auditory feedback of single keypresses in the action condition, and of single tones in the perception condition) and for self-made performance errors (action condition only). Preliminary results show that induced errors elicited a feedback error-related negativity (ERN) in the action condition and, in the perception condition, a potential strongly reminiscent of the feedback ERN. Future studies must examine whether the two potentials reflect the same process, which would indicate that similar neuronal processes of error detection operate in musicians during perception and action.

Key words: EEG, Event related brain potentials, Performance

[email protected]

8.45 Imaging the music of Arvo Pärt: The importance of context

Kaire Maimets-Volt

Estonian Academy of Music, Tallinn, Estonia

Background
More than 20 compositions by Arvo Pärt, one of the best-known contemporary Estonian composers, have been used as background music in films (e.g. by Bernardo Bertolucci, Julie Bertucelli, Jean-Luc Godard, Michael Moore, Gus van Sant, Tom Tykwer). Most of these compositions are instrumental pieces in Pärt's tintinnabuli style; the characteristic formal feature of the most often used ones is that they explore the ramifications of a single element of musical structure.
Aims
Taking the use of Pärt's early tintinnabuli works "Für Alina" (1976) and "Spiegel im Spiegel" (1978) in film soundtracks as examples, this paper aims to show that 1) Pärt's music occurs in narrative situations where it is necessary to express a single unambiguous idea or emotional content, or to emphasise it over something else; and 2) this content tends to be very similar in films otherwise extremely diverse in plot or genre, regardless of which of Pärt's works is used. The final aim of this paper is therefore to describe the extramusical field of association of Pärt's music in contemporary film.

Main contribution
Analyses of Pärt's tintinnabuli music have usually focused on the score rather than the experience of sound, apparently because it seems safer to explicate Pärt's original method of composition than listeners' aesthetic perception of the sounding music. (The latter we would rather find in CD booklets, concert programmes, previews and reviews, etc.) We assume, though, that filmmakers have chosen Pärt's music for its acoustical properties. Furthermore, filmmakers tend to perceive and interpret this music in a similar manner, and to use it on similar occasions. Thus, by researching how Pärt's music is set to interact with other means of expression (image, speech, non-musical sound) in the context of film narrative, we acquire valuable information on how this music is experienced.
Implications
We hold that meaning in music is context-dependent, i.e. music has the potential for specific meanings to emerge under specific circumstances. This research implies that film contexts reveal aspects of Pärt's music that canonical musicological discourse would rather not bring forth.

Key words: Audio-visual perception, Film music, Arvo Pärt

[email protected]

8.46 Quotation in jazz improvisation: A database and some examples

Marco Mangani, Roberta Baldizzone, Gianni Nobile

Faculty of Musicology, University of Pavia and Cremona, Italy

While quotation in general has recently become a central topic of musicological research, quotation in jazz, as David Metzer states, "has received little attention" (Quotation and Cultural Meaning in Twentieth-Century Music, Cambridge 2003, p. 50 n. 10). In this paper we focus on the jazz soloist's use of quotation as part of improvisation. This practice reveals, in the first instance, specific aspects of a jazz player's musical background: the case of Charlie Parker, whose quotations range from traditional tunes to Stravinsky, is well known and has recently been reappraised (for instance, in the interesting, if uneven, book by Gianfranco Salvatore, Viterbo 2005), while the cognitive implications of Parker's compositional/improvisational method were addressed years ago by Perlman and Greenblatt ("Miles Davis Meets Noam Chomsky...", in The Sign in Music and Language, ed. W. Steiner, Austin 1981). Parker, however, was by no means the only jazz player who had recourse to quotation, and quotation should not be considered a mere "indicator" of the soloist's musical background. We are convinced that in several jazz styles quotation is a downright structural factor of the improvisational path and that, as such, it has several cognitive implications. To evaluate the role of such quotations properly, a simple aural/mnemonic identification of their sources is not enough: we need a systematic description of their location ("in which chorus?"; "to which formal part of the standard does the quotation correspond?"), their melodic and harmonic features, and even their role in terms of performance ("studio or live?"; "at the end of a piece, to increase applause?", and so on). To this end, we propose a database design and show some interesting results concerning three emblematic cases: Sidney Bechet, Duke Ellington and Ella Fitzgerald.

Key words: Jazz, Quotation, Improvisation

[email protected]

8.47 Tension and narrativity in tonal music

Luca Marconi

Conservatory of Music Giuseppe Verdi of Como, Italy

In many theories of and discourses on tonal music one finds the categories of "tension" and "narrativity" applied, but the relationship between the application of the former category to tonal music and the application of the latter to the same music has not been deeply considered. This paper approaches that relationship, connecting theories of tonal music to theories of narrativity and to those that consider the energetic and kinetic experience of moving toward a goal while undergoing a tension between two opposing pulls. This connection will be sought with reference to the cognitive sciences of music, the semiotics of music and cognitive semantics.

Key words: Tension, Narrativity, Tonality

[email protected]

8.48 Recognition of similarity relationships between time-stretched spectral structures

Renato Messina

Istituto Superiore di Studi Musicali “Vincenzo Bellini”, Catania, Italy

The purpose of this poster is to present a test-based approach for recognizing, from the spectral structure of short unpitched sounds (50-100 milliseconds), whether several time-stretching response functions are perceptually similar. For example, in a spectral composition the problem may be to determine whether the harmonicity/inharmonicity percentage, defined on a small temporal scale by mathematical ratios, applies to each of the different time-stretching factors, preserving the identity of the object.

The investigation proposed here combines perceptual information from auditory inspection of several structures, carried out by means of similarity comparisons between time-stretching models of the structure to be inspected and several reference patterns. Let (a1, . . ., am) be p-dimensional parameter vectors associated with m response models of the same type. This study is concerned with the syntactical comparison of a1, . . ., am to identify the timbral properties that are salient for similarity or identity judgments.

The patterns' spectral structures, implemented in the Max/MSP environment, are based on additive synthesis and share the same frequency band. Such structures are arranged on both a parallel and a sequential level. The parallel level controls additive-synthesis components in the frequency domain; the sequential level implements piecewise-linear envelopes according to three stages in the time domain: 1) attack transients; 2) a stationary state with a specific formantic region; 3)

extinction transients. The time-stretching models are obtained by multiplying all the weighted segments of the sequential-level functions by a common coefficient (from approximately 2000% to 60000% of the original tempo). The test package is based on a linear increment in the complexity of the structures' envelopes: s1 = p1; s2 = p1 * 2; etc., where s is the structure and p is the number of parameters of both the sequential and parallel functions.
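The stretching operation itself, multiplying every time value of the piecewise-linear sequential-level envelopes by a common coefficient, amounts to the following (a Python illustration, not the authors' Max/MSP patch; the breakpoint values are invented for the example):

```python
def stretch_envelope(breakpoints, factor):
    """Scale the time axis of a piecewise-linear envelope by a common coefficient.

    breakpoints: (time_seconds, amplitude) pairs; amplitudes are left untouched,
    so each stage keeps its shape while its duration grows by the same factor.
    """
    return [(t * factor, amp) for t, amp in breakpoints]

# a 100 ms sound: attack, quasi-stationary stage, extinction
envelope = [(0.0, 0.0), (0.01, 1.0), (0.08, 0.8), (0.10, 0.0)]
stretch_envelope(envelope, 20.0)  # a 2000% stretch: the sound now lasts 2 s
```

Because only the time coordinates are scaled, the relative proportions of attack, stationary state and extinction are preserved across all stretching factors.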

The test results are interpretable: (1) in relation to the morphological characteristics of the structure, whose transformations under each of the proposed stretching models the subjects can detect; and (2) in relation to the comparison of perceptual performance across different levels of complexity. The results may also suggest investigating perception thresholds in an inverse mapping of large-scale temporal events converted into compacted data structures.

Key words: Time-stretching, Spectral, Similarity

[email protected]

8.49 Cognitive and spatial abilities related to young children's development of absolute pitch perception

Maria Teresa Moreno Sala

University of Laval, Québec, Canada

The purpose of the present study was to investigate whether cognitive and spatial abilities are related to the development of absolute pitch perception during early childhood. Young children's performance (n = 88) was examined on a variety of cognitive and spatial ability tasks after two months of focused instruction on absolute and relative pitch. The results indicate that some cognitive and spatial abilities are related to the development of absolute pitch memory: children who memorize pitches better show a more sequential and less simultaneous way of processing information. Some differences were also found in the spatial tasks. The relationships between type of intellectual processing and the development of pitch memory are discussed.

Key words: Absolute pitch, Cognitive abilities, Young children

[email protected]

8.50 Mathematical framework for analyzing qualitative information flow in processes of teaching musical renderings

Tatsuo Motoyoshi1, Hiroshi Kawakami1, Kentaro Toda2, Takayuki Shiose1, Osamu Katai1

1Graduate School of Informatics, Kyoto University, Japan
2Osaka Gakuin University, Japan

In this paper, we propose a mathematical framework for analyzing a transmission process ofmusical renderings.

Suppose that some music players communicate with each other about musical renderings. The problem is to understand this process and to show how to transmit renderings effectively. Several methods for generating musical renderings have recently been proposed. However, these methods are based on computational models and aimed at mechanical instruments, and therefore consist of predetermined rules described in detail as symbolic data.

On the other hand, human music players use not only literal representations, such as symbolic data, but also metaphorical representations when they communicate about musical renderings.

With literal representations, players are given only a limited range of performances, generated from concrete rendering instructions. Using metaphorical representations, on the other hand, allows them to generate new artistic performances.

Therefore, by interpreting metaphorical representation as a medium, we have to consider not only the quantitative but also the qualitative aspect of the information flow between transmitters and recipients in order to understand these processes. Until now, however, there has been no theoretical framework for discussing such flows of qualitative information.

First, we discuss how artistic performances differ according to whether players use metaphorical representation or not. Then we propose a mathematical framework for analyzing these information flows using Channel Theory.

Using this framework, we can understand how various artistic performances are generated from metaphorical representation. Furthermore, criteria for good transmission of musical renderings are discussed on the basis of Channel Theory.

Key words: Channel theory, Qualitative information flow, Musical rendering

[email protected]

8.51 Strong emotions in music and their psychophysiological correlates

Frederik Nagel, Oliver Grewe, Reinhard Kopiez, Eckart Altenmüller

University of Music and Drama Hanover, Germany

Strong emotions while listening to music can be accompanied by psychophysiological responses such as goose pimples or shivers down the spine, the so-called chills (Panksepp 1995). About 50-90% of music listeners occasionally experience such bodily reactions (Goldstein 1980). The relationship between self-reported chills and psychophysiological parameters such as heart rate (HR), heart rate variability (HRV) and skin conductance response (SCR), as indicators of sympathetic activity (Witvliet and Vrana 1995; Rickard 2004), was investigated in 38 subjects who listened to musical pieces and reported the emotions they felt in real time using the computer-based self-report system "EMuJoy" (Nagel, Kopiez et al. 2006).

HRV was found to be higher during musical pieces in which chills occurred than during pieces without chills. In pieces in which chills were reported, SCR and HR increased in coincidence with the chills in 79.8% and 70.6% of cases, respectively.

The maximum SCR lagged behind the onset of self-reported chills by 3.5 s, and the HR maximum by 1.5 s. The averaged physiological time series showed a consistent pattern of responses to chill experiences, and the relationship between the psychological and physiological time series was quantified. Since there are cases of dissociated physiological and psychological responses, physiological data are not appropriate as sole indicators of chills, but they are needed to substantiate self-reports.
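A lag between a self-report channel and a physiological channel, such as the 3.5 s SCR delay reported here, can be estimated generically from the peak of their cross-correlation. The sketch below uses synthetic Gaussian pulses and an assumed 10 Hz sampling rate, not the EMuJoy data:

```python
import numpy as np

def lag_seconds(x, y, fs):
    """Lag (in seconds) by which series y trails series x, taken from the
    peak of their cross-correlation (both series sampled at fs Hz)."""
    x = x - x.mean()
    y = y - y.mean()
    corr = np.correlate(y, x, mode="full")
    lags = np.arange(-len(x) + 1, len(x))  # sample lags matching 'full' output
    return lags[np.argmax(corr)] / fs

fs = 10.0                                          # assumed sampling rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)
report = np.exp(-0.5 * ((t - 10.0) / 0.5) ** 2)    # chill reported around t = 10 s
scr = np.exp(-0.5 * ((t - 13.5) / 0.5) ** 2)       # SCR peak 3.5 s later
lag_seconds(report, scr, fs)                       # 3.5
```

Real SCR traces are noisier and slower than this toy pulse, so in practice the correlation would be computed on smoothed, epoch-averaged signals.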

References
Goldstein, A. (1980). "Thrills in response to music and other stimuli." Physiological Psychology 8(1): 126-129.

Nagel, F., R. Kopiez, et al. (2006). "Continuous measurement of perceived emotions in music: Basic aspects of data recording and interface features." (in press).

Panksepp, J. (1995). "The emotional sources of 'chills' induced by music." Music Perception 13(2): 171-207.

Rickard, N. S. (2004). "Intense emotional responses to music: A test of the physiological arousal hypothesis." Psychology of Music 32(4): 371-388.

Witvliet, C. V. and S. R. Vrana (1995). "Psychophysiological responses as indices of affective dimensions." Psychophysiology 32(5): 436-443.

Key words: Emotion, Chill, Music

[email protected]

8.52 The interplay between music and language processing in song recognition

Tomoko Nakada, Jun-ichi Abe

Hokkaido University, Japan

Singing and listening to songs are among the most complex and universally human cognitive activities. A song carries two kinds of information: words (language information) and melody (music information). Does melody facilitate (or inhibit) word processing, and do words facilitate (or inhibit) melody processing? Or are these two kinds of processing independent of each other? These questions are explored in two experiments.

In Experiment 1, 40 university/college students were presented with 20 words, either spoken (condition 1) or sung (condition 2), and were asked to attend to the words. Immediately afterwards, in the test phase, they were given a word recognition task in which they heard the same words along with 20 distractor items and were asked to respond "old" or "new". Half of those in condition 1 heard the test words spoken and half heard them sung; condition 2 was split in the same way. If word processing were completely separable from melody processing, there should be no difference among the four conditions in the test phase.

Experiment 2 used the same design with 20 university/college students, but the focus was on melody recognition. The presentation consisted of a hummed melody (condition 1) or a melody sung with words (condition 2), and students were asked to attend to the melody. Likewise, if melody processing were completely separable from word processing, there should be no difference among the four conditions in the test phase.

In Experiment 1, using A' scores (a nonparametric measure of the accuracy of discrimination), melody facilitated word recognition when the initial presentation of the words was sung (condition 2). Similarly, in Experiment 2, words facilitated melody recognition in the sung condition (condition 2) but inhibited performance in the hummed condition (condition 1).
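For reference, A' is conventionally computed from the hit rate and false-alarm rate with Pollack and Norman's formula; the sketch below shows that standard definition, not the authors' own analysis code:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric discrimination accuracy A' (Pollack & Norman):
    0.5 is chance performance, 1.0 is perfect old/new discrimination."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1.0 + h - f)) / (4.0 * h * (1.0 - f))
    # below-chance performance: mirror the formula around 0.5
    return 0.5 - ((f - h) * (1.0 + f - h)) / (4.0 * f * (1.0 - h))

a_prime(0.8, 0.2)  # ~0.875: good discrimination of old vs. new items
```

Unlike d', A' requires no normality assumption about the underlying evidence distributions, which is why it suits recognition data of this kind.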

These results suggest that word and melody processing are not independent. Whether words facilitate or inhibit performance depends on the initial conditions and the focus of attention: words facilitate melody and vice versa, except in the recognition of a hummed melody.

Key words: Song recognition, Words, Melody

[email protected]

8.53 Musical creativity and the teacher. An examination of data from an investigation of secondary school music teachers' perceptions of creativity

Oscar Odena

University of Barcelona, Spain
Escola Superior de Música de Catalunya, Barcelona, Spain

Background
The centralised production of national curricula over the last decade has made activities labelled as "creative", such as composition and improvisation, compulsory in many countries. Nevertheless, the term creativity, and how creativity might be identified, is rarely examined. Recent studies (Odena, 1999, 2001) suggest that music teachers interpret creativity and its assessment in personal terms. England, with a third of its secondary school music curriculum devoted to "creative skills", is an ideal setting for this enquiry.
Aims
This investigation is concerned with how music teachers interpret the meanings of creativity. This paper considers issues related to two research questions, focusing on (a) how participants characterized creativity in their discourse and (b) the relationship between their perceptions and their backgrounds.
Method
Adopting a qualitative approach (Lincoln & Guba, 1985), videotaped extracts of lessons on composition and improvisation with pupils aged 11-14 were used as a basis for discussion during in-depth interviews with six teachers. Participants were also asked to reflect on specific instances that had shaped their musical outlook by completing a Musical Career Path questionnaire. The investigation was divided into four stages: (1) examination of the meanings attached to the word "creativity" and review of previous studies, after which a four-fold framework for researching teachers' perceptions of creativity in music education was put forward (Pupil-Environment-Process-Product); (2) discussion of the methodological assumptions underpinning the research, including issues relating to data collection; (3) examination of the data using content analysis with the assistance of the computer programme NVivo (Gibbs, 2002); and (4) implications.
Results
Twenty-eight categories and subcategories that complemented the four-fold framework emerged from the analysis of the interviews; selected categories are discussed in this paper. Regarding the relationship between perceptions and backgrounds, the teachers' experiences fell into three strands: musical, teacher education, and professional teaching.
Conclusions
The data analyses indicate that the most influential strand is the musical one. Participants with composing experience and practical knowledge of different music styles were more articulate in describing the environment for creativity and how it might be assessed in pupils' work. Educational implications of these findings are considered.

Key words: Creativity, Secondary schools, Teachers’ perceptions

[email protected]

8.54 Rhythmic grouping construction in performances of the first movement of Paul Hindemith's flute and piano sonata

Jose A. Ordoñana

Universidad del País Vasco, Universidad Pública de Navarra, Escuela de Música de Bergara (Gipuzkoa), Spain

The rules of A Generative Theory of Tonal Music (GTTM) by Lerdahl and Jackendoff have been validated with musician and non-musician subjects in previous research, which found no evident difference in rhythmic grouping behaviour between the two groups. On the other hand, research on accents in musical performance has shown important differences in rhythmic and metric accents across the versions produced by the subjects evaluated.

The present work aims to understand how middle-level conservatoire students construct rhythmic grouping in performance. The score and the interpretations of three flute students were analyzed; the work studied was the first movement of Paul Hindemith's flute and piano sonata. The students' performances were also compared with those of three professional instrumentalists, using commercial compact-disc recordings. The score and the performances were analyzed according to the rules of GTTM, with computer software used for psychoacoustic analysis. Interviews with the three student participants were also conducted, in order to obtain information to be contrasted with the data gathered from the analysis of the performances and from the comparisons with the three professional musicians.

As in other research, the results show no important differences in grouping structure, in the present study independently of the technical level of the subjects evaluated. It is in the rhythmic structure that the most remarkable differences appeared: some accent displacements were found between the performers' different renditions of the piece and the results of the GTTM analysis of the score. Studies investigating the factors that influence accent variation in musical performance are suggested.

Key words: Perception, Cognition, Grouping structure

[email protected]

8.55 Convergences between learning to play the piano and motorics

Alessandra Padula

University of L’Aquila, Italy

Italian music pedagogy traditionally focuses mainly on general music education rather than on the learning of an instrument; nevertheless, learning to play an instrument is undoubtedly an important part of music education.

Children often begin to learn an instrument during primary school, when they are 4 to 11 years old. It is well known that playing an instrument promotes general artistic and specific musical competences, which support expression and communication and therefore facilitate autonomy and socialization.

Moreover, 4- to 11-year-old children can often obtain other important benefits from learning to play an instrument (especially the piano); this study deals with these benefits by analysing the convergences between learning to play the piano and motorics.

This study examines:
- the motoric goals that can be achieved by learning to play the piano (development of abilities in perception and body imaging, space and time structuring, increased coordination);
- the pianistic goals that can be achieved by studying motorics (improved relaxation, balance, joint mobility, economy of motion);
- the importance of achieving these goals for the general development of children.

The analysis of the contribution that piano playing can make to the harmonious development of many abilities, in typically developing children as well as in those with special needs, shows that piano playing can be useful and effective in schools and in rehabilitation centres.

Therefore, the results of this study may be useful to:
- those who work with children in this age range: educators of play groups, primary school teachers, teachers of children with special needs, music education teachers, instrument teachers, rehabilitation technicians;
- those who plan and organize empowerment and integration projects: councillors responsible for municipal departments, pedagogical managers in boarding schools, reform schools, treatment centres for troubled youth, boot camps, etc.;
- those who do research on music and on motorics: musicians, psychologists, pedagogists and special pedagogists, therapists of neuro- and psychomotility in the developmental age.

Key words: Motorics

[email protected]

8.56 Music, consciousness and unconscious

Alessandra Padula

University of L'Aquila, Italy

Musicians usually have a rational approach to music, treating it as a matter to be studied, mastered and analysed, a subject to be examined with the mind rather than the heart, in order to interpret the composer's message and to teach its exact vocal or instrumental realization to students attending music courses.

The average listener usually has an emotional approach to music and is more interested in the moods that follow listening than in a conscious analysis of what he or she is listening to.

Nevertheless, since musicians relate to listeners who do not always approach music rationally, they have to understand listeners' needs and emotional approaches in order to make the best use of these aspects.

This study follows two guidelines: 1. how and why music induces emotions; 2. whether and how the choice of a piece of music might fulfil one's needs.

First we investigate the links among music, reminiscence, imagination and emotion, and highlight how each basic element of music (sounds, silences, etc.) causes "psychosomatic" effects on the listener; these effects are the combination of physical effects (due to the five senses) and emotional effects (personal and collective reminiscences). Then we analyse the main mechanisms that link sounds and meanings, trying to explain their functions by means of several examples. Since everyone's behaviour can be conditioned by personal emotions, and since emotions are deeply linked to the unconscious, it is interesting to find out what each individual's basic motivations are and to investigate whether listening to music may be linked to these motivations.

This study intends to highlight how listening to music may satisfy deep needs such as: 1. emotional and physical self-confidence, originality and creativity; 2. sharing ideas and emotions; 3. glamour, esteem and respect; 4. affinity; 5. regression; 6. immortality.

The results of this study may be useful to: 1. experts in music perception and cognition; 2. composers; 3. music players and singers; 4. people working in the marketing field; 5. advertising psychologists.

Key words: Emotion in music

[email protected]

8.57 Cultural aspects of music perception

Evagelos Paraskevopoulos1, Kyrana Tsapkini1, Isabelle Peretz2

1Department of Psychology, Aristotle University of Thessaloniki, Greece
2Department of Psychology, University of Montreal, Canada

Background
Musical ability has been shown to be of fundamental value for understanding human cognitive function, and the evaluation of music perception through case studies in cognitive neuropsychology has already provided useful insights into this function. However, the existing music perception batteries do not take cultural issues and differences into account, and therefore cannot be used worldwide to validate their universal claims about human musical cognition.
Aims
The aim of this study is to validate a battery for the evaluation of musical abilities and the detection of amusia in cultures that do not share the Western musical tradition.
Method
We adapted the Montreal Battery of Evaluation of Amusia (Peretz, I., Champod, A. S., & Hyde, K., 2003) to the requirements of Eastern music traditions, namely Greek music, where rhythms and melodic scales differ from those used in Western music. We used 30 target melodies and their corresponding variations in order to manipulate changes in melody and rhythm, i.e., two different Eastern scales, a major-like one (Hijaz) and a minor-like one (Sabah), and two different meters, 2/4 and 9/8, which exist in traditional Greek music. The form of the battery has remained the same: it consists of six tests evaluating the perception of contour, intervals, rhythm, meter and musical memory.

Results
While normalization is still in progress, preliminary results from two thirds of the data already seem to correspond to the original MBEA norms. We expect that all the data needed for normalization will be available in due time for the submission of the full paper.
Conclusions
A Greek/Eastern amusia battery can provide new and interesting data about the musical ability of non-Western populations and become a useful tool for evaluating cultural aspects of music perception.

Key words: Musical cognition, Amusia evaluation, Cultural differences

[email protected]

8.58 The unusual symmetry of musicians: Musicians show no hemispheric dominance for visuospatial processing

Lucy Patston, Michael Corballis, Lynette Tippett

Department of Psychology, University of Auckland, Auckland, New Zealand

Background
Recent research suggests that musicians show superior performance in processing visuospatial material relative to non-musicians: musicians performed significantly better on the Judgment of Line Orientation task (Sluming et al., 2002) and were faster than non-musicians at determining on which side of a horizontal or vertical reference line a target dot had appeared (Brochard, Dufour & Despres, 2004).
Aims
Experiment 1 assessed differences in visuospatial attention between musicians and non-musicians using a line bisection task. Experiment 2 investigated the neural basis of differences in visual processing between musicians and non-musicians by comparing interhemispheric transfer times (IHTTs) of visual stimuli across the corpus callosum using EEG.
Method
Experiment 1: Twenty expert musicians and 20 non-musicians completed a line bisection task, bisecting lines into two even parts with the right and with the left hand. Experiment 2: Seven expert musicians and ten non-musicians were administered the Poffenberger task (detection of simple visual stimuli flashed to either visual field) during EEG recording. Differences in the electrophysiological latencies of the occipital N1, measured bilaterally, were used to deduce IHTTs.
Results
Experiment 1: Non-musicians bisected lines to the left of centre by approximately 2%, consistent with previous research and supporting right-hemisphere dominance for visuospatial attention. In contrast, the musicians bisected lines to the right of centre and were also more accurate overall. Experiment 2: Non-musicians showed significantly faster right-to-left than left-to-right IHTTs, consistent with a right-hemisphere advantage for visual processing. Musicians, however, produced equal IHTTs in both directions and were significantly slower than non-musicians for right-to-left IHTT.


Conclusions
Musicians lack the normal asymmetry for line bisection and for interhemispheric transfer times of visual stimuli. Both findings suggest spatial attention and visual processing may be represented more bilaterally in musicians than in non-musicians, and results indicate a reduced dominance of the right hemisphere for visual processing in musicians. Although it is not known how bilateral representation enhances cognitive performance in visuospatial processing, it is likely to be related to early neural plasticity in musicians.

Key words: Musicians, Visuospatial processing, Line bisection

[email protected]

8.59 A study on the effect of musical imagery on spontaneous otoacoustic emissions in musicians and non-musicians

Gabriela Pérez-Acosta, Alejandro Ramos-Amézquita

Universidad Nacional Autonoma de Mexico, Mexico

The present study hypothesizes that musical images created in the brain could have an effect on cochlear activity; such an effect could be detected through observation of the behavior of spontaneous otoacoustic emissions (SOAEs). Production of SOAEs will be measured in ca. 55 women and 55 men. It has been suggested that there is a stronger efferent influence on the auditory system of individuals with previous musical training than on that of those without it. Therefore, half of the population studied will consist of professional musicians and the other half of non-musicians. A familiar musical tune will be chosen and the subjects will be trained in the task of evoking it. The musical tune (a simple melodic line) will be recorded with a sampled piano sound on a single track.

The training will consist of listening to it until all subjects report that they are able to evoke the musical tune. They will be asked to indicate the beginning and end of the evoking task in order to measure its time span (which should be very similar to the actual duration of the melody). The subjects will then be seated in an anechoic chamber at the “Centro de Ciencias Aplicadas y Desarrollo Tecnológico” (CCADET) at the “Universidad Nacional Autónoma de México” (UNAM). An Etymotic Research ER-10B+ low-noise microphone will be inserted in the ear canal of the subjects. Five-minute samples of ear-canal signals will be obtained, digitized and processed in order to extract frequency and amplitude data on SOAEs. This procedure will be carried out before, during and after the musical image creation task.

Results will then be analyzed to compare the difference between the SOAEs of musicians and those of non-musicians. Changes in frequency and amplitude are expected to be found, which should be more noticeable in musicians than in non-musicians.

Key words: Music imagery, Spontaneous otoacoustic emissions, Efferent influences

[email protected]


8.60 Musical structure processing in autistic spectrum disorder

Eve-Marie Quintin1, Anjali Bhatara1, Eric Fombonne2, Daniel J. Levitin3

1Levitin Laboratory for Music Perception, Cognition and Expertise, McGill University, Canada
2Montreal Children’s Hospital, McGill University Health Centre, Canada
3Department of Psychology, McGill University, Canada

Background
Weak central coherence theory of cognitive processing states that individuals with autistic spectrum disorder (ASD) focus on the details of a stimulus (local processing) owing to an impaired ability to form a global, integrated “perceptual whole” (Frith & Happé, 1994). Individuals with ASD exhibit spared or enhanced perception of local musical elements such as absolute pitch (Heaton et al., 1998), but they can perceive contour, a global property of a musical phrase (Foxton et al., 2003).

Aims
To test weak central coherence theory in ASD using musical stimuli, and to test the specific hypothesis that individuals with ASD will be able to segment music at natural syntactic-structural boundaries and can attend to global musical structure by ordering musical segments correctly to create a proper musical whole.

Method
Two novel experiments were used to explore musical structure processing in autism. Children 14 to 17 years old with ASD and typically developing controls were presented with “Music Blocks”™, a musical jigsaw puzzle made of 5 plastic cubes, each playing a segment of a melody. Rearranging the cubes alters the piece’s melody (global structure) while leaving local structure (groups of notes that constitute a phrase or motif) intact. Children were asked to reconstruct a familiar instrumental tune after several listenings. In a second experiment, children were asked to segment music into meaningful phrases as a measure of their ability to perceive structure in music. Once they had become acquainted with the musical examples, participants were asked to “parse” each musical example into subsegments, pressing the space bar on the computer to indicate where in the music they believed each phrase or “part” of the music ends and another begins. Mean full-scale, verbal, and nonverbal IQ were also assessed using the PPVT and the WISC-IV or the WAIS, and the amount of time spent listening to, playing, and singing music, assessed with the SAMMI musical aptitude instrument (Levitin et al., 2004), was used as a covariate in our analyses.

Results
For both experiments, data will be analyzed with ANOVA and ANCOVA to test for main and interaction effects of diagnosis group, type of music, and item.

Conclusions
Taken together, these experiments assess the ability of children with ASD to perceive structure in music, an aspect of musical knowledge. The results will significantly advance theories of cognitive processing in autism and increase knowledge about the capacities and communication channels available to people with ASD.

Key words: Musical structure processing, Autistic spectrum disorder, Central coherence theory

[email protected]


8.61 Preparing a musical performance: Rehearsal techniques

Sonia Ray, Thiago Cazarim

Federal University of Goiás, Brazil

Aims
The objective of this study is to determine: 1. Which aspects are common to both preparation rehearsals and dress rehearsals? 2. Are there any techniques applied only in dress rehearsals? 3. How do groups deal with mistakes in rehearsals?

Context
Significant attention has been given to studies on musical performance approaching interdisciplinary issues that associate music with physics, mathematics, physiology, anatomy, psychology and neurology. This can be verified by numerous congresses and research groups worldwide. In preparing a performance, musicians often have to deal with a number of predictable situations as well as unexpected issues. These issues grow in complexity when preparing chamber music performances.

Methodology
A study by Cazarim and Ray (2004), covering a sequence of 8 rehearsals, a dress rehearsal and a performance, video-taped and analyzed 10 chamber music groups formed by undergraduate and graduate students at the Federal University of Goiás. The study was complemented by a short survey addressed to all volunteers. Survey answers were compared with the aspects observed on the tapes and informed the results.

Results
1. The chamber groups observed showed little or no planning by the musicians for their rehearsals; 2. Last-minute changes are not considered part of the preparation for performances; 3. Most performances presented mistakes not made in dress rehearsals.

Key Contribution
Performers seem to agree that rehearsal time must be managed efficiently in order to get fast and satisfactory results. Despite this, studies considering rehearsal techniques in preparing a musical performance have received little attention from researchers, although the issue was identified as an extremely relevant topic long ago by Sloboda (1989). This work considered chamber groups’ interactions during their regular rehearsals as well as during their dress rehearsals.

Key words: Performance, Rehearsal technique, Chamber music

[email protected]

8.62 Some considerations on algorithmic music and madrigals of Gesualdo da Venosa

Francesco Russo1,2, Biancamaria Criscuolo3

1Mathematics Department R. Caccioppoli, University Federico II of Naples, Naples, Italy
2Conservatory of Benevento, Italy
3Liceo Sociopsicopedagogico, Italy

The object of this work is an algorithmic study of the madrigals of Carlo Gesualdo da Venosa. Here we build a simulation model which allows one to compose and classify an arbitrary madrigal of the Renaissance period.

Gesualdo summarized the most important harmonic techniques for writing a Renaissance madrigal, and we have translated these harmonic criteria into a suitable mathematical language, using linear systems. A similar approach can be extended to madrigals of other periods by giving different numerical values to the coefficients of the associated linear system.

Key words: Harmony, Mathematics, Music

[email protected]

8.63 Algorithmic music and the Musical Offering of J.S. Bach

Francesco Russo

Mathematics Department of the University Federico II of Naples, Italy
Conservatory of Benevento, Italy

The object of this work is an algebraic-geometric analysis of the canons of the Musical Offering of J.S. Bach. A simulation model is built here from the logic of imitation and from the classical techniques of harmony that are present in the Musical Offering. Many of these techniques give only musical criteria for understanding, composing and classifying a canon, so we have translated them into mathematical language, using the affinities of the Euclidean two-dimensional space. More generally, a similar approach allows each canon of any period to be written as a suitable linear system. A treatment of this kind offers a new method of composing a musical piece, by giving different numerical values to the coefficients of the linear system.

Key words: Harmony, Mathematics, Music

[email protected]

8.64 Facial emotional expression recognition after bilateral amygdala damage: The use of a sonorous/musical context

Antonio Salgado1, Ana Maria Ildefonso2

1Departamento de Comunicação e Arte, Universidade de Aveiro, Portugal
2Faculdade de Ciências Humanas e Sociais, Universidade Fernando Pessoa, Porto, Portugal

Background
In the context of bilateral amygdala damage, Calder et al. (1996) described studies of two people with impaired recognition of facial expressions, showing deficits in the recognition of fear when tested with photographs of facial expressions of emotion from the Ekman and Friesen (1976) series. This finding concerning the recognition of fear, and other basic emotions, is consistent with other reports of effects of bilateral amygdala damage (Adolphs, Tranel, Damasio, & Damasio, 1994, 1995; Young et al., 1995).

Aims
This paper is a case study of SM, a person with bilateral amygdala damage caused by herpes simplex encephalitis. SM showed impairments of expression perception and a severe deficit affecting the recognition of almost all the basic emotions when tested with photographs showing facial expressions of emotion from the Ekman and Friesen (1976) series. The case study described in this paper also includes a set of different tests using facial expressions and facial movements expressing emotional content supported by a sonorous/musical context (Salgado, 2001, 2003; Salgado & Wing, 2003).

Method
Based on the experiments and findings of Salgado (2001, 2003) and Salgado & Wing (2003), the research undertaken here investigates whether a sonorous/musical context may offer better support for the recognition of different facial expressions/movements with emotional content in the SM case study. SM was presented with a series of videotaped facial expressions whose movements re-presented different emotional contents. These videotaped faces were shown with and without sonorous elements: the voice (sung or spoken) expressing the same emotional content as the face and/or a musical context supporting the emotional content expressed.

Results
The first experiments undertaken in this investigation seem to show that, after some months of intense training, SM was better able to identify the facial expressions of the basic emotions from the Ekman and Friesen (1976) series.

Conclusions
It seems that, in the case of SM, the use of a sonorous/musical context when showing facial expressions/movements with emotional content helped SM to identify and recognize the facial expressions together with the emotions re-presented.

Key words: Emotion, Facial expression, Music meaning

[email protected]

8.65 Processing of music syntactic information in brain-lesioned patients - an ERP study

Daniela Sammler, Stefan Koelsch, Angela D. Friederici

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Previous fMRI, MEG and EEG studies suggest that the pars opercularis of the inferior frontal gyrus (IFG) and the anterior part of the superior temporal gyrus (aSTG) are involved in music syntactic processing. Activations are bilateral, but tend to be right lateralized. We conducted an ERP study with patients suffering from focal unilateral lesions in one of the above-mentioned structures to investigate whether the IFG and aSTG are necessarily involved in music syntactic processing, and to disentangle the relative contributions of the individual structures. The Early Right Anterior Negativity (ERAN), an ERP component elicited by irregularities within harmonic progressions, served as an index of intact music syntactic processing. Furthermore, the ability of patients to detect harmonic irregularities and their performance in the Montreal Battery of Evaluation of Amusia (MBEA) were measured. We hypothesized (a) that ERAN amplitude and behavioral performance would be reduced in all patient groups compared to healthy controls, (b) that ERAN amplitude and discrimination performance would be reduced to a greater extent in patients with right compared to left hemisphere lesions, and (c) that ERAN amplitude and performance would be diminished more strongly in patients with lesions of the IFG compared to the aSTG. Patients and controls were presented with chord sequences ending either on a music syntactically regular (tonic) or irregular (dominant-to-the-dominant) chord. During EEG, participants were not informed about the different sequence endings, but watched a silent movie. During the behavioral experiment, participants were asked to differentiate between regular and irregular endings. Preliminary data analysis revealed that most patient groups performed above chance and showed an ERAN component despite their lesion. However, compared to controls, patients showed a reduced ERAN amplitude, a reduced discrimination performance, and lower MBEA scores. In particular, patients with left hemisphere lesions showed deficits in temporal tasks of the MBEA, whereas right hemisphere lesions were associated with impaired pitch processing. Furthermore, ERAN amplitude and discrimination performance tended to be more strongly reduced in patients with right than with left hemisphere lesions, and IFG lesions tended to diminish the ERAN amplitude and performance to a greater extent than aSTG lesions.

Key words: Music, ERP, Patients

[email protected]

8.66 Single time-controlled fractional noise algorithm

Riccardo Santoboni, Costantino Rizzuti

Conservatory of Music N. Piccinni of Bari, Italy

Musical composition has been deeply influenced by mathematical processes over the centuries. Since the 1980s, chaos and fractal geometry have strongly affected the development of a new field of musical research involving the use of fractional noises. Fractional noises have a power spectral density of 1/f^B; that is, the power density decreases with increasing frequency over a spectral range. Fractional noises in fact exhibit a wide range of autocorrelation trends associated with variation in the index B, being produced by random processes characterized by different degrees of memory. Fractional noises with integer index (white: B=0, pink: B=1, and brown: B=2) have been widely used either for sound synthesis or to control musical parameters such as pitch and duration, or complex time distributions of patterns. In the literature we found specific approaches for generating fractional noise separately for each kind of integer index B. Even though a unique mathematical formula already exists to describe this process, at the moment we could find no mention of any method to generate all kinds of fractional noises with real index B using a single algorithm.

The purpose of this work is to present a single algorithm that generates fractional noises characterized by any value of the real exponent B in the interval [0, 2]. The B coefficient can be controlled during the process in order to produce a family of fractional noises whose power spectral density changes over time.
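The abstract does not give the algorithm itself. As a rough illustration of one standard way to obtain 1/f^B noise for an arbitrary real B — spectral shaping of white noise, which is our assumption and not necessarily the authors' VST implementation; the function name and normalisation are ours — consider:

```python
import numpy as np

def fractional_noise(n, beta, seed=None):
    """Generate n samples of 1/f^beta noise by spectral shaping.

    beta = 0 gives white noise, beta = 1 pink, beta = 2 brown;
    any real beta in [0, 2] is accepted.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid division by zero at DC
    # power ~ 1/f^beta  =>  amplitude ~ f^(-beta/2)
    spectrum *= freqs ** (-beta / 2.0)
    noise = np.fft.irfft(spectrum, n)
    return noise / np.max(np.abs(noise))   # normalise to [-1, 1]
```

Recomputing with a slowly varying beta per block would give the time-controlled family of noises the abstract describes.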


The algorithm was implemented using VST plug-in technology, in order to have wide audio software compatibility. The results of our plug-in were compared with several integer-B noise algorithms available in the most common audio software, with excellent data matching.

Key words: Fractional noises, Noise generation, Computer-aided composition

[email protected]

8.67 Development of musical preferences of elementary school children

Gabriele Schellberg

Catholic University Eichstätt/Ingolstadt, Germany

Background
The subject of this empirical study is the development of musical preferences during the years of elementary school. In six- to ten-year-olds an openness to different styles of music can (still) be observed, which Hargreaves (1982) called “open-earedness”. In 2000, a total of 591 children rated their likes or dislikes of eight musical excerpts in a questionnaire. This cross-sectional study found that musical preferences change even during the years of elementary school (Gembris & Schellberg, 2003).

Aims
This paper deals with the following questions: To what extent does open-earedness still exist in elementary school children? How does it change in the course of elementary school? A longitudinal study was therefore conducted to examine the development of the musical preferences of students over three school years, from second to fourth grade.

Method
By means of a questionnaire specially developed for elementary school pupils, a total of 109 children in second grade (7-8 years old) rated their likes or dislikes of ten short pieces of music (approx. 80 sec.), which were representative of a range of different musical styles (vocal music - operetta and opera excerpts - instrumental classical music, pop music, 20th century art music and ethnic music). The same questionnaire was repeated one and two years later. 92 children (41 girls and 51 boys) of an elementary school in Bavaria, Germany participated in all three years.

Results
The results show clear age-related changes. With increasing grade and age, all pieces of music were judged more negatively. These changes in preferences are highly significant (p = .000, Friedman) for all types of music. There were also gender-related differences: with few exceptions, girls generally judged excerpts more positively than boys. All in all, pop music received the most positive evaluation.

Conclusions
The results confirm our former studies (Gembris & Schellberg 2003, Schellberg & Gembris 2004). Most of the younger pupils in the second grade liked or tolerated the music, whereas in third and more clearly in fourth grade, children reacted more negatively to all excerpts except pop music. This result has implications for music education: music teachers should take advantage of the open-earedness of the first years of elementary school.


Key words: Musical preferences, Elementary school, Development

[email protected]

8.68 Music perception understanding is a prerequisite to implementing computer aided musical analyses

Michael Schutz

University of Virginia, USA

Background
Music theorists have long dreamed of applying the speed and computational power of computers to complex analyses of music. Currently available tools such as HUMDRUM have demonstrated the potential benefits of a flexible, malleable tool to assist with tedious analytical tasks. Even if the sizeable technical hurdles to implementing advanced music analysis systems are overcome, it remains unclear whether accurate analyses based on music theory rules are possible without accounting for the role of the perceptual system in music listening.

Aims
To demonstrate the schism between a programmed set of music theory rules and actual music perception.

Method
By replicating several standard analyses using automated software designed for general-purpose analytical tasks, it is possible to view the role of the perceptual system by juxtaposing the results of rigid, rule-based search schemas with sensible human-performed analyses.

Results
There are many similarities between human and computer analyses, confirming that generally accepted rules for analysis contain much merit. However, in several cases the anticipated patterns represent but a fraction of the total patterns located through musically reasonable rules.

Conclusions
The automation of any non-trivial analytical task cannot be carried out via blind adherence to traditional music theory rules. While there are tremendous future possibilities for computer assistance in the process of music analysis, progress in such a complex task can only proceed with a more sophisticated understanding of the role of the perceptual system in musical communication.

Key words: Computer aided analysis, Music theory, Post-tonal music

[email protected]

8.69 Why do mothers sing to their children?

Daniele Schön1, Sylvain Moreno1, Maud Boyer2, Regine Kolinsky2, Mireille Besson1, Isabelle Peretz3


1Institute of Cognitive Neurosciences of the Mediterranean, CNRS and University of the Mediterranean, Marseille, France
2Unité de recherche en Neurosciences Cognitives, Faculté des Sciences psychologiques et de l’Education, Bruxelles, Belgium
3LNMCA, Psychology Department, University of Montreal, Québec

Saffran et al. (1996, 1999) have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. Moreover, such a statistical learning ability can also operate with non-linguistic stimuli such as tones (Saffran et al., 1999). In a series of studies we compared learning based on speech sequences to learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. In a series of 3 experiments, subjects underwent a seven-minute learning session during which they listened to a continuous speech stream resulting from the concatenation of six three-syllable nonsense words. Then, in order to test learning, subjects performed a two-alternative forced-choice test. In the first experiment, using spoken syllables, participants’ performance was not significantly different from chance level. In the second experiment, the syllables of the continuous stream were sung, always using the same pitch for a given syllable. This resulted in an identical and superimposed statistical structure of syllables and tones. Participants’ performance showed clear learning. In order to test whether such learning was due to the structural properties of the stimuli or to a general arousal effect of adding a musical dimension, we combined syllables and pitches in such a way that linguistic and musical boundaries no longer coincided. Participants’ performance was significantly different from chance level and was better than that obtained in Exp. 1, but worse than that in Exp. 2. It seems therefore that both precise structural properties (superposition of transitional probabilities) and a general arousal factor of music play an important role in language learning facilitation.
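For readers unfamiliar with the Saffran-style segmentation mechanism these studies build on, here is a toy sketch (our illustration with made-up nonsense words, not the authors' stimuli or code): word boundaries fall where the forward transitional probability between adjacent syllables dips to a local minimum.

```python
from collections import Counter

def transition_probs(stream):
    """Forward transitional probabilities P(next | current) for a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(stream, probs):
    """Cut the stream wherever the transitional probability is a local minimum,
    the statistical cue to word boundaries described by Saffran et al."""
    tps = [probs[(a, b)] for a, b in zip(stream, stream[1:])]
    words, start = [], 0
    for i in range(1, len(tps) - 1):
        if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]:
            words.append(stream[start:i + 1])
            start = i + 1
    words.append(stream[start:])
    return words
```

On a stream built by randomly concatenating a few fixed three-syllable words, within-word transitions are near-deterministic while between-word transitions are not, so the local minima recover the word boundaries.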

Key words: Language learning, Songs, Segmentation

[email protected]

8.70 Timbrescape: A musical timbre and structure visualization method using tristimulus data

Rodrigo Segnini

Center for Computer Research in Music and Acoustics, Stanford University, USA

The tristimulus (Pollard and Jansson, 1982) is a timbre descriptor based on a division of the frequency spectrum into three bands. Like its homonymous model in the visual domain, it approximates a perceptual quality through parametric control of a physical quantity. A common way of representing these data is by circumscribing a point in a triangular space with each corner representing one band. However, as the number of analysis frames increases, the risk of superimposition and cluttering of the many data points increases as well, limiting the value of this measure for a human observer.

In this work we introduce Timbrescape, a method for visualizing tristimulus data as well as structural boundaries based on timbre information. This method uses a multi-timescale aggregation that offers at-a-glance information for the whole piece; it is based on the Scoregram (Segnini and Sapp, 2005). We conducted an experiment to verify that the structural boundaries suggested by Timbrescape are related to the way a human listener would partition a piece.

Results suggest that Timbrescape captures well the surface features present in a frequency spectrum, as captured by the tristimulus model, on which humans may rely to segment musical information.
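For concreteness, a minimal sketch of the per-frame tristimulus computation, in a common amplitude-based reading of Pollard and Jansson's definition (the function name is ours, and Timbrescape's multi-timescale aggregation is not reproduced here):

```python
def tristimulus(harmonic_amps):
    """Return (T1, T2, T3): the relative weights of harmonic 1, harmonics 2-4,
    and harmonics 5 and above in one analysis frame (after Pollard & Jansson, 1982)."""
    total = sum(harmonic_amps)
    if total == 0:
        raise ValueError("silent frame: tristimulus undefined")
    t1 = harmonic_amps[0] / total           # fundamental
    t2 = sum(harmonic_amps[1:4]) / total    # harmonics 2-4
    t3 = sum(harmonic_amps[4:]) / total     # harmonics 5+
    return t1, t2, t3
```

Since T1 + T2 + T3 = 1 for any non-silent frame, each frame plots as a single point in the triangular space the abstract describes.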

Key words: Music structure, Timbre

[email protected]

8.71 Drumming it in: Towards a greater awareness of individual learning styles in instrumental teaching

Gareth Smith

Graduate of MA Music Education, Institute of Education, London
Co-authored article, just submitted, with Dr. Colin Durrant, Institute of Education, London

This qualitative piece of research was undertaken to explore how people learn to play the drums. A literature review outlines theories of learning, ability, motivation, and the development of musicians. The study considers two young drummers, whom the researcher taught prior to and during the study. Preliminary observations were made about the subjects; both drummers and their mother were interviewed; the boys’ lessons were recorded on minidisc, and notes taken from these recordings. The researcher finds from these sources that the boys exhibit different learning preferences. Findings are interpreted with reference to Gregorc’s Mind Styles™ model. It is asserted in conclusion that, while many factors affect the progress of individuals in any given domain, teachers have a responsibility to be sensitive to students’ individual learning preferences if they are to help students fulfil their learning potential.

Key words: Drumming, Learning styles, Gregorc

[email protected]

8.72 Unveiling the power of music: Socio-historical and psychological perspectives

Stefanie Stadler Elmer

University of Zurich, Switzerland

The easiest and most primitive form of making music is singing. There are plenty of hymns that tell of the positive benefits of music for humans. Music is supposed to enhance moral courage, intelligence, and social competences. The power of music is obvious: at all times, societal institutions such as religion, education, and politics have used music, and especially songs, to convey values and to influence people. Yet it is wrong to attribute only positive effects to music. For a better understanding of the role of music in society and its power, two lines of argument are brought forward: a) a historical perspective and b) a functional analysis of singing from a psychological point of view.

a) Brief insights into the history of a singing culture are presented. The empirical data consist of analyses of song books that were used during the 20th century and of recent observations found in the mass media. These analyses show the ways in which songs are used as symbolic means, and how they represent collective communication. Moreover, they allow us to reconstruct changes in a society’s values and norms over a period of time. b) The functions of singing are analysed. Psychologically, the emotions are the focal point. Collective singing produces feelings of belonging together. This in turn produces the willingness to identify with group values and to set up collective identities (cultural, national, political, etc.). Song singing or music making connects group members and enhances social coherence. At the same time, it marks out non-members. Song singing or music making is never neutral, since it occurs within traditional frameworks and symbolises values. An essential feature of power becomes evident: use and misuse lie close together.

Key words: Effects of music, Functions of singing, Cultural identity

[email protected]

8.73 Online investigation of affective priming of words and chords

Nikolaus Steinbeis, Stefan Koelsch

Max-Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

In an affective priming paradigm, 19 non-musicians were presented with either consonant- or dissonant-sounding chords followed by affectively valenced words, either pleasant or unpleasant in meaning. A considerable body of literature, as well as a previous rating study, already indicated that consonant chords are considered significantly more pleasant than dissonant chords. Participants were instructed to evaluate the emotional valence of the target words. In line with the literature, it was hypothesised that target words would be evaluated significantly faster when preceded by a chord matched in valence (dissonant chord - unpleasant word; consonant chord - pleasant word). To observe the underlying neural processes of affective integration, an EEG was recorded. It was found that matched emotional target words were evaluated significantly faster and with fewer errors than non-matched target words. Additionally, the analysis of the event-related potentials (ERPs) showed a considerably larger centro-parietal N400 for non-matched words compared to matched ones. This ERP is usually interpreted as a classic marker for semantic priming and suggests, in conjunction with the behavioural data, that single chords can communicate semantic information.

To see whether the integration of chords into a previous affective context is subserved by the same neural processes, we conducted a second experiment using the same stimuli but switching the order of prime-target presentation. This time, participants were first presented with a word, followed by either a consonant or dissonant chord, and were asked to evaluate the perceived pleasantness of the chord. Reaction times showed no significant differences between matched and non-matched target chords. Accordingly, the ERP analysis revealed no N400, but a later, globally distributed positivity for chords non-matched in emotional valence. These findings suggest that the global positivity may reflect the processing of a more general mismatch between the sound of the chord and its preceding affective context, and that for non-musicians, even though chords can prime subsequent semantic processing, they do not possess a concrete meaning in themselves.

Key words: Emotion, Event-related potentials

[email protected]

8.74 Algorithmic prediction of music complexity judgements

Sebastian Streich, Perfecto Herrera

Pompeu Fabra University, Barcelona, Spain

Background
With the huge amount of music available in the form of digital audio files today, new methods for management and retrieval are in demand. A key task is therefore to obtain semantically meaningful descriptions of music without the need for costly and unreliable manual annotation. We focus here on the algorithmic prediction of music complexity, which is known to be related to subjective liking and thus forms an interesting descriptor for automated music recommendation.

Aims
In the research presented here, we evaluate the accuracy of algorithmic music complexity predictions. Several algorithms are considered that compute complexity estimates based solely on the music audio signal. Each algorithm focuses on a different musical facet (e.g. rhythm, instrumentation, harmony). In order to test the capacities of these algorithms (individually and jointly), we match their output against subjective complexity ratings collected through a listening test and look for significant correlations.

Method
In the listening test, subjects were asked to rate up to 30 music excerpts randomly picked from a collection of 82 tracks. Rating consisted of judging the complexity of the music and also indicating their familiarity with, and their liking of, each excerpt. Additionally, in a separate part of the survey, data about the subjects’ usual music listening habits were collected. An automatic analysis of musical features was used to build predictive models of the excerpts’ complexity.

Results
In order to evaluate the models, we used a subset of 43 excerpts: those that had been consistently and coherently rated by most of our 16 listeners. With this set, we found some significant correlations between the averaged subjects’ ratings and the algorithmic predictions. In particular, a simple rhythmic complexity model and a model based on loudness fluctuations were by themselves already significant at the p<0.05 level.

Conclusions
While for certain music it seems impossible to give a correct complexity prediction, since among human listeners there will be no agreement either, for other types of music algorithmic complexity predictions turn out to be feasible to a certain degree by relying only on the audio signal.
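The correlation test at the heart of the Results can be sketched as follows. This is an illustrative reconstruction, not the authors’ code: the `pearson_r` helper and all numbers are invented, and the study’s real ratings and model outputs are not reproduced here.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Invented example: mean listener complexity ratings (1-7 scale) vs. one
# model's complexity scores for six excerpts.
mean_ratings = [2.1, 3.4, 5.0, 4.2, 6.1, 1.8]
model_scores = [0.20, 0.35, 0.55, 0.40, 0.70, 0.15]
r = pearson_r(mean_ratings, model_scores)
print(round(r, 3))
```

In the study itself, significance at the p<0.05 level would additionally be tested against the null hypothesis of zero correlation.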

Key words: Music complexity, Algorithmic prediction, Music information retrieval

[email protected]


132 Poster Session I

8.75 The sun, the moon and the self-taught musician

Francesco Stumpo

A lot of young people begin to love music by listening to their friends play a musical instrument, such as a guitar, or by playing an instrument themselves. Generally, for the new generations, “music” is “popular music” and “popular music” is “popular song” (Vulliamy-Lee 1976); therefore, the musical repertoire they identify with most consists of popular songs (Stumpo 2001).

The first thing young people learn to play is a harmonic turnaround based on the fundamental structure of tonal music: I-IV-V.

However, they often play the series of chords mechanically, without a precise consciousness of what they are doing.

The main aim of the poster is to understand the functional role of harmony in a song by relating the chords to the words: lyrics are, in fact, a fundamental part of popular songs. We are going to analyse poetically (Jakobson 1963), texturally (Moore 1992) and harmonically (De Natale 1986) three well-known Italian songs, from the seventies to the present, which talk about the sun and the moon.

The first of them is “La canzone del sole” (The sun song) by Mogol-Battisti, a piece that unites three Italian generations. It is written in hendecasyllables and is musically based on a cyclic sequence, like a baroque passacaglia (Stumpo 1992), of the three fundamental chords: I-V-IV-V-I. In this case the sun is related to a cycle of major chords. The second song examined will be a hit from the eighties which is still well-known today: “Luna” (Moon) by Morra-Togni. In this song we are going to relate the minor chord to the moon. We will then see, in particular, how, in relation to the words and to the texture, series of chords without common tones give a sense of movement, as opposed to series of chords with common tones, which give a static sense.

Finally, we will analyse the change of mode in the song from the nineties “Il sole e la luna” (The sun and the moon) by Ron. In this song we will see the harmonic variation in relation to the spatial variation of the sun and of the moon. We are going to demonstrate that in all three songs music and words move on common ground and in the same way.
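The contrast between chord series with and without common tones can be made concrete with a small sketch. The triad sets and roman-numeral labels below are a generic illustration in C major, invented for this example and not taken from the songs analysed.

```python
# Triads as pitch-class sets (0-11), C major context. Hypothetical helper
# for counting the tones two chords share.
TRIADS = {
    "I":  {0, 4, 7},   # C major
    "IV": {5, 9, 0},   # F major
    "V":  {7, 11, 2},  # G major
    "vi": {9, 0, 4},   # A minor
}

def common_tones(a, b):
    """Number of pitch classes shared by two chords."""
    return len(TRIADS[a] & TRIADS[b])

# I-IV and I-V each share one tone (movement); I-vi shares two (more static).
print(common_tones("I", "IV"), common_tones("I", "V"), common_tones("I", "vi"))
```

On this counting, a move such as I-vi retains more shared pitch classes than I-IV or I-V, which is the static/moving distinction the analysis draws.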

[email protected]

8.76 Multi-agent composition model based on sensory consonance theory

Shintaro Suzuki, Yuriko Hoteida, Takeshi Takenaka, Kanji Ueda

Research into Artifacts, Center for Engineering (RACE), University of Tokyo, Chiba, Japan

This paper presents a multi-agent composition model for understanding human composition. Previous studies in cognitive psychology have employed analytical approaches and have tried to create a model of the human composition process. These approaches seem insufficient because processes of human composition vary and depend on persons and situations. Consequently, we aim to understand synthetic aspects of human composition: the emergence of musical order through interaction among cognitive features. For this purpose, we construct a multi-agent composition model in which each musical note agent interacts with other agents based on cognitive features. The agents learn suitable relationships with other agents through reinforcement learning. At each discrete time step, they perceive the state of the environment (the positional relationship of the other agents), select an action (the distance from the present position to the next one) and receive a reward (the value of the new position according to cognitive features). We generate chords (chord sequences) through simulation using our model and examine what kinds of musical order emerge in them through comparison with music theory and through psychological experimentation. We specifically examine sensory consonance, originally used for the relationship of two tones sounding simultaneously (spatial consonance), as a basic feature of music cognition. Additionally, we propose to extend the concept of consonance to the time dimension (temporal consonance) in consideration of short-term memory (STM). As a result, the generated chords show the order that their component tones are identical to a major scale, from the perspective of music theory, and, using temporal consonance, show an order of “sound clarity” in terms of human impression. We therefore infer the possibility that tonal music structure has been generated through positional determination based on extended sensory consonance. Through this study, we confirm the utility of our approach of using a multi-agent composition model to understand the synthetic aspects of human composition.
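A minimal sketch of the kind of mechanism described above can be given in code. Everything here is invented for illustration: the consonant interval-class set, the one-octave action range, the reward, and the one-step (bandit-style) value update are hypothetical stand-ins, not the authors’ model.

```python
import random

random.seed(0)

# Hypothetical consonance proxy: interval classes (mod 12) treated as consonant.
CONSONANT = {0, 3, 4, 5, 7, 8, 9}
ACTIONS = list(range(-6, 7))  # semitone step from the present pitch to the next

def reward(pitch, other=60):
    """+1 for a consonant interval class against a fixed reference tone, else -1."""
    return 1.0 if abs(pitch - other) % 12 in CONSONANT else -1.0

# A note agent learns, per state (interval class), which step earns reward.
Q = {}
ALPHA, EPSILON = 0.2, 0.1
pitch = 60
for _ in range(5000):
    state = abs(pitch - 60) % 12
    qs = Q.setdefault(state, {a: 0.0 for a in ACTIONS})
    a = random.choice(ACTIONS) if random.random() < EPSILON else max(qs, key=qs.get)
    pitch = min(84, max(36, pitch + a))          # keep the agent in a playable range
    qs[a] += ALPHA * (reward(pitch) - qs[a])     # one-step value update

# After training, the greedy move from the reference tone is consonant.
best = max(Q[0], key=Q[0].get)
print(best, reward(60 + best))
```

The emergent preference for consonant positions is the toy analogue of the musical order the paper checks against music theory.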

Key words: Multi-agent, Sensory consonance, Composition

[email protected]

8.77 Musical interpretation and the content of music

Anders Tykesson

Academy of Music and Drama, Göteborg University, Sweden

In an interview Jean Sibelius talked about his 5th Symphony as “pure music”, which he explained as “musical thoughts”, i.e. “thoughts that could only be expressed in music”. Agreeing with Sibelius’ opinion of absolute music, is it at the same time possible to talk about and to describe a musical content with a view to interpreting the music?

A musical performance is, in my opinion, an act of narration, also in a “pure” musical way. But a piece of written music in itself cannot tell anything except what is written in the score. This implies that a high level of consciousness is necessary for the musician, partly of the structures of the piece, partly of his or her way of interpreting it. The knowledge and experiences of the musician are crucial to the interpretation and to what the performance will tell the listener.

Paul Ricoeur’s model of interpretation, in which understanding of a text is mediated by the explanation of its structures, and interpretation implies an appropriation in the form of self-understanding, is an interesting model for musical interpretation. Hans-Georg Gadamer’s view of the experience of art as a participation in knowledge is a challenge for the musical interpreter, because it makes special demands on consciousness in the interpretation.

Describing what the interpreter will mediate will make him or her more conscious in the interpretation. Accordingly, language, written or spoken, may be a link between the analysis of structures and the sounding interpretation. The question is how language can be used for the purpose of interpreting music. What terms do you use, which metaphors?

In the paper I will present analyses of chamber music which have been used and discussed in practical work on interpretation.

Key words: Musical interpretation, Music theory, Musical hermeneutics


[email protected]

8.78 The Ethiopian lyre Bagana: An instrument for emotion

Stéphanie Weisser

FNRS-Université Libre de Bruxelles, Belgium

Background
The bagana is a ten-stringed box-lyre of the Amhara of Ethiopia. As a paraliturgical and solo instrument played only for religious and meditative purposes, it often creates immediate and intense emotions for both players and listeners. A bagana performance is never staged, and the emotional reaction is generated by musical and contextual means only.

Aims
Based on information collected during four periods of fieldwork in Ethiopia (2002-2005), the present study aims to understand, through multidisciplinary analyses, by which means such a strong emotion is created.

Methods
Structural, textual and acoustical analyses were made in order to determine intrinsic characteristics of the bagana repertoire. Participant observations, films and informal interviews of players and listeners were conducted in order to identify extrinsic emotional characteristics of the private and public performances of the instrument.

Results
The bagana repertoire is characterized by its strongly repetitive nature, and its rhythm uses processes that scramble the time referent. The lyrics of the bagana songs refer mostly to biblical figures, appealing to the range of religious emotions. Acoustically, the bagana produces specific low buzzing sounds which constitute the sonorous identity of the instrument. The singing phonation types used with the bagana are breathy and harsh, and spectral fusion often occurs between instrumental and vocal sounds. These sound colors are culturally coded. Players and listeners report feeling strong emotions rarely expressed through bodily reactions, except for tears. The strong religious significance, the values embodied and the high status of the instrument are of significant importance for the meaning of a bagana performance and the emotions felt during it.

Conclusions
This study of the bagana and the emotions it raises has brought to light several methodological problems, but also its relevance. One of the main goals of bagana playing and listening is to experience strong emotions, and an ethnomusicological study of this instrument would be at least incomplete without taking this factor into account.

Key words: Emotion and traditional music, Perception and musical characteristics, Culturally-coded timbres

[email protected]


Tuesday, August 22th 2006 135

8.79 The relationship between narcissism and music performance anxiety

Jolanta Welbel

Chopin Academy of Music, Poland

Background
In order to apply effective forms of management of music performance anxiety (MPA), there is a need for differential diagnosis in relation to personality characteristics. Previous research on MPA has, through factor and correlation analysis, established the multidimensional structure of this phenomenon. One of the most important cognitive dimensions of MPA is the allocation of attentional resources in music performance. Music-oriented vs. ego-oriented cognitive reactions are characterised at one end by the ability to focus attention on the music and the performed task, and at the other end by ego-centred cognitions and self-presentation concerns. Ego-involved behaviour is connected with the self-esteem of the performer. All these psychological constructs are very close to narcissism as a continuous personality variable.

Aims
The main goal of this research was to find the underlying cognitive mechanism and motivational-affective process of MPA, and to answer the question of whether there is a relationship between MPA and two central characteristics of narcissism: self-enhancement and keeping unrealistic self-standards.

Method
The participants were 100 students from the Chopin Academy of Music in Warsaw, soloists in many concerts, auditions and examinations. All students were administered the Check List of Symptoms and Impact of Performance Anxiety (J. Welbel, 1997) and the Polish version of the NPI (Narcissistic Personality Inventory; Raskin & Terry, 1988), with four distinguished scales: Need for Admiration, Leadership, Vanity, Self-Sufficiency.

Results
A curvilinear relationship occurs between the level of narcissism and the MPA characterised by a negative impact on performance.

Conclusions
The method of boosting and enhancing self-esteem for music performers with narcissistic traits may be counterproductive for reaching the optimal level of activation and music-oriented concentration during public performance.

Key words: Music performance anxiety, Stage fright

[email protected]

8.80 A comparison of the effects of music and physical exercise on spatial-temporal performance in adolescent males

Simon Williams, Samia Toukhsati, Nikki Rickard


Monash University, Australia

The aim of this study was to compare the effects of exposure to music by Mozart and physical exercise on spatial-temporal reasoning performance. Sixty-seven participants aged between 12 and 18 years were randomly assigned to one of three treatment groups: physical exercise, music by Mozart, or silence. A mixed-model design compared pre- and post-intervention arousal (indexed using objective physiological measures), mood (indexed using the Profile of Mood States) and performance on the Paper Folding and Cutting (PF&C) test across the three independent groups. The results of the study supported the hypothesis that the exercise group would show a significantly greater increase in physiological arousal (blood pressure and heart rate) in comparison to the Mozart and control groups. In contrast, there were no observable differences in mood scores across the three groups. Finally, the findings did not demonstrate a performance-enhancing effect of exercise or of exposure to Mozart’s Sonata (K448) on PF&C performance in comparison to the control group. This constitutes another failure to replicate the Mozart effect; in addition, the findings may pose some challenge to the arousal-mood hypothesis in that increases in physiological arousal did not translate into improvements on the PF&C.

Key words: Mozart effect, Arousal-mood hypothesis, Spatial-temporal performance

[email protected]

8.81 Automatic emotion classification of musical segments

Tien-Lin Wu, Shyh-Kang Jeng

Department of Electrical Engineering, National Taiwan University, Taiwan

Most previous studies of emotion in music concentrate on their qualitative relationship, and most music content-based retrieval research focuses on the classification of genres. There is not much work yet on classifying the emotions of musical segments automatically. In this paper a novel approach is proposed to automatically classify the emotions of musical segments. Possible applications include the personal music DJ and music selection for music therapy. The first step of this approach is to extract music segments for which most listeners report a similar emotion. Such segments are called segments of significant emotional expression (SSEE). The SSEE are obtained by letting the subjects continuously record their emotional appraisals (valence-arousal) with the software FEELTRACE while they listen to classical musical works, and extracting the segments for which the ranges of valence and arousal values recorded by 50% and 80% of subjects are smaller than a given threshold. Next, we extract 55 low- and middle-level musical features of those SSEE with the programs MARSYAS and PsySound. The 55 musical features are then processed by feature-selection methods to reduce the dimension of the feature space. Finally, classification algorithms are applied for 2-class (+/- for valence and for arousal) and 4-class (the four quadrants of the valence-arousal emotional space) classification. Results from 6 feature-selection methods (Exhaustive Feature Selection, Genetic Culling, Hierarchical Dimensionality Reduction, Independent Component Analysis, Multidimensional Scaling, and Principal Component Analysis) and 10 classification algorithms (Ada_Boost, Backpropagation_CGD, Backpropagation_SM, Backpropagation_Stochastic, CART, Ho_Kashyap, Least_Square, Normal Density Discriminant Function, K-NN, and Support Vector Machine) for 2-class classification, and MANOVA for 4-class classification, have been compared. The best labeling accuracy, by cross-validation, is very satisfactory (90%). From the labeled results we also find some interesting relationships between the emotion class and the 55 musical features. For example, frequency centroid, spectral dissonance, tonal dissonance, short-term loudness maximum, and sound volume are found to be the dominant features for the 4-class classification.
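The feature-selection-plus-classification stage can be sketched in miniature. The data, the separation-based ranking and the nearest-centroid classifier below are invented simplifications; the study’s actual MARSYAS/PsySound features and its listed selection and classification methods are not reproduced here.

```python
# Each segment: (feature vector, valence label). The three features are
# hypothetical stand-ins for e.g. frequency centroid, spectral dissonance,
# and loudness.
DATA = [
    ([0.9, 0.2, 5.1], "+"), ([0.8, 0.3, 4.9], "+"), ([0.85, 0.1, 5.3], "+"),
    ([0.2, 0.8, 1.2], "-"), ([0.3, 0.7, 0.9], "-"), ([0.25, 0.9, 1.1], "-"),
]

def class_means(data):
    """Per-class mean feature vector."""
    sums, counts = {}, {}
    for x, y in data:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def select_features(data, k):
    """Rank features by between-class mean separation; keep the top k."""
    m = class_means(data)
    mp, mn = m["+"], m["-"]
    ranked = sorted(range(len(mp)), key=lambda i: -abs(mp[i] - mn[i]))
    return ranked[:k]

def nearest_centroid(x, data, feats):
    """Assign x to the class whose centroid is closest on the kept features."""
    m = class_means(data)
    def dist(c):
        return sum((x[i] - m[c][i]) ** 2 for i in feats)
    return min(m, key=dist)

feats = select_features(DATA, 2)
pred = nearest_centroid([0.7, 0.25, 4.0], DATA, feats)
print(feats, pred)
```

Real pipelines would cross-validate the whole chain, as the abstract’s 90% figure implies, rather than evaluate on the training data.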

Key words: Emotion, Segmentation, Classification

[email protected]

8.82 Effects of state anxiety on performance in pianists: Relationship between Competitive State Anxiety Inventory-2 subscales and piano performance

Michiko Yoshie, Kazuo Shigemasu

Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Japan

Music performance anxiety (MPA) is a serious problem among musicians. However, few questionnaires to measure MPA have been validated. Recently, an attempt was made to apply the Competitive State Anxiety Inventory-2 (CSAI-2; Martens, Vealey, & Burton, 1990) to MPA (Miller & Chesky, 2004). CSAI-2 was developed to assess pre-competition anxiety in athletes, promoting the advancement of research concerning the relationship between anxiety and performance in sport. The questionnaire is composed of three subscales: cognitive anxiety, somatic anxiety, and self-confidence. So far, only the self-confidence subscale has been found to consistently predict performance. Jones and Swain (1992) added “direction” scales to CSAI-2 to assess the interpretation of each anxiety symptom. Athletes perceiving their cognitive and somatic anxiety as facilitative to performance tended to perform better (Jones, Swain, & Hardy, 1993; Raudsepp & Kais, 2002).

The aim of the present study was to confirm the reliability of a modified version of CSAI-2 and to examine the relationship between the subscales of the questionnaire and music performance. Some items of CSAI-2 were modified according to the characteristics of music performance situations, and the result was named the “CSAI-2 for musicians” (CSAI-2M). 51 amateur pianists (36 males and 15 females) aged 18-26 years (M = 20.6; SD = 2.3) responded to CSAI-2M about 1 hour before their performances in a concert. Immediately after the performances, they rated the quality of their own performance on 11 scales, such as “articulation”, “sound quality”, “dynamics”, and so on.

Internal consistencies of the CSAI-2M subscales were considered sufficient (α = .70-.94). Average performance score was significantly related to self-confidence intensity (r = .46, p < .01) and to the cognitive anxiety direction scale (r = .37, p < .05). Multiple regression analysis showed that self-confidence intensity had the most consistent positive effect on performance, affecting 6 scales of performance (p < .05). Cognitive anxiety intensity had a negative effect on “technique” and “accuracy” (p < .05). Of the 2 direction scales, only cognitive anxiety direction had a significant effect on performance (p < .05).
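The internal-consistency figure reported for the subscales, Cronbach’s α, is computed from item variances and the variance of the total score. A sketch with invented Likert responses (the CSAI-2M items themselves are not reproduced here):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Total score per respondent, summed across items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 3-item subscale answered by 5 respondents (1-4 Likert scores).
subscale = [
    [1, 2, 3, 4, 4],
    [2, 2, 3, 4, 3],
    [1, 3, 3, 4, 4],
]
print(round(cronbach_alpha(subscale), 2))
```

α rises toward 1 as items covary; a range of .70-.94, as reported, spans acceptable to excellent consistency.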

Acquiring self-confidence is deemed important to enhance music performance. Also, interpreting symptoms of cognitive anxiety as facilitative can improve performance regardless of its intensity.

Key words: Music performance anxiety, Competitive state anxiety inventory-2, Music performance


[email protected]

8.83 The recognition of emotional qualities expressed in music

Vanda Lucia Zammuner, Elisabetta Petitbon

Department of Developmental and Social Psychology, Padua University, Padua, Italy

Does people’s emotion knowledge enable them to recognize the emotional quality expressed by a musical piece, including subtle differences in quality, i.e., to distinguish Joy (J) from Sadness (S), but also S from Tenderness (T), Anger (A) from Fear (F), and S from F? To test this hypothesis, in a first experiment subjects (N=24, half of whom were musicians) expressed their perception while listening to music excerpts, at two points in time for each piece, i.e., at 40 sec and at 80 sec. Their latency in responding (reaction time, RT), as well as their verbal rating on a 7-point T-0-S Likert-type scale, were recorded. Experimental stimuli were 40 classical music pieces, for orchestra or solo instruments (e.g., piano; violin and piano), each lasting 85 seconds, selected from the XVII-XX century repertoire (Fauré, Grieg, Beethoven, Chopin, etc.), that contained the structures characterizing (Sloboda & Juslin 2001) either S or T (e.g., slow tempo for both; soft timbre for T; dull timbre for S), or structures expressive of O. Pieces either expressed a single emotional quality, namely T-T, S-S, O-O, or changed it midway, i.e. had the sequence T-S, T-O, or S-O. 28 new stimuli were created by reversing original sequences, to obtain S-T (N=20), O-T (N=4), and O-S (N=4) sequences, none of which was exhibited by the original pieces. The final stimuli therefore comprised 68 musical pieces, subdivided into 9 types as a function of what emotion the piece expressed at 40” and 85”. The results overall confirmed the hypothesis, but also showed that perception is influenced by such variables as musical expertise and the “location” of an emotional quality within a musical piece (e.g., Tenderness was most easily perceived in single-emotion pieces, and least distinguished from Sadness when it followed it rather than preceding it). Two additional experiments - whose results are currently under analysis - studied subjects’ perception of Fearful vs Angry vs Sad pieces, and Sweet vs Sad pieces.

Key words: Emotions and music recognition

[email protected]


9 Symposium: Gesture, anticipation and expression: The origins of human musicality II

Convenor: Maya Gratier

Gestures give shape to the living spaces of human exchange. They frame speech and root linguistic meaning. Gesture is fundamentally expressive, and its spontaneous expressiveness touches others directly. Gesture that is oriented, in so far as it is necessarily a movement and thus temporal, does not only carry expression but also sets into motion the latent expressiveness of others. Intentional gesture acquires expressiveness, and it entrains and engages the minds and bodies of others. It may be precisely because it carries an implicit knowing and because it is incomplete or indeterminate that expressive gesture fosters intersubjective links. We will focus on the expressive dynamic in two areas of universal human activity: in the first nonverbal dialogues between mothers and babies, and in the creation and performance of musical works. Research on musical expression is concerned with bodily gesture. There are at the very least three forms of musical bodily gesture: instrumental gesture (playing an instrument), vocal gesture (singing, but also speaking, screaming, laughing, moaning...) and rhythmic gesture (organising sound into isochronous periods by marking the pulse). These three forms of gesture are rooted in the experience of the body and in its movements, and they are to a large extent the starting points for expressiveness in any musical form. The connection between, on the one hand, movement and gesture evoked or performed to accompany music and, on the other hand, the musical form itself can be seen as a “projection” of the body into this form. Prelinguistic infants too rely on their bodies to express themselves without language. An infant at birth expresses social intentions with its body and voice. We know the extent to which the presence of others attracts, motivates and literally puts into motion the minds and bodies of infants. The infant learns to coordinate his gestures, his vocalisations and his facial expressions among themselves and with those of others. The multimodality of infant expression supports his extraordinary communicability and his natural musicality.


140 Symposium: Gesture, anticipation and expression: The origins of human musicality II

9.1 Musicality and the human capacity for culture

Ian Cross

University of Cambridge, UK

This paper will take as its starting point recent research (Tomasello et al., 2005) which suggests that the human capacity for culture rests on the ability of humans to “share intentionality”, and will sketch some of the ways in which music can be interpreted as fulfilling functions that underpin and sustain this ability. It will link the concept of shared intentionality to Bruner’s (1986) notion of “narrative” as a mode of thought which deals in human or human-like intentions, in outlining a theory of meaning in music within which it is proposed that distinct dimensions of meaning in music are derived from different sources in the evolution of the human species. The term “meaning” here is intended to embrace the experience of both motivational (affective) and conceptual-intentional states in processes of engagement with music, and it is suggested that three distinct dimensions can be hypothesised as underlying the experience of meaning in music and musical interactions. One dimension relates to aspects of our experience of the world that are conditioned by our biological heritage and that may have some cross-species generality; a second is rooted in the specific types of human interaction, and interpretations of human interaction, that underpin our capacity for cultural interaction and learning; while a third is postulated as deriving from the particularities of the cultural contexts in which we develop and come to play a part. I shall propose that the fluidity of meaning that arises from the interplay of these different dimensions enables music to be efficacious in the formation and consolidation of the capacity for shared intentionality that underpins the human capacity for culture.

Key words: Shared intentionality, Biological heritage, Cultural interaction

[email protected]

9.2 Trajectories of expression in musical interaction and the need for common ground

Maya Gratier

Université de Paris X - Nanterre, Paris, France

We will focus on the dynamic aspects of expression, as events that unfold in time and that display a future-orientation. First, we present research on communication between mothers and infants which emphasizes both the natural expressiveness of infants and the ways in which cultural belonging shapes and orients expressions as they emerge. We show that mothers and infants collaboratively build shared repertoires of expression which are embedded in patterns of cultural habitus, and which support and motivate creative forms of expression. We go on to present data on two jazz musicians playing together at rehearsals as they are getting to know each other. As they rehearse known pieces together, the musicians learn to anticipate specific improvised patterns which come to constitute a common musical signature. In both these contexts, we look at how both familiar and novel forms of shared expression, audible and visible, emerge within the framework of a growing intimacy and intersubjective understanding. Our analyses are based on the microanalysis of audio and video data, which reveals the dynamic temporal coordination of collaborative expression. The human voice, musical instruments, gestures, and reciprocal gaze are shown to coordinate a fundamental motivation to explore meaning together while signifying a shared feeling of belonging.

Key words: Expression, Improvisation, Intersubjective interaction

[email protected]

9.3 Meaning in motion

Benjaman Schogler

University of Edinburgh, UK

By moving together in sympathy and with dynamic sensitivity, mother and infant share time in a dance of voice, hand and face. Although the modality of information is constantly shifting, the ability to coordinate and focus their attention remains centred around coherent musical units. There is little doubt that, when sharing time constructively, whether engaged explicitly in musical forms of play or not, the exchange and interaction between mother and infant is musical. But what is the information that makes this possible? How can we coordinate and share motives and gestures expressed across a myriad of modalities? Experiments that explore the ability of human beings to translate expressive or emotional gestures into different modalities will be presented in this talk. There is something in the pattern of flow in music and dance that communicates the underlying emotion or expression. What is this something? Can it be measured? And how is it shared and translated between artists? Experiments studying the transmission of expressive information between musicians and dancers, investigated using high-resolution 3D motion capture systems, will be presented. Through the application of General Tau theory (Lee, 2004), evidence demonstrating the path of coherent neural information about the control of action will be presented, and a new method for measuring “expression” demonstrated.

Key words: Expressive gestures, Dance, Voice

[email protected]

9.4 The “attunement” between children and a musical machine

Anna Rita Addessi

University of Bologna, Italy

Background
The concept of “interaction” is fundamental in contemporary epistemology. In the field of studies on musical development, the important role played by child/adult interaction has been underlined (Stern 1987, 2004; Imberty 2002, 2005; Trevarthen 2002) and, in pedagogy, it has been observed that the interpersonal dimension is a potential source of musical creativity (Young 2004, Mazzoli 2003). The point of interest is to observe what type of musical development arises when this interaction takes place between a child and a machine.


Aims
This paper presents some results of the DiaMuse project dealing with the interaction between children and an interactive musical system, the Continuator (Pachet 2003). This system is particularly interesting because it is able to learn and produce music in the same style as the human playing the keyboard. As a consequence of this design, the phrases generated by the system are similar to, but different from, the phrases played by the user.

Method
An experimental protocol based on observational methodology and some didactic experiences was devised to observe young children aged 3-5 years playing with the Continuator. In the protocol, three sessions were held, once a day for 3 consecutive days. In every session, the children were asked to play on the keyboard in 4 different situations: with the keyboard alone, with the keyboard connected to the Continuator, with another child, and with another child and the Continuator.

Results
It was possible to observe a sort of “life cycle” of interaction and some micro-processes similar to those observed in child/adult interaction. During the interaction with the system, the children reached high levels of “well-being”, very similar to those described in the Theory of Flow by Csikszentmihalyi (1990).

Conclusions
This paper will present some microanalyses which focus on the nature of the child/Continuator interaction, proposing an interpretation of the data in the light of the theories elaborated by Stern and Imberty in the field of child/adult interaction and musical development.

Key words: Infant-machine interaction, Flow, Musical development

[email protected]


10 Education I

10.1 Music students at a UK conservatoire: Identity and learning

Rosie Burt, Janet Mills

Royal College of Music, London, UK

Background
The study reported here is part of ’Learning to perform: instrumentalists and instrumental teachers’ (a project funded by the Economic and Social Research Council’s Teaching and Learning Research Programme). At the project’s core lies a three-year longitudinal study of students, teachers and institutional managers at a UK conservatoire. As we build a theory of expertise in musical learning, we consider how conservatoire students - who aim to become professional musicians - describe their musical identity. We build on existing research by investigating how different descriptions of identity reflect different approaches to learning.

Aims
We ask: 1. How do students at a UK conservatoire describe their musical identity? 2. How can this inform our development of expertise theory in musical learning?

Method
Eighty-eight conservatoire students completed a semi-structured questionnaire in April 2005, which included a specifically designed question probing identity. Students were offered up to 30 “crosses” (X) to put next to one or more of 19 pre-determined identities (e.g. “jazz musician”; “student”; “learner”). A ranked list of identities was produced for each individual and the data were entered into SPSS for quantitative analysis. Twenty of these students took part in semi-structured interviews in January 2005 and were asked to verbally describe their identity. Interviews were transcribed, coded and analysed qualitatively.

Preliminary results and conclusions
The identity most often ranked first is “musician” (n=47), followed by “music student” (n=32) and “learner” (n=22). Twenty students describe themselves primarily as an “instrumentalist” (i.e. clarinettist, pianist), but individual students vary as to whether they wish to identify themselves in this way. One student, for example, is categorical that she will not describe herself as a “pianist” until she has an international reputation. We suggest that identity can be linked to these musicians’ aspirations and approaches to learning and can thereby inform our development of expertise theory in musical learning.




Key words: Identity, Learning, Conservatoire

[email protected]

10.2 An exploration of the formation of primary school student-teachers’ beliefs regarding creative music making

Natassa Economidou Stavrou

Department of Education, University of Cyprus, Cyprus

Background
Research on music teaching in primary schools has investigated generalist teachers’ and student teachers’ low musical abilities and low confidence regarding teaching music. Current findings from Cyprus suggest that Music is one of the most difficult and demanding subjects for Cypriot teachers and student teachers to teach, basically because it requires instrumental and vocal skills, as well as what are called “creative skills”. As a result of this low confidence, teachers often avoid teaching music.

Aims
This paper reports on an attempt to create a fruitful environment for 64 primary school student-teachers attending a Music Methods Course at the University of Cyprus, most of them with little or no musical knowledge, to release their creative potential in music and discover, through active participation: a) what creative music making is all about, b) why it is important in children’s music education, and, hopefully, c) that it is more feasible than it sounds.

Method
The focus in the first three weeks of this six-week project was on the student teachers’ participation in short-scale experiential activities employing conventional and non-conventional music making. At the end of the third week, the student teachers were asked to form groups of 8, write a story, create music and sound effects to support it, and finally perform it with costumes and staging. Throughout their meetings and rehearsals in the three weeks that followed, the student-teachers were asked to keep a reflective diary recording their progress and emotions during the whole procedure.

Results and Conclusions
On performance day, the student teachers presented their “sound stories” in the University Concert Hall. At the end of the performance, they responded to a questionnaire with open-ended questions, in which many of them stated that it was a unique experience, which made them realize that everyone can be creative and musical, even without any special musical skills. 
Most of the student teachers stressed the importance of the steps they followed and the phases they went through, and talked about the palette of emotions that evolved. They also stressed that this project helped them to form a more positive attitude towards music and to feel more confident regarding their musical and creative skills. Last but not least, they stated that they would like to engage their future students in similar creative music activities, since they know now that. . . it is not as peculiar as it first sounded.

Key words: Creativity, Teacher education, Performance

[email protected]



10.3 External representations in teaching and learning of music

Amalia Casas Mas, Juan Ignacio Pozo Municio

Universidad Autónoma de Madrid, Spain

There seems to be an intuitive regularity within the formal context of learning in music, but a different pattern within the non-formal context (oral tradition). We started by analyzing the formal context (academic tradition) in order to compare it in further research. This project aimed to study explicit discourse about the concept of music and about teaching and learning music, in relation to implicit beliefs and conceptions, through the analysis of score learning (an external representation). Students of the Conservatorio Superior of Madrid worked on two kinds of pieces in two different languages, classical and contemporary, and this task activated different learning strategies in the students. As with the interpretation of other graphical information such as maps, diagrams and graphics, we observe three processing levels in learning scores. Students took a classical or contemporary work and studied it for a week. They received one of two kinds of instruction: “study it in order to play it in a week” or “study it in order to teach it in a week”. Then everyone answered a questionnaire with two sections: the first about how they had studied the piece, the second about how they would teach it to somebody else. The results show differences in the use of the processing levels between the two kinds of language (classical and contemporary), and differences in the implicit conceptions (direct, interpretive and constructive) used depending on whether the student had received the first or the second instruction, and between the first and second parts of the questionnaire. Finally, some relations emerged between the use of a given processing level and a given implicit conception, and opposite levels or conceptions were not used together.

Key words: External representations, Implicit conceptions, Learning strategies, Processing levels, Classical language, Contemporary language

[email protected]

10.4 Conceptions of musical ability

Susan Hallam

Institute of Education, University of London, UK

Background

Historically, musical ability has been conceptualized in relation to aural abilities. Recently, this view has been challenged. Musical ability is now viewed by several authors as a social construction, acquiring different meanings in different cultures, in sub-groups within cultures, and at the individual level.

Aims

This study aimed to explore conceptions of musical ability in the United Kingdom.

Method
An inventory adopting a five-point rating scale was developed to assess individuals’ perceptions of



musical ability. The inventory included 77 statements, derived from previous qualitative research, categorized into 21 themes (Hallam, 2003). The inventory was completed by 660 individuals ranging in age from 14 to 90 years, with varying levels of engagement in music making. There were 213 males and 447 females.

Results
Musical ability was most strongly perceived as relating to a sense of rhythm (mean 3.84), followed by the ability to understand and interpret the music (3.74), to express thoughts and feelings through sound (3.73), being able to communicate through sound (3.67), motivation to engage with music (3.56), personal commitment to music (3.48), and being able to successfully engage musically with others (3.40). Least important were having technical skills (3.03), being able to compose or improvise (2.99), being able to read music (2.77), and understanding musical concepts and musical structures (2.68).

Factor analysis revealed 6 factors with eigenvalues greater than 2. The first was concerned with being able to read music and play an instrument or sing. The second focused on musical communication, the third on valuing, appreciating and responding to music, and the fourth on composition, improvisation and the skills required for undertaking these. The fifth factor related to personal commitment to music, motivation, discipline and organisation, and the sixth to rhythmic and aural skills.

Conclusions
The findings suggest that the construct of musical ability is perceived broadly in the general population. The high proportion of participants stressing the importance of having a sense of rhythm may reflect the characteristics of popular music, where “the beat” is central. The stress on motivation and commitment also suggests an awareness of the time required to successfully develop musical skills.

Key words: Musical ability, Perceptions

[email protected]

10.5 University to career transitions: Finding a musical identity

Karen Burland

University of Leeds, UK

Background
This paper discusses the role of identity formation in shaping the career choices of undergraduate musicians. Identity theories suggest that an individual’s sense of self can serve as a critical motivator for behaviour and provide a framework for coping strategies. For example, the theory of possible selves (Markus & Nurius, 1986; Cross & Markus, 1991) is an important potential framework for understanding an individual’s motivation to pursue high levels of expertise in a particular field. Possible selves are thought to be based on past representations of the self, but are different from the current self, though connected to it (Markus & Nurius, 1986). Possible selves are unique to the individual, although “social” in that they may be the result of previous comparisons with others: “what others are now, I could become” (ibid: 954). Theories of provisional



selves (Ibarra, 1999) and success identities (Solberg et al., 1998) are also useful frameworks when examining the process of becoming a musician.

Aims
This paper considers the relevance of identity theories for understanding musicians’ career choices, highlighting that an individual’s musical and personal identity is central to his/her decision to become a professional musician.

Method
A two-year longitudinal interview study was conducted with 32 undergraduate music students attending either a university or a music college. Interviews were conducted once every three months with all participants and explored the participants’ daily experiences with music, their motivations, coping behaviours, self-perceptions (as individuals and as musicians) and interactions with other people (teachers, peers, parents). The data were analysed using quantitative and qualitative techniques.

Results
The results indicate that an individual’s musical identity plays a central role in motivating personal and career goals and has a significant impact on the individual’s ability to cope with negative musical experiences. A musician’s ability to cope emerged as a central factor in determining career choice: without adequate coping strategies, the musicians were unable to withstand the inevitable struggles associated with becoming a professional musician.

Conclusions
An individual’s musical identity undergoes numerous changes during the transition between training as a musician and making career choices. Furthermore, the link between a musician’s musical identity and his/her ability to cope is central to becoming a professional musician. The findings have implications for educators in terms of understanding the specific needs of musicians undergoing career transitions and providing the necessary skills and support to succeed.

[email protected]

10.6 Children’s practice of computer-based composition as a form of play

Bo Nilsson

Kristianstad University, Sweden

Music in different forms is significant in young people’s lives. Due to the media revolution in our modern society, children today learn a lot about music on their own by taking part in an increasing production of musical cultures. In this paper, different forms of play are applied as a way to reach a deeper understanding of children’s creative music making.

This article describes a 2-year empirical study of nine 8-year-old Swedish children creating music with synthesiser and computer software. The research questions are aimed at: (a) clarifying the creative processes young children employ when they create music using digital tools, (b) describing and analysing the musical outcomes that are produced by the children as a result of this process, and (c) reaching a deeper understanding of what creative music making means to the children.



The tasks assigned to the children were framed as invitations to create music for different pictures. Step by step, computer MIDI files of the composition processes were systematically collected, observations were made, and interviews were carried out with each of the participants. A theoretical framework called the ecocultural perspective, developed by the author, was applied in the analysis. This ecocultural perspective is based on four theoretical areas: (i) musical learning and creative activities in informal and everyday situations, (ii) oral practice, (iii) theories of play, and (iv) how these three are linked to chance, uncertainty and unpredictable events.

In the analysis, five variations of the children’s practice of composing were identified, each with a different object in the foreground: (i) the synthesiser and computer, (ii) personal fantasies and emotions, (iii) the playing of the instrument, (iv) the music itself, and (v) the task. Findings also provide evidence that young children are able to create music with form and structure.

In this paper, the findings will be demonstrated and further discussed from the above-mentioned ecocultural perspective, in which play is considered especially important as a way of creating meaning in musical activities.

Key words: Musical creativity, Play, Computer-based composition

[email protected]


11 Emotion I

11.1 Walking and playing: What’s the origin of emotional expressiveness in music?

Bruno Giordano1, Roberto Bresin2

1CIRMMT, Schulich School of Music, McGill University, Montréal, Québec, Canada
2Department of Speech, Music and Hearing, Royal Institute of Technology (KTH), Stockholm, Sweden

Background
A recent review compared musical and vocal expression of emotions. Strong acoustical similarities between the two domains were identified, supporting the hypothesis that emotional expression in music originated as an imitation of vocality. Alternatively, it might be hypothesized that such commonalities are due to a common origin of these domains in motor activity, influenced by the emotional state. We tested this view by studying human locomotion sounds, investigating both production and perception.

Aims
We tested whether [a] emotional intentions affect acoustical structure in ways similar to musical performance; [b] listeners recognize emotional intentions; [c] recognition criteria resemble the acoustical means of emotional expression; [d] emotion recognition is influenced by the degree of musical expertise.

Method
[a]: Eight musically-untrained participants were asked to walk as if angry, happy, fearful, sad, and in an emotionless way. Signals were characterized with 21 descriptors inspired by the literature on music performance and source perception. [b]-[d]: Fourteen listeners were tested with respect to their ability to recognize the properties of the walkers (emotion, gender, weight, shoe size, sole hardness). A variant of the semantic differential was used, with bi-polar rating scales defined by one adjective and its contrary (e.g., angry - not angry). Acoustical response criteria were modeled using regression procedures.

Results
[a] Strong similarities between walking and musical expression of emotions were found with respect to acoustical variables likely to have been controlled by walkers (tempo, timing, and level,




but not timbre and articulation). [b] Recognition performance was above chance level, although emotions were less well recognized than other walker properties (e.g., gender). [c] Recognition performance was unrelated to the level of musical expertise of the listeners. [d] Emotion recognition was mainly based on signal level and temporal structure.

Conclusions
It might be argued that similarities between music performance and locomotion are observed because the expression of emotions in music originates, at least in part, as an allusion to locomotion sounds. Alternatively, the similarities might be explained by the common origin of music and locomotion (and vocality) in motor activity, modulated by emotional states. We find this latter view more plausible.

Key words: Emotions, Locomotion, Source perception

[email protected]

11.2 What music appears in musical peak experiences?

Alf Gabrielsson

Department of Psychology, Uppsala University, Sweden

Background and Aims
Based upon a large number of reports on Strong Experiences with Music (SEM), a descriptive system for such experiences has been developed with seven main categories - General Characteristics, Physical Reactions and Behaviours, Perception, Cognition, Feelings/Emotions, Existential and Transcendental Aspects, and Personal and Social Aspects - and with increasing specification of reactions at two lower levels of the system, in all some 150 aspects (Musicae Scientiae, 2003, 7, 157-217). The questions treated in the present paper concern (a) what types of music occur in SEM, (b) how they may be related to personal and situational factors, and (c) how they relate to different reactions in the SEM descriptive system.

Method
An attempt was made to place each piece of music mentioned in the SEM reports (more than 1300) into a limited number of musical categories/genres and then to investigate their relationship to various personal (e.g., gender, age, musical background) and situational (e.g., live or reproduced music) factors and to different SEM reactions.

Results
Any classification of music meets with difficulties. We decided upon 15 musical categories/genres, which may however be partly overlapping (i.e., not mutually exclusive). Roughly half or a little more of the pieces belonged to “classical” music from various epochs (including religious and scenic music), the remaining slightly less than half to genres such as jazz, rock, pop, dance music, tunes, etc. (“popular” music). There were marked differences with regard to age and gender - the older the people, the more “classical” music; the younger, the more “popular” music. The biggest difference appeared between older women and younger men. With regard to musical experience there was less difference. The majority of SEM reports referred to experiences as listener rather than as performer, and live music was more frequent than reproduced music. Other interesting relationships will be briefly reported as well.



Conclusions
Strong experiences may be elicited by many different kinds of music in interaction with various non-musical (personal, situational) factors.

Key words: Music experience, Musical genres

[email protected]

11.3 Peak experience in music: A case study with listeners and performers

Sujin Hong

Music College, Seoul National University, South Korea

Measuring peak experience (Maslow, 1962) while listening to or performing music is difficult, as the experimental strategies and measurements are limited. Many former empirical studies in diverse fields concerned flow theory, which is intimately related to peak experience in view of the emotional transition, e.g. G & C (1976), Jackson (1996), Csikszentmihalyi (1990), and O’Neill (1999). Lowis (1998, 2002) measured peak experience in music. The present study attempts to reveal the phenomena of peak experience while both listening to and playing music. Based on the fact that music can abruptly induce deep emotional change, this research adopts two basic hypotheses: that the response of peak experience can be delayed, and that peak experience can easily be interrupted by an experimental condition. In this study, peak experience is defined as “emotional climax felt in music”, and the emotional climax or peak was measured mainly by questionnaire, as a reappraisal method. Experiments 1 and 2 investigated the emotional climax or peak in two listener groups, of fifteen non-music-majoring students and fourteen piano-performance-majoring students respectively, whereas experiment 3 was conducted with a performer group of twelve piano-performance-majoring students at Seoul National University. In experiment 1, the participants listened to music twice. After listening, they answered a questionnaire measuring the frequency and degree of emotional climax or peak. Also, during the second listening, the listeners manipulated the sound mixer’s volume dial (up or down) to indicate the point of emotional climax or peak. Experiment 2 was conducted in the same way as experiment 1. In experiment 3, the participants played the same music, and after their second playing they answered the same questionnaire as in experiments 1 and 2.

The results showed that the mean value of the frequencies and degrees of emotional climax or peak differed slightly among the groups as well as among individuals. Also, familiarity with and liking of the music had little to do with emotional climax or peak. In addition, the specific musical structures associated with psycho-physiological characteristics (Sloboda, 1991) seemed to be related to the variables of the different emotional climaxes or peaks. Finally, this research draws considerable attention to the delay and reappraisal of emotional climax or peak for further study.

Key words: Peak experience, Listener, Performer

[email protected]



11.4 Strong experiences in music, their categorisation and connection with personality traits

Hendrik Saare1, Jaan Ross2

1University of Tartu, Estonia
2University of Tartu and Estonian Academy of Music and Theatre, Estonia

Background
A musical experience is a very complex phenomenon, which is why it has been studied so little in the field of music psychology. Many different interacting factors influence the musical experience, such as the person’s musical preferences, the situation in which the experience takes place, and the person’s attitudes and personality traits (Gabrielsson 2001).

Aims
The aim of this study was to clarify how people describe their musical experiences and to what extent these experiences are related to different personality traits. A further aim was to find similarities and differences between people professionally related to music and people not professionally related to music.

Method
60 people professionally related to music and 30 people not professionally related to music, between 17 and 53 years of age, were asked to write descriptions and fill out questionnaires to gather information about the character of their strongest musical experience. The questionnaires were analysed and categorised using Alf Gabrielsson’s and Siv Lindström Wikt’s methodology (2003). To gather information about their personality traits, the subjects were also asked to fill out a personality questionnaire, EPIP-NEO (Mõtus 2003), which is the Estonian equivalent of the classic five-factor personality model NEO PI-R (Costa & McCrae 1994). Inter-group differences were also analysed.

Results
The results of this study show that, regarding musical experiences, physiological reactions and positive emotions were reported most often. The group professionally related to music described the music and the performance more from the structural aspect of music. Physiological reactions were reported more by the group not professionally related to music. Regarding personality traits, openness provided the most interesting results. 
High scores of openness were related to describing music and performance from the structural aspect, as well as to physiological reactions.

Conclusions
In my study, I used Gabrielsson’s and Lindström Wikt’s methodology, which proved to be adequate for categorising strong musical experiences. It is clear that the quality and nature of the musical experience is affected by one’s personality (especially the dimension of openness), but it is yet to be determined to what extent. If asked what role music as such has in a musical experience, it can be answered as follows: situational factors, one’s emotional condition, personality traits and other important factors are the prerequisites for having a strong musical experience, which is prompted by the music itself.

Key words: Emotions, Music experience, Personality

[email protected]



11.5 Quantification of Gabrielsson’s relationships between felt and expressed emotion

Paul Evans1, Emery Schubert2

1University of Illinois at Urbana-Champaign, USA
2University of New South Wales, Australia

This study examines empirically the relationship between the emotional quality one can attribute to musical stimuli (expressed emotion) and the subjective emotional response one can have as a result of listening to music (felt emotion). Intuitively, the relationship between the two would appear to be positive. That is, in response to music, the listener usually feels the same emotion that the music expresses. According to recent research by Gabrielsson, however, this assumption seems to be simplistic, and several other relationships between expressed and felt emotion are possible.

This study aimed to provide empirical support for Gabrielsson’s claim by quantifying the occurrence of each of the relationships that he proposed. Participants (45 undergraduate students from the University of New South Wales) responded to questions about both expressed emotion and felt emotion for Pachelbel’s Canon, Advance Australia Fair (the Australian national anthem), and one or two pieces of their own selection. Quantitative criteria for each of the relationships outlined by Gabrielsson were developed, and the data were mapped onto the two-dimensional emotion space.

Results show that the positive relationship only occurred, on average, in 70% of cases. Qualitative data suggested a variety of explanations for the occurrence of other types of relationships. Implications for practices that assume a positive relationship, such as mood induction procedures and music therapy methods, are discussed.

Key words: Musical imagery, Emotion

[email protected]

11.6 Emotional communication mediated by two different expression forms: Drawings and music performances

Teruo Yamasaki

Faculty of Human Science, Osaka Shoin Women’s University, Japan

Music and painting/drawing are different art forms, depending on different modalities. However, both of them can convey the emotional intentions of artists. Moreover, it is often said that a work in one art form influences a work in another art form, as with “Pictures at an Exhibition”, composed by Moussorgsky. In that case, the emotional intention expressed by the painter was transformed into a musical composition by the composer. This study investigates such an inter-modality process experimentally. The study consists of four experiments. In the first experiment, 12 students on a design course were asked to express seven emotions by drawing simple geometrical figures. In the second experiment, 27 students not on a design course judged the emotional intentions of the drawings. As a result of this experiment, the one drawing which conveyed the intended emotion most correctly was chosen for each emotion. The correct communication rates of these drawings

Page 154: Abstract Book

154 Emotion I

ranged from 52% to 93%. In the third experiment, 10 students on neither a design nor a music course were asked to express the seven drawings chosen in the second experiment by playing a MIDI drum improvisationally. In the last experiment, 61 students on neither a design nor a music course listened to the music performances from the third experiment; 31 of them (the drawing group) judged which drawings the performances were intended to express, while 30 of them (the emotion group) judged the emotional intentions of the drawings which the performances were intended to express. Analysis of the performances in the third experiment showed that the performances for each drawing differed significantly in mean sound level, standard deviation of sound level, and mean interval of beats. The results of the last experiment revealed that the correct communication rates of both listening groups (27% and 28%) were very similar and significantly higher than chance level (14%). Furthermore, multiple regression analysis between the responses of each listening group and the acoustic features of the performances showed that the interpretive rules of the two listening groups were similar. Based on these results, this study discusses the inter-modality process in which emotional intention expressed in drawings is conveyed through music performance.

Key words: Emotion, Communication, Inter-modality

[email protected]


12 Symposium: Acquired Musical Disorders

Convenors: Lola L. Cuddy and Isabelle Peretz

Multiple disorders of musical abilities can occur after brain damage. Conversely, vast brain injuries may sometimes spare ordinary musical skills. Detailed studies of breakdown patterns of musical abilities are highly informative regarding the brain organization of music processing and nicely complement neuroimaging studies. The aim of the symposium is to illustrate recent advances in the field.

The symposium will begin with a brief (5 min) overview by the organizers outlining the basic principles and goals of brain-damage research, followed by four spoken presentations (each 25 min, including questions and set-up time). The four presentations represent interdisciplinary collaborations between psychology, neuroscience, neurology, and music, and involve teams from four different countries (Canada, UK, France and Germany). The presentations will be followed by a summary discussion by the organizers and an open group discussion (15 min). The total time for the symposium is 2 hours.

The first two presentations deal with disorders that may occur in ordinary listeners. Tillmann, Peretz, and Bigand will present a set of studies illustrating the need to assess musical abilities implicitly rather than explicitly in brain-damaged populations. Data from a severely amusic patient show that (apparently impaired) knowledge of harmonic structures may be intact and accessible if tested by implicit methods. Griffiths, Jennings and Warren will present a new case of timbre disorder, called dystimbria. A patient with a right-hemisphere lesion showed deficits in timbre analysis despite the absence of a deficit in pitch perception for isolated notes. The next two presentations deal with disorders that concern musical experts. The presentation by Hébert, Cuddy, Beckett and Peretz is partly theoretical in content and will discuss a musical disorder - that of damaged music-reading skills. After a brief literature review of music-reading disorders following brain damage, the presentation will illustrate, by example, how such findings are used to develop models of normal processing and thereby to develop standardized tests for assessment. Finally, Altenmüller and Jabusch will discuss the loss of control and degradation of skilled hand movements, a disorder referred to as musicians’ cramp or focal dystonia.


12.1 Harmonic priming in an amusic patient: The power of implicit tasks

Barbara Tillmann1, Isabelle Peretz2, Emmanuel Bigand3

1 CNRS-UMR 5020 & IFR 19, Lyon, France
2 University of Montreal, Canada
3 LEAD-CNRS 5022, Dijon, France

Background
Priming experiments have provided evidence for spared implicit processes despite impaired explicit functions in various neurological disorders (e.g., alexia, agraphia, aphasia, prosopagnosia). Up to now, a dissociation between implicit and explicit processes in brain-damaged patients has not been documented in music cognition.

Aims
The goal of our study was to assess whether an amusic patient would exhibit implicit processing of musical relations in a priming experiment despite failure in explicit investigation methods.

Method and Results
I.R., a brain-damaged patient exhibiting severe amusia without aphasia, was tested with both implicit and explicit investigation methods using chord sequences. For the harmonic priming paradigm, the task consisted in identifying as quickly as possible one of two possible phonemes (Experiment 1) or timbres (Experiment 2) on the last chord of eight-chord sequences (i.e., the target). The harmonic relations of the target were manipulated so that the target was harmonically related or less related to the prior chords. In both experiments, I.R. and controls displayed harmonic priming effects: phoneme and timbre identification was faster for related than for less-related target chords (Experiments 1 and 2). However, unlike controls, I.R.'s explicit judgments of completion for the same chord sequences did not differ between related and less-related contexts (Experiment 3). Her impaired performance in explicit judgments was not due to general difficulties with task demands. In a control experiment using completion judgments with spoken sentences, I.R. performed like controls (Experiment 4).

Conclusions
The findings suggest that implicit knowledge of harmonic structures remains intact and accessible, even when explicit judgments are impaired. Observing differences in performance between implicit and explicit investigations does not necessarily imply different processing components, but might reflect differences in activation levels, notably with implicit testing methods requiring lower activation levels than those necessary for explicit, consciously available judgments. Consequently, implicit tests are more sensitive in revealing residual abilities in brain-damaged patients. The present findings thus argue for the use of implicit, indirect investigation methods to test patients with musical disorders.

Key words: Amusia, Priming paradigm, Implicit knowledge

[email protected]

12.2 Dystimbria: A distinct musical syndrome?

T. D. Griffiths, A. R. Jennings, J. D. Warren


Auditory Group, Newcastle University and Institute of Neurology, University College, London, UK

Background
Melody and rhythm deficits can be assessed using the Montreal Battery for the Assessment of Amusia (MBEA), whilst timbral assessment is not established. Here, we describe the systematic assessment of timbre in a patient with a right-hemisphere lesion. The approach tests cognitive neuropsychological models based on distinct bases for timbral analysis, and examines the neural substrate.

Aims
The study tests the hypotheses that i) timbral deficits can dissociate from pitch perception and ii) the superior right temporal lobe is critical to timbral analysis.

Method
The subject was assessed at the age of 42 after infarction of the posterior superior right temporal lobe. He complained of deficits in melody recognition and of "dystimbria": a change in the perceived timbre of instruments. Fundamental auditory processing was assessed using the Newcastle Auditory Battery (NAB), and melody, rhythm and musical memory were assessed using the MBEA. Timbre analysis was assessed using forced-choice psychophysics probing the spectral and temporal dimensions of timbre.

Results
i) The following tests of the detection of fundamental auditory features from the NAB were normal (no z score for threshold greater than +1.0): complex pitch, sinusoidal amplitude modulation of a 500-Hz carrier at 2 Hz and 120 Hz, and frequency modulation of the same carrier at the same rates. ii) Testing using the MBEA confirmed the presence of a deficit in melodic and certain temporal features (z scores for performance in parentheses): scale (-3.0), contour (-3.2), interval (-3.8), rhythm (-1.9), meter (-2.8), memory (0.0). iii) Testing of his perception of the dimensions of timbre related to spectral and temporal envelope yielded thresholds that were double those of normal controls.

Conclusions
i) Deficits in timbre analysis can occur in the absence of a deficit in the perception of the pitch of isolated notes. This patient had associated deficits in melody analysis, and it will be of interest to see whether future cases show a dissociation. ii) The work supports critical involvement of the right temporal lobe in timbre analysis.

Key words: Timbre analysis, Brain damage, Neuropsychology

[email protected]

12.3 Focal dystonia in musicians: An acquired musical disorder?

Eckart Altenmüller, Hans-Christian Jabusch

Institute of Music Physiology and Musicians' Medicine, University of Music and Drama Hannover, Germany

Performing music at a professional level requires the integration of multimodal sensory and motor information and precise monitoring of the performance via auditory feedback. In the context of western classical music, musicians are forced to reproduce highly controlled movements almost perfectly with high reliability. These specialized sensory-motor skills are acquired during extensive training periods over many years, starting in early infancy and passing through stages of increasing physical and strategic complexity. The superior skills of musicians are mirrored in plastic adaptations of the brain on different time scales (for a review, see Münte et al., 2002). There is a dark side to the increasing specialisation and prolonged training of modern musicians, namely loss of control and degradation of skilled hand movements, a disorder referred to as musicians' cramp or focal dystonia. In our musicians' clinic, we have seen 360 professional musicians with focal dystonia during the last 10 years. The disorder presents as painless muscular incoordination or loss of voluntary motor control of highly trained movements while playing the instrument. According to new research data, focal dystonia may be caused by training-induced cortical dysplasticity with pathological fusion of somatosensory representations in sensory or motor cortical regions. Considering 1) the historical advent of the disorder in the nineteenth century, with rapidly increasing technical demands imposed on musicians, 2) the epidemiological data, with repetitive and spatiotemporally precise physical activity as a risk factor, and 3) neurobiological findings of the blurring of somatosensory representations, one is tempted to state that focal dystonia finally marks the natural limits of a process of refinement of manual dexterity over a million years. However, a hereditary component seems to play a role: according to an ongoing neurogenetic study, in more than 15% of our patients (non-musician) members of the family are afflicted with other forms of focal dystonia.

References
Münte, T.F., Altenmüller, E., & Jäncke, L. (2002). The musician's brain as a model of neuroplasticity. Nature Reviews Neuroscience, 3, 473-478.

Key words: Music performance, Motor disorder, Over-training

[email protected]

12.4 Music reading deficiencies and the brain

Sylvie Hébert1, Lola L. Cuddy2, Christine Beckett3, Isabelle Peretz1

1 University of Montreal, Canada
2 Queen's University, Canada
3 Concordia University, Canada

Background
This paper will provide a brief overview of representative case studies of brain damage and music reading from the past 20 years. We will describe patterns of selective loss and sparing of music and text reading in professional musicians who have sustained brain damage. We will also describe patterns of selective loss and sparing of specific components within music reading (such as pitch, rhythm, and symbol reading) and selective loss and sparing of specific output pathways. Anatomical findings consistently implicate posterior left hemisphere damage. However, we note that behavioral reports are variable. No standard test battery for music reading exists that would allow detailed comparisons among the studies.

Aims
The aims of the present study are twofold. The first is to use the logic of double dissociation and the findings of unambiguous cases in the available literature to develop a model of normal music reading. The second, based on the model, is to develop a standardized test battery for music reading that would yield a reliable assessment of music-reading disorders. These aims are derived from the basic assumptions of cognitive neuropsychology. They are consistent with the aims and procedures undertaken for the assessment of acquired amusia: disorders of music perception and memory (Peretz, 2001).

Main contribution
First, we will present a model that assumes functional independence between the two cognitive functions of music reading and text reading, and, within music reading, assumes functional independence for pitch, rhythm, and symbol reading. Moreover, the model proposes three, and possibly four, output pathways. Three pathways suggested by brain damage studies are visual-motor, visual-vocal, and visual-verbal. A fourth pathway, verbal-motor, is tentatively suggested. Second, we describe a music-reading battery recently developed in our laboratory and will present standardized data obtained from 20 highly trained musicians.

Implications
We will argue that the standardized assessment of patterns of loss and sparing in acquired music-reading disorders has clear implications both for theory and for clinical intervention. Moreover, we propose that, in line with the description of developmental difficulties for text reading, speech, and music processing (congenital amusia), a form of developmental music-reading deficiency may be isolated.

Key words: Music reading, Brain, Musical disorders

[email protected]


13 Rhythm II

13.1 Towards a style-specific basis for computational beat tracking

Nick Collins

Centre for Music and Science, Faculty of Music, University of Cambridge, UK

Whilst there has been some success in creating computational models of beat tracking, no general beat tracking model has been exhibited. It is easy to find cases where models fail, in particular, to match phase, due to off-beat signal energy (reggae), low-frequency energy anticipating beats and a higher-pitched clave line as the measure marker (latin), harmonic information and non-percussive attack envelopes (Western classical), or simply alternative conventions (Northern Potosí Easter songs in Bolivia; Stobart and Cross, 2000). There is increasing evidence that downbeat induction requires cultural knowledge, carried in timbral instrumental cues and higher-level knowledge forming priors on the distribution and relative role of parts (Hainsworth 2004, Jehan 2005).

Tapping experiments on ecologically valid audio data were undertaken to demonstrate some weaknesses in existing approaches to beat tracking. Particular targets were the use of energy-based features as detection frontends, and the long autocorrelation or filter windows applied to detect periodicities.

In the first experiment, Scheirer's (1998) contention that a six-band vocoded signal maintained adequate cues for rhythm detection was followed up. Subjects showed a statistically significant drop in performance on vocoded signals.

In the second experiment, subjects followed an obstacle course of (period, phase) transitions, and the dependent variable was resynchronisation time. Human resynchronisation performance was superior to that of all tested computational beat tracking models.

In the light of these results, it seems more plausible that a human strategy for beat tracking is the constant assessment of the timbral cues associated with events in the sound stream. From a fast analysis of the style (Perrot and Gjerdingen 1999, Koelsch and Siebel 2005) a schema can be selected. Beat tracking is an active process in which a participant is always prepared for changes inconsistent with their current working hypothesis. Computational beat tracking can be improved by better signal-processing frontends which incorporate more timbral information (Gouyon 2005), and machine learning technology may cope with the cultural aspects of the problem (Jehan 2005).
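By way of illustration, the kind of energy-based autocorrelation frontend criticised above can be sketched in a few lines. This is a minimal baseline, not a reconstruction of any of the cited models; the onset envelope, frame rate, and BPM search range below are invented for the example.

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, min_bpm=60, max_bpm=180):
    """Estimate tempo (BPM) from an onset-strength envelope by
    autocorrelation: the energy-based frontend the abstract argues
    is insufficient on its own."""
    env = onset_env - onset_env.mean()
    # Full autocorrelation, keeping non-negative lags only
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Convert the BPM search range into lag indices (frames per beat)
    min_lag = int(frame_rate * 60 / max_bpm)
    max_lag = int(frame_rate * 60 / min_bpm)
    lag = min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))
    return 60.0 * frame_rate / lag

# Synthetic onset envelope: impulses every 0.5 s (120 BPM) at 100 frames/s
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(round(estimate_tempo(env, frame_rate)))  # 120
```

Note that such a frontend recovers period but carries no timbral or stylistic information, which is exactly the limitation the abstract targets.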


Key words: Beat tracking, Tapping, Timbral cues

[email protected]

13.2 Metrical interpretation modulates brain responses to rhythmic sequences

John Iversen1, Bruno Repp2, Aniruddh Patel1

1 The Neurosciences Institute, San Diego, USA
2 Haskins Laboratories, New Haven, USA

Background
Our perceptions of the world are not just a passive recording of stimuli, but reflect constructive mental processes by which our interpretation shapes what we perceive. In music perception, for example, the perceptual experience of a rhythm depends upon how the listener interprets the meter: where in the sequence they hear the downbeat. This process is interesting because, while often supported and implied by the music, it is in fact subject to our control.

Aims
We sought to determine in what way active metrical interpretation of rhythm is reflected in brain activity, with the aim of uncovering mechanisms by which we internally mark certain events as downbeats.

Method
Brain responses were measured (using magnetoencephalography) as listeners were presented with a repeating three-beat rhythmic phrase (two tones followed by a rest). In separate trials, listeners were instructed to internally place the downbeat on either the first or the second tone, yielding two metrical interpretations of the same sound sequence. Listeners were instructed not to move or use motor imagery. As the stimulus was invariant, differences in brain activity between the two conditions should relate to metrical interpretation. We asked if the response to a tone is modulated by whether or not it serves as the downbeat in the rhythmic pattern.

Results
We examined auditory evoked responses in several frequency bands, including beta and gamma. For seven out of ten listeners, metrical organization reliably influenced responses in the upper beta range (20-30 Hz), which were larger when a tone was the downbeat. Responses in the gamma range showed less consistent effects.

Conclusions
Our results suggest that imagining the downbeat modifies brain responses to sound at an early stage. The involvement of beta is suggestive of a role for the motor system in rhythm perception, even in the absence of movement. A second interpretation is that the increases in beta might mimic responses to physical sound accents, as might be expected if the neural effect is related to the creation of subjective accent.

Key words: Rhythm, Meter, MEG

[email protected]


13.3 Anticipatory behaviour in music: Towards a new approach to musical synchronization

Timo Fischer1, Manfred Nusseck2

1 University of Kassel, Germany
2 Max Planck Institute for Biological Cybernetics, Germany

Background
Anticipatory behaviour is a crucial ability in everyday life (Butz et al., 2003) and an elemental skill for music performance. The coupling of rhythm perception and rhythmic action plays an important role here. Most research on anticipatory behaviour focuses on the analysis of synchronization tasks when tracking distinctly non-musical isochronous rhythms such as metronome patterns, but little is known about the underlying processes when synchronizing to "real" musical rhythm. However, two different types of anticipatory processes responsible for synchronization tasks have recently been identified and can be characterized separately: one is implicit automatic anticipation, the other the explicit processing of temporal information. The two processes seem to work in a partly parallel, partly concurrent manner (Fischer et al., 2005; Miyake et al., 2004).

Aims
A growing body of recent publications has addressed these issues from a number of additional perspectives (Lewis & Miall, 2003). We attempt to transform these insights into an integrative theoretical model of rhythmic synchronization and anticipation suitable for real-life musical situations.

Main contribution
We propose a new cognitive model of anticipatory behaviour which basically includes one component of high-level cognition (e.g., attention, actual decision making) and another of low-level processing (automatic, subconscious). The first component comprises explicit predictions and goal-oriented actions, while the latter is specialised in automatic processes. Our model is therefore more adequate in reflecting anticipatory behaviour with respect to musical contexts. We will also present applications of this model which allow further insights into the distinctions drawn.

Implications
Research on tracking isochronous rhythms has shown that there are distributed timing mechanisms, and it provides access to an understanding of the fundamentals of anticipation and timing control. Synchronization to isochronous metronome patterns remains a special case of musical synchronization, though. Our integrated approach could help to bridge the gap between research from experimental psychology and music cognition.

References
Butz, M.V., Sigaud, O., & Gérard, P. (Eds.) (2003). Anticipatory behavior in adaptive learning systems. Berlin-Heidelberg: Springer.

Fischer, T., & Nusseck, M. (2005). Anticipatory timing precision in synchronization tapping: A matter of attention. Talk at the 10th Rhythm Perception and Production Workshop, Bilzen (Belgium), July 2005.

Lewis, P.A., & Miall, R.C. (2003). Distinct systems for automatic and cognitively controlled time measurement: evidence from neuroimaging. Current Opinion in Neurobiology, 13, 250-255.

Miyake, Y., Onishi, Y., & Pöppel, E. (2004). Two types of anticipation in synchronization tapping. Acta Neurobiologiae Experimentalis, 64, 415-426.


Key words: Rhythm, Musical synchronization, Timing control

[email protected]

13.4 An intercultural study on tempo perception in Japanese court music gagaku

Rinko Fujita

University of Vienna, Austria

Background
Japanese imperial court music, gagaku, is based on ancient native (Japanese) music and on foreign musical forms with Central Asian, Southeast Asian and Indian elements introduced from China and Korea during the 6th and 7th centuries. Gagaku includes classic forms of dancing and singing with instrumental accompaniment as well as purely instrumental music. The musical form of gagaku has been preserved by court musicians from the same hereditary families for more than one thousand years.

Aims
In current performing practice an increase of tempo is one of the distinctive features of gagaku music; however, the change in tempo remains barely audible to listeners. The study addresses three basic questions: how the accelerando is achieved, why it is unrecognizable, and whether culture-specific auditory experience affects the ability to detect it.

Method
As the first stage of the study, recorded samples of gagaku music were analyzed using a sound spectrograph, and the durations of the tempo criteria (the rhythmic pattern and the kobyôshi, a unit of the musical system) and their deviations were measured. An experiment was then conducted to determine thresholds for detecting the change in tempo. The subjects (n=34), from nine countries, comprised court musicians, other musicians and non-musicians. Their task was to listen for differences in musical excerpts of gagaku and to respond "no change", "faster", or "slower".

Conclusions
Although percussionists increased the tempo by as much as 50% over the course of a piece, the proportion of the kobyôshi within the rhythmic pattern was basically kept. Because of the extremely slow tempo of the music, the successive accelerando was very moderate and hard to discern. The court musicians were better able to discriminate the differences in tempo than the other groups. However, the threshold values were affected by the individual nature of the musical excerpts.

Key words: Japanese court music gagaku, Tempo perception, Ethnomusicology

[email protected]

13.5 An investigation of pre-schoolers' corporeal synchronization with music

Tuomas Eerola, Geoff Luck, Petri Toiviainen


Department of Music, University of Jyväskylä, Finland

Background
The ability to perceive and produce a steady beat is fundamental for many activities, such as timed motor tasks, speech perception, and music-related behaviour. While the ability of both infants and pre-schoolers to synchronize their movements to an external timekeeper has been studied, the developmental changes of this ability have not been investigated, owing to a lack of available methodology. Previously, children from age five and up have been studied using behavioural tasks, such as tapping to a beat. Meanwhile, children aged one to four, despite representing a crucial age group whose speech and motor skills are undergoing substantive development, have received much less attention. We present new methods of studying music-induced movement in the latter age group.

Aims
To investigate pre-schoolers' ability to synchronize their movements with music using an optical motion capture system and signal processing methods.

Method
Ten children (2-4 years of age) were presented with a familiar excerpt of music, in as natural a setting as possible. In order to examine the synchronization process, the music excerpt contained abrupt tempo changes produced by time-stretching the audio file. The children were encouraged to dance with the music, and their spontaneous movements, such as rocking, dancing, and gesturing, were recorded by means of a high-resolution optical tracking system. These movements were then analysed in terms of their periodicity by using principal components analysis and windowed autocorrelation. In addition, the synchronization of these periodic movements to the beat structure of the music, the latter obtained from a beat-finding analysis of the audio signal, was examined. Various indices of synchronization accuracy were developed.

Results
The results showed that periodic movement could be quantified from the movement patterns of children, and that these periodic movements were related to the beat of the music. More detailed results will be presented at the conference.

Conclusions
We believe that the study of these visceral responses to rhythm will produce ecologically valid developmental data about pre-schoolers' synchronization with music.
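The windowed-autocorrelation step can be illustrated on a toy one-dimensional trajectory. This sketch assumes a single marker coordinate and a fixed window and hop size; it is a simplified stand-in, not the authors' analysis pipeline (which also involved principal components analysis of multi-marker data).

```python
import numpy as np

def movement_periods(signal, fs, win_s=4.0, hop_s=1.0):
    """Dominant movement period (s) in sliding windows, estimated by
    autocorrelation within each window."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    periods = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        seg = seg - seg.mean()
        ac = np.correlate(seg, seg, mode="full")[win - 1:]
        lag = 1
        while lag < win - 1 and ac[lag] >= ac[lag + 1]:
            lag += 1                      # walk down from the lag-0 peak
        lag += int(np.argmax(ac[lag:]))   # first major peak = period
        periods.append(lag / fs)
    return periods

# Toy data: rocking at 0.5 Hz (2 s period), sampled at 60 frames/s
fs = 60
t = np.arange(0, 20, 1 / fs)
motion = np.sin(2 * np.pi * 0.5 * t)
print(movement_periods(motion, fs)[0])    # close to 2.0 s
```

Comparing each window's estimated period against the beat period of the (time-stretched) music would then give a per-window synchronization index.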

Key words: Musical development, Motion capture, Synchronization

[email protected]

13.6 A non-human animal can drum a steady beat on a musical instrument

Aniruddh Patel, John Iversen

The Neurosciences Institute, San Diego, CA, USA

Background
Two universal features of human musical rhythm are the ability to voluntarily produce sound in a periodic fashion, e.g. by striking a musical instrument, and the ability to move in synchrony with a periodic beat. These abilities appear to be unique to humans and useful only for musical purposes. Do they therefore represent neural specializations shaped by natural selection for music? One way to address this question is to ask if non-human animals can acquire these abilities. If so, then it is unlikely that they represent cognitive adaptations for music-making. Here we focus on periodic sound production. Crucially, humans can make periodic sounds "on demand" rather than in constrained biological contexts, distinguishing them from animals that produce periodic sounds in stereotyped displays for mating, alarm calls, etc. (e.g. cricket chirping).

Aims
We sought to document whether a non-human animal could play a steady beat on a musical instrument in the absence of ongoing rhythmic cues from humans. We studied Asian elephants (Elephas maximus) in northern Thailand. These elephants have been trained to strike percussion instruments as part of their participation in the Thai Elephant Orchestra.

Method
We took video footage of a 13-year-old female striking two bass drums in alternation with a mallet held in her trunk. Eleven drumming sequences (with no evidence of human rhythmic cueing) were recorded over four days. The mean and SD of the inter-beat interval (IBI) were quantified for each sequence.

Results
The elephant's drumming tempo was very stable across sequences, ranging from 33.4 to 36.1 beats per minute (mean = 34.9, SD = 0.9). Within each sequence the elephant maintained a very steady beat. Temporal variability, quantified as the coefficient of variation of the IBI within each sequence, averaged 3.5%, which is lower than the variability found when humans tap at this tempo (typically 7%).

Conclusions
A non-human animal can play a percussion instrument in a periodic fashion, without temporal cues from humans, and with a highly stable tempo between and within sequences. Future work should examine whether animals, like humans, can play different tempi and can synchronize their playing to an auditory beat.
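The steadiness measure used here, the coefficient of variation (CV) of the inter-beat intervals, is simple to compute. The beat times below are invented to illustrate a sequence near the reported ~35 BPM; they are not the study's data.

```python
import numpy as np

def ibi_variability(beat_times):
    """Mean inter-beat interval (s), tempo (BPM), and temporal variability
    as the coefficient of variation of the IBIs (%), the steadiness
    measure used in the abstract."""
    ibis = np.diff(np.asarray(beat_times, dtype=float))
    mean_ibi = ibis.mean()
    cv_percent = 100.0 * ibis.std() / mean_ibi
    return mean_ibi, 60.0 / mean_ibi, cv_percent

# Illustrative beat times (s) with slight jitter, ~35 BPM
beats = [0.00, 1.70, 3.45, 5.15, 6.90, 8.60, 10.32]
mean_ibi, bpm, cv = ibi_variability(beats)
print(f"IBI {mean_ibi:.2f} s, {bpm:.1f} BPM, CV {cv:.1f}%")
# IBI 1.72 s, 34.9 BPM, CV 1.3%
```

A CV of 3.5%, as reported for the elephant, would correspond to an IBI standard deviation of about 0.06 s at this tempo.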

Key words: Evolution, Animals, Rhythm

[email protected]


14 Demonstrations I

14.1 The continuous response measurement apparatus (CReMA)

Evangelos Himonides

Institute of Education, University of London, UK

Background
As part of a four-year multi-method doctoral research project that investigates the perception and interpretation of beauty in sung performances, an innovative CReMA (continuous response measurement apparatus) device has been designed, drawing on modern analogue synthesizer control technology. This new interface acts as an intuitive linear control system, thus not requiring the user to "jump" to a new location on a circular potentiometer (as in the CRDI, the continuous response digital interface from the Center for Music Research in Florida; Madsen et al.). In this way, a one-to-one analogy to linear scoring (graded scales, Likert scales, and scoring continua) can be provided, in an attempt to retain more closely the like-dislike n-point scale linear domain.

Aims
With this demonstration, the researcher aims to introduce this new technology to the research community and receive critical feedback on its utilization, as well as advice regarding its possible application to projects with different foci to the one for which the CReMA interface has been developed.

A Short Description of the Activities
The researcher will present the new technology, discuss the requirements for usage, interconnection, and adaptability, and demonstrate different levels of interfacing, as well as data collection and analysis. In one configuration, the CReMA is synchronized with medical biofeedback equipment in order to gather physiological data linked to emotional response whilst listening and moving the interface. Participants will be able to interact with the new technology and discuss issues and implications.

Implications
Although the interface has been designed for a specific research project that is focused on the interpretation and understanding of perceived beauty in sung performances, it is believed that the implications for the utilisation of the CReMA interface are substantially wider.

Specific Value and Meaning
Perceptual testing and, more specifically, real-time perceptual testing using the latest technologies offers the potential for new insights into human responses to music. Thus, the development of new technologies that will assist us in understanding how different "notions" are perceived is believed to be of vital importance in extending our understanding.

Key words: Continuous response measurement

[email protected]

14.2 The application to implement artificial internal hearing

Sook Young Won, Jonathan Berger, Song Hui Chon

CCRMA, Stanford University, USA

The often-discomforting percept of unfamiliarity of one's own voice when heard as a pre-recorded analog or digital signal has received limited attention in perception research. The work that has been done is primarily focused on speech perception [Shuster and Durant, 2003]. Much of this research focuses on the role of bone conduction, which causes the speaker's voice to be heard as a substantially low-pass filtered signal. While a number of researchers have attempted to determine a transfer function to describe and simulate the bone-conducted speech signal [Maurer and Landis, 1990], little research has been done in this regard with the singing voice. In addition to determining the transfer function, the ratio of the bone-conducted signal to the signal processed through the ear has been discussed by von Békésy and studied with speech signals [Maurer and Landis, 1990]. In this paper a digital filter that simulates bone conductance is studied in terms of self-perception of singing by trained singers. The processed sound will be evaluated by singers to determine the proximity of the match to the sound of the imagined sung voice. The overall shape of the transfer function (magnitude vs. frequency) is similar to a sine graph, with 3000 Hz as the threshold. This shows that the human skull boosts low frequencies and cuts high frequencies when propagating the singing voice, consistent with the common observation that solid materials cut high frequencies because of damping. We transform the original air-transmitted recording using the filter coefficients of the transfer function and compare the filtered sound to the original bone-transmitted recording. We also conduct a perception test to find the internal-hearing ratio of the air-conducted sound to the bone-conducted sound, and then implement in Matlab an application that generates the artificial internal-hearing sound from air-transmitted microphone input.
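A crude version of such a simulation, reduced to the low-pass character around the 3000 Hz threshold, can be sketched as follows (in Python rather than the authors' Matlab). The filter type, length, and exact cutoff are illustrative assumptions, not the measured transfer function, which according to the abstract also boosts low frequencies.

```python
import numpy as np

def bone_conduction_filter(fs, cutoff_hz=3000.0, numtaps=101):
    """Windowed-sinc FIR low-pass: a crude stand-in for the
    bone-conduction transfer function (energy above ~3 kHz cut).
    Filter type, length, and cutoff are illustrative assumptions."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * n) * np.hamming(numtaps)
    return h / h.sum()  # unity gain at DC

fs = 44100
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)                     # 1 s of white noise
y = np.convolve(x, bone_conduction_filter(fs), mode="same")

def energy_above(sig, hz):
    spec = np.abs(np.fft.rfft(sig)) ** 2
    return spec[np.fft.rfftfreq(len(sig), 1 / fs) > hz].sum()

# Nearly all energy above 3 kHz is removed
print(energy_above(y, 3000) < 0.05 * energy_above(x, 3000))  # True
```

Mixing the filtered (bone-path) signal back with the unfiltered (air-path) signal at some ratio would correspond to the internal-hearing ratio the abstract sets out to estimate.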

Key words: Internal hearing, Bone conduction, Singing voice

[email protected]


15 Pitch I

15.1 Harmonizing a tonal melody at the age of 6-15 years

Pirkko Paananen

University of Jyväskylä, Department of Music, Finland

BackgroundPrevious perceptual studies imply that the knowledge of harmonic relationships develops duringschool age. However, studies on chord production have been limited. It is unclear, whether chil-dren approach harmony from the established key or local chords.AimsThis paper describes 6-15-year-old participant’s (n=44) harmonizations of a simple tonal melody.A preliminary attempt to explain the development of the most stable elements of tonal harmony ismade.MethodThe context consisted of two verses of a simple tonal melody in C major. Four marked keys (C, D,F, G) of the synthesizer, each of which producing a whole major triad, were used in improvising achord accompaniment as a real-time performance, recorded with Micro Logic 3.5 / Logic Educa-tion 5.5. Age-related development of harmony and metre was examined by analyzing the productsstatistically (correlation, ANOVA). The cases were clustered with hierarchical cluster analysis.ResultsThe frequency of chord C (I) as the final chord, the distribution of chords at strong metrical posi-tions, and the number of periodically harmonized measures increased with age. Number of eventsproduced decreased with age. Seven clusters of varying developmental levels were found. Atthe earliest phase of development children explored with chords and rhythmic impressions, andcoordinated melody and chords poorly, producing a high number of events. In the next phase,coordination of melody and harmony improved, and tonic closure became more common. Thechords emerged either at strong beats, or formed ostinati. In the most developed phase, melodyand harmony were well-coordinated, the most frequent chord being C(I), the tonal closure beingcommon, and the majority of chords emerging at the strongest beat.ConclusionsHarmonic production seems to develop during the school years. Younger children tended to ap-proach the task from a more global level, and older ones being more analytic. In the future,


comparison of different harmonizing contexts is required, as well as examination of the effects of training. It is likely that the familiarity, length and structure of the melody, and the time allowed, affect chord production.

Key words: Tonal harmony, Production, School age

[email protected]

15.2 The effect of musical training and tonal language experience on the perception of speech and nonspeech pitch and musical memory

Barbara Schwanhaeusser, Denis Burnham

MARCS Auditory Laboratories, University of Western Sydney, Australia

Background
In tone languages, lexical items are distinguished not only by consonants (e.g., /bee/ vs /fee/) and vowels (e.g., /bee/ vs /buy/), but also by tones, predominantly conveyed by the fundamental frequency (F0) of individual syllables (e.g., /bee/ rising tone vs /bee/ falling tone, meaning in Thai “a nickname” and “to press” respectively). In this paper the relationship between F0 perception in speech (in lexical tone) and in non-speech contexts is investigated.

Aims
The aim is to investigate whether musical training facilitates lexical tone perception and thereby to delineate more clearly the psychological process(es) underlying F0 perception in speech vs. non-speech contexts.

Method
Four participant groups were tested: Thai or Australian English language background participants who were either musically trained or had no prior musical experience. They were tested for categorical identification and discrimination on a synthetic speech continuum and an F0-equivalent sinewave tone continuum, and on a pitch memory task consisting of 5-s excerpts from popular songs at the original pitch level or transposed by 1 or 2 semitones.
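The semitone transpositions used in the pitch memory task correspond to fixed frequency ratios in equal temperament. A minimal illustrative sketch of that relationship (the function name and the 440 Hz example are ours, not details from the study):

```python
def transpose_factor(semitones: float) -> float:
    """Frequency ratio for a transposition by the given number of
    equal-tempered semitones (12 semitones = one octave = factor 2)."""
    return 2.0 ** (semitones / 12.0)

# A 440 Hz tone transposed up by 2 semitones lands near 493.9 Hz:
# 440 * transpose_factor(2) ≈ 493.88
```

A 1-semitone shift thus multiplies every frequency by about 1.059 (roughly a 5.9% change), which is the size of the smallest transposition listeners had to detect relative to the remembered original.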

Results
Listeners' perception did not differ as a function of language background, but trained musicians were more accurate in identifying and discriminating synthetic speech and sinewave stimuli from a continuum running from a rising to a falling pitch contour. Musicians also required fewer trials to reach a training criterion in the identification task. However, the degree of musical training did not affect pitch memory: in the musical memory test listeners correctly identified the original song version on 83% of trials, with no significant difference between Thai and Australian English listeners or between musicians and non-musicians.

Conclusions
These results suggest that musical background plays a more important role in tone perception than does tonal language background: musical training enhances the ability to learn new tonal categories in these artificial tone continua, irrespective of the degree of tonal language experience. It can be concluded that musical training facilitates learning new tasks that are not related to


musical performance, here the categorisation of novel tonal stimuli, but that this does not occur via enhanced pitch memory abilities.

Key words: Lexical tone, Speech perception, Music memory

[email protected]

15.3 The expressive intonation in violin performance

Mónica Sánchez

Escuela de Música Joaquín Maya, Pamplona, Spain
Universidad Pública de Navarra, Spain

Background
There is no single truth in music intonation. So why are some performances more in tune than others? What is intonation accuracy? Listeners and performers of non-fixed-intonation instruments can employ a frequency continuum in pitch control. That is why performers have the power to give an expressive intention to each note of the melody they play.

Aims
The aim of this study is to show how violin intonation is influenced by the melody, the musical instruction and the deviations from ET (Equal Temperament), in order to know which deviation is preferred by musicians and non-musicians in solo performances.

Method
The research was carried out in two periods. The first study analyzed a total of fifteen unaccompanied performances by five medium-level violinists. They played eight bars of Corelli's Sonata VIII, op. 5 no. 8, at first sight. The recordings had wide deviations from ET (calculated in cents: 1/100 of a semitone), and no tuning system (Pythagorean, Just or ET) was perfectly acceptable. In the second part, six subjects (professional violinists, musicians and non-musicians) were asked to listen to five performances, one from each of the violinist-subjects of the first study. They then had to order the recordings according to their intonation preference, and could listen to them as many times as they wanted.

Results
The results showed that the performance with the widest deviation (up to +49 cents from ET on the 7th degree) was preferred by violinists and non-musicians.
The other musicians put the nearly-ET performance in first place, possibly because of their piano instruction.

Conclusions
Specific instrumental education is required to know the different ways of violin intonation in order to assign them an aesthetic musical value, through an internal pitch reference and kinaesthetic spacing control. The relationship between intonation and both objective and subjective factors determines perceptual experience, for example the relations between intonation and sensitivity to changes in pitch, tone position within the melody and scale, the harmonic structure of a tone, and the size of musical intervals. Expressive intonation could be the best way of conferring active pitch control on the musical interpretation, keeping music and musicians alive.
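The cent measure used above (1 cent = 1/100 of an equal-tempered semitone) can be computed directly from recorded frequencies. A minimal sketch, assuming an A4 = 440 Hz reference (the reference choice is our assumption, not a detail from the study):

```python
import math

def cents_from_et(freq_hz: float, a4_hz: float = 440.0) -> float:
    """Signed deviation in cents of freq_hz from the nearest
    twelve-tone equal-temperament pitch (with A4 tuned to a4_hz)."""
    semitones = 12 * math.log2(freq_hz / a4_hz)  # distance from A4 in ET semitones
    nearest = round(semitones)                   # nearest ET scale step
    return 100.0 * (semitones - nearest)         # deviation in cents, roughly -50..+50
```

For example, a tone at 452 Hz reads as A4 raised by about 47 cents, comparable in size to the +49-cent deviations reported here.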

Key words: Pitch control, Violin intonation, Music performance

[email protected]


15.4 Are early-blind individuals more musical than late-blind?

Catherine Y. Wan1, Sarah J. Wilson1, David C. Reutens2

1School of Behavioural Science, The University of Melbourne, Australia
2Monash Institute for Neurological Diseases, Melbourne, Australia

Background
The musical abilities of blind individuals have rarely been investigated. Most auditory studies have assessed attention and sound localisation, neglecting skills related to music perception. Comparing the musical abilities of individuals with onset of blindness in childhood or adolescence (early-blind) versus those blinded in adulthood (late-blind) has the potential to provide important insights into the role of brain plasticity in musical skill development.

Aims
We examined blind individuals on various tests of musical abilities, to determine whether their performance varied as a function of blindness onset age.

Method
Individuals with early or late complete vision loss performed three tasks: (1) Pitch discrimination: judging whether the second of two tones was higher or lower in frequency than the first; (2) Pitch memory: deciding whether the first and last tone of a sequence were the “same” or “different”; and (3) Pitch-timbre categorisation: classifying the difference between two tones as “no change”, “pitch change”, “timbre change”, or “both change”.

Results
Significant differences were observed between the early-blind and late-blind participants. Specifically, early-blind individuals showed superior fine pitch discrimination [F(1,14)=7.31; p=0.017] and were more proficient at detecting changes in timbre, especially when pitch changed as well [F(1,14)=14.31; p=0.002]. In contrast, the pitch memory task was influenced by musical practice, with individuals currently playing an instrument showing superior performance to those not engaged in practice [F(1,11)=16.22; p=0.002].

Conclusions
Our data suggest that an earlier age of blindness onset may play an important role in the development of superior auditory acuity, particularly pitch and timbre discrimination. The results lend support to the existence of a critical period of heightened brain plasticity early in development that may enhance musical skills.
The data also have implications for improving the design of sensory aids, particularly for late-blind individuals.

Key words: Blind, Music perception, Brain plasticity

[email protected]


16 Education II

16.1 The effect of background music on the interpretation of a story in 5 year old children

Naomi Ziv, Maya Goshen

Max Stern Academic College of Emek Yizre’el, Israel

Background
Throughout development, children acquire knowledge both about the syntactical norms of tonal music and about the relationship between musical form and emotion. The perception of emotional expression in music by children has been examined directly using chosen musical excerpts. However, although music is often heard as background to other activities, the effect of background music on the perception of other stimuli in children has not been studied.

Aims
The aim of the present study was to examine whether “sad” or “happy” background music would affect the interpretation of a story read to young children.

Method
Music: Two versions of the melody of Chopin's Mazurka op. 68 no. 2 in A minor were used: the original, played slowly on piano, and a transposition into a major key, played at a faster tempo. 28 kindergarten children judged the minor version as “sad” and the major as “happy”. Story: Blue Bear, a short story containing no conflict or words denoting clear emotion, was chosen for the study. Intonation and rhythm of reading were standardized. Procedure: Sixty kindergarten children (mean age = 5.5 years) were divided into 3 groups. Children in groups 1 and 2 heard the story read with the minor or major version of the music playing in the background. Children in group 3 heard it without music. Children were then asked 10 questions regarding the feelings of the Bear at different points of the story, answering by selecting happy, sad or neutral pictures of faces.

Results
In most questions regarding various points of the story, as well as the story as a whole, children who heard the story with the minor background music tended to judge it as sadder, children who heard it with the major background music judged it happier, and children who heard it with no background music were more or less equally divided.

Conclusions


The present study suggests that even at a very young age, background music may affect the interpretation of other stimuli. By the age of 5, children are sufficiently familiar with the tonal idiom and sensitive to musical norms to be affected by background music in their emotional interpretation of a story.

Key words: Children, Emotion, Story

[email protected]

16.2 Interpretation of the emotional content of a musical performance by 3 to 6-year-old children

László Stachó

Department of Musicology, Liszt Ferenc Academy of Music, Budapest
Department of Music, University of Jyväskylä, Finland
Institute of Psychology, Eötvös Loránd University, Budapest

Background
According to contemporary research on performing practice, expressive, skilled music performance is characterized by two fundamental features:

1. How the performer conveys the underlying structural constraints of a musical piece to his audience;
2. How he brings out the “character”, i.e., the emotional content of the piece.

The emotional content expressed by the performance became a widely discussed topic in cognitive musicology only during the last decade.

However, empirical evidence regarding children's abilities in interpreting the emotional content of a musical performance is still lacking.

Aims
To investigate 3 to 6-year-old children's abilities to interpret the emotional content of a musical performance.

Method
Short musical phrases (excerpts from short keyboard pieces) were presented in a deadpan computer performance to 10 non-musicians who rated the expressivity of the excerpts. Two of the excerpts rated as “inexpressive” were then played by two pianists, with five emotional expressions (happiness, sadness, fear, anger, and neutral), to adult musicians (N = 10), adult non-musicians (N = 10), and 3 to 6-year-old children (N = 15). The children rated the excerpts with the aid of photographed faces bearing the five emotions studied.

Results
Adult musicians and non-musicians proved to be equally good at interpreting the emotional content of a performance, while the children did significantly less well. However, the children's responses were characterized by systematic biases.

Conclusions
These results confirmed previous investigations of the abilities of adult musicians and non-musicians to interpret expressivity in musical performance. Our new results on children's interpreting abilities are discussed in the context of general theories of emotional development.

Key words: Interpretation of emotion, Musical performance, Emotional development


[email protected]

16.3 Music lessons and emotional intelligence

E. Glenn Schellenberg

University of Toronto, Canada

Background
Music lessons are associated with enhanced IQ, and such associations are evident in experimental as well as in correlational and quasi-experimental studies. Observed associations are not specific to standard subcomponents of IQ, such as spatial, mathematical, or verbal abilities. Music lessons also improve one's ability to decode the emotions expressed through prosody in speech. It is unclear, however, whether music lessons are predictive of emotional intelligence more generally.

Aims
The goal was to determine whether training in music is predictive of scores on a standardized measure of emotional intelligence.

Method
The sample comprised undergraduates who had either extensive training in music (7 or more years of formal lessons) or no musical training. Each participant was tested with the MSCEIT (Mayer-Salovey-Caruso Emotional Intelligence Test), an “ability-based measure of emotional intelligence” that provides scores on eight subtests, four branches of emotional intelligence (managing, understanding, using, and perceiving emotions), and a total score. Participants were also administered the Kaufman Brief Intelligence Test (K-BIT), a standardized measure of IQ (M = 100, SD = 15) that provides a total score as well as separate scores for verbal (vocabulary) and nonverbal (matrices) abilities.

Results
As in previous research, musically trained participants outperformed their untrained counterparts on the IQ measure (K-BIT). Effect sizes were relatively large, with the two groups differing by about two-thirds of one standard deviation (approximately 10 points) on both the verbal and nonverbal measures.

By contrast, the two groups did not differ on the total emotional intelligence score, on any of the four branch scores, or on any of the eight subtests. In fact, there was no hint of an effect (i.e., all Fs < 1), which suggests that the null findings were not due to a lack of power.

Conclusions
As in previous research, music lessons had a reliable positive association with intellectual functioning. The results indicate either that (1) this association does not extend to emotional intelligence, or that (2) the MSCEIT is not a valid measure of emotional intelligence.

Key words: Emotional intelligence, Intelligence, Music lessons

[email protected]


16.4 Children’s perception of some basic sound parameters

Elisabetta Piras

Background
In a famous book (Six études de psychologie, 1964) J. Piaget proposed some well-known tests, intended to determine children's mental development stages, based on abstraction abilities concerning substance conservation. Studies in the musical field (e.g., Bamberger 1969, 1991) analyze implicit or explicit forms of children's reasoning that seem to be not very distant from those studied by Piaget.

Aims
My purpose is to show the results of a set of tests based on the mould of Piaget's work. The idea was born from a didactic experience. The tests concern three sound parameters: pitch, value and timbre. The focus of attention is children's ability, from a perceptive point of view, to abstract and “conserve” one sound parameter (for example pitch) while another is changed (for example value).

Method
Nine tests have been organized as in the following examples. Initial sound material: three sounds with different pitch, same value and same timbre. Transformation: the value of each sound is changed. Verification: does the pitch of the sounds change? Initial sound material: three sounds with different value, same pitch and same timbre. Transformation: the timbre of each sound is changed. Verification: does the value of the sounds change?

These tests were proposed to sixty-eight children, divided into five classes of an elementary school (ages 6 to 10 years).

Results and Conclusions
The comparison of the test verifications was based on the age and musical background of the children (particularly their knowledge of the concepts involved in the experience). A first check of the results indicates that children's ability to abstract sound parameters grows, obviously, with age, although this does not really coincide with the stages indicated by Piaget. Moreover, there are no very important differences between the tests done by children who play an instrument or regularly take part in other musical activities, and those doing musical activities at school only.

Key words: Abstraction, Conservation, Sound parameters

[email protected]


17 Performance I

17.1 A methodology for the study and modeling of choral intonation practices

Johanna Devaney

Columbia University, USA

Background
The modeling of choral intonation practices, much like those of non-fretted string ensembles, presents a unique challenge because at any given point in a piece a choir's tuning cannot be consistently related to a single reference point; rather, a combination of horizontal and vertical musical factors forms the reference point for the tuning.

Aims
This paper proposes an approach for modeling such practices through the intersection of a computer-generated statistical learning model and commonly received knowledge in the field of music theory. The computer model is built on data extracted by tracking microtonal pitch variations between recorded choral performances and twelve-tone equal temperament. The relevant areas of music theory referenced include tuning, temperament and intonation theories, as well as harmonic and voice-leading practices and expectations.

Main contribution
This methodology builds and expands on centuries of theory of choral intonation practices in its attempt to address these practices in greater detail. The conflict between vertical and horizontal tendencies is addressed in relation to the harmonic series and recent theories of tonal tension and attraction. The methodology also makes use of recent developments in electrical engineering and computer science, particularly in the areas of polyphonic pitch extraction/F0 estimation, MIDI-to-score alignment, and machine learning. These techniques offer a relatively novel approach to the study of performance practices, allowing a detailed model to be built from the analysis of a large number of recordings of real-world performances rather than from laboratory experiments.

Implications
Horizontal intonation practices function as expressive phenomena, and as such they are related to the ways in which musical forces shape musical expectation. Thus, this approach may be


able to provide empirical data to help substantiate both general theories of attraction and some issues related to musical expectation, meaning and emotion. Once a model based on this methodology has been fully developed, it may also be useful as a means of producing more accurate computer-based temperament re-creation, as a training guide for vocalists, and as a method of rendering both MIDI and audio recordings more intonationally accurate.

Key words: Intonation, Choirs, Tonal tension/attraction

[email protected]

17.2 Pop-E: A performance rendering system for the ensemble music that considered group expression

Mitsuyo Hashida1, Noriko Nagata2, Haruhiro Katayose2

1Center for Human Media, Kwansei Gakuin University, Japan
2School of Science and Technology, Kwansei Gakuin University, Japan

This paper describes the design and systematization of a music performance rendering model, together with some evaluations.

Musical performance rendering has been one of the hottest themes in music research related to artificial intelligence. Over the last two decades, the quality of musical performances rendered by computers has improved greatly; some performances can be compared with those of skilled amateur musicians. Still, problems remain to be solved, especially regarding 1) automatic analysis of musical structure, and 2) natural expression of ensemble (polyphonic) music. We have been engaged in designing computational models to cope with these problems, and have proposed several systems. In this paper, we present a rule-based performance rendering architecture called Pop-E, which focuses on natural expression of polyphonic music.

Pop-E generates an expressive performance by applying rules, including those for expressing groups and those for prolonging certain notes, individually to each voice. Unlike other performance rendering systems, this method gains expressiveness for all of the melodic parts. In return, we have to provide functions that manage the time transitions of each part. In a musical performance, if the specific notes of each voice synchronize at certain points, the other parts should be given priority for their own expression. Based on this idea, we formulated an effective way to find the synchronization points using the group structures and prolongations given by a user. For the adjustment between adjacent synchronization points, we adopted a time-warping procedure, which rescales the total duration of the non-attention part, maintaining the ratio of the occupied length of each note, so that it fits the attention part given by the user. One of the characteristic rules of Pop-E is to prolong notes at transitions of the attention part among voices. The introduction of this rule improved the performance rendering of Romantic-school music.
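The time-warping adjustment described above amounts to rescaling a voice's note durations between two synchronization points so that the segment's total length matches the expressively timed attention part, while each note keeps its relative share. A minimal sketch of that idea (the function name and data layout are our illustrative assumptions, not Pop-E's actual implementation):

```python
def warp_segment(durations, target_total):
    """Rescale note durations between two synchronization points so their
    sum equals target_total, preserving each note's relative share."""
    current_total = sum(durations)
    scale = target_total / current_total
    return [d * scale for d in durations]

# Fit a 4-beat accompaniment segment to a melody segment stretched to 4.5 beats:
# warp_segment([1.0, 1.0, 2.0], 4.5) -> [1.125, 1.125, 2.25]
```

Because every duration is multiplied by the same factor, the voices re-align exactly at the next synchronization point without distorting the internal rhythm of the non-attention part.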

A performance rendered with Pop-E won the Rencon Award at NIME-Rencon (the Performance Rendering Contest held at NIME04, Hamamatsu). In the full paper, we describe an evaluation of the model from the standpoints of productivity improvement and the ability to describe plural performances of virtuosi.

Key words: Performance rendering

[email protected]


17.3 A comparative study of air support in the trumpet, horn, trombone and tuba

Jonathan Kruger1, James McLean2, Mark Kruger3

1Rochester Institute of Technology, Rochester NY, USA
2State University of New York - Geneseo College, Geneseo, NY, USA
3Gustavus Adolphus College, St. Peter, MN, USA

Background
Several studies have measured airflow and blowing pressure during brass performance (Fletcher & Tarnopolsky, 1999). None has been more significant for brass players, however, than the observations made by Arnold Jacobs, the former principal tubist with the Chicago Symphony Orchestra. Jacobs's observations, made between 1959 and 1960 with Dr. Benjamin Burrows, were never published; only anecdotal comments made by Jacobs in subsequent interviews and master classes remain. Jacobs claimed that blowing pressure (intra-oral pressure) increases as players ascend in range and that airflow in the horn subsequently decreases. Jacobs went further to claim that intra-oral pressure and airflow were consistent at a given pitch and decibel level, regardless of the instrument being played. According to Jacobs, a tuba player and a trumpet player create the same amounts of intra-oral pressure and airflow in their respective instruments when performing the same enharmonic pitch.

Aims
The purpose of this study is to replicate Jacobs's unpublished research and to extend it by measuring airflow, intra-oral pressure, and sound simultaneously, and by measuring changes continuously rather than recording only peak measurements.

Method
Four musicians (trumpet, horn, trombone, and tuba) performed a series of musical exercises at the same concert pitch, selected to allow comparison of air support systems as a function of pitch, loudness, and articulation. Airflow, intra-oral pressure, and sound were sampled at 11,000 Hz using a microphone and transducers produced by Vernier Electronics (intra-oral pressure) and Glottal Enterprises (airflow).

Results
Intra-oral compression does increase as pitch increases, and airflow decreases as pitch increases, in each of the four members of the brass family. Both measures are also sensitive to changes in loudness (dynamics). As Jacobs observed, the larger-bore instruments require less intra-oral compression and produce more airflow when playing in their normal ranges than the higher instruments. Contrary to Jacobs's assertion about the similarity of instruments playing the same pitch, we observed measurable differences.

Conclusions
Although intra-oral compression and airflow are related in similar ways across the brass family, when playing in normal ranges the experience of air support is likely to be perceptually different.

Key words: Music performance, Music pedagogy, Brass performance

[email protected]


17.4 The visual impact of specific body parts on perceived conducting expressiveness

Clemens Wöllner

Martin-Luther-University Halle-Wittenberg, Germany

Background
Whilst conductors develop their individual interpretative style to transmit their expressive intentions, there seems to exist a common basis of understanding for conducting gestures that, e.g., allows orchestras all over the world to become acquainted with new conductors in the shortest of time. Research has mainly concentrated either on qualitative descriptions of renowned conductors or on quantitative movement analysis of basic conducting patterns.

Aims
In the current study, it is believed that both approaches are necessary for a comprehensive investigation of expressive conducting. For that reason, perceptual and quantitative movement analyses are combined. In particular, the impact of specific body parts that potentially communicate expressive musical intentions is investigated.

Method
Five conductors with different levels of experience each conducted four excerpts from a Beethoven symphony that varied in musical expressiveness. Video recordings were manipulated according to the following conditions (template technique): a) only the face was visible, b) only the arms were visible, and c) the whole body was visible in a blurred fashion, so that overall movements were detectable but no detailed information could be gained from facial or other bodily expressions. 127 musically trained and untrained participants first watched the randomly presented video sequences in the aforementioned conditions without sound. For each video sequence, they were asked to rate affective and communicative aspects. Complete video sequences with sound and all visual information were then presented as a reference and rated similarly. In addition, four experts (experienced conductors of major orchestras) rated these video sequences on various conducting quality items.
Quantitative video analyses were also employed.

Results
While observers could gain more general information about the filmed situation from the hands-only condition, the video sequences presenting the conductors' faces resembled the reference significantly more strongly than the hands-only or the blurred conditions in terms of expressiveness ratings. The conductors interpreted each musical excerpt consistently in a different manner, as the ratings show even for conditions without sound. Results reveal parallels between the ratings of participants and experts, and further between the ratings for arousal and quantitatively analysed movement parameters.

Conclusions
Preliminary analyses indicate that facial affective behaviour is more essential than other bodily communication for general perceptions of conducting expressiveness. The lack of general differences between untrained observers and experts points to the existence of a commonly understood gestural language of conducting.

Key words: Nonverbal communication, Affective response, Motion analysis

[email protected]


18 Music Therapy I

18.1 Focus on music: Reporting on initial research into the musical interests, abilities, experiences and opportunities of visually impaired children with septo-optic dysplasia

Adam Ockelford1, Graham Welch2, Linda Pring3, Darrold Treffert1

1Royal National Institute of the Blind, UK
2Institute of Education, University of London, UK
3Goldsmith's College, University of London, UK

Background
The study grew out of a music workshop held during the FOCUS Families UK Conference (2003). A number of parents reported that their children had what seemed to be unusually high levels of musical interest or ability, and they asked whether these characteristics might be related to their children's medical condition, septo-optic dysplasia. If so, what were the implications for them, and for teachers and therapists? It was agreed that an exploratory study should be undertaken to investigate the matter further.

Aims
The project sought to investigate the musical interests, abilities, experiences and opportunities of children with septo-optic dysplasia in the UK and the US.

Method
A specially designed parents' questionnaire was piloted and distributed through the FOCUS Families Network and other contacts known to the researchers. Through a series of open and closed questions, parents were asked what they observed in their children in day-to-day situations, as well as relaying the findings and accounts of professionals. Data were also gathered from matched controls with no disabilities.

Results
32 questionnaires were completed on behalf of children with septo-optic dysplasia. These were matched with 32 controls. Statistical analyses of returns indicated that, in the view of parents, levels of musical interest and ability of blind children were higher than those of the partially sighted, suggesting that level of vision may be a key factor in influencing musical development; that blind and partially sighted children were significantly more likely to have a particular interest


in music; and that the musical development of blind children was more likely to be unusually advanced compared to their peers. In contrast, it was also reported that a smaller proportion of children with septo-optic dysplasia played instruments, and none was said to have had formal instrumental tuition.

Conclusions
The researchers concluded that those working with blind and partially sighted children, especially in the early years, should ensure that account is taken of the likely importance of music to them in a range of contexts; and that further research be undertaken into the musical interests and abilities of blind and partially sighted children, to understand better how different medical conditions and differing levels of functional vision may impact on auditory and musical development.

Footnotes
1. FOCUS = For Our Children's Unique Sight - a support network of families whose children have septo-optic dysplasia / optic nerve hypoplasia.
2. Septo-optic dysplasia is a rare condition that occurs in approximately 1 in 16,000 children. It is defined as a combination of optic nerve hypoplasia (absent or small optic nerves), pituitary abnormalities, and the absence or malformation of the septum pellucidum or corpus callosum or both - without which communication between areas of the mid-brain (such as the transfer of sensory information) is hampered.

Key words: Septo-optic dysplasia, Visual disability, Musical behaviour

[email protected]

18.2 Empowering musical rituals as a way to promote health

Kari Bjerke Batt-Rawden

Akershus University College, Norway
University of Exeter, UK

Background
The focus of this study is how and why listening to music plays a role as a “folk medical practice” in the lives of men and women with long-term illnesses and disease. Through participating in a music-promoting initiative, salutogenetic and empowering factors among the individuals can be strengthened. This study illuminates the role and significance of listening to music as part of informal learning in everyday life for the long-term ill.

Method
The study design sought to elicit participants' life stories and stories of being well and being ill through the prism of music, using a qualitative research stance consisting of a pragmatic synthesis of elements of action research, ethnography and grounded theory. 22 Norwegians, aged 34 to 65, with long-term illnesses and diseases were recruited as a strategic sample, involving eight in-depth interviews stretching over a year from 2004 to 2005. A novel Participatory CD design was developed. Four double-CD compilations from different genres were used as devices to increase knowledge as to whether participants, through exposure to and exchange of new musical materials and practices, might learn to use music as a “technology” of self for health, healing and recovery.

Results and Conclusions
Participants described their involvement with the project, and their subsequently raised musical consciousness, as beneficial, resulting in increased self-awareness and the gaining of a new repertoire of


Tuesday, August 22th 2006 183

musical skills relating to self-care. Listening to music and musicking seemed to be important tools in the process of change and self-development, enhancing well-being and "wellness" and offering resources for recovery and quality of life in the face of illness. Emphasising the "possibility of learning musical skills", and telling "others" of the beneficial effects of musicking and the enjoyment and satisfaction that could be derived from it, are advantages that could be tapped and "transported" into "informal teaching settings" as part of health musicking in local communities. Music, as a method or strategy in health promotion and rehabilitation, ought to be a vital factor for the enhancement of general health and quality of life in the population for years to come.

Key words: Participation, illness, salutogenetic, informal, well-being

[email protected]

18.3 Playing with autism

Pierluigi Politi1, Marianna Boso1, Enzo Emanuele2, Stefania Ucelli1, Francesco Barale1

1Department of Applied Health and Psychobehavioural Sciences, University of Pavia Medical School, Italy
2Interdepartmental Center for Research in Molecular Medicine (CIRMC), University of Pavia Medical School, Italy

Repetition is a common feature that is shared by music (Rose 2004) and autism (Wing and Gould 1979). Accordingly, the "restricted repetitive and stereotyped patterns" which represent a diagnostic criterion for autism (APA 2000) might well be paralleled by certain ostinato structures that characterize some musical tunes, either at a rhythmic or melodic level.

At Cascina Rossago (Pavia, Italy) - a farm community for young adults with autism - we have created a group of autistic patients and health care providers who share musical attitudes and skills. Specifically, our group meets weekly with the exclusive purpose of playing and enjoying music together. In this context, some setting components - such as space, time, and musicians - are strictly defined, while others - including songs or tunes - are handled freely. Altogether we believe that, as a group, we have developed a shared nonverbal language in which spontaneity represents the essential core feature.

While at the beginning we used simple catchy melodies in our performances, in the next step we experienced more intense spiritual absorption when playing classical jazz standards. Specifically, while the ostinato ground bass of some jazz tunes (e.g. Coltrane, Spiritual 1961) assures a constant degree of repetition (Barale & Ucelli in press), it may allow - at the same time - the expression of individual creative performances in the musicians (Gaslini 1980). It is feasible, therefore, that such an experience could create a safe environment in which patients may co-operate within a group.

Moreover, our group has defined some technical targets in its own performances, including listening to others when playing, performing in a coordinated manner to facilitate integration, changing musical instruments spontaneously when desired, and dealing with exuberant behaviours.

Although autistic people typically show severe deficits of spontaneous communication with others (APA 2000), our experience clearly demonstrates that nonverbal forms of communication are a possible source of integration and spiritual absorption in autistic subjects. It is also conceivable that the musical language could represent a promising means to overcome the severe communication deficits that characterize autism.


184 Music Therapy I

A videotape will illustrate a typical music session at our community.

References
American Psychiatric Association (2000): DSM-IV-TR. Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision. Italian translation, Milano: Masson

Barale F., Ucelli S. (in press): La debolezza piena. Il disturbo autistico dall'infanzia all'età adulta. Torino: Einaudi

Gaslini G. (1980): personal communication

Rose G.J. (2004): Between couch and piano. Hove & New York: Brunner-Routledge

Wing L., Gould J. (1979): Severe impairments of social interaction and associated abnormalities in children: Epidemiology and classification. Journal of Autism and Developmental Disorders, 9(1), 11-29

Key words: Autism, Communication, Jazz

[email protected]

18.4 An investigation of the effects of music in the treatment of patients with dementia

Julie De Simone1, Raymond MacDonald1, Alistair Wilson2

1Glasgow Caledonian University, Glasgow, UK
2Gartnavel Royal Hospital, Glasgow, UK

Background
There is a growing body of work which suggests that music may provide a variety of social and psychological benefits for individuals with dementia (Aldridge 2000). However, there is a need for further research. The approach adopted in the study stems from an education and occupational science perspective that focuses on a holistic model of care and aims to give participants the opportunity to engage in creative music activities.

Aims
This paper investigates the effects of music on mood, social interaction and cognitive functioning in the treatment of patients with dementia.

Method
This is a joint study between Glasgow Caledonian University, Greater Glasgow Health Board and a music charity, Polyphony, which provides access to musical activities in a large psychiatric hospital and four residential nursing homes in the west of Glasgow. The study utilised a mixed methods design employing elements of qualitative and quantitative methodology. The musical activities included listening to recorded music, improvisation and composition based on the elements of music, e.g. pitch and timbre. There were six participants in an experimental group and four participants in a non-intervention control group. Quantitative methods utilised were the Mini Mental State Examination (Greater Glasgow Health Board, 2000) and a cognitive functioning questionnaire (Van der Gaag, 1990). Qualitative methods utilised included interviews with participants and carers. An evaluative diary was utilised to assess mood and social interaction.

Results
There were significant increases (p<.05) in cognitive functioning in the experimental group. There was an increase (not significant) on the mini mental state questionnaire. Qualitative data analysis


showed that the participants' social interaction increased and their mood improved. Staff at the nursing home also noted improvements in mood and social interaction.

Conclusions
This research highlights the benefits of music intervention in improving cognitive functioning, mood and social interaction. The incorporation of music as a therapeutic tool in the care of patients with dementia has the potential to improve their quality of life. Further research will shed more light on the process and outcomes of this unique intervention.

Key words: Creative music, Patients with dementia, Effects

[email protected]


19 Rhythm III

19.1 Timing in sequences: Development of a visuospatial representation method and evidence for rhythmic categorical perception

Amandine Penel1, Christopher A. Hollweg2, Carlos D. Brody2

1Laboratoire de Psychologie Cognitive, Université de Provence, France
2Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA

We used an analog reporting method to investigate the perception of temporal patterns, which involves the translation of temporal information into a visuospatial representation. Participants heard sequences of three brief tone pips, spanning a total of 1 or 1.2 sec from first to last tone (blocked), with the middle tone uniformly distributed within the interval. After each sequence, they had to place a vertical line within a horizontal bar symbolizing the sequence, at a position that represented the time when the middle tone occurred. We found that stimuli with middle tones that were within +/-10% of the midpoint were reported as if they had occurred at the midpoint itself (i.e., assimilation). A subdivision of the total interval into equal parts thus seemed to correspond to a perceptual category: response variability was maximal at the boundaries of the assimilation zone, and a contrast effect was observed immediately beyond that zone. If the method indeed captures the mental representation of the patterns, it should be possible to validate our findings in experiments not involving visuospatial responses. We performed a second experiment in which a classical 2AFC discrimination task was used. Participants had to compare two auditory sequences, each consisting of three tone pips, and decide whether in the second sequence, compared to the first sequence, the middle tone was played earlier or later. As predicted by the visuospatial data of the first experiment, local maxima in performance were observed near the assimilation zone's boundaries in this purely auditory experiment when one sequence fell outside the assimilation zone while the other sequence fell within it. This conforms to rhythmic categorical perception and generalizes previous findings (Clarke, 1987; Desain & Honing, 2003; see also Schulze, 1989) in a task which does not involve rhythmic categorization (or quantization) via the use of musical notation and the explicit instruction to disregard expressive timing, and to participants who are not highly musically trained (ours were nonmusicians or amateur musicians at best).

Key words: Rhythm, Perception, Isochrony


[email protected]

19.2 Groove microtiming deviations as phase shifts

Andy McGuiness

Open University, UK

Background
Several studies of microtiming deviation in groove musics of the African diaspora have argued that the deviations are an essential component of the groove, rather than a product of poor timing by performers (Bilmes 1993b; Alén 1995; Cholakis 1999; Freeman and Lacey 2001; Waadeland 2001; Freeman and Lacey 2002). Data available from two studies (Alén 1995; Freeman and Lacey 2001) yields phase deviations in the range of approximately 5 msecs to 50 msecs in absolute values (McGuiness 2005).

The existence of phase corrections in response to subliminal timing perturbations - below the detection threshold for ISI changes - has been established in a number of studies (Thaut, Tian et al. 1998; Repp 2000; Repp 2001; Repp 2001; Repp 2002; Repp 2002; Thaut and Kenyon 2003). Entrainment models incorporating separate phase correction and period correction terms have been described by Large and Kolen (1994), Mates (1994), Thaut, Miller et al. (1998), and Large and Jones (1999); but these make no prediction about the respective location of period and phase correction within the Wing & Kristofferson two-process model of internal clock and motor implementation (Wing and Kristofferson 1973a). The phenomenon of negative synchronisation error has been found in a number of tapping studies (Dunlap 1910; Vos, Mates et al. 1995; Pressing 1998; Thaut, Tian et al. 1998; Semjen, Schulze et al. 2000; Repp 2001).

Aims
To adapt existing models of entrainment to account for microtiming deviations.

Main contribution
It is proposed that 1) systematic microtiming deviations are phase shifts; and that 2) phase correction processes in entrainment are located in the motor (implementation) stage of the Wing & Kristofferson model.

Implications
The idea of functional anticipation is introduced, which accounts for entrainment phase corrections, microtiming deviations, synchronisation error in tapping studies, and adaptation to latency in musical instruments.

Key words: Entrainment, Participatory discrepancies, Microtiming

[email protected]

19.3 Detecting changes in timing: Evidence for two modes oflistening

J. Devin McAuley, Deborah Frater, Kellie Janke, Nathaniel Miller


Department of Psychology, Bowling Green State University, USA

An outstanding psychological question in research on music perception concerns the nature of the mechanisms involved in judgments about sequence timing. A simple musical example that illustrates this capacity is our ability to indicate whether a musical sequence is accelerating ("speeding up") or decelerating ("slowing down"). Theoretical perspectives on this problem are typically one of two sorts: an interval-based perspective or a beat-based (entrainment) perspective. The interval-based perspective assumes that judgments about sequence rate are based on comparisons of the absolute time intervals comprising the sequence. If the absolute duration of successive time intervals shortens or lengthens, then this is an indication that the sequence is speeding up or slowing down, respectively. In contrast, a beat-based perspective assumes that judgments about sequence rate are based on comparisons of successive events with a periodic beat induced by the sequence. Events that arrive prior to an expected beat are "early" and suggest that the sequence is speeding up, while events that arrive after an expected beat are "late" and suggest that the sequence is slowing down. The present research contrasted interval- and beat-based perspectives on timing in a series of experiments. Participants listened to simple monotone sequences that differed in temporal structure and judged whether, at the end of the sequence, they felt the sequence was speeding up or slowing down. Across all experiments, findings supported two modes of listening. Some participants appeared to be listening in an interval mode, while others appeared to be listening in a beat-based mode. An intriguing consequence of individual differences in mode of listening was that there were particular stimulus instances that yielded opposite perceptions. For some stimulus sequences, a beat-based mode provided listeners with a strong sense that the sequence was speeding up, while an interval-based mode suggested that the same sequence was slowing down. Implications of this work for the neural mechanisms underpinning time and rhythm perception will be discussed.

Key words: Rhythm and tempo, Models of timing, Individual differences

[email protected]


20 Workshops

20.1 An introduction to musicians' gestures: Recording, analyzing, and reporting on the "body language" of musicians

Richard Ashley

Northwestern University, USA

Background
The last two decades have seen a great increase in research oriented toward understanding the role of bodily movement, facial expressions, and other physical actions by people engaged in normal communicative situations; researchers such as Ekman, Goldin-Meadow, Kendon, and McNeill and their students and colleagues have contributed substantially to our knowledge of how gesture helps people communicate. A handful of researchers have applied these concepts to musicians' interactions, but to most researchers in music cognition such work continues to have the status of exotica.

Aims
This workshop gives a practical introduction to the field of gesture studies for researchers in music cognition. Attendees will become acquainted with the major concepts and techniques of the field, will receive a copious bibliography of appropriate source material, and will see techniques of recording and analyzing musicians' body movements, as well as learn about technological aids to such work.

Activities of the workshop
The workshop has four main aspects: 1) a general introduction to the field; 2) an introduction to major ways in which gesture can be analyzed, including not only those approaches from the field of gesture studies per se, but also those from allied disciplines such as cognitive anthropology and conversation analysis; 3) a description of technical aspects of recording and analyzing gestures, including the use of single or multiple video cameras and motion capture systems, as well as computer software for analyzing and annotating gestural behaviors; and 4) a "data session" where a short videotaped excerpt is analyzed.

Implications
Gesture research can be better integrated into the daily work of researchers interested in music cognition.


Specific value and meaning
This workshop aims to give a window into the ways in which gesture research can be carried out, often with low-cost equipment and software, and yield good results. Attendees will be able to begin applying gesture research to their own questions about music cognition, will know where to turn for more information, and can make use of the ever-increasing literature on gesture research to good ends.

Key words: Gesture, Communication, Expressive performance

[email protected]

20.2 Sight-reading strategy: A cognitive psychology approach

Jue Zhou, Dan Feng

Music Conservatory of Southwest China University, China

Sight-reading is an important step in piano education and piano performance. Piano teachers should take the initiative in fostering and bettering their students' sight-reading ability with appropriate strategies. This paper explores the process of sight-reading from the perspective of cognitive psychology and argues, based on a series of experiments, that sight-reading is influenced by sensory input, pupil speed, sound recognition speed and, most importantly, the categorization of scores. We tentatively propose a strategy termed "figuring" with the purpose of helping piano educators modify their tactics in instructing sight-reading.

Key words: Sight-reading, Cognitive psychology, Strategy

[email protected]


Part II

Wednesday, August 23rd 2006


21 Symposium: The influence of preference upon music perception

Convenor: Raymond MacDonald

Discussant: David Hargreaves

There is now a well-established literature highlighting the many ways in which music can influence human behaviour, emotions and general psychological functioning (MacDonald, Hargreaves and Miell, 2002; Miell, MacDonald and Hargreaves, 2005). Much of this literature has focused upon the effects of specific pieces or genres of music and has produced compelling evidence to suggest that particular pieces of music can have causal effects on specific aspects of psychological functioning (Juslin and Sloboda, 2001). However, given that music can have intense personal and subjective meanings for listeners, which may not relate to structural aspects of the music (e.g. tempo or mode), there is a need for more research that investigates the subjective aspects of musical perception.

In this symposium, five papers will be presented that each offer a different, yet related, way in which preferred music (i.e. music that is selected by participants as being particularly enjoyable) can influence a range of psychological variables. A first introductory paper will discuss the role of preference in our musical identities and communication. The second paper discusses a series of empirical studies that highlight the effects of listening to preferred music on pain and anxiety perceptions in a laboratory setting. The third paper is a qualitative investigation of musical identities and in particular highlights the way in which our musical preferences influence our sense of identity across the life span. The fourth paper is an experimental study investigating the effects of preferred music upon driving behaviour. The final paper reports a series of experiments that investigate the effects of listening to preferred music in a hospital setting and also in a gymnasium setting.

Taken together, these papers highlight that preferred music is an important variable to be taken into consideration by researchers investigating the effects of music listening. In particular, the results of these studies suggest that listening to our preferred music, regardless of genre, can have specific psychological effects across a range of different environmental situations. The discussant, Prof David Hargreaves, will offer some conclusions and summary points and discuss possible directions for future research in this area.


21.1 An investigation of the effects of post-operative music listening in hospital settings

Raymond MacDonald

Glasgow Caledonian University, UK

Background
Although recent advances in the psychology of music have highlighted the importance of social and cultural influences upon music perception, there is still much to learn about the impact these factors have on music perception.

Aims
This paper introduces the topic of the effects of listening to preferred music by discussing the importance that preferred music has for both musical identities and musical communication. This paper also discusses two empirical studies that highlight the effects of listening to preferred music on pain and anxiety perceptions in hospital settings. The key role played by subjective evaluations in mediating the effects of music listening is highlighted.

Method
In Experiment 1, following minor surgery on the foot, 20 participants in an experimental group listened to preferred music while 20 participants in a control group did not. Experiment 2 involved a preferred music listening group of 30 females and a no-music control group of 28 females. Both groups underwent a total abdominal hysterectomy. Post-operative measures of pain, anxiety and patient-controlled analgesia were taken.

Results
In study 1, results indicate that the music group felt significantly less anxiety than the control group in the post-operative period. No differences in pain measurements between the two groups were found. In study 2 there were no significant differences between the two groups following the operation. In both studies 80% of participants selected music that was categorised as popular.

Conclusions
These experiments demonstrate the complex and multifaceted nature of music listening. Thus, when investigating the effects of music listening in hospital situations it is important to take into consideration the wider social context within which the participants are listening to music, e.g. preferred music choice, type of medical procedure, the way in which the music is listened to and the ward environment. These two studies also provide the foundation for a series of studies, reported in the rest of this symposium, that investigate music listening in laboratory and applied settings.

Key words: Therapy, Perception, Hospital

[email protected]

21.2 The effects of preferred music listening on pain

Laura Mitchell

Glasgow Caledonian University, UK


Background
"Audioanalgesia", the ability of music to affect pain perception, has been the focus of a significant number of recent research studies. The majority of these have used music selected by the experimenters for perceived relaxing qualities. Psychology of music theory, however, would suggest that our own preferred music may provide an emotionally engaging distraction capable of reducing the sensation of pain and the accompanying negative affective experience, regardless of the structural features of the music. In support of this standpoint, our earlier work using laboratory-induced pain in 98 healthy participants found preferred music to significantly increase tolerance of pain compared to white noise, relaxing pre-selected music and mental arithmetic, and to increase perceived control over pain compared to white noise, relaxing music and audiotaped comedy. These findings were then developed by a survey of the music listening behaviour of 318 chronic pain sufferers, which suggested that the distracting and relaxing effects continue in longer-term pain.

Aims
This presentation will focus on the findings of two recent studies investigating the effects of preferred music listening in a laboratory and a clinical setting.

Method
An experimental study of 80 participants compared the effects of preferred music to a silence control and a visual distraction in the form of a preferred painting chosen from 15 popular artworks. A further study currently in progress investigates the effects of preferred music on the experience of pain, anxiety and perceived control during acupuncture, with 15 participants having undergone one session under normal treatment conditions and one whilst listening to their chosen music.

Results
The experimental study found preferred music to increase tolerance and perceived control and decrease anxiety compared to the visual distraction and silence conditions. In the acupuncture study, participants were found to be significantly more relaxed during the music than the non-music session, and were significantly calmer, more relaxed, more comfortable and less tense after the music session but not after the normal treatment session.

Conclusions
These studies provide experimental evidence for the efficacy of preferred music listening in distracting attention from pain, reducing anxiety and increasing feelings of control over the experience.

Key words: Pain, Distraction, Popular music

[email protected]

21.3 A qualitative analysis of everyday uses of preferred music across the life span

Lana Carlton

Glasgow Caledonian University, UK

Background
There is a focus in existing theory on the importance of adolescence to music-related identity


processes, and a need for literature on musical identities in later life. Investigating and understanding the progression of musical identities throughout the life span would offer insight into the contribution of preferred music to personal and social identities.

Aims
To investigate, through interview-based research with younger and older adults, the changes that take place in everyday uses of music (and the centrality of music within personal and social identity) throughout the life span.

Method
Three studies are reported involving n=45 participants (26 females and 17 males). Study 1 utilised six focus groups where participants' mean age was 23.8 years. Study 2 used a semi-structured individual interview design (n=7) with participants' mean age at 25.7 years. Similarly, study 3 addressed age-related changes in uses of music in everyday life through individual interviews (n=6) with older adults (mean age 55.5 years).

Results
Analysis of recurrent themes for this rich data sample indicated a range of ways in which younger and older adults utilise music in the formation and maintenance of their multifaceted identities. Whilst younger and older adults differed in the social expression of their musical identities (e.g. genre-affiliated image was found to subside with age), there remain some constants. Age was found to be less relevant in the uses of music for emotional regulation, task assistance and in maintaining musical memories. Genre preference in older adults was found to have less influence on personal and social relationships than was the case for the younger adults interviewed. Similarly, the inter-group bias in strong musical identities was found to dissipate with age.

Conclusions
Music can be crucially important in everyday life to both older and younger adults. The numerous ways in which we use our preferred music, particularly in terms of identity formation and maintenance, were found to change with age. Issues relating to the generalisability of this qualitative data are discussed, as are directions for future research to this end.

Key words: Age, Identity, Focus groups

[email protected]

21.4 The effects of preferred music on driving game performance

Gianna Cassidy

Glasgow Caledonian University, UK

Background
Rapid changes in technology and society have contributed to the increasingly ubiquitous nature of music in our lives (North and Hargreaves, 1997). Computer game play provides a context in which we can investigate the effects of music listening on completion of an everyday task simulation, e.g. car driving (Brodsky, 2001).

Aims
The current study investigates the effects of participants' preferred music, and of experimenter-selected music of contrasting arousal and affect, on driving task performance.


Method
A between-subjects design was utilized, and participants completed the Profile of Mood States (POMS) questionnaire pre- and post-experiment. Participants (n=25) then completed a lap of a simulated driving task in one of five sound conditions: preferred music, positive low arousal music, negative high arousal music, car sounds, or silence.

Results
Participants completed the task significantly faster in the presence of preferred and high arousal music respectively. However, high arousal music and silence (the absence of auditory stimuli) had a significant detrimental effect on task performance, with participants performing significantly less accurately in the high arousal and silence conditions, respectively, than in the other sound conditions. This indicates that although participants completed the task more quickly in both high arousal and preferred music, only high arousal music displayed a detrimental effect on task accuracy. Participants were most accurate in the presence of car sounds with the addition of preferred music, and car sounds alone, respectively. Performance was significantly more accurate in the presence of preferred music and car sounds, respectively, than in low arousal music. In the presence of preferred music, participants reported finding the experiment more pleasurable, and the music less distracting, than those in other sound conditions.

Conclusions
The study highlights the importance of addressing listeners' control over music and music preference when investigating the role of music in our everyday lives. Participants performed best when listening to their self-selected music, or when in the presence of car sounds alone. These, and other results, will be discussed in relation to individual difference issues such as personality factors. Directions for future study will also be presented.

Key words: Music and driving, Preferred music, Personality

[email protected]

21.5 The effects of preferred music listening in college students and in renal failure patients

Maria Pothoulaki, Miwa Natsume

Glasgow Caledonian University, UK

Background
Recent research has demonstrated that preferred music can play a significant role in influencing physiological and psychological processes. Much of the previous research has been undertaken in a laboratory setting. This paper focuses upon issues of ecological validity in applied settings.

Aims
This paper presents two studies investigating the effects of preferred music listening on psychological and physiological functions in a hospital and a gymnasium. The first study investigates the effects of listening to preferred music on exercise performance using psychological and physiological parameters such as perceived exertion, fatigue and heart rate during step exercise. The second study examines the effectiveness of preferred music listening in reducing stress and perceived pain for patients undergoing haemodialysis treatment.


Method
Study 1 involved British and Japanese female university and college students. There were three music conditions: experimenter-selected music, preferred music and no music. Study 2 involved a between-subjects design (n=60) of end-stage renal failure patients who were undergoing haemodialysis. Stress and pain were measured pre- and post-treatment. Participants in the experimental group listened to preferred music during haemodialysis. Participants in the control group did not listen to music.

Results
In study 1, participants' heart rate, perceived exertion and fatigue levels were significantly lower during the step exercise while they listened to preferred music. Their exercise performance was also significantly enhanced with their preferred music. Additional motivational and distraction effects of preferred music, such as reductions in boredom, tiredness and/or muscle pain, were also found. Results for study 2 showed that mean state anxiety scores were significantly lower post-treatment for the experimental group than for the control group. Although there was not a significant difference between groups in pain intensity scores, results indicated a within-group difference in the control group, showing that participants in the control group experienced significantly higher post-treatment pain intensity.

Conclusions
The present findings suggest that preferred music listening has positive effects on both psychological and physiological processes. Gender differences in responses to preferred music will also be discussed.

Key words: Music and exercise, Music and hospitals, Preferred music

[email protected]


22 Education III

22.1 Children's responses to 20th century "art" music, in Brazil and Portugal

Graça Boal Palheiros1, Beatriz Ilari2, Francisco Monteiro1

1Escola Superior de Educação, Instituto Politécnico do Porto, Porto, Portugal
2Universidade Federal do Paraná de Artes, Brazil

Background
Several studies have investigated how children of different ages respond to diverse musical styles. Age seems to be a determinant factor in the development of musical preferences. Therefore, many teachers advocate the use of a wide variety of musical styles during the early years of schooling. However, very few studies have examined children's responses to twentieth century "art" music. In addition, most of the research on children's musical preferences has been carried out in Europe and North America, with few cross-cultural comparisons.

Aims
This study investigates the responses to and preferences for twentieth century "art" music of children from two different countries (Brazil and Portugal) at two age levels (9-11 and 12-14 years). It compares children from different cultures who share the same language, such as Brazilian and Portuguese children. Knowing their musical preferences may have clear implications for music education.

Method
So far, 57 Brazilian and 119 Portuguese children have participated in the study. Participants were tested in their schools in Curitiba and Porto. They listened to 13 excerpts from twentieth century works, rated their preferences, and were asked to describe each excerpt. They also indicated if they knew the composers' names from a given list.

Results
Preliminary results suggest that, overall, participants knew very few composers from the list and gave low or moderate ratings to the excerpts. The responses of Portuguese children (so far, the completed group) fell within the predictions of most theories of musical preferences, with younger children showing more openness to novel excerpts than the older children did. Additionally, age differences were also evident in both the quantity and the types of verbal descriptions of the excerpts, that is, with younger children producing fewer descriptions than the older children did.

Conclusions
A full comparison of the four groups and its implications for musical development will be presented later. So far, we may conclude that twentieth century "art" music is still rarely used in classrooms. Given its importance in Western music history as well as its potential for music teaching and learning, some implications for music education will be further discussed.

Key words: Children, Preferences, Twentieth century “art” music

[email protected]

22.2 The early development of three musically highly gifted children

Franziska Olbertz

University of Paderborn, Institute for Research on Musical Ability (IBFM), Germany

Background
Developmental psychology of music is mostly based on cross-sectional studies, which relate each musical ability to an average minimum age. Due to strong individual differences, rules can be formulated at an elementary level only. Therefore, it seems reasonable to pay more attention to musical development itself (cf. Schwarzer 2000). In longitudinal studies, the processes of and influences on musical development are of special interest.

Aims
The survey aims to describe musical development in gifted children and intends to derive typical patterns. A further objective is to develop general hypotheses for developmental processes and influences.

Method
The project consists of three case studies of musically gifted children, ranging in age from 5 to 7. Firstly, data are being collected over a period of two years. They are gathered by means of observation (abilities, interests, motivation, social behavior, environment), standardized tests (HAWIK, "Musik-Screening", "Wiener Test für Musikalität"), questionnaires and oral interviews with parents and teachers. According to an implicit hypothesis, general attributes of musicality are extremely evident in musically gifted children.

Results
The following results will be presented: 1. All three children are highly intelligent, especially in language and memory. 2. All three children are regularly fascinated by music, which affects their achievements; this cannot be planned. 3. In all three children, high achievements usually appeared after intensive musical training. Only the first signs of musical giftedness seem to emerge abruptly. Such incidences make the parents provide more musical opportunities, which in turn are conducive to further musical achievements. 4. The children show a distinctive need for appreciation, which they can obtain within their families by demonstrating their musicality. 5. In two cases, school enrolment impaired musical activities, whereas the children performed better in musicality tests. 6. In spite of their high musical competence, all three children have not yet achieved stability in meter.


Conclusions
There are obviously positive interrelations between early musical development and individual language and memory capacities (e.g. Chan et al. 1998). Furthermore, high motivation and the acknowledgement of musical abilities within the family are factors which improve individual musical development (e.g. Manturzewska 1995). Meter competence is most probably closely connected to nonmusical stages of maturation (e.g. Bruhn 2005), which have to be passed through even by highly musically gifted children.

Key words: Musical giftedness, Musical development

[email protected]

22.3 Creating original operas with special needs students

Kathleen M. Howland1, Hilda Bacon2, Debra Evans1

1Metropolitan Opera Guild
2Bancroft NeuroHealth

Background
Part of the mission of the Metropolitan Opera Guild (MOG) is to serve the public as an educational resource with the intention of making opera "accessible and exciting to people of all ages and backgrounds." In the 25-year history of the program, over 250,000 school children have participated in creating original operas both nationally and internationally.

Four years ago, a research project was initiated to better understand the accessibility of the "Creating Original Opera" (COO) curriculum and the experiences of special needs students in a variety of settings. The research reviewed schools with both mild and moderate special needs students and then focused on one specific site that serves severe special needs students in an educational and residential setting. This school, Bancroft NeuroHealth, adapted the curriculum to accommodate the severity of its students' needs as well as to create a shift from a behaviorally-modeled classroom paradigm to an after-school arts program utilizing facilitated mediation. The curriculum modifications will be presented, as well as the results from both quantitative and qualitative research analyzing artistic, psychosocial and cognitive gains in students pre- and post-opera production. Video examples of student-created operas will be included.

Aims
The intention is to introduce the potential of student-centered productions for special needs students with a variety of disabilities. This program is expected to be of interest to music therapists, expressive therapists, music educators and cognitive neuroscientists.

Results
After 4 years of opera productions, the students of Bancroft have created aesthetically pleasing products while gaining important arts and non-arts skills in the process. In quantitative pre- and post-opera measures, problem behaviors were noted to have decreased while social interaction ratings increased. Observations by staff have qualitatively described increases in divergent thinking, problem solving and sequencing by students whose diagnoses include traumatic brain injury, autism, attention deficit and mental retardation.

Conclusions
The opportunity to work on a long-term project in a group setting with individually-defined roles has yielded interesting results. This body of work has demonstrated that working in a comprehensive arts program can enhance skill development in psychosocial interactions, cognition and artistic expression. Further, this program is accessible to students with mild to moderate/severe disabilities of many etiologies.

Key words: Creating original operas, Special needs students, Non-arts benefits

[email protected]

22.4 Age differences in listening to music while studying

Anastasia Kotsopoulou, Susan Hallam

Institute of Education, University of London, UK

Background
There is an increasing literature relating to the use and impact of background music on behaviour, including that relating to listening to music while studying. To date, there has been little research specifically focusing on age differences and the perceived effects of music on different types of learning tasks.

Aims
This research aims to explore age differences in the perceived effects of playing music in the background while studying on a range of different tasks.

Method
Rating scale questionnaires were administered to six hundred students in three age groups: 12-13, 15-16 and 20-21. The questionnaires explored the use of music in everyday life, the extent of listening to music while studying, the kinds of tasks where background music was played, the perceived effects of music on studying, the types of music listened to while studying and the factors that influenced the decision to listen to music while working.

Results
Repeated measures and factorial analyses of variance revealed significant age differences in relation to most aspects of listening to music in everyday life, the moods which stimulated listening to music while studying, and factors which triggered music being turned off. There were few differences in relation to the kinds of tasks where music was played. Overall, the youngest students listened to music the least in their everyday lives, and while studying. The university students were least likely to agree that it aided concentration and helped them learn faster, while the youngest were the least likely to agree that it kept them company, alleviated boredom, relaxed them, interfered so they could not concentrate, and interfered because they sang along with the music. They were also less likely to turn off the music when they could not concentrate or were unable to learn, and were less discriminating in when they played music in the background.

Conclusions
The findings suggest that older students perceive that music plays an increasing role in their everyday lives. They also become more aware of its effects on their studying. Experimental research is now needed to verify these findings.

Key words: Background music, Studying, Age differences

[email protected]


22.5 Instrumental lessons: What do they expect? The role of gender in pupil/teacher interaction

Victoria Rowe

Centre for International Research on Creativity and Learning in Education (CIRCLE), Roehampton University, UK

Background
There is an absence of research on the role of gender in learning to play musical instruments. The nature of the typical one-to-one lesson suggests that the gender of teacher and pupil may have a significant impact on their interactions and expectations. The only aspect of gender to have been considered in relation to instrumental teaching so far is its influence on choice of instrument: nothing is known about its possible effects on the pupil/teacher relationship.

Aims
This investigation aims to discover whether distinctive patterns of gender interaction occur in instrumental lessons and, if so, what the consequences for pupils and teachers may be.

Method
The investigation involves two questionnaires, and a video and interview study. The questionnaires were designed to explore any generally held views about gender interactions: the first was completed by 53 instrumental teachers (19 male and 34 female), and the second by 50 pupils (25 boys and 25 girls aged between 10 and 15), in Southeast England. The third and fourth studies involved video recordings of lessons, which were then used as a stimulus for semi-structured interviews with each of the participants: 8 teachers (4 men and 4 women) and 16 pupils (8 boys and 8 girls aged between 10 and 15).

Results
Analysis of the questionnaires highlighted several gender-stereotypical views, although some respondents attributed any differences in their behaviour to individual characteristics rather than to gender. In the interview study, the 24 participants offered valuable insights that not only helped to account for these stereotypical views, but also presented some interesting contradictions. Analysis of the video data so far completed suggests that gender interactions do occur, but they can be overshadowed by the teacher's assumption of a dominant role and the pupil's submissive response.

Conclusions
Clear differences emerge between participants' recorded behaviour and their beliefs. While the power imbalance and teachers' and pupils' individual characters account for some differences in behaviour, gender interactions also have an effect.

Key words: Gender, Instrumental teaching, Gender stereotyping

[email protected]

22.6 Some trends in Estonian music education in the 21st century

Tiina Selke


Tallinn University, Estonia

Estonia, like the other Baltic countries, is known for its choirs and the All-Estonian Song Festivals, with more than 20,000 singers on stage. The quantity and quality of the choirs is one indicator of music education in comprehensive schools. The basis, the Relative Solfa (JO-LE-MI) method, which was part of music pedagogy in Estonia for about half of the 20th century, has changed. The idea of plurality in postmodernist society has brought new trends. Visual art is quite a new phenomenon in Estonian music education: we can talk about drawing and sculpting music, drawing mood with music, etc., even at the beginning of the 1990s. What is the aim of these activities, and how much are they accepted by the public and by music educators?

The analysis is based on the pedagogical media from 1990-2005, school songbooks from 1990-2005, questionnaires administered in 2000 and 2003-05, and the experience of the author of this paper. The inquiry (public N=205 and music teachers N=66) was carried out in 2003-05. The aim of the research was to investigate changes in music education. The analysis of the pedagogical press showed that the use of art activities with music listening was prompted by the music therapy boom of the 1990s. Music therapy was introduced to music teachers and included in the curriculum of music teacher preparation. The research showed that visual arts have been integrated with music in different ways, depending on pedagogical background and period. The attitude of the public towards the self-forming role of the arts has been mostly positive but varied, depending on the age of respondents: more accepted by parent-aged people (with different educational backgrounds) and less valued by respondents over 55 years of age (grandparents' age). Visual art activities in the classroom have mainly been the teachers' own initiative.

Key words: Classroom music, Music therapy, Art

[email protected]

23 Pitch II

23.1 Scaled harmonic implication and its realization: Searching for a unified cognitive theory of music

Eugene Narmour

Music Department, Psychology Graduate Group, University of Pennsylvania, USA

Background
From a psychological point of view, the number of contrasting analytical theories currently used in musical scholarship is troubling. Even in tonal style, which covers only a very small segment of music history, strikingly dissimilar theories of harmony, melody, voice leading, rhythm, meter, form, and so forth abound. However, most scientists will agree that an important goal of any intellectual discipline is to work toward formulating a unified cognitive theory, one that subsumes what seems empirically accurate from all the varying theories of musical analysis.

Aims
The approach herein is to extend the scope of the implication-realization model, whose tenets in certain respects efficiently and accurately capture some important psychological aspects of melodic listening. Relying on the formal and functional distinctions of similarity and difference and closure and nonclosure, we subject the parameter of tonal harmony to the major tenets of the model.

Specifically, we construct hypothetical syntactic scales of some of the essential properties of harmony: common-toneness between adjacent chords, rankings of all triads and all common seventh chords, and orderings of vertical consonances and dissonances. Nested in common-toneness are scale step, inversion, and soprano position. To complete the scalings we also invoke such properties as root ambiguity, ratio between types of ranked intervals, intervallic equivalence, template matching, and symmetrical construction.

Main contributions
From these parametric scales we then formally hypothesize, identify, and define prospective and retrospective harmonic processes (P) and reversals (R). Along the way, we present a thorough revision of the analytical symbols of the model, which allows demonstration of the flexibility and generality of the theory as applied to harmonic examples from various tonal styles. For experimental psychologists the results indicate new ways to test the manifold properties of harmony while constructing and controlling examples that are closer to real musical experiences of harmony. By positing analogous structures between melody and harmony, we can see how melody and harmony interact to produce aesthetic affect.

Implications
If theories reflect cognitive processing, the question arises why the evolving brain would dedicate so many diverse processing units to an art that, unlike language, seemingly lacks adaptive survival value. It is thus possible, efficient, and perhaps likely that all parameters of music are processed in the same way and with similar structures (though doubtless not in the same neural substrates).

Key words: Harmony, Modeling, Implication

[email protected]

23.2 Relative pitch learning: The advantages of active training and Asian ethnicity

Michael J. Hove, Mary Elizabeth Sutherland, Carol L. Krumhansl

Cornell University, USA

Background
Music is unique in perception in that pitches are generally coded in a relative, as opposed to an absolute, manner. Absolute pitch (AP) is a relatively rare ability and has been studied quite extensively. Recent research points to a possible genetic component, as suggested by both an advantage among East Asian populations and familial aggregation. It has also been suggested that AP is related to speaking a tone language. However, relative pitch, which is common among musicians, has received less attention. It offers the advantage that it is possible to investigate its acquisition in the laboratory setting in a relatively short time among non-musicians.

Aims
One purpose of this study was to see whether active learning (in which participants physically produce intervals) facilitated relative pitch learning as compared to passive learning (in which they simply hear the intervals and see the labels). A second purpose was to see whether different populations or language types would affect the facility of learning relative pitch.

Method
The paper will report three experiments in which non-musician subjects were taught to distinguish between different pitch intervals, starting on either C or F#. Experiment 1 tested participants (n=27; 10 Asian, 17 non-Asian Cornell students) on four intervals (M2, M3, P4, P5) after either active or passive training. Experiment 2 used three intervals (M2, P4, P5, found in the pentatonic scale) with participants (n=35 high school students; 9 Chinese (tone language speakers), 14 Hmong (tone language speakers), 12 non-Asian) in only the active condition. Experiment 3 tested participants (n=23; 11 Euro-American, 12 Asian) on three intervals (M2, P4, P5) after either active or passive training.

Results
Experiments 1 and 3 supported the idea that active learning facilitated the learning of interval labels. Thus, the action of physically producing the intervals during training resulted in higher accuracy in later interval naming. All three experiments found higher performance for East Asian participants. However, none of the experiments supported the link between pitch abilities and tone language.

Conclusions
These findings have implications for training relative pitch ability, and extend the issue of genetic factors beyond AP to RP.

Key words: Relative pitch, Active learning, Genetic factors in pitch perception

[email protected]

23.3 Melodic expectancy in contemporary music composition. Revising and extending the Implication-Realization model

Fernando Anta, Isabel Cecilia Martinez

Universidad Nacional de La Plata, Argentina

One of the most comprehensive theories of music expectancy is the Implication-Realization [I-R] model by E. Narmour. It postulates that event implications are determined by bottom-up and top-down expectancy processes. The original version of the [I-R] model postulated five bottom-up processes, whose cognitive reality has been identified. Several revisions of the model simplified it, reducing the analysis to only two processes: one in which, given a melodic interval, a following event close in register to the last tone is expected (Pitch proximity), and another in which every interval implies a change of direction, returning to the register of the first tone of the interval, this implication being more evident the larger the given interval (Pitch reversal). In spite of the empirical support for the reduced [I-R] version, its application to the study of contemporary music composition has not been reported so far.

Aims
To assess the validity of the reduced [I-R] version in describing bottom-up processes of melodic expectancy, present in the elaboration of the note-to-note level, during a contemporary music composition task.

Method
An experiment was run with 20 students majoring in music composition, who were required to compose a good continuation to 9 melodic fragments extracted from lieder by A. Webern. The first note of the composed continuation was analyzed to see if it satisfied the model's implicative criteria.

Results
Results strongly supported the revised [I-R] version, except for one small interval. Further data analysis revealed that small intervals also had clear implicative direction properties. As these were not reflected in the reduced [I-R] version, the Pitch reversal predictor was modified in order to capture them. The reformulation succeeded in describing the data, except for another small interval. A closer study of this case led to the hypothesis that answers could have been influenced by higher-level expectancy processes. If the last three notes of the fragment were considered instead of just two of them, the model could efficiently predict the observed answers.


Conclusions
Overall, results support the revised [I-R] version in describing expectancy processes at the note-to-note level, and suggest the model's preliminary validity in describing expectancy processes that occur at higher levels of musical structure.

Key words: Melodic expectancy, Contemporary music composition, Implicative processes

[email protected]

23.4 Influence of tonal context on tone processing in melodies

Frédéric Marmel, Barbara Tillmann

CNRS UMR 5020, Université Lyon, France

Background
For harmonic material, the priming paradigm has provided evidence that a prime context (a chord sequence) facilitates the processing of target chords that are tonally stable in the context key: related targets are processed more accurately and faster than unrelated and less related targets. For melodic material, numerous studies have shown context effects with completion judgments or memory tasks.

Aims
Our goal was to replicate musical priming with melodic material. Adapting the priming paradigm provides an implicit task for melody perception research and, since tonal information is conveyed less strongly by melodies than by chord sequences, observing a priming effect would highlight listeners' implicit musical knowledge. Additionally, the use of melodic stimuli will allow us to investigate the processing levels facilitated in musical priming. We here started with the influence of tonal relatedness on the processing of the target tone's pitch height.

Method
In short melodies, the last tone (the target) functioned as a stable or less stable tonal degree (mediant vs. leading tone; tonic vs. subdominant). To focus on cognitive priming, melodies were constructed in pairs and differed only in their first half, by either all tones or a single (sometimes repeated) tone. This difference changed the tonality of the melodies in a pair and the target's tonal degree. For the priming paradigm, participants' task on the target was based on discrimination of timbre (played by Timbre A or Timbre B?) or speed of amplitude modulation (played with slow or fast warbles?). To investigate pitch height perception, participants directly judged the extent to which the target tone was mistuned, or detected mistuning of varying degrees.

Results
Correct response times in the different priming tasks showed weak, but consistent, facilitated processing of related target tones over less related ones (at least for one of the response options). First results obtained with judgments related to mistuning indicated an influence of tonal function on pitch height processing (experiments in progress).

Conclusions
Our findings provided evidence that listeners perceive fine differences in tonal functions, even in melodies. Based on their tonal knowledge, listeners develop musical expectations that influence the processing speed of target tones.

Key words: Musical priming, Implicit knowledge, Pitch height processing


[email protected]

23.5 Priming by non-diatonic chords: The case of the Neapolitan chord

Nart Bedin Atalay1, Hasan Gürkan Tekman2, Petri Toiviainen3

1Middle East Technical University, Turkey
2Uludag University, Turkey
3Jyväskylä University, Finland

Background
Predictions of MUSACT (Bharucha, 1987) on the expectancy created by non-diatonic chords were investigated empirically. MUSACT, a neural network model of tonal schema, has been accurate in explaining effects in the perception of chords. In tonal music, some non-diatonic chords are used within diatonic progressions. For example, the Neapolitan chord (the chromatically lowered second degree) generally resolves to the dominant chord, the chord most distant from the Neapolitan chord on the circle of fifths. Expectation after a non-diatonic chord has been modelled by melodic anchoring (Bharucha, 1996) and melodic attraction (Lerdahl, 2001).

Aims
We hypothesized that the level of expectation for the dominant chord after the Neapolitan chord should be similar to or higher than the expectation for the dominant after a non-diatonic chord that is closer to the dominant on the circle of fifths than the Neapolitan chord is. In Experiment 1, we investigated expectation created by the Neapolitan chord with piano tones. In Experiment 2, possible effects of horizontal motion (melody) were controlled by using Shepard tones.

Method
The chord priming paradigm was utilized in this study. In Experiment 1 (N=32), participants listened to chord sequences and decided whether the target (the last chord) was consonant or dissonant. In Experiment 2, participants listened to chord sequences and decided whether the target was an in-tune or an out-of-tune chord. Data collection in this experiment is in progress.

Results
Experiment 1: Analysis of reaction times revealed that for consonant dominant targets, there was an advantage when the penultimate chord was six steps away on the circle of fifths (the Neapolitan chord) compared to three steps away. This advantage was reversed for consonant subdominant targets. Experiment 2: Analysis of reaction times with ANOVA revealed that responses to in-tune and out-of-tune targets changed significantly with respect to the target degree and the penultimate distance.

Conclusions
Experiment 1 demonstrated the expectation for the dominant chord after listening to the Neapolitan chord. In Experiment 2, the trend in the data suggests that the investigated expectations are not due to horizontal motion; however, due to the limited number of participants, it is too early to reach definite conclusions.

Key words: Chord priming, Non-diatonic chords, Tonal schema

[email protected]


23.6 Singing along with music to explore tonality

Eveline Heylen, Dirk Moelants, Marc Leman

Department of Musicology, Ghent University, Belgium

Background and Aims
Experimental studies of tonality provide deeper insight into listeners' sensitivity to tonal hierarchy in music. Until now, most investigational paradigms have consisted of probing and priming techniques, approaching the induction of tonality from the viewpoint of perception. These methods are linked to mental cognitive processing and make the psychological basis of tonal perception evident. Yet, less attention has been devoted to the action components involved in tonal perception. With the aim of broadening the approaches used to explore tonality in music, a new method based on the concept of embodied music cognition is presented.

Method
In an empirical behavioural study, 29 adults with diverse levels of training, background and musical experience were asked to sing along with 30 one-minute major (15) and minor (15) musical excerpts. These fragments were tonally homogeneous, carefully selected from pieces in various styles of classical and popular Western music, and considered to be representative of the tonal language of the piece. The experiment was implemented using Cakewalk Sonar 3 Producer Edition. Each subject performed the experiment individually and was verbally instructed to sing long, clear and stable tones suiting the content of the music heard. The task was also illustrated and practised in a test experiment.

Results
After decoding and analysis, the sung output revealed accurate tonality indications. By counting the durations of all tones sung, overarching tone profiles for major and minor keys were obtained. The major key profile reveals maximum sung output on the tonic, dominant and major third, respectively. The minor key profile reveals the highest sung results on the tonic, dominant and minor third. Both profiles show little or scarcely any sung output on the remaining diatonic and non-diatonic tones. Behavioural results were furthermore compared with two computational models and the diverse tone profiles one can find in the literature.

Conclusions
Since the above-mentioned results support the hypothesis that sensitivity to tonal induction might be based on corporeal imitation of tonal components in music, it is presumed that motor activities form an integrated part of music perception. The applied technique shows that listeners, as active contributors instead of passive registrators, can provide important tonality-related information.

Key words: Tonality

[email protected]

24 Symposium: Infants' experience and expression of musical rhythm

Convenor: Marcel Zentner

Because music cannot be dissociated from rhythm, it is not surprising that the processing of rhythm has been extensively studied. However, most studies have been carried out with adults; much less is known to date about the early developmental origins of humans' abilities to process rhythms. The aim of the present symposium is to bring together researchers whose current work focuses on infants' experience of rhythm.

The first presentation (by Sandra Trehub) provides an overview of what is currently known about infants' rhythm processing biases. A particularly fascinating finding of her own work is that, sometimes, infants are actually more proficient at detecting subtle changes in rhythm than adults. The following two presentations move from infants' perception of rhythm to infants' expressive reactions to rhythm. The second contribution (by Colwyn Trevarthen and his students) presents research in which spontaneous rhythmic expressions of infants between 2 and 10 months of age were examined in three conditions: when alone, when alone and hearing songs, and when playing with their mothers. Findings indicate that infants tend to produce more rhythmic expressions when hearing the songs (compared to silence). Findings relating to the rhythmicity of infants' vocalizations in mother-child interaction from three different countries (Crete, Japan, Scotland) are also presented. The third contribution (by Marcel Zentner) presents findings from a study in which spontaneous rhythmic expressions of forty infants between 6 and 16 months were examined in three experimental conditions: when exposed to musical excerpts; when exposed to the rhythmic beat of the same excerpts (stripped of the melody and harmony); and when listening to a prerecorded story. Based on two studies, a pilot study and a large-scale study designed to examine environmental determinants of infant musicality, the final presentation (by Mayumi Adachi) discusses the contribution of nature vs. nurture to infants' rhythmic expressions.

24.1 Infants' experience of rhythm: Contributions from nature and nurture

Sandra Trehub




Department of Psychology, University of Toronto, Canada

Infants exhibit a number of rhythm processing biases. For example, they detect subtle temporal changes more readily in patterns that provide a strong metric framework than in patterns that provide a weaker metric framework. They also detect subtle pitch changes more readily in the context of music in duple meter than in triple meter. These results are consistent with inherent advantages for the processing of temporal patterns that induce a metric framework. They are also consistent with natural preferences for binary hierarchical structures. One would expect infants' limited experience with music to put them at a disadvantage relative to more experienced listeners such as older children and adults. At times, however, there are perceptual advantages for musical novices. For example, 6-month-old infants detect some changes to foreign metric patterns that their adult counterparts fail to notice. In contrast to adults, whose implicit knowledge of music may lead to distortions and misrepresentations of foreign musical rhythms, 6-month-old infants' limited experience can generate culture-free, or more accurate, perceptions of the musical input. By 12 months of age, infants exhibit adult-like difficulties with these foreign metrical distinctions, which indicates that the process of musical enculturation is well under way. Nevertheless, infants and toddlers can overcome these metrical processing difficulties more easily than adults can, which implies that their metrical representations are less robust and more amenable to change than are those of adults. Infants' initial biases, along with their openness and ease of learning, facilitate the process of musical enculturation. Mothers play an important role in infants' and toddlers' musical enculturation by means of their frequent singing for purposes of soothing or play. For such infant-directed performances, mothers alter their usual manner of singing.
The resulting renditions highlight the rhythmic structure of the music and the singers' expressive intentions. Various features of maternal performances will be illustrated by samples of singing from North America and elsewhere.

Key words: Infants, Rhythm, Development

[email protected]

24.2 Investigating the rhythms and vocal expressions of infant musicality in Crete, Japan and Scotland

Katia Mazokopaki2, Niki Powers1, Colwyn Trevarthen1

1Department of Psychology, The University of Edinburgh, UK
2Department of Philosophy and Social Studies, University of Crete, Greece

Music is composed of rhythmic pulse and melodic harmonies, which carry moving and emotive narratives for a listener. We have investigated how infants respond to the rhythms of recorded music or their mothers' song with displays of pleasure and movements of their bodies, and we have looked for the emotional messages in the extended voice sounds infants make or hear in their mothers' talk and singing.

Mazokopaki recorded 15 infants at their homes in Crete between the ages of 2 and 10 months to observe their spontaneous rhythmic expressions in three conditions: when alone in a quiet room, when alone and hearing recorded traditional Greek songs, and when playing with their mothers. Micro-analysis of videos and spectrographic analysis of vocal expressions showed that infants


Wednesday, August 23rd 2006

could generate different kinds of graceful, simple or more complex rhythms, and they produced significantly more rhythmic “dancing” activity when hearing the songs. The mean duration of simple rhythmic sequences in infant expression is about 3 seconds, and the mean duration of more complex sequences is about 6 seconds. The rhythms with music were much more complicated, especially during the first 5 months. Infants listened attentively at the beginning of the song, looked for the source of the sound, expressed surprise and interest, and then participated by dancing and singing, showing increasing joy. They demonstrated appreciation of the narrative syntax of the song. In condition 3, the mothers' songs in free play were studied. Spectrographic analysis defined the phrases, marked the infant vocalizations and described the development of narrative in interaction. The singing excites the interest of the infant, who recognizes the melody, anticipates the changes and joins in with rhythmic vocalizations and body movements. The musicality of mother-infant interaction generates a dialogical form based on the rhythmic phrase unit of the mother's singing. The infant anticipates the mother's phrase, usually performing as it begins or at its end.

Powers made three studies to explore how vowel-like sounds express and regulate emotional expression between mothers and infants in the first year. In a comparison between two very different cultures, 6 English-speaking and 6 Japanese-speaking mother-infant dyads were filmed in their homes when the infants were aged 4, 6, 8, 10 and 12 months. Vowel sounds produced by mothers and infants, and instances of bodily contact, were analysed in defined emotional situations. Acoustic features of the vowel sounds (pitch, intensity and duration) were coordinated with bodily contact and correlated with specific emotional communicative contexts. Two additional studies were made in Scotland. Infants' reactions to changes in pitch variation in mothers' vowel sounds in specific emotional situations showed that they react in distinctive ways to emotional changes in their mothers' voices. In Study 3, 158 adults were asked to judge whether isolated infant vowel-like calls (which had previously been coded for emotional content) expressed distinct emotions, and whether they felt any emotional response to the sounds. Adults consistently identified emotional meanings in infants' early vowel sounds. Infants and mothers appear to use vowel sounds to express, perceive and regulate expressive interactions. Differences found between Japan and Scotland suggest that musical-emotional communication, though grounded on universal motive principles, develops in culturally specific ways from early in life. These studies illustrate the range of rhythms in innate human musical expression, from the pulse of whole body and limb movement to the breath sound of vocalisations modulated emotionally by movements of the mouth and tongue. They confirm the sympathetic response to these rhythms of movement from early life, and mothers' intuitive facilitation of this sympathy.

Key words: Infants, Rhythm, Expression

[email protected]

24.3 Do infants dance to music? A study of spontaneous rhythmic expressions in infancy

Marcel Zentner, Russell Alexandra

Department of Psychology, University of Geneva, Switzerland

Background
Informal observations suggest that infants sometimes appear to spontaneously attune their body movements to the rhythmic patterns in the music, a phenomenon resembling “dancing”. However, little research has been done to substantiate such anecdotal evidence.

Aims
The aim of the research was to examine the following questions: Do infants produce spontaneous rhythmic expressions when exposed to music? If so, how can these expressions be identified, and how can their rhythmicity be characterized? Do spontaneous rhythmic expressions follow a developmental progression from a more primitive to a more elaborate form during infancy?

Method
To answer these questions, forty infants between 6 and 16 months of age were exposed to three experimental conditions. In one condition, the music condition, the infant listened to two upbeat excerpts of classical music. In another condition, the rhythm-alone condition, the infant listened only to the beat of the same two excerpts. In a third condition, the control condition, the baby listened to a pre-recorded spoken story. In all conditions the infants sat on their mother's lap. Mothers wore headphones and were instructed not to interact with or otherwise influence the baby's behavior. The experiment was videotaped. The coding of the tapes consisted of two phases. In a first step, all sequences in which the infant repeated a movement 3 or more times were identified, by two independent coders, as sequences potentially containing rhythmic expressions. Subsequently, these sequences were coded for rhythmicity by four additional coders (professional musicians). At the time of submission, the first part of the coding is completed, while the second is under way.

Results and Discussion
Preliminary analyses indicate that the infants produce spontaneous rhythmic expressions more frequently in the music and rhythm-alone conditions than in the control condition. In addition, “dancing” tends to increase after 9 months. Presentation of the findings will be illustrated by videotaped samples of infants' dancing behavior.

No firm conclusions can be drawn at this stage. Preliminary evidence suggests that infants may be born with a biological predisposition for rhythmic expression.

Key words: Dancing, Rhythm, Infants

[email protected]

24.4 Japanese home environments and infants' spontaneous responses to music: Initial reports

Mayumi Adachi

Department of Psychology, Hokkaido University, Japan

Background
Adachi, Nakata, and Kotani (2002) observed two Japanese infant girls 12 months of age during their spontaneous play at home. The findings of this study led to two questions. First, the two girls, raised in completely different home musical environments, moved repetitively more often with than without music. It is not certain, however, that these two 12-month-old infants' behaviors truly signified an “innate” disposition to react to music rhythmically. Second, the girl whose parents moved her body to music regularly at home demonstrated a wider range of movement repertoires and a



more playful attitude toward ongoing music than the other girl, who had no such musical interaction with her parents. Were such qualitative differences derived only from the parental initiation of repetitive body movements to music, or from that along with the parents' own active involvement in music?
Aims
The current study has just been launched to clarify the two issues outlined above.
Method
Regarding the environmental issue, a large-scale survey is being conducted to identify the range of musical environments surrounding 0- to 3-year-old children in Japanese homes, and interviews are being conducted to identify primary caregivers' own attitudes toward music and their musical interaction with their infants. Among the families participating in the survey and interviews, the spontaneous behaviors of young infants 3-4 months of age are videotaped at home with and without music to examine the innateness issue.
Results
The survey, interviews, and field experiments with the 3- to 4-month-old infants are still in progress. A coding system for the young infants' videotaped behaviors is being developed.
Conclusions
No conclusions will be available for a while. After this initial observation, the 3- to 4-month-old infants will be assigned randomly either to an experimental group (whose parents are asked to sing Japanese children's folk songs while moving their infants' bodies at least once a week) or to a control group (no such treatment). These infants' spontaneous behaviors with and without music will be observed longitudinally at 7, 10, 13, 18, and 24 months. The current paper focuses on what music elicits in young infants' behaviors while musical interactions from parents are still limited.

Key words: Infants, Rhythm, Nature / nurture

[email protected]


25 Emotion II

25.1 How does music induce emotions in listeners? The AMUSE model

Patrik N. Juslin1, Daniel Västfjäll2

1Department of Psychology, Uppsala University, Sweden
2Department of Psychology, Göteborg University, Sweden

The most important problem in music psychology is to explain people's responses to music. Still, research on music and emotion has been neglected. This paper presents a new project, Appraisal in Music and Emotion (AMUSE), which aims to construct a model that combines various psychological mechanisms to explain and predict listeners' responses to music. The model features a detailed set of predictions regarding six psychological mechanisms through which music might induce emotions: (a) arousal potential, (b) evaluative conditioning, (c) emotional contagion, (d) mental imagery, (e) episodic memory, and (f) musical expectancy. The AMUSE model is developed and tested by means of an interplay between field studies (questionnaire studies, diary studies) that capture experiences of music as they spontaneously occur in everyday life and laboratory studies that test predictions experimentally. The model is discussed in terms of its implications for future research on musical emotion and the current debate on whether aesthetic activities might influence subjective well-being and health.

Key words: Music, Emotion, Theory

[email protected]

25.2 Cross-cultural approach to emotions in choir singing

Jukka Louhivuori

University of Jyväskylä, Finland

Background
Considering the popularity of choir singing and its social and emotional significance for people




around the world, researchers have not directed much attention to this topic. The message of the few existing studies is consistent: the main motivation to sing in a choir comes from social relationships with other choir members and from emotional experiences related to choral music, in this order.

Aims
The aim of the paper is to discuss the relationship between cultural and social aspects of choir singing and musical emotions. It is argued that in order to understand the nature of musical emotions in general, the relationship between emotions and the cultural and social context should be better understood.

Method
The research material consists of quantitative and qualitative data collected from choir singers in five countries (Finland, Estonia, Belgium, South Africa, Romania). The quantitative data was analyzed using statistical methods. After transcription of the audio files into text, the qualitative data was classified into categories with the help of a computer programme. The qualitative data was used to support interpretations based on the quantitative data, and to give a deeper understanding and a more detailed picture of the phenomenon.

Results
The profiles of the answers to the question about the strength of emotions in choir singing were almost identical among respondents from different cultural backgrounds. In the interviews, differences between informants from different cultural backgrounds in how willing they were to discuss their emotions became apparent. On the other hand, only small differences were found when physical reactions were measured (tears, shivers, etc.). The strength of emotions was related to the social context of the performance. The strongest emotions were experienced when the performance had a communicative function for the society, such as singing for an audience (concerts) or singing at more informal musical events (evening gatherings, parties, etc.).

Conclusions
The results support the hypothesis that the strength of emotions is related to the social context of the performance. Cultural background is reflected more in verbal descriptions of the singing experience than in actual physical reactions to emotions. It is suggested that theories of musical emotions should take the cultural and contextual dimensions of emotions more seriously into account.

Key words: Emotions, Cross-cultural music cognition, Choir singing

[email protected]

25.3 Effects of mode, consonance, and register in a picture- and word-evaluation affective priming task

Marco Costa, Daniela Campana, Pio Enrico Ricci Bitti

Department of Psychology, University of Bologna, Italy

Background
Sollberger, Reber, and Eckstein (2003), using an affective priming paradigm, demonstrated that the affective tone of musical chords influences the evaluation of target words. Affective tone



was manipulated using consonant and dissonant chords, and target words were pleasant or unpleasant. The results showed that negative targets were evaluated significantly faster if preceded by a dissonant rather than a consonant chord.
Aims
The affective priming paradigm was applied to verify whether: (a) major and minor chords presented in high and low register can significantly influence reaction times and response accuracy in a word-evaluation task with happy and sad words as targets; (b) major and minor chords presented in high and low register can significantly influence reaction times and accuracy in a picture-evaluation task with happy and sad pictures as targets; (c) consonant and dissonant chords presented in high and low register can significantly influence reaction times and accuracy in a picture-evaluation task with pleasant and unpleasant pictures as targets.
Method
The three hypotheses were tested in three different studies. The first one involved 70 participants, the second one 27, and the third one 29. Participants were university students without professional music training. The major and minor chords were major and minor C triads. The dissonant chords were composed of D-G#-D'. Register was manipulated by presenting the chords two octaves apart. Prime chords were presented for 800 ms, followed after 200 ms by the target stimulus, whose offset was determined by the subject's response. Pictures were selected from the International Affective Picture System (IAPS).
Results
In the word-evaluation task, a happy word was classified significantly faster and more accurately if it was preceded by a major chord or a high-register chord, whereas reaction times were significantly slower if it was preceded by a low-register chord. Happy words and pictures were classified faster than sad words and pictures.
In the picture-evaluation task, happy and pleasant pictures were evaluated faster if the prime was a high-register chord; sad and unpleasant pictures were evaluated faster if the prime was a low-register chord (congruent pairs). For incongruent pairs, reaction times were significantly slower. In the picture-evaluation task, the mode and consonance of the prime chords did not influence reaction times, whereas register did.

Conclusions
The affective content of musical chords, as it results from their mode, consonance, and register, significantly influences non-musical cognitive tasks such as picture- or word-evaluation tasks. This means that the affective properties of musical stimuli are shared with ongoing mental processes. These results can help explain the Mozart effect, i.e. the influence of musical stimuli on concomitant cognitive tasks. Mode, consonance, and register are not equally effective in influencing the evaluation task. As regards mode, major chords were more effective than minor chords (the Mozart effect is also found for major excerpts and not for minor ones). Register was more effective than mode. Since the picture-evaluation task implies more bottom-up processes than the word-evaluation task, it was less influenced by the affective content of the priming chords.

Key words: Affective priming, Emotion, Mode, register, consonance

[email protected]

25.4 Singing, music listening, and music lessons on children's sensitivity to emotional cues in speech

Takayuki Nakata, Sayuri Maruyama, Yukari Kihara



Department of Psychology, Nagasaki Junshin Catholic University, Japan

Background
Studies have shown that brief musical experience increases children's cognitive performance (Schellenberg, Nakata, Hunter, & Tamoto, in press; Nakata & Ueno, 2005) and that music lessons enhance children's sensitivity to emotion in speech (Thompson, Schellenberg, & Husain, 2004).
Aims
To examine (a) the effects of singing on children's sensitivity in identifying emotions conveyed in speech, and (b) the relations between musical activities outside kindergarten and children's sensitivity to emotions in speech.
Method
The participants were 43 5- to 6-year-old children (17 boys, 26 girls) from kindergarten classes. Children were randomly assigned to fast singing (n = 14), slow singing (n = 14), or control (n = 15) groups. Before the emotion judgment task (see Thompson et al., 2004), children in the two singing groups participated in pairs and sang two familiar songs for 12 minutes while experimenters provided keyboard accompaniment and sang along (60-72 bpm for fast singing and 30-38 bpm for slow singing). The control group did not sing prior to the emotion judgment task. All children were tested individually on the emotion judgment task, in which they were asked to identify emotions conveyed in semantically neutral utterances spoken with emotion (speech utterances) and in melodic analogues of the spoken sentences (tone sequences) by choosing between two options: happy/sad or afraid/angry. Their parents responded to a questionnaire about the children's musical activities.
Results
One-sample t-tests revealed that children in all groups performed above chance levels (50% correct) with speech utterances (85% correct or better).
With regard to tone sequences, while the control group's performance was at chance levels for both happy-sad and afraid-angry pairs, the fast singing group performed above chance levels with angry-afraid pairs (66% correct), t(13) = 2.94, p < .05, and the slow singing group's performance exceeded chance levels with happy-sad pairs (68% correct), t(13) = 3.98, p < .01.

Analyses of the questionnaires revealed that music-listening initiators performed significantly better on the emotion judgment task overall (85%) than non-initiators (70%), t(13) = 3.21, p < .05. Also, children who took music-related lessons performed better on the emotion judgment task overall (87%) than those who did not (74%), t(13) = 2.61, p < .05.
Conclusions
Our results from both the experimental and the correlational approach suggest that children's judgments about the emotional content of speech may be enhanced not only by long-term musical activities, but also by brief singing.

Key words: Singing, Music listening, Emotion

[email protected]

25.5 Induction of anxiety with music: Is it related to attentional and memory biases toward threatening images?

Roberto Nuevo, María Márquez, Ignacio Montorio, María Izal, Isabel Cabrera



Universidad Autónoma de Madrid, Spain

Background
Cognitive biases toward threatening information have been widely reported as a correlate of chronic anxiety and have been proposed as a key factor in the development and chronification of anxiety disorders. These biases have usually been studied in subjects with pathological levels of trait anxiety related to different topics (e.g., phobia of spiders). State anxiety is thought to be related to similar cognitive biases, but it remains understudied whether experimentally induced anxiety in controlled laboratory settings can generate cognitive biases.
Aims
Although several procedures to induce anxiety have been proposed, the goal of the present study is to analyze the effects of anxiety induction with music on the cognitive processing of threatening information.
Method
65 university students enrolled in advanced courses of psychology were randomly assigned to two conditions (music or neutral). Levels of trait anxiety did not differ between conditions. The music was a 5-minute piece from Ligeti's Requiem. After the induction, individuals completed two attentional tasks on a laptop and two memory tasks (recognition and free recall). These tasks involved the processing of images (taken from the International Affective Picture System) with different affective valences: positive, neutral, and low-, medium- and highly threatening.
Results
Statistically higher anxiety scores were found in the music condition in both self-report and behavioural post-measures, pointing to a clear increase in state anxiety from listening to Ligeti's Requiem.
Likewise, preliminary analyses point to attentional and memory biases toward highly threatening images in the music condition relative to the control condition, although these data are still being analyzed and definitive results are not yet available.
Conclusions
The results of this work suggest, on the one hand, that music can be an effective method to experimentally manipulate the level of anxiety and, on the other hand, and more interestingly, that the type of anxiety induced by music is associated with biases in attention and memory, facilitating the processing of very threatening information.

Key words: Cognitive biases, State-anxiety, Emotional effects of music

[email protected]

25.6 Decoding emotion in music and speech: A developmental perspective

E. Glenn Schellenberg, William Forde Thompson

University of Toronto, Canada

Background
In some instances, young children's musical knowledge seems relatively sophisticated. In other instances, their knowledge seems immature or completely absent. These apparent discrepancies are



likely to be a consequence of the particular experimental methods, and of whether one asks listeners to make emotional or cognitive judgments.
Aims
The goals were to chart the development of sensitivity to emotional cues in speech and music, and to compare such development with age-related changes in mental representations of structural aspects of familiar songs.
Method
Musically untrained adults and children 4, 5, and 8 years of age (ns = 40) were tested on eight different tasks. Four tasks involved judgments of emotion conveyed by music or by the musical aspects of speech. One task required listeners to identify whether emotionally unambiguous instrumental music sounded happy or sad. Two other tasks asked whether semantically neutral speech with unambiguous prosody sounded happy or sad, or angry or afraid. A fourth task required listeners to decode the prosody of speech with conflicting semantic cues (e.g., “all the kids at school make fun of me” spoken in a happy tone of voice).

In four other tasks, listeners were asked to judge whether a familiar melody (“Twinkle Twinkle” or “Old MacDonald”) was performed correctly or incorrectly (i.e., with one tone mistuned by a semitone), or whether the melodies were harmonized correctly or incorrectly (e.g., the leading tone harmonized with a major III chord rather than V).
Results
Eight-year-olds performed at adult levels (over 90% correct) on each of the four emotion tasks. The performance of the younger children was impacted negatively by the presence of semantically neutral words, and even more so by the presence of conflicting semantics. The younger children also found angry-afraid comparisons more difficult than happy-sad comparisons.

On the harmony tasks, 4-year-olds performed at chance levels, and even 8-year-olds were below 70% correct. Age-related improvements on the melody tasks were more pronounced, but not nearly as rapid as the improvements on the emotion tasks.
Conclusions
Development is relatively rapid for culture-general processes, such as decoding the emotional valence of temporal cues. Sensitivity to culture-specific musical cues develops at a slower rate.

Key words: Emotion, Development, Harmony

[email protected]


26 Musical Style

26.1 A timed Blindfold Test: Identifying the jazz masters in short time spans

Caroline Davis

Northwestern University, USA

Background
One of the most highly valued skills of a jazz musician is the development of aural recognition and identification. From Leonard Feather's “Blindfold Test” in Downbeat, we know that professional jazz musicians can identify performers on unknown recordings. Previous studies have also concluded that humans possess the ability to identify familiar voices in under half a second and popular songs in under 200 ms (Lavner et al., 2000; Schellenberg et al., 1999). Moreover, musicians are adept at recognizing the differences between jazz saxophonists in samples of 2 to 3 seconds (Benadon, 2003).
Aims
This study investigates the amount of time required to identify musicians in recordings and the possibility that particular musicians may be easier to identify than others.
Method
Twelve jazz musicians were required to identify the primary soloist in 1-, 2-, 5-, and 10-second recordings. In each time span category, listeners identified 10 jazz performers by freely typing their response. The performers were chosen to represent the “masters” of jazz, based on a survey and reputable jazz texts.
Results
Identification success increased as time span increased; however, particular musicians such as Charlie Parker and Louis Armstrong were successfully identified by all participants within 2 seconds. Saxophonists, trumpeters, and trombonists were more successfully identified than rhythm section players, with a difference of 34.3%. Listener experience did not correlate with identification success. Identification success also depended on the instrument of the listener; for example, brass participants identified brass performers more successfully.
Conclusions
The results echo previous findings where participants were required to identify timbre, song, or




voice from multiple-choice lists. In the present study, jazz performers were highly recognizable from the recording alone, in as little as 1 second. However, this identification skill is not solidified for particular musicians, especially rhythm section players. This finding may imply different strategies for identifying musicians. For instance, Charlie Parker may be identified by timbre alone, since the majority of participants identified him within 1 second. Elvin Jones, however, may be identified by some other factor, given that participants were more successful in the 10-second time span. Further study in the realm of feature extraction and attention is warranted.

Key words: Jazz, Timed identification task, Aural recognition

[email protected]

26.2 Parameters distinguishing baroque and romantic performance

Dorottya Fabian, Emery Schubert

School of Music and Music Education, The University of New South Wales, Australia

This paper examines the relationship between listeners' perceptions of performance parameters and aesthetic variables in a composition by Bach and Brahms. The work expands on the authors' theory that there exists a distinct difference in the meaning of "expressiveness" which is dependent on the way the music is interpreted: historically informed performance (HIP) versus mainstream (MS).

96 participants rated four performance parameters: articulation, legato, flexibility of playing and intensity of playing. They also rated five aesthetic parameters: baroque expressiveness, romantic expressiveness, historically informed, stylishness and preference. The nine parameters were rated on 9-point unipolar scales, with 1 indicating absence of the parameter and 9 indicating the presence of the parameter.

It was hypothesised that the HIP performances would be rated as more baroque expressive, whereas the MS performances would be rated as more romantic expressive. If this hypothesis was supported, we wanted to further investigate whether any of the other parameters distinguished between these two categories. Finally, we investigated whether higher music skill levels, particularly in baroque music, produced greater sensitivity and contrast in these changes.

The participants were divided into four equal groups of 24, using questionnaire data, and by recruiting novice listeners from a music fundamentals class, student listeners and experienced listeners from undergraduate music degree programs, and professional players specialising in baroque performance.

Results supported a distinction between HIP and mainstream performance; however, these differences were shown most clearly by the professional group. Novices made no statistically clear distinctions for any of the parameters, except romantic expressiveness. Overall, articulation (HIP more articulated) and legato (higher for MS) were consistent with expectations. However, only professionals had a clear preference for HIP, whereas the other three groups did not have a strong preference for one performance style over another.

This study has implications for our understanding of expressiveness in performance, and for our belief in what people of different skill levels are and are not able to evaluate and perceive.

Key words: Expressiveness, Baroque and romantic, Expert listener


Wednesday, August 23rd 2006

[email protected]

26.3 Style processing: An empirical research on Corelli’s style

Mariateresa Storino, Mario Baroni

University of Bologna, Department of Musicology, Italy

Background
Musical style processing is one of the least explored fields in the psychology of music owing to a series of difficulties related to the complex nature of the phenomenon called "style". The main problem resides in the selection of musical examples which can satisfy the requirements of experimental studies without denying the nature of music as an aesthetic object.

Aims
The aim of this research is to investigate musical style processing by means of experimental procedures already applied in recent studies (Bigand & Barrouillet, 1996; Storino, 2003; Storino, Baroni & Dalmonte, in press). The main questions are: which style processing procedures do musicians adopt? Are they different from the procedures of non-musicians? What role do the training phase and pre-existent knowledge play in the recognition of a style? The music examples are short pieces by Corelli and other Italian Baroque composers who share a part of the same background; the pieces are drawn from their collections of Sonatas for violin and basso continuo.

Method
The experiment involved 33 musicians and 34 non-musicians, who were asked, in a listening task, to distinguish the pieces written by Corelli from those composed by the other Baroque composers. A training phase, in which the participants listened to examples by Corelli, preceded the listening task; the theory of implicit learning was the theoretical support. At the end of the recognition task, the participants had to fill in a form providing details of their musical expertise and their degree of liking of various musical styles.

Results
Musicians and non-musicians seem to process the identification of styles in rather different ways, both in the role assigned to aesthetic responses and the importance given to pre-existent stylistic knowledge.

Conclusions
The final results of the research are presented in the complete paper: a comparison of the responses was made between the two groups (musicians and non-musicians). Our aim was thus not only to determine the ways of recognizing a style, but also to improve our understanding of the differences between analysing a style and listening to it.

Key words: Style, Analysis, Aesthetic response

[email protected]


26.4 An exploration of musical style from human and connectionist perspectives

Giuseppe Buzzanca1, Mario Baroni2

1State Conservatory of Music, Bari, Italy
2Department of Music, University of Bologna, Bologna, Italy

The present work describes two different approaches to musical style recognition. The first one includes a behavioral experiment in which human listeners (subdivided into musicians and non-musicians) are asked to recognize musical style. The main purpose of this approach is to explore the cognitive processes involved in musical style categorization and the influence of prior musical knowledge in style perception.

The second approach simulates stylistic recognition through a back-propagation neural network model, considering recognition as a supervised learning task. The learner (i.e. the model) is given examples of music in the specified style, together with non-examples, thus learning to distinguish between examples and non-examples. However, since musical style recognition is different from traditional supervised learning tasks (due to the free-form nature of the input and the depth of structure), considerable care is required in the design of the model. We have chosen back-propagation neural networks for our approach for a number of reasons: they have an excellent track record in complex recognition tasks and are capable of inducing the hidden features of a domain. Our model accounts for various features that are quite likely to be important in the process of musical style recognition in humans: these features can roughly be described as accounting for musical input of varying length in an even manner; modelling the hierarchical nature of musical recognition; and capturing the importance of directional flow in music. Our model has been trained on a corpus comprising the same compositions as the behavioral experiment (i.e. arie by Legrenzi, by several other coeval composers: Rossi, Stradella, Gabrielli, A. Scarlatti, and by the computer program LEGRE). The model reaches a pretty high classification accuracy (varying between 66.7 and 100%, depending on the composers considered).

The study of the two approaches shows similarities (the ability to correctly recognize a style depends on the global exposure to that style both in human listeners and the neural model) but also puzzling discrepancies (for example in the case of Legrenzi vs LEGRE, where the accuracy of the model, 100%, is well beyond the humans') that the present work tries to assess.

Key words: Style perception, Neural networks, XVII century cantatas

[email protected]


Perception I 27

27.1 Infants' perception of timbre in music

Eugenia Costa-Giomi1, Les Cohen2

1Center for Music Learning, University of Texas at Austin, USA
2Children's Developmental Lab, University of Texas at Austin, USA

Studies on word recognition have shown that, for infants, timbre is a salient feature in speech. For example, 10.5-month-olds can recognize words said by talkers of different sex but 7.5-month-olds cannot; the younger infants can only recognize the words if said by talkers of the same sex. Surprisingly, more is known about infants' long-term retention of timbre features of a melody than their short-term retention of the same features. Trainor et al. (2004) familiarized 6-month-olds with a melody played by a specific musical instrument over a period of a week and tested the infants in two conditions on the 8th day. The main difference between conditions was the timbre of the stimuli used. Although there was no direct assessment of infants' ability to discriminate the melodies on the basis of timbre, the comparison of the results for the two conditions suggests that infants can remember timbral information.

The first of two experiments conducted with 7-month-olds using the habituation-novelty preference procedure established that infants can discriminate timbre in music. The infants (n=16) were habituated to a melody played with a synthetic timbre of a marimba, violin, cello, oboe, or trumpet during 16 trials whose duration was infant controlled. Upon habituating, infants were presented with three test trials: (1) the familiar melody played by the familiar instrument; (2) the familiar melody played by a novel instrument; and (3) the novel melody played by a novel instrument. Boys and girls were tested with each melody-instrument combination.

An analysis of variance showed significantly longer looks during the familiar-melody/novel-instrument stimulus than during the familiar-melody/familiar-instrument combination, clearly indicating that infants can discriminate timbre at 7 months. In agreement with the novelty preference paradigm, infants in both groups looked longest during the unfamiliar-melody/unfamiliar-timbre stimulus. These results were consistent for boys and girls.

The second experiment showed infants' difficulty in categorizing melodies presented with different timbres. 7-month-olds (n=16) were habituated to a melody played successively with different instruments up to 16 times. Upon habituating, they were presented with the same test trials used in Experiment 1. Results showed marginal differences in looking time among test trials, indicating that, for 7-month-olds, timbre is a salient feature in music affecting their categorization of melodies. Further studies will establish at which age infants can disregard timbral differences in order to recognize melodies played by different instruments.

Key words: Timbre perception, Infant music perception, Melodic categorization

[email protected]

27.2 From sound to symbol: Connecting perceptual knowledge and conceptual understanding in music theory

Micheal Houlahan, Philip Tacka

Millersville University of Pennsylvania, USA

Background
Generally speaking, music instructors believe that teaching music notation is an efficient path to developing musicianship skills. But if the symbol system is taught in isolation from perception and cognition, then a disjunction between perceptual knowledge and conceptual understanding in music will occur and may have an adverse effect on students' understanding of music.

Aims
A goal of this interdisciplinary research is to document the effects of a professional development in-service model on teachers' perceptual knowledge and conceptual understanding of music fundamentals. The main focus of our presentation will be on exploring how expert musicians (music teachers) connect their own perceptual knowledge and conceptual understanding of music theory, and how their knowledge of music theory, acquired through either of these approaches, affects their own teaching of music theory.

Method
We will draw on a wide range of our own recent experimental and quasi-experimental research. The population for this study includes 100 expert musicians from the United States. The methods of data collection for this study include (a) pre- and post-tests of vocal and written musicianship skills; (b) focus group interviews; (c) video-graphic analysis of individual interviews with expert musicians; (d) surveys; and (e) biographical narratives. This research is currently funded by a $1.3 million U.S. Department of Education grant.

Results
Results of this research provide the necessary information to sketch a model for teaching music theory that connects conceptual and perceptual understanding of music. This learning theory is broken into a tonal content learning sequence, which includes a tonal pattern learning sequence; a rhythmic content learning sequence, which includes a rhythm pattern learning sequence; and a skill development sequence.

Conclusions
The proposed model of music learning permits us to come closer to understanding some of the complexities associated with the acquisition of music literacy skills, as well as to assess students' levels of perceptual knowledge and conceptual understanding. Our music learning theory answers the question as to how students gain musical knowledge, understanding, comprehension and mastery of the fundamentals of music theory.

Key words: Music education, Music theory pedagogy


[email protected]

27.3 Notational audiation is the perception of the “mind’s voice”

Warren Brodsky1, Avishai Henik1, Bat-sheva Rubinstein2, Jane Ginsborg3, Yoav Kessler1

1Ben-Gurion University of the Negev, Beer-Sheva, Israel
2Buchmann-Mehta School of Music, Tel Aviv University, Tel Aviv, Israel
3Royal Northern College of Music, Manchester, England

Background
Highly trained musicians not only play an instrument competently, but have been found to develop efficient skills related to music notation. For example, they can remember visually presented sequences of notes via musical imagery, and graphically represent in notation the pitches they imagine. Yet, the nature of this skill and its encoding processes are not fully known, and while the experience seems to be one involving auditory processes of the inner ear (i.e., hearing in the "mind's ear"), these mental representations may actually be much more reliant on subvocal kinesthetic motor memory (i.e., covert rehearsal of the "mind's voice").

Aims
In a previous study, Brodsky et al. (2003) demonstrated that phonatory interference rather than rhythmic distraction or auditory input obstructs notational audiation. Accordingly, it was suggested that score reading involves kinesthetic-like phonatory processes. The current study attempted to further explore this notion.

Method
The current study employed Brodsky's embedded melody paradigm, but with added concurrent surface audio/EMG monitoring of the vocal folds.

Results
The results replicate previous findings of phonatory interference causing impairment to imagery cued by music notation. Moreover, though significant differences in audio output were found for score reading while singing aloud versus silent reading, no differences in musculoskeletal vocal fold activity were seen.

Conclusions
The results seem to imply the importance of vocal learning and phonological rehearsal in developing skills related to music notation reading. In addition, they offer further support to the proposition that notational audiation is linked to the "mind's voice."

References
Brodsky, W., Henik, A., Rubinstein, B. and Zorman, M. (2003). Auditory imagery from musical notation in expert musicians. Perception & Psychophysics, 65, 602-612.

Key words: Music reading, Musical imagery, Phonatory processes

[email protected]


27.4 A criterion-related validity test of selected indicators of musical sophistication using expert ratings

Joy Ollen

School of Music, Ohio State University, USA

Background
Music researchers regularly test the hypothesis that participants will respond differently based upon their levels of musical sophistication. Circumstances may not permit any extensive pre-testing of participants or demonstrations of their performing, composing or listening abilities. Instead, researchers often select or group participants according to their answers to simple survey-type questions related to their musical background. Results from a survey of 743 music research studies and experiments showed that the two most frequently used indicators asked about formal musical training (e.g., years of music lessons) and current group membership (e.g., music versus non-music major). However, this author found instances of 38 different types of indicators that were used singly or in various combinations to produce 173 unique operational definitions of musical sophistication. A number of these indicators have been criticized as being inadequate; yet, to date, there has not been a study conducted to test the validity of the various indicators.

Aims
The aim of the current project was to conduct a criterion-related validity test of selected indicators of musical sophistication in order to determine which indicators, if any, correlate with experts' subjective ratings, used as the criterion variable. A model will be developed and tested that will classify participants according to their level of sophistication.

Method
A 36-item questionnaire was administered in three countries to over 600 participants who ranged from being musically naïve to highly experienced professional musicians, and who belonged to various types of groups involved in music-related behaviours (e.g., university music courses for non-majors, amateur choirs, professional orchestra).

Results
Final results are not yet available as data collection is ongoing. Preliminary results suggest, among other things, that the most-used indicator of musical sophistication, formal musical training, does not enjoy the strongest correlation with experts' ratings.

Conclusions
The project will reveal those indicators that correlated most strongly with expert ratings. A model, composed of one or more indicator questions that will be able to predict a person's sophistication category membership, will be introduced.

Key words: Musical sophistication, Indicators, Testing and measurement

[email protected]


Rhythm IV 28

28.1 The sonic illusion of metrical consistency in recent minimalist composition

Michael Buchler

Florida State University, USA

Background
Meter is often thought of as a fixed property that readily withstands the challenges of unusual surface rhythmic patterns. Lerdahl and Jackendoff (1983) certainly maintain this viewpoint with their sharp distinction between meter and grouping. As Palmer and Krumhansl (1990) have shown, Lerdahl's and Jackendoff's theoretical constructs apply best to music that avoids use of polyrhythms. More recently, Hasty (1997) and Horlacher (1992, 2001-2) have made the case for a far more causal relationship between those two musical facets. In the realm of minimalist music, where literal or varied motivic repetition frequently projects metrical (or at least meter-like) structures and where musical textures can be rather dense, containing several possible metrical layers, Hasty's and Horlacher's flexible approaches seem more apt.

Aims
This study will explore the elasticity of our metrical perceptions in recent minimalism, demonstrating instances where motives repeat erratically yet sound so consistent that we hear metrical effects. We will also discuss situations where regularly recurring motives are heard as irregular against the backdrop of a complex polyrhythmic texture.

Main Contribution
The earliest minimalist compositions featured lengthy passages where a single motivic ostinato repeated again and again. The motive itself defined the meter. In more recent "minimalist" compositions by John Adams, Michael Torke, and Steve Reich, repetition is still prevalent, but it is not always entirely consistent. Notes are omitted or inserted and the ostinati frequently take a freer shape. Though inconsistency may be a defining practice, the predominant musical effect is still one of consistent repetition. Especially in thick textures, we tend either not to notice or else to forgive slight imperfections. This paper will show some illusory examples of metrical consistency and inconsistency and will posit a general model for understanding coherence in this popular and rapidly expanding repertoire.


Implications
Minimalism has become one of the most influential musical styles over the past twenty years, even as its hallmark repetitions have become less pervasive. Theorists are quite good at modeling musical consistency, but irregular variations in a metrical fabric call for more phenomenologically based methodologies.

Key words: Meter, Minimalism, John Adams

[email protected]

28.2 Keeping the tempo and perceiving the beat

Sofia Dahl1, Guy Madison2

1School of Music, Ohio State University, Columbus, USA
2Department of Psychology, Umeå Universitet, Umeå, Sweden

Background
The perception of tempo and time has sparked a number of theories, all of which assume that the default operation of the system yields an isochronous tempo, as might be inferred from the metaphor of the internal clock. However, it is quite possible that series of isochronous intervals do not have a special status in our neural system. For example, Madison (2001) showed that a small amount of drift is ubiquitous in the production of interval sequences. Consequently, the perception of tempo might be biased such that, for example, a small continuous tempo increase is perceived as constant or even decreasing tempo. Indeed, Dahl and Granqvist (2003) reported consistent differences in such bias among individuals, who were asked to judge whether sequences with varying amounts of drift increased or decreased in tempo.

These findings raise the question whether drift in production is related to a perceptual bias, given numerous commonalities between perception and performance in general. Since several timing models embrace the notion that absolute tempo is important, as reflected in terms like "indifference tempo", "attractor tempo", and "preferred tempo", we assessed this question across a range of tempi.

Aims
To determine if and how unintentional tempo drift in the production of interval sequences is related to the sensitivity and perceptual bias for tempo drift.

Method
Participants produced 10 sequences for each of the intervals 300, 500, and 800 ms, and the drift was estimated in several ways. The same participants' perceptual threshold for continuous tempo drift was estimated by means of an adaptive psychophysical procedure.

Results
Preliminary results show the perceptual threshold estimates to correspond well with the values of drift in the production data, both being in the range 0.25-0.30%. Results from correlation analysis of drift from participants' perception and production data are under way.

Conclusions
The similarity in magnitude across perception and production suggests that they may be closely linked also for continuous tempo drift. Conclusions from the analysis of the bias and its correlation with produced drift will be reported.


Key words: Tempo drift, Perception, Production

[email protected]

28.3 Captured by music, less by speech

Simone Dalla Bella, Anita Bialunska, Jakub Sowinski

Department of Cognitive Psychology, University of Finance and Management in Warsaw, Poland

Rhythmical auditory stimuli (e.g. a metronome) tend to attract movement more than visual stimuli (e.g. Repp & Penel, 2004). Furthermore, it is a common observation that people often move in synchrony with musical beats, whereas synchronization of movement with speech accents is rare. Hence, there may be differences between domains within the auditory modality. In one experiment we investigated the possibility that movement is more strongly attracted by music than by speech, asking 33 nonmusicians to tap their hand in synchrony with an isochronous auditory Target sequence (i.e. tones with 600 ms IOI) while a Distractor sequence was presented, namely music or speech. Musical distractors were 3 excerpts from highly familiar musical pieces (i.e. Circus music, Sleighride, Bee Gees' Stayin' Alive). Speech distractors were 3 well-known excerpts from Polish children's poetry read by an actor instructed to synchronize speech accents with an external metronome (IOI = 600 ms). Distractors were presented at one of various phase relationships with respect to the target. Analysis of asynchronies and their variability showed that musical distractors attracted movement more strongly than speech distractors. This attraction was more evident when distractors preceded the target tones than when they followed them. Further experiments were performed in order to assess the effect of potentially confounding variables which may account for the differences between music and speech. Musical and speech distractors were equalized with respect to their average pitch (to control for streaming effects) and temporal variability. In addition, non-musical target sounds (i.e. noise bursts) were used instead of repeating tones. Differences between speech and music in attracting participants' taps were attenuated by these manipulations, but still significant. In sum, these findings converge in indicating that musical rhythms attract movement more than the stress structure of speech. This is consistent with the idea that music, because of the complexity and regularity of its metrical structure, is particularly well-suited for capturing our attention and favoring spontaneous motor entrainment.

Key words: Synchronization, Tapping, Music and speech

[email protected]

28.4 The counterpart of time-shrinking in playing regular-sounding triplets of tones on the alto recorder

Roel Boon1, Gert ten Hoopen1, Takayuki Sasaki2, Yoshitaka Nakajima3

1Leiden University, Leiden, The Netherlands
2Miyagi Gakuin Women's University, Sendai, Japan
3Kyushu University, Fukuoka, Japan

Background
At the First ICMPC, Kyoto, 1989, we reported an illusion of auditory time perception which we called "time-shrinking" (TS). The duration of an empty time interval (marked by short sounds) can be hugely underestimated if preceded by a shorter time interval. In the pattern |120|200| ms, for example, the second interval of 200 ms is underestimated by about 50-60 ms. We argued that TS is a process of unilateral temporal assimilation: the second interval assimilates to the first one. Recently we showed that the first interval could also assimilate to the second one, but to a lesser degree. As a consequence one might expect that both intervals appear more similar. In another study we showed this to be the case: the patterns |170|150|, |160|160|, |150|170|, |140|180|, |130|190|, and |120|200| ms were perceived isochronously. We called this categorical 1:1 ratio perception.

Aims
There are several clues in the literature that musicians who are required to play two eighth- or sixteenth-notes, followed by a quarter-note, tend to shorten the first note and to lengthen the second note. We suspect that this tendency is the performance counterpart of TS, and we carried out two experiments to test this.

Method
In Experiment 1, musically experienced participants played a series of triplets of tones on an alto recorder with the instruction that each triplet should be produced as isochronously as possible. The performance of the series of triplets was either preceded or accompanied by a metronome beat at three speeds. In Experiment 2, the participants played the triplets right after start signals randomly distributed in time, without a concurrent metronome beat.

Results
Both experiments together demonstrated that the second interval (T2) of the triplets had a significantly longer duration than the first one (T1), mainly at fast tempi. In the fast condition, the (T2-T1) values ranged between about -10 and 60 ms. This range of T2 lengthening seems to mirror the T2 underestimations in perception.

Conclusions
The lengthening of T2 is not caused by adjusting for deviations from the metronome beat, but reflects the fact that the musician uses the range of the 1:1 category to produce isochronously sounding triplets.

Key words: Time-shrinking, Categorical perception, Playing isochronously

[email protected]


Education IV 29

29.1 Assessment criteria of composition: A student perspective

Goran Folkestad

Lund University, Sweden

Background
Earlier studies on musical creativity and composition reveal that there might be a difference between the criteria by which children, students, music teachers and composers assess compositions.

Aims
The study reported in this paper is part of an overarching project investigating the assessment criteria of composition. The project aims at describing and comparing the criteria of composition of professional composers, children/students, and teachers respectively. Besides giving a description of the assessment criteria of each group, an overall aim of the project is to find out whether composition carried out by professional composers outside school and school composition respectively are assessed in the same way or whether, as shown in the descriptions of the assessment criteria of the different groups, composition has to be regarded as a specific school phenomenon when carried out within a school context.

Method
The method was as follows: a sample of nine compositions from the attached CD recording of Folkestad's (1996) three-year study of 15-16 year old adolescents' creative music making was selected by a professor in composition at a University School of Music. The criterion of this strategic sampling was that it would cover a wide range of musical styles, instrumentation, tempi, rhythms, et cetera. A questionnaire was constructed based on the question scheme of the semi-structured interviews of the composers' study; besides open fields for each song, where free comments on the compositions were written down along with a note of whether they thought the composition was made by a boy or a girl, questions on sex, age, and earlier experiences of playing an instrument and of composition were included. This complementary information was put on a separate page at the end of the questionnaire, and filled in after the listening/assessment activity was completed.

Results and Conclusions
In this paper the results of the children/students investigation will be presented and discussed in order to highlight the importance of acknowledging the students' previous experiences of music and their acquired criteria of composition when conducting music composition as an activity to be assessed within the school context.

Key words: Assessment, Composition, Music education

[email protected]

29.2 Dramatising the score. An action research investigation of the use of Mozart's Magic Flute as performance guide for his clarinet Concerto

Oscar Odena1,2, Leticia Cabrera3

1Escola Superior de Musica de Catalunya, Barcelona, Spain
2University of Barcelona, Spain
3Trinity College, London, UK

Background
Music contests and auditions are stressful situations that can have a negative impact on the performance of young musicians. Strategies to overcome performance anxiety are discussed between teachers and students, but it is often left to the students to "experiment" with the strategies in their own time. In addition, the emphasis on the technicalities of the score during study time and in later performances can disrupt the musical communication between performer and audience (see for example Parncutt & McPherson, 2001).

Aims
With this investigation we tried to go beyond the music score, looking for alternative sources that could help in the performance of a piece.

Method
In order to do this we worked with Mozart's Clarinet Concerto, a frequent choice in auditions, with the assistance of five advanced clarinet students from the Escola Superior de Musica de Catalunya, in Barcelona, Spain. Using the research techniques of an action research project, as explained by Bell (1999), we created a study method that helped to better understand the music score. We talked to three experienced performers who acted as key informants prior to the research design. Over a period of two months participants studied the Concerto, associating the characters of Mozart's opera The Magic Flute with the different passages of the Concerto. Individual and group study sessions were video recorded. The participants' development and the implementation of the method were continuously assessed by the researchers. Students also filled in an initial evaluation questionnaire and were interviewed at the end of the project.

Results
The resulting study method helped the students to better understand the Concerto, seeing the music like a large theatre play where the characters interact telling a story and, in doing so, giving a greater meaning to what they try to communicate. In the final session all the students played the Concerto's first movement together. They wore costumes inspired by the characters of The Magic Flute and the Concerto was transformed into a "Magic Clarinet" opera. In this paper we discuss issues arising from data from the research diary, and the students' initial and final assessments.


Wednesday, August 23rd 2006 239

Conclusions
In the conclusions we refer to several psychological and educational theories (Gardner, 1995; Serafine, 2001; Odena, 2006). It is suggested that this method can be adapted to performance students at all levels and has the potential to benefit them all.

Key words: Performance studies, Dramatised performance, Conservatory music education

[email protected]

29.3 Machine arrangement in a modern Jazz-style for a given melody

Norio Emura1, Masanobu Miura2, Masuzo Yanagida1

1Department of Knowledge Engineering, Doshisha University, Japan
2Department of Media Informatics, Ryukoku University, Japan

Background
There are many systems on the market for the automatic arrangement of musical pieces, given as notes for solo piano, into a score in a modern Jazz style. Those systems, however, are usually designed to generate output by connecting existing arrangement patterns, so we cannot expect them to meet user requirements. This paper proposes a system that yields arranged outputs based on a theory of harmony, the so-called "Jazz theory". Since computers can now analyze popular music according to this theory, we can expect a powerful and flexible system that arranges given melodies according to Jazz theory.
Aims
To develop a prototype system that arranges notes for solo piano into a piano score in a modern Jazz style.
Method
We constructed a system that arranges notes for solo piano into modern Jazz-style piano, employing the "voicing" described in the Jazz theory established in the 1950s. Voicing means chord assignment, including note allocation. Although there are broad variations in allocating notes at voicing, note allocation is carried out using tension notes and considering the available note scale.
Results
The performance of the system was evaluated by comparing results given by the system with those given by popular arrangement systems on the market. Experimental results show that arrangement by the proposed system is significantly better than that by arrangement systems on the market. From these results, the proposed system is more useful than the systems currently available.
Conclusions
The proposed system deals with only some voicing techniques at present. Moreover, it cannot deal with excerpts containing modulation or extra-chord patterns, such as "Upper Structure Triads", "Super-imposed Chords", and so forth. These points will be improved in the near future.
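The voicing step described above can be illustrated with a toy sketch. The chord-tone table, tension list, and `voice_chord` helper below are hypothetical simplifications for illustration only; they are not the rules of the proposed system.

```python
# Toy illustration of voicing: mapping a chord symbol to a note allocation
# that includes tension notes. The chord dictionary and tension choices are
# simplified assumptions, not the actual rules of the system in the abstract.

# Semitone offsets from the root for chord tones and common tensions.
CHORD_TONES = {
    "maj7": [0, 4, 7, 11],
    "m7":   [0, 3, 7, 10],
    "7":    [0, 4, 7, 10],
}
TENSIONS = {
    "maj7": [14, 21],  # 9th, 13th
    "m7":   [14, 17],  # 9th, 11th
    "7":    [14, 21],  # 9th, 13th
}

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def voice_chord(root, quality, n_tensions=1):
    """Return note names for the chord tones plus the first n tension notes."""
    root_pc = NOTE_NAMES.index(root)
    offsets = CHORD_TONES[quality] + TENSIONS[quality][:n_tensions]
    return [NOTE_NAMES[(root_pc + o) % 12] for o in offsets]

print(voice_chord("C", "maj7"))  # chord tones C E G B plus the 9th (D)
```

A real system would additionally filter the allocation against the available note scale of the current key, which this sketch omits.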

Key words: Automatic arrangement, Modern jazz-style, Voicing

[email protected]


240 Education IV

29.4 Technological instruments for music learning

Lorenzo Tempesti2, Sergio Canazza1, Roberto Calabretto1

1University of Udine, Laboratory Mirage, Italy
2University of Udine, Italy

Background
Information technology (IT) is more and more present in our everyday life as well as in cultural life. As far as learning music is concerned, studies about the possibilities offered by IT are not extensive. Furthermore, they are often based on research carried out with methods that are not very scientific and conducted by non-experts in experimental methods, i.e. music teachers.
Aims
We planned and carried out an experiment with 11-year-old students in order to estimate the variation of musical abilities according to the level of presence of information technology in teaching, and according to time.
Method
We had to limit our sample to two classes. The first one made up the experimental group (EG) and the second the control group (CG). The EG attended 15 classes of a teaching programme which included IT as an important tool for learning music and which paid more attention to composition activities than to traditional software for learning music. The CG attended classes of the standard music programme. At the beginning and at the end of the experiment, both groups were tested with the Aptitude Test for Music (Valseschini 1986), the music teacher's opinion, and a questionnaire aimed at verifying students' interest in the activity.
Results
During the presentation we will illustrate the details of the two teaching programmes. Interesting comparisons between the two learning modes will be elicited from the measures obtained in testing. We will also discuss an evaluation of conceivable e-learning applications that IT and new human interfaces allow.
Conclusions
We will try to draw a possible scenario for further research into applications of IT in music teaching and learning. The community will be asked, among other things, to devise appropriate psychological tests, to carry out experiments with strict scientific principles, and to develop ad hoc interfaces and technologies for learning music, such as powerful software that is at the same time easy to use.

Key words: Music education, Music learning, Information technology

[email protected]


Musical Meaning I

30

30.1 "Nearly Stationary": The use of silence in Cage's String Quartet in four parts

Pwyll Sion, Ruth Barnett

University of Wales, Bangor, UK

John Cage is primarily remembered today for advancing the concept of silence in music, as employed in his infamous silent piece 4'33" (1952). While there has been general agreement amongst Cage scholars that the aesthetic and structural potential of silence had already been explored by him prior to the composition of this controversial work, little research has been done to determine the exact nature of its function pre-4'33", or the extent of its importance to Cage at this time.

In this paper we attempt to address these issues by examining the role of silence in Cage's String Quartet (1950). Pritchett and Bernstein's inventories of Cage's gamut of sounds for the Quartet are anomalous in various respects. In particular, they do not reflect Cage's evident preoccupation with the number twenty-two, and the proportions, divisions and multiplications derived from it. The addition of a silent sonority to Bernstein's inventory of forty-three sonorities significantly alters the position and disposition of sounds and silences within the Quartet's content and form. The impact of the silent sonority becomes even greater if it is located at the very beginning of the set. Not only does it serve to highlight the central position of sonority twenty-two in the overall scheme of the work, but it quite literally places silence at the very beginning of the work's conception and design. These issues will be addressed in relation to the work's third movement, "Nearly Stationary".

Key words: Gamut of sounds and silences, Proportion and symmetry, Compositional process

[email protected]


30.2 A method for recognising the melody in a polyphonic symbolic score

Anders Friberg1, Sven Ahlbäck2

1Speech, Music and Hearing, KTH, Stockholm, Sweden
2Royal College of Music in Stockholm, Sweden

Background
For a human it is in most cases an easy task to identify which part can be considered the main melodic theme. Discussions of melodic themes are abundant in music theory books. However, specific descriptions of what constitutes a melody and sets it apart from, for example, accompaniment parts have been rare. Recently, a need for automatic melody recognition has been identified in music information retrieval applications. In this work we start from a symbolic representation in terms of MIDI files.
Aims
The current experiment aimed at investigating whether a rather simple collection of intuitive melodic features could be sufficient to find the melody part in a set of polyphonic scores which could be characterized as melody with accompaniment.
Method
Two sets of scores (a training set and an evaluation set) were manually annotated, indicating the main melody, bass, and accompaniment. For each part, several features intuitively considered melodically important were extracted. These cues were, for example, the averages and standard deviations of note IOI, note duration, and note pitch. These features were used as predictors in a multiple regression analysis using the training score set.
Results
Preliminary results indicate that the algorithm predicts the correct melody in about 90% of all scores. When the algorithm picks an alternative part, it is often an alternative melody. Further development and testing are currently under way. We foresee that the inclusion of more specific gestalt-theoretic principles, such as good continuation, will improve the recognition.
Conclusions
So far, the automatic identification of the melody was found to be surprisingly accurate using only simple average tone features. The system is currently used in a commercial system for modifying polyphonic ring tones.
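The feature-and-regression pipeline described here can be sketched roughly as follows. The feature set, the weights in `WEIGHTS`, and the toy data are hypothetical placeholders for illustration, not the authors' trained model.

```python
# Sketch of melody-part selection from simple per-part features.
# Each part is summarized by statistics (mean pitch, pitch SD, mean IOI);
# a linear model scores each part and the highest-scoring part is chosen
# as the melody. The weights are made up; a real system would fit them
# on hand-annotated scores with multiple regression.

from statistics import mean, stdev

def part_features(notes):
    """notes: list of (midi_pitch, onset_seconds) tuples for one part."""
    pitches = [p for p, _ in notes]
    onsets = sorted(t for _, t in notes)
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    return {
        "mean_pitch": mean(pitches),
        "sd_pitch": stdev(pitches) if len(pitches) > 1 else 0.0,
        "mean_ioi": mean(iois) if iois else 0.0,
    }

# Hypothetical regression weights: melodies tend to be higher-pitched,
# more varied in pitch, and faster-moving than accompaniment parts.
WEIGHTS = {"mean_pitch": 0.05, "sd_pitch": 0.10, "mean_ioi": -1.0}

def melody_part(parts):
    """Return the name of the part with the highest melody score."""
    def score(notes):
        f = part_features(notes)
        return sum(WEIGHTS[k] * f[k] for k in WEIGHTS)
    return max(parts, key=lambda name: score(parts[name]))

parts = {
    "melody": [(72, 0.0), (74, 0.5), (76, 1.0), (77, 1.5), (79, 2.0)],
    "bass":   [(40, 0.0), (40, 1.0), (45, 2.0)],
}
print(melody_part(parts))  # the high, fast-moving part wins: "melody"
```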

Key words: Melody recognition, Music analysis

[email protected]

30.3 How the timing between notes can impact musical meaning

Bret Aarden

University of Massachusetts Amherst, USA


Background
In the opening chapter of his book Emotion and Meaning in Music, Leonard Meyer (1956) quotes Eduard Hanslick in summary: "[T]he intellectual satisfaction which the listener derives from continually following and anticipating the composer's intentions... this perpetual giving and receiving takes place unconsciously, and with the rapidity of lightning flashes" (p. 30).

Aims
It is a convenient fiction that perception and the formation of expectation occur in instantaneous flashes. Since it takes a third of a second from the start of a note to the first hint that our minds have noticed a violation of expectation (Besson 1997), a fast melody can have moved once, or even twice. The purpose of this study was to determine how long it takes to form an expectation for the following note.

Method
50 undergraduate music students were asked to listen to unfamiliar melodies and make a snap judgment about the contour of the melody (up/down/same) at certain notes. Prior research has shown that the speed of response is a good indication of how much a note was expected (AUTHOR 2003). That is, the less surprising a note is, the faster listeners are able to determine the contour. The melodies were played at two different tempi: some listeners heard melodies at 60 bpm, and others at 40 bpm.

Results
Overall, listeners responded much as in previous work: as Meyer (1956) and Narmour (1990) predicted, they expected leaps and steps to continue in particular directions. More importantly, those who heard the slower melodies tended to be less surprised by leaps, but only - and here is a striking departure from prior evidence - for descending leaps. In addition, they expected reversals after leaps - but only after ascending leaps. These results are consistent with the principle that ascending leaps and descending steps are more common in melodies (Vos & Troost 1989).

Conclusions
Although only some types of expectations were stronger at the slower tempo, that does not mean those are the only melodic expectancies that develop over time. It is interesting to discover, however, that melodic expectations are still developing over a second after a note has sounded.

References

AUTHOR. 2003. Unpublished Ph.D. dissertation.
Besson, Mireille. 1997. Electrophysiological Studies of Music Processing. In Perception and Cognition of Music, edited by I. Deliège and J. Sloboda. East Sussex, UK: Psychology Press.
Meyer, Leonard. 1956. Emotion and Meaning in Music. Chicago: University of Chicago Press.
Narmour, Eugene. 1990. The Analysis and Cognition of Basic Melodic Structures. Chicago: University of Chicago Press.
Vos, Piet and Jim Troost. 1989. Ascending and Descending Melodic Intervals: Statistical Findings and Their Perceptual Relevance. Music Perception, 6(4): 383-396.

Key words: Timing, Expectancy, Perception

[email protected]


30.4 Influence of expressive music on the perception of short text messages

Ivett Flores Luis, Roberto Bresin

Department of Speech, Music and Hearing, Royal Institute of Technology (KTH), Stockholm, Sweden

Background
Short text messages (SMS) have become a popular form of communication, especially among younger portions of the population. SMS are sent using mobile phones, and because of length limitations a coded language has been developed that makes use of acronyms, phonetic shortenings and emoticons (e.g. smileys), used to express emotions. Last-generation mobile phones allow a new type of message, the MMS (Multimedia Messaging Service) message, which consists of a text message complemented with a colored picture and a sound file. In MMS messages, music could be used to enhance the perceived emotional content of the written message. We tested this by investigating the effects of music and pictures on the perception of the emotional content of text messages.
Aims
We tested whether the perception of the emotional content of short text messages was affected by a change in concomitant emotionally expressive music performances and colored pictures.
Method
In a factorial design, thirty subjects (15F, 15M) rated the emotional content of three text messages (depicting a neutral, happy and angry situation respectively) on the three scales Neutral, Happy and Angry, ranging from low to high. The three text messages were presented in combination with or without one of two colored pictures (red triangles or yellow circles) and with or without one of two instrumental songs (a happy pop song and an aggressive hard rock song), for a total of 3x3x3 stimuli.
Results
A three-way repeated-measures ANOVA with the three components (text, picture, and music) as factors was conducted on the listeners' mean ratings, separately for each of the three adjectives (Neutral, Happy, and Angry). For each emotion there were significant main effects of all components. A significant interaction between music and text was observed; angry text stimuli were rated as less angry when the music was happy, and vice versa.
Conclusions
The results show that it is possible to influence the emotional content of a text message by varying the accompanying musical information. Depending on its emotional character, music can either increase or decrease the intended emotion in the text message.

Key words: Emotion, Communication, Multimedia messaging

[email protected]


Neuroscience II

31

31.1 Neural mechanisms underlying multisensory processing in conductors

W. David Hairston1, Jonathan H. Burdette2, Donald A. Hodges3

1Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, USA
2Department of Radiology, Wake Forest University School of Medicine, USA
3Music Research Institute, University of North Carolina at Greensboro, USA

Background
Successful conductors develop a myriad of multisensory skills, such as reading a musical score and retaining the idealized version of sounds in auditory memory while monitoring the actual sounds currently being produced. Of particular interest is how visual information (e.g., from the score and players) is integrated with auditory information (both real and imagined). As an example, consider that conductors must be adept at identifying errors, localizing the errant sound precisely in time and space. Thus, one would expect that experienced conductors have developed specialized skills at sound localization, to instantly identify "who" played "what" wrong note, and also to connect this visually with the player. Discussed here is continuing research on auditory processing within this unique population, and the impact this has on multisensory processing.

Aims
What are the behavioral advantages of the specialized processing required of music conductors, within both the auditory and multisensory realms? What neurophysiological differences are related to these processes?

Method
Subjects, including experienced conductors and non-musicians, participated in a series of auditory and cross-modal behavioral tasks, including pitch discrimination and temporal order judgment, as well as visual, auditory, and multisensory target localization. Additionally, differences in specific patterns of brain activity associated with these tasks are being evaluated using contemporary functional MRI techniques.

Results
Behavioral data collected thus far indicate that, in the auditory realm, not only are conductors more accurate in both pitch discrimination and auditory temporal judgments, but they are also


more precise in locating sounds in space. Interestingly, they also show a general localization enhancement with multisensory (visual-auditory) stimuli relative to visual localization, a trend not observed in control subjects. Although functional neuroimaging data are still being collected, pilot results suggest that conductors show different patterns of activity within regions of the brain generally associated with cross-modal or multisensory processing.
Conclusions
These results suggest that the unique daily experience and training of music conductors have a direct impact upon both their auditory sensory acuity and its application to multisensory experiences. Additionally, they suggest the utilization of different neural mechanisms to support their superior performance in these tasks.

Key words: Multisensory processing, Auditory temporal order judgments, Neural mechanisms

[email protected]

31.2 Automatic pitch processing in different kinds of musicians

Elvira Brattico, Mari Tervaniemi

Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland

Background
The talk will present recent neuroscientific studies investigating whether and to what extent musical expertise affects the automatic pitch processing of musical sounds in the brain.
Aims
Intensive musical training is known to facilitate the perception, recognition and memorization of pitch, in ways that depend on the type of training. For instance, musicians with extensive studies in music theory and composition show superior categorization of chords with a complex structure compared to musicians focusing mainly on playing an instrument. Also, musicians playing instruments with non-fixed tuning show more fine-grained pitch discrimination skills than musicians playing fixed-tuning instruments. Training may render fully automatic and unconscious processes that previously required the conscious control of voluntary attention. We hypothesized that such automatization specifically targets the pitch processing skills required by each type of training.
Main contribution
The automatization of pitch discrimination skills was studied with the memory-specific mismatch negativity (MMN) component of event-related brain potentials (ERPs). It was found that musicians show a larger MMN than non-musicians to dissonant and mistuned chords but not to minor chords. This indicates more efficient automatic encoding of less common chords with a higher degree of spectral complexity in musicians than in non-musicians. The strategies and contents of musical training also affect the MMN to complex sounds: the aural or non-aural cognitive strategy used by musicians in practicing and performing affects the automatic processing of isolated pitches and, even more, of temporally complex tone patterns. Moreover, musicians with an "analytic" knowledge of music theory and composition are able to categorize pentoid chords according to their underlying set-class structure both pre-attentively and attentively when the set-classes differ in dissonance. The neural networks and mechanisms underlying the pre-attentive and attentive chord categorization of those "analytic" musicians are distinct, however, from those of instrumentalists with a more "procedural" training. Finally, even band musicians with minimal or totally


lacking formal training in music display selectively superior automatic discrimination of musical sound features as compared to non-musicians.
Implications
Musical expertise is reflected in the automatic cortical discrimination of pitch information in multiple ways which are specific to the type of training and the acquired music skills.

Key words: Musical expertise, Event-related potentials, Mismatch negativity

[email protected]

31.3 The actions behind the scenes: What happens in your brain when you listen to music you know how to play?

Amir Lahav1, Elliot Saltzman3, Gottfried Schlaug2

1The Music Mind and Motion Laboratory, Sargent College of Health and Rehabilitation Sciences, Boston University, Boston, USA
2Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, USA
3Haskins Laboratories, New Haven, USA

Musical actions can be either performed by us or seen performed by others. But often, musical actions can also be heard, for example when the sound of a well-trained piece of music is presented. Recent evidence has shown that a special population of mirror neurons within the monkey premotor cortex responds when the animal only hears the sound of an action (e.g. paper ripping). Inspired by this, we used music as a triggering sound to explore for the first time whether humans have a mirror-neuron system that resonates motorically when one listens to music that one knows how to play. To address this, we trained nonmusician subjects to play by ear a novel piece of music on a piano in a fully monitored learning environment. We then used functional magnetic resonance imaging (fMRI) to measure brain activity in subjects while they listened to the newly acquired musical piece, without performing any movements. We will discuss the implications of this study for understanding the neural mechanisms involved in music perception and action, and will suggest music as food for our motion system.

Key words: Auditory-motor interactions, Mirror neurons

[email protected]

31.4 The brain in concert - activation during actual and imagined singing in professionals

Boris Kleber1, Niels Birbaumer1,2, Ralf Veit1, Martin Lotze1

1Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany
2Center of Cognitive Neuroscience, University of Trento, Italy


Introduction
Few imaging studies have examined activation sites during singing, and none has involved professional singers. For non-experienced singers, lateralization to the right hemisphere within the inferior primary motor cortex, insula and temporal pole has been reported (1, 2). Activation during tonal vocalization (3) comprises sensorimotor and auditory areas, as well as the bilateral insula, the medial cingulate cortex (MCC), parietal and occipital lobes and the brain stem. We investigated cerebral activation (fMRI) during actual and imagined singing of an aria in professional singers.
Method
Sixteen classically trained singers (4 opera chorus members, 4 soloists and 8 vocal students; mean age 31.06 years, SD = 8.27; 5 men) with an average singing experience of 14.06 years (SD = 7.59) participated. The first line of T. Giordani's aria "Caro mio ben" was divided into 6 phrases lasting 3 seconds each. Subjects sang each phrase separately after a visual signal. In the imagery condition, the same phrases were imagined. Blocks with deep inspiration without singing served as baseline and alternated with singing blocks. Sparse-sampling fMRI allowed auditory control during singing and avoided movement artifacts.
Results
Overt singing activated bilateral sensorimotor and auditory cortices, SMA, PMC, Broca's and Wernicke's areas and their analoga, insula, MCC, anterior temporal lobe, cerebellum, thalamus, brainstem and basal ganglia. Imagined singing activated predominantly fronto-parietal areas and primary sensorimotor and auditory cortex in both hemispheres. Bilateral areas processing emotions showed intense activation (ACC, insula, hippocampus, temporal pole, amygdala). Executed minus imagined singing revealed increased activation in bilateral somatosensory and right auditory cortex, cerebellum, thalamus, brainstem and left Wernicke's area. Mental singing showed higher activation in SMA, PMC, prefrontal cortex, inferior parietal lobe and the ACC.

Discussion
These data present the first investigation of the areas involved in professional singing and confirm the important role of a fronto-parietal network for mental training of musical material. Mental singing additionally involved areas associated with emotional processing.
References

(1) Wildgruber D et al. (1996). Neuroreport 7, 2791.
(2) Riecker A et al. (2000). Neuroreport 11, 1997.
(3) Perry DW et al. (1999). Neuroreport 10, 3979.
Supported by the Deutsche Forschungsgemeinschaft (BI 195/47-1&2)

Key words: Classical singing, fMRI, Professional singers

[email protected]


Symposium: Musical communication

32

Convenor: David J Hargreaves
Discussant: Frances Rauscher

Music is a fundamental channel of communication: it provides a vital means by which people can share emotions, intentions, meanings, information, and cultural values. Music can exert powerful physical and behavioural effects, can produce deep and profound emotions within us, and can be used to generate infinitely subtle variations of expressiveness by skilled composers and performers. The rapid pace of technological change over the last two decades or so has led to a virtual revolution in the ways in which people engage with and experience music in everyday life, and so the nature of musical communication has itself changed rapidly. The convenors' recently published book Musical Communication (Miell et al., 2005) attempts to document these recent changes, organising 19 wide-ranging chapters around 4 main sections: Cognition, representation and communication; Embodied communication; Communication in learning and education; and Cultural contexts of communication.

This symposium brings together four contributors to the book, selected so as to show the diversity of approaches to this fascinating, complex and rapidly changing topic. David Hargreaves, Raymond MacDonald and Dorothy Miell first set the scene by outlining the theoretical background to the study of musical communication, proposing a "reciprocal feedback" model in which the social and cultural context is conceived as an integral part of musical interaction. Susan Young next explores infant-adult interaction, discussing communicative aspects of children's instrumental play in the presence of adults and emphasising the importance of imitation and elaboration. Jane Davidson investigates the musical communication of vocal performers, drawing on observational and interview data with professional singers and their audiences, and proposing an original theory of musical performance gestures. Finally, Raymond MacDonald, Dorothy Miell and Graeme Wilson consider how talking about music can be seen as an integral part of musical communication. Drawing on interviews with young people involved in informal music making, and with professional jazz musicians, they show how talk about music can form an important part of the process of musical identity formation.

The discussant, Frances Rauscher, will assess the parallels and divergences between these varied contributions, and consider directions for further research.

Reference:
Miell, D.E., MacDonald, R.A.R. & Hargreaves, D.J. (eds.) (2005). Musical communication. Oxford: Oxford University Press. pp. xvi + 433.

32.1 Musical communication in a social world

David Hargreaves1, Raymond MacDonald2, Dorothy Miell3

1Roehampton University, UK
2Glasgow Caledonian University, UK
3Open University, UK

Background
In Chapter 1 of Musical Communication (eds. Miell, MacDonald and Hargreaves, Oxford UP, 2005), we propose that there are three major determinants of the musical communication process, namely the characteristics of the music itself; those of the people involved (i.e. the composer, performer and/or listener); and those of the situation in which it occurs. Although this is not a new idea in music psychology, most previous models of musical communication have been influenced by "transmission" models, in which a communicator uses a channel to send information to a receiver: the social and cultural context of the "information" (in this case a musical message or expressive intention) has typically been neglected. We propose a new, socially contextualised approach to the study of musical communication which forms the starting point of this symposium.
Aims
Our main aim is to propose a "reciprocal feedback" model of musical communication which has two main features. First, it incorporates the notion of reciprocal determinism, in which each of the three major determinants above is seen to exert a mutual influence on each of the others. The second feature is our adoption of broader definitions of the three determinants - of the people involved, of "music" itself, and of the social contexts within which it occurs - than has hitherto been the case.
Main contribution
The resulting model is intended to represent a view of musical communication which goes beyond previous transmission models (a) by taking into account the many relevant personal, musical and contextual variables, and (b) by virtue of its incorporation of the reciprocal causal influences of all its components.
Implications
This socially contextualised model of musical communication has the potential to yield new insights, as well as to open up new areas of study: the present symposium illustrates some of these possibilities.

Key words: Communication, Socio-cultural context, Reciprocal feedback

[email protected]

32.2 Imitation and elaboration: Processes in young children’simprovisation

Susan Young


University of Exeter, UK

Background
In recent years, theories of adult-infant interaction have been theoretically influential and informative for researchers interpreting many forms of musical activity in early childhood (e.g. Addessi, Barrett, Custodero, Dissanayake, Littleton, Young). Taking this theoretical position as familiar and widely endorsed, this presentation will go on to consider more specific questions. Drawing on examples of young children's play with instruments, it will ask: What are some of the communicative processes of young children's music-making? How are these processes activated? How do they bear on the way we interpret young children's musical activity?

Method
Three- and four-year-olds in nursery settings typical for the UK were provided with opportunities to play with various instruments as one of a number of available free-play options. Adults were ready to join in playing with individual children if invited.

Analysis
The children's instrumental play was recorded continuously onto videotape, and short selected episodes were then subjected to further detailed analysis. The analytical method purposely highlighted the interaction between adult and child.

Results
Musical play between adults and children in both sets of analyses was characterised by the exchange of imitative turns which evolved into elaborated versions.

Conclusions
It is proposed that imitation to achieve repetition of key ideas, far from being a simplistic form of musical activity, is a powerful mechanism (particularly in improvised music) for establishing connection and for providing coherence and a basis for elaboration. These processes of imitation and elaboration produce a sense of meaningful pattern and experience, albeit determined by context: a form of musical communication.

Existing accounts of children's musical development assume musical processes to be autonomous and individualistic, and thus separable from any interactive format. Invoking communicative processes as one of a range of pedagogical strategies could potentially support children's development as improvisers and composers in educational practice.

Key words: Imitation, Improvisation, Early childhood

[email protected]

32.3 Bodily acts: Vocal performance re-considered

Jane Davidson

University of Sheffield, UK, and University of Western Australia, Australia

Aims and objectives
In this paper I shall investigate the musical communication of the vocal performer: specifically, the inner and outer states of the vocal performer as interpretable from the bodily communication between performer and audience in a range of live performance contexts. This is undertaken to explore: i) how important bodily communication is for musical and social interaction and understanding in the performance context; and ii) ways of clarifying and developing pedagogical strategies for such communication to occur.

Context
For more than a decade I have been fascinated by the ways in which body movement represents, generates and presents music in performance. Nowhere is the case more intriguing than in vocal performance, where the singer becomes the site of the story of the song or aria, with both "true" and "false" (acted) intentions and expressions, and the focus of audience attention as the "star" of the performance, with all the associated socio-behavioural roles. This can mean that a tension between inner and external states occurs.

Methods
The paper draws on observational and interview data with professional singers and their audiences from a number of contexts, including observing DVD materials and attending or participating in live performances. Techniques of analysis include interpretative phenomenological analysis and standard non-verbal observational measures.

Results
The results are drawn up within my own theory of vocal performance gestures.

Key words: Body, Voice, Performance

[email protected]

32.4 Talking about music

Raymond MacDonald1, Dorothy Miell2, Graeme Wilson3

1Glasgow Caledonian University, UK
2Open University, UK
3University of Newcastle, UK

Background
This presentation considers how talk about music can be seen as integral to musical communication. The model of communication adopted here is one in which talk is seen as a tool of social action: people are seen as being able to achieve certain personal and social ends through their talk rather than using it purely as a means of transmitting information between people. When individuals describe their tastes and interests in music, for example, this model suggests that they are not only conveying to other people certain information about their particular preferences for a musician, band, or piece of music, but also that they are doing some important personal "business" by positioning themselves (e.g. as "knowledgeable fans") in relation to others.

Aims
This presentation aims to highlight how talking about music is an important part of the musical communication process.

Method
Two data sets are presented: one from interviews with young people (N=16) who are involved with music making in their own free time (e.g. with rock bands and choirs), and another from focus groups (N=2) and interviews (N=10) with professional jazz musicians. In both studies, the conversations were tape recorded and transcribed. Following repeated inspection of the transcripts, researchers coded the data under individually developed categories representing emergent themes.


Wednesday, August 23rd 2006

Results
The analysis examines ways in which the participants' talk about their involvement in musical activities (including listening) constructs and maintains particular musical identities, and in particular how they use their talk about music to claim and mark their relationship to various communities, such as knowledgeable others. For example, jazz musicians articulate conflicting priorities of social groupings and individual qualities to authorise their musical identity. To resolve this tension in their discourse, these musicians often resort to the same strategies as the young people.

Conclusions
Talking about music is a crucially important aspect of the overall process of musical communication, and one that can illuminate, in many different ways, the richness of that process.

Key words: Talk, Young people, Jazz improvisation

[email protected]


Perception II 33

33.1 Similarity measures for tonal models

Richard Randall1, Bilal Khan2

1Department of Music and Dance, University of Massachusetts, Amherst, USA
2Department of Mathematics, John Jay College of Criminal Justice, USA

Multidimensional-scaling models of tonal hierarchy encode cognitive relationships between chords as Euclidean distances. Fred Lerdahl's tonal pitch space (TPS) model approximates cognitive perceptual relations between chords by providing a combinatorial procedure for computing the distance value between two chords. Because of the influence of experimental data on the TPS model, we would expect a high correlation between experimental data and analyses of chord progressions generated by the TPS model. The value of such a comparison is clear: if the TPS model posits a hypothesized model of perception, then we would like to know if, and by how much, it differs from the experimental data it claims to approximate. In this paper, we focus on the intra-regional relation descriptions of TPS. We achieve two important goals. First, we develop a similarity measure that allows us to accurately compare the TPS model with a model of perceived chord relations created by Bharucha et al. Second, this paper applies the similarity measure to normalized canonical representations of each model, thereby avoiding comparisons affected by arbitrary design choices. The greatest obstacle inherent in quantifying a comparative procedure is negotiating the manner in which each constituent model codes its data. The similarity measure and the method of normalization are applicable to any model with the formal properties described herein and have the potential to focus experimental design and strengthen the relationship between experimental data and analytic systems.

Key words: Perception, Tonality, Multidimensional scaling

[email protected]

33.2 An interval cycle-based model of pitch attraction

Matthew Woolhouse, Ian Cross


Centre for Music and Science, Faculty of Music, University of Cambridge, UK

Background
Although a substantial body of literature and empirical research addresses the idea of perceived attraction between pitches (Krumhansl, 1996; Lerdahl, 2001; Larson, 2004), to date no model has been proposed in which specifications for pitch attraction may be formally grounded. Furthermore, although a number of formal properties of pitch-class structures correspond with certain music-theoretic processes (Balzano, 1982), no link has been established between either these properties or the concept of pitch attraction and the formation of tonal hierarchies (Krumhansl, 1990).

Aims
This paper presents a computational model of pitch attraction built on the formal property of interval cycles (Woolhouse & Cross, 2004). Experimental research, currently under way, aims to investigate the applicability of the model to the pitch attraction properties of harmonic structures from a number of different music-historical style-periods.

Method
The pitch attraction of chord transitions is modelled graphically in the form of pitch attraction profiles (pitch attraction profiles based on interval cycles correlate significantly with tonal hierarchies recovered by Krumhansl (1990), Brown, Butler & Jones (1994), and Lamont & Cross (1994)). Preliminary studies are at present assessing the efficacy of a number of different experimental methods aimed at measuring listeners' experience of pitch attraction. Initial results suggest that an attraction-rating paradigm will be used, in which listeners' response profiles will be correlated with the predictions of the model. Experiments testing both "real music" chord-to-chord transitions as well as context-free chord-pair transitions are scheduled for late January and February 2006.

Results and Conclusions
In addition to interval cycles, the model also contains a number of "bolt-on" psychoacoustic components that, depending on the musical style being modelled, may or may not require inclusion (for example, the effective modelling of diatonic pitch attraction requires a relatively large number of psychoacoustic components). An additional aspect of the research therefore involves exploration of the relative balance between cognitive (interval cycle-based) and psychoacoustic constituents inherent in a given musical style-period.

Key words: Interval cycles, Pitch attraction, Tonal hierarchies

[email protected]

33.3 Evaluation and applications of tonal profiles for automatic music tonality description

Emilia Gómez, Perfecto Herrera

Music Technology Group, Pompeu Fabra University (UPF), and Sonology Department, Higher School of Music of Catalonia (ESMUC), Spain

Background
A large amount of research has been devoted to studying how tonality is perceived, mainly in western music. Some of this research has led to the development of algorithms for the automatic estimation of the key of a given piece by analyzing its score, indicating that these tonal models can be used to estimate the tonality of unknown musical pieces. During the last few years, we have witnessed an increasing interest in analyzing audio recordings where the score is unknown (MIREX-05).

Aims
The aim of this work is to discuss how a set of tonal profiles derived from human ratings can help to analyze pieces of music in audio format, where the score is unavailable.

Method
We evaluate the performance of different tonal models when applied to estimating the key of a piece, in comparison to the use of pitch class distribution profiles obtained empirically from statistical analysis and some flat profiles taken from music theory. We first obtain a representation of the pitch class distribution of an audio piece and then correlate these features with the studied tonal profiles in order to obtain an estimated global key measurement.

Two different models are compared: the one by Krumhansl (Krumhansl-90) and the modifications by Temperley (99). We also include major and minor profiles derived from statistical analysis of folk MIDI files (Chai-05), from the Kostka and Payne (95) music theory textbook (Temperley-05), and some additional audio-derived statistics (Gomez-05). Finally, we include tonic triad and diatonic flat profiles derived from music theory.

Results
The use of flat diatonic, Krumhansl and Temperley profiles yields similar results. This performance is also similar to that of statistical profiles obtained from audio features, but better than profiles empirically obtained from symbolic scores. In general, performance degrades when analyzing musical genres other than classical music (e.g. pop, rock, etc.). The best result for popular music (55% accuracy) is obtained using a tonic triad profile. This reveals that tonal models may depend on the musical style, and it supports the importance of the tonic triad in popular music (Temperley-01).
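The abstract gives no implementation details; as a rough sketch of the profile-correlation approach it describes, a Krumhansl-style key estimator might look like the following (the profile values are the Krumhansl-Kessler probe-tone ratings as commonly quoted in the literature; the toy input below stands in for an audio-derived pitch class distribution):

```python
# Krumhansl-Kessler probe-tone profiles; index 0 = C, ..., index 11 = B.
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def estimate_key(pc_distribution):
    """Correlate a 12-bin pitch-class distribution against all 24
    transposed major/minor profiles; return the best-matching key."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
            # Transpose the C-based profile so its peak sits on `tonic`.
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            r = pearson(pc_distribution, rotated)
            if best is None or r > best[0]:
                best = (r, NAMES[tonic] + ' ' + mode)
    return best[1]

# Toy distribution weighted towards the C major scale:
print(estimate_key([5, 0, 2, 0, 3, 2, 0, 4, 0, 2, 0, 1]))  # C major
```

In practice the input vector would be a chroma/pitch-class profile extracted from audio, and each of the candidate profile sets named in the abstract could simply be swapped in for the two arrays above.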

Key words: Tonality estimation, Audio description, Music information retrieval

[email protected]

33.4 A comparison between the temporal and pattern approaches to virtual pitch applied to the root detection of chords

Ludger Hofmann-Engl

The Link Schools for Special Needs, UK

Background
For some decades, the discussion of which of the two approaches to virtual pitch, the temporal model or the pattern model, is more powerful has remained unresolved, with temporal models being favored within the scientific community. While temporal models cannot predict frequencies above 5 kHz, pattern models are phase-shift insensitive and thus cannot explain certain experimental data. This paper investigates the question further.

Aims
The temporal model, according to R. Meddis, was compared to the pattern model, according to Hofmann-Engl, and they were tested against the judgments of students of composition at the Royal Academy of Music, London. The aim was to see which of the models would more precisely predict the roots of non-classical chords and the 5th root of classical chords (piano, MIDI).

Method

Participants. 6 composers from the Royal Academy, London; 1 composer was female.

Equipment. Laptop (Aspire 1310) programmed in Java (JDK 1.4), headphones (Panasonic DJ 100), MIDI piano1 sound.

Stimuli. 16 chords played in the orders a - b and b - a (32 trials in all). Each chord was followed by the bass note c after 5 MIDI units. In the case of a non-classical chord, chord a fetches the best root c according to Hofmann-Engl's model and chord b fetches the best root c according to Meddis's model. In the case of classical chords, the 5th-best root was chosen in order to avoid similarity effects.

Procedure. Participants listened to the randomized chord pairs via headphones and were instructed to select the best match, choosing a, b, equal, or not sure.

Results
In all cases except one, participants preferred the root for non-classical chords and the 5th root for classical chords according to Hofmann-Engl's model. For instance, while Hofmann-Engl's model predicts the chord bb, b, c to be the best match to c (preferred by 3.09 participants), Meddis's model predicts g, g#, a (preferred by 1.94 participants).

Conclusions
Although there seems to be a preference within the scientific community for the temporal model, these data indicate that a pattern model is the more reliable root predictor.

Key words: Virtual pitch, Temporal model, Pattern model

[email protected]

33.5 The geometry of musical chords

Dmitri Tymoczko

Princeton University, USA

To conceptualize music is to disregard information. When a violist plays the pitch sequence E3-G3-C4, listeners will typically understand this as an instance of a "C major chord": the unordered set of pitch classes {C, E, G}. My paper uses quotient spaces to model the way listeners, composers, and performers abstract from musical information. The result is a new geometrical representation of musical chords, one that is potentially useful to cognitive scientists who seek to explain or simulate composers' behavior, and to psychologists interested in judgments of similarity between musical chords.

An unordered set of pitch classes can be modeled as a point in the quotient space Tn/Sn, the n-torus modulo the symmetric group Sn. Line segments in these spaces represent voice leadings, or mappings from the pitch classes of one chord onto those of another. The length of a line segment is equal to the size of the voice leading it represents. Understanding the geometry of the quotient spaces Tn/Sn therefore allows us to specify precisely how a chord's internal structure determines its contrapuntal possibilities.
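The abstract does not spell out the metric; purely as an illustration of "size of a voice leading" as a minimum over mappings between pitch classes, one might compute it as follows (the taxicab metric and all names here are illustrative assumptions, not Tymoczko's definitions):

```python
from itertools import permutations

def pc_distance(a, b):
    """Shortest distance between two pitch classes on the 12-point circle."""
    d = abs(a - b) % 12
    return min(d, 12 - d)

def minimal_voice_leading(chord_a, chord_b):
    """Smallest total motion over all one-to-one mappings of the
    pitch classes of chord_a onto those of chord_b (taxicab size)."""
    return min(
        sum(pc_distance(x, y) for x, y in zip(chord_a, perm))
        for perm in permutations(chord_b)
    )

# C major {0, 4, 7} to its transposition up a fifth, G major {7, 11, 2}:
print(minimal_voice_leading((0, 4, 7), (7, 11, 2)))  # 3 semitones in total
# Chromatic cluster {0, 1, 2} to its transposition {7, 8, 9}:
print(minimal_voice_leading((0, 1, 2), (7, 8, 9)))   # 13 semitones in total
```

The two outputs illustrate the abstract's contrast: the nearly even triad reaches its transposition with very little total motion (C-B, E-D, G-G), while the cluster cannot.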


Chords that divide the octave evenly, such as {C, E, G}, lie near the center of these spaces, and can be linked to their transpositions by efficient voice leadings. Chords that divide the octave unevenly, such as {C, C#, D}, lie near the boundary of the spaces, and can be linked to their untransposed form by efficient voice leading. Consequently, paradigmatically "consonant" and "dissonant" sonorities occupy different regions of the spaces, and suggest different musical uses.

This paper appeared in Science Magazine, July 7, 2006.

Key words: Voice leading, Harmony, Geometry

[email protected]


Cognition I 34

34.1 Music beyond the score: Its meaningful mathematics, modeling, performing - and the flow of spoken language

Manfred Clynes

Georgetown University, Department of Oncology and Department of Biophysics and Physiology, USA

We earlier isolated the biologic dynamic time-forms for expressing, generating and communicating specific emotions, common to various modes such as touch, sound and gesture, called sentic forms. Music is seen as a double stream: Stream 1, the repetitive pulse, and Stream 2, the continuing, developing story, brain-processed simultaneously but differently, the two streams being varyingly emphasized historically, from Gregorian chant through the balanced flow of the classical period to rock music.

Over decades of research, four algorithms were discovered which together model "living" musical thought and perform it, Microstructure and Structure, with all instruments. They permit global adjustment to ever better realize the interpreter's concept of the work, computer-constructing an interpretation as a whole and in integrated detail. A major concert of such orchestral music was first performed at Stanford in 2004, and recently at the University of Vienna (ESMCR) on 4/18/06: Schubert's Unfinished Symphony and Beethoven's Eroica Symphony.

Four global algorithms provide Musical Microstructure related to structure: 1. Hierarchic Pulse (HP), composer-specific; 2. Predictive Amplitude Shaping (PAS), shaping the present note depending on what and when the next note is going to be, according to the tangent of the pitch-time curve; 3. Organic Vibrato; and the recently developed 4. Expressive Intonation (melodic tuning). PAS provides a natural flow which, together with HP, results, quite astonishingly, in phrasing. They are supplemented by sectional actions, according to the composer's directions.

Great musicians (e.g. Pablo Casals, Arthur Schnabel) used syllables to indicate natural phrasing in music. Because syllables join in specific dynamic ways, shaping the tails of syllables to join the next, they can explain and demonstrate subtleties in musical phrasing and especially the shaping of tones, such as on violin or cello. Those subtleties are prerequisites of emotional meaning.

Conversely, understanding how present notes are shaped by PAS for musical flow, depending on the next note, can illuminate the ways speech is formed predictively, involving the innervation of the vocal cords producing the dynamic amplitude shaping of the fundamental, as a similar brain function, ubiquitously aiding the effortless natural flow of ordinary speech. Illustrated by examples.

Key words: Brain function, Language, Flow

[email protected]

34.2 Motor-mimetic images of musical sound

Rolf Inge Godoy

University of Oslo, Department of Musicology, Norway

Background
Against the background of a rapidly growing amount of research on embodied cognition (see Gallese and Lakoff 2005 for a particularly lucid summary) and on gestures in music (e.g. Wanderley and Battier 2000), we have in our own research on musical gestures (http://musicalgestures.uio.no) become increasingly aware of the crucial role of human movement in the perception and cognition of musical sound. In short, we believe perceiving and imagining music involves overt and/or covert simulation of sound-producing (hitting, stroking, bowing, etc.) and/or sound-accompanying gestures (dancing, gesticulating, etc.): hence the idea of motor-mimetic images of musical sound.

Aims
The main aims of this paper are to present the basis for such a motor-mimetic theory of the perception and cognition of musical sound, and to demonstrate how motor-mimetic images of musical sound can be actively exploited in various practical contexts. This will include indicating ecological schemata for auditory-motor links (source, mode of excitation, effort, gesture trajectory, etc.) as well as biomechanical and neurocognitive constraints on human movement (parsing, coarticulation, etc.) at work in sound production, as we assume that these elements condition the simulation or re-enactment of musical sound in our minds.

Main contribution
Firstly, this paper will give a brief overview of past research in favor of this motor-mimetic theory, including general embodiment-related research (Berthoz 1997), motor theory research (Liberman and Mattingly 1985), and recent research on auditory-motor links (Haueisen & Knösche 2001, Hickok et al. 2003). Secondly, there will be a brief presentation of our research on air-instrument performance (Godøy, Haga, and Jensenius 2006) and our ongoing research on sound-tracing. Thirdly, we will conclude with a tentative model for motor-mimetic perception and cognition of musical sound based on auditory-motor interactions.

Implications
Besides contributing to our basic understanding of musical imagery (e.g. Godøy and Jørgensen 2001), this motor-mimetic theory could have practical applications in performance, improvisation, composition, orchestration, arranging, etc. by enhancing mental images of musical sound, and could also be used in music education to stimulate awareness of sound features in the minds of novices as well as experts.

Key words: Gesture, Embodiment, Motor theory

[email protected]


34.3 Exact measures of musical structure for predicting memory for melodies

Daniel Müllensiefen

University of Hamburg, Department of Musicology, Germany

Background
Most empirical studies of musical and melodic memory do not employ means of exact quantification for the structure and features of the experimental stimuli. For many studies following the so-called recognition paradigm (e.g. Kauffman & Carlsen, 1989), this means that they cannot investigate the impact of minor changes in musical structure upon memory performance. On the other hand, for studies employing the experimental recall paradigm (e.g. Sloboda & Parker, 1985), empirical analysis is often limited to qualitative conclusions, because the musical data gathered experimentally is not analysed in quantitative terms.

Aims
Several methods and algorithms for the quantification of musical and melodic features, such as melodic similarity, accent structure and accent coherence, and melodic complexity, are explained, and their use in modelling melodic memory is discussed.

Method
The experiment used recall as its experimental paradigm, similar to the one employed by Sloboda & Parker (1985). 30 subjects participated, and 14 pop music melodies served as experimental stimuli (both as MIDI melodies and as excerpts from the actual songs). The recall data were transcribed to a symbolic format and their similarity to the original melodies was determined by algorithmic similarity measures that have proved to be effective (Müllensiefen & Frieler, 2004). The similarity values are predicted using linear and non-linear models.

Results
In linear models as well as in non-linear ones, the employed exact measures of musical structure prove to be the most important predictors. Next to variables of musical structure, musical experience is also important in explaining the experimental data.

Conclusions
The exact measurement of musical features seems to be very well suited to describing memory performance depending on characteristics of 'natural' musical stimuli. Employing measures of the kind explained here, cognitive models can be constructed that are defined more rigorously in empirical terms and that may be generalised to a broader range of musical material.

References
Kauffman, W.H. & Carlsen, J.C. (1989). Memory for intact music works: The importance of music expertise and retention interval. Psychomusicology, 8(1), 3-20.

Müllensiefen, D. & Frieler, K. (2004). Cognitive adequacy in the measurement of melodic similarity: Algorithmic vs. human judgments. Computing in Musicology, 13, 147-176.

Sloboda, J.A. & Parker, D.H.H. (1985). Immediate recall of melodies. In: Howell, P., Cross, I. & West, R. (Eds.), Musical structure and cognition. London: Academic Press, 143-167.

Key words: Melodic memory, Melody structure, Prediction models

[email protected]


34.4 An enculturation effect in music memory performance

Steven M. Demorest1, Steven J. Morrison1, Munir Beken2, Denise Jungbluth1

1University of Washington, USA
2Siena College, USA

Background
Researchers have begun to examine the role of formal training versus passive exposure in developing musical understanding. Most of these studies have been carried out within the Western tradition, but one factor that may strongly influence both types of music learning is enculturation, the informal process by which a person acquires the culture of a particular society from infancy.

Aims
We tested the cross-cultural musical understanding of experts and novices from two distinct musical cultures to explore the influence of enculturation and its interaction with formal training. The hypotheses were: 1) subjects' scores on a test of musical understanding will be significantly higher for their home culture; 2) experts and novices within each culture will differ in overall memory performance; 3) Western music will be culturally familiar to subjects from both countries.

Method
Musically trained (n=70) and untrained (n=80) subjects born in the United States (n=80) and Turkey (n=70) heard examples of Western classical, Turkish traditional, and Chinese traditional music in one of three counterbalanced orders. After the presentation of three 30-second examples of each culture's music, musical understanding was measured by having subjects complete a 12-item memory task comprising 6 shorter target excerpts chosen from the previously heard music and 6 foils taken from different sections of the same musical works.

Results
Both groups were significantly better at remembering the music of their home culture than the other two musics. There were no differences in test scores based on musical training. Turkish subjects' memory was significantly better for Western than for Chinese music, suggesting that exposure to Western music played a role in their responses.

Conclusions
Enculturation strongly influences subjects' musical understanding, regardless of their level of formal training. Subjects' superior memory performance for their native musical culture suggests that deeper levels of musical understanding cannot easily cross cultural boundaries. The lack of an expert-novice difference supports research demonstrating that many musical tasks can be performed equally well without formal training. The presence of a second-culture response to Western music on the part of the Turkish subjects supports the continued plasticity of schema development beyond childhood.

Key words: Acculturation, Music memory, Training

[email protected]


34.5 Application of a new method for consistency assessment and grouping of listeners' real-time identification of musical phrase parts

Neta Spiro1, Beata Beigman Klebanov2

1Cognitive Science Center, University of Amsterdam, The Netherlands, and Centre for Music and Science, Faculty of Music, University of Cambridge, UK
2School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel

Background
Studies of musical phrase perception have investigated listeners' responses (including Palmer and Krumhansl 1987 and Deliège 1998). However, there is a need for a method of analysis of real-time phrase responses. The computational method described here was first developed and used to analyse responses collected from listeners with a range of levels of musical experience. They were asked to listen to pieces from the western classical repertoire and identify phrase parts (including phrase starts) by key-pressing, repeating each task three times. The statistical part of this method was adapted from content analysis in natural language (such as Krippendorff 1980 and Carletta 1996).

Aims
To develop a computational method to analyse self- and inter-listener consistency of real-time identification of phrase parts, to identify groups of listeners with similar patterns of responses, and to relate the results to known musical reasons.

Method
1. Calculating individual listeners' self-consistency and concluding their interpretations (interpretations include all positions chosen in two or more listenings). 2. Merging units whose boundaries have multiple close-to-note-boundary key-presses, leading to new time-lines. 3. Using the new time-lines with merged units to re-calculate self-consistency and obtain individual listeners' interpretations. 4. Calculating inter-listener consistency and building groups of similar interpretations.

Results
1. Many listeners' responses were self-consistent and interpretations could be concluded. 2. Frequent close-to-note-boundary key-presses allowed unit merging. 3. In many cases unit merging improved self-consistency: interpretations now included presses in the same area that were missed in the unmerged data. In other cases positions were chosen only once and far from any other; these were still excluded from the interpretation, and for these, self-consistency did not improve. 4. Inter-listener consistency varied for different pieces.
For some it was high: there were few groups of alternative interpretations, with small differences between groups. For others it was lower: there were more groups of interpretations.

Conclusions
This new method provides a way of assessing self- and inter-listener consistency, of deciding on an interpretation for each listener, and of using these to build groups of interpretations. All this leads to the quantitative identification of areas of phrase parts perceived by groups of listeners, which were found to be in accordance with identified musical features.

References
Carletta, J. (1996). Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22, 249-254.


Deliège, I. (1998). Wagner "Alte Weise": Une approche perceptive. Musicæ Scientiæ (Special Issue), 63-90.

Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills, CA: Sage Publications.

Palmer, C. & Krumhansl, C.L. (1987). Independent temporal and pitch structures in determination of musical phrases. Journal of Experimental Psychology: Human Perception and Performance, 13(1), 116-126.
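The kappa statistic cited in the Background (Carletta, 1996) is not defined in the abstract; for orientation, Cohen's kappa for two raters compares observed agreement p_o with chance agreement p_e as kappa = (p_o - p_e) / (1 - p_e). A minimal sketch (the data and names below are illustrative, not taken from the study):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(labels_a)
    # Observed agreement: proportion of items labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two listeners marking 8 note positions as phrase boundary (1) or not (0):
a = [1, 0, 0, 1, 0, 1, 0, 0]
b = [1, 0, 0, 1, 0, 0, 0, 1]
print(round(cohens_kappa(a, b), 3))  # 0.467: moderate agreement
```

Note that the study's own statistics operate on merged time-line units rather than raw note positions, so this is only the underlying agreement measure, not the full method.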

Key words: Musical phrase, New statistical method, Time-line units

[email protected]

34.6 Training a classifier to detect instantaneous musical cognitive states

Rafael Ramirez, Montserrat Puiggross

Music Technology Group, Pompeu Fabra University, Spain

The study of human brain functions has increased dramatically in recent years, largely due to the advent of Functional Magnetic Resonance Imaging (fMRI). While fMRI has been extensively used to test hypotheses regarding the location of activation for different brain functions, the problem of automatically decoding cognitive states has been little explored. The study of this problem is important because it can provide a tool for detecting cognitive processes, useful for diagnosing difficulties in performing a task.

In this paper we aim to detect, in a musical context, the instantaneous cognitive state of a person based on her Functional Magnetic Resonance Imaging data. We describe a machine learning approach to the problem of discriminating a cognitive state produced by listening to melodic tonal stimuli from (1) a cognitive state produced by listening to speech stimuli, and (2) a cognitive state produced by mentally rehearsing a tonal melody.

We apply an ensemble classifier which is trained to predict a subject's cognitive state given her observed fMRI data. We associate a class with each of the cognitive states of interest and, given a subject's fMRI data observed at time t, the classifier predicts one of the classes. We trained the classifier by providing examples consisting of fMRI observations (restricted to selected brain areas) along with the known cognitive state of the subject. We selected the brain areas based on their activity during the different stimuli.

The classifier accuracy is measured as the percentage of correctly classified instances. We obtained accuracies of over 85% for both the melody listening vs. speech listening task and the listening vs. rehearsing task. We obtained these numbers using standard leave-one-out validation, in which each fMRI image in the training set was held out in turn as a test example while training on the remaining data. We removed from the training data any image close (in time) to the test example in order to avoid training on similar images.
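The abstract does not specify the ensemble learner; the sketch below illustrates only the validation scheme described, leave-one-out with temporally adjacent images excluded, substituting a simple nearest-centroid classifier and toy data for the authors' ensemble and fMRI features (all names and numbers are assumptions):

```python
# Leave-one-out validation with a temporal exclusion window, using a
# nearest-centroid stand-in for the ensemble classifier in the abstract.

def nearest_centroid_predict(train_X, train_y, x):
    """Assign x to the class whose mean feature vector is closest."""
    centroids = {}
    for label in set(train_y):
        rows = [v for v, y in zip(train_X, train_y) if y == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(centroids,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(x, centroids[c])))

def leave_one_out_accuracy(X, y, exclusion=1):
    """Hold out each image in turn; also drop images within
    `exclusion` time steps of the test image from the training set."""
    correct = 0
    for i in range(len(X)):
        train = [(X[j], y[j]) for j in range(len(X))
                 if abs(j - i) > exclusion]   # temporal exclusion window
        tx, ty = zip(*train)
        correct += nearest_centroid_predict(list(tx), list(ty), X[i]) == y[i]
    return correct / len(X)

# Toy "fMRI" features for two well-separated, temporally interleaved states:
X = [[0.1, 0.2], [0.9, 1.0], [0.2, 0.1], [1.0, 0.9], [0.15, 0.15], [0.95, 0.95]]
y = ['listen', 'rehearse', 'listen', 'rehearse', 'listen', 'rehearse']
print(leave_one_out_accuracy(X, y))
```

The exclusion window mirrors the precaution mentioned in the abstract: temporally adjacent fMRI images are highly correlated, so leaving them in the training fold would inflate the accuracy estimate.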

It is possible to train classifiers to distinguish between melody listening and rehearsing cognitive states, as well as between melody listening and speech listening cognitive states. This result can lead to the development of tools for detecting basic musical cognitive states, which are the components of complex cognitive processes. Thus, this work can help to understand such complex cognitive processes and be useful for diagnosing difficulties in performing a particular task.


Wednesday, August 23rd 2006

Key words: Cognitive state, fMRI, Machine learning

[email protected]


35 Symposium: Music and imagery: Stimulation through simulation

Convenor: Emery Schubert

Imagery in music can refer to a wide spectrum of definitions and phenomena, from something which can be directly linked to motor actions (such as subvocalising when imagining music) and getting tunes stuck in the head, to more abstract definitions and phenomena such as the cognitive component of the ideo-motor principle – a kind of imagery which occurs almost during the performance process but is not necessarily consciously accessible. The papers in this symposium take examples from different portions of this spectrum to provide evidence that imagery is a necessary and sufficient component of the musical experience, despite the fact that the mental auditory image can occur independently of the acoustic signal.

Imagery in the form of mental simulation during performance seems to be key in the execution of the professional performer's musical activity. Also discussed is the possibility that the emotional experience associated with music may be experienced through this mental simulation process alone. The so-called "earworm" phenomenon provides evidence of imagery gone wrong – with the listener unable to remove a repeating portion of imagined music from conscious attention. While there exists neurological evidence of localisation shifts during listening and imagining (or filling in the gaps), the symposium includes discussion of the cognitive processing which might be involved during a "filling in the missing music" task.

The implication of this alleged essential nature of imagery is that the well-worn notion of 'music existing only in the presence of the listening brain' is taken one step further: the brain is continually trying to simulate musical experiences during performance and perception. But the brain is also able to generate musical experiences in the absence of acoustic performance or perception. If this is the case, further research is required to determine both qualitative and quantitative differences between simulation-only experiences (e.g. practising without an instrument, experiencing emotion without music playing...) and the effect of perceiving music (whether as a listener, as a performer making fine motor adjustments to gel with an ensemble, or listening to one's own performance output).




35.1 Studying musical imagery: Context and intentionality

Freya Bailes

SCRG, University of Canberra, Australia

Background
The phenomenon of a conscious "inner hearing" of music, when this music is not actually present, is known as musical imagery. While it might play a functional role in musical activities such as composition, orchestration, and even performance, it can also occur unintentionally in everyday life, a phenomenon often called having a "tune on the brain" or "earworms". Of interest to music cognition are when, where, why, what and how particular music is imaged. Given the methodological difficulties of imagery research, there is a paucity of evidence. However, the research that exists has begun to suggest fruitful questions to address and useful methods for doing so.

Aims
This paper reviews converging evidence of musical imagery experience from a sampling study, interviews, and laboratory experiments. The particular focus is on the involuntary experience of having a "tune on the brain", with respect to repeated exposure to music through perception, and what it is to hear "distinctive" music.

Main Contribution
Specific findings from a pilot study that sampled the everyday occurrence of having a tune on the brain are presented (Bailes, in press) as an introduction to some of the probable factors linked to the occurrence of an involuntary musical image. Analyses highlight the influence of recent exposure to particular music on what is subsequently imaged. Following on from this, an experimental study is described which measured the point of recognition (POR) of 120 melodies by 32 participants, with the goal of predicting POR as a function of different subjective measures of familiarity with the stimuli and of melodic distinctiveness (derived from statistical analyses of 16,069 Western themes and melodies).
Results point to the complementary roles of perceptual exposure and memory when participants intentionally generate a mental image of music.

Implications
Although musical imagery is an intangible phenomenon, this paper argues that by triangulating methods and examining converging evidence it is possible to discern commonalities worthy of further study, such as perceptual exposure and distinctive melodic structure. Importantly though, this overview also underlines the changing nature of imagery experience depending on the contextual factors of the intention to image music and the musical task.

Key words: Musical imagery, Converging evidence, Context

[email protected]

35.2 Emotion in real and imagined music: Same or different?

Emery Schubert1, Paul Evans2, John Rink3

1School of Music and Music Education, University of New South Wales, Australia
2School of Music, The University of Illinois at Urbana-Champaign, USA



3School of Music, Royal Holloway, University of London

This paper reports two explorations of the emotional effect of imagined music. The first exploration employed a professional pianist who listened to a recording of his own (already recorded) performance of a piece while at the same time registering the emotion expressed on a two-dimensional emotion space (2DES). The process was repeated with the same pianist imagining the same performance of the piece. While the shape of the response in the two dimensions (arousal and valence) was similar across real and imagined conditions, the imagined condition always produced a temporally stretched response. Cognitive load explained this time warping.

Interference was removed in a second exploration by asking participants to make post-performance, rather than continuous, responses to imagined or real performances of a familiar piece. This was done with sixteen undergraduate music students. They were asked to make emotional ratings (including the valence and arousal used in the 2DES) in response to two highly familiar pieces which were either played to the participants over loudspeakers or imagined in the absence of sound (real and imagined orders were counterbalanced). Repeated measures ANOVA revealed no significant difference in emotional responses between imagined and real conditions for either piece.

These explorations suggest that emotional responses to music can be reliably produced in the absence of music, although continuous response collection produces some interference, and in any case the music must be highly familiar. The study has important implications for research on emotion in music, and for mood induction and regulation procedures.

Key words: Imagination, Emotion, Continuous response

[email protected]

35.3 Anticipatory auditory images and temporal precision in music-like performance

Peter Keller1, Iring Koch2

1Max Planck Institute for Human Cognitive and Brain Sciences, Germany
2RWTH Aachen University, Germany

Background
Experimental tasks that manipulate the compatibility between brief manual response sequences and their auditory effects have demonstrated that musicians anticipate upcoming sounds during action planning. Unlike most instances of musical performance, however, these tasks are usually conducted under "speeded" conditions where movements are carried out as quickly as possible. In such speeded tasks, action planning (indexed by reaction times to "go" signals), but not action execution (indexed by movement duration), is typically faster when responses and their effects are compatible than when they are incompatible (in terms of spatial height and pitch height, for example).

Aims
We tested whether response-effect compatibility affects the execution of music-like sequential actions that require temporal regularity rather than rapidity. We were specifically interested in whether response-effect compatibility affects timing accuracy for the first (pre-auditory feedback) movement of each sequence, because this would provide evidence that action-effect anticipation plays a role in the control of temporally precise movements.

Method
Musicians responded to each of four color-patch stimuli by producing a unique sequence of three taps on three vertically aligned keys. The stimulus in each trial flashed three times with a 600 ms inter-onset interval, and participants tapped their responses as regularly as possible at this tempo after a further three flashes of a neutral stimulus. Each tap triggered a tone. Response-effect mapping was either compatible (taps on the top, middle, and bottom keys triggered high, medium, and low pitched tones, respectively) or incompatible (the key-to-tone mapping was scrambled or reversed).

Results
Tap timing was more accurate with compatible than with incompatible mappings, both for taps produced before (tap 1) and after (taps 2 and 3) the onset of auditory feedback. Thus, the observed influence of response-effect compatibility on action execution was not due exclusively to actual auditory feedback.

Conclusions
The anticipation of auditory action-effects appears to play a role in planning the dynamics of temporally precise movements. This implies that in musical contexts, the degree to which movement trajectories are optimal (i.e., conducive to producing a desired timing pattern) may be affected by the performer's ability to imagine forthcoming sounds.
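The temporal-regularity measure at the heart of this paradigm (taps paced to a 600 ms inter-onset interval) can be quantified, for example, as each tap's deviation from an isochronous 600 ms grid; a minimal sketch with invented tap times, not the study's data:

```python
# Illustrative scoring of tap timing against a 600 ms target inter-onset
# interval. The tap onset times below are invented example values, not
# data from the study.

def timing_errors_ms(tap_times_ms, target_ioi_ms=600.0):
    """Signed deviation of each tap from an isochronous grid anchored at
    the first tap: positive = late, negative = early."""
    t0 = tap_times_ms[0]
    return [t - (t0 + i * target_ioi_ms) for i, t in enumerate(tap_times_ms)]

def mean_absolute_error_ms(tap_times_ms, target_ioi_ms=600.0):
    """Average absolute timing deviation, one summary of timing accuracy."""
    errors = timing_errors_ms(tap_times_ms, target_ioi_ms)
    return sum(abs(e) for e in errors) / len(errors)

taps = [0.0, 612.0, 1195.0, 1808.0]   # hypothetical tap onsets in ms
print(timing_errors_ms(taps))          # → [0.0, 12.0, -5.0, 8.0]
print(mean_absolute_error_ms(taps))    # → 6.25
```

Comparing such per-tap deviations across compatible and incompatible mappings, separately for tap 1 versus taps 2 and 3, mirrors the contrast the abstract reports.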

Key words: Timing control, Action planning, Auditory imagery

[email protected]


36 Neuroscience III

36.1 Can amusics perceive harmony?

John Sloboda1, Karen Wise1, Isabelle Peretz2

1Keele University, UK
2Université de Montréal, Canada

Background
Dense perceptual deficits in melodic processing characterise a syndrome recently defined as "congenital amusia" (Peretz et al., 2003). Such a syndrome may underlie some cases of what is colloquially known as "tone deafness". Congenital amusics show normal perceptual and cognitive functioning outside the restricted domain of music. A psychometric measure, the "Montreal Battery for the Evaluation of Amusia" (MBEA), contains six subtests, which probe aspects of melodic and rhythmic perception and memory. The tasks are trivially easy for most individuals, regardless of musical training. Amusics, however, score at or near chance. The original MBEA does not contain tests for emotional or harmonic perception. In the context of a broader study, aimed at understanding why many self-labelling "tone-deaf" individuals score normally on the original MBEA, Sloboda, Wise and Peretz (2005) showed that amusics score similarly to normals (and highly) on a test of emotional discrimination for melodies. This suggests that not all components of musical perception and cognition are equally affected by congenital or developmental musical deficits. Furthermore, different combinations of them may be present or absent within the sub-group of individuals self-defining as tone-deaf.

Aims
The present research aims to extend the MBEA, to facilitate fuller explanation of the range of deficits which appear in the general population. We report the development and norming of a new sub-test for harmonic perception, based on the same materials that are used in the original MBEA sub-tests.

Method
Each of the thirty melodies used in the MBEA has been harmonised with three different versions of its final cadence: conventional, mildly unconventional, and highly unconventional. These stimuli are currently being piloted with non-musicians.
The final test will require a "same-different" judgement between pairs of items that are the same until the final cadence, with success depending on the ability to discriminate conventional from unconventional harmonisation. The test will be normed with the general population, and will be used to assess the responses of normals, congenital amusics and self-declared tone-deaf participants.

Results
Results will be presented for comparisons between normals, congenital amusics, and self-defined "tone deaf" participants who score highly on the MBEA.

Conclusions
We will discuss the implications of the results for our understanding of musical deficits, and possible interventions to enhance musical capacity.

Key words: Amusia, Perception, Harmony

[email protected]

36.2 Sounds of intent: Initial mapping of musical behaviours and development in profoundly disabled children

Graham Welch1, Adam Ockelford2, Sally-Anne Zimmermann2, Fern-Chantele Carter1, Evangelos Himonides1

1School of Arts and Humanities, Institute of Education, University of London, UK
2Royal National Institute of the Blind, UK

Background
A survey of special schools in England and Wales (Welch, Ockelford & Zimmermann, 2001) found that, although musical provision for children with Profound and Multiple Learning Difficulties (PMLD) varied in quality, many children were seen to have a strong interest in musical activities, and that music potentially offered significant extramusical benefits, including heightened communication skills, more attentive behaviour and enhanced social development (Ockelford et al., 2002). Nevertheless, with regard to musical behaviours in children with PMLD, no research data were available on attainment or progress. Consequently, no evidence-based music curriculum guidance existed for these children. Funding was secured from the UK Government's Qualifications and Curriculum Authority (QCA) (2004) and the Esmée Fairbairn Foundation (2005-2007) to undertake research that addressed these issues.

Aims
The aim of the research project has been to develop, refine and evaluate an original model of musical behaviour and development for children with PMLD. Subsequently, this model is being used to inform the construction and implementation of effective intervention strategies through music in special schools and to enhance the capacity of the mainstream sector to include children with PMLD in the early years.

Method
A series of longitudinal case studies is ongoing (currently numbering 20+) with participants from 10 special schools in England. Individual videoed classroom-based observations, supplemented by interview data and computer-based observational assessments, are being used iteratively to design, critique and develop a robust model of musical development in children with PMLD.

Results
Having designed and evaluated several models of how PMLD musical behaviour might be mapped, a coherent predictive model is emerging from patterns in the observed case study data. Musical development in the domain of PMLD embraces different forms of action, reaction and interaction which have some correspondence to those which might be expected of "normally developing" children in the first year of life.

Conclusions
Preliminary results indicate that children with PMLD exhibit a range of musical behaviours that are valued by the children themselves and their carers, and are sometimes capable of significant development over time in appropriately supportive environments.

Key words: Musical development, Complex needs, Assessment and support

[email protected]

36.3 Imaging the neurocognitive components of absolute pitch

Sarah J. Wilson1, Dean S. Lusher1, Catherine Y. Wan1, David C. Reutens2

1School of Behavioural Science, The University of Melbourne, Australia
2Monash Institute for Neurological Diseases, Melbourne, Australia

Background
Previous research has suggested that absolute pitch (AP) engages two cognitive processes: (1) long-term absolute pitch memory, and (2) conditional associative memory for pitch labelling. Neuroimaging results have linked these processes to activation of right temporofrontal regions for pitch processing, and of left dorsolateral prefrontal cortex (DLPFC) for retrieving verbal-pitch associations.

Aims
We aimed to directly examine differences in functional activation in musicians with varying degrees of AP ability during a pitch naming and a tonal classification task.

Method
Thirty-six highly trained musicians underwent positron emission tomography (PET) following the bolus injection of the blood flow tracer [15O]H2O. Three replications of three task conditions were performed: (1) Baseline: listening to pairs of noise bursts and responding with the words "C natural"; (2) Pitch naming: listening to an arpeggiated chord of octaves followed by a tone of the same pitch (target) and responding with its musical note name; and (3) Tonal classification: listening to an arpeggiated dominant chord followed by the tonic or a tone one semitone higher and classifying these as "tonal" or "atonal" respectively. High-resolution T1-weighted magnetic resonance imaging (MRI) scans were also acquired in all participants.

Results
For Pitch naming - Baseline, musicians scoring >90% accuracy for pitch naming showed activation of the right middle frontal gyrus (pcorrected < 0.001; x = 24, y = 54, z = 6; BA 10) compared to musicians without AP (accuracy <20%). Activation was also observed in the left insula (pcorrected < 0.001; x = -36, y = 14, z = -4) and the right cerebellum (pcorrected < 0.05; x = 2, y = -78, z = -18). Across all musicians, Tonal classification - Baseline showed activation in the left superior temporal gyrus (pcorrected = 0.000; x = -60, y = -26, z = 2; BA 22) and the right cerebellum (pcorrected < 0.05; x = 4, y = -76, z = -24).
Activation of dorsolateral prefrontal cortex was not observed in either task.



Conclusions
These data show that the right middle frontal gyrus is specifically activated during pitch naming in AP possessors, and suggest differences in the organisation of musical function between musicians with and without AP.

Key words: Absolute pitch, Tonal classification, Functional neuroimaging

[email protected]

36.4 Music perception and production in a severe case of congenital amusia

Elena Rusconi1, Barbara Tillmann2, Carlo Umiltà3, Brian Butterworth1, Isabelle Peretz4

1Institute of Cognitive Neuroscience, University College London, London, UK
2CNRS-UMR, Lyon, France
3Department of General Psychology, University of Padua, Padua, Italy
4Department of Psychology, University of Montreal, Montreal, Québec, Canada

Background
Congenital amusia is a musical disability that cannot be explained by acquired brain lesions, hearing loss, generalized cognitive deficits, socioaffective disturbance or lack of stimulation during development. Peretz et al. (2002; Neuron, 33, 185-191) characterized congenital amusia as a disorder of fine-grained pitch discrimination.

Aims
We are currently investigating a severe case of congenital amusia, CU (male, 67 years old, professor of neuropsychology), to assess the extent and specificity of his perceptual deficit. His production abilities were also assessed, in order to better specify the functional causes and consequences of his amusia.

Method
The basic musical abilities of CU and of his two sisters (MA and ML) were assessed with the Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003; Annals of the New York Academy of Sciences, 999: 58-75). For perception, a series of psychophysical tests was employed to assess fine-grained pitch and time discrimination abilities (Hyde & Peretz, 2004; Psychological Science, 15, 356-360). Production was tested with popular songs, by providing, in successive blocks, a title, a sample of sung lyrics or a sung melody to be repeated. Production was also tested with pairs of notes: CU was required to repeat targets sung by a male voice in his vocal range. Finally, his perception of speech intonation was compared to non-speech analogues (Patel et al., 1998; Brain and Language, 61, 123-144).

Results
The results of the MBEA confirmed CU's disorder and indicated that his two sisters were musically normal. The results of the psychophysical tests showed that CU is impaired in the detection of pitch and time changes. Yet his speech intonation processing abilities are spared. Finally, CU's performance in production tasks was severely disturbed.

Conclusions
Here we report a new case of congenital amusia, CU. His musical disorders cannot be explained by a lack of environmental stimulation, since his two sisters, who grew up in the same environment, are musically intact. The results are also consistent with Peretz et al.'s hypothesis about the role of fine-grained pitch discrimination in music processing. In addition, they suggest that an even more severe impairment shows up in congenital amusia when musical abilities are tested in repetition and production tasks. Whether a defective perceptual system is sufficient to cause his production impairments remains to be determined.

Key words: Congenital amusia, Pitch discrimination

[email protected]

36.5 Train the brain

Olin Parker

Hugh Hodgson School of Music, The University of Georgia, Athens, USA

Background
The acquisition of musical skills and expertise has been investigated in many domains. However, there are few reports in which music educators and music psychologists have, in their classrooms, systematically sought to musically develop the mind a priori.

Aims
This study primarily employed methods of eupraxia (mental rehearsing without sounds or physical motions); the physical responses to the musical stimuli came second. Is concentration on "training" the brain more effective than "training" for the physical skills?

Method
Employing the premise of eupraxia procedures, 12 university music majors participated over a 15-week period. The study was designed to investigate each student's ability to tune a C major scale and a C harmonic minor scale. Each student performed this task in weeks 1, 7, and 15. Participants utilized the Johnson Intonation Trainer, a Stroboconn, and appropriate translation tables to obtain results.

Results
Results were recorded as deviations in cents from the equal-temperament original pitch, then converted to deviations in frequencies. Statistical comparisons were made between the "improvement" made by the group on tuning the C major scale and the improvement made on their tuning of the C harmonic minor scale. Pragmatic observations of the group's scores over the 15 weeks suggested more than the expected improvements in pitch acuity skills. The results of the t-test revealed no significant difference in the tunings based on modality.

Conclusions
Eupraxia, as instant, intensely focused practice prior to action, enables the highest levels of performance. Neuroscientists utilizing brain imaging techniques (MRI, PET, SQUID, EEG, and ERP) report that mental rehearsal activates the brain functions which result in musical responses. Thus, the foregoing supports the premise of a pedagogical eupraxia approach of "training the brain."
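The conversion from cent deviations to frequency deviations mentioned in the Results follows the standard equal-temperament relation f = f_ref · 2^(cents/1200). A minimal sketch of that arithmetic (the 440 Hz reference and the cent values are illustrative, not the study's data):

```python
# Convert tuning deviations in cents to frequency deviations in Hz.
# The 440 Hz reference and the example cent values are illustrative,
# not data from the study.

def cents_to_frequency(reference_hz: float, cents: float) -> float:
    """Frequency of a pitch lying `cents` away from `reference_hz`."""
    return reference_hz * 2.0 ** (cents / 1200.0)

def deviation_hz(reference_hz: float, cents: float) -> float:
    """Deviation in Hz corresponding to a deviation in cents from the reference."""
    return cents_to_frequency(reference_hz, cents) - reference_hz

a4 = 440.0                              # equal-temperament A4
print(cents_to_frequency(a4, 1200.0))   # → 880.0 (1200 cents = one octave)
for cents in (-10.0, 0.0, 10.0):
    print(round(deviation_hz(a4, cents), 3))
```

Note the mapping is exponential, so a fixed cent deviation corresponds to a larger Hz deviation at higher reference pitches.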

Key words: Brain functioning, Eupraxia, Neurosciences

[email protected]



36.6 Characteristics of harmonic patterns and their contribution to the stylistic ideal: An ERP study

Ina Michelson, Dalia Cohen

Student in the Hebrew University of Jerusalem, Israel

In the present paper we test brainwave responses (event-related brain potentials [ERPs]) to various harmonic patterns. This is part of a larger project that also includes verbal experiments.

Purpose
To define and formulate the factors that contribute to the categorization of the patterns according to stylistic ideals. This has not been done in previous ERP studies.

Assumptions
1. Among the most important characteristics of stylistic ideals (which vary from era to era) are types of directionality (clear/unclear), types of expectations, and the ways in which the expectations are or are not fulfilled.
2. Harmony is the most influential factor in shaping types of directionality. The main harmonic characteristics contributing to clear directionality are "strong" harmonic progressions and prototype harmonic degrees.
3. We can distinguish among types of deviations on the basis of harmonic characteristics.
4. The relevant characteristics of the chords are their harmonic degree, the extent to which they belong to a scale system, and their timbre (M/m). We mostly neutralized other possible influences, such as melody, rhythm, timbre of the notes, and texture. Most of the chords are in root position.

Research hypothesis
The characteristics of the chords and combinations thereof are reflected in ERP responses; outside factors also influence the responses.

The experiment
The material studied: nine harmonic patterns, each consisting of four chords. The patterns represent types of directionality through various harmonic progressions, degrees, and deviations. The patterns were played in pairs in the timbre of a piano: the first pattern served as the reference (R); the second was for comparison (C). Half of the pairs consisted of identical patterns (R = C), while the other half consisted of different patterns (R ≠ C); thus it was possible to test the effects of repetition versus change. The subjects were musicians: eight men and eight women. The responses to chords were obtained from nine electrodes in different locations. The task was to indicate whether or not the patterns in the pair were identical.

Results
The results confirmed the importance of the aforementioned characteristics of harmonic patterns ("strong"/"weak" harmonic progressions, etc.) and indicated that certain external factors irrelevant to harmony (e.g., gender) exert an impact.

Key words: ERP, Harmony, Directionality

[email protected]


37 Symposium: European Teachers and Music Education - (EuroTEAM)

Convenor: Nigel Marshall

The European Teacher Education and Music (EuroTEAM) network has been created to integrate research on teacher education in music within European countries. Music education provides a means by which European cultural heritage can be celebrated, and cultural understanding and integration can be fostered at curricular level through daily musical engagement by pupils.

In all partner countries there are a number of varied and ongoing projects exploring music teachers' musical background, the training process of student music teachers, the sources of influence on their career, and factors which effect changes in values and opinions during the transition from teacher education to the world of work.

The team's work is guided by two main concepts, namely professionalisation and identity. The concept of professionalisation embraces different aspects of the process of learning to be a teacher, developing professional competence, and developing a view of one's role, position and status within the teacher community. An essential problem is the content of the education itself, i.e. the subject-oriented pedagogical profile of the education in a given type of institution (e.g. university, versus academy of music, versus teacher education college).

In conjunction with this, the concept of identity concerns both the development of the role of teacher within a community of teachers and musicians, and the identity and core of the educational content (e.g. didactical theories in music). This is complemented by a psychological approach in which the problems of music teacher education are explored through teachers' developing identities as teachers and musicians, and how these identities develop, change and are sustained throughout their careers as music teachers.

During the symposium, researchers from Gothenburg, Sweden; Bologna, Italy; London, UK; Vienna, Austria; and Warsaw, Poland will present results from ongoing research projects being carried out within and between each country. The seminar will end with a discussion of comparative approaches in research. Moreover, the tension between formal and informal approaches in music education will be elaborated.




37.1 Music teachers as researchers - a meta-analysis of Scandinavian research on music education

Bengt Olsson

Musikhögskolan, Göteborgs Universitet, Göteborg, Sweden

One interesting aspect of processes of professionalisation is the development of research carried out by music teachers. In Scandinavia this kind of research is mostly connected to academies of music and conservatoires, and the reason behind this is the aim of creating a research-oriented teacher education in music. Music teachers as researchers may strengthen the professional role of music teachers as well as contribute to key issues in teacher education. During the last ten years about 60 doctoral theses have been completed, and a majority of the researchers have been music teachers. A meta-analysis of these 60 doctoral theses reveals some clear results. Nearly 80% of the studies have a didactical approach, i.e. the studies mainly embrace teaching methods, learning strategies, teacher education and different contexts of teaching and learning. Moreover, there is a strong dominance of a qualitative approach, mainly based on interviews and observations.

A tentative discussion of the results will be presented. It seems that certain norms are present concerning which key issues are elaborated in this kind of research. Pupils' everyday musical practice and informal modes of teaching and learning are treated as the main objectives for improving music education in schools. The research that is carried out is used to legitimise these kinds of values. Finally, key issues among professionals will be discussed.

[email protected]

37.2 From the training to the job: The beginning years as a music teacher

Noraldine Bailer1, Nigel Marshall2

1Universität für Musik und darstellende Kunst Wien, Vienna, Austria
2Roehampton University, London, UK

Teacher training for music education in Austria is structured as a two-phase programme of study: the first phase consists of theoretical study of music education at the university, followed by one year of practical teacher training in a school. After the successful completion of both phases, the young teacher is officially entitled to give lessons in high schools. A research project, completed last year, focuses on the first years as a music teacher, which include the year of practical teacher training as well as the first active years on the job. We investigated the process of learning and particular aspects of teaching: Which experiences are important for the professional development of young music teachers? How do they cope with the tasks of acquiring and transmitting knowledge? Which initial problems do the teachers complain about?

In the specialised literature on music education, settling into vocational life is often describedas a “practice shock”: The well sheltered environment during the studying years is now replacedby the working routine within the delimited school system. The rigid system only offers restricted


room and freedom, which in turn implies tremendous pressure to adapt. In my presentation I’ll focus on the specific problem of “practice shock”: Which negative experiences do the teachers subsume under the term practice shock? Does the process of changing roles from student to teacher have an influence on the practice shock? How do they cope with this situation? Finally, I’ll discuss those aspects of their job and their daily work which they enjoy and which cause job satisfaction.

Noraldine Bailer - Vienna

The issue of the changing roles and the evolving “music teacher identity” which emerges when moving from student to music teacher has been the subject of a parallel study in England. The research has been carried out on a population of music teachers during their first year of teaching and has used an instrument identical to the one developed in Vienna by our Austrian partners. The main focus of the research has been on changing teacher attitudes, the need to balance the roles of musician and music teacher, and the need to maintain the important identity of being a musician. A number of findings from the parallel study will be presented and some further links drawn to the research of other partners.

[email protected]

37.3 University students, music teachers and social representations of music

Anna Rita Addessi1, Felix Carugati2, B. Santarcangelo2, Mario Baroni1

1University of Bologna, Department of Music and Performing Art, Italy
2University of Bologna, Department of Education, Italy

Background
This paper deals with a research project currently being undertaken on the training of university students studying to become music teachers. The general hypothesis of the project is that “musical knowledge” (Olsson 1997, 2002) can be investigated as a social and psychological construction as described by the theory of Social Representations (Moscovici 1981; Mugny & Carugati 1989), as well as the social values of music (Baroni 1993; Bourdieu 1983) affecting music education and teaching practice.

Aims
According to this perspective, musical knowledge could develop at the crossroads between different social representations of music. The main aim of this research project is to study the impact of the social representations of music on students studying to become music teachers.

Method
A questionnaire was submitted to university students studying to become general teachers in kindergarten and primary school. The students were asked to complete some sentences and answer some questions (e.g. In your opinion, is child musicality different from that of the adult? In your opinion, does the musical child exist? Etc.).

Results
We classified the answers into different categories. We are analysing the categories by means of specific software in order to observe the correlations among different conceptions, and between these conceptions and the different groups of participants. The conceptions concerning two topics will


be shown: “Musical child” (natural child, gifted child, educated child, able child, creative child, enjoying child) and “Music teacher” (basic, professional and general competencies).

Conclusions
We believe that by making explicit their own social representations of music, the students will gain a better awareness of their future professional role and will engage more deeply in their university training. The expected impact of the results will be a contribution to the elaboration of the university curriculum for music teachers.

[email protected]

37.4 Music education and music teacher training in Poland: The paradox

Malgorzata Chmurzynska

Fryderyk Chopin Academy of Music, Warsaw, Poland

Problems which have trapped Polish musical culture for years, such as the low standard of general music education, the low social and professional status of music teachers, and the lack of interest among young people in the music offered by the school curriculum, are shared by other EU member countries, in spite of quite different social and cultural conditions. The network of specialised music schools which has existed in Poland for sixty years, and which grants free musical training, has failed to bring growing numbers of people into the world of high art music; in fact, quite the opposite is now found to be true, in that the gap between professional and amateur musicians is widening.

Curricula in all types of schools are focused on high art music, particularly from the 17th through to the 19th century. However, the attitude of the young toward this kind of music is found to be very negative, and musical competence (in all genres and styles, including pop music) is very low. Simultaneously, young people declare their need for music and are ready to spend a significant amount of time listening to it.

Music teacher training in Poland lags behind cultural and social transformations; thus the training community is challenged by an urgent need to formulate a new teacher model and develop new ways of training music teachers.

Our research has focused on the professional development of music teachers, their attitudes towards the aims of music education, important skills and musical and pedagogical competences, focusing on self-efficacy as a factor in effective teaching.

[email protected]


38 Perception III

38.1 Effect of carrier on the pitch of long-duration vibrato tones

Rachel van Besouw, David Howard

Department of Electronics, University of York, Heslington, York, UK

Background

Previous studies on the pitch perceived for long-duration vibrato tones, where the modulator is a symmetric function (e.g. sine or triangular wave), have shown that the pitch perceived is generally the mean. For sinusoidal carriers modulated using a triangular wave, it has been suggested that the pitch perceived is closer to the geometric mean than to the arithmetic mean.

Aims

A pilot study was undertaken with the aim of exploring the effect of the carrier on the perceived pitch and pitch strength of vibrato tones. The following carriers were investigated: a sinusoid, an impulse with four resolved harmonics of equal amplitude (including f0), an impulse with twelve harmonics, and an impulse containing the fundamental plus unresolved harmonics nine to twelve (all of equal amplitude).

Method

Pitch matches were obtained between modulated and unmodulated tones using a two-interval, two-alternative adaptive procedure. The modulated tone was presented first, followed by one second of silence and then the unmodulated tone. Subjects indicated which of the tones was higher in pitch. Three carrier frequencies (fc = ERBN no. 4 (123.2 Hz), no. 10 (442.3 Hz) and no. 16 (1051.1 Hz)) were investigated for each of the four carriers, resulting in twelve experimental conditions. In addition, subjects were required to make unmodulated tone matches for all carriers at fc = ERBN no. 10. Tone length was one second, including 40 ms raised-cosine onset and offset ramps. The modulator in all cases was a sinusoid with an initial phase of 0 degrees, a rate of 6 Hz and an extent of ±6% of fc. All stimuli were presented at 45 dBA.

Results



A two-way repeated measures ANOVA revealed a significant main effect of carrier, F(3, 39) = 4.01, p < .05, and an interaction between carrier and frequency. Overall, there was much variation in the subjects’ responses, and this variation tended to increase with fc.

Conclusions

The results of this pilot study suggest that carrier type may affect the perceived pitch of vibrato tones; however, there was large variance in subjects’ responses, and so a longer-term study using fewer, highly trained subjects is currently being undertaken to corroborate these results.

Key words: Pitch perception, Vibrato tones, Carrier type

[email protected]
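The two-interval adaptive matching procedure described in the Method can be sketched, for illustration only, as a simple 1-up/1-down staircase; the step sizes, reversal count and deterministic observer below are assumptions of this sketch, not details taken from the study:

```python
def staircase_match(respond, start, step=4.0, min_step=0.25, reversals=8):
    """1-up/1-down adaptive track for a pitch match (illustrative).
    `respond(freq)` returns True if the listener judged the comparison
    tone at `freq` higher in pitch than the reference; the track moves
    the comparison toward the point where the judgement flips."""
    freq, flips, last = start, 0, None
    while flips < reversals:
        new_dir = -1 if respond(freq) else +1  # heard higher -> go lower
        if last is not None and new_dir != last:
            flips += 1
            step = max(step / 2.0, min_step)   # halve step at each reversal
        last = new_dir
        freq += new_dir * step
    return freq
```

Run with a simulated listener who judges any comparison above 440 Hz as higher (`lambda f: f > 440.0`) and a start of 450 Hz, the track settles near 440 Hz, the point of subjective equality of this toy observer.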

38.2 The information dynamics of melodic boundary detection

Marcus Pearce, Geraint Wiggins

Centre for Cognition, Computation and Culture, Goldsmiths College, University of London, UK

Background
The perception of grouping structure in music involves the detection of segment boundaries in the musical surface. The low-level organisation of the musical surface into groups allows the use of these perceptual units in more complex structural processing and may alleviate processing and memory demands.

Aims
While most models of grouping structure are inspired by Gestalt psychology, we propose a complementary theory based on expectancy violation and predictive uncertainty.

Main Contribution
The perception of melodic groups has traditionally been modelled through the identification of local discontinuities or changes between events in terms of temporal proximity, pitch, duration and dynamics. Narmour proposes a different model, according to which perceptual groups are associated with points of closure where the ongoing cognitive process of expectation is disrupted. Meyer discusses three ways in which expectations may be disrupted: first, an event expected to occur in a given context is delayed; second, the context fails to stimulate strong expectations for any particular continuation; and third, the continuation is unexpected or surprising.

Building on these approaches, we propose that boundaries are perceived at points of surprise or predictive uncertainty, and we quantify these two metrics in information-theoretic terms by reference to a model of unsupervised inductive learning of melodic structure. Recent research suggests that expectation in melodic pitch structure can be accurately modelled as a process of prediction based on the statistical induction of regularities in various dimensions of the melodic surface. Furthermore, there is evidence that infants and adults use the implicitly learned statistical properties of pitch and pitch-interval sequences to identify segment boundaries before unexpected events.

Implications
Subject to further empirical corroboration, the theory offers the possibility of relating two areas of research on music perception: expectation and grouping. Since there is a wealth of evidence for the strong influence of relatively long events (temporal proximity) on boundary detection, we


believe that the information-theoretic approach will prove most fruitful in modelling the influence of melodic and tonal structure on perceptual grouping. It will be important to clarify the relationship between the information-theoretic influences and the influence of rhythmic and metric structure on grouping. We expect to be in a position to present empirical results by the time of the conference.

Key words: Melodic grouping, Boundary detection, Information theory

[email protected]
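The idea of boundaries at points of high surprise (information content) can be illustrated with a minimal sketch; the bigram model with add-one smoothing, the toy pitch sequence and the peak-picking rule are illustrative assumptions, not the authors’ actual model:

```python
from collections import defaultdict
from math import log2

def information_content(sequence):
    """Information content -log2 P(x | previous symbol) for each event
    under a bigram model with add-one smoothing, estimated from the
    sequence itself (a stand-in for unsupervised statistical learning)."""
    pair_counts = defaultdict(int)
    context_counts = defaultdict(int)
    alphabet = set(sequence)
    for a, b in zip(sequence, sequence[1:]):
        pair_counts[(a, b)] += 1
        context_counts[a] += 1
    ics = []
    for a, b in zip(sequence, sequence[1:]):
        p = (pair_counts[(a, b)] + 1) / (context_counts[a] + len(alphabet))
        ics.append(-log2(p))
    return ics

def boundaries(sequence):
    """Mark a boundary at each event whose information content is a
    local peak (more surprising than its neighbours)."""
    ics = information_content(sequence)
    return [i + 1 for i in range(1, len(ics) - 1)
            if ics[i] > ics[i - 1] and ics[i] > ics[i + 1]]
```

In a toy sequence alternating MIDI pitches 60 and 62 with a single intruding 67, the information content peaks at the transition into the unexpected 67, so a boundary is marked at that note, in line with the abstract’s claim that boundaries precede unexpected events.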

38.3 Cross-domain mapping and the experience of the underlying voice-leading

Isabel Cecilia Martinez

Universidad Nacional de La Plata, Argentina. Roehampton University, UK. CMS, Cambridge, UK

Background
In spite of the arguments developed so far about the cognitive status of the underlying musical structure, the following questions remain unanswered: How are underlying events abstracted? How does the listener derive hierarchical structures from the musical piece? Recently, hypotheses about the metaphorical nature of music cognition have been put forward, highlighting the assumption that metaphorical thinking - present in the language used to conceptualise music - might, to some extent, model musical experience. In this paper, it is hypothesized that the underlying musical structure is metaphorically experienced, based on the implicit use of basic image-schematic structures developed during the life-course of an individual’s embodied interaction with the environment. By means of a process of cross-domain mapping, the listener uses knowledge from a given domain to understand information in another domain.

Aims
To test the hypothesis that the underlying voice-leading is metaphorically experienced, through the activation of a BLOCKAGE-RELEASE OF BLOCKAGE image-schema.

Method
31 musicians listened to 9 melodic fragments representing examples of interrupted underlying structures, paired with 3 musical reductions (contour, rhythm and voice-leading). 3 visual animations corresponding to three image-schemas (UP-DOWN, BLOCKAGE-RELEASE OF BLOCKAGE and INTER-ONSET BEATS) were used to prime the activation of a cross-domain mapping process, in order to hear A in terms of B, A being the musical piece and B the structural feature highlighted by the reduction. It was assumed that, to the extent that the primed image-schema corresponds to the feature highlighted in a given reduction, the match between fragment and reduction would be weighted higher in that pair than in the other ones.

Results
Data analysis found significant differences between the three conditions of exposure to the experience of the underlying voice-leading.
Participants rated the match between the fragment and the voice-leading reduction higher when they were primed with the BLOCKAGE-RELEASE OF BLOCKAGE image-schema.

Conclusions
Results bring strong support to the assumption that metaphorical thinking models the experience


of hierarchy in music, and that structural metaphors are used not only as linguistic constructs but also as internal models of cognitive processing that listeners activate during the experience of the underlying musical structure.

Key words: Cross-domain mapping, Underlying voice-leading, Image-schemas

[email protected]

38.4 Beethoven’s last piano sonata and those who chase crocodiles: Cross-domain mappings of auditory pitch in a musical context

Zohar Eitan1, Renee Timmers2

1Tel Aviv University, Israel
2University of Nijmegen, The Netherlands

Background
Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of domains, including size, brightness, angularity, mood, age, and social status (Ashley, 2004; Marks, 2000; Walker & Smith, 1984).

Aims
In an ongoing study, we investigate (1) how Western subjects apply non-Western pitch mappings, (2) whether mappings of “high” and “low” pitch derive from associations with spatial verticality, and (3) whether mappings reported for simple auditory stimuli are similarly applied by listeners to actual music.

Method
Experiment 1. 63 participants received a list of 30 antonym pairs, mostly consisting of previously reported pitch metaphors, circled the term of a pair that they associated with either high or low pitch, and rated the appropriateness of the pair as a metaphor for pitch. A control group of 58 participants performed the same tasks with regard to spatial verticality. Experiment 2. 61 participants listened to two segments from Beethoven’s op. 111 that differed mainly in pitch register, and rated the applicability of the metaphors to each segment.

Results
In Experiment 1, participants significantly matched terms with high or low pitch for 28 out of 30 pairs. Mappings of “high” and “low” pitch significantly differed from those of high and low spatial position in 13 metaphor pairs, notably those associated with physical size. The results of Experiment 2 concurred with the results of Experiment 1. In both experiments, participants always employed metaphors originating in non-Western cultures in agreement with their original uses.

Conclusions
Diverse cross-domain mappings of pitch, regardless of their cultural origin, are consistently used and understood, even when not previously known to participants. This consistency applies equally to abstract conceptual tasks and to tasks involving actual music listening. The ubiquity of the


verticality metaphor in Western usage notwithstanding, cross-domain pitch mappings are largely independent of this metaphor. We shall discuss other possible sources for such mappings, such as the size of a sound source, and suggest ways to further examine their roles in the perception and cognition of music.

Key words: Pitch register, Metaphor, Cross-modality

[email protected]

38.5 Zone of proximal development, mediation and melodic graphic representation

Maria Guadalupe Segalerba

Laboratoire Psychomuse, Université de Paris X-Nanterre, France; Universidad Nacional de La Plata, Argentina

Background
The zones of current and proximal development are two of Vygotsky’s psychological constructs that could account for the transfer of learning between different forms of musical knowledge. In this paper it is posited that a) verbal language - as a form of metalanguage - is an important psychological tool that favors problem solving and the transfer of knowledge in the musical domain, and b) teaching mediation is essential in sustaining the student-knowledge relationship during the process of building a representation of musical language.

Aims
To demonstrate that both teaching mediation and metalanguage act positively upon the zone of proximal development, prompting the transfer of musical knowledge.

Method
An experiment on melodic transcription was run. Participants were 60 young students (12 years old) with two years of ear training, divided into three groups: 1) a control group; 2) experimental group 1; and 3) experimental group 2. The experiment was run in the context of an educational environment. Three conditions were designed, and subjects were required to solve the following tasks in chronological order: 1) memorization, sung performance and a “first transcription” of a tonal melody using music notation; 2) a written “verbal description” of the structural components of the transcribed melody; and 3) a “second transcription” of the same melody, based on the verbal description provided in condition 2. The experimental groups completed conditions 1, 2 and 3, while the control group completed conditions 1 and 3.

Results
Differences between the experimental and control groups were significant. The experimental groups showed a clear improvement in the quality of the second transcription. Verbal language, acting positively, favors the cognitive processing involved in melodic transcription.

Conclusions
The role of the teacher as mediator interacts positively with the mediation of the metalanguage upon the zone of proximal development.
An instructional design based on the arrangement of appropriate conditions provides the necessary framework to guide the student’s path through the zone of proximal development. The use of metalanguage allows the construction of musical knowledge,


providing a bridge that connects mere perception and vocal performance with the graphic representation of the melody.

Key words: Zone of proximal development, Metalanguage, Melodic graphic representation

[email protected]


39 Music Therapy II

39.1 The effect of hypnotic induction on music listening experience of high and low musical involvers

Katalin Héjja-Nagy1, Csaba Szabó2

1Eszterházy Károly College, Eger, Hungary
2University of Debrecen, Debrecen, Hungary

Background
There are great individual differences in musical experiences. According to former observations and questionnaire studies, the capacity for being involved in music is characteristic of a person. In our former research, we found that the intensity of musical involvement and the type of music had a great effect on musical experiences. High involvers had more trance-like experiences, while relaxation prevailed in the experience of low involvers. Music has long been used in hypnotherapy to deepen the hypnotic state, and conversely, trance-like experiences are often observed in music therapy.

Aims
The aims of the present research were to examine whether and to what extent hypnotic induction influences music listening experiences, and whether there are differences between high and low involvers regarding the effect of hypnotic induction. We also wanted to discover whether these phenomenological changes are additionally modified by the type of music.

Method
The Musical Involvement Scale was administered to 250 college students in Eger, Hungary. High and low musical involvers (N=48) were chosen for the experiments. In one session their hypnotic susceptibility was measured; in another session, subjects listened to one of two musical pieces (classical or easy listening) alone, in a laboratory setting. Half of them received a standard hypnotic induction prior to music listening. Subjects reported on their experiences in a free report and by filling in the Phenomenology of Consciousness Inventory.

Results
Hypnotic susceptibility showed moderate correlations with musical involvement. Hypnotic induction influenced the musical experience of low involvers only. Among those subjects who had not received hypnotic induction, there were significant differences between the experiences of high and



low involvers. Among those who received hypnotic induction, low involvers had trance-like experiences as strong as those of high involvers. We found significant differences regarding the types of music, too. In our presentation we will report on fine-grained pattern differences in the music listening experiences of the different groups.

Conclusions
It can be assumed that music itself serves as a hypnotic induction for high involvers, but not for low involvers. These results can inform music therapy considerations and further studies of the music listening experience.

Key words: Music listening experience, Musical involvement, Hypnotic induction

[email protected]

39.2 Diagnosing level of mental retardation from music therapy improvisations: A computational approach

Geoff Luck, Kari Riikkilä, Olivier Lartillot, Jaakko Erkkilä, Petri Toiviainen

University of Jyväskylä, Finland

Background
Previous work suggests that a relationship may exist between an individual’s level of mental retardation and the features which characterise their musical improvisations. Much of this work is based on indirect methods of data collection, such as questionnaires, or on subjective aural analyses of recorded improvisations. A limited amount of MIDI-based analysis has been carried out, but so far no reliable relationship between musical features and level of mental retardation has been identified.

Aims
The aims of the present study were twofold. Firstly, we sought to identify a reliable relationship between musical features and level of mental retardation. Secondly, we aspired to develop an automated method of music therapy improvisation analysis.

Method
Two hundred and sixteen music therapy improvisations, obtained from 7 music therapists’ regular sessions with their clients, were collected in MIDI format. A total of fifty clients contributed musical material, and these clients were divided into four groups according to their level of diagnosed mental retardation (group 1: none; group 2: mild; group 3: moderate; group 4: severe or profound). The improvisations were subjected to a musical feature extraction procedure, in which 67 client-related musical features were automatically extracted in the MATLAB computing environment. These features were then entered into a linear regression analysis as predictors of clients’ group number.

Results
The emergent model accounted for 70% of the variation in clients’ improvisations. Specifically, higher levels of mental retardation were best predicted by four main features: longer periods of silence, better integration of pulse between client and therapist, larger differences in volume between the clients’ and therapists’ playing, and higher levels of dissonance. Together, these four variables accounted for over 50% of the variance in clients’ improvisations.


Conclusions
The present study suggests that an individual’s level of mental retardation can be predicted using an automatic analysis method based on the extraction of detailed musical features from their improvisatory material.

Key words: Musical feature extraction, Automatic analysis, Improvisation

[email protected]
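The pipeline described above (extract features per improvisation, then regress them against group number) can be sketched in miniature; the two toy features, the note-event format and the one-predictor regression are illustrative assumptions, not the authors’ 67-feature MATLAB procedure:

```python
def extract_features(notes, duration):
    """Toy stand-ins for two of the predictors named in the abstract:
    the proportion of silence in the improvisation and the client's mean
    playing velocity. `notes` is a list of (onset, offset, velocity)."""
    sounding = sum(off - on for on, off, _ in notes)
    silence = 1.0 - sounding / duration
    mean_vel = sum(v for _, _, v in notes) / len(notes)
    return silence, mean_vel

def fit_simple_regression(xs, ys):
    """Ordinary least squares for one predictor: y = a + b*x.
    (The study used a multiple regression over many such features.)"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

Fitting the extracted feature values against group numbers (0-3) in this way yields an intercept and slope per feature; the full model would combine all features in a single multiple regression.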

39.3 A study of relational communication process in music therapy

Giovanna Artale1, Fabio Albano2, Luisa Bonfiglioli3, Cristian Grassilli, Pio Enrico Ricci Bitti3

1Emilia-Romagna Regional Health Service, Department of Mental Health, Bologna, Italy
2Emilia-Romagna Regional Health Service, Department of Mental Health, Modena, Italy
3Department of Psychology, University of Bologna, Italy

Background
Consistent with dynamic systems theory, the communication process in music therapy can be conceived as a continuous process in which music therapist and patient are involved. In this study we used Alan Fogel’s Relational Coding System (2000) to examine the communication process in music therapy. This method was originally developed to analyse mother-infant interaction: the dyad is observed as a single communicative system which evolves dynamically over time. The method describes qualitatively different forms of communicative processes by means of a series of interaction categories. In order to study the role of musical interactions in the communication process, we added the term “musical” to the different categories when the dyads used sound or music to communicate. Another dimension of the method is the identification of frames, which are large segments of co-actions that have a coherent theme and lead to specific forms of co-orientation between the participants.

Aims
The aim of this study was to analyse the evolution of the music therapist-patient relationship. The study entailed two different analyses: 1) a longitudinal analysis of the duration and sequences of the different categories; 2) the identification of frames and the analysis of their historical development.

Method
52 videotape recordings of weekly individual music therapy sessions were codified. The five children, aged between six and eleven, who took part in these sessions had diagnoses concerning expressive, communicative and relational disorders. The analysis of the videotape recordings was carried out by three pairs of music therapists. Inter-coder agreement was verified on 20% of the videotape recording time.

Results
The main analyses concerned the differences between the durations of the different categories during the music therapy process. This analysis was carried out separately for each dyad and for all the dyads together, and showed statistically significant results. In particular, the statistical analysis demonstrated that the duration and sequences of the different categories depend on the order of the sessions. As far as the frames are concerned,


we observed a prevalence of musical frames over non-musical ones. The historical analysis of frames showed an increasing complexity of the communication process.

Conclusions
The frame and category concepts of this method allow us to describe and analyse the specific characteristics of the communicative processes in music therapy.

Key words: Music therapy, Co-regulation, Framing

[email protected]

39.4 Music therapy and aggression in 50 children with mild mental handicap: A clinical trial

Masoud Nematian1, Reza Khanmohammad1, Nzanin Hajigholamrezaei1

1Music Application in Mental and Physical Health Association, Tehran, Iran
2Tehran University of Medical Sciences, Tehran, Iran

Background
Aggressive behavior is one of the most common reasons for the psychiatric referral of persons with mental retardation. The work of Montello and Coons showed that music therapy interventions produce a lowering of scores on the aggression-hostility scale. In the present study we have investigated the effect of active music therapy on the aggressive behavior of children with mental handicap.

Method
This single-blind randomized clinical trial was performed in a school for mentally handicapped boys in Tehran, Iran. Fifty pupils (all boys, age range 9-11 years) were randomly selected from 100 pupils with severely aggressive behavior. Raven’s Intelligence Quotient Test and Rosenzweig’s Picture Frustration Study (PFS) were administered to all the cases. They had mild mental retardation (IQ: 50-70) with symptoms of aggression confirmed by the PFS. They were randomly assigned to a case and a control group of 25 each. The case-group pupils attended one-hour sessions of active music therapy twice a week for a period of three months (24 sessions in total). The treatment procedure consisted of improvisation of Persian rhythm-based music therapy, including music appreciation, improvisation, choral songs, rhythmic movements and relaxation. They were all evaluated for aggressiveness at the end of the three-month period using the same method (PFS). Mean scores on the PFS were compared using Student’s t-test.

Results
The results of the tests were significantly (p<0.05) different before and after the active music therapy intervention.

Conclusions
Our results agree with similar studies and strongly suggest music therapy as an effective way of controlling behavioral problems in children with mental handicap. However, more investigation is warranted to better understand the role of music therapy in the psychiatric problems of children with mental handicap.

Key words: Music therapy, Mental retardation, Aggression

[email protected]


39.5 The “Virtual Music Maker”: An interactive human-computer interface for physical rehabilitation

Amir Lahav1, Misha Gorman1, Margrit Betke1, Elliot Saltzman1

1The Music Mind and Motion Laboratory, Sargent College of Health and Rehabilitation Sciences, Boston University, Boston, USA
2Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, USA
3Computer Science Department, Boston University, Boston, USA

Listening to music can influence the way we move. Playing music makes us move. This unique auditory-motor interplay between perception and action may provide the neurological basis for the use of music therapy in physical rehabilitation. Specifically, active music therapy (a process in which patients are physically involved in playing music) has been shown to aid in the treatment of people with neuromuscular disorders. However, people with neuromuscular disorders often do not have the motor dexterity needed to play a typical instrument such as a piano, guitar or drums. As an alternative, easy-to-use therapeutic device, we have designed the “Virtual Music Maker” - an interactive human-computer interface that converts simple body movements into audiovisual feedback in real time, allowing patients to make music by performing prescribed therapeutic exercises. Musical feedback, for example, could range from a single piano note to the patient’s favorite oldies song, depending on the therapeutic goal of the exercise. Musical feedback acts as a reward for the patient and is provided only when the patient produces movements that satisfy predefined criteria (e.g. a specific motion frequency or position in space).

Our fundamental hypothesis is that movements will become more controlled, coordinated and purposeful when patients participate in therapy routines in which movements produce music (rather than accompany music). Here we present preliminary results on the rehabilitative potential of the Virtual Music Maker for improving motor function in people with neuromuscular disorders. This work may provide insight into the use of interactive music production as a treatment modality for sensorimotor recovery and neurorehabilitation.

Key words: Music making, Music therapy, Music technology

[email protected]

39.6 Clinical evaluation of the treatment of high blood pressure with receptive music therapy

Vera Brandes1, Hans-Ullrich Balzer2, Gernot Ottowitz1, Wolfgang Maier1

1Paracelsus Medical University Salzburg, Austria
2Mozarteum University Salzburg, Austria

The numerous side effects and after-effects of pharmaceutical hypertension therapy call for scientific studies of alternative therapies for the treatment of this condition. Previous studies have already demonstrated the blood-pressure-reducing effect of special music. What remains obscure, however,



is to what extent music affects the human organism in a purely physiological (e.g. rhythmic synchronisation phenomena) and in a psychological manner.

The main objective of this study was to pinpoint the factors that lead to an optimal therapeutic effect of a regular, cure-like treatment with music based on chronobiological principles.

Based on previous studies of music effects and the analysis of psychophysiological parameters by means of regulatory diagnostics, four hypotheses were formulated: 1. It is possible to compose and record music which elicits specific physiological and psychological effects. 2. The emotional quality of the music has to address the specific emotional needs of the subjects. 3. The effect depends on the time of application, determined by means of analysis of the chronobiological intraday rhythm. 4. The therapeutic effects do not surface immediately, but after a period of the same length as the active phase.

The study design comprised two phases, each preceded and followed by different tests. The first phase consisted of the actual four-week music cure: 20 listening sessions of 30 minutes each, conducted at home, followed by a control phase.
Method
1. BET (blood pressure relaxation test) (10 minutes). 2. SET (stress relaxation test) (20 minutes). 3. 60-minute measurement (alternating rest and light movement states) of EMG, skin resistance, skin potential, and heart frequency variability. Psychological effects were assessed by specially developed and standardised questionnaires and interviews. Subjects reported daily on their subjective well-being, sleep quality and intake of medication.

The evaluation shows a reduction of the blood pressure level after four weeks. About 2/3 of the participants (n=26) reported an enhancement of their state of health. 96% of the test subjects answered the question of how they liked the music with "good" or "very good" (on a scale of 1 to 6). 100% of the test subjects would like to repeat the therapy.

Key words: Music medicine, Chronobiology, Psychological morphology

[email protected]


40 Symposium: Longitudinal case studies of preparation for music performance

Convenor: Roger Chaffin
Discussant: Andreas Lehmann

Musical performances by concert soloists in the Western classical tradition are highly prepared and practiced, providing cognitive scientists with a natural laboratory within which to study the development of a complex skill. Music practice provides an opportunity to observe experts engaged in a process of creative problem solving in a situation that naturally provides a detailed behavioral record. By itself, however, the behavioral record of practice is relatively uninformative; when combined with the musician's self-report about musical goals and strategies, practice provides valuable insights into the cognitive processes involved. The symposium describes a method of conducting longitudinal case studies of musicians preparing for performance and presents new data from three such studies.

Case studies are the method of choice for highly developed skills like those examined in this symposium. The 20+ years of training required increase the normal range of individual differences, so that aggregating observations across individuals would risk obscuring the phenomena of interest. Therefore, we believe that case studies are the appropriate way to test the application of general psychological principles to experts, whether in music or in any other domain. Generalization from the study of exceptional cases must be based on support for general psychological principles. The musicians that we have studied show important commonalities in their use of musical structure to organize practice and thinking, and in their practice of performance cues to speed retrieval from long-term memory. More broadly, expert musicians' use of performance cues is consistent with principles of expert memory developed from the study of experts in other fields, and with principles of memory derived from the study of the general population.

This kind of collaboration between performer and scientist creates dilemmas of meaning-making as the two negotiate the differences in epistemology (science vs art), goals (production of knowledge vs aesthetic experience), and subjectivity that they bring from their different disciplines. A musician who participates in research as both the subject of study and as a researcher must choose between the two roles when they conflict. We will discuss these epistemological issues in addition to describing data from three studies.




40.1 An overview of the longitudinal case study method for studying musicians' practice

Roger Chaffin

Department of Psychology, University of Connecticut, USA

Background
Studying the practice of expert performers provides valuable information about how musicians prepare for performance. For experienced performers, an important part of memorization is the practice of memory retrieval cues (performance cues). Performance cues are the landmarks of a piece of music that a performer attends to during performance. While other aspects of the performance become automatic with repeated practice, performance cues remain conscious and provide the musician with a way of consciously controlling actions that would otherwise be entirely automatic. Performance cues are established by thinking about features of the music during practice so that they automatically spring to mind during performance.

Aims
In this symposium we describe three longitudinal case studies of experienced performers preparing new works for performance that follow the emergence of performance cues during practice and during performances both in the practice studio and on stage.

Method
The musicians videotaped their entire practice of a new piece of music. At the end of the learning process, they wrote out the score from memory and gave detailed reports of every aspect of the music they had attended to during practice (musical structure, technique, interpretation, and performance cues) by marking features on copies of the score. These reports provided predictor variables for regression analyses in which bars or beats were the unit of measurement and the dependent measures were: the number of starts, stops, and repetitions during practice; bar-to-bar tempo and sound-level (mean and variability) during practice and live performances; and accuracy of recall.
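The regression logic described above, bars as cases, reported features as predictors, practice measures as dependent variables, can be sketched as follows. The data, predictor set and effect sizes below are invented for illustration and are not the study's.

```python
# Minimal sketch of a bar-by-bar regression: rows are bars, predictors are
# binary "feature reported in this bar" codes, and the dependent measure is
# the number of practice starts per bar. All values are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_bars = 64
# Hypothetical predictors: section boundary, reported technical difficulty,
# reported performance cue (each coded 0/1 per bar).
X = rng.integers(0, 2, size=(n_bars, 3)).astype(float)
true_betas = np.array([2.0, 1.0, 3.0])              # assumed effect sizes
starts = X @ true_betas + rng.normal(0, 0.5, n_bars)  # simulated starts per bar

# Ordinary least squares with an intercept column.
design = np.column_stack([np.ones(n_bars), X])
coef, *_ = np.linalg.lstsq(design, starts, rcond=None)
print(coef.round(2))  # intercept followed by the three feature effects
```

With real data each dependent measure (starts, stops, tempo, recall accuracy) would get its own regression of this form.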

Results
Starts and stops during practice showed which aspects of the music the performers were attending to. Fluctuations in tempo and sound-level during practice and live performances showed both the developing interpretation and which aspects of the music were a source of difficulty. Accuracy of recall showed which points in the piece served as landmarks in the musicians' memories.

Conclusion
The memorization strategies of experienced musicians are similar to those of expert memorists in other fields. Musicians chunk the notes to be learned into familiar patterns, use musical structure as a retrieval organization, and practice to increase the speed and automaticity of memory retrieval at performance cues.

Key words: Memory, Practice, Performance

[email protected]


Wednesday, August 23rd 2006

40.2 Shared performance cues: Predictors of expert individual practice and ensemble rehearsal

Jane Ginsborg1, Roger Chaffin2, George Nicholson3

1Royal Northern College of Music, Manchester, UK
2University of Connecticut, USA
3University of Sheffield, UK

Context
Research into expert musicians' preparation for performance has hitherto focused on individual musicians: classical and jazz pianists, a cellist, and singers. Musicians' practising strategies have been analysed in terms of the features of a piece of music that are attended to in performance (performance cues) and the goals that are identified, and met, by singers memorizing the words as well as the music of songs.
Aims
This paper reports data from an investigation of a singer working with a conductor (who was also the rehearsal pianist) towards a public performance of the first Ricercar from Stravinsky's Cantata for solo soprano with a small instrumental ensemble.
Methods
The singer and conductor recorded their individual and joint practice sessions, including verbal commentaries, and reported their goals for practice and performance. Following the performances, they reported their understanding of the compositional structure of the piece, the decisions they made during practice independently and together, and the cues attended to in performance. The singer's and conductor's ten individual practice sessions and joint rehearsals were transcribed, coded and analysed.
Results
Regression analyses show the concerns that were stable and those that changed over the course of practice and rehearsal. Phrase boundaries were used, throughout, for retrieval. Technical issues were identified early and remained salient, as did expressive features. Although the performers began to prepare for ensemble performance in their individual practice sessions, the focus on ensemble intensified over time.
Conclusions
The roles and requirements of conductor and singer are different: the conductor has to have an overview of the whole score while the singer performs from memory, aware of, but relying only intermittently on, information from the other musicians.
Nevertheless these different conceptualisations (and those they must resolve during the rehearsal process) combine so that ultimately the musicians give a performance that is experienced by both of them, and by their audience, as entirely unanimous.

Key words: Memory, Practice, Performance

[email protected]

40.3 Action, thought, and self in cello performance

Tania Lisboa1, Roger Chaffin2, Kristen Begosh2, Topher Logan2



1Centre for the Study of Music Performance, Royal College of Music, London, UK
2Department of Psychology, University of Connecticut, USA

Background
Previous studies suggest that experienced soloists learn to direct their attention to specific features of the music during performance. Attending to these performance cues provides the musician with the ability to monitor the highly practiced actions of the performance and provides mental control over what would otherwise be an entirely automatic sequence of actions.

Aims
A longitudinal case study of an experienced cellist (the first author) learning a new piece of music for solo performance followed the development of performance cues for two years and across eight public performances.

Method
The music, the Prelude from J.S. Bach's Suite No. 6 for solo cello, was learned in preparation for a series of public performances. The cellist videotaped her practice and performances and, in addition, wrote out the score from memory. Practice was transcribed by recording the location of starts and stops in practice, and bar-to-bar tempi were measured for practice and live performances. The cellist gave detailed reports about musical structure, performance cues, and decisions about technique and interpretation. These reports were related to practice, recall, and tempo by multiple regression analyses, which showed which dimensions of the music affected practice and performance at different points in the learning process.

Results
Attention to performance cues and musical structure was evident in the location of starts and stops during practice, in slowing at performance cues during performance, and in serial position effects on recall. Effects of different types of performance cues at different points in the learning process reflected shifts in the cellist's goals as her mastery of the piece developed.

Conclusions
We were able to observe the development of the performance cues that the cellist used to guide her playing and their effects on successive live performances. While the details of the cellist's memorization strategies were unique to the performer and the piece, the general principles were the same as for other musicians and expert memorists. The dual role of musician/researcher often placed conflicting demands on the cellist but also provided unique opportunities for musical and psychological insight.

Key words: Memory, Practice, Performance

[email protected]

40.4 Variability and automaticity in highly practiced performance

Roger Chaffin, Anthony Lemieux, Colleen Chen

Department of Psychology, University of Connecticut, USA



Background
Performance cues are the landmarks of a piece of music that a performer attends to during performance. While most aspects of a performance become automatic with practice, performance cues provide the musician with a means of conscious control of the otherwise automatic actions. Previous evidence for this claim has been indirect, based on practice and recall. This report provides the first direct evidence of effects of basic performance cues on performance. Basic performance cues are details of technique that need attention during performance, e.g., a long jump.
Aims
In polyphonic music, performers separate the voices of the polyphony by playing each in a distinctive manner (e.g., legato, staccato). We expected interpretive nuances of this sort to be reflected in increased variability in the sound-level, and looked for a reduction in this variability at points where the performer reported attending to basic performance cues (i.e., to details of technique).
Method
We examined practice performances of a concert pianist preparing J.S. Bach's Italian Concerto (Presto) for a professional recording session over a 10-month period. Tempo, sound-level, and sound-level variability were measured for each bar for practice performances from each stage of the learning process.
Results
Consistent with the assumption that nuances of interpretation were associated with increased sound-level variability, this measure increased with complexity of phrasing. Two effects supported the hypothesis that the pianist focused more on technique at basic performance cues. First, tempo decreased at basic cues in all except the final performance, suggesting that the pianist took extra time at basic cues to ensure technical details were correct. Second, sound-level variability decreased at basic cues during early practice performances and again at the end of the learning process, suggesting that the pianist paid more attention to technique at basic cues at times in the learning process when she was striving to perfect her performance.
Conclusions
The data provide the first direct evidence that performance cues provide a means of deliberately controlling actions during highly practiced performance. The pianist trained herself to attend to critical details of technique in order to ensure that the performance unfolded as planned.

Key words: Memory, Practice, Performance

[email protected]


41 Rhythm V

41.1 Cognitive and affective judgements of syncopated themes

Peter Keller1, Emery Schubert2

1Max Planck Institute for Human Cognitive and Brain Sciences, Germany
2School of Music and Music Education, University of New South Wales, Australia

Background
Thematic development in music can be achieved by various techniques including rhythmic variation, which typically involves a change in rhythmic complexity. For example, adjacent sections of music may change from being unsyncopated (simple) to being syncopated (complex) and vice versa. Little is known about how such changes in rhythmic complexity are interpreted by listeners along cognitive and affective dimensions.

Aims
We investigated the influence of increasing versus decreasing syncopation on cognitive (perceived complexity) and affective (perceived happiness, arousal, tension, and enjoyment) responses to short melodies.

Method
Musicians listened to melodies each consisting of two four-bar phrases (in quadruple meter). The pitch series repeated across both phrases but the rhythm changed. The rhythm of the first phrase was either syncopated (S) or unsyncopated (U). A change in syncopatedness from the first to the second phrase occurred in half of the patterns, yielding four conditions: US, SU, UU, SS. The transition between phrases was accompanied by a change in timbre. Participants rated the second phrase of each pattern, relative to its first phrase, with respect to how complex, happy, aroused (excited), and tense it sounds, and how much more or less enjoyable it is. Ratings for these factors were made on a scale from -100 (less) to +100 (more).

Results
Overall, cognitive and affective ratings were reliably higher when syncopation increased (US) than when it decreased (SU): 2.91% vs. -1.79% (after subtracting the average of the baseline UU and SS ratings). The size of this effect was statistically equivalent across all factors. Furthermore, the absolute values of effect sizes were larger (relative to baseline) for increasing than for decreasing syncopation.
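The baseline correction used above can be shown with a worked example. The raw means below are invented for illustration; only the correction procedure is taken from the abstract.

```python
# Change-condition ratings (US, SU) are reported after subtracting the mean of
# the no-change baselines (UU, SS). Raw mean ratings here are hypothetical.
raw = {"US": 5.0, "SU": 0.3, "UU": 1.6, "SS": 2.6}
baseline = (raw["UU"] + raw["SS"]) / 2            # 2.1
corrected = {cond: raw[cond] - baseline for cond in ("US", "SU")}
print(corrected)  # US about 2.9, SU about -1.8
```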




Conclusions
Our results suggest that increasing syncopation has more impact than decreasing syncopation on cognitive and affective responses alike. The ordered progression from low to high complexity may create a situation where listeners are initially better able to form mental representations based on the simple musical structure. Consequently, more extreme responses are obtained when the expectancies embodied in these representations are violated by the structural complexities that follow.

Key words: Syncopation, Perceived complexity, Affect

[email protected]

41.2 Investigating computational models of perceptual attack time

Nick Collins

Centre for Science and Music, Faculty of Music, University of Cambridge, UK

Computational models of perceptual onset time are of great interest in automatic transcription systems for event analysis (Collins 2005). Not all events are percussive. Slow attack envelopes may shift the perceived onset time later into the physical event. Even with a percussive transient attack, it takes time for the temporal integration of the signal energy to allow a cognitive detection of the event. This leads one to suspect that the perceptual rather than the physical onset may constitute an informative feature of signals, and in particular assist with accurate scheduling of a sequence of events, with regard to specifying temporal relations within and across streams. The study of the p-center grew out of work on automatic analysis of prosody in the speech processing literature, and the phenomenon has been termed perceptual onset time (POT) (Vos and Rasch 1981) or perceptual attack time (PAT) (Gordon 1987) in experimental work on synthesized and instrumental tones respectively.

In the present study human listeners were tested in a relative POT task, with event material derived from a number of sources including both percussive and non-percussive polyphonic music, monophonic instrumental sources and singing voice. All sound samples were from 100-500 msec in duration and avoided double hits or other ambiguous cues. Subjects had to adjust the start time of a given sample to give perceptual isochrony when presented in alternation with a single impulse reference sound. This yields the POT of samples relative to the POT of the reference tone.

Implementations of a number of models from the literature were used to fit the data, including maximal-peak-based models, energy integration, and normalised rise time, as well as a more complicated model proposed by Pompino-Marschall (1989) in the speech processing literature. In model fitting, energy integration models showed some promise, and a simple auditory model with activation integration was investigated.
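The energy-integration idea can be sketched as follows: the perceived onset is taken as the moment the accumulated signal energy crosses a fixed fraction of its total. The threshold value and the two test envelopes below are illustrative choices, not the parameters fitted in the study.

```python
# Sketch of an energy-integration model of perceptual attack time: percussive
# events cross the energy threshold almost immediately, slow attacks later.
import numpy as np

def perceptual_attack_time(signal, sample_rate, threshold=0.1):
    energy = np.cumsum(np.asarray(signal, dtype=float) ** 2)
    crossing = np.searchsorted(energy, threshold * energy[-1])
    return crossing / sample_rate   # seconds after the physical onset

sr = 1000
t = np.arange(0, 0.3, 1 / sr)
fast = np.exp(-t / 0.01)          # percussive envelope: energy arrives early
slow = 1 - np.exp(-t / 0.05)      # slow attack: energy arrives later
print(perceptual_attack_time(fast, sr) < perceptual_attack_time(slow, sr))  # True
```

A fitted model would tune the threshold (and possibly pass the signal through an auditory filterbank first) against the listener data.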

In practice, there is difficulty in applying any model of POT to general polyphonic audio sig-nals. It appears that our experience in listening may be schema-driven, with the parameters of ourneural models tuning to specific situations.

Key words: Perceptual onset time, P-centre, Computational event analysis

[email protected]



41.3 Computer analysis of performance timing

Simon Dixon

Austrian Research Institute for Artificial Intelligence, Austria

The current bottleneck in music performance research is not a lack of audio recordings but rather the extraction of high-level content from these data. We present new methods of automating the analysis of audio recordings in order to extract precise performance information such as the timing and tempo at each score position. These methods have been implemented in two computer programs: a new version of the beat tracking system BeatRoot and an audio alignment system called MATCH. The new version of BeatRoot runs on multiple operating systems and has many new features such as the annotation of multiple metrical levels and phrase boundaries, as well as improved onset detection and tempo induction functions.

Previous work focussed on beat tracking directly from audio recordings, without using any knowledge of the score, to extract tempo curves at various metrical levels. Although the system's errors could be corrected interactively, it was found that automatic beat tracking was not particularly helpful for music with extreme tempo variations. Much better results have been obtained using audio alignment, whereby a mapping between the time axes of two performances is automatically generated using an efficient time-warping algorithm. This mapping allows annotations of score positions to be transferred automatically between audio recordings of different renditions of the same piece of music.
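The time-warping idea can be illustrated with a minimal dynamic time warping sketch; it is not the MATCH implementation (which works on audio features with an efficient on-line algorithm), and the 1-D "frames" below stand in for audio feature vectors.

```python
# Minimal dynamic time warping: build a cumulative cost matrix between two
# feature sequences and backtrack the cheapest monotonic path, which maps the
# time axis of one performance onto the other.
import numpy as np

def dtw_path(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# The second "performance" lingers on one frame (a local tempo change).
perf1 = [0.0, 1.0, 2.0, 3.0, 4.0]
perf2 = [0.0, 1.0, 1.0, 2.0, 3.0, 4.0]
print(dtw_path(perf1, perf2))  # [(0, 0), (1, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
```

The pair (1, 1)/(1, 2) shows the lingered frame of the second performance absorbed by one frame of the first; with such a path, any annotation placed on one recording's time axis can be carried across to the other.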

For example, to analyse several performances of a piece, the automatic beat tracking system is run on the first performance. The errors made by this system are corrected manually, giving a mapping of score time to performance time for the first performance. The remaining performances are then aligned automatically to this first recording using MATCH, and the errors in the alignment (much less frequent than for beat tracking) are corrected manually. The combined use of the two systems provides an order-of-magnitude decrease in the work involved in generating multiple score-performance alignments. A further improvement can be made which bypasses the beat-tracking step entirely, by synthesising a performance directly from a score in MIDI format, which automatically provides an initial mapping of score times to the audio file for use by the alignment software.

The long-term aim of this project is to collect data for machine learning experiments on the computational modelling of expressive performance. The software is also available for download and potentially useful for anyone working in performance research. Both programs are written in Java and run on Linux, Windows, and Mac operating systems.

Key words: Performance analysis, Beat tracking, Audio alignment

[email protected]

41.4 Beat induction with an autocorrelation phase matrix

Douglas Eck

University of Montreal, Computer Science Department, Canada


304 Rhythm V

Background
In previously published work (first at ICMPC 2004) we outlined a method that uses autocorrelation to perform tempo and meter estimation. Autocorrelation has long been used in models that estimate tempo, meter and beat. The novelty in our model is the introduction of a phase-by-period "Autocorrelation Phase Matrix" (APM) that stores intermediate autocorrelation results. While standard autocorrelation discards all phase information, the APM can be used to analyze repetition patterns in phase space, yielding sharper and more reliable tempo estimation. It can also be searched for metrical trees, yielding enhanced meter estimation.
Aims
We describe for the first time an APM-based model for beat induction. The model is fast enough to work in real time on digital audio or MIDI and yields promising results. In addition we explore the qualitative and quantitative differences between this model and other beat induction models (of which there are too many to discuss in this short abstract).
Method
The model works by (a) updating an APM at regular intervals using features extracted from audio and (b) estimating beat using tempo and phase maps computed from the APM. The model can work in causal or non-causal mode, with slightly better results in non-causal mode. We compare our model to others on existing and new datasets.
Results
Though our model yields promising results compared to other models, we will not argue that it is "the best" or "the state-of-the-art". It seems clear that beat induction is too complex a task to support a single dominant model. Instead we highlight the advantages and disadvantages of different techniques, focusing on issues such as speed and flexibility in adjusting to a changing beat.
Conclusions
We conclude that the APM is a good data structure for analyzing the temporal structure in music. It retains the ability of autocorrelation to weigh the relative importance of different time-lags while also storing the necessary information about phase to allow an "alignment" of the lag structure with a musical sequence. We also conclude that our APM-based beat induction model represents a good balance of speed and performance. Further investigation is warranted.
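The phase-by-period matrix can be sketched as follows: for each candidate lag (period), autocorrelation products are binned by phase (sample index modulo lag) instead of being summed away. This is an illustrative reconstruction from the description above, not the authors' real-time implementation, and the onset-strength input is a toy signal.

```python
# Sketch of an autocorrelation phase matrix (APM): entry [lag][phase] holds the
# autocorrelation mass at that lag contributed by samples at that phase, so
# both period AND beat phase can be read off; summing over phase recovers
# ordinary autocorrelation.
import numpy as np

def autocorrelation_phase_matrix(x, max_lag):
    apm = [np.zeros(lag) for lag in range(1, max_lag + 1)]
    for lag in range(1, max_lag + 1):
        for n in range(len(x) - lag):
            apm[lag - 1][n % lag] += x[n] * x[n + lag]
    return apm   # apm[lag - 1][phase]

# Toy onset-strength pattern with period 4, beat phase 1.
x = np.array([0, 1, 0, 0] * 8, dtype=float)
apm = autocorrelation_phase_matrix(x, 6)
best_lag = max(range(1, 7), key=lambda lag: apm[lag - 1].max())
best_phase = int(np.argmax(apm[best_lag - 1]))
print(best_lag, best_phase)  # 4 1: period and phase recovered together
```

Plain autocorrelation would find the period 4 but not the phase 1; keeping the phase bins is what lets the model place the beat, not just estimate its rate.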

Key words: Beat induction, Autocorrelation, Real-time models

[email protected]

41.5 Perceiving and cognizing the mathematical processes in music composition in the 20th century

Rima Povilioniene

This paper investigates the problem of the interaction of music and mathematics, which has continued in various forms of expression since Antiquity. This phenomenon may be perceived and cognized in various forms. Antiquity is marked by the Pythagorean system of music intervals and mathematical proportions. The mysticism of the Middle Ages determined the sacralness of both culture and music and gave significance to religious symbols (likewise to numbers, which were provided with symbolic meanings). The world-view of the Renaissance bore the mark of the Greek rationalisation of intervals, which resounded in the philosophy of antique beauty that



was specific to the Renaissance period. Its material expression, proportions in mathematical symbols, was recreated and embodied in various areas of art: in architecture, literature, the fine arts and music. The musical epoch of the Baroque is characterised by an especially sacral tradition of Christian numerology that interconnects with the numinous world-view of the Middle Ages and underpins its tradition of composing. In Classicism, the influence of proportions on the architectonics of form was observed; the quadratic principle, which displayed the harmony of balanced formal elements, was used widely. But whereas in the 18th century the concept of "sounding mathematics" was a synonym for music, Romanticism was distinguished by a flowering of anti-rationalisation ("if the mind may mistake, the sense never does"). In this period the antagonism between music and mathematics reached its culmination.

In the 20th century, however, music turned back to science (for example, Stravinsky stated that the "particularity of composer's thinking differs almost nothing from mathematical thinking"), concentrating and synthesizing the experience of previous periods. The interaction of the 20th-century art of sounds with number constructs thus became eminently varied and coexisted as a layer of different numerological traditions. The music of the 20th century abounds in the numerological symbolism of various religions. Here we may also see world-harmony proportions, numerical progressions (such as Fibonacci and Lucas), laws of the Sectio aurea, and the confrontation of symmetry and asymmetry in polyrhythm and polymeter.

Yet beyond this synthesis of the numerological compositional traditions of earlier epochs, a new way of mathematising music shows up in 20th-century compositional practice. It is connected to modern mathematical theories and inspires analysis in a new direction: investigations of algorithmic procedures, recursive models, and theories of fractals, chaos, probability and Markov chains that are invoked in music creation (for example, Xenakis experimented with stochastic processes and the implications of group theory for musical material; the tempos of C. Nancarrow's canons and T. Johnson's compositions are dictated by mathematical formulas; a prototype for computerised compositions by G. Nelson is chosen from equations of granular synthesis and chaos theory; Ch. Dodge uses fractal rules in his composing process; J. Cage and M. Feldman experimented with reproducing probability theory in musical sounds). Several of these aspects are illustrated by the analyses of musical compositions provided in this paper.

Key words: Contemporary music and mathematics, Theories of chaos, fractals, algorithms in music, Composers Johnson, Feldman, Ligeti, Nelson, Nakas, Mazulis

[email protected]

41.6 A time-based approach to musical pattern discovery inpolyphonic music

Tamar Berman

Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, Champaign, USA

Background
The task of musical pattern matching has been extensively studied. The accelerated developments in the field of Music Information Retrieval have added special interest to this task, as musical pattern matching is an essential component of any content-based music retrieval system. Various


306 Rhythm V

methods have been employed: some are inspired by text retrieval methods, in which the music is converted into "words" and then searched; others employ Markov models, in which the music is represented as a stochastic process. The motivating task in most of these is matching a melody which is sung or played by a user with similar sequences stored in a database. Much of the challenge facing these systems is related to the imprecision of the query, as the singer or player may provide the system with incorrect or imprecise pitch or rhythm information.
Aims
This paper poses a different task to the pattern matcher: here, the user is a music researcher, scholar or student, who can define the target sequence parametrically and precisely. The target database is polyphonic, and the target sequence is often concealed within it: embedded within other patterns or interspersed with other musical events. The pitches are described relative to some anchor key, and the maximal or minimal distances between the musical events in the sequence are given in absolute time units. This perceptual, time-centered approach creates a common basis for pieces of different tempi, and is the basis for the representation employed.
Method
The music is encoded as an equally-spaced time series, in which the presence or absence of relative pitches in successive time intervals is recorded, resulting in a succession of "harmonic windows". This representation, which serves as the basis for the pattern matching, is part of the author's doctoral dissertation.
Results
The method has been tested on MIDI sequences of music by W.A. Mozart. Complex, polyphonic patterns have been successfully retrieved. Examples of these are included in the paper.
Conclusions
This paper demonstrates that encoding music as a series of equally spaced "harmonic windows" yields successful retrieval of polyphonic, multi-event musical sequences.
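The harmonic-window encoding can be sketched as follows. This is an illustrative reconstruction from the description above, not the dissertation code; the note data, window size and matching rule (ordered subsequence across windows) are invented for the example.

```python
# Sketch of the "harmonic window" representation: fixed-width time windows each
# record the set of pitch classes sounding in them, relative to an anchor key;
# a target pattern is then matched in order across windows, even when
# interspersed with other events.
def harmonic_windows(notes, window_ms, total_ms, anchor=0):
    """notes: iterable of (onset_ms, duration_ms, midi_pitch) triples."""
    n_windows = total_ms // window_ms
    windows = [set() for _ in range(n_windows)]
    for onset, dur, pitch in notes:
        first = onset // window_ms
        last = min((onset + dur) // window_ms, n_windows - 1)
        for w in range(first, last + 1):
            windows[w].add((pitch - anchor) % 12)
    return windows

def contains_pattern(windows, target_pcs):
    """True if the target pitch classes appear in this order across the windows."""
    it = iter(windows)  # shared iterator enforces left-to-right order
    return all(any(pc in w for w in it) for pc in target_pcs)

# A C major arpeggio (toy data), 250 ms windows over 1.5 s.
notes = [(0, 400, 60), (500, 400, 64), (1000, 400, 67)]
ws = harmonic_windows(notes, 250, 1500, anchor=0)
print(contains_pattern(ws, [0, 4, 7]), contains_pattern(ws, [7, 4, 0]))  # True False
```

The full method additionally constrains the maximal or minimal time distance between matched events; that would amount to bounding how many windows the shared iterator may advance between hits.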

Key words: Polyphonic, Pattern, Time

[email protected]


42 Demonstrations II

42.1 Digital representation of musical vibrato

Fernando Gualda

The Pennsylvania State University, USA
International Double Reed Society, USA
Audio Engineering Society, USA

Background
Musical vibrato is an important feature used by musicians for musical expressiveness. There has been much discussion about musical vibrato because of the subjective nature of its perception and production.

This paper suggests a new, objective approach to representing musical vibrato through computer software capable of analyzing and comparing it.

Since Seashore (1936), musical vibrato has been studied as frequency modulation and associated with intensity variations. Sundberg (1995) provides a new parameter, “waveform”, in addition to “rate” and “extent”. Brown (1996) discusses the pitch center of vibrato tones. Remley (1997) and Yoo et al. (1998) discuss the perception of vibrato; Keisler (1991) and Sethares (1993) discuss the importance of inharmonicity to the perception of pitch.

Aims
This paper proposes a new computational process for analyzing and describing musical vibrato, as well as its perception, using a new graphical representation. Although there is a vast literature on vibrato under its standard definition as frequency modulation, much less attention has been given to the influence of harmonic content and inharmonicity on musical vibrato.

Method
A new analysis system combining standard spectral analysis methods is proposed: a series of overlapped windowed FFTs is connected to a pitch tracking (F0 estimation) algorithm. This algorithm signals a group of high-resolution frequency analysis objects that calculate the intensity and frequency of each harmonic partial of interest.

For the demonstration, only a digital projector will be necessary. The demonstration is based on a short slide presentation of the DSP techniques, the implementation of the computer models (in C++), the graphical representation approach, and examples of the application to real sounds.

Implications




Computer software that is capable of comparing vibrato differences will be presented. It will be valuable for further research into musical vibrato.

Specific value and meaning
A new approach to representing and comparing musical vibrato will be presented, which will make it possible to understand and teach musical vibrato more efficiently and objectively.

Key words: Vibrato, Computer models, Spectral analysis

[email protected]
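The authors’ analysis chain (overlapped windowed FFTs feeding an F0 tracker, implemented in C++) is not reproduced here. Purely as an illustration of the “rate” and “extent” parameters discussed above, a sketch of how they might be estimated from an already-extracted F0 contour could look as follows; the 3-10 Hz search band and all names are assumptions.

```python
import numpy as np

def vibrato_rate_extent(f0_hz, frame_rate):
    """Estimate vibrato rate (Hz) and extent (peak deviation in cents)
    from an F0 track, treating vibrato as frequency modulation."""
    # Convert F0 to cents deviation from the mean pitch, then remove DC.
    cents = 1200.0 * np.log2(np.asarray(f0_hz) / np.mean(f0_hz))
    cents -= np.mean(cents)
    # Spectrum of the modulation signal; look for a peak in a typical
    # vibrato band (assumed 3-10 Hz here).
    spectrum = np.abs(np.fft.rfft(cents * np.hanning(len(cents))))
    freqs = np.fft.rfftfreq(len(cents), d=1.0 / frame_rate)
    band = (freqs >= 3.0) & (freqs <= 10.0)
    rate = freqs[band][np.argmax(spectrum[band])]
    extent = (np.max(cents) - np.min(cents)) / 2.0
    return rate, extent
```

A fuller analysis in the spirit of the abstract would run the same modulation measurement on each harmonic partial, not only on F0.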

42.2 PracticeSpace: A platform for real-time visual feedback in music instruction

Alex Brandmeyer, David Hoppe, Makiko Sadakata, Renee Timmers, Peter Desain

Music, Mind, Machine group, Nijmegen Institute for Cognition and Information, Radboud University, The Netherlands

Background
Cheaper, more powerful technology has opened the door for the development of sophisticated music education platforms for use in conservatories. Real-time visual feedback has been shown to be an effective tool in singing instruction, but currently there are no such platforms for use in the instruction of other instruments.

Aims
We aim to develop a computer-based platform incorporating real-time visual feedback for training expressive musical performance. The platform will initially utilize instruments which produce MIDI data, and will eventually be expanded to incorporate audio analysis of non-MIDI instruments.

A short description of the activities
The initial platform was run on a PowerMac G5 using two flat-panel displays, two monitor speakers, and a MIDI drum machine with attached triggers. Playback and recording were handled using the Logic sequencer, while analysis of expressive information was done with Max/MSP. Visual feedback was presented with Macromedia Flash. The “teacher” selects or performs a desired passage, which triggers a display on the student’s screen in real time. The display consists of a series of geometric shapes whose features are mapped onto the voice, timing, and velocity values of the passage, forming a coherent figure. Different expressive patterns lead to unique visual figures. When the passage has completed, the student attempts to imitate it. This triggers a second display pattern that is overlaid on the initial display. Discrepancies in expression between the instruction performance and the student’s performance lead to differences between the two display figures.

Implications
We feel that incorporating a visual feedback system into lessons may improve the effectiveness of music instruction. The feedback is synchronized with the musical events, and is more closely associated in time with the student’s sensory feedback, reinforcing it. Additionally, the display is directly related to the musical aspects of the performance (intensity, timing, etc.), as opposed to physical measurements (e.g., spectrograms, waveforms).


Wednesday, August 23rd 2006

Specific value and meaning
The platform will enable students to develop their expressive abilities with a variety of instruments. It will also enable self-instruction through libraries of pre-compiled materials, and can reduce the cost of instructor time for conservatories.

Key words: Music education, Music technology, Real-time visual feedback

[email protected]
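The platform itself was built with Logic, Max/MSP and Flash; purely as an illustration of the kind of mapping the abstract describes, from performed note events to abstract shape parameters, a minimal sketch might look like the following, where overlaying teacher and student figures exposes expressive discrepancies. Every name here is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Shape:
    x: float       # horizontal position: the note's score time
    size: float    # mapped from MIDI velocity (0-127), i.e. loudness
    offset: float  # vertical offset: timing deviation from the score

def performance_to_shapes(performed: List[Tuple[float, int]],
                          score_onsets: List[float]) -> List[Shape]:
    """Map performed notes (onset s, MIDI velocity) onto one shape per
    note; two performances with different expression yield visibly
    different figures when drawn and overlaid."""
    return [Shape(x=score, size=vel / 127.0, offset=onset - score)
            for (onset, vel), score in zip(performed, score_onsets)]
```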


Part III

Thursday, August 24th 2006



43 Symposium: Music in everyday life: A lifespan approach

Convenor: Alexandra Lamont

This symposium brings together papers which treat important questions of engagement with music in everyday life at different points across the lifespan. All the papers reflect the current concerns of the social psychology of music in terms of identifying important contexts for music engagement and interpreting the meaning drawn from and created by such contexts, using principally qualitative approaches. They illustrate complementary methodologies which enable access to real-life situations and allow participants to express and reflect on the ways that music is used in everyday life in age-appropriate ways. Each paper focuses on a critical stage in development, exploring how music helps smooth important life transitions.

The symposium begins with an enquiry into musical engagement in toddlers, using experience sampling methodology to capture everyday experiences over short timespans (Lamont). At this age participants are unable to respond for themselves, but the paper draws on established methods for studying this age range by drawing on the secondary accounts of caregivers. The paper compares contexts and explores issues of control over musical exposure in the early stages of development. The second paper (Ashley & Durbin) draws directly on children’s first-hand accounts of their musical preferences, music listening behaviour and mood regulation at ages 10-12. This continues the focus on uncovering authentic musical experiences, exploring a period when children begin to engage in substantial music listening and begin collecting their own music.

The third paper (Saarikallio) continues the focus on mood regulation, here in relation to adolescents’ use of music. Adolescence has been identified as a key transition period during which music serves an important role in identity formation and emotion regulation. This paper traces differences due to gender, age, and musical background, as well as focusing on adolescents’ general abilities to regulate their own moods. The final paper (Greasley & Lamont) uses open-ended interviewing techniques to explore the importance of music in early adulthood, when many respondents report having begun serious music collecting. This continues the emphasis on mood regulation as well as uncovering important themes of musical style categorisation, technology, and the psychological functions of music in everyday life.




43.1 Toddlers’ musical worlds: Musical engagement in 3.5-year-olds

Alexandra Lamont

Keele University, UK

Background
Preschoolers are involved in a large number of different family and social contexts, and have the opportunity to hear and engage with a wide range of music. They also are beginning to develop their own musical preferences.

Aims
This study explores preschoolers’ real-life engagement with music in everyday life, examining the degree of freedom that they have over music listening and the extent of engagement that they show in relation to different music in different contexts.

Method
30 children aged between 3.2 years and 3.9 years participated with their families, nursery teachers and other caregivers. Experience Sampling Methodology (ESM) was used to capture up to 21 episodes within a 7-day period. The person responsible for the child was contacted by mobile telephone and asked a series of questions about the child’s context and activity, focusing on whether any music was playing and, if so, who had chosen it and how the child was responding to it.

Results
A total of 407 episodes were captured through ESM calls. Only 20.7% had no music exposure either at the time of the call or at all during the 2 hours preceding; 42.6% had experienced music in the 2 hours preceding but not at the time of the call, and 36.7% at the time of the call. Children’s music was the most frequently heard, chosen and responded to, at 47% of all episodes; pop music was also frequently heard (38%) but less often chosen or responded to, and other styles comprised less than 15% of all music exposure at this age. Further analysis to be reported at the conference will explore in more detail the activities associated with music listening and the engagement shown by individual children in relation to different types of music in different settings.

Conclusions
Music is prevalent in the lives of young children, and they show differentiated responses to music that they have chosen and not chosen to listen to. Family and institutional influences on music choice are emerging as important in determining the kinds of music exposure children have, with their own music preferences contributing to this interaction.

Key words: Music in everyday life, Development, Context

[email protected]

43.2 Music preference, music listening, and mood regulation in preadolescence

Richard Ashley, Emily Durbin



Northwestern University, USA

Background
The middle school years (ages 10-12) are a time of rapid development for young people in many ways. They have a growing sense of identity, both individual and social, distinguished from their family of origin, and are developing their own musical tastes.

Aims
This study aims to investigate how musical preference develops at this age; how and why preadolescents choose the music they do; and how they use this music in their daily lives for purposes of emotional self-regulation and construction of social identity.

Method
10-11-year-old students were recruited from local schools and provided with a personal digital assistant holding MP3 files as well as the means for data collection about music listening and mood. The PDA is programmed to issue an alert at random times within predefined hours, and the students fill out a brief survey based on the short-form PANAS and a modified BDI, as well as indicating the music they have listened to that day. From this we will seek to see how students’ personality type and other factors serve as a starting point for their developing musical preferences, and how the music they choose serves to enable mood regulation. Of particular interest is the possible difference between young people with depressive tendencies, or at risk for depression, and other young people.

Results
Data collection is still underway at the time of this writing. We anticipate giving information about the daily moods of the students, as well as the music which they listened to, and the effect of listening to this music on their moods. As far as we know, we are the first project to attempt such sampling in this age group.

Conclusions
Young people spend much time listening to music, and anecdotal evidence connects music listening with mood regulation in an emotionally trying time of life. Understanding such processes in the earliest years of children’s conscious, strategic use of “their own music” is of great importance for understanding the relationship of music to the emotional lives of older adolescents.

Key words: Music in everyday life, Development, Emotion

[email protected]

43.3 Differences in adolescents’ use of music in mood regulation

Suvi Saarikallio

University of Jyväskylä, Finland

Background
Mood regulation is one of the most important reasons for engaging in music in adolescence. However, there has been little theory-based research on the realization of different mood-regulatory strategies by everyday musical activities.



Aims
The study aimed at exploring how adolescents use different strategies of mood regulation by music, and how regulation is related to differences in age, gender, musical background, and abilities of general mood regulation.

Method
1515 adolescents, 652 boys and 820 girls, completed a questionnaire. There were three age groups: 10-13-year-olds (mean age 11.76), 13-15-year-olds (13.76), and 15-20-year-olds (16.67). The questionnaire consisted of background information, questions on musical activities, the Music in Mood Regulation scale (MMR), and four measures of general mood regulation. The MMR is based on the author’s previous qualitative work on inductively constructing a theory of music in mood regulation. It consists of 40 items assessing seven regulatory strategies. The survey data supported the validity and reliability of the measure.

Results
The most employed regulatory strategies for both boys and girls in all age groups were “Entertainment”, “Revival”, and “Strong Sensation”. Girls used music for mood regulation more than boys. Regulation increased with age, and the increase occurred earlier in girls than in boys. Greater use of music for mood regulation was related to a greater amount of listening, singing and playing, making songs, music being played in the family, greater diversity in musical preferences, and the experience of music as an important part of life. Preferences for certain musical genres were related to certain regulatory strategies. The most important musical activity for regulating mood was listening, but for certain strategies the preferred activity was playing or singing. The use of music in mood regulation was also correlated with abilities of general mood regulation.

Conclusions
The survey gave support to the author’s theory of mood regulation by music, and demonstrated personal factors related to differences in regulation. It provided valuable information on how music functions as a means for mood regulation, and how person-related factors contribute to the creation of the emotional meaning of music in the everyday life of adolescents.

Key words: Music in everyday life, Adolescence, Mood regulation

[email protected]

43.4 Music preferences in adulthood: Why do we like the music we do?

Alinka Greasley, Alexandra Lamont

Keele University, UK

Background
A wide range of psychological approaches have been used in the study of music preferences and music in everyday life, yet none of these has yet successfully approached the complexities of meaning involved in people’s everyday use of music.

Aims
The study aimed to map individual levels of engagement with music; to tap into the content and quality of people’s emotional experiences with the music they prefer and why this is valued; to investigate how technology affects the ways people engage with music; to investigate how people categorise music; and to discover what it is about the specific characteristics of the music that is preferred.

Method
Qualitative methodology was used to ground the research in people’s experiences with music, to identify how they account for their listening behaviour, and to focus on what they perceive to be important in shaping their music preferences. Interviews were carried out in participants’ homes, enabling a thorough investigation of preference in front of their music collections, in some cases asking participants to actually play the music and encouraging them to express the thoughts and feelings it evoked. 30 participants (aged 18-73) took part and the transcripts were analysed using thematic analysis.

Results
Participants talked extensively about their engagement with music, and were able to articulate in detail why they liked (and disliked) certain styles of music. Themes identified included the complex and idiosyncratic ways in which people categorise and organise the music they own; the role of technology in shaping everyday engagement with music; the number of different styles people used; the psychological functions of music in everyday life; the way in which people use music as a form of “self-therapy”, especially for purposes of mood regulation; and changes and fluctuations in music preferences over time.

Conclusions
Adults are consciously aware of the many ways in which they use and engage with music, and emphasise music as a meaningful and important personal and social undertaking. This study provides a rich soil from which to develop hypotheses about the nature of people’s engagement with music, and highlights important dimensions of preference for further investigation.

Key words: Music in everyday life, Music preference, Mood regulation

[email protected]


44 Development

44.1 Lullaby and good night

Sandra Trehub

University of Toronto, Canada

Background
Lullabies are used to soothe infants or lull them to sleep. Although they are found throughout the world, their greatest use occurs in societies in which infants sleep with their mother and remain in close contact with her much of the time. Mother and infant may be lulled or soothed simultaneously. At times, however, the singer offers bribes for falling asleep (e.g., a mockingbird) or threats for failing to do so (e.g., throwing the infant into a ditch or into a wolf’s jaws), which implies a separate maternal agenda. Divergence between the words and musical form highlights the distinct functions of lullabies for singer and listener.

Aims
The principal goal of this presentation is to shed light on the functions of lullabies for mothers (the usual lullaby singers) as opposed to infants.

Main contribution
The study focuses, in particular, on lullaby lyrics, with a view to discerning the functions of lullabies in different societies. Because the words are incomprehensible to young listeners, lullaby singers have a unique opportunity to “speak their mind”, out of earshot of others if they choose to do so. On the one hand, they celebrate their positive feelings and aspirations for infants, the most common theme in lullaby lyrics. On the other hand, they may complain about their lot in life, either in ritualized form (i.e., by means of conventional lyrics) or through improvised lyrics that delineate their specific woes. Examples of these different lullaby types will be provided.

In this light, complaint lullabies are similar to laments in the sense that they express grief or regret. Lullabies also share properties with work songs, which promote physical and emotional synchronization among workers, relieve the boredom of repetitive work, and provide opportunities for subversive verbal expression (e.g., escape fantasies of slaves). Lullaby singing is on the wane in many industrialized societies. Why is that the case, and what are the consequences for mother and infant?

Implications
The implications of contradictory emotional intentions in lullabies with soothing melodies and disturbing lyrics are unclear, but the issue is worthy of consideration.

Key words: Infancy, Context, Lullaby

[email protected]

44.2 To sing or not to sing: Communication in early social interactions

Elena Longhi

Roehampton University, London, UK

Background
When mothers are in the company of their young infants, they sing and speak to them in an infant-directed style. Interestingly, both infant-directed speech and infant-directed singing appear to promote infants’ attention, regulate their arousal and support interpersonal bonding (Trehub and Trainor, 1998). However, Nakata and Trehub (2004) suggested that the different acoustic and structural properties of ID speech and ID singing might support different patterns of attention and engagement in infants. In fact, ID speech and ID singing might also affect communication in early social interaction differently. On this issue, Longhi (2003) found that singing in interaction is a powerful form of communication when infants are 3-4 months of age, but not later when they are 7-8 months of age.

Aims
The purpose of this study is to explore the nature and development of communication in early social interactions: in particular, how communication emerges in different kinds of interactions, and whether communication changes with the infant’s development.

Method
20 mother-infant dyads were tested when infants were about 4 months of age and later when they were about 8 months of age. The mothers were asked to sing and play and also to talk and play with their infants for 10 minutes. The sessions were video-recorded in a dedicated quiet room in the Social Development lab at Roehampton University. The mothers’ and infants’ participation in the different kinds of interactions was coded and analysed, focusing in particular on the infants’ attempts to communicate, their level of engagement and their emotional state.

Results and discussion
Preliminary results indicate that 4-month-old infants tend to be more communicative, interested in the interaction, and also appear to experience more positive emotions during singing interactions than speaking ones. By contrast, at around 8 months of age they appear to make more communicative attempts, and show more positive emotions and interest in the interaction, during speaking interactions rather than singing ones. This suggests that maternal speaking and singing might be affected by different patterns of infants’ responsiveness. Moreover, infants’ development, in particular their language development, might affect the kind of communication in early social interactions.

References
Longhi, E. (2003). The temporal structure of mother-infant interactions in musical contexts. PhD Thesis, The University of Edinburgh.



Longhi, E., & Karmiloff-Smith, A. In the beginning was the song: The complex multimodal timing of mother-infant musical interaction. Behavioural and Brain Sciences, 27(4), 516-517.

Nakata, T., & Trehub, S.E. (2004). Infants’ responsiveness to maternal speech and singing. Infant Behaviour and Development, 27, 455-464.

Trehub, S., & Trainor, L. (1998). Singing to infants: Lullabies and play songs. Advances in Infancy Research, 12, 43-77.

Key words: Mother-infant, Interaction, Communication

[email protected]

44.3 Understanding performance anxiety in the adolescent musician: Approaches to instrumental learning and performance

Ioulia Papageorgi

Institute of Education, University of London, UK

Research on musical performance to date has tended to focus on adult musicians. Although there is anecdotal evidence that problems can begin at a younger age, with adolescence being a particularly critical period, there has been no systematic research into performance anxiety in younger performers. The results of a research study exploring how performance anxiety affects adolescent instrumental musicians are presented. A new self-report questionnaire was piloted and administered to 410 adolescent musicians aged 12-19 years old in the UK and Cyprus. Principal components factor analysis was conducted to explore student approaches to instrumental learning and performance and to investigate the potential influence of performance anxiety. The analysis revealed four components that represented both positive and negative approaches. Positive approaches were “evidence for achievement, positive musical identity and effort” and “tendency for self-reliance and intrinsic motivation”. Negative approaches were “suggestion for susceptibility to maladaptive anxiety” and “indication of unsuccessful coping with anxiety, lack of motivation and low self-confidence”. Differences in nationality, age, gender and examination undertaking were observed across the four extracted components. Each component contributed to a different extent to the prediction of students’ level of attainment (evidenced by their last examination result), their amount of weekly practice, and the number of public performances they gave annually.

Key words: Performance anxiety, Musical performance, Adolescent musicians

[email protected]

44.4 Seeing into the future? Predicting achievement at a con-servatoire

Janet Mills, Rosie Burt



Royal College of Music, London, UK

How can oversubscribed higher education institutions select the candidates who would become the highest-achieving students? Debate over the role, in such selection, of national examinations taken at UK schools has a long history (Petch, 1956, 1966).

The nine UK conservatoires, which train performers, select students through audition. We investigated whether longitudinal data relating to the 170 undergraduates in Year 2 or Year 4 at one conservatoire in 2005 predict their level of achievement there. Thirteen variables were drawn from:
- central records of assessment data, including audition scores and, for students from UK schools, school examination scores
- data reflecting students’ participation in a research project that tracks their progress longitudinally over three years.

Factor analysis of the data for Year 4 students (n=95) extracted four factors that account for 66% of the variance. Factor 1 comprises five written verbal conservatoire assessments. Factor 2 embraces four assessments, including the audition, that are recitals. Factor 3 reflects students’ uptake of opportunities to participate in the research project. Factor 4 contains two assessments requiring written response to aural presentations of music, including the school examination.

Of the 119 students who took the UK school examination, the 91 with top grades achieved significantly higher in their audition, and also in each of their four (Year 2s) and nine (Year 4s) written and further recital assessments to date.

Of the 170 students overall, those with an audition score above the median achieved significantly higher in recitals, but also significantly lower in two written assessments. This result was replicated among the 119 students from UK schools.

Pre-course auditions allow the conservatoire to select students whose performance standard exceeds the minimum for the top grade in the performance component of the UK school examination. The greater power of another measure, the school examination result, to then predict students’ overall achievement at the conservatoire reflects a “threshold-enhancement” model of prediction that was proposed over 20 years ago (Bruton-Simmonds, 1969; Mills, 1983, 1984; Seashore, 1938). Knowledge that this model is at play in the conservatoire may help it to support students across the range of their degree programme.

Key words: Conservatoire, Selection, Assessment

[email protected]

44.5 “Anything Goes”: A case study of extra-curricular musical participation in an English secondary school

Stephanie Pitts

Department of Music, University of Sheffield, UK

This paper presents the findings of an empirical investigation into secondary school students’ experiences of participating in a school production of the Cole Porter musical “Anything Goes”. The study was prompted by a broader interest in the effects and experiences of musical participation (Pitts, 2005), and by the absence in the research literature of any qualitative investigation of the extra-curricular activities that form a vital part of many young people’s musical development. The project therefore focused on individual motivation and experience, exploring the effects of the school show not just on its participants, but also on the broader school community.



Research methods were devised to capture the views of a representative sample of the school population, before focusing in more detail on the experiences of a smaller number of participants. A questionnaire was distributed to all students in Year 7 (aged 11-12) and Year 10 (aged 14-15), and the novel method of audio diaries was also used, whereby 5 participants were given a portable tape recorder to record their own views and those of their friends and fellow performers during the weeks leading up to the performances.

Results showed a widespread awareness of the show amongst non-participants, and a general belief that it made a valuable contribution to school life. Some Year 7 students anticipated future involvement, and were more likely than the Year 10s to express disappointment at having failed to audition successfully for this production. Amongst participants, the costs and benefits of participation were evident in descriptions of the intensity and commitment involved in rehearsals, the effects on friendship groups of spending time with like-minded people, and the challenges to participants’ own musical, personal and social development.

This study complements existing knowledge of the measurable positive effects of extra-curricular activity on school engagement and academic achievement (e.g. Cooper et al., 1999; Jordan & Nettles, 2000), by focusing on individual experiences in order to understand what music-making outside the curriculum contributes to school life. The paper will conclude with some recommendations for music education practitioners, and will outline plans for longitudinal research which aims to investigate the longer-term impact of musical involvement in the school years.

References
Cooper, H., Valentine, J. C., Nye, B. & Lindsay, J. J. (1999). Relationships between five after-school activities and academic achievement. Journal of Educational Psychology, 91(2), 369-378.

Jordan, W. J. & Nettles, S. M. (2000). How students invest their time outside of school: Effects on school-related outcomes. Social Psychology of Education, 3, 217-243.

Pitts, S. E. (2005). Valuing Musical Participation. Aldershot: Ashgate.

Key words: Musical participation, Performance, Motivation

[email protected]

44.6 The effect of real-time visual feedback on the training of expressive performance skills

David Hoppe1, Alex Brandmeyer1, Makiko Sadakata1, Renee Timmers2, Peter Desain1

1Music Mind Machine Group, NICI, Radboud University Nijmegen, The Netherlands2Center for History and Analysis of Recorded Music, Music Department, King’s College London,UK

Background
In traditional music education, the teacher usually provides the student with verbal feedback. Two major problems arise from this teaching method. First, verbal feedback is prone to ambiguous interpretation. Second, there exists a time lag between the student’s performance and the teacher’s feedback, this gap in time being the critical learning period for the student. One way of overcoming these problems is to provide the music student with real-time visual feedback (RVFB), which is immediate and unambiguous. RVFB has already been applied successfully in music education, for example in singing training and in teaching the musical expression of emotion.



Aims
Our primary aim was to investigate the effect of a new type of RVFB, which presents abstract shapes, on learning to imitate short rhythms with expressive variations. Secondly, we address the transferability of rhythm production skills. Finally, we test whether the effect of the training is task-independent: for example, production training may not only improve motor skills, but may also increase perceptual sensitivity to timing differences.

Method
We conducted an experiment with a between-subjects pretest-posttest design. Twenty-four amateur musicians (13 years of music lessons on average) were divided into two groups, an RVFB group and a control group. RVFB was presented during the training phase. Simple three-interval rhythmic patterns with timing and dynamic variations were used as stimuli. Both production and perception were assessed in a pre- and posttest, the former using an imitation task, the latter using a discrimination task. Participants were trained only on imitating short rhythms with expressive variations.

Results and Conclusions
The experiment and analyses are currently in progress. We expect to find the following: 1) a larger learning effect for the RVFB group than for the control group (effectiveness of the new type of RVFB), 2) improvement of task performance for rhythms which were not used in the training (transfer-of-learning effect), and 3) improvement in perceptual sensitivity to timing differences as a consequence of the production training (task-independence).

Key words: Real-time visual feedback, Expressive performance, Learning

[email protected]


Cognition and computational 45

45.1 Melodic perception in multi-voice music: The perceptual role of contour

Jung Nyo Kim

Northwestern University, USA

Background
Studies on memory for single-voice melodies have revealed that melodies can be represented in memory in terms of musical features such as pitches, contours, intervals, and so on. Studies on memory for multi-voice music, however, have focused on the effect of other features such as register of voices and texture.
Aims
The present study investigates the encoding features used in memorizing multi-voice music, focusing especially on the possible role of contour. It was expected that contour would be more salient in the lowest voice, but less so in the highest voice, because listeners tend to pay more attention to the high voice. Detailed pitch information about the high voice is likely to get into memory, but the low voice is likely to be encoded in terms of a more general feature, contour.
Method
72 three-voice examples, each seven notes long, were composed as the standard stimuli. Comparison stimuli were either identical to or different from the standards. The different examples involved a pitch change in one outer voice, half with and half without contour changes. In experiment 1, 11 participants listened to the standard only once, then listened to the comparison and answered whether the two were the same or different. In experiment 2, four participants listened to the standard three times. Data were analyzed via signal detection theory.
Results
In experiment 1, participants had high sensitivity to pitch changes in the high voice and, as expected, there was no effect of contour. However, most showed no sensitivity to changes in the low voice. In experiment 2, which was designed to improve performance for the low voice, there was again no effect of contour, although participants could now encode the voice.
Conclusions
Earlier studies show that contour is an important feature in memory for single-voice melodies, but it was not salient in the current multi-voice textures. These results raise the question of how studies of single melodies apply to the perception of voices in multi-voice music. To investigate further, new experiments will be conducted to compare the contour effect in single- and multi-voice music.
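The abstract states only that the data were analyzed via signal detection theory. As a minimal sketch of the sensitivity index this implies, d′ can be computed from hit and false-alarm rates; the log-linear correction of 0.5 and the example trial counts below are illustrative assumptions, not figures from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (a log-linear correction) keeps the
    z-transform finite when an observed rate would be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # standard-normal quantile function
    return z(hit_rate) - z(fa_rate)

# A hypothetical listener who detects 20 of 24 changed trials but
# false-alarms on 4 of 24 unchanged trials:
print(round(d_prime(20, 4, 4, 20), 2))  # → 1.83
```

A d′ near zero, as reported here for most listeners on the low voice, indicates chance-level discrimination.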

Key words: Memory, Melodic perception, Contour

[email protected]

45.2 Towards better automatic genre classifiers by means of understanding human decisions on genre discrimination

Enric Guaus1, Perfecto Herrera2

1Sonology Department, High School of Music of Catalonia (ESMUC), Spain
2Music Technology Group, Pompeu Fabra University, Spain

Research on how humans categorize music genres is still in its infancy. There is no computational model that explains how musical features are attended to, selected and weighted in order to yield a genre decision. We also lack a model that explains how new categories are created and integrated into our musical knowledge structures. Contrastingly, one of the most active areas in Music Information Retrieval is that of building automatic genre classification systems. Most of these systems can achieve good results (80% correct decisions) when the number of genres to be classified is small (i.e. fewer than 10). They usually rely on timbre and rhythmic features that cover neither the whole range of musical facets nor the whole range of conceptual abstractness that seems to be used when humans perform this task.

The aim of our work is to improve our knowledge about the importance of different musical facets and features in genre decisions. We present a series of listening experiments in which audio has been altered in order to preserve some properties of the music (rhythm, timbre, harmonics...) while degrading others. It was expected that, for specific genre discriminations (e.g., folk versus pop), timbre alterations would be more critical than rhythm alterations. In other words, we try to find out whether a given genre can be identified by a single property of music or by a weighted combination of properties.

The pilot experiment we report here used 42 excerpts of modified audio (representing 9 musical genres). Listeners with different musical backgrounds had to identify the genre of each excerpt.

Results of this survey have been used to build an ensemble of genre-tuned classifiers that improved the observed performance by up to 4 percentage points over a standard generic classifier for the 9 genres. These results show that understanding our perceptual and cognitive constraints and preferences is important when building a MIR system. With the inclusion of this information, the accuracy of automatic genre recognition systems is improved, and their computational efficiency increases as well.

Key words: Genre classification, Machine learning, Listening experiments

[email protected]


Thursday, August 24th 2006 327

45.3 “Voice” separation: Theoretical, perceptual and computational perspectives

Emilios Cambouropoulos

Department of Music Studies, Aristotle University of Thessaloniki, Greece

Background
Recently, there have been a number of attempts (e.g. Temperley, 2001; Cambouropoulos, 2000; Kilian & Hoos, 2002; Chew & Wu, 2004; Kirlin & Utgoff, 2005) to model computationally the segregation of polyphonic music into separate “voices”. Much of this research is influenced by empirical studies in music perception (e.g. Bregman, 1990; Huron, 2001), as well as by more traditional musicological concepts such as melody, counterpoint, voice-leading and so on. It appears that the term “voice” has different meanings in different research fields (traditional musicology, music cognition, computational musicology); in the computational domain especially, researchers sometimes adopt simplistic, if not naïve, views on this topic that lead to limited or uninteresting results.

Aims
The paper presents different views of what “voice” means and how the problem of voice separation can be systematically described, with a view to understanding the problem better and developing a systematic, perceptually-based description of the cognitive task of segregating “voices” in music. Vague (or even contradictory) treatments of this issue within the main music research fields mentioned above will be presented.

Main Contribution
In different contexts, “voice” is taken to mean literally a human choral voice (or instrumental voice), a harmonic “voice” relating to voice-leading, or a perceptual “voice” relating to auditory streaming. The paper explores primarily the perceptual aspects of voice segregation and proposes a set of definitions and rules that systematically describe the whole process (a computational model is under development). The proposed model enables a monophonic melodic sequence to be split into more than one “voice” (implicit polyphony), a polyphonic excerpt (e.g. a fugue) to be separated into a number of constituent voices, and a homophonic work to be broken up into a melodic voice and an accompanying chordal “voice” (more than one simultaneous note is allowed); the model can deal with any musical texture consisting of both homophonic and polyphonic elements.

Implications
The notions of “voice”, as well as homophony and polyphony, are thought to be well understood by musicians. Listeners are thought to be capable of perceiving multiple “voices” in music. However, there exists no systematic theory that describes how “voices” can be identified, especially when polyphonic and homophonic elements are mixed together. The current paper highlights a number of issues in this vein and also proposes a systematic theory that can be implemented as a computer program (voice separation algorithms are very useful in computational applications, e.g., in Music Information Retrieval).

Key words: Voice separation, Stream segregation

[email protected]



45.4 The evolution of melodic complexity in the music of Charles Parker

Paul Hodgson

University of Sussex, UK

This paper shows the results of experimental work on the artificial evolution of musical structure. Every known recorded instance of Parker’s rendition of “Ornithology” is used as a data set from which to build an evolutionary simulation of generative complexity. The focus of attention is the first two bars of every sixteen-bar section. Fundamental primitives are identified as dyadic structures, and a Narmourian analysis of every two-bar phrase is undertaken in an attempt to derive a higher-level reduction of melodic structure. Evolutionary methods are then used to try to evolve coherent phraseology in the style of Parker.

This work demonstrates that the maximal exploration of a musical space is greatly enhanced if the primitives used for recombination in the generation of larger, more complex structures are small enough to generate variety and are not randomly selected. It also shows that if these primitives are combined randomly to generate larger structures, they will not necessarily generate music that is stylistically coherent. To achieve this, higher-order information in the form of structural rules has to be inserted into the program to generate a level of complexity that produces the optimal exploration of a style space.

Key words: Creativity, Complexity, Evolution

[email protected]

45.5 A user-dependent approach to the perception of high-level semantics of music

Micheline Lesaffre1, Liesbeth De Voogdt1, Marc Leman1, Hans De Meyer2, Jean-Pierre Martens3

1Department of Musicology (IPEM), Ghent University, Belgium
2Department of Applied Mathematics and Computer Science, Ghent University, Belgium
3Department of Electronics and Information Systems (ELIS), Ghent University, Belgium

The maturing of music information retrieval systems outlines an attractive future for emotion-based retrieval of music. In view of making such music systems appealing and functional to the user, research is needed that provides user-dependent knowledge on the perception of high-level semantics of music. The present paper reports the results of an experiment that explores how users perceive affects in music, and what structural descriptions of music best characterize their understanding of musical expression. 79 potential users of music information retrieval systems rated different sets of adjectives while they were listening to 160 pieces of real music. The study aimed at investigating the relationships between musical expressivity and musical structure. An attractive feature of this study is that it focuses on user profiles. The subject group (79) was recruited amongst 774 participants in a large survey on the music background, habits and interests, preferred genres, taste and favourite titles of people who are willing to use interactive music systems. Moreover, the stimuli used reflected the musical taste of the average participant in the large survey (774). The study reveals that perceived qualities of music are affected by the profile of the user. Significant subject dependencies are found for age, music expertise, musicianship, broadness of taste and familiarity with classical music. Furthermore, interesting relationships are discovered between expressive and structural features. Analyses show that the targeted population agrees most unanimously on loudness and tempo, whilst less unanimity was found for timbre and articulation. Finally, our findings are tested and validated by means of a demonstration application of a semantic music recommender system that supports the querying of a music database by semantic descriptors for affect, structure and motion. The system, which recommends music from a relational database containing the quality ratings provided by the participants, illustrates the potential of user-dependent, emotion-based retrieval of music.

Key words: Perceived music qualities, Interactive music systems, Semantic descriptors

[email protected]

45.6 Acquiring new musical grammars - a statistical learning approach

Psyche Loui1, David Wessel2

1Department of Psychology & Center for New Music and Audio Technologies, University of California at Berkeley, USA
2Department of Music & Center for New Music and Audio Technologies, University of California at Berkeley, USA

Background
How do humans learn a musical system? The ability to learn musical harmony has been investigated in various ways, including studies with infants, comparisons of musicians and nonmusicians, and ethnomusicological analyses.

Aims
The present study explores a new paradigm for investigating the acquisition of harmonic knowledge: the statistical learning of artificial musical grammars.

Method
We use the Bohlen-Pierce scale, a microtonal tuning system based on 13 logarithmically even divisions of a three-to-one frequency ratio. Based on tone profiles reflecting psychoacoustical principles of harmony (where low-integer frequency ratios are perceived as more harmonious), we generated two chord progressions which formed the bases for our two new musical grammars. Melodies were composed according to these two grammars, and participants were exposed to one of the two sets of melodies. Tests conducted to assess learning included pre- and post-exposure probe tone ratings, forced-choice recognition tests, and subjective preference ratings for each melody.
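The tuning described above is easy to make concrete: each of the 13 steps multiplies the frequency by 3^(1/13). A short sketch (the 220 Hz reference frequency is an arbitrary choice for illustration, not a value taken from the study):

```python
def bp_frequencies(f0=220.0, steps=13):
    """Frequencies of one 3:1 "tritave" of the Bohlen-Pierce scale:
    `steps` logarithmically even divisions, f_n = f0 * 3**(n / steps)."""
    return [f0 * 3 ** (n / steps) for n in range(steps + 1)]

freqs = bp_frequencies()
print(round(freqs[0], 1), round(freqs[-1], 1))  # → 220.0 660.0
```

The 13th step lands exactly on the tritave (3 × f0), the Bohlen-Pierce analogue of the octave.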

Results
In Experiment 1, five melodies were presented repeatedly for 25 minutes. Participants could correctly identify melodies they had heard, but were not above chance at identifying new melodies that followed the same grammar. One group of participants also rated the familiar melodies as preferable to unfamiliar ones; however, ratings for new melodies were undifferentiated across the familiar and unfamiliar grammars. This shows that participants learned the given melodies, but did not generalize to the underlying grammar. Experiment 2 was similar to Experiment 1 except that 15 melodies were presented for 25 minutes. Again, forced-choice tests showed that participants recognized melodies presented during the exposure phase. In addition, post-exposure probe tone ratings correlated significantly more strongly with the melodies than pre-exposure ratings, suggesting that listeners were sensitive to the statistical hierarchy of the melodies after only 25 minutes of exposure.
Conclusions
Results suggest that after limited exposure to a sufficiently large number of melodies, participants could recognize melodies and extract the underlying statistics of a new musical system. Ongoing work involves testing with a larger corpus of melodies, as well as experiments using different timbres. Implications of our results for the constraints of statistical learning will be discussed.

Key words: Statistical learning, Harmonic expectation, Musical grammar

[email protected]


Symposium: Similarity perception I 46

Convenors: Petri Toiviainen and Irène Deliège
Discussant: Geraint Wiggins

The concept of similarity is central to perception. It underlies processes such as object recognition, comparison, and classification, which are crucial to much of human cognitive processing. In the domain of music, similarity plays a vital role in fundamental perceptual and cognitive processes such as grouping, segmentation, musical variation, genre recognition, and expectancy. Furthermore, memory and schematization processes in real-time listening rely on the recognition of small figures, or cues, which again is based on the notion of similarity (Deliège, 2001). Therefore it is evident that an understanding of the basic processes underlying the perception of musical similarity is necessary for acquiring a deeper comprehension of music perception in general.

A further, more practical, motivation for research on musical similarity comes from the field of Music Information Retrieval (MIR), which aims to develop computational methods for extracting relevant information from digital representations of music, including both audio and symbolic forms. Central areas within MIR are music classification and content-based retrieval, both of which rely heavily on the notion of similarity. Some of the models developed rely on music perception research; others are more concerned with issues related to computational efficiency. The majority of empirical work on, and computational models of, musical similarity deal with melody, due to the central role it plays in the perception and memorization of music.

The purpose of the symposium is to discuss recent advances in research on musical similarity. The main emphasis will be on melodic similarity, but similarity in other musical dimensions will be tackled as well. The topics discussed include the relation of motivic, segmental, and accent structure to melodic similarity, the context-dependency of similarity perception, models of content-based retrieval, social dimensions of melodic similarity, as well as the connection between internal and external musical similarity.

46.1 Similarity relations in listening to music: How do they come into play?

Irène Deliège




University of Liège, Belgium

In this essay, similarity relations in the perception of music are studied from two different standpoints. The first, said to be external, studies comparisons between distinct and autonomous musical entities, i.e. different works or different interpretations. The second is internal and looks at similarity relations which the listener identifies within the same work or part of a work, and within the same interpretation. This last procedure is developed in different directions. We show the importance of the similarity factor: (i) in the actual composition process, as the composer seeks unity and coherence in his or her piece; (ii) in folk music; (iii) as an essential element for the listener who strives to understand the piece as he listens to it in real time. This last point is of fundamental importance for the study of cognition in general, i.e. the implicit or explicit role of similarity in perception processes.

Concrete examples of the main theoretical points described here can be identified in the course of the different phases of the cue extraction model that I developed recently, i.e. segmentation, categorization, schematization, and imprint formation (Deliège, 1989, 1991). The SIMILARITY/DIFFERENCE axis is a central element in the structure of the model. The empirical approaches used in experimenting with the different phases of the model are analyzed in this two-fold perspective. From the very beginning, this was the aim of a series of procedures that aimed at showing the implicit or explicit aspects of the processes that musician and non-musician participants use during the experimental listening process. An overall view of the results obtained confirms that the assumptions were correct.

[email protected]

46.2 Growing oranges on Mozart’s apple tree: Intra-opus coherence and aesthetic judgment

Zohar Eitan1, Roni Granot2

1Tel Aviv University, Israel
2The Hebrew University, Israel

Music theorists (e.g., Schoenberg, 1978) often presume that the various themes in a musical masterpiece match “organically,” in a way that enhances aesthetic value and unity. This supposed intra-opus (piece-specific) coherence should be distinguished from that associated with inter-opus constraints, such as tonal structure, musical forms, or a composer’s style. A number of works have investigated how violating inter-opus constraints affects listeners’ aesthetic judgments, by altering or reordering materials within a single piece (e.g., Bigand & Tillmann, 1996, 2001; West-Marvin & Brinkman, 1999). To examine whether intra-opus unity affects aesthetic judgments, one needs to take the opposite course, comparing a hybrid which combines materials from different pieces with the original compositions, while controlling for inter-opus cohesive features. This proposal reports such a comparison. In the study reported, we selected the opening movements of Mozart’s piano sonatas K. 332 and K. 280 - sonata-form movements sharing meter, tempo, key and tonal structure, but differing considerably in thematic material. A hybrid was created from recordings of these movements, performed by the same performer, such that sections from one piece were replaced by structurally equivalent sections from the other. We presented K. 332 and the hybrid piece in counterbalanced order to subjects (60 adults, 29 musically trained) as two versions attributed to Mozart, and asked them to rate each version numerically for aesthetically relevant features including coherence, expressiveness, and interest, and to indicate which version they preferred. Listeners first rated the pieces after a single hearing, and again after week-long exposure to both versions. Partial results show no significant preference for the original piece on any scale, even after repeated hearings. Moreover, after repeated exposure musicians indicated a stronger preference than nonmusicians for the hybrid, and rated it as more coherent than the original. Results suggest that if themes in a musical masterpiece are indeed “organically” unified, such unity may elude experienced listeners even after extended exposure. We discuss these results in light of recent arguments supporting the priority of local over global coherence in aesthetic appreciation (Levinson, 1998), the roles of similarity and contrast in perceived coherence, and the Ars combinatorica of the Classical period.

[email protected]

46.3 Approaches for content-based retrieval of symbolically encoded polyphonic music

Kjell Lemström, Anna Pienimäki

Helsinki University, Finland

In this paper we deal with content-based retrieval of symbolically encoded music, one of the key issues in the field of music information retrieval. Due to rather extensive research, there already are satisfactory methods for monophonic music retrieval. However, to obtain satisfactory results with polyphonic music as well, there is still much to be done. We classify the methods that have been developed for polyphonic content-based music retrieval into two approaches. The methods of the first approach use the well-known edit distance framework and encode music as linear, one-dimensional strings. The other, more recent approach uses a piano-roll-like representation of music. Instead of comparing the individual methods, we compare the two approaches and specify their pros and cons. We conclude that the geometric approach seems more promising for the task, although, at the moment, it lacks some important properties that are needed for real-world applications.

[email protected]

46.4 Transportation distances and human perception of melodic similarity

Rainer Typke

Utrecht University, The Netherlands

This article describes how transportation distances such as the Earth Mover’s Distance can be used for measuring melodic similarity in notated music. We represent music notation as weighted point sets in a two-dimensional space of onset time and pitch. The Earth Mover’s Distance can then be used for comparing point sets by determining how much work it would take to convert one of the point sets into the other by moving weight between them. For evaluating how well this method and other methods agree with human perception of melodic similarity, we established a ground truth for the RISM A/II collection based on the opinions of human experts. The RISM A/II collection contains about half a million musical incipits. For 22 queries, we filtered the collection so that about 50 candidates per query were left, each of which we then presented to about 30 human experts (out of a group of 75 experts) for a final ranking. We present our filtering methods, the experiment design, the resulting ground truth, and a new measure (called “Average Dynamic Recall”) that can be used for comparing different similarity measures against the ground truth.
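The abstract does not spell out the computation; as a toy sketch, for two point sets of equal size and equal unit weights the Earth Mover’s Distance reduces to a minimum-cost matching, which for short incipits can be found by brute force (the (onset, pitch) encoding below is an illustrative assumption; real systems use a proper transportation solver and the weighting described in the paper):

```python
from itertools import permutations
from math import hypot

def matching_distance(a, b):
    """Minimum total cost of matching two equal-size sets of
    (onset_time, pitch) points - the Earth Mover's Distance in the
    special case of equal unit weights, found here by brute force
    over permutations (fine for short incipits only).
    """
    assert len(a) == len(b)
    return min(
        sum(hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, perm))
        for perm in permutations(b)
    )

# Two toy three-note incipits, (onset in beats, pitch in semitones):
m1 = [(0, 60), (1, 62), (2, 64)]
m2 = [(0, 60), (1, 63), (2, 64)]
print(matching_distance(m1, m2))  # → 1.0 (one pitch moved a semitone)
```

With unequal total weights, weight may be split across several target points, which is what makes the general case a transportation (linear programming) problem rather than a matching.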

[email protected]


Musical Meaning II 47

47.1 The Hausdorff metric in the Melody Space: A new approach to Melodic Similarity

Pietro Di Lorenzo, Giuseppe Di Maio

Second University of Naples, Department of Mathematics, Italy

Background
The measure of similarity in music is based on different types of information related to specific approaches: symbolic (musicological analysis of scores, texts, and historical information, as well as numerical methods applied to scores, MIDI codes, etc.); acoustic (audio signal analysis, frequency spectrum, etc.); and subjective (theoretical investigations and experimental results). There are several mathematical models that offer computational tools for music similarity (often only the melodic kind). Unfortunately, comparisons with psychological experiments are missing. Moreover, the applications are strongly restricted to a few examples.
Aims
We attack the similarity problem in music by using theoretical mathematical techniques appropriate for measuring nearness between points or sets, as in (digital) image processing, e.g. the Hausdorff metric (Hausdorff, 1914; Barnsley’s image compression method, 1985), proximity spaces (Riesz, 1908; Ptak and Kropatsch, 1995), and digital topology (Rosenfeld, 1978; Rosenfeld and Klette). Image processing seems to parallel music similarity. Nevertheless, a careful investigation shows that music has crucial and slightly different peculiarities. In fact, music is a dynamical process that involves the time dimension and human memory abilities. In this work, our main aim is to set out a model for music similarity. The deeper goal is to investigate the geometrical properties that the Hausdorff distance induces in the space of melodies.

Model
In this model, we use a numerical code to represent the musical score, namely its melodic part. The model is based on the Hausdorff metric, which seems to offer the appropriate mathematical tools for music similarity. We recall that the Hausdorff distance is defined between subsets of points.
Results
The preliminary results confirm some properties of music similarity: tonality invariance, partial time-translation proximity, etc. An appropriate choice of the time-scale unit is crucial. The study of the geometrical properties is still in progress.
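The abstract leaves the numerical coding unspecified; as a minimal sketch, taking a melody to be a finite set of (time, pitch) points, the Hausdorff distance between two melodies is the largest distance from any point of one set to the nearest point of the other (the point encoding and the Euclidean ground distance are illustrative assumptions):

```python
from math import hypot

def hausdorff(a, b):
    """Hausdorff distance between two finite point sets (here,
    melodies coded as (time, pitch) points): the largest distance
    from any point in one set to its nearest point in the other,
    taken in both directions.
    """
    def directed(xs, ys):
        return max(min(hypot(x[0] - y[0], x[1] - y[1]) for y in ys)
                   for x in xs)
    return max(directed(a, b), directed(b, a))

m1 = [(0, 60), (1, 62), (2, 64)]
m2 = [(0, 60), (1, 62), (2, 65)]
print(hausdorff(m1, m2))  # → 1.0 (last note a semitone apart)
```

Note that, unlike a transportation distance, the Hausdorff metric is driven by the single worst-matched point, which is one reason the choice of time-scale unit mentioned above is crucial.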




Conclusions
Our model points out some simple but deep geometrical facts and suggests further developments that offer a new perspective for studying music similarity in a mathematical framework.

Key words: Melodic similarity, Nearness and proximity, Hausdorff metric

[email protected]

47.2 Body and mind in the pianist’s performance

Isabella Poggi

Department of Education, University of Rome III, Italy

This work presents a theoretical model of the Pianist’s mind and an observational study of a Pianist’s body behaviour as a cue to validate this model. In piano playing, cognitive, emotional and motor processes take place in the Pianist’s mind. Cognitive processes include attention, perception and memory: visual images of meanings to convey and of written music to play, auditory images of sounds to produce, and the procedural memory of hand movements to perform. At the same time, emotions are felt by the Pianist during, or thanks to, the actual performance, and other emotions are self-induced in order to be impressed upon the music played. Cognitive and emotional processes converge into motor processes: hand and feet movements and their manner of movement give rise to the different aspects of music: melody, rhythm, harmony, timbre, tempo, intensity and expression. But the Pianist does not move only hands and feet; the trunk, head, face, mouth, eyes and eyebrows also often move during playing: they participate in the production of sound and its quality, and at the same time they express the Pianist’s cognitive and emotional processes. Previous works have shown that the Pianist’s head, face and trunk express cognitive and emotional states (Caterina et al., 2004) and that they cooperate with hand motions in the production of melody, rhythm, harmony, and intensity (Poggi, 2005). In this work, the performance of a classical pianist, Marcella Crudeli, was video-recorded during Mozart’s piano concerto K. 488, in concert and in rehearsal. Twenty fragments of the concert performance were analysed in terms of an annotation scheme (Poggi, 2005) that describes each movement of the Pianist’s trunk, head, face, mouth, eyes and eyebrows and classifies it in terms of its meaning and its relation to the concomitant musical performance: whether it cooperates with hand motions in the production of music or expresses emotions or other mental states. The analyses of the different fragments are compared to assess whether the functions of body movements are recurrent across different musical passages, and across performance in concert and in rehearsal.

Key words: Musical mind, Body communication, Multimodality

[email protected]

47.3 Personality correlates and the structure of music preferences

Marek Franek1, Pavel Muzik2



1University of Hradec Králové, Faculty of Informatics and Management, Hradec Králové, Czech Republic
2National Technical Library, Prague, Czech Republic

Typically, musical preference is studied in the context of music sociology and social psychology. The association between personality characteristics and musical preference has not been systematically studied. However, it seems that people’s preference for music reveals a great deal about their personalities and values. Recently, Rentfrow and Gosling (2003) conducted a study in which links between personality and music preference were investigated. Their study revealed four music-preference dimensions: “Reflective and Complex”, “Intense and Rebellious”, “Upbeat and Conventional”, and “Energetic and Rhythmic”. Preferences for these music dimensions were related to a wide array of personality dimensions.

The goal of our study was to carry out a cross-cultural comparison between an American and a Czech sample of music listeners. A selection of 15 musical genres was made and liking for them was examined. The list of musical genres reflected the different cultural tradition of the Czech Republic. Personality was assessed with the Eysenck Personality Inventory, which measures two personality traits: Extraversion and Neuroticism. The preliminary study was conducted with a sample of 440 subjects aged from 14 to 59 years.

It was found that extraversion correlates negatively with preference for classical music, country, and ethnic music, while it correlates positively with preference for rock, alternative music, world music, new age, soundtracks, rap, and hip-hop. Neuroticism is not associated with preference for any particular musical genre.

The factor analysis revealed five music-preference dimensions. Three of them are identical to the dimensions found in Rentfrow and Gosling’s study: “Intense and Rebellious”, “Upbeat and Conventional”, and “Energetic and Rhythmic”. The other two factors have certain similarities to the dimension “Reflective and Complex”. One factor is loaded by preference for classical music, folk, country, brass music and ethnic music, while the second is loaded by jazz, blues, soul, funk, gospel, folk, and classical music.

The differences between the structures of the music-preference dimensions are explained in terms of cultural differences between Central-Eastern Europe and the USA. The research now in progress examines the link between these five music-preference dimensions and personality dimensions using the Big Five Inventory and the Sensation-Seeking Scale.

Key words: Music preference, Personality

[email protected]

47.4 Musical communication and musical comprehension: A perspective from pragmatics

Richard Ashley

Northwestern University, USA

Background
Although many researchers speak of musical “communication,” the concept remains under-theorized relative to verbal or visual communication. Some view it as focusing on musical structure; others on emotion; still others on embodied metaphors. Each of these viewpoints provides valuable insights, but at present we lack a broad, general framework to unite such varied perspectives.

Aims
This paper reviews a range of significant works in pragmatics, a field vital to linguistics and the psychology of language, together with related work on language meaning and language use, and shows the ways in which concepts and methods from these fields illuminate processes of communication between musicians. The goal is to identify the primary ways in which musical communication can be understood relative to other forms of human communication and interaction.

Main Contribution
This paper provides a systematic approach to the idea of musical communication based on current theories of pragmatics, in linguistics and beyond. The starting point is post-Gricean pragmatics, both in the neo-Gricean versions of Levinson and Horn and in the Relevance Theory of Sperber and Wilson. Additional material from psycholinguistics (Clark and others) and discourse analysis is woven into a comprehensive approach; an analysis is presented of two videotaped sessions, one of musicians interacting while rehearsing, and the other of musicians’ gestures while discussing a piece of music. Together these show the approach’s suitability and robustness in dealing with combinations of verbal, “purely” musical (auditory), and gestural channels of communication. One major area addressed is that of Lakoff-style metaphorical approaches; the way in which a pragmatics-based theory makes sense of this approach while answering some of the criticisms typically raised against it is demonstrated.

Implications
Music cognition can be substantially enriched by broadening the base of disciplines and concepts with which it interacts. The case of pragmatics, along with psycholinguistics and discourse analysis, serves to show how our understanding of musical phenomena can be deepened and enriched by such a broadened base.

Key words: Music and language, Communication, Gesture

[email protected]

47.5 Sense and meaning in music: The correspondences hypothesis

Mario Campanino

University of Salerno, Italy
IDIS Foundation, Città della Scienza, Naples, Italy

The theme of meaning in music is very broad and, above all, still not well defined today. In this paper the background will be the analysis of three different theories: 1) that of the Belgian linguist Nicolas Ruwet, in his 1959 paper Contradictions du langage sériel; 2) a recent analysis of “Anthèmes”, a violin piece by Boulez, carried out by a pupil of Nattiez in Montreal, Jonathan Goldman; 3) a study of sense and meaning in music, Suoni Emozioni Significati, by Imberty.

In relation to this background, the aim of the paper will be to demonstrate that 1) the structural approach does not reveal the essence of musical language, 2) formal analysis collides with unavoidable problems in perceiving relationships between sounds, and 3) verbal language reveals itself to be an inadequate instrument for expressing the sense of music.

The main contribution will be the observation that poets often use sentences which are not to be understood in a literal sense but have to be read as images (as in the case of “Ergerti un trono vicino al sol!” by Verdi-Ghislanzoni), and which lose their original meaning once said in other words. So there is something words cannot say by reference to their own system of codified signifier/meaning relationships. Sense and meaning, then, are no longer related to the systemic semantics of verbal language, which is reduced to a tool for representing a figure that produces ineffable sense.

This reflection on poetic language opens an interesting perspective on musical language, which is able to represent images endowed with contextual and untranslatable sense. But what joins the image “Ergerti un trono vicino al sol” to what the poet wanted to say? What happens is a phenomenon of correspondence between forms and the meanings of these forms. The fact that these meanings are untranslatable into words does not prevent us from hearing their correspondence. The net of these correspondences generates sense phenomena in music.

Key words: Language, Semiotics, Meaning

[email protected]

47.6 Musical meaning: Imitation and empathy

Uwe Seifert, Jin Hyun Kim

University of Cologne, Institute of Musicology, Germany

Empirical studies of musical expressivity have mainly focused on acoustical and formal features of music that are assigned to the expression of emotional states. Here, the idea of “music as expression” has been connected to the view of music as code, which serves as the basis for many recent psychological and semiotic approaches in music research. On this view, music is regarded as a vehicle for (emotional) communication, insofar as musical meaning may be coded.

The paper will question these research approaches and develop a theoretical framework for studying musical meaning based on the relationship between music, motor behavior, and empathy. A general theory of “ästhetische Einfühlung” (esthetical empathy) was developed by Theodor Lipps as a means to explain the contents of artworks. Recently, empathy has been related to neuroscientific findings on mirror neurons as a basic mechanism for understanding others based on imitation. We will propose to extend this restricted sense of empathy, return it to its original meaning, and reevaluate Lipps’ theory in the light of recent interpretations of these neuroscientific findings. Further, we will discuss to what extent, in connection with the mirror-system hypothesis and recent results concerning the syntactic processing of music in Broca’s area, esthetical empathy might serve as a basis for studying the basic mechanisms and structures that give rise to musical meaning.

The neuroscience of social cognition furthermore provides evidence for coupled neural coding of sensory and motor processes, considered the basic structure of the inner imitation by which we understand others. The aspect of sensorimotor and affect integration, which shows a link between simulation and empathy, is crucial to studying musical meaning. Musical meaning does not seem to be explicitly coded, but rather to be ascribed through “embodied simulation” taking place during music production and perception. Research on musical meaning should therefore be directed toward the motor aspect of music production and perception, which seems to have an impact on the ascription of musical meanings. The relationship between music, motor behavior, and emotion may be elaborated in connection with the mechanism of “embodied simulation” that is basic to music production and perception guided by sensorimotor coupling.

Key words: Esthetical empathy, Mirror neurons, Musical meaning

[email protected]

48 Symposium: Music in infancy: The importance of context

Convenor: Alexandra Lamont

Most existing music perception and cognition research employs adult participants who can follow verbal instructions and make explicit responses to music. However, there has been a recent trend towards more implicit research methods, such as priming and neuroscientific techniques, suggesting that music listening should be studied as a musical phenomenon (e.g. Tillmann, 2006). Research into music in infancy is also enjoying a surge of interest, for the insights it can bring into fundamental forms of music perception and cognition and into the role of music in development and human culture (e.g. Trehub, 2006), and infant paradigms all depend on implicit and non-verbal responses. Despite much recent research activity, however, many questions about music in infancy remain unanswered.

This symposium brings together four papers on music in infancy. Each has a different substantive topic: Kim & Werner explore tone chroma and octave separation; Lamont & Trehub focus on preferences for calming and arousing music; Hannon explores musical rhythm and meter; and Trehub focuses on infant-directed singing. However, each engages with wider issues of methodology and highlights the importance of treating responses to music musically with this population.

Firstly, through the use of different procedures and context materials, Kim & Werner are able to shed light on an earlier empirical discrepancy in the field. Their results highlight the influence of musical context on infant (and adult) responses to musical stimuli. Next, Lamont & Trehub explore the stability of music responses in the laboratory. Their results illustrate that single-visit testing does not reflect the flexibility and potential diversity of the range of responses to music, which appear to be influenced by a range of other musical and non-musical factors.

In the third paper, Hannon provides an overview of research on infants’ perception of rhythm and meter, presenting work that suggests multiple influences of both culture and intrinsic constraints on the development of musical knowledge. The final paper, by Trehub, explores the contradictory emotional intentions between lyrics and music in lullabies, highlighting some important features of music’s functions for different types of listener and performer, with a focus on the specific context of infant-directed music.

48.1 The role of context in the perception of tone chroma in infants and adults

Daniella Kim, Lynne Werner

University of Washington, USA

Background
Variations in stimuli and experimental method are referred to as “contextual considerations” (Schmuckler, 2004). These variables appear to influence listeners’ judgments of the chroma of octave notes. The music perception and psychoacoustic literature suggests that musically naïve listeners, such as infants and musically untrained adults, perceive octave-separated notes as having the same chroma.

Aims
This paper will explore the perceptual reality of tone chroma for infants and adults, especially as it is influenced by “contextual considerations”. The main focus will be on differences in stimuli and method.

Main Contribution
In a habituation paradigm, 3-month-old infants were shown to respond to single-note transpositions of a 7th or a 9th, but not of an octave, in a 3-note sequence (Demany & Armand, 1984). This suggests that infants perceive octaves as having similar chroma. However, in a similar melodic manipulation using a same-different paradigm, adults were shown to have difficulty perceiving single-note octave transpositions in a 6-note melody as equivalent (Deutsch, 1972). The findings of these two studies challenge the notion that the perceptual similarity of octaves is a fundamental property of the auditory system. We will report data from preliminary studies testing infants’ and adults’ perception of tone chroma that systematically manipulate experimental context variables, varying melody length for single-note and whole-melody octave transpositions. In a replication of Demany and Armand’s original 3-note stimuli using an observer-based procedure, infants and adults were asked to ignore single-note and whole-melody octave transpositions and to respond only to chroma changes. Infants and adults were found to perceive octave transpositions as melodically equivalent (Kim & Werner, 2004). In a replication of Deutsch’s study, adults were shown to perceive melodic, but not single-note, octave transpositions as similar as well.

Implications
These studies attempt to resolve the role of context and melody length in the perception of tone chroma. This paper should provide reasonable evidence that the perceptual reality of tone chroma should always be interpreted with “contextual considerations” in mind.

Key words: Infancy, Context, Auditory perception

[email protected]

48.2 Variability in infants’ responses to music

Alexandra Lamont1, Sandra Trehub2

1Keele University, UK
2University of Toronto, Canada

Background
Individual differences in infants’ responses in the laboratory have begun to receive some attention in the field of infant research, with clear evidence for differences due to temperament, attentional style, and more variable factors such as mood and arousal. Studies of infants’ responses to music, however, have not systematically tackled individual differences or variability in responses.

Aims
This study examines the stability of infants’ music preferences over repeated testing, comparing infants’ responses to music with their responses to other stimuli and test situations, and explores these in relation to individual differences (temperament, gender, and state).

Method
38 infants aged 7m16d to 8m21d at the outset (mean 8m) participated in the study, including 22 girls and 16 boys. Background information was collected on the family, music experiences, and infant temperament (using Rothbart’s Infant Behavior Questionnaire-Revised, 2000). Infants participated in four experimental tasks in a soundproofed booth with their caregivers: a book interaction, a visual attention task, a music preference task using the preferential looking technique, and a novel toy interaction. All participants were tested twice, one week apart, and if willing (n=23) were tested twice again, each session separated by a week. All sessions were video recorded.

Results
General testability and responses to the visual attention task were highly consistent within individual infants from session to session. Music preferences were less consistent, however, with infants seemingly changing their minds about whether they preferred to listen to calming or arousing music on subsequent visits. The general level of testability had no bearing on the variance in any of the other tasks, however, suggesting that good data can be gathered from infants if they are able to undergo the experimental procedures.

Conclusions
Further data analysis is focusing on the short-term effects of mood, arousal, and daily routines on individual infants’ music choices, as well as addressing the effects of individual pieces of music. In addition, microanalysis of the infants’ behaviour during the experimental sessions may shed light on the variability observed in response to music. Adults engage with music in highly context-specific ways, and preliminary analysis suggests that infants’ behaviour is similar.

Key words: Infancy, Context, Variability

[email protected]

48.3 Perception of rhythm and meter in infancy

Erin Hannon

Department of Psychology, Harvard University, USA

Background and Aims
Rhythm and meter are ubiquitous aspects of music that play a fundamental role in communication between caregivers and young infants. Given the prevalence of highly rhythmic, metrical structures in infant-directed music, an obvious goal for research is to understand what basic perceptual abilities exist in infancy and how these abilities are modified during development. A review of past and current research will address three questions about infants’ perception of rhythm and meter in music: 1. Can infants perceive rhythm and meter? 2. Are these abilities affected by culture-specific musical experiences? 3. Do basic perceptual biases constrain the types of rhythms and meters that can be perceived and learned?
Main Contribution and Implications
Methods such as infant-controlled habituation and familiarization-preference procedures indicate that infants 2-8 months of age can discriminate rhythms, infer meter from simple rhythmic patterns, and prefer metrical patterns they have heard before. Early infancy is nevertheless known as a period of perceptual flexibility and rapid learning, and accordingly, rhythm and meter perception is strongly influenced by culture-specific music listening experiences early in life. Young infants have greater perceptual flexibility than older infants or adults: North American 6-month-olds readily distinguish disrupted from undisrupted versions of a folk tune, whether the tune has a regular, familiar (Western) meter or an irregular, foreign (Balkan) meter. In contrast, 12-month-old infants and adults distinguish only regular, familiar-meter folk tunes. Such findings could imply that perception of meter is determined solely by cultural experience, but further research suggests that rhythm and meter perception are initially constrained. Seven-month-olds and adults can more readily detect disruptions to a melody whose rhythm is labeled as “good” than to one labeled as “bad” by other adults. When the metrical regularity of rhythms is systematically varied to create Western (2:1 ratio of long to short note durations), Balkan (3:2), or artificially complex (7:4) meters, adults are better at detecting disruptions to the Western meter than to either of the non-Western meters, and we hypothesize that regularity will also predict the difficulty of the task for infants. Overall, such research is essential for teasing apart the relative contributions of intrinsic perceptual constraints and familiarity-based learning in infants’ emerging understanding of music.

Key words: Infancy, Context and culture, Perceptual bias

[email protected]

49 Education V

49.1 Gender effects in young musicians’ mastery-oriented achievement behavior and their interaction with teachers

Margit Painsi, Richard Parncutt

Department of Musicology, University of Graz, Austria

Background
Playing a musical instrument requires considerable motivation and self-regulation to cope with failures, setbacks, the daily hassles of practicing, and stressful events like concerts. Teachers often assume that the highly skilled relish a challenge and persevere in the face of setbacks (Dweck, 1999). In fact, the most talented students are often the most worried about failure and the most likely to question their ability when they encounter obstacles (Leggett, 1985). Girls are more likely than boys to develop inappropriate achievement behaviors. One reason could be that teachers give them different feedback on their performances, because they explain the successes and failures of boys and girls in different ways (Painsi, 2003).

Aims
An aim of the project is to investigate differences in the achievement behavior of boys and girls and its connection to their teachers’ implicit theories of musical ability, attribution patterns, and feedback style. We are interested in whether and how these behaviors change in response to motivation training for teachers and pupils.

Method
Participants in the first and second training phases of this project are 11 female and 10 male children aged 12-14 who have been playing for at least one year, and their teachers. The children attend 8 weekly sessions where they learn, for example, strategies for reaching realistic goals and coping with failures. The teacher training is carried out during the same period. It involves effective communication with students by means of feedback that fosters motivation and self-esteem.

Results
Preliminary analysis of the data suggests differences between the girls’ and boys’ achievement behavior and between the ways that teachers explain the successes and failures of boys and girls. At the start of the course the girls will have had less appropriate responses to success and failure than the boys; for that reason we expect that girls will improve more than boys. For teachers we anticipate an improvement in their attribution patterns and less stereotyped thinking when giving feedback to their pupils.
Conclusions
Since girls and boys need different support in developing mastery-oriented behaviour in music practice, it would be appropriate for teachers to treat boys and girls differently.

Key words: Achievement behavior, Gender effects, Teacher-pupil interaction

[email protected]

49.2 Musical transfer-effects revisited: Preliminary results from a study among 21 primary schools

Jan Hemming

University of Kassel, Music Department, Germany

Background
A fair number of reports seemingly demonstrating positive effects of musical activity (“transfer-effects”) were published in the past decade. Although these studies have led to controversial debates in the academic world, it is the public’s opinion today that music does improve people’s abilities, intelligence and behaviour. Music pedagogy currently benefits from the fact that it has become easy to initiate corresponding musical projects. These activities (and the associated research) predominantly address individuals, e.g. by means of increased music lessons in kindergarten or in schools.
Aims
In contrast to the above, the project referred to in this paper acts on a group level: its focus is the effects of increased musical activities of various kinds in 21 primary schools. On four occasions, music teachers from these schools receive further vocational training in the conception and realisation of musical projects. At their home schools, they act as multipliers, transferring this knowledge to all other colleagues. In the common application of these ideas and suggestions, the schools will develop into “musical primary schools”.
Method
This paper relates to the scientific evaluation of the project outcome, not the project itself. Over the course of two years, various analyses are repeatedly being carried out in each school. These include:
- questionnaires for pupils in grades 3 and 4 (social and family background, musical background and activities, school climate, etc.)
- questionnaires for the music teachers, the principals and the delegates of the parents’ councils (assessment of project idea and advancement, communication structures, school climate, etc.)
- participant observation of the further vocational training
- telephone interviews with non-music teachers involved in the project
- documentation of resulting musical projects at the schools
Results
At the time of abstract submission, the above instruments had been applied once, resulting in a precise picture of the current state of each school. A second cycle of surveys will be completed before the summer. It will therefore be possible to present preliminary long-term tendencies at the conference. At the moment, participant observation suggests that the project will yield positive results.

Conclusions
I will discuss the preliminary project evidence in contrast to existing studies on musical transfer-effects. It will also be asked under which conditions the project idea could possibly be applied and realised beyond the 21 schools of the sample.

Key words: Transfer-effects, Music-education, Primary school

[email protected]

49.3 The singing lesson: Learning and non-verbal languages

Alessia Vitale

Université de la Sorbonne, Paris IV, Paris, France

The paper deals with certain aspects of non-verbal and non-vocal communication in voice training for singers, taking into consideration the visual elements inherent in teaching the “invisible instrument”.

The present research is part of a wider programme of study that the author is carrying out on the specificity of learning and teaching the one and only musical instrument that is completely integrated within the human body: the voice, in its singing mode, with all the implications this integration entails. Although in educational institutions singing tends to be considered a musical instrument exactly like the others, the whole course of this research has borne witness to its unique multiplicity.

The study was carried out with a cognitive aim, making use of a combination of methodologies in the light of clinical observations made, with the aid of film, over a long period of time (several years) “in the field”, in other words in various singing classes chosen from French institutions officially dedicated to the teaching of singing in different musical styles and genres.

Over and above the uniqueness of each pupil and each teacher, and of the different methods adopted and styles of music dealt with, the most interesting aspect to emerge is the presence of “common points”: elements that consistently recur despite the vast variety in approaches to teaching.

The author’s project is to achieve reflection through a multifaceted methodology (given that the voice is by nature polysemic and polyvalent), in order to reconstruct the structures underlying the learning of “l’instrument-voix” and to define the cognitive elements that lead to learning, study and progress, by comparing the specific dynamics of learning to sing with those of other instruments. From this point of view the study does not focus solely on the pupil, but rather on the “dyad”, or “teacher-pupil constellation”, considering the interactive dynamics of this relationship to be of hermeneutic value.

Key words: "L’instrument-voix", Vocal gestures, Memories

[email protected]

49.4 Early acquisition of musical aural skills

Richard Parncutt1, Gary E. McPherson2, Margit Painsi1, Fränk Zimmer1

1Department of Musicology, University of Graz, Austria
2School of Music, University of Illinois, USA

Background
Acquisition of musical aural skills involves interaction between genes and environment, practice during critical early periods, and intrinsic/extrinsic motivation. Musical ability is a complex of skills, of which audiation appears central. Memory is stronger for meaningful events.

Aims
We explore when, how, and why musically talented children spontaneously recognize musical pitch structures, with the aim of improving (aural) music education.

Method
Some 200 internet users responded to a questionnaire at http://www-gewi.uni-graz.at/staff/parncutt/omas/. Data from about 100 respondents were sufficiently complete, consistent, and appropriate for quantitative and qualitative analysis.

Results
Preliminary analysis of data from 10 female and 9 male respondents yielded the following. All reported receiving grade A or equivalent in a post-secondary ear-training course. Countries of origin reflected the internet’s western, anglophone bias. Their mean age was 40, casting doubt on their ability to remember childhood events. They had played music for a mean of 33 years, implying that their audiation skills contributed to their intrinsic musical motivation. 12 respondents gave piano as their main instrument, suggesting that the keyboard provides a good visual representation for aural structures. Musical activities before beginning formal instruction tended to involve piano and/or choir. The average number of musical instruments in early childhood homes was 2.5 and usually included a piano. Most reported a musically active mother or father. Early musical activities (at about age 6) were consistently rated as very enjoyable, with many positive emotions, and were considered very important for musical skill acquisition. Respondents began playing music at an average age of 6.5, mostly on piano. In their first year they practiced a mean of 5.7 days/week at 1.7 hours/day, plus rehearsals and performances. They learned most about music from music lessons and engaged in many musical activities.

Conclusions
Aural skills (like general musicianship) gain from early, frequent, long-term, guided, social, enjoyable, meaningful musical engagement. No evidence was found for a genetic basis.

Implications
Parents should enjoy making music, own several instruments including a keyboard, encourage curiosity, sing (especially in choirs), and offer conventional music lessons. Educational systems should focus on enjoyable musical activities for young children, and involve their parents.

Key words: Argument, Textbooks, Software

[email protected]

49.5 A longitudinal study of young female professional singers

Graham Welch1, David M. Howard2, Evangelos Himonides1

1Institute of Education, University of London, UK
2Department of Electronics, University of York, UK

Background
Female choristers are a relatively recent innovation in UK cathedrals. Although several cathedrals experimented with female choristers during the twentieth century, it was not until 1991 that Salisbury established the first such choir in an “old” cathedral, nearly 900 years after its original all-male foundation. Since then, over half of the cathedrals and minsters have introduced female choristers. This innovation has not been without controversy, however, because of the persistence of a belief that the male chorister sound is unique and is threatened with extinction by the introduction of females. Consequently, with few exceptions, girls usually sing services separately from the boys.
Aims
A longitudinal study of female chorister development has been undertaken at Wells Cathedral in South-West England in order to track the psycho-acoustic features of the choristers’ singing development, draw comparisons with male singers of the same age and experience, and understand more clearly the impact of female choristers on the cultural identity of musical life in the cathedral.
Method
The authors have been making six-monthly visits to Wells Cathedral since May 1999 in order to (a) record the vocal products of female choristers (individually and collectively) using a specially designed protocol and (b) gather qualitative data on the nature of the cathedral music culture. The resultant data have been subjected to acoustic and qualitative analyses and to psycho-acoustic singer gender identification studies. The prime focus of this presentation will be to present new, unique longitudinal data on psycho-acoustic features of female chorister singing development (n=52, representing 218 individual recordings from 1999-2005).
Results
Clear trends are emerging from the extensive case study data of characteristic changes in singing acoustic outputs (voice source and voice spectrum). These represent an interaction between chronological age phase (childhood, puberty, adolescence, early adulthood) and singing experience.
Conclusions
Data analyses are ongoing, but there is evidence that female chorister vocal products can be perceived as gender specific and also gender neutral, depending on the musical item being performed. These perceptions relate to characteristic features in the vocal products that are susceptible to the interaction of age, experience and cultural expectation.

Key words: Cathedral choristers, Female, Singing development

[email protected]

50 Emotion III

50.1 Emotion and meaning in music fifty years later: Delayed realization of some of Leonard Meyer’s implications

Alexander Rozin

West Chester University, USA

Background
Leonard B. Meyer’s Emotion and Meaning in Music has found its way onto the bookshelves of performers, composers, and musicologists, and into syllabi for a vast array of courses, including philosophy, aesthetics, perception and cognition, memory, cultural anthropology, communications, film theory, music theory, ethnomusicology, music education, applied music, and computer music. This groundbreaking book has influenced and appealed to scholars, practicing musicians, and non-musicians like no other work in music theory, and has greatly shaped the field of music perception, especially through subsequent theoretical elaborations by Eugene Narmour and Fred Lerdahl.

Aims
This paper reconsiders the ideas from Meyer’s first book, celebrating how some of them have significantly shaped studies in musical emotion, melodic implication, dynamic attending, and other subfields, and lamenting that others have not received adequate attention. Among the ideas that have not received due attention, I discuss Meyer’s insistence that 1) musical structure and musical affect are not separable, that is, one cannot study one without also considering the other; 2) musical parameters can be statistical (i.e., scalable, such as dynamics or tempo) or syntactic (i.e., creating grammatical relationships, such as in tonal harmony); and 3) memory plays an integral role in the formation of musical structure. I consider how Meyer’s elegant arguments might play out in a rigorous system of music analysis, attempting to capture the complexities and evolution of perceived musical structure and affect.

Main Contribution
This paper not only honors Leonard Meyer's vast contribution to the field but also brings his ideas to the forefront again, hoping to steer music theory and analysis towards the realm of perception, memory, and affect, and offers psychologists a theoretical framework to organize future experiments aimed at understanding the formation and persistence of musical structure.

Implications


Not all of the implications of Meyer's work have been realized. This paper suggests that all musical fields should return to his ideas and gather inspiration for future musical research and for some delayed gratification.

References
Meyer, L. B. (1956). Emotion and Meaning in Music. Chicago: The University of Chicago Press.

Key words: Meyer, Affect, Structure

[email protected]

50.2 Regulation of emotions by listening to music in emotional situations

Mirjam Thoma1, Stefan Ryf2, Ulrike Ehlert1, Urs Nater3

1Clinical Psychology & Psychotherapy, Institute of Psychology, University of Zürich, Switzerland
2General Psychology (Cognition), Institute of Psychology, University of Zürich, Switzerland
3Emory University School of Medicine, Atlanta/GA, USA

Musical stimuli are among the most intensive stimuli triggering emotions. Therefore, we investigated in the present study whether subjects use music to regulate their emotions in everyday situations. We set out to examine whether dispositional emotion regulation styles determine the situation-dependent choice of music. In a pre-study (N = 72), 20 music stimuli and 16 emotionally laden situations (on the dimensions valence and arousal) were determined. In the main study, 89 subjects (aged 20-30 years, no professional musicians, no hearing problems, no mental disorders, no substance abuse) were presented the music stimuli via headphones. They indicated on a computerized visual analogue scale how likely they would be to choose these music stimuli in given emotionally laden situations. In addition, all subjects were asked to fill out the "Inventory for Regulation of Emotion" (IERW, Mohiyeddini, in prep.).

Analyses of our data by means of multidimensional scaling (MDS) show that specific music stimuli were preferred in emotionally congruent situations. This situation specificity could be explained by the two emotion dimensions valence and arousal. Furthermore, there were modulating influences of dispositional emotion regulation styles on individual music preference: e.g., the choice of positively evaluated music in situations characterized by negative valence and high arousal is correlated with an emotion-moderating regulation style.

In this study, we were able to show that music is chosen in emotional situations in a very specific manner. What is more, we demonstrated that dispositional regulation styles might influence the choice of music pieces characterized by specific emotions. Our findings are among the first to elucidate the important role emotion regulation might play in the choice of music in everyday emotionally laden situations.

Key words: Emotion regulation, Music preference, Multidimensional scaling

[email protected]


Thursday, August 24th 2006

50.3 Music-listening practices in workplace settings in the UK

Anneli Beronius Haake, Nicola Dibben

Department of Music, University of Sheffield, UK

Recent research into good business practice suggests that well-being is of primary importance within work today. Yet, when looking at the state of employee health in workplaces, a different picture emerges: stress, burn-out, job dissatisfaction, anxiety and depression are growing problems in many organisations. A small number of studies have begun to explore the influence of music on well-being at work, following evidence that collective music listening can enhance productivity and morale, and that people use music listening to manage their well-being in daily life (Batt-Rawden & DeNora, 2005; North, Hargreaves & Hargreaves, 2004; Sloboda, O'Neill & Ivaldi, 2001; Thayer, Newman & McClain, 1994). This study explores the extent to which individual music listening is used at work, and its effects on work performance and subjective well-being.

An empirical study was conducted to investigate self-directed music listening practices in workplace settings. The study aimed to provide data on the extent and type of self-directed music listening, how music is incorporated into working life in different occupations, and the self-reported functions of music for employees, and to provide exploratory data on the relationship between music listening and subjective well-being.

A survey with 300 respondents was used to gather quantitative and qualitative data on occupation, perceived workplace "stress", the amount and type of music listening during a working week, the listening technologies employed, the amount of control the respondent had over music in their workplace, the music listened to, the activities performed concurrently with the music listening, and the perceived functions that music listening had for the respondent. Full results will be available after February 2006.

This research sheds light on whether and how listening technologies are incorporated into workplaces and working patterns. It also reveals the kinds of activities which music listening accompanies, and the roles that employees perceive music listening to have for them. The results of this survey will be used to design an intervention study to examine the therapeutic function of music listening in workplace settings. The results of this study will further understanding of the therapeutic value of self-directed music listening (both current and potential), and contribute to knowledge of music use in daily life.

(This research is funded by the Arts and Humanities Research Board).

Key words: Music listening, Well-being, Work

[email protected]

50.4 A cross-cultural study of the perception of emotions in music: Effects of rhythm and pitch

Xueli Tan

The Cleveland Music School Settlement, USA


One purpose of this study was to investigate whether listeners from a Western culture were able to identify intended emotions in music from an unfamiliar tonal system. The second purpose was to investigate the effects of rhythm and pitch in determining the perception of emotions in music from an unfamiliar tonal system among listeners from a Western culture. In the present study, 20 Chinese listeners and 60 Western listeners identified and rated emotional content in excerpts from Chinese music.

The Chinese listeners rated 20 selections of music on a rating scale depicting the emotions of happiness and sadness. From the 20 music selections, 2 selections portraying the greatest degree of happiness and 2 depicting the greatest degree of sadness were chosen. The four selections then underwent systematic manipulations to control for rhythm and pitch variables. Western listeners rated the original and manipulated versions of the 4 selections on the same rating scale as used by the Chinese listeners. Western listeners agreed with Chinese listeners on the perceived emotions in one happy and one sad music selection but did not statistically agree on the other happy and sad selections.

In general, the psychophysical property of rhythm appeared to be more influential than pitch in helping the Western listeners identify the emotional content in the Chinese music. The Western listeners appeared to perceive a greater degree of sadness in the rhythm versions compared to the pitch versions. No significant differences in the ratings were found between the rhythm and pitch versions for the happy music selections. The findings of this study suggest that perhaps human beings are hard-wired from birth to interpret and associate certain psychophysical properties of music with specific emotions. However, human beings are also susceptible to cultural and environmental influences. Thus, the listeners' perception of emotional content in music appears to be a highly evolved aspect of sensitivity to both cultural influences and psychophysical dimensions of music.

Key words: Emotions in music, Cross-cultural, Music perception

[email protected]

50.5 Spatio-temporal connectionist models to study the dynamics of music perception and emotional experience

Eduardo Coutinho, Angelo Cangelosi

Adaptive Behaviour and Cognition Research Group, University of Plymouth, UK

Background
A striking property of music is its capability to move people and induce or alter emotional states. Following Meyer [1], some research studies have focused on cultural and learning aspects to understand this emotional experience. Such a cognitive view emphasizes studies on music rather than on the individual and his biological substrate [2].

Other researchers have instead focused their studies on the brain, especially on the areas that process both emotions and music, and on physiological dynamics. They propose that brain organization and dynamics can be the source of explanation of the affective power of music (e.g. [2,3,4]).

Looking at the changes caused by musical cues in both brain chemistry and physiology, one can wonder to what extent music can interact with brain dynamics and autonomic functions, and thus contribute to the emotional experience [2]. Krumhansl's contribution presented some evidence about the relation between emotional experience and psycho-physiological processes [4]. These ideas are further supported and extended by other studies (e.g. [3,5,6]).

Aims
We are interested in investigating the processes that might elicit some of these emotional responses during music listening and processing. This study aims at identifying the embodiment implications in response to music, and its consequences for the quality and intensity of affective states. From this perspective, we propose the use of spatio-temporal connectionist models [7] to model the dynamics of music perception and emotion, and their interaction.

Main Contribution
As suggested by Panksepp [2], the identification of neural dynamics can be fundamental for the study of musical emotions. Following Clynes [8], we are motivated by the idea of a possible mathematical and/or functional relationship between acoustical and emotional dynamics. This approach can highlight the underlying aspects of the temporal musical cues that interfere with brain dynamics and emotion, such as valence and intensity of musically induced emotions, or even tension and melodic expectancy.

Implications
This approach can help us to decode the effects of music on brain and body, and to better understand the temporal dynamics of both emotion and musical patterns.

References

[1] Meyer, L. B. (1956). Emotion and Meaning in Music. Chicago: University of Chicago Press.
[2] Panksepp, J., & Bernatzky, G. (2002). Emotional sounds and the brain: The neuro-affective foundations of musical appreciation. Behavioural Processes, 60, 133-155.
[3] Blood, A., & Zatorre, R. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences, 98, 11818-11823.
[4] Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology, 51, 336-352.
[5] Philippot, P., Chapelle, G., & Blairy, S. (2002). Respiratory feedback in the generation of emotion. Cognition and Emotion, 16(5), 605-627.
[6] Patel, A. D., & Balaban, E. (2000). Temporal patterns of human cortical activity reflect tone sequence structure. Nature, 404, 80-84.
[7] Kremer, S. C. (2001). Spatiotemporal connectionist networks: A taxonomy and review. Neural Computation, 13, 249-306.
[8] Clynes, M. (1978). Sentics: The Touch of Emotions. New York: Doubleday.

Key words: Music and emotion, Brain and physiology, Spatio-temporal connectionist models

[email protected]


Symposium: Similarity perception II

51

Convenor: Petri Toiviainen and Irène Deliège
Discussant: Geraint Wiggins

The concept of similarity is central to perception. It underlies processes such as object recognition, comparison, and classification, which are crucial to much of human cognitive processing. In the domain of music, similarity plays a vital role in fundamental perceptual and cognitive processes such as grouping, segmentation, musical variation, genre recognition, and expectancy. Furthermore, memory and schematization processes in real-time listening rely on the recognition of small figures, or cues, which again is based on the notion of similarity (Deliège 2001). Therefore it is evident that an understanding of the basic processes underlying perception of musical similarity is necessary for acquiring a deeper comprehension of music perception in general.

A further, more practical, motivation for research on musical similarity comes from the field of Music Information Retrieval (MIR), which aims to develop computational methods for extracting relevant information from digital representations of music, including both audio and symbolic forms. Central areas within MIR are music classification and content-based retrieval, both of which rely heavily on the notion of similarity. Some of the models developed rely on music perception research; others are more concerned with issues related to computational efficiency. The majority of empirical work on, and computational models of, musical similarity deal with melody, due to the central role it plays in the perception and memorization of music.

The purpose of the symposium is to discuss recent advances in research on musical similarity. The main emphasis will be on melodic similarity, but similarity in other musical dimensions will be tackled as well. The topics discussed include the relation of motivic, segmental, and accent structure to melodic similarity, context-dependency of similarity perception, models of content-based retrieval, social dimensions of melodic similarity, as well as the connection between internal and external musical similarity.

51.1 Melodic and contextual similarity of folk song phrases

Eerola Tuomas, Bregman Micah

University of Jyväskylä, Finland


Various models of melodic similarity have been proposed and assessed in perceptual experiments. Contour and pitch content variables have been favored, although music-theoretical and statistical variables have also been claimed to explain similarity ratings. A re-analysis of earlier work by Rosner & Meyer (1986) suggests that simple contextual features can also be highly explanatory with more complex stimuli. A new experiment containing short melodic phrases investigated the effectiveness of several global and comparative variables. A multidimensional scaling solution indicated that both melodic direction and pitch range are highly relevant for making such similarity judgments and that the most salient aspects of melody when making similarity judgments are relatively simple context-dependent features.
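The multidimensional-scaling step mentioned above can be illustrated with a small sketch. The dissimilarity matrix and the classical (Torgerson) MDS variant below are illustrative assumptions for demonstration only, not the data or procedure used in the study:

```python
# Sketch: recovering a 2-D configuration of melodic phrases from pairwise
# dissimilarity ratings via classical multidimensional scaling (Torgerson
# scaling). The 4x4 dissimilarity matrix is invented; in a real study it
# would come from averaged listener ratings.
import numpy as np

def classical_mds(d, k=2):
    """Embed a symmetric dissimilarity matrix d into k dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]    # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Dissimilarities between four hypothetical phrases: {A, B} are rated as
# similar, {C, D} are rated as similar, the two groups as far apart.
d = np.array([[0.0, 0.2, 0.8, 0.9],
              [0.2, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])
coords = classical_mds(d)  # one 2-D point per phrase

# Phrases rated as similar end up close together in the 2-D map.
dist = lambda i, j: float(np.linalg.norm(coords[i] - coords[j]))
print(dist(0, 1) < dist(0, 2))  # True
```

In a perceptual study the resulting configuration would then be inspected for interpretable dimensions such as melodic direction or pitch range.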

[email protected]

51.2 Similarity perception of melodies and the role of accentpatterns

Daniel Müllensiefen, Klaus Frieler

Hamburg University, Germany

There is a considerable amount of literature claiming that accented notes play an important role in the perception of melodies (e.g. Yeston, 1976; Thomassen, 1982; Lerdahl & Jackendoff, 1983; Monahan et al., 1987; Boltz, 1999). Patterns of accented notes are assumed to be abstracted from the sounding melody during listening and are believed to be represented in memory. Their importance for similarity perception and memory performance has been shown in several earlier works (e.g. Boltz & Jones, 1986; Boltz, 1991; Jones & Ralston, 1991). Unfortunately, most accent theories in the literature do not meet the requirements of precision and explicitness needed for direct implementation in a computer model. This study describes the construction of an accent pattern software module that comprises most of the rules for accent generation that can be found in the literature. In a very flexible way, reduction analysis from single-line melodies to hierarchical accent patterns can be carried out, similar to the analysis mechanisms proposed by Lerdahl and Jackendoff (1983) or Petersen (1999). As an application, we show how accent patterns from this software module, in combination with different similarity algorithms, can be used to model similarity judgements of experts for pairs of melodies from a previous study (Müllensiefen & Frieler, 2004). The results show that similarity algorithms based on accent patterns perform almost as well as algorithms comparing complete melodies. An optimised way to combine accent rules and similarity algorithms is discussed.
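As a rough illustration of the approach described above, the sketch below derives an accent pattern from a melody with a single hypothetical accent rule (local pitch peaks) and compares two melodies by the similarity of their patterns. The rule and the melodies are invented stand-ins, not the rule collection or algorithms of the authors' module:

```python
# Sketch: accent-pattern-based melodic similarity. One simplified accent
# rule (a note is accented if it is a local pitch peak) replaces the full
# rule set described in the abstract.
from difflib import SequenceMatcher

def accent_pattern(pitches):
    """Mark local pitch maxima as accented (1), all other notes as 0.
    Boundary notes count as peaks relative to their single neighbour."""
    marks = []
    for i, p in enumerate(pitches):
        left = pitches[i - 1] if i > 0 else p - 1
        right = pitches[i + 1] if i < len(pitches) - 1 else p - 1
        marks.append(1 if p > left and p > right else 0)
    return marks

def accent_similarity(mel_a, mel_b):
    """Similarity in [0, 1] between the accent patterns of two melodies."""
    a = "".join(map(str, accent_pattern(mel_a)))
    b = "".join(map(str, accent_pattern(mel_b)))
    return SequenceMatcher(None, a, b).ratio()

# Two variants of a phrase (MIDI pitch numbers) that differ in surface
# pitches but share the same accent contour.
tune = [60, 64, 62, 67, 65, 64]
variant = [60, 65, 62, 69, 65, 64]
print(accent_similarity(tune, variant))  # 1.0
```

Comparing reduced accent patterns rather than complete note sequences is the design idea the abstract reports: much of the note-level detail can be discarded while the expert similarity judgements remain well predicted.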

References

Boltz, M. (1999). The processing of melodic and temporal information: Independent or unified dimensions? Journal of New Music Research, 28(1), 67-79.

Boltz, M., & Jones, M. R. (1986). Does rule recursion make melodies easier to reproduce? If not, what does? Cognitive Psychology, 18, 389-431.

Jones, M. R., & Ralston, J. T. (1991). Some influences of accent structure on melody recognition. Memory & Cognition, 19, 8-20.

Lerdahl, F., & Jackendoff, R. (1983). A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.

Monahan, C. B., Kendall, R. A., & Carterette, E. C. (1987). The effect of melodic and temporal contour on recognition memory for pitch change. Perception & Psychophysics, 41(6), 576-600.

Müllensiefen, D., & Frieler, K. (2004). Cognitive adequacy in the measurement of melodic similarity: Algorithmic vs. human judgments. Computing in Musicology, 13, 147-176.

Petersen, P. (1999). Die Rhythmuspartitur. Über eine neue Methode zur rhythmisch-metrischen Analyse pulsgebundener Musik. Hamburger Jahrbuch der Musikwissenschaft, 16. Frankfurt: Peter Lang, 83-110.

Thomassen, J. M. (1982). Melodic accent: Experiments and a tentative model. Journal of the Acoustical Society of America, 71, 1596-1605.

Yeston, M. (1976). The Stratification of Musical Rhythm. New Haven, CT: Yale University Press.

[email protected]

51.3 Melodic identification strategy for automated pattern extraction

Lartillot Olivier

University of Jyväskylä, Finland

A computational automation of motivic analysis is proposed, focused primarily on the extraction of repeated patterns in monodies. It is shown that this task is founded on a complex system of interdependent mechanisms. Obtaining relevant analyses for a broad diversity of musical styles is a demanding task, requiring a fine tuning of the system's interactions and a precise modelling of each individual mechanism. These requirements significantly reduce the number of possible alternative models. In this respect, due to its good behaviour, the proposed solution suggests necessary conditions for motivic extraction. One core question, in particular, relates to the formalisation of melodic comparison, which may be considered in different ways. On one hand, melodic comparison is founded on similarity distances defined numerically on the parametric space. However, no distance threshold has been found that would ensure a robust identification close to listeners' judgments. On the other hand, comparisons may be founded on exact identification along multiple parametric dimensions. Indeed, a melodic variation, although featuring surface differences relative to specific parameters, may still present structural identities for more general parameters, such as gross contour or rhythm. Yet, the formalisation of the gross contour parameter raises intricate difficulties due to its necessary restriction to temporally local comparisons. This difficulty may be solved by adopting a chronological vision of motive construction and by allowing the definition of the successive intervals of motives on variable parametric dimensions. This offers a clearer explanation of the influence of gross contour in melodic identification, and allows the discovery of new interesting motivic structures. We also show the limitations of segmentation-based motivic extraction strategies, and support the alternative strategy founded on a construction of independent structures. The combinatorial explosion of candidate structures implied by the pattern extraction task is filtered without loss of information through a selection of closed patterns and a modelling of pattern cyclicity. The modelling results in a computational automation of motivic analysis, able for the first time to offer a clear and detailed description of musical pieces. However, the success rate needs to be improved, and the domain of application needs to be generalised to polyphonies and more complex musical transformations.
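The closed-pattern filtering mentioned above can be sketched as follows. This brute-force toy works on an invented pitch-interval sequence and illustrates only the filtering idea (keep a repeated pattern only if no longer pattern occurs equally often), not the authors' incremental model or its treatment of pattern cyclicity:

```python
# Sketch: extracting repeated patterns from a monody represented as a
# pitch-interval sequence, then keeping only "closed" patterns, i.e.
# repeated patterns that cannot be extended without losing occurrences.
from collections import defaultdict

def repeated_patterns(seq, min_len=2):
    """Map each repeated subsequence to the set of its start positions."""
    occ = defaultdict(set)
    n = len(seq)
    for i in range(n):
        for j in range(i + min_len, n + 1):
            occ[tuple(seq[i:j])].add(i)
    return {p: s for p, s in occ.items() if len(s) > 1}

def closed_patterns(seq, min_len=2):
    """Discard a pattern if a strict super-pattern has the same count."""
    reps = repeated_patterns(seq, min_len)
    closed = {}
    for p, starts in reps.items():
        extendable = any(
            len(p2) > len(p) and len(s2) == len(starts)
            and any(p == p2[k:k + len(p)]
                    for k in range(len(p2) - len(p) + 1))
            for p2, s2 in reps.items()
        )
        if not extendable:
            closed[p] = starts
    return closed

# Interval sequence containing a repeated motive (2, 2, -4); its
# sub-patterns (2, 2) and (2, -4) are absorbed into the closed pattern.
melody = [2, 2, -4, 1, 2, 2, -4, 3]
print(sorted(closed_patterns(melody)))  # [(2, 2, -4)]
```

The filtering keeps the information content intact: every discarded sub-pattern can be recovered from the closed pattern and its occurrence set, which is why the selection loses no information while taming the combinatorial explosion.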

Key words: Similarity perception symposium

[email protected]

51.4 Melodic similarity as determinant of melody structure

Ahlbäck Sven

Royal College of Music, Stockholm

This paper presents an approach to the analysis of melodic similarity as a determinant of melody structure, which has proved successful within a computerized method of analysis for the prediction of segmental structure in a large sample of monophonic melodies of diverse cultural origin (Ahlbäck 2004).

Although the impact of similarity in the determination of segmental structure in music is generally acknowledged, methods based on similarity have been criticized with regard to the difficulty of formalizing criteria and threshold values for structurally significant similarity (e.g. Cook 1987). It is herein maintained that, since similarity is a fundamentally relative concept and categorization of similarity and difference relates to a given context, segmentation based on similarity requires a dynamic measure of similarity.

The proposed method of analysis is based on common psychological principles such as gestalt-psychological rules, human perceptual and cognitive limitations regarding temporal frames of attention, simultaneous category handling, and cognition of temporal proportions.

The core of the model is that melodic segmentation on a structural level is primarily established by similarity, in particular repetition of melodic content, and dissimilarity, in particular discontinuity of melodic processes. It is discussed how the influence of melodic similarity on segment structure is dependent upon general features of musical structure, such as metrical structure. This notion is supported by the results of an experiment, which indicated that listeners did not make use of even perfect repetitions of pitch sequences for melodic segmentation when these were in conflict with the perceived general beat structure.

It is described how the influence of metrical context is handled within the method, as well as the means of allowing for different levels of similarity through a categorization of different types and degrees of similarity. This is illustrated by an example analysis of a melody in which the segmental structure is determined by melodic similarity of different types, the result of which is evaluated by a listener test. The result supports the notion that structurally significant similarity is relative and contextual, and indicates that this can be modelled formally within a rule-based, style-independent method of analysis.

[email protected]


Musical Meaning III
52

52.1 Musical tension/release patterns and auditory roughness profiles in an improvisation on the Middle-Eastern mijwiz

Pantelis Vassilakis1, Roger Kendall2

1School of Music, ITD Libraries, DePaul University, Chicago, USA
2Department of Ethnomusicology, University of California, Los Angeles, USA

Attaching meaningful and emotional qualities to instrumental pieces of music relies largely on the recognition of musical tension/release patterns, set up using various sonic and sonic-organization devices. Within the Western art music tradition, such devices include contrasts in terms of pitch, rhythm, dynamics, tonal center, orchestration, performance technique, sensory dissonance (auditory roughness), etc. Performance practices outside the Western art music tradition place increased importance on roughness for communicating expressive intent, often accompanied by a decrease in the importance of other relevant devices. For example, the Middle-Eastern mijwiz (double clarinet) is constructed and performed in ways that limit the possibilities for most sonic contrasts, relying mainly on the exploration of narrow harmonic intervals and their corresponding rough sounds.

We estimated the roughness time-profile of a recorded, stylized improvisation on the mijwiz using SRA (Vassilakis; http://www.acousticslab.com/roughness), a custom, online roughness analysis application that incorporates a previously tested roughness estimation model [Vassilakis, P. N. (2005). "Auditory roughness as a means of musical expression," Selected Reports in Ethnomusicology 12: 119-144]. The roughness profile was compared to tension/release patterns indicated by the improviser (Racy, UCLA) and twenty Western-trained musicians in a perceptual experiment designed in Kendall's Music-Experiment-Development-System (MEDS). Subjects first listened to the 1-minute-long piece as often as necessary to become familiar with it. Then, while listening, they tapped two keys on the computer keyboard to indicate either an increase or a decrease in musical tension. Subjects were instructed to continue tapping throughout the increase/decrease in tension and not to tap if no change in tension was sensed.
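For illustration, the kind of roughness estimate involved can be sketched for a pair of pure tones using Sethares' parameterization of the Plomp-Levelt dissonance curve. This generic model is an assumption chosen for the sketch, not necessarily the model implemented in SRA:

```python
# Sketch: sensory roughness of two sinusoids under Sethares'
# parameterization of the Plomp-Levelt dissonance curve. Roughness
# peaks when the frequency separation is a fraction of a critical
# band and falls off for wider intervals.
import math

def pair_roughness(f1, a1, f2, a2):
    """Roughness contribution of two sinusoids (frequencies in Hz,
    amplitudes dimensionless)."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.0207 * fmin + 18.96)   # critical-band scaling factor
    x = s * (fmax - fmin)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

# A narrow interval (a semitone) is far rougher than an octave, which
# is consistent with the mijwiz's expressive use of narrow intervals.
semitone = pair_roughness(440.0, 1.0, 466.2, 1.0)
octave = pair_roughness(440.0, 1.0, 880.0, 1.0)
print(semitone > octave)  # True
```

A full roughness time-profile of a recording would sum such pairwise contributions over the partials found in each analysis frame.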

The results suggest that auditory roughness is a good predictor of the tension/release pattern indicated by the improviser. The patterns obtained by the rest of the subjects show a different trend. Roughness appears to be just one of the cues guiding musical tension judgments, often overridden by tonal and temporal cues, and/or by expectations of tension/resolution raised by such cues. It is argued that the observed differences between the performer's expressive intent and the listeners' interpretation support understanding musical tension and release as culture-specific concepts that are guided by the equally culture-specific musical cues used to organize and recognize them.

[Study supported by the Department of Instructional Technology Development and the School of Music, DePaul University. Special thanks to Prof. A. J. Racy, Ethnomusicology Department, University of California, Los Angeles, for his expertise.]

Key words: Musical tension, Roughness, Non-western music

[email protected]

52.2 Some factors affecting the successful use of digital sounds within a hybrid pipe-digital organ

Peter Comerford, Lucy Comerford

Microcomputer Music Research Unit, Department of Computing, School of Informatics, University of Bradford, UK

Background
This paper reports part of a study to investigate the use of pipes and digitally generated sound together in a hybrid pipe-digital organ.

Aims
The aim was to examine what makes for musical success - or failure - in hybrid organs. The use of technology must be transparent if it is to be acceptable: the organist must not be intrusively aware of the provenance of each stop, but must be able to use each as the musical need arises.

Method
The study has included extensive field experimentation with a medium-sized hybrid organ in a typical organ context, with a traditional console, in a "dry" acoustic. This instrument has incorporated a specialist digital sound control tool to create stops by synthesis. This has allowed the number and type of digital stops in the instrument, and their characteristics, to be varied experimentally. Areas examined included tonality (including quality match, tonal blend, and which stop pitches and tonalities are best created by which method); tuning and note stability; and sound output (including enclosure, and use of loudspeakers). Experiments included a series of perceptual assessments by a group of expert players and organ builders, playing and listening to the hybrid instrument using varied, controlled and repeated experimental specifications; general functional assessments of the hybrid instrument in normal use, using different experimental specifications; assessments of temporary partial hybridisation of different organs; and observations and measurements of instrument output.

Results
Overall, the experiments show that quality match between pipes and digital sounds is achievable, provided that the digital sound can be adjusted in detail to suit the pipes; the aim is to produce an effective whole, not just to juxtapose separately beautiful but unmatched stops. Synthesis proves an effective method for this purpose. Given careful voicing, neither stop pitch nor stop family has proved to be a barrier to effective use of digital stops in a hybrid context. Potential problems include adequate pitch-tracking.


Conclusions
Given digital technology with detailed voicing facilities, it is possible to create hybrid organs which are musically functional and satisfying to play.

Key words: Organ, Hybrid acoustic/digital instruments

[email protected]

52.3 Broadening musical perception by AKWET technique visualisation

Jacek Grekow, Teodora Dimitrova-Grekow

Technical University of Bialystok, Faculty of Computer Science, Poland

Audio information we receive is a complicated phenomenon. Consequently, an extensive analysis of a piece of music is a complex task due to the sheer number of data attributes such as tempo, rhythm, pitch, consonance, harmony, and dynamics. That is the reason why all the research conducted to date in this field has concentrated solely on selected parameters, and that fact determined the aim of the experiments as well as their results. This paper presents a totally new approach to creating 3-D images for the analysis and simulation of musical content. As the main criterion for the synthesis of the 3-D images we have chosen harmonic content, which is closely related to the musical content of a piece. Additionally, the graphic display of its main components in the shape of solid figures is a crucial aspect of the presented approach. The figures, which were named AKWETs, are applicable in simulating and analysing musical content. The paper also states that the order of AKWETs identified in a given phrase forms a meaningful visual sequence, connected with its musical original.

The sequence of appearance of static 3-D images is an interesting field of research. The results enable us to visually track the development of a major cadence Kd (T S D7 T). Consecutive AKWETs form a visual sequence that is temporally connected with the musical original. Each chord from the sequence Kd has meaningful content reflected in the visual sequence: introduction, development, climax and denouement.

The method presented in this paper is an innovative approach to simulating and analysing pieces of music. It is based on combining two human senses, hearing and vision, which results in mutually linked abstract figures (AKWETs) that illustrate music and allow its musical content to be analysed.

The AKWET method may be applied in: visualising music; studies and prediction of the impact of music on listeners; and supporting the composer's work. We are certain that continuation of this research, especially as the computer programmes are developed, may bring new discoveries in the area that combines music with technology.

Key words: Acoustic information, Visualisation, Musical content analysis

[email protected]


52.4 The effects of non-musical components on the ratings ofperformance quality

Jeanne Siddell

McGill University, Montreal, Québec, Canada

Auditions, concerts, competitions, and recitals are all part and parcel of becoming and sustaining a life as a musician. Understanding how non-musical components could affect evaluators' performance quality ratings is an important area of research. The purpose of the study was to determine how the presence of a music stand (i.e. an implied use of the score) impacts ratings of performance quality when no performance quality difference exists; to determine to what extent attractiveness, stage behaviour and appropriateness of dress impact performance ratings; and to identify the age at which this effect becomes significant. Fourteen cellists were videotaped performing an unaccompanied Bach movement of their choice from memory. The cellists then recorded a second performance with their music on a music stand, syncing their motions with the audio feedback from their first performance. One thousand fifty-one evaluators ranging from age six to age fifty-one were assigned to one of six groups: visual only, audio only, or audiovisual I, II, III, or IV, and were asked to rate specific criteria. Ratings for the audio only condition were significantly higher than for the audiovisual conditions. This could be attributed to the agreement by all visual only evaluators that the performers were, on the whole, not terribly attractive. When the performances were homogeneous (i.e. all memorized or all "non-memorized") no statistical difference was found between memorized and non-memorized ratings. Only when alternating memorized and "non-memorized" performances were viewed in the experimental condition did a significant bias towards memorized performances become evident. Regardless of group membership, evaluators became significantly less generous with their ratings as they got older. As well, the significant results found in the visual only group were irrespective of the evaluator's age or gender. Many results from this study contradict previous research. For example, audiovisual ratings are normally higher than audio only ratings, and the most attractive female is normally rated the best performer. The results from this study also provide fresh evidence in support of memorized performances over non-memorized ones. At this time, statistical analysis is ongoing and more detailed results will be available at the conference.

Key words: Cello performance evaluation, Memorization, Visual perception

[email protected]

52.5 Music, movement and marimba: An investigation of the role of movement and gesture in communicating musical expression to an audience

Mary Broughton, Kate Stevens, Stephen Malloch

MARCS Auditory Laboratories, University of Western Sydney, Australia

Background
Research in many fields has demonstrated the perceptual advantages of experiencing the world


through multiple sensory modalities for accurate and effective communication. The aim of the current study was to test the assumption that visual perception of movement plays a role in communicating a musically expressive performance. In the live, concert music setting, performers have increased opportunities for engaging audience attention and guiding awareness to musical content, by presenting information simultaneously via multiple modalities. Non-verbal behaviours and gestures are natural and integral components in interpersonal communication. This study is concerned with investigating the interaction of auditory and visual information in communicating musical expression to an audience. This study is of particular relevance to the marimba (a tuned, wooden instrument in the percussion family) because of its relatively restricted ranges of articulation and dynamics.
Aims
It is hypothesised that multi-modal perception, where the visual features are expressive and reinforce the performer's expressive musical intention (aural features), enhances the observer's level of interest and perceived expressivity.
Method
Musically expert and novice observers rated digitised presentations of solo marimba excerpts (projected or deadpan performance manners) on rating scales under two conditions: audio alone and combined audio-visual. The experimental design consisted of three factors each with two levels: modality (auditory alone; combined auditory and visual conditions), stimulus (projected performance manner; deadpan performance manner) and expertise of observer (novice; expert), with the first two variables as repeated measures. The dependent variables were observers' ratings of interest and perceived expressivity indicated on two separate seven-point Likert scales. The marimba was used as the instrument to create digital stimulus materials as the movements required to play it are visible. The stimulus material comprised sets of thirty-two 20-25 second excerpts of 20th-century solo marimba repertoire of fast and slow tempi and varying levels of difficulty and musical style, performed by two marimbists, one male and one female.
Results
Data are currently being collected and results will be available mid-April. The proposed analysis involves an examination of the effect of modality, expertise and stimuli interactions on interest and perceived expressivity ratings.

Key words: Musical expression, Multi-modal perception, Nonverbal communication

[email protected]


Part IV

Friday, August 25th 2006



53 Symposium: Music and media

Convenor: John Hajda

This symposium investigates the temporally organized interaction of aural and visual structures in multimedia contexts. Historically, the vast majority of empirical research on music has focused entirely on musical patterns of sound, yet vision is often an integral component of any musical experience. Consider, for example, the "listener" who is an audience member at a recital. Whereas traditional musicological research on the communication of emotion and meaning would only consider aural, and perhaps notational, elements of the performance, we will present research that shows the salience of visual components in a live musical performance. Conversely, we will present research that demonstrates the salience of musical components in film, video and animation. In short, we hold that meaning in any multimodal setting results from a complex interaction of modes of perception. Therefore, multisensory musical research, which has a history of less than 20 years (at least partially due to technological and cost constraints), adds a critical level of ecological validity to the study of the musical experience and, we believe, it will have a growing place in the study of music perception and cognition.

We use "media" in the general sense of that which conveys, expresses or communicates. Although we focus primarily on the perception of the listener/viewer, we see our research in the larger context of the communication of intended meaning from the composer/director to the performers to the listener/viewer. For the papers in this symposium, we consider communication in multimedia settings including music and live human performance, film, video or animation. We note that this list merely scratches the surface of common real-world communicative phenomena in which musical and visual domains are inseparably intertwined, such as video gaming, music videos and advertising, to name a few.

This two-hour symposium will present five connected, but unique, experimental research papers on the interactions and synchronicity of visual and musical structures and their effects on levels of meaning. The presentations and discussion, led by Kendall, aim to place all empirical research on emotion and meaning in music in the larger framework of ecologically valid modes of musical experience.


53.1 How music influences absorption in film and video: The Congruence-Associationist Model

Annabel Cohen

Department of Psychology, University of Prince Edward Island, Canada

Marshall and Cohen (1988) proposed the Congruence-Associationist Model to account for effects of music on the meaning of a short animation. Cohen (2001, 2005) extended the model to accommodate the integral roles that music plays in film, television and other media (cf. Boltz, 2004; Lipscomb, 2005). The present paper aims both to review the revised Model and to show how it can be used to predict and interpret results of research that explores the role of music on absorption or involvement in film and TV. The Congruence-Associationist (C-A) Model proposes that when audiences experience multimedia, they mentally generate a conscious working narrative. The working narrative results from top-down inferences that best match outputs of a bottom-up analysis of three audio and two visual channels (music, speech, sound effects, scenes, and text, respectively). The principle of Congruence refers to cross-channel structural similarities that guide attention. The principle of Association refers to cross-channel meanings that are aggregated and directed to attentional foci. The framework predicts that (1) self-assessed absorption in a film will be higher in the presence of music that elaborates the context as compared to a condition that lacks music, and (2) detectability of irrelevant visual symbols in a film will decrease in the presence of relevant music as compared to inappropriate or no music. Two sets of experiments tested these predictions respectively. In the first, participants rated their degree of absorption in short movie clips representing different genres. Comparison data were also obtained for other participants who rated realism and professional quality. In the second, participants viewed a feature film excerpt, approximately 15 minutes long, in the presence of appropriate, inappropriate, or no (or almost no) music. They were asked to detect an extraneous X that appeared at approximately 45-second intervals. The results are discussed in the light of the above-stated predictions of the C-A Model. The presentation emphasizes the role that music plays in absorption or involvement in the narrative of a film and supports the C-A Model as a framework for understanding both the encoding and influence of music in multimedia contexts.

Key words: Multimedia (or media), Absorption, Congruence-association

[email protected]

53.2 Perception of opening credit music in Hollywood feature films

John Hajda

University of California, Santa Barbara, USA

Background
The initial segment of a commercial film, into which opening credits are often integrated, is critical for setting up the ensuing narrative. Here, music can fill a variety of functions, including the


following:
- establishing the mood of the opening scene or the entire film
- conveying significant aspects of the story, such as genre

Previous research has demonstrated that music can evoke emotion in a visual context that would, without music, be perceived as neutral. This may well be the situation with the opening credits of many films.

Aims
The purpose of this research is to study the relationship between musical and visual elements in the communication of genre and emotion during the opening credits of select Hollywood feature films.

Method
Segments of opening credits from eight feature films, each with original underscoring, were used. Segments were edited such that film titles were removed and duration was normalized. Each film represented one of four genres: romance, adventure, horror and comedy. The stimuli were further edited for one of three experimental conditions: (1) music only, (2) visual only and (3) intended and unintended (i.e. visual film A/music film B) combinations of audio and visual. Subjects were given one of two tasks: (1) rate, by using VAME, the emotional content of the edited stimuli and (2) match edited stimuli with the genre of best fit.

Results
Emotional rating data will be analyzed by principal components analysis and ANOVA. The matching data will be analyzed via confusion matrix. Based on previous research, it is expected that unaltered audio/visual contexts will be properly matched by subjects, but mismatches will cause confusions for genre as well as affect emotional content based on the nature of the musical context.

Conclusions
To the author's knowledge, this is the first empirical study of the relationship between musical and visual components in opening credits. However, some films do not have opening credits and others do not introduce musical underscoring until well into the narrative. Future research must consider the effect of these permutations on the communication of emotion and meaning at the beginning of a film.

Key words: Music and media, Film

[email protected]

53.3 Perception of iconicity in musical/visual patterns

Roger Kendall

Music Cognition and Acoustics Laboratory, Program in Systematic Musicology, University of California, Los Angeles, USA

Background
The proliferation of multimedia presentations worldwide is evident particularly in the combination of music with time-changing visuals in such domains as film, video games, dance, ice-skating, gymnastics, and popular music videos, among many others. Therefore, systematic analysis of the implicit rules behind visual and musical combinations is not only justified but imperative; music as a recording isolated from performance visuals is likely only an artifact of limited technology, an issue that is increasingly being eliminated. Kendall, in a number of lectures and two recent articles


(2005a and 2005b) outlines an experimental semiotic theory based on a continuum of referentiality applicable to combinations of visual and musical elements. The current paper experimentally studies iconicity, where musical pattern suggests its connection to the visual domain. Based on temporal structure as its foundation, musical patterns iconically suggest motion in multidimensional space.
Aims
The purpose of this study is to implement part of a theory of musical iconicity and test it through perceptual experiments. For the purposes of this experiment, the musical variable will be pitch height, and the visual space will be 2-dimensional.
Methods
The ramp, arch, and undulation iconic prototypes outlined in Kendall (2005b), with the addition of a spiral visual pattern with motion of constant angular velocity, are employed. The visual object is a filled circle that moves across a 2-D space in temporal congruence with the musical pattern. Descending (right to left) and ascending (left to right) pitch ramps, an arch ascending 4 pitch chroma and descending 3 chroma from the tonic (C), an undulation which ascends 4, descends 5, and then ascends 3 chroma, a single pitch with increasing IOI, and another with decreasing IOI are hypothesized to map onto the ball motion. There are 6 musical patterns and 8 visual patterns that are combined to create 48 total stimuli. Each stimulus consists of pitch and visual motion for 2.5 sec followed by 1.25 seconds of stop motion and a held half-note pitch.

Each trial lasts 10 seconds. One computer presents the visual example on a 15 inch diagonal screen. A second computer running MEDS (Kendall, 2002) collects the subject responses using a mouse on a 0-100 point VAME rating scale. There are two random orders of visual stimuli assigned at random to subject blocks.
Results
Results are analyzed as a within-subjects repeated measures ANOVA with variables of pitch pattern (six levels) and visual pattern (eight levels). In addition, a cross-correlation matrix of subject responses is submitted to hierarchical clustering analysis and classical multidimensional scaling.
Conclusions
Future experiments will expand the musical variables to include dynamic, spatial sound, and timbral factors, along with 3-D visual spaces.

Key words: Media, Multimodal, Iconicity

[email protected]

53.4 Facial expressions of pitch structure

William Forde Thompson, Frank A. Russo

University of Toronto, Canada

Background
Research suggests that facial expressions used during music performances have striking effects on music perception (Davidson & Correia, 2002; Thompson, Graham, & Russo, 2005), including judgments of emotion, pitch relations, dissonance, and tonality. In this talk, we'll describe recent findings on how facial expressions express pitch structure and the extent to which they influence our perceptions and interpretations of music. Following a discussion of the cognitive implications


of these data, we'll trace old and new media trajectories of aural and visual dimensions of music, highlighting how our conceptions, perceptions, and appreciation of music are intertwined with technological and media deployment strategies.
Aims
Our goal was to determine whether facial expressions of music performers can communicate pitch structure (interval size and tonal relations) and to identify the specific cues that are decoded by viewers.
Methods
In Experiment 1, accomplished singers sang four ascending intervals. The two notes of the intervals spanned 2, 4, 7, and 9 semitones. Singers were videotaped, and participants were presented with the video component alone. Conditions varied in the number of facial features occluded. In Experiment 2, singers sang a number of key-defining melodies that ended on each note of the chromatic scale. Singers were videotaped, and participants were presented with the video component alone.
Results
In Experiment 1, listeners differentiated interval size based on facial cues alone. Differentiation was observed even when all facial cues were occluded (i.e., head movements only), but performance was enhanced when facial cues were available. In Experiment 2, listeners differentiated the tonal stability of notes based on facial cues alone.
Conclusions
Facial expressions provide an important source of information about pitch structure. We discuss the implications of these and other effects for the deployment of music in contemporary media.
References
Davidson, J. & Correia, J.S. (2002). Body movement. In The Science and Psychology of Music Performance, R. Parncutt and G.E. McPherson (eds.), 237-253. New York: Oxford University Press.

Thompson, W. F., Graham, P., & Russo, F. A. (2005). Seeing music performance: Visual influences on perception and experience. Semiotica, 156, 203-227.

Key words: Audio-visual integration, Performance, Media

[email protected]


54 Rhythm VI

54.1 Perceptual motion: Expectancies in movement perception and action

Kate Stevens

MARCS Auditory Laboratories, University of Western Sydney, Australia

There is growing evidence that action and perception are intimately linked and that when we observe an action we use the repertoire of motor representations used to produce the same action. We contend that to explain the processes that underpin motor perception and action, theoretical approaches and tools are required that create controlled and graded differentiation between them. In the experiment reported here, we use an established and reliable protocol from human timing and expectancy research with the intention of yielding graded differences between the extremes of perception and action.

Synchronization protocols involve the presentation of temporal patterns that set up event expectancies; e.g., a test stimulus is presented that occurs at the expected location as predicted by prior inter-onset intervals (IOIs), or one that is early or late relative to the IOIs. This method leads to: i) behavioural responses that are quantifiable; ii) cognitive expectancies, which relate to an implied or anticipated event and enable investigation of the role of central processes; iii) identical tasks that can be used in both perception and action conditions.
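Such a trial can be sketched in a few lines. This is an illustrative sketch only: the function name, the IOI and the offset values are hypothetical, not those of the reported experiment.

```python
def expectancy_trial(ioi_ms, n_context, probe_offset_ms=0):
    """Onset times (ms) for one trial: n_context isochronous context
    events, then a test event shifted early (offset < 0), on time (0)
    or late (> 0) relative to the expectancy set up by the prior IOIs."""
    onsets = [i * ioi_ms for i in range(n_context)]
    onsets.append(n_context * ioi_ms + probe_offset_ms)
    return onsets

# 600 ms IOIs, four context events, test event 50 ms early
print(expectancy_trial(600, 4, -50))  # [0, 600, 1200, 1800, 2350]
```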

Theories propose that movement perception and neural simulation are constrained by the observer's motor competence and expectations. Therefore, effects of expertise will be used to further differentiate perception and action. The present sample is divided into performing musical instrumentalists and those with no specialist expertise. The former, through years of training and practice, will have developed heightened fine motor control for finger movements and excellent auditory acuity.

Results of an experiment will be reported in which we investigate the hypothesis that test trials that follow action elicit the most accurate expectancy judgments. As musical training improves motor planning, synchronization, and temporal expectancy, trained musicians are expected to respond more accurately and with less variability than untrained listeners, and this should manifest in both perception and action conditions. The results will be discussed in the light of Jones' model of expectancy and its implications for perception-action coupling.

Key words: Expectancy, Action, Timing


[email protected]

54.2 Perception and production of short western musical rhythms

Makiko Sadakata, Peter Desain

Music Mind Machine Group, NICI, Radboud University Nijmegen, The Netherlands

Background
Recently, an index that measures the variability of successive durations (nPVI) has been applied to the analysis of music scores. It has revealed that this serial characteristic of temporal information in music differs cross-culturally (Patel & Daniele, 2003, etc.). More interestingly, this difference was in line with the difference found in the speech rhythm of the mother tongue.
Aims
The nPVI measures a serial, surface level of durational contrast. However, most rhythms in western tonal music are structured in a hierarchical manner. Therefore, it is interesting to see if a hierarchical measure accounts even more for cross-cultural differences in the perception and production of musical rhythms.
Method
We used short rhythms consisting of three intervals. Eighteen Dutch and 18 Japanese pianists participated in experiments, which consisted of a rhythm perception task (identification), a rhythm production task (performance from scores) and a familiarity judgement task. We applied the serial index (nPVI) as well as the hierarchical index (the syncopation measure of Longuet-Higgins & Lee, 1982) of the rhythmic structure for analysing the data.
Results
Results confirmed a systematic cultural difference only in the rhythm production of syncopated rhythms. Dutch pianists tended to perform the second duration longer, while Japanese pianists tended to produce the last duration longer. Interestingly, a related tendency has been reported in the perception of speech rhythms. A positive correlation was found between the syncopation index of the rhythm and the amount of diversity in processing musical rhythm. The variability in performing tasks within and between individuals became significant when the task involved more complex syncopations.
Conclusions
Within the rhythms we used, the cultural differences emerged only for patterns that are more complex in terms of their hierarchical structure. This cultural tendency corresponds with its counterpart in speech rhythm.
References

Longuet-Higgins, H. C., & Lee, C. S. (1982). The perception of musical rhythms. Perception, 11(2), 115-128.

Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87(1), B35-B45.

Key words: Cross-cultural study, Rhythm production, Rhythm perception

[email protected]


54.3 Co-operative tapping and collective time-keeping - differences of timing accuracy in duet performance with human or computer partner

Tommi Himberg

Centre for Music and Science, Faculty of Music, University of Cambridge, UK

Background
Current models of time-keeping and sensorimotor synchronisation only cover situations where a solo performer maintains a steady pulse without an external referent, or where the referent is non-responsive (does not adapt its phase or period in response to the performer). However, in real-life group music-making, participants have to take each other into account; collective time-keeping in ensembles is a dynamic process based on mutually responsive relations between the performers. In recent experiments by the author, the characteristics of collective time-keeping have been studied using co-operative tapping tasks. Participants of these experiments have reported a qualitative difference between tapping along with a metronome and tapping with another person.

Aims
The aim of the current study was to compare synchronisation in a non-responsive environment (solo, or with a metronome or computer) with synchronisation in a mutually responsive setting, and to try to find quantifiable differences in the tapping performances in these two conditions.

Methods
Two participants tapped isochronous or interlocking rhythm patterns, with and without a metronome. The following conditions were compared: solo tapping (1 tapper), duet tapping (2 human tappers), and "fake" duet tapping (a human tapper with a computer-tapper mimicking a human tapper). The computer-tapper had a phase irregularity similar to a human tapper's, and either kept the original tempo or had a constant period error (was speeding up or slowing down).
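A computer-tapper of the kind described, with human-like phase jitter and an optional constant period error, could be sketched as follows. This is an illustrative sketch; the function name and the default parameter values are assumptions, not the ones used in the study.

```python
import random

def computer_tapper(n_taps, period_ms, phase_sd_ms=15.0,
                    period_drift_ms=0.0, seed=None):
    """Tap onset times (ms) around an underlying pulse train.
    phase_sd_ms adds human-like Gaussian phase irregularity; a non-zero
    period_drift_ms gives a constant period error (speeding up if
    negative, slowing down if positive)."""
    rng = random.Random(seed)
    onsets, t, period = [], 0.0, float(period_ms)
    for _ in range(n_taps):
        onsets.append(t + rng.gauss(0.0, phase_sd_ms))
        t += period
        period += period_drift_ms
    return onsets

# A tapper that keeps tempo vs. one that slows by 2 ms per tap
steady = computer_tapper(8, 600, seed=1)
slowing = computer_tapper(8, 600, period_drift_ms=2.0, seed=1)
```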

Results
(Proper experiments are due January-February 2006; these tentative observations are based on pilot data.)

In the pilot experiment, the participants in most trials recognised whether they were tapping with a computer-generated playback or with the other tapper. Coordination between two human tappers was generally better than in the human-computer pairs, especially when the computer was exhibiting a period error in addition to the "normal" phase error.

Conclusions
These results suggest that, with regard to music performance, our current models of time-keeping and synchronisation are incomplete. Collective time-keeping is a dynamic process in which the participants are continually adapting the phase and period of their pulse-trains to match those of the other performers. Musically relevant timing should be studied in real, interactive settings rather than with isolated individuals and pre-programmed stimuli.

Key words: Musical interaction, Cooperative tapping, Sensorimotor synchronisation

[email protected]


54.4 Trends in/over time: Rhythm in speech and music in 19th-century art song

Leigh van Handel

Michigan State University, USA

Background
This paper presents the results of a quantitative study of the relationship between rhythmic characteristics of spoken German and French and the rhythm of musical melody in 19th-century art song.

Aims
Linguists have demonstrated that spoken French contains less variability between the lengths of successive syllables than does spoken German, and refer to the measure of this variability as the Normalized Pairwise Variability Index, or nPVI; French's lower variability is reflected in a lower nPVI than that for German.

A recent series of articles published in Music Perception demonstrates a general correlation between the nPVI for language and a similarly calculated value applied to the rhythm of instrumental incipits.

This study modified the nPVI measure to allow it to apply in a relevant manner to a larger sample of music; this modified measure was called the pnPVI.
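For reference, the standard nPVI that the pnPVI modifies can be sketched as follows. This is a sketch of the published nPVI formulation only; the abstract does not specify the exact form of the authors' pnPVI, and the function name is an assumption.

```python
def npvi(durations):
    """Normalized Pairwise Variability Index: 100 times the mean, over
    successive pairs of durations, of the absolute difference of the
    pair divided by the pair's mean."""
    if len(durations) < 2:
        raise ValueError("nPVI needs at least two durations")
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
    return 100.0 * sum(terms) / len(terms)

# Perfectly isochronous rhythm: no durational contrast
print(npvi([1, 1, 1, 1]))  # 0.0
# Alternating long-short (2:1) rhythm: high contrast
print(npvi([2, 1, 2, 1]))  # ≈ 66.7
```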

Method
More than 600 French and German art songs by 18 composers were encoded and studied using David Huron's Humdrum Toolkit. In addition to using existing Humdrum analysis tools, new tools were developed for the study.

Results
The expected result was that the rhythmic characteristics of melodies would mimic the rhythmic characteristics of spoken language, and that the overall pnPVI would be lower for French than for German. While this was the result, the difference in pnPVI values was not statistically significant; this differs from the results in Patel and Daniele (2003) and Huron and Ollen (2003).

However, the study returned an unexpected result that will be the focus of this presentation: the two languages exhibited sharply diverging trends as a function of time through the 19th century. That is, a decreasing pnPVI value for French song and an increasing pnPVI value for German song indicate that, according to this data set, the rhythms of French and German song melodies changed from 1840-1900 to more closely reflect the measured differences in the rhythm of the spoken languages. This trend appears both overall and in the data of individual composers.

Conclusions
This presentation will discuss the study and its outcomes, focusing on the trend for the rhythmic characteristics to change over time. The results for individual composers will also be discussed, and examples provided to illustrate the rhythmic changes over time.

Key words: Humdrum, Language, Music

[email protected]


54.5 Polyrhythmic communicational devices appear as language in the brains of musicians

Peter Vuust1, Leif Ostergaard2, Andreas Roepstorff3

1Royal Academy of Music, Aarhus, Denmark
2Centre of Functionally Integrative Neuroscience, Aarhus University Hospital, Denmark
3Institute for Anthropology, University of Aarhus, Denmark

Background
Polyrhythm is one of the major trajectories for communication in jazz. This is especially true in the style of music propagated by Miles Davis' quintet of the 1960s, in which free communication between the musicians was the centre of the music. An analysis of this music reveals two main polyrhythmic communicational devices: 1) metric displacement (MD), the superposition of a counter meter, isometric with the main meter but with a different starting point, and 2) regrouping of subdivisions (RS), e.g. by accentuation of every fourth 8th-note triplet. MD is mainly used for establishing contact between musicians, whereas RS is a device for creating anticipatory patterns of tension and relief. Could this abstract communication through music be comparable to other forms of human communication, such as language?
Aims
We studied this question by examining the neural processing of MD and RS. 1. Using magnetoencephalography (MEG), we investigated neural responses to metric incongruence (including MD) in jazz musicians and in non-musicians. 2. Using fMRI, we investigated neural correlates in musicians of epochs of intensive polyrhythm (RS) compared with epochs of no rhythmic tension.
Main Contribution
1. At the pre-attentive level, incongruent rhythms (MD) strongly and rapidly stimulated the auditory cortex of the left hemisphere of expert musicians, in contrast to rhythmically inept non-musicians, who showed a predominant right-hemispheric response.

2. RS activated Brodmann area 47 (BA47) bilaterally when musicians tapped the main pulse in a polymetric context where the music emphasized a counter pulse. Activation of BA47 in the right hemisphere correlated negatively with subjects' metrical competence, and the difference between the left and right hemispheric responses correlated positively with metrical competence.
Implications
These studies suggest that brain processes fundamental to linguistic interaction also subserve brain responses in jazz musicians to MD and RS. We hypothesise that this reflects their usage in jazz as communicational devices. Thus, similar cognitive processes and strategies may serve music, language and possibly other forms of communication. We therefore propose that to musicians, rhythm is a language.

Key words: Polyrhythm, fMRI, MEG

[email protected]

54.6 Measuring swing in Irish traditional fiddle performances

Vincent Rosinach, Caroline Traube


Laboratoire d'informatique, acoustique et musique (LIAM), Faculté de musique, Université de Montréal, Québec, Canada

Background
As in blues, country and jazz, Irish traditional fiddle music is often played with swing rhythm, which consists in performing eighth notes such that downbeats and upbeats receive approximately 2/3 and 1/3 of the beat, respectively, giving a triplet feel and providing a rhythmic lift to the music. The swing ratio can be defined as the ratio of the durations of the two successive eighth notes (long/short). The amount of swing varies among performers and repertoires, from no swing (1:1) to hard swing (3:1). The usual swing ratio in Irish traditional fiddle music is around 2:1 (triple meter). Although swing can be clearly perceived by experienced performers, it is a very subtle rhythmic variation that cannot easily be quantified perceptually and that provokes many debates among musicians.
Aims
The aim of this study is to quantify the amount of swing through the analysis of audio recordings of Irish fiddle performances.
Method
Since, as is typical of the violin, note attacks are not well defined, we base the segmentation of the audio recordings into notes on the fundamental frequency extracted from the autocorrelation function of successive time frames. Taking advantage of the presence of accents, we determine the meter from the autocorrelation of the average power signal. The vector of note durations is then ordered (from the shortest to the longest) and quantized. Finally, the short and long eighth notes are identified and the average swing ratio is calculated.
Results
We tested our analysis method on synthesized audio samples, which gave good results: pitches and durations were accurately estimated. We then analyzed various actual Irish fiddle performances and were able to extract the swing ratios characterizing the playing of the different performers.
Conclusions
Our results showed that swing is actually present in the acoustic signal of the Irish fiddle performances we studied. We still need to make our algorithm more robust so that it can handle all real-life recordings. We would also like to extend the study to a larger number of musicians, as well as to other musical and acoustical parameters of Irish fiddle playing.
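The final ratio computation of the method can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and it assumes the eighth-note durations have already been segmented from the recording:

```python
# Hypothetical sketch: estimate the average swing ratio (long/short)
# from a list of eighth-note durations (in seconds).

def swing_ratio(durations):
    """Sort the durations and split them at the median into 'short'
    and 'long' groups, mirroring the order-and-quantize step, then
    return the ratio of the two group means (long/short)."""
    d = sorted(durations)
    half = len(d) // 2
    short = sum(d[:half]) / half
    long_ = sum(d[half:]) / (len(d) - half)
    return long_ / short

# A hard-swung pattern (3:1) alternating 0.75 s and 0.25 s eighth notes:
print(swing_ratio([0.75, 0.25] * 8))  # → 3.0
```

A straight (no-swing) performance, with all eighth notes equal, would yield a ratio of 1.0 under the same computation.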

Key words: Swing, Rhythm, Autocorrelation

[email protected]

55 Memory

55.1 Testing the influence of the group for the memorisation of repertoire in Trinidad and Tobago steelbands

Aurélie Helmlinger

Laboratory of Ethnomusicology, UMR 7173/LEAD, UMR 5022, France

Background
This research deals with memory for repertoire in steelbands, trying to explain how seasonal players, who play about one month a year and are sometimes beginners, are able to perform by rote very technical, symphonic-like tunes at an extremely fast tempo. The players have to face written-tradition constraints (performing one person’s piece without variation) within an oral-tradition learning and performing process (no score).
Aims
The research tries to evaluate the possible influence of the other players in the band (performers playing the same part): fieldwork observations tend to suggest a positive influence of the collective situation on individual performances. More particularly, the importance of the visual parameter is explored.
Method
The research is based on participant observation in Trinidad and on a free recall task inspired by cognitive psychology. The experiment was carried out with two panels of Trinidadians (regular and seasonal players) and with a panel of French pianists. The musicians had to learn three melodies of growing difficulty from a video, and retrieve each of them twice in different situations: alone; between two persons who knew the melody; and between the same two persons hidden behind a curtain. The goal was to compare the number of mistakes across these situations.
Results
The error rate of the performed melodies was coded through two criteria: Non-Retrieval of Pitch (NRP), the number of non-retrieved pitches, expressed as a percentage of the reference melody; and Wrong Phrasing (WP), the number of beats in the melody including one or several rhythmic mistakes, expressed as a percentage of the number of beats in the reference melody. An analysis of variance was applied to compare the results.
Conclusions

The difference between solitary and collective situations appears clearly, but in opposite directions according to the criteria: NRP is higher in collective situations, whereas WP is higher in the solitary situation. The difference between the two collective situations is less obvious, but becomes significant in the second recall. One can conclude that the group has a positive influence on the rhythmic dimension of the melody, even though the contradictions between the different criteria call for caution and further investigation.

Key words: Ethnomusicology, Steelbands, Trinidad and Tobago

[email protected]

55.2 Footprints of musical phrase structure in listeners’ responses to different performances of the same pieces

Neta Spiro

Centre for Music and Science, Faculty of Music, University of Cambridge, UK, and Cognitive Science Center, University of Amsterdam, The Netherlands

Background
Experimental approaches similar to those of phrase identification (Deliège, 1998) and performance features (Friberg and Battel, 2002) are used to compare responses to different recorded performances of the same pieces.
Aims
To identify factors that contribute to listeners’ perceptions of different performances of the same piece from the western classical repertoire.
Method
Forty-five listeners with a variety of musical experience identified musical phrases while hearing two publicly available recordings of pieces by Bach, Mozart, Brahms and Wagner. Two groups of listeners heard two sets of different performances in sessions two weeks apart; each group heard a different set first. Tempo and dynamics contours of the recordings were analysed.
Results
Listeners’ responses were found to be consistent, both within and between individuals, for the same performances of each piece within both sessions. The same main phrase positions were identified in both performances, all coinciding with previously identified musical features.

The pieces can be divided into two groups: 1. The tempo and intensity contours of the two performances are similar, and there is no significant difference between responses to the two performances of the same piece. 2. The tempo and intensity contours of the two performances are different, there is a statistically significant difference in the proportion of listener responses at the same positions, and there is coincidence between the patterns of the contours and the responses. Moreover, in the second session the responses to each piece bear “footprints” of the responses to the previous performance, and the differences between the responses to the two performances are much smaller (no longer statistically significant).
Conclusions
The main phrase positions identified by listeners for both performances of each piece are the same and coincide with musical features. The proportions of response to positions in some performances

change in agreement with performance features. When there are differences in responses to the two performances of the same piece in the first session, there seems to be an effect of the first performance heard on the responses to the second (two weeks later). The first hearing leaves a “footprint” on the phrase identification in the second.
References

Deliège, I. (1998). Wagner “Alte Weise”: Une approche perceptive. Musicæ Scientiæ (Special Issue), pp. 63-90.

Friberg, A. and Battel, G. U. (2002). Structural Communication. In R. Parncutt and G. E. McPherson (Eds.), The Science and Psychology of Music Performance: Creative Strategies for Teaching and Learning. Oxford: OUP, pp. 119-218.

Key words: Phrase perception in MIDI and performances, Analysis of performances, Memory effect

[email protected]

55.3 Working memory for music and language in normal and specific language impaired children

Stephan Sallat

Justus-Liebig-Universität Gießen, Germany

Both language and music comprise a limited number of elements, which are organized according to special rules with an infinite number of possible combinations. In recent years it has been shown that music and language are processed in similar ways and share neural resources. Furthermore, there is growing evidence that children use the same learning mechanisms in language and music acquisition. In early childhood, especially during the preverbal period, prosodic elements, that is, mostly musical parameters (pitch, contour, melody, rhythm), are the most important components of the acoustical input. Therefore, music might play a key role in the understanding of linguistic patterns, words and language.

Children with specific language impairment (SLI) show a significant limitation in language ability without any hearing impairment or neurological damage, and with normal nonverbal intelligence. As so-called late talkers, they acquire their first words at a time when typically developing children are already able to combine two or more words. In my talk I hypothesize that a deficit in processing musical parameters might be responsible for SLI, owing to their dominant role in the preverbal period. In this case, an integrated view of language and music, including production, perception and cognitive processes such as working memory, may enlarge our understanding of their normal development and their impairments.

I am going to present a behavioral study on working memory in 4- to 5-year-old typically developing children and 5-year-olds with SLI. The performance of the children in the linguistic tasks of a German standardized language acquisition test (word order / nonword and sentence repetition) was compared with music tasks (melody pairs of different lengths, changing in melody or rhythm). Results reveal that SLI children also show specific deficits in the musical tasks. Furthermore, musical working memory was shown to be a very good predictor of language abilities. These observations can lead to the development of new instruments for the diagnosis and therapy of language impairments.

Key words: Working memory, Language acquisition, Specific language impairment

[email protected]

55.4 Predicting memorization efficiency through compositional characteristics

Jennifer Mishra

University of Houston, USA

Recent qualitative studies have confirmed that memorization requires a great deal of time, even for the professional musician (e.g., Chaffin, Imreh, and Crawford, 2002). It is not yet known whether memorization efficiency is influenced consistently by identifiable factors, and whether influential factors are within or beyond the control of the musician to alter for more efficient memorization. A large number of factors may potentially influence the amount of time required to memorize; these appear to fall broadly into two categories: memorization strategy and characteristics of the composition. The purpose of this study was to explore the effects of compositional characteristics on memorization efficiency. Fourteen research studies investigating musical memorization provided efficiency data and sufficient information to collect the notation for the memorized excerpts. Notation for 56 of the 60 different compositions and compositional excerpts used in the 14 research studies was obtained. The compositional characteristics chosen for inclusion in the regression model were number of bars, number of beats, number of notes, tonality, number of sharps/flats in the key signature, number of chromatic tones, meter, tempo, and number of repeating bars. A stepwise multiple regression was conducted to determine whether the listed variables were predictors of total memorization time. Number of notes (representing amount of memorized material) was found to be the single best predictor of memorization efficiency (adjusted R2 = .565), with the additional factors of number of beats (representing length), number of chromatic tones (representing harmonic complexity), and tonality accounting for 70.2% of the variance. Compositions used in studies of musical memorization were generally short, with an average of 23 bars and 229 notes. Musicians commonly memorize pieces with hundreds of bars and thousands of notes. Using memorization efficiency data collected from the shorter compositions, a curve estimation was computed to predict memorization time for longer compositions. Compositional characteristics do affect how much time is required to memorize a piece. The amount of time required for memorization depends to a large extent on the composition itself. Compositional characteristics, isolated from memorization strategies, predict the amount of time required to memorize a piece.
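A forward stepwise regression of the kind described can be sketched as follows. This is an illustration, not the author's actual analysis: the data are synthetic and the predictor names merely echo those in the abstract.

```python
import numpy as np

def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R^2 for a fitted model with n_predictors regressors."""
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

def forward_stepwise(X, y, names):
    """Greedily add the predictor that most improves adjusted R^2,
    stopping when no addition helps; returns (selected names, score)."""
    selected, best = [], -np.inf
    while True:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = {}
        for j in candidates:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            scores[j] = adjusted_r2(y, A @ beta, len(cols))
        if not scores:
            break
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        best = scores[j_best]
        selected.append(j_best)
    return [names[j] for j in selected], best

# Illustrative use with synthetic data; the study's real predictors
# included number of notes, beats, chromatic tones, tonality, etc.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)
print(forward_stepwise(X, y, ["notes", "beats", "chromatic", "tonality"]))
```

Because adjusted R^2 penalizes extra regressors, the loop stops once a new predictor no longer pays for its degree of freedom, which is the spirit of stepwise selection.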

Key words: Memorization, Efficiency

[email protected]

55.5 Music and memory in advertising: Music as a device of implicit learning and recall

Margarita Alexomanolaki1, Catherine Loveday2, Chris Kennett2

1Goldsmiths College, University of London, UK
2University of Westminster, UK

Music may play several roles and have many effects in advertising: it may attract attention, carry the product message, act as a mnemonic device, and create excitement or a state of relaxation. There have been numerous studies focusing on the general perceptual, cognitive and affective processing that occurs in response to exposure to music; there have also been studies on the effects of music on short- and long-term memory. However, few of these have examined the specific importance of music as a mnemonic device within filmed events (Boltz et al. 1991) or TV commercials (Yalch 1991, Stewart et al. 1998). In this paper, the role of music within advertising is evaluated under low-attention conditions. A series of experiments was carried out in which musicians and non-musicians were exposed to an advert embedded in a group of other adverts, presented in the middle of an engaging TV program, thus replicating very naturalistic conditions. Four audio conditions were examined in an example advertisement: jingle, instrumental music, instrumental music with voice-over, and environmental sounds with voice-over. Results indicate that music is effective in facilitating implicit learning and recall of the advertised product, showing that, under non-attentive conditions, there is a certain mechanism of unconscious elaboration of the musical signal. Previous musical training seems to have little significance under low-attention conditions; thus we observe an unconscious physiological reaction to the information carried by the music of a commercial which is common to musicians and non-musicians. Conclusions concerning the function of music on listeners and on memory stimulation could prove effective in an analysis of the communicative role of music in advertising, and might also have wider ramifications for current research into more generalised analysis of music and meaning.

Key words: Implicit memory, Music, TV advertising

[email protected]

55.6 Language use in autobiographical memory for musical experiences

Mark Kruger, Mark Lammers

Gustavus Adolphus College, St. Peter, Minnesota, USA

Background
Autobiographical memory is believed to aid in the development of the self, social bonding, and motivation (Bluck, 2003; Pillemer, 2003). Because music is a pervasive part of most individuals’ lives (Sloboda & O’Neill, 2001) and has been connected to peak emotional experiences (Gabrielsson, 2001), it is interesting to ask whether memory for musical experience is similar to other autobiographical memories and whether it differs as a function of musical expertise.

Aims
Our study examines memory for musical experiences among adults associated with a symphony orchestra either professionally, as an amateur member of a civic orchestra, or as a member of the audience, in order to explore the content and function of autobiographical memories.

Method
Twenty full-time symphony musicians, eighteen adult members of a civic orchestra, and seventy-two adult members of a concert audience wrote short descriptions of three types of musical experiences: an early experience, a very strong or memorable experience, and a meaningful experience associated with practicing an instrument. Autobiographical memories for musical experiences were assessed using the Pennebaker, Francis, and Booth (2001) Linguistic Inquiry and Word Count software. Each narrative was coded for number of words, for the proportion of positive and negative emotion words, for the proportion of cognition words related to causality and insight, and for the proportion of words related to social relationships.
Results
Although full-time musicians, civic orchestra members, and audience members did not differ in reported ease of recall, performers produced longer narratives (p < .05). Expertise also produced differences (p < .05) in the proportion of words associated with cognition. Performers used proportionately more words associated with insight, especially in their narratives about practice. Gender, but not expertise, influenced the use of emotion words (p < .05).
Conclusions
These results extend our earlier findings that professional musicians are more likely to report events they found motivating, events they continued to share with others, and events involving ensemble performance. Expertise increases the use of words associated with cognitive mechanisms. This may reflect an abstraction of experience. The observed gender differences are consistent with other research on autobiographical memories (Bauer, Stennes, and Haight, 2003).

Key words: Autobiographical memory, Expertise

[email protected]

56 Symposium: Interactive reflective musical system for music education

Convenor: Anna Rita Addessi

Discussants: David Hargreaves and Göran Folkestad

The relationship between new technology and learning is gaining more relevance in the field of music education (Webster 2002, Folkestad et al. 1998). This symposium aims to introduce a particular kind of interactive musical system, the so-called interactive reflective musical systems (IRMS), and to discuss how they can be used in music education. IRMS are interactive systems in which the user, whatever his skills, competence level and musical goals, is confronted with some sort of developing mirror of himself. The core concept of this approach is to teach powerful, but complex, musical processes indirectly by putting the user in a situation where these processes are developed not by the user, nor by the machine, but by the actual interaction between the user and the system. A particular application of these systems, the Continuator, elaborated by François Pachet, researcher at CSL-Sony in Paris, will be discussed (Pachet 2003, 2006). This system is able to produce music in the same style as a human playing the keyboard. An important consequence of this design is that the musical phrases generated by the Continuator are similar to, but different from, those played by the users. The experiments carried out so far with children aged 3 to 5 years suggested that the Continuator is able to develop interesting child/machine interaction and creative musical processes in young children, and that it could represent a versatile device to enhance musical invention and exploration in classroom settings (Addessi & Pachet 2005).

The aim of the symposium is to bring together experts in technologies and music education in order to illustrate and discuss the experiments carried out so far and the educational and technological values of this particular system, and to foster new experiments and projects. The final aim is to create a spiral collaboration between system designers and experts in pedagogy and psychology.

The symposium is planned as follows: François Pachet (CSL-Sony, Paris) will introduce the IRMS, the experiments realized so far and the new projects; A. R. Addessi (University of Bologna) will present the pilot study; Susan Young (Exeter University) will illustrate the experiences currently being undertaken in the UK; Ajax McKerral (LSO) will introduce the London Symphony Orchestra Discovery. David Hargreaves (Roehampton University, UK) and Göran Folkestad (Lund University, Sweden) are invited to take part as discussants.

56.1 Interactive reflexive musical systems

François Pachet

Background
For the last five years we have developed various interactive musical systems with the goal of assisting musical improvisation. One of these systems, the Continuator, is based on a question/answering scheme in which each phrase played by a user is continued (or answered) by the system in the “same style”. The system has been tried out both with several jazz musicians in the context of professional jazz improvisation, and with children in the context of psychological experiments related to music education. These various experiments have led to a conceptualisation from a system design perspective coined IRMS, for Interactive Reflexive Musical Systems, i.e. systems in which a facet of the user’s musical personality is reflexively exploited to create musical interactions.
Aims
The aim of the project is now to leverage the ideas and results obtained in this first phase and create a second phase of experiments, with a “spiral” model of collaboration in mind: from an educational viewpoint, the goal is to exploit these ideas concretely to create better educational systems; from the system design viewpoint, the goal is to design yet better IRMS based on the results of these experiments, in particular to invent new interactive schemes (not only question/answer) that provide the same quality of feedback as the Continuator.
Major Contribution
The major contribution of CSL in this project has so far been the very invention of the concept of IRMS and the design and development of early systems, including the Continuator. The next contribution will be to propose novel systems (design and implementation) based on this concept, which will benefit from the analysis of the new experiments. In particular we seek to design systems aiming at creating not only continuous streams of improvisation, but also fully structured pieces (songs). Additionally, we are strongly interested in the coupling between musical performance and music listening, as developed in our MusicBrowser project. In this project the notion of musical descriptor is central: descriptors are symbolic information extracted automatically from audio signals (typically mp3s). We are interested in exploiting these descriptors for musical performance, both to create novel forms of musical interaction and to facilitate access to collections of existing music repertoire for children.
Implications
The implications of this project are essentially to expand on the notion of IRMS and its concrete applications to music education.
References

Addessi, A. R. and Pachet, F. (2005). Experiments with a Musical Machine: Musical Style Replication in 3/5 Year Old Children. British Journal of Music Education, 22(1).

Pachet, F. (2006). Enhancing Individual Creativity with Interactive Musical Reflective Systems. Psychology Press.

Pachet, F. (2004). On the Design of Flow Machines. In The Future of Learning. IOS Press.

Pachet, F. and Addessi, A. R. (2004). When Children Reflect on Their Playing Style: The Continuator. ACM Computers in Entertainment, 1(2).
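The Continuator itself is built on a considerably richer style-learning model; purely as an illustration of the question/answer scheme described above (the first-order Markov simplification and all names are ours, not Pachet's), a minimal sketch:

```python
import random

def learn(phrases):
    """Build a first-order Markov table of note transitions from the
    user's phrases (each phrase a list of MIDI pitches)."""
    table = {}
    for phrase in phrases:
        for a, b in zip(phrase, phrase[1:]):
            table.setdefault(a, []).append(b)
    return table

def continue_phrase(table, phrase, length=8, seed=None):
    """Answer a phrase by walking the learned transition table,
    starting from the phrase's last note; on a dead end, restart
    from a random learned note."""
    rng = random.Random(seed)
    note, answer = phrase[-1], []
    for _ in range(length):
        nexts = table.get(note)
        note = rng.choice(nexts) if nexts else rng.choice(list(table))
        answer.append(note)
    return answer

# Learn from one user phrase, then answer a new one in its "style":
table = learn([[60, 62, 64, 62, 60]])
print(continue_phrase(table, [60, 62], length=5, seed=1))
```

The answer stays within the pitch material and local transitions of the learned phrases, which is why such continuations sound "similar but different", the property the symposium attributes to the Continuator.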

Key words: Interactive reflexive musical systems, Continuator, Musical creativity

[email protected]

Friday, August 25th 2006

56.2 Pedagogical perspectives of the Interactive reflexive musical systems

Anna Rita Addessi

University of Bologna, Italy

Background
The relationship between new technology and learning is gaining more relevance in the field of music education (Webster 2002). IRMS are a particular kind of interactive system in which the user, whatever his skills, competence level and musical goals, is confronted with some sort of developing mirror of himself. The core concept of this approach is to teach powerful musical processes indirectly by putting the user in a situation where these processes are developed not by the user, nor by the machine, but by the actual interaction between the user and the system. A particular application of these systems, the Continuator, was elaborated at CSL-Sony in Paris (Pachet 2004, 2006).

Aims
The aim of the DiaMuse project is to understand in what way children relate to these interactive musical systems, what kinds of musical and relational behaviours are developed, and how the systems can be used in the educational field to stimulate creativity and the pleasure of playing. A further aim is to create a spiral collaboration between system designers and experts in pedagogy and psychology.

Method
An experimental protocol and some practical experiences in classroom settings were realized to observe young children aged 3-5 years playing with the Continuator. In the protocol, three sessions were held once a day for 3 consecutive days. A new experiment was then carried out in a classroom setting, with small groups of children (kindergarten, primary school and middle school), to observe in what way this system can be used in classroom music education.

Results and Conclusions
The results suggested that the Continuator is able to develop interesting child/machine interaction and creative musical processes in young children, and that it could represent a versatile device to enhance musical invention and exploration in classroom settings. In this paper some examples of the results, concerning the nature of the interaction between children and machine, musical style improvisation and listening conducts, will be presented. We will then discuss some pedagogical and psychological perspectives of the interactive reflexive musical systems. We will show and discuss some video excerpts.

Key words: Interaction child/machine, Interactive reflexive musical systems, Continuator

[email protected]

56.3 Interactive music technologies in early childhood music education

Susan Young

School of Education and Lifelong Learning, University of Exeter, UK

In a review of music technology in education, Webster (2002) concludes that there is a scarcity of research on the use of music technology with young children. The reasons for this may lie in certain resistances to the idea of using technologies with young children, embedded in the ideologies and established traditions of early childhood music education practice.

From birth children are immersed in everyday musical worlds mediated increasingly by digital technologies. They arrive in pre-school education equipped with a range of competences and concepts about music and musical process derived from these experiences. The issue is not whether digitised technologies should be part of early childhood music education (for if early childhood education purports to “start with the child”, then they are already present in experiences children bring with them), but how pedagogical approaches need to transform in order to best serve the competences children have, and will need, to function effectively in future music-audio cultures.

Whereas technology might originally have been associated with screen-based computer activity and software based on impoverished conceptions of learning in music, the Continuator, a particular interactive musical system elaborated at the Computer Science Laboratory-Sony in Paris, represents a new generation of computationally augmented musical instruments. The range of possibilities afforded by this “new generation” of software considerably increases its educational potential. The presentation will report on an exploratory study using the Continuator in a nursery setting in the UK.

Key words: Early childhood music education, Interactive reflective musical systems, Continuator

[email protected]

56.4 London Symphony Orchestra Discovery music technology unit

Ajax McKarrel

London Symphony Orchestra Discovery Music Technology Unit, UK

Background
LSO Discovery, the education wing of the London Symphony Orchestra, initiates and runs a range of educational projects across the lifespan, in a variety of settings, with a strong commitment to serve the community local to the LSO’s base in East London. The music technology unit is one part of LSO Discovery and is responsible for a variety of educational work with children and young people. LSO Discovery is also committed to the development and dissemination of innovative practice in music education and will be one of the participants in the proposed IMTEC project.
Aims
Building on the success of earlier studies (Pachet and Addessi), the aims of the Interactive Music Technology in Early Childhood project (IMTEC) are to explore the application of the IRMS with young children in pre-school education (3-5 years) and, using these explorations as a basis, to generate a productive dialogue between practice, educational theory and software design.
Method
The project will operate on a number of intersecting levels. On one level it will explore the operational and practical aspects of children aged 3-5 years using music technologies in typical nursery

environments. On another level, the project will collect and analyse video data from children’s play with the Continuator in order to extend understanding of the music-learning processes the equipment affords. On yet another level, the project will bring together practitioners, researchers and technology designers in a series of collaborative seminars to develop a dialogue between practice and design.

The London Symphony Orchestra technology unit will also explore the wider application of IRMS technologies to instruments other than the keyboard and to contexts other than early-years education.
Results
It is anticipated that the project will arrive at findings and ideas that are informative for early childhood music education, point to possible directions for integrating interactive music technologies into practice, and highlight challenges which may need to be addressed by software and equipment design or by practical arrangements in early childhood education.

Key words: London symphony orchestra, New technologies, Interactive reflective musical systems

[email protected]

56.5 Rhythm interaction modes

Sergio Krakowski

Instituto de Matemática Pura e Aplicada, Rio de Janeiro, Brazil

This demonstration will focus on the rhythmic aspects of music. The examples that will be shown are the first part of a PhD thesis developed at the Instituto de Matemática Pura e Aplicada in Rio de Janeiro. The main subject of this thesis is to understand and develop ways through which percussion instruments can enlarge their potential, creating new and exciting musical expressions. With the advance of computational performance, it is now possible to build analysis algorithms that can extract many kinds of information from the audio captured by a microphone. The idea is to do this with a Brazilian instrument called the pandeiro. This information is then used to control higher-level algorithms that should build an interesting answer to the percussive input. Each type of information generated by the analysis can control one specific feature of the musical output. The main question is to understand why some of these links between input and output attract the musicians’ and listeners’ attention and why some of them do not. To investigate this, many possible links have been implemented and will be presented. These are the first prototypes that allow one to criticize and think about this issue on the basis of practical material.
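One of the simplest pieces of information extractable from a microphone signal of the kind described is onset timing. The following is only an illustrative sketch (not the thesis's actual algorithms): an energy-based onset detector whose output could drive such input-to-output mappings.

```python
import math

def frame_energies(samples, frame=512):
    """RMS energy of successive non-overlapping frames of a mono signal."""
    return [math.sqrt(sum(x * x for x in samples[i:i + frame]) / frame)
            for i in range(0, len(samples) - frame + 1, frame)]

def onsets(energies, ratio=2.0, floor=0.01):
    """Flag frames whose energy jumps by `ratio` over the previous
    frame; each flagged frame could trigger one musical control."""
    return [i for i in range(1, len(energies))
            if energies[i] > floor and energies[i] > ratio * energies[i - 1]]

# Silence followed by a burst yields a single detected onset:
print(onsets(frame_energies([0.0] * 1024 + [0.5] * 1024)))  # → [2]
```

The `ratio` and `floor` thresholds are hypothetical tuning parameters; a real percussive tracker would also need spectral features to distinguish the pandeiro's different strokes.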

Key words: Percussions, New technologies, Rhythm, Analysis algorithms

[email protected]


57 Perception and Cognition

57.1 Reawakening the study of musical syntactic processing in aphasia

Aniruddh Patel1, John Iversen1, Marlies Wassenaar2, Peter Hagoort2

1The Neurosciences Institute, San Diego, CA, USA
2F.C. Donders Center for Cognitive Neuroimaging, Nijmegen, The Netherlands

Background
Does music draw on brain mechanisms used for linguistic syntactic processing? This question is attracting increasing interest because it addresses the neural modularity of music as well as the nature of the brain operations underlying human syntactic abilities. One way to address this question is to ask whether aphasic individuals with linguistic syntactic problems also have a musical syntactic deficit.

Despite early suggestive work in the 1970s, there has been virtually no research on musical syntactic processing in aphasia. This is likely due to an emphasis on case studies of dissociations between music and language in aphasic musicians (e.g., Shebalin). However, these cases typically represent highly accomplished composers or conductors, i.e., individuals who may not be representative of the general population.

Aims
We sought to determine whether ordinary Broca’s aphasics with a syntactic comprehension deficit in language also had a musical syntactic deficit. To determine whether musical syntactic abilities were specifically related to linguistic syntactic abilities, we also tested semantic processing in language.

Method
Twelve Dutch Broca’s aphasics (plus matched controls) with left-hemisphere lesions listened to linguistic and musical sequences and made acceptability judgments about them. The linguistic sequences were sentences (n = 120): half contained either a syntactic or a semantic error. The musical sequences were chord sequences (n = 60): half contained an out-of-key chord, violating the musical syntax (harmony) of the phrase.

Results
The aphasics performed significantly worse than controls on the chord sequence task, indicating that they had a deficit in the processing of musical harmony. They also showed deficits on both the

393

Page 394: Abstract Book

394 Perception and Cognition

linguistic syntactic and semantic tasks. Crucially, multiple regression revealed that performanceon the music task was a significant predictor of performance on the syntactic task, but not thesemantic task.

Conclusions
Broca's aphasics with a syntactic comprehension deficit in language also have a musical syntactic deficit, and the degrees of deficit in the two domains are correlated. This suggests that left-hemisphere language circuits play an essential role in musical syntactic processing in non-musicians. Our results also suggest that aphasia merits renewed attention as a focus for music-language syntactic studies.

Key words: Brain, Music and language, Syntax

[email protected]

57.2 Perception of music by patients with cochlear implants

Jaan Ross1, Inna Koroleva2, Jelena Ogorodnikova3

1University of Tartu and Estonian Academy of Music and Theatre, Tallinn
2Institute of Otorhinolaryngology, St. Petersburg
3Pavlov Institute of Physiology, St. Petersburg

Background
The number of cochlear implant users in St. Petersburg had reached about 150 by 2005. While the primary function of an implant has largely been seen as facilitating the discrimination of environmental sounds and the understanding of speech (in combination with lip-reading), cochlear implant users are becoming more motivated to attend to musical sounds as well, as both the overall number of implantees and their experience with the device increase.

Aims
This study aims to determine to what extent post-lingual cochlear implant users have been able to adapt to listening to music after implantation.

Method
Patients were required to listen to four excerpts of vocal or instrumental music: a Saami jojk, a melody played on the shakuhachi (a Japanese bamboo flute), an example of Lithuanian vocal polyphony, and an example of Asian overtone singing. An interview was conducted with a group of cochlear implantees (n > 10) on the basis of a special questionnaire developed for this purpose. Each implant user was questioned individually at a clinic in St. Petersburg. Some interview sessions were recorded on video tape. Questions addressed general characteristics of the musical excerpts: whether the music was vocal/instrumental, monodic/polyphonic, fast/slow, in a high/low register, etc. After the interview an experiment was conducted: participants had to recognize (1) the same musical excerpts they had been exposed to a few minutes earlier and (2) the timbres of eleven widely known musical instruments.

Results
The results demonstrate that the temporal characteristics of music are better recognized by cochlear implantees than the spectral characteristics. Musical timbres with a wideband spectral distribution of sound energy (organ) are recognized worse than timbres with a narrower energy distribution (flute, cello). Patients with cochlear implants are in general able to recognize musical instrument timbres under conditions of forced choice from a finite list.

Conclusions
Post-lingual patients with cochlear implants are strongly motivated to attend to music even when its perceived acoustical characteristics only remotely resemble the patterns they are familiar with from the period preceding their hearing loss. This may be explained by the ecologically important function music fulfills in people's everyday lives.

Key words: Perception, Music, Cochlear implant

[email protected]

57.3 Cognitive bases of musical and mathematical competence: Shared or independent representations?

Rowena Beecham, Sarah Wilson, Robert Reeve

School of Behavioural Science, University of Melbourne, Australia

Background
Anecdotal reports have long suggested that there is an overlap in mathematical and musical competence. Researchers interested in these two domains have separately found evidence that can be interpreted as supporting the anecdotal claim; however, no research has examined directly the possibility of an overlap in the cognitive processing between the two domains. Exploring the putative relationship between aspects of musical and mathematical cognition has implications for our conceptualisation of the nature of cognitive representation in general, and the nature of cognitive deficits, such as amusia or dyscalculia, more specifically.

Aims
The study was designed to test the hypothesis that there would be similarities in pitch and number processing. Specifically, it was designed to investigate the claim that analogous spatial representations underlie pitch and number judgments.

Method
Sixty-one adults completed pitch and number magnitude discrimination tasks under two conditions: (1) explicit focus on the spatial properties of the stimulus comparison (Explicit); (2) no explicit mention of the spatial judgement dimension (Implicit). To explore the impact of spatial representation on task performance, participants were required to press response buttons that were aligned either vertically or horizontally.

Results
Classification analyses were conducted to identify subgroups of individuals whose response patterns varied as a function of the task dimensions. For the pitch task, approximately half of the participants showed the expected spatial judgement effect in the Explicit condition; however, only one third of participants showed the expected effect in the Implicit condition. Performance on the number task was consistent with performance on the music task, but the magnitude of the effects was lower. However, performance on the musical and mathematical tasks appeared to be largely independent.


Conclusions
Consistent with previous research findings, the study provides evidence that, for some people, pitch and number map onto mental spatial representations that can influence their reaction times. However, we found little evidence to support the claim that similar cognitive representations underpin musical and mathematical judgements.

Key words: Pitch representation, Number representation, Domain specificity

[email protected]

57.4 Enhanced information processing speed in musicians compared to non-musicians

Jennifer Bugos, Ashley Yopp, Wendy Pedder

School of Music, East Carolina University, USA

We explored the relationship between musical training and information processing speed in young adult musicians and non-musicians. Musical performance requires the complex coordination of movements with regard to temporal and spatial criteria. In addition, musicians utilize rapid reading and planning skills that may contribute to generalized cognitive abilities. The purpose of this study is to examine the role of music instruction in information processing speed in musicians and non-musicians (ages 18-23). Subjects were screened for impairments, such as depression, that can adversely affect performance on processing-speed measures. Measures of intelligence and music aptitude were administered to control for potential group differences. We then administered a battery of cognitive assessments that examined information processing speed. Preliminary data from this study suggest that young adults with a minimum of seven years of formal music instruction show enhanced performance on the Paced Auditory Serial Addition Task (PASAT) compared to those with no prior formal music instruction.

The cognitive processes necessary for musical performance may enhance generalized cognitive ability. It is unclear to what degree musical performance may depend upon generalized and specialized cognitive processes. Transfer of cognitive abilities engaged through music instruction to other tasks or non-musical domains has been a topic of great concern for music educators.

Key words: Music and cognition, Processing speed, Music education

[email protected]

57.5 Individual differences in music perception

Rita Aiello, Doris Aaronson, Alexander Demos

New York University, USA


Background
Research shows greater accuracy in perceiving musical structure for musicians than for non-musicians. Little is known in detail about individual multi-dimensional differences in listening strategies.

Aims
Analogous to language perception, musicians may focus differentially on the global structure or on detailed aspects of music. We evaluated such differences in musicians' responses during real-time listening, and when writing paragraphs about their experiences after the music ended.

Method
While listening to the 1st movement of Mozart's B flat Piano Sonata K.333, graduate students (experienced musicians) at the Juilliard School freely made marks on the musical score. Two of the authors defined 3 strength-levels of musical boundaries, based on cadences and their musical context. They categorized the score's structure into musical "boundary" (signal) and "control" (noise) regions (between boundary regions) for signal-detection analyses. The analyses of musicians' responses determined their "sensitivity" to the boundary structure, measured by the A' statistic, and also their "response bias" (making many or few marks), measured by the B'' statistic. The authors evaluated musicians' paragraphs about their listening experience by giving each one "salience" scores from 1-3 for their three most salient aspects from among the following two sets: MACRO (performer/performance, emotion, score-marking task); MICRO (harmony, melody, other musical elements). Boundary detection data were then examined separately for musicians classified as using "Macro" (global, n=9) and "Micro" (detailed, n=9) listening strategies.

Results
1. Musicians' summed paragraph scores were generally more Macro (range 2-6) than Micro (range 0-3). The Macro listening group scored zero on all Micro categories.

2. For all 3 boundary-strengths, boundary-sensitivity was greater for Macro (A' = .591) than for Micro (A' = .558) musicians. Response bias was greater for Micro (B'' = -.130) than for Macro (B'' = -.012) musicians. (Note: more negative B'' values mean more response marks in both boundary and control regions.) These trends held for all 3 boundary-strengths.

3. Boundary sensitivity was higher for the Exposition (movement's beginning) (Macro, A' = .660; Micro, A' = .458) than for the Recapitulation (movement's end) (Macro, A' = .617; Micro, A' = .385). Further, the highest strength-level of boundaries had higher boundary-detection values (Macro, A' = .656; Micro, A' = .614) than did the lowest-level boundaries (Macro, A' = .552; Micro, A' = .517).

Conclusions
Qualitative paragraph analyses and quantitative boundary analyses agree in providing evidence for macro (global) and micro (detailed) musical listening strategies.
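The A' and B'' values reported above are standard nonparametric signal-detection indices. As an illustrative sketch (the abstract does not give the authors' exact computation; the formulas below are the commonly used Grier variants), both can be computed from a hit rate (marks in boundary regions) and a false-alarm rate (marks in control regions):

```python
def a_prime(h, f):
    """Nonparametric sensitivity A': 0.5 = chance, 1.0 = perfect.
    h = hit rate, f = false-alarm rate."""
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Below-chance performance: mirror the formula.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))


def b_double_prime(h, f):
    """Grier's bias index B'': negative values indicate a liberal
    criterion (many marks), positive values a conservative one."""
    return (h * (1 - h) - f * (1 - f)) / (h * (1 - h) + f * (1 - f))
```

For example, a musician who marks 70% of boundary regions and 30% of control regions obtains A' of about 0.79 with B'' of 0 (no bias).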

Key words: Listening strategies, Real-time listening, Musical boundary-detection

[email protected]

57.6 Long-term effects of music instruction on cognitive abilities

Eugenia Costa-Giomi

Center for Music Learning, University of Texas at Austin, USA


There is a marked absence of experimental research on the long-term cognitive effects of music instruction. Most studies conducted to date have focused on the effects of one year of music instruction and have not retested the subjects after the discontinuation of the treatment. This is surprising given that (1) music instruction is usually undertaken for more than one year, and (2) most children who start music instruction discontinue it during their school years.

The present study is the continuation of a longitudinal investigation of the effects of three years of music instruction on children's cognitive development, academic achievement, self-esteem, and personality traits. In the original study, 9-year-old children (n = 117) who were attending public schools and had no formal music instruction were randomly assigned either to an experimental group, which received individual piano lessons for three years, or to a control group, which did not participate in any type of formal music instruction. The results of standardized tests administered before and throughout the three years showed increasing (and significant) differences in general cognitive abilities and spatial abilities between the two groups during the first two years of music instruction, but no differences between the groups by the end of the three years of lessons. These initial results supported the findings of previous and recent research about the short-term effects of music instruction on cognitive development but questioned the long-term effects.

In the follow-up study, 24 young men and women from the original sample were interviewed and administered tests of self-esteem, memory, and cognitive abilities five years after the completion of the initial investigation. No differences in the test results could be established between the students who had participated in the piano lessons and those who had never received formal music instruction. These results question the long-term effects of transitional engagement in music instruction and highlight the importance of further longitudinal research on the cognitive benefits of music education.

Key words: Cognitive development, Music instruction, Children

[email protected]


58 Pitch III

58.1 Tipping the (Fourier) balances: A geometric approach to representing pitch structure in non-tonal music

Carol L. Krumhansl

Psychology, Cornell University, Ithaca, NY, USA

Background
Geometric representations of pitch structures have been applied primarily to tonal-harmonic music (see, for example, Chew, 2000; Sapp, 2001; Toiviainen & Krumhansl, 2003). The music-theoretic literature offers a range of approaches to non-tonal music. The approach considered in this paper is that of Lewin (2001), who proposed five Fourier properties. These subdivide the set of chromatic tones into subsets (such as those contained in the whole-tone scale, or those contained in augmented triads). Quinn (2004) translated these into geometric representations in the form of balance pans for each of the Fourier properties, each pan containing the subsets described by the property. This provides a concrete, visualizable representation of pitch structures that can be applied to both tonal-harmonic and non-tonal music.

Aims
At a general level, the objective was to explore whether the Fourier balances provide interpretable results for different pitch sets. The first analysis considered the familiar diatonic set, to determine which Fourier balances were most strongly tipped by the application of the diatonic set. The second analysis considered the octatonic set for comparison. The final analysis considered whether the balances could account for perceptual judgments of tension in an octatonic passage.

Main Contribution
The first analysis found that the Fourier balances generated by intervals of fifths and major thirds were most strongly tipped by the diatonic set. Building on this, it was found that the increase in tipping produced by adding tones to the diatonic set correlated strongly with probe-tone judgments in tonal contexts (Krumhansl & Kessler, 1982). The second analysis found that the Fourier balance generated by minor thirds was tipped most strongly by the octatonic set. The final analysis showed that listeners' judgments of tension in an octatonic passage corresponded to the degree to which the Fourier balance most active for the octatonic set tipped away from the position of the balance for the referential octatonic set.


Implications
The results suggest that the Fourier balances may provide a way of representing pitch sets other than those characteristic of tonal-harmonic music. Because the Fourier balances provide a complete decomposition of any distribution of pitches, they can be applied without prior knowledge of the pitch structures. Moreover, they may serve as a model for perceptual responses to music, such as tension, in non-tonal styles of music.

References
Chew, E. (2000). Towards a mathematical model of tonality. Ph.D. dissertation, Operations Research Center, MIT, Cambridge, MA.

Krumhansl, C. L., & Kessler, E. J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial map of musical keys. Psychological Review, 89, 334-368.

Lewin, D. (2001). Special cases of the interval function between pitch-class sets X and Y. Journal of Music Theory, 45, 1-29.

Quinn, I. (2004). A unified theory of chord quality in equal temperaments. Ph.D. dissertation, Department of Music Theory, Eastman School of Music, Rochester, NY.

Sapp, C. (2001). Harmonic visualizations of tonal music. Proceedings of the 2001 International Computer Music Conference.

Toiviainen, P., & Krumhansl, C. L. (2003). Measuring and modeling real-time responses to music: Tonality induction. Perception, 32, 741-766.
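The "complete decomposition" mentioned in the Implications amounts to the discrete Fourier transform of a pitch-class distribution (after Lewin, 2001, and Quinn, 2004). As an illustrative sketch, not code from the paper, the degree to which each balance is tipped can be read off as the magnitude of the corresponding DFT component:

```python
import cmath

def fourier_balance_magnitudes(pc_set):
    """Magnitudes |F_k|, k = 1..6, of the 12-point DFT of a
    pitch-class set; a larger |F_k| means the k-th Fourier
    balance is tipped more strongly."""
    return [abs(sum(cmath.exp(-2j * cmath.pi * k * p / 12) for p in pc_set))
            for k in range(1, 7)]

diatonic = {0, 2, 4, 5, 7, 9, 11}      # e.g. the C major scale
octatonic = {0, 1, 3, 4, 6, 7, 9, 10}

# The k = 5 component (associated with fifths) dominates the diatonic
# set, while the octatonic set loads entirely on k = 4, which groups
# pitch classes into minor-third (diminished-seventh) cycles.
print(fourier_balance_magnitudes(diatonic))
print(fourier_balance_magnitudes(octatonic))
```

This reproduces the qualitative pattern reported above: the fifth-related balance tips most strongly for the diatonic set, and the minor-third-related balance for the octatonic set.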

Key words: Non-tonal music, Geometric representations, Music theory

[email protected]

58.2 The perception of non-adjacent harmonic relations

Matthew Woolhouse, Timothy Horton, Ian Cross

Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, UK

Background
Most research on the cognition of harmonic relations has focused on the experience of relations between successive events. Yet a major feature of theories of harmony is the elucidation of relationships between non-adjacent events. The present study seeks to address this issue empirically. A series of pilot studies demonstrated that listeners can be sensitive to relations between non-adjacent harmonic events, and two larger-scale studies have extended these findings, in the process providing new evidence concerning the cognitive structure of fifth-relations.

Aims
The aim of this study is to explore: (a) the extent to which listeners with different levels of musical training exhibit sensitivity to non-adjacent harmonic events; and (b) whether or not such sensitivities are in accord with music-theoretic accounts of harmonic relations.

Method
Two studies were conducted using a probe-cadence paradigm. The first used sequences of nine chords which started in one key and modulated to another, all possible modulations being employed. Each sequence was followed by a cadence (V-I) which could be in any key (including the initial and terminal keys of the sequence), and which listeners were required to rate for its closural properties in respect of the entire sequence. Listeners' responses were compared with those in a control condition involving non-modulating sequences. In the second study, modulating sequences were used in which the proportion of the sequence's total sounding duration spent in the second key was varied. Listeners again responded by rating probe-cadences in a range of keys.

Results
The results fit the predictions of Riemannian harmonic theory, in that sequences which modulate to keys that can be interpreted as "functional" in the context of the initial key elicit higher ratings for the initial key than sequences in which the modulation is to a non-functional key. Moreover, listeners' experience of increasing key "distance" from an initial key follows a pattern analogous to an inverse-square function.

Conclusions
Listeners with different musical backgrounds are sensitive to relations between non-adjacent harmonic events, though that sensitivity depends on the nature of the relationship and on sequence duration.

Key words: Harmony, Key-distance, Syntax

[email protected]

58.3 Vertical and horizontal dimensions in the spatial representation of pitch height

Elena Rusconi1, Bruno L. Giordano2, Alex Casey1, Carlo Umiltà3, Brian Butterworth1

1Institute of Cognitive Neuroscience, University College London, London, UK
2CIRMMT, Schulich School of Music, McGill University, Montreal, Québec, Canada
3Department of General Psychology, University of Padua, Padua, Italy

Background
A previous study (Rusconi, Kwan, Giordano, Umiltà, & Butterworth, Cognition, 2005) highlighted, in instrumental musicians, a preferential mapping of pitches onto a vertical spatial dimension, high/low pitches being associated with upper/lower response keys, and onto a horizontal spatial dimension, high/low pitches being associated with right/left response keys. We called this effect the Spatial Musical Association of Response Codes (SMARC).

Aims
Here we tested whether the horizontal SMARC effect originates from experience with the pitch mapping found on keyboards. The interaction between the SMARC effect and auditory location in space was also investigated.

Method
Two experiments were conducted with a group of Western singers. A musical instrument classification task was used (wind vs. percussion instrument). Participants pressed one of two keys, aligned either on the vertical or on the horizontal plane. Stimuli were presented over headphones. In a second experiment stimuli were presented through one of two speakers (in high/low or left/right positions). Analyses were conducted using an inverse efficiency measure, defined jointly by reaction time and accuracy.

The relationship between musical training and SMARC was assessed using an objective measure of keyboard skill (speed and accuracy), and also by noting years of musical education and score-reading ability.
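The inverse efficiency measure mentioned above is conventionally computed as mean correct-trial reaction time divided by proportion correct, so that fast but error-prone responding is penalized. A minimal sketch (the authors' exact aggregation is not specified in the abstract):

```python
def inverse_efficiency(mean_rt_ms, proportion_correct):
    """Inverse efficiency score (IES): mean RT on correct trials
    divided by accuracy. Higher values indicate worse performance,
    combining speed and accuracy in a single measure."""
    return mean_rt_ms / proportion_correct

# 600 ms at 90% correct is penalized relative to 600 ms at 100%:
print(inverse_efficiency(600, 0.9))   # ≈ 666.7 ms
print(inverse_efficiency(600, 1.0))   # 600.0 ms
```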


Results and Conclusions
Significant horizontal and vertical SMARC effects were found in both experiments. The horizontal SMARC effect was uncorrelated with keyboard skill and with musical education. We therefore propose that the horizontal SMARC effect originates from an orthogonal remapping of the vertical onto the horizontal space.

When the position of the sound source was unambiguous (i.e., in the horizontal alignment, as inferred from the presence of a significant Simon effect), the congruency between pitch height and source position strongly modulated the SMARC effect. We argue either for an influence of pitch height on the perceived source location, or for an inability to separate pitch and perceived source location at a decisional level, even when they are task-irrelevant.

Key words: Pitch, Space, S-R compatibility

[email protected]

58.4 Setting words to music: An empirical investigation concerning effects of phoneme on the experience of interval size

Frank Russo, William Thompson

University of Toronto at Mississauga, Canada

Background
Recent research in our lab has shown that the experience of interval size is influenced by several variables not directly related to pitch distance. These variables include spectral centroid (Russo & Thompson, 2005), pitch register and pitch direction (Russo & Thompson, in press), and, in the case of vocal music, facial expression (Thompson, Graham, & Russo, 2005; see also the talk in the "Media" Symposium, ICMPC9). The influence of spectral centroid is somewhat surprising because its perception is generally considered to be a component of timbre (i.e., brightness), and orthogonal to pitch perception.

Aims
Because phonemes vary with regard to spectral centroid, it is possible that the experience of pitch relations in sung music may be influenced by phoneme sequence. Our focus in this investigation was limited to the influence of phonemes on the perception of interval size.

Method
Three-tone sequences were synthesized so as to vary precisely with regard to phoneme and pitch content. The sequence of phonemes yielded spectral centroid patterns that followed one of three contours: rise-fall (e.g., /di/-/da/-/di/), fall-rise (e.g., /da/-/di/-/da/), or static (e.g., /da/-/du/-/da/). The sequence of pitches followed one of two contours: rise-fall (e.g., C-G-C) or fall-rise (e.g., G-C-G). Participants understood that all sequences would start and end on the same phoneme and pitch, but that these parameters would vary across sequences. The experimental task was to assess the extent of pitch change between the middle tone and its flanking neighbor tones in each sequence.

Results
Perceived pitch distance varied as a function of centroid and pitch contour, with larger pitch distances associated with sequences possessing congruent contours.


Conclusions
Our findings suggest that word selection in vocal music may influence the experience of interval size between neighboring tones. This research may provide insight into age-old questions concerning how best to set words to music.

References
Russo, F. A., & Thompson, W. F. (in press). The subjective size of melodic intervals over a two-octave range. Psychonomic Bulletin & Review.

Russo, F. A., & Thompson, W. F. (2005). An interval-size illusion: The influence of timbre on the perceived size of melodic intervals. Perception & Psychophysics, 67, 559-568.

Thompson, W. F., Graham, P., & Russo, F. A. (2005). Seeing music performance: Visual influences on perception and experience. Semiotica, 156, 203-227.

[email protected]


59 Education VI

59.1 Aspects of musical movement representation in early childhood music education

José Retra

Exeter University, UK
Dutch Foundation for Toddlers and Music, The Netherlands

Background
Many interpretations of young children's movement responses to music are based on criteria influenced by Piagetian models of development, resulting in the view that early childhood music-making represents a period of intense motor activity with little mindful engagement. Consequently, movement reactions to music are often taken for granted in early childhood music education. The premise of the current study is that movement should be considered an important form of kinaesthetic representation through which children come to understand and learn different aspects of music.

Aims
The purpose of this study is to investigate the development of movement representation of musical activities in children aged 18 to 36 months during a regular Dutch early childhood music education course. Investigating the musical movement behaviour of young children calls for a "real world" situation in which they can act freely and spontaneously. Dutch Preschool Music Education offers a semi-controlled environment in which the children are offered a range of musical activities (e.g. a song with a motor activity) and where they can freely respond in their own manner.

Method
A pilot study was conducted to construct a defined set of musical stimuli: 8 songs with specific movements. To define the different movement responses to music, it was necessary to investigate whether the movements would indeed represent musical characteristics, or whether they would represent qualities of the music indirectly. Data were collected through video-recordings of the music sessions. The video-recordings were transcribed and analysed.

Results
The analysis of the pilot study generated a framework for the movement responses of the children: movement types and functions. It was seen that in the music educational environment the tempo of the activities, the understanding of the lyrics, and the concepts presented in the lyrics are of great importance in evoking movement responses from the girls.

Conclusions
Benefiting from movement as a kinaesthetic representational mode for learning and understanding music implies that certain conditions are needed in order to evoke appropriate movement reactions.

Key words: Movement, Early childhood, Music education

[email protected]

59.2 Refining a model of creative thinking in music: A basis for encouraging students to make aesthetic decisions

Peter Webster

Northwestern University School of Music, Evanston, IL 60208 USA

This presentation argues for a change in music teaching culture that stresses creative thinking and links this change to the research literature. I provide a basis from educational philosophy by outlining a constructionist position that places emphasis on creative work. This will lead to a brief review of the research base in creativity within the general psychology literature, focusing on the last five decades. Next, I will review some of the most important music research in the last five years, including a revised model of creative thinking in music. This revised model adds a more detailed description of the core aspects of creative thinking, including a more refined approach to reflective thought in music. Music listening and improvisation are better described in this model, as are the enabling conditions that support creative thought.

The model helps to provide a conceptual framework for the Measures of Creative Thinking in Music, a set of activities designed to measure creative thinking aptitude in music. I will provide a summary of recent quantitative data that compares the results of this measure with composition products of middle school children, in an attempt to establish some criterion-based validity for the measure and for the model itself.

I will end by summarizing why this work is so important for a change in how we conceptualizemusic teaching and learning.

Key words: Creative thinking in music, Model building, Music assessment

[email protected]

59.3 Effects of different teaching styles on the development of musical creativity

Theano Koutsoupidou

Roehampton University, UK


Creative activities in music promote the ability for self-expression, stimulate children's imagination, and contribute to their active involvement in the learning process. Musical creativity in school can take place in different kinds of activities described as listening, performing, improvisation and composition. The latter two have been the area of interest for most studies conducted on children's musical creativity over the past few decades, investigating their original music making in different settings, within different frameworks, and with children using their voices, their bodies or musical instruments to make music. This paper explores the effects of different teaching styles on the development of creative thinking in music among primary school children. An experimental study carried out in a primary school in England revealed that children who had experienced creativity through their music lessons scored higher in a test of creative thinking in music (Webster's MCTM) than those who had not. This finding suggested that creative thinking can be an acquired behaviour and, therefore, can be nurtured in the music classroom. The results of the experiment were followed by interviews with music educators. They were asked to reflect on the content and the outcomes of the music lessons through a comparative observation of the different teaching approaches that were adopted for the two groups during the experiment. Among the topics discussed were the different objectives and outcomes of each approach and the links between theory and practice concerning the development of musical creativity. Analysis of the responses led to the identification of two different teaching styles: the didactic/teacher-controlled style and the free/child-centred style. While each can have different effects on the musical development of children, this study showed that the latter could support and promote the creative development of children, as well as their psychological and social development.

Key words: Creativity, Teaching styles, Primary education

[email protected]

59.4 The effects of the "musicogram" on musical perception and learning

Graça Boal Palheiros1, Jos Wuytack2

1Escola Superior de Educação-Instituto Politécnico do Porto, Portugal
2Lemmens Institute, Leuven, Belgium

Background
Learning how to listen to music is important in order to better understand and appreciate it. Children's everyday modes of listening are often physically active; in contrast, teachers generally use more passive approaches. Music educator Jos Wuytack has developed strategies for teaching young non-musicians which demand the listener's physical and mental participation, both before and during the listening activity. Children first learn the musical materials through performance. Then they listen while following a "musicogram": in this visual scheme, musical elements are represented through colours, geometric figures and symbols.

Aims
Empirical observation in schools in different countries suggests that these strategies enhance children’s understanding and enjoyment of “classical” music. Some studies also mention the advantages of visual perception for musical perception and memory. This study investigated the influence


of the “musicogram” upon the perception and enjoyment of children and music teachers. It also asked teachers’ opinions on the effectiveness of this strategy.

Method
Participants were 144 children and 41 music teachers from Australia, Belgium and Portugal. Children attended a lesson in their schools taught by Jos Wuytack: they listened to the “March” from Tchaikovsky’s “Nutcracker Suite” either with or without the “musicogram”. After listening they were asked about both musical characteristics and their enjoyment. During training courses, teachers listened to this music with the “musicogram”. They responded to the same questions and rated their opinions about the strategies.

Results
Children responded correctly significantly more often when listening with the “musicogram” than without it. They also enjoyed the music more with it. Most children showed similar reactions of enthusiasm towards this lesson and felt “good and happy”. Similarly, all teachers were enthusiastic and rated the strategies very highly. Most teachers responded correctly to the musical characteristics, as expected.

Conclusions
The findings suggest that children overall perceived, memorized and understood the music better with the “musicogram” than without it. The strategy also enhanced children’s enjoyment of the music, which is positive given children’s generally negative attitude towards “classical” music. Teachers’ motivation was very high: they appreciated the “musicogram”. The overall results are positive and may have relevant implications for music education.

Key words: Children, Music perception, Music education

[email protected]


60 Perception IV

60.1 The perception of accents in pop music melodies

Martin Pfleiderer, Daniel Müllensiefen

University of Hamburg, Department of Musicology, Germany

Background
There are several theories in the psycho-musicological literature that aim to explain why some notes of a single-line melody are perceived as more important than others. This natural tendency to perceive accents on specific notes, even when they do not differ in dynamics or articulation, is mostly explained in terms of gestalt-like rules (Thomassen, 1982; Povel & Essens, 1985; Boltz & Jones, 1986; Monahan et al., 1987).

Aims
This study aims to specify empirically (a) the particular combination of rules across the different musical dimensions (pitch, contour, rhythm, meter, etc.) that is cognitively most adequate, and (b) the weight of the individual rules within such a combination.

Method
The present study reports the results of an experiment in which 30 music experts rated the perceived accent strength of individual tones in pop music melodies (MIDI and audio). These ratings were then predicted in a linear model, taking 25 gestalt-like rules from the literature as independent variables.

Results
After variable selection and coefficient estimation, we obtain two optimal combinations of accent rules with corresponding weights within a linear model, for the MIDI stimuli and the audio examples respectively.
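The modelling strategy described here (a linear model over rule activations, followed by variable selection) can be sketched as follows. Everything in this sketch is invented for illustration: the three rule predictors, the weights and the simulated ratings stand in for the authors’ 25 rules and their expert data, and simple coefficient thresholding stands in for whatever selection procedure they actually used.

```python
import numpy as np

# Hypothetical illustration: predict perceived accent strength from
# gestalt-like rule activations via ordinary least squares, then drop
# rules whose estimated weights are negligible (a crude form of
# variable selection). Rule set and data are invented.
rng = np.random.default_rng(0)
n_notes = 200

# Binary activations of three made-up accent rules, one row per note.
rules = rng.integers(0, 2, size=(n_notes, 3)).astype(float)
true_weights = np.array([0.8, 0.0, 0.4])  # rule 1 is irrelevant

# Simulated expert ratings = weighted rule sum + noise.
ratings = rules @ true_weights + rng.normal(0, 0.05, n_notes)

# Estimate rule weights by least squares.
coef, *_ = np.linalg.lstsq(rules, ratings, rcond=None)

# "Variable selection": keep rules whose |weight| exceeds a threshold.
selected = [i for i, w in enumerate(coef) if abs(w) > 0.1]
print(selected)  # rules 0 and 2 survive; rule 1 is dropped
```

With enough notes per rule, the recovered weights closely match the generating ones, which is the sense in which such a model can assign relative importance to competing accent rules.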

Conclusions
The discussion of these results reflects on the overall correlation of the experts’ ratings and the corresponding similarity of accent perception, as well as on factors other than gestalt rules that influence accent perception in realistic listening situations.

References
Boltz, M. & Jones, M.R. (1986). Does rule recursion make melodies easier to reproduce? If not, what does? Cognitive Psychology, 18, 389-431.


Monahan, C.B., Kendall, R.A. & Carterette, E.C. (1987). The effect of melodic and temporal contour on recognition memory for pitch change. Perception & Psychophysics, 41(6), 576-600.

Povel, D.J. & Essens, P. (1985). Perception of temporal patterns. Music Perception, 2(4), 411-440.

Thomassen, J.M. (1982). Melodic accent: Experiments and a tentative model. Journal of the Acoustical Society of America, 71, 1596-1605.

Key words: Melodic accent perception, Melody structure, Gestalt perception

[email protected]

60.2 Popular music genre as cognitive schema

Mark Shevy

Boise State University, USA

Music has the potential to communicate many types of meaning, such as referential, absolute, connotative, denotative, cognitive, emotional, inward, and outward. Listeners may construct these meanings from a number of structures within a piece of music, from individual notes to phrases to genre. The present research examines a musical structure and a type of meaning that have received relatively little empirical attention: popular music genre and its extramusical meaning. By combining the literatures on musical meaning and cognitive psychology, the argument is made that popular music genres can be considered cognitive schemas. Two experiments involving 250 participants were used to test the differences between concepts associated with country and hip-hop music, and the ability of brief exposures to these genres to influence message processing in a manner consistent with the effects of cognitive schemas. Significant differences were found not only in concepts typically used to define the genres, such as race, rural-urban classification, and age, but also in concepts that play an important role in communication research, such as trustworthiness, friendliness, and political ideology. These differences were tested in person perception, attitude toward a target in a persuasive message, and memory inference. Two types of schema-priming effects were observed: 1) direct influence on inference and 2) a type of micro agenda setting. The relationship between music genre schemas and other cognitive structures such as stereotypes and attitudes is also considered.

Key words: Cognitive schema, Popular music genre, Country and hip-hop

[email protected]

60.3 Jazz, blues, and the language of harmony

Elizabeth Lhost, Richard Ashley

Northwestern University, USA

Background
In many ways, harmonic structure in music parallels syntactic structure in language. Recent studies using this analogy provide interesting insights into harmony’s cognitive function. Results from


Friday, August 25th 2006

these studies suggest the cognitive function of harmony and illustrate similarities underlying the processing of musical and linguistic structure.

Aims
The current study probes the abstract, transformational, rule-based properties of harmony, focusing on the importance of harmonic context and online processing capabilities. By gathering responses to a set of manipulated song samples, we hope to gain a better understanding of the manner in which people process harmony within a restricted harmonic context.

Method
We use the 12-bar jazz-blues style as our restricted harmonic context. We carried out a statistical analysis of a corpus of almost 100 blues tunes and developed a set of graded expectations characterizing the harmonic content at three specific structural locations (measures 4, 9 and 11). By manipulating those harmonies, we created stimuli with target chords corresponding to three categories (“expected”, “acceptable”, or “unacceptable”). In order to test the psychological relevance of these expectations, we asked participants for immediate and retroactive judgments of the chord’s appropriateness at that location. Participants provided responses using a button box to answer questions presented on the computer screen.

Results
Pilot data support our initial hypothesis that people accommodate some harmonic manipulations while rejecting others. Within the 12-bar jazz-blues form, listeners accept the plausibility of certain chord manipulations but deny that of others. Complete data are forthcoming.

Conclusions
We expect responses to reveal graded expectations following the target-chord categories (i.e., “expected”, “acceptable”, and “unacceptable”). Current data anticipate a statistically significant difference between the response times for chords belonging to the three categories. The results suggest that people accommodate certain chord substitutions while rejecting others, just as they would allow certain substitutions in linguistic examples. Our results provide insight into the nature of harmonic expectations and the cognitive relevance of musical structure, while shedding light on the psychological similarities between musical and linguistic structure.

Key words: Harmony, Language, Processing

[email protected]

60.4 Does Musical Melodic Intelligence enhance the perception of Mandarin lexical tones?

Franco Delogu1, Giulia Lampis2, Marta Olivetti Belardinelli1

1 Department of Psychology, University of Rome La Sapienza, Italy
2 Department of Oriental Studies, University of Rome La Sapienza, Italy

In tonal languages, such as Mandarin Chinese and Thai, word meaning is partially determined by lexical tones. Previous studies suggest that lexical tone is processed as linguistic information and not as purely tonal information. For speakers of Western languages, Mandarin Chinese has the additional difficulty of lexical tones, a feature alien to Western languages. The fact that lexical tones are both linguistic and tonal leads one to wonder whether a strong ability to discriminate


relative pitch variations in a musical context can enhance the perceptual discrimination of lexical tones in an ecological linguistic context. This study aimed to verify whether the discrimination of Mandarin lexical tones varies as a function of melodic ability. 46 students from the University of Rome (19 males, 27 females) participated in the study. They were native speakers of Italian with no knowledge of Mandarin or any other tonal language. Subjects were tested for their melodic memory by means of test #3 (Melodic Memory) of Wing’s Standardised Tests of Musical Intelligence and subsequently divided into three groups according to their score (high, medium and low melodic ability groups, respectively). Participants were tested in a three-condition recognition task. They were presented with two short lists of spoken monosyllabic Mandarin words and were invited to perform a same-different task. When a difference occurred, it could be either a different word (phonological condition) or a different tone on one of the words (tonal condition). The task also asked for an identification of the kind of variation between the first and second list (phonological or tonal). The task became progressively more difficult as the lists increased in length from 2 up to 5 words. The main results show that subjects are significantly better at identifying phonological variations than tonal ones. More interestingly, the high-Wing group shows better performance exclusively in detecting tonal variations, while showing no difference from the other groups in identifying phonological variations and unchanged lists. Our results lead us to infer that the ability to process musical tones can be effectively transferred to a tonal-linguistic domain, confirming the strong relation between music and language.

Key words: Music and speech, Lexical tone perception, Musical transfer effect

[email protected]


61 Perception V

61.1 A large-scale survey regarding listeners’ tastes in sung performances

Evangelos Himonides, Graham Welch

Institute of Education, University of London, UK

Background
Media reviews often focus on the perceived quality of a singer’s voice. Yet quality is a contentious concept, not least because there is no agreed definition in the psycho-acoustic literature concerning the characteristics of high-quality, or “beautiful”, singing. The presentation draws on the findings from one (as yet unreported) phase of a four-year, multi-method study into the psycho-acoustic nature of vocal performance quality.

Aims
An initial, semi-structured interview study had revealed a lack of consensus amongst a small number of expert participants (n=9) concerning their criteria for high-quality singing. Consequently, in this phase of the research, the aim was to see whether this relative idiosyncrasy was reflected in the views of a much larger number of participants, representing a wide range of listener backgrounds.

Method
This research phase comprised the design, piloting and distribution of a standardised open-ended questionnaire that served as a large-scale structured-interview substitute. Respondents (n=374) provided (i) demographic information, (ii) nominations for a “world’s best singer”, with reasons, and (iii) a self-generated list of up to five key elements for the assessment of singing quality, with Likert-scale importance ratings. The resultant data have been analysed using Atlas.ti (qualitative) and SPSS (quantitative).

Results
Although the distillation of the qualitative responses is ongoing, certain trends in the data have emerged. One of the most striking is that participants rarely, if ever, make reference in their “list of important features” to any of the characteristics by which they define their previous choice of a “world’s best singer”. Participants demonstrate two co-existing conceptions of beauty in singing: beauty as encapsulated by a particular vocal performer and, separately, self-generated criteria of vocal beauty in the “abstract”. In the latter category, the three most common criteria that are


evidenced so far are tone quality, range and strength of voice.

Conclusions
The data suggest a certain bipolarity between different questionnaire components in the listeners’ construction of singing beauty, namely between neuropsychobiological and socio-cultural interpretations of the performance, distinct from features in the perceived production of the acoustic signal by the performer.

Key words: Singing, Aesthetics, Quality

[email protected]

61.2 German fricatives /x/ & /ç/ in singing. Testing a training model

Claudia Mauleon, Isabel Cecilia Martinez, Raul Carranza

Universidad Nacional de La Plata, Argentina

Background
Relationships between perception and production of the sounds of a non-native language are still not well understood. While some researchers suggest the precedence of perception over production, others point out that foreign-language speakers tend to perform better in production than in perception. Nevertheless, there seems to be consensus that the native language sets a perceptual filter which makes the speaker sensitive to the traits of his or her mother tongue and resistant to acoustic features absent from the native repertoire. Some authors suggest that laboratory training programs contribute to the perceptual and production improvement of non-native phonetic categories.

Aims
This work attempts to test some tools suggested in the above-mentioned studies on language speaking, and to apply them to the improvement of perception and production skills for non-native phonemes in singing.

Method
The German fricatives /x/ and /ç/ were selected, given the difficulties their production poses for Argentinean Spanish-speaking students. 30 subjects, students of Choral Conducting at the Faculty of Arts of the Universidad Nacional de La Plata, Argentina, participated in the experiment. They went through an experimental design consisting of three sessions: (i) a Pre-Test session, in which subjects were initially measured on perception and production tasks for the phonemes /x/ and /ç/. Reaction time was taken as an indicator of perceptual sensitivity during the identification task. In the production test, subjects read and sang the same text. Accuracy of performance was assessed by a panel of 5 experts using a 7-point Likert scale; (ii) a Training session, in which participants listened to different trials of spoken and sung samples and behaved in the same way as in the Pre-Test session, but this time received visual and aural feedback after every answer they produced; and (iii) a Post-Test session, in which subjects were measured exactly as in the Pre-Test session. It was hypothesized that a laboratory training program would also contribute to the perceptual and production improvement of non-native phonetic categories in singing.

Results
Data analysis is still in progress.


Conclusions
Conclusions will be drawn for the field of singing teaching.

Key words: German fricatives, Non native language perception and production, Singing voice

[email protected]

61.3 Muugle: A framework for the comparison of Music Information Retrieval methods

Martijn Bosma, Remco Veltkamp, Frans Wiering

Utrecht University, Information and Computing Sciences, The Netherlands

Background
Music information retrieval (MIR) is the interdisciplinary science (computer science and musicology) of retrieving musical objects. Many MIR methods have been developed recently; most of these employ a music representation and have a concept of musical similarity. A web-based framework for their comparison is still lacking.

Aims
Our aim with Muugle (Musical Utrecht University Global Lookup Engine) is to create such a web-based framework, where different MIR methods for feature extraction, matching and presentation can be compared. With all methods operating under the same circumstances, a methodical comparison of their performance becomes possible.

Method
We have reason to believe that the performance of a MIR system will improve if it operates on features that are relevant to human music perception and cognition. These features should be matched by a similarity measure that models perceptual similarity. To achieve this goal we make use of results from music perception and cognition research. These are used to develop methods that can be implemented as components in Muugle and compared to other methods.

Results
The user can specify a query by playing a keyboard on the screen, either with the mouse or the computer keyboard. A MIDI instrument can be used for this purpose as well. Humming or whistling into a microphone is another way of specifying a query. The query can be modified in a piano-roll editor. Apart from studying the usability aspects of the input methods, we have started investigating the presentation of the output and the possibility of relevance feedback. Four matching methods have been implemented that operate on the note level. They can be compared with more cognitively informed methods that operate on higher-level features of the music, such as the key and the metric structure. We are developing a feature module that extracts the key from the music. Another feature module under development is an algorithm that detects the metric structure of music.

Conclusions
The modular web-based architecture of Muugle makes it a good framework for testing and comparing MIR algorithms and performing usability experiments, even with remote users.
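As an illustration only (not one of Muugle’s actual four note-level matchers, whose details the abstract does not give), a minimal note-level similarity measure might compare pitch-interval sequences by edit distance, which makes the matching transposition-invariant:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def intervals(midi_pitches):
    """Successive pitch intervals; comparing these instead of absolute
    pitches makes the match invariant under transposition."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

query = [60, 62, 64, 65]        # C D E F
transposed = [62, 64, 66, 67]   # the same melody a whole tone higher
print(edit_distance(intervals(query), intervals(transposed)))  # 0
```

A sung or hummed query would be pitch-tracked into such a note sequence first; the same distance function then ranks database melodies by similarity to the query.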

Key words: Music information retrieval, Cognitive features, Experimentation framework

[email protected]


61.4 Does melodic accent shape the melody contour in Estonian folk songs?

Taive Särg

The Estonian Literary Museum, Department of Ethnomusicology, Estonia

Background
Musical accent may be defined as an increased prominence ascribed to a sound event. Of the various types of accent proposed by music theorists, one of the most contentious has been the so-called “melodic accent”. Huron and Royal (1996) tested eight conceptions of melodic accent on three samples of music: Western folk music, Western art music, and Gregorian chant. The results of all three studies were consistent with a perceptual model of melodic accent developed by Thomassen (1982). The effect of melodic accent emerged most clearly in unaccompanied isochronous solo passages.

Aims
The aim of this paper is to test the effect of Thomassen’s model of melodic accent in the unaccompanied, isochronous, monophonic style of ancient Estonian folk songs called regilaul, which differ from the Western folk song tradition.

Method
24 typical melody contours from one South-Estonian district are analyzed. Using correlational analyses, two hypotheses regarding the role of melodic accent in the shaping of melody contours are tested. (1) Is melodic accent one of the constituents of metric accent? The coincidence of melodic accents with points in the metric hierarchy is examined. (2) Are melody variations caused by a contradiction emerging between melodic accent and lexical stress? The value of melodic accents at lexically stressed syllables is computed for unvaried, stable melody contours and for variable melody contours in both their unvaried and varied forms. The analysis looks for the cases in which melodic accents best coincide with lexical stresses.

Results
The prominent melodic accents fall in most cases on metrically strong positions. Melody variation takes place only if the melodic accent is much stronger (its value being between 0.5 and 2) on the lexically unstressed syllable than on the lexically stressed syllable.

Conclusions
In the melody contours of old Estonian folk songs, melodic accents are synchronized with lexical stresses.

Key words: Melodic accent, Metric hierarchy, Estonian folk song

[email protected]


62 Ethnomusicology

62.1 A look at the adaptation and cognitive process in ghantu performance as practiced by the Gurungs of Nepal

Kishor Gurung

Background
The rural Gurung communities of Nepal practice a narrative folk music tradition called ghantu, in which prepubescent female dancers go into a trance induced by the singing of the guru-s (main singers). The main performance lasts for five days and the lengthy song-text functions as the propagator of the ceremony and the trance. For the host community, this text is sacred; it is memorized and passed on to the next generation in oral tradition.

Aims
The goal of this inquiry was to describe the performance and to transcribe the guarded text and music for the first time, in order to determine how closely related they are to the everyday language and music of the Gurung community. The singers perform from memory in a language that they do not completely understand. The dancers’ ability to enter and safely return from the trance state is believed to depend on the singers’ accuracy in reproducing the text.

Method
The text was transcribed from the dictation of one of the main singers, utilizing Devanagari letters (similar to Sanskrit). Photographs and video recording provided visual documentation of the ceremony. Staff and graph notations of the sound recording were used to reveal musical details: voice movements, ornaments, tonal hierarchy and phrase construction. Separate tables presented generalized pitch measurements.

Results
The transcription of the text of the ghantu song showed that it is in an esoteric Indo-Aryan language of unknown source. This is surprising, since the Gurungs are a Mongoloid tribe thought to have migrated to the foothills of the Himalayas from the regions of western China. They speak a dialect of the Bodish sub-stock of the Tibeto-Burman family. The two language families are unrelated, and the Gurung performers are therefore unable to understand the language completely. This makes the feat of memorizing the lengthy text all the more remarkable.

Conclusions


The folk music tradition of ghantu reveals a tribal society’s spiritual and cultural adjustment. As the egalitarian Gurung communities moved downward from the foothills of the Himalaya, they adjusted to Hindu elements in their new territory. The findings from the text reveal that the linguistic adaptation far exceeds the actual physical southward shift of the host community, and that the attachment to spiritual power overrides other indigenous considerations.

Key words: Adaptation, Oral tradition, Memorization

[email protected]

62.2 A comparison of Western classical and vernacular musicians’ ear playing abilities

Robert Woody1, Andreas Lehmann2, James Karas1

1 University of Nebraska, Lincoln, Nebraska, USA
2 Hochschule für Musik, Würzburg, Germany

Background
Many Western classical musicians work exclusively from printed notation and struggle greatly with ear playing. In contrast, vernacular musicians (e.g., jazz, rock) utilize ear playing extensively in their activities. Clearly, cognitive skills are developed differently along these dissimilar paths of expertise.

Aims
The Lehmann and Ericsson model of mental representations specifies three cognitive components of music performance: goal representation, representation of production aspects, and representation of current performance. Focusing on the first two, we hypothesized that classical performers have difficulty playing by ear not because of poor aural memory (goal), but because of an inability to generate the needed motor programs (production aspects).

Method
Eight collegiate musicians served as subjects in the study. Four came from an exclusively Western classical background and four had significant vernacular music experience. The four pairs were matched for instrument played (saxophone, trumpet, bassoon, percussion). Each subject worked with two unfamiliar melodies of equivalent length, compositional makeup, and technical difficulty. Melodies were played through a computer using MIDI. With one melody, subjects heard it and then attempted to sing it back, repeating the listen-and-sing cycle as many times as needed to reproduce the melody vocally and accurately. With the other melody, subjects listened and then tried to play the melody back on their instruments. Across all subjects, the order of singing and playing was counterbalanced.

Results
Subjects’ performances were recorded and the data are currently being analyzed. Preliminary results suggest the following: the vernacular musicians required fewer listening trials than the Western classical musicians to play the melodies accurately. Across the sample, and especially among the classical musicians, fewer attempts were required for accurate vocal reproduction than for instrumental reproduction. Qualitative analysis of errors revealed interesting differences between instruments.


Conclusions
This study suggests that the cognitive “bottleneck” many Western classical performers experience when trying to play by ear comes from a poorly developed link between aural imagery and motor production representations. Auditory working (aural) memory thus seems by itself to be trained through extensive listening and playing, albeit for different purposes that do not readily transfer to a motor production task.

Key words: Ear playing, Mental representations, Imitation

[email protected]

62.3 The Aksak rhythm: Structural aspects versus cultural dimensions

Simha Arom

LMS, CNRS, Paris, France

The Turkish term aksak, borrowed from Ottoman musical theory, means "limping" or "stumbling", in other words, irregular. It designates a rhythmic system in which pieces or sequences, executed in a fast tempo, are based on the uninterrupted reiteration of a matrix resulting from the juxtaposition of cells based on the alternation of binary and ternary quantities, as in 2+3, 2+2+3, 2+3+3, etc. Aksaks can be classified according to the nature of their total value, which can be a prime, even or odd number. Those based on prime numbers are distinguished by the fact that they resist any division into equal beats, while all the others can be divided according to a binary or ternary principle. The aksak of 8 minimal values can thus be equivalent to 2 x 4 or 4 x 2 equal beats, while the aksak of 9 values can be divided into 3 equal beats. The asymmetric nature of the latter results in a high coefficient of ambiguity, because the content may be interpreted in two different ways: either as an aksak (for example, 2+2+2+3), or as a rhythmic matrix inserted into a metric framework which is divisible by 3 (3 x 3), but "syncopated". This means that the same sonic phenomenon may be perceived in different ways.
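The classification by total value described here can be sketched in a few lines of code. The function name and output format are mine; only the arithmetic (prime totals resist division into equal beats, while composite totals admit equal-beat groupings) comes from the text.

```python
def classify_aksak(cells):
    """Classify an aksak matrix (a list of binary/ternary cells, i.e.
    2s and 3s) by the nature of its total value."""
    assert all(c in (2, 3) for c in cells), "cells alternate 2s and 3s"
    total = sum(cells)
    # Trial-division primality test; totals of interest are small.
    is_prime = total > 1 and all(total % d for d in range(2, int(total ** 0.5) + 1))
    if is_prime:
        # Prime totals (5, 7, 11, ...) resist division into equal beats.
        return "prime: indivisible into equal beats"
    # Composite totals admit equal beats, e.g. 8 = 2 x 4 or 4 x 2.
    beat_sizes = [d for d in range(2, total) if total % d == 0]
    return f"divisible: equal beats of {beat_sizes}"

print(classify_aksak([2, 3]))        # total 5 is prime
print(classify_aksak([3, 3, 2]))     # total 8: beats of 2 or 4 values
print(classify_aksak([2, 2, 2, 3]))  # total 9: beats of 3 values
```

The last example is the ambiguous case from the text: its total of 9 admits division into three equal beats, so the same pattern can be heard either as an aksak (2+2+2+3) or as syncopation within a 3 x 3 metric framework.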

Anthropologists, particularly Clifford Geertz (1973), stress that cultures must be seen as ensembles of shared systems of meaning. With regard to rhythm and meter, ethnomusicologist Mieczyslaw Kolinski (1973) emphasizes the existence of "mental patterning". This implies that rhythmic perception operates on the basis of a shared code within a given cognitive system. The presentation will show that in some traditional African societies, the ambivalent asymmetrical aksaks can only be conceived of in one form, i.e. as the aksak, whereas the symmetrical forms are never recognized as such. These two cases eloquently illustrate the difference that can exist between the aksak seen exclusively from a structural standpoint and the manner in which it is understood in a given cultural context.

Key words: Ethnomusicology, Rhythm, Turkish music

[email protected]


62.4 Perception, effect and the power of words. An overview of song-induced healing processes in eastern Amazonia

Bernd Brabec de Mori

University of Vienna, Austrian Academy of Sciences, Austria

Background
Throughout the eastern Amazon lowlands, specific songs are the key element in traditional healing sessions. The healer may sing at an object or to a patient who suffers from an illness or inconvenient social behaviour. Surprisingly, a high percentage of successful treatments can be observed (75%), even with patients of different cultural origin.

Aims
The paper offers an overview without pretending to solve the problem. We show the distribution of effects on dependent variables and try to explain the high impact healing songs have on the patient’s physical or psychosocial condition. Relations between quantitative and qualitative methods in this case study are also touched upon.

Method
Research instruments are based on an ethnomusicological database compiled during the author’s five years of research on indigenous musical traditions in the Ucayali valley (Perú): a) ethnomusicological analysis, b) ethnolinguistic analysis of lyrics and endemic terminology, c) medium-term observations of seven patients, d) qualitative interviews with nine healers. The processes in healing sessions are analysed and split into variables. V1: emotional perception of the acoustic phenomenon; V2a: suggestion within the lyrics’ semantics (encultured patient); V2b: association with words (non-encultured patient); V3: extramusicological parameters (contextual projection; set and setting); V4: influence of plant medicine use; Vx: the healer’s endemic explanation. The loading of the variables is illustrated with examples (listening, photos and transcriptions). A brief paragraph is dedicated to polyphony in the case of two healers working together (which sheds more light on Vx).

Results
We observe high loadings on variables V2a and V3, mainly in the case of “V2a patients”. If the patient is not encultured, the main loading occurs on V1 and V3. V4 complicates the scheme, because most active compounds are unknown. The endemic (Shipibo Indian) explanation is concise, featuring terms that cannot be translated into Western languages.

Conclusions
The case study shows a significant influence of V2a and V4, but neither is sufficient. Combined with V3, the phenomena can be explained with minor deficits; the complete explanation is given in Vx. This is why the culturally relativistic ethnological approach seems more effective than a quantitative one, especially because statistical or neurological methods can hardly be applied to the phenomenology of the healing sessions.

Key words: Ethnomusicology, Healing songs, Ethnolinguistics

boshirashki.gmx.at


63 Poster session II

63.1 Rhythm sensibility: A dichotomy

David Aladro-Vico

My work deals with a phenomenon I have observed in listening and also in making music, which presents two different rhythmic sensibilities: one oriented towards, and in need of, some kind of regular musical event, a pulse (in the sense of equal sound shapes, an isochronously repeated sonic event); and another which looks for irregular events (dissimilar sound shapes) or even avoids any form of isochrony altogether. I have found this in both audition and production; it seemingly serves as, or creates, a precondition for choosing, evaluating or appreciating musical events and, in the case of music creation, becomes a compositional tool.

This represents a sensibility which might be studied through many different approaches, bottom-up or top-down, from basic perceptual responses (to regularity) to aesthetic issues, involving culture, development, conditioning or milieu.

My aims are two: to study these sensibilities, in order to understand music and music creation, and to use this understanding in my own music creation proposals. To this end I venture some possible explanations: first I exemplify the observed phenomenon with samples from several chronological and ethnic extractions, exposing its relation to aesthetic trends and characteristics within each musical culture, and then critically review the way the issue is addressed in recent studies on rhythm and temporal musical experience in music psychology.

Key words: Musical rhythm aesthetics, Psychology of rhythm, Aesthetic judgement.

[email protected]

63.2 Music as communication: The listening pyramid

Margarita Alexomanolaki

Goldsmiths College, University of London, UK

Music is present in everyday life, perhaps more than ever before, due to technological progress in the means of its broadcast. In communicative terms, music is considered a sign, which carries a meaning that has to be interpreted (Barthes 1977, Nattiez 1990). The usual approach to the semiotic analysis of music in everyday life is either structural analysis (Tagg 2000) or the use of psychological theories that define the semantic role of music (North et al. 2004, Santacreu et al. 2004, North et al. 2005). Nonetheless, the studies mentioned drew conclusions from the specific case studies presented and do not seem applicable to different listening situations and cultural backgrounds. This paper aims to evaluate the semiotic role of music from the perspective of the listener and the listening situation (musical communication as it is eventually received, not as it was initially created), emphasizing the different meanings the same musical piece can have depending on its use. Expanding Peirce's third trichotomy on how the sign is interpreted, and adjusting it to musical terms, this paper will present the Listening Pyramid, which comprises the three stages of listening to and interpreting the musical sign. The innovation of this study is the incorporation of physiology: at the base of the pyramid is our universally shared physiology; in the middle is our partially common cultural background; and at the top is our unique individual experience. By defining at which stage of the Listening Pyramid the meaning was intended to be received, we are oriented towards a semiotic analysis that excludes factors of different cultural background or individual experience among listeners. The above will be applied to the music of TV advertising, as a practical example of the theory, but also as one that could have wider ramifications for the field of analysis as a whole.

Key words: Semiotics, Peirce, Music perception

[email protected]

63.3 The history of European music as a passage from classical physics to quantum mechanics

Grigori Amosov1, Angelica Komissarenko2

1Moscow Institute of Physics and Technology, Russia
2The Tchaikovsky Moscow Conservatory, Russia

It is known that the quantum uncertainty principle can be explained by means of sound reproduction. We propose to analyze a musical piece from the viewpoint of auditory perception of two mutually complementary parameters: timbre and melody.

What are these two parameters in our understanding? Each instrument has its own timbre; moreover, a major role is played by the features of the performer's execution. Taking into account also the combination of different musical instruments, we obtain a particular timbral sound. Indeed, any melody we hear has its own timbre, because it is evoked from a certain instrument, but the essence of the question lies in determining which parameter is spotlighted in auditory perception. One possibility is a domination of the melody, under which the timbre is necessary only for its volumetric sounding. This situation is typical of homophonic music. The other possibility is a domination of the timbre. In this case, the melody is not spotlighted, or is converted into short intonation motives. This situation is widespread in the music of the 20th century, especially in such styles as sonorism and chance music.

Friday, August 25th 2006

The quantum uncertainty principle, based upon the existence of two mutually complementary parameters, allows one to introduce a classification of states of light. The main states in this classification are coherent, squeezed and excited. In our previous paper, we introduced a classification of states of listener perception including coherent, squeezed and excited states. Recall that by a squeezed state we mean a state in which either the melody or the timbre is spotlighted in perception, at the expense of the certainty of the other parameter. In the excited state the uncertainties of the melody and the timbre are both great. In the coherent state both parameters, the melody and the timbre, have the same level of importance and their uncertainties are small.

In the present paper we discuss how the states of listener perception appear in the history of music. Investigating the development of instrumental music, we found that the number of squeezed states with the domination of timbre increases in the 20th century. Arnold Schoenberg gives us the emancipation of the timbre, as was noted by Alfred Schnittke.

Key words: Quantum uncertainty principle, Mutually complementary parameters

[email protected]

63.4 Reconstructing "Incontri di fasce sonore" by Franco Evangelisti

Adriana Anastasia, Nicola Giosmin

Background
The study of the historical repertoire of electro-acoustic music is quite new and has received little attention. Moreover, most analyses of electro-acoustic music start from the auditory aspect, pointing out the audio elements (i.e. aesthetic-cognitive objects) in the piece and concealing how sound objects are combined by the composer.

Aims
This paper is the outcome of a work focused on the re-creation and reproduction (via modern digital techniques) of the score and tape of "Incontri di fasce sonore" (1956-57), the most important electro-acoustic work realised by the Italian composer Franco Evangelisti in the WDR Studio in Cologne.

Method
The software used has been written in Python; the score has been designed with Pic and the music with CSound. The whole process described in this paper strictly follows the realisation score (edited by Universal) prepared by the composer for the technician, to create the electronic materials and sound transformations. The new digital version of the piece and the new digital score allow a better description of its structure, form and compositional creation process. In brief, the methodology used is analysis by synthesis.

Results
The results are: a complete automatic process for score/tape creation, which was Evangelisti's idea (not realised, for obvious technical reasons, in his time); a new version of the music piece (not a restoration); a new digital score precisely generated from the musical events (another of Evangelisti's problems); a complete critical revision of the original score (identification of mistakes, ambiguities and problems); and a deeper explanation of the structures and materials of the piece, from its "inner" side. The score and the new tape version are accompanied by a detailed analysis

which not only explores the composer's idea about the "historical and aesthetic necessity of a score for electronic music", but also underlines Evangelisti's research into new compositional laws (i.e. his concept of "linear counterpoint") based on new divisions of the acoustic space.

Conclusions
The availability of the code and of the complete materials used for the preparation of this paper can be a useful tool for analysts and teachers of the electro-acoustic music of the early 1950s.

Key words: Electro-acoustic music analysis, Analysis methods, Musical meaning

[email protected]

63.5 Dual-task and psychophysiological measurement of attention during music processing

Richard Ashley, Kyung Myun Lee

Northwestern University, USA

Background
Like other cognitive activities, musical processing demands attention. A few researchers (notably Mari Riess Jones and her colleagues and students) have made extensive studies of how musical attention may operate, but few studies have looked at online aspects of attention allocation.

Aims
The current study investigates the utility of five real-time methods for measuring online attention demand while carrying out a musical task. Three of the methods use dual-task paradigms; the fourth and fifth use physiological measures. These methods have been validated in other studies, primarily in human factors and engineering psychology, but have been almost unused in musical studies.

Methods
Stimuli were modified from a previous study (Lee, 2002) involving a melodic listening and discrimination task with three significantly different levels of difficulty, and thus three hypothesized levels of cognitive-attentional and short-term memory load. During the performance of this primary task, measures of attentional demand were sought with a reaction time task to a visual or auditory probe, with EMG data recorded from two facial sites (corrugator supercilii and frontalis), and with measurement of pupil diameter. For the reaction time tasks, both auditory and visual modalities were used for the secondary tasks, as prior theories have suggested that attention may in part be modality-specific. EMG and pupillometry were used as two low-intrusion, temporally precise psychophysiological measures.

Results
Data collection is partially complete for the reaction time data and has just started for the physiological measures. A full report will be given on all of these at the conference. It is clear that carefully designed target-response tasks do show sensitivity to auditory attention at statistically reliable levels. Auditory probes show more sensitivity but seem to influence performance on the primary task; visual probes are less sensitive, and potentially less reliable, but affect the primary task less.

Conclusions
Dual-task methods, although complicated to design and control, can be used to assess attention load during music processing. These methods, along with psychophysiological measures, are quite promising as tools for increasing our knowledge of online processing of music.

Key words: Attention, Dual-task methods, Psychophysiology

[email protected]

63.6 Integration of non-diatonic chords into diatonic sequences: Results from scrambling sequences with secondary dominant chords

Nart Bedin Atalay 1, Hasan Gürkan Tekman 2

1Middle East Technical University, Turkey and Jyväskylä University, Finland
2Uludag University, Turkey

Background
The order of musical units has been shown to be critical for the perception of music (Bharucha, 1984; Deutsch, 1980). However, Tillmann and Bigand (2001) showed that manipulating the order of chords in chord sequences did not affect the observed advantage of global context in processing the tonic chord over the subdominant chord. The sequences of Tillmann and Bigand (2001) included only diatonic chords. A secondary dominant chord (SDC) is a non-diatonic major chord whose root is a perfect fifth above the root of a diatonic major or minor chord. In tonal harmony, an SDC must precede its diatonic associate (Piston, 1987). This music-theoretical constraint raises the hypothesis that the order of chords may provide information for the perception of chord sequences that include SDCs.

Aims
We hypothesized that the effect of global context would decrease when listening to scrambled versions of sequences that include SDCs.

Method
We investigated the effect of global context on priming of tonic and subdominant targets. Contexts were chord sequences (excerpts from Bach chorales) in their original or scrambled versions. Sequences were composed of either eight diatonic chords, or six diatonic chords and two SDCs. In the first and second experiments, effects of scrambling with the 2by2 and 4by4 algorithms (Tillmann & Bigand, 2001) were investigated, respectively. In the third experiment, effects of scrambling with the 2by2 algorithm were further investigated using Shepard tones.
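The chunk-based scrambling used in these experiments can be sketched as follows. This is a minimal reading of mine, not the authors' code: partition the context chords into consecutive chunks (of two for "2by2", four for "4by4"), permute the chunks, and keep the final target chord in place. Tillmann and Bigand (2001) define the actual procedure, which may differ in detail.

```python
import random

def scramble(chords, chunk_size, seed=None):
    """Chunk-wise scrambling sketch: split the context chords into
    consecutive chunks of `chunk_size`, permute the chunks, and keep
    the final (target) chord in its original position."""
    *context, target = chords
    chunks = [context[i:i + chunk_size]
              for i in range(0, len(context), chunk_size)]
    random.Random(seed).shuffle(chunks)
    return [c for chunk in chunks for c in chunk] + [target]
```

Note that within each chunk the original local order of chords is preserved; only the global ordering of chunks is disrupted.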

Results
There were no significant differences between listening to sequences with or without SDCs, nor between original and scrambled versions of sequences including SDCs, in Experiment 1 (N = 14) or in Experiment 2 (N = 13). Pooling Experiments 1 and 2 together did not change the results. Average reaction times from Experiment 3 (N = 19) replicated these results.

Conclusions
Results showed that listening to non-diatonic chords within diatonic sequences did not affect the

observed advantage of processing the tonic chord over the subdominant chord. SDCs were integrated into the percept formed by the diatonic chords without disturbing it, whether or not the SDCs were organized according to the music-theoretical constraint. Integration mechanisms are discussed in terms of decay and anchoring.

Key words: Chord priming, Global musical context, Non-diatonic chords

[email protected]

63.7 The singing difficulty in dotted rhythm: Towards an understanding of the influence of mother tongue on young children's musical behaviour

Nozomi Azechi

Institute of Education, University of London, UK

The influence of the mother tongue on young children's singing, especially of dotted rhythm, was examined in this study. Music and language are often closely linked, and the boundary between them is ambiguous. Just as young children's babbling is formed into the characteristic rhythm of their mother tongue, their singing might be formed into a culturally influenced rhythm. Japanese and English children aged 3-6 singing "If You're Happy and You Know It" were analyzed in two ways: 1) nPVI (normalized Pairwise Variability Index) analysis, and 2) the ratio of the pair of notes consisting of a dotted eighth note and a following sixteenth note. The actual lengths of IOIs (inter-onset intervals) were used for both analyses. A difference was found only between the 5-year-old groups (34.62 vs. 68.12). All groups had much lower nPVI values than the value calculated from score timing (118.80). This indicates that both the Japanese and the English groups sang the pair of notes in close to equal-timed rhythm; in other words, the pair of notes (a dotted eighth note and a sixteenth note) was sung as two notes of the same length (two eighth notes). The same tendency was found in both cultural groups, although a larger difference was expected from a previous study on "Twinkle, twinkle". Further analysis of the dotted rhythm is currently in progress, and results will be reported soon.
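The nPVI values reported above follow the standard pairwise-variability formula applied to successive inter-onset intervals. A minimal sketch (my code, not the author's; the normalization by m-1 follows Grabe and Low's standard definition):

```python
def npvi(iois):
    """Normalized Pairwise Variability Index over a list of
    inter-onset intervals: mean of |d_k - d_{k+1}| divided by the
    pair's mean duration, scaled by 100."""
    if len(iois) < 2:
        raise ValueError("need at least two intervals")
    m = len(iois)
    total = sum(abs(a - b) / ((a + b) / 2.0)
                for a, b in zip(iois, iois[1:]))
    return 100.0 * total / (m - 1)
```

For example, perfectly equal intervals give an nPVI of 0, while a strict 3:1 dotted alternation gives 100; the children's low observed values relative to score timing reflect intervals drifting towards equality.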

Key words: Rhythm development, Mother language, nPVI

[email protected]

63.8 Time series analysis as a method to characterize musical structures

Kai Bachmann

University Mozarteum Salzburg, Austria

A time series is a set of statistical information, compiled, registered or observed at different dates or periods of time and arranged chronologically. The analysis of time series carries out

mathematical-statistical analyses of such a series. It is presupposed that the information was recorded not continuously but discretely, that is to say, at finite temporal distances. For the analysis of musical structures by means of time series analysis, the computer program AlisOnda was developed.

With the current version of the above-mentioned program, three different analyses are carried out: determination of the average change of loudness within a given analysis window; determination of the distribution of frequencies, accumulated semitone by semitone (for each analysis window); and determination of the tonal density.
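The first of these descriptors, the average change of loudness per analysis window, can be sketched as follows. This is a guess at the computation from the description above; AlisOnda's actual definition is not published here, and the window size is an illustrative parameter:

```python
def avg_loudness_change(loudness, window):
    """Mean absolute frame-to-frame loudness change within
    consecutive, non-overlapping analysis windows (a sketch of one
    AlisOnda-style descriptor; the program's definition may differ)."""
    diffs = [abs(b - a) for a, b in zip(loudness, loudness[1:])]
    return [sum(diffs[i:i + window]) / window
            for i in range(0, len(diffs) - window + 1, window)]
```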

Using seven different interpretations of the same pieces of music (the second and third movements of Dvořák's Symphony No. 9, including three interpretations by the same conductor and orchestra made within ten days), the results of these three analyses are compared, and we try to answer the question of whether the results (the graphic visualization) of the AlisOnda parameters reflect the character of the compositions.

Methods from chronobiology are also applied to the information given by the program AlisOnda. This is done so that it can be compared, in a following step, with psychophysiological data.

This comparison (e.g. by means of correlation) would allow us to learn to what extent, and where, the music influences the observed biological organism; whether certain effects in the measured psychophysiological data can be attributed to the music listened to; and, finally, how music has to be composed to produce certain effects in the biological system.

Key words: Time series analysis, Interpretation, Chronobiology

[email protected]

63.9 The influence of musical expertise on music appreciation

Esther Beyer

ISME - International Society for Music Education
ABEM - Associação Brasileira de Educação Musical, Brazil
ANPPOM - Associação Nacional de Pesquisa e Pós-Graduação em Música, Brazil

Background
Several studies (Bigand, 2003; James et al., 2005; Ruthsatz, 2005; Koelsch, Schmidt & Kansok, 2005; Neuhoff, Knight & Wayand, 2002; Martin & Kim, 1998) compared how musician and non-musician listeners process musical structures. Some of them investigated listening to simple stimuli and found little difference between expert musician and non-musician listeners. But when listeners are asked to hear and analyse whole musical structures, the outcomes suggest differences between higher and lower musical expertise.

Aims
To discuss the influence of musical expertise on music appreciation activities.

Method
Subjects: 15 professional musicians or music educators, studying for their Master's degree at a private university in Brazil. Each subject was asked to report the years and hours of musical study and the time worked as a musician after their studies. Both were registered in numbers of years

and hours. Procedure: 5 musical pieces from different cultural contexts were presented aurally. Task: subjects were asked to write down the country or region of origin of the music presented and the instrument family heard in it.

Results
The approximate number of hours invested in music studies varied from 1,480 to 6,880. Generally, the subjects with the most correct answers had had continuous musical experience in formal music classes from 5 to 20 years of age. The subjects who invested more hours only during their undergraduate studies performed worse on appreciation than those who began earlier. Working time as a musician also varied, from 400 to 72,000 hours. Some subjects were expert musicians and others had recently finished their undergraduate musical studies. Although the experience of working with music as a profession is probably very enriching for a subject's musical perception, it is also important to note the large number of hours per week some subjects worked, and that this did not necessarily result in better musical appreciation.

Conclusions
Musical expertise may be a determinant factor in appreciation activities, combined with musical ability. For some tasks, such as free description of the music, there seem to be only small differences between higher and lower musical expertise.

Key words: Musical expertise, Musical appreciation, Music education

[email protected]

63.10 Changing the pacing stimulus intensity does not affect sensorimotor synchronization

Anita Bialunska, Piotr Jaskowski, Simone Dalla Bella

Department of Cognitive Psychology, University of Finance and Management in Warsaw, Poland

When we synchronize our finger or hand movements with an external stimulus (e.g. a metronome), our taps typically precede the external events by a few tens of milliseconds (NA, negative asynchrony). Representational models (e.g. Aschersleben, 2002) hypothesize that synchronization is established at a central level, where the action movement code has to coincide with the representation of external events. To explain NA, it is assumed that the processing times for generating the kinesthetic-tactile tap code and for generating the auditory or visual stimulus code are different. In order for these codes to coincide at a central level, the taps should precede the stimuli by approximately the difference between the processing times needed to build the representation of the information in the two afferent systems. Accordingly, Aschersleben et al. (2001) showed that manipulating the perceptual latency of somatosensory information incoming from taps, by changing the effector (hand vs. foot), resulted in changes in the amount of NA. Similarly, manipulation of the pacing stimulus intensity should also affect NA. To test this hypothesis, 20 participants had to produce short-duration force pulses with their index finger on a force transducer along with isochronous auditory stimuli (IOI = 800 ms). For comparison, a simple RT task using auditory stimuli with unequal IOIs was also performed. It is well known that intensity changes affect RTs. The intensity of the auditory stimuli was manipulated from near-threshold to strong. Results showed that increasing stimulus intensity did not affect NA, whereas the same manipulation reduced RT. In the

synchronization task, the time of occurrence of maximum tapping force did not change with the intensity of the pacing stimulus. Still, when intensity increased, maximum tapping force decreased. These findings are not consistent with representational theories. To understand NA, movement properties other than the taps' occurrence time should be taken into account (e.g. force). In addition, these findings suggest that the timing of action in synchronization and RT tasks is mediated by different mechanisms.

Key words: Synchronization, Negative asynchrony

[email protected]

63.11 The blue chords in rock music: Some possible meanings

Roberto Bolelli

Background
Some years ago I introduced a piece of research named "Is rock blue?" (Bolelli, 1992). That title derived from some particular chords characterizing rock music: I called them "blue chords", because they are built upon the minor third and the minor seventh of the blues scale, the so-called "blue notes". In that study I connected the musical analysis to some aspects of the "common musical competence" (Stefani, 1982), from the slide guitar technique to amateur musical practice, including the simplifications of the so-called "songbooks": all these elements are important, mainly in a teeming situation where modal sensibility is being inserted into the tonal one.

Aims
Now I want to explore further the possible meanings of those chords: do the blue chords give rock music a different character from the blues, even though they arise from the pentatonic scale?

Main Contribution
I think the answer to the last question is affirmative, and the emotional level plays a basic part. The instability of the blue notes causes the indefinite character of blues music, floating between social, sexual and emotional meanings. Albert Murray says blues is an energetic music: it drives out the "blue devils" of melancholy, accompanying the dance too, while soul and rock music express a "whimpering self-pity" (Murray, 1982). Is the colour of the blue chords faded, compared to the blue notes?

I think the blue chords express a sense of strong steadiness, especially when played with a heavy stroke of the electric guitar. The glissando of blue notes, mainly performed by voices and guitars, makes the sound slide between the major and minor modes, in a melodic, "horizontal" dimension. In the comparison between the instability of blue notes and the harmonic, "vertical" steadiness of blue chords, I will also refer to "gesture" in music (Delalande, 1993).

Implications
Some musicologists have introduced the concept of "chord-timbre", concerning the style of some 20th-century composers, from Debussy onwards: it might be helpful to refer to that concept, since harmonic theory alone cannot completely explain chordal events (Risset, 1992). In this theoretical paper I will also try to establish a starting point for empirical research on the connection between the blue chords and the emotional reactions of listeners, through the formulation of a perceptual test, in a music-therapeutic setting too (Juslin, 1998; Postacchini, 2001).

Key words: Blue chords, Timbre, Emotions

[email protected]

63.12 Facial expressions and piano performance

Luisa Bonfiglioli1, Roberto Caterina1, Iolanda Incasa1, Mario Baroni2

1Department of Psychology, University of Bologna, Italy
2Department of Music, University of Bologna, Italy

Background
Previous studies (Caterina et al. 2004) on piano players' body and facial expressions during performance showed that there is a specific relation between non-verbal expressions and musical structure.

Aims
We compared the value of single facial action-unit movements, such as eyebrow raising and frowning, found in our observations, with the concomitant musical structure.

Method
Video recordings of a group of professional piano players performing different repertoires were observed. A facial expression analysis was carried out by two independent expert judges using Ekman and Friesen's FACS. Facial action-unit movements were considered together with the main musical features, in order to get some idea of the meaning of the most frequently used facial signals.

Results
The first results of our analyses indicate that eyebrow movements, head movements and mouth opening-closing are the most frequent non-verbal signals in piano performances. At a musical level, eyebrow raising and lowering may be considered perceptual cues, and they often separate one musical phrase-segment from another. We isolated some categories concerning the meaning of eyebrow movements. With regard to musical structure, eyebrow raising occurred mainly when playing pianissimo, staccato, and with high-pitched melodies, whereas eyebrow frowning occurred mainly when playing forte or fortissimo, with fast tempo, and with low-register melodies. In agreement with human ethological studies (Costa, Ricci Bitti 2003), eyebrow raising is associated with the expression of surprise, and in the musical domain too it occurs mainly whenever there is a violation of expectations. Eyebrow frowning is associated with the expression of negative emotions such as fear, anger and disgust, and in music performance it occurs mainly when the musical features are associated with the expression of these negative emotions, such as in strongly dissonant passages, in low-register melodies, or when playing forte or fortissimo.

Conclusions
Eyebrow raising and frowning are the most frequent movements during piano performances. Their occurrence is associated with specific musical features and concerns the interpreter's self-monitoring activities (sensorimotor control and emotional control), the cognitive processes involved in the analysis and memorization of a musical piece, and communicative aspects directed towards the listeners.

Key words: Eyebrows, Facial expression, Piano performance

[email protected]

63.13 Musical chord categorization in the brains of theorists and instrumentalists

Elvira Brattico1, Tuire Kuusi2, Mari Tervaniemi1

1Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland
2Department of Music Theory, Sibelius Academy, Finland

Background
The human brain can be modified by long-term exposure to music. However, it is not yet known to what extent repeated practice in listening analytically to sounds automatizes pitch processing of structurally complex sound combinations.

Aims
We studied whether the brain mechanisms involved in a chord categorization task differed according to the musical background and listening strategies of the participants. The task was to categorize different pentachord voicings according to their set-class identity. The subjects were either music theorists and composers (with their main focus on "analytic" training) or instrumentalists (with mainly "procedural" or motoric training). During the experiment, electric brain responses were measured first when participants were distracted from the sounds and subsequently when they were performing a sound-related task. By these means, we wished to determine whether the type of musical expertise selectively enhances either automatic or controlled forms of sound neurocognition.

Method
11 theorists and 10 instrumentalists were measured with a 64-channel electroencephalograph (EEG). They heard a sound sequence in which 80% of the pentachords represented one set-class (the context chords) and 20% another set-class (the deviant chords). During the "passive" experiment, subjects ignored the chords and concentrated on watching a movie. In the "active" experiment, subjects pressed a button whenever they heard a chord deviating from the context.

Results
In the passive experiment the deviant chords elicited a small negative brain response peaking at 150 ms, corresponding to the mismatch negativity (MMN), with no group differences in its amplitude, latency or scalp distribution. In the active experiment the deviant chords evoked a larger MMN in the right hemisphere for theorists than for instrumentalists.
The P300 peaked earlier and had a more anterior distribution for instrumentalists than for theorists.

Conclusions
Musical expertise and listening strategies modulate the neural activation pattern related to automatic discrimination and conscious categorization of musical chords. Our results support the view that the listening biography of musicians may at least partially modify their music-related brain functions.

Key words: Musical expertise, Event-related potentials, Set-class theory

[email protected]

63.14 A computational approach for measuring articulation

Jens Brosbol, Emery Schubert

School of Music and Music Education, University of New South Wales, Australia

Articulation (the duration of time between the onset and offset of a note) in a musical context has effects that both concern performers and are important in the communication of structure, emotion and expression. However, relatively little work has been done on the extraction and computational calculation of articulation from sound recordings, such as commercial CDs. This paper reports on some of the issues involved in measuring articulation in sound files, and describes a method of identifying note onset and offset times which relies on the gradient magnitude of the energy and loudness pattern, rather than a simple note intensity threshold. The latter is more compliant with the traditional notion of an onset and offset threshold, but is highly sensitive to masking and noise. The former, gradient-analysis approach identifies sudden positive-gradient values caused by note onsets, and sudden negative-gradient changes caused by note offsets. The problems of identifying articulation in multiple voices, such as pitch tracking and onset-offset pairing/matching, are also discussed.
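The gradient idea can be sketched in a few lines. This is an illustrative reading, not the authors' implementation: the envelope representation and the threshold values are my assumptions, and a real system would operate on a smoothed loudness curve extracted from audio.

```python
def onsets_offsets(envelope, rise=0.5, fall=-0.5):
    """Gradient-based note boundary sketch: report onsets where the
    first difference of the loudness envelope exceeds `rise`, and
    offsets where it drops below `fall` (thresholds are illustrative)."""
    diffs = [b - a for a, b in zip(envelope, envelope[1:])]
    onsets = [i + 1 for i, d in enumerate(diffs) if d > rise]
    offsets = [i + 1 for i, d in enumerate(diffs) if d < fall]
    return onsets, offsets
```

Because the detector reacts to the rate of change rather than to an absolute level, a note that decays into background noise can still register a boundary, which is the robustness advantage the paper claims over a fixed intensity threshold.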

Key words: Articulation, Psychoacoustics

[email protected]

63.15 Where is the "one"? Tapping on the perceived beginning of repeating rhythmical sequences

Riccardo Brunetti1, Marta Olivetti Belardinelli2

1Department of Psychology, University of Rome, Italy
2ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy

When we listen to a repeating rhythmical sequence, we easily perceive its repetition point, namely the point where the flow of events loops and starts all over again. That means that we quickly extract its structure (the regularities or invariant information and/or, at a higher level, its metrical structure) and, accordingly, choose one of the events as the "first one". Sometimes, however, this decision is revised: we perceive the pattern as "temporally rotated" and change our mind about which event should be at the beginning of the repeating sequence. This effect has been called a gestalt flip, and it is theoretically comparable to the classical re-interpretation of an ambiguous visual figure. The study presented here was designed to find out which principles regulate the choice of a specific event as the "first" and which patterns are more easily rotated or flipped, starting from a review of the literature and the classical Run-Gap principles proposed by Wendell Garner and colleagues. 22 patterns were composed for the experiment, varying systematically in length, metrical strength, and event grouping. 13 subjects participated in the experiment, with between 0 and 12 years of musical training (mean 4.8 yrs). In each trial, subjects were presented with a repeating pattern fading in from silence; the task was to tap at the beginning of each repetition, starting as soon as they perceived the starting point of the pattern. Results show that this effect is ruled by a clear hierarchy of principles, from the position of the largest group of events to the metrical structure evoked by the pattern. At the same time, a clear role is played by the familiarity of certain structures (such as the 8-element pattern, often interpreted as a 4/4 measure), and by a new principle linked to the structure of sub-repetitions inside the pattern itself. The outline of all these principles is discussed along with proposals for further studies, including the possible role of other parameters (intensity accents, melodic movements).

Key words: Phenomenal perception, Entrainment, Isochronous and non-isochronous meter

[email protected]

63.16 The role of pitch interval magnitude in melodic contour tasks

Tim Byron, Kate Stevens

MARCS, University of Western Sydney, Australia

The cognition of melodic contour - the direction of the melody, ignoring the individual notes or intervals between notes - is important in the cognition of short unfamiliar melodies. It has recently been implicated as a link between the cognition of music and the cognition of language. It is argued that the concept of melodic contour needs to be expanded, and that pitch interval magnitude (i.e., steps or leaps) is important to the cognition of melodic contour.

This experiment used a melodic contour discrimination task. Participants' accuracy at discriminating between two melodies was the dependent variable, while the independent variables were the type of change made to the melody (i.e., contour direction change, pitch interval magnitude change, or contour direction/pitch interval magnitude change), participants (musicians or non-musicians), and melody length (either 8 or 16 notes long).

It is predicted that all participants will discriminate between the melody pair significantly more often when there are changes to the melody, in both 8-note and 16-note melodies, than when there is no change. It is predicted that discrimination between 8-note melodies will be significantly more accurate than discrimination between 16-note melodies. It is predicted that, in both 8-note and 16-note melodies, there will be no significant difference between participants' accuracy at discriminating melodies with a contour direction change and their accuracy at discriminating melodies with a pitch interval magnitude change, and that, in 16-note melodies, melodies with both a contour direction and a pitch interval magnitude change will be discriminated significantly more accurately than melodies with only a contour direction change or only a pitch interval magnitude change. Finally, it is predicted that musician participants will be more accurate than non-musicians at discriminating differences in all conditions.

The results of this experiment support the hypothesis that, in both 8-note and 16-note melodies, both musician and non-musician participants are able to accurately discriminate between melodies with no change and melodies with contour direction changes or pitch interval magnitude changes. Additionally, the results support the hypothesis that musicians are significantly more accurate than non-musicians at detecting changes to melodic contours.

Key words: Melodic contour

[email protected]


63.17 Historical development of expertise in jazz double-bass players: Increased technical performance

Claudio Caloiero, Andreas C. Lehmann

Hochschule für Musik Würzburg, Germany

Background
The acquisition of expert performance in the domain of classical Western art music has been studied, whereas the domain of jazz music remains largely unexplored. So far, research has neglected instruments that are learned partly in an informal way, instruments that are used mainly in an accompanying function, and instruments that entail improvisation. These missing aspects are all relevant for jazz double-bass skills. This instrument's history falls entirely within the 20th century and is documented with recordings.

Aims
In this study we trace the historical development of skill for the double-bass in jazz music and explore the possibility of generalizing previous findings from the classical music domain.

Method
Four expert bass players served as subjects in a listening experiment. The experts were asked to assess 16 excerpts for quality of performance and structural aspects of the music using 5-point rating scales. The excerpts were taken from a commercially available anthology and covered the years 1929 to 1960. Furthermore, biographical information on 550 known bass players was extracted from encyclopaedias.

Results
Reliability of the judges in the listening experiment was found to be moderate. Year of recording correlated positively with complexity, virtuosity, and technical quality (all > .50, p < .05), but not significantly with aesthetic or musical quality. A factor analysis yielded two factors (Virtuosity, Expressivity), which together explained 91% of the variance. Correlations of factor scores with year of recording supported the above-mentioned results. Our historiometric analysis of biographical data revealed a tendency for lower starting ages for instrumental tuition and increased transfer/cross-over from the classical domain.

Conclusions
Jazz double-bass players have become ever more skilful than their predecessors. However, this technical aspect of performance is largely independent of today's experts' musical appreciation of those works. Our analysis of biographical data of expert bass players can help explain the increase in skill. Also, as writings on jazz show, jazz players are highly competitive with regard to their technical skills. Although this study confirms previous results found for classical music, it also suggests important differences.

Key words: Expert performance, Jazz, Double-bass

[email protected]


63.18 Singing performance as a motivation to involve pupils in singing activities

Aintzane Camara

University of the Basque Country, Spain

The implementation of singing performances in the elementary school provides excellent motivation to involve children in singing activities. We believe that singing activities are a very good way to improve confidence in singing, to foster group work and children's socialization, and to enjoy different kinds of music. Music teachers in most schools in the Basque Autonomous Community in Spain usually organize various kinds of music performances to celebrate the end of the academic year, vacations or other occasions. Moreover, teachers think these activities help children achieve the musical abilities and skills that are necessary for their development and basic musical formation.

The aim of this investigation is to analyse the relationship between participation in the concerts and children's attitudes toward singing activities at school and outside school. The investigation also analyses the criteria music teachers have used to programme this kind of singing activity, and their thinking about the importance of singing in elementary education.

The present study is being carried out in three schools of the Basque Country where music teachers usually organize concerts. The participants are music teachers and pupils of the 5th and 6th grades of these schools. The research is designed in two phases. The first comprises semi-structured interviews with music teachers to learn about their intentions and planning for the concerts. In addition, some classes will be observed to examine how music teachers work on this kind of singing activity. In the second phase, the performance will be observed, and the opinions of music teachers and some selected pupils about the performance will be obtained through semi-structured interviews.

Results will be interpreted from the data obtained from the interviews and from observation of the classes and the performance. For the interpretation of the data, a series of categories related to singing activities at school and outside school will be elaborated. In conclusion, we hope to find that music performances at school play an important part in the development of 5th and 6th grade pupils' attitudes towards singing.

Key words: Music education, Singing activities, Attitudes

[email protected]

63.19 Relative sensitivity to melodic and phonetic strings changes between 8 and 11 months

Gina Cardillo, Patricia Kuhl

Institute for Learning and Brain Sciences, Department of Speech and Hearing Sciences, University of Washington, USA

Previous research suggests that in categorizing sounds, infants are not limited to local cues such as phonetic identity, spectral information or harmonic structure, but also depend on the more global, relational characteristics of sound sequences, such as pitch contour. The acoustic features of infant-directed speech include exaggerated intonation contours, which are thought to increase infant attention and thus facilitate language acquisition. There is some evidence to suggest that exaggerated pitch contours facilitate phonetic perception, though the precise nature of the relationship between pitch and phonetic encoding in infancy is largely unknown. The present study investigated infants' relative sensitivity to pitch and phonetic information at 8 and 11 months of age using a head-turn auditory preference procedure. Twenty infants at each age were tested in two conditions, "word" and "melody," in counterbalanced order. In both conditions, infants were initially familiarized for one minute with a four-item string (word = "go-bi-ra-tu"; melody = the syllable "la" sung on A-F#-E-B). In a subsequent test phase, the familiar stimulus and a novel stimulus (in which the order of elements in the word or melody string was changed) were randomly presented. Looking times to each stimulus were recorded. Older infants demonstrated a novelty preference for new melodies over familiar melodies, F(1,19)=4.94, p<.05, but no preference in the word condition, leading to a significant stimulus (familiar vs novel) by condition (word or melody) interaction, F(1,19)=4.46, p<.05. Younger infants did not show a preference pattern in either the melody or the word condition. Thus, by 11 months, pitch information may be more robustly encoded than at 8 months, and pitch changes may be more salient than phonetic changes. Our working hypothesis is that the addition of melodic cues (i.e., singing) to words should facilitate the encoding of syllable order in those words. This prediction, currently being tested, is in line with other data suggesting that pitch cues facilitate phonetic categorization. The relationship between pitch perception skills and early language acquisition will be discussed.

Key words: Infant, Speech perception, Melodic perception

[email protected]

63.20 Can rhythm help children with reading and writing difficulties?

Patrizia Casella Piccinini

Primary School Teacher, Lucca, Italy

My work is founded on the concept that our knowledge is essentially based on our understanding of: 1. space, such as measure and orientation; 2. time, such as pulsation, rhythm, length, pause, order; 3. space-time, such as the space of temporal events in which the individual tunes into the collective good.

Indeed, people without a good intuition of these categories have learning difficulties. My work aims to show that we can prevent some common didactic difficulties, and can also increase people's learning abilities, by improving their intuition of space and time. For reading and writing, my method first uses isochronous rhythmic movement of the body and of the voice. It then adds auditory-rhythmic perception by using music and the metronome, because they discretize space-time and improve rhythmic movement and the speed of the body, the coordination between movement and voice, and the speed and fluency of reading and writing aloud.

This way of proceeding gives very good results with children who read and write poorly; it gives undreamed-of results with brain-damaged people and with dyslexic children. The latter immediately improved their fluency and correctness in reading and writing.


Conclusions
All the pupils improved their reading and writing: children without reading difficulties read very fast; children with reading difficulties or dyslexic pupils made progress in speed and correctness, and they did not regress. Therefore my work demonstrates that rhythm can help pupils and students with reading and writing difficulties.

Key words: Rhythm, Rhythmic movement, Music and metronome

[email protected]

63.21 Exploring the relation between the singing activity, the personality of singers, and their state and trait anger levels

Daniela Coimbra1,2, Jane Davidson3

1Centro de Investigação em Ciência e Tecnologia das Artes (CITAR), Universidade Católica Portuguesa, Portugal
2Centre for the Study of Music Performance (CSMP), Royal College of Music, UK
3The University of Sheffield, UK

Background
Research on the personality of musicians has discussed whether certain personality characteristics, especially those related to creativity and associated with creative people, are an aid to musical performance (Kemp, 1996; Csikszentmihalyi & Getzels, 1973; Howe, 1990; Dyce & O'Connor, 1994). Although it is not possible to establish cause-effect relations between a given activity and the personality of those who carry it out, associations between the two have been observed (Pervin, 1989; Kemp, 1996).

Aims
The aim of the present study was to shed light on the personality of classical singers. Moreover, given that one of the core features of a singing performance is the effective expression of emotions, and given the importance of anger in carrying out activities, the study also aimed to identify specific aspects of the state and trait anger levels of singers. A final aim was to explore possible relations between the characteristics that emerged from the study and the activity of singing.

Method
The NEO PI-R (Revised NEO Personality Inventory; Costa & McCrae, 1992) and the STAXI-2 (State-Trait Anger Expression Inventory-2; Spielberger, 1999) formed the basis of the quantitative analysis. Both were completed by a group of fifty-four singers at a Music College.

Results
The most distinctive aspects of the singers' personality were their high scores on the Openness dimension and their low scores on the Conscientiousness dimension. Results also showed that those who sang better scored higher on the Conscientiousness dimension of the personality test. Results from the STAXI-2 showed that both female and male singers differed from their normative groups on the Anger Expression-In scale, and that there were no significant differences between the singers' State Anger levels before and after the performances.


Conclusions
It was hypothesised that Openness provides the fuel for the imagination and creativity necessary to carry out the activity of singing, and that the Conscientiousness dimension is of paramount importance for academic achievement. It was also hypothesised that the mid-year examinations did not raise the singers' levels of anger, but that singers may need to refrain from expressing anger overtly in a competitive and hierarchical system such as that of a Music College.

Key words: Personality assessment, Revised personality inventory, NEO PI-R, STAXI-2, Singing activity

[email protected]


63.23 Does information or involvement increase reported enjoyment of classical music?

Lenore DeFonso, Steve Johnson, Mary Rowlett

Indiana University-Purdue University Fort Wayne, USA

Background
Many people claim they do not like classical music, but often they have not been exposed to it and are uninformed about what it is. Previous research suggests that familiarity and exposure help to determine how much people like different types of music. However, the role of specific information has not been examined. Involvement with classical music, e.g. in music groups or classes, appears to influence appreciation of it, but this also has not been examined directly.

Aims
This study's purpose is to examine the effects of information about the music, as well as involvement in the music, on preferences for and enjoyment of classical music. The results may reveal some factors influencing tendencies to dislike or avoid classical music, and how these may be changed.

Method
Three groups of subjects heard 10 short musical excerpts, all classical and all programmatic (i.e. music depicting a particular event, place, or scenario). Highly familiar music (e.g. wedding or graduation marches) was not included. Group I ("Involvement") described a scene or "story" they imagined while listening to the music. Group II ("Information") was told the title, composer, and what the composer tried to represent. Group III (Control) simply listened to the music. All groups filled out a rating form for each excerpt, and a questionnaire about their musical preferences and experience.

Results
Data analysis will involve comparing the three groups on how much they liked the musical excerpts, and on their likelihood of listening to such music in the future or attending a concert of this type of music. The analysis will take into account prior knowledge of the music, the ease with which subjects could (or thought they would be able to) describe a scene or "story," and their past experience with music.

Conclusions
The decreasing number of people attending classical music performances has been a concern in recent years. Music groups have tried to discover ways to reach young people in order to build audiences for the future. This study's results may suggest some ways this can be accomplished.

Key words: Classical music, Music preferences, Music information or involvement


[email protected]

63.24 Musicality of mother-infant vocal interactions

Anne Delavenne

Centre de recherche en Psychologie et Musicologie Systématique, U.F.R. S.P.S.E., Université Paris X-Nanterre, France

Within the theoretical framework of communicative musicality, we explored the impact of a lack of musicality on mother-infant interactions through analyses of the vocal interactions of a group of 30 mothers with their 3-month-old infants. 15 of the mothers were diagnosed with borderline personality disorder, while the other 15 constituted the control group. We used the still-face paradigm to compare the vocal interactions of mothers with borderline personality disorder to those of the control group. The interactions were both video and audio recorded. A previously conducted pilot study had opened up new perspectives for research, particularly concerning the musical phrasing of mothers with borderline personality disorder. In the present study we chose to focus on the rhythmic and melodic aspects of the musical phrasing of mothers with borderline personality disorder. We selected one minute from each interaction, based on the quality of the infant's arousal, to perform a spectrographic and acoustic micro-analysis. We then studied the communicative musicality of each dyad. We used a grid to collect information concerning the duration and number of vocalizations, pauses, and narrative episodes. We also considered the variations in melodic contours of the mothers' vocalizations and those of the infants. Finally, we examined the semantic content of the mothers' speech. We observed that mothers with borderline personality disorder tend to be more rigid and repetitive than control mothers. Accordingly, their vocalizations are significantly longer, while the pauses between vocalizations are shorter. Moreover, there are fewer narrative episodes than in the interactions of the control group. Finally, infants of mothers with borderline personality disorder tend to vocalize less than infants of control mothers. This study highlights the importance of musicality in the co-construction of a shared time between a mother and her infant, and motivates us to examine the different aspects of communicative musicality. In particular, we would like to develop a procedure to measure the quality of the communicative musicality of early mother-infant interactions.

Key words: Communicative musicality, Mother-infant vocal interactions, Borderline personality disorder

[email protected]

63.25 Rhythm-melody interaction: Is rhythmic reproduction affected by melodic complexity?

Franco Delogu, Riccardo Brunetti, Marta Olivetti Belardinelli

Department of Psychology, University of Rome, Italy

In the psychological literature, many attempts have been made to describe how the different domains of musical pitch and rhythm are related to one another. Whether these two parameters are independently or jointly encoded in music listening and performance has been extensively discussed and supported by several controversial experimental investigations. The present research, consisting of three experiments, focuses on the ability to selectively attend to the rhythmic information of a musical piece at different ages and expertise levels. In Exp. 1, 45 naïve adult subjects were tested, while 59 children were invited to participate in Exp. 2. Exp. 3 was administered to 26 professional musicians. In all the experiments, participants were invited to learn and reproduce the rhythm of a set of short musical pieces specifically composed for the study, varying in both melodic and rhythmic complexity. Subjects reproduced the rhythmic profile of the fragments by pressing keys of the computer keyboard at the correct timings. Each key press triggered the correct note of the sequence. Overall, the results show that melodic complexity does not affect rhythmic performance, and that the ability to selectively attend to rhythmic information does not require specific musical education and is already present in children. Our findings are consistent with a model of encoding strategy in which rhythm and melody can each be selectively attended to effectively. Present investigations are focusing on the possibility of selectively reproducing rhythm while also paying attention to melodic material.

Key words: Rhythm and melody interactions, Rhythmic performance, Cognitive organization of music processing

[email protected]

63.26 New technologies for new music education: The Continuator in a classroom setting

Laura Ferrari

Nursery School, Bologna, Italy

Background
What happens in a classroom when a keyboard answers and plays like a virtual partner? The majority of the research carried out until now has regarded new technologies as tools that allow children to produce music (Webster, 2002). A particular interactive system, called the Continuator, was developed at the Sony Computer Science Laboratory, based on the notion of Interactive Reflexive Musical Systems (IRMS). The Continuator is interesting because it is able, in real time, to learn and produce music in the same style as the user playing the keyboard. The experiments carried out so far with children 3 to 5 years old suggested that the Continuator generates interesting child/machine interaction and creative musical processes in young children (Addessi & Pachet, 2005).

Aims
A new didactical experience was carried out. From a pedagogical point of view, the general aim was to analyse whether and how the Continuator can be used in daily school activities. We were also interested in understanding the role of the teacher in two different situations: in free play and in guided activities with the system.

Method
The experience was carried out with 18 children aged 3-5 years in an Italian kindergarten (Bologna). The children, divided into small groups (3-4 children), played with the Continuator three times. In the 1st session the children explored the system. In the 2nd and 3rd sessions the teacher suggested three activities with the Continuator. In all the sessions, free play (alone, in pairs or in a group) with the system was provided.

Results
The children reached high levels of well-being and pleasure, very similar to those described in the theory of Flow (Csikszentmihalyi, 1996). They learned to dialogue musically with the system, developing autonomy and learning to manage some kinds of collaborative playing. These factors gave rise to some particularly careful and prolonged bouts of listening, stimulating the children to think in sound, to spend time with the system, and to develop a genuine desire to "play" with music.

Conclusions
This practical experience shows that the Continuator could represent a versatile device to enhance musical invention and exploration in a classroom setting. The system's double role of partner and teacher seems to enhance socialization and an important self-regulation (Canevaro, 2002) of the group. In this poster the classroom setting, the method and some results will be shown.

Key words: New technologies, Music education, Didactical experience

[email protected]

63.27 Effects of articulation styles on perception of modulated tempos in violin excerpts

John M. Geringer, Clifford K. Madsen, Rebecca B. MacLeod

Center for Music Research, College of Music, Florida State University, Tallahassee, USA

Background
The study of the perception of ongoing temporal events in music is an important aspect of understanding both music performance and listeners' responses. Decisions regarding choice of tempo and the amount and direction of tempo deviations may be influenced by a number of factors including listener discrimination, style, and specific musical context. One variable that has not been studied is the effect of articulation style on tempo perception.

Aims
The purpose of this research was to examine the effect of three articulation styles on the ability of musicians to perceive tempo modulations. Specifically, would violin excerpts performed with staccato, legato, and pizzicato articulations affect music majors' perception of and preferences for modulated tempos?

Method
Seventy-two music majors served as participants. Two examples with contrasting rhythmic rates were chosen from the solo violin literature. Each excerpt was performed with three articulations (legato, staccato and pizzicato) and presented to listeners in three conditions of tempo modulation: gradual increase, gradual decrease, or no change. Modulations (total magnitude = 6%) were produced in small increments so that listeners would not notice abrupt changes.

Results
Both articulation style and direction of modulation affected listener perception of tempo, and these two factors interacted significantly. Legato examples were consistently rated as increasing in tempo more (and decreasing less) than staccato and pizzicato examples in both excerpts. However, staccato was heard as increasing less (and decreasing more) than pizzicato in the excerpt with less rhythmic activity. Listeners preferred the no-change and tempo-increase modulations to the tempo decreases, and preferred legato and pizzicato styles to staccato in the excerpt with less rhythmic activity.

Conclusions
There were obvious differences between articulation styles, as well as interactions with the direction of modulated change. Further, interactions occurred with the specific excerpts that differed in rate of rhythmic activity. These findings lead to speculation that style variations such as articulation and rates of melodic rhythm may exert effects on perception comparable to tempo itself. Further research is necessary to determine implications for a broader perceptual theory and for music performance practice.
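A gradual modulation with a total magnitude of 6% can be built from small, equal multiplicative increments, so that no single step is abruptly noticeable. The sketch below illustrates the arithmetic only; the abstract does not specify the actual increment scheme, so the step count and base tempo here are assumptions.

```python
def tempo_steps(base_bpm, total_change=0.06, n_steps=12):
    """Return n_steps + 1 tempi rising geometrically from base_bpm
    to base_bpm * (1 + total_change), in equal multiplicative steps.
    n_steps is an illustrative assumption, not from the study."""
    ratio = (1.0 + total_change) ** (1.0 / n_steps)
    return [base_bpm * ratio ** i for i in range(n_steps + 1)]

tempi = tempo_steps(100.0)      # e.g. 100 BPM rising to 106 BPM
per_step = tempi[1] / tempi[0]  # each step is under 0.5%, intended to
                                # stay below abrupt-change detection
```

A decreasing modulation would simply use a negative total_change; the geometric (rather than additive) spacing keeps the relative size of every step identical.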

Key words: Tempo perception, Performance practice, Articulation style

[email protected]

63.28 The music education of seven cantons of French-speaking Switzerland: A comparative study

Marcelo Giglio

Institut de Psychologie, Université de Neuchâtel, Switzerland
Haute École Pédagogique des Cantons de Berne, du Jura et de Neuchâtel, Switzerland

This study offers a comparative analysis of the music education curricula in the educational systems of the seven French-speaking cantons of Switzerland. A parametric and nonparametric conceptual structure was developed from the cultural and educational context of each canton. Some differences are observed concerning the name used to designate the discipline, the weekly time allotted, and the organization. Some similarities are observed with regard to the cantonal study plans, the projects and the methodology used. This comparison of music education suggests applications to forthcoming changes in the music education curricula of the French-speaking part of Switzerland.

Key words: Music education, Switzerland, Comparative study

[email protected]

63.29 Semantics and rhetoric in the "mute" recitatives and lieder by P. Hindemith

Nicola Giosmin

Background
The study of musical rhetoric aims to point out the rhetorical elements bound to a text and mirrored by musical structures. Rhetorical features are, obviously, present in Western music from the


444 Poster session II

early musical repertoires to those of the XX century. Great importance is given to the text and to its relationship with standard musical figures (anaphora, interrogatio, rest, etc.), but what is the relationship between meaning and rhetorical figures? We think that rhetoric can (in some cases) build signification and meaning simply by "hiding" itself.

Aims
Through the analysis of a small set of pieces, the main aim is to show how rhetorical features operate in building meaning without displaying the necessary link with the text. This will be done by studying a small repertoire by P. Hindemith: a set of three instrumental (not sung) recitatives and lieder: the "Lied" from the harp sonata, the "Recitative" from the sonata for two pianos, and the "Recitative and Lied" from the double-bass (and piano) sonata. The peculiarity of this set is that a poetic text is placed at the top of the page, above the musical part, not inside it. This would suggest that the text merely conveys general information about the mood of the piece; this is wrong: the singing voice of these compositions corresponds perfectly to the above-mentioned text (syllables, intonation, etc.).

Main Contribution
The main contribution is not only insight into Hindemith's output (which is the simpler result): above all, it provides an example of rhetoric as a builder of meaning, in addition to the signification given by its standard figures. The final result will be an analysis of a "recitative" without words (in the double-bass sonata, the third piece of our set), studying the role of "mute" rhetoric.

Implications
The most important implications are a demonstration of the power of musical rhetoric and (on the other hand) of the poverty of musical semantics (above all, semantics tied to external elements). A further implication is the study of how rhetoric produces meaning, and the demonstration that the inverse path (from general meaning to the rhetorical one) is impossible.

Key words: Analysis, Musical meaning

[email protected]

63.30 What kinds of lyrics are more communicative to patients with autism?

Tohshin Go1, Eiko Shimokawa2, Mika Hoshide2, Mayumi Sato2

1Department of Infants' Brain and Cognitive Development, Tokyo Women's Medical University, Tokyo, Japan
2Department of Rehabilitation, Saitama Medical Center for the Disabled, Saitama, Japan

Background
Patients with autism have qualitative impairment in verbal and nonverbal communication. Music therapy has become accepted as a useful intervention for these patients and could improve their communication ability.

Aims
To investigate what kind of musical intervention is useful for patients with autism, we sang the same song with different lyrics during music therapy sessions for these patients and assessed the relationship between the lyrics and their communicative behavior.


Method
We used favorite and familiar songs for each patient, for example "The Old Gray Mare", "It's a Small World", and other songs. First, the song was sung with its original lyrics, which were totally unrelated to the patient's activities during the individual music therapy session. Second, after a short pause, the same tune was sung with different lyrics connected with what the patient was doing at that moment. We repeated the same procedure several times and observed the patients' behavior as recorded on videotape. We counted how many times the patients laughed, vocalized, spoke, played an instrument by themselves, or initiated interaction with surrounding persons during each part of the song, and considered these positive behaviors. We also counted how many times the patients left the room, cried, tried to hurt surrounding persons or things, or put their fingers into their mouths, and considered these negative behaviors. The number of negative behaviors was subtracted from the number of positive behaviors, and this score was compared across the parts of the song.

Results
Two boys with autism, aged five and eight years, were included in this study. Neither boy was able to speak. Their developmental quotients (DQ) were 38 and 17, respectively. Their behavior scores increased significantly during the song with changed lyrics compared to the song with the original lyrics.

Conclusions
Music has been shown to be a better intervention tool for patients with autism if it is combined with lyrics related to their actual activity. Language might be understood and communicated better if it is presented with music.

Key words: Autism, Music therapy, Lyrics

[email protected]

63.31 Nonmusicians might not know what is in or out of tune

Massimo Grassi

University of Padua and Udine, Italy

Findings on nonmusicians' tuning abilities are inconsistent. They perform poorly in matching tasks where intonation is evaluated in comparison with an internalized standard (e.g. Elliot, Platt, & Racine, 1987; Loosen, 1994), although they show sensitivity to out-of-tune notes as high as that of musicians in tasks that do not require such a comparison (e.g. Lynch, Eilers, Oller, & Urbano, 1991). Recently, Grassi (under review) investigated whether this inconsistency was due to a lack of an internalized representation of intonation. In a yes/no experiment, nonmusicians heard in-tune (equitempered scale) and out-of-tune (five log-compressed, five log-stretched) major diatonic scales and were asked to indicate which scales were out of tune. Nonmusicians' performance was better than in the matching experiments but mediocre overall: the probability of an "out of tune" response increased only shallowly as a function of the difference between the out-of-tune and the equitempered scale. Moreover, a high number of "out of tune" responses (30%) was observed for the equitempered scale. Here, I propose and test a virtual listener (VL) model that can account for Grassi's results as well as those of previous experiments. The VL possesses good frequency discrimination abilities and an adaptive internalized standard of intonation that selects the "best in tune" stimulus according to the stimulus set it is listening to. In brief, the VL adopts as the best-in-tune stimulus


the "barycenter of intonation" of a given stimulus set (i.e. the equitempered scale in the case of Grassi's experiment). The VL's results conform to those of Grassi's listeners with a discrepancy of only 5%. With a behavioral experiment I tested whether nonmusicians adopt as the best-in-tune stimulus the barycenter of intonation of a given stimulus set. Nonmusicians heard musical scales whose intonation ranged from highly compressed up to equitempered (Group 1) or from equitempered up to highly stretched (Group 2). All listeners' task was to indicate whether the scales were in or out of tune. Overall, the VL's and the nonmusicians' results were similar, but a moderate discrepancy between them (10%) was observed.

Key words: Tuning perception, Simulation, Pitch perception

[email protected]

63.32 Latent absolute pitch: An ordinary ability?

Manuela Birgit Gussmack, Oliver Vitouch, Bartosz Gula

Cognitive Psychology Unit (CPU), Department of Psychology, University of Klagenfurt, Austria

Background
Absolute pitch (AP) is the ability to identify pitches without a reference tone. The related phenomenon of (passive) "absolute tonality" (e.g., Terhardt & Seewann, 1983; Schellenberg & Trehub, 2003) is the ability to discriminate between the original key of a musical piece and exact transpositions of it. Absolute tonality seems to be more widespread than traditionally assumed. Generally, all-or-none models of absolute pitch are increasingly being replaced by a continuum view.

Aims
With a new design, Vitouch & Gaugusch (2000) found that high school students (amateur musicians) without AP were significantly able to distinguish an original piece from even a one-semitone transposition. The aim of the present study was an independent, modified and extended replication of this finding.

Method
Eighty-three high school students (41 with active piano experience), all non-possessors of AP, listened to the beginning of Beethoven's "Für Elise". The piece was presented via headphones on 14 consecutive days (one trial per day), either in the original key (A minor) or as a "digital twin" in A-flat minor, in a double-blind design. The pieces were identical except for pitch (MIDI transposition). Subjects had to judge whether or not they heard the original key. The 24-hour inter-trial interval served as a rigorous method of memory interference.

Results
Students with and without piano-playing experience showed significant, and significantly different, recognition scores of 67% (M = 9.3 out of 14, SD = 1.95; effect size delta = 1.2) and 59% (M = 8.2, SD = 2.2; delta = 0.6), respectively. The results were highly convergent with the findings of Vitouch & Gaugusch (2000), with correspondence even at the group-specific effect-size level.

Conclusions
Our findings are in line with the results of Schellenberg & Trehub (2003). They lend strong support to the view that latent forms of AP are widespread, at least among amateur musicians, and that the rudimentary ability of absolute tonality is modulated by (early?) music experience.


References
Schellenberg, E. G., & Trehub, S. E. (2003). Good pitch memory is widespread. Psychological Science, 14, 262-266.

Terhardt, E., & Seewann, M. (1983). Aural key identification and its relationship to absolute pitch. Music Perception, 1, 63-83.

Vitouch, O., & Gaugusch, A. (2000). Absolute recognition of musical keys in non-absolute-pitch-possessors. In C. Woods, G. Luck, R. Brochard, F. Seddon & J. A. Sloboda (Eds.), Proceedings of the 6th International Conference on Music Perception and Cognition (CD-ROM). Keele, UK: Keele University Dept. of Psychology.

[email protected]

63.33 An analysis of "Successful Sight Singing, A Creative Step By Step Approach" for principles of comprehensive musicianship

Tracy Heavner

University of South Alabama, USA

Background
In a traditional music curriculum, various courses such as music theory, music history, music literature and performance are usually studied as separate and distinct areas of music. The comprehensive musicianship approach advocates the intradisciplinary study of music, in which all areas of music are integrated and synthesized into a unified whole. Several beginning band method books have adopted this pedagogical approach in order to provide more comprehensive instruction to instrumental students.

Aims
The purpose of this study was to (1) analyze the choral method book "Successful Sight Singing, A Creative Step By Step Approach" for principles of comprehensive musicianship and (2) compare it with a researcher-developed, theoretical comprehensive musicianship curriculum model. The model outlines the musical knowledge and skill advocated by the comprehensive musicianship approach and consists of five categories: concepts, content, activities, instructional literature and evaluation.

Method
A panel of collegiate music education students experienced in the comprehensive musicianship approach served as reviewers for this study. Each reviewer individually completed a five-question survey that investigated "Successful Sight Singing" for principles of comprehensive musicianship. The Chi-Square Goodness-of-Fit Test, with an alpha significance level of .05, was used to determine the relationship between the observed survey scores of "Successful Sight Singing" and the predicted scores of the theoretical comprehensive musicianship curriculum model.

Results
Results from the Chi-Square Goodness-of-Fit Test indicated that "Successful Sight Singing" matched several of the five categories in the theoretical comprehensive curriculum model but did not match the model in overall comprehensive musicianship scores.


Conclusions
For many students, the choral ensemble is the only music class they will take. It is the hope of this researcher that choral method books, like their band method book counterparts, will incorporate principles of comprehensive musicianship so that choral students will learn not only performance skills but also musical knowledge and skills in all areas of music. Students will develop a comprehensive musical knowledge that will allow them to be better performers and will increase their appreciation and enjoyment of all music as they become well-educated patrons of the arts.

Key words: Comprehensive musicianship, Music education, Choral pedagogy

[email protected]

63.34 The tone-melody interface in popular songs written in tone languages

Vincie W.S. Ho

Department of Linguistics, The University of Hong Kong, China

"The lyric sounds strange." It is not uncommon to hear this judgment among native Cantonese speakers as a general reaction to lyrics bearing lexical tones that conflict with the melody in Cantonese popular songs.

Pitch is one of the common acoustic parameters shared by language and music. In tone languages in particular, variations in pitch level are used to bring about contrasts in word meaning. One may wonder whether there is a close correspondence between linguistic tones and musical tunes, and if not, whether the comprehensibility of the lyrics is affected.

While a great number of research publications exist on the relationship between musical semiotics and discourse, not much research has been done on whether and how tone and melody correlate in songs from a phonological point of view. Among the very few studies in this area, the findings appear quite contradictory: evidence from Dagaare and Mandarin songs argues against the assumption that linguistic tones and melody must correspond to preserve the intelligibility of the text (Bodomo and Mora 2000, Chan 1987, Ho 1998). On the other hand, there are claims that lexical tones and musical melody do correlate in songs sung in some Asian languages such as Cantonese, Thai and the Beijing dialect (Chan 1987, Chan & Wee 2000, Ho 1998, Ho & Bodomo 2003, Stock 1999, Yung 1989). However, few attempts have been made so far to offer an in-depth analysis of how tones correlate with the melodic line.

This paper examines a sample of Cantonese songs for the ways in which lexical tones comply with the melody. Evidence suggests that the mapping between tonal patterns and melodic contour is subject to a set of constraints at the phonological, syntactic and semantic levels. An assessment is made of the extent to which such constraints hold, of the effect on the intelligibility of the lyrics when these constraints are violated, and of whether unintelligible sequences of syllables can be sensibly interpreted when put in context. Some Mandarin and Thai popular songs are also investigated to show whether the same findings hold in songs written in tonal languages other than Cantonese.

Key words: Melody, Lexical tones, Linguistics

[email protected]


63.35 Relationships of dynamics, rhythm performance, and other elements of music with overall ratings of wind band performances

Christopher Johnson1, John Geringer2

1The University of Kansas, USA
2The Florida State University, USA

Background and Aims
Previous studies have explored influences of music elements on judges' ratings and listener evaluations of music performance, including tone, intonation, rhythmic timing, dynamics, and expression. In this study we explored possible relationships of dynamic range, rhythmic variation, and other elements with the overall musical evaluation of band excerpts.

Method
The investigation initially addressed whether there would be patterns in evaluators' assessments of particular musical elements, including balance/blend, dynamics, tone/intonation, rhythm/tempo, and musical expression, that would predict their overall assessment of wind band performances. Music major students (N = 84) from one of three state universities in the United States served as participants. Listeners heard four wind band excerpts, each performed by two different ensembles at three levels of performance experience: high school, university, and professional ensembles. Additionally, two acoustical parameters of the performances were analyzed: the dynamic range and the rhythmic variation of each excerpt.

Results
Listeners differentiated between the various performances. An examination of beta coefficients in significant regression equations revealed that musical expression was the strongest predictor of overall ratings in 50% of the trials, followed by tone/intonation (22.5%), dynamics (17.5%), and balance/blend (10%). The category of rhythm/tempo was not significantly associated with overall ratings. Correlations of performed dynamic range with listeners' overall ratings indicated slightly positive relationships for some excerpts (r = .19) and negative relationships for others (r = -.64); however, the relationship across all 24 excerpts was minimal (r = .06). Similarly, correlations for rhythmic variation for specific excerpts varied from r = .02 to .23, and the overall relationship was .14. Faster performances were rated more highly in three of the four excerpts (r = .67 to .75).

Conclusions
Most of the time, the specific rating that best predicted the overall assessment was the rating for musical expression, followed by tone/intonation ratings. Little relationship was found between either performed dynamic or rhythmic variations and overall ratings. Evaluation of musical performance has been examined many times in different ways. Future research should examine other underlying characteristics of music performance to ascertain additional aspects of performance that may influence listener evaluations.

Key words: Performance assessment, Musical expression, Musical elements

[email protected]


63.36 Musical impression and contribution of structural factors

Hiroshi Kawakami, Rei Shimano, Ko Matsumoto

College of Art, Nihon University, Japan

Music is constructed from various structural factors, for example pitch, timbre and tempo. We hear these factors as a whole and form an impression of the musical piece. Research concerning the influence of musical structure on emotional expression has been conducted in many experiments, and some results suggest that each factor has an effect on our emotional response.

The purpose of this research is to determine how musical elements affect our impression and to investigate the relations between structural factors.

Three experiments were conducted to identify the factors of impression and to investigate the influence of structural factors. As stimuli, we chose the famous piece Boléro by M. Ravel and two pieces from the Four Piano Pieces by J. Brahms: the Intermezzo in E minor and the Rhapsody in E flat major. In the first experiment, 91 subjects made subjective evaluations using rating scales without listening to the music. By factor analysis, we found four factors: Passion, Activity, Sufficiency and Brightness. In the second experiment, 101 subjects listened to Boléro and completed the questionnaire; the result was the same as in the previous subjective evaluation, even while listening to the music. In the last experiment, 20 subjects listened to 46 short variations selected from the Four Piano Pieces, arranged according to twelve factors (duration, pitch, loudness, timbre, accent, rhythm, melodic direction, duration of the center note, mode, tempo, number of harmonies, and voicing), and evaluation was performed using the same questionnaire. Using ANOVA, we examined the relation between impression and each structural factor, and by calculating the contribution of each factor from the data scores we assessed the influence of the twelve structural factors.

As a result, it turned out that the structural factors differed greatly in their influence on impression. In particular, we concluded that mode and pitch had a great influence. On the contrary, we found that duration and voicing were not so important. Thus, these experiments show that the structural elements each exert a different influence on our impression.

Key words: Musical structural factor, Impression, Subjective evaluation

[email protected]

63.37 ERD/ERS analysis of EEG reveals differences between musicians and non-musicians during discrimination of pitches

Kaisu I. Krohn1, Mirka Pesonen2, Mari Tervaniemi1, Christina M. Krause2

1Cognitive Brain Research Unit and Helsinki Brain Research Center, Finland
2Cognitive Science Unit, Finland

Background
The ERD/ERS method can be used to assess the relative task- and/or event-related changes in


brain EEG oscillatory activity in specific frequency bands. Previous studies have revealed specific differences in brain oscillatory responses between musicians and non-musicians while listening to pieces of music (Petsche et al., 1988; Overman et al., 2003).

Aims
The aim was to compare musical processing in musicians and non-musicians in order to determine whether the neuroplastic effects of musical expertise could be probed with the ERD/ERS method even while the subjects were discriminating single pitches.

Methods
EEG was recorded from 11 musicians and 12 non-musicians while they were attending to an auditory stimulus sequence consisting of all seven pitches of the E major scale. The frequently presented E4 (330 Hz) served as the standard (70%) and was infrequently replaced by the targets: the other six pitches of the scale. The subjects were instructed to press a button on a response pad when they detected a target. The digitized EEG data were analyzed with the wavelet transform method. Thereafter, the relative difference in EEG power between the standard stimulus and the target stimuli was calculated as a function of time (-100 to 1000 ms after stimulus onset), frequency (1-30 Hz) and electrode location (Fz, Cz, Pz, L1, R1). This resulted in time-frequency representations (TFRs), in which the ERD/ERS values were expressed as percentages. In these TFRs, negative values indicated a relative power decrease (event-related desynchronization, ERD) and positive values indicated a relative power increase (event-related synchronization, ERS).

Results
Distinct and statistically significant ERD/ERS responses were observed. Theta-frequency (4 Hz) ERS responses were elicited in both groups in response to a change in pitch. In the non-musicians, both lower (8-10 Hz) and upper alpha (10-12 Hz) ERS responses were elicited. In contrast, in the musicians, upper alpha (10-12 Hz) ERD and beta (18-25 Hz) ERD responses were elicited.

Conclusions
The present study supplements previous EEG and MEG observations on music processing, obtained with the event-related brain potentials method (e.g. Pantev et al., 1998), by showing that musical expertise seems to alter the neural processing of pitches.

Key words: Musical expertise, Neuroplasticity, Brain oscillations

[email protected]

63.38 Hearing colors: The role of visual cues in pitch recognition and encoding

Amir Lahav1,2, Christine George1, Elliot Saltzman1,3

1The Music Mind & Motion Lab, Sargent College, Boston University, Boston, USA
2Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, USA
3Haskins Laboratories, New Haven, USA

Pitch perception plays an essential role in both music and speech. Humans store melodic information primarily in terms of relative pitch, although they are usually unable to name pitch values in fixed categories, as they do for color. We performed two experiments to test the effect


of audiovisual feedback on pitch memory in both musicians and nonmusicians. In experiment 1, subjects learned to play a tune on a piano keyboard, using a set of five notes and keys. Learning was accomplished by ear, with no sight-reading involved. We varied learning conditions with two types of keyboards: one was a typical black-and-white keyboard, whereas the other was color-coded (each key had a different color). We then measured subjects' ability to recognize individual context-free piano notes (from the newly acquired tune) and match them with the corresponding piano keys. In experiment 2, subjects were trained to reproduce audiovisual sequences with the aim of developing a functional pitch-to-color mapping. It appears that multisensory experiences can influence unisensory (auditory) processing and memory performance. We will discuss the role of visual information in auditory memory, and the effect of long-term musical training on the processing of both musical and nonmusical audiovisual information.

Key words: Pitch perception, Auditory memory

[email protected]

63.39 Temporal attention in short melodies

Kathrin Lange, Martin Heil

Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Germany

The possibility of focusing attention on a point in time has been investigated in recent studies in which the duration of the cue-target interval was indicated by a symbolic cue. It has been shown that focusing attention on a point in time leads to faster responses to attended as compared to unattended stimuli. Moreover, event-related potential (ERP) studies have provided evidence that temporal attention can be associated with a modulation of early, perceptual processing stages. Temporal attention may also play a role when listening to a piece of music. It is possible to focus attention on a melody even if it does not stand out from the musical context by virtue of a different pitch range, intensity, or timbre. It has been speculated that the perception of melodies is supported by expectancies based on the known timing and pitches of the tones. In addition, temporal expectancies may be activated by the underlying pulse. The goal of the present study was to analyze the influence of temporal attention on the processing of single tones within a melody with behavioral and ERP measures. Two four-tone melodies were presented, which were either identical or differed in the pitch and/or the timing of the third tone. To encourage participants to focus attention on the points in time marked as relevant by the first melody, the second melody was hidden among distracter tones. Participants had to indicate whether or not the pitch of the third tone was the same for both melodies. Pitch discrimination should be improved when the timing of both melodies was identical (i.e. when the third tone was presented at the attended point in time) compared to when the timing was different (i.e. when the third tone was presented at an unattended point in time). Moreover, if the processes underlying temporal attention are associated with a modulation of early, perceptual processing steps, the amplitude of the auditory N1 elicited by the third tone should be larger for stimuli at an attended point in time than for stimuli at an unattended point in time.

Key words: Temporal attention, Melody perception, ERP

[email protected]


63.40 Temporal stability in repeated music listening tasks

Eleni Lapidaki

Department of Music Studies, Aristotle University of Thessaloniki, Greece

Background
Empirical research on the stability of temporal judgments in music has increased considerably during recent decades. The context of this paper is the question whether a piece of music has one "right" tempo, and if so, whether this seemingly well-established concept possesses an absolute or right time framework in the mind of the listener. An experiment on tempo perception during music listening (Lapidaki, 2000) demonstrated that the initial tempo significantly dominated subjects' "right" tempo judgments. However, a relatively small number of adults exhibited an exceptional ability with respect to acute stability of large-scale timing in music. New results of the analysis of that experiment will be discussed.

Aims
This paper will focus on the use of internal clock-based mechanisms to represent musical time and tempo in repeated listening tasks, which may help us advance our understanding of the activity of the human body and consciousness in terms of what they reveal about motor programs and precise tempo preferences involved in the musical experience. Thereby, the theoretical framework of the Lapidaki (2000) study and the discussion of its findings will be expanded and discussed in a new light.

Main Contribution
I will outline the forms of internal clock used in empirical studies and how they have a bearing on research into the physiological basis of stability in timing tasks in music. I will also review the literature on (a) the stability of motor and perceptual tasks during music performance, (b) personal or preferred tempo judgments, (c) the stability of tempo judgments in music listening, and (d) absolute tempo, which reflects a form of "implicit cognition."

Implications
The material of this paper may prove useful in integrating a range of data and stimulating new research on the temporal organization of nervous system interactions and internal clocks. This may help us expand our knowledge of the way the human body participates in all acts of music experience and discover new biological (and ontological) properties of music that may, in turn, have an influence on the human body and its jouissance.

Key words: Tempo perception, Implicit cognition, Music teaching and learning

[email protected]

63.41 Absolute Pitch in transposed-instrument performers: A case study

Ana Laucirica

Universidad Pública de Navarra, Spain


Some musicians identify chroma (pitch name) without an external reference. This ability, called absolute pitch (AP), appears in these subjects at different levels, depending mainly on the timbre of the sound source, the register, and/or the demand to identify altered sounds (pitch class). Moreover, age, or habitual performance on instruments not tuned to A4 = 440 Hz, may shift the pitch identification ability by one or several semitones with respect to standard pitch. The case of transposed-instrument performers is of special interest because of their daily work with musical sounds whose nomenclature differs from A4 = 440 Hz. This study aims to characterize total or partial AP in transposed-instrument performers, in order to investigate the influence of AP possession on musical experience and vice versa.

Two advanced music conservatoire students were selected. Their abilities were observed over a whole academic year during sight-reading, improvisation, memory, and melody transposition activities. Both reported mentally hearing the pitch names while performing, which is why they were selected for a pitch identification test and a structured interview.

Subject 1, a saxophonist specialising in jazz at the Conservatory, showed on the pitch identification test a form of AP limited by timbre. Subject 2, a clarinettist, possesses a form of AP limited by timbre, register, and pitch class, shifted a semitone above A4 = 440 Hz. Both demonstrated great competence in the observed activities. They report transposing by relative pitch, and perceiving pitch names helps them in this activity as well as in improvisation, sight-reading, and melodic memory.

Although broader research is needed, we observe that absolute and relative pitch can coexist, that transposed-instrument performers can possess AP, and that particular musical abilities may depend more on practice than on AP possession.

Key words: Absolute and relative pitch, Transposed-instrument performance, Music education

[email protected]

63.42 Preverbal interaction skills and intonation: The role of musical elements

Kerstin Leimbrink

Prototypical musical phrases in motherese are basic elements of preverbal interaction. In the first year, mothers and infants interact with each other mainly through vocalizations. Motherese shows prototypical melodic contours, and these intonation “phrases” are a common starting point for the development of both musical and language skills.

The use of different melodic phrases depends on the context: (1) rising contours are used for getting attention; (2) slow melodies with falling contours are used for soothing the child; (3) to gain the child’s visual attention (eye contact).

The tone and sound of the mother’s voice mark an utterance or speech sound as communicative for the child.

Intonation is also a starting point for musical development in the child. Preverbal interaction is structured by the same means of expression as music: melody, rhythm, timbre, and pitch. Infants mainly imitate pitch; to a lesser degree they also imitate melody, timbre, and rhythm.

A contingent, affirmative reaction by the adult supports the development of the infant’s communicative skills. These supporting reactions are mainly mirroring, repetition, and exaggeration of intonation.

Friday, August 25th 2006 455

In my project I would like to analyse the musical elements in an infant’s articulation: Which are the dominant elements? How do they initiate turn-taking in the first 10 months of life?

Key words: Intonation, Preverbal vocalizations, Interaction

[email protected]

63.43 Interactions between phonemes and melody in the perception of natural and synthesized sung syllables

Pascale Lidji1, Régine Kolinsky2, Isabelle Peretz3, José Morais2

1Université Libre de Bruxelles, Belgium, and University of Montreal, Canada
2Université Libre de Bruxelles, Belgium
3University of Montreal, Canada

Background
Previous studies suggest the independence of the semantic and melodic dimensions of songs (e.g. Besson et al., 1998), but the relation between phonology and melody in the perception of sung stimuli has been less studied (but see Crowder et al., 1990). Although several authors (e.g. Wood, 1974; Miller, 1980; Repp & Lin, 1990) analyzed the interactions between phonemes and pitch in speech processing with Garner’s (1974) speeded classification paradigm, these results can hardly be generalized to song processing, since they concern isolated pitches rather than melodic contour.

Aims
The aim of the present study was thus to determine whether the phonological dimension of the lyrics and the contour dimension of the melody are perceived as integral or separable in natural and synthesized singing.

Method
Garner’s (1974) filtering, redundancy, and condensation tests were used to investigate the on-line processing of naturally sung syllables in 48 non-musicians. Participants classified disyllabic nonsense words sung on two-note melodic intervals, either according to melodic contour or according to syllable. In one set of materials the words differed by one consonant (/damy/ vs. /dany/), while in the other they differed by one vowel (/dale/ vs. /dalo/).

Results
The results suggest that consonants and intervals were processed as separate dimensions. In contrast, vowels and melody were processed as integral dimensions.

Conclusions
The processing interaction between phonology and melody thus seems to differ for consonants and vowels. Further research is needed to specify the origin of this difference. One potential explanation rests on the acoustical correlation between vowels and pitch in natural singing: indeed, vowel discriminability decreased as the pitch of the tone increased. To test this hypothesis, a control experiment with synthesized instead of natural singing is in progress.
Two versions of the material varying on vowels were synthesized with a source-filter vocal synthesizer, so that the acoustical correlation between vowel and pitch was reproduced in one version and suppressed in the second. If the integrality between vowel and pitch is related to this acoustical interaction, it should disappear with the second version.
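As an aside for readers unfamiliar with the paradigm, Garner’s speeded-classification conditions can be sketched as combinations of two stimulus dimensions. The sketch below builds baseline, correlated, and orthogonal stimulus sets for the syllable and contour dimensions described above; the labels and the exact condition construction are assumptions based on the standard paradigm, not the authors’ materials.

```python
from itertools import product

# Two stimulus dimensions from the study: syllable and melodic contour.
syllables = ["damy", "dany"]
contours = ["rising", "falling"]

# Baseline block: only the target dimension (syllable) varies.
baseline = [(s, contours[0]) for s in syllables]

# Correlated (redundancy) block: the two dimensions covary perfectly.
correlated = list(zip(syllables, contours))

# Orthogonal (filtering) block: the dimensions vary independently.
orthogonal = list(product(syllables, contours))

print(len(baseline), len(correlated), len(orthogonal))  # 2 2 4
```

Integrality is then diagnosed by comparing classification speed across blocks: interference in the orthogonal block and a gain in the correlated block suggest integral dimensions.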

Key words: Phonology, Melody, Song perception

[email protected]

63.44 Image and imagination in music perception

Xiaoling Lu

Center of Art Research, University of Shantou, Guangdong, China

Background
Behavioural experiments have shown an interesting connection between how well one knows something and how much one likes it: the so-called mere exposure effect (1). This effect has also been found for music perception (2). However, the neural correlates of the phenomenon are not well understood.

Aims
The aim of the study is to find the neural correlates of the mere exposure effect in music perception. It is hypothesised that brain activity during melody perception will depend on how well known the melody is. The actual presentation frequency and the memory rating, as well as the associated brain activity pattern, will correlate with the subjective rating. For seldom-heard melodies, many brain regions should show activation, including higher-level auditory areas, association areas, and areas supporting a heightened attention function, primarily in frontal cortex. Well-known melodies should activate a smaller total extent of areas, since the cognitive burden will have lightened and processing will have become more automatic. Additional brain regions will probably appear for the perception of well-known melodies, for instance memory areas such as the hippocampus.

Method
Stimuli: 36 specially composed, similar piano melodies of 15 seconds’ duration, following certain musicological guidelines. The keys used are Ab, C, and E, in both major and minor modes. Subjects: 20 non-professional but musically interested right-handed adults, aged 20-35, with equal gender distribution.

Design
Learning phase: 172 presentations (in total) of 18 different melodies in three groups, with presentation frequencies of 2, 8, and 16 times, respectively, pseudo-randomised. In addition, 6 target melodies are used, which the subject must identify by a button press. Scan phase: 24 presentations of different melodies in four groups: 18 from the learning phase plus 6 new ones. The subjects’ task is to rate their liking of each melody immediately after its completion. Memory phase: 30 presentations of different melodies in five groups: 24 from the scan phase plus 6 new ones. The subjects’ task is to rate their certainty of having heard each melody before.
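The frequency manipulation in the learning phase can be sketched as follows. This is a hypothetical reconstruction, not the authors’ procedure: the melody labels are invented, the 6 target melodies are left out, and a plain shuffle stands in for the exact pseudo-randomisation constraints.

```python
import random

# 18 learning melodies in three groups of 6, presented 2, 8 and 16 times
# each (group and melody labels are invented for illustration).
groups = {2: [f"low_{i}" for i in range(1, 7)],
          8: [f"mid_{i}" for i in range(1, 7)],
          16: [f"high_{i}" for i in range(1, 7)]}

presentations = [melody
                 for freq, melodies in groups.items()
                 for melody in melodies
                 for _ in range(freq)]
random.shuffle(presentations)  # stand-in for true pseudo-randomisation

# 6*2 + 6*8 + 6*16 = 156 presentations of the 18 learning melodies
print(len(presentations))
```

The target melodies and button-press catch trials would be interleaved on top of this list, which is why the abstract’s total (172) exceeds the 156 presentations generated here.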

Scanning
GE MR scanner, TR = 3000 ms; total scan length 9 min. Block design, with silence and a chromatic scale as baselines. Data analysis: preprocessing (realignment, normalisation, smoothing) using SPM2 (3); parametric design, mixed-effects model.


Results and Conclusions
Study in progress; results will be presented in the poster.

References
(1) Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9, 1-28.

(2) Szpunar, K. K., Schellenberg, E. G., & Pliner, P. (2004). Liking and memory for musical stimuli as a function of exposure. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 370-381.

(3) http://www.fil.ion.ucl.ac.uk/spm/

Key words: Image, Imagination, Music perception

[email protected]

63.45 Metric structure, performer intention, and referent level

Peter Martens

Texas Tech University, USA

Background
Jones and Boltz (1989) propose that a listener’s initial physical and attentional experience of musical meter occurs at a referent level. Resonance models (e.g. Moelants, 2002; London, 2004) predict that performance tempo has the greatest influence on referent level, but Martens (2005) demonstrated that tempo is often no more important than metric structure as a factor in the choice of a referent level. Given a specific performance tempo of a specific piece, how might performers’ differing interpretations of the piece’s metric structure affect listeners’ choice of referent level?

Aims
The goal of this study was to measure how successfully performers’ intentions communicate a referent level to listeners.

Method
Two string quartets, A and B, performed four 30-second pieces; the stimuli consisted of recordings of these performances. Both quartets had received two separate half-hour coaching sessions with the experimenter, during which the following performance criteria were conveyed and rehearsed: (1) pieces were to be performed at a given tempo, and (2) the quartets were to conceive and communicate a specific pulse in the music as their main beat. The main beat for each piece differed between the quartets, even though the overall rate of presentation of each piece was identical for both.

Each piece had a generically slower or faster main beat, and these were distributed evenly between quartets. Two groups of 30 participants each viewed the video recordings of these performances, one group viewing the A quartet, the other the B quartet. After a warm-up tapping task, participants were asked to tap along with the music at a steady and comfortable rate.

Results
Preliminary results indicate that listeners’ choice of referent level was significantly influenced by the main beat that the performers were feeling and attempting to express.


Conclusions
Given an identical performance tempo and metric structure, an instrumental ensemble can communicate different experiences of a piece’s meter to listeners. Knowledge of a piece’s metrical structure by the performers and/or coach, as well as conscious performance decisions in this area during a piece’s preparation, are important and should be more explicitly integrated into performance study.

Key words: Performance, Meter, Beat

[email protected]

63.46 Developmental changes in auditory tempo sensitivity: Support for an age-specific entrainment region hypothesis

Ann Mary Mercier, J. Devin McAuley

Department of Psychology, Bowling Green State University, USA

In a recent study, McAuley, Jones, Holub, Johnston, & Miller (2006) constructed lifespan developmental profiles for a battery of perceptual-motor tasks that included synchronize-continue tapping. In this research, the authors propose an age-specific entrainment region hypothesis, whereby the range of sequence rates (tempi) to which an individual can readily entrain (synchronize) is predicted to widen during childhood and then narrow again late in life. Results from synchronize-continue tapping over a wide range of ages and tempi supported this hypothesis. The present study tested the entrainment region hypothesis using a tempo-discrimination task.

Children (ages 4-9 years) and college-age adults judged the relative tempo of standard-comparison pairs of isochronous tone sequences. On each trial, participants heard a six-tone standard sequence followed by a variable-length (2-, 4-, or 6-tone) comparison sequence and judged whether the tempo of the comparison was faster or slower than the standard. Standard tempo was fixed within a block of trials, while the comparison tempo took on values yoked to the standard (±18%, ±12%, or ±6%). Three standard tempi (300 ms, 600 ms, and 900 ms) were tested over the course of the experiment, with the presentation order of the different standard tempi counterbalanced across participants.
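The yoked comparison tempi can be made concrete with a short sketch. This is an illustrative reconstruction of the stimulus values implied by the design, not the authors’ code; the names are assumptions.

```python
# Comparison inter-onset intervals (IOIs) yoked to each standard tempo,
# at relative changes of +/-6%, +/-12% and +/-18% as described above.
STANDARD_IOIS_MS = [300, 600, 900]
RELATIVE_CHANGES = [0.06, 0.12, 0.18]

def comparison_iois(standard_ms):
    """Return the six comparison IOIs (faster and slower) for a standard."""
    iois = []
    for change in RELATIVE_CHANGES:
        iois.append(round(standard_ms * (1 - change)))  # faster (shorter IOI)
        iois.append(round(standard_ms * (1 + change)))  # slower (longer IOI)
    return sorted(iois)

for std in STANDARD_IOIS_MS:
    print(std, comparison_iois(std))
```

Yoking the changes proportionally rather than by a fixed millisecond amount keeps the relative difficulty of the discrimination comparable across the three standard tempi.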

Tempo sensitivity improved dramatically with increased age, with all age groups demonstrating best performance at the intermediate 600-ms standard tempo. Consistent with a widening of the entrainment region with increased age, the greatest age-related improvements in tempo sensitivity were found at the slowest 900-ms tempo. Consistent with Miller & McAuley (2005), we found that increasing the number of tones in the comparison sequence generally improved tempo sensitivity, but that the amount of improvement varied as a function of both age and tempo. Results suggest that participants receive the greatest benefit from additional tones in the comparison sequence at the fastest 300-ms tempo, and the least benefit at the slowest 900-ms tempo. Additional tones in the comparison sequence tend to benefit younger children more than older children, especially at the faster tempi. Findings will be discussed in the context of an entrainment theory of timing (McAuley & Jones, 2003).

Key words: Tempo, Development, Entrainment

[email protected]


63.47 Linguistic and musical stress in Russian folk songs

Melissa Michaud Baese1, Richard Ashley2

1Northwestern University, Department of Linguistics, USA
2Northwestern University, Department of Music Studies, USA

Background
Hierarchical rhythmic structures are part of music through metric structure. Language has similar hierarchical rhythmic structures, with stressed and unstressed syllables in words and durational and stress differences across words. Since rhythmic structures occur in both language and music, one may ask how they interact when text is set to music. These factors have been the focus of much recent discussion of rhythm in different languages and musics.

Aims
This paper examines whether text settings reinforce the underlying rhythmic structures of a language, specifically Russian (which has rhythmic similarities to and differences from French and English). Specifically, we examine whether the musical structure of Russian folk songs highlights the linguistic prosody.

Method
Materials: Fifteen Russian folk songs from published collections were chosen for their primarily syllabic settings. Procedure: The text of each song was marked for linguistic stress using a Russian dictionary. Independently, the text was marked for musical stress, considering both rhythmic duration and metric stress; the two markings were then compared.

Results
The primary finding is that linguistic and musical stress occurred on the same syllable of each word, and unstressed syllables always occurred on weak beats. In addition, when the material did vary from a syllabic setting (using a melisma, inserting rests, subdividing longer notes, or otherwise altering the rhythm), the effect was to preserve the strong beat-strong syllable relationship.

Conclusions
Text settings in folk songs do reinforce the underlying linguistic rhythmic structures of Russian, and they highlight problems with the typical “syllable-timed” and “stress-timed” categories for language rhythm. Three additional investigations are proposed. First, a native speaker will mark the stress in the texts to confirm the dictionary predictions. Second, native speakers will record the texts; these productions will be examined for effects of linguistic stress. Finally, native speakers will be presented with a series of short songs that match the aforementioned model, and others that do not, to provide goodness-of-fit judgments. These three follow-up experiments will further investigate our hypothesis.

Key words: Language, Rhythm, Melody

[email protected]

63.48 A study of beliefs about teaching, learning and performing music improvisation

Biasutti Michele1, Frezza Luigi2


1University of Venice, Italy
2University of Padua, Italy

Background
Scientific research has considered improvisation from many perspectives. Sloboda (1985) reported the experience of pianist David Sudnow, who described learning improvisation as a process of three stages preceded by a preliminary one. Pressing (1988) defined general principles involved in improvisation, assuming that feedback and anticipation are fundamental processes in decision making. Johnson-Laird (2002) proposed the model of algorithmic demand: improvisation was considered a creative act performed on a set of rules defined by two algorithms. Sawyer (1999, 2000) defined improvisation as creativity in performance, and studied three aspects of environmental and social factors: unpredictability, use of structures, and collaborative emergence. Kenny & Gellrich (2002) proposed three “integrated” models referring to different aspects of improvisation: generative mechanisms, mental processes, and learning processes.

Aims
The aim of the research was to explore expert improvisers’ ideas of improvisation, to test the theoretical models considered, and to develop educational perspectives.

Method
Ten expert improvisers answered two semi-structured interviews. The first interview addressed participants’ definitions of improvisation; theoretical concepts of improvisation; relationships among improvisation, composition, and performance; practical skills involved in improvisation; the importance of practice and exercise; and the way musicians develop their personal style of improvisation. The second interview gathered information on teaching and learning improvisation. Participants were asked whether improvisation can be taught, what elements teaching and learning improvisation should rely on, and what the benefits of teaching and learning improvisation are in terms of music education.

Results
Qualitative analysis was carried out using an inductive method. Results showed that musicians referred to three aspects of improvisation: complexity, inspiration, and communication. Regarding teaching and learning, all participants pointed out that improvisation can be taught in several ways, taking musical and social constraints into account. Students should learn different techniques in order to increase their range of choices during improvisation tasks. However, students cannot be taught what to perform, only how to improvise.

Conclusions
The results support the theoretical models of improvisation considered. Music improvisation is a complex and multidimensional concept, including technical, expressive, and social elements.

Key words: Improvisation, Teaching improvisation, Learning improvisation

[email protected]

63.49 How long does a piece of music last? A study of music tempo, duration and counting during listening

Biasutti Michele, Pattaro Eugenio


University of Venice, Italy, and Department of Education, University of Padua, Italy

Background
Research on the influence of music tempo on time perception has concerned the analysis of background music in natural contexts (schools, shops), using a retrospective paradigm (North, Hargreaves & Heath, 1998; Oakes, 2003; Caldwell & Hibbert, 2002). These studies examined preferences and the phenomenon in general, but give little insight into which parameters a subject uses to perceive the duration of a musical piece.

Aims
The research considered the influence of music tempo, duration, and counting on the evaluation of the duration of a musical piece.

Method
96 voluntary participants, balanced by sex and musical experience, were tested. A prospective paradigm was adopted. Subjects were asked to estimate the duration of various versions of the same musical piece, modified by computer in duration and music tempo, in two experimental phases. In the first phase subjects were asked not to count; in the second, to count.

Results
The variables were music tempo, duration of the piece, and counting; sex and musical experience were considered as concurrent variables. Data were subjected to ANOVA. Music tempo was significant, F(3, 276) = 5.399, p < .01, as was the duration of the piece, F(1.16, 106.39) = 9.055, p < .01. For the concurrent variables (sex and musical experience) no main differences were found. A significant interaction involving musical experience emerged during the counting phase: males performed better than females in the group without musical experience, and females performed better than males in the group with musical experience, F(1, 92) = 5.06, p < .05.

Conclusions
The results showed the different relevance of counting, duration, and music tempo for evaluating the duration of a musical piece. Participants estimated duration better in the no-counting phase and with shorter pieces: the longer the piece, the greater the error (underestimation) in the evaluation. Music tempo had an influence, but in general it is not possible to argue that a faster tempo means a shorter estimate of the piece. Concerning the strategies used during the task, participants tried to count regularly, fitting their counting to the second; an influence was found when the musical pieces had a tempo close to the actual rate of seconds.

References
Caldwell, C. & Hibbert, S. A. (2002). The influence of music tempo and musical preference on restaurant patrons’ behavior. Psychology and Marketing, 19(11), 895-917.

North, A., Hargreaves, D. & Heath, S. (1998). Musical tempo and time perception in a gymnasium. Psychology of Music, 26, 78-88.

Oakes, S. (2003). Musical tempo and waiting perceptions. Psychology & Marketing, 20, 685-705.

Key words: Time perception, Music, Counting

[email protected]


63.50 Music Performance Anxiety among professional musicians and music students: A self-report study

Biasutti Michele, Urli Gilda

University of Venice, Italy, and Department of Education, University of Padua, Italy

Background
Music Performance Anxiety (MPA) is a complex phenomenon, manifested by affective, cognitive, behavioural, and physiological components, and it has been considered from different perspectives. Craske & Craig (1984) described MPA as a consequence of physiological arousal, according to an “inverted-U” model of the relation between tension and performance. Hardy & Parfitt (1991) conceptualized a ruinous drop in the quality of performance beyond a certain level of arousal (the catastrophe model), rather than a progressive decline. According to Steptoe (2001), the interplay between cognitive processes and physiological activation is central to MPA. Several aspects have been considered as correlates of MPA: Wilson & Roland (2002) recognized the personality of the musicians, their mastery of the task, and the context. Steptoe & Fidler (1987) investigated behavioural and cognitive coping strategies used by musicians to control MPA.

Aims
The aim of the research was to explore the following aspects of MPA: (1) the role of the audience and the context in music performance; (2) the extent and diffusion of MPA; (3) positive and negative effects of anxiety on technique and expression; (4) thoughts and feelings linked to anxiety before and during performance; (5) beliefs about causes and correlates of MPA; (6) coping strategies; (7) the role of self-talk and mental imagery in controlling MPA.

Method
Twelve subjects (5 with permanent orchestra positions, 5 freelance, and 2 students) answered a semi-structured interview.

Results
All subjects had experienced anxiety at some point in their professional lives; the frequency and intensity of the symptoms vary widely, decreasing with age and experience. The musicians underlined the complexity of the phenomenon, concerning its causes, correlated factors, and the role of the context in facilitating or limiting MPA symptoms. The effects on music performance are mainly detrimental, but there can be a positive effect on expression. The subjects reported a wide range of informal coping strategies, positive self-statements, and imagery used to control MPA before and during performance.

Conclusions
The results showed the effects of anxiety on different aspects of performance, as well as the cognitive and behavioural coping strategies used. The catastrophe model of Hardy & Parfitt (1991) and the importance of the interplay of cognitive and physiological elements (Steptoe, 2001) were confirmed.

References
Craske, M. G. & Craig, K. D. (1984). Musical performance anxiety: The three-system model and self-efficacy theory. Behaviour Research and Therapy, 22, 267-280.

Hardy, L. & Parfitt, G. (1991). A catastrophe model of anxiety and performance. British Journal of Psychology, 82(2), 163-178.

Steptoe, A. (2001). Negative emotions in music making: The problem of performance anxiety. In Juslin, P. N. & Sloboda, J. A. (Eds.), Music and Emotion. New York: Oxford University Press.


Steptoe, A. & Fidler, H. (1987). Stage fright in orchestral musicians: A study of cognitive and behavioural strategies in performance anxiety. British Journal of Psychology, 78, 241-249.

Wilson, G. D. & Roland, D. (2002). Performance anxiety. In Parncutt, R. & McPherson, G. E. (Eds.), The Science and Psychology of Music Performance (pp. 47-72). New York: Oxford University Press.

Key words: Music performance anxiety, Coping strategies

[email protected]

63.51 Study of timbre as music

Yuki Mito, Hiroshi Kawakami, Ko Matsumoto

College of Art, Nihon University, Japan

Much research on the impression of timbre has been done, and significant results have so far been obtained for single sounds. In actual music, however, a single sound is not used in isolation. The purpose of this research is to consider the impression made by timbre when it is performed as music, and to examine how differences in the impression of timbre in music influence us physiologically.

We created synthetic sounds with different types of spectral structure and applied them to an actual musical piece, Lieutenant Kijé by S. Prokofiev. This piece was selected because it contains passages of activity, stillness, and intermediate mood, allowing us to consider the relation between the impression of the timbre and that of the music.

The Semantic Differential method was used for psychological evaluation, and electroencephalography (EEG) for physiological evaluation. Thirty healthy subjects listened to three versions of the music, each with different sounds applied, and EEG was recorded at 12 sites based on the 10/20 system. After listening to the music, all subjects answered a questionnaire. The psychological evaluations were analyzed by MDS and the EEG data by ANOVA.

As a result, there were significant differences in EEG among the several sounds applied to the musical piece. Consequently, even when a timbre is presented as music, its impression seems to be the same as that measured for a single tone; moreover, the musical impression depended greatly on the timbre, and even delicate differences in timbre affected the EEG. In timbre research it is therefore necessary to measure impressions in actual music. We consider these results important for addressing the problem of timbre in music.

Key words: Timbre, Semantic differential methods, EEG

[email protected]

63.52 “Perfect tempo” and the interpretation of metronome numbers

Dirk Moelants


IPEM-Department of Musicology, Ghent University, Belgium

Background
In music practice, the use of metronome numbers is widespread. But are musicians able to interpret these numbers correctly? Is there a sense of “perfect tempo” analogous to perfect pitch?

Aims
To find out to what extent musicians can produce the right tempo from metronome numbers, and whether there are differences between production and perception, between mechanical and musical tasks, and between different tempo ranges.

Method
40 advanced musicians participated in four experiments. In the first, they had to tap at 20 tempi given by their metronome indications. In the second, they labeled 20 metronome ticks with the corresponding tempo indications. In the third, they indicated the tempo of 30 fragments from unknown musical pieces. In the last experiment, they were asked to play short musical phrases at 36 different tempi.

Results
Differences between subjects are very large, with mean deviations varying between 7.4% and 44.3% over the four experiments. The production experiments are clearly more difficult than the perception tasks. There is a tendency to perform too slow and to judge too fast, especially for subjects who do not score well overall. Surprisingly, the tapping task (experiment 1) turns out to be the most difficult of the four. The global profile of the deviations in this experiment also differs from the others: whereas in experiments 2-4 the slower tempi are performed or labeled relatively faster and the faster tempi slower, the responses in experiment 1 follow a U-shape, with the fastest tempi being exaggerated. In general, 120 bpm seems the easiest tempo to produce and recognize. Influences of the musicians’ educational level and background were also found.
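The mean deviation figures quoted above suggest a simple percentage measure; one plausible formulation is sketched below. The formula is an assumption for illustration, not taken from the paper.

```python
# Mean absolute percentage deviation of produced from target tempi,
# one plausible reading of the "mean deviation" measure (assumed).

def mean_deviation_percent(produced, target):
    """Average of |produced - target| / target across trials, in percent."""
    pairs = list(zip(produced, target))
    return 100 * sum(abs(p - t) / t for p, t in pairs) / len(pairs)

# e.g. a subject slightly slow at 60 bpm and slightly fast at 120 bpm
print(round(mean_deviation_percent([56, 126], [60, 120]), 1))  # 5.8
```

Normalising each error by its target tempo makes deviations at slow and fast metronome settings comparable before averaging.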

Conclusions
Musicians are to some extent able to produce and label tempi based on metronome numbers. Although some subjects clearly score better than others, the deviations seem too large to speak of a sense of perfect tempo. Metronome numbers as such are not directly associated with exact tempi, as is the case for the note name-pitch relation in absolute pitch possessors. Rather, the tempi are reconstructed from, for example, counting seconds, recalling the tempo of well-known musical pieces, or certain body movements.

Key words: Tempo, Metronome numbers, Perception and performance

[email protected]

63.53 Short- and long-term musical training influences pitch processing in music and language: Event-related brain potential studies of children

Sylvain Moreno1, Carlos Marques2, Andreia Santos1, Sao Luis Castro2, Mireille Besson1

1Institut de Neurosciences Cognitives de la Méditerranée (INCM), Université de Provence, CNRS, France


2Universidade do Porto, Portugal

Recently, two experiments (Schön et al., 2004; Magne et al., in press) aimed at better understanding the specificity of the perceptive and cognitive computations required to perceive and understand language. These experiments were designed to directly compare the prosodic level of processing in language with the melodic level of processing in music. Specifically, the first aim was to determine whether variations of fundamental frequency (F0) in language are processed similarly to variations of pitch in music. The second aim was to explore the influence of musical training on the detection of F0 and pitch variations (in musician and non-musician adults and children). Results for both adults and children showed that F0 manipulations in both music and language elicited similar variations in brain electrical potentials, with overall shorter onset latencies for musicians than for non-musicians. Moreover, musicians detected weak F0 manipulations better than non-musicians.

Based upon these results, we conducted two series of experiments aimed at specifying the influence of short (8 weeks) and long (8 months) musical training on F0 processing in language. The children (8 years old) were all non-musicians and participated in three experimental phases. First, they were asked to detect F0 manipulations in language. Then, the children were divided into two groups: one followed musical training while the other followed drawing training (control group). Finally, the children were tested again, using the same protocol as in the first phase, to determine whether 8 weeks of musical training (short term) or, in the second experiment, 8 months of musical training (long term) improved their musical ear so that they detected F0 and pitch variations better than the children in the control group. ERPs were recorded together with behavioural measures (percent correct).

Results for the short-term musical training showed that strong variations of fundamental frequency (F0) are associated with a decrease in the amplitude of the P300 component. This result is interpreted as reflecting the specialisation of the neural network involved in the detection of F0 variations. The second experiment is still in progress; results will be presented and discussed at the conference in order to compare short- and long-term musical training.

Key words: Plasticity, Learning, Language

[email protected]

63.54 Different effects of auditory stimuli on human autonomic cardiovascular rhythms

Martin Morgenstern, Wolfgang Auhagen

Musicology Department, Martin Luther University Halle-Saale, Germany

Background
Medieval physicians proposed a connection between the distinct rhythm of the heart and human states of health, thus suggesting the application of rhythmic stimuli to cure diseases. Since then, there have been various attempts to alter heart rate by means of auditory stimuli for similar purposes. The interactions of periodic exogenous pulses and endogenous biological rhythms (the heart rhythm being the most evident) have been studied extensively: in the sedative effects of music, the "inner tempo" that may underlie people's tendencies to tap rhythmically, or in the undertaking of rhythmically coordinated tasks. However, little is known about regulating mechanisms in


466 Poster session II

cardio-respiratory synchronisation, and even rhythmogenesis itself. Applied music therapy could benefit greatly from a reliable model of how rhythmic stimuli influence the cardio-respiratory system, as well as standard medical interpretations of heart rate variability during phases of physical activity such as the performance of music.

Aims
The author will review current literature about the influence of rhythmic auditory stimuli on the cardiac activity of human subjects. On the basis of this review, as well as psychophysiological data previously collected by the author, a computational model of musical rhythm and human cardiovascular and respiratory activity will be developed.

Main Contribution
Based on findings of listening/performance experiments on the influence of rhythmic auditory stimuli on cardio-respiratory regulation in human listeners, psycho-physiological aspects of biological rhythm generation and synchronisation will be examined. The focus of this study on cardiac arrhythmias and their interpretation in the light of physiological adaptation processes will complement and extend the current literature about music-induced "chills" and other stimulus-induced psychophysiological reactions.

Implications
The mechanisms by which auditory stimuli can influence biological rhythms are complex and not yet fully understood. To reliably model the influence of auditory stimuli on cardiovascular and respiratory cycles, studies are needed to investigate biophysical and other physiological aspects (such as phase transitions, possible synchronisation effects, arrhythmias and their causes) as well as psycho-acoustic impact (such as group interaction, musical preferences, etc.). This research might improve the effects and benefits of applied music therapy, and aid the development of special relaxation techniques for musicians.
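One synchronisation effect such a model would have to capture can be illustrated with a single driven phase oscillator: a cardiac-like rhythm either locks to a periodic auditory pulse or keeps drifting near its own rate, depending on the coupling strength. This is only a toy sketch under simplifying assumptions (one oscillator, sinusoidal coupling); the function and parameter names are illustrative and not taken from the author's model.

```python
import math

def average_rate_hz(natural_hz, stimulus_hz, coupling, seconds=20.0, dt=0.001):
    """Integrate dphi/dt = 2*pi*f0 + K*sin(2*pi*fs*t - phi) with the
    Euler method and return the oscillator's average rate in Hz.

    natural_hz  : the oscillator's own frequency f0 (e.g. a heart rate)
    stimulus_hz : the auditory pulse frequency fs
    coupling    : K; locking requires roughly K > 2*pi*|fs - f0|
    """
    steps = int(seconds / dt)
    phi = 0.0
    for n in range(steps):
        t = n * dt
        phi += (2 * math.pi * natural_hz
                + coupling * math.sin(2 * math.pi * stimulus_hz * t - phi)) * dt
    return phi / (2 * math.pi * seconds)

# Strong coupling entrains a 1.1 Hz oscillator to a 1.2 Hz pulse;
# weak coupling leaves it near its natural rate.
locked = average_rate_hz(1.1, 1.2, coupling=2.0)
drifting = average_rate_hz(1.1, 1.2, coupling=0.1)
```

With coupling 2.0 the average rate comes out close to 1.2 Hz (entrained); with coupling 0.1 it stays close to the natural 1.1 Hz, which is the qualitative distinction the phase-transition and synchronisation studies called for above would need to resolve.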

Key words: Heart rate, Auditory stimulus, Synchronisation

[email protected]

63.55 A proposal on a learning system to realize maestro's favorite SPL balance in a chorus

Kumiko Morimura1, Makoto Iida2, Takeshi Naemura1, Hiroshi Harashima1

1Interfaculty Initiative in Information Studies, Graduate School of the University of Tokyo, Japan
2Mechanical Engineering, Graduate School of the University of Tokyo, Japan

Background
Choruses have been led by maestros of prominent musical talent and skill. Their leading methods are clear, and the whole picture of the music is always in the hands of the conductor.

Aims
We use information technology to build a scientific learning system for chorus singing. Here we focus on SPL (sound pressure level) balance among the parts of a chorus, and try to construct a learning system to realize a maestro's favorite SPL balance.


Method
In concrete terms, each singer of a chorus put on a headset microphone, and 6 channels of voices (2 sopranos, 2 mezzo-sopranos and 2 altos) were recorded concurrently by a multi-channel recording system while the singers sang a 3-note chord. In this way, we were able to analyze each singer's voice separately and see quantitatively how singers coordinate in SPL to make a good harmony. In addition, a famous Japanese conductor was asked to set the SPL balance of the 6 channels of voices to his taste by adjusting the levers of a digital mixer; his favorite SPL balance was obtained after a number of trials. We then presented the conductor's favorite SPL balance to the singers, who did not know his favorite balance, by projecting onto the wall a figure with an SPL guideline for each singer. The actual SPL of each singer was also presented over it in the same figure. The singers tried to sing following the guideline, watching both the guideline and their actual SPL at the same time. We measured SPL before and after the practice with this system to compare.
Results
Singers generally lowered their SPL when they tried to make a good harmony. The maestro's favorite SPL balance was to set the mezzo-sopranos lower in SPL than the other parts. The altos were the foundation on which he set his favorite SPL balance. Singers could reach the maestro's favorite SPL balance after practicing several times with this system, even when the maestro was not around.
Conclusions
An evaluation test showed this system is effective for building up the maestro's favorite SPL balance in the chorus singers while he is not there. The system could be applied to other parameters and has wide application.
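The per-channel comparison underlying such a system reduces to an RMS level in decibels. A minimal sketch, assuming an arbitrary (uncalibrated) reference amplitude, so only differences between channels are meaningful; the function name and the signals are illustrative, not taken from the study:

```python
import math

def level_db(samples, ref=1.0):
    """RMS level of a signal in dB relative to `ref`. The reference
    amplitude is arbitrary here (the study's calibration is not given),
    so only *differences* between channels are meaningful."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms / ref)

# Hypothetical channels: halving a singer's amplitude lowers her
# level by about 6 dB, the kind of per-part difference the projected
# guideline figure would display.
mezzo = [0.25, -0.25] * 100   # quieter part
alto = [0.5, -0.5] * 100      # foundation part
balance_gap = level_db(alto) - level_db(mezzo)  # about 6.02 dB
```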

Key words: Chorus, Sound pressure level, Learning system

[email protected]

63.56 How does a child acquire the expression of Auftakt?

Yasuko Murakami

Tokyo National University of Fine Arts and Music, Japan

Background
Musical expression has been studied by analyzing variations of timing and dynamics in experts' performances. Some studies have also analyzed learners' performances using such scientific methods. These studies demonstrate that in order for players to express their own interpretation of a piece of music, they must know the musical structures as well as ways of representing them. However, these studies analyze only the sounds performed by the player and do not shed light on how learners feel the sounds and acquire the expression.
Aims
The aim of this study is to examine how a child acquires expression. Here I focus on the expression of Auftakt. I will identify what a child is aware of by analyzing the sounds he makes on the spot, his body movement, and his facial/linguistic expressions. Furthermore, the study will analyze what the child has acquired by observing the process in which his expression changes.
Method
The episodes that will be dealt with here are from the piano lessons that I have been giving to


a child once a week, so my position is that of both teacher and observer. The child was 7 years old at the time and had been taking lessons from me for 4 years. I recorded the 45-minute lessons held at the child's home. I will make the changes in his performance clear using computer software. Finally, I will abstract the factors which caused the change in his expression using the videotapes and field notes.
Conclusions
Here, I have picked up several episodes of a specific child; nevertheless, regarding how a child acquires the expression of Auftakt, I have obtained very important points. One is that the learner may listen to the sounds' inter-onset intervals (IOIs) while holding a sound imagery that differs from the real sounds. Furthermore, I can indicate a strong relationship between musical expression and images of physical movements. These musical imageries work not only in acquiring the expression of Auftakt, but also in acquiring various other expressions.

Key words: Qualitative research, Piano performance

[email protected]

63.57 The structure of two fluctuating tonics in Western-Japanese mixed music: Why has some mixed music been accepted, whereas other music was rejected?

Tadahiro Murao

Aichi University of Education, Japan

Background
What happened to children's traditional folk songs when Western tonal music was introduced into non-Western countries? In most non-Western countries, the tonics (central tones, ending notes) of children's traditional folk songs are different from those of Western tonal music, which are mostly do or la on the major or minor scale. How, then, have Japanese music educators and children adjusted children's folk music in Japanese modes to Western music?
Aims
The dominant music theory of Japanese traditional music is the "tetrachord theory" developed by the distinguished ethnomusicologist F. Koizumi. According to his theory, which is a very bottom-up system, there exists no mode system, but only combinations of various tetrachord units in Japanese traditional music. This theory is, however, not able to explain why some Western-Japanese combined or mixed music has been accepted whereas other music was rejected by Japanese children. In this paper, I apply "the mode theory of different pentatonic scales" developed by S. Tokawa to Western-Japanese mixed music.
Main contributions
Because there are several possibilities of naming a pentatonic scale with the sol-fa system, S. Izawa, a pioneer of Japanese music education, and his followers mistakenly regarded the central tone as mi instead of re and fused traditional folk melodies with Western major-scale melodies by moving the tonic mi to do, which ended in failure to create new Western-Eastern fused music. According to Tokawa's theory, the central tone of Japanese children's folksongs is named re and the scale is the hypo re-mode on the major pentatonic line (la-do-Re-mi-so-la). It was Japanese children


themselves who found the most natural way of combining Western tonal melodies and Japanese children's folk melodies. This is a system in which the two tonics (re and do) fluctuate on the line of the hypo-mode scale. The Japanese composer N. Nakata followed this rule and refined it in his works.

We now realize that the national anthem "Kimigayo" follows the same rule of mixture as the children's combined music, because it is composed in the re mode with Western tonal harmony on the do mode.
Implications
A fluctuation of two tonics (re and do) also occurred in Israeli children's songs, where it evokes an ambiguity of melodic expectancy (Cohen, 2002). However, Nakata's sophisticated works indicate that children can enjoy this ambiguity of two tonics.

Key words: Two tonics, Mixed music, Hypo mode

[email protected]

63.58 Subjective evaluation of common singing skills using the rank ordering method

Tomoyasu Nakano1, Masataka Goto2, Yuzuru Hiraga1

1Graduate School of Library, Information and Media Studies, University of Tsukuba, Japan
2National Institute of Advanced Industrial Science and Technology (AIST), Japan

Background
Automatic evaluation of singing skills is becoming a vital research topic, with various applications in scope. Previous research on singing evaluation has focused on trained, professional singers (mostly in classical music), using various approaches from physiology, anatomy, acoustics, and psychology, with the aim of presenting objective, quantitative measures of singing quality.

Our interest is directed more towards ordinary, common people's singing: understanding how they mutually evaluate its quality, and incorporating such findings into an automatic evaluation scheme.
Aims
In order to achieve this goal, a preliminary experimental study was conducted. The aim of this study is to explore the criteria that human subjects use in judging singing quality, and whether their judgments are stable within and among individuals.
Method
The standard method of subjective evaluation, giving grade scores to each tested stimulus, is inappropriate for singing evaluation, where the subtleties of subjects' judgments may be obscured by differences in musical experience. Instead, we used a rank ordering method, in which the subjects were asked to order a group of stimuli according to their preferred rankings. The results were analyzed using Spearman's rank correlation coefficients.

The experimental stimuli consist of four groups (A-D), each with the same tune and lyrics, sung by 10 individuals of the same gender: A: Japanese lyrics, 10 male singers; B: Japanese lyrics, 10 female singers; C: English lyrics, 10 male singers; D: English lyrics, 10 female singers. The average length of the 40 samples is about 13 secs. The samples were taken from the AIST Humming Database, where the singers recited a first-heard tune from memory.


The subjects were also asked to give an introspective description of their judgments.
Results
11 subjects (university students, ages 22 to 25) participated in the experiment. Among the total of 220 pairs (= 11*10/2 pairs * 4 groups) of given rankings, 199 pairs (90.5%) were significant at the 5% level, and 157 pairs (71.4%) were significant at the 1% level. This suggests that the subjects' rankings are generally stable and in mutual agreement.
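Each of those pairwise comparisons is a Spearman rank correlation between two subjects' rankings of the same 10 singers (11 subjects give 11*10/2 = 55 pairs per group, times 4 groups = 220). A minimal sketch of the coefficient for tie-free rankings; the example rankings are hypothetical, not the study's data:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation for two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
    difference between the two ranks given to item i."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d_squared / (n * (n ** 2 - 1))

# Two subjects ranking the same 10 singers (1 = best); they disagree
# only on two neighbouring swaps, so agreement is near-perfect.
subject_1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
subject_2 = [2, 1, 3, 4, 5, 6, 7, 8, 10, 9]
rho = spearman_rho(subject_1, subject_2)  # about 0.976
```

With ties present, or to obtain the significance levels reported above, a library routine such as `scipy.stats.spearmanr` would be the usual choice.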

Introspective comments indicate that the criteria of judgment include:
- tonal stability
- rhythmical stability
- vocal expression and quality
- personal preference
Conclusions
The results show that, with the lyrics' language, the singers' gender, and the melody type controlled, the rankings given by the subjects are reliable enough to be used as a reference for evaluating automatic singing evaluation schemes.

Further experiments will be conducted in various other settings to explore singing skills in more detail. Work on identifying the key acoustic properties that underlie human judgments is also in progress.

Key words: Singing skills, Rank ordering

[email protected]

63.59 Assessing the role of melodic and rhythmic factors in structuring musical salience

Davide Nardo1, Enrico Capellini1, Riccardo Brunetti2, Marta Olivetti Belardinelli1

1Department of Psychology, University of Rome, Italy
2Department of Mathematics, University of Turin, Italy

In a series of previous studies (Olivetti Belardinelli, 2005; Olivetti Belardinelli et al., 1999, 2000) we demonstrated a pre-eminence of salience over tonality as an anchor point for musical memory, as well as a strong link between salience and episodic memory, and between tonality and semantic memory. According to the operational definition we provided, salience was characterized by the redundancy of melodic and/or rhythmic parameters. Nevertheless, it is not clear what the contribution of each single factor (and of their interaction) to mnestic performance is, and which one is the most effective anchor point for recognition memory. Therefore the aim of this study was to assess the influence of melodic, rhythmic and melodic-rhythmic redundancies on musical memory. 60 healthy (30 male) right-handed non-musicians participated in this study. Stimuli were 48 short musical themes, modified versions of those originally employed in our previous studies. Half of the new stimuli were tonal and half non-tonal; 16 were characterized by melodic salience only, 16 by rhythmic salience only, and 16 by the presence of both. Subjects were asked to listen twice to a "study list" containing 24 musical excerpts. After 15-20 min, they were administered a "test list" containing 48 stimuli (the 24 previously heard plus 24 new ones) and, following Tulving's model (1972), pressed a different button according to whether: 1) they recognized the melody as previously heard (Remember response); 2) the melody evoked in them a sense of familiarity (Know response); 3) they could not recognize the melody at all (X response). Results showed a significant influence of all kinds of salience on episodic memory. The best memory performance occurred when both melodic and rhythmic salience were present


and tonal information was absent. The worst performance was obtained when melodic salience was present and tonal information was absent, while rhythmic salience in association with non-tonality yielded significantly higher performance. In conclusion, the association of both kinds of salience seems to improve recognition memory, while the influence of each of them on memory performance seems to differ according to the concurrent availability of tonal information.

Key words: Recognition memory, Musical salience, Rhythmic/melodic factors

[email protected]

63.60 Muscular tensions in musical performance: A set of measurements

Gianni Nuti

University of the Valle d’Aosta, Italy

Background
The gap between thought and precise action constitutes a challenge for the musician's approach to the instrument, from the first encounter through to the end of the concert career. The emergence of non-functional muscular tensions (TMNF), loaded onto postural muscles or as tensive excesses in musculo-skeletal areas already activated during an instrumental performance, can constitute one such obstacle. It can mirror a generic inadequacy to the psychomotor task, a deficient relationship with one's own body, or psychological and psycho-social malaise, besides being a consequence of unresolved musical or technical-instrumental problems.

Aims
The study sets out to develop instruments for measuring non-functional muscular tensions in instrumental learning, discriminating them from expressive movements, in order to foster the development of strategies to attenuate them and approaches to instrumental practice that prevent them.

Method
Three volunteer guitarists and three violinists with declared TMNF or hypertensions of severe, moderate or minimal degree underwent three sessions of eleven short performances each, recorded on digital audio-video, during which a series of measuring instruments was developed and administered: protocols, questionnaires, research interviews, and electromyographic measurements.

Results
TMNF is frequently visible to the naked eye, not only to expert observers but also to non-musician analysts, as distorted, non-expressive physical states, in many cases connected to interferences with, or broader impairment of, the whole executive and performing map. Some tensions do not influence the interpretative result in the short term, but, accumulating over time, they can wear away the general quality of the performance. The self-analyses of the performers and those of the observers do not always coincide. The recurrence of the same tensive states across the three sessions reveals their presence, even in the phase of pre-motor activation, in the mental mapping of movements already anchored in long-term memory.


Conclusions
TMNF may be determined by models of musical writing, erroneous technical-interpretative training, the rejection of particular parameters (intensity, agogics, beat, durations), and a psycho-physiological character and predisposition more or less favourable to instrumental ability. However, the general state of one's being, in its relational life with the outside world, also bears on musical and artistic expression through its corporeal manifestations. The teacher must face the problem in an analytical and holistic way at the same time.

Key words: Psychophysiology, Teaching musical instruments, Muscular tension arousal

[email protected]

63.61 Digital pulse forming in wind instrument synthesis

Michael Oehler, Christoph Reuter

Institute of Applied Musicology and Psychology (IAMP), Cologne, Germany

Background
The Variophon is a wind synthesizer that was developed at the Musicological Institute of the University of Cologne in the 1970s/80s and was, at that time, based on a completely new synthesis principle: the pulse forming process. The central idea of that principle was that every wind instrument sound can basically be traced back to its excitation impulses, which, independently of the fundamental, always behave according to the same principles. In a recent project, supported by the Deutsche Forschungsgemeinschaft (DFG), it is planned to rebuild the Variophon digitally in an improved version.
Aims

The aim of the software-based modelling of that synthesis principle is twofold: creating an experimental system for analyzing and synthesizing (wind) instrument sounds, and building a synthesizer that would be an alternative to comparable physical modelling applications, because on the one hand this sound synthesis technique accounts for the place where the sound is generated, and on the other hand just a single breath controller is required to produce all the sound nuances that are possible on a real instrument.
Method
First of all, the analogue circuits of the different instrument modules of the Variophon will be mapped onto a digital representation by means of the analogue circuit simulation software LTSpice. In a second step, the Digital Variophon will be rebuilt in the modular environment Reaktor by Native Instruments, and finally the experimental system will be programmed in C++ by means of the VST development library (Virtual Studio Technology by Steinberg).
Results
The analogue circuit boards of the trumpet and bassoon are already digitally implemented in LTSpice and NI Reaktor. Time and frequency domain analyses of some instrument-specific static tones generated in pp, mf and ff, as well as different intensity sweeps, show an extensive concordance of the Variophon and the Digital Variophon. As expected in the current phase of the project, in comparison to an original trumpet or bassoon sound, spectral differences can be observed, resulting from the limited technical feasibility at that time.


Conclusions
A software-based Variophon makes it possible to bypass these restrictions, for example by synthesizing the excitation impulses of original instruments by means of cosinusoidal or polygonal impulses, where the rising and falling edges of the impulses can be adjusted freely. Furthermore, some important features of the sound production process, such as the multiplicative interconnection between pulse forming and breath noise, can now be considered.
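The freely adjustable cosinusoidal impulse described here can be sketched as a raised-cosine pulse whose rising and falling edges have independent lengths, repeated at the fundamental period so that the impulse shape stays the same whatever the pitch. This is an illustration of the general idea only; the function names and parameters are mine, not the project's code:

```python
import math

def cosine_impulse(n_rise, n_fall, amplitude=1.0):
    """One cosinusoidal excitation impulse: a raised-cosine rise over
    n_rise samples up to `amplitude`, then a raised-cosine fall over
    n_fall samples back to zero. Edge lengths are independent."""
    rise = [amplitude * 0.5 * (1.0 - math.cos(math.pi * i / n_rise))
            for i in range(n_rise)]
    fall = [amplitude * 0.5 * (1.0 + math.cos(math.pi * i / n_fall))
            for i in range(n_fall)]
    return rise + fall

def impulse_train(period, n_periods, n_rise, n_fall):
    """Repeat the impulse once per `period` samples: the excitation
    shape is independent of the fundamental, as the pulse forming
    principle requires."""
    pulse = cosine_impulse(n_rise, n_fall)
    frame = pulse + [0.0] * (period - len(pulse))
    return frame * n_periods

# A slow-rise, faster-fall impulse repeated over three periods.
signal = impulse_train(period=100, n_periods=3, n_rise=10, n_fall=20)
```

Changing `period` alone changes the fundamental while leaving the impulse untouched, which is the property that distinguishes pulse forming from spectrum-based synthesis.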

Key words: Timbre, Sound synthesis, Wind instruments

[email protected]

63.62 Collateral and negative effects of sounds and music perception in normal subjects and in psychiatric patients

Alessandra Padula

University of L’Aquila, Italy

Music education and music therapy are based on the benefits that can be obtained with musical projects in the fields of education and/or rehabilitation. Nevertheless, it is necessary to study the potentially negative effects which can arise from using sounds, rhythms, melodies and so on.

This study investigates links between psychological and sociological aspects on the one hand and listening to and performing music on the other. It includes many examples, dating back to the third century B.C., which show how different communities have settled the question of the positive and negative effects of music.

The study examines many collateral and negative effects of sound and music perception in normal subjects and in psychiatric patients:

- diseases derived from hearing sounds and music: excessive increases of neuronal activity, traumas of the auditory apparatus, injuries to the sense of balance, psychological and emotional indispositions, bodily discomforts, inhibition of consciousness, etc.

- diseases derived from performing music: professional pathologies in singers, pianists, guitarists, flutists, violinists, cellists, etc.

- undesired effects on different aspects of the socio-political order: human values, ways of life, forms of government, etc.

The study therefore calls for responsible choices as regards the kinds of music to listen to and the distinctive features of listening to and performing music.

For that reason, the results of this study may be useful to experts in music perception and cognition, music teachers, music therapists, psychologists, and sociologists.

Key words: Diseases

[email protected]

63.63 Music as temporal art: Links between music and time

Alessandra Padula


If we compare works belonging to different areas such as music, poetry, dance, painting, sculpture and architecture, we can see that, though all of them can be perceived in a certain time-frame, some of them can be called products of temporal arts and the others can be called products of spatial arts.

Thus, although paintings, sculptures and architectural works can be “read” from left to right and from top to bottom, time is not a fundamental characteristic of these works.

Starting from the etymon of the word “rhythm”, the study reviews the links between music and time in the course of many centuries, through the opinions of philosophers, writers, musicologists and physicists. It also highlights which key concepts can be considered constant and which ones evolved or have caused most changes.

Timing parameters in a musical composition (rhythm, meter and speed) are analysed; changes of these parameters can be grouped into changes concerning:
- linear measures
- square measures
- volume measures
- ordering

The study shows the differences among objective, subjective and evoked time, and what kind of consequences these differences have on the composition, interpretation and fruition of music.

Likewise, some anthropologists say that during the fruition of an artistic product, time is cancelled or suspended.

For all these reasons, the results of this study may be useful to experts in music perception and cognition, composers, interpreters, and music teachers.

Key words: Music and time, Measures

[email protected]

63.64 Changing the plot: Music can establish the narrative in film

Bruce Pennycook1, Eugenia Costa-Giomi2

1School of Music, University of Texas at Austin, USA
2Center for Music Learning, University of Texas at Austin, USA

The purposes of the study were: (1) to analyze the musical language used by five composers to develop an invented plot for a minute of film, and (2) to determine the effectiveness of their soundtracks in conveying the story line to an audience. The composers were given a one-minute clip without sound from the film The Lost Weekend (1945). The clip showed the skyline of New York City and a close-up of an apartment window through which a man packing a suitcase could be seen. The composers, who were unfamiliar with the film, were asked to compose the soundtrack and explain their invented narrative for the clip. The five narratives and five clips of the original movie with the new soundtracks were presented to 15 undergraduate and 10 graduate students at a large university. The students were asked to match the narratives to the corresponding clips and to explain their choices.

The music created by the composers was very diverse. The differences in instrumentation, texture, use of melodic motives, rhythm, and structure were striking, making the five soundtracks distinct and individual. Although the audience perceived the differences among the soundtracks and was quite accurate in identifying the narrative from the music, listeners were not always successful in matching the soundtracks to the corresponding narrative. The analysis of the characteristics of


the music and the narratives of the most accurately and least accurately matched pairs allowed us to draw implications for the study of film music and the composition of sound for film.

Key words: Film music, Composition, Meaning and music

[email protected]

63.65 Adults hear the rhythm they feel through active and passive body movement

Jessica Phillips-Silver, Laurel Trainor

McMaster University, Canada

Phillips-Silver and Trainor (2005) demonstrated a cross-modal interaction between body movement and auditory encoding of musical rhythm in infancy. Here we show that the way adults move their bodies to music influences their auditory perception of the rhythm structure. While adults listened to an ambiguous rhythm with no accented beats, we trained them to bounce by bending their knees to interpret the rhythm either as a march (bouncing on every second beat) or as a waltz (bouncing on every third beat). At test, adults identified as similar an auditory version of the rhythm pattern with accented strong beats that matched their previous bouncing experience, in comparison with a version whose accents did not match.

In subsequent experiments we showed that the cross-modal effect does not depend on visual information, but that movement of the body is critical. The next question we asked was whether passive motion is sufficient for the cross-modal effect, or whether active movement on the part of the adult subject is necessary. During training, adults listened to the ambiguous rhythm while lying down on a seesaw-like bed rocked by the experimenter on either every second or every third beat. After this passive experience, adults chose at test the auditory version of the rhythm pattern with strong beats matching their bed-rocking experience. Thus, it appears that active movement is not required to influence adults’ auditory encoding of the beat that they felt. Parallel results from adults and infants suggest that the movement-sound interaction develops early and is fundamental to music processing throughout life.

Key words: Rhythm, Body movement, Cross-modal

[email protected]

63.66 Tonal pitch memory in infants

Judy Plantinga, Laurel J. Trainor

McMaster University, Canada

Although we know that 6-month-old infants can detect a pitch change to a single note of a brief tonal melody (Trehub, Bull, & Thorpe, 1984; Trehub, Thorpe, & Morrongiello, 1985) and can recognize familiar three-tone sequences in a statistical learning task (Saffran, 2003; Saffran


& Griepentrog, 2001), we do not know whether their memory for isolated tones resembles that of adults. Adults can remember the pitch of an isolated tone for up to 16 seconds when it is followed by silence (Ross, Olson, Marks, & Gore, 2004), but in most individuals, memory for the initial tone degrades with the interpolation of as few as six tones in a five-second interval between the initial tone and a comparison tone (Deutsch, 1970; Siegel, 1974). However, memory for pitch in individuals with absolute pitch is unaffected by the passage of time or the presence of interference tones.

We used a conditioned head-turn procedure in which 6-month-old infants were presented with a tone that repeated every 2.5 seconds. Occasionally the tone was shifted either up or down by a semitone. Infants were rewarded with animated toys for responding to the pitch change with a head turn. In the first experiment we established that infants can remember the pitch of a tone for at least 2.5 seconds. In the second experiment infants were again presented with the repeating tone every 2.5 seconds, but this time 3, 5, or 15 tones were interpolated between the repetitions. We found a significant negative correlation between the number of interference tones and infants’ detection of the change in the repeated tone, with chance performance for 15 tones. The results indicate that for 6-month-old infants, as for most adults, memory for the pitch of a single tone is degraded by the presence of interference tones, and suggest that in general infants are not absolute pitch processors.

Key words: Pitch memory, Infants

[email protected]

63.67 Playing with sounds: Intervention on bullying at school

Eugenio Prete, Angela Costabile, Anna Lisa Palermiti, Maria Giuseppina Bartolo

University of Calabria, Italy

Background
Bullying is a subcategory of aggressive behaviour, but a particularly vicious kind, since it is directed, often repeatedly, towards a particular victim who is unable to defend himself or herself effectively. This behaviour is particularly likely in groups from which the potential victim cannot readily escape. Many authors have demonstrated that bullies and victims present difficulties in establishing social relations and in regulating emotions (Camodeca & Goossens, 2002; Crick & Dodge, 1994). Different studies have shown that music can positively influence children's behaviour and can promote their social behaviour (Hilliard, 2001; Layman, Hussey, & Laing, 2002; Rickson & Watkins, 2003).

Aims
The aim of this study is to investigate whether an intervention based on music can: 1. increase empathy and cooperation in the classroom; 2. prevent and reduce bullying behaviour.

Method
The sample was drawn from 6 classrooms of a primary school in Cosenza, Calabria (southern Italy); it comprised 158 children (53 eight-year-olds; 51 nine-year-olds; 54 ten-year-olds). The methodology involved 3 phases:

1. Sutton and Smith's (1999) adapted version of Salmivalli's questionnaire was used to identify the roles of the participants in bullying; children can assume different roles: bully, victim, defender, indifferent. The questionnaire was administered at the beginning and at the end of the project to evaluate the intervention. 2. Interventions: each classroom was involved in different activities based on music for 6 months (from November to May). 3. Observations of a sub-sample of 60 children.

Results and Conclusions
On the basis of the behaviour observed, it is possible to note in both bullies and victims an increase in social behaviour and a decrease in aggressive behaviour. The data suggest that with activities based on music improvisation it is possible to improve positive relations, effective communication and cooperation.

Key words: Bullying

[email protected]

63.68 Automatic characterization and generation of expressive ornaments from bassoon audio recordings

Montserrat Puiggros1, Emilia Gómez2, Rafael Ramirez1, Xavier Serra1, Roberto Bresin3

1Music Technology Group, University of Pompeu Fabra, Spain
2Music Technology Group, University of Pompeu Fabra; Sonology Department, ESMUC, Spain
3Department of Speech, Music and Hearing, KTH, Sweden

Background
This work characterizes expressive bassoon ornaments by analyzing audio recordings. This characterization is later used to generate expressive ornaments in symbolic format.

Expressive performance characterization is traditionally based on the analysis of differences in performances, performers, playing styles and emotional intentions. Most research focuses on studying timing deviations (Dixon, 2005; Honing, 2002), dynamics and vibrato (Desain, 1999). These studies often analyze MIDI performances and sometimes recordings (Dixon, 2005; Gomez, 2003). The resulting characterization is used to generate expressive performances from a score (Sundberg, 2003).

Less research is devoted to ornamentation. Ornaments are indicated in the score without any explicit information about timing and dynamics. Some works have studied ornaments in piano performances (Moore, 1992; Palmer, 1996; Brown, 2003; Timmers, 2002).

Aims
The main goals of this work are, first, to study the behaviour of ornamentation by analyzing timing and dynamics in bassoon recordings, and then to use the acquired knowledge to generate expressive trills in symbolic notation.

Method
We divide our study into two main areas: 1. Characterization of a set of expressive recordings of a sonata by Michel Corrette played by a professional bassoon performer. Each movement is played in three different tempi, yielding 96 ornaments (trills and appoggiaturas). A melodic description is obtained for each ornament. We then perform statistical analysis to model their behaviour, followed by machine learning techniques. 2. Generation of expressive ornaments, testing different machine learning methods. Given a melody with the indicated ornaments, the system generates all the notes within the ornaments.


Results
The melody estimation is successfully adapted to the particular analysis of bassoon ornaments. The statistical analysis reveals behaviour similar to previous studies on piano. The speed of execution is around 8 notes per second for most of the trills. For slow tempi, the first and last notes are usually longer than the central ones; for fast tempi, trills are usually converted into appoggiaturas. We also identify regularities in the execution of central notes. Finally, we present examples of automatically generated ornaments, with promising results.

Conclusions
This study presents an approach to the automatic analysis and generation of expressive bassoon ornaments using automatic melodic description and machine learning techniques. Further work centres on enlarging the analyzed collection in order to obtain a robust model and on extending it to other musical instruments.

Key words: Music performance, Ornamentations, Audio recordings

[email protected]

63.69 The perception of local and global timing in simple melodies

Sandra Quinn, Roger Watt

Department of Psychology, University of Stirling, UK

Local relations refer to adjacent events (such as the time between successive notes in a melody); global relations refer to a continuous succession of local events (such as the rhythmic timing of the complete series of notes in a melody).

Tones in a short auditory sequence can have their perceived timing distorted by local pitch relations. The Tau and Kappa timing effects in visual motion stimuli have equivalent auditory pitch motion versions (Shigeno, 1993) in which the perceived delay from one tone to the next depends on local pitch separation. We report data which show local distortions in the perceived duration of a sequence of 3 tones where the first and last tones have one pitch and the middle tone another pitch. Perceived duration increases with the pitch interval between the middle tone and the others: the larger the pitch interval, the longer the perceived duration. We report a range of results which allow us to relate this finding to the relative frequency of the melodic intervals in "vernacular" western tonal music: melodic events that are uncommon (such as a pitch change of a major 7th) are perceived to last longer than identically timed common ones (such as a major 2nd).

This local effect suggests that there should be an equivalent global effect: large intervals should tend to make melodies sound slower. However, we also report data showing that melodies with frequent large intervals tend to have their perceptual characteristics (such as happiness/sadness) judged as if the melody were faster (not slower) than melodies without large intervals. This shows a discrepancy between local timing and global timing.

This set of findings is difficult to reconcile with any unitary additive model of time perception. We will describe an alternative account of time based on the nature of events. Uncommon events happen less frequently (by definition), and therefore the time between uncommon events will normally be longer than the time between common events. In this sense, uncommon events can be said to dilate the perception of time. When events happen more frequently than usual, a melody sounds rushed.

Shigeno, S. (1993). Perception & Psychophysics, 54, 682-692.


Friday, August 25th 2006

Key words: Time, Perception, Melody

[email protected]

63.70 Investigating performance practice by piano students in Bahia (Brazil)

Diana Santiago

Universidade Federal da Bahia (UFBA), Brazil

Background
Research on music performance in Brazil has increased greatly during the last twenty years. However, most of the studies focus on musical analysis rather than on important aspects of the musical experience itself, such as meaning, expression, mental representations and performance plans, or the psychophysiological basis of musical practice.

Aims
This study, started in 2003, aims to understand the cognitive processes of musical practice by piano students in Salvador, Bahia, Brazil.

Method
The study was designed in two phases. During the first phase, four non-major piano students were assigned the "Choro" from Guerra Peixe's 1st Suíte Infantil; during the second phase, four undergraduate piano majors were assigned Villa-Lobos's "Idílio na rede" from Suíte Floral. All were required to study the pieces on their own. Recording sessions occurred three times: after one week of practice; at the intermediate stage of practice; and when the pieces were considered ready for stage performance by the students. Each subject was interviewed at each session. During the second phase, furthermore, practice sessions of one of the subjects were videotaped weekly.

Results
Analysis of data from the first phase, which included verbal content analysis, has been concluded, and analysis of the recordings, videotapes and interviews of the second phase is almost finished. This paper thus presents a description subject to revision.

Conclusions
This research, the first of its kind in Brazil, is an attempt to understand the learning process of piano students preparing their performances, and to relate its characteristics to findings from the international research literature. It is expected that the results will support Brazilian piano teachers and researchers in conducting curricular changes to improve performance instruction, particularly in piano.

Key words: Performance practice, Performance plans, Learning processes in music

[email protected]


63.71 A system yielding the optimum chord-form sequence on the guitar

Koji Sawayama1, Norio Emura1, Masanobu Miura2, Masuzo Yanagida3

1Graduate School of Engineering, Doshisha University, Japan
2Faculty of Science and Technology, Ryukoku University, Japan
3Faculty of Engineering, Doshisha University, Japan

Background
Playing chord sequences is a basic style of guitar playing, yet many novice players give up before acquiring the skill. The main reason may be that the so-called "chord books" they usually refer to sometimes give chord-forms that are hard for them to play. Many chord-forms are listed for each chord in chord books, even though some are difficult for novice players: such chord-forms force novices to stretch their fingers to realize shapes that are almost impossible for them, so the chord-forms listed in chord books might not be appropriate for novice players. Moreover, when playing chord sequences according to chord-forms taken from chord books, novice players often have difficulty changing from one chord-form to another.

Aims
Proposed here is a system that gives the optimum chord-form sequence for a given chord sequence to individual players. The system gives each player a chord-form sequence of minimum playing load, calculated using individual load values obtained for each fingering item by least-squares estimation based on data about mistakes in preliminary playing. It is confirmed that, using the output of the proposed system, novice guitarists can play chord sequences with fewer mistakes.

Method
To obtain the individual load values, five subjects were asked to play 30 chord-form sequences, each consisting of four chord-forms. Weighting values for individual load factors were obtained for 29 fingering items by least-squares estimation based on the numbers of mistakes in preliminary playing. For evaluation of the system, subjects were then asked to play two sets of 12 chord-form sequences: one generated by the proposed system and the other given by chord books. Failure rates in playing the chord-form sequences were compared between the two sets.
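The least-squares step described here can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the encoding of chord-forms as binary fingering-item vectors, the toy data (3 items instead of the paper's 29), and the helper names are all assumptions.

```python
def lstsq(A, b):
    """Least-squares weights w minimizing ||A w - b||^2, via the
    normal equations (A^T A) w = A^T b and Gauss-Jordan elimination."""
    m, n = len(A), len(A[0])
    # Augmented normal-equation matrix [A^T A | A^T b]
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Each row marks which fingering items a played chord-form involves;
# b holds the mistake counts observed for that chord-form (toy data).
A = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [1, 1, 1]]
b = [3.0, 4.0, 2.0, 5.0]
w = lstsq(A, b)  # estimated load weight per fingering item

def playing_load(form, w):
    """Total load of a candidate chord-form = sum of its items' weights."""
    return sum(fi * wi for fi, wi in zip(form, w))
```

Given such weights, the system would pick, for each chord in the sequence, the candidate chord-form minimizing the summed load.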

Results
Significant differences between the two sets were found for three of the players at the 5% significance level.

Conclusions
The results show that chord-form sequences given by the proposed system are easier for novice players to play than those given by chord books. A novice player can thus be expected to play chord-form sequences with fewer mistakes using sequences generated by the system.

Key words: Guitar, Chord-form, Least-squares estimation

[email protected]


63.72 Visual gestures: Perceptual costs and benefits in the performance of live music

Michael Schutz, Michael Kubovy

University of Virginia, USA

Background
Previous research has demonstrated that percussionists use visual gestures to alter perceived note duration, shifting perception to align with the performer's intent rather than acoustic reality (Schutz and Lipscomb, ICMPC8). With respect to duration, while percussionists are unable to control the sound of the note, visual information allows them to control the way the note sounds. This demonstrates that vision plays an important role in musical performances. However, it remains unclear whether this gesture has any negative ramifications.

Aims
This study investigates the costs (decreased detection ability) vs. benefits (ability to control perceived note length) of percussionists' use of gestures in live musical performances.

Method
Video recordings of an internationally renowned solo marimbist were presented to subjects both with and without visual gesture information. Subjects were asked to rate auditory note duration independently of visual gesture.

Results
As previously reported, vision significantly influenced duration ratings despite instructions to subjects to ignore visual information. However, using d' as a gauge of sensitivity to acoustic differences in note length, we report a new finding: subjects were more accurate at detecting differences between the tones when these were presented without visual information. This contradicts the generally reported finding of increased sensitivity under audio-visual conditions due to the integration of sensory information.
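The d' sensitivity measure used here is standardly computed as the difference of z-transformed hit and false-alarm rates. A minimal sketch, with invented counts that are not the study's data:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 / +1) keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for an audio-only vs. an audio-visual condition
audio_only = d_prime(40, 10, 12, 38)
audio_visual = d_prime(35, 15, 18, 32)
```

A higher d' in the audio-only condition would correspond to the reported drop in sensitivity when gestures are visible.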

Conclusions
Visual gesture information was again shown to play a role in music performances; in effect it becomes an important dimension of musical communication. However, this added control comes at a steep price: while enhancing performer control over the perceptual experience, the gesture decreases listener sensitivity to variance in note length. Implications of these results will be discussed, as well as their impact on performance and listening practice.

Key words: Sensory integration, Duration judgments, Visual information in music performance

[email protected]

63.73 Hearing reductions: A perceptual examination of a Schenkerian analysis of the Aria from Brahms' Variations and Fugue on a Theme by Handel (Op. 24)

Nicholas Smith, Deborah Henry, Laurel Trainor


McMaster University, Canada

One of the key goals of reduction in musical analysis is to distill the fundamental musical structure from more surface-level ornamentation. Psychologically, this process is common across domains: we remember the gist of a message, if not the precise wording; a stylized Mona Lisa preserves the essential form of the original. We used the probe-tone method to test listeners' perceptions of pitch structure in "performances" of four levels of a Schenkerian reduction of the Aria from Brahms' Variations and Fugue on a Theme by Handel (Op. 24). Although listeners were musically trained, they were unfamiliar with the piece prior to testing. Beginning with the background reductional level, listeners rated the stability of probe tones at various stop points in the music. In subsequent test sessions, listeners heard middleground and foreground levels of the Aria with increasing amounts of surface detail, until finally being tested on the original piece. Regression analyses revealed that listeners' probe-tone ratings for the background level account for roughly 50% of the variance observed in subsequent ratings for the original excerpt. As more surface detail was added, correlations steadily increased. This suggests that although the pitch information contained in the background is sparse, it communicates the essential idea to a considerable degree. Nevertheless, the pitch structure of the excerpt is not fully conveyed by the background, but rather depends on contributions from the musical surface.
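A "variance explained" figure of this kind is the squared correlation between two sets of probe-tone ratings. A minimal sketch; the rating vectors are invented placeholders, not the study's data:

```python
def r_squared(x, y):
    """Proportion of variance in y explained by a linear fit on x
    (the squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical stability ratings (1-7) for the same probe tones heard
# against the background level and against the original excerpt
background = [6.2, 3.1, 5.8, 2.4, 4.9, 3.5]
original = [6.5, 3.8, 5.1, 2.9, 5.6, 2.8]
variance_explained = r_squared(background, original)
```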

Key words: Schenkerian analysis, Pitch structure, Tonality

[email protected]

63.74 Pitch and tempo precision in the reproduction of familiar songs

Frederik Styns, Dirk Moelants, Marc Leman

Department of Musicology (IPEM), Ghent University, Belgium

Background
Although it is often stated that only 1 in 10,000 people possesses absolute pitch (AP), several studies suggest that many more people possess some kind of AP. Less attention has been given to an absolute sense of tempo; yet analyses of imitation tasks have led some authors to conclude that there also exists a widespread stable and/or absolute memory for tempo.

Aims
In this study we investigate pitch and tempo precision in the reproduction of familiar songs. We report evidence that memory for musical pitch and tempo might not be as absolute as stated in previous studies, and we provide an alternative hypothesis concerning memory for pitch and tempo.

Method
In a first experiment, 72 subjects were asked to imitate familiar songs. In a second experiment, subjects first heard a short fragment and were then asked to make a vocal imitation of it. Data were also collected on "spontaneous pitch": 244 people were asked to "sing one tone, a tone that lies naturally in your voice and springs up spontaneously". Finally, the songs used in the production experiments were presented to subjects in different tempi.


Results
Analysis of the recordings revealed that participants did not imitate the pitch of the original songs correctly; rather, a random distribution within the octave was found. For the imitation of tempo as well, precision was clearly poorer than found in previous studies, but differences between pieces were striking: the tempo of some songs was imitated rather correctly on average, while others were up to 14% too fast. The data on "spontaneous pitch" give a rough view of the preferred singing range within the population. Compared to the pitches used to imitate the familiar songs, there is a clear correspondence: people adapt their pitch to allow a comfortable singing range. For tempo, we see that in most cases the songs that were imitated too fast were also preferred in a faster version in the perception experiment.

Conclusions
From these results we conclude that people do not generally possess an absolute memory for musical pitch and tempo. Rather, when imitating well-known songs, people tend to start at a pitch that assures them a comfortable singing range. Apparently the tempi of some songs are distorted in the collective memory; more specifically, the "image" of music associated with certain dances is speeded up.
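One way to test for a "random distribution within the octave" of the kind reported in this study is to fold each imitation's pitch error into a single octave. A sketch; the use of MIDI note numbers and the helper name are illustrative assumptions, not the authors' analysis:

```python
def octave_folded_error(sung_midi, original_midi):
    """Pitch error in semitones folded into [-6, 6).
    0 means the original pitch class was reproduced; a roughly flat
    distribution of these values across singers suggests that pitch
    memory for the song is not absolute."""
    e = (sung_midi - original_midi) % 12
    return e - 12 if e >= 6 else e
```

Octave errors are deliberately ignored here, since singers transpose songs into their own comfortable range.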

Key words: Music and memory, Absolute pitch

[email protected]

63.75 Hand-clapping songs: A natural ecological medium for child development

Idit Sulkin, Warren Brodsky

Ben-Gurion University Of The Negev, Beer-Sheva, Israel

Background
Music is often said to be a basic building block of intelligence: playing an instrument develops critical neural connections, and music study improves spatial-temporal reasoning and flexibility of thought. While instrumental learning is a training regime with compulsory learning structured by a requisite curriculum, one might ask whether spontaneous singing games such as hand-clapping songs also improve non-music-related human performance. Hand-clapping songs are composed by children in an ecologically natural environment, based on chanting-type singing accompanied by percussive body sounds and rhythmic body movement sequences.

Aims
The study explored hand-clapping songs as serving a developmental function in facilitating the acquisition/training of specific cognitive, motor, and social skills.

Method
Two studies were conducted to explore the effects of hand-clapping training. In Study 1, 24 undergraduates participated in a pretest-posttest design investigating the effects of a 4-session training protocol; the dependent measures were spatial precision (pen-and-paper mazes), temporal stability/accuracy (tempo tracking and synchronous tapping), and bimanual coupling (rhythmic hand sequences). In Study 2, 18 first graders participated in a postdictive study examining the relationship between hand-clapping song performance and classroom proficiencies. After three training sessions in September, the children were ranked for their performances as viewed on videotape by 4 blind judges; eight months later, in June, these rankings were compared to the homeroom teachers' rankings of competence in reading, writing, arithmetic, and social maturity.

Results
Study 1 found significant effects of learning/practicing hand-clapping songs on temporal perspicacity and on bimanual coupling (considered to reflect cognitive-motor skills related to spatial-temporal reasoning). Study 2 found a positive correlation between the Hand-Clapping Performance Scale and the Class-Room Performance Scale; the reliability of predicting scholastic facility from hand-clapping performance was roughly 78%.

Conclusions
Hand-clapping songs seem to be a developmental platform for the attainment of specific cognitive, motor, and social skills.
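The agreement between two sets of rankings like these is typically assessed with Spearman's rho. A small sketch using the classic formula for tie-free ranks; the rank data are invented and the abstract does not specify which correlation was used:

```python
def spearman_rho(rank_x, rank_y):
    """Spearman's rank correlation for two tie-free rankings of 1..n."""
    n = len(rank_x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical ranks of 6 children by the judges vs. by their teacher
judge_ranks = [1, 2, 3, 4, 5, 6]
teacher_ranks = [2, 1, 3, 5, 4, 6]
rho = spearman_rho(judge_ranks, teacher_ranks)
```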

Key words: Hand-clapping songs, Cognitive development, Bi-manual coupling

[email protected]

63.76 Musicality, musical expertise, and pitch encoding: An MEG study

Mari Tervaniemi1, Maiju Nöyränen1, Elina Pihko2, Johanna Salonen2, Irma Järvelä3

1Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Finland
2BioMag Laboratory, Helsinki University Central Hospital, Helsinki, Finland
3Laboratory of Molecular Genetics, Helsinki University Central Hospital, Helsinki, Finland

Background
Musical expertise and musicality are based on anatomical and physiological brain constraints (Münte et al., 2002; Schneider et al., 2002; Tervaniemi et al., 1997). However, it is not known whether musical skills and musicality facilitate pitch encoding identically for musically relevant chord sounds and for spectrally complex sounds without musical context.

Aims
We compared automatic neural pitch encoding in two stimulation paradigms in three groups of subjects: professional musicians, subjects with good musicality test scores but without musical training, and subjects with poor musicality test scores.

Main contribution
While watching a silenced video, the subjects were presented with two sound sequences: 1) short spectrally rich sounds, among which 10% were higher in pitch; 2) major chords, among which there were minor chords. Automatic neural pitch encoding was investigated by recording the MMNm (the magnetic counterpart of the mismatch negativity) to a pitch change in these two sequences. The MMNm to the spectrally rich sounds did not differ between the subject groups. In contrast, the MMNm to the chord change was stronger in both musicians and musical subjects than in non-musical subjects.

Implications
The present results suggest that neural facilitation of pitch encoding in musical subjects and musicians is constrained to musically meaningful sound environments.


Key words: Musical expertise, Musicality, Pitch discrimination

[email protected]

63.77 Integrated supervision in music therapy: Reading, analysis and testing of the cognitive, emotional and affective-relational aspects of the music therapy process

Pasquale Tripepi1, Giovanna Artale2, Raffaella Coluzzi2, Sandra Masci2

1Department of Mental Health, National Health Service, Latina, Italy
2Music Therapist, Nuova PLAIM Music Therapy Professional, Latina, Italy

Background
Integrated clinical supervision in music therapy is intended as an opportunity for the personal and professional growth of the music therapist. Following the Prochaska and Norcross model, it is structured in formative stages through which new competences can be acquired within the framework of the integrated therapeutic model. The basis of the integrated model is a set of philosophical and anthropological choices about the person, so that integration does not represent a mere juxtaposition of different theories but the possibility of using various instruments to understand the person and his or her needs. In practice, the orthodoxy of theories is not privileged over the appropriateness and necessity of intervention for the specific client.

Aims and Main contribution
The two relationships, music therapist-patient and supervisor-supervisee, share the same possibilities for organizing expectations of reciprocity, intimacy and confidence, and for disconfirming rigid expectations belonging to the past. The specific organizing potential of musical structures contributes, in both relational processes, to the members' development of communicative, emotional and relational functions. In order to promote further acquisitions by the supervisee in the cognitive, emotional and relational dimensions, we chose to integrate Linehan's three-dimensional model of supervision with M. J. Zalcman and W. E. Cornell's bilateral model, adapting it specifically to supervision in music therapy.

Implications
It was possible to define a bilateral model of supervision consisting of two three-dimensional reference frames, covering the operations of the music therapist and those of the supervisor. These two frames, while being two separate processes, must be simultaneously taken into account in the supervision process, evaluating the areas composing each: the activities, the focus of attention and the functional models of the supervised music therapist, and, at the same time, the activities, the focus of attention and the functional models of the supervisor.

Key words: Music therapy, Integrated supervision, Bilateral model

[email protected]


63.78 Perception of temporal structures in melodies with glides: A basic discrimination of isochrony

Minoru Tsuzaki1, Kazutaka Tatsumi2, Satomi Tanaka2

1Kyoto City University of Arts / ATR Spoken Language Communication Research Laboratories, Japan
2Kyoto City University of Arts, Japan

When we hear a melody, we also perceive a specific rhythm pattern (or temporal structure). This temporal structure is assumed to be mainly correlated with the temporal relations among the beginning points of tones. While these are explicitly specified in musical scores, they are not necessarily well defined in real acoustic signals. For example, a new sound may start while the previous sound is still ringing, which may eliminate any obvious gap between two consecutive sounds. The situation becomes more problematic in the case of "portamento", i.e., when the fundamental frequency of one tone glides into that of another. Even when there is a continuous signal whose fundamental frequency glides from one value to another in the middle, human listeners can often perceive two tones. A solution can be provided by assuming sub-band processing, in which the signal is subjected to a bank of bandpass filters in the auditory system: in some appropriate filters there can be clear onsets and decays. However, it remains an empirical question what defines the arrival of a new event: is it the point where the glide starts, or the point where it ends? To answer this question, a series of perceptual experiments was conducted. The task was to discriminate an isochronous tone sequence from a jittered sequence. Each sequence was a continuous sinusoidal signal in terms of amplitude, but its frequency was modulated with a pattern in which five steady steps were connected by four glides. By randomizing the duration of the glide parts and adjusting the duration of the steady steps to compensate, one can add a certain degree of fluctuation exclusively to the starting points of the glides, or to their ending points. If the starting points of glides function as the main cue for the arrival of new events, discrimination based on isochrony should become more difficult when the fluctuation is applied to the starting points than vice versa. The results are discussed in relation to a computational model of auditory event detection.
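The timing manipulation in these stimuli can be sketched as follows: pinning the glide end points to an isochronous grid while randomizing glide durations jitters only the start points (and symmetrically for the other condition). The timing values and the function name are illustrative assumptions, not the study's parameters.

```python
import random

def glide_points(n_glides=4, period=0.4, seed=1):
    """Start and end times (s) of the glides in one sequence where the
    glide *end* points lie on an isochronous grid (one per period) and
    each glide's duration is randomized, so only its start point jitters."""
    rng = random.Random(seed)
    starts, ends = [], []
    for k in range(1, n_glides + 1):
        g = rng.uniform(0.04, 0.12)    # random glide duration
        ends.append(k * period)        # isochronous ending points
        starts.append(k * period - g)  # fluctuating starting points
    return starts, ends
```

Swapping the roles (pinning starts, letting ends vary) yields the complementary condition, so the two candidate cues can be jittered independently.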

Key words: Rhythm, Event detection, Time perception

[email protected]

63.79 Investigating psychomotor learning of adult students using piano trill performance as an outcome measure

Sonja C. Ulrich1, Andreas C. Lehmann2

1Universität Würzburg, Germany
2Hochschule für Musik Würzburg, Germany

Background
The performance of a piano trill requires fast alternating movements, and expert pianists excel in the regularity and speed of their trilling. This skill appears to be highly trainable, as even short-term piano playing results in neuroplastic changes. Performance in trilling has also been found to correlate with sight-reading and other music-related skills.

Aims
The intention of our study was to focus on the development of this skill in less experienced adult subjects. We also investigated whether trill performance could be a useful outcome measure of psychomotor learning.

Method
Our experimental group comprised 24 subjects enrolled in a ten-week beginning piano class; 13 participants were complete piano beginners, while the other 11 had some previous experience. 12 additional subjects served as a control group. Trill performance was tested before and at the end of the course. Mean trill speed (right hand) was calculated for two tasks: trilling for 15 seconds between thumb and middle finger, and between middle and ring finger. Both tasks were repeated. Additional measures were a brief keyboard typing test and music aptitude (AMMA). Experimental subjects also kept a practice diary during the course.

Results
Inexperienced piano players were able to transfer non-musical psychomotor skills (typing) to the musical task: better typists showed a tendency towards higher trill speeds and lower error rates. There was no significant correlation between AMMA scores and trill measures. All groups showed significant improvements from pre- to post-test, although at different levels of performance, with the control group showing the lowest performance. Several stepwise regression analyses were calculated to predict final trill performance or pre-/post-test improvement, using as predictors initial typing speed, accumulated practice time, and music aptitude. Initial trill speed was an excellent predictor, practice much less so, and typing performance least (total variance explained: 65% and 90%).

Conclusions
Trill speed does improve with short-term training and practice, and is best predicted by initial speed. It appears that manual dexterity and extramusical experience can be effectively transferred to a musical task by adult novices. Our results will be discussed in light of psychomotor learning and adult education.
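Mean trill speed of the kind measured in this study can be derived from recorded note onsets. A minimal sketch; the function and the added regularity measure are illustrative, not the study's actual procedure:

```python
def trill_stats(onsets):
    """Mean trill speed (notes per second) and a simple regularity
    measure (coefficient of variation of the inter-onset intervals)."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean_ioi = sum(iois) / len(iois)
    speed = 1.0 / mean_ioi
    var = sum((x - mean_ioi) ** 2 for x in iois) / len(iois)
    cv = var ** 0.5 / mean_ioi
    return speed, cv

# Hypothetical onsets (s) from the start of a 15-second trilling task
onsets = [round(0.090 * i, 3) for i in range(12)]
speed, cv = trill_stats(onsets)
```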

Key words: Adult education, Piano, Psychomotor skills

[email protected]

63.80 Natural sound as object for perception

Maris Valk-Falk

Estonian Academy of Music and Theatre, Tallinn, Estonia

Background
This work builds on a project investigating natural and synthesized sounds at the microstructure level (Valk-Falk & Lock, 2003a, b; Valk-Falk, Lock, & Rosin, 2005). Those studies support the study of the perception of natural sound, as well as of responses to synthesized sound, at the microstructure level of the sound. In the present work the natural harpsichord sound is analysed and investigated.

Aims
The hypothesis concerns the development (in progress) of a conception of style-culture at the sound level of keyboard music. The harpsichord sound is analysed as an object of perception in contemporary music, and a conception of the independence of sound structure is presented.

Method
The FFT-based sonogram-analysis functions of AudioSculpt were used, together with a virtual ADSR envelope to describe the changes in a specified sound's volume over time.

Implications
The conception of the independence of sound is discussed in connection with the quality of harpsichord sound in early and contemporary interpretations. The sound level is influential in several fields and levels of musical activity: composition, performance and hearing. Contemporary musical interfaces have an impact on synthesized as well as natural sounds. [References: 1. Valk-Falk, M., & Lock, H.-G. (2003a). Research aspect related to performance: Does the harpsichord sound fade out or obliterate quickly? Proceedings, ICMPC6, Hannover (CD-ROM). 2. Valk-Falk, M., & Lock, H.-G. (2003b). Toward performance: Formal structure of harpsichord sound. Proceedings, Caserta (CD-ROM). 3. Valk-Falk, M., Lock, G., & Rosin (2005). Proceedings, Caserta (CD-ROM). 4. Tulve, H. (2005). "Sans titre" for harpsichord. "Sula". ERCD050.]

Key words: Perception of sound, Investigation of natural sound, Harpsichord sound

[email protected]

63.81 SRA: An online tool for spectral and roughness analysis of sound signals

Pantelis Vassilakis1, Kelly Fitz2

1School of Music, ITD Libraries, DePaul University, Chicago,USA2School of Electrical Engineering and Computer Science, Washington State University, Washing-ton, USA

SRA is the only application of its kind available online [http://www.acousticslab.com/roughness/index.html]. It is included in Musicalgorithms [http://musicalgorithms.ewu.edu/algorithms/Roughness.html], a database of music composition/analysis algorithms hosted by Eastern Washington University. In this web-based application, users can submit 250- to 1000-ms-long portions of uncompressed sound files (.wav and .aif formats) for spectral and roughness analysis.

The spectral analysis uses an improved STFT algorithm, based on reassigned bandwidth-enhanced modeling [Fitz, K. and Haken, L. (2002). “On the use of time-frequency reassignment in additive sound modeling,” Journal of the Audio Engineering Society 50(11): 879-893], and incorporates an automatic spectral peak-picking process to determine which frequency analysis bands correspond to spectral components of the analyzed signal. It is implemented using the Loris open source C++ class library, developed by Fitz and Haken (CERL Sound Group). Users can manipulate 3 spectral analysis/peak-picking parameters: (a) frequency bandwidth (10 Hz or 20 Hz), (b) spectral amplitude normalization (Yes or No), and (c) spectral amplitude threshold (user-defined). To ensure the reliability and validity of the analysis results, every step of the file submission process includes detailed descriptions of the parameters, as well as suggestions on the settings appropriate to the submitted file(s) and the question(s) of interest.

Friday, August 25th 2006 489

The spectral parameters obtained from the analysis (frequency and amplitude values of the identified spectral components) are fed to a roughness estimation model [Vassilakis, P. N. (2005). “Auditory roughness as a means of musical expression,” Selected Reports in Ethnomusicology 12 (Perspectives in Systematic Musicology): 119-144], outputting a roughness estimate for the submitted sound file as well as estimates of the roughness contribution of each individual sine pair in the sound file’s spectrum. The model includes 3 terms that represent the dependence of roughness on a sine pair’s (a) intensity (related to the combined amplitude of the sine pair), (b) amplitude fluctuation degree (related to the amplitude difference between the sines in the pair), and (c) amplitude fluctuation rate (frequency difference between the sines in the pair) and register (frequency of the lower sine).
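The three-term dependence just described can be made concrete with a short sketch. This is not the SRA source code; it is a minimal re-implementation of the published sine-pair roughness formula, and the exponents and constants below are quoted from Vassilakis (2001) as best recalled, so they should be treated as assumptions rather than the tool’s exact values.

```python
import math

def sine_pair_roughness(f1, a1, f2, a2):
    """Roughness contribution of one sine pair (frequencies in Hz,
    amplitudes linear). Three terms, as described in the abstract:
    X ~ combined intensity, Y ~ amplitude-fluctuation degree,
    Z ~ fluctuation rate combined with register (lower frequency).
    Constants are assumptions taken from Vassilakis (2001)."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    amin, amax = min(a1, a2), max(a1, a2)
    if amin + amax == 0.0:
        return 0.0
    x = (amin * amax) ** 0.1                  # (a) intensity
    y = (2.0 * amin / (amin + amax)) ** 3.11  # (b) fluctuation degree
    s = 0.24 / (0.0207 * fmin + 18.96)        # register-dependent scaling
    d = fmax - fmin                           # (c) fluctuation rate
    z = math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d)
    return 0.5 * x * y * z

# Roughness peaks at a modest frequency separation and vanishes at unison:
r_mid = sine_pair_roughness(440, 1.0, 466, 1.0)     # near the maximum
r_narrow = sine_pair_roughness(440, 1.0, 442, 1.0)  # nearly fused
r_wide = sine_pair_roughness(440, 1.0, 880, 1.0)    # well resolved
```

The estimate for a whole spectrum is then the sum of this quantity over every pair of identified spectral components.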

A detailed outline of the roughness estimation model will be followed by a demonstration of the tool, a discussion of research studies that have employed it, and an outline of possible future research applications.

Key words: Roughness, Spectral analysis, Computer model

[email protected]

63.82 Types of metrical patterns in Serbian folk music

Vera Milankovic1, Milena Petrovic1, Draga Zec2

1Faculty of Music, University of Belgrade, Yugoslavia
2Cornell University, USA

Background
Unlike the western music tradition, which is characterized by isochronous meters, other music traditions include both isochronous and non-isochronous meters. We focus on the varieties of metric structures in the Balkan music tradition, both isochronous and non-isochronous, as represented in the extensive corpus collected by the well-known Serbian ethnomusicologist Miodrag Vasiljevic in the mid 20th century. The corpus includes both duple and triple beat within regular and irregular meter.
Aims
We show that both isochronous and non-isochronous patterns are organized hierarchically, and that the structure of non-isochronous patterns is asymmetric.
Main contribution
Patterns vary from isochronous (e.g. 2/4, 6/8) to non-isochronous (e.g. 7/8 [2+2+3 or 3+2+2]). Despite a wide range of possible combinations within the set of non-isochronous patterns, we note an important restriction: the resulting structures are asymmetric. Thus while 2+2+3, 3+2+2 or 2+3+3 are occurring non-isochronous meters, 3+2+3 or 2+3+2 are conspicuously absent.
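The restriction noted here (2+2+3, 3+2+2 and 2+3+3 occur; 3+2+3 and 2+3+2 do not) amounts to saying that attested non-isochronous groupings are non-palindromic. A small sketch of ours, not part of the study, makes the claim checkable:

```python
from itertools import product

def groupings(total, units=(2, 3)):
    """All ordered ways of dividing `total` pulses into beats of 2 or 3."""
    found = []
    for n in range(1, total // min(units) + 1):
        for combo in product(units, repeat=n):
            if sum(combo) == total:
                found.append(list(combo))
    return found

def is_symmetric(pattern):
    """Symmetric = palindromic: the grouping reads the same backwards."""
    return pattern == pattern[::-1]

# Of the three possible 7/8 groupings, the palindrome 2+3+2 is exactly
# the one reported as absent from the corpus:
seven_eight = groupings(7)
attested = [p for p in seven_eight if not is_symmetric(p)]
```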

Non-isochrony may figure not only at the beat level, but also at higher levels. In the latter case, (i) isochronous beats are organized non-isochronously at the next higher level, and (ii) isochronous beats are combined with non-isochronous beats, resulting in non-isochrony at the next higher level.

These patterns can be further combined within a single song. In some cases, a song may include bars with both isochronous and non-isochronous beats, as in 5/8+3/4+7/8.
Implications
We note the significance of asymmetry in the organization of non-isochronous beats in the corpus of Serbian folk music. The asymmetric non-isochronous beats lend themselves to a straightforward rhythmic grouping on the part of the listener, even though such groupings cannot be subsumed under the standardized accents in music theory, such as thesis and arsis.


It is important to observe that both isochronous and non-isochronous beats in the corpus are conditioned by, and closely follow, the rhythmic word shapes in the accompanying verse. Serbian folk poetry is organized both in duple and triple combined meters within a line of verse. This closely parallels the organization of non-isochronous beats in folk music.

Key words: Meter, Isochrony, Asymmetry

[email protected]

63.83 The cultural significance of Absolute Pitch

Maria Vraka

1Institute of Education, University of London, UK
2EEME (Greek Society for Music Education), Greece

Background
In research on Absolute Pitch (AP) ability, the reported general rarity of the trait occurs alongside a continuing debate as to its nature and biological basis, including recent neurological and infancy-focused studies. Throughout the literature, there is increasing evidence that AP ability may reflect underlying neurological abilities, nurtured by early musical experience before the age of 7. Given that culture (as enculturation) is a key feature of early learning and musical development, it seems logical to explore the extent to which AP might be culturally influenced.
Aims
The aim of the present research has been to explore the influence of culture on the development of AP ability by testing two hypotheses: a) AP behaviours can be present irrespective of culture and b) AP has variable cultural significance. Two cultural contexts are explored: Greece and Japan.
Method
The AP research has two elements: (i) a self-reporting questionnaire, based on findings from previous research literature, and (ii) a specially designed AP detection test. After piloting, both components were administered initially to undergraduate music students in Greece (n=92). The data were analysed using both qualitative (NVivo) and quantitative (SPSS) methods. (Comparative fieldwork in Japan, where AP is highly valued and taught, is ongoing, and data will be available by the time of the ICMPC9 conference.)
Results
In Greece, there is no reference to AP abilities in any musicological literature. Nevertheless, some participants were familiar with the English term and some made reference to a Greek-language concept, “absolute ear”. However, notwithstanding a general disregard for AP in Greek musical culture, initial SPSS analyses suggest that AP exists in similar proportions within Greek participants to those reported elsewhere in the literature.
Conclusions
Enculturation impacts on a rapidly changing cognitive system as the many skills supported by the culture are learned. By investigating a relationship between Absolute Pitch and culture, the present research hopes to provide new insight into the ability itself, as well as supporting an intercultural approach to studies of cognitive abilities.

Key words: Absolute pitch, Culture, Enculturation


[email protected]

63.84 Social implications of iPod use on a large university campus

Catherine Warlick

The University of Texas at Austin, USA

Since its introduction in 2001, the iPod, produced by Apple Computer, Inc., has become a hugely popular musical fixture in society. In the quarter ending June 2005, Apple shipped 6.2 million iPods, accounting for $1.1 billion in revenue, a five-fold increase in earnings for that quarter. The purpose of this study was to survey students at a large university and determine the social implications of the widespread use of iPods, including possible motivations for the iPod’s popularity, positive versus negative potential social interaction resulting from iPod use, and whether the music played on these devices has had an impact on individual perceptions of social well-being and status.

Subjects were 50 students from The University of Texas at Austin, 25 iPod users and 25 non-iPod users. Each subject completed a questionnaire comprising the following types of questions: demographic, open-ended, multiple choice, rating scale, and a multiple-answer question. With the exception of three questions, non-iPod users were asked the same questions, in hypothetical terms, as the iPod users.

The most interesting results indicated that, overall, the iPod users provided more positive observations of the effects of iPod use on society than the non-iPod users, despite both groups having an average rating indicating that they felt the use of iPods tended to inhibit potential social interaction. Also, 36% of each group felt that having an iPod was a social status symbol, providing a possible motivation for the widespread popularity of iPod ownership. The social effects of music choice proved inconclusive, as neither group showed a particular preference of genre, even though 88% of each group considered the music on their iPods to be representative of their personalities. The mean rating for feeling removed from surroundings was quite high among iPod users, indicating that many subjects lose perception of their surroundings when listening to their iPods, potentially creating dangerous situations in which a person’s safety could be compromised.

The social consequences presented here revealed a distinct and different mindset within both groups surveyed, and showed that the social implications of iPod use can be far-reaching and of great social importance.

Key words: iPod, Mobile listening, Music and society

[email protected]

63.85 Musical and general absolute pitch in humans

Ronald Weisman1, Christopher Sturdy2


1Queen’s University, Canada
2University of Alberta, Canada

Humans with and without absolute pitch (AP) were compared in a pitch classification task. The task required sorting tones into ranges based on their pitch. Contiguous pitches spaced 120 Hz apart were sorted into eight ranges of five tones each. In general, birds, and especially songbirds, are much more accurate than mammals, including humans, at this pitch sorting task. It appears that humans with musical AP are no more accurate than the rest of us at this pitch sorting task. In other words, the ability to sort pitches by their height is no better in musical AP possessors than in other musicians. We wish to discuss some versions of the pitch sorting task in which musical AP possessors might excel.
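The structure of the sorting task can be illustrated in a few lines. This is our reconstruction, and the starting frequency below is an arbitrary assumption; the abstract specifies only the 120-Hz spacing and the 8 × 5 range structure:

```python
# Hypothetical tone set: 40 contiguous tones, 120 Hz apart (the start
# frequency is an assumption, not taken from the study).
START_HZ, STEP_HZ = 980.0, 120.0
TONES_PER_RANGE, N_RANGES = 5, 8

tones = [START_HZ + STEP_HZ * i for i in range(TONES_PER_RANGE * N_RANGES)]

def correct_range(freq_hz):
    """The correct response: which of the eight ranges a tone belongs to."""
    index = round((freq_hz - START_HZ) / STEP_HZ)
    return index // TONES_PER_RANGE
```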

Key words: Absolute pitch, Pitch height sorting, Comparative psychology

[email protected]

63.86 You are the music while the music lasts*: An investigation into the perceptual segregation of dichotically-embedded pitch (*T. S. Eliot)

Rebecca Wheatley

Department of Psychology, University of Auckland, New Zealand

The ability to make fine temporal discriminations of acoustic signals contributes to a number of auditory perceptions, including speech discrimination and the localisation of sound. Temporal processes were investigated here through the use of a psychophysical procedure involving the binaural unmasking of dichotically-evoked pitches. A typical dichotic-pitch stimulus comprises two copies of the same white noise, the result of which (at this stage) sounds like a ball of noise in the middle of the head. As the brain senses that the temporal structure of the noise in each ear is identical - the two copies contain interaurally identical amplitude spectra - it fuses the two noises together into one perceived sound source (Dougherty et al., 1998). The next stage involves extracting a narrow band of frequencies from the noise signal going to one ear and shifting them, usually in relation to their phase, such that an interaural timing difference exists between the two copies of noise (Dougherty et al., 1998). This interaurally-shifted frequency band will perceptually segregate from the perceived ball of noise, resulting in the perception of a barely audible pitch, with a tonal quality associated with the centre frequency of the dichotically-delayed portion of the spectrum (Cramer & Huggins, 1958).
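The two-stage construction described here (identical noise to each ear, then a narrow band in one ear shifted in phase) can be sketched with an FFT, in the spirit of the Cramer & Huggins stimulus. This is an illustrative reconstruction, not the authors’ stimulus code; the sample rate, band edges and the half-cycle phase shift are assumptions.

```python
import numpy as np

def dichotic_pitch_stimulus(fs=44100, dur=1.0, centre=600.0, width=50.0,
                            seed=0):
    """Left ear: white noise. Right ear: the same noise with the band
    centre +/- width/2 phase-shifted by pi, so an interaural difference
    exists only in that band (heard as a faint pitch near `centre`)."""
    rng = np.random.default_rng(seed)
    left = rng.standard_normal(int(fs * dur))
    spectrum = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(left.size, d=1.0 / fs)
    band = (freqs > centre - width / 2) & (freqs < centre + width / 2)
    spectrum[band] *= np.exp(1j * np.pi)  # phase shift; magnitudes unchanged
    right = np.fft.irfft(spectrum, n=left.size)
    return left, right

left, right = dichotic_pitch_stimulus()
```

Swapping which ear receives the shifted band moves the stimulus to the other side of auditory space, which corresponds to the left/right manipulation used in this study.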

Separate dichotic-pitch stimuli were created to the left and right sides of auditory space by adjusting to which ear the temporally-advanced portion of the noise process was presented. Three conditions of differing interaural timing difference were used to assess the perception of differing “strengths” of dichotic pitch. Repeated measures analyses revealed a significant difference between the ability to detect dichotic-pitch stimuli on the left and right sides of space. This discovery provides an important contribution to discussions of the neural processing involved in auditory scene analysis and hemispheric lateralisation of function. Further application of these methods may be useful to assess the performance and possible lateralisation of auditory processes in special populations, such as children with developmental disorders (for example, delayed language acquisition) and individuals with music specialisation.


Key words: Dichotic pitch, Temporal processes, Interaural timing difference (ITD)

[email protected]

63.87 Music in working memory? Examining the effect of pitch proximity on the recall performance of non-musicians

Victoria Williamson, Alan Baddeley, Graham Hitch

Department of Psychology, University of York, UK

The working memory model (WMM; Baddeley and Hitch, 1974; Baddeley, 2000) continues to provide a framework for the examination of short-term memory for verbal and visual stimuli. The phonological loop component affords an economical and coherent account of a number of phenomena associated with remembering verbal material. However, according to the model, the extent to which music may be processed in working memory remains unclear. The broad aim of this research is to assess whether the phenomena associated with recalling verbal materials in the WMM can be observed when non-musicians attempt to recall music. The verbal memory phenomenon that is relevant to this first study is the phonological similarity effect. The present series of experiments had two aims. The first aim was to examine whether there was a parallel effect to phonological similarity when participants attempted to recall novel sequences of notes. The second aim was to examine the music memory capacity or “span” of non-musicians. In all the experiments, proximity in note pitch height was employed as a theoretical musical parallel to phonological similarity. This method draws upon the fact that phonological similarity occurs partly as a result of the auditory confusability of the sounds. This “confusability” also occurs during memory tests when a musical sequence contains notes that are very close in pitch height, as opposed to notes that are more distant (Semal and Demany, 1993; Ueda, 2004). In each experiment of this study, participants listened to sequences composed of notes that were either close together in pitch height (proximal) or far apart (distant), and performed immediate serial recall of contour. The sequences varied in length from three to seven notes. Span, serial position and percentage correct recall of proximal and distant melodies are analysed. The results are discussed in terms of the WMM and the phonological loop, as well as the validity of the new methods and measures employed.

Key words: Working memory, Music recall, Pitch proximity

[email protected]


64 Symposium: Music psychology pedagogy

Convenor: Richard Parncutt

Music psychology pedagogy presents interesting and unusual demands that have not yet been addressed in a conference session or journal issue:

1. There is no standard text that covers the basics in a form that reflects the current state of research and can be learned within a single course by students with limited background knowledge.

2. Although some music psychology instructors use multimedia learning environments, software created for this purpose is seldom internationally shared, nor is there a body of knowledge on how to use it.

3. Published claims in music psychology are often uncertain and limited. Many central questions are not yet clearly answered. Students need to critically evaluate literature and defend their own theses.

4. Students have a mixture of backgrounds and ways of thinking (humanities, sciences, musical practice) and need to acquire basic music-psychological skills and knowledge in a limited time.

The session will present creative, unusual, useful pedagogical approaches to these issues. Each paper will address a clearly defined topic. Authors will draw on their extensive experience in teaching music psychology, but will not focus on specific courses or schools.

Other issues in music psychology pedagogy may be addressed in a future session. For example, how should the method and content of music psychology teaching reflect typical careers pursued after such courses?

64.1 Suitability and usefulness of available books on music psychology for teaching in different institutional contexts

Andreas C. Lehmann1, Herbert Bruhn2, Robert H. Woody3

1Hochschule für Musik Würzburg, Germany
2Universität Flensburg, Germany
3University of Nebraska, Lincoln, USA

Background


Educational materials for music psychology have only recently been emerging, and the majority of them have been published in English. Although different books purport to cover similar materials, their contents differ widely. Also, we observe different (national?) teaching traditions in which the textbooks serve different purposes. Furthermore, in Germany there have been emerging initiatives to discuss the teaching of so-called “Systematic Musicology”, which is dominated by music psychology.
Aims
Discuss the usefulness of available music psychology textbooks using standard criteria and point out problems that arise from the varying institutional constraints.

Own contribution
On the basis of a compiled list of textbooks and handbooks in music psychology, we compare contents and try to assess their usefulness in different educational contexts (university, school of music) and for different target audiences (psychologists, musicologists, performers). In particular, we address and contrast the following books: “Handbuch Musikpsychologie” by Bruhn, Oerter & Rösing (1985; 1993); “The Psychology of Music” by Deutsch (1999); “The Science and Psychology of Music Performance” by Parncutt & McPherson (2001); “The new handbook. . . ” by Colwell & Richardson (2002); “Musical Excellence” by Williamon (2004); and an upcoming book, “Psychology for Musicians”, by Lehmann, Sloboda & Woody. We also discuss issues concerning the importance of knowledge of research methods and the possible existence and necessity of a basic canon of music psychology.
Conclusions
The biggest barrier to the international use of textbooks in music psychology is language (at least for the European mainland). In addition, the target audiences may consist of future music teachers, performers, psychologists, or other academics, foremost musicologists, which requires a differing scope and depth of content. At the same time, core topics have to be addressed that characterize music psychology as a unified discipline; otherwise, why concern ourselves with textbooks?

Key words: Richard Parncutt’s symposium, College teaching, Music psychology textbooks

[email protected]

64.2 Integrating educational technology tools and online learning environments into a course on Psychoacoustics and Music Cognition

George Papadelis1, Katie Overy2

1School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Greece
2Institute for Music in Human and Social Development, School of Arts, Culture and Environment, University of Edinburgh, UK

Background
Over the last two decades, we have seen an impressive flourishing of technological applications in educational practice and research. As a result, traditional campus-based learning has been greatly influenced by the enhanced communication, information retrieval and management capabilities provided by information technology and the Net. Net-based technological development


within the area of educational technology also led to new methods and theories for designing effective online learning environments.
Aims
The current presentation focuses on basic problems we faced and decisions that were taken in the design of a hybrid course, both online and campus-based, on Psychoacoustics and Music Cognition. The project is supported by two collaborating music departments (University of Edinburgh, UK, and University of Thessaloniki, Greece).

Short description
Fundamental issues of online learning and the “Semantic Web” will be presented, such as knowledge modeling and management, the role of interaction in online learning, communication tools that can support individualized and community-centered learning activities, and integrated course management. These issues will be approached through the use of illustrative, domain-specific examples drawn both from our work and from other resources. Discussion will concentrate on two main topics:

1. The use of “virtual lab” applications as a tool to visualize and gain a better understanding of psychoacoustical and cognitive phenomena. That approach will be discussed through a presentation of a step-by-step tutorial on musical rhythm perception.

2. The use of computer-based (software & hardware) environments to run online and/or offline psychoacoustical and psychological experiments, as a hands-on method to teach the fundamentals of experimental research practice. Discussion on that topic will be based on a specific example of an online experimental environment that was developed to support a study on the perceptual cues which govern similarity ratings for pairs of microvariations of short rhythm patterns.

Specific meaning and value
The basic aim of focusing on these two specific topics is to use them as paradigms in order to discuss alternative educational practices that may prove useful in the development of educationally effective hybrid and online learning designs in the field of Psychoacoustics and Music Cognition. Given the apparent difficulty that music university students face as they enter this domain, and their lack of any considerable prior knowledge of fundamental issues in cognitive science, a major concern of our approach is to stress the importance of using interactive content that responds to student behavior and attributes.

Key words: Educational technology, Cognitive musicology education, Online learning

[email protected]

64.3 Structuring the argument of a theoretical paper: A guideline and its reception by advanced undergraduate musicologists

Richard Parncutt, Margit Painsi

Department of Musicology, University of Graz, Austria

Background
Music psychology research abounds with uncertain claims for which good arguments and evidence may exist on both sides. Students expecting to learn “facts” may have difficulty dealing with such material. Instructors may respond by teaching “facts” to younger students and later encouraging older students to question the same material.


Aims
We develop strategies to support undergraduate students as they make the transition from learning “facts” to independently formulating their own (uncertain) hypotheses. Our seminars aim to mimic real international research processes.
Main contribution
Students work in groups and are given guidance on constructive collaboration. They first submit preparatory documents (tabular argument, reference list, self-evaluation). The group then presents a theoretical talk (analogy: conference presentation). The audience gives verbal and written feedback (peer review), which is documented and realised in the final theoretical paper (journal submission). Both talk and paper focus on a clearly formulated research question and thesis, and are structured into introduction, main section (divided into subtopics), and conclusion. Each section includes the following ingredients. Introduction: topic, explanation, example, definitions, multidisciplinary background, main question, its relevance, its possible answers, and approach (subtopics = structure of rest of presentation). Each subtopic: link to main question, formulation of subquestion and subthesis, detailed evidence and arguments for and against. Conclusion: main question (again), formulation of a (new! unitary!) thesis, general arguments for and against, practical implications, suggestions for further research. These points are drafted at the start of the semester and repeatedly revised as students become familiar with relevant literature and develop their own ideas.
Implications
Some students find this procedure overly regulated. It works best if students already have considerable background in basic music psychology (“facts”) and research methods (experimental method, critical evaluation, scientific writing). Our approach promotes clear, critical thinking in any scholarly discipline that poses difficult questions and seeks tentative answers.

Key words: Argument, Critical evaluation, Theoretical paper

[email protected]

64.4 Teaching music psychology to psychologists and to musicians: Differences in content and method

Jane Ginsborg1, Helen Brown2

1Royal Northern College of Music, Manchester, UK
2Purdue University, USA

In this paper we discuss contrasting approaches to teaching music psychology to a) psychology students and b) students of music performance and music theory.

Although psychology students cannot be expected to have high levels of musical expertise or to be musically literate, most spend a lot of time listening to music: those who enrol voluntarily in a music psychology course are likely to be interested in music and may have a good deal of musical knowledge. Music-related research is thus an effective vehicle for teaching broader psychological issues, including quantitative and qualitative research methods. For example: cognitive psychology students can be asked to consider topics such as musical talent in the context of expertise development more generally. Developmental psychology students might evaluate the value of


musical aptitude testing in relation to intelligence and IQ testing. Research into adolescents’ use of music as a badge of identity, and the use of music for persuasive purposes, can inform the study of social psychology. Supervisors with experience of carrying out music psychology research should encourage psychology students with particular interests in music to devise and undertake their own research projects.

Music performance students, on the other hand, are unlikely to know or care much about psychology, other than in relation to their own needs. Yet research in music teaching and learning is particularly salient to them. Teachers of music psychology who are also experienced performers may have more credibility with music students than “mere” psychologists, and find it easier to select appropriate, relevant topics such as effective strategies for performance preparation (e.g. practising, improvising, sight-reading, memorising, dealing with performance anxiety). Similarly, psychological concepts can be introduced into music theory teaching, with reference to appropriate literature. It is therefore very important to provide an adequate grounding in psychological approaches and methods, particularly notions that may be unfamiliar to musicians, such as the relationship between research and theory and the importance of different kinds of evidence.

While the desired specific outcomes of music psychology teaching for psychology and music students may differ, each should benefit from being introduced to the other discipline. The relative success of the strategies outlined above will be reported and discussed.

Key words: Music pedagogy, Psychology, Music performance and theory

[email protected]


65 Education VII

65.1 Development of musical preferences in adulthood

Antje Bersch-Burauel

Hochschule für Musik Würzburg, Germany

Background
The present study is based on the fact that, up to now, there are few reliable specific results on the development of musical preferences and functions of music in adulthood. Previous studies on musical preferences in the fields of music psychology, music education and media research are predominantly cross-sectional studies whose samples consist of subjects in adolescence and early adulthood. Most of the few studies on musical preferences in adulthood deal with the subject in a gerontological and music-therapeutic context.

Aims
The present study aims at elaborating the factors that influence the interindividual and intraindividual development of musical preferences and functions of music in adulthood.

Method
48 subjects from three different age groups (25-35 years, 45-55 years and 65-75 years) kept a music diary for one week, in which they documented their music listening behaviour. Following the music diary, the subjects were interviewed about changes in musical preferences and functions of music over their biography, by means of a two-part interview consisting of a half-standardized and a narrative part.

Results
Of the variables age, gender and education, only age has a significant influence on musical preferences and functions of music. The musical styles preferred by the oldest subjects differ clearly from the musical preferences of the two younger groups. The oldest subjects predominantly prefer classical music, “Schlager”, sacred choral music and operas, whereas the two younger groups prefer pop, rock and classical music. With growing age, the influence on musical taste of factors from the individual’s environment, e.g. friends, partners and media, decreases rapidly.

Conclusions
The oldest subjects tend to be mere music listeners, the subjects aged 45-55 years tend to use music for a specific purpose, and the youngest subjects (25-35 years) tend to use music as a diffuse background for all kinds of activities.

Key words: Musical preferences, Functions of music, Musical development in adulthood

[email protected]

65.2 Effects of auditory feedback in the practice phase of imitating a piano performance

Noriyuki Takahashi, Minoru Tsuzaki

Kyoto City University of Arts, Japan

The purpose of this study is to investigate how auditory feedback is engaged in playing the piano. Several studies have argued that auditory feedback has only a marginal effect on piano performance. However, most observations have focused on the performance of highly practiced pieces, and the players' degree of training has not been considered sufficiently. In this paper, we investigated the effects of auditory feedback while practicing a piece of piano music, using participants with different degrees of training. The experiment was two-way factorial: one factor was the degree of training, and the other was the availability of auditory feedback in the practice sessions. Thirty-six highly trained and less trained pianists took part. Participants of both training level groups were further divided into two practice groups, an auditory feedback (AF) group and a no auditory feedback (NF) group. All participants listened to a model performance, played by a professional pianist as an example of standard expression, and were instructed to imitate the model as closely as possible on a MIDI keyboard. Each participant was allowed to practice ten times. In this practice phase, participants in the AF group heard the auditory output of their performance, while sound output was cut off for the NF group. After these practice sessions, participants in both groups played the piece in a final test with auditory feedback. The MIDI records of the final tests were analyzed in terms of onset timing and key velocity. The effect of auditory feedback appeared differently in the highly trained and less trained participants. The less trained participants imitated better with auditory feedback than without it; the effect was prominent in the velocity contours as well as in the degree of final ritardando in the piece. The highly trained participants, on the other hand, showed no significant differences between the two practice conditions. These results suggest that auditory feedback is used especially by unskilled pianists to elaborate their expressive performance.

Key words: Auditory feedback, Piano performance, Performance skill

[email protected]

65.3 Quality piano instruction affects at-risk elementary schoolchildren’s cognitive abilities and self-esteem

Frances Rauscher1, Marcy LeMieux1, Sean Hinton2

Friday, August 25th 2006

1University of Wisconsin Oshkosh, USA
2Medical College of Wisconsin, USA

The purpose of this study was to test the effects of piano keyboard instruction in a real-world, severely disadvantaged public elementary school setting, with lessons provided by public school teachers assigned by the school district. Kindergarten through 5th-grade children (n=627) from four public elementary schools (two classrooms per grade) were assigned to one of two conditions: piano instruction, or no music (control). All children were tested at the beginning and end of each year using three verbal tests, two mathematics tests, one spatial-temporal test, and one self-esteem test (5th grade only). Lessons were provided weekly for a period of three academic years. We observed several problems with the instruction during the first year of the study. Music teachers appointed by the school district did not receive classroom assignments until one month into the school year. They were unfamiliar with the keyboard curriculum and had not taught piano in the classroom before. In addition, there were high rates of absenteeism, lateness, and conduct disorders among the children, and classroom space was inadequate. Despite our efforts, these problems persisted through the second year of the study. During the third year, a music specialist with proficiency teaching keyboard to elementary school children in a different school district volunteered to assist the teachers in the classroom and share her expertise, greatly improving the quality of the instruction. No significant differences were found between groups on the cognitive measures during the first two years of the study. However, children who received piano instruction scored significantly higher than controls on all but one sub-test of the self-esteem measure during all three years of the study. Differences between the piano and control groups emerged for the arithmetic and spatial-temporal tests during the third year of the study only, after the keyboard instruction had improved. There were no differences between the groups on the verbal measures. This research emphasizes the importance of quality music instruction for cognitive transfer effects, and supports previous research reporting differential effects of piano instruction on cognition.

Key words: Cognition, Children, Self-esteem

[email protected]

65.4 Teacher-student relationship in Western classical singing lessons

Sofia Serra1, Jane Davidson2

1University of Sheffield, UK
2University of Sheffield, UK and University of Western Australia, Australia

Background
Considerable research has examined instrumental learning. Singing, however, presents specific characteristics that remain unexplored. This study observes the specific characteristics that make Western classical singing lessons different from other instrumental learning, and examines how the teacher-student relationship develops and can interfere, both positively and negatively, with vocal development. Themes in focus include the technical, social and psychological processes in operation for successful learning outcomes.

Aims
- Describe the processes of learning that emerge from the teacher-student relationship.
- Detect motivations that lead to changing singing teacher.
- Clarify singing teacher and student psychology and roles.
- Explore factors that contribute towards an effective singing teacher-student interaction.
- Make singing lessons more effective by detecting ways in which a singing teacher can solve day-to-day problems.

Method
Sixty-four singing students from Portugal, the United Kingdom, Canada and Australia who had just changed singing teacher were surveyed by postal questionnaire and asked about their preferences and experiences, in a quantitative manner, on the following topics: i. the singing lesson's sequence and environment; ii. the teacher's teaching strategies and abilities; iii. the teacher-student relationship; iv. the student's development during the year; v. the general singing environment; vi. verbal and non-verbal communication used in the singing lessons.

Additionally, in a qualitative observational study, twelve singing teachers from the United Kingdom and Portugal were observed and video recorded every 12 weeks during an academic year. The lessons were analysed in terms of behaviours, non-verbal communication, interactions, personalities, and immediate problem solving.

Results
The complete analysis of all data will be finished in the period leading to the conference, but tentative findings from the questionnaire study demonstrate:

- Relationships develop and adjust over time.
- Student and teacher develop a specific communication.
- Teaching techniques differ from one student to another with the same teacher.
- Teachers demonstrate patterns of behaviour, teaching techniques and lesson sequence.
- Most students believe that the teacher-student dynamic affects their development.

Conclusions
These results begin to reveal that the role of the singing teacher has a psychodynamic aspect. This potentially powerful element is not discussed in the existing literature, so it will feature strongly in this presentation and in the author's forthcoming investigations.

Key words: Singing, Teacher-student relationship, Psychology

[email protected]

65.5 BoomTown Music - a co-creating way to learn music within formal music education

Anna-Karin Gullberg

Luleå University of Technology, School of Music in Piteå, Sweden

The general purpose of this presentation is to discuss how alternative forms of learning strategies in specifically designed contexts can strengthen the development of musical, social and personal competences.

Research in music education has confirmed that how knowledge in music is created is highly correlated with qualities of the context, for example the organization of musical learning and social interaction. Still, it is difficult to free the practice of learning in music from conservatoire tradition and didactic "hidden curricula". Music institutions all too often suffer from institutionalization and a levelling of cultures. Informal music learning, on the other hand, is largely characterised by co-creation and peer learning, something that formal music education has often ignored. By not paying sufficient attention to learning processes within smaller groups, great opportunities for powerful growth in personal and social skills are also passed over.

In autumn 2005 a completely new curriculum in music education - BoomTown Music - was born within the School of Music in Piteå. Its educational baseline rests on scientific theories and previous research dealing with informal learning strategies. Peer learning and playing by ear are acknowledged and strongly supported here. The philosophy of BTM opens up to a wider musical, social and ethnic variety and is supported by a mixture of guest musicians, artists, innovators etc., and by guided self-studies concerning processes in learning and communication. No teachers are engaged on a regular basis - the music groups decide who they want to meet and what skills they want to develop.

The research project is based on studies of formal and informal learning and theories of socialisation in music. Data are collected through participant observation during rehearsals and concerts as well as qualitative analyses of songs, concerts and interviews. An interesting body of knowledge also comes from the students' diaries and written reflections that have been collected since the start. The research results will also contribute important knowledge about how learning in music is affected by the organisation and design of learning contexts.

Key words: Informal learning, Peer learning, Music education

[email protected]

65.6 Environment, motivation and practice as factors of instrumental performance success in elementary music education

Blanka Bogunovic1, Ksenija Rados2, Oliver Toskovic3

1Institute for Educational Research, Belgrade, Serbia and Montenegro
2Faculty of Philosophy, Department of Psychology, Belgrade University, Serbia and Montenegro
3Faculty of Philosophy, Kosovska Mitrovica, Serbia and Montenegro

Background
Previous findings have shown that musical potential is a necessary but not sufficient factor in successful performance development, and that other factors such as the environment and the psychological characteristics of the student also play a significant role. Our attention is now focused on the specific pattern of relationships between environmental factors (family background, teacher's attributes), the student's motivation and practice, and the path of their influence on instrumental performance success during the first years of specialized elementary music education.

Aims
To describe the individual and common contribution of teacher characteristics and family background to early indicators of intrinsic motivation and to instrumental practice, and to find their integrative pattern of contribution to different levels of student success in instrumental performing and to dropout from instrumental tuition.

Method
A 5-year longitudinal study was conducted in 5 specialized music schools. The sample included 993 students (aged 6-12), 512 male and 506 female parents, and 165 teachers. Family background was represented by indicators of parental musical stimulation, encouragement of musical study, and involvement in the daily learning process. Teachers' characteristics included professional, educational and personal attributes and cooperation with the student and parents. Motivational indicators and practice schedules were assessed by parents. Students' musical success was represented on two levels: a) school achievement and b) performance success.

Results
Preliminary results show that different types of parental activities in the preschool and school periods play a significant role in the development of intrinsic motivation and then in the foundation of practice habits in the early stages of learning to play an instrument. Certain teacher attributes are significant only for the frequency and duration of pupils' practice. The findings point out that motivational indicators, such as enjoyment in playing and the importance of attending music school, and duration of practice influenced all levels of instrumental achievement, especially performance success.

Conclusions
The results underline the importance of adequate, high-quality parental incentive and support, as well as the relevance of the teacher's personal characteristics and educational competence, in developing such attributes as early intrinsic motivation and interest in instrumental practice. We may say that there is an integrative pattern of environmental influences in the beginning years of instrumental tuition that fosters motivational development and the foundation of practice habits, which in turn contribute significantly to all levels of performance success.

Key words: Environment, Motivation and practice, Performance success

[email protected]


66 Rhythm VII

66.1 A hierarchy of rhythm performance patterns for first grade children (ages six and seven)

Debbie Lynn Wolf

Philadelphia Biblical University, Langhorne, USA

Although the need for introducing rhythm performance patterns to children is recognized as crucial to foundational music instruction, research has not provided a definitive hierarchy of rhythm patterns to be introduced. This study investigated the difficulty of rhythm patterns performed by first grade students and established a hierarchy of rhythm performance patterns for children ages six and seven years.

Subjects (N = 195) were first grade students, ages six or seven years, from six public elementary schools in the United States. The investigator-designed Rhythm Pattern Performance Test (RPT) was administered individually to all subjects. The test examined the ability to perform rhythm patterns in imitative response to a recorded model. It featured rhythm patterns with macro/micro beats (the beat and its division into two or three parts), elongations (the extension of a macrobeat), divisions (the subdivision of a microbeat), and divisions and elongations (combinations of extensions and divisions) in similar forms for duple and triple meter. Subjects were audiotape recorded as they listened to and imitated each of the patterns; their recorded responses were evaluated by two independent judges using a six-point continuous rating scale.

As in other studies of pattern difficulty, the rhythm patterns of the present study were assigned to categories (easy, moderate, difficult) according to the mean and standard deviation of the pattern difficulty levels. Performance difficulty levels were determined for each RPT pattern; the mean difficulty level was 45 (SD = 21). Patterns classified as difficult had difficulty levels one standard deviation or more below the mean; moderate, difficulty levels within one standard deviation of the mean; and easy, difficulty levels one standard deviation or more above the mean.
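The mean-and-standard-deviation banding described above can be sketched as follows. This is an illustrative reconstruction of the rule, not the study's actual analysis code, and the pattern names and difficulty levels are hypothetical (chosen so the mean and SD roughly match the reported 45 and 21):

```python
# Illustrative sketch of the easy/moderate/difficult banding: a pattern
# is "difficult" if its level falls at least one standard deviation below
# the mean, "easy" if at least one SD above, otherwise "moderate".
# Pattern names and levels below are hypothetical, not the study's data.
from statistics import mean, stdev

def categorize(levels):
    """Band pattern difficulty levels using the mean +/- 1 SD rule."""
    m = mean(levels.values())
    sd = stdev(levels.values())
    bands = {}
    for pattern, level in levels.items():
        if level <= m - sd:
            bands[pattern] = "difficult"   # >= 1 SD below the mean
        elif level >= m + sd:
            bands[pattern] = "easy"        # >= 1 SD above the mean
        else:
            bands[pattern] = "moderate"
    return bands

# Hypothetical levels (mean 45, SD ~21, as reported in the abstract):
levels = {"duple-macro": 70, "duple-division": 50,
          "triple-elongation": 40, "triple-division": 20}
print(categorize(levels))
```

Note that with this convention lower performance levels mean harder patterns, which is why "difficult" sits below the mean.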

A hierarchy of the rhythm performance patterns was established by examining the difficulty levels for all RPT patterns. Rhythm performance pattern difficulty is determined primarily by meter; duple meter patterns are easier than triple meter patterns. The results of this study are applicable to elementary music instruction, curriculum design, performance assessment, and future research.

Key words: Rhythm performance, First grade music education, Rhythmic development


[email protected]

66.2 The effect of tempo on the perception of anacruses

Justin London, Tommi Himberg, Ian Cross

Centre for Music and Science, Cambridge University, UK

Background
A short-short-long (SSL) rhythm may be heard either (a) as beginning directly on the downbeat or (b) with its two short elements as an upbeat or anacrusis. Studies have shown that listeners do not uniformly or consistently associate relative length with accent (and in particular, higher-level metric accents) in ecologically valid listening contexts (Vos, van Dijk, and Schomaker 1994; Stobart and Cross 2000; Snyder and Krumhansl 2001; Toiviainen and Snyder 2003).

Aims
This study explores the effect of tempo on the metric perception of SSL figures. The experimental hypothesis is that in metrically ambiguous cases, at tempos where each short element is apt to be heard as a beat (i.e., 400-800 ms IOIs), listeners will tend toward a downbeat interpretation, while at faster tempos (200-400 ms IOIs) perception shifts to an anacrustic interpretation. The effect of articulation (note offset times) is also investigated.

Method
A stimulus set of 10 melodies was composed. The melodies are (a) strongly downbeat oriented, (b) strongly anacrustic, or (c) metrically ambiguous; all are clearly in 4/4. Melodies were presented in random order at 5 tempi and in two response contexts. The experimental task is to indicate the perceived beat or downbeat, either (a) with a tapping response, where participants tap on a MIDI drum pad while they listen, or (b) with a notation response, where musically literate participants add barlines to an unmeasured score as they listen.

Preliminary Results
Pilot studies have confirmed that the stimulus set reliably contains metrically unambiguous and ambiguous melodies. Velocity data (i.e., a pattern of stronger versus weaker taps) was not a reliable indicator of metric interpretation, but tapping mode proved to be satisfactory.
Analysis shows disambiguation of ambiguous melodies at faster tempi, with a tendency for SSL patterns to emerge as anacrustic.

Conclusions
The experiment is an extension of London's (2004) proposals regarding the effect of tempo on metric structure. It demonstrates how the relationships between rhythmic (figural/serial) grouping and metric (hierarchical/attentional) structure may be empirically investigated.
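For readers who think in tempo rather than inter-onset intervals, the IOI ranges in the hypothesis can be converted to beat rates with simple arithmetic. The BPM framing below is an illustration added here, not part of the original abstract:

```python
# Convert an inter-onset interval (IOI, in milliseconds) to beats per
# minute, to make the IOI ranges in the hypothesis concrete. This BPM
# framing is an added illustration, not the authors' own presentation.
def ioi_to_bpm(ioi_ms):
    return 60000.0 / ioi_ms

# 400-800 ms IOIs (short elements apt to be heard as beats):
print(ioi_to_bpm(400), ioi_to_bpm(800))   # 150.0 75.0
# 200-400 ms IOIs (range where anacrustic hearing is predicted):
print(ioi_to_bpm(200), ioi_to_bpm(400))   # 300.0 150.0
```

So the "beat-like" short elements correspond to roughly 75-150 BPM, a range commonly associated with comfortable tactus rates.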

Key words: Metric ambiguity, Downbeat accent, Tempo

[email protected]

66.3 Detecting subjective rhythmic attending in ERP

Peter Desain, Rebecca Schaefer, Juliët van Drumpt


Music, Mind, Machine Group, Nijmegen Institute of Cognition and Information, Radboud University, The Netherlands

Background
In the so-called clock illusion, isochronous stimulus trains are subjectively converted into a binary grouped percept (tick-tock-tick-tock instead of tick-tick-tick-tick). This effect has been shown to occur spontaneously and is measurable in omitted evoked potentials in EEG (e.g. Brochard, Abecasis, Potter, Ragot, & Drake, 2003).

Aims
To use the manifestation of subjective accenting in EEG for the realisation of a Brain-Computer Interface (BCI), we measured EEG while subjects imagined different groupings superimposed on an isochronous train of stimuli, thus producing accented and non-accented beats in a train of perceptually identical metronome ticks. By eventually classifying the accented versus the non-accented beats in EEG, a communication method can be created for patients whose neurodegenerative diseases or central neural damage prevent motor activity, the so-called locked-in syndrome.

Method
In 15 right-handed female students, subjective accenting was induced by presenting a rhythmic pattern and asking them to continue it mentally while keeping movement minimal. To ensure correct continuation and attention, an omission detection task concluded every trial. The experiment consisted of 80 trials. Both a binary and a ternary metric pattern were used. The EEG data, recorded with electrodes placed according to the 10-20 system, were analysed using cluster randomization. This method applies stricter statistical criteria and is an appropriate way to avoid the need to correct for multiple comparisons in EEG data.

Results
We investigated perception (the beginning of a trial) and imagery (the latter part) in both the ERP and frequency domains. For perception, results were comparable across participants and showed different EEG signatures for strong and weak beats. In the frequency domain no significant results were found. For imagery, 64% of participants showed highly significant within-subjects results, but these cancelled each other out in a grand average.
In the frequency domain, increased ?-activity was seen for accented beats. No other frequency results were significant.

Conclusions
It appears that interpersonal variability prevents us from seeing general effects of imagined accents, although strong consistency was seen in a good proportion of participants. Roughly two types of responses were seen, most likely produced by different cognitive strategies in task performance. The frequency results indicate a possible coupling of rhythmic accenting with motor imagery. These data support the feasibility of using subjective accenting to drive a BCI device.
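The randomization logic behind the statistics mentioned above can be illustrated with a much-simplified sketch. This is not the authors' analysis pipeline (real cluster randomization operates on clusters of adjacent channels and time points); it only shows the core permutation idea, using hypothetical per-subject condition differences:

```python
# Much-simplified sketch of the permutation idea behind cluster
# randomization statistics: randomly flip the sign of per-subject
# condition differences (here, hypothetical strong-minus-weak beat
# amplitudes) to build a null distribution for the observed mean.
# Real cluster-based tests additionally form clusters over channels
# and time points; that step is omitted here.
import random

def sign_flip_p_value(diffs, n_perm=5000, seed=1):
    random.seed(seed)
    observed = abs(sum(diffs) / len(diffs))
    count = 0
    for _ in range(n_perm):
        flipped = [d * random.choice((-1, 1)) for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical strong-minus-weak differences (microvolts):
diffs = [0.8, 1.1, 0.5, 0.9, 1.3, 0.7, 0.4, 1.0]
print(sign_flip_p_value(diffs))  # small p: differences are consistent
```

Because every hypothetical difference has the same sign, almost no sign-flipped sample reaches the observed mean, so the p-value comes out small without any explicit multiple-comparisons correction.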

Brochard, R., Abecasis, D., Potter, D., Ragot, R., & Drake, C. (2003). The "ticktock" of our internal clock: Direct brain evidence of subjective accents in isochronous sequences. Psychological Science, 14(4), 362-366.

Key words: Subjective rhythmization, Dynamic attending

[email protected]

66.4 Context effects on the experience of musical silence

Elizabeth Margulis


Northwestern University, USA

Background
Silences in music are distinguished acoustically along only one dimension: the length of time they occupy. However, they are distinguished syntactically along many dimensions, depending on the tonal, temporal, dynamic, and expressive context in which they occur. Silence can assume different functions in different contexts, simply marking a grouping boundary in one circumstance, and dramatically enhancing musical tension in another.

Aims
This study seeks to investigate how musical context influences the experience of silence.

Method
Untrained listeners were recruited for participation. They were played musical excerpts containing periods of silence and asked to perform three tasks. In task one, participants pressed a button when they heard a period of silence begin and another when they heard it end. In task two, they moved a slider to indicate perceived changes in musical tension across the course of each excerpt. In task three, they answered a series of questions about each silence, including questions about its duration, placement, salience, and metric qualities.

Results
The results show that musical context transforms the experience of silence along many dimensions. Specifically, certain contexts caused participants to be late or early in recognizing that the silence had begun or ended. Additionally, certain contexts caused participants to perceive musical tension as increasing across the course of a silence, and others caused them to perceive it as decreasing. Context also had an effect on participants' estimates of the duration, relative placement, salience, and metric qualities of the silence.

Conclusions
The effects of musical structure are experienced even during periods in which no auditory stimulus occurs. This finding, and especially its consistency across a wide pool of participants, suggests that musical structure engages listeners in relatively robust ways.

Key words: Silence, Tension, Musical structure

[email protected]


67 Symposium: The ecology of flow experience: Cognitive, social, and pedagogical observations of children's music making

Convenors: Lori A. Custodero, Anna Rita Addessi

Discussants: Colwyn Trevarthen and Maya Gratier

Flow experience is defined by an individual's perception of high challenge and high skill in an activity in which s/he is engaged (e.g., Csikszentmihalyi, 1975, 1997). Intrinsically rewarding tasks that involve concentration and focus generate such a state; typically these tasks are marked by clear goals, immediate feedback, and the perception that one's actions are consequential in directing the experience. Music is believed to be a quintessential flow activity (Csikszentmihalyi, 1993), offering challenge through its multi-faceted requisites for engagement, including the physiological, the communicative, the cognitive, and the aesthetic (Custodero, 2002). From the literature on mastery motivation (e.g., McCall, 1995), play and exploration (e.g., Lieberman, 1977), constructivism (e.g., Piaget, 1962), and creativity (Feldman, 1994), we also know that children are challenge seekers. Given this match between flow as an experiential state sustained by the perception of challenge, and children's propensity for such activity, this symposium offers four perspectives on flow experience in children and/or adults, each in a different context and each addressing the research conundrums implicit in observing and reporting subjective experience. These include experience sampling issues and the unit of analysis, the nature of defining the measurable task in a subjective paradigm, and operationalizing and validating reliably observable indicators of flow experience.

After a brief introduction to these issues, Addessi, Carlotti, Ferrari, and Pachet present their research on musical "duets" between kindergarten children in Italy and "the Continuator", a keyboard synthesizer designed to respond in succession (vs. simultaneously), in musical partnership with the performer's improvisations. Next, St. John discusses her study of the social influences on observed flow experience, making connections to Vygotskian social-cultural theory; her focus is how peer-to-peer partnerships facilitate flow. Custodero and Stamou present recent findings from a study with children's music teachers in Greece who were instructed in the recognition of flow indicators as tools for improving learning effectiveness in their own classrooms. Sinnamon investigates Dublin musicians' experience of the flow state, and how cognition and emotion interact among expert musicians, using in-depth interviews and surveys carried out with expert musicians and student musicians from three traditions of music: Classical, Jazz and Traditional Irish. All the studies are centered on the interactive relationship between the individual and the environment: laboratories, classrooms, studios and concert halls are all interpretable vis-à-vis ecological factors which support or inhibit flow experience. Discussants Colwyn Trevarthen and Maya Gratier offer commentary.

67.1 A community of learners: Young music-makers scaffolding flow experience

Patricia St. John

Teachers College, Columbia University, USA

Background
The flow paradigm honors the individual nature of musical experience occurring in the moment. Vygotsky's socio-cultural prototype purports that meaning is socially constructed. Collective music-making speaks to the socializing power of music. Social aspects related to flow experience in young children's music-making have been observably present in complex forms since the inception of Custodero's (1998) observational protocol. Additional studies (e.g., St. John, 2005) suggest the dynamic interplay among participants facilitates learning and scaffolds flow.

Aims
The influence of others seems relevant. My aim is three-fold: to examine how children use the environment as a learning tool; to identify scaffolding strategies within the music-making community; and to define the relationship between these strategies and flow experience.

Method
Eight 75-minute sessions of a music class for twelve 4- and 5-year-olds were videotaped across a 15-week semester. Videotaped sessions were coded using Custodero's revised Flow Indicators in Musical Activities form (R-FIMA). A total of 110 events (e.g., a singing game, a movement activity, instrument-play) yielded 681 units of analysis (event x participant). Verbal and non-verbal exchanges were reported in descriptive narrative style in the comments sections of the R-FIMA.

Results
Factor analysis of the 9 behavioral indicators resulted in 4 experiential dimensions. Flow, including self-assignment, deliberate gesture, expansion, and skill, was interpreted as the essence of optimal experience, representing elements of both challenge and skill. Material Strategies, whose components anticipation and extension also predicted flow, were repeatedly coupled with peer awareness. Personal Strategies (self-correction and imitation), Material Strategies, and Social Strategies (peer awareness and adult awareness) facilitated or sustained flow.

Conclusions
Drawing from the social context and reflecting personal agency, children discovered who would best assist them (Social Strategies), how to get to flow (Personal Strategies), and how to sustain the experience (Material Strategies). Peer awareness was robust to influences of all types of musical activities, suggesting that being with each other is fundamental to young children's music-making. Peers provide a relational context from which to draw interactions; these interactions seem to facilitate flow.

[email protected]

67.2 Young children’s musical experiences with a flow machine

Anna Rita Addessi1, Laura Ferrari1, Simona Carlotti1, François Pachet2

1University of Bologna, Italy
2Computer Science Laboratory-SONY, Paris, France

Background
The Theory of Flow was introduced within the field of creativity studies by Csikszentmihalyi (1990) in order to describe the state of flow, or optimal experience, experienced by creative persons during their preferred activities. In the field of music education, Custodero (2005) elaborated a list of flow indicators with which to observe the flow state in children's musical experience. At the Sony Computer Science Laboratory, an innovative system, the Continuator, was developed that is able to produce music in the same style as the person playing the keyboard. The ability of the system to attract and hold the attention of children can be interpreted through the theory of flow (Pachet 2005).

Aims
The DiaMuse project is being carried out to study the interaction between young children and the Continuator. Experimental results have shown micro-processes similar to those observed in child/adult interaction (Stern 1985, Imberty 2005, Addessi & Pachet 2005). During the interaction with the system, the children seem to reach high levels of "well-being" and creativity, very similar to those described by Csikszentmihalyi. An observation grid was therefore created in order to analyse in detail the emotive tones described in the Theory of Flow.

Method
The experimental protocol was realised in a kindergarten with young children aged 3-5 years playing with the Continuator. Three sessions were held, once a day for 3 consecutive days. In every session, the children were asked to play on the keyboard in 4 different situations: with the keyboard alone, with the keyboard connected to the Continuator, with another child, and with another child and the Continuator. All sessions were video recorded.

Results
The data show the presence of behaviours correlated with all the variables considered in the grid: focused attention, concentration, clear-cut feedback, control of the situation, intrinsic motivation, excitement, change in the perception of time, clear goals, pleasure, involvement, and socialization with the partner. In particular, we found that the proportion of time in which the flow state was present was higher when the child played alone with the Continuator. We also noticed the presence of the flow indicators observed by Custodero (2005) in musical experiences.

Conclusions
The flow experience that the Continuator, or other similar interactive reflective musical systems, can generate during the interaction constitutes one of the prerequisites for enhancing musical creativity and personal music improvisation styles. This is an important result since, despite its great importance, teaching improvisation is still rarely tackled in Western formal music education. We will discuss the results and show some video examples.

Key words: Musical creativity, Continuator, Theory of flow

[email protected]

67.3 Engaging classrooms: Flow indicators as tools for peda-gogical transformation

Lori Custodero1, Lelouda Stamou2

1Teachers College, Columbia University, USA
2University of Macedonia in Thessaloniki, Greece

Maintaining the state of flow requires that, as challenges are met with existing skills, both skills and challenges increase; hence maintaining flow experience is linked to learning. The relationship between flow experience and learning implies that a teacher's ability to recognize indicators of such experience in their students would lead to improved effectiveness. Custodero's (1998, 2005) observable flow indicators have been researched in preschool music classrooms (Custodero 1998, 2002, 2005; St. John, 2004) and studio voice lessons (Matthews, 2002), resulting in findings with strong implications for teaching. These include a new appreciation of the learner-as-agent who can creatively interpret and expand upon musical materials teachers provide, the role of peers in providing models, and the need for time to achieve mastery through invention. The current study aims to investigate the meaning and usefulness of flow indicators for primary grade music teachers in their pursuit of more effective teaching and learning.

A computer-based instructional program, involving video clips and coding protocols for the flow indicators, is being piloted for the study, which will run the first two weeks of March 2006 in Thessaloniki, Greece. The indicators include those situated in the individual learner and considered to be linked to Challenge-seeking (Self-assignment, Self-correction, and Deliberate Gesture), and those situated in the learner's response to instruction and considered to be Challenge-monitoring (Anticipation, Extension, and Expansion of the presented materials). Approximately 20 teachers will participate in pre- and post-implementation interviews, and children's musical engagement will be videotaped before and after a teaching intervention lasting one week. As action research, this intervention will be designed by music teacher-graduate researcher teams based on their interpretation of the first videotape, revealing potential for increasing flow experience in their particular classrooms. Teachers will be interviewed and will keep journals charting the recognition of flow experiences in specific children doing specific tasks for the duration of a week; self-reporting data in the form of artwork, text, and focus groups will be collected from children. Multiple perspectives will be compared to ascertain pedagogical implications of flow in music classrooms.

Key words: Flow experience, Teaching, Learning

[email protected]


Friday, August 25th 2006

67.4 In the mood: Exploring flow states in musicians

Sarah Sinnamon, Aidan Moran

University College Dublin, Ireland

Flow is the fully satisfying, coveted, and yet elusive state of mind which underlies peak performance and peak experience. Not surprisingly, given their practical and theoretical importance, such experiences have attracted considerable research attention from psychologists (e.g., see Kimiecik & Jackson, 2002) since the pioneering work of Csikszentmihalyi (1975). However, the research has almost exclusively been carried out in the field of sport psychology. The present study investigates musicians' experience of the flow state, and how cognition and emotion interact among expert musicians. In-depth interviews and surveys have been carried out with expert musicians and student musicians from three traditions of music: Classical, Jazz, and Traditional Irish.

Results to date show that musicians do experience flow and that, whilst there are some similarities between the experience of musicians and the experience of athletes, there are also important differences, for example the expression of emotion in music, rapport with the audience, and experience of flow during practice.

Findings suggest that the experience of flow is a system incorporating antecedents (e.g., high levels of preparation, high arousal, and feelings of risk-taking, confidence, and love of the music), the experience of flow itself (e.g., total absorption and focus, feelings of ease and control, ability to express emotions, feeling at one with the instrument, and lack of anxiety), and consequences (e.g., total fulfillment, high level of emotion, a desire to recapture the feeling).

Results of a survey of student musicians suggest that even young children may experience flow and that, similar to expert musicians, this has a crucial impact on their subsequent activity in music and their motivation to practice, play, and perform.

These and other findings are discussed in relation to current theories and research in this field.

Key words: Flow, Music performance, Emotion

[email protected]


68 Pitch IV

68.1 Are scale degree "qualia" a consequence of statistical learning?

David Huron

Ohio State University, USA

Ten musicians were asked to describe the quality or character evoked by different scale tones in the Western major scale. For example, the "mediant" pitch was variously described by participants as "light," "lifted," "restful," "peaceful," "warm," "bright," or "calm." An informal semantic analysis of the descriptive terms suggests that most of the descriptors can be grouped into seven categories: (1) certainty/uncertainty, (2) tendency, (3) completion, (4) mobility, (5) stability, (6) power, and (7) emotion.

Independently, statistical data were collected on the probabilities of different scale tones in a large sample of Western music in the major key. Collected data included the first-order probabilities for different tone successions. Some tones (such as the dominant) show considerable flexibility. In addition, data were collected on scale tones that are most likely to terminate a phrase or work.

There appears to be a strong association between the subjective "qualia" reported by musicians and the basic statistical properties of scale tones for Western music. Those scale tones characterized by musicians as "tending," "leading," or "pointing" are most likely to be those tones that are objectively most highly constrained in their first-order probabilities. Scale tones described as "stable" or "restful" are most likely to be those tones that exhibit a high probability of terminating a phrase or work. Most of the qualia categories, including certainty, tendency, completion, mobility, and stability, appear to be readily interpreted as relating to the statistical properties of tones and tone successions. These findings are consistent with the idea that the subjective experience evoked by tones may be attributable to statistical learning.
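The kind of first-order (bigram) statistics described above can be sketched as follows. This is a hypothetical illustration only, not the author's corpus or code; the toy melodies and the `first_order_probabilities` helper are invented for the example:

```python
from collections import Counter, defaultdict

def first_order_probabilities(melodies):
    """Estimate first-order transition probabilities between scale degrees.
    `melodies` is a list of melodies, each a list of scale-degree numbers (1-7)."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1  # count each successive pair (bigram)
    probs = {}
    for degree, followers in counts.items():
        total = sum(followers.values())
        probs[degree] = {b: n / total for b, n in followers.items()}
    return probs

# Hypothetical toy corpus: two short melodies as scale degrees.
corpus = [[1, 2, 3, 4, 5, 4, 3, 2, 1],
          [5, 6, 7, 1, 7, 1, 2, 1]]
probs = first_order_probabilities(corpus)
# In this toy corpus the leading tone (7) is maximally constrained:
print(probs[7])  # {1: 1.0}
```

A "tending" tone in Huron's sense would show a distribution like `probs[7]` here, concentrated on a single continuation, whereas a flexible tone such as the dominant would spread its probability mass over many successors.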

Key words: Tonality, Qualia, Statistical learning

[email protected]


68.2 Functional differences between the tonotopic and periodic cues in discrimination of melodic pitch intervals under a diatonic context

Toshie Matsui1, Minoru Tsuzaki2

1Graduate School of Music, Kyoto City University of Arts
2Faculty of Music, Kyoto City University of Arts

Background
Previous research has revealed structural and functional differences in musicians' brains compared to those of non-musicians (Pantev et al., 1998; Schlaug, 2001; Gaab et al., 2003; Schneider et al., 2002) that correlate with starting age, duration and intensity of training, and specific instrument played. These findings support an environmental explanation. However, in order to determine directly whether these differences are inherent or due to training, we are conducting a longitudinal study in 5-7 year-olds as they begin study of a keyboard or stringed instrument.

Method
40 5-7 year olds (24 instrumental, 16 non-instrumental controls) underwent cognitive, motor, and music tests and a functional MRI study at baseline (t1), and again approximately 15 months later (t2). The behaviorally-controlled functional MRI study required rhythmic (RD) and melodic (MD) discrimination with a button-press response (see Overy et al., 2004). During the 15 months of observation, the instrumental group had weekly lessons and kept a record of their practice time. Image pre-processing and individual estimations were conducted in SPM99 using the same parameters for each time point. Random effects analyses were used to estimate effects within groups over time and between groups at each time point.

Results
There were no pre-existing between-group differences (Overy et al., 2004; Norton et al., 2005). At baseline (t1), both groups showed bilateral activation in the superior temporal gyrus (STG) for both MD and RD tasks compared to the motor control tasks (p<.05, FDR corrected). 15 months later (t2), a direct comparison between the two groups revealed no significant between-group differences (p<.05, FDR corrected) for any of the contrasts. However, within-group analyses revealed significant differences over time for the instrumental, but not the control, group. These changes were seen in the STG bilaterally (right more than left), including the primary and adjacent secondary auditory cortex, the right lateral cerebellum, and a small region in the left inferior frontal cortex for the RD vs. motor control contrast (Fig. 1). The MD vs. motor control contrast showed significant changes in the left STG including the primary auditory cortex, a region in the middle temporal gyrus (BA 21), and a strong change in the anterolateral parts of the right STG (BA 38) (Fig. 1).

Conclusions
This study revealed functional brain changes after 15 months of musical intervention. The changes found in auditory and auditory association regions, as well as in the cerebellum and inferior frontal brain regions, in 5-7 year old musicians are seen in the same regions where functional (Pantev et al., 1998; Schneider et al., 2002; Gaab et al., 2003) and structural brain differences (Gaser and Schlaug, 2003; Schneider et al., 2002) have been found between adult musicians and non-musicians.

Acknowledgement: Supported by grants from the National Science Foundation, the Dana Foundation, the International Foundation for Music Research, and the Grammy Foundation.


Key words: Tonotopic and periodic cue, Melodic interval, Tonality

[email protected]

68.3 Judgments of distance between trichords

Nancy Rogers, Clifton Callender

Florida State University, USA

Background
Musicians often refer to the perceived distance between two chords, but it is unclear how intuitions of distance are formed and which factors are most influential. Contrapuntal voice leading (the intervals by which individual notes move) is traditionally assumed to play a key role: music theorists, including Cohn, Lewin, Roeder, and Straus, have typically measured the distance between two chords by simply summing the displacements of each voice (the "taxicab metric"). This approach assumes that displacements sum in a linear manner.

Aims
The established measures of chordal distance ignore many factors that may influence our judgment, including direction of motion (up/down), relative motion of voices (similar/contrary), tuning environment (standard/microtonal), and tonal implications. The aim of our research is to test the assumption of linearity and the strength of these factors in judgments of distance between chords.

Method
In this experiment, 20 participants listened to pairs of trichords and rated their perceived musical distance. The trichords were presented in Shepard tones to eliminate the effect of register, and the tuning environment randomly varied between the standard 12-tone universe and a microtonal universe of up to 48 tones. Other variables such as the direction of motion and the relative stability of sonorities were controlled.

Results
As predicted, increasing the total sum of motion created a perception of greater distance. However, the size of displacements only correlated positively with judgments of distance in the microtonal tuning environment, while the number of moving voices only correlated positively in the standard tuning. Listeners' judgments of distance were greater for descending motion than ascending motion in both standard and microtonal environments. In standard tuning, contrary motion produced a greater sense of distance than did parallel motion. This effect was not observed in the microtonal environment.

Conclusions
While distance judgments overall were correlated with the "taxicab metric," our results imply that this metric underemphasizes common tones in standard tuning and underemphasizes displacement size in microtonal tunings. This suggests that displacements do not necessarily combine in a static, linear manner. Additionally, other factors, particularly direction, had noticeable effects on perceived distance and should be considered in theoretical models.
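The "taxicab metric" discussed above can be sketched as follows; this is a hypothetical illustration, not the authors' implementation, and it assumes a fixed voice-to-voice correspondence between the two chords:

```python
def taxicab_distance(chord_a, chord_b):
    """Sum of absolute voice displacements (in semitones) between two
    chords, with voices paired positionally."""
    if len(chord_a) != len(chord_b):
        raise ValueError("chords must have the same number of voices")
    return sum(abs(a - b) for a, b in zip(chord_a, chord_b))

# C major triad (C4, E4, G4) moving to an A minor triad voicing (A3, E4, A4),
# expressed as MIDI note numbers:
print(taxicab_distance([60, 64, 67], [57, 64, 69]))  # 3 + 0 + 2 = 5
```

Note that the metric is linear by construction: a single voice moving five semitones scores the same as five voices moving one semitone each, which is precisely the assumption the experiment probes.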

Key words: Distance, Trichord, Voice-leading

[email protected]


68.4 The effect of categorization training on pitch contour judgments in Shepard tones

Bettina Keresztesi, Kate Stevens

MARCS Auditory Laboratories, University of Western Sydney, Australia

Studies in the visual domain have found that perception is often biased by "nonspecifying" variables. A variable is considered nonspecifying "if a single value of the variable can go together with several states of the property" (Jacobs & Michaels, 2001, p. 564). In the case of pitch perception, chroma (i.e., C, C#, D, D#, E, F, F#, G, G#, A, A#, B) is a nonspecifying variable because musical notes that are an octave apart have the same pitch chroma, even though they correspond to different fundamental frequencies (i.e., F0, the specifying variable). The effect of chroma on pitch perception is demonstrated in Shepard tones. Shepard tones consist of 10 sinusoidal partials spaced at octave intervals with a fixed, bell-shaped spectral envelope of sound level. As a result of this structure, the pitch contour of two successively presented Shepard tones is determined by the dimension of pitch chroma rather than F0.
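A Shepard tone of the kind described above can be synthesized roughly as follows. This is a sketch under stated assumptions: the envelope width, centre frequency, sample rate, and duration are illustrative choices, not the stimulus parameters used in the study, and `shepard_tone` is a hypothetical helper:

```python
import math

def shepard_tone(chroma_hz, n_partials=10, sr=44100, dur=0.5,
                 center_hz=440.0, sigma_octaves=1.5):
    """Generate one Shepard tone: `n_partials` sinusoids at octave
    intervals above `chroma_hz`, weighted by a fixed bell-shaped
    (Gaussian-in-log-frequency) spectral envelope."""
    samples = [0.0] * int(sr * dur)
    for k in range(n_partials):
        f = chroma_hz * (2 ** k)
        # Bell-shaped amplitude envelope, fixed in log-frequency:
        amp = math.exp(-(math.log2(f / center_hz) ** 2) / (2 * sigma_octaves ** 2))
        for i in range(len(samples)):
            samples[i] += amp * math.sin(2 * math.pi * f * i / sr)
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]  # normalize to [-1, 1]

tone = shepard_tone(32.7)  # C1 as the base chroma frequency
```

Because the envelope is fixed while the partials shift, two successive tones a semitone apart share almost the same spectral region, which is why listeners track chroma rather than F0.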

It is suggested that the effect of chroma on perception originates in category learning. According to the principles of category learning, the perceptual system becomes sensitized to category-relevant dimensions. Musical categories (i.e., melodic and harmonic intervals) are determined by the number of semitones that separate a pair of tones rather than the actual distance between tones measured in F0.

The aim of the two experiments reported here was to alter the perception of Shepard tones through categorization training. In Experiment 1, the category-relevant dimension was the frequency change in F0 (i.e., the specifying variable); in Experiment 2 it was the frequency contour of the loudest partials (i.e., the nonspecifying variable). In both experiments it was hypothesized that training alters pitch contour judgments in the direction of the category-relevant dimension. The hypothesis was supported in Experiment 1, as participants were successfully trained to develop sensitivity to F0. On the other hand, categorization training in Experiment 2 did not increase participants' sensitivity to the frequency contour of the loudest partials. It is suggested that participants failed to develop sensitivity to the dimension of the loudest partials because in Shepard tones it conveys information inconsistent with the information carried by the F0.

Key words: Pitch, Shepard tones, Categorization training

[email protected]

68.5 Musically Puzzling: Sensitivity to global harmonic and thematic relationships

Roni Granot

The Hebrew University of Jerusalem, Israel

Background
A number of studies have shown that listeners are insensitive to global harmonic relationships.


Cook (1987) has proposed that such relationships are more conceptual than perceptual, serving mainly as guidelines for the composer rather than the listener. However, it is likely that judgments concerning the overall structure of complex musical materials require a more thorough acquaintance with the musical piece than is provided for in such studies, in which different versions of a piece are presented only once.

Aims
Here, I examine whether listeners show sensitivity to global harmonic relations in an active task (a musical puzzle) in which listeners are free to explore different combinations of the various sections of the piece, taking into account additional information stemming from the thematic materials.

Method
Two quasi-monothematic sonata-form movements from the classical period (Mozart's piano sonata K.571 in B flat Major and Haydn's piano sonata No. 34 in E minor) were selected. Both selections were dissected into their constituent parts according to the sonata structure (theme 1, bridge, theme 2, closing theme, development cut into two sections, and parallel sections in the recapitulation). 83 participants (25 musicians) and 68 participants (25 musicians) received a randomized version of the Mozart and the Haydn piece, respectively. All participants were required to rearrange the CD tracks so as to create a musically logical piece. Participants were given as long as necessary to complete the task.

Results
Partial results show that participants are sensitive to closing gestures and, to a lesser degree, to opening gestures, which compete with order effects. In addition, listeners are sensitive to the instability of the development sections. Thematic relations seem to drive local combinations of sections. However, despite the active task and extended exposure to the musical excerpts, listeners do not show a global harmonic sensitivity. Expertise is reflected in increased local harmonic sensitivity and some extension of the span of sensitivity from two successive correct items to sequences of 3-4 correct combinations.

Conclusions
Results suggest that the priority of local over global processing, as found in studies such as Bigand et al. (1999) and as suggested by aestheticians such as Levinson (1998), is consistent with listeners' perception even when listeners are highly familiar with intra- and inter-opus stylistic constraints.

Key words: Overall structure, Harmonic relationships, Thematic relationships

[email protected]

68.6 Context affects chord discrimination

Tuire Kuusi

Department of Composition and Music Theory, Sibelius Academy, Finland

Background
Many studies on the perception of nontraditional chords have shown the importance of consonance, which is connected with set-class properties. Yet chordal characteristics, such as transpositional level and the actual setting of the pitches, are also important for consonance and, hence, for perception.


Aims
In this study chord discrimination was examined in context. The degree of consonance of the context was systematically varied. The study examined whether chords were discriminated according to set-class structure more often in a consonant or in a dissonant context, and what the role of chordal characteristics was for discrimination.

Method
The chords of the study were derived from 15 pentad classes, each of which was represented by 9-14 pentachords and a number of transpositions. The items were five-chord sequences where four chords represented one set-class (the context) and one chord represented another set-class (the deviant). The expert subjects (students of music theory or composition) were asked to aurally select the one chord that, in their opinion, did not belong to the sequence.

Results
The discrimination was guided by the difference in the degree of consonance between the context and the deviant: with marked differences the discrimination was systematic (the deviant was selected in 90.9% of the cases), and with subtle differences the discrimination was only a little above chance level (26.3%). When the differences in consonance were relatively subtle or relatively marked, the discrimination accuracy differed significantly as a function of the context: if the context was more dissonant than the deviant, 71.4% of selections were made according to the set-class; if the context was more consonant than the deviant, the percentage was only 47.3%.

Conclusions
The results indicated the importance of the degree of consonance between the context and the deviant. However, the context itself was also important, especially when the differences in consonance were neither marked nor subtle. If the context was relatively dissonant, the deviant chord (and set-class) was selected because it was more consonant than the context chords; but within the relatively consonant context the chordal characteristics were evidently analysed, and they guided the selections as often as the set-class structure.

Key words: Chord discrimination, Consonance, Set-class

[email protected]


69 Performance II

69.1 The advantage of being non-right-handed in a piano performance task (sight reading)

Reinhard Kopiez1, Niels Galley2, Ji In Lee1

1University of Music and Drama, Hanover, Germany
2University of Cologne, Dept. of Psychology, Germany

Background and Aims
In this study, the unrehearsed performance of music, known as "sight reading", is used as a model to examine the influence of motoric laterality on highly challenging musical performance skills. As expertise research has shown, differences in this skill can be partially explained by factors such as accumulated practice and an early start to training. However, up until now, neurobiological factors (such as laterality) that may influence highly demanding instrumental performance have been widely neglected.

Method
In an experiment with 52 piano students at a German university music department, we could show that the most challenging musical skill, sight reading (which is characterized by extreme demands on the performer's real-time information processing), is positively correlated with decreasing right-hand superiority of performers. Laterality was measured by the differences between left- and right-hand performance in a speed tapping task. Sight reading achievement was measured using an accompanying task paradigm.

Results
An overall superiority of 22% for non-right-handed pianists was found. This effect is gender-related and stronger in non-right-handed males (r(24) = -.49, p < .05) than in non-right-handed females (r(28) = -.16, p > .05).

Conclusions
We conclude that non-right-handed motoric laterality is associated with neurobiological advantages required for sight reading, an extremely demanding musical subskill.

Key words: Music performance, Sight reading, Perception

[email protected]


69.2 Emphasizing voices in polyphonic organ music: Issues of expressive performance on an instrument with fixed tone intensity

Bruno Gingras

Schulich School of Music, McGill University, Montreal, Canada

This study sought to identify the expressive means used by organists to emphasize a specific voice in a polyphonic organ piece. Piano performance research has shown that performers emphasize a melody by playing its notes louder and earlier than nominally simultaneous notes in other voices. Since dynamic differentiation is impossible on an instrument with fixed tone intensity such as the organ, it is hypothesized that organists rely on articulation differentiation between voices to a greater extent than pianists. An experiment was carried out to investigate the means used by organists to communicate voice-specific melodic emphasis. Eight organists were asked to perform a short Baroque polyphonic piece on an organ equipped with a MIDI console. Three conditions were tested, each requiring performers to emphasize a particular voice in the piece (soprano, alto, tenor). Three parameters were analyzed: note onset asynchrony, rubato, and articulation (overlap). Although the results indicate that organists may use note onset asynchrony as an expressive parameter for specific voice emphasis, the onset asynchronies were much smaller than those observed in piano performance, and were generally too small to be perceptible. The present study thus appears to validate the thesis that melody lead is a byproduct of dynamic differentiation between voices (the "velocity artifact" hypothesis). Variations in the spread of rubato were observed across voices, but no systematic attempt to differentiate between voices according to the melodic interpretation could be detected. Variation in overlap patterns was found to be the most widespread and consistent strategy used by organists to emphasize a voice. Specifically, a voice was generally performed in a more staccato manner when it was emphasized than when it was not. Moreover, on a note-by-note basis, different organists modified their articulation patterns according to the interpretative condition in a similar fashion. I propose that organists are attempting to create a figure/ground separation in which the non-emphasized voices, played legato, recede into the background, while the note onsets of the emphasized voice, preceded by longer gaps, become more salient.

Key words: Expressive performance, Organ, Polyphonic music

[email protected]

69.3 Variation in expressive physical gestures of clarinetists

Bradley Vines1, Ioana Dalca1, Marcelo Wanderley2

1Department of Psychology, McGill University, Canada
2Department of Music Technology, McGill University, Canada

BackgroundMusicians’ expressive physical/body movements carry information about mental states, perfor-mance intentions, and musical interpretation of the score. There is evidence that expressive ges-tures are ingrained in musicians’ memory for music - performers will tend to repeat the same

Page 525: Abstract Book

Friday, August 25th 2006 525

sequence of gestures each time they play a particular piece of written music. Furthermore, musi-cians’ body movements convey information that can reinforce or contradict the information con-veyed by sound, thus influencing the overall experience for observers and providing an additionalcommunicative link between performer and audience. By investigating musicians’ body move-ments, we may gain further insight into the mental processes and communicative mechanismsunderlying musical performance.

Aims
In the present study, we sought to explore variation in clarinetists' expressive body gestures. We investigated changes in movement pattern across repeated performances of a musical segment, played with three different expressive intentions.

Method
Three clarinetists performed a segment of a musical score multiple times (n = 6), using three different performance manners: immobile (while restraining body movement as much as possible), standard, and exaggerated (with intentional over-exaggeration of expressiveness). Additionally, one of the clarinetists repeated the series of performances after a six-month interval. The musical segment was drawn from Stravinsky's second movement for solo clarinet. We used an Optotrak system to record the location of sensors placed on each musician's body and clarinet, in three dimensions over time. The resulting data were modeled using functions of time, and analyzed with Functional Data Analysis techniques. We applied a linear warping strategy to align the performances temporally, which facilitated comparison across performances.

ResultsThe musicians’ body-gesture patterns remained consistent across repeated performances of themusical segment, even with an intervening period of six months. The timing of the movementpatterns also remained stable across variation in expressive intention. However, the magnitudeof the clarinetists’ gestures, as measured with Optotrak, varied over time and with expressiveintention. In general, the magnitude of movements was found to increase with increasing intendedexpressiveness, and after a 6 month interim, but only at particular points in time that were relatedto structural features in the Stravinsky score.

Conclusions
This research supports the hypothesis that musicians' movement patterns are intertwined with the sound in their mental representation of a musical piece. Results suggest that a clarinetist's intended expressiveness influences body movement at specific points in the musical score, and that the timing and magnitude of expressive gestures might vary independently.

Key words: Music performance, Expressive gesture, Motion tracking

[email protected]

69.4 The effects of concert dress and physical appearance on perceptions of female solo performers

Noola Griffiths, Jane Davidson

Department of Music, University of Sheffield, UK.


A literature review demonstrates the importance of visual information in performance. Dress and appearance carry complex cultural labels that, through commonly held societal stereotypes, lead to assumptions about performers' abilities and intentions before they play a note. There are differences in men's and women's dress that may put female classical soloists at a disadvantage that could affect their musical advancement. This study investigates the roles of concert dress and physical attractiveness in audience perceptions of performance quality, focusing on solo women violinists of the Western Classical tradition. Four female violinists were filmed playing in four states of dress: Jeans, a Nightclubbing dress, black Concert dress, and a Point-light condition (body movement is apparent but not physical appearance). Each clip was recorded in two conditions: both as the Performer's own version and with a mastertrack dubbed over the top. The dubbed versions therefore had a constant musical soundtrack. Fifteen male and 15 female participants (age range 17-66 years) were asked to rate clips on six-point scales in terms of Technical Proficiency, Musicality, Appropriateness of Dress, and Attractiveness of Performer. Significant effects of Dress, Performer, Gender, and Condition on participant perceptions were found. Implications of these perceptions suggest that observers have a strong concept of what constitutes appropriate dress for a female recitalist, as the Concert dress was overwhelmingly favoured above the Nightclubbing dress and Jeans. Low ratings of Point-light clips show that visual information is essential for the audience to fully understand performer intentions. Performer 3's uneven profile shows high ratings for mentally dominant scales (Musicality and Technical Proficiency) and lower ratings for physically dominant scales (Appropriateness of Dress and Attractiveness of Performer); this is evidence in support of the continuation of a mind/body split, in which creative characteristics are valued over physical characteristics. Male observers' lower ratings of Technical Proficiency and Musicality are thought to stem from objectification of performers displaying a body-focussed image.

Key words: Concert dress, Women, Stereotypes

[email protected]


70 Education VIII

70.1 Notational strategies as representational tools for sense-making in music listening tasks: Limitations and possibilities

Mark Reybrouck

University of Leuven, Section of Musicology and Center for Instructional Psychology and Technology, Belgium

Background
Traditional musicological research has focussed mainly on score analysis rather than on sounding music. Yet, formal-conventional notation has several shortcomings in relying almost exclusively on symbolic representation of music as part of a conceptual system. Notation and notational strategies should be considered not only as analytical tools, which are, in a way, post-fact abstractions of the sounding music, but as tools for sense-making.

Aims
This paper argues for a processual approach to dealing with music. Notation and notational strategies are considered as representational tools for sense-making out of the perceptual flux. Rather than relying on existing forms of symbolic notation, it is argued that notational systems should be self-constructed, with the development of better listening and processing strategies as a major aim. As such, listeners are invited to make a selection as to what they hear and how they represent what they consider to be relevant.

Main contribution
The major contribution of this paper is to bring together theoretical grounding and empirical evidence on the use of self-invented notations while listening to music. A central focus is the semiotic claim of knowledge construction and symbolizing as a process of sense-making, with as key elements the informal, self-construed ways of symbolizing and the use of symbols as cognitive tools that go beyond a mere representation of the outer world. A first issue is related to what listeners actually represent and how they represent the elements they consider to be relevant. A second issue is how to categorize these representations, both their contents and their forms of expression. A third issue deals with representational adequacy and the actual motivation behind the representations. And a final issue is related to how to improve the level of sophistication of the representations. All these questions are approached against the background of theoretical grounding and of previous and ongoing empirical research.

Implications
The assessment of self-produced music representation is a challenging area of research. Rather than providing ready-made and static representations, as in a musical score, it aims at uncovering the underlying strategies for sense-making. This area of research raises theoretical, methodological and educational issues. Besides a better understanding of actual representational competence by means of further ascertaining studies, there is an urgent need for design experiments aimed at improving music education by helping children to gradually and actively build up music cognition out of their more intuitive and informal ways of representing music.

Key words: Musical representations, Sense-making, Listening strategies

[email protected]

70.2 Vocal creativity: How to analyse children’s song-making processes and their developmental qualities

Stefanie Stadler Elmer

University of Zuerich, Switzerland

This contribution has three aims: First, a new methodology for analysing and representing children’s singing is introduced and demonstrated with case studies. Second, this method is used for a microgenetic study of how children adapt to our vocal culture (music, language) while learning new songs. Third, the data of the microgenetic studies serve to discuss theoretical issues concerning musical development.

The analysis of children’s vocalisations is an intricate issue, because it involves a configuration of several simultaneously present parameters. The most important ones are timing, pitch, and syllables. In addition, the social context has to be taken into account. A reliable and valid methodology for analysing children’s vocal musical expression is important for gaining a better understanding - e.g. a theory - of the development of this universal and elementary behaviour.

Case studies from a quasi-experimental setting are presented: Children were taught seven newly composed songs. Eventually, they invented new ones. The data of the case studies contain a child’s entire process of learning each of the new songs. The interaction during the teaching process is analysed, and a child’s consecutive production is analysed in detail. For this, an acoustic computer program is used. Then, the data are represented by a notation system that is more differentiated than the conventional one. This methodology allows insights into the structural aspects of how children work out their song making. As a result, two such microgenetic studies are presented here. The learning processes are compared and discussed from a developmental point of view. Within a theoretical framework, common and different strategies and qualities are identified and discussed. Following the Piagetian tradition, it is assumed that children construct new structures on the basis of already acquired ones. They adapt actively and gradually to the cultural conventions - here with regard to music, language, and social rules - by imitation (accommodative) and by playing (assimilative) with the organisation of their own actions. The children’s strategies in making songs resemble adult composers’ creative strategies: repetitions and variations predominate. Yet, children still act at sensorimotor and operational levels, whilst composers build up habits of working with imagination.


Key words: Vocal development, Creativity, Methodology

[email protected]

70.3 Organists’ strategies for musical learning and performance in improvisation

Karin Johansson

Malmö Academy of Music, Lund University, Sweden

Background
Western art music has historically had two interactive strands: interpretation and improvisation. Today, though, the tradition of classical music rests mainly on interpretations of written music, and improvisation has gradually disappeared from instrumental music teaching. Furthermore, the topic of improvisation is underrepresented in musicological research, especially in the field of Western classical music (cf. Gabrielsson, 1999). The art of organ improvisation can here be seen as a reminiscence, but also as a development, of a musical practice and a way of learning which is still existent and flourishing.

Aims
The purpose is: a) to further explore the phenomenon of individual professional improvisation, as expressed verbally and musically by organists; b) to investigate the relationship between musical strategies and performance contexts. The dual research perspective on improvisation, as constructed by the participants and as interpreted by the researcher, aims at extending knowledge of the creative and learning processes involved.

Method
Ten professional organists improvising at a high musical level participated in the study, which took place on two occasions: one practice session combined with a semi-structured interview, and one public musical performance followed by a concluding interview. Both were documented on digital video (DV) and recorded on MiniDisc (MD). The data thus consist of verbal statements and performed music. They were submitted to technical and interpretive analysis, e.g. by using a computer program for qualitative content analysis (HyperResearch). Defining organ improvisation as a discursive practice in which the discourse on music as well as the discourse in music is studied (Folkestad, 1996) also meant applying multi-faceted perspectives on discourse analysis (e.g. Potter, 1996).

Results
In the study, organ improvisation is described as a practice simultaneously dependent on and independent of notation, and as related to rather than opposed to interpretation. This attitude seems to encourage a multitude of learning and creative approaches. These will be exemplified in detail at the conference, together with the presentation of further results.

Conclusions
Knowledge about professional learning and creative strategies in improvisation may be a valuable asset in the process of maintaining and developing the Western musical heritage and music education.

Key words: Improvisation, Creativity, Tradition

[email protected]


70.4 Preschool children’s peer teaching: A case study of interactive operation

Yoko Ogawa1, Tadahiro Murao2

1Tottori University, Japan
2Aichi University of Education, Japan

Background
The purpose of this study was to clarify how preschool children teach each other. Many researchers have pointed out that children use plenty of strategies when they teach each other in groups. According to Brand, child-teachers used a broad range of strategies to bring about learning in their peers, and over 80% of the time was devoted to dialogue between a pair of children (1999, 2003). However, we observe that children playing a new game often give no advice to each other. If they want to join the new game, they try to imitate and act the same way with no words. In order to look into this phenomenon more deeply, we conducted a semi-structured experiment.

Aims
For the current study, the following 3 points are examined: (1) by what means do preschool children teach each other how to play a glockenspiel? (2) how do preschool children use verbal and nonverbal tools with each other? (3) are there any differences between adult teaching and child teaching?

Method
Experiment 1. Subjects: The participants were 10 pairs of preschool children, aged 4, 5 and 6 years. A younger child (average = 4.3 years) and an older child (average = 6.3 years) made one pair. All children were drawn from 2 kindergartens in the southwest Tottori area. They had never received private music lessons. Musical stimuli: The beginning phrase of the famous song “Twinkle twinkle little star” was used in the task. Procedure: The younger children were asked to learn how to play a glockenspiel and the older children were asked to teach it to them. The glockenspiel is a very popular and attractive musical instrument among children because it is very difficult to play. The children’s entire teaching-learning interactions were recorded by 2 video camera recorders (SONY DCR-TRV50).

Experiment 2. Subjects: The participants were 8 pairs of adults and preschool children: 8 adults aged 20 to 23 years (average = 21.6) and 8 children aged 4 and 5 years (average = 4.9). All adults were music major students, and all children were drawn from the same kindergartens as in Experiment 1. Musical stimuli: The musical stimuli were the same as in Experiment 1. Procedure: The children were asked to learn how to play a glockenspiel and the adults were asked to teach it to them.

Results
The older children’s teaching was active; however, they used no verbal communication and gave no suggestions to the learners. The strategy of the younger children was only to watch, repeat, and imitate. On the other hand, all adults taught the children politely. There was a considerable difference in teaching strategies between adults and children.

Conclusions
These findings suggest that there is a point in common between some teaching strategies used by older children and those of masters of Japanese traditional music. Younger children can guess, understand and follow older children, although the older children gave no advice.


Key words: Peer teaching, Teaching strategy, Preschool children

[email protected]


71 Emotion IV

71.1 Implicit measures of musical emotion

Daniel Västfjäll1, Patrik Juslin2

1Department of Psychology, Göteborg University, Sweden
2Department of Psychology, Uppsala University, Sweden

Music arouses strong emotions, yet it is difficult to measure the experience of musical emotions systematically. Today, many researchers interested in musical emotion rely only on self-report measures. Such measures, however, are fraught with problems and disadvantages (e.g. demand characteristics and the need for subjective awareness of the feeling state). More recent research (Juslin & Västfjäll, 2006) has suggested that multiple measures of emotion should be used to establish with certainty that a listener is experiencing a particular emotion (i.e. self-reports, physiological measures and behavioral data).

In this paper we focus on behavioral data related to the experience of musical emotion. More specifically, the aim of this paper is to present a review of implicit behavioral measures of musical emotion, such as word association, decision time, or distance judgment. A systematic review of previous literature suggests that four broad classes of implicit measures may be useful for studying musical emotions: 1) psychomotor, 2) motivational, 3) information processing, and 4) judgment/behavioral tasks. The paper will give empirical examples of the different classes of measures along with suggestions for how to use such measures in the study of musical emotion.

Key words: Musical emotion, Measurement, Implicit measures

[email protected]

71.2 Explaining the total sonority and affective valence of chords

Norman D. Cook, Takashi X. Fujisawa

Department of Informatics, Kansai University, Takatsuki, Osaka, Japan


Much psychophysical research was done in the 1960s on the perception of musical intervals. Subsequent experimental and theoretical work by Terhardt, Parncutt, Huron, Roederer, Tramo, Lerdahl and others has been technically more sophisticated, but without answering two basic questions about music psychology: Why do most normal (musician and non-musician) subjects agree on the relative sonority of triads (major >= minor > diminished » augmented) (Roberts, Acustica, 62, 163-171, 1978)? And why do most subjects (4 years and older, from cultures of the East and West, over the last few centuries) indicate that major and minor chords evoke positive and negative emotions, respectively? Starting with Meyer’s ideas from Gestalt psychology (Emotion and Meaning in Music, Chicago University Press, Chicago, 1956), we have developed a model of harmony perception that includes 3-tone effects (Cook, Tone of Voice and Mind, Benjamins, Amsterdam, 2002). Not only does the model explain the relative sonority of all 3-tone chords; we maintain that it is the only model that even attempts to explain the affective valence of the major and minor triads on a psychophysical basis (Cook, Fujisawa & Takami, IEEE Transactions on Speech & Audio Processing, 14, 1-10, 2006). The model is built upon Meyer’s idea that the perception of the relative size of neighboring pitch intervals determines the “stability” of any 3-tone combination. The most salient feature of interval combinations is “intervallic equidistance” (e.g., the augmented chord), which is perceived as unstable because the two intervals have a mid-lying tone that cannot be unambiguously grouped with either the higher or lower tone. In fact, the upper-partial structure of all triads shows little intervallic equidistance in the major and minor chords, and an abundance in the diminished, augmented, suspended-fourth, etc. chords. Moreover, consideration of relative interval size explains the positive/negative affect of the major/minor chords on the basis of the “frequency code” - i.e., the “sound-symbolism” of falling/rising pitch that is well known from both animal vocalizations and human language intonation (Ohala, Phonetica, 41, 1-16, 1984; Sound Symbolism, Cambridge University Press, New York, 1994, pp. 325-347).

Key words: Harmony, Major/minor, Psychophysics

[email protected]

71.3 Principles for expressing emotional content in turntable scratching

Kjetil F. Hansen, Roberto Bresin, Anders Friberg

Department of Speech, Music and Hearing, Royal Institute of Technology (KTH), Stockholm, Sweden

Background
Scratching is a novel musical style that introduces the turntable as a musical instrument. Sounds are generated by moving vinyl records with one or two hands on the turntable and controlling amplitude with the crossfader with one hand. With this instrument mapping, complex gestural combinations that produce unique “tones” can be achieved. These combinations have established a repertoire of playing techniques, and musicians (or DJs) know how to perform most of them. Scratching is normally not a melodically based style of music. It is very hard to produce tones with discrete and constant pitch. The sound is always strongly dependent on the source material on the record, and its timbre is not controllable in any ordinary way. However, tones can be made to sound different by varying the speed of the gesture and thereby creating pitch modulations. Consequently, compared to conventional musical instruments, timing and rhythm remain the important candidates for expressive playing, with the additional possibility of modulating the pitch.

Aims
The experiment presented aims to identify acoustical features that carry emotional content in turntable scratching performances, and to find relationships with how music is expressed on other instruments. An overall aim is to investigate why scratching is growing in popularity even though it a priori seems ineffective as an expressive interface.

Method
A number of performances by experienced DJs were recorded. The speed of the record, the mixer amplitude and the generated sounds were measured. The analysis focuses on finding the underlying principles of expressive playing by examining musicians’ gestures and the musical performance. The principles found are compared to corresponding methods for expressing emotional intentions on other instruments.

Results
The data analysis is not yet complete. The results will give an indication of which acoustical features DJs use to play expressively on an instrument with musically limited possibilities. Preliminary results show that the principles of expressive playing are in accordance with current research on expression.

Conclusions
The results present some important features of turntable scratching that may help explain why it remains a popular instrument despite its rather unsatisfactory playability, both melodic and rhythmic.

Key words: Music performance, New musical interfaces, Scratching

[email protected]

71.4 How do musicians deal with their medical problems?

Elena-Romana Gasenzer, Richard Parncutt

Department of Musicology, University of Graz, Austria

Background
Musical careers are becoming more competitive and stressful. The incidence of music-medical problems such as carpal tunnel syndrome and focal dystonia is increasing. Musicians fear professional consequences if they seek support.

Aims
We explored the relevant music-medical knowledge and preventive strategies of typical classical musicians. We plan to develop and promote post-secondary courses in music medicine for musicians.

Method
Participants were 27 professional classical musicians living in and near Frankfurt/Main, Germany. They had been playing for between 15 and 55 years. Our questionnaire addressed issues surrounding music-related medical problems (e.g., amount of practice time, nutrition, stage fright, chronic diseases, pain while playing). Case reports were compiled for 4 recorder players, 3 flutists, 3 violinists, 2 pianists, a double bassist, a cellist, a clarinettist, and a saxophonist. These highlighted and documented specific conditions, their progress, and their effect on musical performance and everyday life.

Results
Reported medical problems were sport-medical (16), orthopaedic (14), neurological-psychiatric (6), dermatological (4), audiological (3), and dental (2). Violinists reported shoulder and neck problems, pianists back and hand problems, and woodwind players and violinists dermatological problems. Most musicians appeared ill-informed about the nature of their problems or about strategies for avoiding or solving them. During their professional development, they had never been exposed to such information, and they were interested in relevant new course offerings. The questionnaires also revealed that far too little specialised music-medical support is available.

Implications
There is an urgent need for new investment in the prevention and rehabilitation of music-medical problems. Medical problems can be prevented if musicians are appropriately informed, which can also save considerable public health expenditure. Many problems can be avoided by simply adjusting practice methods and techniques, sport (e.g. swimming), and nutrition. Instrumental modifications such as special flute keys and piano stools can be useful if properly understood and applied. Prevention can begin with the first instrumental lessons. Conservatories and music universities should offer more courses in music medicine that cover relevant anatomy and physiology for players of specific instruments as well as for singers, dancers, and actors.

Key words: Medicine, Stress, Prevention

[email protected]


72 Perception VI

72.1 Memory improvement while hearing music: Effects of musical structure

W. Jay Dowling1, Barbara Tillmann2

1University of Texas at Dallas, USA
2CNRS-UMR 5020, Lyon, France

Previous research (Dowling, Tillmann, & Ayers, Music Perception, 2001) demonstrated a memory-improvement effect during the first minute of hearing novel music. Listeners heard classical minuets with one of the initial phrases (the target) to be tested later. The music continued for 4-30 sec and memory was tested with a repetition of the target (T), a similar lure (S), or a different lure (D). Memory, especially T/S discrimination, improved with increasing delay, but only with the “filler” material between target and test left intact. Musical continuity between target presentation and test appeared essential in producing memory improvement. Here we replicated our result with a new selection of pieces (Experiment 1), and proceeded to explore the effect in eight further experiments, focusing on the continuity between target and test, and on the intervening material, in terms of melodic coherence, meter, key, and timbre. We eliminated the chordal accompaniment of the filler, leaving only the melody between target and test, and the memory improvement remained (Experiment 2). We randomly scrambled the beats in each measure of the filler (leaving the harmonic structure relatively intact but distorting the melodic line); memory improvement remained strong (Experiment 3). Substituting filler material from gavottes (in 4/4 meter vs. 3/4) eliminated memory improvement (Experiment 4). Substituting other minuet material for two measures of filler severely attenuated memory improvement (Experiment 5). Memory improvement remained when the filler was transposed up or down 1 semitone (Experiment 6), even when the test item was transposed another semitone in the same direction (Experiment 7). The improvement for transposed test items remained when the initial presentation of the target was transposed to a different key (Experiment 8). However, changing the timbre of the filler from that of the test eliminated improvement, whether or not the test item had the same timbre as the target (Experiment 9). We conclude that musical continuity between target and test (in the sense of thematic and timbral coherence, melodic and harmonic structure, and meter) is necessary for memory improvement during listening. Key changes that left pitch height (and hence melodic continuity) relatively unchanged were not very disruptive of memory improvement.


Key words: Recognition memory, Musical phrases, Melodic/rhythmic continuity

[email protected]

72.2 Mapping temporal expectancies for different rhythmical surfaces: The role of metrical structure and phenomenal accents

Riccardo Brunetti1, Alessandro Londei2, Marta Olivetti Belardinelli1

1Department of Psychology, University of Rome, Italy
2ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Italy

This study explores the rules regulating the formation of temporal expectancies when we listen to a rhythmic sequence. When exposed to a rhythm we easily extract regularities (or invariant temporal information) and we continue them by projecting them into the near future. In other words, we self-generate an attentional pulse organized according to the stimulus we have been listening to. Our ability to generate these expectancies is widely dependent on the metric structure suggested by the patterns we entrain to. In Experiment 1, we mapped the temporal expectancies evoked by three different repeating patterns: in this first experiment, we manipulated only phenomenal accent strength, keeping the metric structure constant in all three patterns. 19 Ss with no extensive musical training participated in the experiment. Expectancies were mapped by repeatedly sampling the goodness-of-fit judgment of a test tone whose time position varied in each trial. Results of the test-tone timing evaluation give an estimation of the expectancies generated by each pattern, revealing that expectancy waves are quite short (after the stimulus stops) and very dependent on phenomenal accent strength. In Experiment 2, we used four patterns with different metric structures and lengths. We compared the effects of a meterless pattern, two patterns inducing isochronous meters, and a pattern inducing a non-isochronous structure. All the patterns were composed following a rhythm-complexity evaluation algorithm. The timing evaluation judgment task after entraining to the patterns was identical to Exp. 1 and was carried out in Exp. 2 by 33 musically naïve Ss. Results confirm the crucial role of the time position and strength of phenomenal accents, and show that isochronous meters generate strong and periodic expectancy waves, while non-isochronous meters tend to evoke periodicities of a different level. Our results are consistent with recent oscillator models of attending. The discussion proposes an interpretation of the results following the theory of Dynamic Attending, with special attention devoted to the interpretation of the effects of non-isochronous meters.

Key words: Entrainment, Expectancy profiles, Isochronous and non-isochronous meter

[email protected]

72.3 The effect of pitch, tempo and proportional pitch and tempo manipulation on identification of iconic television themes

David Brennan, Catherine Stevens


MARCS Auditory Laboratories, University of Western Sydney (Bankstown), New South Wales, Australia

In a recent experiment, Schellenberg and Trehub (2003) demonstrated that participants can identify the original pitch of well-known TV themes even when compared to manipulations of one semitone. The experiment described here employed a similar method but expanded the manipulation to include tempo as well as pitch, and added a third condition in which the tempo and pitch of the manipulation were varied proportionally. It was hypothesized that if the relationship between pitch and tempo was maintained, identification of the original should be more difficult. The aims were (i) to investigate whether, in this context, participants identify changes in pitch more readily than changes in tempo, (ii) to observe whether a proportional change in both pitch and tempo is harder to identify than the alteration of those parameters individually, and (iii) to see whether musical training affected these results. The dependent variable was the proportion of times the original was identified when compared with a particular manipulation, and the experiment also examined the between-subjects variable of musical training.

Seventy-two participants listened to three sequences of six themes. The same six themes were included in each sequence with a different pitch, tempo or pitch/tempo manipulation. The sequences were twenty minutes apart.

As anticipated, results showed that in all instances employing an actual theme the original was identified above chance level (72%), whereas a control theme returned chance results (51%). For musicians, there was no significant difference in correct identification of the original theme between excerpt pairs incorporating the three levels of manipulation (pitch only (76%), tempo only (71%), proportional pitch and tempo (78%)). For nonmusicians, however, identification of the original theme was significantly poorer for proportional pitch/tempo transposition (62%) than for either the pitch-only (73%) or tempo-only (76%) conditions. Results also showed a deterioration in correct identification of the original theme on subsequent identification sequences, even though the sequences were twenty minutes apart and the manipulated excerpt differed at each presentation. These results imply that nonmusicians’ memory of musical events is more composite, while musicians’ memory for musical events allows for the decoupling of individual parameters.

Key words: Pitch, Tempo, Memory

[email protected]

72.4 The automaticity of music reading

Warren Brodsky, Yoav Kessler, Avishai Henik

Ben-Gurion University of the Negev, Beer-Sheva, Israel

Background
Research on score reading has demonstrated that musicians have rapid perceptual coding processes and economical ways of storing what they perceive in memory. While the psychological effectiveness of music notation is the extent to which readers are able to retrieve information about music from the score, the compactness of the system often poses problems when more than one aspect of the same event has to be noticed, causing an increase in the visual density of the information. More efficient sight-readers are those who are particularly attuned to superordinate structures, with consequential economy of coding. By organizing material into higher-order interrelationships representing certain regulations and limitations, musicians subsequently develop a host of cognitive expectancies. But how many of these “grammars” do musicians use, albeit automatically, to assist their music reading?

Aims
While there are assumptions about the automaticity of music reading, empirical demonstration is lacking. To this end, the current ongoing study compared musicians to non-musicians in simple perceptual tasks involving music notation and tones. Specifically, the study examined interactions between two visual components (notes and beams), as well as between these and the corresponding auditory component.

Method
The study employed a Stroop-like design in which the non-relevant stimulus aspects were either congruent, incongruent or neutral with respect to the relevant aspect; a congruency effect of a non-relevant aspect on processing the relevant one reveals automatic processing. The experiment employed visual stimuli only, where the relevant dimension for judgment was either the notes or the beam.

Preliminary Results
The experiment compared musicians and non-musicians. The musicians were generally faster than the non-musicians. Furthermore, two components of the congruency effect were derived for each group in each of the tasks. The first component indicated the facilitation in congruent trials compared to neutral trials. The second component indicated the interference in incongruent trials compared to neutral trials. The results showed a larger facilitation for musicians than for non-musicians, in both the notes and the beam tasks. Moreover, musicians showed interference when judging the notes, but not the beams, and non-musicians showed interference when judging the beams, but not the notes. Taken together, these results show that beam information is automatically processed during note-reading. While both groups benefited from congruency between the notes and the beam, only musicians suffered from incongruency when judging the notes.

Key words: Music reading, Stroop paradigm, Effects of formal training

[email protected]


Part V

Saturday, August 26th 2006



73 Symposium: For an anthropology of musical language as a form of human communication

Convenor: Cécile Alzina

Because of its sensory and temporal nature, as well as its formal aspects, music can be compared to language. However, the communicative function of that language does not appear as clearly if one considers only the phenomenological description of “musical discourse”: it is indeed necessary also to tackle the relationship that is created, through the experience of the temporality of music, between the receiver and the musician (composer, performer. . . ) as a form of interaction and tuning, either intuitive or more “rational”, based on the sharing of experience in its cognitive, affective and cultural dimensions. It is then that music can really be appraised as a language among all those humans have at their disposal to engage with their fellows. This is the thesis that will be developed and debated over the course of the symposium.

At the beginning of life, the auditory sense is solicited by environmental noises, by speech and by music. We learn to recognize them and to make sense of them. We will start the symposium by looking at how the elaboration of musical language may be linked to human development, in harmony and coherence with other types of languages, as well as across different cultures. We will then look at the specific functional modes of musical language by studying how sound can make sense by being organized in time to create new proto-linguistic forms of narrativity. Finally, we will attempt to understand musical communication through the way the emergence of a creative intention in one musician can meet the anticipation of this intention by another musician within an improvised musical dialogue.

Musical language will hence be tackled from an anthropological perspective which, within different cultural frameworks, will go from its developmental elaboration to the workings of its communicability.




73.1 Music is a celebration of mimesis; demonstrating our capacity to create, share and communicate gestures of emotive/expressive value

Benjaman Schogler

The University of the West Indies, Cave Hill, Barbados

There are many and varied factors at work in the development of our complex communicative repertoire, but at the heart of it all is the individual’s desire to communicate, to share feelings. In music and shared musicality we find a vehicle for communicating expressive gestures in sound and movement. Experiments that explore this core competence attempt to outline the perceptual passage of gestural information as it flows between musicians and dancers.

There is something in the pattern of flow in music and dance that communicates the underlying emotion or expression. What is this something? Can it be measured? And how is it shared and translated between artists?

Expressions or gestures must be embodied in certain underlying expressive variables of the flow of movement and sound that are invariant across different modes of expressing them. The experiments focus on the transmission of expressive information between musicians and dancers, studied using high-resolution 3D motion capture systems and applying General Tau theory (Lee, 2004) to analyze this flow. Evidence demonstrating the path of coherent neural information about the control of action will be presented.

Such work suggests a perceptuo-motor description of mimetic communication, illustrating that at the heart of language and music are bodies moving together in sympathy and harmony.

Key words: Music and dance, Gestures of expressive value, Perceptuo-motor description of mimetic communication

[email protected]

73.2 How sound can make sense by being organized in time

Cécile Alzina

Sound Perception and Design Team of IRCAM and Psychomuse Lab, France

If we consider musical language as a form of human communication, then its organization in time should be crucial for making sense of its sections. But the temporal window within which our attention operates spans 2 to 10 seconds on average, depending on which psychologist one consults. Indeed, we cannot permanently maintain a representation of long musical passages. Does that mean that perception ignores what has just disappeared in time, in order to seize the present moment? By comparing perception of original musical sequences with musical sequences that were disorganized by inverting the order of their sections, we show, in several experiments, that we are sensitive to this temporal macro-organization.

The scrambled versions consisted of sections of 10 seconds on average, which were delimited by listeners starting from the original version during a first experiment. The scrambled versions did not seem surprising to uninformed listeners. However, several elements permit us to state that one


does not perceive the scrambled versions as one does the originals. Among other things, the sense the listeners associated with the various musical sections was modified depending on whether they heard them in the original or in the scrambled version. We also observed that the subjective perception of time was affected by the destructuration of the music. These experiments were carried out with varied styles of music: rock’n’roll, jazz, and contemporary music. The effects of the destructuration appeared with all the different styles, and with both musician and non-musician listeners.

Thus, the fact that the organisation of sounds in time makes sense points to the proto-linguistic aspect of music; in other words, it highlights the fact that music is essentially a form of human communication.

Key words: Original and scrambled versions of music sequences, The sense associated to the sounds, Subjective perception of time

[email protected]

73.3 A perceptual approach to improvised modes: Cognitive ethnomusicology and intercultural comparison of music listening

Mondher Ayari

1IRCAM Music Representation Team, Paris, France
2Institut Supérieur de Musique et de Musicologie de Sousse, Tunisia

Musical systems are domains whose complexity resides in two aspects, the analytical and the perceptual. The description of the “musical” object would be difficult to grasp without considering the interaction of the musician’s criteria and acculturation. This research takes as its core of study the theoretical exploration of the relations between recognition and segmentation, i.e. the articulations between paradigmatic relevance and syntagmatic structuring in the representation of musical temporal flows. Experimenting with the formalization of dynamic behavior with regard to recognition (human recollection) and the emergence of categories (capacities of inference), the modeling of inductive behaviors and of analytical processes should in the long term make it possible to propose models of subjective variability, and to explore the fundamental elements and their relation to the temporal work of art. In parallel, the implementation of these theoretical improvements will lead to new representational modes for modeling musical forms, to improved exploration of musical content in large corpora, and to a review of certain historical terms of musical ideas.

Key words: Analysis, Perception, Cognition

[email protected]


74 Education IX

74.1 Interest in music and shift toward other fields in children aged 1.6-4

Johannella Tafuri1, Maravillas Diaz2, Roberto Caterina3

1Conservatoire of Music G. B. Martini of Bologna, Italy
2University of Bilbao, Spain
3Department of Psychology, University of Bologna, Italy

Background
According to research carried out over the last 15 years on musical development, it is clear that children are born with a predisposition to musical engagement. This predisposition can be enhanced or repressed by a huge number of factors that can influence, in a positive or negative way, children’s interest in music. We wonder whether the steady manifestation of interest in music, or its possible shift toward other fields during the first years of life, can be interpreted as a sign of reinforcement or weakening of the initial predisposition.

Aims
In order to answer a series of questions related to musical development in children, a longitudinal research project, the inCanto Project, has been carried out, spanning from the 6th month of prenatal life until the 6th year. Its main goal is to verify the musical abilities developed by children exposed to an appropriate musical environment during the above-mentioned period. The present study deals in particular with children’s manifestation of interest toward different activities (musical and non-musical) in the period from 1.6 to 4 years.

Method
The procedure chosen in the research consisted of a weekly course of music for 119 mothers-to-be and for their children after birth, principally based on singing, playing percussion instruments and moving. The mothers were requested to sing and listen to music daily at home, to complete and return daily diaries, and to provide recordings of their children’s vocal production. In order to study the manifestation of interests and their changes over the years, mothers were asked to answer different questions in the diaries about the level of interest shown by children at home toward singing, playing, moving and listening. Mothers were also asked to rank the order of preference among these activities and others, such as drawing, looking at stories in children’s books, manipulating plasticine, etc.



Results and Conclusions
Even though the number of diaries returned and analysed varies considerably across the different age stages, the data yield remarkable results. The full analysis is not yet complete, but the results are beginning to show two different and even opposite trends: on one side, a clear preference for singing; on the other, a widening of interests toward different activities and some shift of preference from music toward other fields.

Key words: Interest, music, children, development

[email protected]

74.2 Cyclic-harmonic cognitive representations of rhythm: Implications for music education

John Neelin

Department of Music, University of Saskatchewan, Canada

Background
In Western music contexts, rhythm is commonly understood in terms of the duration, division, and subdivision of sound along a linear timescale. Some notable music educators, such as Edwin Gordon (2000), have argued that linear conceptions of rhythm are largely the result of a close relationship between rhythm and notation. Rhythm is conventionally taught in terms of notation, and linearity is fundamental to the notation of durations. There is evidence, however, suggesting that nonlinear approaches to rhythm education may complement the linearity of notation, while addressing many of the microrhythmic issues that regularly challenge music teachers and performers. More specifically, nonlinear internal representations of rhythmic structure may facilitate the learning process at the cognitive stage of skill development.

Aims
This paper aims to address two principal concerns:

a) What are the ramifications of recent theories of meter induction, entrainment, dynamic attending, and temporal expectancy for the further development of nonlinear models of rhythmic structure and motion?

b) What roles can cyclic-harmonic cognitive representations of rhythm perform in the perception and performance of microrhythmic music?

Main contribution
This paper provides an overview of nonlinear properties of perceived rhythmic structure, as indicated by several recent studies. Their findings suggest that rhythm perception may involve many nonlinear, dynamic (self-organizing), and interrelated processes that are sensitive to the microstructural subtleties of expressive performance. Through graphical representations and notated examples, cyclic-harmonic cognitive representations and their associated internal mechanisms are discussed, and their relevance to the perception and cognition of temporal structures in music is explored.

Implications
There are two important implications of investigating cyclic-harmonic cognitive representations of rhythm. Firstly, nonlinear internal representations may be congruent with many of the perceptual processes that allow listeners to induct and follow the periodic structures of music. Secondly,


cyclic-harmonic mental templates and the accompanying development of frequency modulation skills may have practical applications for music teaching and performance.

References
Gordon, E. E. (2000). Rhythm: Contrasting the implications of audiation and notation. Chicago: G.I.A. Publications.

Key words: Cognitive representations, Entrainment, Metrical structure

[email protected]

74.3 Music education and critical thinking in early adolescence.A Synectic Literacy intervention

Emilia Barone, Diana Olivieri

Department of Psychology, Faculty of Psychology 1, 1st University of Rome “La Sapienza”, Italy

The present study involved a sample of 30 subjects from a public school in Rome, Italy. The pupils, divided into two groups, belonged to two different classes. One group was educated through a Synectic Literacy intervention, while the other group was educated through traditional teaching. Synectic Literacy combines Media Literacy with the Synectic Operator; the term Synectics comes etymologically from the Greek meaning “to comprise, to contain, to hold together, to include”. In the first phase, a battery of tests was administered to each subject in both classes. It included: a) the Critical Thought “Caccia all’Errore 12” (Hunting Error 12) Test of Boncori; b) the Questionnaire about Television Consumption, in its short form, in which children were asked to indicate their daily average TV consumption; c) the Television Theme Songs Recognition Survey. Results indicate higher scores for critical thought and a greater ability to recognize television theme songs in the Media Literacy group. In the second phase, correlational research was conducted to examine the relationship between self-assessed amounts of television viewing, critical-thought scores, and the ability to recognize television theme songs. Comparison of the data showed a statistically significant positive relationship between critical thought and the ability to recognize television theme songs in the group with the Media Literacy intervention (p = .0038), while the amount of television consumption was related neither to music recognition ability nor to critical thought at all. The findings support the current body of research on Media Literacy, which highlights how adolescents should be educated toward critical television watching and critical use of mass media in general. In fact, irresponsible consumption may affect levels of school performance, particularly critical skills and cognitive styles. The research was also able to bring previously excluded considerations into its findings, such as the need to introduce popular music into school Music Education programmes, since it is an integral part of adolescents’ lives.

Key words: Media literacy, Synectic operator, Music education

[email protected]


74.4 Mapping musical development in Brazil: Children’s musical practices in Maranhão and Pará

Beatriz Ilari

Federal University of Paraná, Brazil

Background
Brazil poses an interesting challenge to the study of musical development. Despite the fact that it struggles to assert its Western identity, it is a country with diverse influences (African, European, Amerindian), immense cultural differences across regions, and singular musical manifestations. In addition, its history of music education has been one of segregation by gender, religion and socio-economic status. Therefore, only a few children have learned music formally, with much musical development happening informally, through traditional practices. This preliminary study refers to an on-going project that intends to map the musical practices of Brazilian children across the country, in order to understand how they develop musically. In this first study, two different musical practices found in the states of Maranhão and Pará were analyzed.

Aims
This study describes two current musical practices of Brazilian children in Maranhão and Pará: tambor de crioula and cordão de pássaros, respectively. Rhythm, pitch, song lyrics and themes, and movement were among the studied elements.

Method
Interviews with children and their parents, teachers and mentors, field observation notes, and video and audio recordings of musical practices were collected in the city of São Luis (Maranhão) and on the shores of the Amazonian river Quianduba (Pará) in November 2005.

Results
Despite all the differences between the two studied groups, common aspects of their musical practices emerged. In both cases, children served as both performers and audience, and there was much improvisation in singing and movement. Pitch and intonation were not the main elements of the children’s songs; rhythmic structures seemed to be more important, as they sustained movement and dance. In addition, song lyrics in both groups included a mix of traditional and contemporary themes, with some criticism of current social and political problems in Brazil.

Conclusions
Tambor de crioula and cordão de pássaros are examples of musical practices that have been understudied and deserve further investigation, as they are manifestations of a culture that is very much alive and has clear implications for musical development in Brazil. Implications of the present study for traditional theories of musical development will be discussed at the conference.

Key words: Brazil, Musical development, Children

[email protected]

74.5 Everyday music among under two-year-olds

Susan Young, Alison Street


University of Exeter, School of Education and Lifelong Learning, UK

Background
While information concerning both the everyday musical experience of adults and popular culture among young children is accumulating from a range of recent studies, there is a significant gap: as yet very little research has explored the everyday musical experiences of young children living in contemporary domestic situations.

Aims
This paper presents the outcomes of an interview study carried out with 86 mothers of under-two-year-olds in England, which sought wide-ranging and general information concerning the everyday musical experiences of their children in the home.

Method
An interview schedule was prepared which contained both closed and open-ended questions. A small team of interviewers was recruited to visit community settings where mothers with young children meet. The sample was deliberately steered to include mothers from working-class and minority-ethnic backgrounds. The scribed interviews were analysed using a mix of numerical and category-generating procedures as appropriate.

Results
The information concerning the everyday musical experiences of under-two-year-olds was organised into three main categories: musical resources; recorded music from audio and mixed-media sources; and singing and song repertoire. Significant findings include the discovery of the variety of toys producing digitised sounds and tunes made available to young children. Also significant was that the main sources of music very young children hear are the mother’s own choice of music playing on the radio or TV, and/or the children’s songs and music that are central to children’s screen-based media.

Conclusions
Drawing the findings together, the paper concludes in general terms that the rapid changes in the nature of domestic musical activity, brought about by technological developments and changes in family life, are impacting considerably on the earliest musical experiences of children in the home, and that these changes, in turn, suggest far-reaching revisions to how we understand children’s musical development.

Key words: Early childhood, Everyday, Development

[email protected]

74.6 Preschool children’s self-initiated movement responses to music in naturalistic settings: A case study

Claudia Gluschankof

Levinsky College of Education, School of Music, Tel Aviv, Israel

Background
Although the sight of young children spontaneously moving to music is familiar, their spontaneous


response to music has rarely been researched. The contexts studied and reported in the reviewed literature are the home (Smithrim, 1994; Chen-Hafteck, 2004) and the classroom. Within the latter, two different approaches can be identified: one focusing on the way children react to music in situations entirely self- or peer-initiated (Moorhead & Pond, 1942/1978), the other focusing on children spontaneously reacting to music in encounters initiated by the childcare staff or teachers (Gorali-Turel, 1997; Holgersen & Fink-Jensen, 2002).

Aims
This study’s purpose is to gain an understanding of the natural predispositions and the rules and schemata governing the self-initiated movement responses to music of 4-5-year-old children in a naturalistic setting.

Method
This is a case study of three girls (one aged four, two aged five) in a college-based kindergarten, in which they work on a choreography for a recorded pop song. The gathered data include a videotaped session (50 minutes; four repetitions of the song). The method used is audiovisual ethnography (Tobin, Wu, and Davidson, 1989). The gathered data were subjected to a frame-by-frame analysis; themes were then identified and interpreted within the specific context.

Results and Conclusions
The girls’ choreographies reflect their understanding of the recorded song, both an intuitive understanding of musical features and an interpretation of the lyrics. This is expressed through their movements, which vary between the four different performance versions while sharing a common feature: the gestures are all of brief duration. The means of expression have the following characteristics: 1. The movements are closely related to musical features such as form, repetition and change, phrasing, instrumental vs. vocal, dynamics, texture, etc.; 2. The movements serve as pantomimed illustrations or literal translations of specific words, out of their context in the song’s lyrics. The chosen words belong to two types: verbs and nouns (concrete and abstract). Verbs are usually enacted, while nouns are expressed through gestures accepted in the culture as signifying a certain meaning; 3. The pantomime of specific words occurs within, and is subordinate to, the larger movements that are guided by the musical content; 4. Repetitive, generalized gross-motor motions are used to connect the passages that particularly express either musical or textual content. Some features are common to all four versions, while others are absent or emerge from version to version.

Key words: Music listening, Early childhood, Music and movement

[email protected]


75 Pitch V

75.1 Music complexity measures predicting the listening experience

Søren Tjagvad Madsen1, Gerhard Widmer2

1Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria
2Department of Computational Perception, Johannes Kepler University, Linz, Austria

Background
This paper presents a computational model calculating the level of attention a voice in a score is likely to attract at a given time. The model is based on a music information complexity measure.

Aims
The assumption is that, while listening, we continuously tend to focus on the most complex (least repetitive) voice, experiencing it as foreground. This is reflected in the model: based on the complexity measured in all voices over a short time window, the model predicts the most complex voice to be the most interesting in that window.

We discuss how to measure the musical complexity of pitch and rhythm, and examine which factors are the most important.

Main contribution
We present a computational model measuring the information content of each voice in a score during a short time window. For each window we can plot the current complexity of each voice. Moving the window, we obtain curves reflecting how active each voice is over time. A curve staying on top for a period is predicted to attract the most attention in that period. A graphical interface presents the score with the predicted notes emphasized while the score is played.

We tend to perceive complex structures in the most structured way possible. Complexity is therefore measured in our model in terms of (un)predictability. We use the entropy of pitches, intervals and note durations as measures of complexity. Compression algorithms will also be discussed.
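The windowed entropy idea can be sketched as follows. This is an illustration only: the window size, the use of pitch intervals as the symbol alphabet, and the function names are assumptions, not the authors’ implementation, which also considers pitches, durations and compression-based measures.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def voice_complexity(pitches, window=8):
    """Entropy of the pitch intervals inside each sliding window.
    Returns one complexity value per window position."""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    return [entropy(intervals[i:i + window])
            for i in range(len(intervals) - window + 1)]

# A repetitive (ostinato-like) voice scores lower than a varied melody:
ostinato = [60, 62, 60, 62, 60, 62, 60, 62, 60]
melody   = [60, 64, 62, 67, 65, 69, 64, 72, 71]
print(max(voice_complexity(ostinato)) < max(voice_complexity(melody)))  # → True
```

Plotting such per-window values for every voice, and marking whichever curve is on top, yields exactly the kind of foreground prediction the abstract describes.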

Implications
One way to automatically test the correctness of our model is to try to predict the melody notes in a score with an annotated melody. A music student has prepared Haydn and Mozart string quartets for testing purposes. The melody in this type of music often coincides with what we



prefer to listen to. We test which factors (pitches, intervals, durations) have the greatest impact on melody note prediction.

For the kind of music where the melody is quite clear, we present a model fairly competent in correctly predicting the melody notes (supporting the assumption). However, if the melody is of a more static character, the most interesting accompaniment line is expected to be the preferred listening.

Key words: Music complexity, Music perception

[email protected]

75.2 Pseudo-Greek modes in traditional music as a result of misperception

Rytis Ambrazevicius

1Kaunas University of Technology, Kaunas, Lithuania
2Lithuanian Academy of Music and Theatre, Vilnius, Lithuania

The paper develops the discourse on the perception of musical scales presented at ICMPC8. It aims to verify the evidence for so-called ancient Greek or Gregorian modes in traditional music.

The study consists of three parts. First, the scales of three typical repertoires of Lithuanian traditional solo singing (70 songs in total; recordings from the 1930s-1990s) are measured. It is found that 60-80 percent of the analyzed samples are closer to equitonics (an equidistant scale) than to diatonics (a scale based on two contrasting intervals). The equitonics of the samples considered is shown to be in interplay with the steady frame of basic or anchor tones comprising, as a rule, the intervals of the fourth or fifth: the generative scale rests on the tonal frame complemented with loosely-knit intermediate steps.
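The equitonic-versus-diatonic comparison can be illustrated with a toy classifier. Everything here is an assumption for illustration: the author’s actual measurement procedure is not specified in the abstract, and the templates, threshold and step values below are invented.

```python
# Classify a measured scale as closer to an equidistant (equitonic) or a
# two-interval (diatonic) step pattern by comparing residuals against
# idealised templates. Step sizes are given in cents.

def fit_error(steps, template):
    """Mean absolute deviation (cents) between measured and template steps."""
    return sum(abs(s - t) for s, t in zip(steps, template)) / len(steps)

def classify_scale(steps_cents):
    # Equitonic template: every step equals the mean step size.
    equitonic = [sum(steps_cents) / len(steps_cents)] * len(steps_cents)
    # Idealised diatonic contrast: whole tones (200c) vs semitones (100c),
    # each measured step assigned to whichever ideal it is nearer to.
    diatonic = [200 if s >= 150 else 100 for s in steps_cents]
    return ('equitonic' if fit_error(steps_cents, equitonic)
            < fit_error(steps_cents, diatonic) else 'diatonic')

# Nearly uniform ~170-cent steps read as equitonic:
print(classify_scale([165, 175, 170, 168, 172]))  # → equitonic
# A clear tone/semitone alternation reads as diatonic:
print(classify_scale([204, 112, 198, 205, 95]))   # → diatonic
```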

Second, three typical quasi-equitonic samples from the previous analysis are chosen for a perceptual experiment. Twenty undergraduate students of musicology are asked to transcribe the samples, paying special attention to intervallic relations. The modes are identified as chromatically changing Greek modes, mostly Aeolian, Ionian, Dorian, and Mixolydian. Several students notice a “slight imperfection” of intonation. Similar results are obtained with synthesized voice-like melodies based on exact equitonic scales; even chromatic change is identified, although to a lesser degree.

Third, the findings of the two examinations are collated with the transcriptions, interpretations, and classifications of modes in Lithuanian, Russian, Ukrainian and other East European as well as Anglo-American ethnomusicological studies. Strikingly enough, a prevalence of the mentioned Greek modes, a lack of or negligible evidence for the Phrygian and Lydian modes, and “chromatic alternations” are noted in many cases. A mere glance at the transcriptions discussed, however, raises suspicions that, as a rule, quasi-equitonic “non-tempered” pitch and/or a wide zone of pitch intonation are actually at work.

Therefore, aside from chromatic changes, the so-called ancient Greek or Gregorian modes seem in many cases to be merely “aural ghosts”. These apparent modes are mere conscious or unconscious approximations of archaic, loosely-knit “anhemitonic heptatonics” (Grainger, Sevåg). This phenomenon results from the collision of two emic systems of pitch categorization: that of the cultural insider (the performer) and that of the outsider (the musicologist).


Key words: Ancient Greek modes, equitonics, diatonics, Chromatic change, pitch categorization, Traditional music

[email protected]

75.3 Pitch spelling using compactness

Aline Honingh

Institute for Logic, Language and Computation, Plantage Muidergracht, Amsterdam, The Netherlands

Background
Pitch spelling deals with the problem of assigning appropriate note names to MIDI pitch numbers. Over the last couple of years there has been increasing interest in the problem of pitch spelling, and several pitch spelling algorithms have been proposed (Temperley 2001; Meredith 2003; Cambouropoulos 2003; Chew & Chen 2005). However, most pitch spelling algorithms involve many rules and principles.

Aims
In this paper, we present a pitch spelling algorithm which is based on only one perceptual principle. Convexity, viewed on the Euler lattice, has been shown to be an important notion in music (Honingh and Bod 2005), representing an aspect of consonance. Since the major and minor diatonic scales as well as all diatonic chords are convex, we believe that the principle of convexity can be used to distinguish diatonic chords and melodies from those with misspelled notes in them, and that a pitch spelling model can be developed on this basis. Hence, convexity might be a universal notion that could work for all types of tonal music. Furthermore, because the algorithm is based on a single principle, the pitch spelling method is very simple to use.

Method
A program has been written which takes as input pitch numbers 0 to 11, representing all 12 semitones under octave equivalence. The input is segmented into pieces consisting of a fixed number of notes, and for each segment a possible convex representation is sought. This representation is mapped onto the Euler lattice representing the note names. If more than one convex structure of a set exists, the most compact one is chosen.

Results
The program was tested on the score of the Well-Tempered Clavier by J. S. Bach. Preliminary runs result in 95 percent correctly spelled notes. While not achieving the highest results ever obtained, the advantage over other algorithms is that ours is very simple and based on one principle.

Conclusions
The pitch spelling algorithm presented here gives promising results, while the program is based on one perceptual principle and has only the pitch numbers under octave equivalence as input (meter, note duration, etc. are neglected). This offers the possibility of integrating the algorithm with other algorithms which do take more information from the music into account. Improved results could then be expected.

References
Cambouropoulos, E. (2003). Pitch spelling: A computational model. Music Perception, 20(4),


411-429.

Chew, E. and Y.-C. Chen (2005). Pitch spelling using the spiral array. Computer Music Journal, 29(2), 61-76.

Honingh, A. and R. Bod (2005). Convexity and the well-formedness of musical objects. Journal of New Music Research, 34(3), 293-303.

Meredith, D. (2003). Pitch spelling algorithms. In R. Kopiez, A. C. Lehmann, I. Wolther and C. Wolf (eds.), Proceedings of the Fifth Triennial ESCOM Conference, Hanover, Germany, pp. 204-207.

Temperley, D. (2001). The Cognition of Basic Musical Structures. The MIT Press.
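The spelling-by-compactness idea can be sketched in miniature. This is a deliberate simplification of the method described above: the actual algorithm searches for convex, compact configurations on the two-dimensional Euler lattice, whereas this illustration measures compactness along the line of fifths only, and the `SPELLINGS` table, the span criterion and the function name are all assumptions for illustration.

```python
from itertools import product

# Line-of-fifths index for candidate spellings of each pitch class
# (C = 0, G = +1, F = -1, and so on; sharps move up, flats move down).
SPELLINGS = {
    0: [('C', 0), ('B#', 12)],   1: [('C#', 7), ('Db', -5)],
    2: [('D', 2)],               3: [('D#', 9), ('Eb', -3)],
    4: [('E', 4), ('Fb', -8)],   5: [('F', -1), ('E#', 11)],
    6: [('F#', 6), ('Gb', -6)],  7: [('G', 1)],
    8: [('G#', 8), ('Ab', -4)],  9: [('A', 3)],
    10: [('A#', 10), ('Bb', -2)], 11: [('B', 5), ('Cb', -7)],
}

def spell(pitch_classes):
    """Pick one spelling per pitch class so the whole set is most compact
    (smallest span) on the line of fifths."""
    best = min(product(*(SPELLINGS[pc] for pc in pitch_classes)),
               key=lambda combo: (max(f for _, f in combo)
                                  - min(f for _, f in combo)))
    return [name for name, _ in best]

print(spell([0, 4, 7]))   # → ['C', 'E', 'G']
print(spell([10, 2, 5]))  # → ['Bb', 'D', 'F']
```

The second call shows why compactness matters: spelling pitch class 10 as A# would stretch the set across eleven fifths, while Bb keeps the triad within four.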

Key words: Pitch spelling, Convexity

[email protected]

75.4 On the human capability and acoustic cues for discriminating the singing and the speaking voices

Yasunori Ohishi1, Masataka Goto2, Katunobu Itou1, Kazuya Takeda1

1Graduate School of Information Science, Nagoya University, Japan
2National Institute of Advanced Industrial Science and Technology, Japan

In this paper, the acoustic cues and the human capability for discriminating singing and speaking voices are discussed, with the aim of developing an automatic system for discriminating between singing and speaking voices.

First, we investigate the critical length necessary for discriminating singing and speaking voices by conducting a subjective experiment. In the experiment, voice segments of 10 different lengths (from 100 ms to 2000 ms), extracted from singing and speaking voices, were presented to 10 subjects. The results show that humans can discriminate singing from speaking voices at lengths of 200 ms and 1 second with 70.0% and 99.7% accuracy, respectively. Since even short stimuli of 200 ms can be correctly discriminated, not only temporal characteristics but also short-time spectral features can serve as cues for the discrimination.

Next, using two sets of stimuli, we compare the importance of temporal and spectral cues for the discrimination. The first set of stimuli is generated by random splicing of the waveform, i.e., dividing the signal into small chunks and concatenating them in random order. In this set of stimuli, the temporal structure of the signal is distorted whereas the short-time spectral features are maintained. The second set of stimuli is generated by low-pass filtering, i.e., eliminating the frequency components above 800 Hz. This set of stimuli maintains the temporal structure of the original signal while the short-time spectral features are distorted.
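The two stimulus manipulations can be sketched as follows. This is an illustrative reconstruction: the sample rate, chunk length, and crude FFT-based filter are our own assumptions, not the authors' processing chain.

```python
import random
import numpy as np

random.seed(0)
fs = 16000  # sample rate (Hz); illustrative value

def random_splice(signal, chunk_ms=50):
    """Divide the signal into fixed-size chunks and concatenate them in
    random order: temporal structure is destroyed, short-time spectra kept."""
    n = int(fs * chunk_ms / 1000)
    chunks = [signal[i:i + n] for i in range(0, len(signal), n)]
    random.shuffle(chunks)
    return np.concatenate(chunks)

def lowpass_800(signal):
    """Crude FFT-based low-pass at 800 Hz: the temporal envelope is kept,
    short-time spectral detail above 800 Hz is removed."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spec[freqs > 800] = 0
    return np.fft.irfft(spec, n=len(signal))
```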

In the subjective test using these two sets of stimuli, reductions in discrimination accuracy of 29.4% and 13.1% were found for random splicing and low-pass filtering, respectively. These results indicate the relative importance of temporal structure for singing and speaking voice discrimination.

In the main paper, the details of the subjective tests and their results are discussed. Furthermore, a software system that can automatically discriminate between singing and speaking voices is reported, together with its performance.

Key words: Subjective experiments, Random splicing of waveform, Low-pass filtering of waveform

[email protected]


Saturday, August 26th 2006 557

75.5 Pitch perception of sounds with different timbre

Allan Vurma1, Jaan Ross2

1Estonian Academy of Music and Theatre, Estonia
2University of Tartu, Estonia

Background
The pitch of a sound is directly related to its fundamental frequency, which is inversely proportional to the length of the waveform period. As a category of perception, however, pitch may be influenced by almost every other property of sound besides its waveform rate, for example by its timbre, duration, or loudness. When tuning their instruments, as well as during performance, musicians constantly need to compare pitches produced by different instruments and/or voices with each other. Discrepancies often arise in how musicians estimate the pitch level of a particular sound presented to them. There is little information about the influence of timbre on pitch perception.

Aims
This research studies to what extent the perceived pitch of a sound is influenced by its timbre.

Method
A listening experiment was conducted with thirteen professional musicians, including music students. Pairs of two consecutive tones were presented to the participants, who had to decide whether the second tone in each pair was (1) flat, (2) sharp, or (3) equal in pitch in comparison to the first tone. The tones in a pair could have been produced by a viola, a trumpet, or a singing voice (a classically trained tenor producing an /a/ vowel). The fundamental frequency of the tones was varied stepwise, up to half a semitone, relative to the reference.

Results
The pitch of the singing voice was perceived on average about 20 cents higher than the pitch of the viola sound with the same fundamental frequency. Similarly, the pitch of the trumpet sound was perceived about 15 cents higher than the pitch of the viola sound with the same fundamental frequency. These differences were statistically significant (p < .001).

Conclusions
The perceived pitch of a sound depends on its timbre in a statistically significant way. The salience of pitch may differ between musical instruments.
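For reference, the cent differences reported in the Results relate frequency ratios on a logarithmic scale (1200 cents per octave). A minimal sketch, with an illustrative function name of our own:

```python
import math

def cents(f_ref: float, f: float) -> float:
    """Interval size in cents between two frequencies (1200 cents = 1 octave)."""
    return 1200.0 * math.log2(f / f_ref)

# A 20-cent sharpening of A4 (440 Hz) corresponds to multiplying by 2**(20/1200)
f_sharp = 440.0 * 2 ** (20 / 1200)
```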

Key words: Pitch, Timbre, Perception

[email protected]

75.6 Toward realizing automatic evaluation of playing scales on the piano

Seiko Akinaga1, Masanobu Miura2, Norio Emura3, Masuzo Yanagida3

1Department of Education, Shukugawa Gakuin College, Japan
2Department of Media Informatics, Faculty of Science and Technology, Ryukoku University, Japan


3Department of Knowledge Engineering and Computer Sciences, Faculty of Engineering, Doshisha University, Japan

Background
Playing the piano is a popular method of musical training in various kinds of music education systems, and it is thought to be effective for acquiring musical knowledge. It is, however, difficult for beginners to acquire skill in piano playing by themselves. A support system for self-education in piano playing is proposed in this paper to reduce this difficulty. Designed here is a subsystem which points out errors and weak points concerning the fingering of each individual player. Playing scales over several octaves is usually employed as a practice task at the beginning stage, so a special focus is put on automatic evaluation of scale playing.

Aims
The aim of this study is to realize automatic evaluation of scale playing on the piano and to point out the defects in the user's scale playing. It is also desirable to clarify the criteria of the subjective evaluation of scale playing.

Method
Recordings of scale playing, together with corresponding subjective scores, are collected in order to determine the evaluation criteria. A professor specializing in preschool children's education and her 15 students played specified note sequences. Performances were recorded on a MIDI sequencer. Onset intervals, velocities, and durations for each note were recorded, and a set of three regression curves for each performance was calculated by spline interpolation of the average value of each parameter for each wrist position on the keyboard. The parameters of each curve are regarded as explanatory parameters for scale playing. The scale playing of all subjects was subjectively evaluated by the professor. The scores obtained are employed in turn as test and training data to achieve open tests. KL (Karhunen-Loève) expansion and the k-NN algorithm are used to predict the subjective scores of new scale-playing data.

Results
The outputs of the proposed system are relatively similar to the subjective evaluation scores, and the appropriateness of the values of the description parameters is discussed.

Conclusions
Weights for evaluating scale playing are obtained using KL expansion on data about onset intervals, velocities, and durations. Based on these weights, the subjective evaluation score proves to be well predictable by KL expansion. A support system for self-education in piano playing is proposed based on the evaluation scheme derived from the KL expansion.
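The k-NN prediction step can be sketched as a generic k-nearest-neighbours regressor over performance feature vectors. The data layout and function names are our own illustrative assumptions, not the authors' system:

```python
import math

def knn_predict(train, target_features, k=3):
    """Predict a score as the mean score of the k nearest training items.

    `train` is a list of (feature_vector, score) pairs, e.g. spline-curve
    parameters paired with a teacher's rating; distances are Euclidean."""
    dists = sorted((math.dist(f, target_features), s) for f, s in train)
    nearest = dists[:k]
    return sum(s for _, s in nearest) / len(nearest)
```

A new performance is scored by averaging the teacher ratings of the k most similar recorded performances.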

Key words: Piano, Scale, Subjective evaluation

[email protected]


76 Cognition II

76.1 Improving algorithmic music composition with machine learning

Michael Chan1, John Potter1, Emery Schubert2

1School of Computer Science & Engineering, University of New South Wales, Sydney, Australia
2School of Music & Music Education, University of New South Wales, Sydney, Australia

Algorithmic composition of musical-sounding music is an interesting but challenging task, because machines do not inherently possess the creativity necessary to create music. In recent years there has been considerable research activity in this area, with David Cope's EMI project being recognised as one of the more successful efforts to date. Our previous system, Automated Composer of Style-Sensitive Music (ACSSM), attempted to extend some components of Cope's system and showed some success in generating interesting music and, more importantly, music that contains some of the stylistic elements of Bach. In this paper, we present a new system, ACSSM II, which overcomes some limitations of ACSSM and recasts the music generation process as a kind of shortest-route problem. The system generates music by searching for a sequence of music segments that best satisfies various constraints on the sequence, including length and pitch range, harmonic backbone, and consistency with a probabilistic model of a composer's style. As with all optimisation problems, ours requires the construction of a search space; we adopt a clustering space produced by grouping together music segments with similar musical features. The output sequence is simply a path passing through these clusters. In order to produce such a sequence, we utilise a genetic algorithm. Genetic algorithms cope well with complex evaluation measures and can jump around the search space, avoiding being trapped by locally optimal solutions. To evaluate the system, we assessed the overall musical quality of the produced music in an experiment involving five individuals with at least three years of musical training. Our results show that the automatically generated music achieved a mean satisfaction score of 7.5/10, significantly higher than that given to the music produced by ACSSM. Hence, the results suggest that ACSSM II is a better system than its predecessor and is capable of generating reasonably musical-sounding music.
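The search component (a path through segment clusters, evolved by a genetic algorithm) can be given a toy sketch. The path representation, constants, and fitness function below are our own illustrative stand-ins, not the ACSSM II implementation:

```python
import random

random.seed(0)

# Each "segment" is represented only by its cluster index; the toy fitness
# penalises immediately repeating a cluster (a placeholder for the real
# constraints on length, pitch range, harmony, and style).
N_CLUSTERS, LENGTH, POP, GENS = 8, 10, 30, 60

def fitness(path):
    repeats = sum(1 for a, b in zip(path, path[1:]) if a == b)
    return -repeats  # fewer immediate repeats = better

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(path, rate=0.1):
    return [random.randrange(N_CLUSTERS) if random.random() < rate else c
            for c in path]

population = [[random.randrange(N_CLUSTERS) for _ in range(LENGTH)]
              for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]   # elitist selection: keep the top half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)  # best cluster path found
```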

Key words: Algorithmic music composition, Artificial intelligence, EMI

[email protected]


76.2 More about music, language and meaning: The follow-up of Koelsch et al. (2004)

Bénédicte Poulin-Charronnat, Bettina Bock, Julia Grieser, Kerstin Meyer, Stefan Koelsch

Max Planck Institute for Human Cognitive and Brain Sciences, Germany

Background
In Koelsch et al. (2004), written target words that were conceptually unrelated to a linguistic or a musical prime elicited a larger N400 than did target words that were conceptually related to the primes. This finding suggests that music, like language, may convey semantic activation. The present study is a behavioral follow-up of Koelsch et al. (2004). New stimuli were used with a priming paradigm. The results showed that the processing of target words was facilitated when the words were preceded by a conceptually related rather than an unrelated (linguistic or musical) prime.

Aims
This study contributes to the debate on the modularity of language and music processing, and further explores the cognitive processes involved in the processing of semantics in both music and language. Koelsch et al. (2004) showed that music, like language, can prime the meaning of words. Participants were presented visually with target words after hearing either a spoken sentence or a musical excerpt. Target words that were conceptually unrelated to the musical or linguistic prime elicited a larger N400 than did target words preceded by conceptually related primes, suggesting that music, like language, may convey semantics.

Method
In a priming paradigm, a written target word was either conceptually related to the prime or not. The prime was either a linguistic context or a musical excerpt. Fifty nonmusician participants performed a lexical decision task on the target word (word or nonword).

Results
The results showed that the target words were processed significantly faster and more accurately when they were preceded by a conceptually related rather than an unrelated linguistic prime, and that the target words were processed significantly faster when they were preceded by a conceptually related rather than an unrelated musical prime.

Conclusions
These findings corroborate the results of Koelsch et al. (2004) and suggest that music, like language, may convey semantics. However, according to a significant Conceptual Relation × Type of Prime interaction, even if music may activate concepts, this activation is weaker than the activation that comes from linguistic information.

Key words: Music and language, Meaning, Priming

[email protected]

76.3 Emergence of harmonic progression using a multi-agent composition model

Takeshi Takenaka, Shintaro Suzuki, Yuriko Hoteida, Kanji Ueda


Research into Artifacts, Center for Engineering, The University of Tokyo, Japan

This study examines, using a multi-agent computer simulation, how harmonic progression emerges on the basis of cognitive features. In cognitive psychology, the continuity or naturalness of harmonic progression is thought to depend on the musical context or the anticipation of the listener. In music theory, on the other hand, the basic concept of tonal functional harmony is that all harmonic sounds used in music can be classified into three groups (functions): tonic, dominant, and subdominant. Many music theories in jazz or popular music also use this concept and classify patterns of harmonic progression in terms of these three groups. In the field of artificial intelligence, GTTM (formulated by Lerdahl and Jackendoff, 1983) also focused attention on these musical functions and proposed a parallelism between musical and linguistic structures. We, however, ask how these functional harmonic progressions arise: from a spontaneous or inevitable continuity of each harmony, or from the intentional design of music. The objective of this study is to examine, with a computer simulation, under which conditions associated with cognitive features the continuity of harmonic progression emerges, and to verify the impression made by the generated harmonic progressions in a psychological experiment. We construct a multi-agent composition model in which musical note agents interact with other agents on the basis of cognitive features. We specifically examine sensory consonance, originally used for the relationship between two tones sounding simultaneously (spatial consonance), as a basic feature of music cognition. Additionally, we propose to extend the concept of consonance to the time dimension (temporal consonance), in consideration of short-term memory (STM). The agents learn suitable relationships with other agents through reinforcement learning. We generate harmonic progressions (progressions of 4-8 triads) through the simulation. As a result, the generated harmonic progressions that take spatial and temporal consonance into account were found to have good continuity in a psychological experiment using image evaluation. They are, however, not always identical to music theory (tonal functional harmony). We believe these findings are important for reconsidering the semiotic approach to music from a psychological viewpoint.
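The two consonance notions, spatial (simultaneous tones) and temporal (across successive chords), can be given a toy formalization. The interval-class ranking and the overlap-based temporal score below are our own illustrative scoring, not the authors' model:

```python
# Rough consonance ranking of interval classes 0..6 (higher = more consonant);
# illustrative values only.
IC_CONSONANCE = {0: 6, 1: 0, 2: 2, 3: 4, 4: 5, 5: 5, 6: 1}

def interval_class(a, b):
    d = abs(a - b) % 12
    return min(d, 12 - d)

def spatial_consonance(chord):
    """Mean pairwise interval-class consonance of simultaneous tones."""
    pairs = [(a, b) for i, a in enumerate(chord) for b in chord[i + 1:]]
    return sum(IC_CONSONANCE[interval_class(a, b)] for a, b in pairs) / len(pairs)

def temporal_consonance(prev_chord, chord):
    """Fraction of pitch classes shared with the previous chord,
    as a crude short-term-memory proxy for temporal consonance."""
    return len(set(prev_chord) & set(chord)) / len(set(chord))
```

A progression could then be scored by summing both quantities over consecutive triads.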

Key words: Multi-agent model, Harmonic progression, Tonal functional harmony

[email protected]

76.4 Measuring melodic redundancy

Klaus Frieler

Institut für syst. Musikwissenschaft, Universität Hamburg, Germany

Background
Melodic redundancy can be defined as the amount of repetition contained in a melody (McMullen, 1974) and is related to melodic complexity. Shannon entropy, expectancy, and variability are the main concepts for measuring melodic complexity (Vitz, 1964; McMullen, 1974; Simon & Wohlwill, 1968; North & Eerola, 2000); these are related but not equivalent. We focus on the latter and propose a class of measures for melodic redundancy.

The empirical-comparative approach
The empirical-comparative approach (Müllensiefen & Frieler, 2004) tries to achieve optimal models for music cognition by comparing a large number of models and features with empirical data.


The complexity measures presented here could prove useful in this general framework and could serve as features for the statistical classification of melodies.

N-gram complexity
Abstract melodies are sequences of events such as pitches, intervals, or durations. N-grams are defined as sub-sequences of length N, and have been successfully applied to melodic similarity and music retrieval (Downie, 1999; Frieler & Müllensiefen, 2004).

The N-gram complexity is based on melodic sub-pattern self-similarity and is defined as the mean normalized frequency of all N-grams up to a certain maximal pattern length. The 1-gram complexity measures the number of different elements in a melody and therefore comprises older approaches (Vitz, 1964; McMullen, 1974). More global aspects can be captured by choosing the upper bound to be the length of the longest repeated pattern.
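One concrete reading of the measure can be sketched as follows; this is our own interpretation of "mean normalized frequency" (a frequency-weighted mean, i.e. the probability that two randomly drawn n-gram tokens match, averaged over n), not necessarily the author's exact formula:

```python
from collections import Counter

def ngram_redundancy(seq, max_n):
    """Redundancy of a melodic event sequence (pitches, intervals, ...):
    for each n up to max_n, the frequency-weighted mean normalized
    frequency of its n-grams, averaged over n. 1.0 = maximally repetitive;
    values near 0 = low redundancy (high complexity)."""
    per_n = []
    for n in range(1, max_n + 1):
        grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
        total = len(grams)
        counts = Counter(grams)
        per_n.append(sum(c * c for c in counts.values()) / (total * total))
    return sum(per_n) / len(per_n)
```

A constant sequence scores 1.0, while a sequence with no repeated elements scores much lower.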

First explorative results
An explorative investigation of interval N-gram complexity with about 2,000 folk and pop song phrases showed high variance and good discriminative power. Religious songs from eastern Poland show a higher mean complexity than German children's songs, which in turn are less redundant than the pop song phrases. These first results confirm the usefulness of our method.

References
Downie, J.S. (1999). "Evaluating a simple approach to music information retrieval: Conceiving melodic N-grams as text." PhD thesis, University of Western Ontario.

Eerola, T. & North, A.C. (2000). "Expectancy-based model of melodic complexity." In Woods, C., Luck, G.B., Brochard, R., O'Neill, S.A., & Sloboda, J.A. (Eds.), Proceedings of the Sixth International Conference on Music Perception and Cognition. Keele, Staffordshire, UK: Department of Psychology. CD-ROM.

McMullen, P.T. (1974). "Influence of number of different pitches and melodic redundancy on preference responses." Journal of Research in Music Education, 22(3), 198-204.

Müllensiefen, D. & Frieler, K. (2004). "Cognitive Adequacy in the Measurement of Melodic Similarity: Algorithmic vs. Human Judgments." Computing in Musicology, 13, 147-176.

Simon, C.R. & Wohlwill, J.F. (1968). "An experimental study of the role of expectation and variation in music." Journal of Research in Music Education, 16, 227-238.

Vitz, P.C. (1964). "Preferences for rates of information presented by sequences of tones." Journal of Experimental Psychology, 68(2), 176-183.

Key words: Melodic complexity, N-grams

[email protected]

76.5 Testing Lerdahl's Tonal Space Theory: listeners' preferences for performed tonal music

Angelo Martingo

CESEM, Universidade Nova de Lisboa, Portugal

Background
Prior research by the author (reported at the ESCOM meeting 2005, held in Porto, Portugal) showed Lerdahl's theory of tension and attraction, developed in Tonal Pitch Space (TPS), to be an efficient tool for understanding performed deviations from strict tempo and dynamics. The current, ongoing research is a perceptual analysis of the recordings analysed in that study.

Aims
This paper examines listeners' preferences for seven recorded interpretations of Beethoven's Waldstein Sonata (initial 8 measures of the second movement) in the light of Lerdahl's TPS theory.

Method
60 university music students participated in the experiment. Subjects were provided with an answer sheet and asked to rate the coherence and expressivity of each of seven recordings of the initial eight measures of the second movement of Beethoven's Waldstein Sonata.

The seven recordings used were the object of a prior study. In all but one recording, performed dynamics were found to correlate at a significant level either with attraction or with global tension values predicted by Lerdahl's TPS. Specifically, the dynamics correlate significantly with predicted values of attraction in three recordings, and in three other recordings the dynamics correlate significantly with predicted values of global tension. In order to test the validity of the answers, one recording was also used in which no significant correlation was found between dynamics and either attraction or global tension.

Results
Preliminary results show significant relations between listeners' preferences and the relative weight of attraction and tension observed in the recorded interpretations. It was found that, although the rating of the coherence and expressivity of a given interpretation varies across subjects, individual subjects' ratings are consistent across different recordings. In fact, for a given subject, recordings seem to be rated higher or lower according to whether they are shaped by attraction or by global tension.

Conclusions
The results support the consistency of Lerdahl's theory and its operational value as a cognitive framework and research tool. In particular, TPS seems to help elucidate listeners' preferences regarding performed tonal music. In addition, these results agree with existing empirical research and can therefore be taken to confirm that the same cognitive mechanism underlies both the production and the perception of expressive deviations.

Key words: Tonal pitch space, Perception, Expression

[email protected]

76.6 An acoustical and cognitive approach to the semiotics of sound objects

Daniele Schön1, Sølvi Ystad2, Mireille Besson1, Richard Kronland-Martinet1

1Institute of Cognitive Neurosciences of the Mediterranean, CNRS and University of the Mediterranean, Marseille, France
2Equipe Modélisation, Synthèse et Contrôle des Signaux Sonores et Musicaux, CNRS, Laboratoire de Mécanique et d'Acoustique, Marseille, France

In this work we investigated the relation between semantic processing in language and semiotic processing of "sound objects". More precisely, we designed an experiment to test whether there can be semantic priming across the linguistic and sound-object domains. To this aim, we selected sounds based on an acousmatic approach, meaning that the source that produced the sounds could not be recognized, giving the listener the possibility of experiencing the sound for its own sake, disconnected from its original context. For this purpose, we built a rather large corpus of short sounds and asked 10 subjects to write one or more words that they believed to be related to each sound. We then selected the 45 most consistent sound/word pairs. We used the selected material in an experiment in which words were briefly presented on a computer screen and were followed by a sound that could be related or unrelated. We asked subjects to decide as quickly as possible whether the word/sound pair was related or not. We also recorded the EEG signal from 32 scalp electrodes. Preliminary results indicate a negative component time-locked to sounds that resembles the N400 component described for semantic processing of words. Moreover, sound processing seems to be strongly influenced by the semantic role (related/unrelated) of the preceding words. Finally, precise acoustical analyses of the sounds give hints as to how acoustical properties relate to meaning. This study allows a better understanding of the cognitive processing of complex sounds and its relation to linguistic semantic processing.

Key words: Sound object, Semiotics, Language

[email protected]


77 Timbre and Perception

77.1 Violin portamento: An analysis of its use by master violinists in selected nineteenth-century concerti

Heejung Lee

Teachers College, Columbia University, USA

Background
Portamento [shifting technique] is regarded as one of the most difficult and intricate aspects of violin playing. Given that each performer's decision to select a particular portamento is based on his/her musical interpretation, the use of portamento is very individualistic in nature. No pedagogical sources are available for students and teachers to generate critical thinking about when, why, and how violinists might employ portamento. The decision-making process regarding its use remains intuitive and amorphous.

Aims
Through the analysis of various portamento styles in real cases, using sound files and acoustical analysis, this study focused on comparative descriptions of the portamento styles of great artists. Providing an image record of sound in time allows the abstract phenomenon of portamento playing to become more accessible for both scientific inquiry and instructional conversation.

Method
Portamento styles were examined in real performances by master violinists: Heifetz, Huberman, Kreisler, Mutter, D. Oistrakh, Perlman, Shaham, and Vengerov. Target Intervals (TIs) in which portamenti were employed were extracted with computer software from the first movements of the Brahms, Lalo, Mendelssohn, and Tchaikovsky concerti. The selected TIs were analyzed, considering musical factors and individual characteristics of the performers, including their portamento styles in multiple performances of the same work. Computer software generated visual images of portamento execution; these spectrographs displayed the progression of pitch change and intensity over time, as well as patterns and types of portamento.

Results
Results showed that the performers tended to agree more on the type of portamento in descending intervals. B-portamento, occasionally called the French slide, and L-portamento, known as the Russian slide, were associated with Kreisler and Heifetz, respectively. Portamento patterns identified each violinist's idiosyncratic performance style. The older generation of violinists tended to use portamento more frequently than the younger generation. Performers tended to retain the same type of portamento across performances at different periods of their lives.

Conclusions
It was clear that master performers tended to use portamento as a highly personalized device to exhibit their musicianship, a way to differentiate their performance style from others. Visualization of aural perception, with the use of technology, may open up new possibilities for both teaching strategies and scientific research on musical performance.

Key words: Violin portamento, Visualization of aural perception

[email protected]

77.2 Perceptual correlates of violin acoustics

Claudia Fritz1, Ian Cross1, Jim Woodhouse2, Kevin Weaver1, Ulrike Petersen1

1Centre for Music and Science, Faculty of Music, University of Cambridge, UK
2Department of Engineering, University of Cambridge, UK

This paper presents the results of a preliminary series of experiments exploring the extent to which listeners display consistent patterns of preference and discrimination with respect to violin sounds, an issue which has received little prior attention in the literature but which is of great interest to violinists, violin makers and psychologists.

The principal characteristics used by violin makers to differentiate between instruments relate to the physical features of the violin body. The present study provides a test of a method which enables the same performance to be replayed on different "virtual violins", and it has yielded preliminary data on the abilities of different groups of listeners to indicate preferences for, and to discriminate between, particular violins on the basis of sound alone.

Recordings of real performances were made using a bridge-mounted force transducer, giving an accurate representation of the signal from the violin string. These were then played through filter sets corresponding to the admittance curves of different violins. A preliminary experiment used three violins selected as differentiable on the basis of informal listening tests, using both single notes and phrases to explore listeners' abilities to discriminate between pairs of violins. In the second experiment, one violin was used as a basis on which modifications of different magnitudes were applied to resonance peaks in order to determine thresholds of discrimination. Three groups of listeners participated in both experiments: non-musicians, violinists, and non-string-playing musicians.
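The "virtual violin" idea (a recorded string signal replayed through a filter representing a violin body) can be sketched as follows. This is a toy reconstruction: the impulse response, resonance values, and test signal are our own placeholders, not the admittance data used in the study.

```python
import numpy as np

fs = 44100                      # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of audio

def body_impulse_response(resonances):
    """Toy body filter: a sum of exponentially decaying sinusoids,
    one per (frequency_hz, decay_rate) pair."""
    return sum(np.exp(-d * t) * np.sin(2 * np.pi * f * t) for f, d in resonances)

# A toy "string signal": a sawtooth-like sum of harmonics of 196 Hz (G3),
# standing in for the bridge-force recording.
string = sum(np.sin(2 * np.pi * 196 * k * t) / k for k in range(1, 10))

# Convolving the string signal with a body impulse response yields the
# "virtual violin" sound; different resonance sets give different violins.
ir = body_impulse_response([(280.0, 60.0), (460.0, 80.0)])
virtual_violin_sound = np.convolve(string, ir)[: len(string)]
```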

Results of the first experiment indicated that all groups of listeners were able to discriminate between violins when single notes were used as stimuli. However, the non-musician group performed poorly when phrases were employed (discriminability index d' < 1). Moreover, the violinist and musician groups produced patterns of responses that differed significantly from each other for the phrase stimuli.

Results of the second experiment showed that the threshold depended, not surprisingly, on the type of modification and on the note chosen as input, but did not depend on the musical training of the subjects.

Key words: Violin, Timbre, Discrimination


[email protected]

77.3 Construction of a harmonic phrase

Naomi Ziv1, Mariateresa Storino2, Luisa Bonfiglioli2, Roberto Caterina2, Mario Baroni2

1The Max Stern Academic College of Emek Yizre’el, Israel
2University of Bologna, Italy

Background
Previous studies (Krumhansl, 2001, 2005; Tillmann, Bharucha, & Bigand, 2000) have established that the "average" listener possesses a certain representation of tonal grammar. This acquired knowledge, stored in long-term memory, guides listening and helps create a schematic representation of musical pieces. Most studies (Povel & Jansen, 2002) either require participants to judge the appropriateness of short harmonic or melodic sequences, or ask participants to reconstruct given musical pieces through the puzzle paradigm (Deliège, 1987, 1989, 2001; Koniari, Predazzer & Mélen, 2001).

Aims
The aim of the present study was to examine whether grammatical knowledge of the tonal idiom is strong and stable enough to direct the active construction of a sequence of chords. The subjects were divided into several groups corresponding to several levels of presumable competence, from a minimum to a maximum. Their competence was assessed according to different criteria, in order to avoid an overly simple distinction between musicians and non-musicians.

Method
60 participants took part in the study. Participants were presented with 8 piano chords (I I II III IV V V7 VI) in random order. The participants were asked to listen to each chord and then to combine the chords in order to create a satisfying musical sequence. Participants were allowed to try out as many combinations as they liked.

Results
Two main criteria were chosen to evaluate the results: a positive criterion (a perfect cadence, i.e. subdominant, dominant and tonic, in the final position) and a negative one (the presence of a dominant followed by a subdominant). According to these criteria and their possible mixtures (analyses focused on the positioning of the various chords in the sequence), the results were organized from a maximum to a minimum of apparent tonal competence. Preliminary results seem to indicate that, although a certain theoretical musical competence linked to musical studies seems to be necessary in order to achieve accuracy and precision in the construction task, the relationships found between levels of musical competence and the correctness of the results are neither simple nor clear.

Conclusions
The research shows the necessity of establishing more accurate distinctions among the participants in tonal tests, beyond the two traditional categories of "musicians" and "non-musicians".

Key words: Tonal grammar, Construction

[email protected]


77.4 Establishing an empirical profile of self-defined "tone deafness"

Karen Wise, John Sloboda

Keele University, UK

Background
Recent research has suggested that around 16% of Western adults self-define as "tone deaf" (Cuddy, 2005). However, there is a lack of consensus in the literature regarding the exact nature of "tone deafness", and even whether it really exists. One candidate for a formal definition is "congenital amusia" (Peretz et al., 2003), which is characterised by dense perceptual deficits limited to the musical domain. However, most people self-defining as tone deaf have no such perceptual difficulties (Cuddy, 2005). A belief that one is musically impaired can have consequences for a person's musical engagement (e.g. avoidance, inhibition), and such beliefs can be socially generated and maintained, for example through negative judgements of one's singing (Knight, 2000; Lidman-Magnusson, 1997). Recent qualitative work has shown that in the definitions of the general population, "tone deafness" is closely linked with poor perceived singing ability (Sloboda, Wise & Peretz, 2005), suggesting the need to extend investigations of musical difficulties to include production abilities and self-perceptions.

Aims
The present research aims to construct an empirical profile of “tone deafness” through behavioural and self-report measures. It aims to discover whether self-defined tone deaf people show any pattern of musical difficulties relative to non-tone-deaf people, and to offer possible explanations for those difficulties (e.g. perceptual, cognitive, productive, motivational).

Method
Thirty self-reporting tone deaf (TD) and thirty self-reporting non-tone deaf (NTD) participants are being compared on a comprehensive range of measures covering aspects of musical perception, cognition, memory and production (vocal and non-vocal). During testing, participants are asked to rate their own performance. They are also asked about their musical history and their reasons for considering themselves TD or NTD.

Results
This is work in progress and results will be presented at the conference. It is expected that the NTD group will perform better than the TD group in production tasks, but not necessarily in perception tasks. The TD group may also show more negative and/or distorted self-perceptions than the NTD group.

Conclusions
The data will shed light on the relative roles of perception, production and self-views in the experience of being “tone deaf”. The research will extend knowledge of musical difficulties beyond perception and memory, and may also suggest future interventions.

Key words: Tone deafness, Perception, Production

[email protected]


Saturday, August 26th 2006

77.5 Tonal function modulates speed of visual processing

Nicolas Escoffier1, Barbara Tillmann2

1University of Georgia, Department of Psychology, Athens, Georgia, USA
2CNRS-UMR 5020 & IFR 19, Lyon, France

Background
Harmonic priming studies have provided evidence that musical expectations influence sung phoneme processing (Bigand et al., 2001): phoneme identification is facilitated for harmonically related targets that function as tonic chords in comparison to less-related targets that function as subdominant chords. The observed data pattern suggested two interpretations: a) the processing of phonetic and musical sounds is not independent, but interacts at some level of processing; and b) tonal functions of target chords create harmonic accents that influence phoneme identification in vocal music via listeners’ attention, a process comparable to prosodic cues influencing phoneme monitoring in speech (Jones & Boltz, 1989).

Aims
Our study aimed to investigate whether the influence of tonal function extends to the processing of visually presented syllables (Experiments 1 and 2) and geometric forms (Experiment 3).

Method
Participants listened to harmonic sequences (ending on related or less-related chords) as background music, while seeing visually displayed syllables (Experiments 1 and 2) or geometric forms (Experiment 3) presented synchronously with each chord. For target syllables or forms presented at the same time as the last chord of the sequence, participants made speeded identification judgments on the syllable (is it DI or DU?) or the form (i.e., distinguishing visually similar forms). In Experiments 2 and 3, the target item was indicated by a color change to eliminate any ambiguity about when to respond.

Results
Correct response times for both syllable and form identification were faster when a related ending chord was presented at the same time than when a less-related ending chord sounded.

Conclusions
The present findings suggest that the influence of musical expectations on syllable processing is not restricted to some specific interaction between music and language processing, but might be explained by attentional processes that are modulated by tonal functions and linked to expectations and temporal integration. This hypothesis is based on the attention theory proposed by Jones (1987), which links musical structures and expectations to attentional cycles. Musically important (i.e., stable) events, which are strongly expected and easily integrated with the preceding information, might thus facilitate simultaneous processing of additional information.

Key words: Audiovisual interactions, Harmonic priming, Attention

[email protected]

77.6 Music and speech as languages of sound

Alessandra Padula



Speech and music are both based on sounds which are uttered and received, and which can be organized in order to make “sense”.

If we abandon the concept of music as “the art of sound”, we can define music as “the language of sound” and place it in the set of non-verbal languages. It then becomes interesting to investigate which characteristics make music a language. This study tries to reach this goal by means of a detailed comparison between speech and music, also referring to considerations and research produced in past centuries in the fields of musicology, linguistics, literature, music psychology, . . . This comparison, supported by several examples, is first made between speech and music and then extended to other types of codes (iconic-visual, animal, . . . ). The analysis first examines the design features of speech and music, and then the different functions that may be taken by verbal and sound-musical messages. It then analyses the coding process of an oral verbal message, of a sung musical message and of a played musical message.

Although many experts have denied that music can carry any meaning other than itself, this study shows how musical messages carry and communicate a meaning/sense that can be understood in two different ways:

• in an empathic/intuitive way

• in a rational way

The results of this study may be useful to:

• music perception and cognition experts

• music teachers

• composers

• music performers (players and singers)

• music therapists

• music psychologists

Key words: Languages of sound

[email protected]


78 Keynote Lecture II

78.1 Ethnomusicological research on the organization of musical time

Simha Arom
Emeritus Research Director, LMS - CNRS, Paris, France

I have spent about 40 years studying African music, an investigation that led me to reexamine, then to reject, the Western conception of the organization of time in music. My presentation stresses that the temporal structuring of music results from a combination of concepts and principles to be found in all cultures. I would like to address the polysemic nature of the notion of meter and the confusion that has been established between the three notions of Meter, Measure (i.e. the equivalent of “bar”) and Rhythm. I intend to rely on a double comparison - both synchronic and diachronic - involving, on the one hand, musical systems that are practiced today in Subsaharan Africa, and on the other hand, those that have been practiced in Europe from the Middle Ages and Renaissance to this very day. In terms of perception, what Western tradition defines as “measure” is in fact a manifestation which essentially pertains to rhythm. The very concept of measure derives its meaning from the practice of musical writing: “Measure” has no existence in itself. The world’s musical production can be divided into two large categories: (1) measured musics, in which every value has a strictly proportional relationship to every other value; (2) other musics which, resisting any standardization of time, reject this principle of proportionality. The notion of measure has exerted an extraordinary influence on the whole of Art music since the Baroque period. Only in the XXth century did some composers dare to free themselves from its constraint. Yet the imperative of a correct performance of music, no matter how complex, cannot justify such constraints. This is shown by the history of music. In fact, at the time of Ars Nova, the notion of “measure” had not yet emerged. The tactus, which was a simple time standard, provided synchronicity of the different parts involved in performance.
The articulation of durations in many orally transmitted musics - especially in India and Subsaharan Africa - relies on a principle which is identical to that of the medieval tactus. From being a mere graphical convention, the notion of measure gradually succeeded in modifying the very conception of Western music’s temporal organization.

Definitions
I intend to show that the notion of meter, in its current use, involves at least three different, if not opposite, meanings. I would define meter in the following way: in relationship to a constant value taken as a standard, Meter means the determination of time intervals that can be obtained either by multiplication or by division - without any dimension of hierarchy. Accepting such a definition leads to the following axiom: Meter is about the standardization of time in equal quantities, whereas rhythm is about the modalities according to which different durations are assembled. Rhythm is difficult to define, mainly because of its interweaving with meter. When the notion of measure was introduced into the musicological vocabulary, the idea of a regularly reiterated accent was introduced with it. Since then, the notions of “meter” (as measure) and of “rhythm” have become indissociable. Perceiving a sound sequence as a rhythmical form necessarily requires that at least one of its constitutive elements be marked by a feature that contrasts it with the other constitutive elements. Three types of marks are capable of providing this contrast: accentuation, modification of tone-color, and alternation of durations. In other terms, as soon as you find contrast, you find rhythm. A form that unravels in time is a rhythmical form. By rhythmical form I mean any sound sequence whose delimitation relies on at least one of the marks I have mentioned. The relationship between a rhythmical sequence and the standard value is different depending on whether the latter can be divided. This means that there exist two types of standards: the pulsation (i.e. the equivalent of the tactus) and what I have coined the “minimal operational value” (i.e. the equivalent of the chronos protos of Ancient Greece).

The aksak rhythm and its cognitive implications
Let us look at music with an irregular, asymmetrical periodicity, the tempo of which is too fast to allow dividing the minimal value. In this case, the standard can only merge with that value. Such is the situation with the aksak rhythm (from a Turkish word borrowed from Ottoman musical theory, meaning “lame”, “irregular”). Note that the situation I am addressing here seems to be the only one in music that involves an interference between the variable of tempo and one of the parameters that account for its structural characteristics.
Asymmetrical forms - that can only be obtained through addition or multiplication - are quite frequent in the traditional music of the Balkans. Their lame character results from groupings based on juxtaposing binary and ternary quantities. The modalities of such a juxtaposition determine the articulation and shape of the aksak. Given the interaction of their irregular character and of their fast tempo, the aksaks that consist of prime numbers cannot be reduced to an isochronic pulsation. The only criterion for their temporal organization concerns the way their constitutive values are grouped. Such a grouping, as we have seen, pertains to rhythm. This type of aksak thus offers the most condensed temporal form possible. Everything in it merges. Form is delineated by a periodic frame, in which the metric scaffolding and the articulation of the rhythmic content are one and the same. In other terms, we are left here with a single level of articulation: that of the form itself. This entails an interesting cognition issue, concerning the discrepancy between the structural dimension of the aksak and the way the aksak is perceived in a given cultural context.

The double articulation of rhythm
There is general agreement on the fact that the elements that constitute “measure” have to be perceptible. If such is the case, it means these elements pertain to rhythm. Thus Measure, inasmuch as it involves the regular recurrence of a mark, thereby constitutes a first level of rhythmic articulation. As long as the rhythmic articulation of a musical utterance is aligned on the regularity of the measure - as long as it is commetrical - we are left with only this first level. As soon as the articulation antagonizes this regularity, we find a second - contrametrical - level of rhythmic articulation. Indeed, the perception of the second level is necessarily determined by the underlying regularity of the first level, the one established by Measure.
The principle of this double articulation of rhythm is the very basis of the temporal organization of Western Art music, from the end of the Renaissance to this day. From the point of view of perception, the conflict between the regularity of Measure and the contrametric articulation of Rhythm generates an ambivalence directly related to the amount of divergence between the two elements. Adding - i.e. superposing - any additional contrametric rhythm automatically increases the amount of ambivalence. Consequently, the stronger the antagonism between the various rhythmical levels, the higher the level of ambiguity and the more condensed the musical content.



[email protected]



Thematic Index

Absolute and relative pitch, 454Absolute pitch, 112, 276, 483, 490, 492Absorption, 370Abstraction, 176Acculturation, 264Achievement behavior, 346Acoustic information, 363Action, 375Action planning, 272Active learning, 209Adaptation, 418Adaptation effect, 60Adjectives, 53Adolescence, 316Adolescent identity, 70Adolescent musicians, 321Adult education, 487Advertising, 69Aesthetic judgement., 421Aesthetic response, 92, 227Aesthetics, 414Affect, 302, 352Affective experience, 98Affective priming, 221Affective response, 66, 180Age, 198Age differences, 204Aggression, 292Algorithmic music composition, 559Algorithmic prediction, 131Amusia, 156, 274Amusia evaluation, 119Analyse, 545Analysis, 227, 444Analysis algorithms, 391Analysis methods, 424Analysis of performances, 383Ancient greek modes, equitonics, diatonics,

555Animals, 166Argument, 348, 498Arousal-mood hypothesis, 136Art, 206Articulation, 432Articulation style, 443

Artificial intelligence, 559Arvo pärt, 110Assessment, 238, 322Assessment and support, 275Asymmetry, 490Attention, 425, 569Attitudes, 435Audio alignment, 303Audio description, 257Audio recordings, 478Audio-visual integration, 373Audio-visual perception, 110Audiovisual interactions, 569Auditory feedback, 502Auditory imagery, 272Auditory memory, 452Auditory perception, 342Auditory stimulus, 466Auditory temporal order judgments, 246Auditory-motor interactions, 247Aural recognition, 226Autism, 184, 445Autistic spectrum disorder, 121Autobiographical memory, 386Autocorrelation, 304, 380Automatic analysis, 291Automatic arrangement, 239

Background music, 204Baroque and romantic, 226Beat, 458Beat induction, 304Beat tracking, 162, 303BGM, 91Bi-manual coupling, 484Bilateral model, 485Bimusicality, 68Biological heritage, 140Blind, 172Blue chords, 429Body, 252Body communication, 336Body motion, 56Body movement, 105, 475Bone conduction, 168Borderline personality disorder, 440

Page 575: Abstract Book

575

Boundary detection, 285Brain, 63, 97, 159, 394Brain activity, 63Brain and physiology, 355Brain damage, 157Brain functioning, 277Brain oscillations, 451Brain plasticity, 172Brainfunction, 262Brass performance, 179Brazil, 550Bullismo, 477

Carrier type, 284Categorical perception, 236Categorization training, 520Cathedral choristers, 349Cello performance evaluation, 364Central and vegetative nervous system, 62Central coherence theory, 121Chamber music, 122Channel theory, 113Children, 85, 174, 202, 398, 408, 503, 550Chill, 114Chills, 93Choir singing, 220Choirs, 178Choise of music, 71Choral pedagogy, 448Chord discrimination, 522Chord priming, 211, 426Chord-form, 480Chorus, 467Chromatic change, pitch categorization, 555Chronobiology, 294, 427Classical language, contemporaneous language,

145Classical music, 439Classical singing, 248Classification, 137Classroom music, 206Co-regulation, 292Cochlear implant, 395Cognition, 79, 116, 503, 545Cognition - emotion, 62Cognitive abilities, 112Cognitive biases, 223Cognitive development, 398, 484Cognitive evolution, 42

Cognitive features, 415Cognitive musicology education, 497Cognitive organization of music processing,

441Cognitive psychology, 192Cognitive representations, 549Cognitive schema, 410Cognitive state, 267Collective creation, 72College teaching, 496Communication, 154, 184, 192, 244, 250,

321, 338Communicative musicality, 41, 440Comparative psychology, 492Comparative study, 443Competitive state anxiety inventory-2, 137Complex needs, 275Complexity, 328Composers johnson, feldman, ligeti, nelson,

nakas, mazulis, 305Composition, 133, 238, 475Compositional process, 241Comprehensive musicianship, 448Computational event analysis, 302Computational model, 90Computer aided analysis, 127Computer aided music composition, 83Computer model, 489Computer models, 308Computer-aided composition, 126Computer-generated music, 54Computerbased composition, 148Concert dress, 526Congenital amusia, 277Congruence-association, 370Conservation, 176Conservatoire, 144, 322Conservatory music education, 239Consonance, 95, 522Construction, 567Contemporary music and mathematics, 305Contemporary music composition, 210Context, 270, 314, 320, 342, 343Context and culture, 344Continuator, 388–390, 514Continuity / discontinuity, 40Continuous response, 271Continuous response measurement, 168

Page 576: Abstract Book

576 Thematic Index

Contour, 326Converging evidence, 270Convexity, 95, 556Cooperative tapping, 377COPE, 78Coping strategies, 463Counting, 461Country and hip-hop, 410Creating original operas, 204Creative music, 185Creative thinking in msuic, 406Creativity, 79, 116, 144, 328, 407, 529Critical band, 52Critical evaluation, 498Cross modal expression, 105Cross-cultural, 354Cross-cultural music cognition, 220Cross-cultural research, 106Cross-cultural study, 376Cross-domain mapping, 286Cross-modal, 475Cross-modal transfer, 58Cross-modality, 287Crossmodal auditory-visual perception, 88Cry, 106Cultural differences, 119Cultural identity, 130Cultural interaction, 140Culturally-coded timbres, 134Culture, 490

Dance, 141Dancing, 216Data reduction, 52Dekasegi movement, 69Development, 127, 214, 224, 314, 315, 458,

551Dichotic pitch, 493Didactical experience, 442Directionality, 278Discrimination, 566Diseases, 473Distance, 519Distraction, 197Domain specificity, 396Dosimeters, 107Double-bass, 434Downbeat accent, 508Dramatised performance, 239

Drumming, 129Dual-task methods, 425Duration judgments, 481Dynamic attending, 509

Ear playing, 419Early childhood, 251, 406, 551, 552Early childhood music education, 390Ecstasy (MDMA), 66Educational technology, 497EEG, 109, 463Effect of music, 62Effects, 185Effects of formal training, 540Effects of music, 130Efferent influences, 120Efficiency, 384Electro- acoustic music analysis, 424Elementary school, 127ELM, 69Embodiment, 262EMI, 559Emotion, 63, 64, 79, 84, 89, 94, 100, 103,

106, 114, 124, 131, 137, 153, 154,174, 219, 221, 222, 224, 244, 271,315, 515

Emotion and traditional music, 134Emotion in music, 96, 118Emotion regulation, 352Emotion-relevant trait, 101Emotional development, 174Emotional effects of music, 223Emotional intelligence, 175Emotional intensity, 74Emotional response, 74Emotional responsiveness, 96Emotions, 85, 93, 150, 152, 220, 429Emotions and music recognition, 138Emotions in music, 354Enculturation, 490Entrainment, 188, 433, 458, 538, 549Environment, 506Episodic memory, 85ERP, 99, 125, 278, 452Esthetical empathy, 340Estonian folk song, 416Ethnolinguistics, 420Ethnomusicology, 164, 382, 419, 420, 572Eupraxia, 277

Page 577: Abstract Book

577

Event detection, 486Event related brain potentials, 109Event-related potentials, 131, 247, 431Everyday, 551Everyday life, 100Evolution, 166, 328Expectancy, 243, 375Expectancy profiles, 538Experience of time, 40Experimentation framework, 415Expert listener, 226Expert performance, 434Expertise, 386Exposure, 57, 97Expression, 141, 215, 563Expressive gesture, 41, 525Expressive gestures, 141Expressive performance, 192, 324, 524Expressive timing, 48Expressive timing and tempo, 57Expressiveness, 226External representations, implicit conceptions,

145Eyebrows, 430

Facial expression, 124, 430Feeling adjustment of music, 96Female, 349Fetus, 97Film, 371Film music, 86, 110, 475First grade music education, 507Flow, 142, 262, 515Flow experience, 514fMRI, 63, 76, 92, 248, 267, 379fNIRS, 63Focus groups, 198Fractional noises, 126Framing, 292Functional neuroimaging, 276Functions of music, 502Functions of singing, 130

Gamut of sounds and silences, 241Gender, 205Gender effects, 346Gender stereotyping, 205Genetic factors in pitch perception, 209Genre classification, 326

Geometric representations, 400Geometry, 259German fricatives, 415Gestalt perception, 410Gesture, 74, 192, 262, 338Gestures of expressive value, 544Global musical context, 426Gregorc, 129Groove, 44Grouping structure, 116Guitar, 480

Hand-clapping songs, 484Harmonic expectation, 330Harmonic priming, 569Harmonic progression, 561Harmonic relationships, 521Harmony, 123, 208, 224, 259, 274, 278, 401,

534Harmony, language, processing, 411Harpsichord sound, 488Hausdorff metric, 336Healing music, 91Healing songs, 420Health, 64Hearing-impairment, 94Heart rate, 66, 466Historiometry, 104Hospital, 196Human development, 42Human symbolic thought, 42Humdrum, 378Hybrid acoustic/digital instruments, 363Hypnotic induction, 290Hypo mode, 469

Iconicity, 372Identity, 69, 144, 198Identity development, 106Image, 457Image-schemas, 286Imagination, 271, 457Imitation, 251, 419Immigration, 69Implication, 208Implicative processes, 210Implicit cognition, 453Implicit knowledge, 156, 210Implicit measures, 533

Page 578: Abstract Book

578 Thematic Index

Implicit memory, 385Impression, 450Improvisation, 85, 110, 141, 251, 291, 460,

529Indicators, 232Individual differences, 189Induction, 79Infancy, 320, 342–344Infant, 436Infant music perception, 230Infants, 214–217, 476Informal learning, 505Information technology, 240Information theory, 285Instrumental movement, 75Instrumental teaching, 205Integrated supervision, 485Intelligence, 175Inter-modality, 154Interaction, 321, 455Interaction child/machine, 389Interaction infant-machine, 142Interaction intersubjective, 141Interaction mother-infant, 42Interactive music systems, 329Interactive reflective musical systems, 390,

391Interactive reflexive musical systems, 388,

389Interaural timing difference (ITD), 493Interest, music, children, development, 548Internal hearing, 168Internet research, 84Interpretation, 427Interpretation of emotion, 174Interval cycles, 256Intonation, 95, 178, 455Investigation of natural sound, 488iPod, 491Isochronous and non-isochronous meter, 433,

538Isochrony, 187, 490

Japanese court music gagaku, 164Jazz, 45, 110, 184, 226, 434Jazz improvisation, 253John adams, 234

Key-distance, 401

Language, 76, 99, 262, 339, 378, 459, 465,564

Language acquisition, 384Language learning, 128Languages of sound, 570Learning, 144, 324, 465, 514Learning improvisation, 460Learning processes in music, 479Learning strategies, processing levels, 145Learning styles, 129Learning system, 467Least-squares estimation, 480Lexical tone, 171Lexical tone perception, 412Lexical tones, 448Life-span development, 104Line bisection, 120Linguistics, 448Linstening experiences, 326Listener, 151Listening strategies, 397, 528Locomotion, 150London symphony orchestra, 391Long-term memory, 60Low-pass filtering of waveform, 556Lullaby, 320Lyrics, 82, 445

Machine arrangement, 108Machine learning, 267, 326Major/minor, 534Markov processes, 83Mathematics, 123Meaning, 339, 560Meaning and music, 475Measurement, 533Measures, 474Media, 372, 373Media literacy, 549Medicine, 536MEG, 162, 379Mellion transform, 50Melodic accent, 416Melodic accent perception, 410Melodic categorization, 230Melodic complexity, 562Melodic contour, 433Melodic expectancy, 210Melodic graphic representation, 288

Page 579: Abstract Book

579

Melodic grouping, 285Melodic imagery, 65Melodic interval, 519Melodic intervals, 105Melodic memory, 263Melodic perception, 326, 436Melodic similarity, 336Melodic/rhythmic continuity, 538Melody, 47, 114, 448, 456, 459, 479Melody perception, 65, 452Melody recognition, 242Melody structure, 263, 410Memetics, 90Memories, 347Memorization, 364, 384, 418Memory, 56, 82, 296–299, 326, 539Memory effect, 383Memory for tempo, 58Mental representations, 419mental retardation, 292Metalanguage, 288Metaphor, 287Meter, 162, 234, 458, 490Methodology, 103, 529Metric ambiguity, 508Metric hierarchy, 416Metrical structure, 549Metronome numbers, 464Meyer, 352MFCC, 50Micro-timing, 45Microtiming, 188MIDI, 102Minimalism, 234Mirror neurons, 247, 340Mismatch negativity, 247Mixed music, 469Mobile listening, 491Mode, register, consonance, 221Model building, 406Model selection, 48Modeling, 208Models of timing, 189Modern jazz-style, 239Modern popular songs, 87Monophony generation, 97Mood and arousal, 98Mood induction, 100

Mood regulation, 316, 317Mother language, 426Mother-infant, 321Mother-infant vocal interactions, 440Motion analysis, 180Motion capture, 165Motion tracking, 525Motivation, 323Motivation and practice, 506Motor theory, 262Motor disorder, 158Motorics, 117Movement, 46, 82, 406Mozart effect, 136Multi-agent, 133Multi-agent model, 561Multi-modal perception, 365Multidimensional scaling, 255, 352Multimedia (or media), 370Multimedia messagging, 244Multimodal, 372Multimodality, 336Multisensory processing, 246Muscolar tension arousal, 472Music, 89, 93, 99, 100, 114, 123, 125, 219,

378, 385, 395, 461Music therapy, 206Music analysis, 242Music and cognition, 396Music and dance, 544Music and driving, 199Music and emotion, 355Music and exercise, 200Music and hospitals, 200Music and internet, 71Music and language, 338, 394, 560Music and media, 371Music and memory, 483Music and metronome, 437Music and movement, 552Music and society, 491Music and speech, 98, 235, 412Music and time, 474Music assessment, 406Music complexity, 131, 554Music education, 230, 238, 240, 309, 396,

406, 408, 428, 435, 442, 443, 448,454, 505, 549

Page 580: Abstract Book

580 Thematic Index

Music emotion, 42Music ethnology, 88Music evolution, 90Music experience, 151, 152Music imagery, 120Music in everyday life, 314–317Music information or involvement, 439Music information retrieval, 131, 257, 415Music instruction, 398Music learning, 240Music lessons, 175Music listening, 222, 353, 552Music listening experience, 290Music making, 293Music meaning, 124Music medicine, 294Music memory, 171, 264Music pedagogy, 179, 499Music perception, 172, 354, 408, 422, 457,

554Music performance, 90, 137, 158, 171, 179,

478, 515, 523, 525, 535Music performance and theory, 499Music performance anxiety, 77, 78, 135, 137,

463Music practice, 90Music preference, 317, 337, 352Music preferences, 439Music psychology textbooks, 496Music reading, 159, 231, 540Music recall, 493Music structure, 79, 129Music teaching and learning, 453Music technology, 293, 309Music theory, 127, 133, 400Music theory pedagogy, 230Music therapy, 292, 293, 445, 485Music-education, 347Musical ability, 146Musical appreciation, 428Musical behaviour, 68, 182Musical boundary-detection, 397Musical cognition, 119Musical content analyse, 363Musical creativity, 72, 148, 388, 514Musical development, 165, 203, 275, 550Musical development in adulthood, 502Musical developpment, 142

Musical disorders, 159Musical education, 88Musical elements, 449Musical emotion, 533Musical expertise, 57, 247, 428, 431, 451,

485Musical expression, 365, 449Musical expressiveness, 82Musical feature extraction, 291Musical genres, 151Musical giftedness, 203Musical grammar, 330Musical hermeneutics, 133Musical imagery, 153, 231, 270Musical instrument sounds, 52Musical instruments, 51Musical interaction, 377Musical interpretation, 133Musical involvement, 290Musical meaning, 340, 424, 444Musical memory, 85Musical mind, 336Musical participation, 323Musical performance, 174, 321Musical phrase, 266Musical phrases, 538Musical preferences, 71, 127, 502Musical priming, 210Musical rendering, 113Musical representations, 528Musical rhythm aesthetics„ 421Musical role models, 70Musical salience, 471Musical sophistication, 232Musical structural factor, 450Musical structure, 510Musical structure processing, 121Musical synchronization, 164Musical tension, 362Musical transfer effect, 412Musicality, 485Musician’s body, 75Musicians, 120Mutually complementary parameters, 423

N-grams, 562Narrative, 42Narrative structure, 40Narrativity, 41, 111

Page 581: Abstract Book

581

natural schemata, 81Nature / nurture, 217Nearness and proximity, 336Negative asynchrony, 429Networked music, 72Neural mechanisms, 246Neural networks, 228Neuroplasticity, 451Neuropsychology, 157Neurosciences, 277New statistical method, 266New musical interfaces, 535New technologies, 391, 442Noise generation, 126Non native language perception and produc-

tion, 415Non-arts benefits, 204Non-diatonic chords, 211, 426Non-tonal music, 400Non-western music, 362Nonverbal communication, 180, 365nPVI, 426Number representation, 396

Online learning, 497Oral tradition, 418Organ, 363, 524Original and scrambled versions of music se-

quences, 545Ornamentations, 478Over-training, 158Overall structure, 521

P-centre, 302Pain, 197PANAS, 77Participation, illness, salutogenetic, informal,

well-being, 183Participatory discrepancies, 188Patients, 125Patients with dementia, 185Pattern, 306Pattern model, 258Peak experience, 151Peer learning, 505Peer teaching, 531Peirce, 422Perceived complexity, 302Perceived music qualities, 329

Perception, 79, 92, 116, 187, 196, 235, 243,255, 274, 395, 479, 523, 545, 557,563, 568

Perception and musical characteristics, 134Perception and performance, 464Perception of sound, 488Perceptions, 146Perceptual bias, 344Perceptual onset, 43Perceptual onset time, 302Perceptuo-motor description of mimetic com-

munication, 544Percussions, 391Performance, 45, 63, 74, 94, 109, 122, 144,

252, 296–299, 323, 373, 458Performance analysis, 303Performance anxiety, 321Performance assessment, 449Performance evaluation, 82Performance plans, 479Performance practice, 443, 479Performance rendering, 178Performance skill, 502Performance studies, 239Performance success, 506Performer, 151Personality, 64, 152, 199, 337Personality assessment, 438, 439Phenomenal perception, 433Phonatory processes, 231Phonology, 456Phrase perception in MIDI and performances,

383Phrase segmentation, 108Piano, 53, 102, 487, 558Piano performance, 430, 468, 502Pitch, 402, 520, 539, 557Pitch attraction, 256Pitch change, 47Pitch control, 171Pitch discrimination, 277, 485Pitch height processing, 210Pitch height sorting, 492Pitch memory, 476Pitch perception, 284, 446, 452Pitch proximity, 493Pitch register, 287Pitch representation, 396

Page 582: Abstract Book

582 Thematic Index

Pitch spelling, 556
Pitch structure, 482
Plasticity, 465
Play, 148
Playing isochronously, 236
Polyphonic, 306
Polyphonic music, 524
Polyrhythm, 379
Popular music, 197
Popular music genre, 410
Post-tonal music, 127
Practice, 296–299
Prediction models, 263
Preferences, 202
Preferred music, 199, 200
Prefrontal cortex, 76
Preschool children, 531
Prevention, 536
Preverbal vocalizations, 455
Primary education, 407
Primary school, 347
Priming, 560
Priming paradigm, 156
Processing speed, 396
Production, 170, 235, 568
Professional singers, 248
Proportion and symmetry, 241
Psychoacoustics, 432
Psychological morphology, 294
Psychology, 499, 504
Psychology of rhythm, 421
Psychomotor skills, 487
Psychophysics, 534
Psychophysiological responses, 100
Psychophysiology, 425, 472

Qualia, 517
Qualitative information flow, 113
Qualitative research, 468
Quality, 414
Quantum uncertainty principle, 423
Quotation, 110

Random splicing of waveform, 556
Rank ordering, 470
Real-time, 83
Real-time listening, 397
Real-time models, 304
Real-time perception, 54
Real-time visual feedback, 309, 324
Reciprocal feedback, 250
Recognition memory, 471, 538
Rehearsal technique, 122
Reinforcement learning, 97
Relative pitch, 209
Repertoire analysis, 104
Revised personality inventory, NEO PI-R, STAXI-2, 438, 439
Rhythm, 162, 164, 166, 187, 214–217, 380, 391, 419, 437, 459, 475, 486
Rhythm and melody interactions, 441
Rhythm and tempo, 189
Rhythm development, 426
Rhythm perception, 376
Rhythm performance, 507
Rhythm production, 376
Rhythmic development, 507
Rhythmic movement, 437
Rhythmic performance, 441
Rhythmic/melodic factors, 471
Richard Parncutt’s symposium, 496
Roughness, 362, 489

S-R compatibility, 402
Scale, 558
Schenkerian analysis, 482
School age, 170
Scratching, 535
Secondary schools, 116
Segmentation, 54, 128, 137
Selection, 322
Self-esteem, 503
Self-learning, 102
Semantic descriptors, 329
Semantic differential methods, 463
Semantics, 53
Semiotics, 339, 422, 564
Sensation, 74
Sense-making, 528
Sensorimotor synchronisation, 377
Sensory consonance, 133
Sensory integration, 481
Septo-optic dysplasia, 182
Set-class, 522
Set-class theory, 431
Shared intentionality, 140
Shepard tones, 520
Sight reading, 523


Sight-reading, 192
Silence, 510
Similarity, 112
Similarity perception symposium, 360
Simulation, 446
Singer, 89
Singing, 44, 85, 222, 414, 504
Singing activities, 435
Singing activity, 438, 439
Singing development, 349
Singing skills, 470
Singing voice, 168, 415
Social context, 68
Social interaction, 70
Social psychology of music, 106
Social representation of music, 75
Social representations, 69
Socio-cultural context, 250
Sociocultural awareness, 87
Software, 348
Song perception, 456
Song recognition, 114
Songs, 128
Sound object, 564
Sound parameters, 176
Sound pressure level, 467
Sound synthesis, 473
Sound-level exposure, 107
Source perception, 51, 150
Space, 402
Spatial design, 91
Spatial-temporal performance, 136
Spatio-temporal connectionist models, 355
Special needs students, 204
Specific language impairment, 384
Spectral, 112
Spectral analysis, 308, 489
Speech perception, 171, 436
Spontaneous otoacoustic emissions, 120
Stage fright, 77, 78, 135
State-anxiety, 223
Statistical learning, 330, 517
Steelbands, 382
Stereotypes, 526
Story, 174
Strategy, 192
Stream segregation, 327
Stress, 536
Stroop paradigm, 540
Structure, 352
Studying, 204
Style, 227
Style perception, 228
Stylistic rules and ideas, 81
Subjective evaluation, 450, 558
Subjective experiments, 556
Subjective perception of time, 545
Subjective rhythmization, 509
Swing, 46, 380
Switzerland, 443
Synchronisation, 466
Synchronization, 165, 235, 429
Synchrony, 43
Syncopation, 302
Synectic operator, 549
Syntax, 394, 401

Talk, 253
Tapping, 162, 235
Teacher education, 144
Teacher-pupil interaction, 346
Teacher-student relationship, 504
Teachers’ perceptions, 116
Teaching, 514
Teaching foreign languages, 87
Teaching improvisation, 460
Teaching musical instruments, 472
Teaching strategy, 531
Teaching styles, 407
Temperament, 101
Tempo, 46, 56, 60, 458, 464, 508, 539
Tempo drift, 235
Tempo perception, 58, 164, 443, 453
Tempo, loudness, 101
Temporal attention, 452
Temporal model, 258
Temporal processes, 493
Tension, 111, 510
Testing and measurement, 232
Textbooks, 348
The sense associated with the sounds, 545
Thematic relationships, 521
Theoretical paper, 498
Theories of chaos, fractals, algorithms in music, 305
Theory, 219
Theory of flow, 514


Theory testing, 48
Therapy, 196
Thought, 74
Timbral cues, 162
Timbre, 50, 51, 53, 129, 429, 463, 557, 566
Timbre analysis, 157
Timbre perception, 230
Time, 306, 479
Time duration, 88
Time perception, 461, 486
Time series analysis, 427
Time-line units, 266
Time-shrinking, 236
Time-stretching, 112
Timed identification task, 226
Timing, 43, 44, 47, 243, 375
Timing control, 164, 272
Tonal classification, 276
Tonal functional harmony, 561
Tonal grammar, 567
Tonal harmony, 170
Tonal hierarchies, 256
Tonal pitch space, 563
Tonal schema, 211
Tonal tension/attraction, 178
Tonality, 111, 212, 255, 482, 517, 519
Tonality estimation, 257
Tone deafness, 568
Tonotopic and periodic cue, 519
Tradition, 529
Traditional music, 555
Training, 264
Transfer-effects, 347
Transposed-instrument performance, 454
Trichord, 519
Trinidad and Tobago, 382
Tuning perception, 446
Turkish music, 419
TV advertising, 385
Twentieth century “art” music, 202
Two tonics, 469

Underlying voice-leading, 286

Variability, 343
Verbalization, 53
Vibrato, 308
Vibrato tones, 284

Violin, 566
Violin intonation, 171
Violin portamento, 566
Virtual pitch, 258
Visual disability, 182
Visual information in music performance, 481
Visual perception, 364
Visualisation, 90, 363
Visualization of aural perception, 566
Visuospatial processing, 120
Vocal development, 529
Vocal gestures, 347
Voice, 82, 106, 141, 252
Voice leading, 259
Voice separation, 327
Voice-leading, 519
Voicing, 239

Web experiment, 84
Well-being, 353
Wind instruments, 473
Wind orchestra, 108
Women, 526
Words, 114
Work, 353
Working memory, 384, 493

XVII century cantatas, 228

Young children, 112
Young people, 253

Zone of proximal development, 288


Author Index

Aarden, Bret, 242
Aaronson, Doris, 396
Abe, Jun-ichi, 114
Adachi, Mayumi, 216
Addessi, Anna Rita, 141, 281, 389, 513
Ahlbäck, Sven, 242
Aiello, Rita, 396
Akinaga, Seiko, 557
Aladro-Vico, David, 421
Albano, Fabio, 291
Alexandra, Russell, 215
Alexandre Journeau, Véronique, 73
Alexomanolaki, Margarita, 384, 421
Almoguera, Arantza, 74
Altenmüller, Eckart, 83, 84, 92, 113, 157
Alzina, Cécile, 544
Ambrazevicius, Rytis, 554
Amosov, Grigori, 422
Anastasia, Adriana, 423
Andreas, Roepstorff, 379
Anna, Pienimäki, 333
Anta, Fernando, 209
Arom, Simha, 419, 571
Artale, Giovanna, 291, 485
Ashley, Richard, 43, 191, 314, 337, 410, 424, 459
Atalay, Nart Bedin, 211, 425
Augustin, Dorothee, 59
Auhagen, Wolfgang, 55, 465
Aurélie, Fraboulet, 75
Aurélie, Helmlinger, 381
Ayari, Mondher, 545
Ayers, Lydia, 51
Azechi, Nozomi, 426

Bachmann, Kai, 426
Bacon, Hilda, 203
Baddeley, Alan, 493
Bailer, Noraldine, 280
Bailes, Freya, 54, 270
Baldizzone, Roberta, 110
Balzer, Hans-Ullrich, 293
Barale, Francesco, 183
Barbosa, Álvaro, 71

Barnett, Ruth, 241
Barone, Emilia, 549
Baroni, Mario, 227, 228, 281, 430, 567
Bartolo, Maria Giuseppina, 476
Batt-Rawden, Kari Bjerke, 182
Beauchamp, James W., 51
Beckett, Christine, 158
Beecham, Rowena, 395
Begosh, Kristen, 297
Beigman Klebanov, Beata, 265
Beken, Munir, 264
Bellemare, Madeleine, 52
Benassi, Mariagrazia, 87
Benjaman, Schogler, 141
Berger, Jonathan, 49, 168
Berman, Tamar, 305
Beronius Haake, Anneli, 353
Bersch-Burauel, Antje, 501
Besson, Mireille, 127, 464, 563
Betke, Margrit, 293
Beyer, Esther, 427
Bhatara, Anjali, 75, 121
Bialunska, Anita, 235, 428
Bigand, Emmanuel, 156
Birbaumer, Niels, 247
Boal Palheiros, Graça, 201, 407
Bock, Bettina, 560
Bogunovic, Blanka, 505
Bolelli, Roberto, 429
Bonastre, Carolina, 76, 77
Bonfiglioli, Luisa, 291, 430, 567
Boon, Roel, 235
Bosma, Martijn, 415
Boso, Marianna, 183
Boyer & Regine Kolinsky, Maud, 127
Brabec de Mori, Bernd, 420
Brandes, Vera, 293
Brandmeyer, Alex, 308, 323
Brattico, Elvira, 246, 431
Brennan, David, 538
Bresin, Roberto, 149, 244, 477, 534
Brodsky, Warren, 231, 483, 539
Brody, Carlos D., 187
Brosbol, Jens, 431


Broughton, Mary, 364
Brown, Helen, 498
Bruderer, Michael, 78
Bruhn, Herbert, 495
Brunetti, Riccardo, 432, 440, 470, 538
Bruno, Gingras, 524
Buchler, Michael, 233
Bugos, Jennifer, 396
Burdette, Jonathan H., 245
Burland, Karen, 146
Burnham, Denis, 170
Burt, Rosie, 143, 321
Busch, Veronika, 55
Butterworth, Brian, 276, 401
Buzzanca, Giuseppe, 228
Byron, Tim, 433
Bærentsen, Klaus B., 91

Cabrera, Isabel, 222
Cabrera, Leticia, 238
Calabretto, Roberto, 240
Callender, Clifton, 519
Caloiero, Claudio, 434
Camara, Aintzane, 435
Cambouropoulos, Emilios, 327
Campana, Daniela, 220
Campanino, Mario, 338
Canazza, Sergio, 240
Cangelosi, Angelo, 354
Capellini, Enrico, 470
Carbon, Claus-Christian, 59
Cardillo, Gina, 435
Carlotti, Simona, 513
Carlton, Lana, 197
Carranza, Raul, 414
Carter, Fern-Chantele, 274
Carugati, Felix, 281
Casas Mas, Amalia, 145
Casella Piccinini, Patrizia, 436
Casey, Alex, 401
Cassidy, Gianna, 198
Castro, Sao Luis, 464
Caterina, Roberto, 430, 547, 567
Cazarim, Thiago, 122
Cedervall, Jan, 79
Chaffin, Roger, 296–298
Chan, Michael, 559
Chen, Colleen, 298
Chmurzynska, Malgorzata, 282

Chon, Song Hui, 168
Chor, Ives, 43
Clynes, Manfred, 261
Cohen, Annabel, 370
Cohen, Dalia, 80, 278
Cohen, Les, 229
Coimbra, Daniela, 71, 437, 438
Collins, Nick, 161, 302
Coluzzi, Raffaella, 485
Colwyn, Trevarthen, 40, 214
Comerford, Lucy, 362
Comerford, Peter, 362
Connell, Claire, 65
Cook, Norman D., 533
Corazza, Leonardo, 81
Corballis, Michael, 119
Costa, Marco, 81, 220
Costa, Tommaso, 88
Costa-Giomi, Eugenia, 82, 229, 397, 474
Costabile, Angela, 476
Coutinho, Eduardo, 354
Criscuolo, Biancamaria, 122
Cross, Ian, 140, 255, 400, 508, 566
Cuddy, Lola L., 158
Cunha e Costa, Joana, 71
Custodero, Lori, 514

Dahl, Sofia, 234
Dalca, Ioana, 524
Dalla Bella, Simone, 235, 428
Dalmonte, Rossana, 70
Davidson, Jane, 251, 437, 503, 525
Davis, Caroline, 225
De Meyer, Hans, 328
De Simone, Domenico, 82
De Simone, Julie, 184
De Voogdt, Liesbeth, 328
Dean, Roger, 54
DeFonso, Lenore, 439
Delavenne, Anne, 440
Deliege, Irene, 331
Delogu, Franco, 411, 440
Demorest, Steven M., 65, 264
Demos, Alexander, 396
Desain, Peter, 64, 308, 323, 376, 508
Devaney, Johanna, 177
Di Lorenzo, Pietro, 335
Di Maio, Giuseppe, 335
Diaz, Maravillas, 547


Dibben, Nicola, 353
Dimitrova-Grekow, Teodora, 363
Dine-Young, Stephen, 105
Disley, Alastair, 53
Dixon, Simon, 303
Doffman, Mark, 45
Don, Taylor, 85
Dowling, W. Jay, 537
Draga, Zec, 489
Durbin, Emily, 314

Eck, Douglas, 303
Economidou Stavrou, Natassa, 144
Eerola, Tuomas, 164
Egermann, Hauke, 83
Ehlert, Ulrike, 352
Eitan, Zohar, 286, 332
Elisabeth, Tolbert, 42
Ellen, Dissanayake, 41
Elliot, Saltzman, 293
Emanuele, Enzo, 183
Emery, Schubert, 226
Emura, Norio, 107, 239, 480, 557
Erkkilä, Jaakko, 290
Eschrich, Susann, 84
Escoffier, Nicolas, 569
Eugenia, Costa-Giomi, 85
Eugenio, Pattaro, 460
Evans, Debra, 203
Evans, Paul, 153, 270

Fabian, Dorottya, 226
Feng, Dan, 192
Ferrari, Laura, 441, 513
Fink, Nathan, 85
Fischer, Timo, 163
Fitz, Kelly, 488
Folkestad, Goran, 237
Fombonne, Eric, 121
Fomina, Anna, 86
Franek, Marek, 336
Frassinetti, Francesca, 87
Frater, Deborah, 188
Friberg, Anders, 242, 534
Friederici, Angela D., 124
Frieler, Klaus, 358, 561
Fritz, Claudia, 566
Fritz, Tom, 88
Fujisawa, Takashi X., 533

Fujita, Rinko, 164

Gabrielsson, Alf, 150
Galati, Dario, 88
Galley, Niels, 523
Gasenzer, Elena-Romana, 535
Gattus, Tamara, 87
Gebhardt, Stefan, 63
George, Christine, 451
George, Hoplaros, 85
Geringer, John, 449
Geringer, John M., 442
Giglio, Marcelo, 443
Gilda, Urli, 462
Gimenes, Marcelo, 89
Ginsborg, Jane, 231, 297, 498
Giordano, Bruno, 50, 149
Giordano, Bruno L., 401
Giosmin, Nicola, 423, 443
Gloeckner, Nastja, 61
Gluschankof, Claudia, 551
Go, Tohshin, 444
Godoy, Rolf Inge, 262
Goebl, Werner, 90
Gorman, Misha, 293
Goshen, Maya, 173
Goto, Masataka, 469, 556
Goto, Yasuhiro, 90
Granot, Roni, 332, 520
Grant, Phillip, 63
Grassi, Massimo, 445
Grassilli, Cristian, 291
Greasley, Alinka, 316
Green, Anders C., 91
Grekow, Jacek, 363
Grewe, Oliver, 92, 113
Grieser, Julia, 560
Griffiths, Noola, 525
Griffiths, T. D., 156
Gualda, Fernando, 307
Guaus, Enric, 326
Gula, Bartosz, 446
Gullberg, Anna-Karin, 504
Gurung, Kishor, 417
Gussmack, Manuela Birgit, 446
Gómez, Emilia, 256, 477

Hagoort, Peter, 393
Hairston, W. David, 245


Hajda, John, 370
Hajigholamrezaei, Nzanin, 292
Hallam, Susan, 145, 204
Hannon, Erin, 343
Hansen, Kjetil F., 534
Harashima, Hiroshi, 466
Hargreaves, David, 250
Hashida, Mitsuyo, 178
Heavner, Tracy, 447
Hebert, Sylvie, 158
Heil, Martin, 452
Hemming, Jan, 346
Henik, Avishai, 231, 539
Henry, Deborah, 481
Herrera, Perfecto, 131, 256, 326
Heylen, Eveline, 212
Himberg, Tommi, 377, 508
Himonides, Evangelos, 167, 274, 348, 413
Hinton, Sean, 502
Hiraga, Rumi, 93
Hiraga, Yuzuru, 469
Hitch, Graham, 493
Ho, Vincie W.S., 448
Hodges, Donald A., 245
Hodgson, Paul, 328
Hofmann-Engl, Ludger, 257
Hollweg, Christopher A., 187
Hong, Sujin, 151
Honing, Henkjan, 47, 57
Honingh, Aline, 94, 555
Hoppe, David, 308, 323
Horner, Andrew B., 51
Horton, Timothy, 400
Hoshide, Mika, 444
Hoshino, Etsuko, 95
Hoteida, Yuriko, 96, 132, 560
Houlahan, Micheal, 230
Hove, Michael J., 208
Howard, David, 53, 283
Howard, David M., 348
Howland, Kathleen M., 203
Hunt, Andy, 53
Huotilainen, Minna, 97
Huron, David, 517
Héjja-Nagy, Katalin, 289
Höller, Norman, 58

Iida, Makoto, 466
Ilari, Beatriz, 68, 201, 550

Ildefonso, Ana Maria, 123
Ilie, Gabriela, 97
Imberty, Michel, 40
Incasa, Iolanda, 430
Itou, Katunobu, 556
Ivaldi, Antonia, 70
Iversen, John, 162, 165, 393
Izal, María, 222

Jabusch, Hans-Christian, 157
Janke, Kellie, 188
Jaskowski, Piotr, 428
Jeng, Shyh-Kang, 136
Jennings, A. R., 156
Jentschke, Sebastian, 98
Johansson, Karin, 529
Johnson, Steve, 439
Johnson, Chris, 89
Johnson, Christopher, 449
Jungbluth, Denise, 264
Juslin, Patrik, 533
Juslin, Patrik N., 99, 219
Järvelä, Irma, 484

Kallinen, Kari, 100
Kantor-Martynuska, Joanna, 100
Karas, James, 418
Katai, Osamu, 112
Katayose, Haruhiro, 62, 178
Katia, Mazokopaki, 214
Kato, Nobuko, 93
Kawakami, Hiroshi, 112, 450, 463
Kazai, Koji, 62
Keller, Peter, 271, 301
Kendall, Roger, 361, 371
Kennett, Chris, 384
Keresztesi, Bettina, 520
Kessler, Yoav, 231, 539
Khan, Bilal, 255
Khanmohammad, Reza, 292
Kihara, Yukari, 221
Kim, Daniella, 342
Kim, Jin Hyun, 339
Kim, Jung Nyo, 325
Kitamura, Tamaki, 101
Kjell, Lemström, 333
Kleber, Boris, 247
Knouf, Nicholas, 102
Koch, Iring, 271


Koelsch, Stefan, 88, 98, 109, 124, 130, 560
Kohlrausch, Armin, 78
Kolinsky, Régine, 455
Komissarenko, Angelica, 422
Kopiez, Reinhard, 83, 92, 103, 113, 523
Koroleva, Inna, 394
Kotsopoulou, Anastasia, 204
Koutsoupidou, Theano, 406
Krakowski, Sergio, 391
Krantz, Göran, 104
Krause, Christina M., 450
Kreutz, Gunter, 63
Kristen, Susanne, 105
Krohn, Kaisu I., 450
Kronland-Martinet, Richard, 563
Kruger, Jonathan, 179
Kruger, Mark, 179, 385
Krumhansl, Carol L., 208, 399
Kubovy, Michael, 481
Kuhl, Patricia, 435
Kuusi, Tuire, 431, 521

Ladinig, Olivia, 57, 59
Lahav, Amir, 247, 293, 451
Lammers, Mark, 385
Lamont, Alexandra, 314, 316, 342
Lampis, Giulia, 411
Lange, Kathrin, 452
Lapidaki, Eleni, 453
Lartillot, Olivier, 290
Laucirica, Ana, 453
Laukka, Petri, 99
Leder, Helmut, 59
Lee, Heejung, 565
Lee, Ji In, 523
Lee, Kyung Myun, 424
Lehmann, Andreas, 418
Lehmann, Andreas C., 103, 434, 486, 495
Leif, Ostergaard, 379
Leimbrink, Kerstin, 454
Leman, Marc, 212, 328, 482
Lemieux, Anthony, 298
LeMieux, Marcy, 502
Lenti Boero, Daniela, 106
Lesaffre, Micheline, 328
Levitin, Daniel, 75
Levitin, Daniel J., 121
Lhost, Elizabeth, 410
Lidji, Pascale, 455

Liljeström, Simon, 99
Lisboa, Tania, 297
Logan, Topher, 297
Londei, Alessandro, 538
London, Justin, 508
Longhi, Elena, 320
Lotze, Martin, 247
Louhivuori, Jukka, 219
Loui, Psyche, 329
Loveday, Catherine, 384
Lu, Xiaoling, 456
Luck, Geoff, 164, 290
Luigi, Frezza, 459
Luis, Ivett Flores, 244
Lundqvist, Lars-Olov, 99
Lusher, Dean S., 275

MacDonald, Raymond, 184, 196, 250, 252
Mace, Sandra, 107
MacLeod, Rebecca B., 442
Madison, Guy, 104, 234
Madsen, Clifford K., 442
Madsen, Søren Tjagvad, 553
Maekawa, Hiroshi, 107
Maidhof, Clemens, 109
Maier, Wolfgang, 293
Maimets-Volt, Kaire, 109
Malloch, Stephen, 364
Mangani, Marco, 110
Marconi, Luca, 111
Margulis, Elizabeth, 509
Marmel, Frédéric, 210
Marques, Carlos, 464
Marshall, Nigel, 280
Martens, Jean-Pierre, 328
Martens, Peter, 457
Martinez, Isabel Cecilia, 209, 285, 414
Martingo, Angelo, 562
Maruyama, Sayuri, 221
Masci, Sandra, 485
Matsui, Toshie, 518
Matsumoto, Ko, 450, 463
Mauleon, Claudia, 414
Maya, Gratier, 140
McAdams, Stephen, 50
McAuley, J. Devin, 188, 458
McGuiness, Andy, 188
McKarrel, Ajax, 390
McKinney, Martin, 78


McLean, James, 179
McPherson, Gary E., 347
Menon, Vinod, 75
Mercier, Ann Mary, 458
Merker, Björn, 104
Messina, Renato, 111
Meyer, Kerstin, 560
Micah, Bregman, 357
Michaud Baese, Melissa, 459
Michele, Biasutti, 459, 460, 462
Michelson, Ina, 278
Miell, Dorothy, 250, 252
Milena, Petrovic, 489
Miller, Nathaniel, 188
Mills, Janet, 143, 321
Miranda, Eduardo, 89
Mishra, Jennifer, 384
Mitchell, Laura, 196
Mito, Yuki, 463
Miura, Masanobu, 101, 107, 239, 480, 557
Moelants, Dirk, 212, 463, 482
Monteiro, Francisco, 201
Montorio, Ignacio, 222
Morais, José, 455
Moran, Aidan, 515
Moreno Sala, Maria Teresa, 112
Moreno, Sylvain, 127, 464
Morgenstern, Martin, 465
Morimura, Kumiko, 466
Morrison, Steven J., 264
Motoyoshi, Tatsuo, 112
Muente, Thomas, 84
Murakami, Yasuko, 467
Murao, Tadahiro, 468, 530
Muzik, Pavel, 336
Márquez, María, 222
Müllensiefen, Daniel, 263, 358, 409

Naemura, Takeshi, 466
Nagata, Noriko, 62, 178
Nagel, Frederik, 83, 92, 113
Nakada, Tomoko, 114
Nakajima, Yoshitaka, 235
Nakano, Tomoyasu, 469
Nakata, Takayuki, 221
Nardo, Davide, 470
Narmour, Eugene, 207
Nater, Urs, 352
Natsume, Miwa, 199

Neelin, John, 548
Nematian, Masoud, 292
Nicholson, George, 297
Niki, Powers, 214
Nilsson, Bo, 147
Nobile, Gianni, 110
Noriyuki, Takahashi, 502
Nuevo, Roberto, 76, 77, 222
Nusseck, Manfred, 163
Nuti, Gianni, 106, 471
Nöyränen, Maiju, 484

Ockelford, Adam, 181, 274
Odena, Oscar, 115, 238
Oehler, Michael, 472
Ogawa, Yoko, 530
Ogorodnikova, Jelena, 394
Ohishi, Yasunori, 556
Ojamaa, Triinu, 67
Olbertz, Franziska, 202
Olivetti Belardinelli, Marta, 411, 432, 440, 470, 538
Olivier, Lartillot, 359
Olivieri, Diana, 549
Ollen, Joy, 232
Olsson, Bengt, 280
Ordoñana, Jose A., 116
Ott, Ulrich, 63
Ottowitz, Gernot, 293
Overy, Katie, 496

Paananen, Pirkko, 169
Pachet, François, 388, 513
Padula, Alessandra, 116, 117, 473, 569
Painsi, Margit, 345, 347, 497
Palermiti, Anna Lisa, 476
Papadelis, George, 496
Papageorgi, Ioulia, 321
Paraskevopoulos, Evagelos, 118
Parker, Olin, 277
Parncutt, Richard, 345, 347, 497, 535
Patel, Aniruddh, 162, 165, 393
Patricia, St. John, 512
Patston, Lucy, 119
Pearce, Marcus, 284
Pedder, Wendy, 396
Penel, Amandine, 187
Pennycook, Bruce, 474


Peretz, Isabelle, 37, 118, 127, 156, 158, 273, 276, 455

Pesonen, Mirka, 450
Peter, Vuust, 379
Petersen, Ulrike, 566
Petitbon, Elisabetta, 138
Pfleiderer, Martin, 409
Phillips-Silver, Jessica, 475
Pihko, Elina, 484
Piras, Elisabetta, 176
Pisterzi, Anna, 88
Pitts, Stephanie, 322
Plantinga, Judy, 475
Poggi, Isabella, 336
Politi, Pierluigi, 183
Pothoulaki, Maria, 199
Potter, John, 559
Poulin-Charronnat, Bénédicte, 560
Povilioniene, Rima, 304
Pozo Municio, Juan Ignacio, 145
Prete, Eugenio, 476
Pring, Linda, 181
Puiggros, Montserrat, 266, 477
Pérez-Acosta, Gabriela, 120

Quinn, Sandra, 46, 478
Quintin, Eve-Marie, 121

Rados, Ksenija, 505
Rainer, Typke, 333
Ramirez, Rafael, 266, 477
Ramos-Amézquita, Alejandro, 120
Randall, Richard, 255
Rauscher, Frances, 502
Ravaja, Niklas, 100
Ray, Sonia, 122
Reeve, Robert, 395
Repp, Bruno, 162
Retra, José, 405
Reutens, David C., 172, 275
Reuter, Christoph, 472
Reybrouck, Mark, 527
Ricci Bitti, Pio Enrico, 81, 220, 291
Rickard, Nikki, 135
Rieger, Martina, 109
Riikkilä, Kari, 290
Rink, John, 270
Rizzuti, Costantino, 125

Roepstorff, Andreas, 91
Rogers, Nancy, 519
Rognoni, Elena, 88
Rosinach, Vincent, 379
Ross, Jaan, 152, 394, 557
Rowe, Victoria, 205
Rowlett, Mary, 439
Rozin, Alexander, 351
Rubinstein, Bat-sheva, 231
Rusconi, Elena, 276, 401
Russo, Francesco, 122, 123
Russo, Frank, 402
Russo, Frank A., 372
Russo, Marco, 70
Ryan, Charlene, 82
Ryf, Stefan, 352

Saare, Hendrik, 152
Saari, Timo, 100
Saarikallio, Suvi, 315
Sadakata, Makiko, 308, 323, 376
Salgado, Antonio, 123
Sallat, Stephan, 383
Salminen, Mikko, 100
Salonen, Johanna, 484
Saltzman, Elliot, 247, 451
Sammler, Daniela, 88, 124
Santarcangelo, B., 281
Santiago, Diana, 479
Santoboni, Riccardo, 125
Santos, Andreia, 464
Saporta, Yossi, 80
Sasaki, Takayuki, 235
Sato, Mayumi, 444
Sawayama, Koji, 480
Schaefer, Rebecca, 64, 508
Schellberg, Gabriele, 126
Schellenberg, E. Glenn, 175, 223
Schlaug, Gottfried, 247
Schogler, Benjaman, 544
Schubert, Emery, 153, 270, 301, 431, 559
Schutz, Michael, 127, 481
Schwanhaeusser, Barbara, 170
Schön, Daniele, 127, 563
Segalerba, Maria Guadalupe, 287
Segnini, Rodrigo, 128
Seifert, Uwe, 339
Selke, Tiina, 205
Serra, Sofia, 503


Serra, Xavier, 477
Shevy, Mark, 410
Shigemasu, Kazuo, 137
Shimano, Rei, 450
Shimokawa, Eiko, 444
Shiose, Takayuki, 112
Siddell, Jeanne, 364
Sinnamon, Sarah, 515
Sion, Pwyll, 241
Slaney, Malcolm, 49
Sloboda, John, 273, 568
Smiraglia, Stanislao, 69
Smith, Gareth, 129
Smith, Nicholas, 481
Sovdat, Sandra, 58
Sowinski, Jakub, 235
Spiro, Neta, 265, 382
Stachó, László, 174
Stadler Elmer, Stefanie, 129, 528
Stamou, Lelouda, 514
Steinbeis, Nikolaus, 130
Stevens, Catherine, 538
Stevens, Kate, 364, 375, 433, 520
Storino, Mariateresa, 227, 567
Strauss, Sabine, 59
Street, Alison, 550
Streich, Sebastian, 131
Stumpo, Francesco, 132
Sturdy, Christopher, 491
Styns, Frederik, 482
Stødkilde-Jørgensen, Hans, 91
Sulkin, Idit, 483
Sutherland, Mary Elizabeth, 208
Suzuki, Shintaro, 96, 132, 560
Sven, Ahlbäck, 360
Szabó, Csaba, 289
Sánchez, Mónica, 171
Särg, Taive, 416

Tacka, Philip, 230
Tafuri, Johannella, 547
Takeda, Kazuya, 556
Takenaka, Takeshi, 96, 132, 560
Tan, Xueli, 353
Tanaka, Satomi, 486
Tatsumi, Kazutaka, 486
Tekman, Hasan Gürkan, 211, 425
Tempesti, Lorenzo, 240

ten Hoopen, Gert, 235
Terasawa, Hiroko, 49
Tervaniemi, Mari, 246, 431, 450, 484
Thoma, Mirjam, 352
Thompson, William, 402
Thompson, William Forde, 97, 223, 372
Tillmann, Barbara, 156, 210, 276, 537, 569
Timmers, Renee, 286, 308, 323
Tippett, Lynette, 119
Toda, Kentaro, 112
Toiviainen, Petri, 164, 211, 290
Toskovic, Oliver, 505
Toukhsati, Samia, 135
Trainor, Laurel, 475, 481
Trainor, Laurel J., 475
Traube, Caroline, 52, 379
Treffert, Darrold, 181
Trehub, Sandra, 213, 319, 342
Tripepi, Pasquale, 485
Tsapkini, Kyrana, 118
Tsuzaki, Minoru, 486, 502, 518
Tuomas, Eerola, 357
Tykesson, Anders, 133
Tymoczko, Dmitri, 258

Ucelli, Stefania, 183
Ueda, Kanji, 96, 132, 560
Ulrich, Sonja C., 486
Umilta, Carlo, 276, 401

Valk-Falk, Maris, 487
van Besouw, Rachel, 283
van Drumpt, Juliët, 508
van Handel, Leigh, 378
Vassilakis, Pantelis, 361, 488
Veit, Ralf, 247
Veltkamp, Remco, 415
Vera, Milankovic, 489
Vines, Bradley, 524
Vitale, Alessia, 347
Vitouch, Oliver, 58, 59, 446
von Georgi, Richard, 63
von Georgi, Susanne, 63
Vraka, Maria, 490
Vurma, Allan, 557
Vuust, Peter, 91
Västfjäll, Daniel, 99, 219, 533

Waadeland, Carl Haakon, 45


Wan, Catherine Y., 172, 275
Wanderley, Marcelo, 82, 524
Warlick, Catherine, 491
Warren, J. D., 156
Wassenaar, Marlies, 393
Watt, Roger, 46, 478
Weaver, Kevin, 566
Webster, Peter, 406
Wehrum, Sina, 63
Weisman, Ronald, 491
Weisser, Stéphanie, 134
Welbel, Jolanta, 135
Welch, Graham, 181, 274, 348, 413
Werner, Lynne, 342
Wessel, David, 329
Wheatley, Rebecca, 492
Widmer, Gerhard, 90, 553
Wiering, Frans, 415
Wiggins, Geraint, 284
Williams, Simon, 135
Williamson, Victoria, 493
Wilson, Alistair, 184
Wilson, Graeme, 252
Wilson, Sarah, 395
Wilson, Sarah J., 172, 275
Wise, Karen, 273, 568
Wolf, Debbie Lynn, 507
Won, Sook Young, 168
Woodhouse, Jim, 566
Woody, Robert, 418
Woody, Robert H., 495
Woolhouse, Matthew, 255, 400
Wu, Tien-Lin, 136
Wuytack, Jos, 407
Wöllner, Clemens, 180

Yamada, Masashi, 44
Yamasaki, Akio, 93
Yamasaki, Teruo, 153
Yanagida, Masuzo, 107, 239, 480, 557
Yopp, Ashley, 396
Yoshie, Michiko, 137
Young, Susan, 250, 389, 550
Ystad, Sølvi, 563

Zammuner, Vanda Lucia, 138
Zentner, Marcel, 215
Zhou, Jue, 192

Zimmer, Fränk, 347
Zimmermann, Sally-Anne, 274
Ziv, Naomi, 173, 567