This paper presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for usable sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains, which are displayed on our navigation tool according to signal properties. To perform the synthesis, the user selects one recording as a model for rhythmic pattern and timbre evolution, together with a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound sequences, based on onset detection applied to the recording model and on similarity measurements between the model and the selected grains. With our method, we can create a large variety of sound events, such as those encountered in virtual environments or other training simulations, as well as sound sequences that can be integrated into a music composition. We present a usability-minded interface that allows the user to manipulate and tune sound sequences in a way appropriate for sound design.
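The segmentation step described above, cutting a recording into atomic grains at detected onsets, can be sketched as follows. The paper does not specify its onset detector, so this minimal example uses a hypothetical energy-jump criterion; `frame_len`, `hop`, and `threshold` are illustrative parameters, not values from the system.

```python
import numpy as np

def detect_onsets(signal, frame_len=1024, hop=512, threshold=1.5):
    """Flag a frame as an onset when its energy jumps above
    `threshold` times the previous frame's energy (illustrative criterion)."""
    energy = np.array([np.sum(signal[i:i + frame_len] ** 2)
                       for i in range(0, len(signal) - frame_len, hop)])
    onsets = [0]  # the recording start opens the first grain
    for k in range(1, len(energy)):
        if energy[k] > threshold * (energy[k - 1] + 1e-12):
            onsets.append(k * hop)
    return onsets

def segment_into_grains(signal, onsets):
    """Cut the recording into atomic grains at the detected onsets."""
    bounds = list(onsets) + [len(signal)]
    return [signal[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

# Toy input: half a second of silence followed by half a second of noise
sr = 44100
rng = np.random.default_rng(0)
sig = np.concatenate([np.zeros(sr // 2), 0.5 * rng.standard_normal(sr // 2)])
grains = segment_into_grains(sig, detect_onsets(sig))
```

The grains concatenate back to the original recording, so no sound material is lost by the segmentation.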
a usable interface
Synthesizing new sound events based on the rhythmic pattern and the timbre evolution of a recording model
Output synthesis depends on the selected sound material
Towards User-friendly Audio Creation
How to easily extend one's sound design palette?
By collecting and sorting petabytes of sound samples?
By experimenting with sound synthesis algorithms across too many different platforms, such as Matlab, PureData, Max/MSP, Minim (Processing), Csound, SuperCollider...?
Creating sound events
Offering a usable tool for manipulation, navigation and sound composition
Composing audio grains in time according to the rhythmic pattern and the timbre evolution of a recording model
Managing analysis of sound material and synthesis of new sound events within the MediaCycle framework
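Composing grains in time, as in the second bullet above, amounts to placing selected grains at the onset times of the recording model. A minimal overlap-add sketch, where the onset times and the grain choices are assumed to come from the analysis stage:

```python
import numpy as np

def compose(onset_times, grains, total_len, sr=44100):
    """Overlap-add one grain at each onset time of the recording model,
    so the output follows the model's rhythmic pattern."""
    out = np.zeros(total_len)
    for t0, grain in zip(onset_times, grains):
        start = int(t0 * sr)
        end = min(start + len(grain), total_len)
        out[start:end] += grain[:end - start]
    return out

# Toy sequence: the same short grain placed at three model onsets
click = np.hanning(512)
seq = compose([0.0, 0.25, 0.5], [click, click, click], total_len=44100)
```

Overlap-adding (rather than hard cuts) lets grains longer than the inter-onset gap blend smoothly into their neighbors.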
our synthesis technique
with a physical approach
Modal analysis of CAD models
Modal synthesis with user-defined force and location
Interface navigation based on semantic (physical properties) and signal-based features
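The physical approach above feeds modal synthesis: each mode obtained from modal analysis of the CAD model becomes a damped sinusoid whose gain depends on the user-defined force and striking location. A minimal sketch with hypothetical mode parameters (in the actual pipeline these come from the modal analysis, not from hand-picked values):

```python
import numpy as np

def modal_synthesis(freqs, dampings, gains, duration, sr=44100):
    """Sum exponentially damped sinusoids, one per mode.
    freqs in Hz, dampings in 1/s, gains set by force and location."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Hypothetical modes for a small struck object
tone = modal_synthesis(freqs=[440.0, 1210.0, 2380.0],
                       dampings=[6.0, 9.0, 14.0],
                       gains=[1.0, 0.5, 0.25],
                       duration=1.0)
```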
A method for automatic extraction and classification of meaningful audio grains.
A technique for automatic synthesis of new sound sequences.
A usable interface for database manipulation and sound composition
Stéphane Dupont, Christian Frisson, Xavier Siebert, and Damien Tardieu. Browsing sound and music libraries by similarity. In 128th AES Convention, 2010.
Cécile Picard, Nicolas Tsingos, and François Faure. Retargetting example sounds to interactive physics-driven animations. In AES 35th International Conference on Audio for Games, 2009.
C. Picard-Limpens 1, C. Frisson 2, D. Tardieu 3, J. Vanderdonckt 2, T. Dutoit 3. numediart Research Program in Digital Art Technologies (2007-2012). 1 numediart Research Program, 2 Université catholique de Louvain, Belgium, 3 TCTS, Université de Mons, Belgium. Copyright (c) 2010 C. Picard-Limpens, C. Frisson, D. Tardieu, J. Vanderdonckt, T. Dutoit. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/2.5/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Automatic extraction of audio grains based on onset detection
Sound material for synthesis
= a recording model
+ audio grains
Synthesis process based on:
+ onset detection
+ similarity measurements
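The similarity measurement between the recording model and the candidate grains is not specified here; as a stand-in descriptor, this sketch matches grains to a model segment by spectral-centroid distance, so the chosen grains track the model's timbre evolution.

```python
import numpy as np

def spectral_centroid(x, sr=44100):
    """A simple timbre descriptor: the magnitude-weighted mean frequency."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

def pick_closest_grain(model_segment, grains):
    """Select the grain whose descriptor best matches the model segment."""
    target = spectral_centroid(model_segment)
    dists = [abs(spectral_centroid(g) - target) for g in grains]
    return grains[int(np.argmin(dists))]

# Toy grains: a low and a high sine; the model segment is high-pitched
sr = 44100
t = np.arange(sr // 10) / sr
low = np.sin(2 * np.pi * 220 * t)
high = np.sin(2 * np.pi * 3000 * t)
chosen = pick_closest_grain(np.sin(2 * np.pi * 2800 * t), [low, high])
```

Richer descriptors (e.g. MFCC vectors) would slot into the same selection loop by replacing the scalar distance with a vector norm.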
Implementation & Results: browser controls, visual display, recording model, audio grains. Credits: http://www.lostonwallace.com