Towards User-friendly Audio Creation

This paper presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for usable sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains, which are displayed on our navigation tool according to signal properties. To perform the synthesis, the user selects one recording as a model for rhythmic pattern and timbre evolution, together with a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound sequences, based on onset detection on the recording model and similarity measurements between the model and the selected grains. With our method, we can create a large variety of sound events such as those encountered in virtual environments or other training simulations, but also sound sequences that can be integrated in a music composition. We present a usability-minded interface that allows sound sequences to be manipulated and tuned in a way appropriate for sound design.
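The segmentation stage described above, cutting recordings into atomic grains at detected onsets, can be sketched as follows. This is a minimal illustration, assuming a simple energy-rise onset criterion; the paper's actual detector is not specified here, so the criterion and all function names are assumptions of this sketch.

```python
import numpy as np

def frame_energies(signal, frame_len=512, hop=256):
    """Short-time energy per analysis frame."""
    return np.array([
        np.sum(signal[i:i + frame_len] ** 2)
        for i in range(0, len(signal) - frame_len + 1, hop)
    ])

def segment_into_grains(signal, frame_len=512, hop=256, ratio=4.0):
    """Cut `signal` into atomic grains at detected onsets.

    An onset is flagged wherever the frame energy jumps by a factor
    `ratio` over the previous frame -- a crude stand-in for the
    onset detector used in the actual system.
    """
    e = frame_energies(signal, frame_len, hop)
    onsets = [0] + [k * hop for k in range(1, len(e))
                    if e[k] > ratio * e[k - 1] + 1e-9]
    bounds = onsets + [len(signal)]
    # Adjacent boundaries delimit one grain each; concatenating the
    # grains reconstructs the original signal exactly.
    return [signal[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
```

The grains returned here would then be displayed on the navigation tool according to signal properties (e.g. spectral descriptors), as the abstract describes.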


  • Designing a usable interface
  • Synthesizing new sound events based on the rhythmic pattern and the timbre evolution of a recording model
  • Output synthesis depends on the selected sound material


  • Motivation
  • How to easily extend one's sound design palette?
  • By collecting and sorting petabytes of sound samples?
  • By experimenting with sound synthesis algorithms across too many different platforms, such as Matlab, PureData, Max/MSP, Minim (Processing), Csound, SuperCollider...?
  • Goals
  • Creating sound events
  • Offering a usable tool for manipulation, navigation and sound composition
  • Research Ideas
  • Composing audio grains in time according to the rhythmic pattern and the timbre evolution of a recording model
  • Managing analysis of sound material and synthesis of new sound events within the MediaCycle framework [1]


  • Future Work
  • Extending our synthesis technique with a physical approach
  • Modal analysis of CAD models
  • Modal synthesis with user-defined force and location
  • Interface navigation based on semantic (physical properties) and signal-based features
  • Conclusion
  • A method for automatic extraction and classification of meaningful audio grains
  • A technique for automatic synthesis of coherent soundtracks
  • A usable interface for database manipulation and sound composition
  • Online video: stay tuned!
  • References
  • [1] Stéphane Dupont, Christian Frisson, Xavier Siebert, and Damien Tardieu. Browsing sound and music libraries by similarity. In 128th AES Convention, 2010.
  • [2] Cécile Picard, Nicolas Tsingos, and François Faure. Retargetting example sounds to interactive physics-driven animations. In AES 35th International Conference on Audio for Games, 2009.

C. Picard-Limpens 1, C. Frisson 2, D. Tardieu 3, J. Vanderdonckt 2, T. Dutoit 3. numediart Research Program in Digital Art Technologies (2007-2012). 1 numediart Research Program, 2 Université Catholique de Louvain, Belgium, 3 TCTS, Université de Mons, Belgium. Copyright (c) 2010 C. Picard-Limpens, C. Frisson, D. Tardieu, J. Vanderdonckt, T. Dutoit. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License. To view a copy of this license, visit or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

  • Automatic extraction of audio grains based on onset detection
  • Sound material for synthesis = a recording model + audio grains
  • Synthesis process based on: similarity measurements + onset detection

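The synthesis process above can be illustrated with a toy sketch: at each onset position of the recording model, overlap-add the grain whose timbre is most similar to the model's local timbre. A single spectral centroid stands in for the paper's similarity measurements, and all names here are illustrative assumptions rather than the actual MediaCycle implementation.

```python
import numpy as np

def spectral_centroid(x):
    """Centre of mass of the magnitude spectrum, in FFT bins:
    one crude timbre descriptor standing in for a fuller feature set."""
    mag = np.abs(np.fft.rfft(x))
    return float(np.sum(np.arange(len(mag)) * mag) / (np.sum(mag) + 1e-12))

def synthesize(model, model_onsets, grains, frame_len=512):
    """Rebuild a sequence on the model's rhythmic grid.

    At each onset of `model`, overlap-add the grain whose spectral
    centroid is nearest to the centroid of the model's local frame,
    so the output follows both the model's rhythmic pattern and,
    roughly, its timbre evolution.
    """
    out = np.zeros(len(model))
    for onset in model_onsets:
        target = spectral_centroid(model[onset:onset + frame_len])
        best = min(grains, key=lambda g: abs(spectral_centroid(g) - target))
        end = min(onset + len(best), len(out))
        out[onset:end] += best[:end - onset]
    return out
```

With a low-pitched and a high-pitched grain, a model that moves from low to high content should pull the low grain onto its first onset and the high grain onto its second, which is the behaviour the similarity measurement is meant to produce.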
Implementation & Results (poster figure): browser controls, visual display, recording model, audio grains.