
Towards User-friendly Audio Creation


Abstract

This paper presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for usable sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains, which are displayed on our navigation tool according to signal properties. To perform the synthesis, the user selects one recording as a model for rhythmic pattern and timbre evolution, and a set of audio grains. Our synthesis system then processes the chosen sound material to create new sound sequences, based on onset detection on the recording model and similarity measurements between the model and the selected grains. With our method, we can create a large variety of sound events, such as those encountered in virtual environments or other training simulations, but also sound sequences that can be integrated in a music composition. We present a usability-minded interface that allows sound sequences to be manipulated and tuned in a way appropriate for sound design.



• Designing a usable interface

• Synthesizing new sound events based on the rhythmic pattern and the timbre evolution of a recording model

Output synthesis depends on the selected sound material


Motivation

How to easily extend one’s sound design palette?

• By collecting and sorting petabytes of sound samples?

• By experimenting with sound synthesis algorithms across too many different platforms, such as Matlab, PureData, Max/MSP, Minim (Processing), Csound, SuperCollider...?

Goals

• Creating sound events

• Offering a usable tool for manipulation, navigation and sound composition

Research Ideas

• Composing audio grains in time according to the rhythmic pattern and the timbre evolution of a recording model

• Managing analysis of sound material and synthesis of new sound events within the MediaCycle framework [1]

Future Work

Extending our synthesis technique with a physical approach

• Modal analysis of CAD models

• Modal synthesis with user-defined force and location

• Interface navigation based on semantic (physical properties) and signal-based features
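As a concrete illustration of the modal approach above, a struck object can be sketched as a bank of exponentially damped sinusoids, with a user-defined force scaling the excitation and the per-mode gains standing in for the mode shapes evaluated at the excitation location. The frequencies, dampings and gains below are made-up placeholders, not the output of an actual modal analysis of a CAD model.

```python
import numpy as np

def modal_strike(freqs, dampings, gains, force=1.0, sr=44100, dur=1.0):
    """Modal synthesis: sum of exponentially damped sinusoids.

    freqs    -- modal frequencies in Hz
    dampings -- per-mode decay rates (1/s)
    gains    -- per-mode gains; in a full system these would come from
                the mode shapes evaluated at the excitation location
    force    -- scalar impact strength scaling all modes
    """
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += force * g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / max(np.max(np.abs(out)), 1e-12)  # normalize to [-1, 1]

# Example: three modes of a hypothetical small metal bar
y = modal_strike(freqs=[440.0, 1220.0, 2690.0],
                 dampings=[6.0, 9.0, 14.0],
                 gains=[1.0, 0.5, 0.25])
```

Changing the excitation location would amount to recomputing the gains from the mode shapes; changing the force only rescales the output.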

Conclusion

• A method for automatic extraction and classification of meaningful audio grains

• A technique for automatic synthesis of coherent soundtracks

• A usable interface for database manipulation and sound composition

• Online video: http://www.dailymotion.com/video/xe8ao0_numediart-10-2-audiogarden_tech

• Stay tuned on http://www.numediart.org and http://audiogarden.org

References

[1] Stéphane Dupont, Christian Frisson, Xavier Siebert, and Damien Tardieu. Browsing sound and music libraries by similarity. In 128th AES Convention, 2010.

[2] Cécile Picard, Nicolas Tsingos, and François Faure. Retargetting example sounds to interactive physics-driven animations. In AES 35th International Conference on Audio for Games, 2009.

C. Picard-Limpens1, C. Frisson2, D. Tardieu3, J. Vanderdonckt2, T. Dutoit3

numediart Research Program in Digital Art Technologies (2007-2012)
1 numediart Research Program, 2 Université Catholique de Louvain, Belgium, 3 TCTS, Université de Mons, Belgium

Copyright (c) 2010 C. Picard-Limpens, C. Frisson, D. Tardieu, J. Vanderdonckt, T. Dutoit
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/2.5/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

Method

• Automatic extraction of audio grains based on onset detection

• Sound material for synthesis = a recording model + audio grains

• Synthesis process based on: similarity measurements + onset detection
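The pipeline above can be sketched end to end in a few lines. Here onset detection is a crude RMS energy-flux peak picker and grain similarity is measured on a two-value descriptor (RMS plus spectral centroid); the actual system relies on the MediaCycle analysis chain, so every threshold, hop size and feature below is an illustrative assumption rather than the published method.

```python
import numpy as np

SR = 22050   # sample rate (assumed)
HOP = 512    # analysis hop in samples (assumed)

def detect_onsets(signal, threshold=2.0):
    """Crude onset detection: frames whose positive RMS-energy flux
    exceeds `threshold` times the mean flux."""
    frames = signal[:len(signal) // HOP * HOP].reshape(-1, HOP)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    flux = np.maximum(np.diff(rms, prepend=rms[0]), 0.0)
    if flux.mean() == 0.0:
        return np.array([], dtype=int)
    return np.flatnonzero(flux > threshold * flux.mean()) * HOP

def grain_features(grain):
    """Tiny descriptor: [RMS energy, normalized spectral centroid]."""
    spec = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), 1.0 / SR)
    centroid = (freqs * spec).sum() / max(spec.sum(), 1e-12)
    return np.array([np.sqrt((grain ** 2).mean()), centroid / SR])

def resynthesize(model, grains, length):
    """Place, at each onset of the recording model, the grain whose
    descriptor is closest to the model excerpt around that onset."""
    out = np.zeros(length)
    feats = [grain_features(g) for g in grains]
    for pos in detect_onsets(model):
        target = grain_features(model[pos:pos + 4 * HOP])
        best = min(range(len(grains)),
                   key=lambda i: np.linalg.norm(feats[i] - target))
        g = grains[best][:length - pos]
        out[pos:pos + len(g)] += g
    return out

# Toy usage: a model with two square bursts, retargeted with sine grains
model = np.zeros(SR)
model[5120:7168] = 0.5
model[15360:17408] = 0.5
t = np.arange(1024) / SR
grains = [0.5 * np.sin(2 * np.pi * 440 * t),    # dull grain
          0.5 * np.sin(2 * np.pi * 2000 * t)]   # bright grain
output = resynthesize(model, grains, len(model))
```

The synthesized sequence keeps the rhythmic pattern of the model (grains land on its onsets) while the similarity match steers the timbre, which is the core idea of the method.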

Implementation & Results

(Poster figure labels: browser controls, visual display, recording model, audio grains)

Image credits: http://www.lostonwallace.com