
TIMESCULPT IN OPENMUSIC

Karim Haddad
[email protected]

Abstract

In the last decade, my compositional work has been focused essentially on musical time. I have relied thoroughly on computer-aided composition, most particularly OpenMusic. In this article we will examine some compositional strategies dealing with musical durations, skipping any considerations regarding pitch in order to better focus on our subject, except where these are related to our main discussion1.

I. How "time passes in OpenMusic ..."

In OpenMusic, time is represented (expressed) in many ways. Time can be:

- a number (milliseconds, for instance)
- a rhythm tree (an internal representation in OpenMusic of musical rhythm notation [2] - see the JIM 2002 article)
- a conventional graphical symbol (a quarter note) in the OpenMusic Voice editor
- an "unconventional" graphical object (an OpenMusic temporal object).

Another important issue is how musical time is conceived internally (i.e. implemented as a structural entity) [3].

Most computer environments and numerical formats (MIDI, for instance) represent the musical flow of time decomposed into two orders:

- the date of the event (often called the onset)
- the duration of the event (called the duration)

We can already notice that this conception diverges from our traditional music notation system, which represents a compactly readable time event + duration2. The only reason for this is that the computer representation of time is generally done not with symbols but with digits3. That is why I believe that today's composers should become familiar with this "listing" representation of musical events.
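As an aside, this "listing" representation is easy to sketch: each event is simply an (onset, duration) pair, with tempo and bar structure flattened away. A minimal illustration (the data is invented, not an OpenMusic API):

```python
# Sketch of the "listing" representation used by MIDI-like formats and
# Csound scores: each event is an (onset, duration) pair in milliseconds.
# Tempo and measure structure are flattened away entirely.
events = [(0, 500), (500, 250), (750, 250), (1000, 1000)]

# The total span of the sequence is the largest onset + duration.
total = max(onset + dur for onset, dur in events)
print(total)  # -> 2000
```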

1 One can argue that these fields are indissociable, most particularly rhythm and duration. We will consider in this article that duration is of a different order than rhythm (think of Pierre Boulez's "temps strié" and "temps lisse" in 'Penser la musique aujourd'hui' [1], a well-accepted view nowadays).
2 We can also notice that in this conception tempo is meaningless.
3 Time is sequentially expressed, not iconically symbolized (as a whole entity, like a measure containing rhythm figures, with a tempo assignment and a time signature).


Since the MIDI standard is integrated in OpenMusic, this representation is common to most musical objects (see CHORD-SEQ); and since some OpenMusic libraries generate CSOUND [3] instruments that use this "time syntax", one must learn this particular representation in order to deal accurately with these objects, especially if one intends to use controlled synthesis.

a) dx->x and x->dx

dx->x and x->dx are frequently used objects and are very practical when it comes to durations and rhythm. They are generic functions found under the menu function>kernel>Series:

- dx->x computes a list of points from a list of intervals and a <start> point.
- x->dx computes a list of intervals from a list of points.

Starting from a list of durations in milliseconds and a starting point, dx->x will output a list of time points (dates) according to these parameters. Vice versa, x->dx will output a sequence of durations starting from a list of time points.
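The input/output behaviour of these two functions can be modelled in a few lines of Python (the OpenMusic originals are Lisp generic functions; this is only a sketch of what they compute):

```python
from itertools import accumulate

def dx_to_x(start, dx):
    # Model of dx->x: cumulative time points from a <start> point
    # and a list of intervals (durations).
    return list(accumulate(dx, initial=start))

def x_to_dx(x):
    # Model of x->dx: intervals between successive time points.
    return [b - a for a, b in zip(x, x[1:])]

points = dx_to_x(0, [500, 250, 250, 1000])
print(points)           # -> [0, 500, 750, 1000, 2000]
print(x_to_dx(points))  # -> [500, 250, 250, 1000]
```

Note that the two functions are inverses of each other up to the start point, which x->dx discards.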

In order to illustrate this mechanism, we will start with a marker file made with AudioSculpt [4] or SND [5]. This file represents a list of time events, automatically generated or hand-controlled.

In the example below, the soundfile was marked manually.

Illustration 1


Once the analysis file is imported into OpenMusic, we quantify it using omquantify, as shown in the next figure:

Looking carefully at this rhythmic result, we notice a wide variety of durations (in this case it could be considered a series of durations). We may also consider it as a sequence of accelerando/decelerando profiles (modal time durations). In either case, it is rich rhythmic material that can be developed.

This offers a possibility to "notate" a sound file symbolically and integrate it into a score4.

4 It is also very practical for coordination between musicians and tape.

Illustration 2

Illustration 3


We can of course extend this "symbolic" information and consider it as compositional material, for example applying to it contrapuntal transformations like recursion, diminution or augmentation, simply by using multiplication (om*) or the reverse function, permutations, etc.
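These transformations are simple list operations. A Python sketch (the function names here are illustrative stand-ins, not the OpenMusic API; om* is the multiplication function alluded to above):

```python
# Counterpoint-like transformations on a duration list (in ms):
# augmentation/diminution by multiplication, retrograde by reversal.
def om_mul(durations, factor):
    # analogous to applying om* to a list of durations
    return [d * factor for d in durations]

def retrograde(durations):
    return list(reversed(durations))

theme = [250, 125, 375, 500]
print(om_mul(theme, 2))   # augmentation -> [500, 250, 750, 1000]
print(retrograde(theme))  # retrograde   -> [500, 375, 125, 250]
```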

b) Another aspect of rhythm manipulation I use is at the antipodes of the preceding example. Instead of extracting rhythm from a physical source (such as a sound file), I directly use combinatorial processes on rhythmic structures, relying on the internal definition of rhythm in OpenMusic, Rhythm Trees [2]. Since this description has a wonderful capacity for creating any imaginable rhythm, be it simple or complex, and since the RT standard is both syntactically and semantically coherent with musical structure, rhythm manipulation and transformation is very efficient using these structures.

It is for this reason that I came to write the omtree library for OpenMusic, which was basically a personal collection of functions. These allow 1) basic rhythm manipulations, 2) practical modifications, and 3) some special transformations such as proportional rotations, filtering, substitution, etc.

The whole structure of "...und wozu Dichter in dürftiger Zeit?..." for twelve instruments and electronics is written starting from the following generic measure:

which corresponds to the following rhythm tree:

(? (((60 4) ((21 (8 5 -3 2 1)) (5 ( -3 2 1)) (34 ( -8 5 3 -2 1))))))

Rotations are performed on the main proportions, based on the Fibonacci series (rotations of the D elements, the durations, and rotations of the S elements as well, the subdivisions). The first rotation being:

Illustration 4


The corresponding rhythm tree being:

(? ((( 60 4) (( 5 ( 2 1 -3)) ( 34 ( 5 3 -2 1 -8)) ( 21 ( 5 -3 2 1 8 ))))))
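Comparing the two trees, one plausible reading of this rotation is: the list of (D S) pairs is rotated by one position, and each subdivision list S is itself rotated by one. A Python sketch of that reading (an interpretation for illustration, not the actual omtree code):

```python
def rotate(lst, n=1):
    # left rotation of a list by n positions
    n %= len(lst)
    return lst[n:] + lst[:n]

def rotate_measure(pairs):
    # Rotate the order of the (D, S) pairs, and rotate each
    # subdivision list S by one step as well.
    return [(d, rotate(s)) for d, s in rotate(pairs)]

# proportions of the generic measure shown earlier
measure = [(21, [8, 5, -3, 2, 1]), (5, [-3, 2, 1]), (34, [-8, 5, 3, -2, 1])]
print(rotate_measure(measure))
# -> [(5, [2, 1, -3]), (34, [5, 3, -2, 1, -8]), (21, [5, -3, 2, 1, 8])]
```

This reproduces the D and S proportions of the rotated tree above.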

This is generated by this patch :

The result is a six-voice polyphony. The pitches are also organized following an ordered rotation and are distributed over the six voices as a heterophony.

Illustration 5

Illustration 6


After quantification, this appears in the score as follows (excerpt):

III. Topology of sound in the symbolic domain

Illustration 7

Illustration 8 "...und wozu Dichter in dürftiger Zeit?..." for twelve instruments and electronics


1) Form

We might consider sound analysis as an open field for investigation in the time domain, where interesting and unprecedented time forms and gestures can be found and used as basic (raw) material for a musical composition. This new approach is made possible by fast computation. But since sound analysis is a vast field of research (where one can find multiple kinds of analysis and different visual or raw data formats), one must take into consideration the nature of each analysis and its original purpose. We can say that sound analysis is a rich source from which symbolic data can be retrieved and remodeled following the needs of the composer, in an open perspective for composition.

Sound analysis data can also be considered as potentially correlated vectors. These, according to the analysis type, can be streams of independent linear data or, more interestingly, arrays of data. The type of data found is often directly related to the nature of the sound. It can be regarded abstractly as a pseudo-random flow, or considered as coherent, interrelated orders of data, depending again on the type of analysis chosen.

We will use sound analysis as a basis for producing musical material, most particularly in the time domain. It is important to note that the following approach is not a "spectral" one in the traditional sense5; on the contrary, it should be considered a spectral-time approach. The frequency domain will be translated into the time domain and vice versa, following the compositional context, as we will see further on6.

This "translation" is made possible by the wide variety of analysis types (additive, modres / resonance modes, etc.), not to forget the many data formats available, visual or in the form of numerical data.

This material will be used according to the musical project. Different orders of "translation" into the symbolic field can be used. Form can be extracted literally from the analysis data, or can be applied from a given symbolic material. This mixing of sources (symbolic and analytic) will be coordinated so that, compositionally, the sources fuse together. That is where tools are very important. We can consider OpenMusic as a black box where the analysis and symbolic fields are connected in order to produce such a fusion in the field of musical time.

2) No one to speak their names (now that they are gone)

The structure of "No one to speak their names (now that they are gone)" for two bass clarinets, string trio and electronics is based on a stereo AIFF sound file 2.3 seconds long.

5 Meaning that form and strategies are primarily based on pitch.
6 This was the initial approach in Stockhausen's well-known article "...wie die Zeit vergeht..." [6]


Considering the complex nature of this sound file (friction mode on a tam-tam), it has been segmented into 7 parts (cf. figure). This segmentation was done according to the dynamic profile of the sound file.

We may consider a sound as an array of n dimensions (as shown in the figure above) with potential information that can be translated into time information. At first sight, it is natural to construct this array under the additive model (time, frequency, amplitude and phase). This is a rather straightforward description that can be used directly for processing in the time domain or for an eventual resynthesis. Other sound analyses/descriptions are available, such as spectral analysis, LPC analysis, the modres analysis and others. The modres analysis was chosen. All those described above are discrete windowing analyses in which the time domain is absent. Most of them have time addressing, but the last one (the modres analysis) is an array of dimension 3 (frequency, amplitude and bandwidth/Pi).

Illustration 10

Illustration 9

Given a sound analysis/description and an array of n dimensions, it is possible to translate it into the time domain from array to array; i.e. the analysis data can be read in any plane (vertically, diagonally, etc.) or in any combination of arrays. This "translation" is of course arbitrary, and is meant to be a translation into the symbolic field: the score, which is in its turn an array of a totally different class. Although this operation seems arbitrary (and somehow it is), there are two arguments which in my opinion are pertinent:

First, the fact that (as we will see later) the sound array is processed in a completely interdependent way, taking into account all the proportionate weight relations contained within it, means that the coherency of the sound's resonance will somehow be "reflected" in the symbolic domain through specific analogical modes (dynamics, durations and pitch). These are not supposed to be literally associated one by one (i.e. exact correspondence of parametrical fields is not necessary); on the contrary, in this piece they are permutated.

Illustration 11 frequency, dynamics and bandwidth data

The second important point is the fact that this translation establishes a strong, significant relation between the electronic part and the acoustic part of the piece. This strategy seems to me one of the strongest and most coherent bindings in the mixed-music repertoire.

Moreover, if we visualize the given data in a 3-dimensional graph (see illustration 10), we will see many folds ("plis" [6]) of different densities. These are directly related to the polyphonic gesture representing the point-counterpoint technique used in the score (cf. score example, illustrations 18-19).

As we can see above, there are two parallel musical processes: the electronic part (tape), which is also entirely constructed from the initial material (the tam-tam sound file), and the score part. The semantic bridge is shown as a dashed arrow. It is through analysis that both domains communicate with each other7. In the case of resynthesis, another bridge could be established in the other direction (from the symbolic to the sound domain), but this is not the case in the present composition.

In "No one to speak their names (now that they are gone)", using the modres analysis, the bandwidth array was chosen to order the sets of pitches in each fragment according to their bandwidth. For each parsed pitch segment we again establish a proportional relation: all pitches / highest pitch. These proportions will be used as snapshots of seven sonic states in a plane of a three-dimensional array (x, y, z), each state being the sum of all energetic weights within one window. We will use them to determine our durations throughout the composition. The durations are of two orders:

7 Analysis could be thought of as another potential aspect of a sound file or, in other terms, as an alternative reading/representation of sound.

Illustration 12

– Macro durations, which represent metric time and determine a subliminal pulse illustrated by dynamics. Measures are calculated following the proportions computed from the last segment.

– Local durations, consisting of effective durations given to the four instruments. These are distributed following the same proportions around the bar lines, creating asymmetrical crescendo-decrescendo pairs.
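The proportional relation described above (all pitches / highest pitch, then projected onto durations) might be sketched as follows; the pitch values and the scaling onto a total duration are assumptions for illustration:

```python
# Hypothetical sketch of the proportional translation: each pitch of a
# segment is reduced to a proportion of the segment's highest pitch,
# and the proportions are then scaled onto a total duration in ms.
def proportions(pitches):
    top = max(pitches)
    return [p / top for p in pitches]

def to_durations(pitches, total_ms):
    props = proportions(pitches)
    s = sum(props)
    return [round(total_ms * p / s) for p in props]

segment = [220.0, 330.0, 440.0, 550.0]  # invented values, in Hz
print(proportions(segment))         # -> [0.4, 0.6, 0.8, 1.0]
print(to_durations(segment, 2800))  # -> [400, 600, 800, 1000]
```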

The main persistent concept throughout the work, referring to pitch/bandwidth/dynamic weights, is the notion of symmetry. As we have seen in the example above, we can use this as a compositional element.

Starting from one mode of resonance, which was assigned to durations following our proportional translation:

Illustration 13 Analysis translated into polyphony of durations


we will apply to it a new axis of symmetry, from which all durations will start, continuing in an asymmetric mirroring, as illustrated below:

This was calculated by the patch below:

Illustration 14

Illustration 15 -35 degrees symmetrical axis


The resulting durations can be seen below.

Illustration 16

Illustration 17


Durations are not the only elements calculated from the analysis. Starting from measure 28, pitches are extracted from the analysis and distributed over all four instruments following an order based on bandwidth over amplitude, giving weight criteria from the most important to the least (the result of the patch seen in illustration 13):
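The ordering criterion (bandwidth over amplitude as a weight, from most important to least) might be sketched like this; both the weight formula and the data are assumptions for illustration:

```python
# Hypothetical sketch: partials extracted from the analysis carry an
# amplitude and a bandwidth; they are ranked by a bandwidth/amplitude
# weight (an assumed formula) before distribution to the instruments.
partials = [
    # (pitch in midicents, amplitude, bandwidth) -- invented values
    (6000, 0.8, 0.10),
    (6400, 0.2, 0.30),
    (6700, 0.5, 0.05),
    (7200, 0.1, 0.40),
]

# sort from most important weight to least
ranked = sorted(partials, key=lambda p: p[2] / p[1], reverse=True)
print([p[0] for p in ranked])  # -> [7200, 6400, 6000, 6700]
```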

IV. Hypercomplex minimalism

1) Sound analysis for controlling instrumental gestures

As opposed to the examples we have already seen, where the data consisted of three-dimensional arrays of information and was therefore complex, we will see here the concrete use of a simpler two-dimensional sound data array.

Illustration 18 Excerpt from the electric guitar and string trio version.

Illustration 19

Ptps analysis is a pitch-estimation analysis (pitch and time arrays). When applied to a noise source or an inharmonic sound, the analysis output yields interesting profiles.

This data will be used, after its decomposition into n fragments, as a means of controlling musical gesture.
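The decomposition into n fragments can be sketched as a simple partition of the analysis curve (the (time, pitch) pair format is an assumption about the ptps data, made for illustration):

```python
# Sketch: a pitch-estimation curve as a list of (time, pitch) pairs,
# cut into n contiguous fragments to serve as gesture-control fields.
def fragment(curve, n):
    size, rest = divmod(len(curve), n)
    out, i = [], 0
    for k in range(n):
        step = size + (1 if k < rest else 0)  # spread the remainder
        out.append(curve[i:i + step])
        i += step
    return out

curve = [(t * 0.01, 100 + (t * 37) % 50) for t in range(10)]  # fake data
frags = fragment(curve, 3)
print([len(f) for f in frags])  # -> [4, 3, 3]
```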

Illustration 20 PTPS analysis


These fragments will be considered as potential fields in the dynamic control of instrumental gestures.

These "potential" fields will afterwards be filtered and assigned according to the musical context. As mentioned above (section 2), the relevance of this technique lies in the fact that all the sound sources used for analysis, or in the tape part of the piece, come from samples pre-recorded by the musicians using special playing modes (multiphonics, breathing, and others).

Illustration 21 Fragmented analysis in OpenMusic

Illustration 22 Bow pressure, placement and speed control of the double bass part in "In lieblicher Blaue..." for amplified bass saxophone, amplified double bass and electronics.


One must, however, also take into consideration the fact that musical events are proportionally balanced between complex gestures in the process of sound production and minimal activity in note and rhythm production; i.e. we can distinguish two layers of activity: the "traditional" score notation, and the control-processing notation.

2) Adagio for String quartet

Again in this work, the use of a sound file was the starting point of the whole piece. However, the technique is completely different. Instead of using an external analysis program, all the work was done in OpenMusic. OpenMusic's handling of sound files is limited to playing them and representing them, in the SoundFile object, under the time/amplitude curve common to most sound editors; my intention was to use limited and reduced data in order to have a closer affinity with the symbolic field, given the instrumental connotation of the string quartet.

I therefore used the BPC object and downsampled the amplitude curve in order to have a satisfying global overview of the amplitude profile.
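The downsampling itself can be sketched as a windowed average over the amplitude curve (one plausible reduction; the actual preparation of the BPC data may differ):

```python
# Sketch: reduce an amplitude curve to a coarse break-point overview
# by averaging fixed-size windows. Sample values are invented.
def downsample(samples, window):
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        out.append(sum(chunk) / len(chunk))
    return out

amp = [0.0, 0.25, 0.5, 0.25, -0.125, -0.375, -0.25, 0.0]
print(downsample(amp, 4))  # -> [0.25, -0.1875]
```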

Illustration 23 Excerpt from "In lieblicher Blaue..." for amplified bass saxophone, amplified double bass and electronics


The amplitude curve having two phases, and the downsampling having accentuated the difference between the positive and negative phases, a sort of double-choir polyphony was created.
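Splitting the curve into its positive and negative phases, as described, might be sketched as:

```python
# Sketch: separate a downsampled amplitude curve into positive and
# negative phases, yielding the two "choirs" of the polyphony.
def split_phases(points):
    pos = [(i, v) for i, v in enumerate(points) if v > 0]
    neg = [(i, v) for i, v in enumerate(points) if v < 0]
    return pos, neg

curve = [0.25, -0.1875, 0.5, -0.5, 0.125]  # invented values
pos, neg = split_phases(curve)
print(pos)  # -> [(0, 0.25), (2, 0.5), (4, 0.125)]
print(neg)  # -> [(1, -0.1875), (3, -0.5)]
```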

Illustration 24


I then determined four axes intersecting the curves. Durations were then quantified starting from these segments (see illustration 24).

In order to control the result, the two polyphonies were synthesized using Csound, and the result was placed in OpenMusic's Maquette object.

Illustration 25

Illustration 26


Illustration 27


V. Conclusion

In the compositional processes presented here, we can distinguish two fundamental operatives: data and algorithms.

Data, in itself, can be assimilated to a conceptual idea. It represents the deterministic drive of "what must be" in a given lapse of time decided by the composer's deus ex machina.

The algorithms can be seen as the executive part of the composer's idea: like the given data proposal they are deterministic, but they add to it a dynamic decisional potential that models the propositional data to its own creative role.

These two operatives are elements of a wide dynamic program. Since the computational part (analysis and processing) was executed with different computer programs, such as OpenMusic, Diphone [7], etc., all of these can be considered parts of a unique program: the composition itself.

Indeed, it is legitimate nowadays to consider a work of art not only under the aspect of its factual performance, its aesthetic reality, but also under the deconstructive knowledge of its own constitution. I personally adhere to Hegel's thesis [8]8, and to the perspectival view that the work of art has arrived at its finality, and that the modern understanding of art (from Descartes to Kant) can no longer be understood as it was before. Neither the post-modernist attitude nor techno-classicism accomplishes the destiny of modern art; rather, a conscious study of the state of the art of its own medium is necessary, similar to the Renaissance revolution. The French composer and philosopher

8 "In allen diesen Beziehungen ist und bleibt die Kunst nach der Seite ihrer höchsten Bestimmung für uns ein Vergangenes." (X, 1, p. 16): "In all these relations art, with regard to its highest destination, is and remains for us a thing of the past." (X, 1, p. 16) [14]

Illustration 28 beginning of the Adagio for String quartet.


Hughes Dufourt states: "La musique en changeant d'échelle, a changé de langage."9 [9]. The techniques of composition and sound exploration must be totally integrated, not only into the praxis of composition but into its understanding and, better still, as an integral part of composition itself.

Bibliography

1. Pierre Boulez, "Penser la musique aujourd'hui".
2. Carlos Agon, Karim Haddad and Gérard Assayag, "Representation and Rendering of Rhythmic Structures", 2002.
3. Carlos Agon, "doctorat.......".
4. Karlheinz Stockhausen, "...wie die Zeit vergeht...", 1956.
4. Massimo Cacciari, "Icone della Legge", Adelphi Edizioni, 1985.
5. Hughes Dufourt, "L'oeuvre et l'histoire", Christian Bourgeois Éditeur, 1991.
6. Gilles Deleuze, "Le Pli - Leibniz et le baroque", Les éditions de Minuit, 1988.
7. G.W.F. Hegel, Complete Works.
8. Diphone, Xavier Rodet, Adrien Lefevre, IRCAM.
9. Csound, Barry Vercoe, MIT.
10. OpenMusic, C. Agon and G. Assayag, IRCAM.

9 "In changing its scale, music has also changed its language."

Works by Karim Haddad mentioned in this article

No one to speak their names (now that they are gone), 11' (*), 2002
electric guitar, string trio and electronics
First performance: October 2001, Paris, Territoires polychromes Festival
Yvap Quartet

In lieblicher Blaue..., 7', 2003
double bass, bass saxophone, tape and electronics
First performance: February 2003, Béziers
Reina Portuando, Daniel Kienzy & Jean-Pierre Robert

"...und wozu Dichter in dürftiger Zeit?...", 15' (*), 2003
for twelve instruments and electronics
First performance: April 2003, Paris, Radio France
Ensemble 2e2m, conducted by Paul Méfano

Adagio for string quartet, 15' (*), 2004
string quartet
First performance: September 2004, Paris, Radio France
Diotima String Quartet: Nicolas Miribel, Eichi Chijiwa, Franck Chevalier and Pierre Morlet