University of Calgary
PRISM: University of Calgary's Digital Repository
Graduate Studies
The Vault: Electronic Theses and Dissertations
2016
AAIM: Algorithmically Assisted Improvised Music
Fay, Simon
Fay, S. (2016). AAIM: Algorithmically Assisted Improvised Music (Unpublished doctoral thesis).
University of Calgary, Calgary, AB. doi:10.11575/PRISM/24629
http://hdl.handle.net/11023/3073
doctoral thesis
University of Calgary graduate students retain copyright ownership and moral rights for their
thesis. You may use this material in any way that is permitted by the Copyright Act or through
licensing that has been assigned to the document. For uses that are not allowable under
copyright legislation or licensing, you are required to seek permission.
Downloaded from PRISM: https://prism.ucalgary.ca
UNIVERSITY OF CALGARY
AAIM:
Algorithmically Assisted Improvised Music
by
Simon Fay
A THESIS
SUBMITTED TO THE FACULTY OF GRADUATE STUDIES
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY
GRADUATE PROGRAM IN COMPUTATIONAL MEDIA DESIGN
CALGARY, ALBERTA
June, 2016
© Simon Fay 2016
Abstract
The AAIM (Algorithmically Assisted Improvised Music) performance system1 is a portfolio of interconnectable algorithmic software modules, designed to facilitate improvisation and live performance of electronic music. The AAIM system makes no attempt to generate new materials in one particular style, nor to act as an autonomous improviser. Instead, the goal of the AAIM system is to facilitate improvisation through the variation and manipulation of composed materials entered by the user. By generating these variations algorithmically, the system gives improvisers of computer music the ability to focus on macro elements of their performances, such as form, phrasing, texture, spatialisation, and timbre, while still enabling them to incorporate the rhythmic and melodic variations of a virtuosic instrumental improviser.

1 https://simonjohnfay.com/aaim/
Acknowledgements
First, I would like to thank my supervisors, David Eagle and Jeff Boyd. Their assistance, suggestions, and support throughout my research were invaluable. I also want to give special thanks to my friends and collaborators Lawrence Fyfe, Aura Pon, and Ethan Cayko, who all worked with me on a number of pieces using AAIM. Finally, I want to thank all my friends and family, who have always supported and encouraged me.
Table of Contents
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
List of Symbols
1 Introduction
  1.1 Advancing Technologies and Musical Developments
  1.2 Motivation
  1.3 Scope
  1.4 Goal
  1.5 Challenges
  1.6 Methodology
  1.7 Contributions
  1.8 Thesis Structure
2 Musical Context
  2.1 Electronic Music
    2.1.1 Musique Concrète
    2.1.2 Elektronische Musik
    2.1.3 BBC Radiophonic Workshop, and Delia Derbyshire
    2.1.4 Modular Synthesisers, and Morton Subotnick
    2.1.5 Feedback Delay, Live Sampling, Terry Riley, Kaffe Mathews
    2.1.6 FM Synthesis, and Paul Lansky
    2.1.7 Sampling, Breakbeats, John Oswald, Plunderphonics
    2.1.8 Techno and The "New Generation"
    2.1.9 Aphex Twin
    2.1.10 Autechre
    2.1.11 Squarepusher
    2.1.12 Matmos and Björk
    2.1.13 Richard Devine
    2.1.14 Summary
  2.2 Improvisation
    2.2.1 Improvisation and composition as two points on a continuum
    2.2.2 The use of 'referents' or 'models' in improvisation
    2.2.3 Improvisation as the re-use, variation, and manipulation of previously learnt 'building blocks'
  2.3 Algorithms and Computational Thinking in Music
    2.3.1 Algorithms in improvised music
3 Related Work - Algorithmic Music Performance Systems
  3.1 Algorithmic Methods
    3.1.1 Mathematical Models
    3.1.2 Knowledge-based Methods
    3.1.3 Grammars
    3.1.4 Evolutionary Methods
    3.1.5 Systems That Learn
    3.1.6 Hybrid Systems
  3.2 Other Means of Categorisation
    3.2.1 Algorithmic Function
    3.2.2 Instrument v Player Paradigms
    3.2.3 Musical 'problem space'
  3.3 Overview
    3.3.1 Player Paradigm Systems
      3.3.1.1 Voyager
      3.3.1.2 prosthesis
      3.3.1.3 Continuator
      3.3.1.4 OMAX-OFON
      3.3.1.5 GenJam
      3.3.1.6 Kinetic Engine
      3.3.1.7 GEDMAS
    3.3.2 Instrument Paradigm Systems
      3.3.2.1 Lexikon-Sonate
      3.3.2.2 Virtuoso
      3.3.2.3 Syncopalooza
      3.3.2.4 Liquid Music
      3.3.2.5 2020 Beat-Machine
      3.3.2.6 ITVL
      3.3.2.7 Fugue Machine
  3.4 Discussion
4 AAIM
  4.1 Concept
  4.2 Artistic Goals
  4.3 Design Philosophy
    4.3.1 Algorithmic Approach
  4.4 Improvisation with AAIM
    4.4.1 Composition - Improvisation Continuum
    4.4.2 Referent-based Improvisation
    4.4.3 Improvisation through the re-use, variation, and manipulation of 'building blocks'
5 Software Portfolio
  5.1 AAIM.rhythmGen
    5.1.1 Input and Interaction
    5.1.2 Trigger Output
    5.1.3 ioiChooser
    5.1.4 Indispensability
    5.1.5 Additional Features
      5.1.5.1 Tempo Changes
      5.1.5.2 Timing Variations
  5.2 AAIM.patternVary
    5.2.1 Input and Interaction
    5.2.2 Trigger Output and Basic Variations
    5.2.3 Inserting 'Extra' Notes
    5.2.4 Additional Features
      5.2.4.1 Maximum Notes
      5.2.4.2 Grouping Determination
  5.3 AAIM.loopVary
    5.3.1 Input and Interaction
    5.3.2 Grain Size
    5.3.3 Reverse
    5.3.4 Retrigger
    5.3.5 Jump
  5.4 AAIM.melodyVary
    5.4.1 Input and Interaction
      5.4.1.1 Available Pitches
      5.4.1.2 Actual Pitches
      5.4.1.3 Modal Mapping
    5.4.2 Melodic Analysis
      5.4.2.1 Interval Content
      5.4.2.2 Trajectory
      5.4.2.3 Possible Notes
    5.4.3 Inverse
    5.4.4 Retrograde
    5.4.5 Retrograde-Inversion
    5.4.6 Expansion
    5.4.7 Sequence
    5.4.8 Scalar Notes
    5.4.9 Scalar Melody
  5.5 AAIM.app Application
6 Portfolio: Creative Practice and Selected Works
  6.1 "There is pleasure..."
    6.1.1 Synthesis
    6.1.2 Spatialisation
    6.1.3 Musical Materials
    6.1.4 Additional Features
  6.2 under scored
    6.2.1 Synthesis
    6.2.2 Musical Materials
  6.3 5Four
    6.3.1 Sound Sources
    6.3.2 Musical Materials
  6.4 Others
    6.4.1 blueMango
    6.4.2 BeakerHead
    6.4.3 radioAAIM
    6.4.4 Winter Lullaby (summersongsforwinterspast mix)
    6.4.5 new beginnings
    6.4.6 Jam Band
7 Evaluation
  7.1 The AAIM system
    7.1.1 Criteria for Musical Algorithms
  7.2 Live Performance
  7.3 Improvisation
  7.4 Broad Problem Space
8 Conclusions
  8.1 Contributions
  8.2 Future Work
    8.2.1 AAIMduino
    8.2.2 AAIM Max and Pure Data Externals
    8.2.3 AAIM Application and VSTs
    8.2.4 Android/iOS app
  8.3 Personal Reflections
  8.4 Closing Remarks
Bibliography
A User Guide
  A.0.1 Getting Started/General Tips
  A.0.2 Performance Interface
  A.0.3 Saving/Loading Section Presets
  A.1 AAIM Modules
    A.1.1 AAIM.rhythmGen
      A.1.1.1 Global Controls
      A.1.1.2 inter-onset-interval probabilities
      A.1.1.3 Individual Voice Controls
      A.1.1.4 tempo
      A.1.1.5 rests
      A.1.1.6 complexity
      A.1.1.7 timing deviations
      A.1.1.8 voice on/off
    A.1.2 AAIM.patternVary
      A.1.2.1 AAIM.patternVary trigger destinations
      A.1.2.2 MIDI
      A.1.2.3 OSC
      A.1.2.4 AAIM.samplePlayer
      A.1.2.5 sequenceMaker
      A.1.2.6 AAIM.samplePlayer2
    A.1.3 AAIM.looper
      A.1.3.1 AAIM.looperMultiVoiceInterface
      A.1.3.2 AAIM.looper
    A.1.4 AAIM.melodyVary
      A.1.4.1 AAIM.melodyVaryMultiVoiceInterface
      A.1.4.2 AAIM.melodyVaryPoly
      A.1.4.3 AAIM.melodyVary trigger destinations
      A.1.4.4 MIDI
      A.1.4.5 OSC
      A.1.4.6 AAIM.synth
    A.1.5 AAIM.sequenceMaker
    A.1.6 AAIM.FXModule1
    A.1.7 AAIM.polygonInterface
  A.2 Examples
    A.2.1 AAIM.patternVary Example
      A.2.1.1 Loading the Example
      A.2.1.2 Performance and Improvisation
    A.2.2 AAIM.melodyVary Example
      A.2.2.1 Loading the Example
      A.2.2.2 Performance and improvisation
    A.2.3 "Jam Band" Preset Example
      A.2.3.1 Loading the Example
      A.2.3.2 Performance and improvisation
      A.2.3.3 Form
      Example Progression
List of Tables
4.1 Classification of the AAIM system using the categories presented in Chapter 3.

5.1 Examples of the relationships between various i-o-i's and the number of repetitions of both the 'base i-o-i' and the i-o-i itself necessary for synchronisation, using one eighth note as the 'base' i-o-i.

5.2 The relationships between interval names used by the AAIM.melodyVary module and the chromatic intervals they represent.
List of Figures and Illustrations
1.1 The scope of the research presented in this thesis.
1.2 Graphical representation of the methodology used for the research presented in this thesis.

2.1 Guido d'Arezzo's mechanical method for composing/improvising melodic lines, by assigning each vowel to a pitch.
2.2 Example of a simple rhythm, '3 against 4,' using the Schillinger method.

4.1 Illustration of the placement of the AAIM system between the interface and sound source.
4.2 Expansion of Figure 4.1, illustrating the fundamental role played by the AAIM.rhythmGen module in triggering each of the other primary modules.

5.1 Example of a two-bar phrase output by the rhythm generator. Blue arrows signify the choosing of a new i-o-i, and red arrows signify triggers output by the rhythm generator.
5.2 Output of Fig 5.1 in musical notation, with the 'base' i-o-i equalling one eighth note.
5.3 Simple example of the AAIM.rhythmGen 'ioiMaker' method.
5.4 Examples of the AAIM.rhythmGen 'complexity determination' method.
5.5 The decision-making process used by each AAIM.rhythmGen voice choosing new i-o-i values and outputting triggers.
5.6 A schematic diagram demonstrating two alternative groupings created by the AAIM.rhythmGen 'genGrouping' algorithm using the same input.
5.7 A schematic diagram illustrating the AAIM.rhythmGen 'indispensability' algorithm.
5.8 The decision-making process used by the AAIM.patternVary when using the output of the AAIM.rhythmGen to vary patterns.
5.9 The decision-making process used by the AAIM.loopVary module when playing and varying samples.
5.10 Example of how the AAIM.loopVary algorithm segments a sample into n equal parts, and how the module maps triggers onto these segmentations.
5.11 Example of mapping the output of the AAIM.rhythmGen module onto a sample using the AAIM.loopVary.
5.12 Variations of a sample using the AAIM.loopVary 'reverse' method.
5.13 Variations of a sample using the AAIM.loopVary 'retrigger' method.
5.14 Variations of a sample using the AAIM.loopVary 'jump' method.
5.15 Demonstration of how the AAIM.melodyVary module converts input of pitch and trigger type pairs into a melodic line (using the nursery rhyme 'Mary had a little lamb').
5.16 Demonstration of how the AAIM.melodyVary module maps the list of 'Available Pitches' onto the list of 'Actual Pitches' to create new versions of melodic lines.
5.17 Demonstration of the modal mapping method used by the AAIM.melodyVary module.
5.18 The AAIM.melodyVary analysis of the intervallic content of the melody used in Fig. 5.15.
5.19 Expansion of the possible notes used by the AAIM.melodyVary scalar note method.
5.20 Example of the AAIM.melodyVary 'invert' method.
5.21 Example of the AAIM.melodyVary 'retrograde' method.
5.22 Example of the AAIM.melodyVary 'expansion' method.
5.23 Example of the AAIM.melodyVary 'sequence' method.
5.24 Demonstration of the AAIM.melodyVary 'scalar note' method.
5.25 Demonstration of the AAIM.melodyVary 'scalar note' method, but using the entire C pentatonic scale rather than just the notes present in the original melody.
5.26 Demonstration of the AAIM.melodyVary 'scalar melody' method.
5.27 Screenshot of the AAIM.app application main screen.
5.28 Screenshot of the AAIM.app application AAIM.rhythmGen interface.

6.1 The "There is pleasure..." Feedback Frequency Modulation (FFM) synthesis engine.
6.2 The interface used to control "There is pleasure..." during live performances.
6.3 The 'Vamp' section of under scored.
6.4 The 'A' section of under scored.
6.5 The 'B' section of under scored.
6.6 The 'transition' section of under scored.
6.7 The form used by the Aspect ensemble when performing under scored at the 2014 joint ICMC and SMC conference.
6.8 The suggested skeletal drum pattern for the primary theme of 5Four.
6.9 The suggested skeletal drum pattern for the secondary theme of 5Four.
6.10 The rhythmic pattern to be used during the 'shout chorus' section of 5Four.

A.1 The AAIM system window on load.
A.2 The AAIM Setup window on system load.
A.3 The AAIM system performance interface.
A.4 The central section of the AAIM system performance interface.
A.5 The audio section of the AAIM system performance interface.
A.6 The polygon interface section of the AAIM system performance interface.
A.7 The AAIM.rhythmGen interface on system load.
A.8 The AAIM.rhythmGen controls on the main window.
A.9 The AAIM.patternVary interface on system load.
A.10 The AAIM Setup following initial toggling of the patternVary algorithm.
A.11 patternVary performance interface on AAIM System main window.
A.12 Interface used to create patterns which are subsequently played/manipulated using the patternVary algorithm.
A.13 Interface used to determine the chance of additional notes being inserted into the given pattern for each individual voice.
A.14 Interface used for output of MIDI from AAIM.
A.15 Interface used for output of OSC messages from AAIM.
A.16 Interface used for AAIM.samplePlayer, a patch which allows the triggering of an arbitrary number of samples, with each sample being triggered by a single voice.
A.17 Global interface used for AAIM.samplePlayer2.
A.18 Sample load section of the AAIMsamplePlayerInterface2 patch.
A.19 Sequence section of the AAIMsamplePlayerInterface2 patch.
A.20 Multisliders section of the AAIMsamplePlayerInterface2 patch.
A.21 The AAIM Setup following initial toggling of the looper algorithm.
A.22 AAIM.looperMultiVoiceInterface used for control of all AAIM.looper voices simultaneously.
A.23 AAIM.looper used for control of an individual AAIM.looper voice.
A.24 looper performance interface on AAIM System main window.
A.25 Multislider section of the AAIM.looperMultiVoiceInterface.
A.26 AAIM.sequenceMaker section of the AAIM.looperMultiVoiceInterface.
A.27 Sample load section of the AAIM.looperMultiVoiceInterface.
A.28 Waveform section of the AAIM.looper interface.
A.29 Variations section of the AAIM.looper interface.
A.30 Sample segment sequence section of the AAIM.looper interface.
A.31 Pitch shift sequence section of the AAIM.looper interface.
A.32 Audio section of the AAIM.looper interface.
A.33 The AAIM Setup following initial toggling of the melodyVary algorithm.
A.34 The AAIM.melodyVaryMultiVoiceInterface patch.
A.35 The AAIM.melodyVary interface.
A.36 The variations section of the AAIM.melodyVary interface.
A.37 The variations section of the AAIM.melodyVaryMultiVoiceInterface interface.
A.38 melodyVary performance interface on AAIM System main window.
A.39 Pitch Collection section of the AAIM.melodyVaryMultiVoiceInterface.
A.40 Melodic content section of the AAIM.melodyVaryPoly interface.
A.41 The top right section of the AAIM.melodyVaryPoly interface.
A.42 The pitch class section of the AAIM.melodyVaryPoly interface.
A.43 Interface for AAIM.synth.
A.44 Envelope section of AAIM.synth interface.
A.45 FFM section of AAIM.synth interface.
A.46 Audio section of AAIM.synth interface.
A.47 Interface for the AAIM.sequenceMaker patch.
A.48 Upper section of AAIM.sequenceMaker interface.
A.49 Lower section of AAIM.sequenceMaker interface.
A.50 AAIM.polygonInterface.
A.51 Presets section of the AAIM.polygonInterface.
List of Symbols, Abbreviations and Nomenclature
Max [41]: A visual programming language, primarily used for music, that allows users to create software by connecting different 'objects' and 'patches.'

Pure Data [137]: An open-source visual programming language similar to Max.

Patch: A Max or Pure Data program.

Object: An object in the object-oriented programming sense, implemented for use in Max or Pure Data.

Module: A distinct unit that undertakes a particular task but can be connected with other modules to achieve a more complex system.

Voices: Independent melodic or rhythmic lines in a piece of music; also, the number of simultaneous sounds an electronic music instrument can make.

i-o-i: Inter-onset-interval, the length of time between two successive events.
Chapter 1
Introduction
Music making is amongst the most ubiquitous of human activities. Throughout history, disparate societies have each developed unique musical cultures, and new cultures and subcultures have in turn grown out of those musical traditions. So important is the act of music making to human culture that neuroscientist Daniel Levitin has suggested that it is possibly even more fundamental to our species than
language [106]. The history of music is also intrinsically linked to advancing technologies, with
advancing technologies facilitating advances in music, and developing musical styles prompting
the development of new technologies. The digital computer is not only one of the newest technologies employed in music making but also one of the most powerful: a technology capable of generating never-before-heard timbres, facilitating entirely new means of interaction, and arranging and triggering sounds in ways that were never possible before.
From the use of vinyl and magnetic tape to create music from recordings of everyday sounds in musique concrète, through the numerous overdubbed recordings that were combined to create the songs on Sgt. Pepper’s Lonely Hearts Club Band, to the use of drum machines and sequencers to create the first electronic dance music tracks, the ability to arrange sound has possibly effected a greater change in the way we make music, and in the music we make, than any previous technological affordance. The digital computer is certainly not the first technology used for this purpose, but it
is far more powerful and capable than those that came before. The research presented in this thesis
focusses on this potential for computer-based systems to arrange and trigger musical materials in
entirely new ways, and particularly the potential to facilitate the development of new performance
and improvisational practices. This research has culminated in the evolution of the AAIM sys-
tem, a portfolio of algorithmic modules1 designed to assist improvisers through the variation and
1In this thesis I will refer to a ‘portfolio’ of algorithms to link to the tradition of referring to a collection of artistic works as a portfolio.
manipulation of composed music entered by the user.
This chapter begins with a brief overview of the symbiotic nature of developing technologies
and music (1.1). Section 1.2 then discusses the motivation underpinning this research, and a brief
description of the scope of this research follows (1.3). Section 1.4 presents the central question
behind this research. I describe the challenges that I overcame during this research in Section 1.5,
and this leads to the methodology used to overcome these challenges in Section 1.6. Overcoming
these challenges resulted in a number of research contributions (1.7). This chapter ends with an overview of the structure of this thesis (1.8).
1.1 Advancing Technologies and Musical Developments
One can observe the intrinsic link between advancing musical practices and technologies in the
musical styles facilitated by any successful musical technology. For example, bone flutes found at Jiahu in China are possibly the earliest complete instruments yet discovered that are capable of producing multiple pitches [193]. These flutes would not have been possible without advancements in technology, and
the ability to produce more than one pitch would surely have afforded many new possibilities in
music making. Similarly, the development of new musical ideas has in turn necessitated advance-
ments in the technology used to create the music. This symbiotic relationship is evident in the
evolution of keyboard instruments in the Western world, and particularly the gradual introduction
of the equal tempered scale (wherein the frequency ratio between successive notes in the chro-
matic scale always remains the same), to facilitate a transition towards a musical style that was
more focussed on harmonic modulations. More recently, the creation of the first electric guitars
initially served to enable guitarists to play at a higher volume. Subsequently, the affordances of
the instrument have not only made it among the most popular instruments in the world, but also
facilitated the development of countless new styles of music, many of which have unique subcultures associated with them,2 and the creation of sounds that were never possible with an acoustic guitar.3
2For example, punk music and its associated culture.
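The equal-tempered tuning described above can be stated precisely: each semitone step multiplies frequency by the same constant ratio. A minimal numerical sketch follows; the twelve-note division and the A4 = 440 Hz reference are illustrative assumptions, as neither figure appears in the text:

```python
# Equal temperament: the frequency ratio between successive chromatic notes
# is always the same, so n semitones above a reference f0 gives f0 * 2**(n/12).
# (A4 = 440 Hz is an assumed reference pitch, not taken from the text.)
def equal_tempered(n_semitones, f0=440.0, divisions=12):
    """Frequency in Hz, n semitones above the reference pitch f0."""
    return f0 * 2 ** (n_semitones / divisions)

ratio = equal_tempered(1) / equal_tempered(0)    # constant semitone ratio, ~1.0595
octave = equal_tempered(12) / equal_tempered(0)  # twelve equal steps span a 2:1 octave
```

Because the ratio is constant, a modulation simply shifts the whole scale, which is what made the harmonic modulations mentioned above practical on keyboard instruments.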
Of course, the development of the electric guitar itself would not have been possible without technological advances allowing humans to control electricity, e.g. the transistor and the capacitor. However, while the electric guitar merely allowed the vibrations of metal strings to be amplified, other electrical technologies have arguably had an even more profound effect on modern musical culture in the Western world. Without the invention of technologies such as magnetic tape, the ability to record and
replay the audio of a performance would never have been possible, and much of the most recog-
nisable music of the past century would not only have never been heard, but could never have been
created.4 The ability to record and then replay audio also afforded entirely new possibilities for
music making based on these recordings. The 1948 broadcast of Pierre Schaeffer’s Cinq études de
bruits, the first of which was created entirely from audio recordings of trains, is one of the earliest
examples of this new approach to music making. Advancing electrical technologies also facilitated
the creation of simple electrical sound generators, and these were subsequently combined to create
‘synthesisers’ – devices capable of generating sounds solely through the control of electrical cur-
rents. The subsequent invention of sequencers expanded on these new affordances, enabling the
triggering and arranging of sound through the control of electricity - ultimately leading to much of
the development of electronic dance music and its related subcultures.
The ability to control electrical currents has also facilitated the creation of perhaps the most
significant and ubiquitous technology of the modern age: the digital computer. As with preced-
ing technologies, digital computers have not only facilitated developments in music making, but
afforded entirely new approaches to music making. Examples include Milton Babbitt’s creation
of “highly organised and structured works” [82, 114] such as Ensembles for Synthesizer5 (1964),
John Chowning’s use of Frequency Modulation (FM) synthesis to create entirely new sounds6,
3https://youtu.be/6bjYAVk-DCs?t=41
4For example, The Beatles’ album Sgt. Pepper’s Lonely Hearts Club Band was largely created by overdubbing - layering numerous different recordings on top of one another.
5https://www.youtube.com/watch?v=W5n1pZn4izI
6https://www.youtube.com/watch?v=988jPjs1gao
and modern ‘apps’ that enable anyone with a smartphone to create music.7 There are two main
facets of using computers in music making that are of particular relevance to this thesis. Firstly,
algorithms are essential to how digital computers work. That is, computers require finite sets of
rules to interpret all of the data and instructions users give them. From a musical perspective, this
makes explicit the connection between music making and algorithms, a practice which at the very least stretches as far back as the beginning of the second millennium and Guido d’Arezzo’s treatise Micrologus, which contains a set of rules/instructions for the setting of text to music. While stating that all computer musicians create algorithmic music may be stretching the point, using algorithms to create and arrange musical events on a technology so reliant on algorithms is surely as natural an act as blowing into a bone flute to create music.
Secondly, the ubiquity of digital computers means that not only is the act of music making a
possibility for anyone with a computer, but also that the same tools and recordings are available
to more people than ever before. In the early days of computer music, due to the size and cost of computers, the ability to work in the medium was limited to those who had access to institutions that owned one. For example, Babbitt’s aforementioned Ensembles for Synthesizer was composed using the RCA Mark II Sound Synthesiser at the Columbia-Princeton Electronic
Music Centre [82, 114]. In contrast, today not only is software such as Reason8 or Ableton Live9
available to anyone with a computer willing to pay for a licence, but open source alternatives such
as Ardour10 mean that this initial outlay is not even necessary to begin creating electronic music. In
parallel, websites such as YouTube have made recordings of even the most obscure music available
to people across the world. This progression towards the democratisation of musical resources
continues a trend that has been growing since the early to mid 20th century, when first vinyl and
then, to an even greater extent, tape cassettes made recordings of music from around the world
available to everyone with the means to play them. Similarly, the widespread availability of music
7e.g. https://www.youtube.com/watch?v=gLLjRH6GJec and https://www.youtube.com/watch?v=uvUkYGZmsTs
8https://www.propellerheads.se/reason
9https://www.ableton.com/en/live/
10https://ardour.org/
making software continues a trend first started with the availability of affordable synthesisers and
sequencers in the 1970s and ’80s. The result of this democratisation of resources is that the
supposed divide between ‘art’ and ‘popular’ electronic music has become increasingly unclear,
and countless artists and composers have demonstrated the creative possibilities afforded by an
approach that explores the overlap between these “artificially separated domains” [34, 142].
1.2 Motivation
From Babbitt’s Ensembles for Synthesizer through “the frenetic pace and virtuoso rhythms” [34,
142] of artists such as Squarepusher and Aphex Twin, composers have explored the “potential
of computer music to exceed human limitations” [34, 142] extensively within the realm of fixed
musical works. However, in this author’s experience, most extant music-making software and devices provide limited opportunities for exploiting this potential in an improvisational capacity. I believe
a variety of factors contribute to this limitation.
Firstly, electronic instruments that follow the traditional one-to-one mapping of acoustic in-
struments, such as keyboard-based synthesisers, often make little, if any, effort to explore this
potential. Ultimately their potential is limited by that of the performer. In 1963, Don Buchla built
the Buchla 100 synthesiser, and subsequently the first commercially available sequencer - a way
to program a pattern of control voltages that the sequencer then triggers over time [82, 222]. In
contrast to the electronic instruments following the traditional one-to-one mapping scheme, se-
quencers are not limited in the same way. For example, using a sequencer one single performance
gesture, pressing play, can result in multiple events occurring. Buchla’s analog sequencer afforded
the performer the ability to vary the sequence in a wide variety of ways, e.g. changing scaling
and direction. However, current digital sequencers rarely afford similar flexibility. As such, where
electronic instruments with one-to-one mapping schemes place the human performer in control of
elements such as pitch and rhythm, sequencers instead treat these features as, essentially, fixed.
Ultimately this limits the possibilities for improvisation to more macro or surface variations, e.g.
choosing different loops, adding effects. Again, this limits the potential aesthetics and styles the
user can explore.
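The kind of flexibility Buchla’s analog sequencer afforded, and which the paragraph above notes is rare in current digital sequencers, can be sketched as a step sequencer whose pattern stays fixed while the performer varies only how it is traversed. A minimal illustration (the class and field names are hypothetical, not the AAIM system’s implementation):

```python
# Minimal step-sequencer sketch: a fixed pattern of values is triggered
# step by step, and the performer varies direction and scaling without
# editing the pattern itself (cf. the Buchla-style variations above).
class StepSequencer:
    def __init__(self, pattern):
        self.pattern = list(pattern)  # e.g. control values or MIDI note numbers
        self.reverse = False          # playback direction
        self.scale = 1                # scaling applied to each value

    def step(self, i):
        """Value for the i-th trigger, honouring direction and scaling."""
        n = len(self.pattern)
        idx = (n - 1 - i % n) if self.reverse else (i % n)
        return self.pattern[idx] * self.scale

seq = StepSequencer([60, 62, 64, 67])
forward = [seq.step(i) for i in range(4)]   # [60, 62, 64, 67]
seq.reverse = True
backward = [seq.step(i) for i in range(4)]  # [67, 64, 62, 60]
```

Here the pattern is never edited; only its traversal changes, leaving pitch and rhythm open to in-performance variation rather than treating them as fixed.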
From Guido d’Arezzo’s Micrologus through Arnold Schoenberg’s 12-tone technique for com-
position, musicians have used algorithmic techniques and “computational thinking” [47] to explore
existing musical styles or to develop new musical ideas that expand beyond the limits of existing
musical styles. As such, the use of digital computers to develop and execute musical algorithms
was not only a natural step but the continuation of a long and rich tradition. Although compu-
tational power, and size, initially limited the use of computers in live performances, subsequent
technological advancements afforded the exploration of algorithmic improvisation systems. Some
of the earliest of these include the work of Joel Chadabe [27] and Salvatore Martirano’s SalMar Construction [49]. George Lewis’ Voyager, for instrumental performer(s) and computer, in which the
computer ‘improvises’ interactively with the human performer in a free jazz setting [47] is perhaps
the most successful and widely known system in this paradigm. Despite these successes, in my
opinion the potential of such systems to facilitate, or augment, human creativity is no greater than
that of playing with a broad range of human performers in various styles. It is for this reason I believe that systems following in the tradition of Buchla’s first sequencer have the potential to facilitate or augment human creativity in live performance and improvisation. Beyond that, I believe the approach best suited to affording the development of new musical ideas is to develop algorithms that
enable the user to manipulate and vary their own musical materials, according to features of these
same materials.
1.3 Scope
The research presented in this thesis falls at the intersection of three main research areas: 1) elec-
tronic music performance systems, 2) algorithmic music, and 3) improvisation. Figure 1.1 illus-
trates the intersection of these three areas in graphical form.
Figure 1.1: The scope of the research presented in this thesis.
1.4 Goal
The research presented in this thesis explores the use of musical algorithms in electronic music
performance systems. The goal of this research is to enable a greater degree of flexibility and
creativity in electronic music by exploiting the “potential of computer music to exceed human
limitations” [34, 142] within an improvised setting. By providing new affordances to the user this
research opens new possibilities in musical creativity and expression.
1.5 Challenges
In pursuing the research goal to use algorithmic performance systems to facilitate and augment
human creativity and musical expression, three main challenges were addressed:
1. Develop stylistically neutral musical algorithms
Although algorithmic performance systems that are intended to create music within
pre-existing styles have proved successful (e.g. [16], [127]), their flexibility is often
limited by the narrow scope of the music they attempt to emulate. In the design of
any musical system or instrument, limitations are inevitable; for example, a six-string guitar can play a maximum of six simultaneous notes. However, a central
tenet of this research is that musical performance systems should strive to afford as
broad a variety of styles, aesthetics, and approaches as acoustic instruments, or to
quote Morton Subotnick: “the medium should always be at the service of the artist”
[168, 113]. To again use the guitar as an example, although the performer is limited
to six simultaneous notes, this has not limited the range of styles that musicians
have explored on the instrument. As such, a significant challenge for this research
was developing algorithms that do not attempt to emulate individual styles but are,
in fact, applicable to a broad variety of styles.
2. Develop multi-voice algorithms for rhythm, pattern, melody, and sample play-
back
To maximise the flexibility and scope of the AAIM system, I needed to develop
algorithms that address four features common to many styles of music:
• Rhythm - addressing the temporal nature of music.
• Pattern - the triggering of a sequence of discrete events over time
(e.g. a drum beat).
• Melody - a series of individual pitches, again triggered over time.
• Sample playback - addressing the widespread use of audio record-
ings, i.e. samples, in electronic music.
It was also necessary for these algorithms to facilitate the usage of multiple voices
that could act independently but within the same overarching structure, thus simu-
lating the interactions between different performers in an ensemble.
3. Address Simoni and Dannenberg’s four evaluation criteria for musical algo-
rithms
In parallel to the above concerns, it was also necessary for the algorithms to be
performable, i.e. usable in real-time musical scenarios. In striving to meet this
challenge I used Simoni and Dannenberg’s four evaluation criteria for musical al-
gorithms [160, 4] as a model:
(a) Simplicity - each algorithm should be simple enough to be both run
and controlled during live performances.
(b) Parsimony - each algorithm should require only a minimal amount
of input from the user.
(c) Elegance - each algorithm should also afford a broad range of
styles and aesthetics using only the user’s input.
(d) Tractability - the relationship between the user’s input and the
algorithm’s output should be as intuitive as possible.
However, in many scenarios these criteria were in conflict, either with each other or
with the above challenges, and as such a balance often had to be struck.
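The rhythm feature named in the second challenge rests on the inter-onset-interval (i-o-i) representation defined in the nomenclature: a rhythm can be stored either as absolute onset times or as the list of durations between them. A minimal sketch of that conversion (the helper names are hypothetical and not drawn from the AAIM modules):

```python
# A rhythm as inter-onset-intervals (i-o-i): the lengths of time between
# successive event onsets. Converting between onset times and i-o-i lists
# is the basic bookkeeping a rhythm-variation algorithm might perform.
def onsets_to_iois(onsets):
    """Durations between successive onsets (in beats or seconds)."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def iois_to_onsets(start, iois):
    """Rebuild absolute onset times from a start time and an i-o-i list."""
    onsets = [start]
    for ioi in iois:
        onsets.append(onsets[-1] + ioi)
    return onsets

iois = onsets_to_iois([0, 1, 2, 4])    # [1, 1, 2]
onsets = iois_to_onsets(0, [1, 1, 2])  # [0, 1, 2, 4]
```

Working on the i-o-i list rather than on fixed onset times lets a variation algorithm stretch, subdivide, or reorder durations while the overall metric frame is preserved.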
1.6 Methodology
The methodology employed in this thesis is a form of practice-based research, defined by Candy
as:
Practice-based Research is an original investigation undertaken
in order to gain new knowledge partly by means of practice and the
outcomes of that practice. Claims of originality and contribution
to knowledge may be demonstrated through creative outcomes that
may include artefacts such as images, music, designs, models, dig-
ital media or other outcomes such as performances and exhibitions
[25, 3].
Proceeding from this definition the methodology employed had four primary components: study,
development, testing, evaluation. However, as can be seen in Figure 1.2, the pathway through these
stages is circular, where evaluation augmented the knowledge gained through study and inspired
the next phase of development.
Figure 1.2: Graphical representation of the methodology used for the research presented in this thesis.
1. Study
To understand the state of algorithmic performance systems I reviewed the litera-
ture to find relevant related work. This research into the current state of the field
enabled me to understand previous approaches to algorithmic performance systems.
In turn, this allowed me to imagine new approaches that could be taken and possible
augmentations or alterations to previous methods. My years of personal experience
performing and improvising with both guitar and electronic music further augmented
this research.
2. Develop
Having understood the related work, and applied this to prior knowledge and expe-
rience of performance and improvisation, I designed and programmed algorithms
that facilitated the variation and manipulation of user-defined musical materials.
3. Testing
To test the ability of the algorithms to facilitate live performance and improvisa-
tion, I wrote, and subsequently performed, a series of musical works. Each of these
works serves as a ‘framework’ within which the performer is free to improvise by
varying and manipulating composed musical materials, allowing me to evaluate the
improvisational affordances of the algorithms. Each piece also explores different
aesthetics, thus examining the ability of the algorithms to facilitate a broad range
of styles and approaches. Although some of the performances of these works in-
volved the real-time creation of the work in the studio, many also received concert
performances.
4. Evaluate
Following testing, I evaluated the affordances of the algorithms. This evaluation
examined not only successes and shortcomings of the algorithm in affording the
performance in question, but also the extent to which the algorithm addressed the
challenges presented in Section 1.5. The knowledge gained from this evaluation
added to the knowledge already acquired during the study stage, and this then led
to a new phase of development.
1.7 Contributions
This thesis contains several significant research contributions, which I achieved by applying the
methodology described in Section 1.6 to meet the challenges discussed in Section 1.5. The primary
contributions are the AAIM system, discussed in Chapters 4 and 5, and the portfolio of creative
works created using AAIM described in Chapter 6.
• Portfolio of algorithmic modules for the variation of musical materials [64]
The primary contribution of this thesis is the collection of algorithms comprising the
AAIM system, described in Chapter 5. These algorithms cover some of the common
features of electronic music: rhythm, patterns, melodies, and sample playback.
Implementations of the AAIM system are available here:
https://simonjohnfay.com/aaim/. As will be discussed later, these downloads
do not represent definitive versions of the AAIM system, but rather serve as exam-
ples of some of the potential uses of AAIM.
The main contributions of the AAIM system are:
– Variation of user-defined materials11
Rather than generate entirely new musical elements, each of the
AAIM modules is concerned with affording the user the ability to
vary and manipulate their musical ideas. This maximises the con-
nection between the music the user inputs and the output of the sys-
tem.
– Facilitate Live Performance12
I designed AAIM for use during live performances. I achieved this by
only developing algorithms that could run in real-time, refining the
algorithms to improve their speed and efficiency, and incorporating
divergent mapping techniques, where one variable influences many
features, allowing control of the system with a minimal number of
inputs.
– Facilitating Improvisation13
As will be discussed in Section 2.2, there are many different ap-
proaches to improvisation in music, and the extent to which an elec-
tronic music system affords improvisation “probably depends on the
11The use of AAIM to vary user-defined materials can be heard in any of the works contained in the portfolio of creative works. For example, in the variation of the melodic and harmonic material in under scored [58], or the variation of prerecorded samples in 5Four [61].
12AAIM has been used in many concert performances, both by the author: [70], [66], [69], [68], and [71]; and by other performers: [65] and [67].
13All of the works contained in the creative portfolio demonstrate AAIM’s potential to facilitate improvisation. Perhaps the most obvious examples are radioAAIM [63], BeakerHead [72], and new beginnings [57], all of which were performed with no preplanned score or progression.
individual as well as the software” [43, xvii]. The form of improvi-
sation facilitated by the AAIM system involves the embellishment
and variation of musical materials. The system strives to ensure that
improvisation is possible through intuitive mapping of variables and
the resulting variations.
– Stylistically Neutral/Broad Problem Space14
I also designed the AAIM system with the intention that it apply
to a broad range of musical scenarios and styles. I achieved this by
developing variation methods that do not rely on the traditions of
any one musical style, but are suited to many. The nature of these
variations is then determined by the music that the user inputs, and
the settings of the variables controlling the alterations. As such, the
output of the system is directly linked to both the style of the music
the user inputs, and the manner in which they tell the system to vary
the music.
Chapter 7 discusses the success of the AAIM system in realising these contributions.
• Portfolio of creative works15
Chapter 6 discusses the musical works that were created using the AAIM system. In
each of these pieces, AAIM was used to enable the performer to manipulate and vary
composed musical materials within a musical framework. These works demonstrate
many of the features of the AAIM system and serve as evidence for the extent to
which the system realises the contributions mentioned above. In addition to this,
however, they also demonstrate the potential for AAIM, and similar algorithmic
approaches, to facilitate the evolution of new artistic and performance practices.
14The range of styles AAIM facilitates is clear in the differences between under scored [58], “There is pleasure...” [59], and 5Four [61].
15[59] [58] [61] [62] [72] [63] [60] [57] [56]
1.8 Thesis Structure
The following is a brief description of the structure of this thesis. Chapter 2 provides the musical
context for the work contained in this thesis. The first section (2.1) provides a brief overview of the
evolution of electronic music, focussing on developments and music that have inspired the design
of the AAIM system. Section 2.2 discusses some extant theories on improvisation, which frame the
nature of improvisation with the AAIM system. The chapter closes with a brief historical overview
of the use of algorithms, and “computational thinking” [47], in Western musical traditions. Chapter
3 provides a survey of algorithmic music systems, focussing in particular on those systems designed
for use in live performance. Chapter 4 details the concept and objectives of the AAIM system, and
the approaches used to achieve these goals. Section 4.1 introduces the overarching concept of the
AAIM system and categorises AAIM using the terms introduced in Chapter 3. Section 4.2 expands
upon the ideas discussed in Section 4.1, focussing on the artistic goals of the system, and how
AAIM realises these goals. Chapter 5 contains a description of the underlying algorithms used by
the primary modules of the AAIM system. Chapter 6 presents the portfolio of pieces created using
the AAIM system. Chapter 7 evaluates the AAIM system’s success in realising the contributions
from Section 1.7. Finally, Chapter 8 describes future work in extending the research presented in
this thesis and provides some closing remarks.
Chapter 2
Musical Context
This chapter will discuss the context with which the AAIM system engages. The first section (2.1)
provides a brief overview of the history of electronic music, focussing on developments and music
that have inspired the design of the AAIM system. This section also discusses the overlap, or
what Leigh Landy has termed the “fuzzy” divide [102, 73], between ‘art’ and ‘popular’ genres of
electronic music, an idea that underpins much of the research presented later in this thesis.
Section 2.2 discusses some extant theories on improvisation, which frame the concept of impro-
visation with the AAIM system. Each concept presented is supported by examples of long-standing
improvisational traditions from around the world that incorporate the concept.
Section 2.3 closes this chapter and provides a brief historical overview of the use of algo-
rithms, and “computational thinking” [47], in Western musical traditions. This section is included
to demonstrate the rich history of using algorithms to aid in the creation of musical works. Sec-
tion 2.3.1 provides evidence of the use of algorithms, and “computational thinking,” in explicitly
improvised music, namely live coding and jazz.
2.1 Electronic Music
The definition of electronic music, and indeed sometimes even the term itself, is a contentious
matter. For example, in his book on the history of electronic music Joel Chadabe instead used the
term “Electric Music” [28], and ‘electroacoustic’ is also occasionally used as an “overall term” [38,
1]. Despite these variances in terminology, electronic music will be employed here as it is more
common than terms such as electric music, while electroacoustic is also troublesome as it is “often
employed in a more constrained sense of highly designed electronic art music for close listening
with an emphasis on space and timbre” [38, 1]. Similarly, in this thesis, electronic music will be
defined as music created through electronic means and using techniques and ideas first explored in
musique concrète or elektronische Musik. This is not an attempt to provide a definitive definition, but merely a means to frame both the music discussed in this chapter and the research presented
later in this thesis.
Categorisation within electronic music has proven equally, if not even more, contentious. For
example, a division is often made between ‘live-electronic music’ and ‘tape music.’ In defining
‘live-electronic music,’ Paul Sanden stated that it was “performed music involving the electronic
manipulation of acoustic sound and/or the electronic real-time production of sound,” and where
“electronic manipulation must be actively generated for it to be live” [149, 88]. In this context,
‘Tape Music’ is thus composed works created in the studio and realised on a fixed medium. A
playback device and loudspeaker array then reproduces these fixed pieces in concert, with each
performance differing primarily because of the placement of loudspeakers and the acoustics of the
room. However, rather than being a clear distinction between ‘live’ and ‘fixed,’ there is, in fact,
a continuum of possibilities between these two extremes. One can infer this fact from Gordon
Mumma’s inclusion of fixed media works under the heading of ‘live electronic music’ in his sem-
inal chapter on the topic [26]. More recently Collins et al. have stated this fact outright, noting
that “live control can run on a continuum from a single press of a button to initiate playback, to
in-the-moment fine control of all aspects of the music, at a human gestural rate” [38, 180].
Amongst the most contentious methods of categorisation is the oft-cited division of electronic
music into two broad traditions: ‘art’ and ‘popular.’ At the most fundamental level, this distinction
is made on the basis of popularity. In his discussion of the topic, Landy attributes this difference in
popularity to what he sees as a “separation” of ‘art’ music from everyday life. In contrast, he states
that ‘popular’ electronic music continues to relate to everyday life through its “dance culture and
associated rituals and... through its lyrics” [102, 2]. Morton Subotnick also alludes to this ‘divide’
during the documentary I Dream of Wires.1 However, while Landy posits reasons for this divide,
Subotnick differentiates between those artists who make their money from composing electronic
1http://www.idreamofwires.org/
music, and what he terms “avant-garde” electronic music composers who must make their money
through other means. A further method of differentiating between these traditions is based on the observation that ‘popular’ electronic music is often “driven by a beat” [102, 146].
However, in ‘art’ music “the introduction of rhythmic (or metrical) elements” has often “caused a
certain unease, even disquiet” [53], an attitude that John King described as “the fear of the funk”
[119, 3]. Ben Neill even went to so far as to state that “the division between high-art electronic
music and pop electronic music is best defined in terms of rhythmic content” [119].
Although there are obviously differences at the extremes of this spectrum between ‘art’ and
‘popular’ traditions, there is, as Landy notes, an increasingly “fuzzy” divide [102, 73] between
both traditions, and the research presented herein is, primarily, concerned with this intersection.
Much has been written about this divide, leading Joseph Hyde to state that the “division is no
longer meaningful” [88]. This position is also evident in the broad range of topics covered in Audio
Culture [40, xvi]. In the foreword the editors Cox and Warner state that the music discussed “cuts
across classical music, jazz, rock, reggae, and dance music... is resolutely avant-gardist in character
and all but ignores the more mainstream inhabitations of these genres” [40, xvi]. Nicholas Collins
has echoed this, stating that “cross-fertilisation between these artificially separated domains is con-
stant and only coarse categorisation divides them. On the fringes supposed popular culture is really
a fertile ground of creative one-upmanship that often surpasses the conceptions of conservatoire
composers in experimentation” [34, 142]. Trevor Wishart has expressed a similar, although more
tempered, position. In his opinion “artists such as Squarepusher, Aphex Twin or Richard Devine
help blur the boundaries between art-music and popular entertainment, re-establishing a link lost
towards the end of the nineteenth century” [187, 137], and that in this context “there is no longer
any absolute divide between ‘high art’ and popular culture” [187, 138]. Landy, on the other hand,
does not explicitly state that the divide is meaningless or non-existent, instead positing that this
‘divide’ is ever decreasing [102], a position that Simon Emmerson (e.g. [52] and [53]) has also
expressed.
There are numerous reasons for holding these standpoints. Firstly, this ‘divide’ has, to an
extent, always been ‘fuzzy.’ Pioneering figures of ‘art’ electronic music foreshadowed ‘popular’
electronic music genres, and artists within ‘popular’ traditions incorporated techniques and equip-
ment from ‘art’ electronic music as soon as the means became available to them. It is also true that
there are an increasing number of artists from both traditions who are incorporating elements or
ideas developed in the other. Yet more are creating music that shows clear links to both traditions,
but exists independently of either. A further issue with dividing electronic music into categories
of ‘art’ and ‘popular’ is where one would even draw the line. For example, using any of the
characteristics of each tradition given above would result in a categorisation of ‘popular’ music
that, as Hyde notes, “must embrace everything from Britney Spears2 to Squarepusher”3 [88], a
discrepancy that is surely greater than that between the music of Squarepusher and that of many
‘art’ composers. The aforementioned differentiation according to the use of a rhythmic pulse is
also increasingly questionable, with numerous ‘art’ composers now incorporating rhythmic/metric
elements. Within ‘popular’ electronic music, entire genres have emerged that make no
use of rhythmic/metric elements. And, as Neill notes, while “many art-music composers scoff at
the idea of using regular 4/4 rhythm patterns in their works,” “a new generation of composers is
working within that structure to create what is essentially the new art music” [119].
The concept of a ‘divide’ between ‘art’ and ‘popular’ electronic music has not been introduced
here to reinforce or support the idea of a clear division, nor to make value judgements about music
that could be neatly positioned into either category. Instead, the idea has been included because
this ‘fuzzy’ divide at the intersection of ‘art’ and ‘popular’ electronic music, and in particular the
use of rhythmic patterns in electronic music, has significantly influenced the development of both
the AAIM system and the pieces described in Chapter 6. As such, what follows is a brief overview
of the development of electronic music, focussing in particular on music that has inspired the
development of AAIM. In addition, the discussion will also focus on how, in this author’s view,
2. https://www.youtube.com/watch?v=C-u5WLJ9Yk4
3. https://www.youtube.com/watch?v=MWCSw_cNxKc
these examples are not only indicative of the ‘fuzzy’ divide and cross-fertilisation between ‘art’
and ‘popular’ traditions of electronic music, but also demonstrate the fertile creative grounds that
this ‘fuzzy’ divide affords.
2.1.1 Musique Concrète
Musique concrète is a genre of music that originated in Paris, France, during the early 1940’s, when
Pierre Schaeffer began experimenting with radio equipment at Radiodiffusion-Télévision Française
to create music. Schaeffer coined the term musique concrète “to demonstrate that this new music
started from the concrete sound material, from heard sound, and then sought to abstract musical
values from it” [29]. Using recordings of a variety of ‘real-world’ sounds, such as trains and
saucepans, Schaeffer created a series of collages named Cinq études de bruits, which he broadcast on
the radio on October 5th 1948 [82, 87].
Although Schaeffer coined the term musique concrète, in 1944 the Egyptian composer Halim
El-Dabh had, independent of Schaeffer’s work, composed and presented his own piece of ‘musique
concrète,’ Ta’abir Al-Zaar [38, 17]. French-Canadian composer Francis Dhomont was also explor-
ing similar techniques independently in the south of France at the same time [38, 213]. The simul-
taneous, and independent, development of techniques and concepts in different parts of the world
has occurred throughout the history of electronic music, and as such only a very brief overview
of relevant terms will be covered here. As word of Schaeffer’s music began to spread numerous
composers visited his studios in the RTF to learn his techniques, and soon electronic music studios
were being set up all over the world.
In the strictest sense musique concrète implies that not only are recordings of real-world sounds
used as musical materials, but also that the music is itself abstracted from the sounds: “the material
preceded the structure” [82, 88]. However, this is by no means true of all music created using
recorded sound. As such, references herein to music following the tradition of musique concrète
will, unless otherwise stated, imply the use of real-world sounds.
2.1.2 Elektronische Musik
In the early 1950’s the Studio für elektronische Musik des Westdeutschen Rundfunks opened in
Cologne, Germany. As with Schaeffer and others at the RTF studio, the goal of those working in
the WDR was to create new music through the use of electrical equipment. However, the ethos of
the WDR studio differed drastically from that of the RTF on two major points. Firstly, rather than
abstract music from sounds, the WDR approach was to begin with an abstract musical plan, i.e.
score, and then to use sounds to fit this plan. Secondly, to create music according to these plans,
the composer had to use entirely new sounds generated using oscillators and noise generators.
Although some composers had already used electronic instruments such as the theremin and the
ondes Martenot in compositions [15], the WDR studio was unique in its, initially, strict “serial
approach to composing using only electronic signals”4 [82, 92].
Karlheinz Stockhausen, who had previously worked in the RTF studios with Schaeffer, was
amongst the most renowned of the composers to work in the WDR studios and in 1954 composed
Studie I5 and Studie II.6 By the following year Stockhausen had already advanced beyond the
strict limits of elektronische Musik, composing the piece Gesang der Jünglinge7 using both elec-
tronically generated sounds and recordings of a choirboy singing [174]. This combination of both
electronic and real-world sounds would become the standard approach for many of the electronic
music studios and composers that followed.
2.1.3 BBC Radiophonic Workshop, and Delia Derbyshire
One such studio was the BBC Radiophonic Workshop, which opened in the late 1950’s. Initially
set up to provide sound effects for radio shows [122, 5], the BBC Radiophonic Workshop was
4. Emphasis added.
5. https://www.youtube.com/watch?v=H4QaMwpVXVM
6. https://www.youtube.com/watch?v=bwj6ZptPnDo
7. https://www.youtube.com/watch?v=nffOJXcJCDg
home to a number of Britain’s pioneering electronic music composers, including John Baker [122,
111] and Daphne Oram, who founded the workshop in 1958 [87, 49]. Delia Derbyshire is perhaps
the most famous and revered of these British electronic music pioneers. One of Derbyshire’s
earliest, and most famous, creations in the studio was the theme tune for the TV show Doctor
Who in 1963.8 Although this piece is perhaps Derbyshire’s most famous work, many of her other
compositions were equally, if not more, innovative. One example is Pot au Feu,9 which reviewers
have described as “three minutes and nineteen seconds of paranoia, virtually a rave track circa
1991”10 and “angular robot jazz crammed with incident.”11
In 2008 over 250 previously unheard tapes of Derbyshire’s work were discovered, including
an initial recording for the composition Dance from Noah, which former member of the group
Orbital, Paul Hartnoll, called “quite amazing,” and said it “could be coming out next week on Warp
Records.”12 The influence of the work of Derbyshire and her Radiophonic Workshop colleagues is
apparent in the fact that in 2003 Rephlex, a record label set up by Aphex Twin, released a limited
edition compilation of their work, containing 10 of Derbyshire’s pieces.13
2.1.4 Modular Synthesisers, and Morton Subotnick
In 1961, Morton Subotnick and a number of other composers pooled their equipment together and
founded the San Francisco Tape Music Center (SFTMC).14 Initially, the group worked in a similar
way to the musique concrete and elektronische Musik composers, splicing and manipulating tape
of recorded sounds. However, Subotnick and Ramon Sender commissioned engineer Don Buchla
to develop a synthesiser, capable of generating electronic sounds in real-time, and in 1963 Buchla
8. https://www.youtube.com/watch?v=CM8uBGANASc
9. https://www.youtube.com/watch?v=jpdiMcEeTJA
10. http://www.elidor.freeserve.co.uk/radiophonic.htm
11. http://www.bbc.co.uk/music/reviews/9xn8
12. http://news.bbc.co.uk/1/hi/entertainment/7512072.stm
13. http://www.discogs.com/BBC-Radiophonic-Workshop-Music-From-The-BBC-Radiophonic-Workshop/release/212869
14. http://www.mortonsubotnick.com/timeline.html
built the Buchla Series 100 modular synthesiser.15
Originally the Buchla 100 was intended purely as a studio device. However, both Subotnick
and Sender envisioned an instrument that they could use during live performance. In response to
this request, Buchla designed a way to “program” a pattern of control voltages, which could then
be used to trigger sounds in sequence, thus creating the first sequencer available on a commercial
synthesiser [82, 222]. The Buchla’s innovative sequencer, which consisted of three separate 16-step
sequencers with a further pulse generator module available to control the three sequencers,
allowed the creation of “very complex rhythm over a long period of time, for example, by running
five stages against thirteen” [82, 223].
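The polymetric effect described here can be sketched in a few lines of Python; this is an illustrative simplification (the function name and step representation are hypothetical, not a model of the Buchla circuitry). Two looping stage-patterns of co-prime lengths only realign after the least common multiple of their lengths, so five stages against thirteen yields a composite rhythm 65 steps long:

```python
import math

def polymetric_cycle(len_a: int, len_b: int) -> int:
    """Steps until two looping sequencers of different lengths realign."""
    return math.lcm(len_a, len_b)

# Running five stages against thirteen, as on the Buchla 100: each loop is
# very short, but the combined pattern only repeats after 65 steps.
print(polymetric_cycle(5, 13))  # 65

# At each step the composite rhythm pairs one stage from each sequencer;
# no pairing recurs within a full cycle, hence the perceived complexity.
combined = [(step % 5, step % 13) for step in range(polymetric_cycle(5, 13))]
assert len(set(combined)) == 65
```

Because the stage counts are co-prime, every possible pairing of stages occurs exactly once per cycle, which is why such short loops can sustain long stretches of non-repeating rhythm.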
Subotnick, in particular, became a master of the instrument, and in 196716 Nonesuch Records
commissioned him to create the first entirely synthesised album: Silver Apples of the Moon.17
Throughout the record, the rhythmic impetus of the Buchla sequencer can be heard, as the music
moves from strongly rhythmical passages to moments of rhythmic ambiguity. The possibilities
for timbral modification of sounds afforded by the Buchla synth are also apparent throughout the
recording. The following year Subotnick was commissioned to create a second entirely synthe-
sised record by Nonesuch Records. The Wild Bull,18 released in 1968, demonstrates many of the
same characteristics as Silver Apples of the Moon, with constant timbral changes and fluctuations
between passages of strong rhythmic pulse and rhythmic freedom. Of particular interest is how
many passages of The Wild Bull (for example the first 3½ minutes of part B) preempt the frenetic
rhythms of artists such as Venetian Snares19 or Aphex Twin.20
The Buchla was by no means the only modular synthesiser. At approximately the same time
Robert Moog was also developing a modular synthesiser, which was initially made famous by
Wendy Carlos’s 1968 record Switched on Bach [82, 218]. The same year Moog released the
Minimoog, “a simple, compact monophonic synthesiser designed for live performance situations”
[82, 220].
15. http://www.buchla.com/history/
16. http://www.mortonsubotnick.com/timeline.html
17. https://www.youtube.com/watch?v=9HoljsO22qA
18. https://www.youtube.com/watch?v=iLnPWzDSfNs; https://www.youtube.com/watch?v=BYOZ2Ce-iAs
19. https://www.youtube.com/watch?v=xbV3OJSCxNc
20. https://www.youtube.com/watch?v=yYV2fKdpjYs
The success of the Moog synthesisers encouraged a variety of different manufacturers of
commercial synthesisers, and as the number of synthesiser manufacturers grew, they also became
more affordable. In turn, a far broader demographic of musicians began using them as they mi-
grated “from institutional electronic music studios into the homes of composers and musicians”
[82, 221]. For example, Pink Floyd used the EMS Synthi AKS and its sequencer extensively on
their 1973 track On the Run.21 [109]
2.1.5 Feedback Delay, Live Sampling, Terry Riley, Kaffe Matthews
In the early 1960’s Terry Riley was also working in San Francisco creating pieces of musique
concrete at the SFTMC, however, in 1962 he moved to Paris [14, 215]. While there he was ap-
proached to create music for a play named The Gift.22 The jazz trumpet player Chet Baker had
only just arrived in Paris [14, 216] so Riley asked him to contribute as well. While working on the
piece one of the recording engineers designed the first tape delay with feedback, under instructions
from Riley, who later dubbed it the “Time-Lag Accumulator” [134, 105], and the resulting
piece relied heavily upon it. Chet Baker’s band performed a rendition of the Miles Davis piece
So What. Riley then spliced and rearranged segments of the recordings [134, 106] and used the
feedback control of the “Accumulator” to create “repetitive textures over the modal harmonies”
[76, 165].
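The core behaviour of a feedback delay, whether built from tape machines or digitally, can be sketched as follows. This is a minimal digital analogue for illustration only, not a model of Riley’s actual two-machine tape setup, and the function name is hypothetical:

```python
def feedback_delay(signal, delay_samples, feedback=0.5):
    """Mix each input sample with a delayed, attenuated copy of the
    output, so every event recurs and gradually fades -- the basic
    behaviour of a tape loop fed back into itself."""
    out = []
    for i, x in enumerate(signal):
        delayed = out[i - delay_samples] if i >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out

# A single impulse becomes a train of decaying echoes:
echoes = feedback_delay([1.0] + [0.0] * 9, delay_samples=3, feedback=0.5)
print(echoes)  # [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0, 0.0, 0.125]
```

The feedback coefficient plays the role of the tape machine’s record level: values below 1.0 give the decaying “repetitive textures” described above, while values at or above 1.0 cause the loop to build up rather than fade.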
Riley and others continued to explore the possibilities of feedback delay in the following years.
Feedback delay also became one of the defining sounds of “Dub” music by artists such as King
Tubby (aka Osbourne Ruddock).23 Starting from original multitrack recordings of Reggae songs,
Tubby would use a mixing board to push bass and drums to the fore, relegating the “vocals and
21. https://www.youtube.com/watch?v=VouHPeO4Gls
22. http://mikailgraham.com/terryriley/audio.htm; https://www.youtube.com/watch?v=lMcmMhfYkNI
23. https://www.youtube.com/watch?v=ZvYSYOKFCbk
other instrumentation a secondary role, being introduced and dropped out at [Tubby’s] discretion”
[131, 64]. Tubby then passed the resulting sounds through a variety of homemade, or modified,
delay and reverb units [131, 65] creating Dub’s unique sound.
“The Time-Lag Accumulator” foreshadowed the development of Live Sampling, i.e. the prac-
tice of recording and reusing samples during live performance. English band Radiohead, for ex-
ample, use live sampling extensively during live performances of the piece Everything in Its Right
Place,24 from their 2000 album Kid A. Using a Korg Kaoss Pad, band member Jonny Greenwood
not only samples and loops recordings of lead singer Thom Yorke’s voice, but also reverses, pitch
shifts, repeats, and re-arranges segments live.
Many artists have expanded the practice of Live Sampling to form the basis of entire pieces.
One obvious example is the use of guitar looper pedals, which have been used by virtuoso instru-
mentalists such as Michael Manring25 and Pierre Bensusan26 to allow them to “add different layers
and varieties of sounds,” and “appear like... two or three musicians” [135]. A more sophisticated
approach to Live Sampling is apparent in works such as Kaffe Matthews’ Eb & Flo album,27 where
all of the sounds used are “made by live sampling and processing a theremin, the room and its
feedback in performance” [113]. As well as demonstrating a clear link to musique concrete, in its
use of real-world sounds, and minimalism, in its use of materials and loops, Eb & Flo also often
uses elements of repetition and rhythm, although Matthews states she “never attempted to really
make rhythmic music” [144, 37]. One example is the piece Boy and Dog, which begins with a
repeated rhythmic motif before transitioning towards a more rhythmically free sound.
24. https://www.youtube.com/watch?v=hvMql9XgIg0
25. https://youtu.be/ABcbhA8gdgY?t=32
26. http://www.dailymotion.com/video/x2k876_pierre-bensusan-guitare-celtique-co_news
27. https://kaffematthews.bandcamp.com/album/cd-eb-flo
2.1.6 FM Synthesis, and Paul Lansky
While modular synthesisers had afforded musicians interested in electronic music a far more ac-
cessible option than the traditional electronic music studio route, they were still expensive and
tuning was often unreliable. However, decreasing costs and size of digital computers promised to
provide an alternative option. In the early 1970’s John Chowning developed Frequency Modula-
tion (FM) Synthesis [30] [31], a method for synthesising complex sounds by continuously varying
the frequency of one oscillator using the output of another. While the mathematics of FM synthesis
are complex, the algorithm itself is quite simple and facilitated the creation of long chains of con-
nected oscillators. Chowning patented the design in 1974;28 Yamaha later acquired the rights and
released the DX synthesiser series in 1983, which would go on to become one of the best-selling
synthesisers [82, 334].
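The basic two-operator configuration can be sketched directly from the FM equation, y(t) = A sin(2πfct + I sin(2πfmt)), where one oscillator (the modulator) varies the phase of another (the carrier). The following is an illustrative rendering with arbitrary example frequencies, not an account of any particular Yamaha implementation:

```python
import math

def fm_sample(t, carrier_hz=440.0, mod_hz=110.0, index=2.0, amp=1.0):
    """One sample of two-operator FM: the modulator's output is added to
    the carrier's instantaneous phase, producing sidebands spaced at
    multiples of the modulating frequency around the carrier."""
    return amp * math.sin(2 * math.pi * carrier_hz * t
                          + index * math.sin(2 * math.pi * mod_hz * t))

# Render one second at 44.1 kHz. Raising the modulation index spreads
# energy into more sidebands (a brighter timbre) using only two
# oscillators -- the simplicity that made chained operators practical.
sr = 44100
samples = [fm_sample(n / sr) for n in range(sr)]
assert all(-1.0 <= s <= 1.0 for s in samples)
```

Varying `index` over time is what gives FM its characteristic evolving timbres: a single control sweeps the sound from a pure sine to a dense, bright spectrum.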
While the DX synthesisers made sound synthesis more accessible than ever before, they also
provided a whole new sound world for composers to explore. Chowning’s own 1977 composition
Stria29 provides one such example. In 1973, Paul Lansky used Chowning’s FM algorithm to
create his piece Mild und Leise.30 Although quite distinct from the music already discussed here,
it also demonstrates the ‘fuzzy’ divide, as a short segment of the piece was sampled and used as
a loop by Radiohead on their track Idioteque,31 which also samples Short Piece by Arthur Kreger
[108]. Lansky himself has said he really liked how the band used it, and called it “imaginative and
inventive” [103].
In 1985 Lansky composed Idle Chatter,32 using re-synthesised recordings of a human voice.
Idle Chatter is notable for its strongly rhythmic and constantly morphing foreground ‘babble,’
set against a much slower paced simple harmonic background ‘choir.’ In addition to echoing
28. http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=4018121.PN.&OS=PN/4018121&RS=PN/4018121
29. https://www.youtube.com/watch?v=988jPjs1gao
30. https://www.youtube.com/watch?v=4ZDxMb4nago
31. https://www.youtube.com/watch?v=DNqv3nHyteM
32. http://paul.mycpanel.princeton.edu/mymp3.html
the slowly morphing harmonies of Minimalist composers, this also provides a link to a similar
approach often taken by artists such as Aphex Twin. For example, in the Aphex Twin piece Vord-
hosbn33 (2001), simple melodic lines run at 1/2 or 1/4 tempo in comparison to the frantically
changing percussive foreground.
2.1.7 Sampling, Breakbeats, John Oswald, Plunderphonics
Having first been explored in musique concrete, and then expanded to a live performance practice
by the “Time-Lag Accumulator,” artists such as DJ Kool Herc further advanced the practice of
using samples, i.e. sound recordings, in the mid to late 70’s. Using the typical DJ set-up, two
turntables and a mixer, Herc was able to loop instrumental ‘breaks’ in songs by using two copies
of the same record [173, 60] [156, 31]. GrandWizzard Theodore then advanced the practice even
further during the mid-1970’s when he invented scratching [156, 27], i.e. moving the record back
and forth by hand as it is playing. By the end of the 1970’s the “isolation of a break, along with
other effects (such as “scratching” and “cutting,” and so on), began to be considered a musical
form unto itself” [156, 33].
The practice continued to expand, developing into not only hip-hop but also influencing the
development of much subsequent electronic dance music. For example, Grandmaster Flash’s
virtuosic piece The Adventures Of Grandmaster Flash On The Wheels Of Steel34 (1982) is an early
example of the mash-up genre. One pertinent example is the Amen Break.35 Originally a brief
drum solo during a cover of the gospel staple Amen Brother by the band The Winstons [157, 26]
(1969), the ‘break’ was used in hip-hop tracks in the mid-1980’s [24, 192]. LTJ Bukem then used
the break during his 1993 piece Music,36 instigating “the use of what may be the most sampled
breakbeat ever” [157, 26]. The Amen Break has subsequently been sampled and used in “thousands
33. https://www.youtube.com/watch?v=-iEl7OKrLGI
34. https://www.youtube.com/watch?v=8Rp1oINrEuU
35. https://www.youtube.com/watch?v=qwQLk7NcpO4
36. https://www.youtube.com/watch?v=Q5jZXE3bNPg
of musical works”, and “effectively came to define” [24, 192] the musical styles of Drum and Bass
and Jungle.
A related, but distinct, development from sampling was the practice of plunderphonics, a term
first coined by John Oswald in a 1985 paper [125]. Of particular interest to Oswald was the issue
of copyright of recorded sound when used as a sound source. Oswald was by no means the first
experimenter in plunderphonics; he himself uses Jim Tenney’s Collage 1,37 from 1961, as an ex-
ample of a previous work in the style. However, the term he coined has gone on to be synonymous
with the style. Oswald’s approach primarily involves using musique concrete techniques on sound
sources derived from famous popular music artists, often with a somewhat ‘tongue in cheek’ atti-
tude [81, 21]. For example, his piece Dab38 (1991) uses Michael Jackson’s song Bad as its sole
sound source. Oswald’s piece Pretender is even more overt in its parody, merely consisting of the
playback of Dolly Parton’s The Great Pretender, with the playback speed transitioning from very
fast to very slow over the course of the piece. In contrast, Grayfolded39 (1991), which uses samples
of the band The Grateful Dead performing their song Dark Star on over 100 different occasions
sliced and mixed to form over 100 minutes of music [11], shows a great deal of sensitivity to the
source material.
2.1.8 Techno and The “New Generation”
While DJ Kool Herc and others were developing the art of sampling into what would eventually
become hip-hop, and then sample/‘break’ based EDM, German band Kraftwerk were using “syn-
thesizer pulses and programmed beats” to create “pristine, post-human” music that would eventu-
ally develop into techno [142, 3]. The band set up their own Kling Klang studio in 1970, and using
a combination of both homemade and, generally cheap, commercial equipment [86] released many
37. https://www.youtube.com/watch?v=xC7sdH2XvbU
38. https://www.youtube.com/watch?v=8xIWLG-F0Ag
39. https://www.youtube.com/watch?v=er0qOWqxUkg; http://www.pfony.com/grayfolded/
successful singles and albums in the mid and late 1970’s [142, 3]. Hit singles such as Autobahn,40
from 1974, brought Kraftwerk worldwide fame, and would ultimately prove a great influence on
techno pioneers such as Carl Craig, who said of Kraftwerk: “They were so stiff, they were funky”
[142, 4].
The second significant influence on the early development of techno was the synthesiser funk
of bands such as Parliament. Another techno pioneer, Juan Atkins, has stated that Parliament’s
1978 hit Flashlight41 was the first primarily synthesised record he had ever heard [142, 5], while
Derrick May described techno as “George Clinton [band leader of Parliament and many other funk
bands] and Kraftwerk stuck in an elevator with nothing but a sequencer to keep them occupied”
[142, 4]. The music Atkins, May, Craig, and Co. developed would eventually split into a myriad
of genres, and the previously mentioned “dancing culture” connected to the music would develop
into late night illegal “raves.” But even at this formative stage in the development of EDM, the
thought behind the music expanded far beyond simply repetitive beats for people to dance to. As
May stated: “We used to sit back and philosophize on what these people thought about when they
made their music, and how they felt the next phase of the music would go... We never just took it
as entertainment, we took it as a serious philosophy” [142, 5].
By the late 1980’s/early 1990’s the culture of illegal raves had largely been suppressed [142,
118]. But EDM continued to gain popularity as it fractured into different genres and sub-genres, a
process that Kembrew McLeod noted in 2001 was happening on a “quite literally, monthly basis”
[115, 60], and which almost certainly has not slowed down since then. Within this ever fracturing
landscape, the 1990’s also saw the rise of Neill’s “new generation of composers” creating “the
new art music” [119]. The previously mentioned English record label Warp were amongst the
earliest promoters of this “new art music,” following a handful of early releases with a series
of compilations under the title Artificial Intelligence. The first Artificial Intelligence release in
1992 makes clear the intention of the music and its philosophical links to the early 1980’s techno
40. https://www.youtube.com/watch?v=iukUMRlaBBE
41. https://www.youtube.com/watch?v=T0ZGNGBNIL8
pioneers. The front cover states the music within is “Electronic listening music from Warp,” and
the sleevenote contains the following passage:
“Are you sitting comfortably? Artificial Intelligence is for long journeys,
quiet nights, and club drowsy dawns. Listen with an open mind.” [142, 192]
This clearly echoes the philosophical ideals of the early techno pioneers, but also resonates
with the emphasis placed on modes of listening by Milton Babbitt [7], Pierre Schaeffer [152], and
R. Murray Schafer [153]. While the name Artificial Intelligence was originally a “tongue-in-cheek
reference to the idea that this music is totally made by computers” [142, 195], it resulted in some
referring to the music as Intelligent Techno, or later Intelligent Dance Music (IDM). This naming
has caused quite a bit of controversy, with Aphex Twin noting that “It’s basically saying ‘this is
intelligent and everything else is STUPID’” [80]. Subsequently, terms such as Post-Techno [40,
414] [158, 171] and Post-Dance Music Electronica [53, 8] have also been used in an attempt to
circumvent this issue. In truth, however, the advancements since that first Artificial Intelligence
album coupled with the variety of styles and approaches between individual artists, as well as
within individual artists’ repertoires, are so vast that a single sub-genre name is not possible.
2.1.9 Aphex Twin
One of the most prominent and successful of this new generation is Richard D. James, a.k.a. Aphex
Twin (just one of “at least 12 pseudonyms” [176] he uses). James released his first record,
Analogue Bubblebath, in 1991 under the pseudonym AFX. That same year he also co-founded the
aforementioned Rephlex Records label, to promote what he referred to as Braindance, a none-too-
subtle dig at the term Intelligent Techno, and encompassing
the best elements of all genres, e.g. traditional, classical, electronic
music, popular, modern, industrial, ambient, hip-hop, electro, house,
techno, breakbeat, hardcore, ragga, garage, drum and bass, etc. [141],
an extensive list reflecting James’ own admission that he either simply likes something or he does
not [80]. James was included in the first Artificial Intelligence album, under the name The Dice Man
[142, 199], and released his Aphex Twin debut album, Selected Ambient
Works 85-92, later that year, containing pieces he had composed when he was in his early teens.
As the title suggests, Selected Ambient Works 85-92 consists primarily of Ambient pieces, demon-
strating James’ softer “synth-siphoned balm for the soul” [142, 200] style. The following year, and
using the pseudonym AFX, James released Analogue Bubblebath Vol 3.42 Containing 13 tracks
of mostly “undanceably angular anti-grooves, adorned with blurts of noise” [142, 202], Analogue
Bubblebath Vol 3 demonstrates James’ penchant for “clangorously percussive noise” [142, 201].
While James’ early works show a deep concern for sound, they do suffer from what Karlheinz
Stockhausen called “post-African repetitions” [40, 382]. However, by the time of the 1996 Aphex
Twin album, The Richard D. James Album, James had combined both his “synth-siphoned balm
for the soul” and “clangorously percussive noise” to blur the “lines between dance music, ambient
and avant-garde composition” [45]. While the use of meter remained the same and ‘post-African’
rhythms are still evident, most tracks featured continuously varying drum patterns using incredibly
fast repetitions of notes to create ‘impossible’ drum rolls.
Subsequent Aphex Twin releases continued in this same vein, demonstrating, if anything, an
even keener ear for sounds. The Come to Daddy EP,43 released in 1997, begins with the brutally
aggressive bassline and hyper-edited drumbeats of the title track, only to be followed by the rela-
tively calm and soft Flim44 (a piece that jazz trio The Bad Plus covered45). In 1999, and again as
Aphex Twin, James released Windowlicker.46 Again combining both the ‘soft’ and ‘hard’ sides of
James work, Windowlicker combines vocal samples that have been pitch shifted to create melodic
42. https://youtu.be/233okxv8Fp8?t=580
43. https://www.youtube.com/watch?v=h-9UvrLyj3k
44. https://www.youtube.com/watch?v=z56OiPR4r2s&list=PLrp4PrZ-WLrkFxEQMok0DuBuY0xTSeYn_
45. https://www.youtube.com/watch?v=sX_Iij8Eyts
46. https://www.youtube.com/watch?v=iZ8sZXFN6jc
lines and harmonies, a variety of unusual sound effects, micro-tuned synths, and veers between
relatively steady rhythms and “angular anti-grooves,” before finally climaxing with an aggressive
distorted bassline.
In 2001, with the Aphex Twin album Drukqs, James extended his style into even more distant
extremes. Spanning two CDs, Drukqs contains some very high tempo pieces such as Vord-
hosbn47 and Mt Saint Michel + Saint Michael’s Mount48 using even more ultra-precisely edited
‘breaks’ and micro-tuned synthesiser lines. Also of interest in both of these pieces is the contrast-
ing speeds at which the percussive and melodic lines run, echoing the similar approach in the work
of Morton Subotnick. Drukqs also contains pieces for prepared piano such as Jynweythek Ylow,49
and pieces such as Gwarek2,50 which is clearly linked to the musique concrete tradition.
2.1.10 Autechre
Also included on Warp’s first Artificial Intelligence (1992) album were English duo Rob Brown
and Sean Booth, a.k.a. Autechre. Both are known to not only “reject academic dogma” but also
“deplore the tyrannical homogeneity of mainstream dance music” [167], and as a band they have
developed a cold, calculated, and angular aesthetic, placing them perhaps even more firmly within
the ‘fuzzy’ divide than Richard James. The year after the release of Artificial Intelligence, Warp
released the duo’s first album, Incunabula.51 As with Richard James, Autechre’s aesthetic would
develop significantly in the following years. However, even at this stage of their development, the
duo’s taste for angular rhythms and stark contrasts in frequency ranges was already evident. The
result is a series of “unlovely but queerly compelling sound sculptures” [142, 196].
47. https://www.youtube.com/watch?v=-iEl7OKrLGI
48. https://www.youtube.com/watch?v=NBDAunPkSAY
49. https://www.youtube.com/watch?v=4KD8kWksOmc
50. https://www.youtube.com/watch?v=DbWGun76eRU&list=PL9dThdR6I6dNOKqVgqmmFWES85XKoGbIQ&index=20
51. https://www.youtube.com/watch?v=Df2fgXORCMU
The group’s following releases, Amber52 in 1994, Tri Repetae53 in 1995, and Chiastic Slide54 in
1997, all demonstrate a gradual progression towards a colder more angular sound. By the release
of their untitled 5th album (generally referred to as LP555) in 1998, Autechre had all but abandoned
the warmer aspects of their earlier sound. Instead LP5 consists of cold, calculated, “abstruse and
angular concatenations of sonic glyphs, blocs of distortion and mutilated sample tones” [142, 196].
The 2001 album Confield saw Autechre fully embrace their new cold aesthetic. The duo used
a variety of algorithms to generate and manipulate both rhythmic and melodic lines, giving the
album an even more angular feel than earlier work. Pieces such as Cfern56 still use fixed time
signatures, but the duo use algorithms to generate a plethora of different subdivisions and cross
rhythms within this structure [171].
Their following album, Draft 7.30 in 2003, again demonstrates the duo’s cold, angular, polyrhyth-
mic (almost to the point of arhythmic) aesthetic on tracks such as Xylin Room.57 In contrast to
Confield, only one piece on Draft 7.30 uses algorithms, the final track Reniform Puls,58 prompting
the duo to describe the album as “really straight” [171]. Subsequent releases such as Quaristice59
(2008), continued to veer between both the rhythmic and arhythmic, echoing Booth’s belief that a
purely “antirhythmic stance” was “as bad as people taking a purely rhythmic stance” [167]. This
progression culminated in the duo’s 2010 album Oversteps,60 which reintroduced much of the
warmer tones common on earlier albums such as Amber while still retaining the “fractured beats”
[93] and angular melodic contours of albums such as Confield.
52 https://www.youtube.com/watch?v=O6KtO3zanKA&index=5&list=PLLTeRHdm3hrkLKVi7Gj_OOak3UN3QG6eR
53 https://www.youtube.com/watch?v=bDhbxUJnpHM&list=PLjKnruHn_8TcX64rDUBlLxk-MPzYj-J-h&index=3
54 https://www.youtube.com/watch?v=JT0m_-al-B8
55 https://www.youtube.com/watch?v=gzfgUCOzUoY&list=PL20nqmDEG-Hjv6Su5L9Uj0k2G3WgZFVN5&index=2
56 https://www.youtube.com/watch?v=VT2JQiP99pI&list=PL2A5872E744FE7F05&index=2
57 https://www.youtube.com/watch?v=prCNa4H9pAA
58 https://www.youtube.com/watch?v=BQFQIy1su2g
59 https://www.youtube.com/watch?v=-udhRzzW5jE
60 https://www.youtube.com/watch?v=Xv69jVDyLhA&list=PL0B9377DAA1F6FAD2
2.1.11 Squarepusher
In 1996 Tom Jenkinson, a.k.a. Squarepusher, released his debut album Feed Me Weird Things on Rephlex Records. Both Feed Me Weird Things61 and 1997 follow-up Hard Normal Daddy62 fuse jazz funk melodies and grooves with “manic drum programming work” [34, 151] and a wide variety of synthesiser sounds. Also of note is how Jenkinson combines his own virtuosity on bass guitar with the “exaggerated virtuosity of the machine” [42, 2]. Much of Squarepusher’s work,
and that of similar artists, is created by cutting and pasting sounds on a “microrhythmic level”
[42, 171]. Nicholas Collins has noted how the resulting “frenetic pace and virtuoso rhythms” [34,
142] presents “a fascinating part of contemporary composition exploiting the potential of computer
music to exceed human limitations” [34, 142]. This “exaggerated virtuosity of the machine” is par-
ticularly evident on these early Squarepusher albums, as the combination of virtuosic instrumental
performance and jazz funk grooves allows the listener to feel as if they are actually listening to a
band performing, only for the drums to suddenly include an impossibly fast roll.
Big Loada, released in 1997, still retained the “manic drum programming work” and “virtuoso rhythms” of Squarepusher’s earlier records. However, it displays a far more ‘electronic’ sound than his earlier style. For example, the piece A Journey To Reedham63 replaces the warm synthesiser sounds and jazz funk melodies of early Squarepusher records with ‘8-bit’ computer game style melodies. Also notably different on Big Loada is the general increase in tempo from previous releases. Both of these aspects are also evident on Come On My Selector,64 where the tempo
is around 200bpm, and the distinction between synthesised and performed basslines is blurred
through the use of audio effects.
Squarepusher’s 1998 release Music Is Rotted One Note65 returned to the jazzier feel of his
61 https://www.youtube.com/watch?v=bpig1xHqUTQ
62 https://www.youtube.com/watch?v=PTBiTwS_ouQ
63 https://www.youtube.com/watch?v=B7s09dygL1w
64 https://www.youtube.com/watch?v=zypOT0RpUWQ
65 https://www.youtube.com/watch?v=p4itIBVhNbw
early releases. However, on Music Is Rotted One Note the memorable melodies of early works
were mostly absent, instead replaced with a ‘jam band’ feel reminiscent of the Miles Davis album
In A Silent Way66 or Bitches Brew.67 This change of approach was a conscious effort on Jenkinson’s
part, in an attempt to re-inject some of the “unpredictability of the randomness of improvising with
live instruments” [79] into his music. Budakhan Mindphone, released the following year, follows
a similar approach. However, by the 1999 release Selection Sixteen,68 Squarepusher had reintroduced
the use of sequencers and “manic drum programming” from earlier records.
2001 release Go Plastic69 and 2002’s Do You Know Squarepusher70 both extended the use
of high tempos and “exaggerated virtuosity of the machine,” with a more complex, often ‘thru-composed,’ approach to form. For example, the piece Go! Spastic71 transitions through multiple
contrasting sections with little repetition, and contains many rhythmic values “within the audio
rate” [34, 151]. The resulting music “flows with great momentum, such that one can literally be
left behind by the overload of information, only for understanding to catch up on some later cue”
[34, 151]. Commenting on a live performance from this time, Ben Neill said that
Squarepusher presented 1½ hours of music in which long stretches of
highly processed digital noise and textures that would rival any art-
music composer’s sonic palette alternated with completely frenzied
hyperspeed beats that exceeded 200 beats per minute - hardly dance
music as anyone on this planet would recognize it [119, 4]
and that he felt the experience was similar to that of “the stunned audience in the Philips pavilion
hearing the [Edgar Varese composition] Poeme Electronique for the first time” [119, 4].
66 https://www.youtube.com/watch?v=5lAYwYtvoX8
67 https://www.youtube.com/watch?v=ycSAGSO1AI0&list=PL94gOvpr5yt0fSZzCnnYWwUFF3evnG4x4
68 https://www.youtube.com/watch?v=i9FESkbuANI
69 https://www.youtube.com/watch?v=SqjZUyUtqYs
70 https://www.youtube.com/watch?v=rYS8edMRgBg
71 https://www.youtube.com/watch?v=6ddzcYjgNZY
2.1.12 Matmos and Bjork
American duo Matmos, Martin Schmidt and Drew Daniel, also specialise in creating music that
does not fit neatly into either ‘art’ or ‘popular’ traditions. The pair initially bonded over a mutual
fondness for Pierre Henry’s Variations pour une Porte et un Soupir72 [46], and have since created
much of their music by combining the use of everyday sounds, a la musique concrete, with an EDM
aesthetic. Their self-titled debut album, released in 1997, uses a wide variety of unusual sound
sources ranging from the “amplified crayfish nerve tissue” [46] on Verber: Amplified Synapse,73
to hair being brushed and breathing sounds on Schluss.74
The second Matmos album, Quasi-Objects (1998), follows in this style, sampling everything
from ‘whoopee cushions’ on Stupid Fambaloo75 to human laughter and whistling on The Purple
Island76 [148, 144]. Upon hearing Quasi-Objects, Icelandic singer Bjork is said to have been so
impressed that she ordered “25 copies to hand out to her friends” [46], and subsequently asked
them to remix her piece Alarm Call.77
Bjork was “flabbergasted” [46] by the remix, resulting in a further collaboration on Bjork’s
2001 album Vespertine. Again the duo used a broad range of sound sources including the sound
of “cards shuffling” [46] on Hidden Place78 and walking on rock salt for the rhythm on Aurora79
[46]. Matmos also collaborated with Bjork on her 2004 album Medulla, using “various sounds that
emanate from the human vocal tract as source objects from which to construct the sound of a drum
kit” [192, 53]. Pieces such as Where is the Line?80 use this approach to “forge together a hybrid
voice machine, a meta-voice, a dynamic vocal organ capable of producing virtually any utterance,
72 https://www.youtube.com/watch?v=aHgKZgNtsEk
73 https://www.youtube.com/watch?v=AO-x_F7UAlc
74 https://www.youtube.com/watch?v=GQZW8RM9pyY
75 https://www.youtube.com/watch?v=c69twp1-Re4
76 https://www.youtube.com/watch?v=X9hZiUeUjEY
77 https://www.youtube.com/watch?v=dBwfdUi-uiI
78 https://www.youtube.com/watch?v=cpaK4CUhxJo
79 https://www.youtube.com/watch?v=CZn-2lOe9Kk
80 https://www.youtube.com/watch?v=KqwJV449hnY
any timbre, any range, rhythm, melody, or harmony” [192, 95].
Although independent of Matmos, Bjork continues to explore the avant-garde of ‘popular’
music and EDM and embrace new technologies. Live performances of pieces from her 2007
album Volta, for example Declare Independence,81 make extensive use of the ReacTable82 and
touch screens. Bjork’s 2011 release Biophilia also explored touch screen technology, and was
released as both an album of songs, and an “innovative and hugely interactive” [89] iPad app,
allowing users to “manipulate musical elements” [114] for each piece on the album.
While working on Vespertine with Bjork, Matmos also released the album A Chance to Cut is
a Chance to Cure (2001). A Chance... continued the duo’s exploration of unusual sound sources,
focussing in particular on sounds created during various surgical operations. For example, the
opening piece Lipostudio... and so on83 uses samples of liposuction operations as percussive el-
ements. L.A.S.I.K84 uses the “high frequency hiss and drones of refractive eye surgery” [192,
55]. Memento Mori was composed “entirely from samples of human skull, goat spine and connec-
tive tissue, and artificial teeth” [192, 55]. The final track on the album, California Rhinoplasty,85
“wittily juxtaposes a nose flute against the disturbing but undeniably rhythmic sounds of crunch-
ing bone, cauterized tissue, buzzing saws, and pumping respirators” [132]. In total the album
includes recordings of “three nose jobs, a chin implant, two laser eye surgeries and two liposuc-
tions” [46]. Perhaps most notable about A Chance... however, is how the duo paradoxically use
these “macabre” and “somewhat disturbing” sound sources to create “upbeat and groovy” [192,
55] music.
81 https://www.youtube.com/watch?v=ENERd6rjFnw
82 http://www.reactable.com/
83 https://www.youtube.com/watch?v=OZQN4KR5j2s
84 https://www.youtube.com/watch?v=Rj_PPzM9yug
85 https://www.youtube.com/watch?v=yUJYO4N2mYk
2.1.13 Richard Devine
Richard Devine is an American electronic musician who describes his music as “intricately layered,
precisely organized, highly synthetic, rhythmically articulated, intensely programmed, controlled
yet chaotic, organic yet machine-like” [1, 4]. Devine has also acknowledged the influence of com-
posers such as Morton Subotnick, Karlheinz Stockhausen, and John Cage [91] in his work, and that
his “intentions are not to really make people dance, but to engage the listener in a surround-sound
experience of acrobatic sound textures” [1, 4]. Ben Neill has noted that this approach demon-
strates “that it is possible for rhythmic electronic-music composers to work with the most abstract
sound processes, experimental textures and techniques, as well as rhythmic materials that make
references to, but do not fit within, specific pre-existing dance music genres” [119, 5].
In 1999 Devine was included on a compilation of remixes of Warp artists, titled Warp 10+3
Remixes, on which he remixed the Aphex Twin piece Come To Daddy.86 The following year he
released his debut album Lipswitch87, again on Warp. Already in these early releases Devine’s aesthetic is apparent: an abundance of harsh, constantly morphing, and heavily processed electronic sounds and rhythmic patterns forms the basis of the pieces, while melodies and harmony are often non-existent. Devine’s 2001 album Alea Mapper further develops this approach on pieces such as Vecpr,88 and contrasts it on pieces such as Foci Duplication,89 the drones and timbral changes of which are reminiscent of Bernard Parmegiani’s Geologie Sonore,90 from the 1975 album De Natura Sonorum.
In addition to his own compositions, Devine is also heavily involved in the creation of new
sound design software for others to use. Beginning in 1999 he has created “highly complex and
intensely programmed sound banks” [91] for Native Instruments’ synthesisers. He has also worked
86 https://www.youtube.com/watch?v=2mj0F3qlq_E
87 https://www.youtube.com/watch?v=IQ7BfwTRmdY
88 https://www.youtube.com/watch?v=QpbCdJNLJek
89 https://www.youtube.com/watch?v=D6h81qk-uNo
90 https://www.youtube.com/watch?v=U3z90oPERQY
with companies such as Ableton, Izotope, and Propellerheads, and has released numerous sound
sample libraries.91
2.1.14 Summary
In this section I have discussed some of the most significant musical influences inspiring this
research. In particular, I have focussed on electronic music that spans the ‘fuzzy’ divide between art
and popular music. One matter of particular importance in relation to this thesis is the use of rhythm
in many of these examples. Neill’s claim that “the division between high-art electronic music and
pop electronic music is best defined in terms of rhythmic content” [119] may be a simplification
of a complex topic. However, it is true that “the introduction of rhythmic (or metrical) elements”
into electronic ‘art’ music has “caused a certain unease, even disquiet” [53]. This section has
presented the work of artists whose music demonstrates the potential for experimental and avant-
garde aesthetics to be incorporated into music with a strong reliance on rhythmic elements.
Evidence of this ever decreasing divide has also become more apparent in the work of com-
posers more affiliated with the traditions of electronic ‘art’ music. For example, in 1990 Mexican
composer Javier Alvarez released Mambo a la Braque92 as part of the Electro clips compilation by
the empreintes DIGITALes record label. Alvarez reassembled “short musical segments which come
from the well-known mambo Caballo Negro by composer Damaso Perez Prado” [2] into a cubist
sound-mosaic. Of note is not only the strong rhythmic impetus, but also the clearly distinguishable
melodic and harmonic elements of the original mambo. Canadian composer Robert Normandeau’s
2003 composition Puzzle93 is even more overtly rhythmical. The piece consists of “a succession of
small pieces made to fit with one another,” which can be “organized in any order”, and the sounds
used include “door sounds and vocal onomatopoeias” [124]. English composer Robert Ratcliffe
specialises in what he terms musical hybridisation [140], in which he combines elements of both
91 http://devinesound.net/
92 http://www.electrocd.com/en/oeuvres/select/?id=14261
93 http://www.electrocd.com/en/oeuvres/select/?id=14027
EDM and contemporary art music. For example both his 2007 composition Force-to-scale,94 for
wind trio, and 2008 composition Dust up Beats, for string quartet, derive their rhythmic materials
from Aphex Twin pieces, Cock/Ver 1095 and 54 Cymru Beats96 respectively [140].
2.2 Improvisation
Countless books, conferences, and journal articles have discussed the topic of musical improvi-
sation. And yet, a definition of ‘improvisation’ remains elusive. In discussing Javanese Gamelan
music97 Neil Sorrell notes that “because the word ‘improvisation’ has no absolute meaning it must
always be used with care and myriad qualifications” [164, 76]. Similarly, Paul Berliner notes that
“if one defines improvisation in such a way as to include [a particular musical practice], then,
presumably, [that practice] is improvisation” [13, 4]. In discussing improvisation with computer
systems, Roger Dean notes that:
Human-computer interaction obviously occurs whenever a computer
is used, but interactive music mediated by computers involves offer-
ing choices to the user... the point at which this interaction becomes
sufficient to permit the user to improvise cannot be readily defined
and probably depends on the individual as well as the software [43,
xvii].
Rather than attempt to give an absolute definition of improvisation, or champion one theory
on the topic, this section will focus on approaches to, and theories about, improvisation that have
informed the research presented in this paper.
94 https://myspace.com/visionfugitive/music/song/force-to-scale-live-excerpt-79438442-87484020
95 https://www.youtube.com/watch?v=-M8sIzLNVT0
96 https://www.youtube.com/watch?v=BGqm-5zJalQ
97 https://www.youtube.com/watch?v=16_nS0nkWUs
2.2.1 Improvisation and composition as two points on a continuum
Grove Music defines improvisation as “the creation of a musical work, or the final form of a
musical work, as it is being performed,” noting that “it may involve the work’s immediate com-
position by its performers, or the elaboration or adjustment of an existing framework, or anything
in between,” and also that “to some extent every performance involves elements of improvisation,
although its degree varies according to period and place, and to some extent every improvisation
rests on a series of conventions or implicit rules” [121]. In discussing improvisation, Bruno Nettl
hypothesised that “the juxtaposing of composition and improvisation as fundamentally different
processes is false, and that the two are instead part of the same idea” [120, 6]. Margaret Kartomi
has echoed this belief that composition and improvisation are just two points on a continuum, stat-
ing that “since improvisation and composition both involve workings and reworkings of creative
ideas, they are essentially part of the same process” [92, 55]. Similarly, Berliner notes that in an
attempt to distinguish composition and improvisation by drawing up a “precise list of their exclusive properties,” he soon realised that his “grasp on them was less strong than [he] had thought,” and that “in fact, their characteristics seemed to overlap hopelessly at the margins” [13, 4].
Dean has also echoed these thoughts, writing that “improvisation and composition in sound
form a continuous spectrum, with mutual interfaces and slippages no more obvious than those
between red and orange” [43, xii]. He continues to differentiate the two on the basis that “im-
provisation is usually distinguished as involving substantial fresh input to the work at the time of
each and every performance” [43, xii]. Numerous scholars have noted the idea that “improvisation
involves some degree of spontaneity in musical decision making during the act of performance”
[170, 71]. For example, in his study of jazz, Berliner wrote that musicians often “reserve the
term for real-time composing - instantaneous decision making in applying and altering musical
materials and conceiving new ideas” [13, 221].
To address the continuum between improvisation and composition, Dean proposes two broad
categories into which improvisation can be said to fall:
1. “Applied” improvisation - the improvisation that occurs when a musician/composer
performs on their instrument or computer system in the process of composing and
then arranging or modifying these results into a codified piece of music later.
2. “Pure” improvisation, “an analytical fiction,” occurs during public performances,
particularly that part which is least predetermined. [43, xix]
Dean also notes that “it can be argued that all composition is, or involves, applied improvisation”
[43, xix]. Or as Nettl states:
We must abandon the idea of improvisation as a process separate
from composition and adopt the view that all performers improvise
to some extent. What the pianist playing Bach and Beethoven does
with his models - the scores and the accumulated tradition of perfor-
mance practice - is only in degree, not in nature, different from what
the Indian playing an alap in Rag Yaman and the Persian singing the
Dastgah of Shur do with theirs [120, 19].
2.2.2 The use of ‘referents’ or ‘models’ in improvisation
Of particular interest in the context of this paper is the use of a “model” [120, 11] or “referent” [43,
xii] upon which the improviser(s) base their performance. In jazz, blues, rock, and similar Western
musical traditions, this often takes the form of a harmonic and rhythmic structure, for example a
12-bar blues or popular song, and the improviser then spontaneously creates new melodic lines that
fit within this framework. So common is the use of referents in jazz that the harmonic and rhythmic
structures of numerous songs/pieces have gained new life, often in a modified form, as the basis
for new compositions, i.e. new melodic lines. In some cases these new compositions using an
old structure have become so numerous and common that they become a ‘standard’ progression
(‘referent’), often divorced from the original melody. Perhaps the most prominent example of this
is the chord changes for the George Gershwin song I Got Rhythm,98 which has been used as the
basis for so many compositions99 that they are often simply known as Rhythm Changes [90, 415].
However, numerous variations on this idea have been explored in different musical cultures, and
the ‘referent’ could even be a “series of arhythmic pitch or sound structures” [43, xii]. So, even
though some Western improvisers have attempted “to avoid the use of any model” [120, 17], in
truth “total nonreferentiality is almost impossible” [43, xiv].
Although much Western Art music written after 1750 has required a strict “adherence to the
notes of a composition as written down by the composer” [83, 3], earlier music, such as that
of the Renaissance and Baroque periods, often relied heavily on improvisation by the performer.
While scores from these eras often looked simple, “what is seen on the printed page was often
merely an outline to be amplified in performance according to regularized patterns of improvised
embellishment” [83, 3]. This is, for example, evidenced by the widespread use of figured bass, a
technique whereby the composer would indicate the bass note, and the intervals to be played above
it, but leave the actual voicing of the chord to the performer’s discretion.
The use of melodic lines as a ‘referent’ for improvisation forms the basis of much traditional
Irish folk music. Instrumental musicians often learn a repertoire of relatively simple melodic
lines,100 and then in performance will embellish these melodies using ornamentations, both rhyth-
mic and melodic, varying accents, and microtonal changes [39] [177]. While instrumentalists often
learn music intended to accompany dancing, and therefore very strict rhythmically, singers often
sing unaccompanied in a much more rhythmically free manner, a style called sean nos,101 or ‘old
style,’ singing. Although rhythmically the instrumental and vocal styles are quite different, sean
nos singing also relies on the performer improvising ornamentation on extant melodic lines [177,
628].
98 https://www.youtube.com/watch?v=fSTkz1BvrXY
99 For example: Cotton Tail by Duke Ellington, Rhythm-a-Ning by Thelonious Monk, Anthropology by Charlie Parker, and Meet the Flintstones by Hoyt Curtin
100 https://www.youtube.com/watch?v=2Z_TheGgFWI
101 https://www.youtube.com/watch?v=Qvus7IFyFMA
In Indian classical music102 the use of ‘referents’ as the basis for improvisation takes many
guises. Some forms of improvisation, such as niraval, are similar to the above examples, in that
they are based on extant compositions [120, 7]. In contrast, others “such as alapana and tanam”
[120, 7], are based not on compositions, but on ragas and talas, that is “the basic concepts of
melodic and rhythmic organization in India” [120, 11]. Similarly, improvisation in Arabic musical
traditions is based on maqams,103 melodic modal configurations defining the pitches and patterns
that can be used, and how the piece should develop [175, 38] [120, 14]. The latter two examples
may seem to contradict the notion of the continuum between composition and improvisation. How-
ever, as Nettl points out, performers of these styles do in fact “take a time-oriented approach” to
these models, “considering them objects that have beginnings and endings, and that consist some-
how of sequences of thought” [120, 12]. As such we must view all models as a “series of obligatory
musical events which must be observed, either absolutely or with some sort of frequency, in order
that the model remain intact” [120, 12].
2.2.3 Improvisation as the re-use, variation, and manipulation of previously learnt ‘building
blocks’
The continuum is also evidenced by the fact that both these and other improvisation practices are
based on a collection of “building blocks,” just as ‘compositions’ in “folk and art music repertories”
are also built on “similar kinds of building blocks” [120, 14]. As Nettl observed in his study:
multiple performances by a single musician, using a single referent/model, tend to vary much less
than performances by different musicians [120, 18]. This fact demonstrates that each musician
establishes their own personal practice and repertoire of ‘building blocks’ upon which they base
their performances. Entirely new material, conceived in the moment and never before practiced in
any way, is, in fact, relatively rare. Instead, improvisational practices tend to be more commonly
about the use of previously practiced materials/ideas in a new way/setting - thus Dean’s assertion
102 https://youtu.be/42W5Q5kJmZk?t=59
103 https://www.youtube.com/watch?v=A3G7VW1RMks
that “pure” improvisation is an “analytical fiction” [43, xix].
As noted above, improvisation in jazz is usually, although by no means always, based on a
‘model’ or ‘referent’ such as the chords and rhythmic structure of a 12 bar blues or popular song.
However, despite the common misconception that “jazz musicians pick things out of the air” [13,
1], as trumpeter Wynton Marsalis104 stated:
Jazz is not just, “Well man, this what I feel like playing.” It’s a very
structured thing that comes down from a tradition and requires a lot
of thought and study [13, 63].
In truth, improvisation in jazz is built largely from an individual, although overlapping, repertoire
of ‘building blocks’ each musician learns or develops. As Lonnie Hillyer stated, saxophonist
Charlie Parker105 “intentionally played many of the same phrases over and over again in his solos,”
while Jimmy Robinson in turn says that “all the tenor [saxophone] players were influenced by
[John] Coltrane106 and Charlie Parker. You can hear them use their phrases all the time” [13, 103].
So important is the learning of these ‘building blocks’ in jazz tradition that Berliner dedicates an
entire chapter in his study of jazz to the topic [13, 95]. Berliner covers a wide range of approaches
taken by jazz musicians in learning these ‘building blocks,’ from learning and replicating complete
solos by exemplary musicians, to learning short phrases of solos, to analysing these short phrases
in order to learn, as trumpeter Art Farmer107 stated, “where their choice of notes came from” [13,
105]. While the memorisation of full solos is common, in truth the learning of short phrases is, in
fact, a more important factor within jazz improvisation, or as Berliner states:
Whereas analysis of complete solos teaches students about matters
of musical development and design, analysis of discrete patterns and
melodic cells elucidates the building blocks of improvisation and
104 https://www.youtube.com/watch?v=ZBJ-MmTA-eU
105 https://www.youtube.com/watch?v=UTORd2Y_X6U
106 https://www.youtube.com/watch?v=dOxXY1wAl9A
107 https://www.youtube.com/watch?v=6dKD1NMb0do
reveals commonalities among improvisations that are not neces-
sarily apparent in the context of larger performances108 [13, 101].
In jazz parlance, these short phrases and melodic cells are referred to as: “vocabulary, ideas, licks,
tricks, pet patterns, crips, cliches, and, in the most functional language, things you can do” [13,
102]. In addition to providing “ready made material that meets the demands of composing music in
performance,” and guaranteeing that you have “something to play when you can’t think of nothing
new” [13, 102], these ‘licks’ help to “establish the relationships of the improvisers to their larger
tradition” [13, 103]. As trumpeter Tommy Turrentine109 states “everybody uses them in jazz...
They all have their things they play. I can tell by the fourth bar of a solo who it is that’s playing”
[13, 103].
2.3 Algorithms and Computational Thinking in Music
This section provides a brief historical overview of the use of mathematical computations and
algorithms in the composition and performance of Western music. Chapter 3 contains a more
detailed discussion of algorithmic systems designed specifically for use in live performance.
An algorithm is defined as a “specific set of instructions for carrying out a procedure or solving
a problem, usually with the requirement that the procedure terminate at some point” [186]. The
growing ubiquity of digital computers over the past 50 or 60 years has meant that “there is no
longer any area of social life that has not been touched by algorithms” [185]. Peter Weibel has
called this the “Algorithmic Revolution” [185]. Austrian composer Karlheinz Essl has also dis-
cussed this “Algorithmic Revolution,” noting that it has “drastically changed not only the way in
which art is produced, but also the function and self-conception of its creators” [54, 108]. However, the use of algorithms is by no means a new phenomenon. Weibel himself notes how Piero della
Francesca’s De prospectiva pingendi (1474) and Albrecht Durer’s illustrated book Instructions on
108 Emphasis added.
109 https://www.youtube.com/watch?v=7-yvmTjXxBY
Measurement (1525), “were nothing other than instructions on how to produce paintings, sculp-
tures and architecture” [185]. Similarly, Essl also notes that the idea is much older. He cites the
German writer Johann Wolfgang von Goethe’s concept of the “primordial plant,” from which “one
could invent an infinite number of plants,” including “ones that despite their imaginary existence
could possibly be real” [54, 108].
The use of algorithms and mathematical computations has an equally rich history in Western
music traditions. David Cope has gone as far as to argue that “[he does] not believe that one can
compose without using at least some algorithms,” and that even those who “rely extensively on
intuition for their music actually use algorithms subconsciously” [117, 20]. Amongst the earliest
recorded examples is Micrologus, the “earliest comprehensive treatise on musical practice that
includes a discussion of both polyphonic music and plainchant” [128], written by Guido d’Arezzo
around 1026AD. Within this treatise, Guido suggests treating the parts of a melody as a poet would
the parts of a poem. Individual sounds are analogous to individual letters, groups of sounds to syllables, groups of ‘syllables’ form a ‘neume’ that is parallel to a ‘part’ or ‘foot,’ and groups of neumes are parallel to a line - at which point the singer would take a breath. Guido further
stressed the comparison, advocating the arrangement of neumes so that “their lengths are equal or
in simple ratios to each other, varying the number of units as the poet juxtaposes different feet in
a verse” [128]. Most important in the context of this thesis, however, is the “mechanical method
of melodic invention or improvisation” [128] that Guido then proposed, wherein “a pitch was
assigned to each vowel so the melody varied according to the vowels in the text” [47, 60]. This
simple formula can be seen in Figure 2.1, and facilitated the creation or improvisation of a melodic
line for any text.
Figure 2.1: Guido d’Arezzo’s mechanical method for composing/improvising melodic lines, by assigning each vowel to a pitch.
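Guido's mechanical method can be sketched as a simple lookup from vowels to pitches. The mapping below is hypothetical and purely illustrative: Guido's actual table offered a choice of pitches for each vowel, whereas this sketch fixes a single pitch per vowel.

```python
# A minimal sketch of Guido d'Arezzo's vowel-to-pitch method. The specific
# pitch assignments here are hypothetical, not Guido's actual table.
VOWEL_TO_PITCH = {"a": "C", "e": "D", "i": "E", "o": "F", "u": "G"}

def melody_from_text(text: str) -> list[str]:
    """Derive a melodic line from the vowels of a text, one pitch per vowel."""
    return [VOWEL_TO_PITCH[ch] for ch in text.lower() if ch in VOWEL_TO_PITCH]

print(melody_from_text("Ut queant laxis"))  # → ['G', 'G', 'D', 'C', 'C', 'E']
```

Because the melody is fully determined by the vowels, any text yields a singable line, which is precisely what made the method useful for improvisation.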
In the 14th and early 15th century, composers such as Guillaume de Machaut often used pe-
riodic repetitions of rhythmic patterns, but with different pitches, or repeated pitches, but with a
different rhythmic line, techniques that are now called isorhythm and isomelic respectively [12]. In
both cases, a clear, and simple, formula is apparent where variation is achieved by simply changing
one of the two key characteristics of a melodic line while leaving the other unchanged. Even more
complicated formulas are evident in the music of composers during the Renaissance (roughly 1300
- 1600) and Baroque periods (roughly 1600 - 1750). For example, the standard requirements for the exposition (the beginning) of a fugue are that the voices enter one at a time, each waiting until the preceding voice has finished its statement. All of the voices start by stating the subject,
the main theme, either in the original key, or transposed, and often slightly modified, to provide
the answer. Following the completion of the ‘subject,’ each voice then plays each of the counter
subjects, secondary themes, in order [178]. Much of the musical output of Johann Sebastian Bach
consists of “fugues, canons, and similar algorithmic music” [117, 10], and also often contains tech-
niques such as ‘inversion’ [104], i.e. playing melodies ‘upside down,’ and ‘retrograde’ [189], i.e.
playing melodies backwards.
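The techniques described above can be sketched computationally. The following fragment is a minimal illustration, using MIDI note numbers and beat durations as stand-ins for notation: isorhythm is modelled as a pitch cycle and a rhythm cycle of different lengths rotating against each other, alongside simple retrograde and inversion operations.

```python
from itertools import cycle, islice

def isorhythm(pitches, durations, length):
    """Pair a repeating pitch series with a repeating rhythm series of a
    different length (roughly, Machaut's 'color' against his 'talea')."""
    return list(islice(zip(cycle(pitches), cycle(durations)), length))

def retrograde(melody):
    """Play a melody backwards."""
    return melody[::-1]

def inversion(melody, axis):
    """Play a melody 'upside down,' reflecting each pitch about an axis."""
    return [axis - (p - axis) for p in melody]
```

Because the two cycles have different lengths, the pitch and rhythm series realign only after their least common multiple, yielding continual variation from fixed material.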
Music of the Classical period (roughly 1750 - 1830) also often contained ‘algorithmic think-
ing.’ For example, the Musikalisches Würfelspiel, or ‘Musical Dice Game,’ used dice to randomly
generate musical compositions from a set of composed options, with the user generally choosing
one bar at a time. The earliest example of the ‘musical dice game’ was “Johann Philipp Kirnberger’s
Der allezeit fertige Menuetten- und Polonaisencomponist (“The Ever-Ready Minuet and Polonaise
Composer”)” [123, 36], first designed in 1757. David Cope has also noted that one can view much
of the music from this period as algorithmic “in the sense that so many parameters are clearly
predetermined prior to composition: form, phrase length, harmonic progression, and so on” [117,
11].
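The logic of a musical dice game is easily sketched. In the fragment below, the table of precomposed bars is a hypothetical stand-in for Kirnberger’s actual charts; two dice select one of eleven options for each bar position in turn.

```python
import random

def dice_game(bar_table, rng=random):
    """Assemble a piece one bar at a time: for each bar position, a roll of
    two dice (2-12) selects one of eleven precomposed options."""
    piece = []
    for options in bar_table:               # one column of 11 options per bar
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        piece.append(options[roll - 2])     # map roll 2..12 to index 0..10
    return piece
```

The charts themselves encode the composer’s craft; the dice merely choose among guaranteed-compatible fragments.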
Since the end of the 19th and beginning of the 20th centuries, the use of mathematical elements
in musical compositions has become even more commonplace. French composer Claude Debussy
composed many works that utilise the Fibonacci Sequence and Golden Ratio. For example, the
introductions to the final movement of La Mer (1905), and to Rondes de Printemps (1909) are 55
and 21 bars respectively [85, 3]. Reflets dans l’eau110 (1905) is 94 bars long, divided into 58 bars
(further divided into 23 and then 35 bars), at which point the piece reaches its dynamic climax, and
then 36 bars (divided into 24 and then 14 bars) [84, 288]. The ratios of these divisions all approach
the Golden Ratio.
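The claim that these divisions approach the Golden Ratio can be checked directly. The short sketch below compares each cited division of Reflets dans l’eau with 1/φ ≈ 0.618.

```python
PHI = (1 + 5 ** 0.5) / 2  # the Golden Ratio, ~1.618

def golden_ratio_error(part, whole):
    """Distance of a division's proportion from 1/phi (~0.618)."""
    return abs(part / whole - 1 / PHI)

# Divisions cited above: 94 bars split at 58, 58 bars split at 23 + 35.
for part, whole in [(58, 94), (35, 58)]:
    print(part, whole, round(golden_ratio_error(part, whole), 4))
```

Adjacent Fibonacci numbers behave the same way, which is why the 55- and 21-bar introductions cited above fit the same scheme.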
Music for Strings, Percussion and Celesta111 (1936) by Hungarian composer Bela Bartok also
contains elements of the Fibonacci Sequence and the Golden Ratio. The opening movement is a
fugue and is subdivided in a similar manner to Debussy’s Reflets dans l’eau, although with minor
displacements synchronising main events with a “subsidiary structure of simple ratios of eleven
bars running through the movement” [84, 286]. The third movement of Music for Strings...
is also divided in a similar manner [100, 121], and begins with an even more evident use of the
Fibonacci Sequence, as beats of the opening xylophone solo are grouped as 〈1,1,2,3,5,8,5,3,2,1〉
[163, 83].
French composer Olivier Messiaen used a variety of mathematical and algorithmic techniques
extensively. Messiaen often used what he termed ‘modes of limited transposition,’ symmetrical
scales created in such a way that “they can only be transposed a small number of times before
the same notes are generated” [78], and ‘non-retrogradable rhythms,’ patterns which “whether
one reads from right to left or from left to right, the order of their values is the same” [133, 4].
The piano part of the opening movement of Messiaen’s Quatuor pour la fin du temps112 (1941)
combines a sequence of 29 chords and 17 rhythmic values (both prime numbers). Messiaen uses
this isorhythmic piano part in combination with “non-retrogradable rhythms and symmetrical pitch
collections” to “create a musical stasis, which is Messiaen’s metaphor for eternity” [98, 194]. Trois
petites liturgies de la présence divine113 (1944) shows many similar traits. The vocal line uses
110https://www.youtube.com/watch?v=LLbpQl1cCl8
111https://www.youtube.com/watch?v=ZFTGdFuUdAU
112https://www.youtube.com/watch?v=UeSVu1zbF94
113https://www.youtube.com/watch?v=DRyn-gRp_p0
two ‘modes of limited transposition.’ The right-hand piano part “fits a sequence of 13 chords
[using notes from a ‘mode of limited transposition’]... to a sequence of 18 [rhythmic] values” [78].
The left hand fits a sequence of 9 chords, using a different ‘mode of limited transposition,’ to the
rhythmic sequence of the right hand, but played in a 3:2 ratio [78].
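Both of Messiaen’s devices are simple to model over the twelve pitch classes. In the sketch below, the whole-tone scale serves as a known ‘mode of limited transposition’ (it has only two distinct transpositions), and a non-retrogradable rhythm is simply a palindromic list of durations.

```python
def num_distinct_transpositions(pcs):
    """Count the distinct sets among the 12 transpositions of a pitch-class
    set; 'modes of limited transposition' yield fewer than 12."""
    pcs = frozenset(pcs)
    return len({frozenset((p + t) % 12 for p in pcs) for t in range(12)})

def is_non_retrogradable(durations):
    """True if the rhythm reads the same right-to-left as left-to-right."""
    return list(durations) == list(reversed(durations))

print(num_distinct_transpositions({0, 2, 4, 6, 8, 10}))  # whole-tone scale: 2
print(is_non_retrogradable([3, 1, 2, 1, 3]))             # True
```

The diatonic scale, by contrast, has twelve distinct transpositions, which is precisely why Messiaen singled out the symmetrical modes.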
The chance works of American composer John Cage have also been noted as being algorith-
mic in nature [117, 11]. For example, to create his 1951 piece Music of Changes114 Cage chose
“sounds, rhythms, tempos and dynamics” [136] by tossing coins in order to access musical ma-
terial contained in predefined charts derived from the I Ching [117, 11]. Cage used an “identical
system” [136] to create Imaginary Landscape no.4115 (1951), and, in fact, continued to use the I
Ching and chance operations in the composition of many of his later works [99, 78].
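Cage’s procedure can be loosely sketched as follows. The mapping from tosses to a hexagram is simplified here (the traditional I Ching method uses three coins per line, with broken and changing lines), and the chart of musical materials is a hypothetical stand-in for Cage’s own charts.

```python
import random

def hexagram(rng=random):
    """Build an I Ching hexagram number (1-64) from six binary coin
    tosses; a simplification of the traditional three-coin method."""
    lines = [rng.randint(0, 1) for _ in range(6)]
    return 1 + sum(bit << i for i, bit in enumerate(lines))

def chance_choice(chart, rng=random):
    """Use a hexagram to index a chart of precomposed materials."""
    return chart[(hexagram(rng) - 1) % len(chart)]
```

The composer’s taste resides in the charts; chance only selects among them.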
In the first half of the 20th century, many algorithmic techniques were proposed for the com-
position of music. In the early 1920s, Austrian composer Arnold Schoenberg,116 and his students
such as Anton Webern117 and Alban Berg,118 devised a method for composing music using all 12
notes of the chromatic scale, labelled 12 tone serialism [77]. In serialist music “a fixed permuta-
tion, or series, of elements is referential (i.e. the handling of those elements in the composition is
governed, to some extent and in some manner, by the series)” [77]. Early proponents of the style,
such as Schoenberg, based their music on a ‘series’ containing the 12 notes of the chromatic scale,
hence the term 12 tone serialism.
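The serial handling of a row can be sketched as the generation of its 48 classical forms: prime, inversion, retrograde, and retrograde-inversion at each of the twelve transposition levels. The function below operates on pitch classes 0-11 and inverts about the row’s first note, one common convention.

```python
def row_forms(row):
    """All 48 classical forms of a 12-tone row, keyed 'P0'..'RI11'."""
    def transpose(r, t):
        return [(p + t) % 12 for p in r]
    inv = [(2 * row[0] - p) % 12 for p in row]  # inversion about first note
    forms = {}
    for t in range(12):
        p_form, i_form = transpose(row, t), transpose(inv, t)
        forms[f"P{t}"], forms[f"I{t}"] = p_form, i_form
        forms[f"R{t}"], forms[f"RI{t}"] = p_form[::-1], i_form[::-1]
    return forms
```

A composition then draws its pitch material from these forms rather than from free invention, which is the sense in which serialism is algorithmic.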
Although early works only used series of the 12 chromatic notes, later composers began to
extend the serialist technique, using a ‘series’ to determine features such as “duration, dynamics
and timbre” [77], often simultaneously, within a composition. Karlheinz Stockhausen is amongst
the most prominent of these later total serialist composers. In the aforementioned Studie II119
(1954) Stockhausen determined the pitches, durations, and dynamics using a ratio of ²⁵√5 (the
25th root of 5), and
114https://www.youtube.com/watch?v=eAjKD12RkEY
115https://www.youtube.com/watch?v=iRa0j_EIth0
116https://www.youtube.com/watch?v=g7wefv98lvo
117https://www.youtube.com/watch?v=8hk3rKIW3qg
118https://www.youtube.com/watch?v=crLFn2muMm4&list=PL16BE5CB2012BC938
119https://www.youtube.com/watch?v=bwj6ZptPnDo
the piece is divided into “5 main sections, and 5 subsections per section, each consisting of 5
‘groups’ of 1 to 5 sounds, with 5 different ‘band-widths’ ” [174]. American composer Milton
Babbitt also epitomised this total serialist approach. With the aid of the RCA Mark II synthesiser,
whose “paper tape reader” made it particularly suited to “highly organised and structured works”
[82, 114], Babbitt created pieces, such as Ensembles for Synthesizer120 in 1964, which were “not
concerned with the invention of new sounds per se but with the control of all aspects of events,
particularly the timing and rate of change of timbre, texture and intensity” [9].
In 1941, Joseph Schillinger wrote The Schillinger System of Musical Composition [155]. Schillinger’s
system provides methods for generating or analysing rhythms, melodies, and harmonies accord-
ing to geometric phase relationships. For example, a basic ‘Schillinger Rhythm’ can be created
by playing a note once every 3rd and every 4th beat, resulting in a rhythm that is 12 beats long
(see Fig. 2.2). Schillinger is perhaps best known as a teacher, with George Gershwin being his
most famous student. Scholars have noted the influence of Schillinger’s system in Gershwin’s I
Got Rhythm Variations121 [75, 439], and in Gershwin’s opera, and largest composition, Porgy and
Bess122 [118, 15].
Figure 2.2: Example of a simple rhythm, ‘3 against 4,’ using the Schillinger method.
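The ‘resultant’ rhythm shown in Figure 2.2 can be generated directly: attacks fall on every multiple of each generator within one cycle of their least common multiple. A minimal sketch:

```python
from math import gcd

def resultant(a, b):
    """Schillinger-style resultant: attacks every a beats and every b
    beats, over one cycle of lcm(a, b) beats."""
    length = a * b // gcd(a, b)  # lcm(a, b); 12 beats for '3 against 4'
    attacks = {i for i in range(0, length, a)} | {i for i in range(0, length, b)}
    return sorted(attacks)

print(resultant(3, 4))  # [0, 3, 4, 6, 8, 9]
```

The interference pattern of the two periodicities, not either cycle alone, gives the resultant its characteristic shape.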
Following World War II, the use of computers, and computer algorithms, for the composition
of music began to gain popularity. In 1957, Lejaren Hiller and Leonard Isaacson created a col-
120https://www.youtube.com/watch?v=W5n1pZn4izI
121https://www.youtube.com/watch?v=pdzjwXIFYyA
122https://www.youtube.com/watch?v=KNmNyyeiRck
lection of algorithmic rules and Markov chains [169, 49], using the ILLIAC I mainframe at the
University of Illinois, which in turn created the Illiac Suite,123 the “first work composed by means
of a computer” [166]. John Cage was amongst the first to recognise the possibilities in Hiller and
Isaacson’s system, and collaborated with Hiller to create the piece HPSCHD124 (1969) [47, 61],
again using the I Ching and chance operations to compose the work [99, 17]. Greek composer Ian-
nis Xenakis was also experimenting with the use of computer algorithms in musical composition
towards the end of the 1950s. In Xenakis’ case, rather than use the computer to generate entire
compositions, the algorithms were used to calculate “complex parameters of scores for various
sizes of instrumental groups” [82, 122]. He then used these calculations to aid the composition of
pieces such as ST/10 (1962)125 and ST/4 (1962)126 [82, 122].
Artists working within what have traditionally been considered ‘popular’ music genres have
also used computer algorithms. As mentioned in Section 2.1, English duo Autechre have used
algorithms extensively on many of their releases, and continue to use them on their recent releases
in a more limited capacity. Squarepusher is also known to use custom designed algorithms to
control both digital signal processing [172] and MIDI data [180]. Both Aphex Twin [179] and
Bjork [114] have also used algorithms to create drum patterns and bass lines.
2.3.1 Algorithms in improvised music
The practice of live coding, that is, the “real-time scripting [of computer software] during laptop
music performance” [36, 321], is heavily reliant on algorithms. Typically the only interface a
live coding performer uses is the computer keyboard, or occasionally mouse and keyboard, and
the coding interface, relying on “compositional algorithms” [36, 321] to organise and generate
musical materials. This reliance on algorithms can be seen in much of the literature on the topic
123https://www.youtube.com/watch?v=n0njBFLQSk8&index=1&list=PLOpv-xCXkg1kO666mFfBcE1pFGgmlRdtW
124https://www.youtube.com/watch?v=t_hTxJpWITw
125https://www.youtube.com/watch?v=9XZjCy18qrA
126https://www.youtube.com/watch?v=9KsWpuqTYLU
(for example [36], and [37]), but also in the reverence with which they are treated in the TOPLAP
Live Coding Manifesto, which states that “Algorithms are thoughts.”127 Alex McLean, a member
of the London-based live coding trio Slub,128 describes their approach, and that of live coding in
general, as:
Improvising music by describing it - coding it - rather than by touch-
ing and hitting an instrument in some way. That description is ‘live’
because a computer is constantly trying to interpret the description
while you write and change it. The point isn’t to describe each sound
individually, but to describe higher level structures - how the sounds
change over time [5].
Slub use a variety of algorithms that describe experiments in “such areas as combinatorial mathe-
matics, chordal progressions, sonified models of dancing people, morphing metres, [and] algorith-
mic breakbeats” to create “patterns, melodies and stranger musical components” [36, 323]. The
result is a sound that “ranges from relentless techno to meditative sonic studies” [36, 323].
The use of algorithms and “computational thinking” [47] can also be found in improvised
instrumental music traditions such as jazz. A very simple example is the widespread use of the
‘12 bar blues’ progression. Much like Cope notes about music of the Classical period, ‘the blues’
can be considered algorithmic “in the sense that so many parameters are clearly predetermined
prior to composition: form, phrase length, harmonic progression, and so on” [117, 11]. A further
example would be the mid to late work of John Coltrane, which displays many mathematical and
algorithmic properties. Alice Coltrane, John Coltrane’s wife and a well respected jazz pianist in
her own right, went so far as to state
“some of [Coltrane’s] latest works aren’t musical compositions. I
mean they weren’t based entirely on music. A lot of it has to do with
127http://toplap.org/wiki/ManifestoDraft
128https://soundcloud.com/chordpunch/slub-live-in-paris
mathematics, some on rhythmic structure and the power of repeti-
tion...” [32, 173]
Giant Steps,129 perhaps Coltrane’s most famous work, demonstrates this mathematical fascina-
tion. The piece only modulates by intervals of a major 3rd, moving between the keys of ‘B Major,’
‘G Major,’ and ‘Eb Major,’ thus dividing the octave into three equally spaced divisions [8, 93].
Although Giant Steps is known for being based entirely on this technique of modulating in a cycle
of major or minor thirds, commonly referred to in jazz as Coltrane Changes, Coltrane had already
used the technique on multiple occasions to ‘extend’ extant harmonic progressions. For exam-
ple, Countdown130 uses Coltrane Changes to extend the harmonic progression of the Miles Davis
composition Tune Up,131 26-2132 uses Coltrane Changes to extend the harmonic progression of the
Charlie Parker piece Confirmation133 [8, 92], and Satellite134 uses Coltrane Changes to extend the
harmonic progression of the jazz ‘standard’ How High the Moon135 [182, 137]. In addition to its
use as a compositional tool, both Coltrane and numerous jazz musicians since have ‘superimposed’
Coltrane Changes onto chord progressions during their improvisations [8, 92] [105]. This super-
imposition of Coltrane Changes can be found in much of Coltrane’s own playing, for example,
Limehouse Blues,136 Summertime,137 and Fifth House138 [182, 137].
The use of a cycle of modulations by a major 3rd had already been incorporated into jazz pieces
prior to the composition of Giant Steps, for example during the bridge of the jazz standard Have
You Met Miss Jones139 [182, 136]. However, Coltrane was also inspired by Nicolas Slonimsky’s
Thesaurus of Scales and Melodic Patterns [162] [182, 136], in which Slonimsky derives 1,330
scale patterns by mathematical “methods similar to [Joseph] Schillinger” [139, 602]. Numerous
129https://www.youtube.com/watch?v=30FTr6G53VU
130https://www.youtube.com/watch?v=lJ7QTRzV9RM
131https://www.youtube.com/watch?v=iiSDqJIQk8k
132https://www.youtube.com/watch?v=JG9ygPjWdj0
133https://www.youtube.com/watch?v=yXK0pZx92MU
134https://www.youtube.com/watch?v=gITxsklDH58
135https://www.youtube.com/watch?v=djZCe7ou3kY
136https://www.youtube.com/watch?v=eTAjh1pjsFI
137https://www.youtube.com/watch?v=NEftw9o1joo
138https://www.youtube.com/watch?v=ngvgWUGTKYU
139https://www.youtube.com/watch?v=2G8UTMtY0JM
other jazz performers have used this book as a source of patterns and sequences for their impro-
visations, mainly to provide a more dissonant, ‘outside,’ feel. For example, Charlie Parker uses a
sequence identical to that of “Pattern #629” from Slonimsky’s book during a live performance of
the standard After You’ve Gone140 [188, 188]. Bassist Jaco Pastorius141 began studying Slonim-
sky’s book after being introduced to it by guitarist Joe Diorio142 [116, 57], and both Evan Parker143
[130, 411] and Allan Holdsworth144 [111] have also noted the influence of Slonimsky’s book and
mathematical processes on their playing.
Other jazz musicians have also developed similar mathematical approaches. For example,
George Garzone’s145 Triadic Chromatic Approach [74] involves playing the notes of a triad (either
major, minor, diminished, or augmented), moving a semitone, and then playing the notes of another
triad of the same kind, but in a different inversion, thus allowing performers to “improvise freely”
without repetition [74, 58]. In Steve Coleman’s146 Symmetrical Movement Concept “the motion
of the melodies involves an expansion and contraction of tones around an axis tone or axis tones,”
where the “expansion and contraction involved is almost always equal on both sides of the axis”
[33]. Coleman is also notable as a “key member of the M-Base collective”147 [44, 173], whose
style is most easily defined by its mathematical approach to rhythm involving “layered cycles of
beats, often involving unusual meters (say, seven, or 11)” [22].
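Garzone’s procedure can be loosely sketched as follows. The modelling choices here (pitch classes only, inversion as a rotation of chord tones, random inversion and direction of semitone motion) are illustrative assumptions rather than a transcription of the published method.

```python
import random

MAJOR_TRIAD = [0, 4, 7]  # intervals above the root, in semitones

def triadic_chromatic(start_root, n_triads, rng=random):
    """Play a triad, move the root a semitone up or down, and repeat,
    using a different inversion (rotation) of the triad each time."""
    phrase, root, last_inv = [], start_root % 12, None
    for _ in range(n_triads):
        inv = rng.choice([i for i in range(3) if i != last_inv])
        notes = [(root + MAJOR_TRIAD[(inv + k) % 3]) % 12 for k in range(3)]
        phrase.append(notes)
        last_inv = inv
        root = (root + rng.choice([-1, 1])) % 12
    return phrase
```

The two constraints, chromatic root motion and non-repeating inversions, are what keep the resulting lines from cycling, matching the approach’s stated goal of repetition-free free improvisation.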
140https://www.youtube.com/watch?v=w9b0N-awQsc
141https://www.youtube.com/watch?v=LEs5sKDXZuk
142https://www.youtube.com/watch?v=2HCV5QeN2Ts
143https://www.youtube.com/watch?v=edrpNKAOcZQ
144https://youtu.be/74GBPl2FaK0?t=187
145https://www.youtube.com/watch?v=QzDA8mC5fbs
146https://www.youtube.com/watch?v=I3EW-fN1YB4
147https://www.youtube.com/watch?v=ol8acY0QpVY
Chapter 3
Related Work - Algorithmic Music Performance Systems
As discussed in Section 2.3, algorithmic music, i.e. the use of algorithms to compose or aid in
the composition or performance of music, has been a widely accepted part of Western musical culture
for many centuries [73, 47, 129]. This chapter provides a survey of a number of algorithmic
music systems, focussing primarily on those systems designed for live performance. Many scholars
have already conducted surveys into the various techniques of algorithmic composition. These
have ranged from those focused on artistic concerns (e.g. [35]), to those that provide a detailed
overview of developments in a particular technique (e.g. [3, 150]), to broad overviews of the field
(e.g. [73, 129]). There are also numerous texts detailing various methods and techniques used in
algorithmic composition (e.g. see [123, 147]).
This chapter will provide an overview of algorithmic techniques used in musical performance
systems (3.1), methods used for classification of algorithmic systems (3.2), and an overview of
some of the exemplar systems in the field and some contemporary systems which have similar
goals to the research presented herein (3.3). The chapter will close with a short discussion of the
trends across performance systems and how these trends influenced the development of AAIM (3.4).
3.1 Algorithmic Methods
There have been numerous attempts to categorise the various Artificial Intelligence (AI) methods
used in algorithmic composition (e.g. see [73, 123]). There are, however, two main problems with
categorising algorithmic music systems, whether compositional or performative, according to the AI
methods. Firstly, “many of the AI methods can be considered as equivalent” [129], and secondly
many systems “have more than one prominent feature” [129]. All taxonomies inevitably suffer
from these shortcomings, but during this chapter I use the one proposed by Papadopoulos and
Wiggins: Mathematical models, Knowledge-based systems, Grammars, Evolutionary methods,
Systems that learn, and Hybrid systems [129].
3.1.1 Mathematical Models
Mathematical Models are algorithms based primarily on a mathematical theorem or technique.
They tend to be less complex than other methods, and this has resulted in them being used exten-
sively in live music performance systems [129]. Among the most commonly used methods in this
category are: Random Walks, Markov Chains, and Self-Similar Systems.
Random Walks
These are systems that use random number generators, often constrained within desirable limits,
to ‘walk’ through values in the output space, e.g. a random walk through notes of a musical scale.
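A minimal sketch of such a walk, stepping through a C major scale; the scale, the step range of up to two degrees, and the clamping to the scale’s ends are all illustrative choices:

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave, MIDI note numbers

def random_walk(n_notes, rng=random):
    """Constrained random walk: step by at most two scale degrees,
    clamped within the limits of the scale."""
    idx, notes = 0, []
    for _ in range(n_notes):
        notes.append(C_MAJOR[idx])
        idx = min(len(C_MAJOR) - 1, max(0, idx + rng.randint(-2, 2)))
    return notes
```

Constraining both the output space (the scale) and the step size is what distinguishes a musical random walk from unconstrained random selection.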
Markov Chains
Markov chains are a stochastic process in which each step depends only on the state that immedi-
ately preceded it. A matrix of the probabilities of transitioning between states represents the chain.
The categorisation of Markov chains is problematic. They are similar to certain types of Grammars
[129], bringing us back to the difficulties in categorising systems according to the AI techniques
they use mentioned above (see Sec 3.1).
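A minimal first-order chain over note names can be sketched as follows; the transition probabilities here are hypothetical, chosen only to illustrate the mechanism:

```python
import random

# Transition matrix: each state maps to its successors and their probabilities.
TRANSITIONS = {
    "C": {"D": 0.5, "E": 0.5},
    "D": {"C": 0.25, "E": 0.75},
    "E": {"C": 1.0},
}

def walk(start, steps, rng=random):
    """Generate a sequence where each note depends only on the previous one."""
    state, out = start, [start]
    for _ in range(steps):
        successors, probs = zip(*TRANSITIONS[state].items())
        state = rng.choices(successors, weights=probs)[0]
        out.append(state)
    return out
```

In practice the matrix is usually estimated by counting transitions in a corpus or a live input stream rather than written by hand.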
Self-Similar Systems
These are systems whose output is “statistically similar across several orders of magnitude” [73].
They include fractals, 1/f noise, and chaotic systems.
Mathematical systems also have disadvantages. For example, stochastic processes require some-
one to determine the probabilities involved and do not easily incorporate deviations from the norm
or higher levels of music [129]. In addition, deterministic models, such as chaotic systems or frac-
tals, do not directly relate to music, a fact which caused Martins and Miranda to argue that such
algorithms “seldom produces significant pieces of music” [112].
3.1.2 Knowledge-based methods
Most algorithmic music systems incorporate “some kind of composition rules at some point of the
workflow” [73], which again brings us back to the aforementioned difficulties in categorising these
techniques (see Sec 3.1). Here, however, we are concerned with systems that are based primarily
on rules derived from the analysis and codification of a musical language. This is an obvious
approach to the creation of an algorithmic music system, as composition, and music in general,
has traditionally been taught as a collection of rules [3]. Knowledge-based systems require explicit
rule sets, and this enables them to explain all of the choices made. However, this also limits the
usefulness of the algorithms, as the rules are often difficult to define, and the inclusion of all of the
possible exceptions to these rules is often impossible. In addition to this, the codification of rules
for one style inherently inhibits the ability of the system to produce other styles. It is perhaps for
these reasons that so few performance systems seem to have been built using explicitly knowledge-
based systems.
3.1.3 Grammars
A grammar is “a set of rules to expand high-level symbols into more detailed sequences of symbols
(words) representing elements of formal languages” [73]. David Cope’s EMI system is perhaps
the best known compositional system that uses grammars. This system assigns symbols to signature
phrases in a corpus of musical examples and rearranges these to create a new work [129]. Issues
involved with the use of grammars include their hierarchical nature, which does not relate to all
music, and the fact that they can create many “strings of questionable quality” [129]. In addition
to this, they also require the building of the grammar, which can be a long and difficult, if not
impossible, process.
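A toy example of the grammar approach: high-level symbols are expanded by rewrite rules until only terminal note names remain. The rules below are illustrative and not drawn from any cited system.

```python
import random

# Production rules: each non-terminal expands to one of several sequences.
RULES = {
    "PHRASE": [["MOTIF", "MOTIF"], ["MOTIF", "CADENCE"]],
    "MOTIF": [["C", "E", "G"], ["E", "G", "C"]],
    "CADENCE": [["G", "C"]],
}

def expand(symbol, rng=random):
    """Recursively expand a symbol; anything without a rule is a terminal."""
    if symbol not in RULES:
        return [symbol]
    production = rng.choice(RULES[symbol])
    return [note for s in production for note in expand(s, rng)]
```

The hierarchy of the rules mirrors the hierarchical nature of grammars noted above: phrases decompose into motifs, motifs into notes.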
3.1.4 Evolutionary Methods
Evolutionary algorithms use “computational models of evolutionary processes as key elements in
their design and implementation” [165]. They are particularly efficient at searching large problem
spaces and are capable of producing multiple solutions [73], making them suitable for many musi-
cal systems. Difficulties inherent to the use of genetic algorithms within musical systems relate to
evaluating the population of results produced. In particular, this problem lies either in “designing a
non-interactive fitness function” [50, 2049] to judge the results, or the time required for a human to
listen to all of the “potential solutions in order to evaluate a population” [129] if no fitness function
is used. A further difficulty is that “algorithms that were not designed for music... seldom produce
significant pieces of music” [112]. As such, it is of little surprise that many musical performance
systems that use evolutionary computing are “primarily limited to very narrow problem spaces”
[50, 2050].
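A minimal sketch of the evolutionary approach applied to rhythm, in which individuals are 16-step onset patterns. The non-interactive fitness function used here, rewarding a target onset density, is a deliberately crude hypothetical, underlining why designing such functions is the difficult part.

```python
import random

def fitness(pattern, target_onsets=6):
    """Hypothetical fitness: closer to the target number of onsets is better."""
    return -abs(sum(pattern) - target_onsets)

def evolve(pop_size=20, generations=50, rng=random):
    """Evolve 16-step binary rhythm patterns by selection, one-point
    crossover, and point mutation; return the fittest individual."""
    pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(16)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.1:              # occasional point mutation
                child[rng.randrange(16)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Replacing the fitness function with a human listener yields the interactive variant discussed above, at the cost of auditioning every candidate.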
3.1.5 Systems That Learn
Papadopoulos and Wiggins defined systems that learn as “systems which, in general, do not have
a priori knowledge... of the domain, but instead learn its features by examples” [129]. They
further classify them according to how they store their data: those that store data ‘distributively,’
i.e. Artificial Neural Nets (ANN), and those that store data ‘symbolically,’ i.e. Machine Learning
(ML).
The ability of a system to learn makes it ideal for situations in which there is no prior knowl-
edge of what the input will be, and as such perfect for improvised music, in particular freely impro-
vised music. There are, however, shortcomings associated with learning systems, and in particular
ANNs. These shortcomings relate to the systems’ inability to “pick up the higher-level features
of music” [129] or to represent time. Papadopoulos and Wiggins also claimed that “[u]sually they
solve toy problems, with many simplifications” [129].
3.1.6 Hybrid Systems
Hybrid systems “use a combination of AI techniques” [129]. Their advantage is obvious: they
combine the best of different approaches. However, the disadvantage is that they become complex
and the “implementation, verification and validation is also time consuming” [129]. Despite this,
hybridisation of some form is extremely common. For example, the Kinetic Engine (see 3.3.1.6)
uses Markov chains to generate populations. However, the use of genetic algorithms to create new
musical material is clearly the focus. Markov chains can also be considered as a form of machine
learning, meaning that one could categorise the Kinetic Engine or the Continuator (see 3.3.1.3) as
systems that learn.
3.2 Other Means of Categorisation
As noted above, there are inherent difficulties in attempting to categorise algorithmic music sys-
tems according to the AI method due to the overlapping nature of many algorithmic methods. In
addition, although trends can be inferred in the use of each method, simply knowing the method
used does not provide much information about how a system acts during performance, what tasks
it undertakes, or the possible range of its use. As such, I will present a number of alternative means
for categorising algorithmic systems that I will use when discussing the systems in Section 3.3 and
that have been useful when developing and framing the work in later chapters.
3.2.1 Algorithmic Function
Wooller et al. suggested a framework “positioning algorithmic music systems along two main di-
mensions, function and context” [190]. Of particular interest here is the ‘function’ of the algorithm,
a continuum ranging from algorithms that ‘analyse’ input, to those that ‘transform’ materials, to
those that ‘generate’ new materials [190]. As with categorising systems according to their al-
gorithmic method, an algorithmic system can often be considered to have multiple functions. For
example, a Markov Chain analyses the model, or models, it is given, and generates new material
from this through transformation of said input. However, for the purposes of this thesis I will
use ‘analyse’ to refer to those systems in which the analysis provides information from which the
system extrapolates musical ideas not explicitly contained within the original data. I will use ‘gen-
erative’ to refer to those systems in which there is a clear intention to generate new materials which
the audience has not heard before. Finally, I will use ‘transformative’ to refer to those systems that
make a clear attempt to morph or vary materials which the audience have already heard in perfor-
mance. Using these definitions a Markov Chain would no longer be an analysing algorithm as it
can only generate materials using the exact relationships in the original data. However, a Markov
Chain could be either transformative or generative depending on whether the model is based on a
large offline database (see 3.3.2.2) or built during the performance (see 3.3.1.3).
3.2.2 Instrument v Player Paradigms
The categorisation of music performance systems according to their most prominent AI method
provides no information about how to use the system to create music. The ‘algorithmic func-
tion,’ meanwhile, only gives a minimal amount of information regarding how the system performs.
For this reason it can be useful to examine certain high level musical characteristics of systems.
Rowe [146] presented a framework for differentiating between systems designed in the “instrument
paradigm,” i.e. those that explicitly rely on direct human control, and those designed in the “player
paradigm,” i.e. those that attempt to act as an individual performer either through autonomous
action or through mimicking and modifying the performance of a human improviser, and typically
striving for some level of artificial intelligence. This is of particular interest in the context of the
research presented herein, which has been explicitly designed in the instrument paradigm.
3.2.3 Musical ‘problem space’
Another useful basis of comparison when examining algorithmic systems is their “problem space”
[181, 118]. “Problem space” refers to the musical tasks a system undertakes, but also incorporates
the genres or styles of music that the system can create. For example, a system that is only capable
of creating rhythmic patterns in one specific style is only suitable for use in that particular style,
and, therefore, works within a narrow problem space. In contrast, a system that is capable of
generating rhythms in many styles engages with a much broader musical problem space.
3.3 Overview
This section will present a brief overview of some historically important algorithmic performance
systems, and some contemporary systems relevant to this research, framed by the concepts of categori-
sation presented above. The section is ordered with systems in the player paradigm and suited
to more rhythmically free music being presented first, and then followed by those in the instru-
ment paradigm. Later systems then present contemporary works that are closer to the goals of
the research presented in later chapters regarding the development of AAIM, i.e. the instrument
paradigm, the use of referents and rhythmic meter, and the focus on ‘transformation’ rather than
‘generation’ of materials.
3.3.1 Player Paradigm Systems
3.3.1.1 Voyager
George E. Lewis’ Voyager system is perhaps the most renowned algorithmic system in the player
paradigm. The system “analyzes aspects of a human improvisor’s performance” and generates both
“complex responses to the musician’s playing and independent behavior that arises from its own
internal processes” [107, 33]. Lewis conceived the system as an “inter-active ‘virtual improvising
orchestra’ ” [107, 33], and designed it to intentionally avoid predictable results
to its input. This, in combination with the stylistic tendencies of Lewis and the other artists who
have performed with it,1 means that the music produced by Voyager sits very much in the genre of
free-jazz.2
The Voyager system differs from the research presented in later chapters with regard to its
1For example, Evan Parker and Roscoe Mitchell [107, 34]
2https://www.youtube.com/watch?v=hO47LiHsFtc
stylistic outlook, particularly with regard to rhythm, and Lewis’ desire to endow the system with
independence. However, many of Lewis’ thoughts with regards to his system resonate with this
research. For example, his differentiation between long term consistency and moment-to-moment
unpredictability is echoed in the approach taken in designing AAIM: providing users with the ability
to describe variations at a macro level while the system generates these variations using constrained
randomness at the micro level.
3.3.1.2 prosthesis
As stated above, systems that learn are particularly suited to freely improvised music. One such ex-
ample is Michael Young’s series of prosthesis3 pieces [191] (∼2008), in which ANNs are trained
to interact autonomously with a human performer in a free improvisation. Each system in the
prosthesis series thus clearly sits in the player paradigm. In contrast to many of the systems men-
tioned above, the prosthesis systems all trigger samples and process sounds “with filtering and
ring modulation,” rather than sending MIDI information to an external instrument.
One interesting aspect of these systems is that, although they are intended for use in a free
improvisation, each system is specifically designed to interact with a different acoustic instrument.
That is, flute prosthesis is for flute and computer, and oboe prosthesis is for oboe and computer,
and both use an “instrument-specific sound corpus” [191] as the basis for their output. This cre-
ates an unusual disparity between the ‘freedom’ of the music performed by the system and the
preciseness of the instrumentation required, leading to a narrow musical problem space.
3.3.1.3 Continuator
Francois Pachet’s Continuator4 system [126] (∼2004) demonstrates the categorisation problem
mentioned above. Continuator is based on Markov models; however, it could also be categorised
as a ‘system that learns.’ The system uses MIDI input from a human improviser to train the Markov
Models, enabling the system to output variations, or ‘continuations,’ of this material, resulting in
3http://tinyurl.com/ms8frux
4http://francoispachet.fr/continuator/continuator.html
an interactive human-computer improvisation.
This ‘real-time’ training is a fundamental difference between the Continuator and Pachet’s later
Virtuoso system (see Sec. 3.3.2.2). By basing its output entirely on the performance of a human
improviser, the system extends its musical context beyond that of a single style. The Continuator
does not analyse any input to infer rules which are not in the original material. However, the
focus of the Continuator is very much on transforming the models in use rather than generating
new materials. Although input from a human performer is necessary to train the system, it is
clearly in the Player, rather than Instrument, Paradigm. Continuator uses both pitch and rhythmic
information from the human performer when building the models, and therefore in its output.
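The general mechanism can be illustrated with a toy sketch (my own simplification, not Pachet's implementation, which is considerably more sophisticated): a first-order Markov model is built from a stream of MIDI pitches, then sampled to produce a 'continuation' of the input.

```python
import random

def build_model(pitches):
    """First-order Markov model: map each pitch to the pitches that followed it."""
    model = {}
    for a, b in zip(pitches, pitches[1:]):
        model.setdefault(a, []).append(b)
    return model

def continuation(model, start, length, rng=random):
    """Generate a continuation by repeatedly sampling a successor of the last pitch."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:              # dead end: fall back to any known state
            successors = list(model.keys())
        out.append(rng.choice(successors))
    return out

# A short MIDI-pitch phrase (a C major run) as hypothetical training input.
phrase = [60, 62, 64, 65, 67, 65, 64, 62, 60]
model = build_model(phrase)
print(continuation(model, 60, 8))
```

Because successors are stored with repetition, more frequent transitions in the input are proportionally more likely in the output, which is what gives the continuation its resemblance to the performer's material.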
3.3.1.4 OMAX-OFON
Assayag, Bloch, and Chemillier’s OMAX5 system (∼2006) is a MIDI-based “architecture for
an improvisation oriented musician-machine interaction system... based on a statistical learning
model,” and OFON extends this system to the audio domain [6]. OMAX ‘listens’ to the human
performer, looking for patterns of pitches in the input stream, parsing the input into phrases, and
then generates improvisations based on transformations of this input. The OFON extension records
and parses the input audio stream, resulting in an “audio signal reconstructed from the original au-
dio material” [6]. Thus, the OMAX system and the OFON extension analyse and transform the
input of a human performer. A further feature of the system is the ability to perform either in a
particular rhythmic structure or within a rhythmically free setting.
These features make the OMAX system extremely flexible and give it a very broad problem
space. The variety of improvisers who have performed as part of Sandeep Bhagwati’s Native
Alien6 project, which uses the OMAX system, demonstrates the breadth of this problem space. The
OMAX system is also in the player paradigm, although, as Blackwell et al. point out, all systems
require some human interference [21], as can be seen in videos of the Native Alien project. A
further interesting feature of the Native Alien project is its real-time generation of notated materials
5http://tinyurl.com/nrsgpdu
6http://matralab.hexagram.ca/projects/native-alien/
upon which the human performer improvises.
3.3.1.5 GenJam
The GenJam7 system [16], by John Biles (∼1994), is one of the most successful experiments with
genetic algorithms in a musical system (see [50], [112]). It models a “novice jazz musician learning
to improvise,” using a “human mentor” to give real-time feedback “which is used to derive fitness
values for the individual measures and phrase” [16]. In performance the system then generates
“full-chorus improvised solos,” “responds interactively” to a human performer when “trading fours
or eights,” “engages in collective improvisation,” and “breeds” input from a human performer with
its own “ideas” [18] to create new materials.
Functionally this system is primarily transformative and generative, focussing on the manip-
ulation of pitch and rhythm, although rhythms are limited to “eighth note multiples” [16]. The
system is clearly within the player paradigm, and Biles has performed trumpet in duets with the
system in numerous concerts [20]. Although GenJam is capable of performing over 150 tunes [17],
ultimately its “use is within a narrowly defined problem space of mid-twentieth century jazz” [50,
2049]. The fact that it only generates melodies, which are played by a standard MIDI synthesiser,
and uses standard MIDI files as backing tracks [17] also limits GenJam’s potential problem space.
3.3.1.6 Kinetic Engine
Arne Eigenfeldt’s Kinetic Engine8 (∼2012) attempts to “create an intelligent improvising instru-
ment... that models a drum ensemble” [48, 97], using “constrained randomness” and rules for each
“agent” in the ensemble that govern interactions between them. However, Eigenfeldt subsequently
expanded the system to generate 4-part polyphonic material, using a genetic algorithm to contin-
ually evolve segments from a corpus [50]. No fitness function is used as the system assumes that
all of the material contained in the corpus is good, and therefore all of the generated material will
also be good.
7http://igm.rit.edu/~jabics/Demo2003.mov
8https://www.youtube.com/watch?v=_X0mHIbn-n8
The Kinetic Engine functions generatively, and, as stated above, controls both pitch and rhythm
in a polyphonic setting. The first version of the Kinetic Engine sat within the instrument paradigm,
allowing the user to act as a conductor, although the subsequent inclusion of a corpus means the
system now acts within the player paradigm. In addition, because the Kinetic Engine can use a
corpus in any style, the system is capable of producing a broad range of musical styles.
3.3.1.7 GEDMAS
Anderson et al.’s Generative Electronic Dance Music Algorithmic System9 (GEDMAS) (∼2013)
uses a corpus of Electronic Dance Music (EDM) pieces, transcribed manually, as models for “prob-
abilistic and 1st order Markov chain models” [4, 5] and is capable of composing full electronic
dance music compositions. The system generates each piece in a top-down manner, beginning
with the overall form, then harmonic progressions, then the onsets of the instrumental parts, and finally
the individual patterns and melodic lines. Following the generation of the song, the user must then
decide on instrumental timbres and audio effects.
Although the music generated by GEDMAS for the first Musical Metacreation
Weekend in Sydney was pre-generated [51], the system does generate material in real-time [4,
5], and has been included here to acknowledge the similarities between AAIM and many musical
metacreation (MUME) systems with regards to their treatment of electronic dance music. How-
ever, the goal of “endowing machines with creative behavior” [101] and “autonomy” places such
systems firmly in the player paradigm. Using the above definitions, GEDMAS can be said to be
generative, rather than transformative, and does not analyse its corpus to determine suitable ideas
not explicitly contained within the corpus.
9https://soundcloud.com/pitter-pattr
3.3.2 Instrument Paradigm Systems
3.3.2.1 Lexikon-Sonate
Lexikon-Sonate10 for Disklavier piano by Karlheinz Essl [55] (∼1992) is an example of a perfor-
mance system that applies only to a narrow problem space, i.e. in the performance of the piece
Lexikon-Sonate. However, the system is atypical, in that it fits the Instrument Paradigm when used
in performance, and Player Paradigm when used in an installation setting.
The Lexikon-Sonate system functions generatively, producing both pitches and rhythmic pat-
terns. The system uses mathematical models, such as Brownian walks and serial techniques, con-
tained in Essl’s own Real-Time Composition Library. The system then combines these models to
create “structure generators,” objects that produce a specific type of musical response, e.g. glissan-
dos or arpeggios, and these generators then interact to produce the music.
3.3.2.2 Virtuoso
Pachet’s Virtuoso11 [127] (∼2012) is an example of a live performance system based on Markov
models. The system produces ‘virtuoso’ level jazz improvisations using a collection of ‘idiomatic’
bebop lines as its training data. Virtuoso uses a separate model for five different scales,12 and
the system then chooses between these depending on the underlying harmony of the piece, and
“various substitution rules” [127]. The user interacts with the system through two hand-held con-
trollers, and chooses notes, rhythm, and guides the choice of model, i.e. scale. A standard MIDI
synthesiser plays the generated melodic lines.
The Virtuoso system functions generatively, as the analysis conducted when developing the
models does not result in the system inferring rules that are not specifically stated in the original
material and, although the training data is technically transformed, it serves as a table of
probabilities rather than a means of determining higher level structures. Virtuoso is also in the instrument
paradigm, limited to the narrow problem space of generating melodic lines in a bebop jazz idiom.
10http://www.youtube.com/watch?v=aOOTafrusbw
11http://francoispachet.fr/virtuoso/virtuoso.html
12Major, minor, diminished, seventh and whole tone [127]
3.3.2.3 Syncopalooza
Sioros et al.’s Syncopalooza (∼2013) system uses a “set of formalized transformations” [161, 454]
to remove or add syncopation to a rhythmic line. Rhythmic lines are first converted into binary
patterns. De-syncopation is then achieved by shifting triggers forward to strong metrical positions,
e.g. from a 16th note beat to the next 8th note beat, and syncopation is achieved by doing the
opposite, e.g. from an 8th note beat to the preceding 16th note beat.
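The shifting operations described can be sketched roughly as follows (my own simplification, not Sioros et al.'s formalisation; the metrical weights and 16th-note resolution are assumptions):

```python
# Metrical weights for one 4/4 bar of 16th-note steps: higher = stronger position
# (downbeat strongest, then quarter-note beats, then 8th-note offbeats, then 16ths).
WEIGHTS = [4, 1, 2, 1, 3, 1, 2, 1, 3, 1, 2, 1, 3, 1, 2, 1]

def desyncopate(pattern):
    """Shift each onset forward to the first stronger metrical position, if empty."""
    out = pattern[:]
    for i in range(len(pattern)):
        if pattern[i]:
            for j in range(i + 1, len(out)):
                if WEIGHTS[j] > WEIGHTS[i]:
                    if not out[j]:          # only move into an empty step
                        out[i], out[j] = 0, 1
                    break
    return out

def syncopate(pattern):
    """Shift each onset back to the preceding (weaker) step, if empty."""
    out = pattern[:]
    n = len(out)
    for i in range(n):
        if pattern[i]:
            j = (i - 1) % n
            if WEIGHTS[j] < WEIGHTS[i] and not out[j]:
                out[i], out[j] = 0, 1
    return out

# An onset on the 2nd 16th of the bar de-syncopates onto the following 8th.
print(desyncopate([0, 1, 0, 0] + [0] * 12))
```

In this sketch an onset already on a quarter-note beat simply stays put under `desyncopate`, since no stronger position follows it within the bar.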
Syncopalooza follows the instrument paradigm, and is clearly intended to transform given
rhythmic lines, rather than generate entirely new materials. The system has similarities to the
AAIM system in its goal of allowing the user to transform materials, particularly rhythmic, using
higher level controls, in this case ‘syncopation’ and ‘de-syncopation.’ However, while Synco-
palooza does not limit itself to one particular style, the reliance on binary patterns, the fact that no
allowance has been made for the interaction of multiple rhythmic lines, and the lack of
advancement past simple rhythmic lines limits the system's usefulness.
3.3.2.4 Liquid Music
The Liquid Rhythm13 and Liquid Music14 systems (∼2014), developed by the Toronto-based
music software company WaveDNA, are designed to allow users to create MIDI-based musical
ideas through means other than the ‘piano-roll’ style sequencers found in many computer-based
sequencers. Liquid Rhythm enables the user to create beats by combining preset “bar-long or beat-
long patterns” [184], rather than having to insert each note into the pattern individually. These
patterns can then be altered manually, or using a variety of randomisation and rotation techniques.
Liquid Music expands on Liquid Rhythm, enabling the user to create melodic lines and chord pro-
gressions by drawing melodic contours which are then mapped to a collection of preset harmonic
progressions [183].
Although not explicitly algorithmic, the Liquid Music and Rhythm systems do have the option
13https://www.youtube.com/watch?v=bGNPBjVHITo
14https://www.youtube.com/watch?v=qfJlcUd4hKA
to apply constrained random changes to the patterns created, and the system does include some
inherent rules which allow it to suggest complementary patterns to those selected by the user. The
relationship to the research presented herein primarily comes from the desire to present users with
a means to build and vary their own materials in real-time. However, unlike the AAIM system,
the Liquid systems are limited to creating patterns within 4/4, contain no methods for manipulating
audio samples, and do not incorporate any rhythmic division more complicated than triplets. In
addition, these systems rely on wholesale changes of the musical materials rather than affording
the user the ability to describe higher level transformations of the material.
3.3.2.5 2020 Beat-Machine
Yotaro Shuto’s 2020 Beat-Machine15 (∼2016) is a drum machine. 2020 contains a sample slicer, grid
sequencers, 12 one-shot samplers, 2 FM synthesisers, a pitch transposer, and digital signal pro-
cessing effects [159]. In addition to the ability to quickly build complex patterns, 2020 also has
a number of constrained randomisation functions, allowing the user to freely manipulate over 500
parameters. This, in Shuto’s words, allows the user to “perform and produce as a live conductor”
[159].
Like the Liquid systems, 2020 is not explicitly sold as an algorithmic system. However, beyond
its constrained randomisation functions, 2020 also relates to the AAIM system in its desire to
provide a new way of producing and performing electronic music based on a repeated pulse, one
which allows the user to act as a “live conductor” [159]. In contrast to AAIM, however, 2020 does
not include any means of creating or varying melodic lines. Furthermore, the 2020 system seems
geared particularly towards the creation of beats, rather than the variation and manipulation of
musical materials.
15https://www.youtube.com/watch?v=jQso4gz8vFM
3.3.2.6 ITVL
Klysoft’s ITVL16 (∼2016) is a four voice step sequencer “designed to generate organic melodies”
[94]. In contrast to traditional step sequencer designs, which usually only allow one rhythmic
division for the entire pattern, ITVL enables users to set each step of the sequence to one of six
different rhythmic divisions. The system also allows users to define up to two heptatonic scales
and to instantly map the melodic line to either of these or the chromatic scale. The ITVL system
also includes an “impv,” or improvisation, mode, in which the system will occasionally choose to play
a note from a different point in the sequence [96, 9].
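ITVL's exact mapping algorithm is not published; one common way to implement this kind of instant scale mapping is to snap each pitch to the nearest pitch class in the chosen scale, as in the following hypothetical sketch:

```python
# Pitch classes of one heptatonic scale ('C major'); the user could define others.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def quantise(pitch, scale=C_MAJOR):
    """Snap a MIDI pitch to the nearest pitch class in the scale (ties resolve downward)."""
    for offset in (0, -1, 1, -2, 2, -3, 3):
        if (pitch + offset) % 12 in scale:
            return pitch + offset
    return pitch        # unreachable for any non-empty scale, kept as a safeguard

# A chromatic fragment mapped onto the scale.
melody = [60, 61, 63, 66, 70]
print([quantise(p) for p in melody])   # → [60, 60, 62, 65, 69]
```

Because only the lookup set changes, switching between the two user-defined scales (or back to the chromatic scale) can be instantaneous, which matches the behaviour described above.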
Although the author’s website describes the ITVL system as a “semi-generative sequencer”
[95], I would categorise it as a transformative system as the focus is definitely on methods of alter-
ing a looped melody rather than generating new melodies from scratch. In contrast to AAIM, aside
from the ‘impv’ mode all of the methods for manipulating the material are based on fundamentally
transforming the entire loop, e.g. randomly choosing the length of every step, rather than methods
that vary the material on a micro level. ITVL is also limited to the aforementioned six rhythmic
divisions and can only create patterns equal in length to some combination of 32 of these divisions.
3.3.2.7 Fugue Machine
Alexander Randon’s Fugue Machine17 (∼2015) is “the world’s first multi-playhead piano roll”
[138]. Again, Fugue Machine is not explicitly presented as an algorithmic system, although the
system does afford the use of “techniques used in Baroque music and Serialism” [138],
which as discussed in Chapter 2.3 certainly fall within the field of algorithmic music. The system
enables the user to play up to four copies of their melody, and the user can then manipulate the
speed, direction, and pitch of each line.
While the Fugue Machine does incorporate ideas and techniques from Baroque and serial mu-
sic, in truth the possible transformations and manipulations are somewhat limited. Where a true
16https://youtu.be/LD3H8RUSFIE
17https://www.youtube.com/watch?v=KvEJFjDykuE
fugue would traditionally contain a countersubject and key changes, the Fugue Machine does not
generate either. Also, where melodic inversions are an important element of Serialist techniques,
again the Fugue Machine does not facilitate such variations. Finally, the Fugue Machine only
facilitates the creation of melodic lines, with no methods for creating or varying rhythmic patterns or
manipulating audio samples.
3.4 Discussion
The above discussion has avoided referencing digital audio workstations (DAWs) such as Ableton
Live, which contains MIDI effects for randomly varying velocities and pitches etc., and a ‘Beat
Repeat’ audio effect that can repeat and re-pitch segments of audio. Although programs such as
Live are very powerful performance systems, I find that their affordances for improvisation tend to
be rather limited in comparison to improvisation with acoustic instruments: choosing different
patterns or samples, applying relatively simplistic, and often unmusical, random changes, and
placing reactive effects on top of the music rather than actually manipulating the material. While these can
all be used to create, and improvise, excellent music I have always felt that a similar, albeit more
limited, level of variation and manipulation should be available when improvising live computer
music as when composing fixed works, where one can completely rearrange musical materials.
Section 3.3.1 provided an overview of some systems within the player paradigm. Although this
approach differs from the instrument paradigm explored by AAIM, I believe they provide an inter-
esting point of comparison. While none of the systems discussed has been as successful as Live,
algorithmic systems following the player paradigm seem to have enjoyed more lasting success in
academia.18 While many composers and artists develop their own instrument paradigm systems, in
my experience these primarily serve the performance of one single piece or the developer’s unique
musical style. In Section 3.3.2 I tried to focus on systems within the instrument paradigm whose
goals were closer to those of AAIM, i.e. allowing users to compose and manipulate their own musi-
18For example, John A. Biles has published many articles about GenJam beginning in 1994 [16] and as recently as 2013 [19].
cal materials, but it was often difficult to find any peer-reviewed publications detailing these systems.
The development of systems in the player paradigm is fascinating, however, as an improvising mu-
sician I would rather have the means to vary and manipulate musical material rather than interact
with a player paradigm system. As such, AAIM can be seen as my attempt to address what I believe
to be a lack of electronic music performance systems that allow users to vary and manipulate their
own musical materials in live improvisations.
Chapter 4
AAIM
Knowledge works rather like a jigsaw
puzzle. You wait until somebody puts
down a piece and try to find a piece of
your own to place on that living edge.
Derek de Solla Price [110, 93]
Since my first work with electronics in
music, I have felt strongly that music
should come from one human to another
and that the medium should always be at
the service of the artist. The electronic
medium has the possibility, where
desirable, to bypass the performer who has
traditionally acted as the intermediary
between the composer and the audience.
In order to accomplish these things I
needed to develop a software which was
always ‘in service’ to the
composer/musician...
Morton Subotnick [168, 113]
This chapter details the concept and objectives of the AAIM system, and the approaches taken
to achieve these goals. Section 4.1 introduces the overarching concept of the AAIM system and
categorises AAIM using the terms introduced in Chapter 3. Section 4.2 expands upon the concepts
discussed in Section 4.1, focussing on the artistic goals of the system, and how AAIM realises these
goals.
Section 4.3 also expands on the ideas and goals presented in Section 4.1, but discusses how
the design of AAIM facilitates these aims. Section 4.3.1 expands upon this discussion, providing
explanations for decisions made when developing the algorithms comprising the AAIM system.
Finally, Section 4.4 returns to the improvisational approaches presented in Section 2.2, detailing
how these approaches are applicable to the AAIM system.
4.1 Concept
In the design of acoustic instruments the interface is either directly connected to the sound source,
such as a guitar or flute, or the interface and sound source are separated, such as piano or, even
more obviously, pipe organ. Electronic music systems, however, allow the insertion of additional
systems between the interface and sound source. For example, the sequencer on the Buchla 100
(see Sec. 2.1.4) can be utilised to expand the affordances of the interface, and thus the possibilities
for a human performer. Algorithmic systems have the capacity to explore these possibilities
even further, affording greater freedom and expression in improvisation, a practice Roger Dean has
termed “hyperimprovisation” [43]. It is the potential of this intermediary state between interface
and sound source that the AAIM system exploits (see Fig. 4.1). In fact, AAIM neither necessitates
nor promotes the use of any particular interface or sound source, and as will be demonstrated in
Chapter 6, AAIM has been controlled using a variety of different interfaces, and has been used to
trigger sounds using an assortment of synthesis and sampling techniques.
More specifically, AAIM comprises four primary algorithmic modules, each designed to ma-
nipulate and vary micro elements of the musical materials such as individual inter-onset-intervals
(i-o-i’s), i.e. the length of time between consecutive triggers, in addition to individual pitch choices,
segments of sample playback, etc.
1. The AAIM.rhythmGen module (5.1) creates any number of independent, but inter-
Figure 4.1: Illustration of the placement of the AAIM system between the interface and sound source.
locking, rhythmic lines.
2. The AAIM.patternVary module (5.2) varies a user-defined sequence of triggers.
3. The AAIM.loopVary module (5.3) varies the playback of a sample.
4. The AAIM.melodyVary module (5.4) varies melodic lines.
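The inter-onset-interval representation referred to above can be illustrated with a minimal sketch (onset times are assumed to be measured in beats; the helper name is hypothetical):

```python
def inter_onset_intervals(onsets):
    """Inter-onset intervals: the time between consecutive trigger onsets (in beats)."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

# Triggers at beats 0, 0.5, 0.75, and 1: an 8th note followed by two 16ths
# (assuming a quarter note equals one beat).
print(inter_onset_intervals([0.0, 0.5, 0.75, 1.0]))   # → [0.5, 0.25, 0.25]
```

Working at this level, a 'micro' variation amounts to substituting, subdividing, or merging individual entries of such a list rather than replacing the whole pattern.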
As Dean has noted:
A performer may control some process at an analytical level, and
other more rapid processes solely at a motoric level [43, xv].
By attending to these “more rapid processes,” AAIM frees performers from the concerns of
“motoric level” tasks, and thus allows them to focus on macro, “analytic,” elements of their work, such
as form, phrasing, texture, density, spatialisation, and overarching timbral and dynamic shifts.
A fundamental tenet of AAIM is that the style of the output should be determined by the user,
rather than being enforced by the system, or to quote Subotnick: “the medium should always be
at the service of the artist” [168, 113]. With this in mind, no attempt has been made to codify any
particular musical style. Instead, the focus is on ‘stylistically neutral’ algorithms and methods,
which the user is free to employ, or ignore, as they choose. Thus, the system engages as broad a
musical “problem space” [181, 118] as possible.
In order to engage with such a broad problem space, AAIM does not generate any new ma-
terials. Instead, the system focusses on enabling users to vary and manipulate their own musical
materials, with the nature of many of the variations being directly linked to the input musical ma-
terials (for example see Sec. 5.2.3), while all others are determined by variables controlled by
the user. Admittedly this approach limits the capabilities of AAIM to emulate specific genres in
comparison to other, more genre-specific, systems. However, it is argued that this shortcoming is
outweighed by the breadth of extant styles and approaches it can facilitate, and the possibilities it
provides for entirely new approaches to music making (see Chapter 6 for examples).
To an extent, AAIM is what Greg Schiemer has termed an “improvising machine,” that is “an
instrument that reinterprets input from the real world and allows a performer to influence a musical
outcome without entirely controlling it” [154, 109], but there are differences. Schiemer’s sys-
tems produce results “that the performer cannot always anticipate” [154, 109]. In contrast, while
individual events triggered by AAIM may not be predictable, the overall nature of the variations
created, and their relation to the settings of each variable, are as transparent as possible. Conse-
quently, by making the results of user input apparent, AAIM maximises “the degree to which users
can learn to exploit the capacities of the algorithm,” and minimise “the degree to which the algo-
rithmic changes are indifferent to the nature of user input.” In turn, this leads to a stronger focus
on “improvisation” than “interaction” [43, 73].
Returning to the terms presented in Chapter 3, AAIM is in the Instrument Paradigm, the function
of which is primarily transformative, and performs rhythmic, pattern based, melodic, and sample
playback based tasks using mathematical models (see Tab. 4.1).
System | Algorithmic Method | Algorithmic Function | Musical Task(s) | System Type | Musical Approach
AAIM | Mathematical Models | Transformative | Rhythm, Melody, Patterns, Samples | IP | Stylistically Neutral

Table 4.1: Classification of the AAIM system using the categories presented in Chapter 3.
4.2 Artistic Goals
Ultimately, the goal of the AAIM system is to enable the user to act as a conductor or band leader,
that is, to control the overarching nature of the performance while leaving the performance of in-
dividual parts up to the players, in this case the algorithms. This approach is common to much
of electronic music, and is evident, for example, in the use of sequencers to control modular syn-
thesisers.1 However, in contrast to the traditional use of sequencers, AAIM focusses on using this
approach to allow users to determine the type, and magnitude, of variations and manipulations ap-
plied to the musical material, in a sense enabling the user to tell “the orchestra how to improvise”
[43, 63].
While this approach still necessitates a certain ‘distance’ between the user’s input and the
system’s output, relinquishing precise control over individual events to facilitate a more holistic
approach, AAIM strives to retain transparency between the user’s input and the system’s output.
Algorithmic performance systems, particularly those designed within the Player Paradigm, often
eschew predictable responses to user input (for example [107], [154], or [191]). However, by mak-
ing the relationship between input and output as obvious as possible, AAIM minimises the extent
to which this necessary ceding of fine control impacts the user’s experience. Thus, AAIM strives to
afford users the improvisational flexibility of an acoustic instrument, which can only be achieved if
the resulting output is not “indifferent to the nature of user input,” and one can “exploit the capac-
1https://vimeo.com/channels/richarddevine/119810764
ities” [43, 73] of their instrument, without the need for the equivalent level of technical virtuosity.
As such, AAIM shifts the performer’s focus from technical virtuosity, the physical ability to
perform music, to musicality, the creation of materials suitable in the context of the performance.
For example, if a very fast or ‘complex’ rhythmic subdivision is as simple to produce as an 8th
note, then each is equally impressive/unimpressive when taken in isolation. As such, the only
concern becomes whether or not these more ‘complex’ and fast subdivisions are suitable within the
musical context of the piece being performed. Similarly, if it is just as easy to play a melody in the
original key as it is to play the melody in a different key, using a different scale, and in retrograde,
then the physical/technical act of performing the modified melody is no more impressive than that
of performing the original melody, and the only concern becomes whether or not the manipulations
are suitable to the musical context.
Although AAIM is intended to shift a user’s focus away from technical virtuosity, and towards
musicality, its use does not necessitate a performance practice devoid of technical virtuosity. The
history of electronic music, and in fact all music, is full of examples of pioneers ‘squeezing’ more
expressivity out of an interface than had previously been thought possible. For instance, the afore-
mentioned examples of King Tubby, and his innovative use of a mixing board, or DJ Kool Herc
and Grandmaster Flash, and their transformation of a turntable from a machine for reproducing
sound into a musical instrument in its own right (see Sec. 2.1). As mentioned above, AAIM has the
potential to be controlled using any number of interfaces, the only criterion being that the interface
is capable of transmitting digital information which is within, or can be scaled to values within,
the required input range (usually 0-1, see Ch. 5). With this in mind, it is conceivable, not only
that a virtuosic performance practice with AAIM could develop using one particular interface, but
that multiple practices could develop, with each practice using a different interface and allowing
users to focus on different elements of the performance, or to use different interactions, but with
AAIM as an intermediary between the interface and sound source in each case.
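The scaling requirement mentioned above is a simple linear mapping; a hypothetical helper (not part of AAIM itself) might look like:

```python
def scale_input(value, in_min, in_max):
    """Linearly scale a controller value into the 0-1 range the AAIM modules expect."""
    return (value - in_min) / (in_max - in_min)

# e.g. a 7-bit MIDI control-change value (0-127) mapped into 0-1
print(scale_input(64, 0, 127))
```

Any interface whose output can be passed through such a mapping, whether MIDI, OSC, or sensor data, satisfies the input criterion described above.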
This shifting of focus is also linked to another fundamental facet of the AAIM approach, the
previously mentioned broad musical problem space with which the system engages. As noted
above, AAIM does not attempt to emulate any one particular style, but instead affords a wide variety
of manipulations applicable across a broad number of styles. In other words, AAIM specifically
focusses on the ‘fuzzy divide’ between various electronic music practices (see Ch. 2.1). Evidence
of this is apparent in the variety of styles presented in Chapter 6, for example the contrast in style,
usage, and approach between “There is pleasure...”2 (6.1) and under scored 3 (6.2), or between
5Four4 (6.3) and radioAAIM5 (6.4.3). However, these pieces only scratch the surface of ways
AAIM can be used.
4.3 Design Philosophy
In addition to the ideas discussed above, other ideals and concepts also inspired the design phi-
losophy of the AAIM system. Firstly, AAIM makes no major differentiation between disparate
musical materials. This approach is evident on multiple levels within the system. At a surface
layer, AAIM treats all materials of one kind in the same manner. For example, exactly the same
rules are used by the AAIM.patternVary module to govern the variation of a pattern of triggers
in 5/4 as are used to vary a pattern in 4/4. Likewise, when the AAIM.melodyVary module receives a
melodic line there is no additional logic governing the variation of a melodic line in the whole-tone
scale6 than there is governing the variation of a melody in ‘C Major.’7
On a deeper level, while each module affords variations unique to the type of musical ma-
terials that it engages with, the fundamental approach to how each module varies materials is
the same. This underlying similarity is based on another feature of the AAIM design philoso-
phy, that rhythm is fundamental and, therefore, AAIM.rhythmGen is the primary module of the
system (see Fig. 4.2). Although each module can work if disconnected from the rest of the sys-
2https://soundcloud.com/ps_music/there-is-pleasure
3https://soundcloud.com/ps_music/under_scored-aspect_demo
4https://soundcloud.com/ps_music/5four_demo_090415
5https://soundcloud.com/ps_music/radioaaim-2-cheesencrackers
6A scale of pitches that are each 2 semitones, 1 whole-tone, apart, e.g.: C, D, E, F#, G#, Bb
7A scale containing the notes: C, D, E, F, G, A, B
tem, each has been designed with the intention that they connect to the AAIM.rhythmGen mod-
ule, and the most basic variations each module facilitates involves mapping the triggers output
by the AAIM.rhythmGen module to the materials input by the user. This can be seen in how the
AAIM.loopVary module maps triggers to the playback of a sample, shifting playback to the begin-
ning of the ‘grain’ closest to the current position, as output by the AAIM.rhythmGen module, and
how the AAIM.patternVary module varies a pattern, outputting triggers from the nearest beat in the
pattern to the current position.
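This 'nearest position' mapping can be sketched as follows (the function name is hypothetical; AAIM's actual implementations are detailed in Chapter 5):

```python
def nearest_index(position, onsets):
    """Index of the pattern onset closest to the current playback position."""
    return min(range(len(onsets)), key=lambda i: abs(onsets[i] - position))

# Pattern onsets (in beats) entered by the user; a rhythmGen trigger arrives at beat 1.6.
pattern = [0.0, 0.5, 1.0, 1.5, 2.5, 3.0]
print(pattern[nearest_index(1.6, pattern)])   # → 1.5
```

The same lookup serves both cases described above: for AAIM.patternVary the result selects which beat of the pattern to sound, while for AAIM.loopVary it selects the 'grain' of the sample from which playback resumes.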
Figure 4.2: Expansion of Figure 4.1, illustrating the fundamental role played by the AAIM.rhythmGen module in triggering each of the other primary modules.
While AAIM treats all musical materials in similar ways, it also engages as broad a musical
problem space as possible, and therefore must acknowledge that some variations will be better
suited to some styles than others. As such, the nature of the variations is always at the user’s
discretion. This is demonstrated by the fact that none of the weights governing an individual
AAIM.rhythmGen voice’s choice of i-o-i (inter-onset-interval) are hardcoded. Consequently, not
only can different pieces/performances use different rhythmic subdivisions, but individual voices
can generate patterns using a unique collection of i-o-i’s. This also applies to the overall prob-
ability of variation in the i-o-i used in, or the insertion of rests into, the rhythmic lines of each
voice. As such, the AAIM.rhythmGen module not only allows the fine tuning of rhythmic varia-
tions on a macro level, but also allows the fine tuning of variables for each rhythmic line. Thus
AAIM.rhythmGen facilitates every scenario from: an ensemble of voices creating an unvarying
rhythmic line, to one varying rhythmic line performed by an entire ensemble, to one ‘soloist’
varying its rhythmic line while the rest of the ensemble performs their original material without
variation, to an entire ensemble of voices each producing their own rhythmic lines. This applies
equally to each voice of the AAIM.patternVary, AAIM.loopVary and AAIM.melodyVary modules,
i.e. each voice can reproduce the materials input by the user, all can vary their material in unison,
or each voice can use discrete settings for each variable, and at no time are any variation methods
enforced by the system.
In their discussion of mapping strategies for computer music interfaces, Rovan et al. propose
three possible categories:
• One-to-One Mapping: The simplest, but generally least expressive, mapping strat-
egy, whereby each input controls one musical parameter.
• Divergent Mapping: One input value controls multiple parameters. Although ini-
tially more expressive than one-to-one mapping, they note that it may nevertheless
prove limited as it does not allow control over any micro features.
• Convergent Mapping: Multiple controls combine to control one parameter. Al-
though initially harder to master than other mapping strategies, ultimately it proves
to be the most expressive approach [145, 69].
Although Rovan et al. originally proposed these categories as strategies for mapping the output
of an interface to some sound source, they are equally applicable when determining strategies for
mapping input to algorithmic systems. As AAIM triggers all micro events, it makes no attempt
to provide any opportunities for one-to-one mapping, although it is possible to include one-to-one
mappings that bypass the system. Instead, AAIM focusses entirely on divergent and convergent
mapping possibilities. Examples of divergent mapping include the global variables for Complex-
ity and Rests, which determine the maximum variation of i-o-i’s and probability of rests for all
AAIM.rhythmGen voices simultaneously. As mentioned above, the AAIM.rhythmGen module also
affords discrete Complexity and Rests values for each voice, resulting in convergent mapping,
where both a discrete and a global value combine to determine the result. The use of convergent map-
ping is even more pronounced during the process of a voice choosing different i-o-i’s, as the chance
of each i-o-i occurring is determined by an individual variable in addition to the Complexity value;
thus, numerous variables combine to produce the parameters that determine the variation in i-o-i.
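The coupling of a single global control with per-voice values can be sketched in a few lines of Python. This is a hypothetical reconstruction for illustration only; the AAIM system is not implemented this way, and the function and voice names are mine:

```python
# Illustrative sketch (not the thesis implementation): one global 'complexity'
# control drives every voice at once (divergent mapping), while a per-voice
# maximum refines each voice's result (convergent mapping).

def effective_complexity(global_complexity, voice_max):
    """Couple the global value with a per-voice maximum (both in 0.0-1.0)."""
    return global_complexity * voice_max

# One global control affects every voice simultaneously...
voice_maxima = {"kick": 1.0, "snare": 0.5, "hats": 0.25}
settings = {v: effective_complexity(0.8, m) for v, m in voice_maxima.items()}
# ...but each voice still ends up with its own fine-tuned value.
print(settings)
```

A single knob thus gives immediate results, while the per-voice maxima remain available to experienced users.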
An additional bonus of this focus on divergent and convergent mapping is that it facilitates both
a ‘low threshold’ and ‘high ceiling.’ In other words, divergent mappings, such as the global values
for Rests and Complexity, allow users to immediately get results from the system, while convergent
mappings, such as the discrete voice-specific values for Rests and Complexity, allow experienced
users to fine tune these results to their own specifications. In turn, this facilitates many of the
previously discussed features and goals of the system, such as the shift in focus from technical
virtuosity to musicality.
4.3.1 Algorithmic Approach
As mentioned above, the algorithms within the AAIM system are based primarily on Mathematical
Models. As discussed in Section 3.1.1, algorithms based on Mathematical Models tend to be
less complex than other algorithmic approaches, and are therefore more suitable to use in live
performance. In addition to this, the fact that they do not require hardcoding of musical rules means
that they also facilitate the use of far broader, less genre specific, approaches. For these reasons,
the algorithms used within the AAIM system are primarily based on Mathematical Models, and
more specifically weighted random choices. The AAIM.rhythmGen module, for example, uses two
main mathematical models to generate rhythmic lines:
• The indispensability (5.1.4) algorithm, which determines the velocity of each trig-
ger, and the likelihood of rests occurring in the rhythmic line, and
• The ioiChooser (5.1.3), which determines the i-o-i’s used by each
AAIM.rhythmGen voice.
In both cases, outputs of these algorithms are determined by weighted random choices. The in-
dispensability algorithm first determines the grouping of beats in the bar, either by analysis of
the input materials, direct user input, or random choice. All of these methods of determining the
grouping rely on dividing the total number of beats into subgroups of 2 or 3 beats, and then further
grouping each stratification of subgroups into groups of 2 or 3. At each level, a simple collection
of just two rules determines the importance, indispensability, of the beats, and thus the likelihood
of rests occurring on said beats. The velocity of triggers occurring on each beat is then determined
by simply scaling the indispensability values. This approach allows the creation of rhythmic lines
with any number of beats, and with a number of possible groupings for each. This subgrouping of
total beats into groups of 2 and 3 is typical in much Western music. For example, 5/4 is often played
as 10 1/8th notes, grouped as 3 + 3 + 2 + 2.8
Similarly, the ioiChooser algorithm takes an equally broad approach. A voice repeats its chosen
i-o-i until the i-o-i synchronises with the underlying pulse, and the choices are limited only by
the remaining beats in the phrase, the grouping of beats used by the indispensability algorithm,
and user-defined variables. The algorithm thus ensures that all of the independent rhythmic lines
generated synchronise, both with the underlying pulse and with each other, and that the variations
in i-o-i emphasise the underlying grouping. This is also typical of much Western Music. For
example, jazz ‘standards’ often consist of phrases of 4, 8, or 16 bars, and the performers will almost
always synchronise at those points, often marked by the drummer performing a cymbal crash, and
emphasise the underlying grouping (usually 2 + 2 + 2 + 2 with a swing feel). However, they
are also free to perform different rhythmic subdivisions that do not interfere with these moments8
8For example Take Five by the Dave Brubeck Quartet, Seven Days by Sting, or the Mission Impossible Theme Song
by Lalo Schifrin.
of synchronisation. Variations from these underlying moments of synchronisation are rare, and
generally used intentionally to add tension, as is the use of rhythmic subdivisions that cross these
points. The repetition of a rhythmic subdivision until it synchronises with the underlying
pulse is also typical, while it is rare for a musician to suddenly jump from one rhythmic subdivision
to another, e.g. a triplet to a quintuplet, before completing the first.9
Each of the other primary modules takes a similarly broad approach to varying the materials
input by the user. The AAIM.patternVary module (see Sec. 5.2) maps each AAIM.rhythmGen trig-
ger to the nearest beat in the pattern, and as such all variations in i-o-i result in variations of the
original pattern. The additional Extra feature afforded by the module allows the insertion of addi-
tional triggers into the pattern, the probability of which is determined, for each voice individually,
by the number of triggers in the pattern and a user set variable. Again, this is a broad and non-
genre specific approach that ensures that variations are the result of user input. Similarly, the
AAIM.loopVary module (see Sec. 5.3) also maps each AAIM.rhythmGen trigger to the beginning
of the nearest ‘subdivision’ of the sample. Although the resulting stuttering effect is common to
many genres of sample-based music, e.g. ‘scratching’ in Hip-Hop, it is by no means unique to
any. Each of the additional variation methods afforded by the AAIM.loopVary module, reversing
subdivisions, replaying/retriggering subdivisions, jumping to a different point in the sample, and
creating new sequences using the subdivisions, are equally non-genre specific. As with the Extra
feature of the AAIM.patternVary, the AAIM.loopVary module determines whether or not to apply
these variations using user-defined variables.
As with the AAIM.patternVary and AAIM.loopVary modules, the AAIM.melodyVary module
(see Sec. 5.4) initially maps triggers from the AAIM.rhythmGen module to the nearest beat in
the user-defined melody. However, the AAIM.melodyVary module affords more detailed variations
than either of the modules mentioned above. The AAIM.melodyVary
module first analyses the melodic line to determine its intervallic content. Although the analysis
does record chromatic intervals, it also groups chromatic intervals together, so that, for example,
9The music of Frank Zappa, such as The Black Page, is one example of an exception to this.
both a semitone and a tone are considered a “2nd,” and both three and four semitones are regarded
as a “3rd” etc., giving a list of diatonic intervals. The analysis also records the trajectory of each
intervallic jump, that is whether the pitch rose or fell. The system is then capable of inserting addi-
tional or alternative notes into the melodic line, limiting its choices to those that fit the underlying
scale of the melodic line, and the results of the analysis. Although this restricts the system with
regards to facilitating ideas like “start using more Major 3rds (4 semitones) than Minor 3rds (3
semitones),” it does have advantages over a fully chromatic approach. Firstly, a fully chromatic
approach would, at the very least, require either separate variables for each chromatic note and in-
terval, or enough material to determine the relationships between each chromatic note. In contrast,
the AAIM.melodyVary module only needs the melodic line itself and a variable for the chance of
inserting additional notes. Combined with the direct link between the varied output and the user’s
input, this means that AAIM affords a far broader range of styles with far more ease. For example,
if a user gives the module a melodic line containing only small intervals, e.g. “2nds” and “3rds,”
the additional and alternative note choices will also retain this feature. Similarly, when varying a
melodic line containing only large intervals, e.g. “6ths” and “7ths,” the AAIM.melodyVary module
will only insert additional or alternative notes that retain this feature.
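The kind of intervallic analysis described above can be sketched as follows. This is a minimal Python reconstruction, not the thesis code: the bucket boundaries (1–2 semitones as a “2nd,” 3–4 as a “3rd,” and so on) follow the text, but the function name, the clamping of very large intervals, and the handling of repeated notes are my assumptions:

```python
# Hypothetical sketch of an intervallic analysis: chromatic intervals are
# bucketed into diatonic classes ("2nd", "3rd", ...) and the direction
# (up/down) of each jump is recorded.

DIATONIC_CLASS = {1: "2nd", 2: "2nd", 3: "3rd", 4: "3rd", 5: "4th",
                  6: "4th", 7: "5th", 8: "6th", 9: "6th", 10: "7th", 11: "7th"}

def analyse_melody(midi_notes):
    intervals = []
    for prev, curr in zip(midi_notes, midi_notes[1:]):
        jump = curr - prev
        if jump == 0:
            continue  # repeated note: no interval to record (an assumption)
        # clamp to within an octave (illustrative simplification)
        intervals.append((DIATONIC_CLASS[min(abs(jump), 11)],
                          "up" if jump > 0 else "down"))
    return intervals

# C-D-E-G-E: a tone, a tone, three semitones up, then three semitones down
print(analyse_melody([60, 62, 64, 67, 64]))
# → [('2nd', 'up'), ('2nd', 'up'), ('3rd', 'up'), ('3rd', 'down')]
```

A melody containing only small intervals thus yields a list containing only “2nds” and “3rds,” which in turn constrains the inserted notes to retain that feature.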
Each of the other variation methods afforded by the AAIM.melodyVary module, playing the
melody backwards (retrograde), inverting or expanding the intervals of the melody, and creating
sequences from fragments of the melody, are also limited to the scale of the underlying pattern
and their probability is determined by a user-defined variable. Again, this limits the system with
regards to some compositional approaches, although this limitation can be diminished by changing
the underlying scale to, for example, a chromatic scale. However, by treating all melodic lines as
fundamentally ‘tonal,’10 and then affording the use of chromatic, and other, scales, it is argued that
the AAIM.melodyVary module facilitates a far broader range of styles than would be afforded if,
for example, the chromatic scale was used as the basis for all variations regardless of user input.
10In this sense ‘tonal’ is intended to imply the use of any fixed collection of 12 or fewer pitch classes, rather than indicating the use of the Western ‘Major’ or ‘Minor’ scales in particular.
4.4 Improvisation with AAIM
This section situates the AAIM system within the context of the improvisational approaches pre-
sented in Section 2.2.
4.4.1 Composition - Improvisation Continuum
As discussed in Section 2.2.1, improvisation and composition should not be thought
of as separate processes, but should instead be viewed as a “continuous spectrum” [43, xii]. This
belief is evident on many levels within the design and approach of the AAIM system, e.g. the fact
that the user needs to input some musical material before the system outputs any music, i.e. users
must ‘compose’ some music before the system allows them to improvise.
Dean’s division of ‘pure’ and ‘applied’ improvisation (see Sec. 2.2.1) is also of particular im-
portance to the AAIM approach to improvisation, and further demonstrates the system’s dedication
to treating composition and improvisation as “part of the same idea” [120, 6]. The avoidance of
stylistically distinct algorithms allows users to focus on their own compositional ideas, rather than
forcing them to work within the limits of an existing style. The ability to fine tune the variables
for each individual voice facilitates a significant degree of control over the variations. In addition,
the intuitive mapping allows users to work in the knowledge that although the variations may dif-
fer, similar settings will always produce similar results. As such, users can explore compositional
ideas, without the need to record every moment in fear of missing a singular event that will never
happen again. Consequently, although the system is designed to facilitate ‘pure’ improvisation,
it is equally adept in ‘applied’ situations. While one could claim this of most algorithmic music
systems, the ‘stylistically neutral’ nature of the algorithms, in addition to the mapping strategies
used, makes this especially true of the AAIM system.
4.4.2 Referent-based Improvisation
Improvisation with AAIM is explicitly referent-based, while simultaneously being non-referential
with regards to any singular style or approach. At the most basic level, this is evident in the
use of an underlying phase ramp, upon which the AAIM.rhythmGen module creates all of the
rhythmic variations. Each of the pieces presented in Chapter 6 display this feature, however, a
number also demonstrate methods for subverting this same fact. For example, in 5Four11 (6.3) the
underlying phase ramp is subverted through the use of a metric modulation from 10 equally spaced
pulses during a bar, to the impression of 4 equally spaced pulses over an equal amount of time.
5Four also features the use of ‘cross-rhythms,’ resulting in 7, 8, and 12 beats being played over
the underlying 10 pulse pattern. The subversion of the basic pulse is also evident in BeakerHead,
which is based solely on live recordings of the audience, each of which has an arbitrary length and
rhythmic character. This results in numerous sections where the underlying pulse is concealed by
the texture of overlapping samples.
The reliance on referents or models also reveals itself through the use of user-defined musical
materials. As has already been discussed, all of the variations created are based on the input
musical materials. Improvisation with AAIM is, therefore, similar in nature to improvisational
practices based on embellishing pre-composed musical materials, such as a jazz improviser does
when improvising melodic lines over a harmonic progression, rather than generating entirely new
musical materials (see Sec. 2.2).
4.4.3 Improvisation through the re-use, variation, and manipulation of ‘building blocks’
In many cases, the user-defined materials that serve as the foundation for the variation-based im-
provisation with AAIM, also provide the system with information about the nature of the embellish-
ments/variations that occur. For example, the analysis of user-defined patterns by the AAIM.patternVary mod-
ule provides information about rhythmic emphases in the materials, which is then used to determine
11https://soundcloud.com/ps_music/5four_demo_090415
the likelihood of rests being inserted into the pattern, and also influences the choice of i-o-i (see
Sec. 5.2). Similarly, the notes chosen by the AAIM.melodyVary scalar note method are deter-
mined via analysis of the intervallic relationships within melodic lines provided (see Sec. 5.4). In
both cases, the ‘referents’ upon which the improvisations are based can also be seen to act as the
“building blocks” [120, 14] that determine the nature of the improvisation.
Beyond this, each of the individual algorithmic methods presented in Chapter 5 is a ‘building
block’ in itself. However, unlike the ‘building blocks’ often learnt by instrumental musicians,
e.g. the collection of “licks” [13, 102] one particular improvising jazz musician may play, in
AAIM these ‘building blocks’ are as vague as possible. That is, rather than specifying a sequence of
pitches from a specific scale that can be played over a specific harmonic progression, they specify
a method for varying any sequence of pitches, from any scale, over any harmonic progression.
The rhythmic variations generated are similarly unbounded, i.e. there is no codification of specific
patterns that fit specific rhythmic structures, and instead all are applicable within any rhythmic
structure.
Chapter 5
Software Portfolio
This chapter contains a description of the underlying algorithms used by the primary modules
of the AAIM system. Section 5.1 presents the AAIM.rhythmGen module, an algorithmic method
for generating any number of independent but interlocking rhythmic lines within a user-defined
metric framework, i.e. tempo, time signature, and the number of bars within the phrase. The
AAIM.rhythmGen module serves as the foundation for the entire AAIM system, and its output is
used to trigger musical events by all of the other modules.
Section 5.2 discusses the AAIM.patternVary module, an algorithmic module that uses the out-
put of the AAIM.rhythmGen module to vary a user-defined pattern of musical events, e.g. a drum
pattern. The AAIM.loopVary module, a module that maps the output of the AAIM.rhythmGen mod-
ule onto the playback of a sample, facilitating the variation of recorded material, is then pre-
sented in Section 5.3. Section 5.4 describes the AAIM.melodyVary module, a continuation of
the methods used by the AAIM.patternVary and AAIM.loopVary modules, which exploits the lim-
ited number of relationships within the 12 note chromatic scale, in addition to the output of the
AAIM.rhythmGen module, to afford a broad range of variations of user-defined melodic materials.
Section 5.5 presents the AAIM.app application, a standalone application for Mac OS. The
AAIM.app application allows users to explore all four of the primary modules discussed in Sec-
tions 5.1, 5.2, 5.3, and 5.4, and also contains additional features that facilitate the creation of entire
pieces of music using just the application (for example see Sec. 6.4.3 and 6.4.6).
5.1 AAIM.rhythmGen
AAIM.rhythmGen1 is an implementation of several algorithms designed to enable users to generate
and manipulate rhythmic patterns, within metric frameworks, during live musical performances.
5.1.1 Input and Interaction
As with all of the modules in the AAIM system, AAIM.rhythmGen is designed to incorporate both
‘divergent’ and ‘convergent’ mapping strategies (see Sec. 4.3), thus minimising the number of
controls needed during live performance, while still affording precise control over the nature of the
variations created. The module requires the user to set some initial variables:
• The metric framework within which the rhythmic lines are created, consisting of:
1. The number of beats in each bar, and
2. The number of bars in the phrase
• The chance, for each voice, of each i-o-i being used (see Sec. 5.1.3).
However, once these values have been set, each voice only requires the user to input 2 variables to
create variations in their rhythmic line:
• The ‘complexity’ of the voice’s rhythmic line (see Sec. 5.1.3).
• And the chance of ‘rests’ being inserted into the rhythmic line (see Sec. 5.1.4).
This can be simplified further by using a list of maximum ‘complexity’ and ‘rest’ values for each
voice, and using a master scale for each - thus lowering the total number of variables needed for
‘live’ performance of the algorithm to just 2 (see Sec. 5.5).
1https://www.youtube.com/watch?v=eoj4ZDR5FSI&feature=youtu.be
Figure 5.1: Example of a two bar phrase output by the rhythm generator. Blue arrows signify the choosing of a new i-o-i, and red arrows signify triggers output by the rhythm generator.
Figure 5.2: Output of Fig 5.1 in musical notation, with the ‘base’ i-o-i equalling one 1/8 note.
i-o-i | Repetitions of ‘base’ to synchronise | Repetitions of i-o-i to synchronise | i-o-i ratio
1/4   | 2 | 1 | 2.0
1/6   | 4 | 3 | 1.333
1/8   | 1 | 1 | 1.0
3/16  | 3 | 2 | 0.75
1/12  | 2 | 3 | 0.666
1/16  | 1 | 2 | 0.5
Table 5.1: Examples of the relationships between various i-o-i’s and the number of repetitions of both the ‘base’ i-o-i and the i-o-i itself necessary for synchronisation, using one 1/8th note as the ‘base’ i-o-i.
5.1.2 Trigger Output
The AAIM.rhythmGen module requires a repeating phase ramp equivalent in time to the length of
one ‘base’ inter-onset-interval (i-o-i).2 The module treats the ‘base’ i-o-i as 1.0, and every other i-
o-i as some multiplier, allowing the AAIM.rhythmGen to create any i-o-i, irrespective of the length
of the ‘base’ i-o-i (i.e. the tempo).
At the global level, the AAIM.rhythmGen module converts the repeated phase ramp into a
‘position’ within the metric framework:
• Position = (current beat + phase) % (number of beats * number of bars), where:
– Current beat is incremented at the beginning of each phase
The module then uses this position to facilitate the choosing of i-o-i’s as described below (5.1.3).
When a voice chooses a new i-o-i, it records the position within the rhythmic framework. It then
subtracts this beginning position from the current position, allowing each voice to treat its position
as being a distance from 0 (the beginning of its current i-o-i) for the purpose of outputting triggers,
whilst still retaining the rhythmic framework.
Having determined the distance from ‘0,’ each AAIM.rhythmGen voice then uses a modulus of
the ratio of:
2An ‘inter-onset-interval’ is the length of time between successive musical triggers or events.
• The repetitions of the ‘base’ i-o-i that are necessary for synchronisation, to
• The repetitions of the chosen i-o-i necessary for synchronisation (See Tab. 5.1 and
Fig. 5.1)
At each wrap around point, the voice outputs triggers in the form:
• 〈Voice number, velocity, i-o-i, current position〉,
and a new i-o-i is chosen when the wrap around point synchronises with the beginning of a phase
ramp. This approach allows the AAIM.rhythmGen module to create any i-o-i, provided it is re-
peated until it synchronises with the ‘base’ i-o-i.
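The repetition-and-ratio arithmetic above can be sketched in a few lines of Python. This is a minimal reconstruction for illustration, not the Max implementation; the function name is mine, and timing is expressed in ‘base’ i-o-i units rather than as a running phase ramp:

```python
# A chosen i-o-i with ratio r = (repetitions of 'base' to sync) /
# (repetitions of the i-o-i to sync) fires a trigger each time the distance
# from the start of the i-o-i passes a multiple of r, until the i-o-i
# resynchronises with the 'base' i-o-i.

def trigger_positions(base_reps, ioi_reps):
    """Positions (in 'base' i-o-i units) at which an i-o-i fires its triggers."""
    ratio = base_reps / ioi_reps
    return [k * ratio for k in range(ioi_reps)]

# A 1/6-note i-o-i over an 1/8-note base (Tab. 5.1: 4 base reps, 3 i-o-i reps)
print(trigger_positions(4, 3))   # → [0.0, 1.333..., 2.666...]
# After 4 base i-o-i's the voice is back in sync and may choose a new i-o-i.
```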
5.1.3 ioiChooser
In order for each voice of the AAIM.rhythmGen to create different rhythmic lines, the algorithm
requires a list of i-o-i’s to choose from, in the form:
〈repetitions of ‘base’ i-o-i, repetitions of the i-o-i needed to synchronise, relative complexity of the
i-o-i〉 (see Tab. 5.1).
Figure 5.3 demonstrates the mathematical method by which the AAIM.rhythmGen creates a list of i-
o-i’s consisting of any n equally spaced triggers across any m beats, while Figure 5.4 demonstrates
the AAIM.rhythmGen method for determining the relative complexity of each i-o-i. In practice,
however, there is no advantage to using either of these algorithms in real-time, and as such both
were used to generate a list of i-o-i’s and their respective complexities that is stored in the code
and loaded at launch.
To choose a new i-o-i (see Fig. 5.5), the AAIM.rhythmGen voice first discounts all of the i-o-i’s
that require more ‘base’ i-o-i’s to synchronise than the remaining beats in the phrase. Then the
algorithm scales the user set chance, or weight, for each i-o-i by: that i-o-i’s complexity level, the
complexity of the output of each of the other voices, and the maximum complexity allowed for
that voice as set by the user. The voice then further scales these weights to increase the likelihood
Figure 5.3: Simple example of the AAIM.rhythmGen ‘ioiMaker’ method.
Figure 5.4: Examples of the AAIM.rhythmGen ‘complexity determination’ method.
Figure 5.5: The decision-making process used by each AAIM.rhythmGen voice choosing new i-o-ivalues and outputting triggers.
of i-o-i’s that synchronise with the grouping of the current time signature (see Sec. 5.1.4). A user-
defined value ‘gScale’ determines the extent to which this grouping scale is enforced. Once all of
the i-o-i weights have been scaled accordingly, the algorithm removes any i-o-i with a weight of
‘0.0’ and uses the remaining list of weighted i-o-i’s as a weighted probability table from which it
chooses a new i-o-i.
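The weighted-choice step can be sketched as follows. This is an illustrative simplification with hypothetical names: the real scaling also involves the other voices’ output and the ‘gScale’ grouping weight, and the exact complexity formula here is my assumption:

```python
import random

# Candidates that cannot resynchronise within the remaining beats are
# discounted; the rest are scaled by complexity and drawn from a weighted
# probability table.

def choose_ioi(candidates, remaining_beats, complexity, rng=random):
    """candidates: list of (base_reps, ioi_reps, ioi_complexity, user_weight)."""
    weighted = []
    for base_reps, ioi_reps, ioi_complexity, weight in candidates:
        if base_reps > remaining_beats:
            continue               # cannot sync before the phrase ends
        # more 'complex' i-o-i's only become likely as complexity rises
        w = weight * max(0.0, complexity - ioi_complexity + 1.0)
        if w > 0.0:
            weighted.append(((base_reps, ioi_reps), w))
    choices, weights = zip(*weighted)
    return rng.choices(choices, weights=weights)[0]

candidates = [(1, 1, 0.0, 1.0),   # the 'base' i-o-i itself
              (4, 3, 0.5, 1.0),   # a triplet-style i-o-i
              (8, 5, 0.9, 1.0)]   # a quintuplet needing 8 beats to sync
print(choose_ioi(candidates, remaining_beats=4, complexity=0.6))
```

With only 4 beats remaining, the quintuplet is discounted before the weighted draw, guaranteeing synchronisation at the end of the phrase.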
5.1.4 Indispensability
The AAIM.rhythmGen module determines both the velocity of individual triggers and the possibil-
ity of rests being included in the final rhythm using a modification of Barlow’s beat indispensability
algorithm [10], “indispensability” being the importance of each individual beat within a specific
time signature. Barlow’s original algorithm stratifies the number of beats in each bar according
to prime factors of that number, using an additional algorithm to determine the ‘indispensability’
for each beat in a prime number. Rather than approach time signatures as a prime factor problem,
the AAIM.rhythmGen algorithm instead stratifies every time signature, and thus every number, as
groups of 2 and 3, which are in turn grouped in 2s and 3s, continuing until only a single group of
2 or 3 remains.
The AAIM indispensability algorithm first groups the number of beats in the bar by randomly
choosing 2 or 3, storing each selection, until the initial number minus the sum of the random
choices is equal to 4 or less. At this point, the only choices remaining are: 〈2,2〉 (if the remainder
equals 4), 〈3〉, or 〈2〉 (see Fig. 5.6). The advantage of this approach is that it allows the creation
of multiple groupings for any number larger than 4. For example, in Barlow’s algorithm 8 beats
can only be grouped as 〈2,2,2,2〉, while grouping in 2’s and 3’s also allows groupings of 〈2,3,3〉,
〈3,2,3〉, and 〈3,3,2〉.
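The grouping procedure just described can be sketched in Python; a minimal reconstruction of the ‘genGrouping’ idea of Figure 5.6, not the thesis code:

```python
import random

# Randomly peel off groups of 2 or 3 until 4 or fewer beats remain, then
# close with <2,2>, <3>, or <2> as described in the text.

def gen_grouping(n_beats, rng=random):
    groups = []
    while n_beats > 4:
        g = rng.choice([2, 3])
        groups.append(g)
        n_beats -= g
    groups.extend({4: [2, 2], 3: [3], 2: [2]}[n_beats])
    return groups

grouping = gen_grouping(8)
print(grouping)   # e.g. [2, 2, 2, 2], [2, 3, 3], [3, 2, 3], or [3, 3, 2]
```

Running this repeatedly for 8 beats yields exactly the four groupings listed above, illustrating the advantage over a fixed prime-factor stratification.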
A secondary advantage of this approach is that it requires only two indispensability rules,3 one
rule for a group of 2, and one rule for a group of 3, rather than a separate set of indispensabilities
for every prime number. The rules used by the AAIM.rhythmGen module are:
• Indispensability of 〈1,0〉 for a group of 2, i.e. an indispensability of 〈1〉 for the first
beat in a group of 2, and 〈0〉 for the second.
• Indispensability of 〈1,0,0.5〉 for a group of 3.
Once the indispensability for each individual beat is determined, the algorithm repeats the process,
grouping the resulting number of values from the previous iteration, and adding the indispensability
for each group to the first beat of that group, repeating the entire process until only a single group
remains (see Fig. 5.7). Finally, the indispensability of each beat is normalised between 0 and 1.
This gives the indispensability for each beat in a bar of n beats, and the AAIM.rhythmGen module
then scales these values (e.g. between 1 and 127 for MIDI) to determine the velocity of individual
triggers.
3To expand the possibilities of the algorithm, 3 rules are actually used, allowing groups of 1 to be enforced manually. This is not strictly necessary for any time signature with more than 1 beat, and, as such, the algorithm does not create groups of ‘1’ unless specified by the user.
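The two-rule process can be sketched as follows. This is a simplified reconstruction that handles a single stratification level above the beats; the relative weighting of the levels is my assumption rather than the thesis specification:

```python
# Apply the rules <1,0> (groups of 2) and <1,0,0.5> (groups of 3) to the
# beats, add the group-level indispensability to the first beat of each
# group, then normalise between 0 and 1.

RULES = {2: [1.0, 0.0], 3: [1.0, 0.0, 0.5]}

def indispensability(grouping):
    values = []
    for g in grouping:                     # level 1: beats within each group
        values.extend(RULES[g])
    # level 2: treat the groups themselves as the beats of one group
    starts = [0]
    for g in grouping[:-1]:
        starts.append(starts[-1] + g)
    top = RULES.get(len(grouping), [1.0] + [0.0] * (len(grouping) - 1))
    for start, v in zip(starts, top):
        values[start] += v
    hi = max(values)
    return [v / hi for v in values]

# An 8-beat bar grouped as <3,2,3>
print(indispensability([3, 2, 3]))
```

The normalised values can then be rescaled (e.g. to 1–127) to give MIDI velocities, as described above.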
Figure 5.6: A schematic diagram demonstrating two alternative groupings created by the AAIM.rhythmGen ‘genGrouping’ algorithm using the same input.
The AAIM.rhythmGen module uses this algorithm across three rhythmic levels: to determine
the indispensability of each beat in a tuplet, the indispensability of each beat in a bar, and the in-
dispensability of each bar in a phrase of bars, with the indispensability of individual beats within
tuplets and for whole bars within phrases being scaled to have a lesser effect on the output than the
indispensability of beats within a bar. At each trigger point the AAIM.rhythmGen voice multiplies
these three values, giving the overall ‘indispensability’ with respect to phrase, bar, and tuplet. The
voice then compares this value to a random number between 0.0 and 1.0, and if the ‘indispens-
ability’ is greater than the random number a trigger is output, otherwise a rest is inserted (see Fig.
5.5). The aforementioned ‘rests’ value enables users to scale the impact of the indispensability,
increasing the lower bound of each indispensability to above 0.0 while keeping the upper bound at
1.0, thus making the insertion of rests less likely.
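The trigger-or-rest test can be sketched as follows; the names are mine, and the exact way the user-facing ‘rests’ control is rescaled is an assumption consistent with the description above:

```python
import random

# The tuplet, bar, and phrase indispensabilities are multiplied, the 'rests'
# value raises the lower bound toward 1.0 (making rests less likely), and the
# result is compared against a uniform random number.

def fires(tuplet_ind, bar_ind, phrase_ind, rests, rng=random):
    indispensability = tuplet_ind * bar_ind * phrase_ind
    # rests = 1.0 -> never rest; rests = 0.0 -> raw indispensability
    scaled = rests + (1.0 - rests) * indispensability
    return scaled > rng.random()   # True: output trigger; False: insert a rest

print(fires(1.0, 0.75, 1.0, rests=0.5))
```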
5.1.5 Additional Features
To expand the possibilities of the AAIM.rhythmGen module two additional features were included.
5.1.5.1 Tempo Changes
Each voice of the AAIM.rhythmGen module can use a different tempo. A voice achieves this by
multiplying the current position by the desired tempo change, and using a modulo value of the
number of beats per bar multiplied by the number of bars to ensure that the length of the rhythmic
phrase remains the same.
5.1.5.2 Timing Variations
The module also allows the timing of each rhythmic line to be offset from the repeating phase ramp
by adding or subtracting the desired deviation, set by the user, to the voice’s current position, again
using a modulo of the total number of beats in the phrase.
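Both features reduce to the same modulo arithmetic, which can be sketched as follows (an illustrative reconstruction, assuming a position measured in ‘base’ i-o-i units across the whole phrase):

```python
# A per-voice tempo multiplier and a timing offset are both folded back into
# the phrase length with a modulo, so every voice still repeats over the
# same span as the rest of the ensemble.

def voice_position(global_pos, beats_per_bar, n_bars, tempo_mult=1.0, offset=0.0):
    phrase_len = beats_per_bar * n_bars
    return (global_pos * tempo_mult + offset) % phrase_len

# A voice at double tempo, nudged 0.25 of a base i-o-i late, in 2 bars of 4:
print(voice_position(7.0, 4, 2, tempo_mult=2.0, offset=0.25))   # → 6.25
```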
5.2 AAIM.patternVary
AAIM.patternVary4 is a multi-voice algorithm inspired by a standard drum machine, and was de-
signed to facilitate the variation and manipulation of patterns of triggers.
Input materials are limited to patterns based on the ‘base’ i-o-i, with each voice essentially
being a monophonic step sequencer with a range of only one note (i.e. each voice’s pattern con-
sists of a series of ‘1’s and ‘0’s, either a note-on or not). Each voice of the AAIM.patternVary is
assigned a corresponding AAIM.rhythmGen voice (see Sec. 5.1), and maps the output of said
AAIM.rhythmGen voice to its own, user-defined, sequence (see Fig. 5.8).
Figure 5.8: The decision-making process used by the AAIM.patternVary when using the output of the AAIM.rhythmGen to vary patterns.
5.2.1 Input and Interaction
The AAIM.patternVary module expects input triggers in the form:
〈Voice number, velocity, i-o-i, current position〉 (i.e. the same as the triggers output by the
4https://www.youtube.com/watch?v=4urUEwgXfjg&feature=youtu.be
AAIM.rhythmGen module)
As with the AAIM.rhythmGen algorithm, AAIM.patternVary is designed to facilitate live perfor-
mance through minimal real-time controls and ‘divergent’ mapping, while still incorporating ‘con-
vergent mapping’ to afford users more precise control over the output. Initially, each voice
requires only a pattern to play/vary; variations then require only a value for ‘extra’ (see Sec. 5.2.3),
in addition to the variations of the output of the AAIM.rhythmGen voice. However, rather than
needing a separate variable to be entered for each voice during performance, the interaction can be
reduced by storing a maximum ‘extra’ value for each voice and using one variable to scale each
voice’s ‘extra’ value between 0 and its maximum (see Sec. 5.5).
5.2.2 Trigger Output and Basic Variations
The mapping of triggers from the AAIM.rhythmGen module, and basic variations of the pattern,
initially consist of a simple check: ‘if the current position in the phrase coincides with a note
in the inputted pattern, output that note.’ In order to insert additional notes into the pattern, the
AAIM.patternVary voice determines the ‘distance’ to the next, or previous, note in the pattern, with
one ‘base’ i-o-i considered ‘1.0.’ The voice then generates another random number between ‘0.0’
and ‘1.0,’ and if this value is greater than or equal to the ‘distance’ a note is output (see Fig 5.8).
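The check described above can be sketched in Python (the AAIM modules themselves are Max patches; the function and argument names here are purely illustrative):

```python
import random

def vary_step(pattern, position, distance_to_note=None):
    """One step of the basic pattern-variation check (a sketch, not the
    AAIM source). `pattern` is a list of 1s and 0s, one entry per 'base'
    i-o-i; `position` is the current step in the phrase."""
    if pattern[position % len(pattern)] == 1:
        return True  # position coincides with a note in the input pattern
    # Otherwise, measure the distance (in base i-o-i units) to the
    # nearest note before or after the current position.
    if distance_to_note is None:
        hits = [i for i, v in enumerate(pattern) if v == 1]
        if not hits:
            return False
        distance_to_note = min(abs(position - i) for i in hits)
    # A note is output when a random value in [0, 1) meets or exceeds
    # the distance, so positions close to an existing note fire often.
    return random.random() >= distance_to_note
```
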
5.2.3 Inserting ‘Extra’ Notes
A user-defined setting for ‘extra’ allows the module to insert additional notes into the pattern. The
module divides the total number of notes for the individual voice by the total number of beats in
the current pattern. When a voice receives a trigger that does not correspond to any note in the
original pattern, it multiplies the value for ‘extra’ by the ratio of notes to beats, and if this value is
greater than a random number between 0 and 1 it outputs an additional note.
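The ‘extra’ check reduces to one line of arithmetic; a minimal sketch, with illustrative names:

```python
import random

def insert_extra(extra, n_notes, n_beats):
    """Sketch of the 'extra' check: when a trigger does not match a note
    in the original pattern, scale `extra` by the voice's note-to-beat
    ratio and compare against a uniform random number in [0, 1)."""
    chance = extra * (n_notes / n_beats)
    return random.random() < chance
```
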
5.2.4 Additional Features
As with the AAIM.rhythmGen module, the AAIM.patternVary module includes two additional features to expand its capabilities.
5.2.4.1 Maximum Notes
The ‘maximum notes’ is a user-set variable that limits the number of simultaneous triggers the AAIM.patternVary module will output. The AAIM.patternVary module first adds the number of simultaneous triggers for each beat in the original pattern and stores the maximum number occurring on one beat. The user-set variable is a multiple of this number, giving the maximum number of notes allowed. Each time a voice outputs a trigger, the AAIM.patternVary module increments the count of current notes. If a voice attempts to output a trigger when this count is greater than the maximum notes allowed, the module ignores the trigger. Fifteen milliseconds after each incrementation, the AAIM.patternVary module decrements the count of current notes. Although this feature was initially included simply to limit the output of the AAIM.patternVary, extreme values can also be used as a musical device: a very low value results in a much thinner texture, while a much larger number can, in conjunction with the ‘extra’ variable (see Sec. 5.2.3), result in a much thicker texture.
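The gate described above amounts to a counter with a delayed decrement; a sketch with illustrative names (in AAIM the decrement fires 15 ms after each increment, here exposed as an explicit `release()` call):

```python
class NoteLimiter:
    """Sketch of the 'maximum notes' gate. `limit` stands in for the
    densest beat of the original pattern multiplied by the user-set
    value; this is an illustration, not the AAIM implementation."""
    def __init__(self, limit):
        self.limit = limit
        self.current = 0

    def try_trigger(self):
        if self.current > self.limit:  # too many simultaneous notes
            return False               # ignore the trigger
        self.current += 1
        return True

    def release(self):                 # called 15 ms after each increment
        self.current = max(0, self.current - 1)
```
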
5.2.4.2 Grouping Determination
The AAIM.patternVary module automatically analyses the input pattern to determine the grouping of the beats within the pattern. For each beat in the pattern, the module adds the total number of notes across each voice. The module automatically considers the first beat important, and also considers each subsequent beat that contains more simultaneous notes than the beats immediately preceding or following it as important. The AAIM.patternVary module then counts the number of beats between each ‘important beat’ and appends each value to a list. The resulting list is passed to the AAIM.genGrouping algorithm (see Sec. 5.1.4), so that the AAIM.rhythmGen module can use the resulting grouping. Thus, the AAIM.rhythmGen module links its own rhythmic lines to the underlying grouping of the pattern input into the AAIM.patternVary.
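The important-beat analysis can be sketched as follows (illustrative names; the treatment of the final beat, closing the cycle at the pattern’s end, is an assumption):

```python
def grouping(note_counts):
    """Sketch of the grouping analysis: `note_counts` holds, per beat,
    the number of simultaneous notes across all voices. Returns the
    number of beats between successive 'important' beats."""
    important = [0]  # the first beat is always considered important
    n = len(note_counts)
    for i in range(1, n):
        prev, nxt = note_counts[i - 1], note_counts[(i + 1) % n]
        if note_counts[i] > prev and note_counts[i] > nxt:
            important.append(i)
    important.append(n)  # close the cycle at the end of the pattern
    return [b - a for a, b in zip(important, important[1:])]
```

For a two-bar pattern whose densest beats fall on every other beat, this yields the even grouping one would expect.
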
5.3 AAIM.loopVary
The AAIM.loopVary5 module is designed to vary the playback of an audio sample or loop (see Fig.
5.9).
5.3.1 Input and Interaction
As with the AAIM.patternVary module, the AAIM.loopVary module expects triggers in the form
output by the AAIM.rhythmGen module. However, an additional variable for ‘pitch shift ratio’
may be appended to each trigger to achieve pitch shifting. The AAIM.loopVary module treats
samples as successive ‘grains’ of equal length, and during playback automatically implements a
fade-in at the beginning and a fade-out at the end of each grain.
The AAIM.loopVary module also requires an audio sample to vary, and the number of equal
segmentations of the sample (‘nBeats’). The length of each segment, ‘beat length,’ is then deter-
mined by dividing the total length of the audio sample by ‘nBeats.’ The module then numbers
each segment sequentially from 0 to (nBeats − 1) (see Fig. 5.10). This approach allows each
voice to easily re-order the segments, as the position of a segment within the sequence can be
changed without altering its label. It also lets users insert rests into the sequence independent
of the AAIM.rhythmGen output by placing a ‘-1’ in the sequence. Each AAIM.loopVary voice
maps triggers received from the AAIM.rhythmGen module, using the ‘current position’ value, to
the position within its sequence rather than the segment number (see Fig. 5.10). The playback
position of the sample then jumps to the beginning of said segment (given by ‘segment number’
* ‘beat length’). Each AAIM.loopVary voice maps any variations in the output of the associated
AAIM.rhythmGen voice caused by i-o-i changes to the closest step in the sequence (see Fig. 5.11).
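The mapping from trigger position to playback position can be sketched as follows (a sketch with illustrative names, not the Max implementation):

```python
def playback_position(sequence, current_position, beat_length):
    """Sketch of AAIM.loopVary's trigger mapping: `sequence` holds
    segment numbers (with -1 marking a user-inserted rest),
    `current_position` is the AAIM.rhythmGen position in beats, and
    `beat_length` is the length of one segment."""
    step = int(round(current_position)) % len(sequence)  # closest step
    segment = sequence[step]
    if segment == -1:
        return None  # a rest: output nothing for this trigger
    return segment * beat_length  # jump to the start of the segment
```

Because the sequence stores segment numbers rather than positions, reordering the sequence changes which segment plays on each beat without relabelling the segments themselves.
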
Each AAIM.loopVary voice has four further variables to facilitate the variation and manipulation of loop playback:
5https://youtu.be/ZLlhloSorFI
Figure 5.9: The decision-making process used by the AAIM.loopVary module when playing and varying samples.
Figure 5.10: Example of how the AAIM.loopVary algorithm segments a sample into n equal parts, and how the module maps triggers onto these segmentations.
Figure 5.11: Example of mapping the output of the AAIM.rhythmGen module onto a sample using the AAIM.loopVary.
5.3.2 Grain Size
The ‘Grain Size’ variable allows samples to be varied by causing the sample to fade out before the end of a grain, creating a gating effect. The user-set variable is a value between 0 and 1, where 1 plays the full length of each grain. Any other value results in the AAIM.loopVary voice choosing a value between x² and √x, where x is the user-set variable, and multiplying this value by the ‘beat length’ to give the grain size.
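Since x² ≤ x ≤ √x for x in [0, 1], the chosen value brackets the user’s setting; a sketch with illustrative names:

```python
import math
import random

def grain_size(x, beat_length):
    """Sketch of the 'Grain Size' computation: for a user-set x in
    (0, 1), pick a random value between x**2 and sqrt(x), then scale
    by the beat length. x = 1 plays the full grain."""
    if x >= 1.0:
        return beat_length
    lo, hi = x * x, math.sqrt(x)
    return random.uniform(lo, hi) * beat_length
```
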
5.3.3 Reverse
The AAIM.loopVary ‘reverse’ method creates variations of sample playback by reversing the playback of individual grains/segments (see Fig. 5.12). The user-set variable sets the chance of the AAIM.loopVary module playing a grain in reverse.
Figure 5.12: Variations of a sample using the AAIM.loopVary ‘reverse’ method.
5.3.4 Retrigger
The AAIM.loopVary ‘retrigger’ method varies the sample playback by forcing the playback position to jump back to the beginning of the last segment played (see Fig. 5.13), resulting in an effect much like a DJ “scratching” a record. Again, the user-set variable determines the chance of the AAIM.loopVary module retriggering a segment.
Figure 5.13: Variations of a sample using the AAIM.loopVary ‘retrigger’ method.
5.3.5 Jump
The AAIM.loopVary ‘jump’ method produces a similar, although more extreme, result to that of
the ‘retrigger’ method. However, rather than only jumping the playback to the beginning of the last
segment played, the ‘jump’ method affords both forward and backward jumps in the playback. A
second, ‘jump size,’ variable sets the maximum size of these jumps as a percentage of the sequence
length (see Fig. 5.14).
Figure 5.14: Variations of a sample using the AAIM.loopVary ‘jump’ method.
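The ‘jump’ behaviour can be sketched in a few lines (illustrative names; the wrap-around at the sequence boundaries is an assumption):

```python
import random

def jump(step, sequence_length, jump_size):
    """Sketch of the 'jump' method: `jump_size` sets the maximum jump
    as a fraction of the sequence length; jumps may move the playback
    position forwards or backwards within the sequence."""
    max_steps = int(jump_size * sequence_length)
    offset = random.randint(-max_steps, max_steps)
    return (step + offset) % sequence_length
```
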
5.4 AAIM.melodyVary
The AAIM.melodyVary6 module is designed to allow users to create, manipulate, and vary melodic lines, and makes one assumption about the nature of the materials it will work with: that the material contains 12 or fewer ‘pitch classes.’7 As with both the AAIM.patternVary and AAIM.loopVary modules, each voice of the AAIM.melodyVary module is assigned a corresponding AAIM.rhythmGen voice. The AAIM.melodyVary module is further differentiated from both the AAIM.patternVary and AAIM.loopVary modules by the fact that it includes variation methods that unfold over the length of the original melody, rather than acting upon single triggers.
6https://youtu.be/fFGgOfY545U
7This does not, however, need to be 12 even divisions of the octave, as the 12 pitch classes may be derived from any tuning system.
5.4.1 Input and Interaction
In contrast to the AAIM.patternVary module, the nature of melodic materials means that each voice
of the AAIM.melodyVary requires a more detailed input. Each note in a melodic line requires:
• Position
• Pitch
• Length
To achieve this the AAIM.melodyVary algorithm requires input that provides information regarding
both pitch and rhythm for each step in the sequence. As such the input needs to be a list of pairs of
numbers:
• One for pitch, and
• One for trigger type, where:
0. is a rest
1. ties a note to the last note (if the pitch does not change)
2. triggers a new note
The algorithm then converts this list to a melodic line recording the beat at which a note begins,
the pitch of this note, and how many base i-o-i’s the note is to be played for (see Fig. 5.15).
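This conversion can be sketched as follows (illustrative names; pitches and lengths are in arbitrary units of the base i-o-i):

```python
def to_melody(pairs):
    """Sketch of the input conversion: `pairs` is a list of
    (pitch, trigger) tuples, one per base i-o-i, where trigger 0 is a
    rest, 1 ties to the previous note (if the pitch is unchanged), and
    2 starts a new note. Returns (start_beat, pitch, length) triples."""
    melody = []
    for beat, (pitch, trig) in enumerate(pairs):
        if trig == 2:                                    # new note
            melody.append([beat, pitch, 1])
        elif trig == 1 and melody and melody[-1][1] == pitch:
            melody[-1][2] += 1                           # tie: extend note
        # trig == 0 (or a tie with a changed pitch) adds nothing
    return [tuple(n) for n in melody]
```
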
Each voice of the AAIM.melodyVary module uses two separate scales to facilitate the manip-
ulation of melodic materials. The module automatically sets both to the pitch classes within the
melody. However, the user can then manually change either, or both, to expand the pitch classes
used during variations.
Figure 5.15: Demonstration of how the AAIM.melodyVary module converts input of pitch and trigger type pairs into a melodic line (using the nursery rhyme ‘Mary had a little lamb’).
5.4.1.1 Available Pitches
The ‘available pitches’ are the scale upon which the AAIM.melodyVary module bases its variations
of the melody. For example, the melodic line in Fig. 5.15 only uses the pitches:
• 〈C, D, E, G〉
but by including the pitch:
• 〈A〉
the variations created will use the entire C pentatonic scale. Also including the pitches:
• 〈F, B〉
will result in variations using the full C Major scale etc.
5.4.1.2 Actual Pitches
The ‘actual pitches’ are the pitches onto which the AAIM.melodyVary voice maps its output. For example, continuing from the above, if the ‘actual pitches’ are set to:
• 〈C, D, Eb, F, G, Ab, Bb〉
the output will be in C Minor rather than C Major. To achieve this mapping, the AAIM.melodyVary algorithm maps each note of the ‘available pitches’ onto the nearest pitch in the ‘actual pitches’ (see Fig. 5.16).
Figure 5.16: Demonstration of how the AAIM.melodyVary module maps the list of ‘Available Pitches’ onto the list of ‘Actual Pitches’ to create new versions of melodic lines.
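The nearest-pitch mapping reduces to a single comparison per note; a sketch with illustrative names, treating pitches as MIDI-style numbers:

```python
def map_to_actual(available, actual):
    """Sketch of the Available-to-Actual mapping: each available pitch
    is replaced by the nearest pitch in the actual-pitches list (ties
    resolve to the earlier entry in `actual`)."""
    return [min(actual, key=lambda a: abs(a - p)) for p in available]
```

With C major as the available pitches and C minor as the actual pitches, E maps to Eb, A to Ab, and B to Bb, as the example above describes.
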
5.4.1.3 Modal Mapping
To expand on the affordance of the ‘Available’ and ‘Actual’ pitches system described above, the
AAIM.melodyVary module also includes a modal mapping feature. This method rotates the intervals of the ‘available pitches’ list to create a ‘mode’ of the original scale, onto which the module then maps the melody and all variations. The algorithm first records the intervals, in semitones, between successive notes of the original scale, along with its lowest note. It then moves the first interval of the original scale to the end of the interval list, and builds the new mode by successively adding each interval to the lowest pitch of the original scale (see Fig. 5.17).
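The rotation can be sketched as follows (illustrative names; the scale is an ascending list of pitches spanning one octave):

```python
def modal_map(scale):
    """Sketch of the modal rotation: record the interval pattern of the
    scale, move its first interval to the end, and rebuild a new mode
    from the original lowest pitch."""
    intervals = [b - a for a, b in zip(scale, scale[1:])]
    rotated = intervals[1:] + intervals[:1]  # first interval to the end
    mode = [scale[0]]
    for step in rotated:
        mode.append(mode[-1] + step)
    return mode
```

Rotating C major once in this way yields C dorian built on the same lowest note.
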
5.4.2 Melodic Analysis
In order to facilitate the manipulations described below, each voice of the AAIM.melodyVary
performs a simple analysis of its melody (see Fig. 5.18). The module determines the distance,
and direction, between each pitch and the pitch that follows it, with the first pitch in the melody
considered to be following the last, and then converts the intervals into ‘diatonic’ scale degrees,
Figure 5.17: Demonstration of the modal mapping method used by the AAIM.melodyVary module.
Figure 5.18: The AAIM.melodyVary analysis of the intervallic content of the melody used in Fig. 5.15.
retaining the list of both chromatic and diatonic intervals for future use. This approach allows the
system to capture, and therefore make use of (see Sec. 5.4.8 and 5.4.9), some of the most notable
features of a melodic line:
5.4.2.1 Interval Content
A melodic line using only small interval jumps (e.g. 2nds and 3rds) can, therefore, be varied
using only 2nds and 3rds. But by using diatonic in addition to chromatic intervals, the sys-
tem can use 2nds and 3rds that are in the scale but not in the original melody. Table 5.2 shows
the relationships between semitones/chromatic intervals and diatonic intervals as used within the
AAIM.melodyVary module. The algorithm divides any compound intervals (those greater than 11 semitones) by 12, the remainder giving the equivalent interval within an octave. The algorithm then multiplies the quotient of this division by 7 and adds this value to the simple interval to determine the actual name of the compound interval. The algorithm discards any note repetitions and octave jumps from the final analysis to prevent feedback loops in which the AAIM.melodyVary module continuously repeats notes.
Interval Name    Semitones
1                0, 0
2                1, 2
3                3, 4
4                5, 6
5                6, 7
6                8, 9
7                10, 11
Table 5.2: The relationships between interval names used by the AAIM.melodyVary module and the chromatic intervals they represent.
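The conversion in Table 5.2, including the compound-interval rule, can be sketched as follows. Note one assumption: the table lists the tritone (6 semitones) under both the 4th and the 5th; this sketch assigns it to the 5th.

```python
# Semitone-to-diatonic-degree mapping following Table 5.2
# (1 = unison, 2 = 2nd, ..., 7 = 7th); 6 semitones assigned to the 5th.
SEMITONE_TO_DEGREE = {0: 1, 1: 2, 2: 2, 3: 3, 4: 3,
                      5: 4, 6: 5, 7: 5, 8: 6, 9: 6, 10: 7, 11: 7}

def diatonic_name(semitones):
    """Sketch of the chromatic-to-diatonic conversion: the remainder of
    dividing by 12 gives the simple interval, and each full octave of
    the quotient adds 7 to the interval name."""
    octaves, simple = divmod(abs(semitones), 12)
    return SEMITONE_TO_DEGREE[simple] + 7 * octaves
```
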
5.4.2.2 Trajectory
As the analysis records both intervals and direction, general aspects of the melodic trajectory can
also be used during variations. For example, a melodic line containing many small upward steps in
pitch and only one large downward step will be mimicked in the variations by placing a far higher
chance on small upward steps than large downward steps, while discounting the possibility of large
upward steps or small downward steps.
5.4.2.3 Possible Notes
Following the analysis of the melody, the AAIM.melodyVary algorithm expands its list of ‘available notes’ into a second list named ‘possible notes,’ used by the ‘scalar notes’ method (see Sec. 5.4.8). In contrast to the ‘available notes,’ the ‘possible notes’ can include multiple octaves of a pitch class. The notes within the ‘possible notes’ range from six semitones below the lowest pitch in the melody to six semitones above the highest pitch in the melody. The algorithm fills the list with any pitch within this range whose pitch class is within the list of ‘available notes’ (see Fig. 5.19). This approach allows the choice of scalar notes to reach beyond the limits of the original melody.
Figure 5.19: Expansion of the possible notes used by the AAIM.melodyVary scalar note method.
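The expansion can be sketched as follows (illustrative names, MIDI-style pitch numbers):

```python
def possible_notes(available, lowest, highest):
    """Sketch of the 'possible notes' expansion: collect every pitch
    from six semitones below the melody's lowest note to six semitones
    above its highest whose pitch class (mod 12) occurs among the
    'available notes'."""
    classes = {p % 12 for p in available}
    return [p for p in range(lowest - 6, highest + 7) if p % 12 in classes]
```
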
5.4.3 Inverse
The AAIM.melodyVary ‘inverse’ method inverts all of the intervals in the melody (i.e. upward steps become downward and vice versa) around the ‘median’ of the melody. Firstly, the method determines the median by halving the range of the melody (highest note − lowest note), adding this to the lowest note, and then finding the closest note in the scale of ‘available notes’ to this median. It then converts both the median and the melody into scalar indices. The index of each note in the melodic inversion is then given by:
• median + (median − index), or (2 * median) − index.
In the event that indices in the inverted melody fall outside the limits of the ‘Available Pitches,’ the algorithm automatically adds extra octaves of the available pitches to the list (see Fig. 5.20).
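In scalar-index form the inversion is a single reflection; a sketch with illustrative names:

```python
def invert(indices):
    """Sketch of the 'inverse' method on scale indices: reflect each
    index around the median of the melody's range. Since the median is
    (max + min) / 2, the reflected index (2 * median) - i simplifies
    to (max + min) - i."""
    reflect = max(indices) + min(indices)  # equals 2 * median
    return [reflect - i for i in indices]
```
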
5.4.4 Retrograde
The AAIM.melodyVary ‘retrograde’ method creates a retrograde of the given melody, i.e. the same
melody played backwards. To achieve this, the algorithm reverses the original inputs (melodic and
rhythmic), and then recreates the melody from these new lists (see Fig. 5.21).
5.4.5 Retrograde-Inversion
The AAIM.melodyVary ‘retrograde-inversion’ first reverses, using the ‘retrograde’ method, and
then inverts, using the ‘inverse’ method, the melodic line.
5.4.6 Expansion
The AAIM.melodyVary ‘expansion’ method creates a variation of the original melody by expanding
each of the intervals. The algorithm converts each note in the melody into a scalar index. It
then subtracts the first index from each index in the list, giving a list of relative scale steps, and
multiplies each value in this list by the desired expansion. Finally, the algorithm adds the index
of the first note in the original melody to each expanded index. The algorithm then converts the list of expanded indices back to pitches, giving the expanded melody. As with the ‘inverse’ method, if indices in the expanded melody fall outside the limits of the ‘available pitches’ list, the algorithm simply appends additional octaves to the list (see Fig. 5.22).
Figure 5.21: Example of the AAIM.melodyVary ‘retrograde’ method
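The expansion steps can be sketched as follows (illustrative names, operating on scale indices):

```python
def expand(indices, factor):
    """Sketch of the 'expansion' method: convert the melody to scale
    indices, scale each index's distance from the first note by
    `factor`, then transpose back to the original starting index."""
    first = indices[0]
    return [first + int((i - first) * factor) for i in indices]
```
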
5.4.7 Sequence
The AAIM.melodyVary ‘sequence’ method creates a melodic sequence using material from the original melody. The algorithm first divides the original melody at the nearest note to the current position, placing the section before this point at the end. It then takes the first n notes of this sliced melody (where n is a random number between 3 and 7) and treats them as the ‘cell’ upon which it creates the sequence. The algorithm then converts both the cell and the sliced melody into scalar indices, and subtracts the first index of the sliced melody from each value in the sliced melody indices to determine the ‘trajectories’ list. The algorithm then creates the sequence by adding each value in ‘trajectories’ to the cell, appending each transposed cell to the sequence, until the number of notes in the sequence equals the number of beats in the original melody. Finally, the algorithm converts the sequence of indices back into scale notes. In contrast to the above methods, the ‘sequence’ method does not use any of the original rhythmic material. Instead, it gives each note in the sequence a rhythmic value of one base i-o-i (see Fig. 5.23).
Figure 5.23: Example of the AAIM.melodyVary ‘sequence’ method
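The cell-and-trajectories construction can be sketched as follows (illustrative names; the cell length is fixed here, where AAIM would choose it randomly between 3 and 7):

```python
def sequence(indices, cell_len, n_beats):
    """Sketch of the 'sequence' method: the first `cell_len` scale
    indices form the cell; the melody's contour relative to its first
    index gives the transposition 'trajectories'; transposed copies of
    the cell are concatenated until `n_beats` notes are produced."""
    cell = indices[:cell_len]
    trajectories = [i - indices[0] for i in indices]
    out = []
    for t in trajectories:
        out.extend(c + t for c in cell)
        if len(out) >= n_beats:
            break
    return out[:n_beats]
```
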
5.4.8 Scalar Notes
The AAIM.melodyVary ‘scalar note’ method is the only variation method used by the
AAIM.melodyVary module that acts upon individual triggers, rather than creating a new melody
equal in length to that of the original. When the AAIM.melodyVary module receives a trigger,
and the ‘scalar note’ method is chosen, the algorithm checks the last note output by the voice
and copies the intervallic analysis of the original melody into a new list. The algorithm chooses
an interval at random from the new list. If the last note plus the interval chosen is in the list of
‘possible notes’ that note is output. Otherwise the algorithm removes the interval from the list and
chooses another interval. The module repeats this process until it chooses a pitch, or there are no
intervals remaining in the list, in which case the algorithm outputs the closest possible pitch in the
scale to the last note output (see Fig. 5.24). The impact of changing the list of ‘available notes’
on the output of the ‘scalar note’ method can be seen in Figure 5.25, where, by using the entire C
pentatonic scale rather than just the 4 pitch classes present in the original melody, the ‘scalar note’
method is capable of expanding beyond the limits of the original melody.
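The trigger-level choice described above can be sketched as follows (illustrative names; pitches as MIDI-style numbers):

```python
import random

def scalar_note(last_note, intervals, possible):
    """Sketch of the 'scalar note' method: try the melody's analysed
    intervals in random order until last_note + interval lands in the
    'possible notes' list; if none fit, fall back to the closest
    possible pitch to the last note output."""
    pool = list(intervals)
    random.shuffle(pool)
    for step in pool:
        candidate = last_note + step
        if candidate in possible:
            return candidate
    return min(possible, key=lambda p: abs(p - last_note))
```
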
5.4.9 Scalar Melody
The AAIM.melodyVary ‘scalar melody’ method creates an entirely new melody using the original melodic material. The method retains the original rhythm, but replaces each note using the ‘scalar note’ method (see Sec. 5.4.8 and Fig. 5.26).
5.5 AAIM.app Application
AAIM.app8 is a standalone application comprising each of the four primary modules of the AAIM system described above, along with additional features that exploit the affordances of the algorithms. The standalone was made in Max and Python, and uses scripting within Max to automatically create and delete objects and patches. Although this allows the application to contain all of the AAIM modules, the actual act of scripting these objects is too time-consuming to allow users to truly use the scripting as a performance tool. As such, AAIM.app represents just one possible implementation of the AAIM system. It is included herein to demonstrate the possibility of basing entire performance systems or digital audio workstations (DAWs) upon AAIM, rather than as the definitive version of the AAIM system.
8https://youtu.be/94M8ls7wlT8
Figure 5.25: Demonstration of the AAIM.melodyVary ‘scalar note’ method, but using the entire C pentatonic scale rather than just the notes present in the original melody.
Figure 5.27: Screenshot of the AAIM.app application main screen.
Figure 5.28: Screenshot of the AAIM.app application AAIM.rhythmGen interface.
Figure 5.27 shows the main interface of the AAIM.app application with each of the four pri-
mary modules added. The primary performance controls for each of the modules are also clearly
shown in this figure: in the top left of the screen are the ‘complexity’ and ‘rests’ variables for
the AAIM.rhythmGen, directly below these is the ‘extra’ variable for the AAIM.patternVary. To
the right of this is a global scale for each of the AAIM.loopVary variations, and to the right of
the AAIM.loopVary variations control is the global scale for each of the AAIM.melodyVary varia-
tions. The local values, which are scaled by these global controls, for each of the primary modules
can also be fine-tuned within secondary interface screens. For example, Figure 5.28 shows the
AAIM.app AAIM.rhythmGen interface. On the right-hand side of this screenshot, one can see the
multi-sliders controlling discrete ‘complexity’ and ‘rest’ values for each voice. The simplicity of
the performance interface is augmented through a Python script that turns a simple XY controller
into an n-sided polygon, with each corner of the polygon representing a different preset. The XY
controller can then be used as normal, but with the output fading smoothly between n presets,
rather than just outputting two values. One such instance of this polygon interface is visible in
the top right-hand corner of Figure 5.27. This particular instance controls the global variables for
each of the four primary modules, facilitating both swift and smooth changes between drastically
different settings using just one interface. Appendix A contains a more detailed description of
AAIM.app and its use. Sections 6.4.3 and 6.4.6 discuss examples of musical works created using
AAIM.app.
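The polygon preset interface described above can be sketched as inverse-distance weighting between corner presets. This is an assumption about the approach, not the thesis’s Python script; all names are illustrative:

```python
import math

def blend_presets(x, y, presets):
    """Sketch of polygon preset interpolation: place n presets at the
    corners of a regular n-gon centred on (0.5, 0.5), then weight each
    preset by the inverse of its distance to the XY position, so the
    output fades smoothly between the n presets."""
    n = len(presets)
    corners = [(0.5 + 0.5 * math.cos(2 * math.pi * k / n),
                0.5 + 0.5 * math.sin(2 * math.pi * k / n)) for k in range(n)]
    weights = []
    for k, (cx, cy) in enumerate(corners):
        d = math.hypot(x - cx, y - cy)
        if d < 1e-9:
            return list(presets[k])  # exactly on a corner: that preset
        weights.append(1.0 / d)
    total = sum(weights)
    dims = len(presets[0])
    return [sum(w * p[i] for w, p in zip(weights, presets)) / total
            for i in range(dims)]
```
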
Chapter 6
Portfolio: Creative Practice and Selected Works
This chapter will present a portfolio of musical works that have been created and performed us-
ing the AAIM system. The chapter begins with a description of “There is pleasure...” (6.1), a
piece composed in 2014 for solo laptop performer, with an intentionally digital aesthetic combin-
ing elements of electroacoustic music and electronic dance music. under scored (2013), a piece
influenced by jazz-funk fusion, composed for either solo laptop performer, or oboe, guitar, and lap-
top performer, is discussed in Section 6.2. Section 6.3 then presents 5Four (2015), a composition
for drum set and laptop performer combining elements of electroacoustic music, electronic dance
music, and jazz. Within each of these sections, an initial explanation about the concepts and goals
of the individual piece is followed by a description of the musical materials used and the manner
in which the AAIM system facilitates the performance of the piece.
Section 6.4 presents six further pieces created using the AAIM system. In contrast to Sections
6.1, 6.2, and 6.3, the pieces in Section 6.4 are discussed in less detail, reflecting the fact that
the processes involved in their creation were also somewhat less in depth. However, they have
been included herein to provide additional evidence of the broad “problem space” [181, 118] with
which AAIM engages, and the variety of performance scenarios within which AAIM could be used.
Section 6.4.1 presents a second example of how AAIM can be used in a jazz fusion setting. Section
6.4.2 presents an example of the use of AAIM in sound art improvisation without any predetermined
plan. Section 6.4.3 presents an ongoing series of pieces in which AAIM is used as a DJ-ing device.
In Section 6.4.4 an example of AAIM being used as a live remixing device is discussed. Section
6.4.5 presents an ambient soundscape intended as the backing for an art exhibition. Finally, Section
6.4.6 discusses a piece created solely with the AAIM.app application.
6.1 “There is pleasure...”
2015 International Computer Music Conference, Denton, TX [64]
2015 Forms of Sound Festival, Calgary, AB
There is pleasure in recognising old things
from a new viewpoint.
Dr. Richard Feynman
“There is pleasure...”1 is a structured improvisation, with an intentionally digital aesthetic, com-
bining elements of modern electronic dance music and computer music, with the use of melodic
and rhythmic forms as a framework for improvisation - as is common in jazz and many other mu-
sics. The performer explores the materials and sounds used from multiple viewpoints, while the
musical materials themselves explore ‘modern’ approaches to ‘old’ techniques. The goal is to
create a variety of moods, ambiances, and textures, using a minimal amount of basic materials -
or to view a limited amount of musical materials from multiple viewpoints. During the improvi-
sation, the AAIM system enables the performer to manipulate and vary the pre-composed musical
materials (see Fig. 6.2).
All of the sounds used during “There is pleasure...” are created with Frequency Modulation
(FM) synthesis, and a constant underlying pulse is used to arrange these sounds in time, two
approaches that are often viewed as dated within the genre of computer music. However, the
underlying pulse is obscured through the use of overlapping tuplets, resulting in a wide variety of
multi-layered rhythmic textures and a sense of rhythmic freedom. These variations in the basic
rhythmic patterns, together with variations in the setting of the FM synthesizers, and the use of
signal processing and sound diffusion techniques, allow the performer to also explore the sounds
from multiple perspectives. Finally, the musical materials used are also viewed from multiple
1https://soundcloud.com/ps_music/there-is-pleasure
perspectives - with only three rhythmic patterns used throughout, but with each being used to
trigger different sounds and varied in different ways, at different times during the performance.
6.1.1 Synthesis
“There is pleasure...” began as an attempt to create an FM-based drum machine that could be
controlled using the AAIM system. “There is pleasure...” still reflects these origins with clearly
distinguished, but interlocking, roles for bass, middle, and high voices mimicking the roles of bass
drum, snare, and hi-hat or cymbals in a typical drum machine pattern. A further voice of long
glissandi, within three frequency ranges parallel to those of the three main voices, provides an
additional layer, whose function alternates between background and foreground at various times throughout
the performance. Rather than use multiple carriers or modulators for the synthesiser, I decided
to approach the synthesis with the same minimalist approach as was ultimately taken with the
musical materials. Each voice uses only one carrier and one modulator, with optional feedback of
the output to the frequency of the modulator to allow for a greater range of noise based textures
and noise bursts during the attack of notes (see Fig. 6.1).
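A single-carrier, single-modulator feedback FM voice of this kind can be sketched as follows. This is an illustration of the technique, not the thesis’s Max patch; in particular, applying the feedback to the modulator’s phase (a common simplification of frequency feedback) is an assumption, and all names are illustrative:

```python
import math

def ffm_samples(n, sr, carrier_hz, harmonicity, mod_index, feedback):
    """Sketch of a feedback FM (FFM) voice: one sine modulator at
    carrier_hz * harmonicity modulates one sine carrier, with a
    fraction of the previous output sample fed back into the
    modulator's phase to push the timbre towards noise."""
    out = []
    prev = 0.0
    mod_phase = 0.0
    car_phase = 0.0
    mod_hz = carrier_hz * harmonicity
    for _ in range(n):
        mod = math.sin(2 * math.pi * mod_phase + feedback * prev)
        sample = math.sin(2 * math.pi * car_phase + mod_index * mod)
        out.append(sample)
        prev = sample
        mod_phase = (mod_phase + mod_hz / sr) % 1.0
        car_phase = (car_phase + carrier_hz / sr) % 1.0
    return out
```

With mod_index and feedback at 0 the voice reduces to a plain sine oscillator; raising the feedback drives it towards the noise-based textures described above.
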
Each of the variables (i.e. carrier pitch, harmonicity ratio, modulation index, feedback, and
amplitude) are controlled by a base value and an envelope that determines how the variable unfolds
over the course of the note with relation to this base value. A collection of presets were saved for
each voice, consisting of envelopes for the five variables mentioned above. Further variations could
then be obtained through variation of the original base values for each variable without changing
the envelopes. Throughout the improvisation, an adaptation of Bolognesi’s 1f noise algorithm [23]
is used to vary the base value for the carrier pitch in each voice within a range of 1-6 semitones.
This results in a constant fluctuation in the pitch of each voice.
6.1.2 Spatialisation
The standard spatial setup for “There is pleasure...” uses an 8 channel loudspeaker array, although
stereo, 16 channel, and 8 channel plus 2 additional subwoofers have also been explored. All
Figure 6.1: The “There is pleasure...” Feedback Frequency Modulation (FFM) synthesis engine.
of the spatialisation is achieved using the ICST ambisonics externals for Max [151]. However,
the movement of each voice through the space is determined using copies of the FM synthesiser
described above as low-frequency oscillators (LFOs), with carrier frequencies below 20 Hz. The
performer controls the average frequency of each of these LFOs, the likelihood for voices to be
spatialised towards the front or back of the room, the average distance of each voice from the centre
of the room, and the ability to set all of the voices to follow individual trajectories or to move as
a group following the trajectory of the bass voice through the space (see Fig 6.2). The result is a
wide variety of spatial movements, ranging from slow sweeps and moments of stasis, to sounds
frantically jumping between speakers.
6.1.3 Musical Materials
“There is pleasure...” consists of 5 sections, all of which are in 5/4 time. Each section consists of
a basic pattern that is up to 8 bars long, and as each pattern progresses each voice also progresses
through a sequence of up to 20 presets (See 6.1.1). The performer is free to turn individual voices
Figure 6.2: The interface used to control “There is pleasure...” during live performances.
on or off as they see fit, and also change between each section in any order and at any time,
thus enabling dramatic and unexpected changes to occur at the performer’s will. They are also
free to vary and manipulate the materials by altering the complexity and rest probabilities of the
AAIM.rhythmGen algorithm (See 5.1), and the probability for extra notes to be inserted afforded
by the AAIM.patternVary algorithm (See 5.2), extreme settings for each of which results in drastic
alterations of the original materials.
The first section acts as the ‘main theme,’ and focusses on a clear progression through 8 presets in the bass voice. The presets used by each of the other voices during this section focus on
dissonant, quasi-pitched, noise based, and percussive tones, which allow the bass sounds to re-
main as the primary focus throughout. The section uses an 8 bar pattern, however, as the bass
voice plays at half time in relation to the other voices, changing preset once every 2 bars (in re-
lation to the three other voices), the actual cycle takes 16 bars to complete. During this section,
the weights for allowable i-o-i’s promote polyrhythmic textures based on complex groupings of
tuplets. This means that the cycle is further obscured once the performer sets the complexity value
for the AAIM.rhythmGen above 0.
The second section consists of a 1 bar pattern, with the bass voice playing at the same tempo
as the other voices. In contrast to the first section, none of the voices take the lead role during this
section, and the glissandi are not present in the basic pattern. In addition, the remaining voices
all use a selection of noise based percussive presets. The result is a far more bare and aggressive
soundworld than that of the first section.
During the third section, the bass, mid, and high voices are all silenced, with the focus being on
extending the glissandi used during the first section. Within each frequency range, each glissando
begins at approximately the same pitch. However, variations in the presets used, including the
direction of the glissando, result in a wash of sound, reminiscent of sirens or wailing.
The fourth section is the first, and only, section throughout which a consistent pulse can be
readily perceived. This is true regardless of the probabilities for complexity, rests, and extra notes
set by the performer. The section consists of an 8 bar pattern, which cycles through a rearranged
progression of the presets used during the first section. Again, the bass acts as the lead voice, with
the other voices filling in the driving rhythmic pattern.
The fifth, and final, section provides a mirror image of the first. It uses the same rhythmic
patterns as the first. However, the base values for harmonicity ratio, modulation index, and feedback, in addition to the possible range of variation that the envelopes affecting each can achieve,
are significantly more constrained. This results in a series of softer, more harmonious tones and
timbres, which still demonstrate very clear links to those used in the first section,. During this
section, sequences of 16 carrier pitches are used in both the middle and high voices, resulting in
the emergence of melodic fragments. However, while the sequence of timbre presets used in each
section is advanced according to the progression through the underlying pattern, each new pitch is
only selected once a sound event is triggered, thus resulting in a constant morphing of the melodic
fragments. Finally, the base values used by the AAIM.rhythmGen during this section also promote
the insertion of far more rests into the pattern. The combination of all of these alterations results in a
far more tranquil interpretation of the materials than is presented during any of the other sections.
6.1.4 Additional Features
Three final additions complete the range of manipulations and variations afforded during perfor-
mance. Firstly, during sections 1, 2, 3, and 4, the performer can manually alter the base values
for harmonicity ratio and modulation index for each voice, thus enabling the insertion of softer
timbres. This is achieved using a single slider controller, and, in addition to affecting the timbre,
this control also affects the frequency range within which the 1/f noise algorithm alters the carrier
frequency of each pitch - with softer timbres corresponding to wider ranges of frequency fluctuations. A single slider also controls the amount of signal sent from each voice to its corresponding
delay line. The length of each delay line is initialised randomly between 100ms and 500ms, and
then varied at a slow enough rate for pitch shifting to be almost imperceptible. The delay time
for each is also directly related to the feedback level, with higher delay times resulting in lower
feedback values and vice versa. The result is a constantly morphing and unpredictable series of
delays adding to the overall sonic texture. Finally, the timbres of each of the voices can again be
altered through the use of one of two filters. The first is a basic low pass filter on each voice. The
second is a pitch tracking filter, copies of which are only used on the bass voice and glissandi.
This filter calculates the four most prominent pitches in the spectrum of the incoming audio and uses
these as the centre frequencies for four tight resonant bandpass filters through which the original
audio is played, resulting in a bubbling or gurgling effect. The controls for each of these additional
features are visible in the top left-hand corner of the performer’s interface (see Fig. 6.2).
6.2 under scored
2014 joint International Computer Music Conference and Sound and Music Computing
Conference, Athens, Greece (with the Aspect ensemble)
2014 UofC Graduate Students Composer Concert, Calgary, AB (solo performance)
under scored is written for either solo laptop performer,2 or laptop performer with a small
ensemble.3 In both cases, the role of the laptop performer is similar to that of a jazz pianist
performing in an equivalent setting. That is, either playing, varying, and improvising upon the
musical materials in a solo setting, or treating the materials in a more reserved manner when acting
as the foundation for solo improvisers in an ensemble. The materials themselves are inspired by the
jazz-funk fusion of bands such as the Headhunters, Pat Metheny, and George Benson. The intention
is to create a ‘modern’ jazz fusion, adding elements of computer music to the combination
of rock, funk, and jazz evident in much of the jazz fusion mentioned above. To facilitate these
goals the laptop performer uses the AAIM.melodyVary module (see Sec. 5.4), in conjunction with
the AAIM.rhythmGen module (see Sec. 5.1), to manipulate and vary the musical materials.
All of the sounds used by the laptop performer are synthesised using a custom designed
wavetable synthesiser, which allows a broad range of tuning options, and the laptop performer
is also provided with control over live DSP of the synthesised sounds. As such, the system allows
the laptop performer to drastically alter the sonic nature of the piece, in addition to algorithmically
altering the playback of the material. The piece contains four sections, each of which consists of
a chord sequence and some melodic fragments, and the system enables the laptop performer to
arrange the sections of the piece in any order during performance, allowing an infinite number of
arrangements.
6.2.1 Synthesis
As mentioned above, all of the audio synthesis in under scored is generated using a modified
variable wavetable synthesiser with four possible wavetables [143, 159]. Rather than use combi-
nations of basic waveforms, i.e. sine, square, saw, and triangle, the synthesiser uses single frames
of output from a phase vocoder [143, 566]. This allows the user to sample timbres from live instru-
ments (or prerecorded audio samples) and base each of the four timbres on these. Each timbre is
2https://soundcloud.com/ps_music/under_scored-demo
3https://soundcloud.com/ps_music/under_scored-aspect_demo
also assigned a user-defined amplitude envelope, i.e. attack, decay, sustain, and release values, and
an XY interface allows for smooth crossfading between each of the wavetables and interpolation
between each associated amplitude envelope.
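One plausible realisation of such an XY crossfade is bilinear weighting of the four wavetables; the corner assignment below is an assumption, as the thesis does not specify the layout:

```python
def xy_mix(tables, x, y):
    """Bilinearly crossfade four equal-length wavetables from an XY pad.
    tables: [bottom-left, bottom-right, top-left, top-right]; x, y in [0, 1].
    The corner layout is an assumption, not the documented interface."""
    w = [(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y]
    return [sum(wi * t[i] for wi, t in zip(w, tables))
            for i in range(len(tables[0]))]
```

The same four weights can interpolate the attack, decay, sustain, and release values of the four associated amplitude envelopes, so timbre and envelope morph together as the XY position moves.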
Within the synthesiser, each of the four waveforms is looped continuously. Each ‘note on’
message the synthesiser receives opens an individual voice of a variable delay pitch shifter tuned
to the pitch of the desired note, through which the combined wavetables are passed. The transposed
sound is then fed through the interpolated amplitude envelope to create the note. The final feature
of the synthesiser allows the ratios that the pitch shifter uses to be changed, thus enabling a wide
variety of alternative tuning systems.
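The retunable ratio idea can be sketched as a lookup of per-degree transposition ratios, so that swapping the table re-tunes the instrument. The just-intonation values below are illustrative, not the synthesiser’s actual tables:

```python
# Each note-on opens a pitch-shifter voice whose transposition ratio is
# looked up in a per-degree table; replacing the table changes the tuning
# system. The ratio tables here are illustrative assumptions.
TWELVE_TET = [2 ** (i / 12) for i in range(12)]
JUST = [1, 16/15, 9/8, 6/5, 5/4, 4/3, 45/32, 3/2, 8/5, 5/3, 9/5, 15/8]

def shift_ratio(midi_note, root=60, ratios=TWELVE_TET):
    """Ratio by which the looped wavetable is transposed for midi_note.
    Notes below the root fall out naturally via negative octaves."""
    semis = midi_note - root
    octave, degree = divmod(semis, len(ratios))
    return (2 ** octave) * ratios[degree]
```

For example, a note an octave above the root always yields a ratio of 2, while the fifth degree yields 2^(7/12) in equal temperament but exactly 3/2 with the just-intonation table.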
6.2.2 Musical Materials
As mentioned previously, under scored is based on four sections in the style of jazz-funk fusion,
each of which is in an 11/4 time signature, written as (6+6+6+4)/8 in the score (see Fig. 6.3). The title of
the piece was inspired by the intentionally minimal materials used, consisting simply of harmonic
progressions and melodic fragments, mimicking the nature of ‘lead sheets’ for jazz standards,
which themselves usually only contain harmonic progressions and melodic lines. The reason I
took such a minimal approach was to maximise the possibilities for improvisation, while still
ensuring a clear referent throughout.
Figure 6.3: The ‘Vamp’ section of under scored
The ‘vamp’ section (see Fig. 6.3) acts as both the beginning and ending of the piece, and an
improvisatory refrain returned to at various points during the piece. This section also serves as
an introduction to many of the ideas explored throughout. For example, the previously mentioned
use of an 11/4 time signature. Although time signatures other than 4/4 are used in jazz
fusion pieces, they more often appear as a single bar within a phrase of 4/4 rather than governing an
entire phrase. In more traditional jazz styles, the majority of pieces tend to only use a 4/4 time
signature. The nature of the rhythmic grouping used within this section also informs much of
the later material. As mentioned above, the 11/4 time signature is notated as (6+6+6+4)/8 in the score,
indicating the cellular ‘call and response’ nature of the rhythmic grouping. This cellular approach
continues in the materials of the later sections. However, at all times this was only used as a
conceptual guide to the materials rather than acting as a mathematical structure for the piece. Also
apparent in the ‘vamp’ section is the absence of ascending perfect fourths in the root movements
of the chords. Again, this idea is explored throughout the piece, and places it at odds with many
traditional jazz styles - wherein these changes are amongst the most commonly used, and form the
basis of two of the most common ‘jazz progressions’: ‘ii - V - I’ and ‘vi - ii - V - I.’ This approach
lends a more ‘modal’ feel to the music and allows shifts between distant modalities; in this case,
the most prominent modality is ‘E minor,’ but the harmony also shifts to modalities linked with ‘Bb’ and ‘F’
major. Finally, the diminution of the chord durations as the bar progresses, 6 quarter notes for the
‘E minor,’ followed by 1½ quarter notes each for the ‘G’ and ‘Eb’ chords, and 1 quarter note each for
‘Bb’ and ‘D minor,’ is echoed in the diminution and augmentation of subsequent sections.
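Reading the ‘G’ and ‘Eb’ durations as 1½ quarter notes each, these chord durations sum exactly to the 11/4 bar, which can be verified directly:

```python
# Chord durations in quarter notes for the 'vamp' bar:
# E minor, G, Eb, Bb, D minor.
durations = [6, 1.5, 1.5, 1, 1]
assert sum(durations) == 11  # one full bar of 11/4
```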
The ‘A’ section of under scored (see Fig. 6.4) expands upon the harmonic movements used
during the ‘vamp’ section, again avoiding ascending perfect fourths in the roots and shifting from
‘E minor’/‘G major’ to more distant modalities linked to ‘Bb major.’ As mentioned above, the
material is intended to be cellular in nature, forming a hierarchy of ‘call and response’ phrases.
The first five notes of the melody form a ‘call’ to the ‘response’ of the following five; these
ten notes then form the ‘call’ to the ‘response’ of the final five notes in the first bar. The first
bar then forms a ‘call’ to which the second bar is the ‘response.’ This call and response phrase
Figure 6.4: The ‘A’ section of under scored
is then repeated and followed by the final bar, forming another call and response phrase. The
diminished nature of this final response phrase again links to the cellular character of the rhythm
itself, mimicking the final four beat grouping in the time signature, which is diminished with
respect to the groups of 6 that precede it.
The ‘B’ section of under scored is primarily based on a descending chromatic bass line, a
variant of a passus duriusculus, beginning on ‘B’ and ultimately ending a tritone below on ‘F.’
Beginning on a ‘B minor,’ the harmony moves away from the ‘E minor’-centric nature of both the
‘vamp’ and ‘A’ sections. The ‘call’ of the initial ‘B minor’ chord is answered with a ‘Bb major,’
and then this whole ‘call and response’ phrase is transposed down one whole step, ‘A minor’ and
‘Ab major,’ thus forming another ‘call and response.’ No melody is written for these first 2 bars of
the ‘B’ section, allowing either a break in the melodic line or a brief improvised phrase to form a
‘call’ before the subsequent ‘response’ of the melody. The melodic line again forms a hierarchical
order of calls and responses, and as with the ‘A’ section, the final bar of the ‘B’ section acts as
the ‘response’ to the ‘call’ of the preceding materials. However, in contrast to the ‘A’ section,
the harmonic movement in the ‘B’ section is far slower, changing only once every 5 or 6 quarter
Figure 6.5: The ‘B’ section of under scored
notes during the first 4 bars, and once every 3 or 2 quarter notes during the final bar, echoing
the previously mentioned diminution and augmentation of the chord durations during the ‘vamp’
section.
Figure 6.6: The ‘transition’ section of under scored
The ‘transition’ section of under scored (see Fig. 6.6) is again based on a hierarchical ‘call
and response’ approach, and echoes the descending whole tone transposition used during the ‘B’
section. Although in this case, the transposed ‘response’ is not an exact replication of the preced-
ing material - a ‘Bb major’ chord substituting for the expected ‘G minor’ chord. The harmonic
movement again avoids steps of an ascending perfect fourth, instead favouring descending perfect
fourths and movements of a third, in addition to a descending semitone movement during the final
bar. The final bar of this section also provides another example of the diminution and augmentation
of chord lengths, while the total length of this section, three bars, acts as a diminution of the five
bar phrases in sections ‘A’ and ‘B.’ This section contains no melodic line, as it serves as a transition
during improvisations, either between the ‘vamp’ and ‘A’ sections or between two ‘A’ sections.
Although, as previously stated, the laptop performer is given the ability to arrange the sections
in any order he/she chooses during performance, the sections themselves are intended to imply a
similar cellular ‘call and response’ form to that of the materials contained within. Both the ‘vamp’
and ‘A’ sections are suitable for repetition, while both the ‘B’ and ‘transition’ sections are more
suited to acting as a ‘response’ to repetitions of either the ‘A’ or ‘vamp’ sections. Such an approach
can be seen in Figure 6.7, which is a rough reproduction of the form used by the Aspect ensemble,4
when performing under scored at the 2014 joint International Computer Music Conference and
Sound and Music Computing conference in Athens.
6.3 5Four
2015 Music for Sounds Concert, Calgary, AB
5Four is written for drum kit and laptop performer,5 and draws influence from jazz, electronic
dance music, and computer music. 5Four examines the interaction between a live drummer and
its digital equivalents: drum machines and samplers, aiming to explore the interplay between the
fluidity of a live drummer and the “exaggerated virtuosity of the machine” [42, 2]. Sonically, the
piece combines traditional drum kit performance practice with sounds generated through extended
techniques. This accentuates the quasi-pitched and inharmonic sounds the drum kit is capable of producing, while still retaining the rhythmic approach that has seen the drum kit become
one of the primary instruments in many 20th and 21st century styles of music. Throughout the
4Aura Pon - oboe and electronics; Simon Fay - guitar and electronics; Lawrence Fyfe - tablet controller and the AAIM system. http://innovis.cpsc.ucalgary.ca/Research/Aspect
5https://soundcloud.com/ps_music/5four_demo_090415
Figure 6.7: The form used by the Aspect ensemble when performing under scored at the 2014 joint ICMC and SMC conference.
performance the division between the live drummer and sampled material is blurred, mirroring the
“fuzzy” divide [102, 73] between ‘popular’ and ‘art’ electronic music. The piece also incorporates
a number of idiomatic jazz ‘cliches,’ for example ‘trading phrases’ and a ‘shout chorus,’ further
expanding the scope of the work.
All of the sounds used in the performance of 5Four are either created live by the drummer,
or were performed by the drummer in advance, recorded, and then used as samples by the laptop
performer. The laptop performer is also free to apply digital signal processing to both the live
and sampled sounds. This tight-knit collection of sound sources blurs the division between the
live drummer and laptop performer. Only a skeletal score, containing suitable drum patterns,
and indications of how transitions between sections should occur, is provided to the drummer.
This promotes an improvisatory approach in the drummer’s playing, and emphasises the need for
interaction between the drummer and laptop performer. To further facilitate this goal, the laptop
performer is free to use any of the features of the AAIM.rhythmGen and AAIM.loopVary modules
to trigger, manipulate, and vary the prerecorded samples.
6.3.1 Sound Sources
As stated above, all of the samples used by the laptop performer during performance of 5Four are
derived from the drum kit. Eight separate instances of the AAIM.loopVary module, triggered by 8
AAIM.rhythmGen voices, are used to play 8 different loops. Each of these loops consists of a
variety of sounds created using both traditional and extended drumming techniques. The loops
are all grouped into sections, 1 loop in the drone section, 4 in the primary theme, and 3 in the
secondary theme. However, while the ‘bass lines’ underpinning the main and secondary themes
are intrinsically linked to each section (turning one on automatically results in the other being
turned off and the section changing), the laptop performer is free to turn on/off any of the other
loops at any time.
Both the aforementioned bass lines and the drone are derived from a variety of ‘cymbal scrapes,’
exciting the cymbal by applying pressure with a drumstick and ‘scraping’ the stick across the cymbal,
and rubbing a rubber ball attached to a flexible metal handle across both cymbals and the ‘skins’ of
each of the drums. The main and secondary themes of the piece also consist of additional ‘melodic’
fragments, comprising ‘scrapes’ of individual drum pieces, and more percussive lines created from
assorted ancillary sounds - such as the squeaking of the drum stool, striking or scraping drum
stands, etc. In editing and arranging these sounds I made only minor alterations to the original samples, limited to noise reduction, filtering, splicing, and collaging. This helped to retain the unique
and engaging timbres and quasi-pitched nature of the original recordings. Instead, the piece asks
the laptop performer to vary and manipulate the playback of the loops using the AAIM system,
and to add additional layers of sonic activity through live digital signal processing of both the live
drums and the sampled loops.
6.3.2 Musical Materials
As stated above, 5Four consists of 3 main musical sections and the focus throughout is on the
improvisation of, and interaction between, both the drummer and laptop performer. This is em-
phasised by the minimal score provided, comprising a series of suggested beats, a suggested form,
and suggestions for how the transition between each section should occur. The drummer then uses
the score as a framework upon which he bases his own performance.
As the title suggests, the piece is written in 5/4 time. However, this is subverted throughout
to create a more fluid and loose rhythmic feel. The opening ‘drone’ section contains no metric
cues, freeing the drummer from the constraints of the meter and allowing them to explore and
interact with the timbral qualities of the drone. During the initial presentation of this section, the
drummer mimics the ‘free’ rhythmic feel, accentuates the timbral changes, and gradually increases
the intensity of their own improvisation in parallel to the gradual swell of the drone. The drone
climaxes with the introduction and accentuation of high frequency sampled material, increasing the
tension before abruptly cutting to the main section and primary theme.
Figure 6.8: The suggested skeletal drum pattern for the primary theme of 5Four
The primary theme continues the dark atmosphere of the drone, and introduces the rhythmic
pulse, with the 5/4 time signature clearly evident in the looped samples. Although the use of loop-based sections is common to much electronic dance music, the primary theme of 5Four points
towards alternative ways of approaching loops through both its time signature and loop length. Most electronic
dance music is in 4/4 and uses loops based on powers of 2, i.e. 2 bar, 4 bar, 8 bar, or 16 bar. In
contrast, the primary theme of 5Four is based on a 5 bar loop using a 5/4 meter (see Fig. 6.8). Again,
the drummer is asked to improvise. Although, in this case, a skeletal example of a suitable drum
pattern is provided to demonstrate the desired sound. Throughout this section, the laptop performer
applies digital signal processing to both the looped audio and the drummer’s live performances,
and also uses the AAIM.rhythmGen and AAIM.loopVary modules to vary the playback of the loops.
This blurs the lines between the sound of the live drum kit and recorded audio, and also mimics the
improvised nature of the drummer’s performance through the introduction of rhythmic variations.
Following an undefined number of repetitions of the main section, the laptop performer grad-
ually begins ‘thinning’ the texture of the main theme by muting each of the individual loops. The
laptop performer then introduces each of the individual loops for the secondary theme, in essence ‘crossfading’ between the two sections. The loops comprising the secondary theme mimic those of the
primary theme, with a particularly prominent bass voice again evident. However, the third section
utilises a metric modulation, changing the time signature from 5/4 to 4/4 while retaining the actual
length of the bar, thus subverting the strong 5/4 pulse of the primary theme and implying a decrease
in tempo, giving this section a ‘heavy’ rhythmic feel. The retention of the original pulse through
the use of quintuplets adds to this ‘heavy’ rhythmic feel, and leads to a sense of ‘shuffle’ or ‘swing’
in the drummer’s rhythm. As with the primary theme, the secondary theme also points towards alternate approaches to looped material through its rhythm and phrase length, in this case a 3 bar
phrase. Again, only a skeletal example of a suitable drum pattern is provided (see Fig. 6.9), and
both the drummer and laptop performer improvise throughout.
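The metric modulation described above can be expressed numerically: holding the bar length constant while changing 5/4 to 4/4 stretches each beat by 5/4, so the felt tempo falls to 4/5 of the original, and quintuplets in the new metre recover the old quarter-note pulse. A minimal check, using an illustrative base tempo of 120 bpm:

```python
def modulated_tempo(old_bpm, old_beats=5, new_beats=4):
    """New felt tempo when the bar length is held constant across
    a change from old_beats to new_beats per bar."""
    return old_bpm * new_beats / old_beats

# A 5/4 bar at 120 bpm lasts 2.5 s; the same bar read as 4/4 gives
# 4 beats in 2.5 s = 96 bpm. Quintuplets in the new metre split the
# bar into 5 even pulses of 0.5 s, restoring the original 120 bpm grid.
```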
Figure 6.9: The suggested skeletal drum pattern for the secondary theme of 5Four
The third section ends in the same manner as it began, with the laptop performer gradually mut-
ing individual loops, and ‘crossfading’ into the next section, a return to the ‘drone.’ However, in
contrast to the first rendition of the ‘drone,’ the drummer retains the metrically modulated 4/4 pulse
of the secondary theme throughout. The drummer again improvises throughout, while the laptop
performer mimics aspects of the drummer’s performance and also interjects additional ideas. This
is made possible through the use of additional AAIM.rhythmGen voices, each of which triggers samples from a collection of individual drum hits. Each of these additional AAIM.rhythmGen voices
runs at a rate that results in simple cross-rhythms, superimposing tempos of 3/5, 7/10, and 4/5 times the
initial tempo, thus creating a further level of rhythmic complexity and expanding upon the metric
modulation of the second section.
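The superimposed voice rates can be computed directly; the base tempo of 100 bpm below is an illustrative figure, not one taken from the score:

```python
from fractions import Fraction

# Fractional tempo multiples for the additional rhythmGen voices,
# as given in the text: 3/5, 7/10, and 4/5 of the base tempo.
RATIOS = [Fraction(3, 5), Fraction(7, 10), Fraction(4, 5)]

def voice_tempos(base_bpm):
    """Tempo (bpm) of each superimposed cross-rhythm voice."""
    return [float(base_bpm * r) for r in RATIOS]
```

At a base tempo of 100 bpm the voices run at 60, 70, and 80 bpm; since the ratios share a denominator of 10, all voices realign with the base pulse every 10 base beats (having played 6, 7, and 8 beats respectively).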
The second rendition of the drone section also switches abruptly to the main theme following
its climax. However, the use of a variety of tempo changes during the second rendition of the
main theme significantly subverts the underlying tempo. During this section, the laptop performer
and drummer take turns as the primary soloist, each emphasising their own improvisation for a 5
bar phrase and supporting the improvisation of the other performer during the next 5 bar phrase -
mimicking the common use of ‘trading fours’ in jazz performance, wherein each soloist improvises
over a four bar phrase in turn.
Figure 6.10: The rhythmic pattern to be used during the ‘shout chorus’ section of 5Four
This unplanned sequence of tempo changes and trading phrases is then followed by a final,
previously unheard, section of the piece. Although the material presented during this section ini-
tially appears to be new, it is derived from a simple re-ordering of ‘grains’ of the loops used during
the main theme. This section also mimics a common practice in jazz, that of the ‘shout chorus,’
where all of the performers perform a composed rhythmically strict passage (see Fig. 6.10). The
strict rhythmic nature of this section helps to reintroduce and reinforce the initial tempo and 54 time
signature, before the main theme is again performed, without tempo changes, to close the piece.
6.4 Others
6.4.1 blueMango
blueMango expands upon some of the ideas explored in under scored (6.2), namely the combination of a laptop performer controlling the AAIM system with live instrumental performers in a jazz
fusion setting. Like under scored, blueMango was also written for the Aspect ensemble.6
However, while the laptop performer used the AAIM.melodyVary module to control a synthesiser
in under scored , in blueMango the laptop performer instead uses the AAIM.loopVary module to
vary and manipulate the playback of samples of the instrumentalists made during performance, in
essence using AAIM as an extended looper.
blueMango is based on a 17 bar melody, the ‘head’ in jazz parlance, and a harmonic progression in 5/4 time inspired by the Charles Mingus composition Goodbye Pork Pie Hat,7 while the
‘loose’ feel was inspired by Ornette Coleman’s Lonely Woman.8 Each performance begins with
the instrumentalists playing the ‘head,’ which the laptop performer records live and uses as their
primary sound source throughout the remainder of the piece. Using the AAIM.rhythmGen and
AAIM.loopVary modules, the laptop performer loops this entire progression, altering the rhythm,
grain size, direction, and position of the playback, to create a constantly morphing backing for the
soloists. Fragmentation and repetition of segments of the loop further extend the progression.
Following the initial rendition of the ‘head,’ each instrumentalist takes turns improvising over
the progression, as the laptop performer gradually begins introducing more variations and frag-
mentations to the playback. A concurrent improvisation by the instrumentalists then follows the
individual solos. Simultaneously, the increasing number of fragmentations leads to a gradual dis-
integration of the original material, and climaxes in the introduction of a rhythmic motif, based on
an arbitrary segment of the original recording. This rhythmic motif then becomes the basis for the
6https://soundcloud.com/ps_music/bluemango
7https://www.youtube.com/watch?v=6sfe_8RAaJ0
8https://www.youtube.com/watch?v=DNbD1JIH344
improvisation, before disintegrating back to the original material again. blueMango then ends with
a final rendition of the ‘head,’ as the original loop is gradually faded out.
6.4.2 BeakerHead
2015 Beakerhead Festival, Calgary, AB
BeakerHead9 demonstrates the use of the AAIM system in an unplanned sound art improvisation.
The performance consisted solely of recordings of audience members at the 2015 BeakerHead10
festival in Calgary, AB, and was created as part of a collaboration between the Faculty of Arts
at the University of Calgary and the Beakerhead festival. A mobile audio unit, comprising a laptop,
battery-powered loudspeaker, audio interface, MIDI controller, and touch screen interface, was
constructed, allowing the performers11 to travel through, and interact with, the audience at the
event. Passing audience members were recorded speaking into a microphone for a short amount
of time, and these recordings were immediately incorporated into the ever-changing soundscape.
The performers used four AAIM.loopVary modules to manipulate and vary the playback of these
recordings, allowing up to four independent samples to be played, either straight or with rhythmic variations,
or for the performers to create entirely new patterns by reordering the playback of the samples.
Rather than using any specific approach, or attempting to incorporate any specific musical
ideas or material, BeakerHead relied entirely on the interaction between the two performers, and
their ability to use the AAIM system to create music using the recordings of the audience. To this
end, both the MIDI interface and touchscreen interface allowed control over many of the same
variables. This opened the door for either performer to ‘veto’ any musical decision made by the
other, allowing both performers to influence the progression of the piece, and resulting in moments
where both performers struggled for control as their desires contrasted with those of the other.
9https://soundcloud.com/ps_music/beakerhead_calgary2015-maa_mix
10http://beakerhead.com/
11Simon Fay and Ethan Cayko
6.4.3 radioAAIM
radioAAIM is a series of pieces12 created using the AAIM.loopVary module, within the AAIM.app ap-
plication, as a ‘DJ’ device. Each piece in the series uses a selection of samples taken from various
pieces by other composers/bands. These samples were then mixed live to create a quasi DJ set
or mash-up, similar to Grandmaster Flash’s piece The Adventures Of Grandmaster Flash On The
Wheels Of Steel.13 In contrast to many, although by no means all, DJ sets and live mash-ups, each
of the radioAAIM series focuses not only on playing and mixing the original materials but also on
the variation and manipulation of the source material live during the performance.
Rather than using a score or plan of any kind, I create the form of each of these pieces organically in performance, a result of practice with both the loops and the system combined with improvisatory intuition. The ‘referent’ for each piece instead consists of a ‘bank’ of loops, which all
‘work’ together in the context of that particular radioAAIM piece. For example radioAAIM 2am
uses a collection of samples taken from both ‘popular’ and ‘art’ electronic music compositions,
an attempt to combine the ‘art’ and ‘popular’ styles of electronic music. In contrast, both ra-
dioAAIM cheeseNcrackers and radioAAIM cheeseNcrackers2 consist of a selection of funk and
soul pieces. These stylistic differences in the materials used also influenced a different approach
to the use of the AAIM system during performance. The nature of the material used in ra-
dioAAIM 2am suggested a rather extreme use of the AAIM.rhythmGen and AAIM.loopVary mod-
ules when varying the loops, while a more subtle approach was used in both radioAAIM cheeseNcrackers
and radioAAIM cheeseNcrackers2. Further differences to a typical DJ set afforded by the AAIM sys-
tem include the facility to create polyrhythmic textures using the original materials, examples of
which can be heard in both radioAAIM cheeseNcrackers and radioAAIM 2am.
12https://soundcloud.com/ps_music/sets/radioaaim
13https://www.youtube.com/watch?v=8Rp1oINrEuU
6.4.4 Winter Lullaby (summersongsforwinterspast mix)
Winter Lullaby (summersongsforwinterspast mix) uses AAIM as a live ‘remixing’ tool for a solo
laptop performer.14 All of the audio used during the piece derives from a live recording of
Carmen Braden’s15 composition Winter Lullaby, for vibraphone and soprano, combined with a
collection of samples taken from a drum kit. The piece uses both the AAIM.patternVary and
AAIM.loopVary modules to allow the performer to vary the material during live performance.
Stylistically Winter Lullaby (summersongsforwinterspast mix) combines Western art song and
electronic dance music.
The piece has four main sections, the first two of which consist of loops created by rearranging
material from the original recording. In fact, it is not until the third section that a section of the
original composition can be heard in full, demonstrating the capabilities of the AAIM system in
deriving new materials from pre-existing music. In contrast to much of the music presented herein,
a more rigid form is provided, to which the performer is expected to adhere. However, the exact
number of repetitions of each loop is left unspecified, allowing the performer to approach each
performance individually. The performer is asked to use the capabilities of the AAIM system to
embellish the material throughout.
6.4.5 new beginnings
2013 Sonic Arts Waterford and Kolonia Artystow artist exchange, Gdansk, Poland
new beginnings16 was performed at the opening of Polish artist Mateusz Bykowski’s exhibition
You see what you want to see.17 As the nature of the performance situation required the music
to support the exhibition, new beginnings largely eschews dramatic changes or moments of inten-
sity. Instead, the focus is on creating an ever-morphing ambient soundscape, where the surface
14https://soundcloud.com/ps_music/winter-lullabysummersongsforwinterspast-mix
15http://blackicesound.com/
16https://soundcloud.com/ps_music/newbeginnings
17Kolonia Artystow, Gdansk, Poland, 21/06/2013
layer of the piece gives the impression of stasis or extremely gradual change, while closer
observation reveals constant variation both within individual sounds and between the sounds used.
To achieve these ends, new beginnings uses four AAIM.loopVary voices, each controlled by an
individual AAIM.rhythmGen voice. No plan or score was created governing the form of the piece,
and instead the piece was based entirely on a collection of samples with inharmonic spectra, rang-
ing from bells and pieces of metal being struck, to pieces of glass being broken. The performance
then consisted of choosing samples, creating alternate permutations of the playback sequences,
and using the features of the AAIM system to create further variations on a micro level.
6.4.6 Jam Band
Jam Band18 provides a second example of how the AAIM.app application can be used to create a
complete piece of music. Jam Band uses both the AAIM.melodyVary and AAIM.patternVary mod-
ules, and all of the sounds are generated using the FFM synthesiser, sampler, and sampler banks
contained within the application. The piece was primarily conceived as an example of some of
the capabilities of the AAIM system and particularly those of the AAIM.app application. Stylisti-
cally, Jam Band imagines the jam-band rock of groups such as the Grateful Dead, played by the
instruments of electronic dance music: synthesisers and drum machines.
The piece consists of ten presets, divided loosely into three parts by changing time signatures,
thus demonstrating the application’s ability to afford users the opportunity to explore chang-
ing time signatures, and to save and recall multiple presets. Another defining feature of each of
these sections is the use of the AAIM.melodyVary ‘Modal Mapping’ method, allowing each section
to be based on similar musical materials but played in a different mode. In addition to determin-
ing the melodic material, drum pattern, and time signature of each section, the presets used also
demonstrate the facility of the AAIM system to vary materials in different ways given different
variables, with the primary ‘soloist’ changing between different presets.
18https://soundcloud.com/ps_music/jamband_0206/s-Qr3pz
Chapter 7
Evaluation
The main contribution of the AAIM system is the portfolio of algorithms covering the generation
of musical rhythms, and the mapping of these rhythms onto musical materials: drum patterns,
melodies, and sample playback. By using these rhythms to manipulate and vary user-defined
materials in real-time, AAIM facilitates the live performance and improvisation of electronic music
in a vast range of genres with widely differing degrees of complexity/variation.
This chapter will evaluate the AAIM system’s success in realising these contributions. Section
7.1 reiterates the approaches used in the AAIM system, which have been discussed in detail in
previous chapters. Section 7.1.1 then addresses how AAIM achieves the four criteria for musical
algorithms given by Simoni and Dannenberg: simplicity, parsimony, elegance, and tractability
[160, 4]. Section 7.2 presents evidence of the AAIM system’s ability to facilitate live performance.
Section 7.3 then discusses evidence of the system’s ability to afford improvisation. Finally, Section
7.4 examines how the pieces discussed in Chapter 6 demonstrate the broad problem space with
which AAIM engages.
7.1 The AAIM system
The approach used by the AAIM system is firstly evident in the algorithms used by the AAIM.rhythmGen module.
The expansion of Barlow’s Indispensability algorithm [10] to work with groups of 2 and 3
beats, as opposed to prime number subdivisions, enables the AAIM.rhythmGen module to create
a far more comprehensive range of rhythmic groupings (see Sec. 5.1.4). The AAIM.ioiChooser
algorithm (see Sec. 5.1.3) is also, to the author’s knowledge, unique, in that it makes possible any
rhythmic subdivision, in any time signature, and determines which i-o-i to use purely on the basis
of user-set variables.
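The principle of grouping a bar into 2s and 3s can be illustrated with a short sketch. The Python function below is my own illustration of the idea, not the AAIM.rhythmGen implementation; the name `groupings` is hypothetical:

```python
def groupings(n):
    """Return every ordering of groups of 2 and 3 beats summing to n.

    A hypothetical illustration of grouping a bar into 2s and 3s,
    rather than prime-number subdivisions as in Barlow's original.
    """
    if n == 0:
        return [[]]
    results = []
    for g in (2, 3):
        if g <= n:
            for rest in groupings(n - g):
                results.append([g] + rest)
    return results
```

For a 7/8 bar, for example, the sketch yields the familiar 2+2+3, 2+3+2, and 3+2+2 groupings, and the same two rules cover any beat count.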
The AAIM.rhythmGen module’s approach to generating rhythms necessitates a complementary
approach in mapping these rhythms to musical materials, and this can be seen in how the other three
primary modules generate their most basic variations (see Sec. 5.2, 5.3, 5.4). Even the simplest
of the primary modules, the AAIM.patternVary module, provides, through its Extra feature (see Sec. 5.2.3), a
method for inserting additional triggers into a pattern while still retaining the fundamental nature
of the original.
The mapping approach used by AAIM is most apparent in the AAIM.loopVary module (see Sec.
5.3). The module’s treatment of every sample as a sequence of equally spaced and sized grains not
only allows the sequence, and thus the sample, to be easily rearranged, but also allows the module
to easily translate the rhythmic lines output by the AAIM.rhythmGen module into changes in the
sample playback position without changing the total length of the loop. In addition to this, it means
that the playback of individual grains can be easily reversed, without needing to reverse the entire
loop.
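The grain-sequence idea can be sketched as follows. This is a minimal illustration, assuming normalised playback positions and an illustrative name (`grain_schedule`), rather than the AAIM.loopVary module’s actual Max implementation:

```python
def grain_schedule(n_grains, order, reversed_grains=()):
    """Map a grain playback order onto (start, end) loop positions.

    `order[i]` is the grain played in the i-th slot; a (start, end)
    pair with start > end indicates reversed playback of that grain.
    Every slot spans exactly one grain-length, so rearranging or
    reversing grains never changes the total length of the loop.
    """
    grain_len = 1.0 / n_grains        # normalised 0..1 positions
    schedule = []
    for grain in order:
        start = grain * grain_len
        end = start + grain_len
        if grain in reversed_grains:  # play this grain backwards
            start, end = end, start
        schedule.append((start, end))
    return schedule
```

Calling `grain_schedule(4, [0, 2, 1, 3], reversed_grains={2})`, for instance, rearranges a four-grain loop and reverses one grain while the loop still occupies four slots.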
The AAIM.melodyVary module’s melodic analysis (see Sec. 5.4.2), Scalar Note (see Sec.
5.4.8), and Scalar Melody (see Sec. 5.4.9) methods demonstrate the module’s unique approach
to melodic variation. Rather than analysing melodic lines according to chromatic intervals, the
module uses tonal intervals and trajectory, allowing it to infer more musical information from less
musical input. In turn, this analysis enables the Scalar Note method to insert additional, or alter-
native, notes into a melodic line while retaining the fundamental features of the original line. The
Scalar Melody method goes further still, expanding upon the Scalar Note method to
create entire melodic lines that retain the fundamental features of the original.
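The use of scale degrees and trajectory can be sketched as follows; the function names and the reduction of the idea to passing-note insertion are my own simplification of the Scalar Note concept, not the module’s actual analysis:

```python
# Hypothetical sketch of mode-aware passing-note insertion: melody
# notes are held as scale degrees, so inserted notes automatically
# stay within the mode and follow the line's trajectory.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the mode

def passing_degrees(deg_a, deg_b):
    """Scale degrees lying between two melody notes, following the
    line's direction (ascending or descending)."""
    step = 1 if deg_b > deg_a else -1
    return list(range(deg_a + step, deg_b, step))

def degree_to_midi(degree, root=60, scale=C_MAJOR):
    """Convert a scale degree to a MIDI note in the given mode."""
    octave, idx = divmod(degree, len(scale))
    return root + 12 * octave + scale[idx]
```

Between degrees 0 and 3 of C major (C up to F), the ascending passing degrees are 1 and 2 (D and E); substituting a different scale for `C_MAJOR` re-maps the same degrees into another mode, which is the essence of working tonally rather than chromatically.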
7.1.1 Criteria for Musical Algorithms
In his 1973 monograph, Donald Knuth presents a list of aesthetic criteria for the evaluation of an
algorithm [97]. From Knuth’s list, Simoni and Dannenberg selected four in particular that musical
algorithms should “strive to meet” [160, 4]:
• Tractability
• Simplicity
• Parsimony
• Elegance
In this section, the AAIM system will be evaluated according to its success in meeting these criteria.
It is important to note, however, that in many scenarios these criteria were in conflict, either with
each other or with the goals of the software, and as such a balance often had to be found.
As discussed in Chapter 4, one of the primary ideals of the AAIM system is that the relationship
between the user’s input and the system’s output be as intuitive as possible. As such it can be seen
that ‘tractability’ has been a guiding ideal in the design of AAIM since its earliest conception.
This has been achieved by ensuring that all variations are directly linked to user inputs, be they the
original musical materials or variables. Although the system primarily uses weighted probabilities,
these weights are always set by the user.
The ‘simplicity,’ ‘elegance,’ and ‘parsimony’ of the AAIM system have also been discussed at
various points herein. For example, in addition to affording a far greater number of rhythmic
groupings, the use of groups of 2 and 3 in the AAIM.rhythmGen Indispensability method (see Sec.
5.1.4) also simplifies Barlow’s original algorithm, as it requires only two rules, rather than an in-
dividual rule for every prime number. This reliance on just two rules is also clearly parsimonious,
and the method’s ability to use these two rules to generate groupings in any time signature demon-
strates the elegant nature of the method. The use of groups of 2 and 3 exemplifies the occasional
conflicts resulting from using these four criteria. The method could be further simplified by only
using groups of 1. However, this would limit every number to only one grouping (1+1+...), thus
severely limiting the capabilities of the system. It could also be argued that such a limitation would
also make the algorithm both less elegant and less parsimonious.
The AAIM.rhythmGen method for creating i-o-is also meets these criteria. The module’s equal
treatment of every i-o-i enables it to create any i-o-i while requiring only knowledge of:
• The number of repetitions of the underlying pulse needed for synchronisation, and
• The number of repetitions of the i-o-i itself required for synchronisation.
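Assuming the i-o-i is expressed as a rational fraction of the underlying pulse, these two constants follow from reducing that fraction to lowest terms. The sketch below is my own illustration of that relationship, not the module’s code:

```python
from fractions import Fraction

def sync_constants(ioi):
    """For an i-o-i lasting `ioi` pulses (a rational number), return
    (pulse_reps, ioi_reps): the smallest whole numbers of pulse and
    i-o-i repetitions that realign on a shared downbeat.

    Illustrative only; the thesis states these constants are the
    knowledge the algorithm needs, not how they are derived.
    """
    f = Fraction(ioi).limit_denominator()
    return f.numerator, f.denominator  # pulse_reps, ioi_reps
```

Quarter-note triplets against a quarter-note pulse illustrate the idea: each i-o-i lasts 2/3 of a pulse, so three i-o-is synchronise with two pulses, i.e. `sync_constants(Fraction(2, 3))` gives `(2, 3)`.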
The AAIM.rhythmGen ioiChooser (see Sec. 5.1.3) method further demonstrates how the system
has met these four criteria. When using this method to choose an i-o-i, each AAIM.rhythmGen voice
only needs knowledge of the above constants, the number of beats left in the rhythmic phrase, and
user-defined variables.
Each of the other primary modules also fulfils these criteria. For example, the
AAIM.patternVary module only requires the user’s pattern and variables for the probability of
additional notes being inserted to create variations of any pattern in any time signature. The
AAIM.loopVary module only requires information about the number of grains to divide a sample
into to afford variations of any audio recording, and the AAIM.melodyVary module allows varia-
tions of any melodic line to be created, but only requires the melodic line itself and a handful of
variables determining the nature of the variations.
7.2 Live Performance
As all of the pieces presented in Chapter 6 were created in real-time using AAIM, they all demon-
strate the AAIM system’s capability to facilitate live performance. However, AAIM has also been
employed in many concerts. For example, I was invited to perform “There is pleasure...” (6.1) at
both the 2015 International Computer Music Conference in Denton, TX, and the 2015 Forms of
Sound Festival in Calgary, AB. I was also invited to perform 5Four (6.3) at the Music for Sounds
concert in Calgary, AB.1 The Aspect ensemble2 were invited to perform under scored (6.2) at
1Ethan Cayko - drumset; Simon Fay - the AAIM system
2Aura Pon - oboe and electronics; Simon Fay - guitar and electronics; Lawrence Fyfe - tablet controller and the AAIM system
the 2014 joint ICMC and SMC conference in Athens, Greece, and I was also invited to perform
under scored as a solo laptop performer, at the 2014 UofC Graduate Students Composer Concert
in Calgary, AB. As part of an artist exchange with Sonic Arts Waterford,3 I was invited to perform
at the opening of Polish artist Mateusz Bykowski’s exhibition You see what you want to see4 at
Kolonia Artystow, in Gdansk, Poland, in 2013 (see Sec. 6.4.5). Finally, myself and Ethan Cayko
were invited to perform at the 2015 Beakerhead festival in Calgary, AB (see Sec. 6.4.2).5
7.3 Improvisation
Each of the pieces presented in Chapter 6 also demonstrates the AAIM system’s capacity to fa-
cilitate improvisation. However, the improvisational capabilities of the system are perhaps most
apparent in those pieces that do not rely on any predetermined form or material. For example, the
performance of BeakerHead involved no preconceived notions of form or materials, as the entire
performance used recordings of the audience at the 2015 Beakerhead festival. As such, improvisa-
tion was essential in turning otherwise non-musical materials, i.e. people speaking, into the basis
for a musical performance. The radioAAIM (6.4.3) series also demonstrates the improvisational
affordances of the AAIM system. While the materials used in the series are inherently more ‘mu-
sical’ than those employed in BeakerHead, the lack of any predetermined score necessitated an
improvisational approach to both the materials used and the overall form. Similarly, as no prede-
termined form was used in new beginnings, the improvisational capacities of AAIM were vital in
facilitating the performance.
While each of the other pieces discussed in Chapter 6 rely on more defined forms and materials,
the improvisational capabilities of AAIM are evident in the variations of the underlying materials
that can be heard throughout. This form of improvisation, the elaboration and ornamentation of
musical ‘building blocks,’ has been discussed in detail in 2.2, and its relationship to improvisation
3http://www.sonicartswaterford.com/
4http://www.kolonia-artystow.pl/article/mateusz-bykowski/224
5Ethan Cayko - MIDI controller and the AAIM system; Simon Fay - tablet controller and the AAIM system
with AAIM was discussed in 4.4.
7.4 Broad Problem Space
Evidence of the broad problem space with which AAIM engages was also presented in Chapter 6.
For example, the extensive range of aesthetics, genres, and approaches is apparent in the contrast
between the melodic sound of under scored and the percussive and inharmonic sounds of 5Four,
or in the contrast between the primarily electronic dance music style of Winter Lullaby (summer-
songsforwinterspast mix) (6.4.4) and the largely electroacoustic style of “There is pleasure...”.
Similarly, BeakerHead and Jam Band (6.4.6) demonstrate the system’s capabilities to facilitate
both completely open form, and stricter, more rigid forms. The AAIM system’s ability to afford the
creation of different music with widely ranging degrees of complexity/variation can be heard in the
differing levels of variation and complexity within, and between, the radioAAIM series (6.4.3), or
in the varying levels of rhythmic complexity and variation evident in “There is pleasure...”, 5Four,
and under scored. Further uses of the AAIM system, within purer forms of electronic dance music, can
be heard on my Fours a crowd E.P.6
6https://perezslime.bandcamp.com/album/fours-a-crowd
Chapter 8
Conclusions
This thesis has described the development of the AAIM performance system, a portfolio of in-
terconnectable algorithmic software modules designed to facilitate improvisation and live perfor-
mance of electronic music. The primary inspiration for this research was a desire to exploit the
“potential of computer music to exceed human limitations” [34, 142] within live performances,
and a belief that algorithmic performance systems have the capacity to augment human creativity
in live performance and improvisation. To examine the effectiveness of the AAIM system in meet-
ing these goals, I created a portfolio of musical works using the system. These works explored
the affordances of the system in a variety of styles and genres, and many were performed in live
concerts.
The development of AAIM led to a number of contributions. This chapter describes those
research contributions (8.1) along with some ideas for future research (8.2). The chapter ends with
some personal reflections (8.3) and closing remarks (8.4).
8.1 Contributions
During the course of this research I answered the challenges described in Sec. 1.5. Those answers
resulted in the contributions of this research.
• Portfolio of algorithmic modules for the variation of musical materials
The main contribution of the research presented in this thesis is the AAIM system, a
portfolio of algorithms facilitating the variation and manipulation of rhythms, pat-
terns, melodies, and sample playback in electronic music. The main contributions
of the AAIM system are:
– Variation of user-defined materials1
A fundamental tenet of the AAIM system is that the musical output
should be directly linked to the aesthetics of the user, or to quote
Morton Subotnick, “the medium should always be at the service of
the artist” [168, 113]. To this end, each of the AAIM modules is
concerned with allowing the user to vary and manipulate their own
musical ideas.
– Facilitate Live Performance2
A further principle of AAIM is that the system should be capable of
working in live performances. This was achieved by developing only
algorithms that could run in real-time, refining those algorithms to
improve their speed and efficiency, and incorporating divergent map-
ping techniques, where one variable affects many features, allowing
users to control the system with a minimal number of controls. To
evaluate this capability of the AAIM system I wrote and performed
a portfolio of creative works using the system (see Chap. 6).
– Facilitating Improvisation3
As discussed in Section 2.2, there are a wide range of different ap-
proaches to improvisation in music, and the extent to which an elec-
tronic music system affords improvisation is debatable and “prob-
ably depends on the individual as well as the software” [43, xvii].
The form of improvisation facilitated by the AAIM system involves
1The use of AAIM to vary user-defined materials can be heard in any of the works contained in the portfolio of creative works. For example, in the variation of the melodic and harmonic material in under scored [58], or the variation of prerecorded samples in 5Four [61].
2AAIM has been used in many concert performances, both by the author: [70], [66], [69], [68], and [71]; and by other performers: [65] and [67].
3All of the works contained in the creative portfolio demonstrate AAIM’s potential to facilitate improvisation. Perhaps the most obvious examples are radioAAIM [63], BeakerHead [72], and new beginnings [57], all of which were performed with no preplanned score or progression.
the embellishment and variation of musical materials, an approach
that is common in many styles (see Sec. 2.2.2 and 2.2.3). As Dean
noted, interaction and improvisation with algorithmic systems form
a continuum, and one relevant factor is “the degree to which users
can learn to exploit the capacities of the algorithm (versus the de-
gree to which the algorithmic changes are indifferent to the nature
of user input)” [43, 73]. Despite these inherent uncertainties, the
AAIM system strives to maximise its improvisational potential through in-
tuitive mapping of variables and the resulting variations. The port-
folio of creative works presented in Chapter 6 also demonstrate this
improvisational potential, as discussed in Section 7.3.
– Stylistically Neutral/Broad Problem Space4
The final contribution of the AAIM system is the broad problem
space with which it engages. Rather than attempting to create one in-
dividual style, each of the modules comprising the AAIM system
instead uses generic methods suitable to many styles. By allowing
the user control over what variation methods are possible, and fo-
cussing on varying user-defined materials, the AAIM system max-
imises the extent to which the user is in control of the aesthetic out-
come. Evidence of this broad problem space is apparent in the range
of styles and approaches presented in the portfolio of creative works
(see Chap. 6).5
• Portfolio of creative works6
In addition to providing evidence for the extent to which the system meets the con-
4The range of styles AAIM facilitates is apparent in the differences between under scored [58], “There is pleasure...” [59], and 5Four [61].
5Also see https://perezslime.bandcamp.com/album/fours-a-crowd.
6[59] [58] [61] [62] [72] [63] [60] [57] [56]
tributions mentioned above, the portfolio of creative works discussed in Chapter 6
also represents a contribution in its own right. By using the AAIM system to facilitate the
variation and manipulation of composed musical materials within a musical frame-
work, these pieces demonstrate the development of a personal performance practice
and aesthetic. They also show the potential for AAIM, and similar algorithmic ap-
proaches, to facilitate the development of new artistic and performance practices.
8.2 Future Work
The development of AAIM will continue beyond the research presented in this thesis. Here are the
ways I intend to develop the system in future work:
8.2.1 AAIMduino
AAIMduino is an on-going project to implement the AAIM system using the Arduino open-source
electronics platform. The first of these ports is the AAIMduino.drums,7 a drum machine based on
the AAIM.rhythmGen and AAIM.patternVary modules. Although many of the underlying methods
are unchanged from those described herein, some features have been modified, both to compensate
for the comparatively limited number of controls available compared with a laptop and
mouse, and to take advantage of the improved tactility of physical sensors. Although I
have emphasised the fact that AAIM is not reliant on any one interface, I believe that open-source
projects represent the best of both worlds: the tactility of physical sensors combined with the freedom
to alter the interface and code as you choose.
8.2.2 AAIM Max and Pure Data Externals
The portfolio of algorithms comprising the AAIM system is currently in the form of Max patches
and Python scripts. I would like to write dedicated C externals for each of the modules, which
7http://simonjohnfay.com/aaimduino/
could then be used within Max or Pure Data. This would make the system much cleaner and
easier to use, and would likely lead to performance improvements as well.
8.2.3 AAIM Application and VSTs
Although the AAIM.app application does allow users to explore each of the modules of the system,
the fact that it was written using Max means that it is not as flexible as I would like. Ultimately
I would like a DAW (digital audio workstation) that uses AAIM as the fundamental means of
organising and triggering events. A dedicated DAW would also have the advantage of allowing me
to include more macro compositional algorithms than those currently contained in AAIM. However,
as this would clearly be a massive undertaking, I believe a more realistic goal would be the creation
of a collection of VSTs (virtual studio technology plug-ins) that would allow users to use AAIM within
any DAW. I have already begun this process by converting the AAIM system into a collection of
Max for Live8 devices.9 This would open up my research to a potentially larger audience, and
allow electronic musicians with no coding experience to use AAIM.
8.2.4 Android/iOS app
Finally, I would like to develop a series of ‘apps’ for Android and iOS tablets. This would further
expand the potential audience, and take advantage of the near ubiquity of touch screen devices in
the modern world.
8.3 Personal Reflections
I am very proud of the AAIM system and the music I have created using it. I genuinely believe that
the ideas explored in AAIM have the potential to facilitate a new paradigm of digital sequencers.
However, throughout the development of AAIM I have made a number of decisions that have
limited my options at later stages. For example, the choice to base the AAIM.rhythmGen on a
8Max for Live allows you to create custom Max externals specifically for use within Ableton Live.
9https://www.youtube.com/watch?v=w7bb7VelfX4&
repeated phase ramp was influenced by a typical step sequencer design. This approach worked per-
fectly when mapping triggers to the AAIM.patternVary and AAIM.melodyVary modules, but proved more
difficult when trying to map triggers to the AAIM.loopVary module. Although programs such as Pro-
pellerheads’ ReCycle10 allow users to slice audio by transients, I felt that the advantages of
such an approach would be outweighed by the additional difficulty imposed when mapping the
AAIM.rhythmGen output to the slices. In contrast, while the ‘one size fits all’ approach I took
by dividing the sample into equal divisions has limitations, I decided that the advantage of being
able to apply the same methods to every sample without having to alter the functionality of the
AAIM.rhythmGen module exceeded these limitations.
The choice to base the AAIM.melodyVary ‘scalar note’ method on weighted probabilities of
diatonic intervals also placed limits on the system. By approaching melodic material in this way,
the module is adept at varying shorter melodic ideas and fragments, but does not work nearly as
well when presented with longer pitch-based materials. I have often considered incorporating an
option to switch from the current method to Markov chains to expand the module’s capabilities.
However, Markov chains require such large amounts of data to develop models that adopting them
would either require a corpus-based approach or require the user to enter a large amount
of composed material. I decided against the corpus-based approach as I felt it would move the
system away from the goal of basing variations on the user’s material, while users could simply
divide their material into smaller sections when working with longer ideas. That said, I do think an
additional mode whereby the module switches to Markov chains could expand its scope.
8.4 Closing Remarks
This thesis has detailed my research into the development of the AAIM performance system, a
portfolio of interconnected algorithmic modules designed to facilitate the performance and impro-
visation of electronic music. AAIM addresses the challenge of facilitating greater flexibility and
10https://www.propellerheads.se/recycle
creativity in live electronic music through the manipulation and variation of user-defined musical
materials. I showed that AAIM facilitates both live performance and improvisation by using the
system to create and perform a portfolio of musical works. The breadth of genres and approaches
employed in these pieces also illustrates the broad problem space that AAIM engages with. The cre-
ative works comprising this portfolio also demonstrate the potential for algorithmic performance
systems to facilitate the development of new artistic and performance practices, which exploit the
ability of computers to “exceed human limitations” [34, 142].
Beyond AAIM and the portfolio of creative works, it is my hope that others can apply the
methodology employed for my research in a variety of settings. Computers are the most powerful
music making tool humanity has created, and their unique affordances should enable users to
explore new performance practices and aesthetic directions. I hope that AAIM will not only be used
by others to create music, but that it will inspire new algorithmic approaches to live performance
of electronic music. I also hope that the portfolio of creative works encourages others to explore
the fertile creative ground that is the ‘fuzzy’ divide between ‘art’ and ‘popular’ electronic music.
Bibliography
[1] Monty Adkins. Schaeffer est mort! Long live Schaeffer! In EMS07 - Electroacoustic Music
Studies, June 2007.
[2] Javier Alvarez. Mambo a la Braque. http://www.electrocd.com/en/oeuvres/
select/?id=14261, 1990, Accessed September 20, 2015.
[3] Torsten Anders and Eduardo R Miranda. Constraint programming systems for modeling
music theories and composition. ACM Computing Surveys (CSUR), 43(4):30, 2011.
[4] Christopher Anderson, Arne Eigenfeldt, and Philippe Pasquier. The Generative Electronic
Dance Music Algorithmic System (GEDMAS). In Proceedings of the Artificial Intelligence
and Interactive Digital Entertainment (AIIDE’13) Conference, 2013.
[5] Tom Armitage. Making music with live computer code, Wired UK. http://www.wired.
co.uk/news/archive/2009-09/25/making-music-with-live-computer-code-.
aspx, September 2009, Accessed October 17, 2015.
[6] Gerard Assayag, Georges Bloch, and Marc Chemillier. Omax-ofon. Sound and Music
Computing (SMC), 2006, 2006.
[7] Milton Babbitt. Who cares if you listen? High Fidelity, 8(2):38–40, 1958.
[8] David Baker. Modern concepts in jazz improvisation. Alfred Music, 1990.
[9] Elaine Barkin and Martin Brody. Babbitt, Milton, Oxford Music Online.
http://www.oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/
article/grove/music/01645?q=milton+babbitt&search=quick&pos=1&_start=
1#firsthit, Accessed October 8, 2015.
[10] Clarence Barlow and Henning Lohner. Two essays on theory. Computer Music Journal,
18(3):44–60, 1987.
[11] Andy Battaglia. Grateful Dead’s ‘Dark Star’ Gets New Life,
The Wall Street Journal. http://www.wsj.com/articles/
grateful-deads-dark-star-gets-new-life-1406845735, July 31, 2014, Accessed
September 6, 2015.
[12] Margaret Bent. Isorhythm, Oxford Music Online. http://www.oxfordmusiconline.
com.ezproxy.lib.ucalgary.ca/subscriber/article/grove/music/13950?q=
isorhythm&search=quick&pos=1&_start=1#firsthit, Accessed October 7, 2015.
[13] Paul F Berliner. Thinking in Jazz: The Infinite Art of Improvisation. University of Chicago
Press, 2009.
[14] David W Bernstein. The San Francisco Tape Music Center: 1960s Counterculture and the
Avant-Garde. University of California Press, 2008.
[15] Karin Bijsterveld and Trevor J Pinch. “Should One Applaud?” Breaches and Boundaries in
the Reception of New Technology in Music. Technology and Culture, 44(3):536–559, 2003.
[16] John A Biles. Genjam: A genetic algorithm for generating jazz solos. In Proceedings of the
International Computer Music Conference, pages 131–131. International Computer Music
Association, 1994.
[17] John A Biles. Genjam: Evolution of a jazz improviser. Creative evolutionary systems, 168,
2002.
[18] John A Biles. Genjam in perspective: a tentative taxonomy for GA music and art systems.
Leonardo, 36(1):43–45, 2003.
[19] John A Biles. Performing with technology: Lessons learned from the genjam project. In
MUME 2013 Workshop, 2013.
[20] John A Biles. Genjam. http://igm.rit.edu/~jabics/GenJam.html, Accessed May
11, 2014.
[21] Tim Blackwell, Oliver Bown, and Michael Young. Live algorithms: towards autonomous
computer improvisers. In Computers and Creativity, pages 147–174. Springer, 2012.
[22] Larry Blumenfeld. A Saxophonist’s Reverberant Sound, Wall Street Journal. http:
//www.wsj.com/articles/SB10001424052748703302604575294532527380178, June
11, 2010 , Accessed September 27, 2015.
[23] Tommaso Bolognesi. Automatic composition: Experiments with self-similar music. Com-
puter Music Journal, pages 25–36, 1983.
[24] Oliver Bown, Alice Eldridge, and Jon McCormack. Understanding interaction in contempo-
rary digital music: from instruments to behavioural objects. Organised Sound, 14(02):188–
196, 2009.
[25] Linda Candy. Practice based research: A guide. CCS Report, 1:1–19, 2006.
[26] Joel Chadabe. Live-Electronic Music. In The Development and Practice of Electronic
Music, eds. Appleton, Jon H and Perera, Ronald, pages 286–335. Prentice Hall, 1975.
[27] Joel Chadabe. Interactive composing: An overview. Computer Music Journal, pages 22–
27, 1984.
[28] Joel Chadabe. Electric Sound: The Past and Promise of Electronic Music. Prentice-Hall,
Upper Saddle River, New Jersey, 1997.
[29] Michel Chion. Guide to Sound Objects: Pierre Schaeffer and musical research (Guide des
Objets Sonores: Pierre Schaeffer et la recherche musicale, 1983) trans. J. Dack and C.
North. Institut National de l’Audiovisuel & Editions Buchet/Chastel, Paris, 2009.
[30] John M Chowning. The synthesis of complex audio spectra by means of frequency modu-
lation. Journal of the Audio Engineering Society, 21(7):526–534, 1973.
[31] John M Chowning. The synthesis of complex audio spectra by means of frequency modu-
lation. Computer Music Journal, pages 46–54, 1977.
[32] Bill Cole. John Coltrane. Da Capo Press, 2001.
[33] Steve Coleman. Symmetrical Movement Concept. http://m-base.com/essays/
symmetrical-movement-concept/, Accessed September 27, 2015.
[34] Nicholas Collins. Towards autonomous agents for live computer music: realtime machine
listening and interactive music systems. PhD thesis, University of Cambridge, 2007.
[35] Nick Collins. Musical form and algorithmic composition. Contemporary Music Review,
28(1):103–114, 2009.
[36] Nick Collins, Alex McLean, Julian Rohrhuber, and Adrian Ward. Live coding in laptop
performance. Organised Sound, 8(03):321–330, 2003.
[37] Nick Collins and Fredrik Olofsson. klipp av: Live algorithmic splicing and audiovisual
event capture. Computer Music Journal, 30(2):8–18, 2006.
[38] Nick Collins, Margaret Schedel, and Scott Wilson. Electronic Music. Cambridge University
Press, 2013.
[39] Deborah Colon. Focal Point: Strings in World Music - Approaching Improvisation Through
Irish Fiddle Style. American String Teacher, 54(2):84–87, 05 2004.
[40] Christoph Cox and Daniel Warner. Audio Culture: Readings in Modern Music. Continuum,
2004.
[41] cycling74. Max. https://cycling74.com/, Accessed June 15, 2016.
[42] Anne Danielsen. Musical rhythm in the age of digital reproduction. Ashgate Publishing,
Ltd., 2010.
[43] Roger T Dean. Hyperimprovisation: computer-interactive sound improvisation. AR Edi-
tions, Inc., 2003.
[44] Michael Dessen. Improvising in a different clave: Steve Coleman and AfroCuba de Matan-
zas. The Other Side of Nowhere: Jazz, Improvisation, and Communities in Dialogue, pages
173–192, 2004.
[45] Jon Dolan and Michaelangelo Matos. The 30 Greatest EDM Al-
bums of All Time. http://www.rollingstone.com/music/lists/
the-30-greatest-edm-albums-of-all-time-20120802, August 2, 2012, Accessed
September 9, 2015.
[46] Tom Doyle. Matmos: The Art Of Cut & Paste. Sound On Sound, May, 2004.
[47] Michael Edwards. Algorithmic composition: computational thinking in music. Communi-
cations of the ACM, 54(7):58–67, 2011.
[48] Arne Eigenfeldt. Kinetic engine: toward an intelligent improvising instrument. In Proceed-
ings of the Sound and Music Computing Conference, pages 97–100, 2006.
[49] Arne Eigenfeldt. Real-time composition or computer improvisation? A composer's search
for intelligent tools in interactive computer music. Proceedings of the Electronic Music
Studies 2007, 2007.
[50] Arne Eigenfeldt. Corpus-based recombinant composition using a genetic algorithm. Soft
Computing, 16(12):2049–2056, 2012.
[51] Arne Eigenfeldt, Oliver Bown, Philippe Pasquier, and Aengus Martin. Towards a Taxon-
omy of Musical Metacreation: Reflections on the first Musical Metacreation Weekend. In
Proceedings of the Artificial Intelligence and Interactive Digital Entertainment (AIIDE’13)
Conference, Boston, 2013.
[52] Simon Emmerson. From dance! To “dance”: distance and digits. Computer Music Journal,
25(1):13–20, 2001.
[53] Simon Emmerson. Pulse, metre, rhythm in electro-acoustic music. In Proceedings of the
2008 Electroacoustic Music Studies Network International Conference, 2008.
[54] Karlheinz Essl. Algorithmic composition. In The Cambridge companion to electronic mu-
sic, pages 107–127. Cambridge University Press, 2007.
[55] Karlheinz Essl. Lexikon Sonate. http://www.essl.at/works/Lexikon-Sonate.html,
Accessed May 9, 2014.
[56] Simon Fay. Jam Band, musical composition. https://soundcloud.com/ps_music/
jamband_0206, 2013.
[57] Simon Fay. new beginnings, musical composition. https://soundcloud.com/ps_
music/newbeginnings, 2013.
[58] Simon Fay. under scored , musical composition. https://soundcloud.com/ps_
music/under_scored-aspect_demo, 2013.
[59] Simon Fay. “There is pleasure...”, musical composition. https://soundcloud.com/ps_
music/there-is-pleasure, 2014.
[60] Simon Fay. Winter Lullaby (remix), musical composition. https://soundcloud.com/
ps_music/winter-lullabysummersongsforwinterspast-mix, 2014.
[61] Simon Fay. 5Four, musical composition. https://soundcloud.com/ps_music/5four_
demo_090415, 2015.
[62] Simon Fay. blueMango, musical composition. https://soundcloud.com/ps_music/
bluemango, 2015.
[63] Simon Fay. radioAAIM, musical composition. https://soundcloud.com/ps_music/
sets/radioaaim, 2015.
[64] Simon Fay. “There is pleasure...”: An Improvisation Using the AAIM Performance System.
In Proceedings of the International Computer Music Conference, pages 400–403. Interna-
tional Computer Music Association, 2015.
[65] Simon Fay. under scored , musical performance w/Aspect Ensemble. International Com-
puter Music Conference, Athens, Greece. 16 September, 2014.
[66] Simon Fay. 5Four, musical performance w/Ethan Cayko. Music for Sounds Concert, Cal-
gary, AB. 10 April, 2015.
[67] Simon Fay. BeakerHead, musical performance w/Ethan Cayko. Beakernight, Beakerhead
Festival, Calgary, AB. 19 September 2015.
[68] Simon Fay. under scored , musical performance. UofC Graduate Students Composer
Concert, Calgary, AB. 22 January 2014.
[69] Simon Fay. “There is pleasure...”, musical performance. Forms of Sound Festival, Calgary,
AB. 29 January, 2015.
[70] Simon Fay. “There is pleasure...”, musical performance. International Computer Music
Conference, Denton, TX. 30 September, 2015.
[71] Simon Fay. new beginnings, musical performance. Sonic Arts Waterford/Kolonia Artystow
Artist Exchange, Gdansk, Poland. 23 June 2013.
[72] Simon Fay and Ethan Cayko. BeakerHead, musical composition. https://soundcloud.
com/ps_music/beakerhead_calgary2015-maa_mix, 2015.
[73] Jose David Fernandez and Francisco Vico. AI methods in algorithmic composition: A
comprehensive survey. Journal of Artificial Intelligence Research, 48:513–582, 2013.
[74] George Garzone. Basics of the Triadic Chromatic Approach. DOWN BEAT, 76(5):58–59,
2009.
[75] Steven E Gilbert. Gershwin’s art of counterpoint. Musical Quarterly, pages 423–456, 1984.
[76] Richard Glover. Minimalism, Technology and Electronic Music. In Ashgate Research
Companion to Minimalist and Post-Minimalist Music, pages 161–180. Ashgate, 2013.
[77] Paul Griffiths. Serialism, Oxford Music Online. http://www.oxfordmusiconline.
com.ezproxy.lib.ucalgary.ca/subscriber/article/grove/music/25459?q=
serialism&search=quick&pos=1&_start=1#firsthit, Accessed October 7, 2015.
[78] Paul Griffiths. Messiaen, Olivier, Oxford Music Online. http://www.
oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/article/
grove/music/18497?q=+Messiaen&search=quick&pos=2&_start=1#firsthit,
Accessed October 8, 2015.
[79] Jason Gross. Squarepusher, Perfect Sound Forever. http://www.furious.com/
perfect/squarepusher.html, January 1999, Accessed September 13, 2015.
[80] Jason Gross. Aphex twin interview. http://www.furious.com/perfect/aphextwin.
html, September 1997, Accessed September 9, 2015.
[81] Kevin Holm-Hudson. Quotation and context: Sampling and John Oswald’s plunderphonics.
Leonardo Music Journal, pages 17–25, 1997.
[82] Thom Holmes. Electronic and experimental music: technology, music, and culture. Rout-
ledge, 2012.
[83] Imogene Horsley. Improvised Embellishment in the Performance of Renaissance Poly-
phonic Music. American Musicological Society, 1951.
[84] Roy Howat. Debussy, Ravel and Bartok: Towards Some New Concepts of Form. Music &
Letters, pages 285–293, 1977.
[85] Roy Howat. Debussy in proportion: a musical analysis. Cambridge University Press, 1986.
[86] Ralf Hutter. Kraftwerk Revealed. Electronics & Music Maker, pages 62–67, September
1981.
[87] Jo Hutton. Daphne Oram: innovator, writer and composer. Organised Sound, 8(01):49–56,
2003.
[88] Joseph Hyde. Off the Map? Sonic Art in the New Media Landscape. Circuit: Musiques
contemporaines, 13(3):33–40, 2003.
[89] Gareth James. Bjork - Biophilia. http://www.clashmusic.com/reviews/
bjork-biophilia, 2011, Accessed September 20, 2015.
[90] Topi Jarvinen. Tonal hierarchies in jazz improvisation. Music Perception, pages 415–437,
1995.
[91] Stephanie Jorgl. Richard Devine: Architect of Aural Mayhem. http://www.audiohead.
net/interviews/richarddevine/index.html, 2011, Accessed September 20, 2015.
[92] Margaret J Kartomi. Musical improvisations by children at play. World of Music, 33(3):53–
65, 1991.
[93] Adam Kennedy. Autechre Oversteps Review. http://www.bbc.co.uk/music/reviews/
wpq4, 2010, Accessed September 11, 2015.
[94] Klysoft. ITVL. https://cycling74.com/project/
itvl-the-semi-generative-step-sequencer/#.V1xG8WQrIy4, Accessed June
10, 2016.
[95] Klysoft. ITVL. http://klysoft.bigcartel.com/itvl, Accessed June 10, 2016.
[96] Klysoft. ITVL User Manual. http://www.klysoft.net/download/ITVLmanual.pdf,
Accessed June 10, 2016.
[97] Donald E Knuth. Fundamental Algorithms: The Art of Computer Programming. Addison-
Wesley, Inc., 1973.
[98] Timothy Koozin. Spiritual-temporal imagery in music of Olivier Messiaen and Toru
Takemitsu. Contemporary Music Review, 7(2):185–202, 1993.
[99] Richard Kostelanetz. Conversing with Cage. Routledge, 2003.
[100] Jonathan Kramer. The fibonacci series in twentieth-century music. Journal of Music The-
ory, pages 110–148, 1973.
[101] Metacreation Lab. Musical Metacreation. http://metacreation.net/about/, Ac-
cessed June 10, 2016.
[102] Leigh Landy. Understanding the Art of Sound Organization. Cambridge, Mass: MIT Press,
2007.
[103] Paul Lansky. my radiohead adventure. http://paul.mycpanel.princeton.edu/
about-misc.html, Accessed September 5, 2015.
[104] Alison Latham. Inversion, Oxford Music Online. http://www.oxfordmusiconline.
com.ezproxy.lib.ucalgary.ca/subscriber/article/opr/t114/e3461?q=
inversion&search=quick&pos=3&_start=1#firsthit, Accessed October 7, 2015.
[105] Mark Levine. The jazz theory book. O’Reilly Media, Inc., 2011.
[106] Daniel J Levitin. This is your brain on music: Understanding a human obsession. Atlantic
Books Ltd, 2011.
[107] George E Lewis. Too many notes: Computers, complexity and culture in voyager. Leonardo
Music Journal, 10:33–39, 2000.
[108] Marvin Lin. Radiohead’s Kid A. Bloomsbury Publishing USA, 2010.
[109] Andy Mabbett. Pink Floyd-The music and the mystery: The Music and the Mystery. Om-
nibus Press, 2010.
[110] Jo Marchant. Decoding the Heavens: A 2,000-year-old-computer–and the Century-long
Search to Discover Its Secrets. Da Capo Press, 2010.
[111] Neville Marin. Castles Made of Sand: Interview with Allan Holdsworth. Guitarist, Novem-
ber, 1987.
[112] J Martins and Eduardo R Miranda. Emergent rhythmic phrases in an a-life environment. In
Proceedings of ECAL 2007 Workshop on Music and Artificial Life (MusicAL 2007), pages
10–14, 2007.
[113] Kaffe Mathews. cd eb + flo. https://kaffematthews.bandcamp.com/album/
cd-eb-flo, 2003, Accessed September 6, 2015.
[114] Evelyn McDonnell. A new map to Bjork’s music, Los Angeles Times. http://articles.
latimes.com/2011/oct/18/entertainment/la-et-bjork-20111018/2, October 18,
2011, Accessed September 5, 2015.
[115] Kembrew McLeod. Genres, subgenres, sub-subgenres and more: Musical and social dif-
ferentiation within electronic/dance music communities. Journal of Popular Music Studies,
13(1):59–75, 2001.
[116] Bill Milkowski. Jaco: the extraordinary and tragic life of Jaco Pastorius. Hal Leonard
Corporation, 2005.
[117] Keith Muscutt. Composing with algorithms: An interview with David Cope. Computer
Music Journal, 31(3):10–22, 2007.
[118] Paul Nauert. Theory and Practice in “Porgy and Bess”: The Gershwin-Schillinger Connec-
tion. Musical Quarterly, pages 9–33, 1994.
[119] Ben Neill. Pleasure beats: rhythm and the aesthetics of current electronic music. Leonardo
Music Journal, 12:3–6, 2002.
[120] Bruno Nettl. Thoughts on improvisation: A comparative approach. Musical Quarterly,
pages 1–19, 1974.
[121] Bruno Nettl et al. Improvisation, Grove Music Online. http://www.
oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/article/
grove/music/13738?q=improvisation&search=quick&pos=1&_start=1#firsthit,
Accessed October 17, 2015.
[122] Louis Niebur. Special Sound: the creation and legacy of the BBC Radiophonic Workshop.
Oxford University Press on Demand, 2010.
[123] Gerhard Nierhaus. Algorithmic composition: paradigms of automated music generation.
Springer, 2009.
[124] Robert Normandeau. Puzzle. http://www.electrocd.com/en/oeuvres/select/?id=
14027, 2003, Accessed September 20, 2015.
[125] John Oswald. Plunderphonics, or audio piracy as a compositional prerogative. In Wired
Society Electro-Acoustic Conference, 1985.
[126] Francois Pachet. Beyond the cybernetic jam fantasy: The continuator. Computer Graphics
and Applications, IEEE, 24(1):31–35, 2004.
[127] Francois Pachet. Musical virtuosity and creativity. In Computers and Creativity, pages
115–146. Springer, 2012.
[128] Claude Palisca and Dolores Pesce. Guido of Arezzo, Oxford Music Online.
http://www.oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/
article/grove/music/11968?q=Guido+d%27Arezzo&search=quick&pos=15&
_start=1#firsthit, Accessed October 7, 2015.
[129] George Papadopoulos and Geraint Wiggins. AI methods for algorithmic composition: A
survey, a critical view and future prospects. In AISB Symposium on Musical Creativity,
pages 110–117. Edinburgh, UK, 1999.
[130] Evan Parker. Evan Parker. Contemporary Music Review, 25(5-6):411–416, 2006.
[131] Christopher Partridge. Dub in Babylon: Understanding the Evolution and Significance of
Dub Reggae in Jamaica and Britain from King Tubby to Post-Punk. Equinox Publishing,
2010.
[132] Heather Phares. Matmos: A Chance to Cut is a Chance to Cure. http://www.
allmusic.com/album/a-chance-to-cut-is-a-chance-to-cure-mw0000116152,
2011, Accessed September 20, 2015.
[133] Anthony Pople. Messiaen: Quatuor pour la fin du temps. Cambridge University Press,
1998.
[134] Keith Potter. Four Musical Minimalists: La Monte Young, Terry Riley, Steve Reich, Philip
Glass, volume 11. Cambridge University Press, 2002.
[135] Anil Prasad. Pierre bensusan, servant of the music. http://www.innerviews.org/
inner/bensusan.html, 1999, Accessed September 5, 2015.
[136] James Pritchett. Cage, John, Oxford Music Online. http://www.oxfordmusiconline.
com.ezproxy.lib.ucalgary.ca/subscriber/article/grove/music/A2223954?q=
john+cage&search=quick&pos=1&_start=1#firsthit, Accessed October 10, 2015.
[137] Miller Puckette. Pure Data. https://puredata.info/, Accessed June 15, 2016.
[138] Alexander Randon. Fugue Machine. https://alexandernaut.com/fuguemachine/,
Accessed June 10, 2016.
[139] Nancy Yunhwa Rao. American Compositional Theory in the 1930s: Scale and Exoticism in
“The Nature of Melody” by Henry Cowell. Musical Quarterly, pages 595–640, 2001.
[140] Robert Ratcliffe. New forms of hybrid musical discourse: an exploration of stylistic and
procedural cross-fertilisation between contemporary art music and electronic dance music.
In International Computer Music Conference, pages 235–242, 2011.
[141] Rephlex Records. Braindance. http://web.archive.org/web/20010302124112/
http://www.rephlex.com/braindance.htm, Accessed September 8, 2015.
[142] Simon Reynolds. Energy Flash: A Journey Through Rave Music and Dance Culture. Faber
& Faber, 2013.
[143] Curtis Roads. The computer music tutorial. MIT press, 1996.
[144] Tara Rodgers. Pink noises: Women on electronic music and sound. Duke University Press,
2010.
[145] Joseph Butch Rovan, Marcelo M Wanderley, Shlomo Dubnov, and Philippe Depalle. In-
strumental gestural mapping strategies as expressivity determinants in computer music per-
formance. In Kansei, The Technology of Emotion. Proceedings of the AIMI International
Workshop, pages 68–73. Citeseer, 1997.
[146] Robert Rowe. Interactive music systems: machine listening and composing. MIT press,
1992.
[147] Robert Rowe. Machine musicianship. MIT press, 2004.
[148] Mike Rubin. Matmos: Quasi Objects. Spin, August, 1998.
[149] Paul Sanden. Liveness in Modern Music: Musicians, Technology, and the Perception of
Performance. Routledge, 2013.
[150] Antonino Santos, Bernardino Arcay, Julian Dorado, Juan Romero, and Jose Rodriguez.
Evolutionary computation systems for musical composition. In International Conference
Acoustic and Music: Theory and Applications (AMTA 2000), volume 1, pages 97–102.
Citeseer, 2000.
[151] Jan C Schacher and Philippe Kocher. Ambisonics spatialization tools for max/msp. Omni,
500:1, 2006.
[152] Pierre Schaeffer. Traite des objets musicaux. Paris: Editions du Seuil, 1966.
[153] R Murray Schafer. The tuning of the world. Alfred A. Knopf, 1977.
[154] Greg Schiemer. Improvising Machines: Spectral Dance and Token Objects. Leonardo
Music Journal, 9:107–114, 1999.
[155] Joseph Schillinger. The Schillinger System of Musical Composition. Carl Fischer Inc., New
York, NY, 1941.
[156] Joseph G Schloss. Making Beats: The Art of Sample-Based Hip-Hop. Wesleyan University
Press, 2014.
[157] Peter Shapiro. Drum’n’Bass: The Rough Guide. Rough guides, 1999.
[158] Philip Sherburne. 12k: between two points. Organised Sound, 6(03):171–176, 2001.
[159] Yotaro Shuto. 2020 THE SEMI MODULAR BEAT-MACHINE. https://www.
kickstarter.com/projects/394188952/2020-the-semi-modular-beat-machine/
description, Accessed June 10, 2016.
[160] Mary Simoni and Roger B Dannenberg. Algorithmic Composition: A Guide to Composing
Music with Nyquist. University of Michigan Press, 2013.
[161] George Sioros, Marius Miron, D Cocharro, C Guedes, and F Gouyon. Syncopalooza: Ma-
nipulating the syncopation in rhythmic performances. In Proceedings of the 10th Interna-
tional Symposium on Computer Music Multidisciplinary Research, pages 454–469. Labora-
toire de Mecanique et d’Acoustique Marseille, 2013.
[162] Nicolas Slonimsky. Thesaurus of scales and melodic patterns. Amsco, 1947.
[163] Peter F Smith. The dynamics of delight: Architecture and aesthetics. Routledge, 2003.
[164] Neil Sorrell. The Gamelan. London: Faber, 1990.
[165] William M Spears, Kenneth A De Jong, Thomas Back, David B Fogel, and Hugo De Garis.
An overview of evolutionary computation. In Machine Learning: ECML-93, pages 442–
459. Springer, 1993.
[166] Andrew Stiller. Hiller, Lejaren, Oxford Music Online. http://www.
oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/article_
citations/grove/music/13044?q=illiac&search=quick&pos=1&_start=1, Ac-
cessed October 8, 2015.
[167] David Stubbs. Autechre: The Futurologists. The Wire, 230:28–33, 2003.
[168] Morton Subotnick. The use of computer technology in an interactive or “real time” perfor-
mance environment. Contemporary Music Review, 18(3):113–117, 1999.
[169] Martin Supper. A few remarks on algorithmic composition. Computer Music Journal,
25(1):48–53, 2001.
[170] R Anderson Sutton. Do Javanese gamelan musicians really improvise? In the course of
performance: Studies in the world of musical improvisation, pages 69–92, 1998.
[171] Paul Tingen. Autechre. Sound On Sound, April, 2004.
[172] Paul Tingen. Squarepusher. Sound On Sound, May, 2011.
[173] David Toop. The Rap Attack: African jive to New York Hip Hop. South End Press, 1984.
[174] Richard Toop. Stockhausen, Karlheinz, Oxford Music Online. http://www.
oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/article/
grove/music/26808?q=stockhausen&search=quick&pos=2&_start=1#S26808.4,
Accessed October 8, 2015.
[175] Habib Hassan Touma. The Music of the Arabs, trans. Laurie Schwartz. Portland, Oregon:
Amadeus Press. ISBN 0-9313, 1996.
[176] Martin Turenne. Aphex Twin: The Contrarian, Exclaim.ca. http://exclaim.ca/Music/
article/aphex_twin-contrarian, Apr 01, 2003, Accessed September 8, 2015.
[177] Fintan Vallely. The companion to Irish traditional music. NYU Press, 1999.
[178] Paul Walker. Fugue, Oxford Music Online. http://www.oxfordmusiconline.com.
ezproxy.lib.ucalgary.ca/subscriber/article/grove/music/51678?q=fugue&
search=quick&pos=1&_start=1#firsthit, Accessed October 7, 2015.
[179] Tony Ware. Still Hacking After All These Years. In Keyboard Presents the Evolution of
Electronic Dance Music. Backbeat Books, 2011.
[180] Tony Ware. Squarepusher, Electronic Musician. http://www.emusician.com/artists/
1333/squarepusher/44867, May 2012, Accessed October 17, 2015.
[181] Rodney Waschka. Composing with Genetic Algorithms: GenDash. In Evolutionary Com-
puter Music, pages 117–136. Springer, 2007.
[182] Keith Waters. “Giant Steps” and the ic4 Legacy. Integral, pages 135–162, 2010.
[183] WaveDNA. Liquid Music. https://www.wavedna.com/liquid-music/, Accessed June
9, 2016.
[184] WaveDNA. Liquid Rhythm. https://www.wavedna.com/liquid-rhythm/, Accessed
June 9, 2016.
[185] Peter Weibel. Algorithmic Revolution. The History of Interactive Art. http://zkm.de/
en/event/2004/10/algorithmic-revolution, Accessed October 9, 2015.
[186] Eric W. Weisstein. Algorithm, MathWorld–A Wolfram Web Resource. http://
mathworld.wolfram.com/Algorithm.html, Accessed October 23, 2015.
[187] Trevor Wishart. Globally Speaking. Organised Sound, 13(02):137–140, 2008.
[188] Carl Woideck. Charlie Parker: his music and life. University of Michigan Press, 1998.
[189] Christoph Wolff. Bach, Johann Sebastian, Oxford Music Online. http:
//www.oxfordmusiconline.com.ezproxy.lib.ucalgary.ca/subscriber/
article/grove/music/40023pg10#S40023.3.7, Accessed October 7, 2015.
[190] Rene Wooller, Andrew R Brown, Eduardo Miranda, Joachim Diederich, and Rodney Berry.
A framework for comparison of process in algorithmic music systems. Generative Arts
Practice, 2005.
[191] Michael Young. NN music: improvising with a living computer. In Computer music mod-
eling and retrieval. Sense of sounds, pages 337–350. Springer, 2008.
[192] Miriama Young. Singing the Body Electric: The Human Voice and Sound Technology.
Ashgate, 2015.
[193] Juzhong Zhang, Garman Harbottle, Changsui Wang, and Zhaochen Kong. Oldest playable
musical instruments found at Jiahu early Neolithic site in China. Nature, 401(6751):366–368,
1999.
Appendix A
User Guide
Figure A.1: The AAIM system window on load
The AAIM (Algorithmically Assisted Improvised Music) performance system is a portfolio of
interconnectable algorithmic software patches, developed using Max1, designed to facilitate im-
provisation and live performance of electronic music. The goal of the AAIM system is to facilitate
improvisation through the variation and manipulation of composed materials entered by the user.
By generating these variations algorithmically the system attempts to afford improvisers of com-
puter music the ability to focus on macro elements of their performances, such as form, phrasing,
texture, spatialisation, and timbre, while still enabling them to incorporate the melodic and/or
rhythmic variations of a virtuosic instrumental improviser.
———————————— NOTE ————————————
Skip to section A.2 to hear the system in action.
1http://www.cycling74.com
A.0.1 Getting Started/General Tips
Figure A.2: The AAIM Setup window on system load
• Use the AAIM Setup window (see Fig. A.2) to load the three main modules of the
AAIM system:
– AAIM.patternVary (see Sec. A.1.2)
– AAIM.looper (see Sec. A.1.3)
– AAIM.melodyVary (see Sec. A.1.4)
———————————— NOTE ————————————
Any changes made using the “AAIM Setup” patch, any changes to the number of
voices for any of the main modules, and all those made using the
AAIM.FXModule1 (see Sec. A.1.6), are scripted automatically. It is not
advisable to make such changes, or to create/delete voices within any of the
modules, during live performance, as doing so will likely cause discontinuities
in audio and timing.
• Press the “esc” key on your keyboard or click the button with the picture of the
speaker in the main window/performance Interface (see Sec. A.0.2 and Fig. A.3)
to turn on both audio and the system.
• In all patches/interfaces voices are counted from left to right, or from bottom to top.
• AAIM.rhythmGen (see Sec. A.1.1) always organises the voices of the other mod-
ules in the same way:
– AAIM.patternVary voices first
– AAIM.looper voices second
– AAIM.melodyVary voices third
• Time Signatures
– The AAIM system makes no differentiation between time signatures
- i.e. there is no special treatment/consideration given to any particu-
lar time signature. Therefore, variables which work well for certain
material in 4/4 will work equally well with similar material in 5/4, 9/8,
etc.
– Generally it works best to treat bars of 4/4 as bars of 8/8, bars of 5/4
as 10/8, etc.
– Although AAIM does not currently afford phrases within which the
time signature changes (i.e. 3 bars of 4/4 and a bar of 3/4), equivalent
results can be achieved by summing the total beats desired and treating
it as one bar (i.e. one bar of 15/4).
– Both the “patternVary” and “melodyVary” algorithms attempt a low
level analysis of the rhythm of the materials they are given in order
to determine the grouping of beats in the material. This information
is then used by the “rhythmGen” algorithm to influence the choice of
inter-onset-intervals and insertion of rests (see Fig. A.7), in addition
to the velocity of each individual note/trigger.
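The summed-bar workaround described above is simple arithmetic. The sketch below is illustrative only (written in Python, since the AAIM system itself is built in Max; `summed_bar` is a hypothetical helper name), showing three bars of 4/4 plus one bar of 3/4 collapsing into a single bar of 15/4:

```python
from fractions import Fraction

def summed_bar(meters):
    """Sum a list of (beats, unit) time signatures into one long bar,
    expressed as a single fraction of a whole note."""
    return sum(Fraction(beats, unit) for beats, unit in meters)

# Three bars of 4/4 followed by one bar of 3/4:
phrase = [(4, 4), (4, 4), (4, 4), (3, 4)]
print(summed_bar(phrase))  # 15/4 -> treat the phrase as one bar of 15/4
```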
A.0.2 Performance Interface
Figure A.3: The AAIM system performance interface
Although each of the modules has an individual interface, the AAIM system performance in-
terface (see Fig. A.3) is intended as the main performance interface. The left hand side of the
interface provides controls over variables relating to each of the 4 main modules; these are discussed
in Sections A.1.1, A.1.2, A.1.3, and A.1.4.
• The central section of the interface provides some non-performance-related controls
(see Fig. A.4).
– The panel at the top of this section of the interface displays the cur-
rent beat and bar (i.e. current position in the rhythmic phrase) in
addition to giving control over the basic tempo in beats per minute
(bpm) and the relative tempo.
– Below this, within the red panel, are buttons which open the “Audio
Settings” patch, the “keyboard Shortcuts” patch, the general “Notes”
patch, and the AAIM Setup patch (see Fig. A.2). There is also a
button labelled “Recorder” which opens a patch that allows recording
of all audio output from the system.
Figure A.4: The central section of the AAIM system performance interface
• At the bottom right of the interface (see Fig. A.5) are the gain sliders and FX buttons
for each of the audio modules, in addition to buttons for opening the AAIM.FXModule1
(see Sec. A.1.6) for each.
Figure A.5: The audio section of the AAIM system performance interface
• The top right hand corner of the AAIM system performance interface provides the
main performative controls (see Fig. A.6).
– The polygon interface (see Sec. A.1.7) enables interpolation be-
tween multiple ‘preset’ states for the sliders labelled “complexity,”
rests,” “patternVary extra,” “looper Variations,” and “melodyVary
Variations.” This allows swift changes between a wide variety of
system states.
Figure A.6: The polygon interface section of the AAIM system performance interface
· Pressing the button labelled “Presets” opens the AAIM.polygonInterface
presets patch (see Sec. A.1.7).
· Each preset contains 5 values:
· complexity
· rests
· patternVary Variations
· looper Variations
· melodyVary Variations
– To the right of the polygon interface the grey numbered buttons
recall ‘Section Presets’ (see Sec. A.0.3), while the red numbered
boxes store the system's current settings as a Section Preset. The
button labelled “write” saves the current Section Presets to file, and
the button labelled “read” opens a Section Preset file.
A.0.3 Saving/Loading Section Presets
• Section Presets can be saved and loaded using the top right corner of the perfor-
mance interface (see Fig. A.6).
• Once a section preset file has been read it is necessary to press the red button to
the right of the “read” button in order to initialise all of the scripting (i.e. to
load each module). This may take 4 or 5 presses of the button in order to ensure all
of the information has been loaded (unfortunately automatic loading does not work).
However, once all of the modules used are loaded, another file of section presets can
be loaded without this second step being necessary, provided it uses an identical
collection of modules.
• These presets include all scripting data, so care should be taken when saving
them. If a preset, for example, uses a different FX module to
previous presets, this FX module will be created each time that preset is loaded and
destroyed each time another preset, which does not use said FX module, is loaded,
likely causing both discontinuities in the audio and delays in timing. As such, each
section preset for a piece should use all of the same modules!
• If preset sections involve changes of time signature, tempo, etc., best results will be
obtained if said sections are changed at the very end of a phrase (as determined by
“nBars”).
• Sample folders are not loaded with Section presets. Therefore once all of the mod-
ules have loaded it is necessary to manually load samples (see Sec. A.1.2.4, Sec.
A.1.2.6, and Sec. A.1.3), and then press the red button a number of times again.
• The AAIM system comes with some example presets to demonstrate some of the
features of the system (see Sec. A.2).
A.1 AAIM Modules
A.1.1 AAIM.rhythmGen
Figure A.7: The AAIM.rhythmGen interface on system load
AAIM.rhythmGen is an algorithmic system for the generation of rhythms, the output of which
is used to trigger sounds using the other modules of the AAIM system.
To open the main interface, click the button labelled “rhythmGen Interface” on the main
screen/performance interface (see Sec. A.0.2).
A.1.1.1 Global Controls
The top left of the interface provides control over global aspects of the module (see Fig. A.7). The
number of rhythmic lines generated is determined using the number box labelled “nVoices” (this
will automatically update when modules/voices within modules are created or destroyed). The
number box labelled ‘nBars’ determines the number of bars in the rhythmic patterns created (all
voices will always synchronise on the first beat of the first bar), and the number box labelled
‘nBeats’ determines the number of beats in each of these bars. The drop down menu labelled ‘base
ioi’ determines the fundamental rhythmic value of all of the rhythmic lines - all rhythmic lines
will be related to a constant pulse of this value at the selected tempo (see Sec. A.0.2).
A.1.1.2 inter-onset-interval probabilities
The final global control is a multislider labelled ‘inter-onset-interval probabilities.’ An inter-onset-
interval (ioi) is the length of time between successive musical onsets or triggers. In the AAIM
system these are represented as a fraction or ratio of the ‘base ioi,’ with 1.0 being equal to the
base ioi. This multislider sets the probability of each ioi occurring during a rhythmic pattern.
Whenever a voice chooses an ioi it is repeated until it synchronises with the underlying base ioi
- i.e. an ioi of 0.8 will be repeated to produce 5 equally spaced triggers over the course of 4 beats
(quintuplets), a value of 0.75 will produce 4 triggers over the course of 3 beats (dotted rhythms),
0.667 will produce 3 triggers over the course of 2 beats (triplets), 0.571 will produce 7 triggers
over 4 beats (septuplets), etc. Higher values will produce fewer triggers over a longer time, and
lower values will produce more rapid-fire triggers.
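The repeat-until-synchronised rule can be sketched as follows. This is an illustrative reconstruction in Python (the thesis implements the behaviour in Max); `triggers_for_ioi` is a hypothetical name, and decimal ioi values are snapped to the simple ratios they stand for (0.667 for 2/3, 0.571 for 4/7):

```python
from fractions import Fraction

def triggers_for_ioi(ioi):
    """Repeat an inter-onset-interval (a fraction of the base ioi) until
    the onsets realign with the underlying base pulse. Returns
    (number_of_triggers, number_of_base_beats_spanned)."""
    r = Fraction(ioi).limit_denominator(64)  # 0.667 -> 2/3, 0.571 -> 4/7, ...
    n = r.denominator  # smallest repeat count that sums to whole beats
    return n, int(n * r)

print(triggers_for_ioi(0.8))    # (5, 4): quintuplets over 4 beats
print(triggers_for_ioi(0.75))   # (4, 3): dotted values over 3 beats
print(triggers_for_ioi(0.667))  # (3, 2): triplets over 2 beats
print(triggers_for_ioi(0.571))  # (7, 4): septuplets over 4 beats
```

The repeat count is just the denominator of the ratio in lowest terms: four repeats of 3/4 of a beat fill exactly 3 beats, five repeats of 4/5 fill exactly 4.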
The AAIM.polygonInterface (see Sec. A.1.7) provides a selection of preset i-o-i probabilities:
1. Basic i-o-i’s
2. Basic i-o-i’s 2
3. Triplets
4. Basic i-o-i’s 2
5. Quintuplets
6. Triplets 2
7. Septuplets
A.1.1.3 Individual Voice Controls
The controls in both the bottom left and top right of the interface correspond to values for each
individual rhythmic line generated.
A.1.1.4 tempo
The multislider labelled ‘tempo’ in the top right corner of the interface shifts the overall tempo of
each individual voice in relation to the base ioi. The values available are:
• 0 - regular tempo
• 1 - half time
• 2 - double time
• 3 - triple time
Figure A.8: The AAIM.rhythmGen controls on the main window
A.1.1.5 rests
The multislider labelled ‘rests’ sets the probability of rests being inserted (individual trigger points
not resulting in triggers) into each rhythmic line. These values are then scaled using the ‘rests’
slider in the performance interface (see Fig. A.8 and Sec. A.0.2).
A.1.1.6 complexity
The multislider labelled ‘complexity’ scales the probability of individual voices inserting ioi’s not
equal to 1.0 into their own rhythmic lines. These values are then scaled using the ‘complexity’
slider in the performance interface (see Fig. A.8 and Sec. A.0.2).
A.1.1.7 timing deviations
The multislider labelled ‘timing deviations’ (bottom left of the interface) sets micro deviations in
timing for each voice - shifting the timing of each voice to land slightly before or after the under-
lying beat. These values are then scaled using the ‘timing deviations’ slider in the performance
interface (see Fig. A.8 and Sec. A.0.2).
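The ‘rests’, ‘complexity’, and ‘timing deviations’ multisliders all share the same pattern: a per-voice value scaled by the matching global slider on the performance interface. A minimal sketch of that scaling (Python; `scale_voices` is a hypothetical name, and the actual scaling curve inside the Max patch may differ):

```python
def scale_voices(per_voice, global_slider):
    """Scale each voice's multislider value (0.0-1.0) by the global
    performance slider (0.0-1.0). A global value of 0 disables the
    behaviour for every voice; 1.0 passes per-voice values through."""
    return [v * global_slider for v in per_voice]

# Per-voice 'rests' probabilities, halved by the global 'rests' slider:
print(scale_voices([0.2, 0.5, 1.0], 0.5))  # [0.1, 0.25, 0.5]
```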
A.1.1.8 voice on/off
The multislider labelled ‘voice on/off’ (bottom left of the interface) turns individual rhythmic
voices on or off. (NOTE: This may result in minor delays when turning voices on, in order to
ensure synchronisation.)
A.1.2 AAIM.patternVary
Figure A.9: The AAIM.patternVary interface on system load
The patternVary algorithm is used for playing and manipulating drum machine style patterns
(see Fig. A.9).
• In the “AAIM Setup” window (see Fig. A.2) check the box labeled “patternVary”
• The “AAIM Setup” window should change to reveal a menu (initially showing
“MIDI”) and a button (initially labelled “MIDI Out Interface”) (see Fig. A.10).
Use this menu to choose the destination of the messages (see Section A.1.2.1)
• In the main window (which should have resized to reveal a new section of the inter-
face) click the button labelled “patternVary Interface” (see Fig. A.11)
• Create a pattern using the interface provided (voices are counted from bottom to
top, while time (i.e. beats) is counted from left to right) (see Fig. A.12).
• In addition to mapping rhythms/triggers generated by the AAIM.rhythmGen algo-
rithm, AAIM.patternVary is also capable of inserting additional triggers into a pat-
tern. The likelihood of this happening is influenced by 3 factors:
– The number of notes in the given pattern for an individual voice (see
Fig. A.12), i.e. if no notes are given for a particular voice, no notes
will be added, and the more notes there are for an individual voice
in a given pattern, the more likely extra notes will be triggered in that
voice.
– The chance of extra notes being added for each individual voice as
determined by the “voice Extra” multislider within the patternVary
interface (see Fig. A.13)
– Finally, the overall chance of additional notes being inserted into the
pattern is scaled using the “patternVary Extra” slider in the main
window (see Fig. A.11 and Sec. A.0.2). This value is the final word
on the extent of the variations (i.e. if the value is 0, no additional
notes will be inserted, regardless of the other factors).
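One way the three factors above might combine can be sketched in Python. This is an illustrative model with hypothetical names, not the actual Max implementation; it does reflect the stated rule that the global “patternVary Extra” value has the final word, since a value of 0 zeroes the product.

```python
import random

def extra_note_probability(pattern_row, voice_extra, global_extra):
    """Combine the three factors governing extra-note insertion.

    pattern_row  -- list of 0/1 hits for one voice in the pattern
    voice_extra  -- value from the 'voice Extra' multislider (0-1)
    global_extra -- the 'patternVary Extra' slider (0-1); 0 disables insertion
    """
    if not any(pattern_row):        # no notes given for this voice -> none added
        return 0.0
    density = sum(pattern_row) / len(pattern_row)   # fuller rows -> more extras
    return density * voice_extra * global_extra

def insert_extra(pattern_row, voice_extra, global_extra, rng=random.random):
    """Decide whether to insert an extra trigger at one step."""
    return rng() < extra_note_probability(pattern_row, voice_extra, global_extra)
```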
Figure A.10: The AAIM Setup following initial toggling of the patternVary algorithm
Figure A.11: patternVary performance interface on AAIM System main window
Figure A.12: Interface used to create patterns which are subsequently played/manipulated using the patternVary algorithm
Figure A.13: Interface used to determine the chance of additional notes being inserted into the given pattern for each individual voice
A.1.2.1 AAIM.patternVary trigger destinations
Output from the AAIM.patternVary algorithm can be used to trigger sounds using a number of
methods. The particular method used is chosen using the drop down menu in the AAIM Setup
window (see Fig. A.10).
A.1.2.2 MIDI
Triggers from the AAIM.patternVary algorithm can be sent to any MIDI device/program using the
AAIM.midiOut subpatcher (see Fig. A.14).
A.1.2.3 OSC
Triggers from the AAIM.patternVary algorithm can be sent to any OSC capable device/program
using the AAIM.OSCOut subpatcher (see Fig. A.15).
Figure A.14: Interface used for output of MIDI from AAIM
Figure A.15: Interface used for output of OSC messages from AAIM
A.1.2.4 AAIM.samplePlayer
The AAIM.samplePlayer patch (see Fig. A.16) uses the output of AAIM.patternVary to trigger
samples from a given folder. The user must drag and drop the folder of samples onto the designated
area of the interface (NOTE: only audio samples should be present within the folder, as otherwise
the system will attempt, and fail, to treat other files as audio files), and each sample will be
automatically assigned to an individual voice.
Figure A.16: Interface used for AAIM.samplePlayer, a patch which allows the triggering of an arbitrary number of samples, with each sample being triggered by a single voice
A.1.2.5 sequenceMaker
The sequenceMaker output of the patternVary uses the AAIM.sequenceMaker patch (see Sec.
A.1.5) to further extend the capabilities of the patternVary algorithm.
A.1.2.6 AAIM.samplePlayer2
Figure A.17: Global interface used for AAIM.samplePlayer2
Figure A.18: Sample load section of the AAIMsamplePlayerInterface2 patch
AAIM.samplePlayer2 uses a separate AAIM.samplePlayer (see Sec. A.1.2.4) and
AAIM.sequenceMaker (see Sec. A.1.5) for each individual voice of the patternVary algorithm.
This allows each voice to trigger samples from different collections of samples (i.e. one voice
Figure A.19: Sequence section of the AAIMsamplePlayerInterface2 patch
Figure A.20: Multisliders section of the AAIMsamplePlayerInterface2 patch
triggering from a collection of bass drum samples and one triggering from a collection of snare
samples).
The numbered buttons along the top of the interface open the individual AAIM.samplePlayer
and AAIM.sequenceMaker for each voice. Samples can either be loaded using the individual
AAIM.samplePlayer patches or by dragging and dropping sample folders onto the numbered boxes
at the bottom of the interface (see Fig. A.18). Once a sample folder has been loaded using this
method, the interface will display the name of the folder loaded. The number of ‘notes’ for each
individual AAIM.sequenceMaker (see Sec. A.1.5) is automatically set to the number of samples
in the folder for that corresponding voice.
The 2 “live.tab” interface objects, in addition to the 2 number boxes (see Fig. A.19), correspond
to the equivalent controls on each of the AAIM.sequenceMaker interfaces (see Fig. A.47), and
trigger these operations on each of the individual voices simultaneously.
The multisliders labelled “sample voice panning,” “sample voice pitch,” and “sample voice
volume” (see Fig. A.20) correspond to the equivalent controls on each of the AAIM.samplePlayer
interfaces (see Fig. A.16), changing the corresponding value for each of the samples in that voice
to the same value.
A.1.3 AAIM.looper
AAIM.looper is an algorithm for mapping the output of AAIM.rhythmGen (see Sec. A.1.1) onto
playback of an audio sample or loop. The AAIM system provides an interface for control of both
individual voices (individual samples) and control of all AAIM.looper voices simultaneously.
The algorithm divides an audio sample into n equally spaced segments, numbers these segments
sequentially, and then plays these segments according to a user-set sequence.
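The segmentation scheme can be sketched as follows: an illustrative Python model (function names are hypothetical, not part of the patch) of dividing a sample into equally spaced, sequentially numbered segments and reading them back through a user-set sequence, with 0 treated as a pause.

```python
def segment_bounds(n_samples, n_segments, sequence):
    """Yield (start, end) sample offsets for each step of a playback sequence.

    A step value of 0 is a pause (yields None); values 1..n_segments index
    the equally spaced segments of the sample.
    """
    seg_len = n_samples // n_segments
    for step in sequence:
        if step == 0:
            yield None                    # pause
        else:
            start = (step - 1) * seg_len  # segments are numbered from 1
            yield (start, start + seg_len)
```

For a 400-sample loop divided into 4 segments, the sequence [1, 3, 0, 2] would play the first segment, the third, a pause, then the second.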
Figure A.21: The AAIM Setup following initial toggling of the looper algorithm
• In the “AAIM Setup” window (see Fig. A.2) check the box labeled “looper.”
• The “AAIM Setup” window should change to reveal a number box and a button
labelled “looperMultiVoice Interface” (see Fig. A.21).
• Press this button to open the AAIM.looperMultiVoiceInterface patch (see Fig. A.22).
• Drag and drop a folder of audio samples onto the designated area at the bottom of
the AAIM.looperMultiVoiceInterface patch (see Fig. A.22).
• Use the drop down menu to load a sample from the folder for one voice, and open
that AAIM.looper interface by clicking the correspondingly numbered button at the
top of the interface (see Fig. A.22).
• The top section of the interface displays the waveform of the sample loaded (see
Fig. A.23), and also allows the selection of a specific section of the sample.
• Using the section of the interface directly below the waveform and to the right of
the interface, set the number of equally sized segments by either setting the original
bpm of the sample (using the number box labelled “original bpm of sample”) or the
number of segments directly (using the number box labelled “beats in sample”).
• Use the AAIM.sequenceMaker (see Sec. A.1.5) labelled “sample segment sequencer”
on the left of the interface to set the playback sequence of the sample.
• Use the multislider at the top right of the AAIM.looper voice interface (see Fig.
A.23) to create variations of the playback, or the multislider at the top right of
the AAIM.looperMultiVoiceInterface (see Fig. A.22) to set a default setting for all
looper voices. Then use the slider at the top right of the
AAIM.looperMultiVoiceInterface, or the looper variations slider on the
main/performance interface (see Fig. A.24), to set the chance of these variations
occurring.
A.1.3.1 AAIM.looperMultiVoiceInterface
AAIM.looperMultiVoiceInterface gives the user simultaneous control over all AAIM.looper voices.
However, when a sample is loaded for the first time, changes will need to be made using the
individual AAIM.looper voice interface (see Sec. A.1.3.2).
The interface is divided into 4 sections (see Fig. A.22).
• The numbered buttons along the top of the interface open the individual AAIM.looper
voice interfaces (see Sec. A.1.3.2).
• Immediately below these are 4 multisliders (see Fig. A.25) which correspond to
the respective controls on the individual AAIM.looper voice interfaces (see Sec.
A.1.3.2), and a final slider on the right which controls the likelihood of these looper
variations occurring.
Figure A.22: AAIM.looperMultiVoiceInterface used for control of all AAIM.looper voices simultaneously
Figure A.23: AAIM.looper used for control of an individual AAIM.looper voice
Figure A.24: looper performance interface on AAIM System main window
Figure A.25: Multislider section of the AAIM.looperMultiVoiceInterface
• Below these are two AAIM.sequenceMaker (see Sec. A.1.5) interfaces which control
the playback of the sample (see Sec. A.1.3.2). The one on the right creates
identical playback sequences for each AAIM.looper voice (see Fig. A.26). The one
on the left creates individual playback sequences for each AAIM.looper voice. At
the bottom of this interface is a drop down menu which allows multivoice control
of playback:
– Choosing one of the voice numbers will result in the output of that
voice being mimicked in the playback of all of the other voices.
Figure A.26: AAIM.sequenceMaker section of the AAIM.looperMultiVoiceInterface
• At the bottom of the interface are controls which allow the loading of samples
and the turning on/off of each individual AAIM.looper voice (see Fig. A.27). Dragging
and dropping a folder of audio samples/loops onto the designated box will
result in the drop down menus below automatically filling with the contents of
the folder. Loops/samples for each voice can then be loaded using these menus,
or by pressing one of the buttons labelled “load” and choosing the file for an
individual voice using the resulting dialogue. Once a sample is loaded using the
AAIM.looperMultiVoiceInterface, the interface will change to reveal the name of
the sample loaded beneath the controls for that individual voice. Each of the toggles
turns an individual voice on or off.
Figure A.27: Sample load section of the AAIM.looperMultiVoiceInterface
A.1.3.2 AAIM.looper
The AAIM.looper interface (see Fig. A.23) is divided into 5 main sections:
• At the top left of the interface is a toggle to turn the algorithm on/off. Beside this is
an interface object which displays the waveform of a sample once it is loaded and
allows selection of a specific section of the sample (see Fig. A.28).
Figure A.28: Waveform section of the AAIM.looper interface
• Below this and to the right of the interface is a section labelled “variations” (see
Fig. A.29).
– Pressing the button labelled “replace” opens a dialogue which loads
a sample.
– Pressing the button labelled “normalize” normalises the sample.
– The number box labelled “beats in sample” sets the number of seg-
ments the sample is divided into.
– The multislider object in this section of the interface sets the chance
of algorithmic variations of the sample playback:
· grainsize - the average length of each sample seg-
ment during playback.
· reverse - the chance of individual segments being
played in reverse
· retrigger - the chance of the sequence of segments
being ignored and individual sample segments being
repeated
· jump - the chance of playback jumping
backwards/forwards in the sequence
· jumpsize - the maximum size of these jumps (relative
to the number of steps in the sequence)
Figure A.29: Variations section of the AAIM.looper interface
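The probabilistic variations listed above can be sketched in Python. This is an illustrative model with hypothetical names, not the patch's actual logic; it shows one plausible way the retrigger, jump, and reverse chances could steer playback through the segment sequence.

```python
import random

def vary_step(index, sequence, chances,
              rng=random.random, randint=random.randrange):
    """Apply probabilistic looper-style variations to one playback step.

    chances -- dict with 'reverse', 'retrigger', 'jump' probabilities (0-1)
               and 'jumpsize' (maximum jump distance in steps).
    Returns (next_index, reverse_flag).
    """
    if rng() < chances.get('retrigger', 0):
        nxt = index                        # repeat the current segment
    elif rng() < chances.get('jump', 0):
        size = max(1, chances.get('jumpsize', 1))
        offset = randint(1, size + 1) * (1 if rng() < 0.5 else -1)
        nxt = (index + offset) % len(sequence)   # jump backwards/forwards
    else:
        nxt = (index + 1) % len(sequence)  # normal advance through sequence
    return nxt, rng() < chances.get('reverse', 0)
```

With all chances at 0 the function simply steps through the sequence in order.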
• To the left is an AAIM.sequenceMaker (see Sec. A.1.5) interface labelled “sam-
ple segment sequencer” which determines the sequence of segments used during
playback of the sample (see Fig. A.30).
– A step set to ‘0’ results in an automatic pause being inserted into the
pattern.
– Every other value corresponds to the beginning of one of the segments
of the sample, and will cause playback to jump to that point of the
sample.
Figure A.30: Sample segment sequence section of the AAIM.looper interface
• To the bottom left of the interface is a second AAIM.sequenceMaker (see Sec.
A.1.5) interface labelled “pitch shift sequencer” which sets a second sequence which
controls pitch changes during playback (see Fig. A.31).
– At the very bottom of the interface is a section labelled “pitch shift
range.”
· The slider allows selection of a range in semitones
within which pitch shifts occur.
· The number box to the upper left of this slider sets
the range to any equal distance above and below the
original pitch, while the number box at the very
bottom left of the interface shows the exact pitch shift
of each step in semitones.
– The number box labelled “notes” sets the number of even divisions
of the pitch shift range.
– Any step in the pitch shift sequence set to ‘0’ results in no pitch shift
occurring; every other value results in a pitch shift occurring.
Figure A.31: Pitch shift sequence section of the AAIM.looper interface
• The final section of the interface provides volume and panning controls, and a but-
ton which opens an AAIM.FXModule1 patch unique to the AAIM.looper voice (see
Fig. A.32).
Figure A.32: Audio section of the AAIM.looper interface
A.1.4 AAIM.melodyVary
AAIM.melodyVary is an algorithm designed to play, manipulate, and vary melodies. The AAIM
system provides an interface for control of both individual voices and control of all AAIM.melodyVary
voices simultaneously.
Figure A.33: The AAIM Setup following initial toggling of the melodyVary algorithm
• In the “AAIM Setup” window (see Fig. A.2) check the box labeled “melodyVary.”
• The “AAIM Setup” window should change to reveal a number box and a button
labelled “melodyVaryMultiVoice Interface” (see Fig. A.33).
• Press this button to open the AAIM.melodyVaryMultiVoiceInterface patch (see Fig.
A.34).
• Press one of the numbered buttons along the top of the interface to open the corre-
sponding AAIM.melodyVary interface (see Fig. A.35).
• First, set the range of the melody using the number box in the top right corner of the
interface labelled “minimum pitch.”
• Second, set the number of beats in the melody using the number box labelled “nBeats.”
• Use the colour coded multislider to set the pitches of the melody.
• Finally, use the lower multislider object to set the rhythmic content of the melody,
where:
– 0 - rest - inserts a rest into the melody
– 1 - tie - either results in the length of a note being extended (if pitch
doesn’t change), or triggers a new note (if pitch changes or previous
step was a rest)
– 2 - new note - always triggers a new note
• Use the variations section of the AAIM.melodyVary interface (see Fig. A.36 and
Sec. A.1.4.2) or the variations section of the
AAIM.melodyVaryMultiVoiceInterface (see Fig. A.37 and Sec. A.1.4.1) interface
to create variations of the melody.
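The 0/1/2 rhythm encoding described above can be sketched in Python: an illustrative model (the function name is hypothetical) of how parallel pitch and rhythm steps might be decoded into note events of a given duration in steps.

```python
def decode_melody(pitches, rhythm):
    """Turn parallel pitch/rhythm lists into (pitch, duration) note events.

    rhythm steps: 0 = rest, 1 = tie (extend the note if the pitch is
    unchanged, otherwise trigger a new note), 2 = always a new note.
    """
    notes = []
    prev_pitch = None
    for pitch, step in zip(pitches, rhythm):
        if step == 0:                     # rest: breaks any ongoing tie
            prev_pitch = None
        elif step == 2 or prev_pitch is None or pitch != prev_pitch:
            notes.append([pitch, 1])      # trigger a new note
            prev_pitch = pitch
        else:
            notes[-1][1] += 1             # tie: extend the previous note
    return [tuple(n) for n in notes]
```

For example, pitches [60, 60, 62, 62] with rhythm [2, 1, 1, 0] would yield a two-step note on 60, a one-step note on 62, then a rest.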
Figure A.34: The AAIM.melodyVaryMultiVoiceInterface patch.
Figure A.35: The AAIM.melodyVary interface.
Figure A.36: The variations section of the AAIM.melodyVary interface.
Figure A.37: The variations section of the AAIM.melodyVaryMultiVoiceInterface interface.
A.1.4.1 AAIM.melodyVaryMultiVoiceInterface
The AAIM.melodyVaryMultiVoiceInterface patch provides control over all of the
AAIM.melodyVary voices simultaneously (see Fig. A.34).
• The numbered buttons at the top of the interface open each of the corresponding
AAIM.melodyVary interfaces.
• Immediately below this is a multislider object which sets the velocity range within
which each trigger is mapped for each voice.
• Below this are 7 further multisliders, labelled ‘original,’ ‘inversion,’ ‘retrograde,’
‘retrograde-inversion,’ ‘sequence,’ ‘scalar melody,’ and ‘scalar notes,’ which cor-
respond to the respective variables within each AAIM.melodyVary voice (see Sec.
A.1.4.2).
These values are then scaled using the slider along the right side of the interface,
or using the ‘melodyVary variations’ slider on the performance interface (see Fig.
A.38).
Figure A.38: melodyVary performance interface on AAIM System main window
• At the bottom of the interface are 3 objects which relate to the pitch collection used
for the melodies (see Fig. A.39).
– At the very bottom is a button labelled ‘individual scales’
when off, or ‘common scale’ when on, in which case all voices use
the same collection of pitches.
– This collection of pitches is shown/set using the multislider which is
labelled “C, C#/Db, D, etc.”
– Finally, the slider between these 2 interface objects rotates the
intervallic relationships of the pitch collection being used.
e.g. if the pitch collection is a C major scale, the rotations will be
each of the modes of the major scale (ionian, dorian, etc.) starting
on C, with both the original melody and all variations being mapped
onto this new pitch collection.
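The rotation described above can be sketched in Python: an illustrative model (the function name is hypothetical) of rotating a pitch collection's intervallic content while keeping the original root, so that C ionian rotated by 1 yields C dorian.

```python
def rotate_scale(pitch_classes, rotation):
    """Rotate a pitch collection's interval pattern, keeping the root.

    e.g. rotating C major (ionian) by 1 rebuilds the dorian interval
    pattern on C, giving C dorian.
    """
    pcs = sorted(pitch_classes)
    # successive intervals between scale degrees, wrapping round the octave
    intervals = [(pcs[(i + 1) % len(pcs)] - pcs[i]) % 12
                 for i in range(len(pcs))]
    rotated = intervals[rotation:] + intervals[:rotation]
    scale, note = [pcs[0]], pcs[0]
    for step in rotated[:-1]:             # last interval closes the octave
        note += step
        scale.append(note % 12)
    return scale
```

Rotating C major [0, 2, 4, 5, 7, 9, 11] by 1 returns [0, 2, 3, 5, 7, 9, 10], i.e. C dorian.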
Figure A.39: Pitch Collection section of the AAIM.melodyVaryMultiVoiceInterface
A.1.4.2 AAIM.melodyVaryPoly
The AAIM.melodyVaryPoly interface provides direct control over the melodic content and varia-
tion of each of the AAIM.melodyVary algorithms. It is divided into 4 main sections:
• The left side of the interface sets the melodic content used by the AAIM.melodyVary
algorithm (see Fig. A.40). The upper multislider sets the melodic/pitch content - it
is colour coded so each pitch class (note of the chromatic scale) is given a unique
colour. The lower multislider sets the rhythmic content of the melody, where:
– 0 - rest - inserts a rest into the melody
– 1 - tie - either results in the length of a note being extended (if pitch
doesn’t change), or triggers a new note (if pitch changes or previous
step was a rest)
– 2 - new note - always triggers a new note
Figure A.40: Melodic content section of the AAIM.melodyVaryPoly interface
• The top right of the interface (see Fig. A.41) consists of 1 slider and 2 number boxes:
Figure A.41: The top right section of the AAIM.melodyVaryPoly interface
– The slider labelled “velocity range” sets the range of velocities within
which each trigger received is mapped.
– The number box labelled “nBeats” sets the number of beats in the
melody.
– The number box labelled “minimum pitch” sets the minimum pitch
displayed on the left of the interface.
• Below this is the section which sets the chance of each of the variations occurring
(see Fig. A.36):
– original - play the original melody
– inversion - invert the intervals of a melody (i.e. all upward steps
become downward steps and vice versa)
– retrograde - play melody in reverse
– retrograde-inversion - both retrograde and inversion
– expansion - expands each interval to keep the overall shape of the
melody but spread it over a greater range
– sequence - creates a melodic sequence out of a selection of notes
from the original melody
– scalar melody - creates a new melody based on the intervallic con-
tent of the original melody
– scalar - sets the possibility of notes from the scale being inserted
into the melody - choices are based on the intervallic content of the
original melody
• Finally, the bottom right section of the interface sets the pitch content used in output
of the melody (see Fig. A.42).
Figure A.42: The pitch class section of the AAIM.melodyVaryPoly interface
– The upper multislider sets the pitch classes (i.e. scale) used in the
melody and by the ‘scalar’ variations
– The lower multislider sets the pitch classes (i.e. scale) onto which
the output is mapped (i.e. the actual pitches which are played)
– Finally, the slider at the bottom rotates the intervallic content of the
pitch classes in the upper multislider.
e.g. if the pitch collection is a C major scale, the rotations will be
each of the modes of the major scale (ionian, dorian, etc.) starting
on C, with both the original melody and all variations being mapped
onto this new pitch collection.
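Several of the variations listed earlier in this section (inversion, retrograde, retrograde-inversion, expansion) follow standard melodic transformations. The Python sketch below illustrates them on lists of MIDI pitches; these are textbook definitions, not the patch's actual code, and the function names are hypothetical.

```python
def inversion(melody):
    """Invert intervals about the first note (upward steps become downward)."""
    root = melody[0]
    return [root - (p - root) for p in melody]

def retrograde(melody):
    """Play the melody in reverse."""
    return list(reversed(melody))

def retrograde_inversion(melody):
    """Retrograde and inversion combined."""
    return retrograde(inversion(melody))

def expansion(melody, factor=2):
    """Keep the melody's shape but spread it over a greater range."""
    root = melody[0]
    return [root + (p - root) * factor for p in melody]
```

For the fragment [60, 62, 64], inversion gives [60, 58, 56] and expansion (factor 2) gives [60, 64, 68].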
A.1.4.3 AAIM.melodyVary trigger destinations
Output from the AAIM.melodyVary algorithm can be used to trigger sounds using a number of
methods. The particular method used is chosen using the drop down menu in the AAIM Setup
window (see Fig. A.33).
A.1.4.4 MIDI
Note on/off messages from the AAIM.melodyVary can be sent to any MIDI device/program using
the AAIM.midiOut patch (see Sec. A.1.2.2). One instance of the patch is used for each voice -
thus allowing different voices to be sent to different MIDI devices.
A.1.4.5 OSC
Note on/off messages from the AAIM.melodyVary can be sent to any OSC device/program using
the AAIM.OSCOut patch (see Sec. A.1.2.3). One instance of the patch is used for each voice -
thus allowing different voices to be sent to different OSC destinations.
A.1.4.6 AAIM.synth
AAIM.synth (see Fig. A.43) is a simple synthesiser which uses a feedback frequency modulation
synthesis algorithm. As with both the MIDI and OSC output options, a separate synthesiser is used
for each AAIM.melodyVary voice.
The interface for each synthesis voice consists of 3 main sections:
Figure A.43: Interface for AAIM.synth
• The left hand side of the interface contains controls for the amplitude envelope (see
Fig. A.44).
– A - attack - length of time it takes a note to reach maximum ampli-
tude.
– D - decay - length of time it takes for a note to lower in amplitude to
the sustain level after maximum has been reached.
– S - sustain - amplitude level which each note rests at.
– R - release - length of time it takes until amplitude goes to 0, after a
note off has been received.
• The central section of the interface provides controls over the FFM synthesis algo-
rithm (see Fig. A.45).
Figure A.44: Envelope section of AAIM.synth interface.
Figure A.45: FFM section of AAIM.synth interface.
– The multisliders labelled “index,” “ratio,” and “feedback” (in addi-
tion to the three number boxes directly below) set initial states for
the three basic parameters of the FFM synthesiser.
– The three number boxes below these provide a means of extending
the sounds created by setting a range within which the correspond-
ing parameter will change. This change is directly linked to the am-
plitude envelope, with the current amplitude value (0 — 1) being
multiplied by this value and added to the initial state.
– The slider at the bottom of the interface sets the extent to which
either low or high pitches are low-pass filtered (technically this is not
an actual filter; rather, the effects of both the “ratio” and “feedback”
variables are lessened, thus giving the effect of low-pass filtering).
– The slider to the left of this section of the interface sets the extent of
pitch sliding (glissandi) between each note.
– The toggle at the bottom right sets whether or not a synthesis voice
is in legato mode.
· In legato mode a voice is automatically monophonic
(no release section of notes if another note is trig-
gered), and a new note trigger will result in the am-
plitude envelope always beginning from the current
amplitude.
• Finally, the far right of the interface (see Fig. A.46) provides control over the output
volume and an individual instance of the AAIM.FXModule1 (see Sec. A.1.6).
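The synthesis method can be sketched in Python as a minimal single-operator feedback FM model. This is an illustrative rendering loop only; the parameter names mirror the interface labels, but the actual patch's implementation is not shown here and may differ.

```python
import math

def ffm(freq, ratio, index, feedback, n, sr=44100):
    """Render n samples of single-operator feedback frequency modulation.

    The modulator runs at freq * ratio; its own previous output is fed
    back into its phase, scaled by 'feedback'. 'index' scales how strongly
    the modulator drives the carrier's phase.
    """
    out, mod_prev = [], 0.0
    for i in range(n):
        t = i / sr
        mod = math.sin(2 * math.pi * freq * ratio * t + feedback * mod_prev)
        out.append(math.sin(2 * math.pi * freq * t + index * mod))
        mod_prev = mod
    return out
```

Raising ‘index’ or ‘feedback’ brightens the spectrum, which is consistent with the pseudo low-pass behaviour described above (lessening those values dulls the sound).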
Figure A.46: Audio section of AAIM.synth interface.
A.1.5 AAIM.sequenceMaker
Figure A.47: Interface for the AAIM.sequenceMaker patch
AAIM.sequenceMaker (see Fig. A.47) is a patch which allows the creation and modification
of simple step sequences. Sequences can either be created manually using the 2 multislider objects
or using the various standard manipulations described below.
Figure A.48: Upper section of AAIM.sequenceMaker interface
The upper section of the interface creates the initial sequence (see Fig. A.48). The number box
labelled “nBeats” determines the number of steps/beats in the sequence; this can then be altered
using the number box labelled “sequenceLength,” the value of which is multiplied by the number
of beats to give the total number of steps in the sequence. The number box labelled “notes” sets
the number of possible values for each step. The “live.tab” interface at the very top of the interface
gives 4 basic preset sequences which can be created using the combination of values from the three
number boxes described above:
• off
– This sets all of the steps (nBeats * sequenceLength) to 0
• modulo
– This sets the first step to 1; the value of each subsequent step in-
creases by 1 until the value of a step is equal to the number of ‘notes,’
at which point the count returns to 1 for the subsequent step.
• stop
– The first step is set to 1; the value of each subsequent step increases
by 1 until the value of a step is equal to the number of ‘notes,’ after
which point all the steps are set to 0
• random
– Each step is set to a random number between 1 and the number of
‘notes.’
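The four basic presets above can be sketched in Python. This is an illustrative model (the function and mode names are hypothetical), where the step count corresponds to nBeats * sequenceLength and ‘notes’ is the number of possible values per step.

```python
import random

def preset_sequence(mode, n_steps, n_notes, rng=random.randint):
    """Build one of the four basic sequence presets.

    n_steps -- nBeats * sequenceLength
    n_notes -- the number of possible values for each step
    """
    if mode == 'off':
        return [0] * n_steps                            # all steps silent
    if mode == 'modulo':                                # 1..n_notes, cycling
        return [(i % n_notes) + 1 for i in range(n_steps)]
    if mode == 'stop':                                  # count up once, then 0s
        return [i + 1 if i < n_notes else 0 for i in range(n_steps)]
    if mode == 'random':                                # 1..n_notes, random
        return [rng(1, n_notes) for _ in range(n_steps)]
    raise ValueError(mode)
```

With 6 steps and 3 ‘notes,’ ‘modulo’ gives [1, 2, 3, 1, 2, 3] and ‘stop’ gives [1, 2, 3, 0, 0, 0].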
Figure A.49: Lower section of AAIM.sequenceMaker interface
The lower section of the interface, labelled “output sequence,” provides controls which allow
further manipulation of the sequence (see Fig. A.49). The “live.tab” on the right of this section of
the interface provides more advanced manipulations of the sequence than those afforded by the
upper section of the interface:
• original
– This sets the ‘output sequence’ to be identical to that of the ‘original
sequence.’
• shuffle
– This divides the sequence into groups of 2/3 steps and rearranges
these groups randomly
• undoShuffle
– This returns the ‘output sequence’ to the sequence which was in use
immediately before the shuffle operation was used.
• rotate
– This shifts all of the values one step to the right, and places the last
step at the beginning of the sequence.
• retrograde
– This reverses the sequence of values - placing last value first, first
value last, etc.
• invert
– This inverts each of the values - i.e. sets a value of 1 to equal the
maximum value, sets a maximum value to equal 1, etc.
• retrogradeBar
– This manipulation results in a retrograde of each ‘bar’ in the sequence
- where a ‘bar’ is equal to ‘nBeats’ (see Fig. A.48).
• palindrome
– This manipulation doubles the length of the sequence by adding a
retrograde of the sequence to the end of the sequence.
• generatePattern
– This final manipulation works in conjunction with the two number
boxes labelled “generatePattern Length” and “generatePattern Pause
Ratio.” It produces a ‘shuffled’ version of the sequence where the
possibility of pauses being included in the pattern equals the “gen-
eratePattern Pause Ratio,” and the length is equal to:
· (nBeats * sequenceLength * generatePattern Length)
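Most of the manipulations above have simple list-level definitions, sketched below in Python. These are illustrative implementations (function names are hypothetical), following the descriptions given: 0 is treated as a pause and left untouched by ‘invert.’

```python
def rotate(seq):
    """Shift all values one step to the right; the last step moves to the front."""
    return seq[-1:] + seq[:-1]

def retrograde(seq):
    """Reverse the sequence of values (last value first, first value last)."""
    return seq[::-1]

def invert(seq, n_notes):
    """Map 1 to the maximum value, the maximum to 1, etc.; 0 stays a pause."""
    return [n_notes + 1 - v if v else 0 for v in seq]

def retrograde_bar(seq, n_beats):
    """Reverse each 'bar' of n_beats steps independently."""
    return [v for i in range(0, len(seq), n_beats)
            for v in reversed(seq[i:i + n_beats])]

def palindrome(seq):
    """Double the length by appending the sequence's retrograde."""
    return seq + seq[::-1]
```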
A.1.6 AAIM.FXModule1
AAIM.FXModule1 is a patch which enables the creation of FX chains using 7 basic DSP algo-
rithms:
1. reverb
2. delay
3. bitcrusher
4. filter
5. flanger
6. chorus
7. harmoniser
To open the AAIM.FXModule1 interface, press the button labelled “FX” beside the gain slider
associated with the module you want to play through the effect (e.g. see Fig. A.46). Use the number
box at the top left of the interface to set the number of modules in the FX chain, and then use the
corresponding drop down menus to choose each FX module.
———————————— NOTE ————————————
All of the FX modules are scripted (i.e. created) on request; as such, it is not advisable to create
FX chains live during performances.
A.1.7 AAIM.polygonInterface
Figure A.50: AAIM.polygonInterface
Figure A.51: Presets section of the AAIM.polygonInterface
The AAIM.polygonInterface patch (see Fig. A.50) allows interpolation between n presets or
settings using the mouse, and displays this as a geometric shape with (n - 1) sides. The initial
setting uses 7 presets, as this creates a 6-sided polygon in which every corner is equidistant from
the centre point and from the corners immediately clockwise and anticlockwise of it.
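The interpolation behaviour can be sketched in Python. The AAIM patch's exact weighting scheme is not specified here, so the inverse-distance model below is an assumption for illustration only (function and parameter names are hypothetical): each corner's preset is weighted by the inverse of its distance from the mouse position.

```python
import math

def interpolate_presets(presets, corners, point):
    """Inverse-distance weighted mix of presets placed at polygon corners.

    presets -- list of equal-length value lists, one per corner
    corners -- list of (x, y) corner positions
    point   -- current (x, y) mouse position
    """
    weights = []
    for cx, cy in corners:
        d = math.hypot(point[0] - cx, point[1] - cy)
        if d == 0:                        # exactly on a corner: use it alone
            weights = [1.0 if (cx, cy) == c else 0.0 for c in corners]
            break
        weights.append(1.0 / d)
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, presets)) / total
            for i in range(len(presets[0]))]
```

Landing exactly on a corner reproduces that corner's preset; points in between blend neighbouring presets smoothly.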
Pressing the button labelled “Presets” opens the AAIM.polygonInterface preset subpatch (see
Fig. A.51).
• The number of presets, and thus sides of the polygon, can be changed using the
number box labelled “nPresets.”
• The number of values stored in each preset can be changed using the number box
labelled “nValues.”
———————————— NOTE ————————————
Changing the number of values in each preset is not advisable unless you are using
the patch within your own Max patch, because within the AAIM system all of the
mapping, and thus the number of values, is preprogrammed.
• Each preset is represented by one of the multislider objects at the bottom of the
interface.
A.2 Examples
A.2.1 AAIM.patternVary Example
The AAIM.patternVary example demonstrates some of the basic features and setup of both the
AAIM.patternVary (see Sec. A.1.2) and AAIM.rhythmGen (see Sec. A.1.1) modules.
A.2.1.1 Loading the Example
• Press the button labelled “read” on the top right corner of the AAIM system inter-
face (see Fig. A.6), and use the resulting dialogue to open the
AAIM patternVary Example.json file.
• Press the red button directly to the right of the button labelled “read” twice:
– The first press loads the modules.
– The second press loads the basic pattern.
• Press the button labelled “patternVary Interface” on the AAIM system (see Fig. A.1)
to open the patternVary interface and view the drum pattern (a transcription of the
first bar of the Amen Break2).
• Turning on the system (press the esc key on your keyboard, or press the button with
the image of a speaker on the AAIM system performance interface) will result in
the pattern being played using the “AU DLS Synth.”
2http://en.wikipedia.org/wiki/Amen_break
A.2.1.2 Performance and Improvisation
• The polygon interface (see Sec. A.1.7) on the main window/performance interface
(see Sec. A.0.2) can then be used to create variations of the original pattern. The
presets in use by the polygon interface are as follows:
1. Original Pattern.
2. Much “fuller” version (i.e. maximum “extra” value) with maximum com-
plexity (i.e. a lot of variation in the i-o-i’s3 used).
3. Slightly less “full” version than polygon preset 2, but still with maximum
complexity.
4. Same as preset 3, but with more “rests” inserted into the pattern.
5. No variation in i-o-i’s (i.e. “complexity”), few “extra” notes added to pat-
tern, and maximum number of “rests.”
6. Same as preset 5, but more “extra” notes added into the pattern.
7. No variation in i-o-i’s used or “rests” inserted into pattern, but
maximum “extra” notes.
3inter-onset-intervals
A.2.2 AAIM.melodyVary Example
The AAIM.melodyVary example demonstrates some of the basic features and setup of both the
AAIM.melodyVary (see Sec. A.1.4) and AAIM.rhythmGen (see Sec. A.1.1) modules.
A.2.2.1 Loading the Example
• Press the button labelled “read” on the top right corner of the AAIM system inter-
face (see Fig. A.6), and use the resulting dialogue to open the
AAIM melodyVary Example.json file.
• Press the red button directly to the right of the button labelled “read” twice:
– The first press loads the modules.
– The second press loads the basic pattern.
• Press the button labelled “melodyVaryMultiVoice Interface” on the AAIM Setup win-
dow (see Fig. A.33) to open the AAIM.melodyVaryMultiVoice interface (see Sec.
A.1.4.1). Each of the 4 individual melodic lines which comprise the melody (a tran-
scription of the opening of Erik Satie’s Gymnopédie No. 14) can be viewed by
pressing the numbered buttons along the top of this interface.
• Turning on the system (press the “esc” key on your keyboard, or press the button
with the image of a speaker on the AAIM system performance interface) will result
in the pattern being played using the “AU DLS Synth.”
A.2.2.2 Performance and improvisation
• The polygon interface (see Sec. A.1.7) on the main window/performance interface
(see Sec. A.0.2) can then be used to create variations of the original pattern. The
presets in use by the polygon interface are as follows:
1. Original Pattern.
4https://en.wikipedia.org/wiki/Gymnop%C3%A9dies
2. Maximum melodic variations, and some variation in i-o-i’s5 used.
3. Maximum melodic variations and maximum variations in i-o-i’s used.
4. Some melodic variations, but maximum variations in i-o-i’s used and max-
imum number of rests inserted.
5. No variation in i-o-i’s (i.e. “complexity”), but maximum melodic varia-
tions and maximum number of “rests.”
6. Same as preset 5, but fewer “rests.”
7. No variation in i-o-i’s used or “rests” inserted into pattern, but
maximum melodic variations.
5inter-onset-intervals
A.2.3 “Jam Band” Preset Example
The “Jam Band” example demonstrates the use of Section Presets (see Sec. A.0.3) to change:
• Melodic and rhythmic patterns.
• Variation settings within the AAIM.rhythmGen (see Sec. A.1.1),
AAIM.patternVary (see Sec. A.1.2), and AAIM.melodyVary (see Sec.
A.1.4) modules.
• Time signature.
It also demonstrates the use of the following modules:
• AAIM.samplePlayer2 (see Sec. A.1.2.6).
• AAIM.sequenceMaker (see Sec. A.1.5).
• AAIM.synth (see Sec A.1.4.6).
• AAIM.FXModule1 (see Sec. A.1.6).
The example is also intended to show how the AAIM system can be used in the creation
and performance of an entire piece of music.
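A section preset of this kind bundles patterns, variation settings, and a time signature into a single recallable unit. The sketch below is a hypothetical illustration of that grouping; the field names are assumptions for illustration, not the AAIM data format.

```python
from dataclasses import dataclass, field

# Hypothetical grouping of the state a section preset recalls.
# Field names are illustrative assumptions, not AAIM's storage format.
@dataclass
class SectionPreset:
    name: str
    time_signature: tuple                                   # e.g. (7, 8)
    melodic_patterns: dict = field(default_factory=dict)    # voice -> notes
    rhythmic_patterns: dict = field(default_factory=dict)   # drum -> pattern
    variation_settings: dict = field(default_factory=dict)  # module -> params

# Recalling a preset then amounts to pushing one such object's
# contents to the relevant modules in a single step.
intro = SectionPreset("basic drum pattern", (4, 4))
```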
A.2.3.1 Loading the Example
• Press the button labelled “read” on the top right corner of the AAIM system inter-
face (see Fig. A.6), and use the resulting dialogue to open the
AAIM technoJamBand.json file.
• Press the red button directly to the right of the button labelled “read” at least three
times to load all of the modules and patterns.
• Turning on the system (press the “esc” key on your keyboard, or press the button
with the image of a speaker on the AAIM system performance interface) will result
in the first section preset (see Sec. A.2.3.3) being played.
A.2.3.2 Performance and improvisation
• During each preset section (see Sec. A.2.3.3) the AAIM.polygonInterface (see Sec.
A.1.7 and A.0.2 and Fig. A.6) can be used to create variations of the basic material,
and in all cases the polygon presets used roughly equate to:
1. Original material
2. Melodic variations, additional drum hits, some variation in i-o-i’s used (i.e.
rhythmic complexity).
3. Melodic variations, additional drum hits, maximum variation in i-o-i’s used.
4. Some melodic variations, additional drum hits, maximum variation in i-o-
i’s used, and maximum ‘rests’ inserted into pattern.
5. Some melodic variations, additional drum hits, and maximum ‘rests’ in-
serted into pattern, but no (or little) variation in i-o-i’s used.
6. Some melodic variations, additional drum hits, and ‘rests’ inserted into
pattern, but no variation in i-o-i’s used.
7. Melodic variations, additional drum hits, but no ‘rests’ inserted into pat-
tern, and no variation in i-o-i’s used.
• Time signature changes occur most naturally at the very end/beginning of a phrase
(i.e. 4 bar loop).
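Deferring a time-signature change to the end of the current phrase (here a 4-bar loop) amounts to quantising the change point to the next phrase boundary. The sketch below illustrates that scheduling idea; it is not the AAIM code, and the bar-indexing convention is an assumption.

```python
def next_phrase_boundary(current_bar, bars_per_phrase=4):
    """Bar index at which a pending time-signature change should take
    effect: the start of the next phrase (here a 4-bar loop).
    Bars are numbered from 0; this convention is an assumption."""
    # e.g. a change requested during bar 5 is deferred to bar 8
    return ((current_bar // bars_per_phrase) + 1) * bars_per_phrase
```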
A.2.3.3 Form
There are 10 “section presets” in total, divided loosely into 3 parts by time signature:
————————————– 4/4 or 8/8 ————————————–
1. The basic drum pattern
2. Basic drum pattern with additional cymbal hits
3. Bass line added (AAIM.melodyVary voice 1)
4. Melodic line added (AAIM.melodyVary voice 2)
5. Harmonic (“pad” like sound) voices added (AAIM.melodyVary voices 3 and 4)
• Melodic line (AAIM.melodyVary voice 2) is treated as the “soloist” - i.e.
lots of rhythmic fills and melodic variations.
6. Same material as Preset 5, but with fewer simultaneous drum hits allowed
• Uses the same material but treats the drum pattern (i.e. the AAIM.patternVary
pattern) as soloist.
————————————– 7/8 ————————————–
7. Variation of material using only slightly different melodic lines, but with melodic mode
rotated (see Sec. A.1.4.2) to result in a major key version of the melody.
8. Melodic “answer” to the melodic “call” of preset 7.
• One snare drum plays in double time while the melodic line plays at half
time.
• Melodic line treated as soloist.
————————————– 10/8 ————————————–
9. Slight variation on original material, but with melodic mode rotated (see Sec. A.1.4.2) to
result in a second minor key version of the melody.
• Bass line played in double time
• Slightly more emphasis on variations of drum pattern than melodic lines
10. Same material as preset 9, but with melodic mode rotated again (see Sec. A.1.4.2) to result
in a third minor key version of the melody; a melodic “answer” to the melodic “call” of
preset 9.
• Bass line played in double time
• Melodic line played in half time
• Even more emphasis on variations of drum pattern than melodic lines
Example Progression
——————— Introduction of the basic material ———————
• 1 (1 bar)
• 2 (1 bar)
• 3 (2 bars)
• 4 (4 bars)
——————— Introduction of the secondary material ———————
• 9 (4 bars)
• 10 (4 bars)
——————— Variation of the basic material ———————
• Alternation between 6 and 5 (n*4 bars w/variations)
——————— Variation of the secondary material ———————
• 9 (4 bars w/variations)
• 10 (4 bars w/variations)
——————— Introduction and variation of tertiary material ———————
• 7 (4 bars no variations)
• 8 (4 bars no variations)
• Repeat w/variations
——————— Return and variation of the basic material ———————
• Alternation between 6 and 5 (n*4 bars w/variations)
——————— Return of secondary material ———————
• 9 (4 bars no variations)
• 10 (4 bars no variations)
——————— Return of basic material and outro ———————
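The progression above can be written out as an ordered list of (section preset, bar count) pairs and expanded into a bar-by-bar timeline. The sketch below illustrates this for the fixed opening of the form; it is an illustration only, and the repeat counts (n) for the alternating sections are left open, as they are above.

```python
# Illustrative encoding of the fixed opening of the example
# progression as (section preset number, number of bars) pairs.
INTRO = [(1, 1), (2, 1), (3, 2), (4, 4)]
SECONDARY = [(9, 4), (10, 4)]

def timeline(sections):
    """Expand (preset, bars) pairs into one preset number per bar,
    e.g. to drive section-preset recall bar by bar."""
    return [preset for preset, bars in sections for _ in range(bars)]
```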