MSc Arts Computing Project plan - Modelling creative use of rhythm DSLs

Alex McLean
3rd May 2006

Early draft - while supervisor Prof. Geraint Wiggins has contributed both ideas and guidance from the start of this project, he has not yet been party to this text.

1 Introduction

This project crosses three primary disciplines: Music, Artificial Intelligence (AI) and Computer Science (CS). For example, it visits rhythm and performance from the area of Music, symbolic modelling and computational creativity from AI, and Domain Specific Languages (DSLs) and online programming from CS. The broad aims of the project are to specify a language in which rhythm may be expressed, to model human creative use of that language, and to use that model to make interventions during musical performance. These aims are described in greater detail in section 2, after the following brief description of the motivations behind this project.

1.1 Motivations

This project is borne out of a number of desires, including the following.

1.1.1 To program rhythm more expressively

General Purpose Languages (GPLs, see section 3.1) are often frustrating languages in which to describe music. Even a trained touch-typist might take up to a minute to express a rhythmic idea as part of an improvised performance, engaging several loops, conditionals and other statements in order to describe something quite straightforward in rhythmic terms. Worse, these expressions often take similar forms, but not always in a way that can be sensibly abstracted into clear procedures in a library or class file.
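
To make the contrast concrete, the following fragment is a purely illustrative sketch (in Perl, the language of the later examples) of the kind of housekeeping a GPL demands just to lay out a sixteen-tick loop with an event every fourth tick; the DSL described in section 3.2 expresses the same pattern as 16 pulse 4.

  use strict;
  use warnings;

  # Lay out a 16 tick loop with a sound event every fourth tick by hand,
  # printing 'x' for an event and '.' for a rest.
  my $length = 16;
  my @ticks;
  for my $tick (0 .. $length - 1) {
      push @ticks, ($tick % 4 == 0) ? 'x' : '.';
  }
  print join('', @ticks), "\n";    # x...x...x...x...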

1.1.2 To model individual creativity

This is really two motivations in one: firstly a desire to preserve a style or creative process for posterity, and secondly to free oneself from going through the same motions over and over again. Once an adequate model is established it can become a starting point for a new creative development. (To take this train of thought further, designing software that finds new creative developments itself would be a fine goal indeed, but would most likely prove to be beyond the scope of this project.)

1.1.3 To understand creativity

Whether trained or self-taught, programmers and musicians become thoroughly absorbed in the language of their craft. Sooner or later the question arises: what am I doing? The feeling of not understanding one's own methods can be unsettling. This motivation follows from the previous one: once we have a model, we can attempt to understand it.

2 Proposal

2.1 Review

The review section of this project will be lengthy, drawing from many areas including those mentioned in the introduction. An initial reading list is given in the references section of this proposal, which will help towards a grounding in the area of Intelligent Sound and Music Systems in which I will work. Due to the cross-disciplinary nature of the project, the review will to some extent be combinatorial, reviewing key relevant papers in each area, and likewise key papers connecting each pair of areas.

2.2 Demonstrators

This project will demonstrate:

1. A language specific to the problem domain of composing and improvising rhythm.

2. An on-line method for recording and modelling human use of this language (a toy sketch of one possible approach follows at the end of this section).

3. A live algorithm, using the on-line model to make its own live changes to a rhythm; these changes could be evaluated by a human performer and then fed back into the model. (For more about live algorithms, see http://homepages.gold.ac.uk/michaelyoung/lamweb/)

There is both dependency and conflict between the first step and the second and third steps. The dependency is clear: to model use of a language one must first define it. However, if the language is too complex, modelling it would be outside the scope of an MSc project. It is therefore necessary for the first step to aim for simplicity rather than comprehensiveness. Following work could examine the modelling of, and creativity within, more complex grammars.

The three parts of the project will hang together as an on-line system for improvising rhythm. A live human performer can perform rhythms with it, and have their use of the language during the performance modelled and hopefully enhanced with computational interventions. These interventions will be controllable and perhaps graded to provide feedback to improve the model.
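
As a toy illustration of the second demonstrator, and nothing more than that, the sketch below logs each statement a performer enters and counts first-order transitions between successive statements, from which a crude suggestion for a next statement can be sampled. The statement format and the choice of a simple Markov-style model are assumptions made for illustration; the project may well adopt richer statistical models such as the PPM variants cited in the references.

  use strict;
  use warnings;

  # Toy on-line model of language use: count how often one statement has
  # followed another, then sample a plausible successor to the last one.
  my %transitions;   # $transitions{$previous}{$next} = count
  my $previous;

  # Record one statement as it is entered during a performance.
  sub record {
      my ($statement) = @_;
      $transitions{$previous}{$statement}++ if defined $previous;
      $previous = $statement;
  }

  # Suggest a next statement in proportion to how often it has followed
  # the most recent one; returns nothing if no history has been seen.
  sub suggest {
      return unless defined $previous and $transitions{$previous};
      my %counts = %{ $transitions{$previous} };
      my $total  = 0;
      $total += $_ for values %counts;
      my $pick = rand($total);
      for my $statement (keys %counts) {
          $pick -= $counts{$statement};
          return $statement if $pick <= 0;
      }
  }

  # A fragment of an imagined performance history.
  record($_) for "16 pulse 4", "16 pulse 4 offset 2",
                 "16 pulse 4 depulse 3 offset 2", "16 pulse 4 offset 2";
  print suggest(), "\n";   # the only statement seen to follow the last one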

2.3 Evaluation

This part of the project will evaluate the success or otherwise of the demonstrators. It is difficult to define this section so early in the project, but evaluation could entail seeking to

- establish that creative aspects of individuals are being modelled, perhaps by comparing the results of different musicians to look for different patterns of use, and/or by conducting tests to see whether listeners can distinguish a musician's use of the system from output generated by their computer model;

- establish that the system makes worthwhile interventions, by asking human performers to grade its performance.

To allow proper evaluation of the results, in a proportion of the tests the model could be replaced with choices made directly by a performer, or with arbitrary choices that aren't based on the behaviour of the performer at all. In each case the evaluation would in practice seek to disprove the thesis, in order to test it as thoroughly as possible.

3 Appendices

3.1 Domain Specific Languages

A Domain Specific Language (DSL) is a compact language expressive within a certain domain. For example, the well known Structured Query Language (SQL) is a DSL for describing operations on database records. DSLs are contrasted with General Purpose Languages (GPLs) such as Java, Lisp and Perl. DSLs are less comprehensive than GPLs, but are much more expressive within their domain. DSLs are characterised by having some notion of control flow and state memory, but not the general purpose constructs of conditionals, loops or recursion. That said, in practice a DSL may almost by accident acquire both loops and conditionals; these features would however be designed for certain uses, and using them generally would be considered misuse. For example, sendmail.cf is a configuration file for a mail server, and has reached such complexity that it is notionally Turing complete (bar the lack of infinite storage); however, writing a useful program in sendmail.cf would be very difficult, and understanding such a program more difficult still.

Developing DSLs rather than using GPLs allows potential for:

- faster development;
- programs that are more easily maintained;
- programs that are more easily understood and reasoned about.

The final point is perhaps the most interesting, as it may be applied to artificial as well as human reasoning. If software reasons within a DSL, then surely it stands a better chance of coming up with useful answers?
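
To illustrate the trade-off, the fragment below shows the same selection twice: once as an SQL statement, quoted purely for comparison of notation, and once as the explicit looping and testing a GPL requires over an in-memory stand-in for the table. The table and field names are invented for the example.

  use strict;
  use warnings;

  # The selection expressed in the DSL; quoted here only to compare notation.
  my $sql = "SELECT name FROM patterns WHERE length = 16 ORDER BY name";

  # The same selection in a GPL: explicit looping, testing and sorting
  # over an in-memory stand-in for the database table.
  my @patterns = (
      { name => 'four to the floor', length => 16 },
      { name => 'clave',             length => 12 },
      { name => 'off beats',         length => 16 },
  );

  my @names;
  for my $row (@patterns) {
      push @names, $row->{name} if $row->{length} == 16;
  }
  print "$_\n" for sort @names;   # four to the floor, off beats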

3.1.1 Embedded DSLs

DSLs are often embedded in GPLs. For example, comprehensive languages such as Perl allow complex pattern matching and text manipulation operations to be expressed compactly as regular expressions. In the following example, some Perl code passes a string to be manipulated by a regular expression, leaving $string containing "what a fine morning, world".

  my $string = "good morning, world";
  $string =~ s/^good (morning|evening)/what a fine $1/;

Similarly, the aforementioned SQL is commonly embedded in GPLs. Another approach is to pare down a GPL into a DSL, a technique common among functional ML and Lisp programmers [11].

3.2 My first rhythm DSL

Here follows the specification of a simple rhythm DSL. It is not intended as an example of a well-developed, fully expressive language, but it has nonetheless already shown some usefulness during live performances.

The length of the pattern can be specified as an integer at the start of the rhythm description. If the integer isn't specified then the pattern will be of infinite length, although as this simple language allows only limited complexity, the pattern will repeat sooner or later.

The language comprises just two verbs, pulse and depulse. Both take one parameter, an interval. So for example

  16 pulse 4

will result in a loop of 16 ticks, with a sound event (denoted by 'x') every fourth tick starting from the first tick:

  x...x...x...x...

An offset modifier can be added to start from a different point, for example

  16 pulse 4 offset 2

results in this pattern:

  ..x...x...x...x.

depulse works the same way, but wipes any events every given number of ticks. In this example

  16 pulse 4 depulse 3 offset 2

the depulse will wipe any event occurring on every third tick, with an offset of two ticks. It will therefore wipe the event placed on the ninth tick by the previous pulse instruction, thus:

  x...x.......x...

Finally, pulse, depulse and offset have the shorter aliases '.' (full stop), '!' and '+' respectively, and whitespace is optional. This allows patterns to be expressed compactly; for example

  32.4!3+2.1+26

produces:

  x...x.......x...x.......x.xxxxxx

3.2.1 Discussion

This language has great limitations: all it specifies is whether a sound plays at each discrete position within a rigid time structure. It offers no access to synthesis parameters or other means of controlling the sound, and does not allow deviations in timing to be described. However, once a pattern is made, changes in timbre, pitch and timing can be imposed by some other process. This is a particular advantage of embedded DSLs as described in section 3.1.1: any shortcomings of a DSL can be taken up by the GPL it is embedded within. Even so, there are clearly strong interrelations between a sound's position in time and its timbre, and it would be useful to be able to express both in the same language. While developing a language is not envisaged to be the main thrust of this project, further experimentation in this direction will likely follow during the project's course, and may find its way into an appendix of the final project report.
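
As a concreteness check on the specification above, here is a minimal sketch of an interpreter for the language in Perl. It is not the implementation used in performance; in particular, counting ticks from zero when applying offsets and falling back to a finite length for unbounded patterns are assumptions made to match the examples given.

  use strict;
  use warnings;

  # Render a rhythm description to a string of 'x' (event) and '.' (rest).
  sub render {
      my ($source) = @_;

      # Expand the terse aliases into the long forms.
      $source =~ s/\./ pulse /g;
      $source =~ s/!/ depulse /g;
      $source =~ s/\+/ offset /g;

      my @words  = split ' ', $source;
      my $length = ($words[0] =~ /^\d+$/) ? shift @words : 64;  # finite stand-in for 'infinite'
      my @ticks  = ('.') x $length;

      while (@words) {
          my $verb     = shift @words;
          my $interval = shift @words;
          my $offset   = 0;
          if (@words and $words[0] eq 'offset') {
              shift @words;
              $offset = shift @words;
          }
          for (my $tick = $offset; $tick < $length; $tick += $interval) {
              $ticks[$tick] = ($verb eq 'pulse') ? 'x' : '.';
          }
      }
      return join '', @ticks;
  }

  print render("16 pulse 4"),          "\n";   # x...x...x...x...
  print render("16 pulse 4 offset 2"), "\n";   # ..x...x...x...x.
  print render("32.4!3+2.1+26"),       "\n";   # x...x.......x...x.......x.xxxxxx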

3.3 Live Programming

As non-live, off-line programming is so well established in many areas of computer science, its shortcomings are first underlined here. The cycle of making a change, reacting to the results and then deciding upon further changes will be familiar to many artists, for example a painter at a canvas. In normal practice, however, software is developed in a looser cycle of writing and modifying sourcecode, compiling it to machine code, executing it, examining the results and then returning to modify the sourcecode to fix bugs or develop it further. The difference is that while a painting exists before the painter as a metaphor for a living system, the software does not; sourcecode only comes alive once it is compiled and executed. Before seeing and understanding the impact of a change, the programmer has to switch roles to that of a user, with some pause enforced by the compilation phase.

For an artistic programmer (if there is any other kind) the problem is clear: they are separated from their art even while they are making it. Whereas a painter may skip between their imagination and their painting, a programmer must not only wait each time for their computer to catch up, but also navigate through their software to the particular state that they wish to examine. If this still does not look like a serious problem, consider the programmer working before an audience, writing software to make music for them. If a recompile is necessary each time a change is to be heard, then the flow and progress of the music will have to be interrupted. In short, this is no way to improvise.

Live programming, also known as on-the-fly or interactive programming, avoids these problems by placing the programmer directly in the execution phase. Taking advantage of the flexibility of interpreted programming languages, live programmers enact changes to a program while a computer executes it. Such live changes need not result in any loss of state data; the computer simply continues, following the newly replaced instructions. While live programming has a history going back to the 1980s, it is only recently that broad exploration and discussion have begun to take place under the banner of The Organisation for the Proliferation of Live Algorithm Programming (TOPLAP) [13]. TOPLAP members practise live programming for both music and the visual arts, using a wide range of conventional and self-built technologies.

References

[1] Boden, The Creative Mind: Myths and Mechanisms (2nd ed.), Routledge, 2004.

[2] Wiggins & Smaill, Musical Knowledge: what can Artificial Intelligence bring to the musician?, chapter in Readings in Music and Artificial Intelligence, ed. Miranda, Harwood Academic Publishers, 2000.

[3] Ferrand, Nelson & Wiggins, A Probabilistic Model for Melody Segmentation, Electronic Proceedings of the 2nd International Conference on Music and Artificial Intelligence (ICMAI 2002), 2002.

[4] Pearce & Wiggins, Rethinking Gestalt influences on melodic expectancy, Proceedings of ICMPC-8, ed. S. D. Lipscomb, R. Ashley, R. O. Gjerdingen and P. Webster, pp. 367-371, 2004.

[5] Rutherford & Wiggins, An Experiment in the Automatic Creation of Music which has Specific Emotional Content, 7th International Conference on Music Perception and Cognition, Sydney, Australia, 2002.

[6] Pearce & Wiggins, Improved Methods for Statistical Modelling of Monophonic Music, Journal of New Music Research, 2003.

[7] Pearce & Wiggins, An empirical comparison of the performance of PPM variants on a prediction task with monophonic music, Proceedings of the AISB'03 Symposium on Creativity in Arts and Science, 2003.

[8] Ponsford, Wiggins & Mellish, Statistical Learning of Harmonic Movement, Journal of New Music Research, 1999.

[9] Ferrand, Nelson & Wiggins, Memory and Melodic Density: A Model for Melody Segmentation, Proceedings of the XIV Colloquium on Musical Informatics (XIV CIM 2003), pp. 95-98, 2003.

[10] Pearce, Conklin & Wiggins, Methods for combining statistical models of music, Music Modelling and Retrieval, ed. Wiil, pp. 295-312, 2004.

[11] Hudak, Modular Domain Specific Languages and Tools, Proceedings of the Fifth International Conference on Software Reuse, 1998.

[12] Zanette, Zipf's law and the creation of musical context, 2006.

[13] Ward, Rohrhuber, Olofsson, McLean, Griffiths, Collins & Alexander, Live Algorithm Programming and a Temporary Organisation for its Promotion, Proceedings of the READ ME Software Art Conference, 2004.

[14] Le Poidevin, Robin, The Experience and Perception of Time, The Stanford Encyclopedia of Philosophy (Winter 2004 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2004/entries/timeexperience/>.