
PLAYING WITH TENSION
PRASHANTH THATTAI RAVIKUMAR
NATIONAL UNIVERSITY OF SINGAPORE
2015

PLAYING WITH TENSION
GENERATING MULTIPLE VALID ACCOMPANIMENTS FOR THE SAME LEAD PERFORMANCE

PRASHANTH THATTAI RAVIKUMAR
B.Tech, National Institute of Technology, Trichy, 2012

A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF ARTS
COMMUNICATIONS AND NEW MEDIA
NATIONAL UNIVERSITY OF SINGAPORE
2015


Acknowledgment

Foremost, I would like to express my sincere gratitude to my supervisors Prof. Lonce Wyse and Prof. Kevin McGee for their continuous support, patience, motivation, enthusiasm and immense knowledge in guiding me to learn and do research. "To define is to limit": I cannot quantify the knowledge that I have learned from them in the past two years. Their constant guidance, support and dedication have been an immense inspiration for me to finish this dissertation. Besides my supervisors, I would like to thank Dr. Srikumar Karaikudi Subramanian, who has been a friend, a mentor and a person to look up to. I will long cherish the memorable coffee chats that have led to so many new insights about the thesis, music and varied things in life. I thank my fellow lab mates from the Partner Technologies group, Dr. Alex Mitchell, Teong Leong, Chris, Jing, Evelyn, and Kakit, for their stimulating discussions every week. Our weekly meetings were a ton of fun in terms of discussing and learning diverse perspectives on doing research. I thank the faculty, the staff and the graduate students of the Communications and New Media department for supporting and housing me as a graduate student for the last two years. I thank the musicians, Dr. Ghatam Karthik, Mr. Trichur Narendran, Mr. Arun Kumar, Mr. Sumesh Narayan, Mr. Sriram, Mr. Hari, Mr. Shrikanth, Mr. Santosh and all others who have imparted their musical knowledge to help my understanding of the genre. This thesis could not have progressed as much as it has, if not for the musical insights and inspirations that I drew from our group music jamming sessions. I take this moment to thank my close friends and music collaborators - Vinod, Vishnu, Lakshmi Narasimhan, Prasanna and Arun - who have enhanced my musical growth and helped me achieve the insights that I have in this thesis. I thank my close friends Shyam and Kameshwari, who have been a constant source of support during the tough times. I thank my friend Akshay for the intellectually stimulating conversations, and for his timely help during the thesis revisions. I thank Spatika Narayanan for her help in proof-reading the document. Last but not least, I would also like to thank my family.

March 20, 2015

Name: Prashanth Thattai Ravikumar
Degree: Master of Arts
Supervisor(s): Associate Professor Kevin McGee, Associate Professor Lonce Wyse
Department: Communications and New Media
Thesis Title: Playing with Tension: Generating multiple valid accompaniments for the same lead performance

Abstract

One area of research interest in computational creativity is the development of interactive music systems that are able to perform variant, valid accompaniment for the same lead performance. Although previous work has tried to solve the problem of generating multiple valid accompaniments for the same lead input, success has been limited. Broadly, retrieval-based music systems use static databases and produce accompaniment that is too repetitive; generation-based music systems that use hand-coded grammars are less repetitive, but have a more limited range of pre-defined accompaniment options; and finally, transformation-based music systems produce accompaniment choices which are predictably valid for only a few cases. This work goes beyond the existing work by proposing a model of choice generation and selection that generates multiple valid accompaniment choices given the same input. The model is applied to generate secondary percussive accompaniment to a lead percussionist in a Carnatic improvisational ensemble. The central insight (the main original contribution) is that the generation of valid alternate variations of secondary accompaniment can be accomplished by formally representing the relationship between lead and accompaniment in terms of musical tension. By formalizing tension ranges for acceptable accompaniment, an algorithmic system is able to generate alternate accompaniment choices that are acceptable in terms of a restricted notion of sowkhyam (roughly, musical consonance). In the context of this thesis, restricted sowkhyam refers to the sowkhyam of accompaniment considered independent of the secondary performer (and his creativity).

The research proceeded in three stages. First, Carnatic music performances were analyzed in order to model the performance structures and improvisation rules that provide the freedom and constraints in secondary percussion playing. Second, based on the resulting tension model, a software synthesis system was implemented that can take a transcribed selection of a Carnatic musical performance and algorithmically generate new performances, each with different secondary percussion accompaniment that meet the criteria of restricted sowkhyam. Third, a study was conducted with six expert participants to evaluate the results of the synthesis. The main contribution of this thesis is the development and validation of a tension model that, assuming restricted sowkhyam, is able to generate alternate variations of secondary accompaniment that are as valid as the original accompaniment.

Keywords: Carnatic rhythmic improvisation, Improvisational accompaniment

Contents

1 Introduction
   Structure of this document
2 Related work
   Retrieval-based music systems
      Retrieval from a database
      Retrieval using dynamic learning models
   Generation-based music systems
      Hand-coded grammars
      Online learning of grammars
   Transformation-based music systems
      Transformation function is pre-given
      User selects the transformation function
3 Research problem
   Summary of the related work
   Proposed solution
4 Method
   Analysis of the Carnatic musical performances
   Model development
   Evaluating the tension model
   System development
5 Background: Carnatic quartet performance
   Overview
   Musical structures
   Choices in different styles of accompaniment playing
   Musical actions in the improvisation
      Major variations
      Minor variations
6 System: design criteria & constraints
   Research/Implementation model
   Lead percussionist: improvisation and variation
   Secondary percussionist: accompaniment and variation
7 Possible approaches
   The Direct Mapping model
   The Horizontal Continuity model
8 The tension model
   Tension model applied to secondary playing
   Tension model applied to generate multiple accompaniments
9 Tension synthesis protocol
   Choose Carnatic performance recording
   Choose a sixteen-bar sample of performance recording
   Transcribe the sixteen-bar selection
      Transcribing double hits
      Transcribing hit loudness
      Transcribing rhythmic repetition of bars
   Compute tension scores for each hit
   Compute tension scores for each beat
   Compute tension range for each bar
   Generate all viable accompaniment sequences
      Enumerate all unique triplet values for each beat
      Collect all viable 8-beat (1-bar) sequences
      Collect secondary sequences that meet tension constraints
   Construct secondary transcription for entire piece
   Synthesize performance
10 Tension synthesis: practical details
   Separating tracks from original recording
   Storing the transcript
   Sequencing audio from a transcript
   Creating a new recording
11 Study protocol
   Participants
   Materials
      Documents
      Equipment
      Recordings (original)
      Recordings (with new accompaniment)
   Study Disclaimer
   Study Session Protocol
      Gather demographic information
      Explain evaluation criteria
      Sequencing the recordings
      Evaluate recordings
   Evaluation
12 Study results
   RQ1: does system produce acceptable accompaniment
      Recording 1
      Recording 2
      Recording 3
   RQ2: are accompaniments inside the range better?
      Recording 1
      Recording 2
      Recording 3
   RQ3: do ratings decrease as a function of distance
      Recording 1
      Recording 2
      Recording 3
   Summary
13 Potential objections
14 Discussion
   Algorithmic limitations
   Transcription limitations
   System limitations
15 Future work

Appendices
A Key Terms
   A.1 Terms: tension model
   A.2 Terms: Carnatic music
B Enumerating the accompaniment sequences
C Assigning perceptual scores
   C.1 Diction
   C.2 Loudness
   C.3 Note duration
D Transcription: internal representation
   D.1 Transcription: internal representation
E Results
   E.1 Complete results for recordings
   E.2 Complete results for variants
F Study documents
   F.1 Session checklist
   F.2 Demographic questionnaire
   F.3 Participant variant sequence
   F.4 Evaluation sheet
   F.5 Participant observation form
   F.6 Participant definition sheet

List of Tables

9.1 Rhythmic repetition of bars
Tension scores for each hit
Tension scores for each beat
Computing TZP and tension range for a bar
Computing TZP and tension range for a bar
Computing TZP and tension range for a bar
Lookup table for 2-beats
Possible 2-beat diction combinations
Possible 2-beat diction combinations
Valid 3-beat diction combination
Two bars (average tension scores)
Two bars of valid sequences
Rhythmic repetition of bars, with accompaniment
Participant data
Two bars of valid sequences
Two bars of valid sequences
Variants by distance value
Distance of variants used for recording
Distance of variants used for recording
Distance of variants used for recording
Recording sequences for participants
Variant sequences for participant
Average accompaniment rating per recording
Average rating for variants of recording
Average rating for variants of recording
Average rating for variants of recording
Accompaniment ratings for variants of recording
Accompaniment ratings for variants of recording
Accompaniment ratings for variants of recording
Accompaniment ratings for different variants
12.9 Accompaniment ratings for different variants
Accompaniment ratings for different variants
C.1 Weights for lead strokes
C.2 Weights for secondary strokes
C.3 Perceived loudness of lead and secondary hits
C.4 Weights for loudness
C.5 Weights for note duration
D.1 Transcription of recording 1, bars
D.2 Transcription of recording 2, bars
D.3 Transcription of recording 3, bars
E.1 Accompaniment ratings for recordings 1, 2, and 3
E.2 Accompaniment ratings for variants
F.1 Recording and variant sequences

List of Figures

5.1 The Carnatic quartet (from left): lead percussionist, secondary, vocalist, Tambura (provides the background drone), and violinist
Two bars of lead and secondary playing
Different minor variations
Direct Mapping
Horizontal Continuity: secondary follows the lead changes
Tension-relaxation visualization
Tension between lead and secondary


List of Algorithms

1 Hit tension score calculation
2 Beat tension score calculation
3 Unique 1-hit and 2-hit triplets
4 Unique 1-beat triplets
5 Unique 8-beat triplets

Chapter 1 Introduction

This chapter introduces the research area of musical improvisational accompaniment systems and highlights an important problem in this field. Improvisational accompaniment systems differ from score-following, solo-trading, and tap-along systems in that they are able to produce multiple valid musical alternatives for the same performance. Developing musical accompaniment systems that generate multiple valid accompaniments by modeling the constraints of accompaniment playing is the problem of interest in this thesis.

Computational creativity is an emerging field of research in artificial intelligence, cognitive psychology, philosophy, and the arts. The goal of computational creativity is to model, simulate or replicate human creativity using a computer. One area of research interest in computational creativity is the development of improvisational music systems that are able to perform variant, valid accompaniment for the same lead performance. Developing musical accompaniment systems that generate multiple valid accompaniments by modeling the constraints of accompaniment playing is the problem of interest in this thesis.

Although previous work has tried to solve the problem of generating multiple valid accompaniments for the same lead input, success has been limited. Broadly, retrieval-based music systems that use static databases produce accompaniment that is too repetitive; generation-based music systems that use hand-coded grammars are less repetitive, but have a more limited range of pre-defined accompaniment options; and finally, transformation-based music systems produce accompaniment choices which are predictably valid for only a few cases.

This work goes beyond the existing work by proposing a model of choice generation and selection that generates multiple valid accompaniment choices given the same input.

1.1 Structure of this document

The remainder of this document is structured as follows:

Related work. This chapter summarizes the previous work on improvisational accompaniment systems developed for generating multiple valid accompaniments by modeling the constraints of accompaniment playing.

Research problem. This chapter identifies a significant problem left open by previous work and presents the research focus: to develop a model of rhythmic accompaniment for Carnatic ensemble music that produces multiple musically valid accompaniments, given the same input.

Method. This chapter provides a brief overview of the method used during this thesis research. The method included the analysis of Carnatic music performances, development of different models of accompaniment playing, their implementation as computer programs, and their evaluation.

Background. This chapter describes the roles and activities of the lead and secondary percussionist within a Carnatic quartet performance. It further describes the musical structure and provides examples of different scenarios of lead and secondary percussion playing in a performance ensemble.

System design criteria. This chapter describes the narrow subset of constraints that guided the research and development of the secondary accompaniment system. The structural constraints separate the music into improvisational cycles made of eight bars in a 4/4 time signature. The input constraints restrict the lead to minor bar variations. The output constraints restrict the scope of secondary accompaniment to playing compliant accompaniment to the lead. Within these constraints, the secondary system still has the freedom to play a variety of valid accompaniments in a given situation.

Possible approaches. This chapter describes two seemingly reasonable approaches, Direct Mapping and Horizontal Continuity, and shows why they will not effectively solve the central research problem.

The tension model. This chapter describes the tension model that was developed to address the shortcomings of the previous models. Applied to the activity of secondary accompaniment playing in a Carnatic performance, the tension model is used as a constraint satisfaction mechanism to generate multiple accompaniments given the same lead.

Tension synthesis protocol. This chapter describes the main steps involved in synthesizing recordings with variant valid accompaniment.

Tension synthesis: practical details. This chapter describes the different steps in the synthesis process in terms of the different technologies used to implement them.

Study protocol. This chapter describes the study conducted with musical experts for evaluating the ability of the system to produce alternate valid secondary accompaniments for a Carnatic musical performance.

Study results. This chapter describes the main results from the user study and uses them to answer the research questions.

Potential objections. This chapter highlights the aspects of the study design that could raise objections about the claims made from this work.

Discussion. This chapter identifies the main limitations of the research reported here and discusses their impact on the findings from the study.

Future work. This chapter proposes directions for future work.

The next chapter reviews work on developing improvisational accompaniment systems that generate multiple valid accompaniments by modeling the constraints of accompaniment playing.


Chapter 2 Related work

This chapter summarizes the previous work on improvisational accompaniment systems developed for generating multiple valid accompaniments by modeling the constraints of accompaniment playing. Previous work has developed retrieval-based music systems, generation-based music systems and transformation-based music systems to solve the problem. Retrieval-based music systems use dynamic learning models to produce different sequence continuations given the same input, but at any given point in the performance they produce deterministic output. Generation-based music systems dynamically update the production rules of a grammar that are used to generate different accompaniments, but at any given point in the performance the production rules produce deterministic output. Transformation-based music systems generate permutations of a source rhythm representation to generate multiple accompaniments, but the generated choices are not always musically valid.

Previous work that has tried to solve the research problem can be classified into retrieval-based, generation-based, and transformation-based music systems. This chapter reviews the systems and highlights the problems they solve.

2.1 Retrieval-based music systems

Retrieval-based music systems use musical parameters to retrieve the best possible accompaniment from a set of accompaniment patterns. The focus is on optimizing the parameters for efficient representation and real-time retrieval.

There are two variations of retrieval-based music systems based on the type of data structure used to store the accompaniment: retrieval from a database and retrieval using dynamic learning models.

Retrieval from a database

The first type of retrieval-based music systems store the accompaniments in a database which is queried to retrieve the accompaniment. The accompaniments in the database are organized by their musical features. Retrieval systems extract the necessary musical features from the input, package them into a data format which is suitable to query the database, and retrieve the accompaniment. The best matching accompaniment is retrieved and played.

Impact is an accompaniment system that uses case-based reasoning and production rules to retrieve accompaniment from a database of accompaniment patterns (Ganascia, Ramalho, and Rolland, 1999). It extracts meta-level descriptions of musical scenarios (such as the beginning and end of a bar), fills in the sections and the duration of chords, and uses the result to form a query. This query is used to retrieve the best matching accompaniment from the database. The best accompaniment is selected according to a measure of mathematical distance between the query (called the target case) and each of the patterns in the database. Given a single input, the system always returns one accompaniment (the best matching accompaniment) as output.

Cyber-Joao is an adaptation of the Impact system that optimizes the number of parameters used for the retrieval (Dahia et al., 2004). It ranks the different musical features based on expert knowledge data, and uses the ranking to determine the important musical features in a given performance situation. Each rhythm is distinctly characterized by a single set of accompaniment values and the musical features are used to query and retrieve the accompaniment pattern from the database. Since each rhythm in the database is distinctly characterized by a single set of accompaniment values, there is always only one accompaniment available for any given musical scenario.
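As a rough illustration of this family of systems (a minimal sketch, not the actual Impact or Cyber-Joao implementation), the following Python fragment retrieves the single nearest accompaniment pattern for a feature query; the feature names, pattern names and values are hypothetical.

```python
import math

# Hypothetical feature vectors: (bar_position, chord_duration, activity_level).
# Each pattern in the database is indexed by the features it was annotated with.
database = {
    "pattern_a": ((0.0, 2.0, 0.5), ["ta", "ka", "di", "mi"]),
    "pattern_b": ((0.5, 1.0, 0.9), ["ta", "ka", "jo", "no"]),
}

def retrieve(query_features):
    """Return the accompaniment whose features are closest to the query.

    Like the database-retrieval systems described above, the mapping is
    deterministic: one query always yields the single best-matching pattern.
    """
    def distance(features):
        return math.dist(query_features, features)

    best_name = min(database, key=lambda name: distance(database[name][0]))
    return database[best_name][1]

print(retrieve((0.1, 2.0, 0.6)))  # always returns pattern_a's accompaniment
```

The key point the sketch makes concrete is the one-to-one mapping: for any fixed database and query, exactly one accompaniment is ever returned.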

Retrieval using dynamic learning models

In order to overcome the limitations of statically stored accompaniment options, systems were developed with capabilities to model the input rather than statically store it.

One of the earlier systems that retrieved accompaniment using Markov models was the M system (Zicarelli, 1987). It listens to a musician's performance as streams of MIDI data and builds a Markov chain representation on the fly. It traverses over the representation in order to send the output. Another well known example is the Continuator system (Pachet, 2002a; Pachet, 2002b). The Continuator uses Markov modeling to build possible sequence continuations of musical sequences played earlier in the performance. For any given sequence of musical notes, the accompaniment is retrieved by selecting the longest sequence continuation. A later version of the Continuator system models the trade-offs between adaptation and continuity of the retrieved accompaniment (Cabral, Briot, and Pachet, 2006). Apart from finding a continuation sequence, the system constantly reviews the relationship between the retrieved accompaniment and the harmonic context to retrieve a new continuation in case of any mismatch. Another system, Omax, in addition to listening to the lead, listens to its own past improvisations (Assayag et al., 2006). In a special self-listening state, the system listens to its own outputs to bias its Markov model. This results in a variety of possible choices for future accompaniment, depending on whether the system was listening to itself or to the lead.

The second variation of the retrieval systems also uses Markov models to produce sequence continuations of accompaniment. These systems model the music as sequence continuations, based on listening to the improviser's input. Given a starting note or a sequence, the model is traversed to produce the musical continuation. As the system listens to more of the input it changes the Markov model and the sequence continuations. Thus it is able to produce multiple alternate accompaniments for different situations. Although the use of modeling approaches improves performance over the static database approach, at any point in the performance these systems retrieve and play only one valid accompaniment.
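To make the Markov-continuation idea above concrete, here is a minimal Python sketch (a simplification, not the Continuator's or Omax's actual algorithm): it builds first-order transition counts from a played sequence and walks them to produce a continuation. The stroke syllables are hypothetical input.

```python
import random
from collections import defaultdict

def build_markov(sequence):
    """Count first-order transitions between successive events."""
    transitions = defaultdict(list)
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current].append(nxt)
    return transitions

def continue_sequence(transitions, start, length):
    """Walk the model to produce a continuation of the given length."""
    event, output = start, []
    for _ in range(length):
        choices = transitions.get(event)
        if not choices:          # dead end: no observed continuation
            break
        event = random.choice(choices)
        output.append(event)
    return output

# Hypothetical stroke sequence heard from the improviser.
heard = ["ta", "ka", "di", "mi", "ta", "ka", "jo", "no"]
model = build_markov(heard)
print(continue_sequence(model, "ta", 4))
```

In the systems described above the choice at any given point is deterministic (for example, the longest continuation); the random sampling here is only to show the structure of the model, not to suggest these systems offer multiple simultaneous alternatives.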

There is, however, one non-accompaniment system that falls broadly into this category, but which generates musically valid variations that do meet the musical constraints of a given melodic line (Donze et al., 2013). This control improvisation system generates variations of a lead melody in jazz. Specifically, given a reference melodic and harmonic sequence, the system builds a probabilistic model of all state transitions between the notes of the melody. The probability values assigned to the transitions determine the variations of the main melody produced. Assigning a high probability to transitions of the reference melody (called direct transitions), it produces melodic sequences similar to the reference melody; assigning a low probability to the direct transitions, it produces melodic sequences different from the reference line. Thus, given the same harmonic progression and a reference melodic line, the system produces variations by controlling a single parameter, the probability value of transitions.

Although it is not an accompaniment system, the approach could conceivably be used as the basis for one, but not without significant modification. This is because the generation part of the system is entirely influenced by itself, by what it played earlier. Without modification, this would result in an odd accompaniment scenario, one in which the choices of the accompanist are based on his own decisions rather than being based on the changes played by an improviser. And if the goal was to transform this into an accompaniment system, it would not be sufficient to simply modify the system so that it listened to the lead performer; many of the challenges and limitations described in future chapters would still appear.

2.2 Generation-based music systems

Generation-based music systems use musical grammars to generate accompaniment. The grammars contain production rules that associate the characteristics of the input rhythm with an output accompaniment rhythm. The grammars are either hand-coded by a human expert or automatically inducted by listening to performances. There have been several systems developed using each type of grammar.

Hand-coded grammars

Voyager (Lewis, 2000) and Cypher (Rowe, 1992) are examples of accompaniment systems that use hand-coded grammars to generate accompaniment responses. They contain pre-defined sub-routines that are triggered by specific conditions to generate the different accompaniment responses. However, the rules of these grammars are rigid and unchanging, and as a result, these systems are limited in their ability to respond to the same input with alternative outputs.
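As a toy illustration of the hand-coded approach (not Voyager's or Cypher's actual rule sets), the Python sketch below hard-wires a few condition-to-response rules; because the rules are fixed, a given input condition can only ever produce one response. The conditions, feature names and responses are hypothetical.

```python
def hand_coded_accompaniment(input_features):
    """Trigger a pre-defined response for each recognized condition.

    The key property is rigidity: the same input features always
    yield the same output.
    """
    rules = [
        (lambda f: f["loudness"] > 0.8,            ["ta", "din", "ta", "din"]),
        (lambda f: f["density"] >= 4,              ["ta", "ka", "di", "mi"]),
        (lambda f: f["position"] == "end_of_bar",  ["ta", "din", "gi", "na", "thom"]),
    ]
    for condition, response in rules:
        if condition(input_features):
            return response
    return ["ta"]  # default response when no rule fires

print(hand_coded_accompaniment({"loudness": 0.9, "density": 2, "position": "mid"}))
```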

Online learning of grammars

One improvement over hand-coded grammars is the development of grammars that are more flexible and learn on the fly.

ImprovGenerator is an example of an accompaniment system that learns musical grammars online (Kitani and Koike, 2010). It listens to the variations of a base rhythm and generates production rules corresponding to the variations. The different production rules are assigned a probability value that changes over the course of a performance.

FILTER is another system that employs an online learning approach (Van Nort, Braasch, and Oliveros, 2009; Van Nort, Braasch, and Oliveros, 2012). It is an improvising instrument system that reacts in novel and interesting ways by recognizing the gestures of a performer. The system comes pre-loaded with 20 gestures and the transitions between the gestures are modeled by a Markov model. Over the course of a performance, it varies the transition probabilities of the gestures to produce interesting and varied responses. However, the relation between the gesture and the output parameters itself remains constant. In other words, it does a better job of generating different responses over the course of a performance, but at any given point in the performance, it will produce the same accompaniment given the same input. (One notable thing about FILTER is that it models the interplay between lower-level audio features and higher-level gestural parameters; this will be discussed in more detail in later chapters.)

Grammar systems that use online learning are more flexible and generate more varied responses compared to the systems developed using hand-coded grammars. However, in both cases, the grammars are modeled deterministically and once the grammar is inducted, the same input will produce the same output.

2.3 Transformation-based music systems

Transformation-based music systems apply a transformation function on the input to generate the output. The transformation function is usually a mathematical operation that is applied on each of the input parameters to produce the output accompaniment values. Multiple accompaniments are generated by permuting a representation of the input parameters. There are two kinds of transformation systems based on how the transformations are generated: systems where the transformation function is pre-given and systems where the user selects the transformation function.

Transformation function is pre-given

In pre-given transformation systems, the transformation function is specified through a target accompaniment value, which is given as input to the system. The transformation function is computed as a function of the target accompaniment and is applied on the input values to generate the accompaniment.

Ambidrum is one system that uses a statistical measure of rhythmic ambiguity to generate rhythmic accompaniment (Gifford and Brown, 2006). It measures rhythmic ambiguity using a statistical correlation between the rhythmic metre and three rhythmic variables: the beat velocity, pitch, and duration. The system is given the target correlation values which it uses to transform the input to the output, which can be either metrically coherent or metrically ambiguous rhythms. Metrically coherent rhythms are musically valid as accompaniment and are generated by the Ambidrum system when its target correlation matrix (transformation function) is an identity function. When the transformation function is not an identity function, the rhythms generated by Ambidrum are metrically ambiguous and their musical appropriateness varies widely.

Another system, Clap-along, uses values from the target accompaniment to move the input towards the target (Young and Bown, 2010). The system uses four musical features to compute the distance between the source and the target accompaniment and progressively modifies the source towards the target. For each generation, the system generates 20 choices and finds the closest rhythm to the target by computing the Euclidean distance. When the performer repeatedly claps the exact same pattern, the system is able to slowly evolve its output towards the target accompaniment. However, variations in the performer's rhythms cause unstable changes in the system's output, often resulting in inappropriate accompaniment.

The main limitation of both these systems (and systems like these) is that there are very few cases when the accompaniment generated by the systems is predictably valid (musically).
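The generate-and-select step described for Clap-along can be sketched as follows (a hypothetical simplification with made-up feature vectors, not the published system): candidate rhythms are generated by perturbing the current output, and the candidate closest to the target, by Euclidean distance over the feature vector, is kept.

```python
import math
import random

def feature_distance(a, b):
    """Euclidean distance between two rhythm feature vectors."""
    return math.dist(a, b)

def evolve_towards_target(current, target, candidates=20, step=0.1):
    """Generate perturbed candidates and keep the one closest to the target.

    `current` and `target` are hypothetical 4-dimensional feature vectors
    (e.g., density, syncopation, loudness, accent placement), mirroring the
    four musical features mentioned above.
    """
    pool = []
    for _ in range(candidates):
        candidate = tuple(value + random.uniform(-step, step) for value in current)
        pool.append(candidate)
    return min(pool, key=lambda candidate: feature_distance(candidate, target))

current = (0.2, 0.5, 0.4, 0.1)
target = (0.6, 0.3, 0.8, 0.5)
for _ in range(5):  # each generation nudges the output towards the target
    current = evolve_towards_target(current, target)
print(current)
```

The instability mentioned in the text corresponds to the fact that if `current` is re-derived from a changing performer input each generation, the selected candidate can jump around rather than converge.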

User selects the transformation function

In order to get transformation systems to generate musically predictable output, systems have been created in which user preference is used to generate transformation functions. For example, NeatDrummer generates drum tracks by transforming the other musical parts in the song (Hoover, Szerlip, and Stanley, 2011; Hoover, Rosario, and Stanley, 2008). The different accompaniment tracks are generated by giving different input tracks (like piano, violin, and vocal) to an Artificial Neural Network, called a CPPN, that generates the output rhythms. The CPPN is initially trained by using the input from the different audio tracks. In the successive generations, the user ranks the generated tracks, the properties of which are permuted to generate multiple CPPNs, and hence multiple transformation functions that generate different accompaniment drum tracks. Thus the different CPPNs generate multiple accompaniment tracks according to the user preference.

The problem with the approach followed by NeatDrummer is related to the musical validity of the generated tracks. Although the user's preferences are used to generate multiple alternate transformation functions, the user actually has minimal control over the accompaniment generation process itself. Thus, the system generates musically valid accompaniment in only a few cases.


Chapter 3 Research problem

This chapter identifies a significant problem left open by previous work and presents the research focus: to develop a model of rhythmic accompaniment for Carnatic ensemble music that produces multiple musically valid accompaniments, given the same input.

Although there has been previous work on improvised accompaniment playing systems, none of them have addressed the problem of generating multiple accompaniments, given the same input. This work goes beyond the existing work by proposing a formal model of choice generation that provides multiple valid accompaniment choices given the same lead input. This formal model was used to develop a rhythmic improvisation system, specifically a system that will provide percussive improvisational accompaniment for a human lead percussionist during Carnatic performance.

3.1 Summary of the related work

Although there has been previous work that has tried to solve the problem of accompaniment playing, the problem of generating multiple accompaniments given the same input is still largely unsolved. Retrieval-based music systems use dynamic learning models to produce different sequence continuations, but at any given point in the performance they produce a single valid output. Generation-based music systems dynamically update the production rules of a grammar that are used to generate different accompaniments, but at any given point in the performance they produce a single valid output. Transformation-based music systems permute a source rhythm representation to generate multiple accompaniments, but the generated choices correspond to valid musical descriptions in very few cases.

Among the different systems surveyed in the related work, the FILTER system comes closest to generating multiple accompaniments, given the same input. It models the interplay between the low-level audio features and the higher-level gestural parameters (that have visual correspondences) to identify the player's intent and adapts the output in interesting ways. Though this interplay produces a variety of responses, the mapping between the gesture and the associated output parameters is one-to-one. Once a mapping is established, the system produces the same outputs for the same input.

Another related system is the control improvisation system that generates variations of a lead melody in jazz (Donze et al., 2013). It is not an accompaniment system but is relevant in that it models accompaniment constraints to generate multiple variations of a given melodic line. This system is an enhancement of the factor oracle approach used in (Assayag et al., 2006) and generates variations of a lead melody in jazz, such that they satisfy a given accompaniment specification. Given a reference melodic and harmonic sequence, the system builds a probabilistic model of all state transitions between the notes in the melody. The probability values assigned to the transitions determine the variations of the main melody produced. Assigning a high probability to transitions of the reference melody (called direct transitions) produces melodic sequences similar to the reference melody, and assigning a low probability to direct transitions produces melodic sequences different from the reference line. Thus, given the same harmonic progression and a reference melodic line, the system produces variations by controlling a single parameter, the probability value of transitions. However, this system would not scale very well in an accompaniment scenario as the generation part of the system is purely influenced by what it played (or listened to) earlier. This results in an odd accompaniment scenario, one in which the choices of the accompanist are based on his own decisions rather than being based on the changes played by an improviser. This raises concerns about the validity of the accompaniment played using such a system.

3.2 Proposed solution

The central goal of the work reported here is to develop an algorithm that generates valid alternate variations of secondary accompaniment for recordings of Carnatic musical performances. The central insight (the main original contribution) is that the generation of valid alternate variations of secondary accompaniment can be accomplished by formally representing the relationship between lead and accompaniment in terms of musical tension. By formalizing tension ranges as constraints for acceptable accompaniment, an algorithmic system is able to generate alternate accompaniment choices that are acceptable in terms of a restricted notion of sowkhyam (roughly, musical consonance).

In the context of this thesis, restricted sowkhyam refers to the sowkhyam of accompaniment considered independent of the secondary performer (and his creativity). This specifically ignores influences of any particular school of percussion playing, any particular secondary performer's playing style, creative kanjira variations, and the tonal quality unique to any performer's instrument. Unless otherwise noted, sowkhyam in this document refers to the restricted sowkhyam described above.

The central insight is roughly as follows. For any given performance, there is a degree of sowkhyam (consonance) between the lead and secondary accompaniment. For the work reported here, this degree of (restricted) sowkhyam has been numerically formalized as the inverse relationship: tension. With this formalization, any synthesized accompaniment that has equivalent or less tension (relative to the lead) is considered equally sowkhyam as the original.

The research resulted in a system that can take a transcribed selection of a Carnatic musical performance and algorithmically generate new performances, each with different secondary percussion accompaniment that meet the criteria of restricted sowkhyam as well as the original secondary accompaniment.
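A minimal sketch of this acceptance criterion, assuming (consistent with the formalization above, though the actual scoring is defined in the tension-model chapter) that each candidate bar of accompaniment reduces to a single tension score relative to the lead: a candidate is kept if its tension does not exceed that of the original accompaniment.

```python
def is_acceptable(candidate_tension, original_tension):
    """Equivalent or less tension than the original counts as equally sowkhyam."""
    return candidate_tension <= original_tension

def filter_candidates(candidates, original_tension):
    """Keep every generated accompaniment whose tension is within range.

    `candidates` maps a candidate's name to its (hypothetical) tension score
    relative to the lead for the same bar.
    """
    return [name for name, tension in candidates.items()
            if is_acceptable(tension, original_tension)]

# Hypothetical tension scores for one bar.
candidates = {"variant_a": 0.30, "variant_b": 0.55, "variant_c": 0.42}
print(filter_candidates(candidates, original_tension=0.45))  # variant_a, variant_c
```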

In order to evaluate the ability of the system to produce alternate valid secondary accompaniments for a Carnatic musical performance, a study was conducted with musical experts to address three related research questions:

RQ1: Does the system produce secondary accompaniment that is rated at least as high as the original accompaniment?

RQ2: Are accompaniments inside the range better (i.e., do variants within the range get higher scores than variants outside the range)?

RQ3: Do the ratings for accompaniment decrease as a function of the distance from the tension zero point?

The remaining chapters in this thesis provide details about the synthesis protocol for generating alternate valid accompaniments, the study protocol used to evaluate the system, the results of the study and their analysis.

Chapter 4 Method

This chapter provides a brief overview of the method used during this thesis research. The method included the analysis of Carnatic music performances, development of different models of accompaniment playing, their implementation as computer programs, and their evaluation.

4.1 Analysis of the Carnatic musical performances

Since the rules and constraints for secondary improvisation in Carnatic ensemble are not clearly specified in the literature (or in oral tradition), the first step involved the development of a method to systematically understand secondary improvisation in performances. Different performance recordings were analyzed to find performance structures (e.g., bar and improvisation cycle) and improvisation rules (e.g., forced and discretionary playing) that restricted/imposed constraints on the playing, but also offered some flexibility for improvisation. The analysis of the performances was used to develop the different models of accompaniment playing.

4.2 Model development

During the course of this thesis, different models were developed to solve the research problem of developing systems that play multiple valid accompaniments given the same lead input. They were the Direct Mapping model, the Horizontal Continuity model, and the tension model. The first two models were limited in their ability to generate multiple accompaniments that are musically valid.

In order to address these shortcomings, a third model was developed that generates multiple valid accompaniments for the same lead input: the tension model.

4.3 Evaluating the tension model

In order to evaluate the ability of the system to produce alternate valid secondary accompaniments for a Carnatic musical performance, a study was conducted with musical experts to answer three related research questions:

RQ1: Does the system produce secondary accompaniment that is rated at least as high as the original accompaniment?

RQ2: Are accompaniments inside the range better (i.e., do variants within the range get higher scores than variants outside the range)?

RQ3: Do the ratings for accompaniment decrease as a function of the distance from the tension zero point?

The methodology followed to answer these research questions was to generate accompaniment variants that were qualitatively different from an evaluation standpoint. For this, six accompaniment variants of 16-bar duration were created with different distance values from the original. These were presented to musical experts who evaluated and rated them.

4.4 System development

The Direct Mapping and the Horizontal Continuity accompaniment models were implemented as computer programs that were evaluated in restricted real-time performance settings. For these models, the accompaniment system is a computer program that plays a melody, accepts percussive input through a MIDI controller, and combines both of those with the secondary accompaniment (which is algorithmically generated). The lead's input is used to drive the algorithmic secondary accompaniment generation, and the combined output is played back through a speaker.

The accompaniment system built using the tension model accepts percussive input for lead and secondary through the keyboard and combines both of those to algorithmically generate multiple accompaniments. The input consists of sequences of diction, note duration, and loudness represented in array format. Using this input, the system algorithmically generates the corresponding arrays for the secondary accompaniments. The tension model and its generation process are described in much more detail in section 8.
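A minimal sketch of that array-based representation (illustrative field names and values, not the thesis's exact internal format, which is given in Appendix D): a bar of lead input is stored as parallel sequences of diction, note duration, and loudness, and a generated secondary bar uses the same shape.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BarTranscription:
    """Parallel arrays describing one bar of percussion playing."""
    diction: List[str]      # stroke syllables, one per hit
    duration: List[float]   # note durations in beats
    loudness: List[float]   # normalized loudness per hit, 0.0 to 1.0

# Hypothetical transcription of one bar of lead playing (cf. Figure 5.2).
lead_bar = BarTranscription(
    diction=["num", "thi", "dhin", "dhin"],
    duration=[1.0, 1.0, 1.0, 1.0],
    loudness=[0.8, 0.5, 0.7, 0.6],
)

# A generated secondary bar would be another BarTranscription with the same
# number of hits, produced so that its tension relative to lead_bar stays
# within the acceptable range.
secondary_bar = BarTranscription(
    diction=["ta", "ka", "di", "mi"],
    duration=[1.0, 1.0, 1.0, 1.0],
    loudness=[0.6, 0.4, 0.6, 0.5],
)
print(len(lead_bar.diction), len(secondary_bar.diction))
```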

Chapter 5 Background: Carnatic quartet performance

This chapter describes the roles and activities of the lead and secondary percussionist within a Carnatic quartet performance. It further describes the musical structure and provides examples of different scenarios of lead and secondary percussion playing in a performance ensemble.

5.1 Overview

Figure 5.1: The Carnatic quartet (from left): lead percussionist, secondary, vocalist, Tambura (provides the background drone), and violinist.

Figure 5.1 shows the performance of a Carnatic quartet. There are four improvising performers on the stage: the vocalist, the violinist, and the lead (mridangist) and the secondary (kanjirist) percussionists.

The vocalist performs the main melody and the violinist plays the accompaniment melody. The lead percussionist improvises relative to the melody (vocal and violin) and the secondary percussionist mostly provides accompaniment to the lead percussionist. The actions of the secondary percussionist are constrained by what the lead percussionist plays and by what the secondary percussionist predicts that the lead will play.

There is one main difference in the nature of playing on the lead and the secondary drums. The lead uses both hands to simultaneously strike the different sides of the lead drum, called the mridangam. The secondary uses one hand to strike the drum, while controlling the tension of the membrane that he strikes with the other hand. The secondary drum is called the kanjira or the frame drum.

In general, the secondary percussionist is trying to follow the lead. This means that the secondary takes cues from the lead, is guided by what the lead plays, is not allowed to freely improvise, and has fairly constrained choices in terms of accompaniment selection. However, as noted earlier, the secondary also has some degree of freedom in playing. Within the constraints imposed by the lead, the secondary percussionist may:

- Proactively suggest variations for the lead to incorporate
- Improvise by making references to earlier changes played by the lead
- Improvise somewhat freely in the last bar of an improvisational cycle
- Play complementary accompaniment using off-beat strokes, rolls, changes to the accent structure of the accompaniment rhythm, etc.

Note that there is also one exceptional case whereby the secondary largely ignores the lead percussionist and instead follows the melody directly. This is only justified if the lead is playing the same rhythmic patterns without variation, but even so, such a decision is controversial and problematic for a number of reasons. The work detailed in this document does not attempt to handle this case.

5.2 Musical structures

Musical improvisations occur in cycles of many different lengths, such as 10, 12, 14, 32, or 64 beats. This document considers improvisations that take place within a 32-beat (or 64-beat) improvisation cycle, which is known as the Adi talam in Carnatic music. Since the structures of improvisations that happen in 64- and 32-beat improvisational cycles are similar, for simplicity's sake the descriptions are provided for a 32-beat improvisational cycle. The improvisation cycle is further divided into different bars.

In a 4/4 time signature, each bar consists of four beats. The numerator of 4/4 denotes the number of beats in a bar and the denominator denotes that each beat has a duration of a quarter note. Similarly, in an 8/4 time signature, each bar contains eight beats, each of quarter-note duration. Improvisational cycles of 64 beats usually contain 8 bars in an 8/4 time signature.

Figure 5.2 shows a rhythm pattern of two-bar duration, containing eight lead and secondary hits.

Bar 1: Lead:          num thi dhin dhin num thi dheem dhin
       Accompaniment: ta ka di mi ta ka jo no

Figure 5.2: Two bars of lead and secondary playing

In an improvisational cycle, typically the lead introduces a rhythmic groove in the first two bars, plays variations of the groove in the following bars, and either syncopates or intensifies the groove in the final bar. There are three different styles of accompaniment that the secondary can play. They are:

- Compliant accompaniment: the secondary complies with the actions of the lead and closely matches the different changes played by the lead
- Interactive accompaniment: the secondary introduces changes in the accompaniment by referring to the past actions of the lead
- Proactive accompaniment: the secondary plays complementary rhythm patterns, supplements certain hits of the lead, matches the musical structure of the melody, etc.

Each of these accompaniment scenarios imposes different demands and constraints on the secondary percussionist and results in very different kinds of musical choices and decisions.

5.3 Choices in different styles of accompaniment playing

In the compliant accompaniment playing style, the secondary has the most constraints and the fewest choices in terms of accompaniment playing. Basically, the secondary must strictly follow the changes in the bar activity and loudness of the lead.

Within those constraints, the secondary is able to make some discretionary choices about diction, note duration, and loudness.

In interactive accompaniment playing, the secondary has the freedom to deviate from the lead but is constrained by the type of deviations played by the lead. Deviations played by the secondary are typically in the form of changes in the bar activity and loudness. In rare cases, the deviations include major structural changes (e.g., violating bar boundaries, grouping of hits). In this style of playing, the secondary is allowed to deviate from the lead but is restricted to the deviations played by the lead in earlier cycles of the improvisation.

In proactive accompaniment playing, the secondary percussionist has the maximum freedom to deviate from the lead. The secondary is free to change the bar activity, loudness and musical structure (alternate groupings of hits, changing the lead's groove) of the accompaniment. The deviations are constrained by their appropriateness to the melodic structure. This type of improvised behavior is very discretionary and depends greatly on the aesthetic sense and skill of the secondary percussionist. Transcriptions of representative examples of accompaniment playing are provided in the appendix.

5.4 Musical actions in the improvisation

This section describes the musical actions corresponding to the different variations in improvisations played by the lead and secondary percussionists. There are broadly two kinds of variations that the lead and secondary play while improvising in performances: major variations and minor variations. The main distinction between the two is that major variations either change or obstruct the flow of the rhythmic groove, whereas minor variations are considered as variations around the rhythmic groove.

Major variations

Major variations are variations in playing that obstruct the groove. In performance situations, these are usually played to align the rhythm structure with the melodic structure. The actions in major variations include playing rolls, interspersing pauses with rolls, changing the grouping of hits, speed-doubling and playing sequences that violate bar boundaries. The lead introduces major variations in order to minimize the sense of repetition between the different bars, and play accompaniment that better suits


More information

Rhythmic Dissonance: Introduction

Rhythmic Dissonance: Introduction The Concept Rhythmic Dissonance: Introduction One of the more difficult things for a singer to do is to maintain dissonance when singing. Because the ear is searching for consonance, singing a B natural

More information

The Ambidrum: Automated Rhythmic Improvisation

The Ambidrum: Automated Rhythmic Improvisation The Ambidrum: Automated Rhythmic Improvisation Author Gifford, Toby, R. Brown, Andrew Published 2006 Conference Title Medi(t)ations: computers/music/intermedia - The Proceedings of Australasian Computer

More information

Automatic Notes Generation for Musical Instrument Tabla

Automatic Notes Generation for Musical Instrument Tabla Volume-5, Issue-5, October-2015 International Journal of Engineering and Management Research Page Number: 326-330 Automatic Notes Generation for Musical Instrument Tabla Prashant Kanade 1, Bhavesh Chachra

More information

PRESCHOOL (THREE AND FOUR YEAR-OLDS) (Page 1 of 2)

PRESCHOOL (THREE AND FOUR YEAR-OLDS) (Page 1 of 2) PRESCHOOL (THREE AND FOUR YEAR-OLDS) (Page 1 of 2) Music is a channel for creative expression in two ways. One is the manner in which sounds are communicated by the music-maker. The other is the emotional

More information

Generating Rhythmic Accompaniment for Guitar: the Cyber-João Case Study

Generating Rhythmic Accompaniment for Guitar: the Cyber-João Case Study Generating Rhythmic Accompaniment for Guitar: the Cyber-João Case Study Márcio Dahia, Hugo Santana, Ernesto Trajano, Carlos Sandroni* and Geber Ramalho Centro de Informática and Departamento de Música*

More information

StepSequencer64 J74 Page 1. J74 StepSequencer64. A tool for creative sequence programming in Ableton Live. User Manual

StepSequencer64 J74 Page 1. J74 StepSequencer64. A tool for creative sequence programming in Ableton Live. User Manual StepSequencer64 J74 Page 1 J74 StepSequencer64 A tool for creative sequence programming in Ableton Live User Manual StepSequencer64 J74 Page 2 How to Install the J74 StepSequencer64 devices J74 StepSequencer64

More information

QUALITY OF COMPUTER MUSIC USING MIDI LANGUAGE FOR DIGITAL MUSIC ARRANGEMENT

QUALITY OF COMPUTER MUSIC USING MIDI LANGUAGE FOR DIGITAL MUSIC ARRANGEMENT QUALITY OF COMPUTER MUSIC USING MIDI LANGUAGE FOR DIGITAL MUSIC ARRANGEMENT Pandan Pareanom Purwacandra 1, Ferry Wahyu Wibowo 2 Informatics Engineering, STMIK AMIKOM Yogyakarta 1 pandanharmony@gmail.com,

More information

SAMPLE ASSESSMENT TASKS MUSIC JAZZ ATAR YEAR 11

SAMPLE ASSESSMENT TASKS MUSIC JAZZ ATAR YEAR 11 SAMPLE ASSESSMENT TASKS MUSIC JAZZ ATAR YEAR 11 Copyright School Curriculum and Standards Authority, 2014 This document apart from any third party copyright material contained in it may be freely copied,

More information

STRAND I Sing alone and with others

STRAND I Sing alone and with others STRAND I Sing alone and with others Preschool (Three and Four Year-Olds) Music is a channel for creative expression in two ways. One is the manner in which sounds are communicated by the music-maker. The

More information

Robert Rowe MACHINE MUSICIANSHIP

Robert Rowe MACHINE MUSICIANSHIP Robert Rowe MACHINE MUSICIANSHIP Machine Musicianship Robert Rowe The MIT Press Cambridge, Massachusetts London, England Machine Musicianship 2001 Massachusetts Institute of Technology All rights reserved.

More information

FINE ARTS Institutional (ILO), Program (PLO), and Course (SLO) Alignment

FINE ARTS Institutional (ILO), Program (PLO), and Course (SLO) Alignment FINE ARTS Institutional (ILO), Program (PLO), and Course (SLO) Program: Music Number of Courses: 52 Date Updated: 11.19.2014 Submitted by: V. Palacios, ext. 3535 ILOs 1. Critical Thinking Students apply

More information

Connecticut State Department of Education Music Standards Middle School Grades 6-8

Connecticut State Department of Education Music Standards Middle School Grades 6-8 Connecticut State Department of Education Music Standards Middle School Grades 6-8 Music Standards Vocal Students will sing, alone and with others, a varied repertoire of songs. Students will sing accurately

More information

Musical Creativity. Jukka Toivanen Introduction to Computational Creativity Dept. of Computer Science University of Helsinki

Musical Creativity. Jukka Toivanen Introduction to Computational Creativity Dept. of Computer Science University of Helsinki Musical Creativity Jukka Toivanen Introduction to Computational Creativity Dept. of Computer Science University of Helsinki Basic Terminology Melody = linear succession of musical tones that the listener

More information

Curriculum Standard One: The student will listen to and analyze music critically, using the vocabulary and language of music.

Curriculum Standard One: The student will listen to and analyze music critically, using the vocabulary and language of music. Curriculum Standard One: The student will listen to and analyze music critically, using the vocabulary and language of music. 1. The student will analyze the uses of elements of music. A. Can the student

More information

Concert Band and Wind Ensemble

Concert Band and Wind Ensemble Curriculum Development In the Fairfield Public Schools FAIRFIELD PUBLIC SCHOOLS FAIRFIELD, CONNECTICUT Concert Band and Wind Ensemble Board of Education Approved 04/24/2007 Concert Band and Wind Ensemble

More information

Music. Last Updated: May 28, 2015, 11:49 am NORTH CAROLINA ESSENTIAL STANDARDS

Music. Last Updated: May 28, 2015, 11:49 am NORTH CAROLINA ESSENTIAL STANDARDS Grade: Kindergarten Course: al Literacy NCES.K.MU.ML.1 - Apply the elements of music and musical techniques in order to sing and play music with NCES.K.MU.ML.1.1 - Exemplify proper technique when singing

More information

Evolutionary Computation Applied to Melody Generation

Evolutionary Computation Applied to Melody Generation Evolutionary Computation Applied to Melody Generation Matt D. Johnson December 5, 2003 Abstract In recent years, the personal computer has become an integral component in the typesetting and management

More information

DJ Darwin a genetic approach to creating beats

DJ Darwin a genetic approach to creating beats Assaf Nir DJ Darwin a genetic approach to creating beats Final project report, course 67842 'Introduction to Artificial Intelligence' Abstract In this document we present two applications that incorporate

More information

MUSIC COURSE OF STUDY GRADES K-5 GRADE

MUSIC COURSE OF STUDY GRADES K-5 GRADE MUSIC COURSE OF STUDY GRADES K-5 GRADE 5 2009 CORE CURRICULUM CONTENT STANDARDS Core Curriculum Content Standard: The arts strengthen our appreciation of the world as well as our ability to be creative

More information

Instrumental Performance Band 7. Fine Arts Curriculum Framework

Instrumental Performance Band 7. Fine Arts Curriculum Framework Instrumental Performance Band 7 Fine Arts Curriculum Framework Content Standard 1: Skills and Techniques Students shall demonstrate and apply the essential skills and techniques to produce music. M.1.7.1

More information

Computational Modelling of Harmony

Computational Modelling of Harmony Computational Modelling of Harmony Simon Dixon Centre for Digital Music, Queen Mary University of London, Mile End Rd, London E1 4NS, UK simon.dixon@elec.qmul.ac.uk http://www.elec.qmul.ac.uk/people/simond

More information

Stafford Township School District Manahawkin, NJ

Stafford Township School District Manahawkin, NJ Stafford Township School District Manahawkin, NJ Fourth Grade Music Curriculum Aligned to the CCCS 2009 This Curriculum is reviewed and updated annually as needed This Curriculum was approved at the Board

More information

Detecting Musical Key with Supervised Learning

Detecting Musical Key with Supervised Learning Detecting Musical Key with Supervised Learning Robert Mahieu Department of Electrical Engineering Stanford University rmahieu@stanford.edu Abstract This paper proposes and tests performance of two different

More information

Missouri Educator Gateway Assessments

Missouri Educator Gateway Assessments Missouri Educator Gateway Assessments FIELD 043: MUSIC: INSTRUMENTAL & VOCAL June 2014 Content Domain Range of Competencies Approximate Percentage of Test Score I. Music Theory and Composition 0001 0003

More information

TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC

TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC G.TZANETAKIS, N.HU, AND R.B. DANNENBERG Computer Science Department, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213, USA E-mail: gtzan@cs.cmu.edu

More information

DEPARTMENT/GRADE LEVEL: Band (7 th and 8 th Grade) COURSE/SUBJECT TITLE: Instrumental Music #0440 TIME FRAME (WEEKS): 36 weeks

DEPARTMENT/GRADE LEVEL: Band (7 th and 8 th Grade) COURSE/SUBJECT TITLE: Instrumental Music #0440 TIME FRAME (WEEKS): 36 weeks DEPARTMENT/GRADE LEVEL: Band (7 th and 8 th Grade) COURSE/SUBJECT TITLE: Instrumental Music #0440 TIME FRAME (WEEKS): 36 weeks OVERALL STUDENT OBJECTIVES FOR THE UNIT: Students taking Instrumental Music

More information

SAMPLE ASSESSMENT TASKS MUSIC GENERAL YEAR 12

SAMPLE ASSESSMENT TASKS MUSIC GENERAL YEAR 12 SAMPLE ASSESSMENT TASKS MUSIC GENERAL YEAR 12 Copyright School Curriculum and Standards Authority, 2015 This document apart from any third party copyright material contained in it may be freely copied,

More information

CSC475 Music Information Retrieval

CSC475 Music Information Retrieval CSC475 Music Information Retrieval Symbolic Music Representations George Tzanetakis University of Victoria 2014 G. Tzanetakis 1 / 30 Table of Contents I 1 Western Common Music Notation 2 Digital Formats

More information

A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation

A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation Gil Weinberg, Mark Godfrey, Alex Rae, and John Rhoads Georgia Institute of Technology, Music Technology Group 840 McMillan St, Atlanta

More information

Part II: Dipping Your Toes Fingers into Music Basics Part IV: Moving into More-Advanced Keyboard Features

Part II: Dipping Your Toes Fingers into Music Basics Part IV: Moving into More-Advanced Keyboard Features Contents at a Glance Introduction... 1 Part I: Getting Started with Keyboards... 5 Chapter 1: Living in a Keyboard World...7 Chapter 2: So Many Keyboards, So Little Time...15 Chapter 3: Choosing the Right

More information

Audio Feature Extraction for Corpus Analysis

Audio Feature Extraction for Corpus Analysis Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends

More information

TEST SUMMARY AND FRAMEWORK TEST SUMMARY

TEST SUMMARY AND FRAMEWORK TEST SUMMARY Washington Educator Skills Tests Endorsements (WEST E) TEST SUMMARY AND FRAMEWORK TEST SUMMARY MUSIC: INSTRUMENTAL Copyright 2016 by the Washington Professional Educator Standards Board 1 Washington Educator

More information

6.UAP Project. FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System. Daryl Neubieser. May 12, 2016

6.UAP Project. FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System. Daryl Neubieser. May 12, 2016 6.UAP Project FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System Daryl Neubieser May 12, 2016 Abstract: This paper describes my implementation of a variable-speed accompaniment system that

More information

CTP431- Music and Audio Computing Music Information Retrieval. Graduate School of Culture Technology KAIST Juhan Nam

CTP431- Music and Audio Computing Music Information Retrieval. Graduate School of Culture Technology KAIST Juhan Nam CTP431- Music and Audio Computing Music Information Retrieval Graduate School of Culture Technology KAIST Juhan Nam 1 Introduction ü Instrument: Piano ü Genre: Classical ü Composer: Chopin ü Key: E-minor

More information

BayesianBand: Jam Session System based on Mutual Prediction by User and System

BayesianBand: Jam Session System based on Mutual Prediction by User and System BayesianBand: Jam Session System based on Mutual Prediction by User and System Tetsuro Kitahara 12, Naoyuki Totani 1, Ryosuke Tokuami 1, and Haruhiro Katayose 12 1 School of Science and Technology, Kwansei

More information

West Linn-Wilsonville School District Primary (Grades K-5) Music Curriculum. Curriculum Foundations

West Linn-Wilsonville School District Primary (Grades K-5) Music Curriculum. Curriculum Foundations Curriculum Foundations Important Ideas & Understandings Significant Strands Significant Skills to be Learned & Practiced Nature of the Human Experience Making connections creating meaning and understanding

More information

Music Curriculum. Rationale. Grades 1 8

Music Curriculum. Rationale. Grades 1 8 Music Curriculum Rationale Grades 1 8 Studying music remains a vital part of a student s total education. Music provides an opportunity for growth by expanding a student s world, discovering musical expression,

More information

SENECA VALLEY SCHOOL DISTRICT CURRICULUM

SENECA VALLEY SCHOOL DISTRICT CURRICULUM SENECA VALLEY SCHOOL DISTRICT CURRICULUM Course Title: Course Number: 0960 Grade Level(s): 9 10 Periods Per Week: 5 Length of Period: 42 Minutes Length of Course: Full Year Credits: 1.0 Faculty Author(s):

More information

6 th Grade Instrumental Music Curriculum Essentials Document

6 th Grade Instrumental Music Curriculum Essentials Document 6 th Grade Instrumental Curriculum Essentials Document Boulder Valley School District Department of Curriculum and Instruction August 2011 1 Introduction The Boulder Valley Curriculum provides the foundation

More information

Hip Hop Robot. Semester Project. Cheng Zu. Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich

Hip Hop Robot. Semester Project. Cheng Zu. Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich Distributed Computing Hip Hop Robot Semester Project Cheng Zu zuc@student.ethz.ch Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich Supervisors: Manuel Eichelberger Prof.

More information

Music Radar: A Web-based Query by Humming System

Music Radar: A Web-based Query by Humming System Music Radar: A Web-based Query by Humming System Lianjie Cao, Peng Hao, Chunmeng Zhou Computer Science Department, Purdue University, 305 N. University Street West Lafayette, IN 47907-2107 {cao62, pengh,

More information

Grade 4 General Music

Grade 4 General Music Grade 4 General Music Description Music integrates cognitive learning with the affective and psychomotor development of every child. This program is designed to include an active musicmaking approach to

More information

Analysis and Clustering of Musical Compositions using Melody-based Features

Analysis and Clustering of Musical Compositions using Melody-based Features Analysis and Clustering of Musical Compositions using Melody-based Features Isaac Caswell Erika Ji December 13, 2013 Abstract This paper demonstrates that melodic structure fundamentally differentiates

More information

Sample assessment task. Task details. Content description. Year level 9

Sample assessment task. Task details. Content description. Year level 9 Sample assessment task Year level 9 Learning area Subject Title of task Task details Description of task Type of assessment Purpose of assessment Assessment strategy Evidence to be collected Suggested

More information

2 2. Melody description The MPEG-7 standard distinguishes three types of attributes related to melody: the fundamental frequency LLD associated to a t

2 2. Melody description The MPEG-7 standard distinguishes three types of attributes related to melody: the fundamental frequency LLD associated to a t MPEG-7 FOR CONTENT-BASED MUSIC PROCESSING Λ Emilia GÓMEZ, Fabien GOUYON, Perfecto HERRERA and Xavier AMATRIAIN Music Technology Group, Universitat Pompeu Fabra, Barcelona, SPAIN http://www.iua.upf.es/mtg

More information

ILLINOIS LICENSURE TESTING SYSTEM

ILLINOIS LICENSURE TESTING SYSTEM ILLINOIS LICENSURE TESTING SYSTEM FIELD 143: MUSIC November 2003 Illinois Licensure Testing System FIELD 143: MUSIC November 2003 Subarea Range of Objectives I. Listening Skills 01 05 II. Music Theory

More information

Welcome to the UBC Research Commons Thesis Template User s Guide for Word 2011 (Mac)

Welcome to the UBC Research Commons Thesis Template User s Guide for Word 2011 (Mac) Welcome to the UBC Research Commons Thesis Template User s Guide for Word 2011 (Mac) This guide is intended to be used in conjunction with the thesis template, which is available here. Although the term

More information

GCSE MUSIC Composing Music Report on the Examination June Version: 1.0

GCSE MUSIC Composing Music Report on the Examination June Version: 1.0 GCSE MUSIC 42704 Composing Music Report on the Examination 4270 June 2013 Version: 1.0 Further copies of this Report are available from aqa.org.uk Copyright 2013 AQA and its licensors. All rights reserved.

More information

Sound visualization through a swarm of fireflies

Sound visualization through a swarm of fireflies Sound visualization through a swarm of fireflies Ana Rodrigues, Penousal Machado, Pedro Martins, and Amílcar Cardoso CISUC, Deparment of Informatics Engineering, University of Coimbra, Coimbra, Portugal

More information

Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music

Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music Andrew Blake and Cathy Grundy University of Westminster Cavendish School of Computer Science

More information

Singer Recognition and Modeling Singer Error

Singer Recognition and Modeling Singer Error Singer Recognition and Modeling Singer Error Johan Ismael Stanford University jismael@stanford.edu Nicholas McGee Stanford University ndmcgee@stanford.edu 1. Abstract We propose a system for recognizing

More information

SMCPS Course Syllabus

SMCPS Course Syllabus SMCPS Course Syllabus Course: High School Band Course Number: 187123, 188123, 188113 Dates Covered: 2015-2016 Course Duration: Year Long Text Resources: used throughout the course Teacher chosen band literature

More information

Improvising with The Blues Lesson 3

Improvising with The Blues Lesson 3 Improvising with The Blues Lesson 3 Critical Learning What improvisation is. How improvisation is used in music. Grade 7 Music Guiding Questions Do you feel the same way about improvisation when you re

More information

A Bayesian Network for Real-Time Musical Accompaniment

A Bayesian Network for Real-Time Musical Accompaniment A Bayesian Network for Real-Time Musical Accompaniment Christopher Raphael Department of Mathematics and Statistics, University of Massachusetts at Amherst, Amherst, MA 01003-4515, raphael~math.umass.edu

More information

INSTRUMENTAL MUSIC SKILLS

INSTRUMENTAL MUSIC SKILLS Course #: MU 82 Grade Level: 10 12 Course Name: Band/Percussion Level of Difficulty: Average High Prerequisites: Placement by teacher recommendation/audition # of Credits: 1 2 Sem. ½ 1 Credit MU 82 is

More information

Whole School Plan Music

Whole School Plan Music Whole School Plan Music Introductory Statement The staff of Scoil Bhríde have collaboratively drawn up this whole school plan in Music. This plan is for the information of teachers, others who work in

More information

An Interactive Case-Based Reasoning Approach for Generating Expressive Music

An Interactive Case-Based Reasoning Approach for Generating Expressive Music Applied Intelligence 14, 115 129, 2001 c 2001 Kluwer Academic Publishers. Manufactured in The Netherlands. An Interactive Case-Based Reasoning Approach for Generating Expressive Music JOSEP LLUÍS ARCOS

More information

BEGINNING INSTRUMENTAL MUSIC CURRICULUM MAP

BEGINNING INSTRUMENTAL MUSIC CURRICULUM MAP Teacher: Kristine Crandall TARGET DATES First 4 weeks of the trimester COURSE: Music - Beginning Instrumental ESSENTIAL QUESTIONS How can we improve our individual music skills on our instrument? What

More information

Music Curriculum Kindergarten

Music Curriculum Kindergarten Music Curriculum Kindergarten Wisconsin Model Standards for Music A: Singing Echo short melodic patterns appropriate to grade level Sing kindergarten repertoire with appropriate posture and breathing Maintain

More information

SAMPLE ASSESSMENT TASKS MUSIC CONTEMPORARY ATAR YEAR 11

SAMPLE ASSESSMENT TASKS MUSIC CONTEMPORARY ATAR YEAR 11 SAMPLE ASSESSMENT TASKS MUSIC CONTEMPORARY ATAR YEAR 11 Copyright School Curriculum and Standards Authority, 014 This document apart from any third party copyright material contained in it may be freely

More information

Extracting Significant Patterns from Musical Strings: Some Interesting Problems.

Extracting Significant Patterns from Musical Strings: Some Interesting Problems. Extracting Significant Patterns from Musical Strings: Some Interesting Problems. Emilios Cambouropoulos Austrian Research Institute for Artificial Intelligence Vienna, Austria emilios@ai.univie.ac.at Abstract

More information

Hidden Markov Model based dance recognition

Hidden Markov Model based dance recognition Hidden Markov Model based dance recognition Dragutin Hrenek, Nenad Mikša, Robert Perica, Pavle Prentašić and Boris Trubić University of Zagreb, Faculty of Electrical Engineering and Computing Unska 3,

More information

Grade 5 General Music

Grade 5 General Music Grade 5 General Music Description Music integrates cognitive learning with the affective and psychomotor development of every child. This program is designed to include an active musicmaking approach to

More information

Kindergarten Music Music

Kindergarten Music Music Course The Park Hill K-8 music program was developed collaboratively and built on both state and national standards. The K-8 music program provides students with a continuum of essential knowledge and

More information

Palestrina Pal: A Grammar Checker for Music Compositions in the Style of Palestrina

Palestrina Pal: A Grammar Checker for Music Compositions in the Style of Palestrina Palestrina Pal: A Grammar Checker for Music Compositions in the Style of Palestrina 1. Research Team Project Leader: Undergraduate Students: Prof. Elaine Chew, Industrial Systems Engineering Anna Huang,

More information

Contest and Judging Manual

Contest and Judging Manual Contest and Judging Manual Published by the A Cappella Education Association Current revisions to this document are online at www.acappellaeducators.com April 2018 2 Table of Contents Adjudication Practices...

More information

School of Church Music Southwestern Baptist Theological Seminary

School of Church Music Southwestern Baptist Theological Seminary Audition and Placement Preparation Master of Music in Church Music Master of Divinity with Church Music Concentration Master of Arts in Christian Education with Church Music Minor School of Church Music

More information

Perception-Based Musical Pattern Discovery

Perception-Based Musical Pattern Discovery Perception-Based Musical Pattern Discovery Olivier Lartillot Ircam Centre Georges-Pompidou email: Olivier.Lartillot@ircam.fr Abstract A new general methodology for Musical Pattern Discovery is proposed,

More information