
AUTOMATED COMPOSITION OF MUSICAL SCORES BASED ON THE STYLE OF PREVIOUS WORKS

A Thesis
by
Justin Fincher

Submitted to the Graduate School
Appalachian State University
in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE

May 2006
Major Department: Computer Science

AUTOMATED COMPOSITION OF MUSICAL SCORES BASED ON THE STYLE OF PREVIOUS WORKS

A Thesis
by
Justin Fincher
May 2006

APPROVED BY:
Kenneth H. Jacker, Chairperson, Thesis Committee
Alice A. McRae, Member, Thesis Committee
Scott R. Meister, Member, Thesis Committee
Edward G. Pekarek Jr., Chairperson, Computer Science
Judith Domer, Dean of Graduate Studies and Research

ABSTRACT

AUTOMATED COMPOSITION OF MUSICAL SCORES BASED ON THE STYLE OF PREVIOUS WORKS. (May 2006)

Justin Fincher, Appalachian State University

Thesis Chairperson: Kenneth H. Jacker

Music has been in existence for much of recorded human history. While the composition of music is a creative process, there are many mathematical constructs involved. Computing machinery continually increases in capability and processing power. Many aspects of everyday life involve computers, and one field in which this is especially true is music composition. Computers have become involved in virtually every aspect of the creation of music. This thesis evaluates the particular area of automated composition, in which musical scores are created without human intervention. There are many genres of music, and to adequately create music within a specific genre, the composer must be familiar with the aspects of musical style that define that genre. Transition tables are used in the initial creation of the music, and genetic algorithms and Self-Organizing Maps are used to shape the music into the style of the input pieces. These methods are adequate for the analysis and generation of simple music, but do not fully encompass the many characteristics of most complicated scores.

Contents

1 Introduction
    1.1 Short History of Computer Composition
    1.2 Goals
    1.3 Overview of Thesis
        1.3.1 Music Representation
        1.3.2 Transition Tables
        1.3.3 Genetic Algorithm
        1.3.4 Self-Organizing Maps
        1.3.5 Results, Conclusions, and Future Work
2 Brief Background of Automated Composition
    2.1 Pre-Computer Techniques
    2.2 Computer-based Techniques
    2.3 Influence on This Thesis
3 Transition Tables
    3.1 Introduction
    3.2 Definition
    3.3 Usage
        3.3.1 Initialization of the Tables
        3.3.2 Generation
        3.3.3 Insufficiently Populated Tables
    3.4 Creation of Genetic Algorithm Input
4 Genetic Algorithm
    4.1 Introduction
    4.2 History and Description
    4.3 Input
    4.4 Implementation
        4.4.1 Fitness Function
        4.4.2 Crossover
        4.4.3 Mutation
5 Self-Organizing Map
    5.1 Introduction
    5.2 Coding of Compositions
        5.2.1 Calculation of Red Value
        5.2.2 Calculation of Green Value
        5.2.3 Calculation of Blue Value
    5.3 Training
    5.4 Usage
6 AutoComposer Program
    6.1 Interface and Usage
    6.2 Program Flow
7 Results and Conclusions
    7.1 Major Scales
        Input
        Output
    7.2 Banjo Folk Music
        Input
        Output
    7.3 Bach Preludes
        Input
        Output
    SOM Output
    Conclusions
8 Further Work
    Recapitulation
    Possible Project Extensions
    The Future of Automated Composition
Bibliography
Vita

List of Tables

5.1 Tempo Values
5.2 Dynamics Values

List of Figures

3.1 Score of the First Six Notes of a C4 Major Scale
3.2 Directed Graph of the First Six Notes of a C4 Major Scale
3.3 Directed Tree of Transitions
6.1 User Interface
6.2 Flow of AutoComposer
6.3 Class Diagram of AutoComposer
7.1 Major Scales
7.2 Generated Scales
7.3 Folk Music
7.4 Bach Preludes
7.5 SOM Images

List of Listings

1.1 Excerpt from Master orchestra file
1.2 An Example Score File

Chapter 1

Introduction

1.1 Short History of Computer Composition

In the beginning, computers performed basic mathematical calculations. As each generation of computers increased in processing power, their functional capacity broadened to provide much more than the simple calculations of their earliest predecessors. The increase in the number of calculations a computer can perform increases functionality, but a computer must still rely on a human programmer to explicitly provide its instructions. As computers develop to mimic human beings, their ability to learn becomes more important. Programming computers to learn greatly increases their functionality and opens the door to entirely new application areas. The capacity of computers to learn from information is increasing, and one promising set of new techniques is classified as unsupervised learning [18, 29]. As the name suggests, unsupervised learning techniques strive to have the computer extract meaningful information from input with as little help from human

beings as possible. Unsupervised learning removes the training step that is required for its alternative, supervised learning. In supervised learning, a human operator must provide the computer with a data set that has already been analyzed by people. This data set trains the supervised learning technique being used, giving it an example of what the operator wishes it to do in the future. With unsupervised learning techniques, the input data is provided to the computer and the computer attempts to parse, classify, or analyze the data without any hints or direction from the operator.

In addition to the ability to learn, humans have another advantage over computers: creativity. While a computer may be programmed to learn and extract information from its environment, it does so in a way that is explicitly determined by a human. A computer cannot adequately adapt its behavior, modify the techniques it employs, or develop novel ways to learn from its surrounding environment without the aid of humans. A particular area in which this becomes especially relevant is the arts, most notably music.

The composition of music is a creative process. If it were not, music would not likely have progressed past emulating naturally occurring environmental sounds. Music is an important part of culture, and computers did not have to become very powerful before people began experimenting with them as a tool for music generation. Most human composers have listened to music in the past and draw from what they have heard previously. When the library of familiar music is small, it is likely that new compositions will sound very similar to the previously heard music. In the process of having a computer compose music on its own, it is a logical early step to use

a small subset of musical compositions to influence the style in which the computer will compose. Once a computer is able to draw from a small number of pieces and produce similar compositions, the library of previously heard compositions can be increased and the ability of the computer to creatively produce new music will gradually increase. This thesis evaluates techniques that use a similar process.

1.2 Goals

The primary goal of this thesis is to compose music of a particular style. The style to be emulated is dictated by the pieces of music provided by the user as input. The end result is a composition that, while being novel, retains the stylistic elements found in the provided pieces. A secondary goal is to evaluate the techniques used on their suitability for this application, as well as their ability to become more tailored and more powerful.

1.3 Overview of Thesis

1.3.1 Music Representation

AutoComposer uses modified CSound files as input. CSound [4] is a well-known program for music representation and generation and therefore provides a standard input format that is easily understood. For our purposes, the full capability of CSound is not needed, which is why a simplified CSound file is used as input. CSound employs two files in the creation of music: a score file and an orchestra file.

The orchestra file defines the instruments that will be used in the composition, while the score file contains the times and durations of the notes being played by the instruments. A master orchestra file has been created and is used with all score files. An excerpt from this orchestra file is seen in Listing 1.1.

Listing 1.1: Excerpt from Master orchestra file

    sr     = 44100
    kr     = 4410
    ksmps  = 10
    nchnls = 1

    instr 1
      a1 pluck 10000, , 10000, 1, 1
      out a1
    endin

    instr 2
      a1 pluck 10000, , 10000, 1, 1
      out a1
    endin

    instr 3
      a1 pluck 10000, , 10000, 1, 1
      out a1
    endin

The variable sr represents the sample rate, which specifies how many times per second the sound is rendered. The control rate is stored in the variable kr, and ksmps represents sr/kr. The variable nchnls stores the number of channels to be used in rendering the music: 1 for mono, 2 for stereo. After the header, the specific instruments are defined. The first line declares the instrument number. The second line has a label, a signal-generating opcode, amplitude, frequency [17], frequency ratio, modulation indices, and waveshape. Line 3 contains the out command, which writes the signal with the designated label to the output. The fourth line ends the instrument's definition.

Listing 1.2: An Example Score File

    ; C4 Major Scale
    ; Function 1 uses the GEN10 subroutine to compute a sine
    ; wave
    f ...
    ; instr  start  duration  amplitude
    i ...
    i ...
    i ...
    i ...
    i ...
    i ...
    i ...
    i ...

Score files are used as input to AutoComposer. An example score file is seen in Listing 1.2. The score file begins with a function declaration defining the type of sound to be created; a simple sine wave approximation is used in all the inputs for this thesis. Following the function is a list of notes. Each note has four attributes: the instrument number, the time at which the note is played in seconds (the time index), the duration (in seconds) that the note is to be played, and the amplitude. Each instrument represents a key on a standard 88-key piano (1 being the lowest, 88 being the highest [30], and 0 representing a rest). While this limits the program to the range of notes on a piano, it simplifies some of the analysis.

The program produces CSound files as output. Once the output composition is created and saved as a score file, it can then be rendered into an audio format. This rendering requires the master orchestra file used with the input files as well as the newly created score file.
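To make the note format concrete, the short sketch below parses score lines of the kind just described into (instrument, start, duration, amplitude) records. Python is used purely for illustration; the thesis does not specify AutoComposer's implementation language, and the sample score lines are hypothetical values in the four-attribute format described above.

```python
from dataclasses import dataclass

@dataclass
class Note:
    instrument: int   # piano key number, 1-88 (0 = rest)
    start: float      # time index in seconds
    duration: float   # length of the note in seconds
    amplitude: float  # CSound amplitude value

def parse_score(lines):
    """Parse the i-statements of a simplified CSound score into Note objects."""
    notes = []
    for line in lines:
        line = line.split(";")[0].strip()   # drop comments
        if not line.startswith("i"):
            continue                        # skip f-statements and blank lines
        fields = line[1:].split()
        instr, start, dur, amp = (float(x) for x in fields[:4])
        notes.append(Note(int(instr), start, dur, amp))
    return notes

# Hypothetical sample: the first two notes of a C4 major scale (keys 40 and 42).
sample = [
    "f1 0 4096 10 1     ; sine wave function table",
    "i40 0.0 1.0 10000",
    "i42 1.0 1.0 10000",
]
print(parse_score(sample))
```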

1.3.2 Transition Tables

Two transition tables are employed in the initial generation of the composition. A second-order table is used to store the relationship between a note and the note directly prior. A third-order table is used to store the relationship between a note and the two notes directly prior. As the input pieces are read, the transition tables are initialized. The resulting values stored in the transition tables represent all note transitions that are found in the pieces: the value stored in each table element is the number of times that particular transition was seen in the input. In addition to populating the transition tables, information is stored about which intervals are often seen in the input pieces, for use when new transitions must be generated.

The key disadvantage of using a transition table is its restricted scope. Since the only notes evaluated are those one or two prior to the current note, there is a limited view of the composition, and the evaluation of certain characteristics (e.g., large repeating patterns, overall structure, etc.) is therefore excluded. A genetic algorithm is used to reduce the effect of these deficiencies. Before the genetic algorithm can be used, an initial population must first be created. In our case, the initial population is a set of compositions created using the transition tables; the size of this initial population is determined by the user. The transition tables are used to create each composition, and each is added to the initial population. Once the initial population has been created, it is passed to the genetic algorithm for further refinement.

1.3.3 Genetic Algorithm

As mentioned above, a genetic algorithm begins with an initial population. In our case, the initial population consists of a set of compositions generated using transition tables. The fitness function provides a measure of the goodness of a composition. The accuracy of the fitness function is a key determinant of the success of

a genetic algorithm. The measure of the fitness of a composition should accurately portray how similar in style it is to the input pieces. The fitness function incorporates various elements that characterize the style of a composition, but also ensures that the new compositions are notably distinguishable from the input pieces. Once an appropriate fitness function is created, the program evaluates the fitness of each member of the initial population and gives it a fitness score. A crossover algorithm is employed to take pieces of high fitness and combine them, hopefully in a way that will increase the fitness of the resulting children. While the fitness function is integral to the functioning of the genetic algorithm, the method by which the population members are combined is also important to the algorithm's success. After the combination method is used to generate the children of the initial population, the fitness of each of the new population members is calculated. This process is repeated until user-selected criteria (e.g., fitness stabilization) are met.

One advantage of the genetic algorithm is modularity. Once the main framework is implemented, the customization of key functions (fitness function, crossover method, mutation) is fairly simple. If a fitness function is created that is more successful than the one currently in use, it is easily incorporated into the genetic algorithm without modifying the rest of the framework. The same is true for the combination method employed. This allows for exploration and experimentation to evaluate which fitness functions and combination methods may be best suited for this application.

1.3.4 Self-Organizing Maps

As part of the measure of fitness, Self-Organizing Maps (SOMs) [19] provide one means of evaluating the population members. The time-consuming nature of SOMs precludes their use as the primary fitness function. Instead, the SOM is used every tenth generation. This allows for variance in fitness measures without significantly increasing the generation time. In addition to providing a different method of

population member evaluation, the SOM provides color-coded visualizations of the generations evaluated. SOMs are an unsupervised learning technique; in other words, they organize an input data set without human intervention. Compositions are mapped to an RGB (Red, Green, Blue) value, and this color encoding is the attribute used in categorization by the SOM. The resultant image shows the input pieces, as well as the members of the population, organized according to their representative colors. An average Euclidean distance is used to evaluate how similar the members of the population are to the input compositions.

1.3.5 Results, Conclusions, and Future Work

Several different genres of music are used as input pieces, and the relative success of the analysis and generation is evaluated. Examples are cited and an analysis of advantages and pitfalls is presented. The nature of the project lends itself to many facets of exploration and expansion, and these various avenues are discussed.

Chapter 2

Brief Background of Automated Composition

The beginnings of automated composition can be traced back to 500 B.C., when Pythagoras propagated the idea that music and mathematics were not only intertwined, but virtually the same study [7]. This pairing of mathematics and music eventually led to music processing with computers. Before computer-based methods appeared, many other types of automated composition were devised and studied.

2.1 Pre-Computer Techniques

The first recorded apparatus for the automated composition of music was the Greek Æolian harp, in the second century B.C. [21] These harps, along with simple wind chimes, were the first instruments to use a seemingly random aspect of nature, the wind, to generate music without human intervention. Many similar instruments have been created since, and wind chimes continue to be a popular music generation device.

Simple mechanical devices like the Æolian harp were the primary method of automated composition until Mozart began experimenting with the Musikalisches

Würfelspiel (musical dice games) [7]. In this technique, multiple sections of music were created, and the choice of which ones were to be included, and at what time, was decided by rolls of the dice. While the composer still had to create the pieces to be used, the choice is considered automated because it was not actively decided by the creator.

Foreshadowing the eventual use of computers in the composition of music, the mathematician Joseph Schillinger developed representations of various aspects of music [27]. Many of these representations employed complicated mathematical formulas. A few years later, Milton Babbitt developed a system which serialized articulation, pitch, rhythm, and dynamics [6].

In the 1950s, what was deemed chance music began growing as a popular area of automated composition. John Cage, with Marcel Duchamp, created an example of one of these pieces by attaching electronics to a chess board so that the ensuing game would generate the music as the pieces were moved [7]. Steve Reich created another example of chance music in 1968, called Pendulum Music, in which microphones hung above the loudspeakers to which they were attached, generating feedback [7]. The performers pulled the microphones back in unison and released them; the feedback produced as the microphones passed over the speakers comprised the composition. The only other act the performers carried out was the simultaneous unplugging of the microphones once they had all come to rest.

While chance music was becoming more popular, more algorithmic approaches were also being developed. S. R. Holtzman provided one of these techniques in his formulation of a Generative Grammar Definition Language [16, 7]. This more structured approach provided a framework for composition, but limited some aspects of what could be created. The structured nature of these techniques was well adapted to the growing use of computers in composition.

2.2 Computer-based Techniques

One of the first experiments with automated composition using computing hardware was conducted by H. Olson and H. Belar, who connected a sound-generating system to two random number generators [7]. Weighted probabilities were used to assist in determining the output, and the music was limited to a specific style. The next major development was by Lejaren Hiller and Leonard Isaacson [7]. Their work also involved random number generators. The Illiac computer was used to run many of the resulting programs, which led to the creation of the Illiac Suite for String Quartet [14]. One advantage of the programs written for the Illiac was that the output was in standard music notation.

In the 1960s, Pierre Barbaud began developing a system of automatic composition involving the use of existing tonal harmonies [7]. His process involved taking known elements of music and creating different combinations and permutations to create new music. This work led to the evaluation of existing music as an element from which new music can be composed.

When existing music is to be used in the composition process, it is beneficial to be able to accurately evaluate the input music. This led to a growing focus on the analysis of musical style [5]. James Gabura developed programs to provide comparisons between the styles of different composers and to categorize input pieces as having stylistic similarities [11].

Brooks, Hopkins, Neumann, and Wright created one of the first programs that used existing pieces to create new compositions in a similar style. They limited the input set to hymns. The input was also confined to a certain degree of similarity: all pieces had to be in the key of C major and have the same meter. The program involved randomness as well as a recursive algorithm for fixing sections that were stylistically inaccurate [7].

Yaakov Kirschen also worked with existing pieces as input, but in a way quite

different from Brooks et al. [7] His technique involved the juxtaposition of the different musical elements of the inputs; for example, the pitches of one piece could be applied to the tempo of another. This was one of the first uses of previous music in automated composition, and it led to further analysis of existing works in the creation of new music.

One of the most recent, and arguably most successful, techniques in the automated composition of music has been that of David Cope [7]. His Experiments in Musical Intelligence (EMI) has grown since 1981 into a capable producer of new compositions in the style of previous works. EMI began as a physical construction and has grown over its life cycle into a powerful program.

2.3 Influence on This Thesis

Many of the methods used in AutoComposer have been drawn from other work done in the field. Michael Mozer investigated the use of transition tables (both second- and third-order) in the composition of music [24]. In his research, he extended the transition table through the use of a neural network. In this thesis, several other methods are employed to extend the functionality of the transition table.

Several researchers have attempted to evolve new music. Some approaches have focused on the development of the actual audio waveform [20], while others have attempted to generate musical scores [28]. The quality of the new pieces has been evaluated using tonal theory [28] and comparisons to input compositions [9]. The ability to evaluate and rate similarity between compositions is a valuable tool in the creation of new works. This is especially true when there are input pieces whose style is used to mold the style of the newly generated pieces. Self-Organizing Maps have been used in the classification of many data types [1, 2, 31] and have shown promise in the analysis of music as well [26].

While there has been much work in the field of automated computer composition, only the work most relevant to the formulation of this thesis has been mentioned. Several techniques have shown promise, and the combination of these techniques in AutoComposer serves as a continued evaluation of how they can work together in the task of music synthesis.

Chapter 3

Transition Tables

3.1 Introduction

In the earliest attempts at composing music with computers, one of the primary limitations was the computational capacity of the computer. This limitation forced those attempting to generate music to find simple and straightforward methods. Techniques were needed that could provide a method of music analysis without requiring much computer memory or processing power. Transition tables were one of the first methods to meet these requirements.

3.2 Definition

The concept behind a transition table is not complex. It uses a single note as the predictor of the note that will follow. The table stores probabilities of transition from a specific note to another. In the second-order table, the row index represents the note that is the destination of the transition, while the column index represents the origin (the note directly prior).

In this thesis, an expanded transition table was also used. In the standard second-order transition table, only the note immediately previous is used. By adding

another dimension, a third-order table is created which uses more than just the immediately prior note: the third-order table uses the two prior notes. This extends the scope of the table and increases its usefulness.

Since the tables represent transitions and these transitions are assigned a value, they can also be represented as a weighted, directed graph. Figure 3.1 shows a simple progression of notes from C4 (note 40) to A4 (note 49). If this were part of an input piece used to initialize the transition table, the resulting transitions could be represented by the directed graph in Figure 3.2.

Figure 3.1: Score of the First Six Notes of a C4 Major Scale

Figure 3.2: Directed Graph of the First Six Notes of a C4 Major Scale

Both second-order and third-order tables are simple in concept, though this simplicity comes at a cost. Many compositions have more structure than is evident in note-by-note evaluation. By only analyzing one or two previous notes, any structure that extends beyond two notes is essentially ignored. Because this is the first stage of the music generation process, there will be additional stages to address the limits imposed by the scope of the transition table.
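As an illustration of how the two tables are organized, the sketch below records the C4 major scale fragment of Figure 3.2 in a second-order and a third-order table. Python and the dictionary-of-counters layout are used purely for illustration; the thesis does not specify AutoComposer's implementation language or internal data structures, only that the tables are indexed by the current and prior notes.

```python
from collections import defaultdict

# Second-order table: count[current][previous]
second_order = defaultdict(lambda: defaultdict(int))
# Third-order table: count[current][previous][two_back]
third_order = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

def add_piece(notes):
    """Record every transition found in one input piece (notes are piano key numbers)."""
    for i, current in enumerate(notes):
        if i >= 1:
            second_order[current][notes[i - 1]] += 1
        if i >= 2:
            third_order[current][notes[i - 1]][notes[i - 2]] += 1

# First six notes of a C4 major scale: C4 D4 E4 F4 G4 A4 (keys 40, 42, 44, 45, 47, 49).
add_piece([40, 42, 44, 45, 47, 49])

print(second_order[44][42])     # 1: E4 (44) followed D4 (42) once
print(third_order[45][44][42])  # 1: the sequence D4 -> E4 -> F4 was seen once
```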

3.3 Usage

3.3.1 Initialization of the Tables

The transition tables are initialized with the input pieces chosen by the user. Whenever a new note is read (with the exception of the first note of a composition for the second-order table and the first two notes for the third-order table), the appropriate value in the transition table is incremented. As mentioned above, the row represents the current note and the column represents the previous note. If a note 44 is read and it was preceded by a note 42, the value at the intersection of row 44 and column 42 is incremented by one. This results in a table in which each value represents the number of times a specific transition was seen in the input pieces.

The third-order table has three dimensions, and therefore the two prior notes of each note in the melody are evaluated as the pieces are read. For any given note, the third-order transition table stores the likelihood of it being seen following specific combinations of two other notes. For example, if a note 41 is read after the above-mentioned sequence, the value is incremented at the intersection of row 41, column 44, and depth 42. This represents that a transition from 42 to 44 to 41 was seen in the input piece.

3.3.2 Generation

The transition tables are used to decide which note should be chosen as the next note in the composition. The first note must be provided before the transition tables become useful. The user has the option of choosing the note that will begin the newly generated composition, or the note can be chosen randomly. After that note is chosen, the transition tables can be used to generate the remainder of the composition. The transition tables are searched to find which notes commonly followed the note chosen to begin the new composition. If there are several options, the note

is chosen with a weighted randomness determined by the number of times a sequence was seen in the input compositions. If, for example, two notes were found to follow the current note, the first having a value in the transition tables of 4 and the second having a value of 6, then the first note has a 4 out of 10 chance of being chosen (40%) and the second note is chosen the remaining 6 out of 10 (60%) times. This process is followed until the composition reaches the required length. Typically the length is similar to that of the input pieces.

In earlier versions of AutoComposer, all pieces generated with transition tables began with a seed note and progressed until a satisfactory length was reached. This resulted in many pieces ending on notes that caused the compositions to sound incomplete. The capability was therefore added to generate pieces starting from both the beginning and the end. By doing this, a piece is guaranteed to end on the same note with which it began, allowing compositions to sound more complete.

If, during the composition of the piece, the transition table contains no transitions from the current note, the current note is considered stranded. After a note has been found to be stranded, the notes an exact number of octaves away are evaluated. For example, if a C4 (note 40) is stranded, other C notes are used to produce the next note. If this also fails to produce the next note, the table is insufficiently populated; this situation is described in the next section.

3.3.3 Insufficiently Populated Tables

There are cases in which the input pieces do not provide enough notes, or the beginning of the new piece falls outside the range of the input pieces. In these cases, the transition tables are considered sparse and must be further populated. This is true when the current note has no transitions from it to another note. This stranded note becomes the focus point around which the transition table is further populated.

While the transition table stores transitions between explicit notes, the number of steps up or down between the notes is also stored, without regard to the specific pitches. This information is then used to assist in further populating the transition table. If, in the input pieces, it was common to see a note preceded by another 2 steps lower in pitch, then there is a greater likelihood that a transition from the stranded note to the note two steps lower in pitch will be added to the transition table.

To ensure that there are sufficient transitional choices, 5 to 25 additional transitions are added to the table. The above-mentioned pitch-independent steps are used in this generation process. Starting with the stranded note, the common steps up or down are used to determine which transitions should be created in the table. While this generation of new transitions begins at the stranded note, it branches to provide transitions from more than only the stranded note. In Figure 3.3, note 44 is the stranded note from which new transitions are added. In addition to the ones from note 44, transitions are added from the newly accessible notes as well (e.g., from 42 to 40). This branching provides a larger scope, reducing the possibility of the next note chosen being stranded as well. The values in the transition table are permanently altered, so future compositions are less likely to encounter stranded notes during their composition.

Figure 3.3: Directed Tree of Transitions
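The note-selection step described in this section can be summarized in a short sketch. Python is again used only for illustration; the helper populate_from_intervals stands in for the interval-based repopulation routine just described, and the specific octave shifts tried for a stranded note are an assumption about how the "notes exact octaves away" are evaluated.

```python
import random

def successors(second_order, note):
    """Collect outgoing transitions of `note` from a table indexed [current][previous]."""
    return {nxt: prevs[note] for nxt, prevs in second_order.items() if note in prevs}

def choose_next(second_order, current, populate_from_intervals):
    """Pick the next note with weighted randomness, falling back when the note is stranded."""
    candidates = successors(second_order, current)
    if not candidates:
        # Stranded note: try the same pitch class in other octaves (12 keys apart).
        for shift in (-36, -24, -12, 12, 24, 36):
            candidates = successors(second_order, current + shift)
            if candidates:
                break
    if not candidates:
        # Still stranded: add 5-25 transitions around the stranded note using the
        # pitch-independent interval counts (assumed helper; see the section above).
        candidates = populate_from_intervals(second_order, current)

    notes, counts = zip(*candidates.items())
    # A transition seen 4 times is chosen with probability 4 / (4 + 6) = 40%, and so on.
    return random.choices(notes, weights=counts, k=1)[0]
```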

3.4 Creation of Genetic Algorithm Input

The genetic algorithm takes as input an initial population. In our case, this population is composed of a set of compositions, with a single composition being a population member. The transition tables are used to generate each member of the initial population. Once initialized with this population, the genetic algorithm completes the next stage of composition.

Chapter 4

Genetic Algorithm

4.1 Introduction

Genetic algorithms [12, 15, 10] are most suited to applications where there is either no clear correct answer or the process of finding a solution is unreasonably time-consuming. Genetic algorithms are adept at finding approximations to solutions. As the name implies, the concept is biologically based, and the basic premise is to evolve an approximate solution to the given problem. The algorithm begins with a population, and new members are created through genetic crossover and mutation. The more fit members of the population are the most likely to be involved in the production of new individuals. Hundreds of generations are created to give the genetic algorithm ample opportunity to produce viable solutions.

Genetic algorithms are suited to this project because the nature of music evaluation is subjective, resulting in no clearly definable right answer. While quantitative analysis can be useful, individual interpretation of stylistic attributes makes categorization less definite. The nature of genetic algorithms is suited to creating new compositions that are of similar, but not necessarily exactly the same, style.

4.2 History and Description

Charles Darwin could be credited with creating the original inspiration that later developed into genetic algorithms [8]. The idea of survival of the fittest began as a biological construct, though it has grown to apply to other areas as well. I. Rechenberg's 1973 work, Evolutionsstrategie [10], is one of the earliest references incorporating the concepts of evolutionary biology into the realm of computers, but it is John Holland's Adaptation in Natural and Artificial Systems [15] that is most cited as the first formulation of genetic algorithms.

The primary reason for the development of genetic algorithms is their robustness [12]. As models of biological systems, genetic algorithms adapt and adjust, allowing them to perform well in a wide array of environments. These abilities allow genetic algorithms to find answers in search spaces that may be too large to evaluate with traditional methods. A disadvantage of this approach is the possibility of migrating towards a local solution when a more optimal solution is actually available. Another consideration is that genetic algorithms often provide approximate solutions quickly, but exact solutions are less likely.

An advantage of genetic algorithms is the extent to which certain components can be tailored to specific tasks. One of the most important pieces of a genetic algorithm is its fitness function. The fitness function gives a measure of how close a possible solution is to optimal. If the fitness cannot be accurately measured, the success of the genetic algorithm will be severely limited. The second important component is the combination or crossover method. This determines how the possible solutions are combined to form new possibilities. The goal is to retain diversity while promoting those attributes that cause a solution to be more fit. Though the solutions that appear most optimized are more likely to be involved in the creation of new solutions, less fit solutions are often included to help prevent reaching a sub-optimal result and to promote diversity [12].

The fitness function and crossover method are often chosen depending on the type of the genetic algorithm's input. Crossover methods that fare well in one context can produce extremely poor results when applied to another data set. The modularity of these two components allows them to be easily replaced so that comparisons can be made and the genetic algorithm can be appropriately tailored to the current task.

4.3 Input

The input to the genetic algorithm is a population. Traditionally, each population member is a bit string representing the data set. While this provides a measure of simplicity, it also restricts the data that can be used as input. In this project, each of the population members is a musical composition. Transition tables are used to create the initial population that will begin evolving through the genetic algorithm. Since the initial note of each composition is identical, a certain amount of similarity between members is expected. This similarity gives the genetic algorithm more initial focus than if the input population were more varied.

4.4 Implementation

4.4.1 Fitness Function

After the transition tables have created an initial population of a size determined by the user, the genetic algorithm first evaluates the fitness of each member. This involves calculating the average correlation between the member and the input pieces. This average is then compared to the average correlation amongst the input compositions themselves. To reduce the possibility of creating a composition that is almost

indistinguishable from an input piece, a note-by-note measure of similarity to the input pieces is calculated. If that similarity is above a certain threshold (e.g., 75% of the notes are identical), the fitness is penalized.

The member of the initial population having the highest fitness is saved along with its corresponding fitness value. This is done each generation to preserve the best solutions in cases where near-optimal solutions are found early in the generation process. Due to the probabilistic nature of genetic algorithms [12], there is no guarantee that each successive generation will be more fit than the last. By storing the most fit member of each generation, a collection of fit population members remains available. This allows the final composition to be selected from the entire generation process instead of only from the final generation.

As a periodic measure of fitness, a Self-Organizing Map (SOM) is used. Due to the time a SOM takes to initialize, it is used in computing the fitness of population members only on every tenth generation. The SOM provides a color-coded visualization of the population in addition to providing a means to compute the fitness of the population members. Each population member must be converted to a corresponding color before being processed by this method; those details, as well as a detailed description of the methodology behind the SOM, are given in the next chapter.

4.4.2 Crossover

For a genetic algorithm to function properly, it must have a way to create new populations. The method by which the new populations are made affects the diversity and fitness of the following generations. The crossover [15] or combination process is dictated by the type of data each population member contains. In this project, several crossover methods are employed, and the user is allowed to choose among them before generating compositions.

The first crossover method, split, is the simplest. It takes two

population members and finds the midpoint of each. The second halves are then switched, creating two new members for the next population: the first child contains the first half of the first composition and the second half of the second composition, and vice versa for the second child. Since both members are split evenly, the resultant pieces have a length that is the average of the lengths of the two members being crossed.

A second crossover method, switch-at-note, takes two population members and begins with the first. It adds notes from the first composition to the child until it finds a note in common with the second composition. It then reads from the second composition for the remainder of the child's creation. The process is also carried out beginning with the second composition, so two new members are the final result.

The third combination method is the snake. The snake method begins with the first composition, reads a user-selected number of notes (e.g., 5), and then reads from the second composition, starting at the corresponding note (e.g., note 6). The indicated number of notes is then read from the second member, and the method snakes back to reading from the first. These note sections are read, alternating between the members, until a child of appropriate length is created or until the end of one of the members being crossed is reached. As with the other combination methods, this is repeated beginning with the second composition as well.

Variable snake is the fourth crossover method and is a variant of the above-mentioned snake method. With the variable snake method, the basic concept of snaking between the population members is still present. The key difference is that, instead of reading fixed-size sections, the length of each section is randomly determined.

The fifth and final crossover is the alternate method. As the name suggests, the alternate method switches between reading from the first and second members.

It functions similarly to the snake method, except that only one note is read from each piece before switching to the other.

4.4.3 Mutation

Adequate crossover methods can provide a great deal of variance amongst population members, but mutation is often used to help increase genetic diversity [12, 15]. In areas where there are multiple solutions of high quality but the intermediate solutions are inadequate, a genetic algorithm can occasionally migrate towards a peak in fitness that is not actually optimal. For example, a generation may have a higher average fitness than the previous generation, but the way the population members were modified may limit the potential of the generation. While the fitness improved, the population may continue to evolve in a way that restricts future generations from reaching a level of fitness that would have been possible had a different path of evolution been followed. Mutation provides a pseudo-random means of increasing the diversity of the population.

Though mutation serves to prevent an overabundance of uniformity, it must be used appropriately to have the desired effect. If the mutation rate is set too high, it will be difficult for the genetic algorithm to converge to any solution. If it is too low, the result is very little diversification. In this project, the mutation rate dictates the chance of a mutation occurring to a population member during the creation of subsequent generations. If the mutation rate is 0.05, there is a 5% chance that any given composition will be mutated. If a composition is mutated, a random note within it is chosen. All notes from the chosen note forward are removed, and the transition tables are used to regenerate the portion of the composition that was deleted.
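As a concrete illustration of the simplest of these operators, the sketch below implements the split crossover and the regenerate-from-a-random-point mutation described above. Python is used for illustration only, and generate_from is a hypothetical stand-in for the transition-table generation routine of Chapter 3; its name and signature are assumptions rather than AutoComposer's actual interface.

```python
import random

def split_crossover(parent_a, parent_b):
    """Swap the second halves of two compositions (represented as lists of notes)."""
    mid_a, mid_b = len(parent_a) // 2, len(parent_b) // 2
    child_1 = parent_a[:mid_a] + parent_b[mid_b:]   # each child's length is the average
    child_2 = parent_b[:mid_b] + parent_a[mid_a:]   # of the two parents' lengths
    return child_1, child_2

def mutate(composition, rate, generate_from):
    """With probability `rate`, remove a random suffix and regrow it from the tables."""
    if random.random() >= rate or len(composition) < 2:
        return composition
    cut = random.randrange(1, len(composition))  # index of the chosen note
    kept = composition[:cut]                     # the chosen note and all later notes go
    # Regenerate the removed portion using the transition tables (assumed helper).
    return kept + generate_from(start_note=kept[-1], length=len(composition) - cut)
```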

Chapter 5

Self-Organizing Map

5.1 Introduction

Self-Organizing Maps, also called Kohonen Neural Networks after their creator, Teuvo Kohonen, are an unsupervised learning technique that serves to categorize and visualize multidimensional data sets [19]. As an unsupervised learning technique, they do not require direction from an operator. Once the Self-Organizing Map is provided with the data set, it runs autonomously until termination criteria have been met.

The map is a 400 by 400 pixel image, and each pixel is randomly initialized: when the map is created, each pixel is assigned a random weight. In this project, the weights are the corresponding Red, Green, and Blue (RGB) values indicating the color of the pixel. Input data is then read and the map is modified to cluster the similar inputs. This process is repeated numerous times, with each subsequent iteration being more fine-tuned than the previous. The process terminates when either the map has stabilized or a specific number of iterations has been carried out. It is described in more detail later in the chapter.

5.2 Coding of Compositions

The typical example of a Self-Organizing Map involves the categorization of colors. Each color is multidimensional in that it has a red, green, and blue value, and each color value is represented by two hexadecimal digits. The advantage of using colors as input is the easily interpreted resulting visualization. To provide this type of visualization in the context of music, we first assign each composition a color dictated by certain musical characteristics.

5.2.1 Calculation of Red Value

The first evaluation of the compositions involves the note density, or how many notes occur within a given amount of time. This is determined by the types of notes (whole, half, quarter, etc.) and the tempo. While the durations of notes relative to the tempo are specifically defined, much music has a more subjective measure of tempo. This is especially true of music using Italian tempo markings. Instead of a specific measure, such as beats per minute (bpm), these indications are descriptions (e.g., Andante means "at a walking pace") [33]. A common range of tempos is from 30 bpm to 240 bpm [25], so for the purposes of AutoComposer the tempo descriptions are mapped to this range as seen in Table 5.1.

With those values defined, the measure of note density can be quantitatively evaluated. The density of the input pieces to the SOM is recorded in notes per second. In addition to the average number of notes per second, the variance [29] is also calculated. These values are then mapped to the hexadecimal encoding representing the red portion of the RGB value. The first hexadecimal digit encodes the average note density, with 0 mapping to 0 and f mapping to 32 or above. This mapping means that for every two notes per second the average increases, the hexadecimal digit increases by one. For example, if the average density of a piece was found to be

20, an a would be recorded. As the first hexadecimal digit, the average note density is the primary determinant of the redness of the representative image.

Table 5.1: Tempo Values

    Tempo Description    Beats Per Minute
    Grave                 30
    Adagio                51
    Largo                 72
    Lento                 93
    Andante              114
    Moderato             135
    Allegretto           156
    Allegro              177
    Vivace               198
    Presto               219
    Prestissimo          240

The secondary determinant is the variance in note density throughout the piece. The variance is calculated by computing the average difference from the mean note density for each second in the piece. If there is no variance (i.e., every second has the same number of notes), the resulting hexadecimal digit is 0. A value of f indicates that every second of the piece had a difference in note density of half of the total range of note densities found in the composition. Since this value is the second hexadecimal digit, it provides a more subtle variation in the shade of red produced.

37 29 pitch 40 followed by one of pitch 42 would not be differentiated from a note of pitch 40 being followed by a note of pitch 50. The flow of the piece is typically evaluated in pairs of directions[13]. results in nine categories into which pairs of flow indicators are classified. This These categories are sorted from downward to upward trends (DD, DR, RD, DU, RR, UD, RU, UR, UU). After each flow pair of the piece is classified, the average is calculated and mapped to a hexadecimal green value. The result is a higher level of green in compositions with a more upward flow and a lack of green in those compositions with a more downward flow Calculation of Blue Value The dynamics of the composition[7] are used to determine the encoding for the blue value. Common dynamics range from ppp (Pianissimo Piano) to fff (Fortissimo Table 5.2: Dynamics Values Dynamic Indicator Amplitude ppp 1000 pp 5143 p 9286 mp mf f ff fff Forte)[32]. These values are mapped to CSound amplitude values as in Table 5.2. For each population member, the average dynamic level is calculated resulting in the first hexadecimal digit and primary indicator of the amount of blue in the representative image. The second hexadecimal digit is calculated using the dynamic variance, similar to the measure of note density mentioned previously.

5.3 Training

The basic idea of training the SOM is that each piece is used to modify the colors in the map until the similar colors in the map are grouped together. As mentioned earlier, the map is initialized by randomly assigning an RGB value to each pixel. Depending on the size of the data set used to train the map, one of two approaches is typically taken. If the data set is large, a random sample from the data set may be used to train the map. The second approach is used in AutoComposer because the size of the input is sufficiently small: all members are used in the training of the map.

The first step is to take a population member and find the area in the map that is most similar, the Best Matching Unit. This is done using the Euclidean distance measure

$\sqrt{(R_{pixel} - R_{member})^2 + (G_{pixel} - G_{member})^2 + (B_{pixel} - B_{member})^2}.$

This formula indicates how near in color the input composition is to the current pixel. Every pixel in the image is compared to the color of the input piece, and the pixel that is closest to the color of the input is recorded as the best matching unit (BMU). Once the BMU is found, its neighbors are scaled using the scaling function described below.

The scaling function determines which neighbors are modified and to what extent. The scope of this function initially encompasses much of the map (e.g., all pixels within half the radius of the map) and gradually decreases the area modified as the training progresses. The amount of change a neighbor undergoes is determined using a linear function: the farther away a pixel is, the less it is affected. By beginning with a large area of influence, the initial iterations of training affect a large amount of the map, providing broad training. As more iterations are completed, the training becomes more fine-tuned and focused.

When a particular neighbor pixel is modified, its color is adjusted to become closer to that of the pixel found to be the BMU. For example, if the BMU was a

completely green pixel, then when its neighbors are adjusted, they all become more green: their green values are increased while their red and blue values are decreased. By making the neighboring pixels more similar to the BMU, future compositions whose colors are similar are more likely to find a BMU nearby. After many iterations, this increases the likelihood that similar pieces will find BMUs near each other, resulting in a clustering of similar pieces.

The training process continues until a user-specified threshold has been met. The training can continue through a specified number of iterations; a second option is to evaluate how much the map changes each iteration and end the training when the change is sufficiently small. During the final iteration, the compositions find their BMUs and store the coordinates.

5.4 Usage

While the typical use of SOMs is to categorize data, they also provide a way to evaluate similarity between data members. The Euclidean distance between two pixels gives a measure of similarity, as does the difference between the colors themselves. The primary goal of AutoComposer is to produce new compositions that are similar to the input pieces, and a SOM is suited to provide functionality that assists in this goal.

The primary function of the Self-Organizing Map is to provide a measure of fitness for the generated compositions. Using the previously mentioned coordinates stored in the final iteration, the Euclidean distance between a population member and the center of the input piece cluster provides a measure of how similar it is to the inputs. The average distance to the input pieces is also compared to the average distance among the input pieces themselves. The smaller the difference between these averages, the more fit the population member is. When the input

pieces and generated compositions are indicated on the map, it also provides an easily understood visualization of the pieces generated.

A secondary function of SOMs in AutoComposer is the evaluation of the techniques used to classify the style of the various compositions. If the trained SOM has the inputs scattered across the entire map, it indicates that the measure of style did an unsatisfactory job of identifying compositions of similar style. This is a useful tool for indicating what future modifications may result in a more accurate stylistic analysis.
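A compact sketch of the training step described in Section 5.3 is given below. Python is used purely for illustration; the 400 by 400 map size and the linear neighborhood falloff follow the text, while the learning-rate parameter, the radius schedule, and the detail that neighbors move toward the input color (the standard SOM formulation; the BMU is by definition the pixel closest to that color) are assumptions.

```python
import math
import random

SIZE = 400  # the thesis uses a 400 x 400 pixel map

def new_map():
    """Initialize every pixel with a random RGB weight vector (components in [0, 1])."""
    return [[[random.random() for _ in range(3)] for _ in range(SIZE)] for _ in range(SIZE)]

def color_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_matching_unit(som, color):
    """Exhaustively find the pixel whose weights are closest to the input color.
    This full scan of 160,000 pixels is what makes the SOM time-consuming."""
    return min(((x, y) for x in range(SIZE) for y in range(SIZE)),
               key=lambda p: color_distance(som[p[0]][p[1]], color))

def train_step(som, color, radius, rate):
    """Move the BMU's neighbors toward the input color, with linear falloff (radius >= 1)."""
    bx, by = best_matching_unit(som, color)
    for x in range(max(0, bx - radius), min(SIZE, bx + radius + 1)):
        for y in range(max(0, by - radius), min(SIZE, by + radius + 1)):
            d = math.hypot(x - bx, y - by)
            if d <= radius:
                influence = rate * (1 - d / radius)  # linear decrease with distance
                som[x][y] = [w + influence * (c - w) for w, c in zip(som[x][y], color)]
```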

Chapter 6

AutoComposer Program

AutoComposer is the program written for this thesis. It is used for the evaluation and creation of music using the techniques described in the previous chapters. This chapter provides an overview of the functionality of AutoComposer.

6.1 Interface and Usage

When the program is started, the user sees the main screen (Figure 6.1a). The left column provides a listing of the files that have been selected as input to the program. To select files, the Add Files button is used, or the user may add files through the File menu item. All files chosen are automatically added to the list. If the user wishes not to use the files listed, the Clear Files button removes all selected files.

By selecting Options from the File menu, the user is shown a dialog (Figure 6.1b) with multiple options that affect the way music is generated by the program. The initial generation can be done using the second-order table, the third-order table, or both. The number of iterations used to train the Self-Organizing Map is also user-specified. Most of the options affect the genetic algorithm.

Figure 6.1: User Interface. (a) Main Screen; (b) Options Screen.

There are three options for the conditions under which the genetic algorithm stops: it can run for the user-entered number of generations, until the fitness of the best composition from each generation has stabilized, or until the fitness comes within a predetermined threshold. The latter two options allow the genetic algorithm to complete early, but neither will continue past the entered number of generations. The seed note that begins each generated composition can be chosen, or the user may allow AutoComposer to choose the note randomly. The rate of mutation is also determined by the user. There are five choices of combination or crossover method, and two of them have further options: the Snake and Varied Snake combination methods take an input dictating the length of the sections used; in the Varied Snake method, the value determines the maximum length of a section.

Once the options have been accepted and the files have been chosen, a composition can be generated. After the user clicks the Generate Composition button, the music is generated and the resulting score is displayed in the tabbed window. The highlighted tab indicates the information that is currently displayed in the window below it. The window defaults to Output Score, which displays the CSound-formatted composition. SOM Images allows the user to see the trained Self-Organizing Map for each generation in which it was used. The Input Colors window displays the corresponding color encoding for each piece that was read as input to the program. By clicking on the Output Color tab, the user can display the color encoding for the newly generated composition.

If the user is not satisfied with the composition that has been created, the Generate Composition button can be pressed again to generate a new composition from the already selected input pieces. If the Save Composition button is pressed, the generated CSound file of the composition is saved. A CSound rendering tool is then used to convert the file from CSound to wave or MIDI file formats.

6.2 Program Flow

After the graphical user interface (GUI) is used to select the input files, the files are read in and parsed. As they are parsed, the transition tables are updated to reflect each note that is read. Once this has occurred, the transition tables are used to generate the number of population members indicated by the user. This initial population is then passed to the genetic algorithm as input.

The genetic algorithm then evaluates the fitness of each member. After the most fit member is stored, the population members are combined to create new compositions using the indicated crossover method. Each population member is evaluated to determine whether it will mutate, and then the process repeats. If the generation number is evenly divisible by ten (every tenth generation), a Self-Organizing Map is created and passed the input compositions as well as the evolving population of compositions. After the SOM is trained, it gives a distance measure which is then used to determine the fitness of each population member in that generation. Once all generations have run, the results are passed back to the GUI, where the information is presented as described above.

Figure 6.2: Flow of AutoComposer
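The overall loop can be summarized in pseudocode form. The sketch below is illustrative Python, not AutoComposer's actual code; names such as build_transition_tables, generate_composition, correlation_fitness, som_fitness, next_generation, and maybe_mutate are placeholders for the routines described in Chapters 3 through 5 (parse_score reuses the earlier score-parsing sketch), and the stopping test is simplified to a fixed generation count.

```python
def autocompose(input_files, population_size, generations, crossover, mutation_rate):
    # Parse the simplified CSound score files and build the transition tables (Ch. 3).
    pieces = [parse_score(open(path).readlines()) for path in input_files]
    tables = build_transition_tables(pieces)

    # Initial population: compositions generated directly from the tables.
    population = [generate_composition(tables) for _ in range(population_size)]
    best_score, best_member = float("-inf"), None

    for gen in range(generations):
        # Every tenth generation the SOM-based fitness is used instead (Ch. 5).
        fitness = som_fitness if gen % 10 == 0 else correlation_fitness
        scores = [fitness(member, pieces) for member in population]

        # Keep the most fit member seen so far, as described in Section 4.4.
        top = max(range(len(population)), key=lambda i: scores[i])
        if scores[top] > best_score:
            best_score, best_member = scores[top], population[top]

        # Breed and mutate the next generation (Ch. 4).
        population = next_generation(population, scores, crossover)
        population = [maybe_mutate(member, mutation_rate, tables) for member in population]

    return best_member
```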

Figure 6.3: Class Diagram of AutoComposer

Chapter 7

Results and Conclusions

Several different categories of music were used to evaluate AutoComposer. The first input set is made up of several major scales. The second category is folk music written for the banjo. The third and most complex input is composed of several short preludes written by Johann Sebastian Bach.

7.1 Major Scales

7.1.1 Input

The first input used in the program is a set of several major scales. Initially, all scales started on a base note, proceeded up the scale until reaching the note an octave above the original, and then progressed back down to the original note. This provided a very structured input, but due to the correlational nature of the music evaluation, it was found to be trivial to produce other major scales.

To provide a more challenging input, the scales were given multiple forms. Instead of each scale simply starting at the base note and progressing up and back down the octave, the lengths vary, as does the flow (see Section 5.2.2). All scales end on a note an exact number of octaves from the original note, but not necessarily the

original note itself. The scales seen in Figure 7.1 were used as part of the more varied input. Even though the modified input provides a less standardized set of pieces, there is a high degree of inter-group similarity. The lengths of the pieces vary, as does the flow, but all notes have the same duration and the same amplitude. Since the tempo and note durations are the same, the main differentiating factor with respect to the Self-Organizing Map is the flow. Though this limits the organizational ability of the SOM, the simplicity of the scales did not require the additional categorization.

Output

The output compositions generated using these scales as input resembled major scales, but did not fit the exact form. An upward major scale has the form 2-2-1-2-2-2-1, where each number represents how many half-steps lie between consecutive notes. The varying lengths of the input pieces made the correlational evaluation less rigid, and AutoComposer was therefore less able to adequately determine the general form of the inputs. Using the third-order transition table alone resulted in more accurate output pieces. Because the extra dimension extended the scope of the table, general trends were more readily captured. Figure 7.2 (a) and (b) show output compositions generated by the second-order and third-order tables, respectively. The piece generated by the second-order table is limited by knowing only a single previous note. In the scales that go up and then back down, each note (except the base note) follows both a lower note and a higher note at some point in the piece. Since the second-order table can only evaluate the note immediately before a note, it has no indication of the previous flow of the composition (i.e., whether it was progressing up or down).
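To make the second-order versus third-order difference concrete, the following sketch builds both tables from an up-and-down scale. The table structure shown (a dictionary of transition counts keyed on the previous one or two notes) is illustrative rather than AutoComposer's exact representation.

```python
from collections import defaultdict

def build_table(piece, order):
    """Count transitions keyed on the previous (order - 1) notes (illustrative sketch)."""
    table = defaultdict(lambda: defaultdict(int))
    k = order - 1
    for i in range(k, len(piece)):
        context = tuple(piece[i - k:i])
        table[context][piece[i]] += 1
    return table

# An up-and-down C major scale.
scale = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5",
         "B4", "A4", "G4", "F4", "E4", "D4", "C4"]

second = build_table(scale, order=2)
third = build_table(scale, order=3)

# Second order: after E4 the table has seen both F4 (ascending) and D4 (descending).
print(dict(second[("E4",)]))          # {'F4': 1, 'D4': 1}
# Third order: the two-note context distinguishes the direction of travel.
print(dict(third[("D4", "E4")]))      # {'F4': 1}
print(dict(third[("F4", "E4")]))      # {'D4': 1}
```

With only one note of context, E4 is followed equally often by F4 and D4, so the generator cannot tell whether the scale was rising or falling; the two-note context removes that ambiguity.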

Figure 7.1: Major Scales. (a) C4, (b) D6, (c) E5, (d) F3

Figure 7.2: Generated Scales. (a) Second-Order, (b) Third-Order

One characteristic that is more striking to human listeners is the ending. The transition tables do not store the point in the composition at which they are being used to generate new music. This causes many generated pieces to seem to end abruptly and without resolution. In the input major scales, all pieces finished on the same note (e.g., C), though possibly at a different octave. If the user chooses to generate the composition from both the end and the beginning, this problem is lessened, but it does not assure the creation of a composition that sounds complete.

7.2 Banjo Folk Music

Input

The second input set is a series of folk compositions [3], previously arranged for banjo. Because a piano is able to play most of the notes found on a banjo, no transcription was necessary. These pieces introduce a more complicated structure than that found in the major scales. Notably, each contains notes of varying durations.

There was some variation in amplitude between pieces as well, though each piece maintained the same amplitude throughout. There was also some variation in tempo, which was likewise maintained within each piece. To provide an additional area of stylistic similarity, the pieces chosen are all in the same key. Many of the input pieces contain repeating patterns. Some have verses that repeat exactly the same notes and durations; others repeat a set of note durations while the pitches differ. These repeating patterns provide another characteristic of the compositions that contributes to their musical style. An excerpt from one of the input pieces can be seen in Figure 7.3 (a).

Output

One of the most notable differences between the output compositions and the inputs is the lack of the above-mentioned repeating patterns. Though the correlational evaluation is able to provide a measure of similar structure, various sequences of notes may have the same correlation coefficient (with respect to the input pieces) yet not sound similar to human listeners. There is some repetition present in the generated compositions; because much of it is due to the increased likelihood of the transitions that were repeated in the inputs, the repetitions found are typically short. The placement of rests also introduces a discrepancy between the styles of the input and generated compositions. In human compositions, rests break up phrases that are cohesive units of melody. The limited scope of the transition tables in the initial generation, and of the genetic algorithm in evaluating these units, causes rests to occur in the middle of phrases. In the compositions initially generated using the second-order table, unexpected note choices are sometimes made following a rest. The second-order table stores only that a rest was the previous note and is unaware of the note preceding the rest, allowing large jumps that may not have occurred in the original pieces.
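The correlational evaluation referred to above is defined earlier in the thesis and is not reproduced here. As a rough illustration of the kind of measure involved, and only under the assumption that pitch sequences are compared with a Pearson correlation coefficient, a sketch might look like the following (names hypothetical).

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def similarity_to_inputs(candidate_pitches, input_pieces):
    """Average correlation of a candidate's pitch contour against each input,
    truncated to the shorter length (a rough stand-in for the thesis's measure)."""
    scores = []
    for piece in input_pieces:
        n = min(len(candidate_pitches), len(piece))
        if n > 1:
            scores.append(pearson(candidate_pitches[:n], piece[:n]))
    return sum(scores) / len(scores) if scores else 0.0

# Two rather different note sequences can score similarly against the same input.
inputs = [[60, 62, 64, 65, 67, 69, 71, 72]]   # MIDI pitches of an ascending C major scale
print(similarity_to_inputs([60, 62, 64, 66, 68, 70, 72, 74], inputs))
print(similarity_to_inputs([60, 64, 62, 67, 65, 71, 69, 72], inputs))
```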

Figure 7.3: Folk Music. (a) Excerpt from Input, (b) Excerpt from Generated

In the output generated using the folk songs, there is again the lack of a standard ending. The notes at the end of a generated piece are rarely those that would be found at the end of a composition in the key of the input pieces. While generation from both ends assisted in creating more appropriate endings, there were still instances where the resultant composition sounded unfinished. These cases were compositions whose initial note was not indicative of the key in which the piece was written. In such cases, AutoComposer can still be used to generate an entire song, and a human can then make a few small changes (e.g., moving a rest or adjusting the final few notes) that help substantially in placing the new piece within the stylistic genre of the inputs; however, this also reduces the automated nature of the program.

7.3 Bach Preludes

Input

The third set of input compositions contains several short preludes by Johann Sebastian Bach [22]. These compositions are substantially longer and more complicated than those found in the previous two input sets. Within each piece there is variation in the amplitude of the notes. There is also a greater variety of note durations, since the inputs have varying tempos. As with the folk pieces, all the preludes were written in the same key. Due to the more complex nature of these compositions, the overall structure of the piece is more important. In addition to the note progressions, the changing dynamics and durations create a multifaceted musical piece. Though there are fewer repeating patterns than in the folk music, there are relations between the various phrases that contribute to the cohesiveness of the composition.

Figure 7.4: Bach Preludes. (a) Excerpt from Input, (b) Excerpt from Generated

One challenge associated with the compositions chosen is the amount of similarity as evaluated by AutoComposer. Though all the pieces are short preludes composed by Bach, there is much variance in certain aspects such as dynamics and note density. This causes the resulting color codes to be less similar, increasing the difficulty of properly evaluating how alike the newly generated pieces are to the original inputs. In addition to the limitations of the transition tables in the placement of rests (as mentioned in the discussion of the folk melody output), the simplification of the input set also added confounds to the music analysis. Because only the primary melody was extracted for use in the analysis, certain anomalies appear in the music that would not be heard in the original pieces. One of the primary instances is the placement of rests: because a harmonic line may be playing while the melodic line is at a rest, music that was continuous may have gaps of silence after the melodic extraction.
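The color codes mentioned above are computed as described in Chapter 5. Purely as a hypothetical illustration of the idea (the thesis's actual red, green, and blue calculations may differ), the sketch below maps three summary features of a piece to an RGB triple so that stylistically similar pieces receive similar colors.

```python
def clamp(x):
    """Keep a channel value inside the 0-255 range."""
    return max(0, min(255, int(round(x))))

def color_code(tempo_bpm, mean_amplitude, notes_per_beat):
    """Hypothetical RGB encoding of a composition (not the thesis's formulas):
      red   ~ tempo, scaled so 40-208 BPM covers 0-255
      green ~ average dynamic level, assumed already normalized to 0-1
      blue  ~ note density, capped at 4 notes per beat
    """
    red = clamp((tempo_bpm - 40) / (208 - 40) * 255)
    green = clamp(mean_amplitude * 255)
    blue = clamp(notes_per_beat / 4.0 * 255)
    return (red, green, blue)

# Two pieces with similar tempo, dynamics, and density map to nearby colors.
print(color_code(tempo_bpm=120, mean_amplitude=0.60, notes_per_beat=2.0))
print(color_code(tempo_bpm=126, mean_amplitude=0.55, notes_per_beat=2.2))
```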

Output

As in the folk output, the resultant output compositions lack a clear phrasing structure. While each composition contained phrases that fit well within the style of the input preludes, there were often surrounding sections that did not. The placement of rests that break up the piece also occurred, though similar rests exist in some of the input pieces after the melody extraction. The previously mentioned problem with the endings of the compositions resulted in many compositions sounding unfinished. While certain stylistic qualities were retained in the generated pieces, the lack of an overall structure gave the compositions a wandering feel that was not compatible with the style of the input preludes.

7.4 SOM Output

The Self-Organizing Map was integral to the evaluation of musical style. It provided a visualization that indicated the progress of the genetic algorithm. The SOM also gave a measure of stylistic similarity that was different from the standard measure used in the genetic algorithm. This variety added to the ability of the program to recognize and create elements of musical style. Figure 7.5 (a) shows an initialized SOM before any training has taken place; each pixel has been randomly assigned a color. Figure 7.5 (b) shows the SOM after being trained for ten iterations by an evolving population generating a folk composition. The map has become more blue and green, and clusters of these colors are beginning to form. The same map after 50 training iterations is seen in Figure 7.5 (c); there is an obvious grouping of color, and the few clusters indicate a relatively homogeneous population. Figure 7.5 (d) shows a Self-Organizing Map that was trained for 50 iterations with an evolving population of Bach preludes. Multiple clusters can be seen. The increase in color variance with

Figure 7.5: SOM Images. (a) Initialized Map, (b) Trained for 10 Iterations of Folk Music, (c) Trained for 50 Iterations of Folk Music, (d) Trained for 100 Iterations of Bach Preludes
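For readers unfamiliar with the mechanics, the sketch below shows standard SOM training on three-component (RGB-style) vectors: each sample pulls its best matching unit and that unit's neighbors toward itself, with the learning rate and neighborhood radius decaying over time. It is a generic illustration with arbitrary parameters, not the implementation used by AutoComposer.

```python
import math
import random

def train_som(samples, width=20, height=20, iterations=50, lr0=0.5, radius0=10.0, seed=0):
    """Generic SOM training on 3-component vectors (e.g., color codes in 0-1).
    Grid size, learning rate, and radius are arbitrary illustrative choices."""
    rng = random.Random(seed)
    grid = [[[rng.random() for _ in range(3)] for _ in range(width)] for _ in range(height)]
    for t in range(iterations):
        lr = lr0 * (1 - t / iterations)                 # learning rate decays over time
        radius = max(1.0, radius0 * (1 - t / iterations))
        for sample in samples:
            # Find the best matching unit (the node closest to the sample).
            by, bx = min(((y, x) for y in range(height) for x in range(width)),
                         key=lambda p: sum((grid[p[0]][p[1]][k] - sample[k]) ** 2
                                           for k in range(3)))
            # Pull nodes within the neighborhood toward the sample.
            for y in range(height):
                for x in range(width):
                    d2 = (y - by) ** 2 + (x - bx) ** 2
                    if d2 <= radius ** 2:
                        influence = math.exp(-d2 / (2 * radius ** 2))
                        node = grid[y][x]
                        for k in range(3):
                            node[k] += lr * influence * (sample[k] - node[k])
    return grid

# Train on a handful of color-coded "compositions" (channel values in 0-1).
som = train_som([(0.1, 0.8, 0.7), (0.15, 0.75, 0.65), (0.2, 0.7, 0.9)], iterations=10)
```

After training, the distance from a composition's color code to its best matching unit could provide the kind of similarity measure that, as described above, feeds back into the fitness evaluation.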
