U-AMP: User Input Based Algorithmic Music Platform


Dept. of CIS - Senior Design

David Cerny (dcerny@seas.upenn.edu), Univ. of Pennsylvania, Philadelphia, PA
Jiten Suthar (jiten@seas.upenn.edu), Univ. of Pennsylvania, Philadelphia, PA
Israel Geselowitz (gisrael@seas.upenn.edu), Univ. of Pennsylvania, Philadelphia, PA

Advisor: Chris Callison-Burch (ccb@cis.upenn.edu)

ABSTRACT

Can computers learn to compose music, or is that a task that requires human creativity? U-AMP is a flexible platform for exploring different techniques in algorithmic composition as applied to a user-inputted MIDI line. In addition, U-AMP includes several machine learning-based algorithms that are able to create complex musical compositions from simple melodies. A support vector machine classifier was trained on Bach chorales to generate a three-part accompaniment to the input melody, creating a novel four-part harmony. Our results show that the performance of the machine learning algorithms is far superior to attempts at generating four-part harmonies using music theory alone. Beatlize is an algorithm that moves beyond Bach by training an additional system on the music of The Beatles. The Beatlize algorithm uses additional techniques, such as chord repetition, to generate poppier four-part harmonies. When run on the same input, Beatlize and Bachify generate very different results.

1. INTRODUCTION

Algorithmic composition is generally defined as the use of a rule or procedure to put together a piece of music [12]. The field contains a multitude of different techniques for attempting to generate musical compositions algorithmically. In order to compose music using a computer, musical notes can be chosen at random, based on set rules, or probabilistically. However, techniques also exist for generating compositions based on an existing data set. One such technique is to use machine learning to generate a musical composition by learning from the material in a data set and creating a musical piece based on the compositions analyzed. Algorithmic music composition has been around since long before computers were invented, and with recent advances in artificial intelligence and machine learning (along with faster computers), the field has made great strides. However, getting a computer-generated song to sound human is incredibly difficult. Composers and programmers alike have built their own algorithms with varying degrees of success. No matter how many algorithms are written, advances in technology allow for newer algorithms with different techniques for composing music.

A key issue within the field of algorithmic composition is how best to impose a song structure on algorithmically generated melodies. Many compositions created using the techniques mentioned above lack the verse-chorus structure and general cohesiveness of human compositions. In addition, song structure is heavily reliant on genre, and so techniques for generating song structure for one genre of music do not necessarily function in other contexts. Another challenge faced by those who work with algorithmic composition is that there is a wide variety of techniques and methodologies that achieve very different goals, despite a shared broader goal of creating music computationally. If various techniques could be combined, one might be able to generate a musical composition based on user input, as in machine improvisation, but also to use music theoretic rules to ensure that the output is melodic.
In order to respond to these challenges, U-AMP provides a flexible framework for integrating various techniques in algorithmic composition while adhering to the core idea of generation based on user input. U-AMP's ease of use and customizability makes it a useful tool for composers, both experienced and not, to generate complex multi-part harmonies based on a single MIDI input line. As long as the user inputs a melodic line, generating compositions with a strong chordal structure is possible. To prevent dissonance from occurring when generating multiple melodic lines based on a single input line, machine learning was used. The music of Johann Sebastian Bach was used as a training set for a support vector machine classifier. The Bachify algorithm then generates melodic lines chord by chord in order to apply Bach's use of chord progressions to the user input. The results are a good degree of accuracy when testing the SVM on Bach music not included in the training set and, more importantly, quality musical compositions.

In order to experiment with different genres, an additional training set of Beatles music was used. Instead of simply running the Bachify algorithm with the SVM trained on the music of The Beatles, a new algorithm was written that takes into account the poppier nature of the new training data. The Beatlize algorithm generates rhythm chords and a bass line, and also repeats chords to mimic the style of The Beatles.

The flexibility of U-AMP allows for a variety of future work to be done. Possible extensions to U-AMP include turning the wrapper classes into a full Python MIDI library, training for macrostructure as well as chordal structure, and adding new genres of music. There is even the potential for machine improvisation.

2. RELATED WORK

2.1 History of Algorithmic Composition

Algorithmic composition has existed in various forms since well before computers were invented. Mozart, at one point, composed various musical excerpts that could be combined to form a waltz; the ordering of the pieces was determined by rolling dice. Other composers, such as Joseph Haydn and Philipp Kirnberger, came up with dice-based composition techniques [3]. Combinatoric and permutation-based algorithms have also existed since before the 12th century, and composers have used both randomness and rules to compose music for over two thousand years [12, 3]. Another, later pioneer of algorithmic composition was Joseph Schillinger, who in the early 1900s developed mathematical methods for musical composition. The first instance of computer-generated music was at the University of Illinois in the 1950s, where the ILLIAC (Illinois Automatic Computer) created a musical suite scored for a string quartet [12]. Their work used a three-step process for composition: starting with material from a random generator, the output was fed into transformation algorithms, and then the result was tested against a rule-based system [3]. It was around this time that composers began experimenting heavily with statistical, probability-based methods to combat the often incomprehensible noise that came from some earlier attempts at algorithmic composition [3]. Advances in computer science fields such as AI, machine learning, and genetic algorithms began making their way into algorithmic composition as well. With improvements in computer speeds, real-time algorithmic music generation for live performances became a possibility, and many musicians have experimented with this [3].

2.2 Categories and Technology of Music Composition Algorithms

Numerous techniques and algorithms have been used for music composition, from simple die rolls to complicated algorithms using the latest in machine-learning techniques. There are several main categories of algorithms used for music: aleatoric (chance-based) methods, determinacy (rule-based) methods, and stochastic (probability-based) methods [12]. Often, these methods are combined with each other, as we can see with the ILLIAC composition, which used both aleatoric and deterministic methods to compose its pieces. Probability-based methods are used most notably for note selection, where the relative probability of each note occurring next in the piece is based on statistical distributions [7]. Strict stochastic methods often result in continuously changing compositions, where songs evolve from their start into something completely different. Markov chains are often used in the implementation of stochastic processes [7]. State machines are sometimes used for rule-based songwriting; cellular automata are a popular way of implementing state machines. Grammars, or sets of rules that expand high-level symbols into more detailed low-level descriptions, are frequently used as an attempt to deal with the issue of macrostructure and present hierarchical structures for a piece [7]. Macrostructure is extremely common in human-generated music (for example, verse-chorus-verse-chorus-bridge-chorus is a very common high-level macrostructure), but macrostructure is often lost in stochastic pieces that transform as they go, due to the algorithm's inherent randomness. Genetic algorithms in music work as they do elsewhere: an initial population is selected, evaluated based on a fitness function, and then changed using crossover or mutation [7]. Fitness can be evaluated using deterministic methods or user-inputted, subjective evaluation.

The data used as a base input to any of these algorithms also varies between techniques. A common technique is to start with some sort of random generator, and then apply various algorithms to the output of the generator. Other programs use Markov chains on user-inputted notes or play along with a musician in real time, which adds an element of human creativity into the process of generating music [6]. Other research has involved using fractals, DNA, or even images to attempt to inject meaning and creativity into an algorithm [7, 13]. These are just a few of the most common techniques for algorithmic composition; there is no technique that is universally considered to be better than the others, as each type of algorithm attempts to solve different problems and produces extremely different results. In more recent years, the field of algorithmic music composition has been advanced more by the development of these techniques than by single significant, influential works in the field. Therefore, we have looked at state-of-the-art techniques in this field instead of individual related works.
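By way of illustration, a minimal first-order Markov note selector might look like the following. This is a sketch, not part of U-AMP; all function names are hypothetical.

```python
import random
from collections import Counter, defaultdict

def train_markov(melody):
    """Count pitch-to-pitch transitions in a training melody (list of MIDI pitches)."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(melody, melody[1:]):
        transitions[prev][nxt] += 1
    return transitions

def generate(transitions, start, length):
    """Random-walk the transition table to produce a new melodic line."""
    line = [start]
    for _ in range(length - 1):
        counts = transitions.get(line[-1])
        if not counts:  # dead end: restart from the initial pitch
            line.append(start)
            continue
        pitches, weights = zip(*counts.items())
        line.append(random.choices(pitches, weights=weights)[0])
    return line

# Example: train on a C-major fragment, then generate 8 notes from middle C (60).
table = train_markov([60, 62, 64, 62, 60, 64, 65, 64, 62, 60])
print(generate(table, 60, 8))
```

Such a generator captures local note-to-note statistics but, as noted above, has no notion of macrostructure.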
2.3 Interesting Problems and Difficulties in Algorithmic Composition

Developing a macrostructure, a major component of music written by human composers, provides interesting challenges to the field of algorithmic composition. Stochastic processes often tend to wander, but excessive use of grammars often imposes too much of a hierarchical structure, which eliminates some human elements such as improvisation [8]. Balancing song structure with more free-form music remains an extremely difficult task in computer music. Even advanced learning systems that generate probability-based models from more structured input (for example, providing songs in a certain genre as training data) have problems with higher-level structures such as phrasing [8].

Fitness functions for genetic algorithms also present numerous difficulties. Fitness functions can be computerized and objective, or human-based; however, there are downsides and difficulties with both. Objective fitness functions require the system to have a significant amount of knowledge about what makes a song sound good, while using human listening as a fitness function runs into a fitness bottleneck, where people are simply unable to process the information as quickly as may be necessary [8]. In addition, it is nearly impossible to understand or replicate the user's reasoning when determining the fitness of a piece of music.

Finally, the most difficult task by far in the field is the concept of creativity [8]. Musicians will purposely deviate from the norm in a song, perhaps playing certain sections softer or louder, changing key or rhythm, or changing chord progressions. Though computers can understand melodies and come up with new ones, algorithmic composition has thus far not been able to mimic the creative process of the human brain. Attempts at recreating the results of creativity without modeling the process often fall flat.

3. SYSTEM MODEL

The aim of the U-AMP platform is to allow a user to write music generation algorithms, quickly apply them to any type of MIDI input, and listen to the results.

The platform consists of a graphical user interface (GUI) that allows a user to easily perform the actions listed above, a set of Music Utility functions that abstract away the need to interface with the MIDI file format, and a set of music generation algorithms. Figure 1 outlines the overall system design for the process of generating a composition based on MIDI input and concluding with musical output. Each of the major components of the process is described and discussed in greater detail below.

Figure 1: System Model Block Diagram

Input: A MIDI file imported from anywhere on the user's computer. MIDI data is the primary mode of representation of music in an electronic domain. Music is encoded in the MIDI file format as a series of event messages consisting of note information such as pitch, volume, and timing (rhythm). More complexity is possible with the MIDI specification, but the additional complexity is not utilized in U-AMP.

Music Utility Functions: U-AMP provides a variety of functions that allow generation algorithms to easily interact with MIDI input and output while abstracting away the details of the MIDI file format. Instead, algorithms interact with more intuitive classes such as Note and Song, and can read or write a MIDI file with a single function call, which allows a user to focus on the musical aspects of writing an algorithm without worrying about the technical details. Certain common music-related functions, such as key detection, are also provided.

Generation Algorithms: Generation algorithms are user-created Python scripts that transform a MIDI input file into a MIDI output file. The generation algorithms represent the heart of the process for writing music and are responsible for the generation of the musical composition. As these files are simply Python scripts, generation algorithms can be as simple as transposing the notes of a musical piece or as complicated as creating a lengthy, original song based only on a line of MIDI input. More specific generation algorithms will be discussed in later sections.

Generated Output and Playback: MIDI files generated by an algorithm will appear in the GUI, and can be played and listened to directly from the GUI.

Training Data and SVM Classifier: Certain generation algorithms make use of machine learning techniques in order to predict chord patterns. In order to make use of a predictor such as a support vector machine (SVM) classifier, song data on a particular composer (such as Bach or The Beatles) must be converted into a usable format to allow the SVM classifier to make predictions.

The primary goal of U-AMP, however, is to implement complicated generation algorithms that produce great-sounding music; the components of the platform make this possible.

4. SYSTEM IMPLEMENTATION

4.1 Technology

Both the U-AMP platform and all generation algorithms are written in the Python programming language. Python was selected as the language of choice because it makes developing applications and writing scripts significantly faster and easier than using a language such as C++ or Java. One of the goals of the platform is to make writing music generation algorithms as quick and easy as possible, which makes Python an excellent choice of language due to its simplicity in development. A variety of Python libraries have been used within the platform. Pygame's MIDI library is used for both MIDI input from a controller and sound playback from within the GUI [11].
Pygame's library is simple and easy to use, and currently seems to be the best option for MIDI input and playback in Python. Parsing and writing MIDI files is done using the python-midi library [5]. Python-midi is an older library that has not been updated for some years, and although many of its methods are difficult to use, the base implementation is solid and works better than other Python MIDI libraries. Due to these shortcomings in python-midi, it was necessary to write additional functions built on top of python-midi's functions that make interacting with MIDI files easy. For the GUI, wxPython was used [2]. wxPython provides a very standard GUI library that is less customizable than something like Tkinter, but makes development quicker and provides a more standardized look and feel to projects written with it. wxPython allows for a professional-looking GUI that is simpler and easier to implement than other GUI libraries.

4.2 Implementation

GUI: As stated above, wxPython was used as the GUI library for U-AMP [2]. The main component of the GUI is the ability to select an input MIDI file and a Python generation script and click the Generate button to run the algorithm. The selection component of the GUI was done with wxPython's ListBox; the contents of the list boxes are the contents of separate folders on the user's computer, accessed from the OS library. Also, multiple files can be selected at once in the input and output columns. After selecting an input (or a group of inputs) and a script, when the Generate button is clicked, the algorithm is run as a Python script on each file selected. The output MIDI file or files are written to another folder on the user's computer, and can be copied, moved, or opened in other music programs like any MIDI file. A timer loop updates the directory listings in the list boxes. In addition, there is a delete button under the list of inputs and under the list of outputs; selecting some number of files and pressing delete removes them from the user's computer. The Variance button is used exclusively for the machine learning algorithms. Pressing it creates a pop-up box with a slider for selecting the variance from the best choice. The variance ranges from 0 (the default) to 100. How this functions in terms of the machine learning algorithms is explained below in the sections on Bachify and Beatlize.

Figure 2: GUI Screenshot

MIDI Handling Wrapper Classes: The two main wrapper classes that were designed are Note and Song. MIDI as a file format is difficult to work with, because reading and writing music through NoteOn and NoteOff events is very unintuitive. The Note class therefore contains a pitch, a velocity (or volume), a start time, and a duration, so that Note objects model how notes are represented in sheet music, with the addition of velocity, which is important for music playback. The Song class is a wrapper class that allows for more intuitive manipulation of MIDI files. A Song contains a list of notes, as well as some additional MIDI data such as resolution (related to BPM) and key. Song also contains a method for sorting notes by their start time. The Song class contains methods for reading in a MIDI file and converting it to the Song format, as well as writing a MIDI file with the notes of the Song as NoteOn events. When reading in a MIDI file, the NoteOn events are matched to NoteOff events in order to create Notes of the correct duration. When writing a MIDI file, NoteOn and NoteOff events are generated based on a Note's start time and duration. The main advantage of the Song class is that adding new notes, and parsing the existing notes to generate another line based on the input, is significantly easier than trying to parse NoteEvents.

Key Detection: Simple detection of the musical key of a MIDI file was implemented by looping through the file, counting the number of occurrences of each note, and, for each of the 12 possible major keys, determining what percentage of the total notes in the MIDI file belong in that key.
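A simplified sketch of what these utilities might look like follows. This is a reconstruction from the description above, not U-AMP's actual code; the field names and the detect_key helper are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    pitch: int      # MIDI pitch number (60 = middle C)
    velocity: int   # volume
    start: int      # start time in ticks
    duration: int   # length in ticks

@dataclass
class Song:
    notes: list = field(default_factory=list)
    resolution: int = 480  # ticks per beat (related to BPM)
    key: int = 0           # root of the detected major key, 0 = C

    def sort_notes(self):
        self.notes.sort(key=lambda n: n.start)

MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}  # half-step offsets within a major key

def detect_key(song):
    """Pick the major key containing the highest percentage of the song's notes."""
    def coverage(root):
        in_key = sum(1 for n in song.notes if (n.pitch - root) % 12 in MAJOR_SCALE)
        return in_key / len(song.notes)
    return max(range(12), key=coverage)
```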
Fill Embellishing Notes: Embellishing notes are tones that are not part of the chord structure, but fit within the harmonic structure anyway. Classical music, particularly the compositions of Bach, relies on embellishing notes for improved musical quality. For algorithms designed to output classical music, the fill-embellishing-notes method plays an important role in making sure that the compositions generated evoke classical pieces.

Generation Algorithms

Transpose: Transpose is the simplest generation algorithm we've written. In musical terms, transposing a song is raising or lowering the pitch of every single note in a song by a constant number of steps or half-steps. For example, transposing a song up one step would make every C note a D, every D an E, etc. The transpose algorithm opens the MIDI file and calls methods that transform the file into a data structure that can be looped through. For every note in the data structure, a constant is added to the pitch and the resulting note is written to the output data structure, which is then written to a MIDI file with a single function call.

Harmonize: Using the key detection utility described above, the Harmonize algorithm loops through each note in the input file, and writes both that note and another, slightly higher note into the output file. As the key of the input is known, the added note is chosen to be either 3 or 4 half-steps above the input note, whichever of the two is in the key of the piece.

Melody Generator: A simple melody generator was written in order to test the algorithms that generate counterpoint or multi-part harmony. Melody Generator generates a line of notes using random choice constrained by a simple melodic structure. The output is also kept within a certain range of notes to prevent the melody from containing notes that are too high or too low. The output is melodious, but lacks any cohesive structure beyond that.
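Building on the hypothetical Note/Song sketch above, Transpose and Harmonize might be written as follows; read_midi and write_midi stand in for U-AMP's single-call MIDI functions, and all names are assumptions.

```python
def transpose(song, half_steps):
    """Shift every note in the song by a constant number of half-steps."""
    out = Song(resolution=song.resolution)
    for n in song.notes:
        out.notes.append(Note(n.pitch + half_steps, n.velocity, n.start, n.duration))
    return out

def harmonize(song):
    """Add a third above each note, major or minor depending on the detected key."""
    root = detect_key(song)
    out = Song(resolution=song.resolution)
    for n in song.notes:
        out.notes.append(n)
        # Choose the interval (3 or 4 half-steps) whose result stays in the key.
        third = 4 if (n.pitch + 4 - root) % 12 in MAJOR_SCALE else 3
        out.notes.append(Note(n.pitch + third, n.velocity, n.start, n.duration))
    return out

# Usage, assuming single-call MIDI helpers as described above:
# song = read_midi("input.mid")
# write_midi(harmonize(transpose(song, 2)), "output.mid")
```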

Counterpoint: Having now established key and basic harmony, the next progression in algorithms is the counterpoint algorithm, our first truly musical algorithm. It implements species counterpoint, beginning with first-species counterpoint, whereby each note in the input is matched with exactly one note of output. The manner in which each output note is selected is carefully bound by two main rules: vertical consonance and proximity.

Vertical consonance is the idea that the next note selected for output should form a consonant interval with the original input note at that particular point in time. Since the output will be layered over the input, this type of analysis is essential to getting a result that sounds pleasant. If vertical consonance is not taken into consideration, a generative algorithm may select notes that clash with the original user input and would thus sound very discordant when overlaid with that input. The consonant intervals considered are: perfect octave above/below (12 half-steps up or down), major 6th above (9 half-steps up), minor 6th below (8 half-steps down), perfect 5th above (7 half-steps up), major 3rd above (4 half-steps up), minor 3rd below (3 half-steps down), and unison (0 half-steps, i.e. the same as the input pitch). The counterpoint algorithm introduces aleatoric methodology for the first time by randomly selecting one of the above consonant interval offsets from the next input note as an initial starting point in generating the next pitch. After this step, the proximity rule is taken into consideration, as described in the following paragraph.

The proximity rule is the other bounding rule in contrapuntal line generation. In order for a generated output line to be truly contrapuntal, it must work as a stand-alone melody. This is not necessarily achieved (and in fact is very unlikely to be achieved) using just vertical consonance, which, as the name implies, only looks vertically at the next input note and offsets the corresponding output note by the interval. More specifically, it is very likely for the generated output line to jump around note-to-note and sound disconnected. Since this algorithm is attempting to capture a Western classical sound, this sort of output is not acceptable. The proximity rule adds a horizontal, backward-looking element to mitigate this issue of output notes jumping around. Essentially, it checks whether the proposed next pitch is within a certain threshold of the previous pitch. If the proposed pitch is too far, it is discarded and vertical consonance is used again to randomly select another proposed pitch. This process is repeated until a pitch that satisfies both rules is found. If no such pitch exists (which happens quite rarely if the original input is a well-formed melody), then vertical consonance alone is used to randomly select a pitch, and proximity is ignored.

As a final step, a function was written to fill embellishing notes into the final output line to create rhythmic interest and solidify the melody. Embellishing notes include passing notes, which are quick filler notes placed in between two notes that skipped one pitch, and neighboring notes, which are quick tones added when two pitches repeat. By incorporating the two above rules and embellishing tones, the counterpoint algorithm is able to generate independent melodic lines that not only sound good on their own, but also sound great when juxtaposed over the original input melody.
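The selection loop just described can be sketched as follows. This is a minimal illustration, not U-AMP's implementation; the threshold and retry count are assumed values.

```python
import random

# Consonant interval offsets in half-steps, as listed above.
CONSONANT_OFFSETS = [12, -12, 9, -8, 7, 4, -3, 0]

def next_counterpoint_pitch(input_pitch, prev_output_pitch, threshold=5, tries=20):
    """Pick an output pitch that is consonant with the input note (vertical
    consonance) and close to the previous output note (proximity)."""
    for _ in range(tries):
        candidate = input_pitch + random.choice(CONSONANT_OFFSETS)
        if abs(candidate - prev_output_pitch) <= threshold:
            return candidate
    # No candidate satisfied proximity: fall back to vertical consonance alone.
    return input_pitch + random.choice(CONSONANT_OFFSETS)
```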
Bachify: Having successfully constructed a working counterpoint algorithm, the next order of complexity involved generating three additional melodic lines (whereas counterpoint only generated one additional line). At first, counterpoint was run three times on the original input to generate three contrapuntal lines, CP1, CP2, and CP3. However, this did not yield musically favorable results: even though each of the generated lines worked with the original input melody independently, they did not work with each other. In other words, CP1 could clash with CP2, CP2 with CP3, and so on. Any time more than two pitches are sounded together, a chord results. The music theory surrounding chords and their change over time (a chord progression) can be complex and even undefined for some genres of music. It was clear at this stage that in order to advance the research to the next level, some sort of abstraction or incorporation of chordal theory was needed. To do this, machine learning was used. The idea was to take existing musical works by famous composers and musicians, and then use a support vector machine classifier trained on this data to predict a sequence of chords. In the case of the Bachify algorithm, approximately 400 Bach chorales were obtained from the website jsbchorales.net [4] for use as the main dataset. A significant challenge was to convert these MIDI files into a data format usable by the support vector machine classifier. This process involved two main phases: 1) extracting the chordal data from the MIDI files, and 2) featurizing the chords optimally for the greatest classifier accuracy. All of this was done with the assistance of scikit-learn, a machine learning library for Python [9]. Both of these processes are described in greater detail below.

Figure 3: Representation of Chordal Data Extraction

Chordal Data Extraction: For each of the Bach chorale MIDI files, a Python utility was created that analyzed every single beat and fitted it with a best-match chord type and root. The utility constructed all four possible chord candidates (rooted at each of the four voices in the Bach chorale) and assigned each an internal score based on how closely it matched a chord in a database. The highest-scoring candidate was chosen to be the chord playing at that moment in time. The chord was then restated in a form independent of the piece's key signature. This format was "X CT", where X is a numerical value representing the number of semitones the root of the chord is from the key of the piece, and CT is the chord type (major, minor, dominant 7th, etc.). The process was repeated for every single beat over the 400 Bach chorales, resulting in over 15,000 data points for training and testing once the data was featurized.
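The per-beat chord fitting might be sketched as follows; the chord templates and scoring rule here are assumptions, since the paper's chord database and exact scoring are not specified.

```python
# Chord templates as half-step offsets from the root.
CHORD_TYPES = {"Maj": {0, 4, 7}, "Min": {0, 3, 7}, "Dom7": {0, 4, 7, 10}}

def label_beat(voice_pitches, key_root):
    """Fit the voices sounding on a beat with a best-match chord,
    restated key-independently as an 'X CT' string."""
    pcs = {p % 12 for p in voice_pitches}
    best, best_score = None, -1
    for root in pcs:  # try a chord rooted at each sounding voice
        for name, template in CHORD_TYPES.items():
            chord = {(root + off) % 12 for off in template}
            score = len(pcs & chord) - len(pcs - chord)  # reward matches, punish extras
            if score > best_score:
                best, best_score = ((root - key_root) % 12, name), score
    return "%d %s" % best

# Example: C-E-G-C in the key of C is labeled "0 Maj".
print(label_beat([60, 64, 67, 72], 0))
```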

Data Featurization and Moving Window: The second major aspect of the classifier was the featurization of the extracted data. In this phase, the data is organized into a format that is recognizable and trainable by the support vector machine classifier. A moving window was utilized, which meant the classifier would look back x chords and use that information to make a prediction on the most probable next chord (once trained). After much experimentation, the optimal window size for accuracy was two (see Figure 4). Furthermore, the optimal feature representation for the data was a single string of the form "X CT" as described above. In the end, there were over 15,000 data entries with two features per entry: the t-1 "X CT" string and the t-2 "X CT" string (where t-1 and t-2 are the previous beat's chord and the chord two beats ago, respectively).

Figure 4: Featurization

Having optimized and fitted the classifier with the featurized Bach chorale data set, the last part of the algorithm was to utilize the chord predictions to generate the melodic lines. This part of the algorithm also proved to be quite complex. Firstly, the classifier had to be slightly modified to return a ranked ordering of the top predicted next chords given the two previous chords. This was necessary because a major unique point of U-AMP is its ability to write music over top of a given user input. As a result, our Bachify algorithm has to possess the functionality to forgo choices in order to conform to the notes in the user input. That is, if the most probable choice of the classifier is a chord that doesn't contain the user input tone, then that chord cannot be used; in this case, the second-most probable choice is considered, and so on. Lastly, the algorithm utilized the same music theory conventions outlined in the Counterpoint section in actually generating the three melodic lines, GL1, GL2, and GL3. The idea of horizontal proximity was strongly adhered to as the tones of the chord returned by the classifier were assigned to each of the voices. Then, embellishing notes such as passing notes and neighboring notes were filled in to smooth out the melodies. This resulted in three independent melodic lines that worked in perfect harmony not only with the original input but also with each other, something our counterpoint implementation was simply unable to rival.

Four Part Beatles: The Four Part Beatles algorithm was trained on a dataset of Beatles chord progressions collected by Queen Mary University of London [10]. The dataset included the chords playing, the current key, and the beats, all timestamped to 1/1000 of a second in text format; for chords, start and end timestamps were included. In order to create a text file to train an SVM classifier, the current key was examined and each chord that played in that key was taken from a normal format (e.g. C Maj) and converted into a format that removes the note name and replaces it with the number of half-steps offset from the root note of the key (e.g. in the key of C, C Maj becomes 0 Maj). After training the classifier, it is able to provide chord predictions based on the Beatles training data. Other than this difference, Four Part Beatles is identical to Bachify.

Figure 5: Beatles Classifier
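A compact sketch of the moving-window training setup, using scikit-learn as the paper does [9]. The toy data and the DictVectorizer encoding are assumptions; the paper does not specify its exact feature encoding.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# Toy chord sequence in "X CT" form, repeated so every chord class has several
# examples; the real data set has over 15,000 beats.
chords = ["0 Maj", "5 Maj", "7 Maj", "0 Maj", "9 Min", "5 Maj",
          "7 Maj", "0 Maj", "9 Min", "5 Maj", "7 Maj", "0 Maj"]

# Moving window of two: features are the chords at t-1 and t-2, label is the chord at t.
X = [{"t-1": chords[i - 1], "t-2": chords[i - 2]} for i in range(2, len(chords))]
y = [chords[i] for i in range(2, len(chords))]

vec = DictVectorizer()
clf = SVC(probability=True)  # probability=True enables ranked n-best predictions
clf.fit(vec.fit_transform(X), y)

# Rank candidate next chords after "5 Maj", "7 Maj" by predicted probability.
probs = clf.predict_proba(vec.transform([{"t-1": "7 Maj", "t-2": "5 Maj"}]))[0]
ranked = sorted(zip(clf.classes_, probs), key=lambda p: -p[1])
print(ranked[:3])  # n-best list of (chord, probability)
```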
Beatlize: Beatlize is also trained on the Beatles data set, but differs from Four Part Beatles in that it is not identical to Bachify. Beyond the difference in training data, there are two major differences between Beatlize and Bachify.

The first difference is chord repetition. While Bach chorales often change chords every beat, Beatles songs are likely to contain repeated strumming of the same chord. Repetitions of chords, however, have to be removed from the data passed to the classifier, as otherwise the highest-probability chord would almost always be the same chord that was just played; no chord patterns would emerge, as the most likely result would be infinite repetition of the same chord. Since the SVM classifier cannot handle repetition for this reason, in order to mimic the repeated chords that are characteristic of many Beatles songs, another simple probabilistic model was created by looking at the distribution of the number of times a chord repeats before changing. In order to choose the next chord in Beatlize, the probability that the chord would change, given how many times it had already repeated, was calculated from this data. The currently playing chord changes with this probability (using random numbers), or if the input note at that time does not match the chord. This allows chords to repeat in a Beatles-like fashion, but removes dissonance by changing the chord if the melody note at the given time does not match.
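The repetition model might be sketched as follows; the change probabilities shown are placeholders, not the distribution measured from the Beatles data.

```python
import random

# P(chord changes | it has already repeated k times), estimated from the corpus.
CHANGE_PROB = {0: 0.2, 1: 0.3, 2: 0.5, 3: 0.8}

def should_change(repeats, melody_pitch, chord_pitch_classes):
    """Change the chord if the melody clashes with it, or probabilistically
    based on how long the chord has already been repeating."""
    if melody_pitch % 12 not in chord_pitch_classes:
        return True  # forced change: current melody note is not in the chord
    p = CHANGE_PROB.get(repeats, 0.95)  # long runs almost always end
    return random.random() < p
```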

The second difference between Bachify and Beatlize is instrumentation and voices. Bachify uses four distinct voices, each playing a melodic line. In order to replicate a pop sound for The Beatles, the Beatlize algorithm uses a melody line, a line of guitar-like chords, and a bass line. This results in a pop feel that is incredibly different from the sound of Bachify, even when the two algorithms are run on the same input.

Variance: Variance represents the degree to which the machine learning algorithms avoid each entry on the n-best list. Variance is an integer value between 0 and 100, with 0 as the default. When the variance is 0, the Bachify and Beatlize algorithms always choose the best choice when predicting the next chord. As the variance increases, there is an increased probability that the best choice will be passed over for the next best choice. For example, when the variance is 50, there is a 50% chance that the best-choice chord will be skipped, and then a 50% chance that the second-best chord will be skipped. Because our n-best list contains the top three choices, the third-best chord is always chosen if the first two are skipped. This introduces a probabilistic element that prevents the algorithm from always generating the same composition when run on a particular input.
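Variance-controlled selection from the three-entry n-best list can be sketched like this (a hypothetical helper, not U-AMP's code):

```python
import random

def pick_chord(n_best, variance, melody_pitch, chord_tones):
    """Walk the ranked chord list, skipping entries that clash with the melody
    note and skipping others with probability variance/100."""
    for chord in n_best[:-1]:
        if melody_pitch % 12 not in chord_tones[chord]:
            continue  # chord does not contain the input tone: cannot be used
        if random.random() < variance / 100:
            continue  # probabilistically pass over this choice
        return chord
    return n_best[-1]  # last entry is taken if everything above was skipped
```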
5. RESULTS

Over the course of the development of U-AMP, a variety of different algorithms were developed and run on several different types of input. Firstly, a few standard melodies, such as Mary Had a Little Lamb and Twinkle Twinkle Little Star, were used for testing the basic functionality of the algorithms. Secondly, the output of the Melody Generator was used to test the various algorithms on a wide range of different input. Finally, single melodic lines were extracted from Bach compositions left out of the training set in order to compare the original Bach compositions with the output of Bachify run on a melodic line from that composition. In addition, to analyze the differences between Bachify and Beatlize, Beatlize was run on the same input lines extracted from the Bach chorales in order to compare the output both with the original Bach pieces and with the output of Bachify.

Output of Music Theoretic Algorithms: The output of the first algorithms developed is simple and adheres very closely to the user-inputted line. The Transpose algorithm simply outputs a melodic line that, when played in tandem with the input, is extremely dissonant. This is because melodic structure does not imply harmonic structure, and thus the input and output are only connected in their melodic similarities. The Harmonize algorithm was the first to implement a basic harmonic structure; using key detection, its output at least lacks any dissonance. The Counterpoint algorithm was the first algorithm designed that introduces some degree of creativity. The counterpoint line preserves harmonic structure, but varies in rhythm such that the output sounds like a new musical composition that simply includes the original input. This differentiates Counterpoint from the previous music theoretic algorithms. The output of Counterpoint x3 is notable insofar as it acts as a contrast to the output of the machine learning algorithms. Because Counterpoint x3 only generates harmonic structure for each of the three output lines in relation to the input, the output is often extremely dissonant. Counterpoint x3, which fails to create a four-part harmonic structure, provides evidence that machine learning techniques are more effective for creating complex harmonies than music theoretic algorithms.

Figure 6: Output of Counterpoint x3 run on Twinkle Twinkle Little Star

Figure 7: Output of Bachify run on Twinkle Twinkle Little Star

Output of Bachify:

Figure 8: Accuracy Statistics

Throughout the development phase, 80% of the available featurized data points were used to train the classifier while the remaining 20% were used to test it, ensuring all tests were performed on data the classifier had never seen before. After setting parameters to achieve optimal correctness, Bachify was able to accurately predict the next chord 59.2% of the time. A match is considered correct if the actual next chord in the test file is one of the top three most probable chord choices returned by the classifier.
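Stated as code, this evaluation criterion is top-three accuracy (a sketch; predict_ranked stands for the ranked n-best interface described above):

```python
def top3_accuracy(windows, labels, predict_ranked):
    """Fraction of test beats whose true next chord appears among the
    classifier's three most probable predictions."""
    hits = sum(1 for x, y in zip(windows, labels) if y in predict_ranked(x)[:3])
    return hits / len(labels)
```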

Compared to a random-guess probability of 1.4% (75 possible classes or selections), this figure is certainly a substantial improvement. Of course, the real evidence is in the sound itself. Although it is a subjective evaluation metric, playing back a held-out Bach chorale and then running Bachify on one of its melodic lines as input results in very similar-sounding experiences (stylistically speaking).

Output of Beatlize:

Figure 9: Output of Beatlize run on Twinkle Twinkle Little Star

Figure 9 displays the result of running the Beatlize algorithm on Twinkle Twinkle Little Star. There are three parts to this composition: the melodic input line, the rhythm/chord line meant to emulate a guitar, and a bass line. As discussed in the Implementation section, repetition of chords is clearly visible in the final result, as many of the chords in the middle line are played repeatedly. The chord progression shown in Figure 9 is C G Am G C F C Bb C G F, and was predicted using a variance of 0, meaning these chords are the best options that matched the input note (if one existed) at the given time. The result when playing back the MIDI is a rhythmic pop sound reminiscent of a guitar and bass played underneath the input melody.

Impact of Variance: The SVM classifier returns an n-best list of chords for each prediction, ordered starting from the best choice. For a variance of 50, this means that for each choice in the list, there is a 50% chance that chord choice will be skipped. Overall, there is a 50% chance the best choice is chosen, a 25% chance the second-best choice is picked (50% if the first choice was already skipped), and a 12.5% chance the third-best choice is picked. Algorithms will typically also skip a chord if the input note at that time is not contained in the chord, so the actual distribution is altered by the input melody. The result of using a higher variance is that chord progressions tend to be more experimental and choose more interesting chords, sometimes at the expense of sounding perfect. With a variance of 0, the chords that tend to be chosen are often the most standard chords in a given key, and the prediction very rarely deviates from expected chords. When the variance is increased, more experimental and unexpected chord choices appear: 7th chords, augmented chords, and suspended chords (among others) begin to appear in the final compositions, as well as chords whose roots deviate from the typical notes in a given key. Increasing the variance often produces unexpected results, and would be especially useful for a composer looking for chord patterns that deviate from the norm.

6. ETHICS

While no ethical problems were faced during the production and testing of U-AMP, the potential for moral issues remains if use of our platform becomes more widespread. The key issue lies within the realm of copyright law and whether or not a composition produced by one of U-AMP's more complex algorithms is a form of plagiarism. Because the output contains the original input line as well as computer-generated melodic lines, a user could run Bachify or Beatlize on a copyrighted song and produce a composition that contains within it the copyrighted material. While chord progressions and chords themselves cannot be copyrighted, melodies are considered the main copyrightable aspect of a song; copyright cases in which an artist is accused of plagiarism often focus on melodic similarity [1]. In the case of U-AMP, this is problematic because the output of Beatlize and Bachify will always contain the entire original input melody. While the music of Bach is in the public domain, there is little stopping users from using copyrighted material as input and then attempting to sell the output as original music. If U-AMP were to become an easily accessible application, the simplicity of producing plagiarized material would be a difficult obstacle to overcome. The program would need to come with a copyright warning to make sure that users are aware of the ethical ramifications of trying to copy the music of other artists.

7. FUTURE WORK

Because U-AMP has a wide range of different features, there are a number of possible directions in which future work might be taken. For example, more features could be added to the GUI, additional algorithms could be created that are trained on different data sets, and the existing algorithms can always be tweaked for better performance. One direction to move in is the improvement of the MIDI wrapper classes. One thing that this project taught us is that there does not exist a very good MIDI library for Python, particularly one designed with music composition in mind. The wrapper classes could be improved and made more robust such that they develop into a full-fledged MIDI library for Python. Another possible direction involves finding data sets of composers who produced music in different genres and creating algorithms designed for generating music within those genres. One idea would be to find a jazz data set and create a jazzify algorithm that takes into account the idiosyncrasies of jazz as a genre. The advantage of U-AMP is that the underlying structure of the platform makes integrating algorithms easy, and so additional machine learning algorithms could be written as Python scripts and simply added into the algorithm folder. In addition, existing algorithms could be changed to incorporate song structure. Information about the macrostructure of every Beatles song was included in the data set that we used, and so it would be easiest to incorporate song structure into Beatlize. This would require an additional machine learning component, perhaps using a different technique such as Hidden Markov Models, to tag sections of the input as verses or choruses.

Finally, our machine learning algorithms could be adapted to be improvisational. Bachify and Beatlize are entirely backward-looking, and so could potentially be run on user input as it is generated, creating chord progressions in real time. Also, both of the machine learning algorithms are fast, since the training is done in advance and the data sets do not change over time. U-AMP is designed to be flexible and adaptive, so each of these improvements can be worked on and included without needing to fundamentally change the underlying structure of the program. What makes U-AMP exciting is how much more work can be done on improving the quality of the output, designing new algorithms, and generally working towards producing better computer-generated musical compositions.

8. CONCLUSIONS

At its core, U-AMP functions based on the idea that analyzing the creative output of talented composers is a far more powerful tool for composing four-part harmonies than simply following music theoretic rules. In addition, because the chords are generated based on user input, there is more melodic structure to the compositions than if the chords were generated independent of an input line. Finally, different genres of music require not just different training sets but different compositional techniques as well, in order to create output that mimics the style of the training set. In conclusion, U-AMP succeeds in being both an adaptive platform and an effective answer to the question of how machine learning algorithms applied to user input can succeed in generating high-quality musical compositions.

References

[1] Charles Cronin et al. Music Copyright Infringement Resource. USC Gould School of Law. URL: http://mcir.usc.edu/.
[2] Robin Dunn and Harri Pasanen. wxPython. Computer software. Version 3.0. URL: http://wxpython.org/.
[3] Karlheinz Essl. "Algorithmic Composition". In: The Cambridge Companion to Electronic Music. Cambridge University Press.
[4] Margaret Greentree. Bach Chorales. URL: http://www.jsbchorales.net/.
[5] Giles Hall. python-midi. Computer software.
[6] Bruce L. Jacob. "Algorithmic Composition as a Model of Creativity". In: Organised Sound 1 (1996).
[7] H. Jarvelainen. "Algorithmic Musical Composition". In: TiK Seminar on Content Creation (2000). URL: /2000/papers/hanna/alco.pdf.
[8] George Papadopoulos and Geraint Wiggins. "AI Methods for Algorithmic Composition: A Survey, a Critical View and Future Prospects". In: AISB Symposium on Musical Creativity. 1999.
[9] F. Pedregosa et al. "Scikit-learn: Machine Learning in Python". In: Journal of Machine Learning Research 12 (2011), pp. 2825-2830.
[10] Reference Annotations: The Beatles. Tech. rep. Queen Mary University of London. URL: http://isophonics.net/content/reference-annotations-beatles.
[11] Pete Shinners. pygame. Computer software. URL: http://www.pygame.org/news.html.
[12] Mary Simoni. Algorithmic Composition: A Gentle Introduction to Music Composition Using Common LISP and Common Music. Ann Arbor, Michigan: Scholarly Publishing Office, University of Michigan Library.
[13] Stefan Zhelyazkov et al. "Reading Music from Images". (2013). URL: CSE400_2012_2013/reports/13_report.pdf.


More information

The Human Features of Music.

The Human Features of Music. The Human Features of Music. Bachelor Thesis Artificial Intelligence, Social Studies, Radboud University Nijmegen Chris Kemper, s4359410 Supervisor: Makiko Sadakata Artificial Intelligence, Social Studies,

More information

Computers Composing Music: An Artistic Utilization of Hidden Markov Models for Music Composition

Computers Composing Music: An Artistic Utilization of Hidden Markov Models for Music Composition Computers Composing Music: An Artistic Utilization of Hidden Markov Models for Music Composition By Lee Frankel-Goldwater Department of Computer Science, University of Rochester Spring 2005 Abstract: Natural

More information

Computing, Artificial Intelligence, and Music. A History and Exploration of Current Research. Josh Everist CS 427 5/12/05

Computing, Artificial Intelligence, and Music. A History and Exploration of Current Research. Josh Everist CS 427 5/12/05 Computing, Artificial Intelligence, and Music A History and Exploration of Current Research Josh Everist CS 427 5/12/05 Introduction. As an art, music is older than mathematics. Humans learned to manipulate

More information

University of Huddersfield Repository

University of Huddersfield Repository University of Huddersfield Repository Millea, Timothy A. and Wakefield, Jonathan P. Automating the composition of popular music : the search for a hit. Original Citation Millea, Timothy A. and Wakefield,

More information

Mobile Edition. Rights Reserved. The author gives permission for it to be freely distributed and

Mobile Edition. Rights Reserved. The author gives permission for it to be freely distributed and Mobile Edition This quick start guide is intended to be springboard to get you started learning and playing songs quickly with chords. This PDF file is by Bright Idea Music All Rights Reserved. The author

More information

DJ Darwin a genetic approach to creating beats

DJ Darwin a genetic approach to creating beats Assaf Nir DJ Darwin a genetic approach to creating beats Final project report, course 67842 'Introduction to Artificial Intelligence' Abstract In this document we present two applications that incorporate

More information

A collection of classroom composing activities, based on ideas taken from the Friday Afternoons Song Collection David Ashworth

A collection of classroom composing activities, based on ideas taken from the Friday Afternoons Song Collection David Ashworth Friday Afternoons a Composer s guide A collection of classroom composing activities, based on ideas taken from the Friday Afternoons Song Collection David Ashworth Introduction In the latest round of Friday

More information

FINE ARTS Institutional (ILO), Program (PLO), and Course (SLO) Alignment

FINE ARTS Institutional (ILO), Program (PLO), and Course (SLO) Alignment FINE ARTS Institutional (ILO), Program (PLO), and Course (SLO) Program: Music Number of Courses: 52 Date Updated: 11.19.2014 Submitted by: V. Palacios, ext. 3535 ILOs 1. Critical Thinking Students apply

More information

Elements of Music - 2

Elements of Music - 2 Elements of Music - 2 A series of single tones that add up to a recognizable whole. - Steps small intervals - Leaps Larger intervals The specific order of steps and leaps, short notes and long notes, is

More information

MSc Arts Computing Project plan - Modelling creative use of rhythm DSLs

MSc Arts Computing Project plan - Modelling creative use of rhythm DSLs MSc Arts Computing Project plan - Modelling creative use of rhythm DSLs Alex McLean 3rd May 2006 Early draft - while supervisor Prof. Geraint Wiggins has contributed both ideas and guidance from the start

More information

TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC

TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC TOWARD AN INTELLIGENT EDITOR FOR JAZZ MUSIC G.TZANETAKIS, N.HU, AND R.B. DANNENBERG Computer Science Department, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213, USA E-mail: gtzan@cs.cmu.edu

More information

Automated Accompaniment

Automated Accompaniment Automated Tyler Seacrest University of Nebraska, Lincoln April 20, 2007 Artificial Intelligence Professor Surkan The problem as originally stated: The problem as originally stated: ˆ Proposed Input The

More information

AP Music Theory Curriculum

AP Music Theory Curriculum AP Music Theory Curriculum Course Overview: The AP Theory Class is a continuation of the Fundamentals of Music Theory course and will be offered on a bi-yearly basis. Student s interested in enrolling

More information

Figured Bass and Tonality Recognition Jerome Barthélemy Ircam 1 Place Igor Stravinsky Paris France

Figured Bass and Tonality Recognition Jerome Barthélemy Ircam 1 Place Igor Stravinsky Paris France Figured Bass and Tonality Recognition Jerome Barthélemy Ircam 1 Place Igor Stravinsky 75004 Paris France 33 01 44 78 48 43 jerome.barthelemy@ircam.fr Alain Bonardi Ircam 1 Place Igor Stravinsky 75004 Paris

More information

Melodic Minor Scale Jazz Studies: Introduction

Melodic Minor Scale Jazz Studies: Introduction Melodic Minor Scale Jazz Studies: Introduction The Concept As an improvising musician, I ve always been thrilled by one thing in particular: Discovering melodies spontaneously. I love to surprise myself

More information

Computer Coordination With Popular Music: A New Research Agenda 1

Computer Coordination With Popular Music: A New Research Agenda 1 Computer Coordination With Popular Music: A New Research Agenda 1 Roger B. Dannenberg roger.dannenberg@cs.cmu.edu http://www.cs.cmu.edu/~rbd School of Computer Science Carnegie Mellon University Pittsburgh,

More information

Analysis of local and global timing and pitch change in ordinary

Analysis of local and global timing and pitch change in ordinary Alma Mater Studiorum University of Bologna, August -6 6 Analysis of local and global timing and pitch change in ordinary melodies Roger Watt Dept. of Psychology, University of Stirling, Scotland r.j.watt@stirling.ac.uk

More information

Chapter 40: MIDI Tool

Chapter 40: MIDI Tool MIDI Tool 40-1 40: MIDI Tool MIDI Tool What it does This tool lets you edit the actual MIDI data that Finale stores with your music key velocities (how hard each note was struck), Start and Stop Times

More information

Music Genre Classification and Variance Comparison on Number of Genres

Music Genre Classification and Variance Comparison on Number of Genres Music Genre Classification and Variance Comparison on Number of Genres Miguel Francisco, miguelf@stanford.edu Dong Myung Kim, dmk8265@stanford.edu 1 Abstract In this project we apply machine learning techniques

More information

CHAPTER ONE TWO-PART COUNTERPOINT IN FIRST SPECIES (1:1)

CHAPTER ONE TWO-PART COUNTERPOINT IN FIRST SPECIES (1:1) HANDBOOK OF TONAL COUNTERPOINT G. HEUSSENSTAMM Page 1 CHAPTER ONE TWO-PART COUNTERPOINT IN FIRST SPECIES (1:1) What is counterpoint? Counterpoint is the art of combining melodies; each part has its own

More information

Hidden Markov Model based dance recognition

Hidden Markov Model based dance recognition Hidden Markov Model based dance recognition Dragutin Hrenek, Nenad Mikša, Robert Perica, Pavle Prentašić and Boris Trubić University of Zagreb, Faculty of Electrical Engineering and Computing Unska 3,

More information

Feature-Based Analysis of Haydn String Quartets

Feature-Based Analysis of Haydn String Quartets Feature-Based Analysis of Haydn String Quartets Lawson Wong 5/5/2 Introduction When listening to multi-movement works, amateur listeners have almost certainly asked the following situation : Am I still

More information

Study Guide. Solutions to Selected Exercises. Foundations of Music and Musicianship with CD-ROM. 2nd Edition. David Damschroder

Study Guide. Solutions to Selected Exercises. Foundations of Music and Musicianship with CD-ROM. 2nd Edition. David Damschroder Study Guide Solutions to Selected Exercises Foundations of Music and Musicianship with CD-ROM 2nd Edition by David Damschroder Solutions to Selected Exercises 1 CHAPTER 1 P1-4 Do exercises a-c. Remember

More information

Elements of Music David Scoggin OLLI Understanding Jazz Fall 2016

Elements of Music David Scoggin OLLI Understanding Jazz Fall 2016 Elements of Music David Scoggin OLLI Understanding Jazz Fall 2016 The two most fundamental dimensions of music are rhythm (time) and pitch. In fact, every staff of written music is essentially an X-Y coordinate

More information

MELONET I: Neural Nets for Inventing Baroque-Style Chorale Variations

MELONET I: Neural Nets for Inventing Baroque-Style Chorale Variations MELONET I: Neural Nets for Inventing Baroque-Style Chorale Variations Dominik Hornel dominik@ira.uka.de Institut fur Logik, Komplexitat und Deduktionssysteme Universitat Fridericiana Karlsruhe (TH) Am

More information

Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL

Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL Florian Thalmann thalmann@students.unibe.ch Markus Gaelli gaelli@iam.unibe.ch Institute of Computer Science and Applied Mathematics,

More information

Music Theory. Fine Arts Curriculum Framework. Revised 2008

Music Theory. Fine Arts Curriculum Framework. Revised 2008 Music Theory Fine Arts Curriculum Framework Revised 2008 Course Title: Music Theory Course/Unit Credit: 1 Course Number: Teacher Licensure: Grades: 9-12 Music Theory Music Theory is a two-semester course

More information

Rhythmic Dissonance: Introduction

Rhythmic Dissonance: Introduction The Concept Rhythmic Dissonance: Introduction One of the more difficult things for a singer to do is to maintain dissonance when singing. Because the ear is searching for consonance, singing a B natural

More information

Rethinking Reflexive Looper for structured pop music

Rethinking Reflexive Looper for structured pop music Rethinking Reflexive Looper for structured pop music Marco Marchini UPMC - LIP6 Paris, France marco.marchini@upmc.fr François Pachet Sony CSL Paris, France pachet@csl.sony.fr Benoît Carré Sony CSL Paris,

More information

An Approach to Classifying Four-Part Music

An Approach to Classifying Four-Part Music An Approach to Classifying Four-Part Music Gregory Doerfler, Robert Beck Department of Computing Sciences Villanova University, Villanova PA 19085 gdoerf01@villanova.edu Abstract - Four-Part Classifier

More information

J536 Composition. Composing to a set brief Own choice composition

J536 Composition. Composing to a set brief Own choice composition J536 Composition Composing to a set brief Own choice composition Composition starting point 1 AABA melody writing (to a template) Use the seven note Creative Task note patterns as a starting point teaches

More information

ANNOTATING MUSICAL SCORES IN ENP

ANNOTATING MUSICAL SCORES IN ENP ANNOTATING MUSICAL SCORES IN ENP Mika Kuuskankare Department of Doctoral Studies in Musical Performance and Research Sibelius Academy Finland mkuuskan@siba.fi Mikael Laurson Centre for Music and Technology

More information

Arts, Computers and Artificial Intelligence

Arts, Computers and Artificial Intelligence Arts, Computers and Artificial Intelligence Sol Neeman School of Technology Johnson and Wales University Providence, RI 02903 Abstract Science and art seem to belong to different cultures. Science and

More information

Music Key Stage 3 Success Criteria Year 7. Rhythms and rhythm Notation

Music Key Stage 3 Success Criteria Year 7. Rhythms and rhythm Notation Music Key Stage 3 Success Criteria Year 7 Rhythms and rhythm Notation Can identify crotchets, minims and semibreves Can label the length of crotchets, minims and semibreves Can add up the values of a series

More information

Harmonising Chorales by Probabilistic Inference

Harmonising Chorales by Probabilistic Inference Harmonising Chorales by Probabilistic Inference Moray Allan and Christopher K. I. Williams School of Informatics, University of Edinburgh Edinburgh EH1 2QL moray.allan@ed.ac.uk, c.k.i.williams@ed.ac.uk

More information

Notes on David Temperley s What s Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered By Carley Tanoue

Notes on David Temperley s What s Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered By Carley Tanoue Notes on David Temperley s What s Key for Key? The Krumhansl-Schmuckler Key-Finding Algorithm Reconsidered By Carley Tanoue I. Intro A. Key is an essential aspect of Western music. 1. Key provides the

More information

Algorithmic Composition in Contrasting Music Styles

Algorithmic Composition in Contrasting Music Styles Algorithmic Composition in Contrasting Music Styles Tristan McAuley, Philip Hingston School of Computer and Information Science, Edith Cowan University email: mcauley@vianet.net.au, p.hingston@ecu.edu.au

More information

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes hello Jay Biernat Third author University of Rochester University of Rochester Affiliation3 words jbiernat@ur.rochester.edu author3@ismir.edu

More information

Artificial Intelligence Approaches to Music Composition

Artificial Intelligence Approaches to Music Composition Artificial Intelligence Approaches to Music Composition Richard Fox and Adil Khan Department of Computer Science Northern Kentucky University, Highland Heights, KY 41099 Abstract Artificial Intelligence

More information

2 2. Melody description The MPEG-7 standard distinguishes three types of attributes related to melody: the fundamental frequency LLD associated to a t

2 2. Melody description The MPEG-7 standard distinguishes three types of attributes related to melody: the fundamental frequency LLD associated to a t MPEG-7 FOR CONTENT-BASED MUSIC PROCESSING Λ Emilia GÓMEZ, Fabien GOUYON, Perfecto HERRERA and Xavier AMATRIAIN Music Technology Group, Universitat Pompeu Fabra, Barcelona, SPAIN http://www.iua.upf.es/mtg

More information

Pitfalls and Windfalls in Corpus Studies of Pop/Rock Music

Pitfalls and Windfalls in Corpus Studies of Pop/Rock Music Introduction Hello, my talk today is about corpus studies of pop/rock music specifically, the benefits or windfalls of this type of work as well as some of the problems. I call these problems pitfalls

More information

A probabilistic approach to determining bass voice leading in melodic harmonisation

A probabilistic approach to determining bass voice leading in melodic harmonisation A probabilistic approach to determining bass voice leading in melodic harmonisation Dimos Makris a, Maximos Kaliakatsos-Papakostas b, and Emilios Cambouropoulos b a Department of Informatics, Ionian University,

More information

A Novel Approach to Automatic Music Composing: Using Genetic Algorithm

A Novel Approach to Automatic Music Composing: Using Genetic Algorithm A Novel Approach to Automatic Music Composing: Using Genetic Algorithm Damon Daylamani Zad *, Babak N. Araabi and Caru Lucas ** * Department of Information Systems and Computing, Brunel University ci05ddd@brunel.ac.uk

More information

2016 HSC Music 1 Aural Skills Marking Guidelines Written Examination

2016 HSC Music 1 Aural Skills Marking Guidelines Written Examination 2016 HSC Music 1 Aural Skills Marking Guidelines Written Examination Question 1 Describes the structure of the excerpt with reference to the use of sound sources 6 Demonstrates a developed aural understanding

More information

The 5 Step Visual Guide To Learn How To Play Piano & Keyboards With Chords

The 5 Step Visual Guide To Learn How To Play Piano & Keyboards With Chords The 5 Step Visual Guide To Learn How To Play Piano & Keyboards With Chords Learning to play the piano was once considered one of the most desirable social skills a person could have. Having a piano in

More information

Jazz Melody Generation from Recurrent Network Learning of Several Human Melodies

Jazz Melody Generation from Recurrent Network Learning of Several Human Melodies Jazz Melody Generation from Recurrent Network Learning of Several Human Melodies Judy Franklin Computer Science Department Smith College Northampton, MA 01063 Abstract Recurrent (neural) networks have

More information

Melody Sauce is an AU / VST / MIDI FX device that creates melodies as MIDI.

Melody Sauce is an AU / VST / MIDI FX device that creates melodies as MIDI. Melody Sauce is an AU / VST / MIDI FX device that creates melodies as MIDI. Designed as a co-creation tool for anyone making music in electronic pop, dance and EDM styles, Melody Sauce provides a quick

More information

A Model of Musical Motifs

A Model of Musical Motifs A Model of Musical Motifs Torsten Anders Abstract This paper presents a model of musical motifs for composition. It defines the relation between a motif s music representation, its distinctive features,

More information

Perceptual Evaluation of Automatically Extracted Musical Motives

Perceptual Evaluation of Automatically Extracted Musical Motives Perceptual Evaluation of Automatically Extracted Musical Motives Oriol Nieto 1, Morwaread M. Farbood 2 Dept. of Music and Performing Arts Professions, New York University, USA 1 oriol@nyu.edu, 2 mfarbood@nyu.edu

More information

Diamond Piano Student Guide

Diamond Piano Student Guide 1 Diamond Piano Student Guide Welcome! The first thing you need to know as a Diamond Piano student is that you can succeed in becoming a lifelong musician. You can learn to play the music that you love

More information

5. The JPS Solo Piano Arranging System

5. The JPS Solo Piano Arranging System 5. The JPS Solo Piano Arranging System a. Step 1 - Intro The combination of your LH and RH components is what is going to create the solo piano sound you ve been looking for. The great thing is that these

More information

Elements of Music. How can we tell music from other sounds?

Elements of Music. How can we tell music from other sounds? Elements of Music How can we tell music from other sounds? Sound begins with the vibration of an object. The vibrations are transmitted to our ears by a medium usually air. As a result of the vibrations,

More information

In all creative work melody writing, harmonising a bass part, adding a melody to a given bass part the simplest answers tend to be the best answers.

In all creative work melody writing, harmonising a bass part, adding a melody to a given bass part the simplest answers tend to be the best answers. THEORY OF MUSIC REPORT ON THE MAY 2009 EXAMINATIONS General The early grades are very much concerned with learning and using the language of music and becoming familiar with basic theory. But, there are

More information

Composer Style Attribution

Composer Style Attribution Composer Style Attribution Jacqueline Speiser, Vishesh Gupta Introduction Josquin des Prez (1450 1521) is one of the most famous composers of the Renaissance. Despite his fame, there exists a significant

More information

Course Overview. Assessments What are the essential elements and. aptitude and aural acuity? meaning and expression in music?

Course Overview. Assessments What are the essential elements and. aptitude and aural acuity? meaning and expression in music? BEGINNING PIANO / KEYBOARD CLASS This class is open to all students in grades 9-12 who wish to acquire basic piano skills. It is appropriate for students in band, orchestra, and chorus as well as the non-performing

More information

Etna Builder - Interactively Building Advanced Graphical Tree Representations of Music

Etna Builder - Interactively Building Advanced Graphical Tree Representations of Music Etna Builder - Interactively Building Advanced Graphical Tree Representations of Music Wolfgang Chico-Töpfer SAS Institute GmbH In der Neckarhelle 162 D-69118 Heidelberg e-mail: woccnews@web.de Etna Builder

More information

Tonal Atonality: An Analysis of Samuel Barber's "Nocturne Op. 33"

Tonal Atonality: An Analysis of Samuel Barber's Nocturne Op. 33 Ursidae: The Undergraduate Research Journal at the University of Northern Colorado Volume 2 Number 3 Article 3 January 2013 Tonal Atonality: An Analysis of Samuel Barber's "Nocturne Op. 33" Nathan C. Wambolt

More information

Topic 10. Multi-pitch Analysis

Topic 10. Multi-pitch Analysis Topic 10 Multi-pitch Analysis What is pitch? Common elements of music are pitch, rhythm, dynamics, and the sonic qualities of timbre and texture. An auditory perceptual attribute in terms of which sounds

More information

Creating a Feature Vector to Identify Similarity between MIDI Files

Creating a Feature Vector to Identify Similarity between MIDI Files Creating a Feature Vector to Identify Similarity between MIDI Files Joseph Stroud 2017 Honors Thesis Advised by Sergio Alvarez Computer Science Department, Boston College 1 Abstract Today there are many

More information

Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment

Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment Improvised Duet Interaction: Learning Improvisation Techniques for Automatic Accompaniment Gus G. Xia Dartmouth College Neukom Institute Hanover, NH, USA gxia@dartmouth.edu Roger B. Dannenberg Carnegie

More information

TECHNOLOGY FOR USE IN THE LESSON ROOM AND REHEARSAL ROOM. Dr. Brad Meyer Director of Percussion Studies Stephen F. Austin State University

TECHNOLOGY FOR USE IN THE LESSON ROOM AND REHEARSAL ROOM. Dr. Brad Meyer Director of Percussion Studies Stephen F. Austin State University TECHNOLOGY FOR USE IN THE LESSON ROOM AND REHEARSAL ROOM Dr. Brad Meyer Director of Percussion Studies Stephen F. Austin State University EMAIL: meyerbe@sfasu.edu WEBSITE: www.brad-meyer.com TUNERS: TonalEnergy

More information