Visualizing the Chromatic Index of Music

Dionysios Politis, Dimitrios Margounakis, Konstantinos Mokos
Multimedia Lab, Department of Informatics, Aristotle University of Thessaloniki, Greece
{dpolitis,

Abstract

Musical imaging is a recent trend in visualizing hidden dimensions of one-dimensional audio signals. The ascription of colors to psychoacoustic phenomena is consistent with the music perception depicted in the variety of scales and styles of ethnic music. Audio tools based on Software Engineering techniques are built for visualizing the chrominance of global music.

Key-Words: Color in Music, Psychoacoustics, Scales, Styles and Perception, Audio Tools

1. Introduction

In our previous WEDELMUSIC paper, entitled "Determining the Chromatic Index of Music", a multidimensional model for musical chromatic analysis was thoroughly presented [1]. Algorithms were developed to index the chrominance of a scale, as well as the chrominance of a musical piece. The peculiarities of different kinds of music were considered with respect to the distinction of these special characteristics. Based on these indices, a colorful sequence was finally produced: a unique and exact chromatic representation of a musical composition (see Fig. 1).

The aim of this paper is to present the tool that was developed during our research on chroma in music [1][2]. MEL-IRIS v.1.0 provides an integrated environment for the chromatic analysis of MIDI and audio pieces, their classification according to chroma, and the real-time visualization of their chromatic index. The name MEL-IRIS derives from the words Melodic Irida; Irida is the ancient Greek pronunciation of Iris, the Greek goddess of the rainbow, and is therefore associated with colors. MEL-IRIS was mainly developed in Borland C++ Builder, while MATLAB was used for the initial processing of audio files.

Figure 1. Chromatic strips produced by MEL-IRIS.

2. Goals

The main goal of this research effort is to suggest a new music classification schema based on musical chroma. MEL-IRIS is designed to process musical pieces from audio servers, create a unique chromatic index for each of them, and classify them according to that index. The chromatic indices are metadata that can be utilized in a wide range of applications, e.g. MIR systems. They can serve as a musical genus identifier, or even as an artist identifier. Genuses or genres are not perceived merely as Western music predicates [3] but as concepts of ethnic music perception in diachrony [4] and synchrony [5][6]. A colorful strip can be associated with a musical piece, serving both as a signature and as a classifier. Further processing of the colorful strips could lead to real-time animation based on the chromatic elements of a song, or even to some kind of algorithmic audio-visual show. Finally, a music composer can take advantage of the chromatic indices to chromatically process his own musical compositions. Considering the primary correspondence between chromatic sequences and feelings, an artist is able to fix a desired, or even an additive, emotional value in his own musical pieces.

3. Theoretical background

Next, the basic principles applied in MEL-IRIS are briefly discussed. The

mathematical background of our application is analytically presented in our previous WEDELMUSIC paper [1]. To start with, we define as chromatic any sound whose frequency does not coincide with the discrete frequencies of the scale. In proportion to the size of the intervals that this sound forms with its neighbors (the previous and the next sound), we can estimate how chromatic this sound is. In order to approach modes other than the Western ones, we further subdivided the intervals, using the half-flat and half-sharp signs from Arabic music [7]. As a result of this addition, the minimum interval between two notes is the quartertone. In order for a personal computer to meaningfully map a frequency (from the whole spectrum of frequencies) to a note, microtonal low and high thresholds were defined for each note, using the fact that the note A4 corresponds to 440 Hz as a benchmark (see Table I).

Table I. Oriental scales microtonal spectrum thresholds (low and high threshold for each note).

The procedure of chromatic analysis is serial and consists of five steps (the output of one step is the input to the next):

1. Extraction of the melodic sequence (frequencies)
2. Scale matching
3. Segmentation
4. Calculation of chromatic values
5. Creation of the color strip

Obviously, the procedure for melody isolation is not identical for MIDI and audio files. This differentiation has led to special handling of .wav and .mp3 files, using sonogram analysis in the MATLAB environment. The remaining steps are identical for both MIDI and audio files, with an exception in step 3 (segmentation), where audio files may optionally be treated in a special way. In step 2, a simple algorithm extracts the dominant frequencies of a melody and, according to the intervals they form among them, matches a scale to the musical piece.
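The paper's exact threshold formula is not reproduced here, but a common construction, assumed for illustration, anchors a 24-steps-per-octave (quartertone) grid to A4 = 440 Hz and places each note's low and high thresholds at the geometric midpoints to its quartertone neighbors (i.e. one 48th of an octave on either side). The function names below are ours:

```python
import math

A4 = 440.0               # benchmark from the paper
STEPS_PER_OCTAVE = 24    # quartertone resolution

def quartertone_frequency(k: int) -> float:
    """Frequency of the k-th quartertone step above (or below) A4."""
    return A4 * 2.0 ** (k / STEPS_PER_OCTAVE)

def thresholds(k: int) -> tuple:
    """Low/high microtonal thresholds for step k: geometric midpoints
    to the neighboring quartertones (a +/- 1/48-octave band)."""
    low = A4 * 2.0 ** ((k - 0.5) / STEPS_PER_OCTAVE)
    high = A4 * 2.0 ** ((k + 0.5) / STEPS_PER_OCTAVE)
    return low, high

def nearest_quartertone(freq: float) -> int:
    """Map an arbitrary frequency to the index of the closest quartertone."""
    return round(STEPS_PER_OCTAVE * math.log2(freq / A4))

# A4 itself lies inside its own threshold band, step index 0.
lo, hi = thresholds(0)
assert lo < 440.0 < hi
assert nearest_quartertone(446.0) == 0   # still within A4's band
assert nearest_quartertone(453.0) == 1   # one quartertone (half-sharp) up
```

Any frequency from the spectrum can then be attributed to exactly one quartertone, which is what makes the later interval computations well defined.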
The selected scale is the one whose intervals best approximate the intervals of the dominant frequencies. Essentially, the scale that yields the minimum error (calculated from the absolute values of the differences between the several combinations of intervals) is chosen. The standard segmentation method used in the application is a modified version of the Cambouropoulos-Widmer algorithm [8]. Heuristics are also used, and the algorithm is constrained by rules such as:

- IF (time > 1 sec) AND (no sound is played) THEN split the segment exactly at the middle of the silence.
- Segments that contain fewer than 10 notes are not allowed.

The alternative for audio files (if the user does not want to use the standard method) is automated segmentation, which is based on the features of the particular waveform. The initial chroma of a musical piece (c0) is the chroma c of the chosen scale. According to the sequence of frequencies output by the first step, each interval affects the current c value (a possible increment or reduction), creating in this way a continuous chromatic c value for the musical piece (see Fig. 2).

Figure 2. A c-time diagram of an audio file (FAIRUZ: Ya Zambaa, WAV).

The number of c values equals the number of notes that comprise the melody. These c values produce the final colorful strip. This chromatic visualization (see Fig. 1) consists of boxes, each representing a segment; the length of a box is proportional to the duration of the segment it represents. This results in the real-time lengthwise creation of the chromatic strip. The basic color of a segment is the average <c> of the c values that correspond to all the notes of the particular segment. As the creation of a box comes near its end, the basic color changes in order to achieve a smooth transition to the basic color of the next segment.
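The minimum-error scale matching described above can be sketched as follows. The candidate scales and their interval patterns (expressed in quartertone steps between successive degrees) are illustrative placeholders, not the contents of the actual Scale Bank:

```python
# Scale matching as minimum absolute interval error (a sketch).
# Intervals are counted in quartertone steps between scale degrees;
# these three candidate scales are illustrative only.
SCALES = {
    "major":          [4, 4, 2, 4, 4, 4, 2],
    "natural minor":  [4, 2, 4, 4, 2, 4, 4],
    "hijaz (approx)": [2, 6, 2, 4, 2, 4, 4],
}

def interval_error(observed, scale):
    """Sum of absolute differences between the observed dominant
    intervals and the scale's intervals, position by position."""
    return sum(abs(o - s) for o, s in zip(observed, scale))

def match_scale(observed_intervals):
    """Return the scale whose intervals best approximate the observed ones."""
    return min(SCALES, key=lambda name: interval_error(observed_intervals, SCALES[name]))

# Intervals extracted from a melody's dominant frequencies:
print(match_scale([4, 2, 4, 4, 2, 4, 4]))  # -> natural minor
```

The chosen scale then supplies both the note grid for the chromatic analysis and the initial chroma c0 of the piece.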
A 12-grade color scale was designed to map c values to colors [9]. In the following Table II, colors are ranked in chromatic order, beginning from white and ending at black [10].

c	Color
1	White
1.1	Sky Blue / Turquoise
1.2	Green
1.3	Yellow / Gold
1.4	Orange
1.5	Red
1.6	Pink
1.7	Blue / Royal Blue
1.8	Purple
1.9	Brown
2	Gray
2.1	Black

Table II. Colors to c values correspondence.

The actual color of each segment is characterized by the combination of the R, G, B variables (Red, Green, Blue) [11]. Given the average <c>, the values of R, G and B are calculated from piecewise-linear functions defined over the twelve c ranges of Table II.

Table III. Calculation of the R, G, B variables.

4. MEL-IRIS v.1.0: A short description of the audio tool

The MEL-IRIS project is programmed with the Borland C++ Builder 6 compiler and uses the Paradox database. It works under any Microsoft Windows operating system and is fully functional both on stand-alone systems and on networks, where users have the right to share, view, edit and search existing records in the same database, as long as a Borland Database Engine is installed. It supports internal multi-window viewing and requires Microsoft Media Player for the playback of audio files.

Frequency and segment extraction

When the audio file is opened, an internal automated editor (for MIDI files) or a sonogram analyzer (for other audio files) is triggered, which separates the melody according to the file format. In MIDI files, for example, notes and frequencies are represented as binary-coded events in the file (see Fig.
3) and are converted into real-world representations such as notes, delta times and velocities, together with other essential information such as tempo and time signature, which help us estimate the exact time of each note (see Fig. 4).

Figure 3. The binary representation of a MIDI file.

During this step, the possible segments of the audio file and their times in milliseconds are calculated using a modified algorithm derived from the Cambouropoulos-Widmer clustering algorithm [8]. Finally, the user has the opportunity to save notes, frequencies and segments in text files for further examination and analysis, also essential for the other parts (file conv.txt contains the frequencies for each note,

segments.txt the number of notes for each segment, and times.txt the partial segment times in milliseconds of the audio file).

Figure 4. The real-world representation of a MIDI file.

Chroma extraction

In this part we use the files created during frequency and segment extraction. On opening these files we automatically see the scale distribution of the audio file, based on our scale algorithm, along with a prompt to name the song, in order to keep track of our file system and to use the name in the chromatic categorization. The scale distribution consists of seven values, which are unique for every audio file (see Fig. 5).

Figure 5. Scale distributions.

At this point the user first runs scale match and then index chroma (see Fig. 5). The former automatically finds the most suitable scale, mode and chroma, taking into consideration our Scale Bank: a database containing scales and modes taken from Western, Balkan, Arabic, Oriental, Ancient Greek and Byzantine music, each of which has a unique chroma value (see Fig. 6).

Figure 6. Scale Bank.

The latter inserts the song into one of five categories that show how chromatic a song is, depending on the chroma of the scale selected by scale match (see Fig. 7). The five categories are:

- Very Low Chromatic (scale chroma <= 1.3)
- Low Chromatic (1.3 < scale chroma <= 1.6)
- Medium Chromatic (1.6 < scale chroma <= 1.9)
- High Chromatic (1.9 < scale chroma <= 2.2)
- Very High Chromatic (scale chroma > 2.2)

The attributes kept for further use for each song are chroma, tone, name and origin (see Fig. 8). From these attributes, along with the scale distribution and the segment file, a sample for the visual representation of the audio file is created (file deigma.txt is the partial-segmented sample of the audio file, consisting of a time value in milliseconds, a chroma value and a brightness value for each segment).

Figure 7. Song classification.
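The five-way categorization above is a plain threshold lookup on the matched scale's chroma; a minimal sketch (the function name is ours):

```python
def chroma_category(scale_chroma: float) -> str:
    """Classify a song by the chroma of its matched scale,
    using the five thresholds given in the text."""
    if scale_chroma <= 1.3:
        return "Very Low Chromatic"
    elif scale_chroma <= 1.6:
        return "Low Chromatic"
    elif scale_chroma <= 1.9:
        return "Medium Chromatic"
    elif scale_chroma <= 2.2:
        return "High Chromatic"
    else:
        return "Very High Chromatic"

print(chroma_category(1.45))  # -> Low Chromatic
print(chroma_category(2.3))   # -> Very High Chromatic
```

Because the boundaries are half-open on the left (e.g. 1.3 itself is Very Low Chromatic), every scale chroma falls into exactly one category.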

Figure 8. Scale attributes.

Visual representation

In this part the user selects the audio file he wants to play. While he listens to the music, a chromatic strip is filled. The moment playback starts, our coloring procedure begins. The coloring of the strip is synchronized with the playback of the audio file, because we use internal CPU time to calculate both the coloring delay, which is taken from the sample file created during chroma extraction, and the audio-file delay (every millisecond is converted to a CPU tick). The refresh rate of the chromatic strip is one millisecond, for better visualization. About two pixels of the strip are colored for every second that passes, using a step method. According to the chroma value and brightness value of each segment, taken from our sample file, an RGB color is chosen for every pixel using a color-mapping algorithm (see Fig. 9). At the end of each segment a black pixel is drawn (see Fig. 9), marking the end of the segment. The user can see the exact time in milliseconds at which each segment ended while the audio file plays (see Fig. 10), and can also pause or resume the process for further examination of the visualization. Finally, all chromatic strips are saved in a personal database, keyed by song name, in order to keep track of our experiments with the visual representation of the audio files.

Figure 9. Chromatic strips.

Figure 10. Partial-segmentation time.

Audio files processing

A special process for the extraction of the sequence of frequencies from audio voice recordings is required, as mentioned before. Therefore, MEL-IRIS provides a special interface (using MATLAB) in order to produce sonograms, from which the melodic sequence can be extracted. The following figure shows this interface.

Figure 11. The audio-processing interface.
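The strip-coloring scheme can be sketched as follows. Since the exact piecewise R, G, B formulas of Table III are not reproduced here, this sketch linearly interpolates between assumed RGB anchors for the twelve colors of Table II; the anchor triples, function names and the fixed two-pixels-per-second rate are our illustrative choices:

```python
# A sketch of the strip coloring: per-segment colored pixels, with a
# black pixel at each segment boundary. The RGB anchor triples for
# Table II's colors are our assumption, not the paper's Table III.
ANCHORS = [
    (1.0, (255, 255, 255)),  # White
    (1.1, (64, 224, 208)),   # Turquoise
    (1.2, (0, 255, 0)),      # Green
    (1.3, (255, 215, 0)),    # Gold
    (1.4, (255, 165, 0)),    # Orange
    (1.5, (255, 0, 0)),      # Red
    (1.6, (255, 192, 203)),  # Pink
    (1.7, (0, 0, 255)),      # Blue
    (1.8, (128, 0, 128)),    # Purple
    (1.9, (139, 69, 19)),    # Brown
    (2.0, (128, 128, 128)),  # Gray
    (2.1, (0, 0, 0)),        # Black
]

def c_to_rgb(c: float) -> tuple:
    """Map an average <c> to an RGB color by piecewise-linear
    interpolation between the anchor colors."""
    if c <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    for (c0, rgb0), (c1, rgb1) in zip(ANCHORS, ANCHORS[1:]):
        if c <= c1:
            t = (c - c0) / (c1 - c0)
            return tuple(round(a + t * (b - a)) for a, b in zip(rgb0, rgb1))
    return ANCHORS[-1][1]

def render_strip(segments, px_per_sec=2):
    """Build the strip for a list of (duration_seconds, average_c)
    segments: colored pixels per segment, black pixel at each end."""
    strip = []
    for duration, avg_c in segments:
        strip.extend([c_to_rgb(avg_c)] * max(1, round(duration * px_per_sec)))
        strip.append((0, 0, 0))  # segment boundary
    return strip

# A 3-second red segment (c = 1.5) followed by a 2-second near-black one.
strip = render_strip([(3.0, 1.5), (2.0, 2.05)])
assert strip[0] == (255, 0, 0)   # first pixel of the red segment
assert strip[6] == (0, 0, 0)     # boundary pixel after 6 colored pixels
```

The brightness value from the sample file would further modulate these colors; it is omitted here to keep the sketch focused on the c-to-color mapping.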
As can be seen, the user can change the parameters in the Windows menu and the FFT menu (Sample Frequency, Frequency Limit, Window Size, Window, FFT Size, FFT Overlap) that are used in the spectrum analysis. Figure 12 shows the default values of MEL-IRIS. MEL-IRIS also offers the choice of automatic segmentation of an audio file, based on the attributes of its waveform. The user is allowed to use this automatic method, modify it, or even split the piece up manually, by

defining the beginning and the end of each segment according to his acoustic perception. After the segmentation of the audio file is done, we can create the sonogram and the sequence of frequencies of a particular segment (see Fig. 13), using the Spectrum Analysis button in the Action menu (see Fig. 12). The sequence of frequencies is produced by sampling the frequencies that bear the highest volume (the darkest peaks on the 3-D graph) at each point in time, using very short pre-defined time intervals for the sampling. Clicking on the Spectrogram button, the user is given another, more flexible view of the same sonogram, which can be processed in several ways (see Fig. 14).

Figure 12. The Main Menu.

Figure 13. The array editor (extracted frequencies).

Figure 14. MEL-IRIS spectrograms.

The results of the spectrum analysis are automatically used as the input of step 2, and the serial procedure continues as described before.

5. Observations

We tested a very large number of musical pieces in MEL-IRIS, and the results were more than interesting: they were also encouraging for the continuation of our research. To begin with, one observation is that the classification that arose from the scale index gave a chromatic dimension to the way music is perceived [9]. The happy and shiny songs were categorized together, while the sad and melancholic pieces fell into another category. Moreover, the hearings that are heavy and strange for Western musicians came under yet another category. Apart from the similarities in the hearings, we also observed that the chromatic strips of songs in the same category appeared quite similar in colors and/or melody evolution, e.g. Chant Sacris del Orient and Salmos para el 3er Milenio por Soeur Marie Keyrouz in the very high chromatic category. Finally, an important observation is that the distinction between audio files and MIDI files can very easily be made from the chromatic strips. This stems from the fact that the

freedom in melody motion and the capability of using the whole spectrum of frequencies in audio recordings give greater chromatic fluctuation, in contrast to MIDI files, where this freedom is limited. However, it is also possible to achieve chromatic variance in MIDI files using pitch bend (the pitch wheel).

6. Future work

Our aim is to continue our research and enrich MEL-IRIS with new capabilities. One of them is multi-channel chromatic processing of MIDI files, where multiple strips would depict the chroma of a musical piece (one strip for every channel, not only for the melody). These strips would be mixed together or kept separate, according to the evolution of the musical composition. Another goal, on which we are already working, is the creation of music from colors, the reverse of what we have presented: an algorithmic composer that will be able to create new music and mix already-recorded musical patterns stored in a dynamic database. The starting point of this process of musical synthesis will be the user's choices from the chromatic palette. Finally, one of our future plans is the design of a unique interface, suitable for chromatic emotional synthesis. It will be suitable for composers and singers, in order to change the chroma of their musical pieces at will.

References

[1] Politis, D., Margounakis, D., Determining the Chromatic Index of Music, Proceedings, 3rd WEDELMUSIC Conference, September 15-17.
[2] Politis, D., Margounakis, D., In Search for Chroma in Music, Proceedings, 7th WSEAS International Multi-Conference on Circuits, Systems, Communication and Computers (CSCC 2003), Corfu, July 7-10.
[3] Tzanetakis, G., Cook, P., Musical Genre Classification of Audio Signals, IEEE Transactions on Speech and Audio Processing, 10(5), July.
[4] West, M.L., Ancient Greek Music, Oxford University Press.
[5] Burns, E., Intervals, Scales and Tuning, in Deutsch, D. (Ed.), The Psychology of Music, 2nd edition, Academic Press, London.
[6] Shepard, R., Pitch Perception and Measurement, in Cook, P. (Ed.), Music, Cognition and Computerized Sound, MIT Press, Cambridge, Massachusetts.
[7] Giannelos, D., La Musique Byzantine, L'Harmattan, 1996.
[8] Cambouropoulos, E., Widmer, G., Automatic motivic analysis via melodic clustering, Journal of New Music Research, 29(4), 2000.
[9] Juslin, P., Communicating Emotion in Music Performance: A Review and Theoretical Framework, in Juslin, P. & Sloboda, J. (Eds.), Music and Emotion: Theory and Research, Oxford University Press.
[10] Chamoudopoulos, D., Music and Chroma, The Arts of Sound, Papagregoriou-Nakas, Greece, 1997.
[11] Fels, S., Nishimoto, K., Mase, K., MusiKalscope: A Graphical Musical Instrument, IEEE Multimedia Magazine, Vol. 5, No. 3, July-September 1998.


More information

A Framework for Segmentation of Interview Videos

A Framework for Segmentation of Interview Videos A Framework for Segmentation of Interview Videos Omar Javed, Sohaib Khan, Zeeshan Rasheed, Mubarak Shah Computer Vision Lab School of Electrical Engineering and Computer Science University of Central Florida

More information

Automatic Commercial Monitoring for TV Broadcasting Using Audio Fingerprinting

Automatic Commercial Monitoring for TV Broadcasting Using Audio Fingerprinting Automatic Commercial Monitoring for TV Broadcasting Using Audio Fingerprinting Dalwon Jang 1, Seungjae Lee 2, Jun Seok Lee 2, Minho Jin 1, Jin S. Seo 2, Sunil Lee 1 and Chang D. Yoo 1 1 Korea Advanced

More information

MPEG-7 AUDIO SPECTRUM BASIS AS A SIGNATURE OF VIOLIN SOUND

MPEG-7 AUDIO SPECTRUM BASIS AS A SIGNATURE OF VIOLIN SOUND MPEG-7 AUDIO SPECTRUM BASIS AS A SIGNATURE OF VIOLIN SOUND Aleksander Kaminiarz, Ewa Łukasik Institute of Computing Science, Poznań University of Technology. Piotrowo 2, 60-965 Poznań, Poland e-mail: Ewa.Lukasik@cs.put.poznan.pl

More information

Audio Feature Extraction for Corpus Analysis

Audio Feature Extraction for Corpus Analysis Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends

More information

Study of White Gaussian Noise with Varying Signal to Noise Ratio in Speech Signal using Wavelet

Study of White Gaussian Noise with Varying Signal to Noise Ratio in Speech Signal using Wavelet American International Journal of Research in Science, Technology, Engineering & Mathematics Available online at http://www.iasir.net ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629

More information

ECE3296 Digital Image and Video Processing Lab experiment 2 Digital Video Processing using MATLAB

ECE3296 Digital Image and Video Processing Lab experiment 2 Digital Video Processing using MATLAB ECE3296 Digital Image and Video Processing Lab experiment 2 Digital Video Processing using MATLAB Objective i. To learn a simple method of video standards conversion. ii. To calculate and show frame difference

More information

Using different reference quantities in ArtemiS SUITE

Using different reference quantities in ArtemiS SUITE 06/17 in ArtemiS SUITE ArtemiS SUITE allows you to perform sound analyses versus a number of different reference quantities. Many analyses are calculated and displayed versus time, such as Level vs. Time,

More information

Therefore we need the help of sound editing software to convert the sound source captured from CD into the required format.

Therefore we need the help of sound editing software to convert the sound source captured from CD into the required format. Sound File Format Starting from a sound source file, there are three steps to prepare a voice chip samples. They are: Sound Editing Sound Compile Voice Chip Programming Suppose the sound comes from CD.

More information

International Journal of Advance Engineering and Research Development MUSICAL INSTRUMENT IDENTIFICATION AND STATUS FINDING WITH MFCC

International Journal of Advance Engineering and Research Development MUSICAL INSTRUMENT IDENTIFICATION AND STATUS FINDING WITH MFCC Scientific Journal of Impact Factor (SJIF): 5.71 International Journal of Advance Engineering and Research Development Volume 5, Issue 04, April -2018 e-issn (O): 2348-4470 p-issn (P): 2348-6406 MUSICAL

More information

Outline. Why do we classify? Audio Classification

Outline. Why do we classify? Audio Classification Outline Introduction Music Information Retrieval Classification Process Steps Pitch Histograms Multiple Pitch Detection Algorithm Musical Genre Classification Implementation Future Work Why do we classify

More information

A STATISTICAL VIEW ON THE EXPRESSIVE TIMING OF PIANO ROLLED CHORDS

A STATISTICAL VIEW ON THE EXPRESSIVE TIMING OF PIANO ROLLED CHORDS A STATISTICAL VIEW ON THE EXPRESSIVE TIMING OF PIANO ROLLED CHORDS Mutian Fu 1 Guangyu Xia 2 Roger Dannenberg 2 Larry Wasserman 2 1 School of Music, Carnegie Mellon University, USA 2 School of Computer

More information

Director Musices: The KTH Performance Rules System

Director Musices: The KTH Performance Rules System Director Musices: The KTH Rules System Roberto Bresin, Anders Friberg, Johan Sundberg Department of Speech, Music and Hearing Royal Institute of Technology - KTH, Stockholm email: {roberto, andersf, pjohan}@speech.kth.se

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 7.9 THE FUTURE OF SOUND

More information

Topic 10. Multi-pitch Analysis

Topic 10. Multi-pitch Analysis Topic 10 Multi-pitch Analysis What is pitch? Common elements of music are pitch, rhythm, dynamics, and the sonic qualities of timbre and texture. An auditory perceptual attribute in terms of which sounds

More information

A prototype system for rule-based expressive modifications of audio recordings

A prototype system for rule-based expressive modifications of audio recordings International Symposium on Performance Science ISBN 0-00-000000-0 / 000-0-00-000000-0 The Author 2007, Published by the AEC All rights reserved A prototype system for rule-based expressive modifications

More information

Music Complexity Descriptors. Matt Stabile June 6 th, 2008

Music Complexity Descriptors. Matt Stabile June 6 th, 2008 Music Complexity Descriptors Matt Stabile June 6 th, 2008 Musical Complexity as a Semantic Descriptor Modern digital audio collections need new criteria for categorization and searching. Applicable to:

More information

OBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES

OBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES OBJECTIVE EVALUATION OF A MELODY EXTRACTOR FOR NORTH INDIAN CLASSICAL VOCAL PERFORMANCES Vishweshwara Rao and Preeti Rao Digital Audio Processing Lab, Electrical Engineering Department, IIT-Bombay, Powai,

More information

Enhancing Music Maps

Enhancing Music Maps Enhancing Music Maps Jakob Frank Vienna University of Technology, Vienna, Austria http://www.ifs.tuwien.ac.at/mir frank@ifs.tuwien.ac.at Abstract. Private as well as commercial music collections keep growing

More information

Interacting with a Virtual Conductor

Interacting with a Virtual Conductor Interacting with a Virtual Conductor Pieter Bos, Dennis Reidsma, Zsófia Ruttkay, Anton Nijholt HMI, Dept. of CS, University of Twente, PO Box 217, 7500AE Enschede, The Netherlands anijholt@ewi.utwente.nl

More information

Thought Technology Ltd Belgrave Avenue, Montreal, QC H4A 2L8 Canada

Thought Technology Ltd Belgrave Avenue, Montreal, QC H4A 2L8 Canada Thought Technology Ltd. 2180 Belgrave Avenue, Montreal, QC H4A 2L8 Canada Tel: (800) 361-3651 ٠ (514) 489-8251 Fax: (514) 489-8255 E-mail: _Hmail@thoughttechnology.com Webpage: _Hhttp://www.thoughttechnology.com

More information

Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical tension and relaxation schemas

Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical tension and relaxation schemas Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical and schemas Stella Paraskeva (,) Stephen McAdams (,) () Institut de Recherche et de Coordination

More information

Pre-processing of revolution speed data in ArtemiS SUITE 1

Pre-processing of revolution speed data in ArtemiS SUITE 1 03/18 in ArtemiS SUITE 1 Introduction 1 TTL logic 2 Sources of error in pulse data acquisition 3 Processing of trigger signals 5 Revolution speed acquisition with complex pulse patterns 7 Introduction

More information

QUALITY OF COMPUTER MUSIC USING MIDI LANGUAGE FOR DIGITAL MUSIC ARRANGEMENT

QUALITY OF COMPUTER MUSIC USING MIDI LANGUAGE FOR DIGITAL MUSIC ARRANGEMENT QUALITY OF COMPUTER MUSIC USING MIDI LANGUAGE FOR DIGITAL MUSIC ARRANGEMENT Pandan Pareanom Purwacandra 1, Ferry Wahyu Wibowo 2 Informatics Engineering, STMIK AMIKOM Yogyakarta 1 pandanharmony@gmail.com,

More information

Analysis and Clustering of Musical Compositions using Melody-based Features

Analysis and Clustering of Musical Compositions using Melody-based Features Analysis and Clustering of Musical Compositions using Melody-based Features Isaac Caswell Erika Ji December 13, 2013 Abstract This paper demonstrates that melodic structure fundamentally differentiates

More information

(Skip to step 11 if you are already familiar with connecting to the Tribot)

(Skip to step 11 if you are already familiar with connecting to the Tribot) LEGO MINDSTORMS NXT Lab 5 Remember back in Lab 2 when the Tribot was commanded to drive in a specific pattern that had the shape of a bow tie? Specific commands were passed to the motors to command how

More information

Creating a Feature Vector to Identify Similarity between MIDI Files

Creating a Feature Vector to Identify Similarity between MIDI Files Creating a Feature Vector to Identify Similarity between MIDI Files Joseph Stroud 2017 Honors Thesis Advised by Sergio Alvarez Computer Science Department, Boston College 1 Abstract Today there are many

More information

The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng

The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng S. Zhu, P. Ji, W. Kuang and J. Yang Institute of Acoustics, CAS, O.21, Bei-Si-huan-Xi Road, 100190 Beijing,

More information

KNX Dimmer RGBW - User Manual

KNX Dimmer RGBW - User Manual KNX Dimmer RGBW - User Manual Item No.: LC-013-004 1. Product Description With the KNX Dimmer RGBW it is possible to control of RGBW, WW-CW LED or 4 independent channels with integrated KNX BCU. Simple

More information

Chapter 40: MIDI Tool

Chapter 40: MIDI Tool MIDI Tool 40-1 40: MIDI Tool MIDI Tool What it does This tool lets you edit the actual MIDI data that Finale stores with your music key velocities (how hard each note was struck), Start and Stop Times

More information

MAutoPitch. Presets button. Left arrow button. Right arrow button. Randomize button. Save button. Panic button. Settings button

MAutoPitch. Presets button. Left arrow button. Right arrow button. Randomize button. Save button. Panic button. Settings button MAutoPitch Presets button Presets button shows a window with all available presets. A preset can be loaded from the preset window by double-clicking on it, using the arrow buttons or by using a combination

More information

The Measurement Tools and What They Do

The Measurement Tools and What They Do 2 The Measurement Tools The Measurement Tools and What They Do JITTERWIZARD The JitterWizard is a unique capability of the JitterPro package that performs the requisite scope setup chores while simplifying

More information

A Bayesian Network for Real-Time Musical Accompaniment

A Bayesian Network for Real-Time Musical Accompaniment A Bayesian Network for Real-Time Musical Accompaniment Christopher Raphael Department of Mathematics and Statistics, University of Massachusetts at Amherst, Amherst, MA 01003-4515, raphael~math.umass.edu

More information

Computational Modelling of Harmony

Computational Modelling of Harmony Computational Modelling of Harmony Simon Dixon Centre for Digital Music, Queen Mary University of London, Mile End Rd, London E1 4NS, UK simon.dixon@elec.qmul.ac.uk http://www.elec.qmul.ac.uk/people/simond

More information

Wipe Scene Change Detection in Video Sequences

Wipe Scene Change Detection in Video Sequences Wipe Scene Change Detection in Video Sequences W.A.C. Fernando, C.N. Canagarajah, D. R. Bull Image Communications Group, Centre for Communications Research, University of Bristol, Merchant Ventures Building,

More information

SynthiaPC User's Guide

SynthiaPC User's Guide Always There to Beautifully Play Your Favorite Hymns and Church Music SynthiaPC User's Guide A Product Of Suncoast Systems, Inc 6001 South Highway 99 Walnut Hill, Florida 32568 (850) 478-6477 Table Of

More information

A repetition-based framework for lyric alignment in popular songs

A repetition-based framework for lyric alignment in popular songs A repetition-based framework for lyric alignment in popular songs ABSTRACT LUONG Minh Thang and KAN Min Yen Department of Computer Science, School of Computing, National University of Singapore We examine

More information

A Case Based Approach to the Generation of Musical Expression

A Case Based Approach to the Generation of Musical Expression A Case Based Approach to the Generation of Musical Expression Taizan Suzuki Takenobu Tokunaga Hozumi Tanaka Department of Computer Science Tokyo Institute of Technology 2-12-1, Oookayama, Meguro, Tokyo

More information

6.UAP Project. FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System. Daryl Neubieser. May 12, 2016

6.UAP Project. FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System. Daryl Neubieser. May 12, 2016 6.UAP Project FunPlayer: A Real-Time Speed-Adjusting Music Accompaniment System Daryl Neubieser May 12, 2016 Abstract: This paper describes my implementation of a variable-speed accompaniment system that

More information

Automated extraction of motivic patterns and application to the analysis of Debussy s Syrinx

Automated extraction of motivic patterns and application to the analysis of Debussy s Syrinx Automated extraction of motivic patterns and application to the analysis of Debussy s Syrinx Olivier Lartillot University of Jyväskylä, Finland lartillo@campus.jyu.fi 1. General Framework 1.1. Motivic

More information

CSC475 Music Information Retrieval

CSC475 Music Information Retrieval CSC475 Music Information Retrieval Symbolic Music Representations George Tzanetakis University of Victoria 2014 G. Tzanetakis 1 / 30 Table of Contents I 1 Western Common Music Notation 2 Digital Formats

More information

Towards the tangible: microtonal scale exploration in Central-African music

Towards the tangible: microtonal scale exploration in Central-African music Towards the tangible: microtonal scale exploration in Central-African music Olmo.Cornelis@hogent.be, Joren.Six@hogent.be School of Arts - University College Ghent - BELGIUM Abstract This lecture presents

More information

Lab 5 Linear Predictive Coding

Lab 5 Linear Predictive Coding Lab 5 Linear Predictive Coding 1 of 1 Idea When plain speech audio is recorded and needs to be transmitted over a channel with limited bandwidth it is often necessary to either compress or encode the audio

More information

S I N E V I B E S FRACTION AUDIO SLICING WORKSTATION

S I N E V I B E S FRACTION AUDIO SLICING WORKSTATION S I N E V I B E S FRACTION AUDIO SLICING WORKSTATION INTRODUCTION Fraction is a plugin for deep on-the-fly remixing and mangling of sound. It features 8x independent slicers which record and repeat short

More information

Ydea-C5 System. Automatic Brightness Adjustment_DMX User Manual

Ydea-C5 System. Automatic Brightness Adjustment_DMX User Manual Ydea-C5 System Automatic Brightness Adjustment_DMX User Manual Automatic Brightness Adjustment_DMX includes 3 modes: timing adjustment, light sensation adjustment, and brightness priority; 1 Timing adjustment:

More information

Semi-automated extraction of expressive performance information from acoustic recordings of piano music. Andrew Earis

Semi-automated extraction of expressive performance information from acoustic recordings of piano music. Andrew Earis Semi-automated extraction of expressive performance information from acoustic recordings of piano music Andrew Earis Outline Parameters of expressive piano performance Scientific techniques: Fourier transform

More information

Statistical Modeling and Retrieval of Polyphonic Music

Statistical Modeling and Retrieval of Polyphonic Music Statistical Modeling and Retrieval of Polyphonic Music Erdem Unal Panayiotis G. Georgiou and Shrikanth S. Narayanan Speech Analysis and Interpretation Laboratory University of Southern California Los Angeles,

More information

Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine. Project: Real-Time Speech Enhancement

Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine. Project: Real-Time Speech Enhancement Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine Project: Real-Time Speech Enhancement Introduction Telephones are increasingly being used in noisy

More information

Automatic Piano Music Transcription

Automatic Piano Music Transcription Automatic Piano Music Transcription Jianyu Fan Qiuhan Wang Xin Li Jianyu.Fan.Gr@dartmouth.edu Qiuhan.Wang.Gr@dartmouth.edu Xi.Li.Gr@dartmouth.edu 1. Introduction Writing down the score while listening

More information

CVP-609 / CVP-605. Reference Manual

CVP-609 / CVP-605. Reference Manual CVP-609 / CVP-605 Reference Manual This manual explains about the functions called up by touching each icon shown in the Menu display. Please read the Owner s Manual first for basic operations, before

More information

Audio Compression Technology for Voice Transmission

Audio Compression Technology for Voice Transmission Audio Compression Technology for Voice Transmission 1 SUBRATA SAHA, 2 VIKRAM REDDY 1 Department of Electrical and Computer Engineering 2 Department of Computer Science University of Manitoba Winnipeg,

More information

Analysing Musical Pieces Using harmony-analyser.org Tools

Analysing Musical Pieces Using harmony-analyser.org Tools Analysing Musical Pieces Using harmony-analyser.org Tools Ladislav Maršík Dept. of Software Engineering, Faculty of Mathematics and Physics Charles University, Malostranské nám. 25, 118 00 Prague 1, Czech

More information

jsymbolic 2: New Developments and Research Opportunities

jsymbolic 2: New Developments and Research Opportunities jsymbolic 2: New Developments and Research Opportunities Cory McKay Marianopolis College and CIRMMT Montreal, Canada 2 / 30 Topics Introduction to features (from a machine learning perspective) And how

More information

ESP: Expression Synthesis Project

ESP: Expression Synthesis Project ESP: Expression Synthesis Project 1. Research Team Project Leader: Other Faculty: Graduate Students: Undergraduate Students: Prof. Elaine Chew, Industrial and Systems Engineering Prof. Alexandre R.J. François,

More information

Singer Recognition and Modeling Singer Error

Singer Recognition and Modeling Singer Error Singer Recognition and Modeling Singer Error Johan Ismael Stanford University jismael@stanford.edu Nicholas McGee Stanford University ndmcgee@stanford.edu 1. Abstract We propose a system for recognizing

More information

Etna Builder - Interactively Building Advanced Graphical Tree Representations of Music

Etna Builder - Interactively Building Advanced Graphical Tree Representations of Music Etna Builder - Interactively Building Advanced Graphical Tree Representations of Music Wolfgang Chico-Töpfer SAS Institute GmbH In der Neckarhelle 162 D-69118 Heidelberg e-mail: woccnews@web.de Etna Builder

More information

Analytic Comparison of Audio Feature Sets using Self-Organising Maps

Analytic Comparison of Audio Feature Sets using Self-Organising Maps Analytic Comparison of Audio Feature Sets using Self-Organising Maps Rudolf Mayer, Jakob Frank, Andreas Rauber Institute of Software Technology and Interactive Systems Vienna University of Technology,

More information

Melodic Pattern Segmentation of Polyphonic Music as a Set Partitioning Problem

Melodic Pattern Segmentation of Polyphonic Music as a Set Partitioning Problem Melodic Pattern Segmentation of Polyphonic Music as a Set Partitioning Problem Tsubasa Tanaka and Koichi Fujii Abstract In polyphonic music, melodic patterns (motifs) are frequently imitated or repeated,

More information

Musical Signal Processing with LabVIEW Introduction to Audio and Musical Signals. By: Ed Doering

Musical Signal Processing with LabVIEW Introduction to Audio and Musical Signals. By: Ed Doering Musical Signal Processing with LabVIEW Introduction to Audio and Musical Signals By: Ed Doering Musical Signal Processing with LabVIEW Introduction to Audio and Musical Signals By: Ed Doering Online:

More information

Part 1: Introduction to Computer Graphics

Part 1: Introduction to Computer Graphics Part 1: Introduction to Computer Graphics 1. Define computer graphics? The branch of science and technology concerned with methods and techniques for converting data to or from visual presentation using

More information