International Journal of Research in Engineering and Innovation (IJREI), Vol-2, Issue-3 (2018)

An efficient method for tonic detection from south Indian classical music

Unnikrishnan G
School of Computer Sciences, Mahatma Gandhi University, Kottayam, Kerala, India

Abstract

This paper proposes a novel method to identify the tonic value of Carnatic music recordings. For the recognition and classification of ragas (scales), we must first transcribe, or extract, the different notes constituting those ragas. Transcription of music is the process of analyzing an acoustic musical signal to obtain the musical parameters of the sounds that occur in it. Sa is the tonic, or basic note, from which all other notes are derived in Indian classical music. Hence, in order to identify the raga of an Indian classical music performance, identification of Sa is necessary. The proposed method proved successful with both monophonic and polyphonic recordings, a major advancement over earlier methods.

Keywords: Pitch Estimation, Tonic Detection, Sruthi, Raga, Octaves, Relative Pitch Ratio

1. Introduction

Computational musicology is an interdisciplinary research area focusing on the investigation of musicological questions with computational methods. It draws on both computer science and musicology. Its main objective is to represent a musical problem in terms of algorithms and corresponding data structures. The focus of research in computational musicology is not to study music as such, but to design methods for retrieving musical information from the acoustic signals of music recordings. Tasks in computational musicology include genre classification, raga recognition, melody extraction, artist recognition, song recommendation, etc. Carnatic music is the classical music of the southern states of India.
Carnatic music compositions (called kritis, keerthanas, etc.) are based on ragas. The rendering of a composition typically starts with an alapana (improvisation) of the raga in which the composition is set, followed by the kriti. There are thousands of Carnatic ragas. A raga is a melodic concept. Matanga Muni, in his text Brihaddeshi, defines raga as that which colours the mind of the good through a specific swara and varna (literally, colour) or through a type of dhwani (sound) [1]. A definition of raga from a computational perspective is given by Chordia and Rae [2]. They define a raga as a melodic abstraction that can be described as a collection of melodic phrases. These phrases are sequences of notes, or swaras, that are often inflected with various micro-pitch alterations and articulated with an expressive sense of timing. Longer phrases are built by joining these melodic atoms together. The features of a raga are its set of constituent notes (swaras), their progressions (ascent/descent, or arohana/avarohana), the way they are intonated using various movements (gamakas), and their relative position, strength and duration. The constituent notes of a raga relate themselves to the base note or tonic, called the Adhara Shadjam and denoted by Sa. The tonic is the base frequency selected by an artist for comfortable rendering; it may vary from artist to artist and from performance to performance. The sequence of the constituent notes of a raga starting with Sa in one octave and ending with Sa in the next higher octave is called the Arohana (ascent). Similarly, the sequence from the upper Sa down to the lower Sa is called the Avarohana (descent). The ascent and descent patterns can also be vakra, containing subsidiary ascents and descents. In addition to the Arohana and Avarohana, a raga normally has sanchara prayogas, melodic phrases peculiar to that raga. The sanchara prayogas adhere to the Arohana-Avarohana pattern of the raga.
They are distinguished by gamakas (microtonal ornamentations).

Corresponding author: Unnikrishnan G. Email: ukgkollam@gmail.com

Raga recognition and classification is a central topic in Indian music theory, inspiring rich debate on the essential
characteristics of ragas and the features that make two ragas similar or dissimilar [3]. Automatic raga recognition is the process of identifying ragas using computational methods: the core problem is to correctly identify the raga of a recording by analyzing it computationally. Automatic raga recognition has tremendous potential in areas including music information retrieval, the teaching, learning and practice of music, multimedia databases, interactive composition, accompaniment systems, etc.

2. Pitch Intervals and Musical Notes

Efforts to describe and measure the properties of music date back to antiquity. Ancient Vedic texts on music mention the notion of octave equivalence and divide an octave into swarasthanams (pitch intervals). There are seven basic swaras, known as the Sapta Swaras: Shadjam (denoted by Sa), Rishabham (Ri), Gandharam (Ga), Madhyamam (Ma), Panchamam (Pa), Dhaivatham (Da) and Nishadam (Ni). Sa is the tonic, or Adhara Shadjam, from which all other notes are derived. A series of swaras, beginning with Sa and ending with Ni, is called a Sthayi, or octave [Table 1]. The frequency of a note in an octave is twice the frequency of the same note in the previous octave. Of the seven swaras, Sa and Pa are constant; they are called Achala Swaras (fixed notes). The remaining five swaras have varieties and are called Chala Swaras (varying notes) [Table 2]. For extraction of notes, the relative pitch of each note with respect to Sa in an octave is considered [3]. A method for extraction of notes should satisfy the Relative Pitch Ratio (RPR) [Table 2]. Observe that in rows 3, 4, 10 and 11 of Table 2, two different names denote the same note position; the notes at those positions have the same RPR. The pairs of notes having this property are Ga1 & Ri2, Ri3 & Ga2, Ni1 & Da2, and Da3 & Ni2. Thus there are 16 note names in total even though there are only 12 note positions.
This naming convention of using two different names for the same note is a unique feature of Carnatic music, and it allows certain combinations of notes that would otherwise have been impossible [4]. For example, the combination Ri1-Ga1, or Sudha Rishabham and Sudha Gandharam, becomes allowed only under this convention; otherwise, the combination would have been Ri1-Ri2, which is not allowable because both are variations of Ri (Rishabham).

Table 1: Three Octaves in Indian Classical Music

Mandra Sthayi: Sa Ri Ga Ma Pa Dha Ni
Madhya Sthayi: Sa Ri Ga Ma Pa Dha Ni
Thara Sthayi:  Sa Ri Ga Ma Pa Dha Ni

Various combinations of the notes discussed above constitute the different ragas of Carnatic music. Each raga has a unique sequence of notes, with uniformly increasing frequency in the ascent (the Arohanam) and decreasing frequency in the descent (the Avarohanam), that determines the character of the raga. In general, all music compositions and other forms of musical improvisation based on a raga must contain the notes constituting that raga. The ascent or descent of a raga should generally contain at least four notes. The common forms of raga scales are pentatonic or Audava scales, containing five notes including Sa; hexatonic or Shadava scales, containing six notes; and heptatonic or complete scales, called Sampurna, containing seven notes [5]. There are 72 such Sampurna ragas, which constitute the Melakartha Raga System in Carnatic music [Table 3]. These ragas are also called Janaka (parent) ragas, as all other ragas in Carnatic music are considered to be generated from them by various rearrangements of notes. Such generated ragas are called janya (child) ragas.

Table 2: Musical notes and RPR values in the Carnatic System

No.  Symbol       Relative Pitch Ratio (RPR)   Decimal Value of RPR
1    Sa           1/1                          1.0000
2    Ri1          16/15                        1.0667
3    Ri2 / Ga1    9/8                          1.1250
4    Ga2 / Ri3    6/5                          1.2000
5    Ga3          5/4                          1.2500
6    Ma1          4/3                          1.3333
7    Ma2          17/12                        1.4167
8    Pa           3/2                          1.5000
9    Dha1         8/5                          1.6000
10   Dha2 / Ni1   5/3                          1.6667
11   Ni2 / Dha3   9/5                          1.8000
12   Ni3          15/8                         1.8750
13   Sa (upper)   2/1                          2.0000
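The decimal column of Table 2 can be derived directly from the ratios. The sketch below (using the standard just-intonation ratios for the 12 note positions, with enharmonic pairs such as Ri2/Ga1 sharing one entry) computes those values:

```python
from fractions import Fraction

# Relative Pitch Ratios for the 12 note positions (Table 2);
# enharmonic pairs (e.g. Ri2/Ga1) occupy a single position.
rpr = {
    "Sa": Fraction(1, 1),
    "Ri1": Fraction(16, 15),
    "Ri2/Ga1": Fraction(9, 8),
    "Ga2/Ri3": Fraction(6, 5),
    "Ga3": Fraction(5, 4),
    "Ma1": Fraction(4, 3),
    "Ma2": Fraction(17, 12),
    "Pa": Fraction(3, 2),
    "Dha1": Fraction(8, 5),
    "Dha2/Ni1": Fraction(5, 3),
    "Dha3/Ni2": Fraction(9, 5),
    "Ni3": Fraction(15, 8),
}

# Print each position with its exact ratio and decimal value.
for name, ratio in rpr.items():
    print(f"{name:9s} {str(ratio):>6s} = {float(ratio):.4f}")
```

Multiplying a detected tonic frequency by these ratios yields the ideal frequencies of all twelve note positions within one octave.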
Table 3: The Melakartha Raga System in Carnatic Music
(M1 ragas are numbered 1-36; M2 ragas are numbered 37-72)

R-G Comb.  D-N Comb.  No. Raga (M1)          No. Raga (M2)
R1G1       D1N1        1  Kanakangi          37  Salagam
R1G1       D1N2        2  Ratnangi           38  Jalarnavam
R1G1       D1N3        3  Ganamoorthi        39  Jhalavarali
R1G1       D2N2        4  Vanaspathi         40  Navaneetham
R1G1       D2N3        5  Manvathi           41  Pavani
R1G1       D3N3        6  Thanaroopi         42  Raghupriya
R1G2       D1N1        7  Senavathi          43  Gavambodhi
R1G2       D1N2        8  Hanumathodi        44  Bhavapriya
R1G2       D1N3        9  Dhenuka            45  Subhapantuvarali
R1G2       D2N2       10  Natakapriya        46  Shadvidhamargini
R1G2       D2N3       11  Kokilapriya        47  Suvarnangi
R1G2       D3N3       12  Roopavathi         48  Divyamani
R1G3       D1N1       13  Gayakapriya        49  Dhavalambari
R1G3       D1N2       14  Vakulabharanam     50  Namanarayani
R1G3       D1N3       15  Mayamalavagowla    51  Kamavardhani
R1G3       D2N2       16  Chakravakam        52  Ramapriya
R1G3       D2N3       17  Suryakantham       53  Gamanasrama
R1G3       D3N3       18  Hatakambari        54  Viswambhari
R2G2       D1N1       19  Jhankaradhwani     55  Syamalangi
R2G2       D1N2       20  Natabhairavi       56  Shanmukhapriya
R2G2       D1N3       21  Keeravani          57  Simhendramadhyamam
R2G2       D2N2       22  Kharaharapriya     58  Hemavathi
R2G2       D2N3       23  Gowri Manohari     59  Dharmavathi
R2G2       D3N3       24  Varunapriya        60  Neethimathi
R2G3       D1N1       25  Mararanjini        61  Kanthamani
R2G3       D1N2       26  Charukesi          62  Rishabhapriya
R2G3       D1N3       27  Sarasangi          63  Lathangi
R2G3       D2N2       28  Harikamboji        64  Vachaspathi
R2G3       D2N3       29  Sankarabharanam    65  Mechakalyani
R2G3       D3N3       30  Naganadini         66  Chithrambari
R3G3       D1N1       31  Yagapriya          67  Sucharithra
R3G3       D1N2       32  Ragavardhani       68  Jyothiswaroopini
R3G3       D1N3       33  Gangeyabhooshani   69  Dhathuvardhani
R3G3       D2N2       34  Vagadheeswari      70  Nasikabhooshani
R3G3       D2N3       35  Soolini            71  Kosalam
R3G3       D3N3       36  Chalanatta         72  Rasikapriya

3. Tonic Detection

3.1 Critical Bands and Dissonance

For the recognition and classification of ragas, we must first transcribe, or extract, the different notes constituting those ragas. Transcription of music is the process of analyzing an acoustic musical signal to obtain the musical parameters of the sounds that occur in it; it can be seen as transforming an acoustic signal into a symbolic representation. Sa is the tonic, or Adhara Shadjam, from which all other notes are derived.
Hence, in order to identify the raga of a Carnatic music performance, we first have to find the frequency of Sa (the base frequency, called the tonic, in which the performance is rendered). This is the first phase of any raga recognition process. The proposed method for tonic detection is based on the following peculiar properties of musical notes.

In studying the frequencies of a musical scale, an important concept is that of critical bands. When sound enters the ear, it causes vibrations of the basilar membrane within the inner ear. Different frequencies of sound cause different regions of the basilar membrane and its fine hairs to vibrate; this is how the brain discriminates between frequencies. However, if two frequencies are close together, there is an overlap of response on the basilar membrane: a large fraction of the hairs set into vibration respond to both frequencies. When the frequencies are nearly the same, they cannot be distinguished as separate frequencies; instead, an average frequency is heard. If the two frequencies are 440 Hz and 450 Hz, for example, we will hear 445 Hz. If the lower frequency is kept at 440 Hz and the higher one is raised slowly, there comes a point where the two frequencies are still indistinguishable and there is just a
roughness to the total sound. This is called dissonance. It continues until the higher frequency finally becomes distinguishable from the lower; from that point, further raising the higher frequency causes less and less dissonance. When two frequencies are close enough to cause the roughness, or dissonance, described above, they are said to be within a critical band on the basilar membrane. For much of the audible range, the critical band around a central frequency is stimulated by frequencies within about 15% of that central frequency [7]. Critical bands play an important role in the study of musical notes and scales: two frequencies that stimulate areas within the same critical band on the basilar membrane produce dissonance, which is undesirable in music.

3.2 Consonance

The opposite of dissonance is consonance: pleasant-sounding combinations of frequencies. The previous section discussed the simultaneous sounding of a 440 Hz tone with a 450 Hz tone. If the 450 Hz tone is replaced with an 880 Hz tone (2 x 440 Hz), you hear excellent consonance. This especially pleasant-sounding combination arises because every crest of the 440 Hz sound wave is in step with every other crest of the 880 Hz sound wave. So doubling the frequency of one tone always produces a second tone that sounds good when played with the first. This interval between two frequencies is called a diapason; "diapason" literally means "through all". In fact, 440 Hz and 880 Hz sound so good together that they sound the same. As the frequency of the 880 Hz tone is increased to 1760 Hz (2 x 880 Hz, or 4 x 440 Hz), it sounds the same as when the 440 Hz tone is increased to 880 Hz. This feature has led widely different cultures to historically use an arbitrary frequency and another frequency, exactly one diapason higher, as the first and last notes of the musical scale.
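The roughly 15% critical-band width quoted above lends itself to a quick numeric check. The helper below is purely illustrative (the name `within_critical_band` is not from the paper):

```python
def within_critical_band(f1_hz: float, f2_hz: float, width: float = 0.15) -> bool:
    """Rough test: two tones fall within one critical band when they are
    within about 15% of their central frequency (after Lapp [7])."""
    center = (f1_hz + f2_hz) / 2.0
    return abs(f1_hz - f2_hz) <= width * center

# 440 Hz vs 450 Hz: close together, producing dissonant roughness.
print(within_critical_band(440.0, 450.0))   # True
# 440 Hz vs 880 Hz: one diapason apart, well outside the band.
print(within_critical_band(440.0, 880.0))   # False
```

Raising the second tone from 450 Hz toward 880 Hz eventually moves the pair out of the shared critical band, which is exactly the transition from dissonance to consonance described in the text.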
As mentioned above, frequencies separated by one diapason not only sound good together, they sound like each other. So an adult and a child, or a man and a woman, can sing the same song together simply by singing in different diapasons, and they'll do this naturally, without even thinking about it. The same applies to a vocalist and a supporting instrumentalist [7].

This feature is the underlying principle of the proposed tonic detection method. The method calculates the tonic from a sample taken from the recording under study. The sample may contain the vocal part, a supporting instrument such as the violin, or a combination of both. Normally, the base frequencies of the vocal and of a supporting instrument such as the violin differ by one diapason; that is, the frequency of a note generated on the violin is twice the frequency of the same note sung by the vocalist. However, as mentioned above, due to consonance, two notes separated by a diapason sound alike. This is why a violinist and a vocalist are able to perform in unison. Based on this fact, it is hypothesized that tonic identification can be independent of the medium of performance: we can identify the tonic from the sound of the violin, from the sound of the vocalist, or from a combination of the two. In all these cases, the detected tonic can be used to identify the raga from the violin portion, from the vocal portion, or from a combination of both. It is also hypothesized that, since the tonic is medium-independent, it can be used to identify the raga from portions containing polyphonic music, for example portions where the sound of the mridangam (a percussion instrument in Carnatic music) or some other accompanying instrument is also present. These hypotheses have been successfully proved through experiments.
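Octave equivalence is what makes the medium-independence hypothesis workable in code: folding every detected frequency into a single reference octave makes a vocal tonic and a violin tonic one diapason apart indistinguishable. A minimal sketch (the function name and the 110 Hz reference octave are illustrative choices, not from the paper):

```python
def fold_to_octave(freq_hz: float, low_hz: float = 110.0) -> float:
    """Fold a frequency into the octave [low_hz, 2*low_hz) by repeated
    halving/doubling. Octave-equivalent pitches map to the same value,
    so a violin tonic one diapason above the vocal tonic folds onto
    the same frequency as the vocal tonic."""
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    while freq_hz >= 2 * low_hz:
        freq_hz /= 2.0
    while freq_hz < low_hz:
        freq_hz *= 2.0
    return freq_hz

vocal_sa = 146.8           # e.g. a vocalist's tonic near D3
violin_sa = 2 * vocal_sa   # supporting violin, one diapason higher
print(fold_to_octave(vocal_sa) == fold_to_octave(violin_sa))  # True
```

Under this folding, a tonic detected from the violin portion and one detected from the vocal portion agree, which is the basis for using a single detected tonic across media.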
This is a major advancement over earlier works, where the tonic was found either by tuning an oscillator and noting the value in Hz [2], or by categorizing instruments as either male or female and asking explicitly for the tonic of the performer [8].

3.3 The Proposed Method

First, the waveform of the recording was analysed using a wave editor such as WavePad. The lower-amplitude portions, which indicate the ending portions of the raga visthara (elaboration of a raga accompanied by the thanpura and sometimes the violin) or other concluding passages, were located, and from such a portion a small piece was chosen for tonic detection. This musical piece, from which the tonic Sa was to be extracted, was stored as a wav file with a sampling frequency of 44.1 kHz. The musical signal contained in the wav file was first decomposed into frames of 25 ms. Pitch estimation was performed for each frame, using the autocorrelation method, and the corresponding frequencies were obtained. The extracted frequencies formed groups of nonzero values separated by zeros. The musical piece may contain frequencies other than the tonic frequency, indicating the presence of other notes or even noise. Hence, as a criterion for separating the tonic frequency, it was assumed that more than one zero value occurring together indicated a note boundary; that is, when more than one zero occurred together, it marked the gap between two notes, so the nonzero frequencies up to that point represented a note. To fix the correct frequency of the note, all the nonzero frequencies up to that point were grouped and analyzed. Most of these frequencies differed only slightly from one another, so their average seemed an immediate choice for the frequency of the actual note. However, it was observed that some very high and very low frequencies also occurred among the extracted frequencies.
This could be due to the various noises that can occur during a real performance. Because of these highly variant frequencies, the average differed greatly from most of the frequencies; clearly, the average was not a good choice. In order to obtain a frequency that represented most of the extracted and grouped frequencies, and to filter out the highly
variant, abnormal frequencies, another statistical measure, the median, was chosen. The median of the frequencies in the group was computed and fixed as the candidate (probable) tonic of that frame. The same process was repeated to find candidate tonics from the subsequent frames, terminating when all the extracted frequencies had been examined. The result is an array of candidate tonic values. The majority of these candidate tonics were almost the same, with only negligible differences. Finally, the median of the candidate tonics was taken as the tonic of the audio recording under analysis.

3.4 Algorithm

1. Get the input wav file.
2. Decompose the whole signal contained in the input file into frames.
3. Estimate the pitch of the signal contained in each frame and obtain the corresponding frequencies.
4. Look for more than one zero occurring together, to identify a note boundary.
5. Evaluate the median of the nonzero frequencies up to that boundary and fix it as a candidate tonic value.
6. Repeat from step 4.
7. Stop when all the extracted frequencies have been examined.
8. Fix the median of the candidate tonic values as the resultant tonic.

4. Results & Conclusions

Studies were conducted on recordings of musical performances by 63 renowned Carnatic musicians in 91 ragas. In the case of Melakartha ragas, at least two performances of each raga, by different musicians, were included. In addition, as cross-verification of the effectiveness of tonic detection, performances in 70 (of the 72) Melakartha ragas by a single musician were included. In this case, the tonic value was obtained from only one of these 70 performances; for the rest, the same tonic was assumed, since the performances were of a similar nature (Melakartha raga demonstrations in 70 ragas by Nookala Chinna Satyanarayana). This assumption proved sound, giving a success rate of around 92% with the assumed tonic.
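Steps 4-8 of the algorithm in Section 3.4 can be sketched as follows. The name `tonic_from_frame_pitches` is illustrative, and the per-frame pitch track is assumed to come from steps 1-3 (frame decomposition plus autocorrelation pitch estimation), with 0 marking unvoiced frames:

```python
import statistics

def tonic_from_frame_pitches(frame_pitches):
    """Split the per-frame pitch track at note boundaries (two or more
    consecutive zeros), take the median of each nonzero group as a
    candidate tonic, then return the median of all candidates."""
    candidates, group, zeros = [], [], 0
    for f in frame_pitches:
        if f > 0:
            group.append(f)
            zeros = 0
        else:
            zeros += 1
            if zeros >= 2 and group:      # note boundary reached
                candidates.append(statistics.median(group))
                group = []
    if group:                             # trailing note at end of track
        candidates.append(statistics.median(group))
    return statistics.median(candidates) if candidates else 0.0

# Toy pitch track: two notes around 147 Hz separated by a zero gap,
# with one noisy outlier (900 Hz) that the median discards.
track = [146.5, 147.1, 900.0, 146.9, 0, 0, 147.3, 146.7, 147.0]
print(round(tonic_from_frame_pitches(track), 1))  # 147.0
```

With spurious pitch values present, taking the median per group, and again across candidates, keeps the estimate anchored to the dominant sustained frequency, which is why it was preferred over the mean.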
Also, the tonic values were found to be independent of the medium of rendering: a tonic extracted from the violin also suited the vocalist, and vice versa. The method successfully detected ragas from polyphonic recordings, which is a major advancement over earlier methods. Of the seventy recordings in Melakartha ragas, 44 had a strong presence of mridangam and violin along with the vocal. The recordings used in this study were also of varying quality; still, the ragas were detected correctly.

4.1 Results Summary

Table 4: Various ragas - various performers

Average Sample Duration (in seconds)   Number of Ragas Tested   No. of Correctly Identified Ragas   Success Rate

Table 5: Various ragas - same performer

Average Sample Duration (in seconds)   Number of Ragas Tested   No. of Correctly Identified Ragas   Success Rate

Waveforms

Figure 1: Audio waveform (amplitude vs. time in seconds). Raga: Hamsadhwani (Arohana: Sa Ri2 Ga3 Pa Ni3 Sa; Avarohana: Sa Ni3 Pa Ga3 Ri2 Sa).
Hamsadhwani is an audava, or pentatonic, raga having five notes: Sa, Ri2, Ga3, Pa and Ni3.

Table 6: Note frequencies extracted from the waveform in Figure 1

Note   Extracted Frequency (Hz)   Computed RPR   Ideal RPR
Sa
Ri
Ga
Pa
Ni

Figure 2: Audio waveform (amplitude vs. time in seconds). Raga: Mayamalavagowla (Arohanam: Sa Ri1 Ga3 Ma1 Pa Da1 Ni3 Sa; Avarohanam: Sa Ni3 Da1 Pa Ma1 Ga3 Ri1 Sa).

Mayamalavagowla is a Sampoorna (heptatonic) raga having seven notes: Sa, Ri1, Ga3, Ma1, Pa, Da1 and Ni3.

Table 7: Note frequencies extracted from the waveform in Figure 2

Note   Extracted Frequency (Hz)   Computed RPR   Ideal RPR
Sa
Ri
Ga
Ma
Pa
Da
Ni

Figure 1 shows the waveform of the arohana of Hamsadhwani. Six notes, Sa, Ri2, Ga3, Pa, Ni3 and the upper (Thara sthayi) Sa, can be observed in the figure. Similarly, Figure 2 shows the waveform of the arohana of Mayamalavagowla, in which eight notes, Sa, Ri1, Ga3, Ma1, Pa, Da1, Ni3 and the upper (Thara sthayi) Sa, can be observed. Tables 6 and 7 show the frequency values extracted by the proposed method from the waveforms in Figures 1 and 2 respectively, along with the RPR values computed from the extracted frequencies and the ideal (theoretical) RPR values. The computed RPR values are very close to the ideal values, which demonstrates the accuracy of the proposed method.

References

[1] Brihaddesi of Sri Matanga Muni, edited and translated by Prem Lata Sharma.
[2] P. Chordia and A. Rae, "Raag Recognition using Pitch-Class and Pitch-Class Dyad Distributions," Proceedings of the 8th International Conference on Music Information Retrieval (ISMIR), Vienna, Austria, 2007.
[3] A. Krishnaswamy, "On the Twelve Basic Intervals in South Indian Classical Music," Audio Engineering Society Convention Paper, 115th Convention, New York, October 10-13, 2003.
[4] K. Balasubramanian, "Combinatorial Enumeration of Ragas (Scales of Integer Sequences) of Indian Music," Journal of Integer Sequences, Vol. 5, 2002.
[5] A. Krishnaswamy, "Application of Pitch Tracking to South Indian Classical Music," Proc. IEEE ICASSP 2003, Hong Kong, April 6-10, 2003.
[6] A. P. Klapuri, "Automatic Music Transcription as We Know it Today," Journal of New Music Research, Vol. 33, No. 3, 2004.
[7] D. R. Lapp, "The Physics of Music and Musical Instruments," Wright Center for Innovative Science Education, Tufts University, Medford, Massachusetts.
[8] K. N. James, "Realtime Raga Detection and Analysis using Computer," Ph.D. thesis, CUSAT, Kochi.
Automatic Rhythmic Notation from Single Voice Audio Sources Jack O Reilly, Shashwat Udit Introduction In this project we used machine learning technique to make estimations of rhythmic notation of a sung
More informationDrum Sound Identification for Polyphonic Music Using Template Adaptation and Matching Methods
Drum Sound Identification for Polyphonic Music Using Template Adaptation and Matching Methods Kazuyoshi Yoshii, Masataka Goto and Hiroshi G. Okuno Department of Intelligence Science and Technology National
More informationProc. of NCC 2010, Chennai, India A Melody Detection User Interface for Polyphonic Music
A Melody Detection User Interface for Polyphonic Music Sachin Pant, Vishweshwara Rao, and Preeti Rao Department of Electrical Engineering Indian Institute of Technology Bombay, Mumbai 400076, India Email:
More informationInternational Journal of Advance Engineering and Research Development MUSICAL INSTRUMENT IDENTIFICATION AND STATUS FINDING WITH MFCC
Scientific Journal of Impact Factor (SJIF): 5.71 International Journal of Advance Engineering and Research Development Volume 5, Issue 04, April -2018 e-issn (O): 2348-4470 p-issn (P): 2348-6406 MUSICAL
More informationAudio Feature Extraction for Corpus Analysis
Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends
More informationA QUERY BY EXAMPLE MUSIC RETRIEVAL ALGORITHM
A QUER B EAMPLE MUSIC RETRIEVAL ALGORITHM H. HARB AND L. CHEN Maths-Info department, Ecole Centrale de Lyon. 36, av. Guy de Collongue, 69134, Ecully, France, EUROPE E-mail: {hadi.harb, liming.chen}@ec-lyon.fr
More informationAcoustic Measurements Using Common Computer Accessories: Do Try This at Home. Dale H. Litwhiler, Terrance D. Lovell
Abstract Acoustic Measurements Using Common Computer Accessories: Do Try This at Home Dale H. Litwhiler, Terrance D. Lovell Penn State Berks-LehighValley College This paper presents some simple techniques
More informationMusic Representations
Lecture Music Processing Music Representations Meinard Müller International Audio Laboratories Erlangen meinard.mueller@audiolabs-erlangen.de Book: Fundamentals of Music Processing Meinard Müller Fundamentals
More informationAutomatic Music Clustering using Audio Attributes
Automatic Music Clustering using Audio Attributes Abhishek Sen BTech (Electronics) Veermata Jijabai Technological Institute (VJTI), Mumbai, India abhishekpsen@gmail.com Abstract Music brings people together,
More informationDAT335 Music Perception and Cognition Cogswell Polytechnical College Spring Week 6 Class Notes
DAT335 Music Perception and Cognition Cogswell Polytechnical College Spring 2009 Week 6 Class Notes Pitch Perception Introduction Pitch may be described as that attribute of auditory sensation in terms
More informationDOCTORAL DISSERTATIONS OF MAHATMA GANDHI UNIVERSITY A STUDY OF THE REFERENCES CITED
DOCTORAL DISSERTATIONS OF MAHATMA GANDHI UNIVERSITY A STUDY OF THE REFERENCES CITED UNNIKRISHNAN S* & ANNU GEORGE** *Assistant Librarian Sr. Sc. **Assistant Librarian Sel.Gr. University Library Mahatma
More informationA FUNCTIONAL CLASSIFICATION OF ONE INSTRUMENT S TIMBRES
A FUNCTIONAL CLASSIFICATION OF ONE INSTRUMENT S TIMBRES Panayiotis Kokoras School of Music Studies Aristotle University of Thessaloniki email@panayiotiskokoras.com Abstract. This article proposes a theoretical
More informationTopics in Computer Music Instrument Identification. Ioanna Karydi
Topics in Computer Music Instrument Identification Ioanna Karydi Presentation overview What is instrument identification? Sound attributes & Timbre Human performance The ideal algorithm Selected approaches
More informationMusic Theory. Fine Arts Curriculum Framework. Revised 2008
Music Theory Fine Arts Curriculum Framework Revised 2008 Course Title: Music Theory Course/Unit Credit: 1 Course Number: Teacher Licensure: Grades: 9-12 Music Theory Music Theory is a two-semester course
More informationAutomatic music transcription
Music transcription 1 Music transcription 2 Automatic music transcription Sources: * Klapuri, Introduction to music transcription, 2006. www.cs.tut.fi/sgn/arg/klap/amt-intro.pdf * Klapuri, Eronen, Astola:
More informationCharacteristics of Polyphonic Music Style and Markov Model of Pitch-Class Intervals
Characteristics of Polyphonic Music Style and Markov Model of Pitch-Class Intervals Eita Nakamura and Shinji Takaki National Institute of Informatics, Tokyo 101-8430, Japan eita.nakamura@gmail.com, takaki@nii.ac.jp
More informationInstrument Recognition in Polyphonic Mixtures Using Spectral Envelopes
Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes hello Jay Biernat Third author University of Rochester University of Rochester Affiliation3 words jbiernat@ur.rochester.edu author3@ismir.edu
More informationPHYSICS OF MUSIC. 1.) Charles Taylor, Exploring Music (Music Library ML3805 T )
REFERENCES: 1.) Charles Taylor, Exploring Music (Music Library ML3805 T225 1992) 2.) Juan Roederer, Physics and Psychophysics of Music (Music Library ML3805 R74 1995) 3.) Physics of Sound, writeup in this
More informationDepartment of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine. Project: Real-Time Speech Enhancement
Department of Electrical & Electronic Engineering Imperial College of Science, Technology and Medicine Project: Real-Time Speech Enhancement Introduction Telephones are increasingly being used in noisy
More informationStudy of White Gaussian Noise with Varying Signal to Noise Ratio in Speech Signal using Wavelet
American International Journal of Research in Science, Technology, Engineering & Mathematics Available online at http://www.iasir.net ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629
More informationCreative Computing II
Creative Computing II Christophe Rhodes c.rhodes@gold.ac.uk Autumn 2010, Wednesdays: 10:00 12:00: RHB307 & 14:00 16:00: WB316 Winter 2011, TBC The Ear The Ear Outer Ear Outer Ear: pinna: flap of skin;
More informationModes and Ragas: More Than just a Scale *
OpenStax-CNX module: m11633 1 Modes and Ragas: More Than just a Scale * Catherine Schmidt-Jones This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 3.0 Abstract
More informationPiano Transcription MUMT611 Presentation III 1 March, Hankinson, 1/15
Piano Transcription MUMT611 Presentation III 1 March, 2007 Hankinson, 1/15 Outline Introduction Techniques Comb Filtering & Autocorrelation HMMs Blackboard Systems & Fuzzy Logic Neural Networks Examples
More informationMusical Acoustics Lecture 16 Interval, Scales, Tuning and Temperament - I
Musical Acoustics, C. Bertulani 1 Musical Acoustics Lecture 16 Interval, Scales, Tuning and Temperament - I Notes and Tones Musical instruments cover useful range of 27 to 4200 Hz. 2 Ear: pitch discrimination
More informationProceedings of the 7th WSEAS International Conference on Acoustics & Music: Theory & Applications, Cavtat, Croatia, June 13-15, 2006 (pp54-59)
Common-tone Relationships Constructed Among Scales Tuned in Simple Ratios of the Harmonic Series and Expressed as Values in Cents of Twelve-tone Equal Temperament PETER LUCAS HULEN Department of Music
More informationThe Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng
The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng S. Zhu, P. Ji, W. Kuang and J. Yang Institute of Acoustics, CAS, O.21, Bei-Si-huan-Xi Road, 100190 Beijing,
More informationAutomatic characterization of ornamentation from bassoon recordings for expressive synthesis
Automatic characterization of ornamentation from bassoon recordings for expressive synthesis Montserrat Puiggròs, Emilia Gómez, Rafael Ramírez, Xavier Serra Music technology Group Universitat Pompeu Fabra
More informationArticle Music Melodic Pattern Detection with Pitch Estimation Algorithms
Article Music Melodic Pattern Detection with Pitch Estimation Algorithms Makarand Velankar 1, *, Amod Deshpande 2 and Dr. Parag Kulkarni 3 1 Faculty Cummins College of Engineering and Research Scholar
More informationMUSI-6201 Computational Music Analysis
MUSI-6201 Computational Music Analysis Part 9.1: Genre Classification alexander lerch November 4, 2015 temporal analysis overview text book Chapter 8: Musical Genre, Similarity, and Mood (pp. 151 155)
More informationModes and Ragas: More Than just a Scale
Connexions module: m11633 1 Modes and Ragas: More Than just a Scale Catherine Schmidt-Jones This work is produced by The Connexions Project and licensed under the Creative Commons Attribution License Abstract
More informationTONAL HIERARCHIES, IN WHICH SETS OF PITCH
Probing Modulations in Carnātic Music 367 REAL-TIME PROBING OF MODULATIONS IN SOUTH INDIAN CLASSICAL (CARNĀTIC) MUSIC BY INDIAN AND WESTERN MUSICIANS RACHNA RAMAN &W.JAY DOWLING The University of Texas
More informationPHY 103: Scales and Musical Temperament. Segev BenZvi Department of Physics and Astronomy University of Rochester
PHY 103: Scales and Musical Temperament Segev BenZvi Department of Physics and Astronomy University of Rochester Musical Structure We ve talked a lot about the physics of producing sounds in instruments
More informationAN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY
AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY Eugene Mikyung Kim Department of Music Technology, Korea National University of Arts eugene@u.northwestern.edu ABSTRACT
More informationModes and Ragas: More Than just a Scale
OpenStax-CNX module: m11633 1 Modes and Ragas: More Than just a Scale Catherine Schmidt-Jones This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 3.0 Abstract
More informationSpectral analysis of Gamaka Swaras of Indian music
ndianjournal of Traditional Knowledge Vol.5(4), October 2006, pp. 439-444 Spectral analysis of Gamaka Swaras of ndian music Karuna Nagarajan, Heisnam Jina Devi, N V C Swamy* & H R Nagendra Swami Vivekananda
More informationAuditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are
In: E. Bruce Goldstein (Ed) Encyclopedia of Perception, Volume 1, Sage, 2009, pp 160-164. Auditory Illusions Diana Deutsch The sounds we perceive do not always correspond to those that are presented. When
More informationWHAT INTERVALS DO INDIANS SING?
T WHAT INTERVALS DO INDIANS SING? BY FRANCES DENSMORE HE study of Indian music is inseparable from a study of Indian customs and culture. If we were to base conclusions upon the phonograph record of an
More informationIMPROVED MELODIC SEQUENCE MATCHING FOR QUERY BASED SEARCHING IN INDIAN CLASSICAL MUSIC
IMPROVED MELODIC SEQUENCE MATCHING FOR QUERY BASED SEARCHING IN INDIAN CLASSICAL MUSIC Ashwin Lele #, Saurabh Pinjani #, Kaustuv Kanti Ganguli, and Preeti Rao Department of Electrical Engineering, Indian
More informationLecture 9 Source Separation
10420CS 573100 音樂資訊檢索 Music Information Retrieval Lecture 9 Source Separation Yi-Hsuan Yang Ph.D. http://www.citi.sinica.edu.tw/pages/yang/ yang@citi.sinica.edu.tw Music & Audio Computing Lab, Research
More informationVoice & Music Pattern Extraction: A Review
Voice & Music Pattern Extraction: A Review 1 Pooja Gautam 1 and B S Kaushik 2 Electronics & Telecommunication Department RCET, Bhilai, Bhilai (C.G.) India pooja0309pari@gmail.com 2 Electrical & Instrumentation
More informationEfficient Computer-Aided Pitch Track and Note Estimation for Scientific Applications. Matthias Mauch Chris Cannam György Fazekas
Efficient Computer-Aided Pitch Track and Note Estimation for Scientific Applications Matthias Mauch Chris Cannam György Fazekas! 1 Matthias Mauch, Chris Cannam, George Fazekas Problem Intonation in Unaccompanied
More informationLecture 7: Music
Matthew Schwartz Lecture 7: Music Why do notes sound good? In the previous lecture, we saw that if you pluck a string, it will excite various frequencies. The amplitude of each frequency which is excited
More informationAnalysis and Clustering of Musical Compositions using Melody-based Features
Analysis and Clustering of Musical Compositions using Melody-based Features Isaac Caswell Erika Ji December 13, 2013 Abstract This paper demonstrates that melodic structure fundamentally differentiates
More informationMUSICAL INSTRUMENT IDENTIFICATION BASED ON HARMONIC TEMPORAL TIMBRE FEATURES
MUSICAL INSTRUMENT IDENTIFICATION BASED ON HARMONIC TEMPORAL TIMBRE FEATURES Jun Wu, Yu Kitano, Stanislaw Andrzej Raczynski, Shigeki Miyabe, Takuya Nishimoto, Nobutaka Ono and Shigeki Sagayama The Graduate
More informationChord Classification of an Audio Signal using Artificial Neural Network
Chord Classification of an Audio Signal using Artificial Neural Network Ronesh Shrestha Student, Department of Electrical and Electronic Engineering, Kathmandu University, Dhulikhel, Nepal ---------------------------------------------------------------------***---------------------------------------------------------------------
More informationEfficient Vocal Melody Extraction from Polyphonic Music Signals
http://dx.doi.org/1.5755/j1.eee.19.6.4575 ELEKTRONIKA IR ELEKTROTECHNIKA, ISSN 1392-1215, VOL. 19, NO. 6, 213 Efficient Vocal Melody Extraction from Polyphonic Music Signals G. Yao 1,2, Y. Zheng 1,2, L.
More informationLOUDNESS EFFECT OF THE DIFFERENT TONES ON THE TIMBRE SUBJECTIVE PERCEPTION EXPERIMENT OF ERHU
The 21 st International Congress on Sound and Vibration 13-17 July, 2014, Beijing/China LOUDNESS EFFECT OF THE DIFFERENT TONES ON THE TIMBRE SUBJECTIVE PERCEPTION EXPERIMENT OF ERHU Siyu Zhu, Peifeng Ji,
More informationSemi-supervised Musical Instrument Recognition
Semi-supervised Musical Instrument Recognition Master s Thesis Presentation Aleksandr Diment 1 1 Tampere niversity of Technology, Finland Supervisors: Adj.Prof. Tuomas Virtanen, MSc Toni Heittola 17 May
More informationAppendix A Types of Recorded Chords
Appendix A Types of Recorded Chords In this appendix, detailed lists of the types of recorded chords are presented. These lists include: The conventional name of the chord [13, 15]. The intervals between
More informationCHAPTER 4 SEGMENTATION AND FEATURE EXTRACTION
69 CHAPTER 4 SEGMENTATION AND FEATURE EXTRACTION According to the overall architecture of the system discussed in Chapter 3, we need to carry out pre-processing, segmentation and feature extraction. This
More informationA PERPLEXITY BASED COVER SONG MATCHING SYSTEM FOR SHORT LENGTH QUERIES
12th International Society for Music Information Retrieval Conference (ISMIR 2011) A PERPLEXITY BASED COVER SONG MATCHING SYSTEM FOR SHORT LENGTH QUERIES Erdem Unal 1 Elaine Chew 2 Panayiotis Georgiou
More informationMusical Signal Processing with LabVIEW Introduction to Audio and Musical Signals. By: Ed Doering
Musical Signal Processing with LabVIEW Introduction to Audio and Musical Signals By: Ed Doering Musical Signal Processing with LabVIEW Introduction to Audio and Musical Signals By: Ed Doering Online:
More informationMusic Information Retrieval Using Audio Input
Music Information Retrieval Using Audio Input Lloyd A. Smith, Rodger J. McNab and Ian H. Witten Department of Computer Science University of Waikato Private Bag 35 Hamilton, New Zealand {las, rjmcnab,
More informationListening with Awareness. Hassan Azad. (The author is a mathematician by profession and a senior student of sitar-nawaz Ustad Mohammad Shareef Khan)
Listening with Awareness Hassan Azad (The author is a mathematician by profession and a senior student of sitar-nawaz Ustad Mohammad Shareef Khan) This essay is addressed to listeners of Raag music who
More informationMelody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng
Melody Extraction from Generic Audio Clips Thaminda Edirisooriya, Hansohl Kim, Connie Zeng Introduction In this project we were interested in extracting the melody from generic audio files. Due to the
More informationIntroductions to Music Information Retrieval
Introductions to Music Information Retrieval ECE 272/472 Audio Signal Processing Bochen Li University of Rochester Wish List For music learners/performers While I play the piano, turn the page for me Tell
More informationAN APPROACH FOR MELODY EXTRACTION FROM POLYPHONIC AUDIO: USING PERCEPTUAL PRINCIPLES AND MELODIC SMOOTHNESS
AN APPROACH FOR MELODY EXTRACTION FROM POLYPHONIC AUDIO: USING PERCEPTUAL PRINCIPLES AND MELODIC SMOOTHNESS Rui Pedro Paiva CISUC Centre for Informatics and Systems of the University of Coimbra Department
More informationReducing False Positives in Video Shot Detection
Reducing False Positives in Video Shot Detection Nithya Manickam Computer Science & Engineering Department Indian Institute of Technology, Bombay Powai, India - 400076 mnitya@cse.iitb.ac.in Sharat Chandran
More informationThe Pythagorean Scale and Just Intonation
The Pythagorean Scale and Just Intonation Gareth E. Roberts Department of Mathematics and Computer Science College of the Holy Cross Worcester, MA Topics in Mathematics: Math and Music MATH 110 Spring
More informationAn interesting comparison between a morning raga with an evening one using graphical statistics
Saggi An interesting comparison between a morning raga with an evening one using graphical statistics by Soubhik Chakraborty,* Rayalla Ranganayakulu,** Shivee Chauhan,** Sandeep Singh Solanki,** Kartik
More informationStatistical Modeling and Retrieval of Polyphonic Music
Statistical Modeling and Retrieval of Polyphonic Music Erdem Unal Panayiotis G. Georgiou and Shrikanth S. Narayanan Speech Analysis and Interpretation Laboratory University of Southern California Los Angeles,
More informationOutline. Why do we classify? Audio Classification
Outline Introduction Music Information Retrieval Classification Process Steps Pitch Histograms Multiple Pitch Detection Algorithm Musical Genre Classification Implementation Future Work Why do we classify
More informationMelody Retrieval On The Web
Melody Retrieval On The Web Thesis proposal for the degree of Master of Science at the Massachusetts Institute of Technology M.I.T Media Laboratory Fall 2000 Thesis supervisor: Barry Vercoe Professor,
More information