MUSIC NEUROTECHNOLOGY: FROM MUSIC OF THE SPHERES TO MUSIC OF THE HEMISPHERES


Symmetry: Culture and Science Vol. 26, No. x, page_first-page_last, 2015

Eduardo Reck Miranda
Interdisciplinary Centre for Computer Music Research (ICCMR), Faculty of Arts and Humanities, Plymouth University, The House, Plymouth PL4 8AA, United Kingdom

Abstract: The emerging field of Music Neurotechnology combines musical research with other fields such as Artificial Intelligence, Bioengineering, Neurosciences, Symmetry and Medicine. Musicians have an extraordinary opportunity nowadays to develop new approaches to composition informed by our understanding of the brain, which would have been unthinkable a few years ago. This paper reports on the outcomes of two projects: one is concerned with the development of brain-computer music interfacing (BCMI) technology and the other is looking into developing methods to express brain information by means of music. A BCMI system allows a person to control musical devices by means of commands expressed by brain signals, which are detected by means of brain monitoring technology. We are interested in developing BCMI aimed at medical applications and people with special needs, in particular people with severe physical disability. In the second half of the paper we introduce the inside story of Corpus Callosum, a piece for chamber orchestra composed through a new approach to musical composition using information from brain scans.

1. INTRODUCTION

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music representing brain activity? What would the music of our brains sound like? These are some of the questions addressed by our research into Music Neurotechnology [1]. This paper presents the outcomes of two projects that epitomize the research that we have been conducting in the field of Music Neurotechnology at Plymouth University's Interdisciplinary Centre for Computer Music Research (ICCMR): one is concerned with the development of brain-computer music interfacing (BCMI) technology and the other is looking into developing methods to represent and express brain information by means of music, literally and metaphorically. The work presented here in a way continues a tradition in musical research that can be traced back to the Middle Ages, albeit with a contemporary outlook.

[1] The term Music Neurotechnology appeared in print for the first time in 2009, in the editorial of Computer Music Journal, volume 33, number 1, page 1.

Latin translations of enigmatic but allegedly illuminating texts from ancient Egypt and Greece formed the basis of the European scholastic understanding of the world from the late Middle Ages through to the Renaissance. Music had a privileged place in the so-called Hermetic Tradition that flourished at the time. It was believed that music evoked cosmic spirits that could influence our body and soul, and to a certain extent, social behavior and politics. Scholars looked into developing astrological music, using combinations of tones representing numerical relationships and properties associated with planets and stars (James 1993). They aspired to produce music that could establish sympathetic relationships between the human soul and cosmic forces, and which might even have medicinal powers.

The belief that astrological music had the power to influence our body and mind stemmed from the assumption that there were symmetrical correspondences between the notions of music of the spheres (i.e., cosmic music), music of the human organism (i.e., harmonious and healthy functioning of our body) and musica instrumentalis (i.e., ordinary music played on musical instruments). Such symmetry provided support at the time for explanations of how the human organism reacts to music. For instance, if a piece of musica instrumentalis embedded numerical information that evoked the planet Venus, then the person who heard it would have become (or remained) calm and peaceful, because this planet was associated with love, calmness, softness, and so on.

Obviously, our understanding of the world has evolved beyond recognition since the Renaissance period, and so has the notion of music. Paradoxically, however, those medieval scholars elicited a number of issues and ideas that are still pertinent today, but which can be tackled with far more sophisticated theoretic and scientific tools than before. For instance, take the notion of symmetrical correspondences. Symmetry is indeed an important theoretical scaffold to describe the world. But today we have a much more advanced understanding of the concept of Symmetry in nature than we had 500 years ago: the notion that symmetrical natural phenomena are prone to rupture constitutes one of the pillars of our modern understanding of the evolution of material structures and complex systems, such as the human brain. For instance, in a paper presented at a conference on music and science held in 2014 at the Moscow State Tchaikovsky Conservatory, György Darvas remarked that the asymmetric nature of the human brain is a consequence of a series of morphological violations that have been taking place at molecular level since primordial times (Darvas 2007).

Another noteworthy medieval concept, which still holds its currency today, is the notion that different musical traits can influence our body in specific ways, in particular our brain. Burgeoning research into musical neuroscience has been providing increasingly strong evidence that music does indeed affect brain functioning and plasticity. For instance, it is possible to produce neural correlates of emotions with music (Schmidt and Trainor 2001; Daly et al. 2014) and it has been demonstrated that music can be used as a complementary non-pharmacological treatment for symptoms of dementia (Svansdottir and Snaedal 2006), to cite but two examples.

The following section introduces our BCMI project, which is aimed at the development of assistive music technology to enable people with severe physical disabilities to make music using brain signals. In addition to building the technology, we are particularly interested in developing approaches to compose music with it. Next, we introduce Corpus Callosum, a piece of music for chamber orchestra, which was composed using fMRI brain scans [2]. Here we explain the brain-inspired methods that we developed for musical composition. The paper ends with a brief concluding commentary.

2 BCMI: BRAIN-COMPUTER MUSIC INTERFACING

Brain-computer interfacing technology, or BCI, allows a person to control devices by means of commands expressed by brain signals, which are detected by means of brain monitoring technology (Dornhege et al. 2007). We are interested in developing brain-computer music interfacing technology (Figure 1), or BCMI, aimed at people with special needs and music therapy, in particular people with severe physical disability who have relatively preserved cognitive functions. Severe brain injury, spinal cord injury and locked-in syndrome result in weak, minimal or no active movement, which therefore prevents the use of gesture-based musical instruments. These patient groups are currently either excluded from music recreation and therapy, or are left to engage in a considerably less active manner through listening (or receptive) methods only (Miranda et al. 2011).

2.1 Approaches to BCMI

Currently, the most viable and practical method of detecting brain signals for BCMI is through the electroencephalogram, abbreviated as EEG [3], recorded with electrodes placed on the scalp. The EEG expresses the overall electrical activity of millions of neurones, but it is a difficult signal to handle because it is extremely faint: it is filtered by the membranes that separate the cortex from the skull (the meninges), the skull itself and the scalp.

[2] fMRI stands for functional magnetic resonance imaging. It is a technique for measuring brain activity, which works by detecting the changes in blood oxygenation and flow that occur in response to neural activity. It works upon the principle that when a brain area is more active it consumes more oxygen, and to meet this increased demand blood flow increases to the active area. fMRI can be used to show which parts of the brain are involved in a particular mental process.

[3] The EEG is a measurement of brainwaves detected using electrodes placed on the scalp. It is measured as the voltage difference between two or more electrodes on the surface of the scalp, one of which is taken as a reference. Other methods for measuring brain activity include MEG (magnetoencephalography), PET (positron emission tomography) and fMRI (functional magnetic resonance imaging), but they are not practical for BCI.

This signal needs to be amplified significantly and analyzed in order to be used in a BCI. In BCI research, it is often assumed that: (a) there is information in the EEG that corresponds to different cognitive tasks, or at least a function of some sort, (b) this information can be detected, and (c) users can be trained to produce EEG with such information voluntarily (Miranda 2010).

In general, power spectrum analysis is the most commonly used method to analyze the EEG; please refer to (Miranda and Castet 2014) for an overview of EEG analysis methods. In simple terms, power spectrum analysis (based on the Fast Fourier Transform, or FFT) breaks the EEG signal into different frequency bands and reveals the distribution of power between them. This is useful because it is believed that specific distributions of power in the spectrum of the EEG can encode different cognitive behaviors.

Figure 1: A BCMI system extracts information from the user's EEG to control musical systems. In this photo, a person is playing a Disklavier MIDI piano through a BCMI in our research laboratory.

As far as BCI systems are concerned, the most important frequency activity in the EEG spectrum sits below 40Hz. There are five, possibly six, recognized bands of EEG activity below 40Hz, also referred to as EEG rhythms, which are often associated with specific states of mind. For instance, the frequencies falling between 8Hz and 13Hz are referred to as alpha rhythms and are usually associated with a state of relaxed wakefulness. The exact boundaries of these bands are not so clearly defined and the meaning of these associations can be contentious. In practice, however, the exact meaning of EEG rhythms is not so crucial for a BCI system. Rather, what is crucial is to be able to establish whether or not users can produce power within distinct frequency bands voluntarily. For instance, we have used alpha rhythms to implement an early proof-of-concept BCMI system, which enabled a person to switch between two types of generative algorithms to produce piano music (Figure 1) in the style of Robert Schumann (when alpha rhythms were detected in the EEG) and Ludwig van Beethoven (when alpha rhythms were not detected) (Miranda 2006).
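As a rough illustration of this kind of analysis, the sketch below estimates band power from a single EEG channel with an FFT and uses the ratio of alpha to beta power as a crude switch between two generative styles. It is a minimal sketch and not the ICCMR implementation; the sampling rate, threshold and style labels are illustrative assumptions.

# Minimal sketch (not the ICCMR implementation): estimate EEG band power with an
# FFT and use the alpha/beta ratio as a crude switch between two generative styles.
import numpy as np

def band_power(eeg, fs, low, high):
    """Mean spectral power of a single-channel EEG segment between low and high Hz."""
    windowed = eeg * np.hanning(len(eeg))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def alpha_detected(eeg, fs, threshold=2.0):
    """Crude decision: is alpha (8-13 Hz) power at least 'threshold' times beta (13-30 Hz) power?"""
    return band_power(eeg, fs, 8, 13) > threshold * band_power(eeg, fs, 13, 30)

# Hypothetical usage: choose a generative style from one second of EEG.
fs = 256                                  # assumed sampling rate in Hz
eeg_segment = np.random.randn(fs)         # stand-in for a real recording
style = "Schumann-like" if alpha_detected(eeg_segment, fs) else "Beethoven-like"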

Broadly speaking, there are two approaches to steering the EEG for a BCI: conscious effort and operant conditioning. Conscious effort induces changes in the EEG by engaging in specific cognitive tasks designed to produce specific EEG activity (Miranda et al. 2004; Curran and Stokes 2003). The cognitive task that is most often used in this case is motor imagery, because it is possible to detect changes in the EEG of a subject imagining the movement of a limb, such as a hand (Dornhege et al. 2007). Operant conditioning involves the presentation of a task in conjunction with some form of feedback, which allows the user to develop a somewhat unconscious control of the EEG (Kaplan et al. 2005).

In between the two aforementioned approaches sits a paradigm referred to as evoked potentials, which is the paradigm that we have adopted for the system introduced below. Evoked potentials (EP) are spikes that appear in the EEG in response to external stimuli. EP can be evoked from auditory, visual or tactile stimuli, producing auditory (AEP), visual (VEP) and somatosensory (SSEP) evoked potentials, respectively. It is very difficult to detect the electrophysiological response to a single event in an ongoing EEG stream. However, if the person is subjected to repeated stimulation at short intervals (e.g., 9 repetitions per second, or 9Hz), then the brain's response to each subsequent stimulus is evoked before the response to the prior stimulus has decayed. This prevents the signal from returning to a baseline state. Rather, it produces a steady-state response, which can be detected with no major difficulties.

Steady-state visual evoked potential (SSVEP) is a robust paradigm for a BCI, provided the user is not severely visually impaired. Typically, visual targets representing tasks to be performed are presented to the user on a computer monitor. These could be spelling words from an alphabet or selecting directions for a wheelchair to move, and so on. Each target is encoded by a flashing visual pattern reversing at a unique frequency, e.g., 11Hz. In order to select a target, the user simply directs their gaze at the flashing pattern corresponding to the action he or she would like to perform. As the user's spotlight of attention falls over a particular target, the frequency of that target's unique pattern reversal rate can be accurately detected in his or her EEG through spectral analysis. It is possible to classify not only a user's choice of target, but also the extent to which he or she is attending to it. This gives scope for SSVEP-based BCIs where each target is not a simple binary switch but can represent an array of options depending on the user's level of attention. Effectively, each target of an SSVEP-based BCI system can be implemented as a switch with a potentiometer. This immediately suggests a number of musical applications.

2.2 ICCMR's first SSVEP-based BCMI system

In 2011 we completed the implementation of our first SSVEP-based BCMI system, which we tested with a patient with locked-in syndrome at the Royal Hospital for Neuro-disability, in London.
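The signal-processing core of the SSVEP paradigm described above can be sketched in a few lines: estimate the EEG power at each target's flicker frequency and take the strongest response as the attended target, keeping its magnitude as a continuous measure of attention. This is a minimal sketch under assumed parameters (window length, frequencies, bandwidth), not the detection pipeline of our system.

# Minimal sketch of SSVEP frequency tagging (assumed parameters, not the actual
# ICCMR pipeline): measure EEG power at each target's flicker frequency and treat
# the strongest response as the attended target; its magnitude doubles as a
# continuous "attention" value.
import numpy as np

def power_at(eeg, fs, freq, bandwidth=0.5):
    """Power in a narrow band centred on 'freq' Hz."""
    windowed = eeg * np.hanning(len(eeg))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= freq - bandwidth) & (freqs <= freq + bandwidth)
    return spectrum[mask].mean()

def classify_target(eeg, fs, target_freqs=(7.0, 9.0, 11.0, 15.0)):
    """Return (index of the attended target, power at its flicker frequency)."""
    powers = [power_at(eeg, fs, f) for f in target_freqs]
    best = int(np.argmax(powers))
    return best, powers[best]

The returned power value is the "potentiometer" mentioned above: it grows as the user attends more strongly to the chosen target.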

Figure 2: A patient with locked-in syndrome testing the BCMI.

The system comprised four targets, as shown on the computer screen in front of the patient in Figure 2. Each target image represents a different musical instrument and a sequence of notes (Figure 3). Each image flashes, reversing its colour (in this case red), at a different frequency: 7Hz, 9Hz, 11Hz and 15Hz, respectively. Thus, for instance, if the person gazes at the image flashing at 15Hz, then the system will activate the xylophone instrument and will produce a melody using the sequence of six notes associated with this target; these notes are set beforehand, and the number of notes can be other than six. The more the person attends to this icon, the more prominent is the magnitude of his or her brain's SSVEP response to this stimulus, and vice-versa. This produces a varying control signal, which is used to produce the melody. It also provides visual feedback to the user: the size of the icon increases or decreases as a function of this control signal.

The melody is generated as follows: the sequence of six notes is stored in an array, whose index varies from one to six. The amplitude of the SSVEP signal is normalized so that it can be used as an index that slides up and down through the array. As the signal varies, the corresponding index triggers the respective musical notes stored in the array (Figure 4).

The system requires just three electrodes on the scalp of the user: a pair placed over the visual cortex and a ground electrode placed on the forehead. We use active electrodes and their impedances are kept below 5Ω by parting hair at openings in the electrode cap and using conductive gel. Filters are used to reduce interference from AC mains noise and artifacts such as those generated by eye blinks or facial muscle movements. The SSVEP data were then filtered via parallel filters to extract band power across the frequencies corresponding to the flashing stimuli.
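A minimal sketch of that note-selection step follows: the normalized SSVEP amplitude is mapped onto the index of a six-note array, so rising and falling attention walks up and down the melody. The note names and normalization bounds are illustrative assumptions, not the values used in the trial.

# Minimal sketch of the note-selection mechanism: a normalized SSVEP magnitude
# slides an index through a pre-set array of six notes. Note names and the
# normalization bounds are illustrative assumptions.
NOTES = ["C4", "D4", "E4", "G4", "A4", "C5"]   # six notes associated with one target

def select_note(ssvep_magnitude, mag_min, mag_max, notes=NOTES):
    """Map an SSVEP magnitude onto an index of the note array and return the note."""
    span = max(mag_max - mag_min, 1e-9)
    normalized = min(max((ssvep_magnitude - mag_min) / span, 0.0), 1.0)
    index = round(normalized * (len(notes) - 1))
    return notes[index]

# Successive calls with a rising and falling magnitude walk up and down the
# array, producing the melody described above.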

Figure 3: Each target image is associated with a musical instrument and a sequence of notes.

Figure 4: Notes are selected according to the level of the SSVEP signal.

The patient took approximately 15 minutes to learn how to use the system, and she quickly learned how to make melodies by increasing and decreasing the level of her SSVEP signal. Suggestions and criticism from the staff of the hospital and the patient with respect to improvements and potential further developments were collected (Miranda et al. 2011). Two important challenges emerged from this exercise:

(a) The system produced synthesized sounds and the music sounded mechanical; it lacked expressivity. Therefore, the music should preferably be played on real acoustic musical instruments.

(b) Our system enabled a one-to-one interaction with a musical synthesiser. However, it was immediately apparent that it would be desirable to design a system that would promote interaction amongst the participants. Therefore, a BCMI should enable a group of patients to make music together.

2.3 ICCMR's second SSVEP-based BCMI and Activating Memory

In order to address the abovementioned challenges, we produced a new version of the BCMI system, and a bespoke composition entitled Activating Memory. Activating Memory is a piece for a string quartet and a BCMI quartet. Each member of the BCMI quartet is furnished with our new SSVEP-based BCMI system, which enables him or her to generate a musical score in real time. That is, each member of the BCMI quartet generates a part for the string quartet, which is displayed on a computer screen for the respective performer to sight-read during a performance.

The new BCMI system works similarly to the one described in section 2.2, with the fundamental difference that the visual targets are associated with short musical phrases. Moreover, instead of flashing images on a computer monitor, we designed a device with flashing LEDs and LCD screens, which display what the LEDs represent (Figure 5). The LCD provides an efficient way to change the set of options available for selection. This device also increases the SSVEP response to the stimuli, because it enables us to produce more precise flashing rates than the ones we were able to produce using standard computer monitors.

Activating Memory was composed as a musical game involving four players. The composition method is inspired by the Arca Musurgica, an extraordinary musical device built in 1650, in Rome, by Jesuit Father Athanasius Kircher, and described in his book Musurgia Universalis (Kircher 1650). Kircher's device consisted of a box holding a number of wooden slats. Each of them contained a set of numbers, corresponding to sets of notes and rhythmic patterns. These materials could be combined in a number of ways to form compositions. In the 18th century, a number of musical dice games inspired by the Arca Musurgica appeared in Europe, including Wolfgang Amadeus Mozart's famous Musikalisches Würfelspiel. For this piece, Mozart wrote a series of short phrases, which could be selected randomly, based on dice rolls, and sequenced to form minuets and trios (Mozart 2003).

Figure 5: Photo of our new SSVEP stimuli device. In Activating Memory, different choices of musical riff are displayed on the LCD screens instead of the numbers 1, 2, 3 and 4.

Figure 6: An example of two sets of four musical riffs on offer for two subsequent sections of the violoncello part.

Activating Memory is composed on the fly by combining sets of programmed musical phrases, or riffs. For each section, the system provides four choices of riff for each part of the string quartet, which are selected by the BCMI quartet (Figure 6). The selected riffs for each instrument are relayed to the computer monitors facing the string quartet for sight-reading. While the string quartet is playing the riffs for a section, the system provides the BCMI quartet with another set of choices for the next section. Once the current section has been played, the newly chosen riffs for each instrument are relayed to the musicians, and so on. In order to give the BCMI quartet enough time to make choices, the musicians repeat the respective riffs four times.

The system follows an internal metronome, which guarantees synchronization of all parts.

Activating Memory has been publicly performed on a number of occasions, but not with disabled patients yet. This allowed us to make final adjustments to the system and the music. As we write this paper, we are in the process of organizing a trial at the hospital with four patients in the summer. In the meantime, we have already started building a new version of the system, which will include usage of the amplitude of the SSVEP signal and a new EEG measurement that is able to reveal the affective (or "emotional") state of the BCMI user (Eaton et al. 2014).

2.4 Symmetry of brain activation for a BCMI with emotions

The circumplex model of affect introduced in (Russell 1980) provides a way of parameterizing emotional responses to musical stimuli in two dimensions: valence (or positivity) and arousal (or energy of activation).

Figure 7: Quadrants with 12 discrete affective descriptors from Russell's circumplex model (Eaton et al. 2014).

We re-organized the circumplex model to create 12 affective categories, which can be indexed via Cartesian co-ordinates (Figure 7). Thus, an emotional trajectory moving from pleased, via happy, to excited can be represented by a vector which gradually increases in arousal whilst maintaining positive valence: {(v2, a4), (v2, a5), (v2, a6)}.

In order to measure EEG correlates of valence and arousal we are placing electrodes on the front of the scalp, over the prefrontal cortex, which is a region of the brain believed to handle emotion (Ramirez and Vamvakousis 2012). Strong EEG activity in the spectral region between 8Hz and 13Hz is known to indicate a relaxed state of mind, and this, combined with increased activity in the spectral region between 13Hz and 30Hz, indicates arousal, i.e., alertness in mental activity. The symmetry of activation levels across the left and right hemispheres indicates a difference between a motivated approach and a more negative, withdrawal type of mental state, which is directly related to valence. We are currently designing and conducting a number of tests with transformative music algorithms to generate music representing the affective descriptors shown in Figure 7 (Williams et al. 2014). We envisage a scenario whereby those transformative algorithms would alter the SSVEP-selected riffs in order to convey a specific mood or mood trajectory.

3 MUSICAL CREATIVITY WITH NEUROSCIENCE

Musicians have an extraordinary opportunity today to develop new approaches to composition that would have been unthinkable a few years ago. The second half of this paper will examine the impact of Music Neurotechnology on musical creativity.

Corpus Callosum is a piece for chamber orchestra composed by means of a new approach to musical composition combining sophisticated brain imaging technology, musical Artificial Intelligence and ideas from Symmetry theory. In Chapter 12 of the book Guide to Brain-Computer Music Interfacing (Miranda and Castet 2014) we introduced this new approach in the context of a composition for orchestra entitled Symphony of Minds Listening, which is a precursor of Corpus Callosum.

Symphony of Minds Listening is an experimental symphonic piece in three movements based on the fMRI brain scans taken from three different persons while they listened to the second movement of Ludwig van Beethoven's Seventh Symphony: a ballerina, a philosopher and a composer (the author of this paper). In simple terms, we deconstructed the Beethoven movement into its essential elements and stored them with information representing their structural features. Then, we reassembled these elements into a new composition, using the same instrumentation as Beethoven's 7th Symphony, but with a twist: the fMRI information influenced the process of reassembling the music. Please refer to (Miranda et al. 2014) for more details; below we present only a brief overview of this method.

Figure 8: A typical representation of an fMRI snapshot, showing 8 transversal slices of the brain.

The fMRI technique measures brain activity by detecting associated changes in blood flow. The measurements can be represented graphically by colour-coding the strength of activation across the brain. Figure 8 shows a typical representation of an fMRI scan of a person listening to music, displaying the activity of the brain at a specific window of time. In this case, each time window lasts for two seconds. The figure shows 8 planar surfaces, or slices, from the top to the bottom of the brain, and the respective activity detected in these areas. Figure 9 shows an example of a 3D rendition of such an fMRI scan, devised by Dan Lloyd, of Trinity College in Hartford, USA: it displays different areas of the brain, represented by different colours (or shades of grey), responding in a coordinated manner to the music.

Each scanning session generated sets of fMRI data, each of which we associated with a measure of the second movement of Beethoven's 7th Symphony. This is shown schematically in Figure 10.

Figure 9: An artistic 3D rendering of an fMRI scan.

The score of Beethoven's movement was deconstructed by means of a piece of software of our own design, which extracted statistical information about the structure of the music and used this information to reconstruct it. The process of reconstruction, however, is influenced by the fMRI data. Effectively, during the reconstruction process the fMRI data altered the original music. Not surprisingly, the scanned fMRI data differed amongst the three listeners. Therefore, brain activity from three different minds yielded three different movements for the resulting composition, each of which bears varying degrees of resemblance to the original symphony.

Figure 10: The result of a scanning session is a set of fMRI data for each measure of Beethoven's piece. (Note: this is only a schematic representation; the brain imaging does not correspond to the actual music shown.)

3.1 Corpus Callosum, creativity and brain asymmetry

Corpus Callosum, for an ensemble of 25 performers, revisits and further develops the compositional methods of Symphony of Minds Listening. Our ambition to convey literal musical representations of the data, while remaining faithful to the original form of Beethoven's movement, heavily constrained our musical imagination during the composition of Symphony of Minds Listening. In Corpus Callosum, however, we allowed more freedom in the way we handled the materials produced by the computer. And this time we worked with fMRI data from this author's own brain only.

The title of the composition refers to the part of the brain that connects its left and right hemispheres and facilitates communication between them: the corpus callosum. The human brain is divided into two asymmetric hemispheres. The left hemisphere is largely engaged in processing the details of things.

It is often associated with a more objective, or scientific, knowledge of the world. Conversely, the right hemisphere is largely engaged in taking a more holistic view of things and is often associated with a more subjective, or poetic, interpretation of the world. In a normal brain, the two hemispheres work together, are highly interconnected, and interact through the corpus callosum.

Philosopher Friedrich Nietzsche suggested that great artistic creations could only result from the articulation of a mythological dichotomy referred to as the Apollonian and the Dionysian. In ancient Greek mythology, Apollo is the god of the sun and is associated with rational and logical thinking, self-control and order. Conversely, Dionysus is the god of wine and is associated with irrationalism, intuition, passion and anarchy. These two gods represent two conflicting creative drives, constantly stimulating and provoking one another. The notion that the Apollonian and the Dionysian tend to counter each other is reminiscent of the way in which the brain functions at all levels. Inhibitory processes pervade the functioning of our brain, from the microscopic level of neurones communicating with one another, to the macroscopic level of interaction between larger networks of millions of neurones. The book Thinking Music (Miranda 2014) offers a deeper discussion of the relationship between brain asymmetry and the cognitive push-and-pull behind musical creativity.

During the scanning session to collect the data for Symphony of Minds Listening, this author remembers that he sometimes lost concentration on the music and his mind wandered off. We reckon that this is a typical manifestation of brain asymmetry at work: while one side of his brain was striving to pay attention to musical detail, the other was making mental associations, producing imagery, eliciting feelings, and so on.

The ensemble for Corpus Callosum is divided into two groups: one placed on the left side of the stage and the other on the right side. The group on the left side represents the left hemisphere of the brain, whereas the group on the right side represents the right hemisphere. The composition develops as an interaction between these two realms (Figure 11). The instruments on the right-hand side [4] of the stage play segments that were composed emphasizing orchestration. These passages do not have rhythm or melody. Rather, the instruments play clusters of sustained notes. Here the focus is on timbre. Conversely, the instruments on the left-hand side [5] play modifications of passages pinched from Beethoven's score. These are noticeably rhythmic and melodic segments; timbre is deemed secondary here.

[4] 2 violins, 1 viola, 1 violoncello, 2 flutes, 2 oboes, 2 clarinets, 1 bass clarinet, 2 bassoons, 2 French horns, 2 trumpets, 1 trombone, 1 bass trombone and percussion (2 players).

[5] 2 violins, 1 viola, 1 violoncello and 1 marimba (or piano).

The piece was composed with the aid of pieces of software that generated the orchestrations and made the musical modifications based on the composer's fMRI data. Before we explain how these pieces of software work, let us briefly explain how the fMRI data were dealt with.

The time resolution of the Siemens Allegra 3T scanner that we used to collect the brain data is 2 seconds. That is, it took 2 seconds to take a snapshot comprising 36 image slices of the brain, like the ones shown in Figure 8. Each slice comprised 64 x 64 picture elements, known as voxels, or volume pixels. Thus, each snapshot comprised approximately 150,000 continuously varying voxels.

Figure 11: Excerpt from the score of Corpus Callosum, showing only the string quartet at the top and the woodwinds. The first 3 measures hold music representing the left hemisphere, whereas the last 4 measures hold music representing the right hemisphere.

The subjects were scanned while listening to the second movement of Beethoven's Seventh Symphony twice. The scanning began with 30 seconds without music, then 460 seconds of the Beethoven, then 18 seconds without music, and finally a further 460 seconds of the Beethoven. Thus each run generated 484 snapshots. The raw fMRI data were first pre-processed following standard procedures for functional neuroimaging, using Statistical Parametric Mapping software (Ashburner et al. 2013).

Each of the 484 snapshots produced some 150,000 voxels, which is far too many for direct analysis. Instead, the image series were further processed with Independent Component Analysis, or ICA (Stone 2004). Informally, ICA separates ensembles of voxels that oscillate in unison. These are unified as supervoxels representing temporally coherent networks of brain activity. The coloured patches on the 3D renditions shown in Figures 9 and 10 correspond to ICA components. A total of 25 ICA components were calculated from the fMRI data.

In order to rank these components in order of musical significance, the activity of each component during the first pass through the Beethoven listening was compared to that of the same component during the second pass. If these two segments of a component's time series were correlated (with at least p < .05), we hypothesized that the activity might be musically driven, since the stimulus, that is, the music, would be identical at the corresponding time points in the two passes. The order of strength of the 25 ICA components is as follows: 25, 15, 14, 8, 5, 10, 11, 18, 6, 2, 4, 1, 17, 16, 13, 20, 21, 3, 22, 24, 12, 7, 9, 23 and 19.

For practical reasons, the actual ICA values were normalized to range from 0 to 9. As a last step, the varying components were resampled to match the timing of the Beethoven score measure by measure. Thus, each time point was indexed to a measure of the Beethoven score. The movement comprises 278 measures; therefore each ICA component comprises a time series of 278 values, ranging from 0 (lowest fMRI intensity) to 9 (highest fMRI intensity). As an example, Table 1 shows the values of the five strongest ICA components (that is, 25, 15, 14, 8 and 5, with p < .002) for the first 10 measures of Beethoven's music, yielded by the fMRI of this author.

Table 1: The values of the strongest five ICA components (ICA 25, 15, 14, 8 and 5) for the first 10 measures of Beethoven's music.
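The ranking and rescaling steps just described can be sketched as follows. This is a minimal illustration under assumed data shapes (25 components by N time points per listening pass), not the actual analysis scripts used for the piece.

# Sketch of the ranking and scaling described above (assumed array shapes, not the
# actual analysis scripts): each ICA component's time course during the first
# listening pass is correlated with its time course during the second pass;
# significantly correlated components are ranked by correlation strength, and
# component values are rescaled to integers from 0 to 9.
import numpy as np
from scipy.stats import pearsonr

def rank_components(pass1, pass2, alpha=0.05):
    """pass1, pass2: arrays of shape (n_components, n_timepoints).
    Returns component indices ordered from strongest to weakest between-pass
    correlation, keeping only those with p < alpha."""
    scored = []
    for i in range(pass1.shape[0]):
        r, p = pearsonr(pass1[i], pass2[i])
        if p < alpha:
            scored.append((r, i))
    return [i for r, i in sorted(scored, reverse=True)]

def to_fmri_index(series):
    """Rescale one component's time series to integers between 0 and 9."""
    lo, hi = series.min(), series.max()
    return np.round(9 * (series - lo) / (hi - lo)).astype(int)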

3.2 Right side: generative orchestrations

The materials for the group of instruments representing the right side of the brain were created with a piece of software that generates orchestrations. We modified a program called ATO-MS (Maresz 2013) to enable it to take brain data into account when generating the orchestrations [6]. The process of generating the orchestrations for Corpus Callosum is illustrated in Figure 12.

The system takes a given audio file, which is a section from the recording of Beethoven's symphony, and analyses its spectrum using the Fast Fourier Transform (FFT) with a window lasting two seconds, which roughly corresponds to 1 measure of music. This analysis produces a set of 25 frequency values in Hz, which are the frequencies of the 25 most prominent partials of the spectrum for every 2 seconds of audio. Thus, for instance, if the audio lasts for 8 seconds, then the analysis produces 4 sets of 25 frequency values each.

Figure 12: The process of generating orchestrations.

[6] ATO-MS was originally created at IRCAM, or Institut de Recherche et Coordination Acoustique/Musique. This is a French institute for science, music and sound created by composer Pierre Boulez in the 1970s, which also promotes the creation of contemporary classical music. It is affiliated with the Centre Georges Pompidou in Paris.
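A minimal sketch of that analysis stage follows: the audio is cut into 2-second windows and, for each window, the frequencies of the 25 strongest FFT bins are kept. Simple bin-wise peak picking stands in here for the more sophisticated partial analysis of the real system, so treat it as an illustration only.

# Sketch of the analysis stage in Figure 12 (simple FFT peak picking stands in for
# the real system's partial analysis): cut the audio into 2-second windows and
# keep the frequencies of the 25 most prominent spectral peaks per window.
import numpy as np

def prominent_partials(audio, fs, window_seconds=2.0, n_partials=25):
    """Return, for each 2-second window, the frequencies (Hz) of the strongest FFT bins."""
    window = int(window_seconds * fs)
    results = []
    for start in range(0, len(audio) - window + 1, window):
        segment = audio[start:start + window] * np.hanning(window)
        magnitudes = np.abs(np.fft.rfft(segment))
        freqs = np.fft.rfftfreq(window, d=1.0 / fs)
        top_bins = np.argsort(magnitudes)[-n_partials:][::-1]   # strongest bins first
        results.append(freqs[top_bins])
    return results   # one array of 25 frequencies per 2-second window

Pairing each of these 25 frequencies with an ICA value for the corresponding measure, as described next, yields the fMRI-modulated spectrum.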

The partials of the spectrum of a sound are normally described by means of frequency values and their respective amplitudes (i.e., power). However, we discarded the amplitudes produced by the FFT analysis. Instead, we replaced the amplitudes of the partials with the intensities of the 25 ICA components associated with the respective measures of Beethoven's symphony. This results in what is referred to in Figure 12 as the fMRI-modulated spectrum.

The audio database in Figure 12 contains spectral analyses of recordings of all instruments of a symphonic orchestra. It holds analyses of all musical notes that can be produced by those instruments, including different levels of loudness, kinds of articulation and playing techniques.

Figure 13: The audio recording of bars 139-142 lasts for 8 seconds.

Given an instrumentation specification, that is, the list of instruments that one wants the system to produce orchestrations with, the system searches the database for combinations of instruments whose blended spectrum best approximates the fMRI-modulated spectrum. The system produces a number of suggestions for the composer to work with.

As an example, let us examine how the 4th measure of the excerpt shown in Figure 11 was composed. The audio segment (Figure 13) that was used as a target for the orchestration of the last 4 bars of the score shown in Figure 11 corresponds to bars 139-142 of the 2nd movement of Beethoven's 7th Symphony. The FFT analysis of the first 2 seconds of this sound produced 25 frequencies, as shown in Table 2. The ICA coefficients for the fMRI scan taken at the moment the composer was listening to bar 139 of the symphony are also shown in Table 2. These are the ICA coefficients that replaced the original amplitudes calculated by the FFT analysis, producing the fMRI-modulated spectrum as a result. In this case the instrumentation specification consisted of 2 flutes, 2 oboes, 2 clarinets, 1 bass clarinet and 2 bassoons. Amongst the various suggestions generated by the system, the composer selected the chord shown in the 4th measure of the score shown in Figure 11. Note that one oboe was not used. Also note that the individual musical notes of the chord have different dynamics. It is important that the players observe this during the performance in order to produce the desired spectral behavior [7].

[7] In fact, the amplitudes of the fMRI-modulated spectrum are not static: they vary within the 2-second window.

Table 2: The fMRI-modulated spectrum corresponding to bar 139 of the 2nd movement of Beethoven's 7th Symphony: each of the 25 partial frequencies (in Hz) is assigned an amplitude between 0 and 9 taken, in order, from ICA components 25, 15, 14, 8, 5, 10, 11, 18, 6, 2, 4, 1, 17, 16, 13, 20, 21, 3, 22, 24, 12, 7, 9, 23 and 19.

3.3 Left side: musical transformations

The materials for the group of instruments representing the left side of the brain were created using a number of transformation algorithms. These are algorithms that modify a given musical sequence. The amount of modification is scaled according to an index, referred to as the fmri_index. Effectively, this is the value of the ICA analysis extrapolated from the fMRI scans, with values between 0 and 9.

In order to use the fmri_index as a control signal (CS) for the transformation algorithms, the data is scaled to a range between 0.1 and 1.0. The system applies the following simple scaling to the value of the fmri_index: CS = (fmri_index + 1) * 0.1. A difference value d between the transformed and the input music is also calculated. This difference is then multiplied by CS to give a final scaled modifier value, SMV. The value of SMV is added to the input to produce the output. This gives a degree of fMRI-controlled variability in each transformation: a high fmri_index value will result in major transformations of the music, whereas a low fmri_index value will result in minor transformations. Two of the transformation algorithms are explained below, with examples illustrating the effect of varying the value of the fmri_index: pitch inversion and pitch scrambling.

3.3.1 Pitch inversion algorithm

Given an input musical sequence, the pitch inversion algorithm creates a new sequence, which is the input sequence turned upside-down. For instance, a sequence rising in pitch would descend in pitch after being passed through this transformation. In order to illustrate this, let us consider the example in Figure 14. The notes {B4, B4, D5, C5, B4} are converted into MIDI values [8] as follows: {71, 71, 74, 72, 71}.

Figure 14: An example of a short musical sequence.

Pitch inversion is achieved simply by subtracting the current MIDI pitch value from 128 (one above the highest MIDI note number). For instance, the transformed pitch values for our example would be as follows: (128 - 71 = 57), (128 - 71 = 57), (128 - 74 = 54), (128 - 72 = 56) and (128 - 71 = 57). The resulting MIDI values are 57, 57, 54, 56 and 57, yielding the following inverted pitch sequence: {A3, A3, F#3, G#3, A3} (Figure 15).

Figure 15: Newly inverted sequence, after transformation of the measure in Figure 14.

[8] MIDI is a technical standard for encoding music that allows a wide variety of electronic musical instruments, computers and other related devices to connect and communicate with one another. MIDI uses a range of 128 pitch values, from 0 to 127.
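The maximal inversion can be sketched in a couple of lines; this is an illustrative snippet reproducing the numbers above, not the composition software itself.

# Minimal sketch of the maximal pitch inversion described above: each MIDI pitch
# is subtracted from 128. Reproduces the example {71, 71, 74, 72, 71} -> {57, 57, 54, 56, 57}.
def invert_pitches(midi_pitches):
    return [128 - p for p in midi_pitches]

print(invert_pitches([71, 71, 74, 72, 71]))  # [57, 57, 54, 56, 57]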

The example above assumed a maximal fmri_index value of 9, which once scaled to create a CS gives 1.0. However, varied degrees of transformation are possible by scaling the amount of transformation as a function of the fmri_index value. The difference between the transformed and the input pitches is multiplied by CS before being summed with the input to create the final output value, as follows:

New_pitch = Input_pitch + ((Transf_pitch - Input_pitch) * [(fmri_index + 1) * 0.1])

Let us examine what happens if we assume an fmri_index equal to 5, which yields a CS equal to 0.6. In this case, we should expect an output approximately half way between the original pitch and the inversion; in other words, an almost neutral set of intervals. First, the difference d between the maximal inversion and the input for each of the MIDI values needs to be calculated as follows:

d = {(57 - 71), (57 - 71), (54 - 74), (56 - 72), (57 - 71)}
d = {-14, -14, -20, -16, -14}

Then, the scaled modifier values are calculated by multiplying the respective d values by the value of CS, which is equal to 0.6, as follows:

SMV = {(-14 * 0.6), (-14 * 0.6), (-20 * 0.6), (-16 * 0.6), (-14 * 0.6)}
SMV = {-8.4, -8.4, -12, -9.6, -8.4}

Finally, the SMV values are summed with the original input to give a transformed set of output values:

New_pitches = {(71 - 8.4), (71 - 8.4), (74 - 12), (72 - 9.6), (71 - 8.4)}
New_pitches = {62.6, 62.6, 62, 62.4, 62.6}

Pitch values are rounded to the nearest whole number in order to match the MIDI standard, giving a transformed set of pitch values equal to {63, 63, 62, 62, 63}, which is rendered as {D#4, D#4, D4, D4, D#4}, as shown in Figure 16.

Figure 16: Sequence after inversion with fmri_index = 5, giving a nearly neutral set of pitch intervals.

3.3.2 Pitch-scrambling algorithm

In simple terms, the pitch-scrambling algorithm orders the pitch values of the input into a numerical list, which is then re-ordered randomly. This provides a stochastic component to the transformation algorithm. Using the same measure as in the previous example (Figure 14) as a starting point, let us examine the result of applying this transformation, as follows:

Input pitches: {71, 71, 74, 72, 71}
Order pitches in ascending order: {71, 71, 71, 72, 74}
Scramble the order of pitches randomly: {74, 72, 71, 71, 71}
Output pitches: {74, 72, 71, 71, 71}

Figure 17: The result of applying the pitch-scrambling algorithm four times to the same input.

In this case, the output would be rendered as {D5, C5, B4, B4, B4}. Re-running the transformation a further three times would give further variants, for example: {72, 74, 71, 71, 71}, {71, 74, 72, 71, 71} and {71, 74, 71, 72, 71}, rendered as {C5, D5, B4, B4, B4}, {B4, D5, C5, B4, B4} and {B4, D5, B4, C5, B4}, respectively, as illustrated in Figure 17.

As with the pitch inversion algorithm, the value of fmri_index can be used to create a control signal with which the amount of transformation can be varied. In order to illustrate this, let us assume an fmri_index equal to 3. This gives a CS value of 0.4. Considering the same input measure as before (Figure 14) and the transformed values from the first pitch scramble shown in Figure 17, the values of d between the first scramble and the input are calculated as follows:

d = {(74 - 71), (72 - 71), (71 - 74), (71 - 72), (71 - 71)}
d = {3, 1, -3, -1, 0}

The scaled modifier values are then calculated by multiplying the difference values by CS = 0.4:

SMV = {(3 * 0.4), (1 * 0.4), (-3 * 0.4), (-1 * 0.4), (0 * 0.4)}
SMV = {1.2, 0.4, -1.2, -0.4, 0}

Finally, the SMV values are summed with the values of the original input to give a transformed set of output values:

New_pitches = {(71 + 1.2), (71 + 0.4), (74 - 1.2), (72 - 0.4), (71 + 0)}
New_pitches = {72.2, 71.4, 72.8, 71.6, 71}

As in the previous example, pitch values are rounded to the nearest whole number, giving a transformed set of pitch values equal to {72, 71, 73, 72, 71}, which is rendered as {C5, B4, C#5, C5, B4}, as shown in Figure 18. Note that the output is significantly closer in overall structure to the unscrambled input than the first scrambled transformation shown in Figure 17; only the first and third notes changed.
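The scaling step shared by both transformations can be sketched as follows; the snippet reproduces the scrambling example above with fmri_index = 3 and is an illustration only, not the software used for the piece.

# Sketch of the fMRI-controlled scaling shared by both transformation algorithms:
# the difference between the transformed and the input pitches is weighted by
# CS = (fmri_index + 1) * 0.1 and added back to the input. Reproduces the worked
# pitch-scrambling example above with fmri_index = 3.
import random

def scale_by_fmri(input_pitches, transformed_pitches, fmri_index):
    cs = (fmri_index + 1) * 0.1                       # control signal, 0.1 to 1.0
    output = []
    for original, transformed in zip(input_pitches, transformed_pitches):
        d = transformed - original                    # difference value d
        smv = d * cs                                  # scaled modifier value
        output.append(round(original + smv))          # nearest whole MIDI pitch
    return output

def scramble_pitches(midi_pitches, rng=random):
    """Order the pitches ascending, then shuffle them randomly."""
    shuffled = sorted(midi_pitches)
    rng.shuffle(shuffled)
    return shuffled

input_pitches = [71, 71, 74, 72, 71]                  # B4, B4, D5, C5, B4 (Figure 14)
first_scramble = [74, 72, 71, 71, 71]                 # the first scramble in Figure 17
print(scale_by_fmri(input_pitches, first_scramble, fmri_index=3))  # [72, 71, 73, 72, 71]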

Figure 18: Transformed output created by the pitch-scrambling algorithm assuming fmri_index = 3.

4 CONCLUDING REMARKS

The technology and the compositional methods developed for Activating Memory and Corpus Callosum illustrate the interdisciplinary nature of Music Neurotechnology research and its benefits to both scientific research and musical creativity.

Activating Memory is an unprecedented piece of music, which is aimed at being much more than mere entertainment or a commodity for the music industry. Here, eight participants can engage in collaborative music making together, where four of them are not required to move. This forms a suitable creative environment for engaging severely physically disabled patients in music making: they are given an active musical voice with which to playfully interact among themselves and with the musicians of a string quartet. The first public performance of Activating Memory took place in February 2014 at the Peninsula Arts Contemporary Music Festival, Plymouth, UK. Physically disabled patients were not involved in this first performance. However, we are currently working with colleagues at the Royal Hospital for Neuro-disability in London to trial Activating Memory with severely paralyzed patients, with the objective of creating the Paramusical Ensemble (4 paralyzed musicians and 4 string players) and staging its first public performance in the summer.

On the research front, we are currently developing new forms of BCMI control other than the SSVEP-based method. For instance, we are developing ways to detect EEG patterns related to emotional states to control music algorithms, such as the transformational algorithms that we developed for Corpus Callosum. We are in the process of adapting those transformational algorithms to work in real time, in order to generate scores informed by emotional coefficients.

ACKNOWLEDGEMENTS

I thank the following post-graduate students, colleagues and collaborators from other institutions for their invaluable contributions to the work presented in this paper: Aurelien Antoine (ICCMR PhD student), Joel Eaton (ICCMR PhD student), Duncan Williams (ICCMR Research Fellow), Anders Vinjas (Notam, Norway), Phillipe Esling (Ircam, France), Zoran Jozipovic (New York University, USA) and Dan Lloyd (Trinity College Hartford, USA).

REFERENCES

Ashburner, J. and The FIL Methods Group at UCL (2013). SPM8 Manual. London: Institute of Neurology, University College London. Available online (accessed 4 November 2013).

Curran, E. A. and Stokes, M. J. (2003). Learning to control brain activity: a review of the production and control of EEG components for driving brain-computer interface (BCI) systems. Brain and Cognition 51(3).

Daly, I., Malik, A., Hwang, F., Roesch, E., Weaver, J., Kirke, A., Williams, D., Miranda, E. and Nasuto, S. J. (2014). Neural correlates of emotional responses to music: an EEG study. Neuroscience Letters 753.

Darvas, G. (2007). Symmetry. Basel: Birkhäuser Verlag.

Dornhege, G., del Millan, J., Hinterberger, T., McFarland, D. and Muller, K.-R. (Eds.) (2007). Toward Brain-Computer Interfacing. Cambridge, MA: The MIT Press.

Eaton, J., Williams, D. and Miranda, E. R. (2014). Affective Jukebox: A Confirmatory Study of EEG Emotional Correlates in Response to Musical Stimuli. Proceedings of the Joint ICMC/SMC 2014 Conference (Sound and Music Computing Conference and International Computer Music Conference), University of Athens, Greece.

James, J. (1993). The Music of the Spheres: Music, Science and the Natural Order of the Universe. London: Abacus.

Kaplan, A. Ya., Kim, J. J., Jin, K. S., Park, B. W., Byeon, J. G. and Tarasova, S. U. (2005). Unconscious operant conditioning in the paradigm of brain-computer interface based on color perception. International Journal of Neurosciences 115.

Kircher, A. (1650). Musurgia Universalis. Two volumes. Rome: Ex typographia Haeredum F. Corbellitti.

Maresz, Y. (2013). On Computer-Assisted Orchestration. Contemporary Music Review 32(1).

Miranda, E. R., Lloyd, D., Josipovich, Z. and Williams, D. (2014). Creative Music Neurotechnology with Symphony of Minds Listening. In E. R. Miranda and J. Castet (Eds.), Guide to Brain-Computer Music Interfacing. London: Springer.

Miranda, E. R. (2014). Thinking Music. Plymouth: University of Plymouth Press.

Miranda, E. R. and Castet, J. (Eds.) (2014). Guide to Brain-Computer Music Interfacing. London: Springer.

Miranda, E. R., Magee, W., Wilson, J. J., Eaton, J. and Palaniappan, R. (2011). Brain-Computer Music Interfacing (BCMI): From Basic Research to the Real World of Special Needs. Music and Medicine 3(3).

Miranda, E. R. (2010). Plymouth brain-computer music interfacing project: from EEG audio mixers to composition informed by cognitive neuroscience. International Journal of Arts and Technology 3(2/3).

Miranda, E. R. (2006). Brain-Computer music interface for composition and performance. International Journal on Disability and Human Development 5(2).

Miranda, E. R., Roberts, S. and Stokes, M. (2004). On Generating EEG for Controlling Musical Systems. Biomedizinische Technik 49(1).

Mozart, W. A. (2003). Musikalisches Würfelspiel. Sheet music. Mainz: Schott Music.

Nietzsche, F. (1872). Die Geburt der Tragödie aus dem Geiste der Musik. Leipzig: Verlag von E. W. Fritzsch.

Ramirez, R. and Vamvakousis, Z. (2012). Detecting Emotion from EEG Signals Using the Emotive Epoc Device. Brain Informatics, Lecture Notes in Computer Science. London: Springer.

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology 39(6).

Schmidt, L. A. and Trainor, L. J. (2001). Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions. Cognition & Emotion 15.

Stone, J. V. (2004). Independent Component Analysis: A Tutorial Introduction. Cambridge, MA: The MIT Press.

Svansdottir, H. B. and Snaedal, J. (2006). Music therapy in moderate and severe dementia of Alzheimer's type: a case-control study. International Psychogeriatrics 18(4).

Williams, D., Kirke, A., Miranda, E. R., Roesch, E., Daly, I. and Nasuto, S. (2014). Investigating affect in algorithmic composition systems. Psychology of Music (published online before print, 15 August 2014).


More information

However, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene

However, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene Beat Extraction from Expressive Musical Performances Simon Dixon, Werner Goebl and Emilios Cambouropoulos Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010 Vienna, Austria.

More information

Blending in action: Diagrams reveal conceptual integration in routine activity

Blending in action: Diagrams reveal conceptual integration in routine activity Cognitive Science Online, Vol.1, pp.34 45, 2003 http://cogsci-online.ucsd.edu Blending in action: Diagrams reveal conceptual integration in routine activity Beate Schwichtenberg Department of Cognitive

More information

Preface. system has put emphasis on neuroscience, both in studies and in the treatment of tinnitus.

Preface. system has put emphasis on neuroscience, both in studies and in the treatment of tinnitus. Tinnitus (ringing in the ears) has many forms, and the severity of tinnitus ranges widely from being a slight nuisance to affecting a person s daily life. How loud the tinnitus is perceived does not directly

More information

Music Representations

Music Representations Lecture Music Processing Music Representations Meinard Müller International Audio Laboratories Erlangen meinard.mueller@audiolabs-erlangen.de Book: Fundamentals of Music Processing Meinard Müller Fundamentals

More information

LESSON 1 PITCH NOTATION AND INTERVALS

LESSON 1 PITCH NOTATION AND INTERVALS FUNDAMENTALS I 1 Fundamentals I UNIT-I LESSON 1 PITCH NOTATION AND INTERVALS Sounds that we perceive as being musical have four basic elements; pitch, loudness, timbre, and duration. Pitch is the relative

More information

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes hello Jay Biernat Third author University of Rochester University of Rochester Affiliation3 words jbiernat@ur.rochester.edu author3@ismir.edu

More information

Audio Feature Extraction for Corpus Analysis

Audio Feature Extraction for Corpus Analysis Audio Feature Extraction for Corpus Analysis Anja Volk Sound and Music Technology 5 Dec 2017 1 Corpus analysis What is corpus analysis study a large corpus of music for gaining insights on general trends

More information

Music and the emotions

Music and the emotions Reading Practice Music and the emotions Neuroscientist Jonah Lehrer considers the emotional power of music Why does music make us feel? On the one hand, music is a purely abstract art form, devoid of language

More information

Why Music Theory Through Improvisation is Needed

Why Music Theory Through Improvisation is Needed Music Theory Through Improvisation is a hands-on, creativity-based approach to music theory and improvisation training designed for classical musicians with little or no background in improvisation. It

More information

2014 Music Style and Composition GA 3: Aural and written examination

2014 Music Style and Composition GA 3: Aural and written examination 2014 Music Style and Composition GA 3: Aural and written examination GENERAL COMMENTS The 2014 Music Style and Composition examination consisted of two sections, worth a total of 100 marks. Both sections

More information

Computer Coordination With Popular Music: A New Research Agenda 1

Computer Coordination With Popular Music: A New Research Agenda 1 Computer Coordination With Popular Music: A New Research Agenda 1 Roger B. Dannenberg roger.dannenberg@cs.cmu.edu http://www.cs.cmu.edu/~rbd School of Computer Science Carnegie Mellon University Pittsburgh,

More information

BRAIN-ACTIVITY-DRIVEN REAL-TIME MUSIC EMOTIVE CONTROL

BRAIN-ACTIVITY-DRIVEN REAL-TIME MUSIC EMOTIVE CONTROL BRAIN-ACTIVITY-DRIVEN REAL-TIME MUSIC EMOTIVE CONTROL Sergio Giraldo, Rafael Ramirez Music Technology Group Universitat Pompeu Fabra, Barcelona, Spain sergio.giraldo@upf.edu Abstract Active music listening

More information

Controlling Musical Tempo from Dance Movement in Real-Time: A Possible Approach

Controlling Musical Tempo from Dance Movement in Real-Time: A Possible Approach Controlling Musical Tempo from Dance Movement in Real-Time: A Possible Approach Carlos Guedes New York University email: carlos.guedes@nyu.edu Abstract In this paper, I present a possible approach for

More information

Unit Outcome Assessment Standards 1.1 & 1.3

Unit Outcome Assessment Standards 1.1 & 1.3 Understanding Music Unit Outcome Assessment Standards 1.1 & 1.3 By the end of this unit you will be able to recognise and identify musical concepts and styles from The Classical Era. Learning Intention

More information

CHILDREN S CONCEPTUALISATION OF MUSIC

CHILDREN S CONCEPTUALISATION OF MUSIC R. Kopiez, A. C. Lehmann, I. Wolther & C. Wolf (Eds.) Proceedings of the 5th Triennial ESCOM Conference CHILDREN S CONCEPTUALISATION OF MUSIC Tânia Lisboa Centre for the Study of Music Performance, Royal

More information

SYNTHESIS FROM MUSICAL INSTRUMENT CHARACTER MAPS

SYNTHESIS FROM MUSICAL INSTRUMENT CHARACTER MAPS Published by Institute of Electrical Engineers (IEE). 1998 IEE, Paul Masri, Nishan Canagarajah Colloquium on "Audio and Music Technology"; November 1998, London. Digest No. 98/470 SYNTHESIS FROM MUSICAL

More information

A BCI Control System for TV Channels Selection

A BCI Control System for TV Channels Selection A BCI Control System for TV Channels Selection Jzau-Sheng Lin *1, Cheng-Hung Hsieh 2 Department of Computer Science & Information Engineering, National Chin-Yi University of Technology No.57, Sec. 2, Zhongshan

More information

Music, Brain Development, Sleep, and Your Baby

Music, Brain Development, Sleep, and Your Baby WHITEPAPER Music, Brain Development, Sleep, and Your Baby The Sleep Genius Baby Solution PRESENTED BY Dorothy Lockhart Lawrence Alex Doman June 17, 2013 Overview Research continues to show that music is

More information

Harmony, the Union of Music and Art

Harmony, the Union of Music and Art DOI: http://dx.doi.org/10.14236/ewic/eva2017.32 Harmony, the Union of Music and Art Musical Forms UK www.samamara.com sama@musicalforms.com This paper discusses the creative process explored in the creation

More information

Towards Brain-Computer Music Interfaces: Progress and Challenges

Towards Brain-Computer Music Interfaces: Progress and Challenges 1 Towards Brain-Computer Music Interfaces: Progress and Challenges Eduardo R. Miranda, Simon Durrant and Torsten Anders Abstract Brain-Computer Music Interface (BCMI) is a new research area that is emerging

More information

Music Information Retrieval with Temporal Features and Timbre

Music Information Retrieval with Temporal Features and Timbre Music Information Retrieval with Temporal Features and Timbre Angelina A. Tzacheva and Keith J. Bell University of South Carolina Upstate, Department of Informatics 800 University Way, Spartanburg, SC

More information

Jump Jam Jiggle! Gustav Holst. Arranger and Presenter, Kate Page Musicians of the West Australian Symphony Orchestra

Jump Jam Jiggle! Gustav Holst. Arranger and Presenter, Kate Page Musicians of the West Australian Symphony Orchestra ! Jump Jam Jiggle! Featuring excerpts from The Planets Gustav Holst Arranger and Presenter, Kate Page Musicians of the West Australian Symphony Orchestra Presented as part of the 2018 Homegrown Festival

More information

The Classical Period

The Classical Period The Classical Period How to use this presentation Read through all the information on each page. When you see the loudspeaker icon click on it to hear a musical example of the concept described in the

More information

Marion BANDS STUDENT RESOURCE BOOK

Marion BANDS STUDENT RESOURCE BOOK Marion BANDS STUDENT RESOURCE BOOK TABLE OF CONTENTS Staff and Clef Pg. 1 Note Placement on the Staff Pg. 2 Note Relationships Pg. 3 Time Signatures Pg. 3 Ties and Slurs Pg. 4 Dotted Notes Pg. 5 Counting

More information

Compose yourself: The Emotional Influence of Music

Compose yourself: The Emotional Influence of Music 1 Dr Hauke Egermann Director of York Music Psychology Group (YMPG) Music Science and Technology Research Cluster University of York hauke.egermann@york.ac.uk www.mstrcyork.org/ympg Compose yourself: The

More information

The influence of Room Acoustic Aspects on the Noise Exposure of Symphonic Orchestra Musicians

The influence of Room Acoustic Aspects on the Noise Exposure of Symphonic Orchestra Musicians www.akutek.info PRESENTS The influence of Room Acoustic Aspects on the Noise Exposure of Symphonic Orchestra Musicians by R. H. C. Wenmaekers, C. C. J. M. Hak and L. C. J. van Luxemburg Abstract Musicians

More information

The Environment and Organizational Effort in an Ensemble

The Environment and Organizational Effort in an Ensemble Rehearsal Philosophy and Techniques for Aspiring Chamber Music Groups Effective Chamber Music rehearsal is a uniquely democratic group effort requiring a delicate balance of shared values. In a high functioning

More information

Elements of Music. How can we tell music from other sounds?

Elements of Music. How can we tell music from other sounds? Elements of Music How can we tell music from other sounds? Sound begins with the vibration of an object. The vibrations are transmitted to our ears by a medium usually air. As a result of the vibrations,

More information

Memory and learning: experiment on Sonata KV 331, in A Major by W. A. Mozart

Memory and learning: experiment on Sonata KV 331, in A Major by W. A. Mozart Bulletin of the Transilvania University of Braşov Series VIII: Performing Arts Vol. 10 (59) No. 1-2017 Memory and learning: experiment on Sonata KV 331, in A Major by W. A. Mozart Stela DRĂGULIN 1, Claudia

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Emergence of dmpfc and BLA 4-Hz oscillations during freezing behavior.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Emergence of dmpfc and BLA 4-Hz oscillations during freezing behavior. Supplementary Figure 1 Emergence of dmpfc and BLA 4-Hz oscillations during freezing behavior. (a) Representative power spectrum of dmpfc LFPs recorded during Retrieval for freezing and no freezing periods.

More information

DEMENTIA CARE CONFERENCE 2014

DEMENTIA CARE CONFERENCE 2014 DEMENTIA CARE CONFERENCE 2014 My background Music Therapist for 24 years. Practiced in Vancouver, Halifax and here. Currently private practice Accessible Music Therapy. my practice includes seniors, adults

More information

Experiment PP-1: Electroencephalogram (EEG) Activity

Experiment PP-1: Electroencephalogram (EEG) Activity Experiment PP-1: Electroencephalogram (EEG) Activity Exercise 1: Common EEG Artifacts Aim: To learn how to record an EEG and to become familiar with identifying EEG artifacts, especially those related

More information

Music Study Guide. Moore Public Schools. Definitions of Musical Terms

Music Study Guide. Moore Public Schools. Definitions of Musical Terms Music Study Guide Moore Public Schools Definitions of Musical Terms 1. Elements of Music: the basic building blocks of music 2. Rhythm: comprised of the interplay of beat, duration, and tempo 3. Beat:

More information

Tempo and Beat Analysis

Tempo and Beat Analysis Advanced Course Computer Science Music Processing Summer Term 2010 Meinard Müller, Peter Grosche Saarland University and MPI Informatik meinard@mpi-inf.mpg.de Tempo and Beat Analysis Musical Properties:

More information

The role of the Alexander technique in musical training and performing

The role of the Alexander technique in musical training and performing International Symposium on Performance Science ISBN 978-90-9022484-8 The Author 2007, Published by the AEC All rights reserved The role of the Alexander technique in musical training and performing Malcolm

More information

Auditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are

Auditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are In: E. Bruce Goldstein (Ed) Encyclopedia of Perception, Volume 1, Sage, 2009, pp 160-164. Auditory Illusions Diana Deutsch The sounds we perceive do not always correspond to those that are presented. When

More information

29 Music CO-SG-FLD Program for Licensing Assessments for Colorado Educators

29 Music CO-SG-FLD Program for Licensing Assessments for Colorado Educators 29 Music CO-SG-FLD029-02 Program for Licensing Assessments for Colorado Educators Readers should be advised that this study guide, including many of the excerpts used herein, is protected by federal copyright

More information

The growth in use of interactive whiteboards in UK schools over the past few years has been rapid, to say the least.

The growth in use of interactive whiteboards in UK schools over the past few years has been rapid, to say the least. INTRODUCTION The growth in use of interactive whiteboards in UK schools over the past few years has been rapid, to say the least. When used well, the interactive whiteboard (IWB) can transform and revitalise

More information

LISTENING GUIDE. p) serve to increase the intensity and drive. The overall effect is one of great power and compression.

LISTENING GUIDE. p) serve to increase the intensity and drive. The overall effect is one of great power and compression. LISTENING GUIDE LUDWIG VAN BEETHOVEN (1770 1827) Symphony No. 5 in C Minor Date of composition: 1807 8 Orchestration: two flutes, two oboes, two clarinets, two horns, two trumpets, timpani, strings Duration:

More information

Tinnitus: The Neurophysiological Model and Therapeutic Sound. Background

Tinnitus: The Neurophysiological Model and Therapeutic Sound. Background Tinnitus: The Neurophysiological Model and Therapeutic Sound Background Tinnitus can be defined as the perception of sound that results exclusively from activity within the nervous system without any corresponding

More information

Student Performance Q&A:

Student Performance Q&A: Student Performance Q&A: 2002 AP Music Theory Free-Response Questions The following comments are provided by the Chief Reader about the 2002 free-response questions for AP Music Theory. They are intended

More information

On the Music of Emergent Behaviour What can Evolutionary Computation bring to the Musician?

On the Music of Emergent Behaviour What can Evolutionary Computation bring to the Musician? On the Music of Emergent Behaviour What can Evolutionary Computation bring to the Musician? Eduardo Reck Miranda Sony Computer Science Laboratory Paris 6 rue Amyot - 75005 Paris - France miranda@csl.sony.fr

More information

What is music as a cognitive ability?

What is music as a cognitive ability? What is music as a cognitive ability? The musical intuitions, conscious and unconscious, of a listener who is experienced in a musical idiom. Ability to organize and make coherent the surface patterns

More information

Common Spatial Patterns 2 class BCI V Copyright 2012 g.tec medical engineering GmbH

Common Spatial Patterns 2 class BCI V Copyright 2012 g.tec medical engineering GmbH g.tec medical engineering GmbH Sierningstrasse 14, A-4521 Schiedlberg Austria - Europe Tel.: (43)-7251-22240-0 Fax: (43)-7251-22240-39 office@gtec.at, http://www.gtec.at Common Spatial Patterns 2 class

More information

MUSIC (MUS) Music (MUS) 1

MUSIC (MUS) Music (MUS) 1 MUSIC (MUS) MUS 110 ACCOMPANIST COACHING SESSION Corequisites: MUS 171, 173, 271, 273, 371, 373, 471, or 473 applied lessons. Provides students enrolled in the applied music lesson sequence the opportunity

More information

Sound visualization through a swarm of fireflies

Sound visualization through a swarm of fireflies Sound visualization through a swarm of fireflies Ana Rodrigues, Penousal Machado, Pedro Martins, and Amílcar Cardoso CISUC, Deparment of Informatics Engineering, University of Coimbra, Coimbra, Portugal

More information

Robert Alexandru Dobre, Cristian Negrescu

Robert Alexandru Dobre, Cristian Negrescu ECAI 2016 - International Conference 8th Edition Electronics, Computers and Artificial Intelligence 30 June -02 July, 2016, Ploiesti, ROMÂNIA Automatic Music Transcription Software Based on Constant Q

More information

MUSIC (MUS) Music (MUS) 1

MUSIC (MUS) Music (MUS) 1 Music (MUS) 1 MUSIC (MUS) MUS 2 Music Theory 3 Units (Degree Applicable, CSU, UC, C-ID #: MUS 120) Corequisite: MUS 5A Preparation for the study of harmony and form as it is practiced in Western tonal

More information

Therapy for Memory: A Music Activity and Educational Program for Cognitive Impairments

Therapy for Memory: A Music Activity and Educational Program for Cognitive Impairments 2 Evidence for Music Therapy Therapy for Memory: A Music Activity and Educational Program for Cognitive Impairments Richard S. Isaacson, MD Vice Chair of Education Associate Prof of Clinical Neurology

More information

MUSIC (MUSC) Bucknell University 1

MUSIC (MUSC) Bucknell University 1 Bucknell University 1 MUSIC (MUSC) MUSC 114. Composition Studio..25 Credits. MUSC 121. Introduction to Music Fundamentals. 1 Credit. Offered Fall Semester Only; Lecture hours:3,other:2 The study of the

More information

MUSIC (MU) Music (MU) 1

MUSIC (MU) Music (MU) 1 Music (MU) 1 MUSIC (MU) MU 1130 Beginning Piano I (1 Credit) For students with little or no previous study. Basic knowledge and skills necessary for keyboard performance. Development of physical and mental

More information

Tuning the Brain: Neuromodulation as a Possible Panacea for treating non-pulsatile tinnitus?

Tuning the Brain: Neuromodulation as a Possible Panacea for treating non-pulsatile tinnitus? Tuning the Brain: Neuromodulation as a Possible Panacea for treating non-pulsatile tinnitus? Prof. Sven Vanneste The University of Texas at Dallas School of Behavioral and Brain Sciences Lab for Clinical

More information

Modeling memory for melodies

Modeling memory for melodies Modeling memory for melodies Daniel Müllensiefen 1 and Christian Hennig 2 1 Musikwissenschaftliches Institut, Universität Hamburg, 20354 Hamburg, Germany 2 Department of Statistical Science, University

More information

Gyorgi Ligeti. Chamber Concerto, Movement III (1970) Glen Halls All Rights Reserved

Gyorgi Ligeti. Chamber Concerto, Movement III (1970) Glen Halls All Rights Reserved Gyorgi Ligeti. Chamber Concerto, Movement III (1970) Glen Halls All Rights Reserved Ligeti once said, " In working out a notational compositional structure the decisive factor is the extent to which it

More information

A series of music lessons for implementation in the classroom F-10.

A series of music lessons for implementation in the classroom F-10. A series of music lessons for implementation in the classroom F-10. Conditions of Use These materials are freely available for download and educational use. These resources were developed by Sydney Symphony

More information

Chapter Five: The Elements of Music

Chapter Five: The Elements of Music Chapter Five: The Elements of Music What Students Should Know and Be Able to Do in the Arts Education Reform, Standards, and the Arts Summary Statement to the National Standards - http://www.menc.org/publication/books/summary.html

More information

Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music

Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music Andrew Blake and Cathy Grundy University of Westminster Cavendish School of Computer Science

More information

CS229 Project Report Polyphonic Piano Transcription

CS229 Project Report Polyphonic Piano Transcription CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project

More information

Expressive performance in music: Mapping acoustic cues onto facial expressions

Expressive performance in music: Mapping acoustic cues onto facial expressions International Symposium on Performance Science ISBN 978-94-90306-02-1 The Author 2011, Published by the AEC All rights reserved Expressive performance in music: Mapping acoustic cues onto facial expressions

More information

Music Standard 1. Standard 2. Standard 3. Standard 4.

Music Standard 1. Standard 2. Standard 3. Standard 4. Standard 1. Students will compose original music and perform music written by others. They will understand and use the basic elements of music in their performances and compositions. Students will engage

More information

Effects of Auditory and Motor Mental Practice in Memorized Piano Performance

Effects of Auditory and Motor Mental Practice in Memorized Piano Performance Bulletin of the Council for Research in Music Education Spring, 2003, No. 156 Effects of Auditory and Motor Mental Practice in Memorized Piano Performance Zebulon Highben Ohio State University Caroline

More information

Work In Progress: Adapting Inexpensive Game Technology to Teach Principles of Neural Interface Technology and Device Control

Work In Progress: Adapting Inexpensive Game Technology to Teach Principles of Neural Interface Technology and Device Control Paper ID #7994 Work In Progress: Adapting Inexpensive Game Technology to Teach Principles of Neural Interface Technology and Device Control Dr. Benjamin R Campbell, Robert Morris University Dr. Campbell

More information

Copyright 2009 Pearson Education, Inc. or its affiliate(s). All rights reserved. NES, the NES logo, Pearson, the Pearson logo, and National

Copyright 2009 Pearson Education, Inc. or its affiliate(s). All rights reserved. NES, the NES logo, Pearson, the Pearson logo, and National Music (504) NES, the NES logo, Pearson, the Pearson logo, and National Evaluation Series are trademarks in the U.S. and/or other countries of Pearson Education, Inc. or its affiliate(s). NES Profile: Music

More information

2016 HSC Music 1 Aural Skills Marking Guidelines Written Examination

2016 HSC Music 1 Aural Skills Marking Guidelines Written Examination 2016 HSC Music 1 Aural Skills Marking Guidelines Written Examination Question 1 Describes the structure of the excerpt with reference to the use of sound sources 6 Demonstrates a developed aural understanding

More information

Symphony No. 4, I. Analysis. Gustav Mahler s Fourth Symphony is in dialogue with the Type 3 sonata, though with some

Symphony No. 4, I. Analysis. Gustav Mahler s Fourth Symphony is in dialogue with the Type 3 sonata, though with some Karolyn Byers Mr. Darcy The Music of Mahler 15 May 2013 Symphony No. 4, I. Analysis Gustav Mahler s Fourth Symphony is in dialogue with the Type 3 sonata, though with some deformations. The exposition

More information

Making Connections Through Music

Making Connections Through Music Making Connections Through Music Leanne Belasco, MS, MT-BC Director of Music Therapy - Levine Music Diamonds Conference - March 8, 2014 Why Music? How do we respond to music: Movement dancing, swaying,

More information

MOZART, THE COMPOSER Lesson Plans

MOZART, THE COMPOSER Lesson Plans Lesson Plans October-December 2008 UNIT: LESSON: Mozart, The Composer 1 and 2. Mozart s early years AIMS To know of Mozart s early years life facts and some of his CONTRIBUTION TO COMPETENCES Communicative:

More information

Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension

Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension MARC LEMAN Ghent University, IPEM Department of Musicology ABSTRACT: In his paper What is entrainment? Definition

More information

Thought Technology Ltd Belgrave Avenue, Montreal, QC H4A 2L8 Canada

Thought Technology Ltd Belgrave Avenue, Montreal, QC H4A 2L8 Canada Thought Technology Ltd. 2180 Belgrave Avenue, Montreal, QC H4A 2L8 Canada Tel: (800) 361-3651 ٠ (514) 489-8251 Fax: (514) 489-8255 E-mail: _Hmail@thoughttechnology.com Webpage: _Hhttp://www.thoughttechnology.com

More information

About Giovanni De Poli. What is Model. Introduction. di Poli: Methodologies for Expressive Modeling of/for Music Performance

About Giovanni De Poli. What is Model. Introduction. di Poli: Methodologies for Expressive Modeling of/for Music Performance Methodologies for Expressiveness Modeling of and for Music Performance by Giovanni De Poli Center of Computational Sonology, Department of Information Engineering, University of Padova, Padova, Italy About

More information

Spectral Sounds Summary

Spectral Sounds Summary Marco Nicoli colini coli Emmanuel Emma manuel Thibault ma bault ult Spectral Sounds 27 1 Summary Y they listen to music on dozens of devices, but also because a number of them play musical instruments

More information

Articulation Guide. Berlin Orchestra Inspire.

Articulation Guide. Berlin Orchestra Inspire. Guide Berlin Orchestra Inspire 1 www.orchestraltools.com OT Guide CONTENT I About this Guide 2 II Introduction 3 III Recording and Concept 4 IV Berlin Series 5 1 Berlin Orchestra Inspire... 6 Instruments...

More information

ECE 4220 Real Time Embedded Systems Final Project Spectrum Analyzer

ECE 4220 Real Time Embedded Systems Final Project Spectrum Analyzer ECE 4220 Real Time Embedded Systems Final Project Spectrum Analyzer by: Matt Mazzola 12222670 Abstract The design of a spectrum analyzer on an embedded device is presented. The device achieves minimum

More information