Incarnated sound: from bodily vibrations to biophysical music performance


Incarnated sound: from bodily vibrations to biophysical music performance

Marco Donnarumma's Master of Science by Research, Sound Design
School of Arts, Culture and Environment
The University of Edinburgh, Edinburgh EH8 9DF
Supervisor: Dr. Martin Parker
August 16, 2012

Abstract

The research presented here shows how bioacoustic body signals can be musically meaningful in different performative contexts. Being mainly a performer, my aim in focusing on the sonic capabilities of the body is to investigate sensuous and physical modalities of computer-aided musical interaction. The design of the Xth Sense (XS) is presented; this is a biologically sensitive musical instrument that can be manually reproduced from scratch and at a low cost. During a performance, the XS amplifies and analyses a player's muscle sounds, which are in turn live-sampled using the same data stream; the resulting sound forms are diffused by loudspeakers. This paradigm, which I call biophysical music, is grounded upon the original notions of visceral embodiment and sound-gesture, which are presented and discussed. By examining the contextual role of neurophysiological and self-perceptive mechanisms, evidence is provided which suggests that biophysical music has the potential to affect the sensory system of both the player and the listener. The compositional strategies underlying the performance of biophysical music are presented in the practical context of three works for the XS. A set of muscle sound features is reported and its extraction documented. The features are used to drive idiomatic mapping and DSP processes. Finally, it is illustrated how the instrument autonomously adapts its algorithms to the player's physiology by identifying body muscular state. These findings, it is hoped, set a benchmark for the application of bioacoustics to the design of new musical instruments and to computer music performance.

Contents

1 Description
  1.1 Media folder contents
2 Xth Sense: an instrument, an aesthetic
  2.1 Body, biological media and computing systems
    2.1.1 Introduction to biotechnological performance practice
    2.1.2 Agency, effort and metaphor as functions of expressivity
  2.2 Creativity and technology today
    2.2.1 Critical understanding as a basis for empowerment
    2.2.2 Designing a redistributable instrument
3 Music for Flesh II
  3.1 Biophysical music
    3.1.1 Encoding bodily agency
    3.1.2 Feature extraction and compositional strategies
  3.2 Visceral embodiment
    3.2.1 Sound-gesture
    3.2.2 Articulating corporeal music expression
4 Hypo Chrysos
  4.1 Performance, strain and the sensory system
    4.1.1 Proprioception and neuromuscular feedback

    4.1.2 Being a body affecting bodies
  4.2 Effort and physicality
    4.2.1 Agency without musicianship
    4.2.2 Mapping textural richness
5 Ominous
  5.1 Playfulness
    5.1.1 Multidimensional sound-gestures
    5.1.2 Synchronous mapping of two MMG signals
  5.2 Machine learning
    5.2.1 An adaptive musical instrument
    5.2.2 Identifying muscular states
6 Conclusions
  6.1 Summary
  6.2 Future directions
A DIY documentation
  A.1 Parts list and schematic
  A.2 How-to: build an Xth Sense sensor
  A.3 Software environment
  A.4 Get started
B Pd-extended patches
C Further notes
D Audiovisual documentation
E Additional images

List of Figures

2.1 A multidimensional model of expressiveness in a BDMI gesture
2.2 MMG of a sustained contraction: spectrogram (in the background); waveform (white, in the foreground); logarithmic spectrum (yellow outline)
2.3 The first hardware used to record muscle sounds
The Xth Sense biophysical wearable sensor. Photograph: Chris Scott
Frequency response of a muscle sound captured by the WM and KEG
Performer, sensors, laptop and loudspeakers. Studio setup at Inspace, Edinburgh, UK. Photograph: Chris Scott
Block diagram of the MMG feature extraction
Still of a sound-gesture in the fourth section of Music for Flesh II, and the related mapping diagram. Photograph: Mark Daniels
Still of a sound-gesture consisting of repeated upward contractions of the left forearm. Photograph: Lais Pereira
The author during a performance of Hypo Chrysos. Photograph: Thr3hold
One of the blocks used in the performance. Photograph: Chris Scott
Hypo Chrysos studio session at Inspace, Edinburgh, UK. Photograph: Chris Scott
A sequence of multidimensional gestures in Ominous. Photograph: Marika Kochi
The 4-stage DSP system used in Ominous
Closing gesture of the first section of Ominous. Photograph: FILE

A.1 Schematic of the latest Xth Sense circuit (1.2). It has been designed to be as simple as possible, so as to be easy to build and extend
A.2 Detail of the [input.chain] object
B.1 Overview of the Xth Sense digital interface
B.2 The algorithm extracting the Natural and Soft features
B.3 The algorithm extracting the Linear, Tanh and Maximum Running Average features
B.4 Detail of the computation of the Maximum Running Average feature
B.5 The Xth Sense hacked version of the bubbler object from the Soundhack library, wrapped in a graphical interface
B.6 The Xth Sense video software. It receives the MMG features via OSC messages and uses them to excite a swarm of particles and direct other live video processing
B.7 The Xth Sense audio patch coded for Hypo Chrysos. One of the DSP stages (top), the mapping module (bottom right), OSC unit (center), mixer and feature dispatcher (center/bottom left)
B.8 Multi-layered scaling algorithm for feature mapping
B.9 An algorithm tracking the rhythm of muscular contractions
B.10 The Xth Sense integrated Machine Learning system based on supervised learning
B.11 Detail of the artificial neural network
E.1 Stereo format stage plan of Music for Flesh II
E.2 Immersive format stage plan of Music for Flesh II
E.3 Graphical score for Music for Flesh II indicating duration, intensity and texture of the sound-gestures
E.4 Stage plan of Hypo Chrysos
E.5 The earliest MMG recording using the software Ardour

Chapter 1

Description

This submission presents a series of three performance works using the XS. These are the outcome of research that brought together hardware design, bioacoustics, interactive music programming, and performance practice. The sensor hardware design is described in Section 2.2.2, and the documentation needed to reproduce it is found in Appendix A. The software computational idioms and their application to performance are illustrated throughout the text. Technical details on feature extraction, gesture mapping, and machine learning can be found in Sections 3.1.2, 4.2.2, and 5.2.2 respectively; software screenshots are collected in Appendix B and referenced in the text where appropriate. Appendix A.3 includes an overview of the software anatomy and a step-by-step installation tutorial. The muscle sound characteristics and their use in real-time performance are explained and contextualised in the frame of two interactive music concerts and an action art piece. These works appear in temporal order in Chapters 3, 4, and 5; in this way, it is hoped, a clear chronological account of the research development is provided. In various parts of the text, the reader is invited to pause and listen to audio material or watch video recordings. All the media mentioned in the text can be found by browsing the folder named Incarnated-Sound media on the attached SD card. In the text, this folder is referred to as the media folder; its content index can be viewed next.

1.1 Media folder contents

<pre><code>.
-- content-index.txt
-- Instrument
   -- code
      -- Xth-Sense_additional-libs_LINUX.tar.gz
      -- Xth-Sense_additional-libs_MacOSX.zip
      -- xth-sense-lib.zip
      -- Xth-Sense.zip
   -- parts-list-and-schematic
      -- PARTS-list_v1.0_2012.pdf
      -- Xth-Sense-v1.5_schematic_2012.pdf
   -- tutorials
      -- HOWTO_build-your-biosensor_2012.pdf
      -- Xth-Sense_GETSTARTED.pdf
      -- xth-sense_suggested-layout1.jpg
      -- xth-sense_suggested-layout2.jpg
      -- xth-sense_suggested-layout3.jpg
-- READ-ME-FIRST.txt
-- thesis_v1_r3.pdf
-- Works
   -- audio-samples
      -- arm-muscle-sound.ogg
      -- hc_blood-floow-and-mmg-sample.wav
      -- hc_textural-mapping_section-three.wav
      -- mfii_gesture-echoes-sample.wav
      -- mfii_sound-gesture_section-four.wav
      -- omn_multidimensional-sound-gesture-2_section-one.wav

      -- omn_multidimensional-sound-gesture-3_section-one.wav
      -- omn_multidimensional-sound-gesture_section-one.wav
   -- interviews-and-articles
      -- music-for-flesh-ii_vasa-gallery.png
      -- music-for-flesh-ii_weave.png
      -- xth-sense-bio-interface-berlin_createdigitalmusic.png
      -- xth-sense_createdigitalmusic.png
      -- xth-sense_gatech-news.png
      -- xth-sense_reuters.png
   -- live-audio-recordings
      -- CAPTIONS.txt
      -- marco-donnarumma_hypo-chrysos.wav
      -- marco-donnarumma_music-for-flesh-ii.wav
      -- marco-donnarumma_ominous.wav
   -- pictures
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_10.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_2b.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_3.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_4.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_5.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_6.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_7.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott_8.jpg
      -- marco-donnarumma_hc_inspace-residency_march2012_by-chris-scott.jpg
      -- marco-donnarumma_hc_trendelenburg_gijon_dec2011_by-th3shold.jpg
      -- marco-donnarumma_mfii_cafe-oto_london_apr2012-by-marika-kochi.jpg
      -- marco-donnarumma_mfii_georgia-tech_atlanta_feb2012_by-bence-kollanyi.jpg
      -- marco-donnarumma_mfii_inspace_edinburgh_march2011_by-dimitris-patrikios.jpg
      -- marco-donnarumma_mfii_inspace_edinburgh_march2011_by-mark-daniels.jpg
      -- marco-donnarumma_mfii_inspace-residency_edinburgh_march2011_by-chris-scott.jpg
      -- marco-donnarumma_mfii_netaudiolx_lisbon_jan2012_by-lais-pereira.jpg
      -- marco-donnarumma_omn_cafe-oto_london_apr2012-by-marika-kochi_2.jpg

      -- marco-donnarumma_omn_cafe-oto_london_apr2012-by-marika-kochi_3.jpg
      -- marco-donnarumma_omn_cafe-oto_london_apr2012_by-parag-mital.jpg
      -- marco-donnarumma_omn_file-hypersonica_sao-paulo_july2012_by-lais-pereira.jpg
      -- marco-donnarumma_xth-sense-biosensors_2011_by-chris-scott.jpg
   -- videos
      -- CAPTIONS.txt
      -- marco-donnarumma_hypo-chrysos.mp4
      -- marco-donnarumma_music-for-flesh-ii.mp4
      -- shiori-usui_into-the-flesh.mp4

10 directories, 57 files</code></pre>

Chapter 2

Xth Sense: an instrument, an aesthetic

"Everything we do is music" - John Cage

The Xth Sense (XS) is a musical instrument created to satisfy a desire for new sounds. However, before discussing the research findings, the aesthetic underpinning the instrument has to be elaborated. The instrument's nature reflects an aesthetic grounded in the notions of biomedia and empowerment. This is the starting point for our discourse on the XS, and although this chapter might seem a little far from sound and music, it provides a useful premise.

2.1 Body, biological media and computing systems

In his seminal article What is Biomedia? (Thacker, 2003, p. 47), Eugene Thacker underlines the informative character of biological media by noting that not only can everything be understood as information, but information is everything, in that every thing has a source code.

Biological media (or, simply, biomedia) are understood in this text as the source code of the body, or in other words, as the organic protocols that define the entire configuration of our organism. Through biomedia it is possible to describe in great detail the functioning of the body system. Information Technology (IT) and defense industries have not overlooked the instrumental potential of biomedia. NEC, a Japanese IT giant, has tested digital walls that use a custom facial recognition system to gather information about passers-by and serve real-time, physiologically and demographically targeted ads (Lah, 2010). In the United States, a program named FAST has been started by the Department of Homeland Security's Science and Technology Directorate (Burns, 2008). The program investigates the use of sensor arrays to covertly conduct surveillance on individuals who are not yet suspected of a crime. In an attempt to pre-know the advent of criminal activities, the system describes the criminal potential of a subject by secretly observing and storing a diverse range of biometric data 1, including cardiovascular signals, pheromones, electro-dermal activity, and respiratory measurements (EPIC, Electronic Privacy Information Center, 2011). Per contra, the unparalleled heterogeneity of the body's potentials (and its inherent inadequacies) has pushed the envelope of artistic practice. By drawing upon the complexity of biological protocols, artists have escaped the characterisation of the body as a fixed entity, a utility, or an unexpressive bunch of measurable organs.
The coupling of creative practice and scientific research has resulted in the evolution of the body into disparate artistic objects, such as a biotechnological construct (Linz, 1992), a bioelectric interface (Knapp and Lusted, 1990), a living musical score (Votava and Berger, 2012), a remote spatial controller (NIMk, 2011), and, in the case of my own work, a self-enclosed musical instrument. Machines seamlessly infiltrate a body to track down the electrical pulses of neurons, cellular reactions, and palpitations of the flesh. The body is revealed as an organised, yet unpredictable system: a networked order of integrated agents capable of learning, reasoning, reacting, and interacting in conjunction with other entities. This is the body technology.

1 Biometrics refers to a system of identification based on physiological and behavioural traits. It is used in access control, the security industry, and advertising.

Here, the meaning of technology

is to be understood as a complex, emergent system of rules and living matter rather than a situated, deterministic automaton. On the one hand, complexity and unpredictability source the expressivity of the body; on the other, they constrain its integration with machines. Thinking about music, the combination of body and machine suffers from a heavily mediated relationship which, too often, produces the disappearance of the former or the celebration of the latter. Since the 1970s 2, technological devices have been used to musically portray the body's source code. The aim is not to subject the body to a sort of biodata-mining 3, as corporations and governments seem to do, but rather to devise unexplored musical strategies. This idea is embodied in the development of what can be called Biosensing Digital Musical Instruments (BDMI) 4. These are electronic music systems that use a computer to mediate between the potentials of the biological body and a virtual sonic universe.

Introduction to biotechnological performance practice

During the past forty years, two strands of biotechnological music performance have arisen: biofeedback and biocontrol (Knapp and Lusted, 1990). Biofeedback is a medical technique which makes human physiological processes tangible, so that an individual can garner an understanding of her inner body. In the 1970s, musicians Alvin Lucier and David Rosenboom (Rosenboom, 1974) developed the first biofeedback instruments that would enable them to modulate music with their brainwaves (also known as the electroencephalogram or EEG). In the 1980s performance artist Stelarc began using biological data and bioacoustic sounds in his works with robotic prostheses (Donnarumma, 2012). From about the 1990s until the last decade, research has advanced towards a more complex interaction embodied in biocontrol interfaces.
These are instruments that capture neuronal impulses (in the form of electrical voltage) via electrodes pressed against the skin.

2 Specific bibliographic references are given in the next section. Here, it might be useful to refer to the thorough historical review presented in (Ortiz, 2012), published in a recent issue of the journal econtact!, for which the author has been a Guest Editor.

3 Data-mining is a relatively recent field of computer science that studies the modalities by which recurrent patterns can be extrapolated from large data sets by means of artificial intelligence and statistics. Here the term is purposely stretched to suggest an analogy with biological data.

4 The definition is a semantic extension of the model of the Digital Musical Instrument (DMI) presented in (Miranda and Wanderley, 2006).

They capture

the EEG produced by the brain, the electromyogram (EMG) released by the skeletal muscles, and the electrocardiogram (ECG) generated by the heart. These biodata trigger digitally synthesised music. This paradigm has been explored by several artists, most notably by Atau Tanaka with the Biomuse (Tanaka, 2000) (developed by Ben Knapp and Hugh Lusted 5), Yoichi Nagashima (Nagashima, 2003), and Eduardo Reck Miranda and Andrew Brouse with Brain-Computer Interfaces (BCI) (Miranda and Brouse, 2005). Neuronal control of music has been successfully developed; however, with the exception of Tanaka's corporeal performances with the EMG, the physical qualities of the body have not yet been fully explored. It is towards these corporeal modes of interaction that this research was oriented; therefore, the existing musical strategies based on bioelectric data did not seem applicable: EEG and ECG are not directly related to physical motion, and EMG does not describe an actual movement, but rather the intention of movement signalled by the brain. A BDMI capable of capturing and emphasising the expressive qualities of the physical body was needed. At this point some questions arose: in which ways can a computer augment the carnality of a player's body? How can the inherent unpredictability of the body be preserved during such a process of mediation? To tackle these issues, the distinctive elements of the gesture of a BDMI performer were analysed. The goal was to gain a better understanding of the body in this context. In the next section, a multidimensional model of expressivity that emerged from this investigation is proposed.
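The contrast just drawn — bioelectric data used to trigger synthesised patterns, versus a signal that directly conveys physical energy — can be sketched in a few lines of code. The following Python fragment is purely illustrative (the XS software itself is written in Pure Data, and every name here is hypothetical): a trigger-style mapping emits a discrete event when a biosignal crosses a threshold, whereas a continuous mapping lets an output parameter follow the measured bodily energy moment by moment.

```python
def trigger_mapping(biosignal, threshold):
    """Biocontrol-style mapping: emit a discrete event at each upward
    threshold crossing (a crossing, not a sustained level)."""
    events, above = [], False
    for i, x in enumerate(biosignal):
        if x >= threshold and not above:
            events.append(i)   # e.g. start a pre-composed pattern here
        above = x >= threshold
    return events

def continuous_mapping(biosignal, max_gain=1.0):
    """Corporeal-style mapping: an output gain tracks the signal's
    instantaneous magnitude, so the sound follows bodily energy."""
    peak = max(abs(x) for x in biosignal) or 1.0
    return [max_gain * abs(x) / peak for x in biosignal]

signal = [0.0, 0.2, 0.8, 0.9, 0.3, 0.0, 0.7, 0.1]
events = trigger_mapping(signal, threshold=0.5)  # two upward crossings
gains = continuous_mapping(signal)               # one gain per sample
```

In the trigger case nothing of the gesture's energy profile survives into the sound; in the continuous case the output is proportional to it, which is the property this text later associates with muscle sounds.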
The model will help us establish the position of this research in the area of biotechnological performance practice.

Agency, effort and metaphor as functions of expressivity

The BDMI gesture model elaborated below (Figure 2.1) takes into account some widely discussed factors, such as agency, effort, and metaphor, but it makes exclusive distinctions with respect to their qualities 6. Although the analysis that follows could possibly embrace digital musical instruments in general, it should be constrained to the realm of BDMI performance.

5 See (Knapp and Lusted, 1990).

6 The idea for this model came about after reading (Brent, 2012).

In Mapping Out Instruments, Affordances, and Mobiles (Tanaka, 2010, p. 89), Atau Tanaka

refers to the characterisation of expressivity by researchers such as Claude Cadoz and Antonio Camurri, in contrast with the intuitive approach of performers like Joel Ryan. He does so to identify the key to full expressivity not just in the effectiveness of communication, but in the sense of agency that the system gives back to the performer (emphasis by the author).

Figure 2.1: A multidimensional model of expressiveness in a BDMI gesture. (The figure plots three axes: Agency, with poles experienced and perceived; Effort, with poles integrated and instrumental; Metaphor, with poles embodied and synchretic.)

Tanaka refers to agency as something that the player receives from the instrument. In fact, agency is not an internal cognitive factor, but a sense of awareness that is internalised. Musical agency exists when a player (or a listener) experiences the actuality of an expressive relationship between effort and sonic event. It is a sense that is never realised, and may only transpire through involuntary facial expressions and other autonomic responses. Agency is not only experienced by the performer, but also perceived by the audience at the same (real) time. A gesture exhibiting a weak sense of agency undermines the listener's interpretation of a performance; the player's physical body becomes immaterialised within and disconnected from the virtual sound world she creates (Kim and Seifert, 2006, p. 141). Here lies the expressive gap that sometimes creates that uncanny feeling of disbelief in a performance. The link between musical agency and the effort exhibited by the player is made clear in the article Touchstone, in which Sally Jane Norman, Michel Waisvisz, and Joel Ryan provide an exemplary image (Norman et al., 1998):

A singer's effort in reaching a particular note is precisely what gives that note its beauty and expressiveness. The effort that it takes and the risk of missing that note forms the metaphor for something that is both indescribable and the essence of music.

In the area of biotechnological music performance the same applies, although there is a crucial distinction to be made. Depending on a given BDMI, the gestural effort can be integrated or instrumental. Integrated effort is intended as a bodily impulse which is either directly mapped to continuous control parameters, or deployed as the actual source of sound. As with instruments based on the EMG or on muscle sounds (also known as the mechanomyogram or MMG), the resulting sonic form is generally proportional to the apparent effort of the gesture 7. The continuity of musicianship and musicality is made transparent throughout the performance. In its instrumental form, instead, effort is primarily cognitive. This is the case with BCI which, for instance, may require the player to control the heart rate in order to enter a pre-determined physiological state, which eventually triggers musical patterns. Here the effort is physical too, but it is not easily discernible (nor are its effects on the sound). The audience is required to decode the performance to fully appreciate the music. Metaphor is another key to the audience's understanding of the music being played. With or without the performer's willingness, musical gestures contribute to the construction of metaphors grounded upon a shared cultural knowledge. As observed in (Fels et al., 2003, p. 109): Metaphor enables device designers, players and audience to refer to elements that are common knowledge or cultural bases...
Metaphor restricts and defines the mapping of a new device. Through metaphor, transparency increases, making the device more expressive.

The listener's understanding of both familiar and unfamiliar sounds draws on perceptual models based on a shared knowledge of sound-to-gesture relationships (Godøy et al., 2006, p. 29), which, during real-time performance, are triggered in different ways by the player.

7 As has been noted, the EMG records only the intention of movement, not the actual contraction. Sometimes this causes the relation between motion and EMG data to be less perceivable than that produced with the MMG.

In a performance using

BDMI, for instance, metaphor emerges in different ways. When it comes in the form of a tangible and evident quality, metaphor becomes embodied. Imagine a performer slowly increasing the frequency and loudness of a sinewave by lifting the arms towards the ceiling. In contrast, a synchretic metaphor is one in which two different (or contradictory) elements are coupled within a gesture. Think of a player sitting still whilst a growling sound appears abruptly in the sonic field. By combining points along each of the model's axes it is possible to derive different modalities of musical interaction. Far from wanting to elaborate all the possibilities that come into play in this scenario, the author's personal approach to performance with BDMI is described in Chapters 3, 4, and 5. Driven by the idea of the biological body as a self-contained musical entity, the goal of this research was to explore a territory where a high degree of experienced and perceived agency is critical, the effort is integrated within the musical system, and metaphors are embodied in every musical gesture. Given the lack of a suitable instrument, the investigation began with the conception of an original BDMI.

2.2 Creativity and technology today

Creative technologies are democratised: computers can be accessed locally, remotely, at any time, and at contained costs. It would be ingenuous, however, not to trace the origin of this democratisation to the intricate roots of the industry and its corporations. In fact, such a shift is not the result of the effort of a few individuals; rather, it represents a systemic consequence of the expansion of the global market. Massive circuitry production is increasingly cheap, processors can be as small as a quark, and the exploitation of human resources is sadly somatised as a contingent implication of our society.
This is both an attractive (if not unmissable) chance to increase capital and an opportunity to define new models of social control. By injecting into the mainstream market (supposedly) new and (allegedly) accessible technologies, corporations have succeeded in embedding popular culture with a technological familiarity that could not have been imagined before. On the flipside, a global community striving for the so-called Do It Yourself (DIY) ethic has rapidly grown, and it constitutes today a great part of the organic matter that

feeds our creative ecosystem. Old and new technologies, obsolete and alternative devices: anything can be reverse engineered, mangled, hacked, disrupted, extended, and shared in order to satisfy a relentless hunger for discovery. Arguably, collaborative learning models and peer-to-peer distribution of knowledge have become critical to our socio-cultural dynamics.

Critical understanding as a basis for empowerment

Despite the accessibility that corporations seek by providing an off-the-shelf creative product, be it software (SW) or hardware (HW), its source or schematic needs to be protected by copyright laws that impede its open re-distribution or modification. A user does not buy the product itself, but rather a license to use it; in other words, the product is simply and effectively closed (Steiner, 2008). In contrast, DIY and Free Libre Open Source Software (FLOSS) strategies deliver open SW and HW that can be freely re-distributed and expanded by the community (Soler, 2008). Nonetheless, DIY projects are often not concretely accessible to a wide audience, as specialised background knowledge is needed. Here we face a paradoxical model. Corporations have realised the production potential of the DIY community, and have begun to slightly loosen their policies, so that makers and hackers would develop new applications for their devices. The marketing model of (and the hacking hype around) the Microsoft Kinect and the Nintendo Wii 8 exemplifies this strategy quite well. Such considerations help us realise that, although corporations' strategies and the DIY ethic seem to be antithetic by nature, they both contribute to a generalised awareness of the accessibility of technological practices. Whereas a few years ago artists felt the urge to foster technological advances in order to investigate novel approaches to the arts, today the situation is somewhat reversed.
Artists can rely on a comfortable and pragmatic infrastructure of compelling software environments and integrated devices. Looking at software frameworks such as Pure Data, Max/MSP, or Processing, hardware prototyping platforms like the Arduino or the Raspberry Pi, and increasingly powerful mobile devices such as smartphones and tablets, it could be argued that the artistic community has earned a long-sought technical emancipation. Accessibility and emancipation do not constitute empowerment on their own, though, and they can be

8 These are two video game controllers, respectively based on computer vision and an array of motion sensors.

deceptive too. Empowerment means to acquire specialised knowledge by understanding and reproducing a process (or a device). Empowerment also implies the cultivation of critical skills that can lead to the conception of truly novel and original paradigms. In this sense, the whole history of knowledge can be thought of as a jigsaw puzzle: each generation adds pieces which bear only a small part of the picture, and gradually the bigger picture unfolds. The personal research that will be described here emerged from such observations and concerns. How to escape the constraint of a generalised artistic practice that relies on closed tools? How can a DIY creative instrument integrate an innovative interaction and an inexpensive design? How can such an instrument be open, yet accessible and easy to use? Such questions are not new; several projects have addressed similar issues in the past, among which the already mentioned Arduino. This is a beautiful project that succeeded in pragmatically answering the concerns elaborated above. The research presented here builds on the Arduino, yet it embodies an aesthetic rather than a technological concern; it calls upon an essential syncretism among the visceral body, music technology, and creative practice. In the next sections the genesis of the research and the instrument design are illustrated.

Designing a redistributable instrument

This research is concerned with the musical application of muscle sounds 9. These are low-frequency sound waves produced at the onset of a muscular contraction, when the chemical energy contained in the muscle cells becomes kinetic. The MMG does not describe the movement itself, but rather the amount of energy that causes the movement. Although the systematic study of muscle sounds started around the 1980s (Oster and Jaffe, 1980, p. 121), so far it has only found actual applications in the medical field.
This phenomenon can be observed on the surface of a muscle when it is contracted. At the onset of a muscle contraction, significant changes in the muscle shape produce a large peak in the MMG. The oscillations of the muscle fibres at the muscle's resonant frequency generate subsequent vibrations. Figure 2.2 shows the MMG of a sustained contraction captured via the XS.

9 In this text the term muscle sound and its technical acronym MMG are used interchangeably.
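The contraction onset just described — a large amplitude peak followed by oscillations at the muscle's resonant frequency — lends itself to a simple envelope-based detection. The sketch below is a hedged Python illustration, not the XS's actual Pd feature extraction; the toy signal, window size, and threshold are all assumptions made for the example.

```python
import math

def rms_envelope(signal, window=64):
    """Short-time RMS envelope: one value per non-overlapping window."""
    env = []
    for i in range(0, len(signal) - window + 1, window):
        frame = signal[i:i + window]
        env.append(math.sqrt(sum(x * x for x in frame) / window))
    return env

def detect_onset(envelope, threshold):
    """Index of the first envelope frame exceeding the threshold."""
    for i, e in enumerate(envelope):
        if e > threshold:
            return i
    return None

# Toy MMG-like signal: 1 s of rest, then a decaying 25 Hz burst
# (25 Hz lies inside the low-frequency band typical of muscle sounds).
sr = 1000  # sample rate in Hz
rest = [0.0] * sr
burst = [math.exp(-3.0 * t / sr) * math.sin(2 * math.pi * 25 * t / sr)
         for t in range(sr)]
env = rms_envelope(rest + burst)
onset_frame = detect_onset(env, threshold=0.1)  # frame index, not sample
```

The envelope stays at zero during rest and jumps as soon as a window overlaps the burst, so the onset frame marks the start of the contraction to within one window.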

Figure 2.2: MMG of a sustained contraction: spectrogram (in the background); waveform (white, in the foreground); logarithmic spectrum (yellow outline). The sound sample is available in the attached media folder. The bulk of this sound, and of the other sound material linked in this text, consists of very low frequencies; it is suggested to use an appropriate sound system, or headphones. See the content index in Section 1.1, filename arm-muscle-sound.ogg, subfolder audio-samples.

Several times in this text the MMG is referred to as a sound. Even though some may argue for a different interpretation of what a sound is, it is natural to use the term in this context as, de facto, the MMG produced by the muscle is an acoustic oscillation. As such, it can be amplified and heard through headphones or loudspeakers. This is how this musical journey started; after a few listening sessions in which some rudimentary custom sensors were used to amplify the sound produced by the flexion of an arm (Figure 2.3), it became clear that those little, yet detailed, vibrations would serve well in a musical context. Before starting the development of the XS, four crucial criteria were defined:

- to develop a wearable, unobtrusive and extremely sensitive device;
- to implement efficient real-time capture of diverse muscle sounds;
- to make use of inexpensive hardware solutions so as to ensure a low reproduction cost;

to foster open redistribution of the hardware by choosing adequate production methodologies.
Interestingly, the MMG seems not to be a topic of interest in the field of music technology. The relevant literature includes little information about muscle sounds 10, and apparently most BDMI researchers focus on EEG, EMG, ECG, or multidimensional control data which can be obtained through wearable accelerometers, gyroscopes, and similar sensors. Despite the apparent lack of pertinent documentation, useful technical information regarding the design of a MMG sensor was collected by reviewing the biomedical engineering literature.
Figure 2.3: The first hardware used to record muscle sounds.
The MMG has been used for general biomedical applications (Alves et al., 2010; Garcia et al., 2008) and as alternative control data for low cost prosthetic devices (Silva and Chau, 2003). Namely, it is the work of Jorge Silva at Prism Lab (Toronto, CA) that initially inspired this research. His MASc thesis (Silva, 2004) represents a comprehensive resource of information and
10 The most notable mention is included in (Miranda and Wanderley, 2006, pp )

technical insights on the use and analysis of MMG, and it extensively documents the design of a coupled microphone-accelerometer sensor pair (CMASP). The device is capable of capturing muscle sounds in real time. These are composed of inharmonic partials, whose frequency response ranges from 1Hz up to 50Hz. The oscillations of the muscle tissues are transmitted to the skin which, in turn, excites an air chamber. The vibrations are captured by an omnidirectional electret condenser microphone 11 adequately shielded from noise and interference by means of a silicone case. In order to precisely identify muscle signals, a printed circuit board (PCB) is used to couple the microphone with an accelerometer that filters out the vibrations caused by the motion of the arm. Although this design has been proven functional through several academic reports, the criteria of the investigation could be satisfied by a simpler device. With the support of the group at Dorkbot ALBA 12, an original MMG sensor was developed: the circuit did not make use of a PCB or an accelerometer, but deployed the same Panasonic WM-63PRT microphone (WM) indicated by Silva. This early prototype was successfully used to capture heartbeat and forearm muscle sounds; the earliest recordings and analysis of MMG signals were produced with the open source digital audio workstation Ardour2 (Figure E.5) and a benchmark was set in order to evaluate the signal-to-noise ratio (SNR). In spite of the positive results obtained with this prototype, the microphone shielding required further trials.
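The SNR benchmark mentioned above can be illustrated with a minimal sketch: compare the RMS level of a recorded contraction against a recording of the noise floor. The function names and toy data are assumptions; the thesis does not specify how its benchmark was computed.

```python
import math

# Hypothetical sketch of an SNR evaluation: ratio of signal RMS to
# noise-floor RMS, expressed in decibels.

def rms(samples):
    """Root mean square of a list of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from two recordings."""
    return 20 * math.log10(rms(signal) / rms(noise))

# Toy data: a contraction burst ten times stronger than the noise floor.
burst = [0.5, -0.5] * 100
floor = [0.05, -0.05] * 100
print(round(snr_db(burst, floor)))  # → 20
```

In practice the two recordings would be taken with the sensor worn on a relaxed and then a contracted muscle, so that shielding improvements show up directly as a higher SNR figure.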
The optimal shield had to fit specific requirements:
to avoid direct contact of the WM with the skin, as this would generate sound artefacts;
to filter out external sounds by narrowing the WM sensitive area;
to keep the WM static and prevent external air pressure from affecting the signal;
to provide a suitable air chamber for the WM in order to adequately amplify the MMG.
First, the microphone was insulated by means of a polyurethane shield, but due to the strong malleability of this material its initial shape tended to change too easily. Eventually, the requirements were satisfied by a case molded from common silicone. This also enhanced the SNR. Once the early prototype had reached a good degree of efficiency and reliability, the circuit was embedded in a portable plastic box 13, along with an audio output and a cell holder for a 3V coin lithium battery 14. The shielded microphone was embedded in a velcro bracelet and the needed wiring cables were connected to the circuit box (Figure 2.4).
Figure 2.4: The Xth Sense biophysical wearable sensor. Photograph: Chris Scott.
Today the bulk of the hardware remains the same, even though the design was refined. In early 2012 the WM went out of production. After a non-trivial search for a replacement, the Kingstate KECG2742PBL-A (KEG) was chosen. This is just a couple of millimeters taller than the previous one, and it is largely available online. The WM had the advantage of a higher sensitivity; the KEG, however, has a more uniform response to the MMG core frequency range (3-20Hz), which makes it more responsive to deeper or quieter muscle contractions (Figure 2.5). The function of the XS hardware is limited to the capture of muscle sounds. In order to perform music with them, a player relies on dedicated, real-time software. The program handles the analysis of the muscle sounds, the extraction of features, and the digital processing of the audio stream. The software was developed in early 2011, and it has been continuously improved with new computational idioms and strategies that arose by practicing with the instrument.
11 The microphone sensitivity is indicated between 20Hz and 16kHz. However, a spectral sound analysis of the MMG has shown that the lower roll-off is gentle enough to pass sound waves down to 1Hz. It is not clear yet how these MMG components are treated by loudspeakers.
12 Electronics open research group based in Edinburgh. The group has now been merged into the Edinburgh Hacklab. See:
13 Following a conversation with researcher Martin Ling, a hand-held box prototyping kit was adopted in the design. This is quite handy when it comes to the production of a fair amount of sensors, as it offers a ready-made enclosure. It includes a box molded from common plastic, two types of front and rear removable panels, a matrix board, and the needed screws.
14 See the parts list in Appendix A.1

Figure 2.5: Frequency response of a muscle sound captured by the WM and KEG.
The application, developed in Pd-extended 15 on a Linux operating system, is mainly based on the xth-sense-lib 16. This is a collection of over one hundred objects which has been purposely designed for the computation of bioacoustic signals. Although a thorough report on the software environment is outside the scope of this text, a technical overview and a tutorial are included in Appendix A.3, while code screenshots can be viewed in Appendix B. As of July 2012, the hardware documentation and the source code have been publicly released 17, along with step-by-step tutorials to build the hardware and run the software (these are included in Appendix A). Apparently, the instrument has been well received both by musicians and academics, and other artists have been working with the XS 18. This text, however, is meant to focus on the findings of my personal study; therefore, those works are not presented in detail. The interested reader is invited to view Appendix C for further details. In the next chapters, the ideas, insights, challenges, and outcome of this two-year experience with the XS, as both a player and a sound researcher, are described.
15 A free programming language for real time signal processing and computer music. See
16 A few objects are drawn from other Pd-extended libraries, namely: iemlib, moonlib, mrpeach, moocow, cyclone, iemgui, and iemguts. Additionally, two objects from the soundhack library have been customised and included in the xth-sense-lib: +pdelay and +bubbler.
17 Under a Creative Commons Attribution Share-Alike license and a GPL v3.0 license, respectively. See the project's homepage at
18 Further details can be found in Appendix C.

Chapter 3 Music for Flesh II

...and your very flesh shall be a great poem... - Walt Whitman

Music for Flesh II (MFII) is an interactive music performance for enhanced body. In this piece, I literally create music using my body's muscle sounds. By executing a series of given muscular flexions, muscle sounds are generated, live sampled, and diffused through loudspeakers (Figure E.1, E.2). The piece portrays the performer's body as a self-enclosed musical instrument and the flesh's kinetic energy as an exclusive sound-generating force. MFII lends itself well to describing the performance paradigm of biophysical music. This is a term I coined to describe music generated and played in real time by amplifying and processing the acoustic sound of a performer's muscles. The paradigm is underpinned by the notions of visceral embodiment and sound-gesture, which are discussed next.

3.1 Biophysical music

As opposed to bioelectric controllers (which deploy EMG signals), the XS depends on a microphone that picks up subcutaneous mechanical vibrations, or rather, sounds originating within the muscle fibres. The XS uses these sonic vibrations as a sound source to be processed using the same data

stream. The performer controls the live sampling and spatialisation of the muscle sounds, which the computer diffuses through the loudspeakers. At this point, the player modulates the surfacing sonic space by exerting further contractions. It is a creative feedback loop between the player's neuromuscular system 1 and the computer circuitry. This is the principle that underpins the biophysical music model. The production and performance of biophysical music, however, relies on the design of specific compositional strategies and mapping techniques.
Figure 3.1: Performer, sensors, laptop and loudspeakers. Studio setup at Inspace, Edinburgh, UK. Photograph: Chris Scott.

Encoding bodily agency

During the performance, I use the XS to compose real-time music with the clusters of sound released by the muscles. The mechanical pulsation of the tissues is captured by a pair of XS sensors located on the forearms, and analysed by dedicated software which extracts meaningful features. According to unique traits of the body's muscular tension, the muscle sounds are digitally processed and eventually played back through a variable array of loudspeakers (Figure 3.1). A basic characteristic of the muscle sound is that its loudness increases with the strength of the contraction (Bolton et al., 1989). For instance, a sudden and strong contraction of the arm produces a loud sound with a sharp attack and a very short release. In this piece, a specific mapping technique extends the relationship between strength and loudness by adding
1 The combination of nervous system, muscles, and sensory nerves which enables movement.

multiple dimensions to it. The dynamic of each MMG pulse becomes a continuous stream of data that controls the processing of the resulting sound. In order to ensure a fair amount of complexity and richness, up to 8 simultaneous sampling dimensions are made available to the player. In this way the interrelation of agency, musicianship, and musicality can, it is hoped, remain transparent throughout the piece. The neural and biological impulses driving the player's actions become analogous expressive matter, for they emerge as a palpable auditive cosmos. An alternate interpretational layer of the performer's gestural motion is overtly enacted by the music which envelopes the audience. The reader is now invited to view an audiovisual recording of this work. This should also be used as a reference while reading the next sections. The video is available in the media folder attached. Click here, or see the content index in Section 1.1, filename marcodonnarumma music-for-flesh-ii.mp4, subfolder videos.

Feature extraction and compositional strategies

The computer learns about the body's emergent muscular state by extracting discrete and continuous features from the MMG. Each sensor outputs an analog audio signal, and the software digitises it. The result is passed through an array of algorithmic functions that are designed to shape the data stream into control features, namely:
Natural (N)
Soft (S)
Linear (L)
Tanh (T)
Maximum Running Average (MRA)
This section describes the technical idioms behind the extraction system and some practical applications of the data collected. Figure 3.2 shows a block diagram of the data flow in the XS software; the diagram can be a useful reference while reading the next paragraphs 2.
2 Here, the focus is on the data extraction and mapping. The audio processing that takes place before the feature extraction is documented in Appendix A.3.

Figure 3.2: Block diagram of the MMG feature extraction.

The N value is computed in two steps (Figure B.2). First, the software tracks the MMG root mean square (RMS) using a Hanning window of 512 samples. This value is not output immediately, but is fed to a custom function. The result is a continuous event that imitates the elastic and sometimes jittery contraction of the muscle tissues. This can be compared to the bending of a rubber band: when the muscle is flat, N is equal to 0; at the onset of the first contraction, N increases proportionally to the amount of energy released. However, when the contraction ceases, N does not fall back immediately to 0, but bounces back and forth as the muscle tissues recover a static position. Such behaviour is quite interesting for it causes control data to be produced even after a gesture is completed. In MFII, this method is used to involuntarily excite the machine processor, and so provoke aural echoes of a gesture. From the audience's perspective, this represents a rupture of the direct interaction between the performer and the machine. Nonetheless, such disruption unveils a real and unpredictable dialogue between the performer and the computer, and rather than contributing a negative feel, it helps generate more long-term and meaningful forms in performance. The gesture becomes better recognisable through the echoes that widen the auditory space around it. A related sound sample is available in the media folder attached. Click here, or see the content index in Section 1.1, filename mfii gesture-echoes-sample.wav, subfolder audio-samples. The S feature is a softer continuous event which is obtained by passing N through a single exponential smoothing (SES) function (NIST/SEMATECH, 2003). S is used to drive subtle textural permutations by increasing the time and room size of a series of reverb effects, which are located at the end of the processing chain 3.
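The behaviour of N and S can be sketched as follows. The thesis does not disclose the custom "rubber band" function, so the damped mass-spring model below is an assumption chosen to reproduce the described bounce; the SES formula is the standard one from the cited NIST/SEMATECH handbook. This is an illustrative Python sketch, not the Pd-extended implementation.

```python
# Illustrative sketch of the N (elastic) and S (smoothed) features.
# The spring constants are assumptions; only the qualitative behaviour
# -- overshoot and bounce after the contraction ends -- matches the text.

def elastic(values, stiffness=0.3, damping=0.7):
    """Damped spring tracking its input (an RMS stream); it keeps
    oscillating after the input falls to zero, like a recovering muscle."""
    pos, vel, out = 0.0, 0.0, []
    for target in values:
        vel = damping * (vel + stiffness * (target - pos))
        pos += vel
        out.append(pos)
    return out

def ses(values, alpha=0.2):
    """Single exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    out = [values[0]]
    for x in values[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

rms_stream = [0.0, 0.8, 0.8, 0.8, 0.0, 0.0, 0.0, 0.0]
n = elastic(rms_stream)  # still non-zero after the contraction ends
s = ses(n)               # softer contour, suited to slow textural control
```

Note how `n` remains non-zero after the input drops back to zero: this is the "aural echo" effect, control data produced even after the gesture is completed.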
L results from the conversion of the MMG audio signal into control rate messages every 20ms (Figure B.3). This time interval proved the best compromise between a high resolution representation of the biodata and computational performance. L is the most used feature in MFII, as it helps produce the perception of a neat coupling between the player's motion and the musical forms emanating from her body. L is passed through a SES function to obtain T. This feature presents a minimal dynamic, so it
3 Information about conventional audio processing techniques, such as reverberation, filtering, etc., can be found in the common literature, and the mentioned XS audio effects are documented in their respective help files. These can be found in the source code included in the media folder attached. See Section 1.1 to locate the files.
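The audio-to-control conversion behind L can be sketched as below: one control message per 20 ms block of audio, i.e. 50 messages per second at a 44.1 kHz sample rate. The sample rate and the use of peak amplitude per block are assumptions; the thesis only states the 20 ms interval.

```python
# Sketch of deriving L: snapshot the MMG amplitude once per 20 ms block
# (882 samples at 44.1 kHz). Rates are assumptions for illustration.

SR = 44100
HOP = int(SR * 0.020)  # 882 samples per control message

def to_control_rate(audio):
    """Emit one absolute-amplitude message per 20 ms block of audio."""
    return [max(abs(x) for x in audio[i:i + HOP])
            for i in range(0, len(audio) - HOP + 1, HOP)]

one_second = [0.0] * SR
print(len(to_control_rate(one_second)))  # → 50 messages per second
```

T would then be obtained by passing this 50 Hz message stream through the same SES smoothing used for S.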

Table 3.1: Mapping definitions in movement 5

Feature   Left arm                       Right arm
MRA       delay line to pitch shifting   grain size
MRA       pitch-shift delay mix          grains delay time
MRA       not mapped                     granular delay mix
MRA       not mapped                     pitch-shift del. time
MRA       not mapped                     filter freq. cutoff
T         not mapped                     cosine panner

is used to control musical processes that require a careful lead, such as a brief glissando and a minimal sound spatialisation. Eventually, L is reiterated through a sub-process which produces the MRA. This computation consists of three steps (Figure B.4):
1. L is observed in order to identify the running average (RA);
2. the last maximum (LM) of the RA is extracted every 2s;
3. LM is normalised and interpolated with its previous instance.
The result is the MRA of multiple muscle contractions. This is a continuous event that moves away from the micro level of the single gesture, and reflects, instead, the average amount of energy released by the body in a wider time window. Similarly to the mapping of the N feature, the use of the MRA can disturb the audience's perception of a mutual interaction between player and machine. Nonetheless, a clever mapping implementation can outline the performer's agency by placing emphasis on a series of coordinated actions, rather than on isolated gestures. The fifth movement of MFII is almost completely based on the MRA, although the mapping definitions are fairly complex. Here the auditive space is fully filled, nearly saturated. Polyphony is obtained by playing back the sound generated by the forearms. Simultaneously, the sonic matter is drastically processed using the MRA of multiple, bustling muscle flexions. The control parameters of a pitch-shift-based delay, a granular delay (Erbe, 2008) (Figure B.5), a bandpass biquad filter, and a cosine panner depend on the MRA. Additionally, the grains' position within the sonic field is subtly manipulated through T. Table 3.1 illustrates the control array.
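The three MRA steps can be sketched in Python as follows. The 2 s extraction interval comes from the text; the number of control messages per window (assuming the 20 ms control rate, i.e. 100 messages per 2 s) and the linear interpolation weight are illustrative assumptions.

```python
# Sketch of the three-step MRA computation: running average of L,
# last maximum every 2 s, interpolation with the previous instance.
# Window size and blend factor are assumptions.

def running_average(stream):
    """Cumulative running average of a control stream (step 1)."""
    out, total = [], 0.0
    for i, x in enumerate(stream, 1):
        total += x
        out.append(total / i)
    return out

def mra(l_stream, msgs_per_2s=100, blend=0.5):
    """Maximum Running Average: last maximum of the RA per 2 s window
    (step 2), interpolated with its previous instance (step 3)."""
    ra = running_average(l_stream)
    out, prev = [], 0.0
    for i in range(0, len(ra) - msgs_per_2s + 1, msgs_per_2s):
        last_max = max(ra[i:i + msgs_per_2s])
        prev = blend * last_max + (1 - blend) * prev  # interpolate
        out.append(prev)
    return out

# 4 s of control data at 50 messages/s yields two MRA values.
stream = [0.2] * 100 + [0.8] * 100
print(len(mra(stream)))  # → 2
```

Because each value summarises a 2 s window of the running average, the MRA responds to coordinated sequences of contractions rather than to any single gesture, as described above.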

3.2 Visceral embodiment

For David Rokeby, not only is the computer objective and disinterested, but it also removes you from your body (Rokeby, 1998). When performing with a BDMI, a player experiences a continuous imbalance between the real body and the virtual one. It is crucial, therefore, that the physical engagement is strong, multi-layered and meaningful. As noted by Todd Winkler (Winkler, 1995, p. 263), simple one-to-one correspondences are not always musically successful. The composer's job, then, is not only to map movement data to musical parameters, but to interpret these numbers to produce musically satisfying results. The mapping of gestural or biological data to control parameters does not alone constitute a musically expressive mode of interaction. The meaning of those data has to be defined and contextualised in order to deliver the full potential of a performance.

Sound-gesture

The visceral coupling of player and machine put forth by the XS is exemplified by an interpretational model which I call sound-gesture (SG). A SG is not a mere empty-handed gesture on its own; it is a gesture dictated by a neural impulse that generates a given muscular excitement (i.e., a MMG sound). In turn, the muscle sound becomes an expressive sonic event by means of the algorithms that live inside the computer circuits (Figure 3.3). Hence, the SG can be seen as a techno-epistemic enactment of a dormant capability of the body system. A SG is best understood in the frame of the gesture categorisation presented by Marcelo Wanderley and Claude Cadoz in (Cadoz et al., 2000, pp ). If we try to position the SG within the frame of their analysis, it becomes clear that a SG is an anomalous instrumental gesture. Wanderley and Cadoz, in fact, exclude the empty-handed gesture from the instrumental category for it possesses only the semiotic function of the human gestural channel; that of communicating information toward the environment.
They explain that this kind of gesture lacks the ergotic and the epistemic functions; respectively, the existence of a direct contact with the instrument, and the performer's use of the tactile-kinesthetic perception to play the instrument (ibid).

Figure 3.3: Still of a sound-gesture in the fourth section of Music for Flesh II, and the related mapping diagram. Photograph: Mark Daniels.

In the case of the XS, however, the instrument that a performer manipulates is not an external object, but the muscle fibre of her own body. The XS's capability to musically deploy a performer's muscle sounds challenges the nature of an instrumental gesture; the player does not act upon the external environment, but rather within the intimate, bodily milieu. It can be observed, therefore, that a performer can produce specific (physical) phenomena (ibid) by mastering the tension of the body (the ergotic function), whilst experiencing the enactment of a higher muscular and articulatory sensitivity (the epistemic function).

Articulating corporeal music expression

At the micro level, the sonic interaction is straightforward: a single SG, such as the twitch of a wrist, generates a single sound form. The meso (i.e., intermediate) level relies on the articulation of multiple gestures within what I call a scene; this consists of all the processing units and modification parameters available to the performer. By exerting various amounts of muscular force, the player can choose which processing stage to activate and control. Finally, the macro level consists of the overall piece structure; at a given time, diverse and independent SG definitions can be loaded into the system by using a timeline 4. The following paragraph describes specific gestures and the system responses. It is suggested to read it while listening to the related sound sample, which is available in the media folder attached. Click here, or see the content index in Section 1.1, filename mfii sound-gesture section-four.wav, subfolder audio-samples. For instance, during the fourth movement of MFII, the left forearm is repeatedly contracted upward for about 30 seconds (Figure 3.4). This prompts the computer to play back the muscle sound in its purest form: that of a deep, low frequency vibration between 1Hz and 40Hz.
After a minute, the current muscle sound is sampled and transposed up to 60Hz, so as to enhance its physical impact. A few seconds later, by contracting the right arm, the processing of the MMG through delay lines, granular synthesis, and pitch bending is activated and modulated.
4 At the time of MFII, the timeline was automated. Cue points located along the timeline would trigger a new scene at a given time. In the latest software version, however, a timeline was implemented that senses the body's muscular state (stillness, movement, slowness, high activity) and enables events only when the performer's muscles match a given state. This is discussed in Section

Figure 3.4: Still of a sound-gesture consisting of repeated upward contractions of the left forearm. Photograph: Lais Pereira.
At this point a new textural layer appears: the sound of the left forearm is duplicated and scattered in high pitched grains that are spatialised by nervously contracting the wrist. Suddenly, the movement stops for about ten seconds; this results in the interruption of the data stream, which allows the software to enter a condition of stand-by. Within a couple of seconds the body can be forced into avoiding ancillary tension, and then the muscles are completely released. As a result, all the feature values gradually fall to zero and trigger a drastic, yet continuous decrease in the duration of the granular delay lines. With the next contraction the sound grains are mangled; their aural image is deformed until a harsh and glassy bundle of mid-high frequencies emerges, rapidly moving over a wide stereo field. The sustained flexion of the upper limbs causes the machine to steadily increase the loudness and density of the sound output until the player's body stands still, and no sound is produced. This piece is grounded upon a compositional method that relies on defined gestures and neat sound forms. A clear narrative guides the listener through the musical experience 5. A few months after
5 Here, in fact, the sound-gestures follow a graphical score. This can be viewed in Appendix E.3.
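The stand-by condition above, and the state-sensing timeline mentioned in footnote 4, could work roughly as sketched below: classify the recent feature level into a coarse muscular state and fire a scene cue only when the required state is matched. The thresholds and the exact state names are assumptions; only the four states and the enable-on-match behaviour come from the text.

```python
# Hypothetical sketch of a state-sensing timeline: map the recent
# control-feature level to a coarse muscular state, and enable a cue
# only when the body matches the state it requires. Thresholds are
# illustrative, not the XS values.

def muscular_state(recent_levels):
    """Classify a short history of feature values into a body state."""
    avg = sum(recent_levels) / len(recent_levels)
    if avg < 0.02:
        return "stillness"
    if avg < 0.2:
        return "slowness"
    if avg < 0.6:
        return "movement"
    return "high activity"

def maybe_trigger(cue_state, recent_levels):
    """Enable a scene cue only when the body matches the required state."""
    return muscular_state(recent_levels) == cue_state

print(muscular_state([0.0] * 50))             # → stillness (stand-by)
print(maybe_trigger("movement", [0.4] * 50))  # → True
```

With such a scheme, ten seconds of complete stillness would be classified as "stillness" and could put the software into stand-by, as described in the passage above.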

the composition of MFII, the research moved towards a different context of application. The goal became to investigate the extent to which the player's cognition could have been bypassed: How to offer a non-narrative sonic experience? How to put aside musicianship and let the sound of the body simply be? In which ways would the performer be affected? And the audience?

Chapter 4 Hypo Chrysos

The body is the soul's poor house or home, whose ribs the laths are and whose flesh the loam. - Robert Herrick

Hypo Chrysos (HC) is a work of action art for vexed body and biophysical media. During this twenty-minute action I pull two concrete blocks in a circle (Figure 4.1). My motion is oppressively constant. I have to force myself into accepting the pain until the action is ended. The increasing strain of my corporeal tissues produces continuous bioacoustic signals. Blood flow, muscle sound bursts, and bone crackles are amplified, distorted, and played back through eight loudspeakers (Figure E.4). The same bioacoustic data stream excites an OpenGL-generated swarm of virtual entities, lights, and organic forms diffused by a video projector (Figure B.6). The work brings together different media so as to creatively explore the processes wherein self-perception, effort, and physicality collide.

4.1 Performance, strain and the sensory system

HC is freely inspired by the sixth Bolgia of Dante's Inferno, located in one of the lowest of the circles of hell. Here, the poet encounters the hypocrites walking along wearing gilded cloaks filled

Figure 4.1: The author during a performance of Hypo Chrysos. Photograph: Thr3hold.

with lead. It was Dante's punishment for the falsity hidden behind their behaviour; a malicious use of reason which he considered unique to human beings. Using my arms to pull two ropes tied to concrete blocks, I struggle to walk along the stage. The ropes are short, and this forces me to lightly bend the torso forward, while my hips move backwards to maintain equilibrium. The combined weight of the blocks is 30 kg (Figure 4.2). Initially they are not extremely difficult to pull, but in the long run, the resistance of my (very thin) body is truly stretched to the limits. First, I feel the abrasion caused by the friction of the ropes against the hands; after about ten minutes, the tension in my arms becomes painful, and a few minutes later the spinal column feels like burning due to the continuous attrition of the vertebrae 1. In order to keep moving in this condition, the body has to continuously optimise its response to the strain, and this provokes a heightened activity of the sensory system. Following such observation, attention was drawn to the relation between heavy physical exertion and self-perception mechanisms.
Figure 4.2: One of the blocks used in the performance. Photograph: Chris Scott.
1 The first performance resulted in much more strain than I expected, so I included a section in which I bend on my knees, and partially recover for a few minutes. Usually, on the day after the performance, I suffer from delayed onset muscle soreness (DOMS), also known as muscle fever. Stiffness is a natural reaction of the muscles which, after a strenuous exercise, adapt rapidly to prevent damage to the tissues. In this condition every kind of movement is fairly painful, so I have to rehearse the piece two days before the public performance.

Proprioception and neuromuscular feedback

While learning to master the XS, it became clear that playing it required a greater concentration than I was used to. A traditional instrument gives the player direct tactile feedback, which can be used to refine performance skills. In most cases, a BDMI performer lacks this sense of touch; instead of plucking a chord or pressing a key, she either executes an empty-handed gesture or rests in a static position. This is the main reason why I began to perform with my eyes completely closed. Performing as if blindfolded, muscle tension can be better perceived. This is a sense known as proprioception. It uses information from the sensory receptors of the muscles, joints, and inner ear to determine limb position and strength of the effort 2. The muscle sensory receptors are called muscle spindles, and they mechanically record the changes in the muscle length. Proprioception is best understood when compared with exteroception (the tactile sensitivity) and interoception (the sense of movement of the internal organs). As illustrated by Brian Massumi (Massumi, 2002, pp ): Tactility is the sensibility of the skin as surface of contact between the perceiving subject and the perceived object. Proprioception folds tactility into the body, enveloping the skin's contact with the external world in a dimension of medium depth: between epidermis and viscera. The muscles and ligaments register as conditions of movement what the skin internalises as qualities. After performing with the XS for about sixteen months, I felt as if my body had produced an acute enhancement of the proprioceptive sense and that I had acquired improved flexibility of the joints. At the same time, I became unconfident when performing with sound systems that could not provide a Sound Pressure Level (SPL) between dB.
Low frequency sound at such a high SPL makes the body vibrate, and I noticed that the lack of this feeling would somehow compromise my performance. Could there have been something worth investigating? Very recently, a new point of interest for understanding this phenomenon was found in neurophysiological studies on vibration exercise (VbX). VbX consists of body stimulation by means of low frequency mechanical vibrations. Evidence exists which indicates that VbX enhances physiological responses and muscle functions (Cardinale and Bosco, 2003; Cochrane, 2011). By applying external vibrations to the muscle or the tendon, the muscle spindles activate to dampen the tissue displacement. Contextual user studies reported that the continuous activation of the muscle spindles by means of a chronic VbX treatment improves neuromuscular performance (Cardinale and Bosco, 2003). Drawing on this finding, it could be argued that performing with the XS may have the potential to alter proprioception and neuromuscular activity. In a performance using the XS, the player focuses mainly on the sense of proprioception. Because the MMG loudness increases with increasing nerve stimulation (Bolton et al., 1989), the information on the changes in the performer's muscle length becomes paramount to the musical articulation of muscle sounds. But each time a contraction activates the player's sensory nerves, the amplified muscle sound emitted by the loudspeakers feeds back into her body. At this point, the player's sensory fibres activate again to dampen the tissue displacement caused by the external sound wave. As the process happens in real time, the body becomes the subject of a multi-modal sensory feedback 3. Given that VbX can alone improve neuromuscular performance, it is reasonable to suggest that the sensory feedback loop enacted by the XS may be capable of altering the performer's proprioceptive mechanisms. Moreover, it is arguable that in a public concert or performance setting, such a process may have the potential to influence the audience experience. Although time constraints have for the moment impeded a factual validation of the argument elaborated above, a better understanding of such a scenario came from the realisation and performance of HC.
2 A sensory receptor consists of the ending of a sensory nerve. It records an internal or external stimulus, and transduces it into an electrical impulse received by the central nervous system.
3 It is worth noting that other artists have been exploring different neuromuscular implications of real-time computer music interaction. One of the most notable examples is the work of Julie Wilson-Bokowiec and Mark Bokowiec on psychophysical feedback. See (Bokowiec and Wilson-Bokowiec, 2007).
In the next section, a link among proprioception, bodily vibration, and audience engagement is proposed.

Being a body affecting bodies

When the performer's muscle vibration becomes tangible sound breaching into the outer world, it invades the audience members' bodies through their ears, skin, and muscle sensory receptors. The sound makes their muscles resonate, establishing a nexus between player and audience. The

listeners' bodies, the player's body, and the performance space resonate synchronously. The performer's proprioceptive dimension has been magnified and now embraces the bodies of the audience members. The flesh's vibrational force becomes a vector of affect. Here, the term affect refers to Gilles Deleuze and Felix Guattari's definition of a body's potential to affect and be affected: a proprioceptive potential of interaction among bodies (Deleuze and Guattari, 1987). Because of its position between cognition and viscera, affect is autonomous and unactualised. For Massumi, affect is not an object relegated within the body's tissues; rather, it "escapes confinement in the particular body whose vitality, or potential for interaction, it is" (Massumi, 1995, p. 96). In HC, affect expands beyond the boundaries of the player's tissues in which it originates, and modulates the audience's sensory system by activating resonances in their flesh. Audience feedback following performances of HC has gone beyond the boundaries of a theoretical understanding of the practice and shows to what extent the spectator experiences the corporeality of the performance. At the end of a performance in Spain [4], two listeners reported:

"...Marco, are you ok? Can I help you with something, or you want some water? You know my arms hurt now? While watching you I felt as if I was pulling those blocks too! Incredible..."

"...it was such a strong experience. During the performance I realised I was contracting my arms so strong that they hurt, and those sounds... Oh, I hope you're feeling ok now..."

These listeners recognised a physical change in their body. The feeling of strain in their muscles was evident and surprising.
Although their somatisation of the experience is likely to depend on a series of factors (including emotional state and enjoyment), it suggests that the corporeal sounds diffused in the concert space had contributed to actuating autonomic responses in their bodies. A neuromuscular link among our bodies was formed.

[4] Namely, the world premiere of the work, hosted by the Madatac Festival at the Caixaforum, Madrid, Spain in December,

4.2 Effort and physicality

In traditional music and dance, effort implies physicality. When a high degree of technicality is matched by a brave imagination in exploring new sounds and body movements, the experience of a performance can be breathtaking. Imagine a double bass player who bends his whole body unnaturally to resonate a hidden part of his instrument [5], or a dancer forced to use very short crutches to move frantically across the stage [6]. The performance of a computer musician, by contrast, is rarely associated with the idea of effort and physicality. There exists a modernist preconception that the computer's most prominent feature is that of providing great results with effortless human input. Although this notion appears to have been largely internalised by the mainstream, it is deceptive. As Joel Ryan puts it (Ryan, 1991, pp. 6-7):

"Effortlessness in fact is one of the cardinal virtues in the mythology of the computer. It is the spell of something for nothing which brightly colors most people's computer expectations."

I share Ryan's feeling that this is a rather dangerous viewpoint when applied to music and performance. Ultimately, the exhibition of the effort made by a performer on stage is "the element of energy and desire, of attraction and repulsion in the movement of music" (ibid). Being passionate about action art, I aimed to extend the notions of musical effort and physicality to that field. In an attempt to outline the expressive qualities of bodily effort, I looked for a performance strategy that would minimise apparent musicianship, yet offer a sensuous and impactful sonic experience. The choice to create an action art piece allowed me to temporarily leave the comfort zone of a fixed concert setting. By changing the performative context, I was challenged to conceive new ways of playing the XS.
Next, I describe the path that, from the idea of musical effort, led me to broaden the application of the XS. Before reading the following sections, the reader is invited to watch the performance video available in the attached media folder. Click here, or see the content index in Section 1.1, filename marco-donnarumma hypo-chrysos.mp4, subfolder videos.

[5] This image refers to an inspiring concert by John Eckhardt, which I attended in June 2012 at the Dialogues festival in Edinburgh, UK.
[6] Here I refer to a scene from the dance piece Body Remix/Goldberg Variations by the Compagnie Marie Chouinard. A video can be viewed on-line at

Figure 4.3: Hypo Chrysos studio session at Inspace, Edinburgh, UK. Photograph: Chris Scott.

Agency without musicianship

John Cage, referring to the compositional process of Sixteen Dances [7], states that sonic events do not need to be defined by the composer in order to exist and be meaningful within a composition. By drawing the overall movement of the music and setting aside the need for control over its qualities, sound forms simply emerge. In this case, music is a result of the composer's (and the player's) acceptance, rather than control (Cage and Charles, 2000, p. 102). In this work, the performer's role is certainly not that of a musician in the strict sense of the term. In HC the goal is not to create or play music, but simply to pull a weight. Because of the intense strain, the player has little time to think about playing and must focus on the proprioceptive sense in order to resist the strain and continue to move (Figure 4.3). In this condition, it is difficult to attribute any musical intention to the gestures. The strain level of the player's tissues describes the movement of the music, and the nature of the sonic events cannot be controlled or intentionally determined. By forcing the body into a condition of intense physical exertion, musicianship is deterred. But how, then, to find a meaningful link between the performer's corporeality and the resulting music? By studying the muscle sounds produced under constant exertion, it was observed that, after a large peak in the MMG at the onset of the contraction, the signal amplitude becomes very low. This suggested the idea of increasing the XS sensitivity [8]; by doing so, the instrument captures not only weaker MMG signals, but also the sound of the blood flow [9]. The result is a continuous and dense stream of low frequencies modulated by sudden muscle bursts. An audio sample of blood flow and MMG signals recorded with this configuration is available in the attached media folder.
Click here, or see the content index in Section 1.1, filename hc blood-floow-and-mmg-sample.wav, subfolder audio-samples. Although these sounds are extremely difficult (if not nearly impossible) to control, the varying intervals in their movement represent a source of meaningful microtonal variations. Neither the music nor the moving images are controlled by the body; rather, they emerge from within its tissues. This

[7] A dance piece for Merce Cunningham (1951).
[8] The related computational idiom is described in Appendix A.3.
[9] This, in fact, has a frequency response similar to the MMG, only much lower in amplitude.

strategy helps explore how the organisation of the sonic experience can be abstracted from the player's cognitive process and made apparent through the agency of the sensory system.

Mapping textural richness

During the performance of HC, the acoustic waves originating within the veins and the muscles of the performer's body are digitally magnified. The sounds are manipulated by means of a two-stage DSP system, which consists of a stack of feedback delay lines and distortion effects (fuzz and all-pass). At first, the soundscape consists of dispersed, punchy low frequencies. Then, multiple sonic instances of the signal are stored, distorted, and fed back into the system (Figure B.7). Because the input is continuous, a wall of sound slowly emerges. A frequency band is added during each section by varying the distortion drive, and eventually the sound spectrum becomes thick and harsh. The reader is invited to listen to a short passage of the performance that exemplifies this process. The sample, recorded during a live performance, is available in the attached media folder. Click here, or see the content index in Section 1.1, filename hc texturalmapping section-three.wav, subfolder audio-samples. The mapping system consists of a small array of continuous events; this helps avoid a complete saturation of the system. A drawback of using a limited set of control features is reduced sonic richness; therefore, a strategy was developed that diversifies the sonic outcome by making the most of little control data. The feature mapping does not change significantly throughout the piece, as subtle changes of the same mapping proved more effective. For this purpose a multilayered scaling function was designed.
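The multilayered scaling idiom — a custom curve, a range offset, and an optional reversal applied to the feature stream before mapping — can be sketched as follows. This is an illustrative Python stand-in for the actual Pd code (Figure B.8); the power-curve shaping and the function name are assumptions, not the XS's exact implementation:

```python
def scale_feature(x, curve=1.0, lo=0.0, hi=1.0, reverse=False):
    """Shape, offset, and optionally reverse a normalised feature value.

    x       -- incoming MMG feature, assumed normalised to [0, 1]
    curve   -- > 1 gives an exponential-like response, < 1 a logarithmic-like one
    lo, hi  -- custom output range (the offset stage)
    reverse -- flip the direction of the mapping
    """
    x = min(max(x, 0.0), 1.0)        # clamp defensively
    y = x ** curve                   # cheap log/exp-like curve shaping
    if reverse:
        y = 1.0 - y
    return lo + y * (hi - lo)        # rescale into the target range

# One continuous feature stream, several destinations: each mapping uses
# its own curve and range, yielding subtle variations of the same data.
mmg = [0.0, 0.1, 0.4, 0.9]
delay_mix = [scale_feature(v, curve=2.0, hi=0.8) for v in mmg]
panner = [scale_feature(v, curve=0.5, lo=-1.0, hi=1.0, reverse=True) for v in mmg]
```

Varying only the curve and the output range over a one-to-many mapping is enough to diversify the sonic outcome of a single MMG stream.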
The idiom is fairly simple: before the actual mapping takes place, the incoming data stream is processed by a custom logarithmic or exponential function; the stream can then be offset by setting a custom range, and eventually reversed (see the code in Figure B.8). The variations on the soundscape of HC depend on the S and MRA features of the forearms, which provide the least jittery data. The features are mapped to the wet mix of a delay effect and a distortion unit, to the feedback amount of two delay lines, and to the degree of a cosine panner. By using a minimal one-to-many feature mapping and varying the curve scaling and range offset of a continuous MMG stream, the XS can produce a uniform soundscape

in which richness is experienced through manifold microtonal variations. Moreover, the body's physiological state before playing, and the exhaustion accumulated throughout the performance, drastically influence the control features. This is how textural variances of the soundscape exist without being planned beforehand or consciously enacted. Another computational idiom purposely coded for HC is called anlz.rhythm, a rhythm-tracking algorithm. Rhythm here refers to the cadence of muscle contractions; in other words, how many times an MMG feature reaches a user-defined threshold. When the feature peaks Y times, the algorithm generates a trigger (Figure B.9). This starts the playback of a pre-recorded sound sample. The Y value can be set by the user, randomly generated, or autonomously defined by the XS according to the player's current muscular energy. The idiom is used throughout the piece to trigger signals [10], such as a single reverberated percussion hit or a masking sound effect indicating the transition from one scene to another. Now the reader is invited to listen to a complete audio recording of the performance. This is different from the video seen before. A comfortable listening level and (possibly) closed eyes will ensure the best experience of the work. The file is available in the attached media folder. Click here, or see the content index in Section 1.1, filename marco-donnarumma hypo-chrysos.wav, subfolder live-audio-recordings.

[10] The term is intended here as in the soundscape studies glossary.
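The logic of anlz.rhythm — counting threshold crossings of an MMG feature and firing a trigger every Y peaks — can be sketched in Python as follows (the actual idiom is a Pd patch, Figure B.9; class and variable names here are illustrative):

```python
class RhythmTracker:
    """Count rising-edge threshold crossings of an MMG feature and
    emit a trigger every `y` peaks (illustrative stand-in for anlz.rhythm)."""

    def __init__(self, threshold, y):
        self.threshold = threshold
        self.y = y              # peaks per trigger; could also be random,
                                # or derived from the current muscular energy
        self.count = 0
        self.above = False      # were we above threshold last frame?

    def step(self, value):
        """Feed one feature value; return True when a trigger fires."""
        rising = value >= self.threshold and not self.above
        self.above = value >= self.threshold
        if not rising:
            return False
        self.count += 1
        if self.count >= self.y:
            self.count = 0
            return True         # e.g. start playback of a sound sample
        return False

tracker = RhythmTracker(threshold=0.6, y=3)
feature = [0.1, 0.7, 0.2, 0.8, 0.9, 0.1, 0.65, 0.3]
triggers = [tracker.step(v) for v in feature]
# the third peak (0.65) fires the trigger
```

Counting rising edges, rather than raw samples above threshold, ensures that a sustained contraction counts as one peak instead of many.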

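For completeness, the two-stage wall-of-sound process described for HC — a continuous input feeding delay lines whose feedback path is distorted — can be sketched numerically. This is a sample-by-sample Python toy, not the actual Pd patch (Figure B.7); the tanh fuzz and all parameter values are assumptions:

```python
import math

def fuzz(x, drive):
    """Crude fuzz: drive the signal into a soft clipper (tanh)."""
    return math.tanh(drive * x)

def process(samples, delay=64, feedback=0.7, drive=3.0):
    """One delay line with distortion in the feedback path.

    A continuous input plus non-zero feedback slowly accumulates
    energy -- the 'wall of sound' effect; raising `drive` thickens
    the spectrum, while tanh keeps the loop from blowing up.
    """
    buf = [0.0] * delay              # circular delay buffer
    out = []
    for i, x in enumerate(samples):
        delayed = buf[i % delay]     # read the sample from `delay` ago
        y = x + feedback * fuzz(delayed, drive)
        buf[i % delay] = y           # write back: the feedback loop
        out.append(y)
    return out

# A constant low-level input builds up rather than dying away.
quiet = [0.05] * 1024
built = process(quiet)
```

Because the clipper saturates, the loop converges to a dense, stable drone instead of exploding — a rough analogue of the slowly emerging wall of sound.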
Chapter 5

Ominous

"The flesh is the surface of the unknown." - Victor Hugo

Ominous (OMN) is a sculpture of incarnated sound. The piece embodies, before the audience, the metaphor of an invisible and unknown object enclosed in my hands, made of malleable sonic matter. Like a mime, I model the object in empty space by means of whole-body gestures. The bioacoustic sound produced by the contractions of my muscle tissues is amplified, digitally processed, and played back through nine loudspeakers. The natural sound of my muscles and its virtual counterpart blend into an unstable sonic object, which oscillates between a state of high density and one of violent release. As the listeners imagine the object's shape by following my gestures, the sonic stimuli induce a perceptual coupling: the listeners see, through sound, the sculpture which their sight cannot perceive. OMN is an homage to the artist Alberto Giacometti. The piece is an interpretation of a recurrent topic in his work, that of a constant irrational search and movement towards an unknown object [1]. This theme is embodied in the threatening, bronze-cast sculpture Hands Holding the Void, which is the inspiration for this performance [2].

[1] This quote is taken from a bulletin of the St. Louis Art Museum (US), in which the author (apparently unknown) rephrases a description allegedly attributed to André Breton. See (Saint Louis Art Museum, 1967, p. 2).
[2] Also known as The Invisible Object ( ). I previously knew this work, but I saw it for the first time at

5.1 Playfulness

The work's realisation took a large amount of time. One of the main concerns was how not to repeat the performance modalities of the previous works. At that time [3], I felt that the earlier compositional strategies had become constraining. After all, I had been performing with the XS for over a year. My technique had improved, and this, perhaps, could help me investigate a mode of interaction that I had not been able to devise before. In MFII and HC, each arm generates individual sounds which are processed separately. The software analyses the MMG of the arms as if they were completely independent from each other. In most cases, however, a limb flexion tends to enact sympathetic vibrations of the adjacent limbs. This means that the muscle sound emitted by a limb is a joint result of the flexion of the observed muscle and the subsequent vibrations of the other limbs. In short, the MMG analysed at the capture point is the sum of interrelated limb vibrations. A skilful coordination of multiple limbs results in fine control over the MMG dynamics. In addition, scientific studies on the spectral variances of the MMG found that, as muscular force increases, the MMG frequency spectrum becomes broader (Orizio et al., 1990). This implies that the muscle sound spectrum can actually be modulated by varying the intensity of a contraction. By improving whole-body coordination, a player can weigh her muscular force so as to produce specific spectral results. These observations suggested examining the relationships underlying simultaneous MMG data streams. As a result, a model for a multidimensional SG (Figure 5.1) was implemented. This relies on reciprocal time and intensity relations of two synced MMG signals.

Multidimensional sound-gestures

When playing a traditional instrument, limb coordination is critical to both the quality of the music and the pleasure of performing.
In the case of a string instrument, for instance, synced gestures cause complex timbre variances. The initial plucking defines the amplitude and rate

[2, continued] the National Gallery of Art in Washington, DC, US, in May The sculpture consists of a human-like figure combining natural and abstract traits, which seems to hold an invisible object. Its body rests in an unstable position and its suffering gaze seems about to explode in a loud cry.
[3] Around the beginning of May

Figure 5.1: A sequence of multidimensional gestures in Ominous. Photograph: Marika Kochi.

of the string's oscillation: a gentle gesture provokes a basic vibration, while a forceful one introduces distortion and harmonics. The fingering, in turn, determines pitch changes, modulates the sound dynamics, and causes resonances. For a player, being able to create a specific sonority by skilfully articulating such gestures can be gratifying. With a traditional instrument in mind, the XS configuration was refined so that a limb contraction would produce a sound, and a synchronous flexion of another limb would modulate that same sound. The model is described next. Here, the reader is invited to listen to a live recording of this performance. The file is available in the attached media folder. Click here, or see the content index in Section 1.1, filename marco-donnarumma ominous.wav, subfolder live-audio-recordings.

Figure 5.2: The 4-stage DSP system used in Ominous.

The MMG signal of the left bicep (i.e., the plucked string) flows through a four-stage DSP system (Figure 5.2), whose parameters are driven by synced contractions of the right flexor muscle (i.e., the fingering). Instead of using a DSP system with one global output, each DSP stage sends its resulting signal to the loudspeakers. This strategy enables the playful creation of a multilayered sound flow. The enjoyment lies in the fact that disparate sonic forms can be precisely shaped

by coordinating and fine-tuning whole-body gestures that address one or multiple DSP stages at once. Table 5.1 shows the mapping used in the first section of OMN. Next, I describe the mapping technique and the signal routing strategy.

Synchronous mapping of two MMG signals

At the beginning of the piece, the left fist is rhythmically opened and closed. The pulsating, low-frequency sound bursts of the flexor are amplified, filtered, and distorted. By accentuating the onset of a contraction I can intuitively broaden the MMG spectrum and modulate the amount of distortion of the higher partials (40-45 Hz). As the texture and colour defined by the first processing unit characterise all the subsequent sound forms, being able to precisely control this DSP stage is critical. At the same time, I have to keep the right arm still so as not to activate the other processes. An audio sample of the outcome of this sound-gesture is available in the attached media folder. Please note that this file and the following ones are extracted from a different performance than the previous one in this chapter. Click here, or see the content index in Section 1.1, filename omn multidimensional-sound-gesture section-one.wav, subfolder audio-samples. After a few seconds, the torso is bent forward and the right arm is lifted. Then the hands are slowly brought together, as if to enclose the invisible object. As the torso is bent towards the legs, the muscle sound becomes louder. The loudness increment is not caused by a stronger contraction of the arm muscles; rather, it emerges naturally from the coordinated tension of the torso and shoulder muscles, which are now stretched. The muscle sound, in the form of a deep sound wave, vibrates the bodies of the audience members. The S feature of the left arm is mapped to the narrowness (Q factor) of a resonant filter. A minimal flexion of the fingers drastically reduces the Q factor, and the resonance bandwidth then becomes wider.
The resulting signal is routed to a transposition effect and a delay line, which output piercing high frequencies that cut through the sonic field with a sweeping movement. The sample of the sound-gesture illustrated above is available in the attached media folder. Click here, or see the content index in Section 1.1, filename omn multidimensionalsound-gesture-2 section-one.wav, subfolder audio-samples.
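The synchronous mapping just described can be sketched as follows: per control frame, features of the two synced MMG streams set parameters of different DSP stages. The cube-root curve and all numeric ranges below are illustrative assumptions; the stage names loosely follow Table 5.1:

```python
def q_from_s(s, q_max=25.0, q_min=0.7):
    """Left-arm S feature (0..1) -> resonant filter Q.

    Per the text, even a minimal flexion drastically reduces Q and
    widens the resonance band, so the curve is inverted and steep
    (the cube root is an assumed shape, not the XS's exact one).
    """
    s = min(max(s, 0.0), 1.0)
    return q_max - (q_max - q_min) * s ** (1.0 / 3.0)

def frame_parameters(left_s, right_mra):
    """One logical frame of the synchronous mapping: features of the
    two synced MMG streams become parameters of different DSP stages."""
    return {
        "filter_q": q_from_s(left_s),
        "pitchshift_delay_mix": right_mra,          # right-arm MRA -> wet mix
        "pitchshift_delay_feedback": 0.9 * right_mra,
    }

# Two synced feature streams, paired frame by frame:
left_s_stream = [0.0, 0.05, 0.5]
right_mra_stream = [0.2, 0.2, 0.8]
params = [frame_parameters(s, m)
          for s, m in zip(left_s_stream, right_mra_stream)]
```

Even the small flexion (S = 0.05) already collapses the Q noticeably, mirroring the steep response described in the text.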

Table 5.1: Mapping definitions in section 1

Feature | Left arm      | Right arm             | Processed signal
S       | Q factor      | not mapped            | input 1
S       | reverb size   | not mapped            | aux 1
MRA     | cosine panner | not mapped            | aux 1 output
MRA     | not mapped    | pitch-shift del. mix  | aux 2 output
MRA     | not mapped    | pitch-shift del. fdb  | aux 2 output

The second DSP stage receives the low-frequency sound supplied by the first stage, and feeds it to a stereo reverb and a cosine panner. By twisting the left wrist, the reverb size is increased and the sound spatialised. As the reverberated signal increases, it flows through the third DSP stage. Here, the sound is transposed once more and passed to a feedback delay line. The arms are slowly opened while maintaining very high muscle tension; a constant contraction force of the right arm produces a high and steady MRA, which, in turn, triggers the saturation of the feedback delay. At the same (logical) time, the MRA of the left arm controls the sound spatialisation.

Figure 5.3: Closing gesture of the first section of Ominous. Photograph: FILE.

First, the muscle sound explodes in a grave, resonant rumble, and then it mutates into a heavy and persistent metallic rattle. The sound density is intense. As the arms are opened further, the screeching sound becomes harsher. The sequence is frantically repeated a few times until the object appears ten times bigger than its initial size. It becomes evident that it can hardly be contained, and soon after, as the hands are completely separated, the object's energy is liberated into the air (Figure 5.3). At this moment, the torso is stretched backwards and the muscles are released. The MMG features fall to 0 and the sound rapidly fades out. The related audio sample is available in the attached media folder. Click here, or see the content index in Section 1.1, filename omn multidimensional-sound-gesture-3 sectionone.wav, subfolder audio-samples.

5.2 Machine learning

As a result of the development of a cascaded DSP system and a synchronous mapping technique, the sound of the XS is richer, and the control modalities are more playful. A player can control sound density, spectrum, timbre, loudness, timing, and spatialisation by fine-tuning a muscle contraction and coordinating the tension of the adjacent limbs. On the one hand, the use of the subtle nuances of whole-body muscular tension enables a higher degree of freedom in performance; on the other, it makes the XS more difficult to play. The challenge lies not in the player's technical skill, but rather in the unpredictability of the body. Although a player can acquire remarkable skills through training, the body is subject to unconscious biological functions that have repercussions on the performance. For instance, the MMG loudness can become very unstable, even when cleverly controlled by the performer. During a concert, veins often bloat with blood because of the increasing heart rate.
As the sound of the blood flow becomes louder, quieter muscle sounds become less audible; this flattens the overall loudness and should therefore be avoided (unless it is caused on purpose). In this case, the forearms should be lifted and brought close to the chest, for this position makes the blood flow down the arms and lets the veins recover their original shape. By combining this movement with a slower breathing rate, the body enters a state of relaxation in about ten seconds, and it is then possible to play normally. Despite the

fairly large amount of time I have been playing the XS [4], I still need to train such processes. It appears that training alone does not enable a reciprocity between the spontaneous mechanisms of the inner body and the present configuration of the XS. This prompted new questions: how can the instrument autonomously recognise given changes of the biological body? Which musical strategies could be enabled by such a behaviour?

An adaptive musical instrument

In the field of biology, the term adaptation is defined as "the process of change by which an organism or species becomes better suited to its environment" (OED, 2010). I like to think of the XS as a (computational) organism, and of the body as its environment. This analogy suggested exploring the modalities by which the instrument could identify the body's muscular state and autonomously adapt to it. In the light of this goal, some features of the XS were reconsidered. Up to that point, a performance's time structure was controlled by a timeline. Triggers located at key points in time loaded a given preset scene; the timeline was unaware of the performer on stage. As long as one could memorise each cue point by rehearsing regularly, this approach proved functional. However, it became evident that the lack of control over the musical structure was a cause of distress during performance. This kind of interaction was not a viable long-term solution. Following a conversation about Machine Learning (ML) with the artist Ben Bogart [5], this area of study became integral to the research. ML is a branch of Artificial Intelligence (AI): the design of algorithms that enable a computer to identify and learn generic patterns within empirical datasets (Bishop, 2007). ML is currently used in a number of different fields, such as robotics, bioengineering, computational finance, videogames, and music performance.
As a result of a literature review of those fields of application, a first test with the free ML software Wekinator (Fiebrink, 2011) was conducted. The software recognised different muscular states by learning from the MMG features provided by the XS. The test's success prompted the

[4] At the time of writing, exactly one year and a half.
[5] Ben is a colleague whose current work is centred on the implementation of artificial intelligence methods that investigate the notions of dream and memory in computational systems. His work can be viewed at http: //

development of an integrated ML system for the XS. This is based on Artificial Neural Networks (ANN). An ANN is a mathematical model inspired by biological neural networks, in which a group of interconnected artificial neurons decodes similarities and establishes statistical relations among streams of data. Since the goal in using this technique is to extend the performance capabilities of the instrument, and not to develop or validate a learning method, the details of the ML algorithm are not elaborated in this text. Rather, the next section describes the practical application of the learning system to the performance of OMN.

Identifying muscular states

As of today, Pd-extended offers only one ML library. It is called ann and was developed by IOhannes Zmoelnig, Davide Morelli, and Georg Holzmann. Although this is an efficient tool, its usage can be difficult to grasp for a beginner. In addition, given my lack of basic theoretical knowledge in the field, the learning curve was fairly steep. However, after some months of intermittent work, a supervised machine learning unit was developed, and the previous timeline was extended so as to make it capable of responding to the body's muscular state. There are several types of ML algorithm, and each of them has advantages and restrictions [6]. Supervised learning seemed the most logical approach to the research problem. During supervised learning, the computer analyses training examples offline. Each example consists of input data and a desired output, which is indicated by a label. By training multiple times, the algorithm defines and generalises the patterns that correlate the input data and the output label. In this way, it learns behaviours that can afterwards be identified in real time. The application of this method in the context of the XS is structured as follows. First, a player executes performance gestures, and the instrument monitors different muscular states offline.
The features extracted from the MMG signals (N, S, L, T, and MRA) are fed to the ANN. This identifies four different states, which are labelled: still, moving, fast, and slow. During real-time

[6] An exhaustive report of all ML algorithms is out of the scope of this text. The interested reader might look into: supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, transduction, and multi-task learning.

performance, the XS compares the stream of MMG features with the patterns it has learnt, detects the current muscular state, and eventually outputs the related label (see the code in Figures B.10 and B.11). Key points are then added to the timeline in order to indicate the change of the scene being played. As time passes and a key point is reached, the computer stands by until it detects that the player is still; only then is a new scene loaded, and specific DSP chains are activated or turned off. When the scene is triggered, the XS automatically fades out the volume of the sound output, and restores it after a few seconds. This idiom avoids the unpleasant clicks and artefacts that would emerge when the DSP arrays are switched on and off. By continuously adapting its algorithms to the biophysical state of the performer's body, the XS leaves the player the challenge and the pleasure of delivering an exciting musical experience. A fascinating outcome is that the instrument can easily adapt to different players. Further research could be dedicated to the design of musical interaction modalities unique to a given player.
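The supervised scheme described above can be illustrated with a minimal Python stand-in. The XS uses an artificial neural network (via Pd's ann library); here a nearest-centroid classifier replaces it to keep the sketch short, but the offline-training/real-time-identification split is the same. The feature names (N, S, L, T, MRA) follow the text; every number below is made up:

```python
# Hypothetical training examples: five MMG features (N, S, L, T, MRA)
# per frame, each labelled with a muscular state.
TRAINING = [
    ([0.02, 0.01, 0.1, 0.9, 0.03], "still"),
    ([0.03, 0.02, 0.1, 0.8, 0.02], "still"),
    ([0.40, 0.35, 0.5, 0.4, 0.45], "moving"),
    ([0.45, 0.30, 0.6, 0.5, 0.40], "moving"),
    ([0.90, 0.85, 0.9, 0.2, 0.95], "fast"),
    ([0.85, 0.90, 0.8, 0.1, 0.90], "fast"),
    ([0.20, 0.15, 0.3, 0.7, 0.20], "slow"),
    ([0.15, 0.20, 0.2, 0.6, 0.25], "slow"),
]

def train(examples):
    """Offline phase: generalise each label to the centroid of its
    examples (a nearest-centroid stand-in for the XS's neural network)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Real-time phase: output the label whose centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))

model = train(TRAINING)
state = classify(model, [0.88, 0.87, 0.85, 0.15, 0.92])
```

In the XS the detected label then gates the timeline: a scene change waits until the classifier reports "still".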

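The adaptive timeline behaviour — stand by at a cue point until the player is detected as still, then fade out, swap the scene's DSP chains, and restore the level — might be sketched like this (class and method names are hypothetical, not the XS's actual code):

```python
class AdaptiveTimeline:
    """Scene changes gated on the detected muscular state. At a cue
    point the system stands by until the player is 'still', then fades
    the master level out, swaps the scene's DSP chains, and restores
    the level -- avoiding the clicks of switching DSP arrays at full
    volume."""

    def __init__(self, scenes):
        self.scenes = scenes      # e.g. names of DSP-chain presets
        self.current = 0
        self.pending = False      # True once a key point is reached
        self.level = 1.0          # master output level

    def reach_cue(self):
        """Called by the timeline when a key point in time is reached."""
        self.pending = True

    def on_state(self, state):
        """Feed the label emitted by the learning system each frame."""
        if self.pending and state == "still":
            self.pending = False
            self._switch()

    def _switch(self):
        self._fade(0.0)           # fade the sound output out
        self.current = (self.current + 1) % len(self.scenes)
        # ... here the new scene's DSP chains would be (de)activated ...
        self._fade(1.0)           # restore the level after the swap

    def _fade(self, target):
        self.level = target       # stub: in reality a short gain ramp

tl = AdaptiveTimeline(scenes=["intro", "build", "release"])
tl.reach_cue()
tl.on_state("moving")  # player not yet still: keep playing
tl.on_state("still")   # now the next scene is loaded
```

Gating the switch on the "still" label is what makes the timeline aware of the performer, in contrast with the earlier fixed-time triggers.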
Chapter 6

Conclusions

6.1 Summary

The research presented here showed how bioacoustic body signals can be musically meaningful in different performative contexts. Being mainly a performer, my aim in focusing on the sound of the body was to investigate sensuous and physical modalities of interaction in computer music. By doing so, a valid strand of research in biotechnological music performance was proposed. I described the design of the XS, a biophysical musical instrument that can be manually reproduced from scratch and at low cost. The XS captures and analyses a player's muscle sounds; these are live-sampled using the same data stream and diffused by loudspeakers. This performance paradigm, which I call biophysical music, is based on the notions of visceral embodiment and sound-gesture. These were presented, and their connection with proprioceptive mechanisms capable of affecting both the player and the listener was proposed. The compositional strategies underlying biophysical music were presented in the context of three works. A set of muscle sound features was reported and its extraction documented. The features were used to drive idiomatic mapping and DSP processes. By feeding the muscle sound features to a Machine Learning system, the XS could identify the muscular state of the performer's body. The instrument used this information to adapt its algorithms to the player's biophysical characteristics.

6.2 Future directions

This work represents, hopefully, only the basis of a broader investigation into the possibilities of DIY biophysical musical instruments. It is exciting to imagine how the performative strategies defined while studying and designing with muscle sounds could be used by other musicians and performers. In order for this to happen, it would be valuable to generalise the findings of this research by conducting systematic user studies. These could lead to a benchmark for the design of other instruments of this kind. In this sense, the development of the present design towards the unified application of a diverse range of biodata constitutes a strong point of interest. A comparative study of MMG, EMG, EEG, functional magnetic resonance imaging, and Doppler amplification of the sounds in the blood vessels could indicate new modes of interaction. At the moment, the design of the XS could be refined and possibly extended. The feature extraction system could be improved by looking at the MMG's timbre and spectral characteristics. The resulting data could suggest new gesture mapping strategies. A larger set of features would also refine and improve the performance of the Artificial Neural Network system. This, in turn, could be largely extended with the collaboration of an expert. As for the hardware, a wireless implementation would broaden its application. Further performative contexts could be investigated, such as dance, theatre, and participative concerts. Finally, it is hoped that the XS, and the aesthetic it embodies, will be useful to the growth of the community of artists and researchers from which I have learnt so much.

Appendix A

DIY documentation

A.1 Parts list and schematic

- 1x Kingstate KECG2742PBL-A electret condenser microphone
- box prototyping kit (or another suitable plastic box and a matrix board)
- resistor 2.2k
- capacitor 1uF, 60V
- coin battery holder
- lithium coin battery 3V
- quarter inch mono chassis jack socket
- electric cable (to wire the components)
- 2m electric flexible cable (audio cable, two poles)
- silicone case (height 1.5cm, diameter 2cm, hole diameter 0.5cm)
- 2 velcro strips (hook and loop sides need to be separated)
- black thread, or staples, to hold the bracelet closed

1 Prices are indicative.
2 To build the silicone case, make a mould. You can use an empty plastic tube: cut it according to the size indicated above and fill it with silicone; the silicone has to be very dense (use common silicone, which you can find anywhere). Remember to insert a screw or a small wooden cylinder in the middle of the mould, so as to shape a hole into the silicone case. Then let it dry for 3 or 4 days and open up the plastic to extract the silicone case.

Figure A.1: Schematic of the latest Xth Sense circuit (1.2). It was designed to be as simple as possible, so that it is easy to build and extend.

A.2 How-to: build an Xth Sense sensor

Time required: from 1 up to 4 hours (depending on the user's skills and practice).

1. Make sure you have all the components (see Appendix A.1).

2. If you bought the box prototyping kit, skip this step. Otherwise, cut the matrix board so that it comfortably fits the box, but bear in mind the board has to be big enough to accommodate all the components.

3. Solder the circuit, carefully following the schematic. You can solder everything except the flexible audio cable. This will be done later on.

4. Using a suitable drill, make two holes in the plastic box. The first hole is needed for the flexible cable to reach the circuit inside the box; this can be done on the longer side of the box. The second hole is needed to fit the jack socket. Jack sockets come in different dimensions, so make sure yours fits into the hole. Before drilling, decide on the best location for the holes; this depends on the layout of your circuit and the dimensions of the jack socket.

5. Cut 1m of electric cable (or more, depending on which body part you want to use).

6. Feed the cable through the hole and solder the voltage and ground wires to the circuit according to the schematic.

7. Now, position the circuit inside the box and fit the jack socket into its hole. Make sure you can close the box. It can happen that the jack socket is slightly higher than the height of the box. If so, just cut away a small piece of the box cover so that you can close the box.

8. Prepare the velcro bracelet. Cut two velcro strips (one with the hooks and one with the loops) about 10/15cm long (or adjust the length according to the limb you want to use). Sew them together, but remember to leave out about 3/4cm at about 1/4 of the whole length of the bracelet. This enables you to easily access the cables in case something is wrong. Also, cut out a small corner at one end of the bracelet. This way the cable will not disturb your movements.

9. When the bracelet is ready, make a small hole on the loop side of the bracelet; the position of the hole should be at about 1/4 of the whole length of the bracelet. We will use this hole to embed the microphone.

10. Take the free end of the flexible cable, and insert it in between the two sides of the bracelet through the corner you opened before. Pull the cable through the two sides of the bracelet until it comes out of the bracelet on the side you did not sew.

11. Solder the microphone pins to two wires (about 3/4cm long). It is crucial to remember which pin is the ground and which the voltage. See the microphone specification sheet (included in this package). (Alternatively, you can use a suitable micro socket, although these are not easy to find and they might be too loose to hold the microphone properly.)

12. Take the wires you just soldered to the microphone, and insert them through the hole you made in the velcro loops.

13. Now you should have: on one side, the free end of the audio cable running through the velcro bracelet; on the other, the wires soldered to the microphone inserted into the bracelet. Open up the free end of the flexible cable and solder the voltage and ground wires to the two microphone wires accordingly. Remember to solder the cables inside the bracelet.

14. You are almost done. Now, insulate each electric cable separately by applying a small piece of black tape on the soldered part. Insulate also the microphone pins separately.

15. Carefully place the silicone case on the microphone. Make sure that the microphone is as static as possible: the more it moves, the less accurate the signal. IMPORTANT: it is crucial that the microphone is located in the middle of the silicone case. If the mic is too high it will touch your skin; this has to be avoided because the mic does not work by contact. If the mic is too deep inside the silicone case, the muscle sound will be very quiet.

16. Insert the battery into its battery holder, close the box, and you are done.

A.3 Software environment

The XS digital interface was developed in Pd-extended on a Linux operating system 3. The software is composed of the main patch 4 called Xth-Sense.pd, and its dedicated library called xth-sense-lib. The latter brings together a broad range of Graph On Parent (GOP) 5 objects designed for MMG computation, and a number of general-purpose computational idioms that facilitate faster programming with Pd-extended. The library is largely inspired by the Pure Data Montreal Abstractions (Pd Mtl) 6. Similarly, the xth-sense-lib aims to provide high-level, standardised objects, so as to shorten the learning curve for new users, and to ensure a rewarding environment for skilled programmers.

The library includes 120 objects and the related help files. These are categorised using a taxonomy based on function categories. Each object is clearly named after its function, and each category is easily recognisable by reading its prefix. So far twelve categories have been implemented:

- anlz: real-time audio analysis;
- count: interpolation and loops;
- efx: real-time audio processing;
- flow: analysis of a data stream;
- gen: sound generators;
- gui: basic GOP tools which can be used to build complex macros;
- midi: MIDI objects;
- mix: audio mixing and routing;
- path: directories and paths management;
- scale: scaling of a data stream;
- smp: sample-based audio objects;
- utils: general-purpose abstractions.

3 Namely, a custom version of Ubuntu Lucid.
4 This is a Pure Data (or Pd-extended) program.
5 A Pd feature that enables a patch or object to have a custom appearance within the calling parent patch.
6 See:
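As an illustration of this naming convention, the prefix-based taxonomy can be resolved programmatically. The helper below is purely hypothetical (it is not part of xth-sense-lib, which is a set of Pd abstractions, not Python code); it only demonstrates how a category is recovered from an object name:

```python
# Hypothetical sketch of the xth-sense-lib naming convention: an object's
# category is encoded in the prefix before the first dot of its name.
CATEGORIES = {
    "anlz": "real-time audio analysis",
    "count": "interpolation and loops",
    "efx": "real-time audio processing",
    "flow": "analysis of a data stream",
    "gen": "sound generators",
    "gui": "basic GOP tools",
    "midi": "MIDI objects",
    "mix": "audio mixing and routing",
    "path": "directories and paths management",
    "scale": "scaling of a data stream",
    "smp": "sample-based audio objects",
    "utils": "general-purpose abstractions",
}

def category_of(object_name):
    """Return the function category of a library object, e.g. 'efx.pshift.ssb'."""
    prefix = object_name.split(".", 1)[0]
    return CATEGORIES.get(prefix, "unknown")

print(category_of("efx.pshift.ssb"))  # real-time audio processing
print(category_of("mix.set.in"))      # audio mixing and routing
```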

Figure A.2: Detail of the [input.chain] object.

A relevant feature of the software is its graphical user interface (GUI); this was designed and optimised, firstly, to offer fast performance prototyping and, secondly, to enable first-time users to achieve complex operations without dealing with low-level objects. The bulk of the GUI consists of two canvases: a side panel and a main window, which I call the deck. The side panel hosts a global preset saving system, transport controls, nested modules, and two instances of the [input.chain] 7 object (Figure A.2). This receives the MMG signal from the XS sensors and executes four functions:

1. filtering: a bank of low-pass filters blocks frequencies higher than 70Hz, while two band-pass filters slightly increase the resonance of the higher partials (30-35Hz);

2. thresholding: by varying an RMS-based threshold, the sensitivity of the XS can be modulated. For instance, when the threshold is very low, the XS captures the sound of deeper muscle contractions and blood flow. Vice versa, when the threshold is high, only loud signals (i.e., stronger contractions) are captured;

3. boosting: by controlling the punch power of a limiter, the MMG input amplitude can be increased (although this causes a decrease in the signal's dynamic range);

4. routing: dispatch of the MMG sound to the deck, where further processing takes place.

7 According to a standard convention among Pd users, the print name of a Pd object is indicated as [name].
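Outside Pd, the first three stages of [input.chain] can be sketched in a few lines. This is an illustrative approximation only (a single one-pole low-pass and a block-wise RMS gate, with made-up parameter values), not the actual patch, which uses a filter bank, resonant band-passes, and a limiter:

```python
import numpy as np

def input_chain(mmg, sr=44100, cutoff=70.0, threshold=0.05, boost=2.0):
    """Sketch of the [input.chain] stages: low-pass filtering, RMS
    thresholding, and boosting of one block of an MMG signal.
    Parameter values are illustrative, not the thesis settings."""
    # 1. filtering: one-pole low-pass keeps energy below ~70 Hz
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
    out = np.empty_like(mmg)
    y = 0.0
    for i, x in enumerate(mmg):
        y += alpha * (x - y)
        out[i] = y
    # 2. thresholding: gate the whole block on its RMS level
    rms = np.sqrt(np.mean(out ** 2))
    if rms < threshold:
        return np.zeros_like(out)
    # 3. boosting: amplify and hard-limit (reduces dynamic range)
    return np.clip(out * boost, -1.0, 1.0)
```

Lowering `threshold` lets quiet activity (deeper contractions, blood flow) through the gate; raising it passes only strong contractions, mirroring the sensitivity control described above.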

The deck (i.e., the main canvas) includes several macro GOP units (see Figure B.1). From top to bottom:

- [global.controls], a GUI interface to save and load presets;
- [container.deck], a graphical wrapper of the following modules;
- [pd workspace], a wide area in which the user creates DSP chains;
- [features.pusher], a lookup module which displays the MMG feature values;
- [analysis.module], an abstraction which encloses the feature extraction algorithms and an audio scope;
- [routing.module], a GOP object for one-to-one and one-to-many mapping of MMG data;
- [midi.module], the same object as above, but producing MIDI data to be used by other software;
- [mixer.deck], a 10-channel mixer which includes level meters, pre-fader aux sends, and a master level;
- [time.sequence], a timeline that handles the structure of the piece by loading preset scenes.

The GUI design seeks to obviate some issues which can arise when working with an intricate, multi-task environment in Pd-extended. During the development of the interface, the main concern was how to maintain the readability and usability of complex DSP patches in a limited area of the screen. Nesting processes within a subpatch 8 is the most common method of keeping patches readable. However, the use of a considerable number of subpatches can be counterproductive, because the software becomes difficult to navigate. The issue was addressed by implementing a tabbed dynamic interface (TDI). This allows multiple subpatches to be enclosed within a single window, and provides triggers to switch between tabbed sets of subpatches. This way, all algorithms are clearly visible and the user can intuitively navigate the interface. The idea of a TDI for Pd-extended was inspired by the concept of space-awareness embodied by IOhannes

8 A nested patch housed within a parent canvas.

Zmölnig in the object [canvasposition] 9. This algorithm returns and controls the current position of the object within its containing canvas. [canvasposition] was incorporated in several of the macro GOP units, along with a simple function which sets the position of multiple patches at once. This mechanism enables the user to switch among six [pd workspace] instances, or to toggle the visibility of the [analysis.module], the [routing.module], and the [midi.module] with a single click.

9 The object is part of the iemguts library by Zmölnig, which deals with meta-programming in Pure Data. For further details see (Zmölnig, 2009).
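Conceptually, the tab-switching mechanism built on [canvasposition] amounts to moving the selected subpatch on-screen while parking the others off-screen. The following Python sketch models only that idea; the class, names, and coordinates are illustrative assumptions, not Pd objects:

```python
# Conceptual model of the tabbed dynamic interface (TDI): one window,
# several "subpatches", and a trigger that places the selected one
# on-screen while parking the others off-screen, as [canvasposition]
# repositioning would. All names and positions are illustrative.

OFFSCREEN = (-10000, -10000)  # parking position for hidden subpatches
ONSCREEN = (0, 0)             # visible position within the window

class TabbedInterface:
    def __init__(self, tabs):
        # every subpatch starts parked off-screen
        self.positions = {name: OFFSCREEN for name in tabs}
        self.current = None

    def show(self, name):
        """Switch tabs: park the current subpatch, reveal the requested one."""
        if self.current is not None:
            self.positions[self.current] = OFFSCREEN
        self.positions[name] = ONSCREEN
        self.current = name

tdi = TabbedInterface(["workspace1", "workspace2", "analysis.module"])
tdi.show("workspace1")
tdi.show("analysis.module")  # workspace1 is parked off-screen again
```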

A.4 Get started

In this tutorial it is assumed that you have installed a recent, working version of Pd-extended. If you have not, first download Pd-extended from the official webpage 10. The XS software runs on Linux and Mac OSX.

SETUP

1. Get an Xth Sense sensor and a quarter inch TRS jack cable.

2. Download the software and the needed libraries from the project website.

3. Install everything:

LINUX

- Unzip the file xth-sense-lib.tar.gz
- Move the whole folder xth-sense-lib into /home/username/pd-externals/
- Unzip the Xth-Sense additional-libs LINUX.tar.gz
- Move the folders iemguts and soundhack into /home/username/pd-externals/
- Unzip the file Xth-Sense.tar.gz
- Move the Xth-Sense folder (which contains the main software you will use) to the location you wish. Avoid any whitespace in the path leading to it.

MAC OSX

- Unzip the file xth-sense-lib.zip
- Move the whole folder xth-sense-lib into /home/username/library/pd/

10 Visit

Please note: the binary files below have been tested on Mac OSX v.10.6 or newer. If yours is a different version, you will need to download the source code and compile the libraries on your machine.

- Unzip the Xth-Sense additional-libs MacOSX.zip
- Move the folders iemguts and soundhack into /home/username/library/pd/

OR

- Download and compile the code of the two additional libraries, iemguts and soundhack (at the bottom of the page)
- Unzip the file Xth-Sense.zip
- Move the Xth-Sense folder (which contains the main software you will use) to the location you wish. Avoid any whitespace in the path leading to it.

4. Open the sensor box. Insert the battery in the holder, and close the box.

5. Wear the bracelet. You want to wear it where the shape of a muscle changes clearly when you make a contraction. You will have to find the right place. The bracelet has to be tight enough not to move, but not so tight that it hurts you. If it is too tight, the muscle does not have enough room to contract. Be careful; the sensor is fairly sturdy, but it can break with careless handling.

6. Connect the output of the sensor box to input 1 of your soundcard with a suitable cable. At this point, make some contractions. The sound of your muscle should be displayed by the meter on your soundcard. Set the input gain at about halfway, so that the muscle sound is loud enough but does not clip.

7. Locate the file Xth-Sense.pd. Double click it to launch it. The patch will pop up.

8. In the Pd menu bar, click Media and turn the Pd audio ON.

9. In the same menu bar, click Test Audio and MIDI. Use this patch to make sure

sound is working.

10. Now, in the XS software click the grey button labelled deck, to display this module. At the bottom right of the deck there is the analysis module (with 5 sliders and a red graph). There is a button that reads Audio.OFF; click it to turn it ON. Click also the grey toggle in the red graph, to activate the visualisation of the muscle sound. Make some more contractions and watch your muscle sound being visualised in the red graph!

MAKE SOME NOISE!

Now that the system is up and running, we will make some sound.

1. The upper part of the deck is called a workspace. This is where you can create your DSP chains by adding effects, etc. Select workspace 1 by clicking the first big circular button at the top right. Click the top right rectangular button edit to edit the Workspace that pops up.

2. Use the keyboard shortcut ctl+1 to create a new object in the Workspace. When you see the blue dotted box, type mix.set.in 1 into it. Click the grey canvas to create the object. This object receives the muscle sound from sensor 1, and feeds it to the XS software.

3. Using the same procedure as above, place your mouse below the previous object and create another object called efx.pshift.ssb 80. This is an effect that transposes the muscle sounds to a higher frequency, so that they can be heard. The argument 80 stands for 80Hz, which is the transposition frequency. This can be set to any frequency.

4. Below this effect, create another object called mix.set.ch one. This object dispatches the audio signal from the workspace to the mixer (at the bottom of the deck). The argument one means it will send audio to channel 1 in the mixer.

5. Now connect the three objects in the order you created them: the outlet of mix.set.in 1 to the inlet of efx.pshift.ssb 80, and the outlet of efx.pshift.ssb 80 to the inlet of mix.set.ch one.

6. Once the objects are connected, close the Workspace (ctl+w). At the bottom of the deck there is a mixer. Slide up the volume of the first fader from the left (that is the Main Out), and the volume of the second fader (that is channel 1).

7. Now, if you try out some gestures, you will hear the sound of your muscles! Be aware that the speakers of your laptop are not capable of reproducing this kind of sound. Use some loudspeakers, studio monitors, or good headphones.
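The transposition performed by an object such as efx.pshift.ssb is a single-sideband frequency shift: every spectral component of the sub-audio muscle signal is moved up by a fixed amount (here 80Hz) so that it becomes audible. As a rough sketch of the underlying idea — an offline FFT-based approximation in Python, not the Pd implementation — the shift can be done by heterodyning the signal's analytic form:

```python
import numpy as np

def freq_shift(x, shift_hz, sr=44100):
    """Single-sideband frequency shifter: move every component of x
    up by shift_hz. Offline FFT-based sketch, not the Pd object."""
    n = len(x)
    # analytic signal: zero the negative frequencies, double the positive ones
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    # heterodyne: multiply by a complex exponential and keep the real part
    t = np.arange(n) / sr
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
```

For example, a 20Hz muscle vibration shifted by 80Hz comes out as an audible 100Hz tone, whereas a plain pitch shifter would instead multiply all frequencies by a constant ratio.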

Appendix B

Pd-extended patches

Figure B.1: Overview of the Xth Sense digital interface.

Figure B.2: The algorithm extracting the Natural and Soft features.

Figure B.3: The algorithm extracting the Linear, Tanh and Maximum Running Average features.

Figure B.4: Detail of the computation of the Maximum Running Average feature.

Figure B.5: The Xth Sense hacked version of the bubbler object from the Soundhack library, wrapped in a graphical interface.

Figure B.6: The Xth Sense video software. It receives the MMG features via OSC messages and uses them to excite a swarm of particles and direct other live video processing.

Figure B.7: The Xth Sense audio patch coded for Hypo Chrysos. One of the DSP stages (top), the mapping module (bottom right), the OSC unit (centre), and the mixer and feature dispatcher (centre/bottom left).

Figure B.8: Multi-layered scaling algorithm for feature mapping.

Figure B.9: An algorithm tracking the rhythm of muscular contractions.

Figure B.10: The Xth Sense integrated Machine Learning system, based on supervised learning.

Figure B.11: Detail of the artificial neural network.

Appendix C

Further notes

In February 2012, the XS was awarded the first prize in the Margaret Guthman New Musical Instrument Competition, and named the world's most innovative new musical instrument by the Georgia Tech Center for Music Technology, Atlanta, GA, US 1. The jury panel included Atau Tanaka, media artist and researcher, and Cyril Lance, chief engineer at electronic musical instrument manufacturer Moog Music. Since then, others have shown interest in the instrument. The most notable collaborations are reported here.

Composer Shiori Usui (JP/UK) is the author of Into the Flesh (2012), the first piece for traditional instruments using the XS. The work was conceived on the occasion of Inventor Composer Coaction, a project that sought to facilitate collaboration between composers and developers of digital and electronic instruments for the creation of new music. It took place at the Department of Music, University of Edinburgh, during the first half of 2012. The piece was premiered by the Red Note Ensemble on 9th May 2012. An audiovisual recording is referenced in Appendix D.

Musician Adam Tindale (CA) and researcher Rebecca Fiebrink (US) have recently invited me to collaborate on a formal and systematic study of the application of Machine Learning techniques to digital musical instruments, with a primary focus on the XS. We are currently planning a long-term research project.

Finally, the reader is invited to visit my on-line portfolio, where further information about the research dissemination can be found. This could be relevant to the evaluation of the work presented. It is worth noting that the portfolio also provides direct links to download my previous publications, which have been continuously reconsidered, reworded, extended, and unified during the thesis writing process. Other related material available on the website includes interviews, press releases, lists of performances, lists of taught workshops, commissions, and awards.

1 The related press release by the Georgia Institute of Technology can be seen at Works/interviews-and-articles/xth-sense gatech-news.png

Appendix D

Audiovisual documentation

This is a list of media demonstrating the outcome of this research. The following audio and video recordings have already been referenced in the text; however, they are reported here so that they remain easily accessible after reading, and so that the information regarding each performance is adequately documented. The URL below each filename can be clicked to view the material. If needed, the same media can be found in the attached media folder; see the related content index in Section 1.1.

Audio recordings

- Ominous. Live at Body/Controlled, LEAP, Berlin, Germany, July. Works/live-audio-recordings/marco-donnarumma ominous.wav
- Hypo Chrysos. Live at Trendelenburg Festival, Gijon, Spain, December. Works/live-audio-recordings/marco-donnarumma hypo-chrysos.wav

Videos

- Music for Flesh II. Live at Alison House, Edinburgh University, UK, March. Works/videos/marco-donnarumma music-for-flesh-ii.mp4
- Hypo Chrysos. Show-reel, recorded at Inspace, Edinburgh, UK, April. Works/videos/marco-donnarumma hypo-chrysos.mp4
- Into the Flesh (Shiori Usui and Red Note Ensemble), for trombone, double bass and Xth Sense. Live at Jam House, Edinburgh, UK, May 2012. Works/videos/shiori-usui into-the-flesh.mp4

Appendix E

Additional images

Figure E.1: Stereo format stage plan of Music for Flesh II. The plan illustrates a stereo sound system setup, plus an additional mono speaker in the centre/rear area of the stage. Legend: performance area, loudspeakers, table with laptop and sound card, subwoofer, audience area, parcan lights; scale in cm.

Figure E.2: Immersive format stage plan of Music for Flesh II. This setup plan is suitable for use with either an octophonic or quadraphonic sound system. Blue squares positioned at the compass points indicate the location of loudspeakers in a quadraphonic system; dotted blue squares indicate the position and orientation of the further speakers to be added in order to use an octophonic setup. Legend: performance area, loudspeakers, control desk, subwoofer, audience area, parcan lights; scale in cm.

Figure E.3: Graphical score for Music for Flesh II, indicating duration, intensity and texture of the sound-gestures.

Figure E.4: Stage plan of Hypo Chrysos. This setup plan is suitable for use with either an octophonic or quadraphonic sound system. Blue squares positioned at the corners of the audience area indicate the location of loudspeakers in a quadraphonic system; dotted blue squares indicate the position and orientation of the further speakers to be added in order to use an octophonic setup. Legend: performance area, loudspeakers, control desk, subwoofer, audience area, white floodlights, screen for video projection, audio monitors, video monitor; scale in cm.

Figure E.5: The earliest MMG recording, using the software Ardour2.


More information

PEP-II longitudinal feedback and the low groupdelay. Dmitry Teytelman

PEP-II longitudinal feedback and the low groupdelay. Dmitry Teytelman PEP-II longitudinal feedback and the low groupdelay woofer Dmitry Teytelman 1 Outline I. PEP-II longitudinal feedback and the woofer channel II. Low group-delay woofer topology III. Why do we need a separate

More information

Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor

Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor Introduction: The ability to time stretch and compress acoustical sounds without effecting their pitch has been an attractive

More information

Networks of Things. J. Voas Computer Scientist. National Institute of Standards and Technology

Networks of Things. J. Voas Computer Scientist. National Institute of Standards and Technology Networks of Things J. Voas Computer Scientist National Institute of Standards and Technology 1 2 Years Ago We Asked What is IoT? 2 The Reality No universally-accepted and actionable definition exists to

More information

PROTOTYPE OF IOT ENABLED SMART FACTORY. HaeKyung Lee and Taioun Kim. Received September 2015; accepted November 2015

PROTOTYPE OF IOT ENABLED SMART FACTORY. HaeKyung Lee and Taioun Kim. Received September 2015; accepted November 2015 ICIC Express Letters Part B: Applications ICIC International c 2016 ISSN 2185-2766 Volume 7, Number 4(tentative), April 2016 pp. 1 ICICIC2015-SS21-06 PROTOTYPE OF IOT ENABLED SMART FACTORY HaeKyung Lee

More information

Computer Coordination With Popular Music: A New Research Agenda 1

Computer Coordination With Popular Music: A New Research Agenda 1 Computer Coordination With Popular Music: A New Research Agenda 1 Roger B. Dannenberg roger.dannenberg@cs.cmu.edu http://www.cs.cmu.edu/~rbd School of Computer Science Carnegie Mellon University Pittsburgh,

More information

Topic: Instructional David G. Thomas December 23, 2015

Topic: Instructional David G. Thomas December 23, 2015 Procedure to Setup a 3ɸ Linear Motor This is a guide to configure a 3ɸ linear motor using either analog or digital encoder feedback with an Elmo Gold Line drive. Topic: Instructional David G. Thomas December

More information

Lesson 1 EMG 1 Electromyography: Motor Unit Recruitment

Lesson 1 EMG 1 Electromyography: Motor Unit Recruitment Physiology Lessons for use with the Biopac Science Lab MP40 Lesson 1 EMG 1 Electromyography: Motor Unit Recruitment PC running Windows XP or Mac OS X 10.3-10.4 Lesson Revision 1.20.2006 BIOPAC Systems,

More information

IoT Strategy Roadmap

IoT Strategy Roadmap IoT Strategy Roadmap Ovidiu Vermesan, SINTEF ROAD2CPS Strategy Roadmap Workshop, 15 November, 2016 Brussels, Belgium IoT-EPI Program The IoT Platforms Initiative (IoT-EPI) program includes the research

More information

Data Converters and DSPs Getting Closer to Sensors

Data Converters and DSPs Getting Closer to Sensors Data Converters and DSPs Getting Closer to Sensors As the data converters used in military applications must operate faster and at greater resolution, the digital domain is moving closer to the antenna/sensor

More information

Fraction by Sinevibes audio slicing workstation

Fraction by Sinevibes audio slicing workstation Fraction by Sinevibes audio slicing workstation INTRODUCTION Fraction is an effect plugin for deep real-time manipulation and re-engineering of sound. It features 8 slicers which record and repeat the

More information

Hugo Technology. An introduction into Rob Watts' technology

Hugo Technology. An introduction into Rob Watts' technology Hugo Technology An introduction into Rob Watts' technology Copyright Rob Watts 2014 About Rob Watts Audio chip designer both analogue and digital Consultant to silicon chip manufacturers Designer of Chord

More information

Brain.fm Theory & Process

Brain.fm Theory & Process Brain.fm Theory & Process At Brain.fm we develop and deliver functional music, directly optimized for its effects on our behavior. Our goal is to help the listener achieve desired mental states such as

More information

Innovative Rotary Encoders Deliver Durability and Precision without Tradeoffs. By: Jeff Smoot, CUI Inc

Innovative Rotary Encoders Deliver Durability and Precision without Tradeoffs. By: Jeff Smoot, CUI Inc Innovative Rotary Encoders Deliver Durability and Precision without Tradeoffs By: Jeff Smoot, CUI Inc Rotary encoders provide critical information about the position of motor shafts and thus also their

More information

An Introduction to the Spectral Dynamics Rotating Machinery Analysis (RMA) package For PUMA and COUGAR

An Introduction to the Spectral Dynamics Rotating Machinery Analysis (RMA) package For PUMA and COUGAR An Introduction to the Spectral Dynamics Rotating Machinery Analysis (RMA) package For PUMA and COUGAR Introduction: The RMA package is a PC-based system which operates with PUMA and COUGAR hardware to

More information

ECE 5765 Modern Communication Fall 2005, UMD Experiment 10: PRBS Messages, Eye Patterns & Noise Simulation using PRBS

ECE 5765 Modern Communication Fall 2005, UMD Experiment 10: PRBS Messages, Eye Patterns & Noise Simulation using PRBS ECE 5765 Modern Communication Fall 2005, UMD Experiment 10: PRBS Messages, Eye Patterns & Noise Simulation using PRBS modules basic: SEQUENCE GENERATOR, TUNEABLE LPF, ADDER, BUFFER AMPLIFIER extra basic:

More information

New Products and Features on Display at the 2012 IBC Show

New Products and Features on Display at the 2012 IBC Show New Products and Features on Display at the 2012 IBC Show The innovative The innovative Rack: 3 units in one The most advanced studio codec The economic Cost-Efficient Solution for IP RAVENNA improved

More information

Tempo and Beat Analysis

Tempo and Beat Analysis Advanced Course Computer Science Music Processing Summer Term 2010 Meinard Müller, Peter Grosche Saarland University and MPI Informatik meinard@mpi-inf.mpg.de Tempo and Beat Analysis Musical Properties:

More information

EngineDiag. The Reciprocating Machines Diagnostics Module. Introduction DATASHEET

EngineDiag. The Reciprocating Machines Diagnostics Module. Introduction DATASHEET EngineDiag DATASHEET The Reciprocating Machines Diagnostics Module Introduction Reciprocating machines are complex installations and generate specific vibration signatures. Dedicated tools associating

More information

Muscle Sensor KI 2 Instructions

Muscle Sensor KI 2 Instructions Muscle Sensor KI 2 Instructions Overview This KI pre-work will involve two sections. Section A covers data collection and section B has the specific problems to solve. For the problems section, only answer

More information

Lian Loke and Toni Robertson (eds) ISBN:

Lian Loke and Toni Robertson (eds) ISBN: The Body in Design Workshop at OZCHI 2011 Design, Culture and Interaction, The Australasian Computer Human Interaction Conference, November 28th, Canberra, Australia Lian Loke and Toni Robertson (eds)

More information

Low Power VLSI Circuits and Systems Prof. Ajit Pal Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Low Power VLSI Circuits and Systems Prof. Ajit Pal Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Low Power VLSI Circuits and Systems Prof. Ajit Pal Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture No. # 29 Minimizing Switched Capacitance-III. (Refer

More information

SREV1 Sampling Guide. An Introduction to Impulse-response Sampling with the SREV1 Sampling Reverberator

SREV1 Sampling Guide. An Introduction to Impulse-response Sampling with the SREV1 Sampling Reverberator An Introduction to Impulse-response Sampling with the SREV Sampling Reverberator Contents Introduction.............................. 2 What is Sound Field Sampling?.....................................

More information

ITU-T Y.4552/Y.2078 (02/2016) Application support models of the Internet of things

ITU-T Y.4552/Y.2078 (02/2016) Application support models of the Internet of things I n t e r n a t i o n a l T e l e c o m m u n i c a t i o n U n i o n ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU Y.4552/Y.2078 (02/2016) SERIES Y: GLOBAL INFORMATION INFRASTRUCTURE, INTERNET

More information

EngineDiag. The Reciprocating Machines Diagnostics Module. Introduction DATASHEET

EngineDiag. The Reciprocating Machines Diagnostics Module. Introduction DATASHEET EngineDiag DATASHEET The Reciprocating Machines Diagnostics Module Introduction Industries Fig1: Diesel engine cylinder blocks Machines Reciprocating machines are complex installations and generate specific

More information

FREE TV AUSTRALIA OPERATIONAL PRACTICE OP- 59 Measurement and Management of Loudness in Soundtracks for Television Broadcasting

FREE TV AUSTRALIA OPERATIONAL PRACTICE OP- 59 Measurement and Management of Loudness in Soundtracks for Television Broadcasting Page 1 of 10 1. SCOPE This Operational Practice is recommended by Free TV Australia and refers to the measurement of audio loudness as distinct from audio level. It sets out guidelines for measuring and

More information

CESR BPM System Calibration

CESR BPM System Calibration CESR BPM System Calibration Joseph Burrell Mechanical Engineering, WSU, Detroit, MI, 48202 (Dated: August 11, 2006) The Cornell Electron Storage Ring(CESR) uses beam position monitors (BPM) to determine

More information

Introduction to Data Conversion and Processing

Introduction to Data Conversion and Processing Introduction to Data Conversion and Processing The proliferation of digital computing and signal processing in electronic systems is often described as "the world is becoming more digital every day." Compared

More information

Automatic Construction of Synthetic Musical Instruments and Performers

Automatic Construction of Synthetic Musical Instruments and Performers Ph.D. Thesis Proposal Automatic Construction of Synthetic Musical Instruments and Performers Ning Hu Carnegie Mellon University Thesis Committee Roger B. Dannenberg, Chair Michael S. Lewicki Richard M.

More information

SMARTING SMART, RELIABLE, SIMPLE

SMARTING SMART, RELIABLE, SIMPLE SMART, RELIABLE, SIMPLE SMARTING The first truly mobile EEG device for recording brain activity in an unrestricted environment. SMARTING is easily synchronized with other sensors, with no need for any

More information

Measurement of overtone frequencies of a toy piano and perception of its pitch

Measurement of overtone frequencies of a toy piano and perception of its pitch Measurement of overtone frequencies of a toy piano and perception of its pitch PACS: 43.75.Mn ABSTRACT Akira Nishimura Department of Media and Cultural Studies, Tokyo University of Information Sciences,

More information

Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music

Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music Computational Parsing of Melody (CPM): Interface Enhancing the Creative Process during the Production of Music Andrew Blake and Cathy Grundy University of Westminster Cavendish School of Computer Science

More information

Spatial Formations. Installation Art between Image and Stage.

Spatial Formations. Installation Art between Image and Stage. Spatial Formations. Installation Art between Image and Stage. An English Summary Anne Ring Petersen Although much has been written about the origins and diversity of installation art as well as its individual

More information

An Integrated EMG Data Acquisition System by Using Android app

An Integrated EMG Data Acquisition System by Using Android app An Integrated EMG Data Acquisition System by Using Android app Dr. R. Harini 1 1 Teaching facultyt, Dept. of electronics, S.K. University, Anantapur, A.P, INDIA Abstract: This paper presents the design

More information

Spectral Sounds Summary

Spectral Sounds Summary Marco Nicoli colini coli Emmanuel Emma manuel Thibault ma bault ult Spectral Sounds 27 1 Summary Y they listen to music on dozens of devices, but also because a number of them play musical instruments

More information

However, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene

However, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene Beat Extraction from Expressive Musical Performances Simon Dixon, Werner Goebl and Emilios Cambouropoulos Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010 Vienna, Austria.

More information

Application Note #63 Field Analyzers in EMC Radiated Immunity Testing

Application Note #63 Field Analyzers in EMC Radiated Immunity Testing Application Note #63 Field Analyzers in EMC Radiated Immunity Testing By Jason Galluppi, Supervisor Systems Control Software In radiated immunity testing, it is common practice to utilize a radio frequency

More information

PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF)

PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF) PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF) "The reason I got into playing and producing music was its power to travel great distances and have an emotional impact on people" Quincey

More information

VivoSense. User Manual Galvanic Skin Response (GSR) Analysis Module. VivoSense, Inc. Newport Beach, CA, USA Tel. (858) , Fax.

VivoSense. User Manual Galvanic Skin Response (GSR) Analysis Module. VivoSense, Inc. Newport Beach, CA, USA Tel. (858) , Fax. VivoSense User Manual Galvanic Skin Response (GSR) Analysis VivoSense Version 3.1 VivoSense, Inc. Newport Beach, CA, USA Tel. (858) 876-8486, Fax. (248) 692-0980 Email: info@vivosense.com; Web: www.vivosense.com

More information

UNIVERSITY OF DUBLIN TRINITY COLLEGE

UNIVERSITY OF DUBLIN TRINITY COLLEGE UNIVERSITY OF DUBLIN TRINITY COLLEGE FACULTY OF ENGINEERING & SYSTEMS SCIENCES School of Engineering and SCHOOL OF MUSIC Postgraduate Diploma in Music and Media Technologies Hilary Term 31 st January 2005

More information

TongArk: a Human-Machine Ensemble

TongArk: a Human-Machine Ensemble TongArk: a Human-Machine Ensemble Prof. Alexey Krasnoskulov, PhD. Department of Sound Engineering and Information Technologies, Piano Department Rostov State Rakhmaninov Conservatoire, Russia e-mail: avk@soundworlds.net

More information

POST-PROCESSING FIDDLE : A REAL-TIME MULTI-PITCH TRACKING TECHNIQUE USING HARMONIC PARTIAL SUBTRACTION FOR USE WITHIN LIVE PERFORMANCE SYSTEMS

POST-PROCESSING FIDDLE : A REAL-TIME MULTI-PITCH TRACKING TECHNIQUE USING HARMONIC PARTIAL SUBTRACTION FOR USE WITHIN LIVE PERFORMANCE SYSTEMS POST-PROCESSING FIDDLE : A REAL-TIME MULTI-PITCH TRACKING TECHNIQUE USING HARMONIC PARTIAL SUBTRACTION FOR USE WITHIN LIVE PERFORMANCE SYSTEMS Andrew N. Robertson, Mark D. Plumbley Centre for Digital Music

More information

Palmer (nee Reiser), M. (2010) Listening to the bodys excitations. Performance Research, 15 (3). pp ISSN

Palmer (nee Reiser), M. (2010) Listening to the bodys excitations. Performance Research, 15 (3). pp ISSN Palmer (nee Reiser), M. (2010) Listening to the bodys excitations. Performance Research, 15 (3). pp. 55-59. ISSN 1352-8165 We recommend you cite the published version. The publisher s URL is http://dx.doi.org/10.1080/13528165.2010.527204

More information

ACTIVE SOUND DESIGN: VACUUM CLEANER

ACTIVE SOUND DESIGN: VACUUM CLEANER ACTIVE SOUND DESIGN: VACUUM CLEANER PACS REFERENCE: 43.50 Qp Bodden, Markus (1); Iglseder, Heinrich (2) (1): Ingenieurbüro Dr. Bodden; (2): STMS Ingenieurbüro (1): Ursulastr. 21; (2): im Fasanenkamp 10

More information

Understanding Compression Technologies for HD and Megapixel Surveillance

Understanding Compression Technologies for HD and Megapixel Surveillance When the security industry began the transition from using VHS tapes to hard disks for video surveillance storage, the question of how to compress and store video became a top consideration for video surveillance

More information

T ips in measuring and reducing monitor jitter

T ips in measuring and reducing monitor jitter APPLICAT ION NOT E T ips in measuring and reducing Philips Semiconductors Abstract The image jitter and OSD jitter are mentioned in this application note. Jitter measuring instruction is also included.

More information

Extending Interactive Aural Analysis: Acousmatic Music

Extending Interactive Aural Analysis: Acousmatic Music Extending Interactive Aural Analysis: Acousmatic Music Michael Clarke School of Music Humanities and Media, University of Huddersfield, Queensgate, Huddersfield England, HD1 3DH j.m.clarke@hud.ac.uk 1.

More information

DELTA MODULATION AND DPCM CODING OF COLOR SIGNALS

DELTA MODULATION AND DPCM CODING OF COLOR SIGNALS DELTA MODULATION AND DPCM CODING OF COLOR SIGNALS Item Type text; Proceedings Authors Habibi, A. Publisher International Foundation for Telemetering Journal International Telemetering Conference Proceedings

More information

Embodied music cognition and mediation technology

Embodied music cognition and mediation technology Embodied music cognition and mediation technology Briefly, what it is all about: Embodied music cognition = Experiencing music in relation to our bodies, specifically in relation to body movements, both

More information

Enhancing Music Maps

Enhancing Music Maps Enhancing Music Maps Jakob Frank Vienna University of Technology, Vienna, Austria http://www.ifs.tuwien.ac.at/mir frank@ifs.tuwien.ac.at Abstract. Private as well as commercial music collections keep growing

More information

Using the BHM binaural head microphone

Using the BHM binaural head microphone 11/17 Using the binaural head microphone Introduction 1 Recording with a binaural head microphone 2 Equalization of a recording 2 Individual equalization curves 5 Using the equalization curves 5 Post-processing

More information

2018 Fall CTP431: Music and Audio Computing Fundamentals of Musical Acoustics

2018 Fall CTP431: Music and Audio Computing Fundamentals of Musical Acoustics 2018 Fall CTP431: Music and Audio Computing Fundamentals of Musical Acoustics Graduate School of Culture Technology, KAIST Juhan Nam Outlines Introduction to musical tones Musical tone generation - String

More information

OVERVIEW. YAMAHA Electronics Corp., USA 6660 Orangethorpe Avenue

OVERVIEW. YAMAHA Electronics Corp., USA 6660 Orangethorpe Avenue OVERVIEW With decades of experience in home audio, pro audio and various sound technologies for the music industry, Yamaha s entry into audio systems for conferencing is an easy and natural evolution.

More information

On the Music of Emergent Behaviour What can Evolutionary Computation bring to the Musician?

On the Music of Emergent Behaviour What can Evolutionary Computation bring to the Musician? On the Music of Emergent Behaviour What can Evolutionary Computation bring to the Musician? Eduardo Reck Miranda Sony Computer Science Laboratory Paris 6 rue Amyot - 75005 Paris - France miranda@csl.sony.fr

More information

HAVERHILL OLD INDEPENDENT CHURCH

HAVERHILL OLD INDEPENDENT CHURCH HAVERHILL OLD INDEPENDENT CHURCH HAUPTWERK v.3 SAMPLE SET MINI SET USER MANUAL Version 1.1 - Lavender Audio 2009 www.lavenderaudio.co.uk Thank you for purchasing this sample set which is a cut down version

More information

Designing for the Internet of Things with Cadence PSpice A/D Technology

Designing for the Internet of Things with Cadence PSpice A/D Technology Designing for the Internet of Things with Cadence PSpice A/D Technology By Alok Tripathi, Software Architect, Cadence The Cadence PSpice A/D release 17.2-2016 offers a comprehensive feature set to address

More information

Using Extra Loudspeakers and Sound Reinforcement

Using Extra Loudspeakers and Sound Reinforcement 1 SX80, Codec Pro A guide to providing a better auditory experience Produced: December 2018 for CE9.6 2 Contents What s in this guide Contents Introduction...3 Codec SX80: Use with Extra Loudspeakers (I)...4

More information

2. AN INTROSPECTION OF THE MORPHING PROCESS

2. AN INTROSPECTION OF THE MORPHING PROCESS 1. INTRODUCTION Voice morphing means the transition of one speech signal into another. Like image morphing, speech morphing aims to preserve the shared characteristics of the starting and final signals,

More information

DSP Monitoring Systems. dsp GLM. AutoCal TM

DSP Monitoring Systems. dsp GLM. AutoCal TM DSP Monitoring Systems dsp GLM AutoCal TM Genelec DSP Systems - 8200 bi-amplified monitor loudspeakers and 7200 subwoofers For decades Genelec has measured, analyzed and calibrated its monitoring systems

More information

Introduction To LabVIEW and the DSP Board

Introduction To LabVIEW and the DSP Board EE-289, DIGITAL SIGNAL PROCESSING LAB November 2005 Introduction To LabVIEW and the DSP Board 1 Overview The purpose of this lab is to familiarize you with the DSP development system by looking at sampling,

More information

M-16DX 16-Channel Digital Mixer

M-16DX 16-Channel Digital Mixer M-6DX 6-Channel Digital Mixer Workshop Getting Started with the M-6DX 007 Roland Corporation U.S. All rights reserved. No part of this publication may be reproduced in any form without the written permission

More information

BioTools: A Biosignal Toolbox for Composers and Performers

BioTools: A Biosignal Toolbox for Composers and Performers BioTools: A Biosignal Toolbox for Composers and Performers Miguel Angel Ortiz Pérez and R. Benjamin Knapp Queen s University Belfast, Sonic Arts Research Centre, Cloreen Park Belfast, BT7 1NN, Northern

More information

IEEE Santa Clara ComSoc/CAS Weekend Workshop Event-based analog sensing

IEEE Santa Clara ComSoc/CAS Weekend Workshop Event-based analog sensing IEEE Santa Clara ComSoc/CAS Weekend Workshop Event-based analog sensing Theodore Yu theodore.yu@ti.com Texas Instruments Kilby Labs, Silicon Valley Labs September 29, 2012 1 Living in an analog world The

More information

Integrated Circuit for Musical Instrument Tuners

Integrated Circuit for Musical Instrument Tuners Document History Release Date Purpose 8 March 2006 Initial prototype 27 April 2006 Add information on clip indication, MIDI enable, 20MHz operation, crystal oscillator and anti-alias filter. 8 May 2006

More information

Real-time EEG signal processing based on TI s TMS320C6713 DSK

Real-time EEG signal processing based on TI s TMS320C6713 DSK Paper ID #6332 Real-time EEG signal processing based on TI s TMS320C6713 DSK Dr. Zhibin Tan, East Tennessee State University Dr. Zhibin Tan received her Ph.D. at department of Electrical and Computer Engineering

More information

MULTIMIX 8/4 DIGITAL AUDIO-PROCESSING

MULTIMIX 8/4 DIGITAL AUDIO-PROCESSING MULTIMIX 8/4 DIGITAL AUDIO-PROCESSING Designed and Manufactured by ITEC Tontechnik und Industrieelektronik GesmbH 8200 Laßnitzthal 300 Austria / Europe MULTIMIX 8/4 DIGITAL Aim The most important aim of

More information

UNIT-3 Part A. 2. What is radio sonde? [ N/D-16]

UNIT-3 Part A. 2. What is radio sonde? [ N/D-16] UNIT-3 Part A 1. What is CFAR loss? [ N/D-16] Constant false alarm rate (CFAR) is a property of threshold or gain control devices that maintain an approximately constant rate of false target detections

More information

Ben Neill and Bill Jones - Posthorn

Ben Neill and Bill Jones - Posthorn Ben Neill and Bill Jones - Posthorn Ben Neill Assistant Professor of Music Ramapo College of New Jersey 505 Ramapo Valley Road Mahwah, NJ 07430 USA bneill@ramapo.edu Bill Jones First Pulse Projects 53

More information