VISUALIZING AND CONTROLLING SOUND WITH GRAPHICAL INTERFACES
LIAM O'SULLIVAN, FRANK BOLAND
Dept. of Electronic & Electrical Engineering, Trinity College Dublin, Dublin 2, Ireland
lmosulli@tcd.ie

Developments in abstract representations of sound from the field of computer music have potential applications for designers of musical computer games. Research in cognition has identified correlations in the perception of visual objects and audio events; experiments show that test subjects associate certain qualities of graphical shapes with those of vocal sounds. Such 'sound symbolism' has been extended to non-vocal sounds, and this paper describes attempts to exploit this and other phenomena in the visualization of audio. The ideas are expanded upon to propose control of sound synthesis through the manipulation of virtual shapes. Mappings between parameters in the auditory and visual feedback modes are discussed. An exploratory user test examines the technique using a prototype system.

INTRODUCTION

The popularity of certain music-based computer games highlights the importance of the visual representation of music and sound in virtual environments. Games like Guitar Hero provide an engaging experience through a note-entry-type task demanding high temporal precision. However, no commercially available games exploit the ability of the modern computer to manipulate and control musical timbre in real time. This paper outlines an approach to the representation and control of timbre through the provision of an effective graphical user interface (GUI). In section 1, examples of approaches to the sound synthesis GUI are described. Section 2 discusses aspects of perceived relationships between visual and auditory stimuli, including sound symbolism. Some examples of software applications using such perceptual links are presented in section 3.
Section 4 describes a prototype interface used in an exploratory study into particular sound-shape relationships and outlines the subjective test undertaken. Section 5 discusses the results of the experiment. Section 6 proposes future work and a conclusion is offered in section 7.

1 SOUND CONTROL INTERFACES

In the fields of computer music and audio production, GUIs take a number of common approaches [9]. Some emulate hardware devices: a virtual synthesizer may use knobs, sliders and similar interface widgets placed on a graphical background (Fig. 1). These assume that the user has specific operational knowledge of the original device (e.g. the effect of modifying a particular synthesis parameter) or is familiar with a learned convention (such as the pitch distribution of the piano keyboard). A number of more experimental GUI designs employ interactive widgets as a means to control sound (Fig. 2); these commonly represent the sound control parameters in some way, so that the user is aware of the underlying system state.

While many synthesis parameters are available in the above examples, the interfaces take the legacy windows, icons, menus, pointer (WIMP) format. This arrangement is not particularly suited to real-time musical play, as it fosters an analytical approach to sound control, effectively decomposing the output sound into separate parameters [6]. The common one-to-one input-output mapping is not the most engaging for musical tasks: although they are less intuitive for a beginner, complex mappings from more-than-one input to more-than-one output are more absorbing [2]. This has obvious ramifications for the design of musical games, where fun and engagement are of vital importance. The representation and control of sound synthesis using virtual objects allows simultaneous modification of multiple parameters [10]. By representing parameters intuitively, the cognitive load associated with musical play may be reduced.
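The contrast between one-to-one and many-to-many mappings can be made concrete with a small sketch. All parameter names and scaling factors below are hypothetical illustrations, not taken from any system described in this paper: a single 'curvature' input drives several synthesis parameters at once, so the controls interact rather than decompose the sound.

```python
def map_one_to_one(curvature):
    """One-to-one mapping: a single input controls a single synthesis parameter."""
    return {"mod_index": curvature * 10.0}

def map_one_to_many(curvature, size):
    """Many-to-many mapping: each input influences several parameters at once.
    Values and weightings here are purely illustrative."""
    return {
        "mod_index": curvature * 10.0 + size * 2.0,  # brightness rises with both inputs
        "amplitude": 0.2 + 0.6 * size,               # level driven mainly by size
        "vibrato_hz": 4.0 + curvature * 3.0,         # curvature also adds movement
    }
```

With the one-to-one scheme the user can reason about each control analytically; with the many-to-many scheme, holistic exploration is required, which the literature cited above suggests is more absorbing for musical play.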
One approach to this is the abstract representation of the output sound. Software that visualises sound data in different ways has historically been of interest [8] and Levin provides a good overview of GUI approaches [9]. This research focuses on ways to intuitively link simple graphical objects and perceived sound qualities. Some efforts to find such innate relationships are described next.

AES 41st International Conference, London, UK, 2011 February 2-4
Figure 1: The TAL Elek7ro virtual synthesizer.
Figure 2: The IXI Stocksynth virtual synthesizer.
Figure 3: Shapes from the revised maluma/takete experiment [5].

2 AUDITORY-VISUAL RELATIONSHIPS

2.1 Sound Symbolism

Cognitive research has shown that humans across many cultures associate vocal sounds with graphical shapes. An experiment performed by Wolfgang Kohler asked subjects to categorize a word sound as belonging to one of two graphical shapes [7]. A refined version of the experiment [5] showed that most of those tested associated the sound of the word maluma with a rounded graphic shape and the sound of takete with a more spiked shape (Fig. 3). It was the sounds themselves that the subjects were evaluating, using linguistic labels such as 'softer' or 'brighter', and the perceived analogous qualities in the graphics were seen as related.

2.2 Empirical Study

Empirical research has also tested audio-visual associations, an example being the Sound Mosaics project [4]. In a comparative study of higher-level perception-based mappings between domains and spectral sonogram-type plots, the former were seen to enhance learnability and aid comprehension. A texture-based software GUI was used in the tests (Fig. 4). Findings suggest that empirically-derived mappings can be more effective for the provision of intuitive interfaces exploring musical timbre. This has implications for the general gaming audience without musical training or an appreciation of sound synthesis theory. Other user studies are summarised in a paper [3] that goes on to examine the combination of mappings using 3-dimensional interfaces (Fig. 5). Previously discovered trends in audio-visual relationships were generally observed: preferences for size-amplitude and colour brightness-spectral centroid mappings, with tendencies towards distance-pitch and noisiness-texture roughness mappings.
Figure 4: The Sound Mosaics software used to investigate auditory-visual mappings [4].
Figure 5: Screenshot of a 3-D test environment used to explore audio-visual mappings [3].

3 SOFTWARE SYSTEM EXAMPLES

Abbado created a music composition environment [1] that employed perceived audio-visual relationships (Fig. 6). Although this system was essentially an offline timbre organiser, modern computer power and hardware interfaces mean that applications like this can now be run in real time, with novel interaction techniques. The links between shapes and sounds used were arbitrary, but they do reflect intuitive mappings.

Figure 6: Visual output from the Dynamics composition [1].

Work undertaken by the lead author has explored the use of sound symbolism for the control of sound synthesis [11]. Prototype applications were developed for a tabletop controller interface (Fig. 7). One application used the positions and orientations of two tangible controllers to modify the output sound. Graphical feedback was provided that was related to perceptual qualities of the sounds, such as brightness and harmonic distribution (Fig. 8).

Figure 7: The twohand controller system [11].
Figure 8: Visual feedback for the twohand interface [11].

4 SUBJECTIVE TEST

A short exploratory study was conducted to investigate some of the trends seen in similar tests of auditory-visual mappings and to outline a methodology for further study. The main question to be answered here was: are some mappings between shapes and sounds more intuitive than others? The test focussed on the use of simple 2-dimensional graphical shapes (Fig. 9), specifically targeting the perception of shape outlines, similar to the Kohler experiment previously described. Colour and texture were therefore not considered.

4.1 Test System

Bespoke software allowed the testing of various audio-visual mappings. Four graphical parameters and four sound parameters were examined, labelled as in Table 1.
Table 1: Parameters under test.
Graphical Parameters: Size, Number of Points, Curvature, Periodicity.
Sound Parameters: Amplitude (Sound Level), Frequency (Pitch), Brightness, Harmonicity.

The graphical component was written in Java and allowed manipulation of the shape parameters using keyboard shortcuts. These values were sent over Open Sound Control (OSC) to a sound synthesiser patch written in Max5, where they were mapped to various synthesis controls.

Graphical Parameters

The graphic consisted of a set of vertices connected by Bezier curves. The Size attribute controlled the width and height of the graphical shape. The Number of Points determined the number of vertices used to draw the shape. The Curvature parameter determined the extension of some vertices beyond others, effectively determining the spikiness of the shape; the graphic could vary from a smooth circle to various star-like shapes (Fig. 9). The regularity of the shape was modified using the Periodicity factor.

Sound Parameters

Sounds were generated using a frequency modulation (FM) synthesizer, which can generate many varied timbres with low computational overhead. The synthesis parameters available were the output amplitude, carrier frequency, modulation index, and harmonicity ratio. The Amplitude parameter was the output amplitude level and was related to the output volume. The carrier frequency determined the base frequency of the sound produced, labelled as the Frequency parameter; a pure tone was therefore generated at this frequency. The modulation index determined the number of sideband frequencies generated around the fundamental. A highpass filter set at the carrier frequency ensured that sidebands were only created above the carrier. This arrangement effectively controlled the position of the spectral centroid of the output and was labelled as the Brightness parameter.
The harmonicity ratio determined the distribution of the sidebands, varying the perception of the sound between harmonic and inharmonic timbres. This was represented as the Harmonicity parameter.

Shape-Sound Mappings

Graphic attributes were mapped to sound synthesis parameters in the Max5 patch. In one example, Curvature was mapped to Brightness: as the shape became more curved, more harmonics were added to the output and the sound was perceived as being brighter (Fig. 9). The perception of brightness was common to both the aural and visual domains. All parameters were set on continuous scales, except for Number of Points and Frequency. Number of Points was incremented or decremented by 2 to fit the graphic drawing algorithm. Frequency was evenly stepped on a simple pentatonic scale from 110 Hz to 220 Hz; this approach was chosen to allow the playing of simple melodies. The dynamic graphics and sounds were rendered to video files for the user tests.

Figure 9: Example relationship between the shape of a virtual controller (left) and output sound spectrum (right) in the prototype system.

4.2 Subjects

There were 11 subjects (2 female), 3 of whom were non-musicians. Ages varied from 24 to 52 years.

4.3 Test Setup

Subjects were tested in the Department of Electronic Engineering in Trinity College Dublin. A projector screen and quality headphones were provided. Participants sat at a desk, where they were provided with an introductory information sheet. Subjects filled out a short background questionnaire.

4.4 Methodology

A series of 39 videos (with audio) was shown. Subjects were asked to rate the perceived link between the
changes in the graphical shapes and the changes in the sounds. These were notated on a printed 5-point scale ranging from very weak to very strong. The play order of the videos was randomized between subjects. Participants were shown a random selection of 5 videos as a short training exercise. An informal post-experiment interview recorded general comments.

The variable being changed in the visuals was mapped to each synthesis parameter in turn, with all other parameters held constant. Variations of these constant values were tested for each mapping, but it was not possible to do this exhaustively. Mappings in which the input parameter moving in one direction (i.e. incrementing or decrementing) drove the output in the opposite direction were also tested, though again not for all mappings.

4.5 Results

The histogram data for the mappings of each of the four input graphical parameters were plotted (Figs. 10-13). The standard errors associated with the means for each mapping were displayed as error bars, allowing visual inspection for statistically significant subjective preferences.

5 DISCUSSION

The results of the subjective test show a number of interesting trends. Some of these support the preferences between certain audio-visual mappings seen in other research, while others contradict them.

5.1 Input/Graphical Parameter Groupings

For the mappings of Size to various synthesis parameters, the results were not as expected. Previous studies showed a preference for a mapping between object size and sound output level, but the top-rated mapping in this case was with Brightness. However, there was no statistically significant difference between the ratings for mappings of Size with Harmonicity, Brightness or Amplitude; only the mappings with Frequency and Inverse Amplitude were clearly not favoured. Mappings with No. of Points also did not show statistically significant preferences for the most part.
Only mappings to Inverse Amplitude showed a distinct difference in mean subjective rating, considering the standard error. Inverse Frequency (at moderate and low Curvature values) showed a statistically significant lack of preference when compared with the most preferred mapping, which again was with Brightness.

The ratings for mappings of Curvature were more in line with expectation. Various mappings with Brightness showed a statistically significant preference over mappings with Amplitude and Frequency. Mappings to Inverse Brightness and Harmonicity were also significantly less favoured. The Curvature-Inverse Harmonicity mapping was the only one preventing a clean grouping in which the Curvature-Brightness mappings were the most preferred. A very strong preference was shown for the mapping of Periodicity with Harmonicity.

5.2 General Observations

This study seems to differ slightly from earlier results that strongly link virtual object size with sound level, but supports the perceptual relationship between curvature and brightness of timbre. The study gives evidence of a statistically significant preference for mappings between periodicity, or regularity of form, and the harmonicity of partials in complex tones.

It is suspected that the aesthetics of the transitions in either domain affected the rating of the strengths of the mappings. For example, it may be that any mapping to Harmonicity scored more highly due to the interesting change of timbre involved. The use of Harmonicity itself was problematic due to the FM synthesis method, as sideband positions were not precisely controlled.

It is notable that inverted relationships were not always seen as the worst matches. For example, the mapping between No. of Points and Inverse Brightness scored nearly as highly as the direct No. of Points-Brightness mapping. More comparisons of direct and inverse mappings are needed, as are considerations of mappings between discrete and continuous variables.
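The significance checks described above rest on comparing mean ratings against their standard errors. A minimal sketch of that computation (the rating values shown in the usage note are hypothetical, not the study's data):

```python
import math

def mean_and_stderr(ratings):
    """Mean and standard error of the mean for a list of subjective ratings.
    Two mappings whose error bars (mean +/- stderr) do not overlap suggest
    a genuine difference in preference, as used in the inspection above."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)
```

For example, `mean_and_stderr([3, 4, 5, 4, 4])` returns a mean of 4.0 with a standard error of about 0.32, which would be plotted as the bar height and error bar respectively.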
Several subjects expressed concern that their answers were not consistent. While this may be attributed somewhat to test anxiety, it is felt that subjects may have recalibrated their internal rating scale as the test went on. It may be necessary to redesign the test to reduce this effect.

5.3 Future Work

The survey indicates preferences for some mappings, and tendencies towards others, that could prove useful for graphical interfaces. However, a more exhaustive exploration of all mapping permutations is needed, as is the inclusion of complex mapping schemes. A web-based test may provide the largest sample set to date for this type of study. It is worth noting that the aim of such a study is not the determination of universal truths for auditory-visual perceptual relationships, but rather to determine whether particular mappings can be sufficiently intuitive that they promote learnability and aid comprehension. In the specific case of musical games, the aim is the reduction of cognitive load such that multiple parameters may be controlled in an engaging and fun way. Another synthesis technique must be explored that offers more exact control over the distribution of partials; this will give a deeper understanding of mappings to harmonic distribution.
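Additive synthesis is one candidate technique offering the exact partial placement called for above. A minimal sketch (NumPy assumed; function name and all parameter values are illustrative, not part of the prototype system):

```python
import numpy as np

def additive_tone(f0, partial_ratios, partial_amps, duration=1.0, sr=44100):
    """Sum of sinusoids with explicitly placed partials.
    Unlike FM, each partial's frequency (f0 * ratio) and amplitude are set
    directly: ratios like 1, 2, 3 give a harmonic tone, while ratios like
    1, 2.1, 3.3 give an inharmonic one, so harmonicity can be mapped exactly."""
    t = np.arange(int(sr * duration)) / sr
    tone = np.zeros_like(t)
    for ratio, amp in zip(partial_ratios, partial_amps):
        tone += amp * np.sin(2 * np.pi * f0 * ratio * t)
    return tone / max(1.0, np.max(np.abs(tone)))  # normalise to avoid clipping
```

Interpolating the ratios between harmonic and inharmonic sets would give a continuous, precisely controlled Harmonicity parameter of the kind the FM method could not provide.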
A number of novel musical games and applications are being developed to test the sound control and representation techniques. These will be made available online.

6 CONCLUSIONS

This paper showed how the design of GUIs for musical games may draw on the experience of interface design for computer music systems. A suggested approach was the use of virtual graphical objects to control sound synthesis for novel timbre-based games. The universal nature of results from cognitive experiments was presented as a possible way to encode intuitive mappings between the auditory and visual domains. A pilot study to analyse subjective user preferences showed that there seemed to be some inclination towards certain mappings, but a full study is now needed. A redesigned online survey was proposed to drastically increase the number of participants.

7 ACKNOWLEDGEMENTS

This research is kindly supported by the Irish Research Council for Science, Engineering & Technology (IRCSET).

REFERENCES

[1] Abbado, A. Perceptual Correspondences of Abstract Animation and Synthetic Sound. M.S. Thesis, MIT Media Laboratory.
[2] Arfib, D., Couturier, J. M., Kessous, L., and Verfaille, V. Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces. Organised Sound 7(2). Cambridge University Press.
[3] Berthaut, F. and Desainte-Catherine, M. Combining Audiovisual Mappings for 3D Musical Interaction. International Computer Music Conference (ICMC 10), New York City and Stony Brook, NY, USA, June 1-5.
[4] Giannakis, K. A comparative evaluation of auditory-visual mappings for sound visualisation. Organised Sound 11(3). Cambridge University Press.
[5] Holland, M.K., and Wertheimer, M. Some physiognomic aspects of naming, or maluma and takete revisited. Perceptual and Motor Skills, 1964, 119.
[6] Hunt, A. and Kirk, R. Mapping Strategies for Musical Performance. In Trends in Gestural Control of Music, M. Wanderley and M. Battier, Editors. IRCAM - Centre Pompidou, Paris.
[7] Kohler, W. Gestalt Psychology. Liveright Publishing Corporation, New York.
[8] Lemi, E., Georgaki, A. and Whitney, J. Reviewing the Transformation of Sound to Image in New Computer Music Software. In Proceedings of Sound and Music Computing (SMC 07), 2007.
[9] Levin, G. Painterly Interfaces for Audiovisual Performance. M.S. Thesis, Massachusetts Institute of Technology.
[10] Mulder, A., Fels, S. and Mase, K. Mapping virtual object manipulation to sound variation. IPSJ SIG Notes 97-MUS-23, Vol. 97, No. 122.
[11] O'Sullivan, L. Development of a Graphical Sound Synthesis Controller Exploring Cross-Modal Perceptual Analogies. Master's Thesis, Trinity College Dublin, Ireland.
Figure 10: Mean subjective ratings for various mappings of the Size parameter.
Figure 11: Mean subjective ratings for various mappings of the No. of Points parameter.
Figure 12: Mean subjective ratings for various mappings of the Curvature parameter.
Figure 13: Mean subjective ratings for various mappings of the Periodicity parameter.
Combining Instrument and Performance Models for High-Quality Music Synthesis Roger B. Dannenberg and Istvan Derenyi dannenberg@cs.cmu.edu, derenyi@cs.cmu.edu School of Computer Science, Carnegie Mellon
More informationA Need for Universal Audio Terminologies and Improved Knowledge Transfer to the Consumer
A Need for Universal Audio Terminologies and Improved Knowledge Transfer to the Consumer Rob Toulson Anglia Ruskin University, Cambridge Conference 8-10 September 2006 Edinburgh University Summary Three
More informationEE391 Special Report (Spring 2005) Automatic Chord Recognition Using A Summary Autocorrelation Function
EE391 Special Report (Spring 25) Automatic Chord Recognition Using A Summary Autocorrelation Function Advisor: Professor Julius Smith Kyogu Lee Center for Computer Research in Music and Acoustics (CCRMA)
More informationUsability of Computer Music Interfaces for Simulation of Alternate Musical Systems
Usability of Computer Music Interfaces for Simulation of Alternate Musical Systems Dionysios Politis, Ioannis Stamelos {Multimedia Lab, Programming Languages and Software Engineering Lab}, Department of
More informationSTYLE-BRANDING, AESTHETIC DESIGN DNA
INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 10 & 11 SEPTEMBER 2009, UNIVERSITY OF BRIGHTON, UK STYLE-BRANDING, AESTHETIC DESIGN DNA Bob EVES 1 and Jon HEWITT 2 1 Bournemouth University
More informationMaking Progress With Sounds - The Design & Evaluation Of An Audio Progress Bar
Making Progress With Sounds - The Design & Evaluation Of An Audio Progress Bar Murray Crease & Stephen Brewster Department of Computing Science, University of Glasgow, Glasgow, UK. Tel.: (+44) 141 339
More informationAnalyzing Modulated Signals with the V93000 Signal Analyzer Tool. Joe Kelly, Verigy, Inc.
Analyzing Modulated Signals with the V93000 Signal Analyzer Tool Joe Kelly, Verigy, Inc. Abstract The Signal Analyzer Tool contained within the SmarTest software on the V93000 is a versatile graphical
More informationAbout Giovanni De Poli. What is Model. Introduction. di Poli: Methodologies for Expressive Modeling of/for Music Performance
Methodologies for Expressiveness Modeling of and for Music Performance by Giovanni De Poli Center of Computational Sonology, Department of Information Engineering, University of Padova, Padova, Italy About
More informationExperiments on musical instrument separation using multiplecause
Experiments on musical instrument separation using multiplecause models J Klingseisen and M D Plumbley* Department of Electronic Engineering King's College London * - Corresponding Author - mark.plumbley@kcl.ac.uk
More informationDYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL
DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL Jonna Häkkilä Nokia Mobile Phones Research and Technology Access Elektroniikkatie 3, P.O.Box 50, 90571 Oulu, Finland jonna.hakkila@nokia.com Sami Ronkainen
More informationCathedral user guide & reference manual
Cathedral user guide & reference manual Cathedral page 1 Contents Contents... 2 Introduction... 3 Inspiration... 3 Additive Synthesis... 3 Wave Shaping... 4 Physical Modelling... 4 The Cathedral VST Instrument...
More informationEvaluating Musical Software Using Conceptual Metaphors
Katie Wilkie Centre for Research in Computing Open University Milton Keynes, MK7 6AA +44 (0)1908 274 066 klw323@student.open.ac.uk Evaluating Musical Software Using Conceptual Metaphors Simon Holland The
More informationUNIVERSITY OF DUBLIN TRINITY COLLEGE
UNIVERSITY OF DUBLIN TRINITY COLLEGE FACULTY OF ENGINEERING & SYSTEMS SCIENCES School of Engineering and SCHOOL OF MUSIC Postgraduate Diploma in Music and Media Technologies Hilary Term 31 st January 2005
More informationMIE 402: WORKSHOP ON DATA ACQUISITION AND SIGNAL PROCESSING Spring 2003
MIE 402: WORKSHOP ON DATA ACQUISITION AND SIGNAL PROCESSING Spring 2003 OBJECTIVE To become familiar with state-of-the-art digital data acquisition hardware and software. To explore common data acquisition
More informationLEARNING TO CONTROL A REVERBERATOR USING SUBJECTIVE PERCEPTUAL DESCRIPTORS
10 th International Society for Music Information Retrieval Conference (ISMIR 2009) October 26-30, 2009, Kobe, Japan LEARNING TO CONTROL A REVERBERATOR USING SUBJECTIVE PERCEPTUAL DESCRIPTORS Zafar Rafii
More informationISEE: An Intuitive Sound Editing Environment
Roel Vertegaal Department of Computing University of Bradford Bradford, BD7 1DP, UK roel@bradford.ac.uk Ernst Bonis Music Technology Utrecht School of the Arts Oude Amersfoortseweg 121 1212 AA Hilversum,
More informationChapter 1. Introduction to Digital Signal Processing
Chapter 1 Introduction to Digital Signal Processing 1. Introduction Signal processing is a discipline concerned with the acquisition, representation, manipulation, and transformation of signals required
More informationRobert Alexandru Dobre, Cristian Negrescu
ECAI 2016 - International Conference 8th Edition Electronics, Computers and Artificial Intelligence 30 June -02 July, 2016, Ploiesti, ROMÂNIA Automatic Music Transcription Software Based on Constant Q
More informationAPPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC
APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC Vishweshwara Rao, Sachin Pant, Madhumita Bhaskar and Preeti Rao Department of Electrical Engineering, IIT Bombay {vishu, sachinp,
More informationA Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation
A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France email: lippe@ircam.fr Introduction.
More informationMusical Acoustics Lecture 15 Pitch & Frequency (Psycho-Acoustics)
1 Musical Acoustics Lecture 15 Pitch & Frequency (Psycho-Acoustics) Pitch Pitch is a subjective characteristic of sound Some listeners even assign pitch differently depending upon whether the sound was
More informationMusic in Practice SAS 2015
Sample unit of work Contemporary music The sample unit of work provides teaching strategies and learning experiences that facilitate students demonstration of the dimensions and objectives of Music in
More informationONLINE ACTIVITIES FOR MUSIC INFORMATION AND ACOUSTICS EDUCATION AND PSYCHOACOUSTIC DATA COLLECTION
ONLINE ACTIVITIES FOR MUSIC INFORMATION AND ACOUSTICS EDUCATION AND PSYCHOACOUSTIC DATA COLLECTION Travis M. Doll Ray V. Migneco Youngmoo E. Kim Drexel University, Electrical & Computer Engineering {tmd47,rm443,ykim}@drexel.edu
More informationOCTAVE C 3 D 3 E 3 F 3 G 3 A 3 B 3 C 4 D 4 E 4 F 4 G 4 A 4 B 4 C 5 D 5 E 5 F 5 G 5 A 5 B 5. Middle-C A-440
DSP First Laboratory Exercise # Synthesis of Sinusoidal Signals This lab includes a project on music synthesis with sinusoids. One of several candidate songs can be selected when doing the synthesis program.
More informationDistributed Virtual Music Orchestra
Distributed Virtual Music Orchestra DMITRY VAZHENIN, ALEXANDER VAZHENIN Computer Software Department University of Aizu Tsuruga, Ikki-mach, AizuWakamatsu, Fukushima, 965-8580, JAPAN Abstract: - We present
More informationPsychoacoustic Evaluation of Fan Noise
Psychoacoustic Evaluation of Fan Noise Dr. Marc Schneider Team Leader R&D - Acoustics ebm-papst Mulfingen GmbH & Co.KG Carolin Feldmann, University Siegen Outline Motivation Psychoacoustic Parameters Psychoacoustic
More informationPRELIMINARY INFORMATION. Professional Signal Generation and Monitoring Options for RIFEforLIFE Research Equipment
Integrated Component Options Professional Signal Generation and Monitoring Options for RIFEforLIFE Research Equipment PRELIMINARY INFORMATION SquareGENpro is the latest and most versatile of the frequency
More informationPart I Of An Exclusive Interview With The Father Of Digital FM Synthesis. By Tom Darter.
John Chowning Part I Of An Exclusive Interview With The Father Of Digital FM Synthesis. By Tom Darter. From Aftertouch Magazine, Volume 1, No. 2. Scanned and converted to HTML by Dave Benson. AS DIRECTOR
More informationPulseCounter Neutron & Gamma Spectrometry Software Manual
PulseCounter Neutron & Gamma Spectrometry Software Manual MAXIMUS ENERGY CORPORATION Written by Dr. Max I. Fomitchev-Zamilov Web: maximus.energy TABLE OF CONTENTS 0. GENERAL INFORMATION 1. DEFAULT SCREEN
More informationUWE has obtained warranties from all depositors as to their title in the material deposited and as to their right to deposit such material.
Nash, C. (2016) Manhattan: Serious games for serious music. In: Music, Education and Technology (MET) 2016, London, UK, 14-15 March 2016. London, UK: Sempre Available from: http://eprints.uwe.ac.uk/28794
More informationAnalysis of local and global timing and pitch change in ordinary
Alma Mater Studiorum University of Bologna, August -6 6 Analysis of local and global timing and pitch change in ordinary melodies Roger Watt Dept. of Psychology, University of Stirling, Scotland r.j.watt@stirling.ac.uk
More informationDIGITAL COMMUNICATION
10EC61 DIGITAL COMMUNICATION UNIT 3 OUTLINE Waveform coding techniques (continued), DPCM, DM, applications. Base-Band Shaping for Data Transmission Discrete PAM signals, power spectra of discrete PAM signals.
More informationS I N E V I B E S ETERNAL BARBER-POLE FLANGER
S I N E V I B E S ETERNAL BARBER-POLE FLANGER INTRODUCTION Eternal by Sinevibes is a barber-pole flanger effect. Unlike a traditional flanger which typically has its tone repeatedly go up and down, this
More informationUpgrading E-learning of basic measurement algorithms based on DSP and MATLAB Web Server. Milos Sedlacek 1, Ondrej Tomiska 2
Upgrading E-learning of basic measurement algorithms based on DSP and MATLAB Web Server Milos Sedlacek 1, Ondrej Tomiska 2 1 Czech Technical University in Prague, Faculty of Electrical Engineeiring, Technicka
More informationImplementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor
Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor Introduction: The ability to time stretch and compress acoustical sounds without effecting their pitch has been an attractive
More informationA FUNCTIONAL CLASSIFICATION OF ONE INSTRUMENT S TIMBRES
A FUNCTIONAL CLASSIFICATION OF ONE INSTRUMENT S TIMBRES Panayiotis Kokoras School of Music Studies Aristotle University of Thessaloniki email@panayiotiskokoras.com Abstract. This article proposes a theoretical
More informationCalibrate, Characterize and Emulate Systems Using RFXpress in AWG Series
Calibrate, Characterize and Emulate Systems Using RFXpress in AWG Series Introduction System designers and device manufacturers so long have been using one set of instruments for creating digitally modulated
More informationLian Loke and Toni Robertson (eds) ISBN:
The Body in Design Workshop at OZCHI 2011 Design, Culture and Interaction, The Australasian Computer Human Interaction Conference, November 28th, Canberra, Australia Lian Loke and Toni Robertson (eds)
More informationEventide Inc. One Alsan Way Little Ferry, NJ
Copyright 2015, Eventide Inc. P/N: 141257, Rev 2 Eventide is a registered trademark of Eventide Inc. AAX and Pro Tools are trademarks of Avid Technology. Names and logos are used with permission. Audio
More informationVirtual Vibration Analyzer
Virtual Vibration Analyzer Vibration/industrial systems LabVIEW DAQ by Ricardo Jaramillo, Manager, Ricardo Jaramillo y Cía; Daniel Jaramillo, Engineering Assistant, Ricardo Jaramillo y Cía The Challenge:
More informationBrain.fm Theory & Process
Brain.fm Theory & Process At Brain.fm we develop and deliver functional music, directly optimized for its effects on our behavior. Our goal is to help the listener achieve desired mental states such as
More informationSubjective Emotional Responses to Musical Structure, Expression and Timbre Features: A Synthetic Approach
Subjective Emotional Responses to Musical Structure, Expression and Timbre Features: A Synthetic Approach Sylvain Le Groux 1, Paul F.M.J. Verschure 1,2 1 SPECS, Universitat Pompeu Fabra 2 ICREA, Barcelona
More informationMusical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension
Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension MARC LEMAN Ghent University, IPEM Department of Musicology ABSTRACT: In his paper What is entrainment? Definition
More information1 Ver.mob Brief guide
1 Ver.mob 14.02.2017 Brief guide 2 Contents Introduction... 3 Main features... 3 Hardware and software requirements... 3 The installation of the program... 3 Description of the main Windows of the program...
More informationThe quality of potato chip sounds and crispness impression
PROCEEDINGS of the 22 nd International Congress on Acoustics Product Quality and Multimodal Interaction: Paper ICA2016-558 The quality of potato chip sounds and crispness impression M. Ercan Altinsoy Chair
More informationAuditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are
In: E. Bruce Goldstein (Ed) Encyclopedia of Perception, Volume 1, Sage, 2009, pp 160-164. Auditory Illusions Diana Deutsch The sounds we perceive do not always correspond to those that are presented. When
More informationPractice makes less imperfect: the effects of experience and practice on the kinetics and coordination of flutists' fingers
Proceedings of the International Symposium on Music Acoustics (Associated Meeting of the International Congress on Acoustics) 25-31 August 2010, Sydney and Katoomba, Australia Practice makes less imperfect:
More informationPre-processing of revolution speed data in ArtemiS SUITE 1
03/18 in ArtemiS SUITE 1 Introduction 1 TTL logic 2 Sources of error in pulse data acquisition 3 Processing of trigger signals 5 Revolution speed acquisition with complex pulse patterns 7 Introduction
More informationPS User Guide Series Seismic-Data Display
PS User Guide Series 2015 Seismic-Data Display Prepared By Choon B. Park, Ph.D. January 2015 Table of Contents Page 1. File 2 2. Data 2 2.1 Resample 3 3. Edit 4 3.1 Export Data 4 3.2 Cut/Append Records
More informationOutline. Why do we classify? Audio Classification
Outline Introduction Music Information Retrieval Classification Process Steps Pitch Histograms Multiple Pitch Detection Algorithm Musical Genre Classification Implementation Future Work Why do we classify
More informationEvolutionary jazz improvisation and harmony system: A new jazz improvisation and harmony system
Performa 9 Conference on Performance Studies University of Aveiro, May 29 Evolutionary jazz improvisation and harmony system: A new jazz improvisation and harmony system Kjell Bäckman, IT University, Art
More informationScoregram: Displaying Gross Timbre Information from a Score
Scoregram: Displaying Gross Timbre Information from a Score Rodrigo Segnini and Craig Sapp Center for Computer Research in Music and Acoustics (CCRMA), Center for Computer Assisted Research in the Humanities
More informationPerceptual Considerations in Designing and Fitting Hearing Aids for Music Published on Friday, 14 March :01
Perceptual Considerations in Designing and Fitting Hearing Aids for Music Published on Friday, 14 March 2008 11:01 The components of music shed light on important aspects of hearing perception. To make
More informationInstallation of a DAQ System in Hall C
Installation of a DAQ System in Hall C Cuore Collaboration Meeting Como, February 21 st - 23 rd 2007 S. Di Domizio A. Giachero M. Pallavicini S. Di Domizio Summary slide CUORE-like DAQ system installed
More information