VISUALIZING AND CONTROLLING SOUND WITH GRAPHICAL INTERFACES


LIAM O'SULLIVAN, FRANK BOLAND
Dept. of Electronic & Electrical Engineering, Trinity College Dublin, Dublin 2, Ireland
lmosulli@tcd.ie

Developments in abstract representations of sound from the field of computer music have potential applications for designers of musical computer games. Research in cognition has identified correlations between the perception of visual objects and that of audio events; experiments show that test subjects associate certain qualities of graphical shapes with those of vocal sounds. Such 'sound symbolism' has been extended to non-vocal sounds, and this paper describes attempts to exploit this and other phenomena in the visualization of audio. These ideas are expanded into a proposal for controlling sound synthesis through the manipulation of virtual shapes. Mappings between parameters in the auditory and visual feedback modalities are discussed, and an exploratory user test examines the technique using a prototype system.

INTRODUCTION

The popularity of certain music-based computer games highlights the importance of how music and sound are represented visually in virtual environments. Games like Guitar Hero 1 provide an engaging experience through a note-entry-type task demanding high temporal precision. However, no commercially available games exploit the ability of the modern computer to manipulate and control musical timbre in real time. This paper outlines an approach to the representation and control of timbre through the provision of an effective graphical user interface (GUI). Section 1 describes examples of approaches to the sound synthesis GUI. Section 2 discusses aspects of perceived relationships between visual and auditory stimuli, including sound symbolism. Some examples of software applications using such perceptual links are presented in Section 3.
Section 4 describes a prototype interface used in an exploratory study of particular sound-shape relationships and outlines the subjective test undertaken. Section 5 discusses the results of the experiment and proposes future work, and conclusions are offered in Section 6.

1 SOUND CONTROL INTERFACES

In the fields of computer music and audio production, GUIs take a number of common approaches [9]. Some emulate hardware devices: a virtual synthesizer may present knobs, sliders and similar interface widgets on a graphical background (Fig. 1). These assume that the user has specific operational knowledge of the original device (e.g. the effect of modifying a particular synthesis parameter) or is familiar with a learned convention (such as the pitch layout of the piano keyboard). A number of more experimental GUI designs employ interactive widgets as a means of controlling sound (Fig. 2); these commonly represent the sound control parameters in some way, so that the user is aware of the underlying system state.

While many synthesis parameters are available in the above examples, the interfaces follow the legacy windows, icons, menus, pointer (WIMP) format. This arrangement is not particularly suited to real-time musical play, as it fosters an analytical approach to sound control, effectively decomposing the output sound into separate parameters [6]. The common one-to-one input-output mapping is also not the most engaging for musical tasks: although they are less intuitive for a beginner, complex mappings from more than one input to more than one output are more absorbing [2]. This has obvious ramifications for the design of musical games, where fun and engagement are of vital importance. The representation and control of sound synthesis using virtual objects allows simultaneous modification of multiple parameters [10]. By representing parameters intuitively, the cognitive load associated with musical play may be reduced.
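The contrast between one-to-one and many-to-many mappings can be made concrete with a small sketch. The parameter names and weights below are hypothetical illustrations, not values from any of the systems cited:

```python
# Illustrative many-to-many mapping: each GUI input influences several
# synthesis parameters at once via a weight matrix. Names and weights
# are invented for demonstration, not taken from the study.

GUI_INPUTS = ["size", "curvature"]
SYNTH_PARAMS = ["amplitude", "brightness", "harmonicity"]

# weights[i][j]: contribution of GUI input i to synthesis parameter j
WEIGHTS = [
    [0.8, 0.3, 0.0],   # size mainly drives amplitude, a little brightness
    [0.2, 0.7, 0.5],   # curvature mainly drives brightness and harmonicity
]

def map_inputs(inputs):
    """Map normalised (0..1) GUI inputs to synthesis parameters."""
    values = [inputs[name] for name in GUI_INPUTS]
    out = {}
    for j, param in enumerate(SYNTH_PARAMS):
        # Weighted sum, clipped to the 0..1 control range.
        total = sum(values[i] * WEIGHTS[i][j] for i in range(len(values)))
        out[param] = min(1.0, total)
    return out

print(map_inputs({"size": 0.5, "curvature": 1.0}))
```

A one-to-one mapping is the special case of an identity-like weight matrix; the point of the many-to-many scheme is that moving a single control, such as curvature, changes several perceptual qualities of the sound at once.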
One approach to this is the abstract representation of the output sound. Software that visualises sound data in different ways has long been of interest [8], and Levin provides a good overview of GUI approaches [9]. This research focuses on ways of intuitively linking simple graphical objects to perceived sound qualities. Some efforts to find such innate relationships are described next.

1 http://hub.guitarhero.com/

AES 41st International Conference, London, UK, 2011 February 2-4

Figure 1: The TAL Elek7ro virtual synthesizer 2.

Figure 2: The IXI Stocksynth virtual synthesizer 3.

2 AUDITORY-VISUAL RELATIONSHIPS

2.1 Sound Symbolism

Cognitive research has shown that humans across many cultures associate vocal sounds with graphical shapes. An experiment performed by Wolfgang Kohler asked subjects to categorize a word sound as belonging to one of two graphical shapes [7]. A refined version of the experiment [5] showed that most of those tested associated the sound of the word maluma with a rounded graphical shape and the sound of takete with a more spiked shape (Fig. 3). It was the sounds themselves that the subjects were evaluating, using linguistic labels such as 'softer' or 'brighter', and the perceived analogous qualities in the graphics were seen as related.

Figure 3: Shapes from the revised maluma/takete experiment [5].

2.2 Empirical Study

Empirical research has also tested audio-visual associations, an example being the Sound Mosaics project [4]. In a comparative study of higher-level perception-based mappings between domains and spectral sonogram-type plots, the former were seen to enhance learnability and aid comprehension. A texture-based software GUI was used in the tests (Fig. 4). The findings suggest that empirically derived mappings can be more effective in providing intuitive interfaces for exploring musical timbre. This has implications for the general gaming audience without musical training or an appreciation of sound synthesis theory.

Other user studies are summarised in a paper [3] that goes on to examine the combination of mappings using 3-dimensional interfaces (Fig. 5). Previously discovered trends in audio-visual relationships were generally observed: preferences for size-amplitude and colour brightness-spectral centroid mappings, with tendencies towards distance-pitch and noisiness-texture roughness mappings.

Figure 4: The Sound Mosaics software used to investigate auditory-visual mappings [4].
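The colour brightness-spectral centroid mapping reported above relies on the spectral centroid, the amplitude-weighted mean frequency of a spectrum, which correlates with perceived brightness. A minimal sketch (the partial frequencies and magnitudes below are invented examples):

```python
# Spectral centroid: amplitude-weighted mean frequency of a spectrum.
# Higher centroid values correlate with a "brighter" perceived timbre.

def spectral_centroid(freqs, mags):
    """freqs: partial frequencies in Hz; mags: their linear magnitudes."""
    total = sum(mags)
    if total == 0:
        return 0.0
    return sum(f * m for f, m in zip(freqs, mags)) / total

# A tone whose energy sits in the upper partials has a higher centroid
# (sounds brighter) than one dominated by its fundamental.
dull = spectral_centroid([100, 200, 300], [1.0, 0.2, 0.1])
bright = spectral_centroid([100, 200, 300], [0.3, 0.8, 1.0])
print(dull < bright)
```

In practice the magnitudes would come from an FFT of the output sound; the definition itself is independent of how the spectrum is obtained.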
2 http://kunz.corrupt.ch/?products:vst_tal-elek7ro
3 www.ixi-audio.net/content/body_software_stock.html

Figure 5: Screenshot of a 3-D test environment used to explore audio-visual mappings [3].

3 SOFTWARE SYSTEM EXAMPLES

Abbado created a music composition environment [1] that employed perceived audio-visual relationships (Fig. 6). Although this system was essentially an offline timbre organiser, modern computing power and hardware interfaces mean that applications like this can now run in real time, with novel interaction techniques. The links between the shapes and sounds used were arbitrary, but they do reflect intuitive mappings.

Figure 6: Visual output from the Dynamics composition [1].

Work undertaken by the lead author has explored the use of sound symbolism for the control of sound synthesis [11]. Prototype applications were developed for a tabletop controller interface (Fig. 7). One application used the positions and orientations of two tangible controllers to modify the output sound. Graphical feedback was provided that was related to perceptual qualities of the sounds, such as brightness and harmonic distribution (Fig. 8).

Figure 7: The twohand controller system [11].

Figure 8: Visual feedback for the twohand interface [11].

4 SUBJECTIVE TEST

A short exploratory study was conducted to investigate some of the trends seen in similar tests of auditory-visual mappings and to outline a methodology for further study. The main question to be answered was: are some mappings between shapes and sounds more intuitive than others? The test focussed on the use of simple 2-dimensional graphical shapes (Fig. 9) to specifically target the perception of shape outlines, similar to the Kohler experiment previously described. Colour and texture were therefore not considered.

4.1 Test System

Bespoke software allowed the testing of various audio-visual mappings. Four graphical parameters and four sound parameters were examined, labelled as in Table 1.

Graphical Parameters     Sound Parameters
Size                     Amplitude (Sound Level)
Number of Points         Frequency (Pitch)
Curvature                Brightness
Periodicity              Harmonicity

Table 1: Parameters under test.

The graphical component was written in Java 4 and allowed manipulation of the shape parameters using keyboard shortcuts. These values were sent to a sound synthesiser patch written in Max5 5 over Open Sound Control 6 (OSC), where they were mapped to various synthesis controls.

4.1.1 Graphical Parameters

The graphic consisted of a set of vertices connected by Bezier curves. The Size attribute controlled the width and height of the graphical shape. The Number of Points determined the number of vertices used to draw the shape. The Curvature parameter determined the extension of some vertices beyond others, effectively determining the spikiness of the shape; the graphic could vary from a smooth circle to various star-like shapes (Fig. 9). The regularity of the shape was modified using the Periodicity factor.

4.1.2 Sound Parameters

Sounds were generated using a frequency modulation (FM) synthesizer, which can generate many varied timbres with low computational overhead. The synthesis parameters available were the output amplitude, carrier frequency, modulation index, and harmonicity ratio. The Amplitude parameter was the output amplitude level and was related to the output volume. The carrier frequency determined the base frequency of the sound produced and was labelled as the Frequency parameter; a pure tone was therefore generated at this frequency. The modulation index determined the number of sideband frequencies generated around the fundamental. A high-pass filter set at the carrier frequency ensured that sidebands were only created above the carrier. This arrangement effectively controlled the position of the spectral centroid of the output and was labelled as the Brightness parameter.
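The four sound parameters map naturally onto the standard two-operator FM formulation, y(t) = A sin(2πf_c t + I sin(2πf_m t)) with f_m = h·f_c. The sketch below is a generic reconstruction of such a voice, not the authors' Max5 patch, and it omits the high-pass filtering of sidebands; the default values are illustrative:

```python
# Generic two-operator FM voice (sketch, not the authors' Max5 patch):
#   y(t) = amp * sin(2*pi*fc*t + index * sin(2*pi*fm*t)),  fm = h * fc
# amp ~ Amplitude, fc ~ Frequency, index ~ Brightness (more sidebands),
# h ~ Harmonicity (integer ratios sound harmonic, others inharmonic).
import math

def fm_sample(t, amp, fc, index, harmonicity):
    """One output sample of the FM voice at time t (seconds)."""
    fm = harmonicity * fc  # modulator frequency tracks the carrier
    return amp * math.sin(2 * math.pi * fc * t
                          + index * math.sin(2 * math.pi * fm * t))

def render(duration=0.01, sr=44100, amp=0.5, fc=110.0, index=2.0, h=1.0):
    """Render a short sample buffer; index=0 yields a pure carrier tone."""
    return [fm_sample(n / sr, amp, fc, index, h)
            for n in range(int(duration * sr))]

buf = render()
print(len(buf))
```

Raising the modulation index spreads energy into sidebands around the carrier, which pushes the spectral centroid upward; this is why it serves as a low-cost Brightness control.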
The harmonicity ratio determined the distribution of the sidebands, varying the perception of the sound between harmonic and inharmonic timbres. This was represented as the Harmonicity parameter.

4 http://www.java.com/en/
5 http://cycling74.com/products/maxmspjitter/
6 http://opensoundcontrol.org/introduction-osc

4.1.3 Shape-Sound Mappings

Graphic attributes were mapped to sound synthesis parameters in the Max5 patch. In one example, Curvature was mapped to Brightness: as the shape became more curved, more harmonics were added to the output and the sound was perceived as being brighter (Fig. 9). The perception of brightness was thus common to both the aural and visual domains. All parameters were set on continuous scales, except for Number of Points and Frequency. Number of Points was incremented or decremented by 2 to fit the graphic drawing algorithm. Frequency was stepped evenly on a simple pentatonic scale from 110 Hz to 220 Hz; this approach was chosen to allow the playing of simple melodies. The dynamic graphics and sounds were rendered to video files for the user tests.

Figure 9: Example relationship between the shape of a virtual controller (left) and the output sound spectrum (right) in the prototype system.

4.2 Subjects

There were 11 subjects (2 female), 3 of whom were non-musicians. Ages ranged from 24 to 52 years.

4.3 Test Setup

Subjects were tested in the Department of Electronic Engineering in Trinity College Dublin. A projector screen and quality headphones were provided. Participants sat at a desk, where they were given an introductory information sheet, and filled out a short background questionnaire.

4.4 Methodology

A series of 39 videos (with audio) was shown. Subjects were asked to rate the perceived link between the

changes in the graphical shapes and the changes in the sounds. These were recorded on a printed 5-point scale ranging from 'very weak' to 'very strong'. The play order of the videos was randomized between subjects. Participants were first shown a random selection of 5 videos as a short training exercise. An informal post-experiment interview recorded general comments.

The variable being changed in the visuals was mapped to each synthesis parameter in turn, with all other parameters held constant. Variations of these constant values were tested for each mapping, but it was not possible to do this exhaustively. Modifications of the input parameter moving in one direction (i.e. incrementing or decrementing) were also mapped against movement of the output parameter in the opposite direction; again, it was not possible to do this for all mappings.

4.5 Results

The histogram data for the mappings of each of the four input graphical parameters were plotted (Figs. 10-13). The standard errors associated with the means for each mapping are displayed as error bars, allowing visual inspection for statistically significant subjective preferences.

5 DISCUSSION

The results of the subjective test show a number of interesting trends. Some of these support the preferences between certain audio-visual mappings seen in other research, while others contradict them.

5.1 Input/Graphical Parameter Groupings

For the mappings of Size to various synthesis parameters, the results were not as expected. Previous studies showed a preference for a mapping between object size and sound output level, but the top-rated mapping in this case was with Brightness. However, there was no statistically significant difference between the ratings for mappings of Size with Harmonicity, Brightness or Amplitude; only the mappings with Frequency and Inverse Amplitude were clearly not favoured. Mappings with No. of Points also did not show statistically significant preferences for the most part.
Only mappings to Inverse Amplitude showed a distinct difference in mean subjective rating, considering the standard error. Inverse Frequency (at moderate and low Curvature values) showed a statistically significant lack of preference when compared to the most preferred mapping, which again was with Brightness.

The ratings for mappings of Curvature were more in line with expectations. Various mappings with Brightness showed a statistically significant preference over mappings with Amplitude and Frequency. Mappings to Inverse Brightness and Harmonicity were also significantly less favoured. The Curvature-Inverse Harmonicity mapping is the only one that prevents a straight split grouping in which Curvature-Brightness mappings are the most preferred. A very strong preference was shown for the mapping of Periodicity with Harmonicity.

5.2 General Observations

This study seems to differ slightly from earlier results that strongly link virtual object size with sound level, but it supports the perceptual relationship between curvature and brightness of timbre. The study gives evidence of a statistically significant preference for mappings between periodicity, or regularity of form, and the harmonicity of partials in complex tones.

It is suspected that the aesthetics of the transitions in either domain affected the rating of the strengths of the mappings. For example, it may be that any mapping to Harmonicity scored more highly due to the interesting change of timbre involved. The use of Harmonicity itself was problematic due to the FM synthesis method, as sideband positions were not precisely controlled.

It is notable that inverted relationships were not always seen as the worst matches. For example, the mapping between No. of Points and Inverse Brightness scored nearly as highly as the direct No. of Points-Brightness mapping. More comparisons of direct and inverse mappings are needed, as are considerations of mappings between discrete and continuous variables.
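The error-bar comparison used above amounts to computing a mean and standard error of the mean (SEM) per mapping and checking whether the resulting intervals overlap. A sketch with invented rating data on the paper's 5-point scale:

```python
# Mean and standard error of the mean (SEM) for subjective ratings,
# as used for error-bar comparisons. The rating lists are invented
# illustrations, not data from the study.
import math

def mean_sem(ratings):
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance (n - 1 in the denominator), then SEM = sd / sqrt(n).
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

def intervals_overlap(a, b):
    """Crude visual-inspection check: do the mean +/- SEM bars overlap?"""
    (ma, sa), (mb, sb) = mean_sem(a), mean_sem(b)
    return abs(ma - mb) <= sa + sb

curvature_brightness = [5, 4, 5, 4, 5, 4, 5, 5, 4, 5, 4]  # hypothetical
curvature_amplitude = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 2]   # hypothetical
print(intervals_overlap(curvature_brightness, curvature_amplitude))
```

Non-overlapping bars suggest (but do not prove) a significant difference in preference; a formal test such as a paired t-test would be the natural next step for the fuller study proposed below.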
Several subjects expressed concern that their answers were not consistent. While this may be attributed in part to test anxiety, it is felt that subjects may have recalibrated their internal rating scale as the test went on. It may be necessary to redesign the test to reduce this effect.

5.3 Future Work

The survey indicates preferences for some mappings, and tendencies towards others, that could prove useful for graphical interfaces. However, a more exhaustive exploration of all mapping permutations is needed, as is the inclusion of complex mapping schemes. A web-based test may provide the largest sample set to date for this type of study. It is worth noting that the aim of such a study is not the determination of universal truths about auditory/visual perceptual relationships, but rather to determine whether particular mappings are sufficiently intuitive that they promote learnability and aid comprehension. In the specific case of musical games, the aim is the reduction of cognitive load such that multiple parameters may be controlled in an engaging and fun way.

Another synthesis technique must be explored that offers more exact control over the distribution of partials; this will give a deeper understanding of mappings to harmonic distribution.

A number of novel musical games and applications are being developed to test the sound control and representation techniques. These will be made available at http://www.mee.tcd.ie/~lmosulli/

6 CONCLUSIONS

This paper showed how the design of GUIs for musical games may draw on the experience of interface design for computer music systems. A suggested approach was the use of virtual graphical objects to control sound synthesis for novel timbre-based games. The universal nature of results from cognitive experiments was presented as a possible way to encode intuitive mappings between the auditory and visual domains. A pilot study analysing subjective user preferences showed some inclination towards certain mappings, but a full study is now needed. A redesigned online survey was proposed to drastically increase the number of participants.

7 ACKNOWLEDGEMENTS

This research is kindly supported by the Irish Research Council for Science, Engineering & Technology (IRCSET).

REFERENCES

[1] Abbado, A. Perceptual Correspondences of Abstract Animation and Synthetic Sound. M.S. Thesis, MIT Media Laboratory, June 1988.

[2] Arfib, D., Couturier, J. M., Kessous, L., and Verfaille, V. Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces. Organised Sound 7(2): 127-144. Cambridge University Press, 2002.

[3] Berthaut, F. and Desainte-Catherine, M. Combining Audiovisual Mappings for 3D Musical Interaction. International Computer Music Conference (ICMC 10), New York City and Stony Brook, NY, USA, June 1-5, 2010.

[4] Giannakis, K. A comparative evaluation of auditory-visual mappings for sound visualisation. Organised Sound 11(3): 297-307. Cambridge University Press, 2006.

[5] Holland, M.K., and Wertheimer, M. Some physiognomic aspects of naming, or maluma and takete revisited. Perceptual and Motor Skills, 1964, 119: 111-117.

[6] Hunt, A. and Kirk, R. Mapping Strategies for Musical Performance.
In Trends in Gestural Control of Music, M. Wanderley and M. Battier, editors. IRCAM - Centre Pompidou, Paris, 2000.

[7] Kohler, W. Gestalt Psychology. Liveright Publishing Corporation, New York, 1947.

[8] Lemi, E., Georgaki, A. and Whitney, J. Reviewing the transformation of sound to image in new computer music software. In Proceedings of Sound and Music Computing (SMC 07), 2007, pp. 11-13.

[9] Levin, G. Painterly Interfaces for Audiovisual Performance. M.S. Thesis, Massachusetts Institute of Technology, 2000. Available at http://acg.media.mit.edu/people/golan/thesis/ (accessed October 2010).

[10] Mulder, A., Fels, S. and Mase, K. Mapping virtual object manipulation to sound variation. IPSJ SIG Notes 97-MUS-23, Vol. 97, No. 122, pp. 63-68.

[11] O'Sullivan, L. Development of a Graphical Sound Synthesis Controller Exploring Cross-Modal Perceptual Analogies. Master's Thesis, Trinity College Dublin, Ireland, 2007. www.mee.tcd.ie/~lmosulli/liamosullivan_mphilthesis.pdf

Figure 10: Mean subjective ratings for various mappings of the Size parameter.

Figure 11: Mean subjective ratings for various mappings of the No. of Points parameter.

Figure 12: Mean subjective ratings for various mappings of the Curvature parameter.

Figure 13: Mean subjective ratings for various mappings of the Periodicity parameter.