Timbre blending of wind instruments: acoustics and perception

Sven-Amin Lembke
CIRMMT / Music Technology, Schulich School of Music, McGill University
sven-amin.lembke@mail.mcgill.ca

ABSTRACT

The acoustical and perceptual factors involved in timbre blending between orchestral wind instruments are investigated on the basis of a pitch-invariant acoustical description of these instruments. The description involves the estimation of spectral envelopes and the identification of prominent spectral maxima, or formants. A possible perceptual relevance of these formants is tested in two experiments employing different behavioral tasks. Relative frequency location and magnitude differences between formants are shown to bear pitch-invariant perceptual relevance to blend for several instruments, findings that contribute to a perceptual theory of orchestration.

Keywords: timbre perception, blend, orchestration, auditory fusion, spectral envelope

1. BACKGROUND

Timbre blending between instruments is a common aim in orchestration practice. Important perceptual cues for blend are known to be based on note-onset synchrony or partial-tone harmonicity [5], which rely mainly on rhythmic or pitch relationships and hence on compositional and performance factors. An orchestrator's choice of instruments, on the other hand, is more likely motivated by acoustical features of particular instruments. Previous studies have suggested the perceptual relevance of pitch-invariant spectral traits characterizing the timbre of orchestral wind instruments. Analogous to human voice formants, stable local spectral maxima across a wide pitch range have been reported for these instruments [7, 3]. Furthermore, coincidence of these formant regions between instrumental sounds has been argued to contribute to the percept of blend between timbres [4].
Stephen McAdams
CIRMMT / Music Technology, Schulich School of Music, McGill University
smc@music.mcgill.ca

Our aim is to verify and validate these hypotheses through a two-stage approach comprising acoustical description and perceptual investigation. An attempt is made to correlate instrument usage with acoustical and perceptual factors by investigating whether a perceptual relevance of pitch-invariant spectral traits can be shown.

2. ACOUSTICAL DESCRIPTION

Spectral analyses are computed on a broad audio-sample database covering the entire pitch range of each instrument. Based on the obtained spectral information, partial tones are identified, and their frequencies and amplitudes are used to build global distributions of partials across all available pitches and between dynamic levels. A curve-fitting procedure applied to these empirically derived distributions yields a spectral-envelope estimate from which pitch-invariant traits such as formant regions are identified and described, as shown in Figure 1.

Figure 1. Spectral envelope estimate for bass trombone (smoothing spline, smoothing coefficient p = 2e-7) and distribution of partial tones (dots); axes: frequency in Hz vs. power spectral density in dB.
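The curve-fitting stage lends itself to a short sketch. The following is a minimal illustration of fitting a smoothing spline to a pooled distribution of partial tones, in the spirit of Figure 1; the data, formant shape, and smoothing value are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def estimate_spectral_envelope(freqs_hz, levels_db, smoothing):
    """Fit a smoothing spline to a pooled distribution of partial tones.

    freqs_hz / levels_db: frequencies and levels of all partials pooled
    across pitches; `smoothing` trades fidelity to individual partials
    against smoothness (analogous to the smoothing coefficient p).
    """
    order = np.argsort(freqs_hz)
    return UnivariateSpline(np.asarray(freqs_hz)[order],
                            np.asarray(levels_db)[order],
                            k=3, s=smoothing)

# Hypothetical pooled partials outlining a single formant near 500 Hz
rng = np.random.default_rng(0)
f = np.linspace(100.0, 2000.0, 200)
true_env = -20.0 * (np.log2(f / 500.0)) ** 2        # dB curve peaking at 500 Hz
levels = true_env + rng.normal(0.0, 1.0, f.size)    # partials scatter around it
env = estimate_spectral_envelope(f, levels, smoothing=f.size * 1.5)
f_peak = f[np.argmax(env(f))]                        # estimated formant centre
```

Larger smoothing values yield a coarser but more pitch-generalizable envelope, which is the trade-off the paper's "optimal smoothing coefficient" addresses.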

As a means to investigate the perceptual relevance of these spectral traits, a sound-synthesis model was designed, based on two independently controllable formant filters whose frequency responses are matched to the spectral-envelope estimates. The synthesis is incorporated into a stimulus-presentation environment allowing real-time spectral-shape modification, with its sound forming a dyad with a recorded wind-instrument sound. The spectral-shape modifications were operationalized as deviations of two formant-filter parameters for each formant i: 1) the formant frequency Fi in Hz and 2) the formant magnitude Li in dB, with the deviations denoted ΔFi and ΔLi. The zero-deviation case represents the so-called ideal, which corresponds to the originally modelled filter frequency response.

3. PERCEPTUAL INVESTIGATION

3.1 Experimental design

The perceptual relevance was assessed through two behavioral experiments that differed in their tasks, the second also aiming to validate and clarify findings from the first. The synthesized instruments were paired with recorded samples of the same instruments at selected pitches. The instruments investigated in the main experiments were bassoon, horn, trumpet, oboe, flute and clarinet. Besides providing a validation of the contribution of formant regions to perceptual blend for different instruments, the experiments' multifactorial design also allowed this relevance to be investigated across different pitches, intervals and registers. With respect to multifactorial statistical hypothesis tests, both experiments adopted a within-participants design.¹

3.1.1 Experiment A: blend production

The first experiment employed a production task and was conducted with 17 participants, recruited as musically experienced listeners. Across 88 trials (22 conditions × 4 repetitions), participants were asked to adjust either ΔFi or ΔLi directly in order to achieve the maximum attainable blend.
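A two-formant source-filter synthesis of the kind described above can be sketched as follows; the resonator design, bandwidths, and parameter values are illustrative assumptions, not the model actually used in the experiments.

```python
import numpy as np
from scipy.signal import lfilter

def formant_filter_coeffs(fc_hz, bw_hz, fs):
    """Two-pole resonator centred at fc_hz with bandwidth bw_hz,
    normalized to approximately unity gain at resonance."""
    r = np.exp(-np.pi * bw_hz / fs)           # pole radius from bandwidth
    theta = 2.0 * np.pi * fc_hz / fs          # pole angle from centre frequency
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [(1.0 - r) * np.sqrt(1.0 - 2.0 * r * np.cos(2.0 * theta) + r * r)]
    return b, a

def synthesize(f0, formants, fs=44100, dur=0.5):
    """Source-filter sketch: a harmonic pulse-like source shaped by formant
    filters; `formants` is a list of (centre_hz, bandwidth_hz, gain_db)
    triples, the controllable parameters per formant i being F_i and L_i."""
    n = int(fs * dur)
    t = np.arange(n) / fs
    # Harmonic source: equal-amplitude partials well below Nyquist
    source = sum(np.sin(2 * np.pi * k * f0 * t)
                 for k in range(1, int(fs / 4 / f0)))
    out = np.zeros(n)
    for fc, bw, gain_db in formants:
        b, a = formant_filter_coeffs(fc, bw, fs)
        out += (10.0 ** (gain_db / 20.0)) * lfilter(b, a, source)
    return out

# Hypothetical dyad partner: two formants, the second attenuated by 6 dB
tone = synthesize(200.0, [(500.0, 80.0, 0.0), (1500.0, 120.0, -6.0)])
```

Shifting a triple's centre frequency or gain corresponds to the ΔFi and ΔLi deviations participants manipulated.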
User control of the stimulus-production environment was provided via a two-dimensional graphical interface, with controls for the investigated formant parameter and the loudness balance between instruments. The parameter deviations from the ideal values were taken as the dependent variable.

3.1.2 Experiment B: blend rating

The second experiment was based on a simplified and less time-consuming rating task and involved 20 participants, again recruited as experienced listeners. Across 120 trials (30 conditions × 2 contexts × 2 repetitions), participants were asked to rate the relative degree of blend for a total of 5 sound dyads per condition. A continuous relative blend-rating scale was used, spanning from most blended to least blended. Across the 5 dyads, the same instrument sample formed pairs with varying formant-parameter presets for F1 or L1, with only the main formant (i = 1) being considered.² For both parameters, one of the presets presented the zero-deviation ideal case. The remaining 4 presets comprised moderate deviations below (-mod) and above (+mod) the ideal and, likewise, a pair of extreme deviations (-ext and +ext). These presets were based on generalizable formant properties, which allowed comparisons between instruments to be made on a common scale of spectral-envelope description. For F1, the 4 non-ideal preset values were defined as formant-frequency deviations corresponding to the points at which the spectral-envelope magnitude had decreased by either 3 dB (mod) or 6 dB (ext) relative to the formant maximum (see Figure 2). For L1, the moderate deviations represented values obtained from the behavioral findings of Experiment A, paired with values mirrored relative to the ideal.

¹ All reported statistically significant results are based on a significance level α = .05.
² The presets included predetermined values for the loudness balance between instruments and had also been equalized for loudness across presets.
The extreme deviations were defined as being 60% more extreme than the moderate ones.

Figure 2. Extreme deviations for the horn (parameter: frequency), based on -6 dB bounds; plotted: envelope estimate, ideal, -extreme and +extreme presets; axes: frequency in Hz vs. power spectral density in dB.

Perceptual performance for each instrument was assessed across 2-4 pitches to investigate whether rating trends for the parameter presets were stable across pitch. Furthermore, the 4 repetitions of each experimental condition comprised two contextual versions, involving the omission of the preset for either the negative or the positive extreme deviation, which allowed us to assess whether contextual variations affected rating trends.
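The preset construction for F1 can be illustrated with a short sketch that locates the -3 dB (mod) and -6 dB (ext) points around the formant maximum; the envelope used here is a hypothetical parabola in dB, not a measured instrument.

```python
import numpy as np

def frequency_presets(env_db, freqs_hz, drops_db=(3.0, 6.0)):
    """Derive -mod/+mod and -ext/+ext formant-frequency presets.

    Returns the frequencies below and above the formant maximum at which
    the spectral-envelope level has fallen by 3 dB (mod) and 6 dB (ext),
    given the envelope `env_db` sampled at `freqs_hz`.
    """
    i_max = int(np.argmax(env_db))
    presets = {"ideal": freqs_hz[i_max]}
    for d in drops_db:
        name = "mod" if d == min(drops_db) else "ext"
        target = env_db[i_max] - d
        below = np.where(env_db[:i_max] <= target)[0]
        above = np.where(env_db[i_max:] <= target)[0] + i_max
        presets[f"-{name}"] = freqs_hz[below[-1]] if below.size else freqs_hz[0]
        presets[f"+{name}"] = freqs_hz[above[0]] if above.size else freqs_hz[-1]
    return presets

# Hypothetical formant peaking at 500 Hz, -3 dB at 400/600 Hz
freqs = np.linspace(100.0, 900.0, 801)
env = -3.0 * ((freqs - 500.0) / 100.0) ** 2
p = frequency_presets(env, freqs)
```

Because the dB drops are defined relative to each instrument's own formant maximum, such presets are comparable between instruments on a common envelope scale, as the paper notes.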

3.2 Behavioral findings

Experiment A yielded results for the scenario in which participants themselves determined the parameter values leading to the best perceived blend. For relative parameter deviations ΔF1/fmax (normalized to the frequency of the formant maximum), a common trend to slightly underestimate the ideal, by about 10%, was found, as shown in Figure 3. For 4 instruments, the underestimations were statistically significant (t(16) ≥ 3.83, p ≤ .0015, η ≥ .692), determined through single-sample t-tests against a sample mean of zero. Notably, the horn and bassoon did not differ significantly from the ideal formant frequency. The absolute deviations ΔL1 showed a clear trend toward relative amplification of the main formant contributing to best blend, with results for all considered instruments (bassoon, horn, oboe) differing significantly from the ideal (t(16) ≥ 7.33, p < .001, η > .87).

Figure 3. Mean behavioral ΔF1/fmax per instrument (trumpet*, horn, bassoon, oboe*, flute*, clarinet*; error bars: standard deviation; asterisks mark significant underestimation); y-axis: relative frequency deviation in %.

No consistent significant trends can be reported for instruments compared across 3 interval types (unison, and non-unison consonance and dissonance) in a one-way ANOVA. Notably, across all tested instruments, no indication was obtained that consonance or dissonance affected the chosen location of F1 differently. Another comparison, between low and high instrument registers, yields strong significant effects for all compared instruments (trumpet and bassoon: F(1, 16) ≥ 19.2, p ≤ .0005, ηp² ≥ .545; clarinet: F(1, 16) = 5.25, p = .0358, ηp² = .247), suggesting that the perceptual relevance of formants does not hold at high registers. This finding was anticipated, given the acoustical explanation that at high pitches the increased sparsity of partial tones outlines formants inadequately, rendering them less meaningful as perceptual cues.
Experiment B aimed to confirm tendencies found in Experiment A and to investigate whether they exhibited pitch invariance across a set of representative pitches, including the original conditions from the previous experiment. Instead of finding the best blend along a continuum of parameter deviations as in Experiment A, participants compared the relative degree of perceived blend between presets, which could, and in fact did, lead to some differences in the results. With regard to frequency deviations ΔF1, the preferred (i.e., highest-rated) presets were not only oriented toward the ideal value and moderate underestimations (-mod), but included the extreme underestimation (-ext) as well. Conversely, the lowest ratings were obtained for overestimations of the ideal value (+mod and +ext), which agrees with the general trend of underestimation found in Experiment A. For gain deviations ΔLi, amplification of the main formant was again confirmed for the same instruments as in Experiment A, with nearly all comparisons differing significantly from the ideal. However, the trumpet, which had not been tested in Experiment A, did not show a clear trend for main-formant amplification. Several ANOVAs were conducted to investigate whether the findings argue for robust perceptual performance of ΔF1 ratings across pitches, intervals and contexts.³

³ Due to violations of the assumption of normality for about half the presets, main and interaction effects were tested with a battery of 5 independent ANOVAs on the raw and transformed behavioral ratings, including the non-parametric approaches of rank transformation [1] and prior alignment of nuisance factors [2]. The most liberal and most conservative p-values are reported. Whenever statistical significance is in doubt, the most conservative finding is assumed valid.
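One of the non-parametric approaches in that battery, the rank transformation of Conover and Iman [1], can be sketched in a simplified one-way form; the actual analyses were multifactorial repeated-measures ANOVAs, and the ratings below are simulated.

```python
import numpy as np
from scipy import stats

def rank_transform_anova(*groups):
    """Conover-Iman style rank transform: pool all observations, replace
    them by their ranks, then run a standard parametric ANOVA on the
    ranks. Sketched here as a one-way between-groups test."""
    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)
    # Split the rank vector back into the original groups
    idx = np.cumsum([len(g) for g in groups])[:-1]
    ranked_groups = np.split(ranks, idx)
    return stats.f_oneway(*ranked_groups)

# Hypothetical blend ratings for three presets (ideal, +mod, +ext)
rng = np.random.default_rng(2)
ideal = rng.normal(0.8, 0.1, 20)
plus_mod = rng.normal(0.5, 0.1, 20)
plus_ext = rng.normal(0.2, 0.1, 20)
res = rank_transform_anova(ideal, plus_mod, plus_ext)
```

Ranking makes the subsequent F-test robust to the normality violations mentioned in the footnote, which is why it complements the ANOVAs on the raw ratings.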
The analysis rationale involved showing main effects of the factor preset, which would confirm that the ratings can be considered a reliable indicator of perceptual differences. Furthermore, a significant preset × pitch interaction would argue against pitch invariance, as the profile of blend ratings across presets would be shown to vary as a function of pitch. Likewise, a preset × interval interaction would reveal different perceptual performance across unison and non-unison dissonance conditions. Finally, testing for main effects of context assesses the robustness of the perceptual findings across variations of stimulus context; for this test, only the presets common to both contexts, namely the ideal and the moderate deviations, are taken into account and normalized to the same scale limits. Strong main effects of preset are found across all instruments, indicating their utility as a measure of perceptual performance. Based on the multifactorial tests, the 6 instruments form two groups. The grouping is based on whether or not significant deviations from the assumption of pitch-invariant perceptual relevance have been found, more specifically, significant interactions with either pitch or interval across both contexts, or main

effects between contexts. Only statistically significant effects are reported below, with statistics from the multiple ANOVAs reported as statistic = conservative value | liberal value, and with low and high denoting the two contexts.

3.2.1 Group 1: pitch-variant

The pitch-variant group consists of flute and clarinet. The flute yields moderate interaction effects with pitch across both contexts (low: F(3.95, 74.97 | 6, 114) = 2.83 | 4.29, p = .0311 | .0006, ηp² = .13 | .184; high: F(6, 114) = 4.41 | 7.19, p = .0005 | < .001, ηp² = .188 | .275) as well as a main effect of contextual variation (F(1, 19) = 4.9 | 15.7, p = .0393 | .0008, ηp² = .205 | .452). The clarinet exhibits a significant interaction effect across intervals for both contexts (low: F(3, 57) = 2.88 | 3.38, p = .044 | .0244, ηp² = .131 | .151; high: F(2.03, 38.66 | 3, 57) = 3.83 | 4.38, p = .0297 | .0076, ηp² = .168 | .188). Although no significant interaction with pitch is obtained for the most conservative statistic, it should be noted that the most liberal findings for clarinet across both contexts (low: F(3, 57) = 6.2, p = .001, ηp² = .246; high: F(2.21, 41.92) = 19.86, p < .001, ηp² = .511) indicate even stronger effects than those obtained for the flute. As a result, flute and clarinet deliver clear indications of a departure from the assumption of pitch-invariant perceptual relevance. Interestingly, they are also the instruments least well represented by the acoustical formant description.

3.2.2 Group 2: pitch-invariant

Pitch-invariant perceptual relevance based on the formant description can be assumed for horn, bassoon, oboe and trumpet, given that no clear and consistent deviations from stable perceptual performance across pitch, interval and context were found. Among this group, the trumpet appears the least robust, as for the non-unison interval type a single main effect of context was obtained (F(1, 19) = 10.5 | 25.04, p = .0043 | < .001, ηp² = .355 | .569).
A possible acoustical explanation is that the trumpet's description exhibits a very broad formant, which may not function to the same extent as the narrower and more defined main formants found for the other three instruments. Although Experiments A and B display somewhat different results concerning the perceptual relevance of exact overlap of the formants, they both support the hypothesis that perceived blend is achieved around and below the ideal formant location and is clearly reduced above this value. To further elucidate this tendency across all pitch-invariant instruments, a cluster analysis was conducted, with the rating differences between preset levels interpreted as a dissimilarity measure. This measure considered effect sizes (r) of statistically significant non-parametric post-hoc analyses for pairwise comparisons between presets (Wilcoxon signed-rank tests).⁴ The complete-linkage clustering algorithm considered dissimilarity data averaged across 3 independent sets of effect sizes for the 4 instruments. As shown in Figure 4, the overestimations of F1 (+mod and +ext) are maximally dissimilar to a compact cluster associating the deviations centered on and below the ideal formant location (ideal, -mod and -ext).

Figure 4. Dendrogram displaying clustering of the presets (-mod, ideal, -ext vs. +mod, +ext), based on effect sizes from the post-hoc analyses.

4. CONCLUSION

We have shown that localized formant regions are perceptually relevant to blend via the main formant parameters describing relative magnitude and frequency location. With regard to the former, the preference for relative main-formant amplification can inversely be interpreted as a general attenuation of higher spectral-envelope traits.
One implication is that higher degrees of timbre blending may generally be achieved at lower dynamic markings (e.g., mf, p, pp), as it has been shown that secondary formants are less pronounced at lower excitation intensities [7]. Concerning the role of relative frequency location, the theory of formant coincidence [4] does not appear to hold across the two investigated experimental tasks. Instead, formants appear to function in the perception of blend as a critical frequency boundary: the degree of perceived blend decreases markedly whenever the relative location of a formant exceeds the frequency boundary of a reference formant (see Figure 5). As the reference formant in our investigation was predetermined by the static sampled instrument, it remains to be studied how this would apply to musical practice, in which musicians achieve blend in an interactive relationship.

⁴ Dissimilarity was assumed to be zero for non-significant differences.

Figure 5. Schematic of the theory of perceptual blend based on formant-frequency relationships (magnitude vs. frequency; blend below the reference formant's boundary, no blend above it).

Pitch invariance is suggested by both the acoustical description and the perceptual findings for most of the investigated wind instruments, which bears important implications for musical and orchestration practice. As the link between acoustical relationships and their contribution to perceptual blend has been established, pitch-invariant descriptors of the frequency boundary may serve as acoustical predictors of perceived blend. For the pitch-invariant instruments, this would enable the generation of systematic tables of blend relationships between combinations of different instruments and dynamic markings, which would be a helpful tool for orchestration practitioners. Furthermore, pitch invariance also suggests extending the notion of blend to non-unison usage in melodic coupling or chordal phrases, as we have obtained clear findings arguing for perceptual indifference to interval and consonance type. The single limitation of applicability is that the perceptual relevance is likely unwarranted at the highest instrument registers, as found in Experiment A and as would also be expected on acoustical grounds. In general, our behavioral findings suggest that instruments' perceptual performance is most stable when strong formant cues are available acoustically, i.e., at pitch ranges that yield higher quantities and densities of partial tones to outline the spectral-envelope traits. This agrees with our findings for the bassoon and horn, which in Experiment B exhibited notable robustness across pitch and in Experiment A led to behavioral blend preferences corresponding to the ideal formant location.
Apart from their common use in orchestration practice to achieve blend, their lower pitch ranges could furthermore support a hypothesis of darker timbres generally leading to more blend [6]. Whereas this hypothesis was derived from an acoustic description based on a global spectral average (e.g., the spectral centroid), our investigation contributes more differentiated explanations based on a more local spectral origin. These conclusions are expected to aid the establishment of a spectral theory of perceptual blend that would serve as an instrument-specific complement to the composition- and performance-related cues mentioned in the introduction. A general perceptual theory of timbre blending could thereupon serve as a basis for reviewing existing treatises on orchestration with respect to their agreement with perceptual realities. It could also inspire new approaches to contemporary orchestration practice. At this point, it can be hypothesized that rules established for formant-characterized instruments concern a subset of possible perceptual blend scenarios. Given that these instruments are important members of the wind-instrument family and are commonly given special care and attention in orchestration practice, they might assume a critical role in a generalized theory of perceptual blend.

5. ACKNOWLEDGMENTS

The authors would like to thank Bennett Smith for assistance in setting up the perceptual-testing hardware. This work was supported by a Schulich School of Music scholarship to S.-A.L. and a Canadian Natural Sciences and Engineering Research Council grant to S.M.

REFERENCES

[1] Conover, W. J., and Iman, R. L. Rank transformations as a bridge between parametric and nonparametric statistics. The American Statistician 35, 3 (1981), 124-129.

[2] Higgins, J. J., and Tashtoush, S. An aligned rank transform test for interaction. Nonlinear World 1 (1994), 201-211.

[3] Luce, D., and Clark, J. Physical correlates of brass-instrument tones.
The Journal of the Acoustical Society of America 42, 6 (1967), 1232-1243.

[4] Reuter, C. Die auditive Diskrimination von Orchesterinstrumenten - Verschmelzung und Heraushörbarkeit von Instrumentalklangfarben im Ensemblespiel. Peter Lang, Frankfurt am Main, 1996.

[5] Sandell, G. J. Concurrent timbres in orchestration: a perceptual study of factors determining blend. Doctoral dissertation, Northwestern University, 1991.

[6] Sandell, G. J. Roles for spectral centroid and other factors in determining blended instrument pairings in orchestration. Music Perception 13 (1995), 209-246.

[7] Schumann, K. E. Physik der Klangfarben, Vol. 2. Professorial dissertation, Universität Berlin, 1929.