Visualizing auditory spatial imagery of multi-channel audio


Audio Engineering Society Convention Paper

Presented at the 116th Convention, 2004 May 8-11, Berlin, Germany

This convention paper has been reproduced from the author's advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional papers may be obtained by sending request and remittance to Audio Engineering Society, 60 East 42nd Street, New York, New York, USA; also see www.aes.org. All rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.

Visualizing auditory spatial imagery of multi-channel audio

John Usher (1), Wieslaw Woszczyk (2)

(1) Multichannel Audio Research Laboratory
(2) Centre for Interdisciplinary Research in Music Media and Technology

Correspondence should be addressed to John Usher (john.usher@mail.mcgill.ca)

ABSTRACT

To describe a multichannel audio experience in terms of its spatial features requires us to consider sound imagery in terms of precedent sound. We mean precedent sound to be that part of a phantom sound image which contains spatial information about the virtual sound source. We have developed and tested a Graphical User Interface (GUI) which allows a listener to describe where they hear both precedent and environment-related sound in an audio scene. The GUI has previously been used as a tool for describing where we hear the precedent sound in two-channel sound reproduction, and we now extend the experimental paradigm to investigate phantom imagery for a multichannel loudspeaker arrangement. We present a category system for describing the spatial sound attribute definition, and have tested the GUI using 5 loudspeakers arranged according to BS-775 to replay multi-channel sound recordings of three different musical pieces (two duets and one solo).
Graduate Tonmeister students used the GUI to describe these sound scenes, and a variety of statistical analyses are used to visualize auditory spatial imagery.

1. INTRODUCTION

1.1. Context of work

Multichannel audio systems are fast becoming standard in the home. This new market creates a demand for a better understanding of how to evaluate these systems in terms which are relevant to the experience they generate. Sound imagery within a multichannel sound scene is a general concept which describes both the timbral and spatial aspects of perceived objects within the scene. We have developed a computer program which allows listeners to visualize sound images and describe how they sound as they are listening to reproduced music. The new tool, a Graphical User Interface (GUI), is the result of previous investigations [1, 2]. We use conclusions from these studies to show how this perceptual visualization process can be made both intuitive to the GUI user and possible to analyse in ways which are meaningful for the development of loudspeakers used in multi-channel audio. In this paper we will describe spatial sound imagery using the sound character [3] called definition. Definition is described by Rasch and Plomp [4] in this way: "the temporal aspects of indirect sound correspond to the subjective attribute definition, the ability to distinguish and to recognize sounds." We add to this that the definition of a sound image tells a listener if that part of the sound image contains precedent sound; that is, whether the recorded sound source (e.g. a musical instrument) seems to exist in this part of the image. A point to make here is that in this paper we will often refer to the perceived sound environment of a recorded musical instrument. This is what Ellis refers to as the virtual environment [5]: the listener's interpretation of the presented sound to represent objects in an environment other than that from which the impression physically originates (paraphrasing from [5]).
Likewise, the perception of three-dimensional physical space within this virtual environment is an abstract space which is not necessarily the same as real-world space. The GUI is used as a perceptual data collector to describe the virtual environment created by the reproduced sound, and unless explicitly stated in this paper it is this abstract sound scene we refer to: not the physical properties of the listening or recording environment. Perceptual imagery in music is a complicated subject, and one which is studied in a number of academic disciplines. To music theorists and philosophers, musical imagery is often discussed as a concept of free or pure imagination, in the absence of any audible sound source [6]. For sound image perception in loudspeaker audio, we will borrow the notion of a sound image existing in the absence of any real-world sound; Griesinger has described [7] how we can perceive an identifiable sound object as coming from a point in real-world space which we can physically measure to have no acoustic radiation. In this paper we use the term image in the following sense: that part of the experienced sound field which seems to have originated from the same musical instrument belongs to the same image. Ideas about how the human brain interprets the real-world sound scene into a perception of identifiable images (or objects) are dealt with in the psychological study of Auditory Scene Analysis (ASA) [8]. Specifically in the context of the present work, cognitive approaches of ASA in spatial sound imagery have generally been concerned with real-world, source-related scene analysis problems, or in the language of cognitive psychology: ecological questions (e.g. [9, 10]).
This paper is not directly concerned with understanding the underlying processes the mind employs for discerning sound images, but we believe the holy grail of accurately predicting spatial imagery from a given real-world sound scene can only be attained by treating the task as an ASA problem.

1.2. Spatial impression of reproduced sound

With respect to the spatial properties of sound images within a sound scene, the subject can be broken down into two parts: source-related and environment-related descriptions (see Mason's comments in [11], p. 45). The sound associated with the location of the source in the virtual sound scene is also called the precedent sound [12]. However, we do not distinguish these two sound components as separate sound images: the two sound characters will generally form part of the same sound image. In other words, we say that a sound image can be spatially described by the regions of space in the perceived sound scene that seem to contain sound originating from the same source. The general term for describing perceived spatial attributes of sound is auditory spatial impression (ASI). Morimoto [12] discusses two important components of ASI: auditory source width (ASW) and listener envelopment (LEV). The new GUI is used to generate a visual representation of these two components using three dimensions: lateral space and image definition. Spatial sound envelopment [7, 12, 13, 14] is a general term used to describe the

spatial extent of a sound image in relation to the listener, but we will avoid using this term because what is understood by the feeling of envelopment is difficult to formalize and may confuse the meaning of data provided by the GUI. An example of the confusion which can arise when describing these two things graphically is noted by Morimoto [12]: ASW can be thought of as a one-dimensional case of LEV, with ASW as a width measurement such as degrees, and LEV describing a region of space in the virtual environment (which would be an area in our case, as we are interested in sound imagery in the lateral plane). Furthermore, description of ASI in terms of ASW is complicated by spatiotemporal fusion of precedent sound, which can produce an impression of multiple images from the same virtual source [15]. To work around these problems of describing ASI with a graphical language, we restrict our study to an investigation of what Martens [15] calls auditory spatial imagery.

1.3. Purpose of paper

In this paper we present a method to visualize the sound imagery of musical instruments within a multi-channel audio scene. The GUI we will use for this purpose must provide data to test the following assumptions about the method we will describe:

1. Data from the GUI can be used to reveal differences in our perception of sound images in different sound scenes.

2. The GUI can be used to describe not just the spatial extent of a sound image, but also how a sound image is perceived in terms of a sound character we call definition.

Conclusions regarding these two questions must be subject to a statistical interpretation that gives an indication as to how reliably the GUI can be used to map perceived spatial sound images.
2. SPATIAL DEFINITION OF SOUND

Rasch and Plomp [4] define the subjective sound character definition in the following way: "the temporal aspects of indirect sound correspond to the subjective attribute definition, the ability to distinguish and to recognize sounds." This is perhaps similar to what has been described as distinctness [3, 16], and in this spirit Lund [17] discusses a similar metric called the consistency score, which is composed of two subjective attributes called robustness and diffusion. We define definition thus: definition is that temporal attribute of a spatial sound image, or part of a sound image, which tells a listener if that part of the sound image represents source- or environment-related sound. This is different from saying direct or indirect sound, and virtual source definition would be a more descriptive title for this attribute than just definition, but we will use definition for the sake of brevity.

2.1. Categorizing spatial definition

In earlier work in this project [1], the spatial definition of an image (or rather, part of an image) was described using an analogy of image definition to temperature, whereby a defined image corresponds to a high-temperature ("hot") image, and a poorly defined image to a "cold" image. This approach was chosen as the participants in the experiment had experience using this temperature analogy in critical listening classes. In this earlier work, there were five categories to describe the image spatial definition (hot, hotter than average, average, cold, and uncertain). We found that listeners generally used three of these categories to describe their perceived sound scene, as shown in figure 1. We are not saying that the most-used categories in this experiment correspond to the most relevant categories in auditory spatial imagery; rather, we are noting that listeners generally used three categories to describe the imagery.
In 1968, Chernyak and Dubrovsky [18] reported on an experiment whereby people would represent what they term the "subjective acoustic space" using a semi-circular grid corresponding to a map of this space. Sound images were drawn at the relevant grid location, and an "estimate of [the listener's] sensation" indicated using an ordinal scale of 1-6, with 6 representing a maximal sensation. It seems from the visual plots of the perceived imagery in this work that only three categories were deemed enough to display the data (this is also shown in [19], figure 3.24). In the present work, three categories are used to describe the definition of a sound image.

Fig. 1: Histogram showing the percentage of instances that each of the 5 categories describing the sound image was chosen, from data collected in the main experiment in [1]. Data consists of 524 unique ellipses (n = 524) from 24 presentations of 3 unique sound scenes. The line indicates the average likelihood of picking a category at random (20%).

We will use the idea of spatial image definition discussed above as the dimension of sound character along which we will measure our perception of spatial sound imagery. We have chosen three horizontal categories of this sound attribute (Rosch describes in [20] what is meant by horizontal categories). This gives a third dimension to describe lateral sound imagery: two distance co-ordinates (which could be angle and range, or an x, y Cartesian description) and the third, definition. Before we describe the three categories of definition, bear in mind that we are not saying that this exactly mirrors what Rosch calls the perceived world structure; rather, we use these three categories to enable a description of perceived spatial sound imagery in a way which will be most meaningful for answering the questions that this paper is interested in (see section 1.3). Furthermore, the names we have given to these categories do not alone explain what each category is; it might be less confusing to name the categories A, B, and C. As mentioned before, we say that a sound image can be spatially described by the regions of space in the perceived virtual environment that seem to contain sound originating from the same source.
In turn, the sound character in each part of this image can be described as being composed of one to three categories of definition, so when we say "sound images that sound diffuse", we really mean parts of an image that sound diffuse.

2.2. Images that sound defined

If a listener can identify a location of the sound source in a (virtual) sound scene, then we say that that region of the image is defined. Spatially speaking, the definition of a sound image may be described as defined in the same way that the solidity of a physical object might be described as solid.

2.3. Images that sound diffuse

We use the word diffuse to describe a region of a sound image which has the perceived character of an acoustically diffuse sound field. This should not be confused with meaning that this region of the sound image is diffuse in the technical-acoustical sense, or in the way that Rumsey equates broadness of an image with diffuseness (see fig. 3 in [21]). In fact, it would be possible to have a diffuse part of a sound image coming from a very small region of virtual space (consider artificial reverberation reproduced from a single loudspeaker in an anechoic chamber).

2.4. Images that sound fuzzy

The likes of Corey ([22], pp. 60-64) have used the term fuzzy to describe the perceived spatial imagery of a sound source when it is placed near a room boundary. We spatially categorize parts of a sound image as fuzzy if they have the timbral sound character of direct or defined sound, but from their spatiotemporal instability the listener perceives a sense of uncertainty about the existence of the source at this place in the virtual environment. This may be similar to a sense of source motion in the virtual space.
Although not strictly included in this category by the above definition, we also describe regions of a sound image which represent low-order acoustic reflections in the virtual environment (such as a region of the image where a slap echo seems to originate) as having a fuzzy definition.

3. DESIGN OF THE GUI

We have designed a computer program which allows people to describe the perceived spatial extent of a sound image in the virtual environment, and then to categorize parts of that sound image in terms of spatial definition. The program has been developed using the results of previous work [1, 2]. The new program is a Graphical User Interface (GUI) which runs in MATLAB, whereby users can

draw ellipses to describe the spatial sound imagery of a single musical instrument (in the lateral plane): the screen-shot in figure 9 shows how the user sees the GUI. Basic design considerations for this tool are discussed in [1], but we will summarize the use of the new GUI here, as it differs from previous versions in two important ways. Firstly, spatial imagery of multiple sound objects (e.g. musical instruments) can be simultaneously described, and therefore analysed, in any sound scene. Secondly, with the new definition categorization taxonomy the GUI can be used to describe indirect sound in the virtual sound scene.

4. EXPERIMENT

4.1. Purpose of experiment

We have designed a method to obtain data from a listening experience which will help us answer the questions posed in section 1.3. The experiment setup is the same as that in which the GUI is to be used: for analysing sound imagery in multi-channel loudspeaker audio experiences. We do not provide a comprehensive evaluation of the results, as this may detract from the purpose of the paper, which is to present a new method. Full results will be discussed in a future paper.

4.2. Method

4.2.1. Stimuli

Three commercially available recorded pieces of music were used in the experiment. All pieces were recorded live, with no artificial reverberation added, and mixed to be replayed with five electroacoustically matched Beolab 4 speakers (manufactured by Bang and Olufsen) arranged according to ITU-R BS.775 [23] (the rear speakers were at ±110° to the central axis). Two pieces were for two instruments, and one for a solo. In this paper, we will refer to each musical piece by the name in parentheses (that is, the title of the piece for the first, and the composer's name for the others):

1. Voice (soprano) and piano (Ave Maria) [24].
2. Solo piano (Beethoven) [25].
3. Trumpet and organ (Henri Tomasi) [26].

The music was selected because we found the sound imagery for the instruments in the pieces to be fairly complex and enveloping.
For instance, the piece for trumpet and organ was recorded in a very large cathedral with the trumpeter playing alongside the organist on the raised balcony, and the trumpet therefore sounded much larger than would often be heard in audio experiences. All three pieces were originally recorded and produced for release on SACD. We converted the DSD stream to a 44.1 kHz, 16-bit PCM format using Meitner converters, and we replayed the sound using the MAX/MSP audio software package running on a computer (a different computer from that on which the GUI program was running). The music was presented in random order in sets of three. We also timed how long it took each subject to draw the sound image for a single instrument. It is suggested in [27] that the perception of listener envelopment in reproduced sound listening experiences is influenced primarily by the loudness of the sound at the listening position. We therefore tried to equalize loudness for the three different musical pieces. Firstly, we calibrated the 5 loudspeakers so that each of them gave the same un-weighted SPL of 75 dB at the listening position with pink noise. Using the sound level meter, we then adjusted the reproduction level for each of the 3 pieces until the SPL at the listening position was approximately 75 dB (using eye-ball averaging of the meter). The individual levels were then tweaked by the author until the subjective loudness of all three pieces was equal. We did not equalize the loudness for each listener due to time restrictions, but would do so for a detailed comparative study between different sound recordings.

4.2.2. Participants and instructions

Six people took part in the experiment, all of whom were students in the graduate sound recording program at McGill University with years of critical listening experience. Three of the subjects had used an earlier version of the GUI in the experiments discussed in [1] and [2].
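The level-matching arithmetic described above can be sketched as follows. The SPL readings here are hypothetical (in the experiment the final trims were set by meter and then by ear), and this is not the authors' calibration software:

```python
# Sketch of the loudness-equalization step: each piece's playback gain is
# offset so that its measured SPL approaches the 75 dB target at the
# listening position. Hypothetical meter readings; trims in the paper
# were finalized subjectively.

def gain_trim_db(measured_spl_db, target_spl_db=75.0):
    """dB offset to apply to a piece's playback level."""
    return target_spl_db - measured_spl_db

def db_to_linear(gain_db):
    """Linear amplitude factor corresponding to a dB gain."""
    return 10.0 ** (gain_db / 20.0)

# Hypothetical readings for the three pieces at the listening position:
readings = {"Ave Maria": 73.0, "Beethoven": 77.0, "Tomasi": 75.0}
trims = {piece: gain_trim_db(spl) for piece, spl in readings.items()}
# e.g. a piece reading 77 dB is 2 dB hot, so its gain is trimmed by
# -2 dB, a linear amplitude factor of about 0.794.
```

The subjective fine-tuning that follows such a meter-based trim cannot, of course, be captured in a formula; the sketch only covers the first, objective pass.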
The participants were ushered into the inner curtain area of the listening room described by figure 2, so they did not have a chance to investigate the sound reproduction system beyond the outer white curtains. In fact, the participants were not even told that the sound reproduction would be by 5 loudspeakers arranged to the BS-775 standard (see appendix 1 for the formal instructions given to the listeners). Fortuitously, furthering this white lie, there were numerous Wave-Field Synthesis (WFS) speaker arrays in the room (see figure 2), and many listeners initially guessed that the sound reproduction was by means of the WFS speakers. This was to our experimental advantage, as it eliminated any perceptual bias effect of anticipating the sound-source location. Formal written instructions were then given to the participants, as shown in appendix 1. The participants were asked if they understood the instructions fully, and were paid $15 if they chose to take part. A short training session of 10 minutes followed, whereby the GUI user would practise using it to describe a sound scene (which was one of the sound recordings used during the test). The 4 repeats of the test typically took two hours to complete. The participant sat at the centre of the inner circle in front of the table shown in figure 2. The GUI program ran on a PC laptop on the table, and the computer which played the sounds was next to the laptop. The full musical piece (approx. 6 minutes each) would repeat continuously until the subject selected the next presentation on both computers. The listener sat on a non-rotating chair and was free to move their head, but was asked to align their head with markers on the centre axis when making their decision about the location of perceived sound images.

4.2.3. Experiment set-up

The arrangement of the transducers with respect to the listeners is shown in figure 2. We have not shown the coloured reference markers on the curtains, but they match the markers shown on the GUI in figure 9. There were nine markers on the inner curtain immediately in front of the listener, arranged at 10-degree intervals from -40 to +40 degrees re. the centre axis. There were also two markers at ±110 degrees, one on either side of the listener, and another marker directly behind the listener. The markers were coloured strings, and the colours corresponded to those of the markers shown on the GUI.

4.3. Results

This paper is about the idea of a new tool for evaluating sound scenes, and not about an evaluation of a particular sound scene. We will therefore not present the results of a detailed comparative study using the three sound scenes in our experiments. Rather, we will focus on the ways in which data produced from listening experiments using the GUI can be used to gain a quantitative insight into spatial sound image perception, and to suggest answers to the following three questions:
1. Do listeners hear each of the three sound categories differently?

2. For different sound scenes, how are the sound images represented in each?

3. Are spatial sound images of different instruments within the same sound scene heard differently?

Fig. 2: Plan view of the experimental set-up (dimension annotations: 6 m, 5.2 m, 1.85 m), with labels for the centre axis, table, outer curtain, inner curtain, the WFS speaker panels (not used), and a heavy, folded thick curtain. Positions of the loudspeakers used in the listening test are shown as greyed circles behind the outer curtain. Loudspeakers are arranged according to [23], with the rear speakers at 110° to the centre line.

We will present three methods to show how we might answer these questions. Rather than present all the data, we have selected a few interesting examples; a further paper will present the results in full, as this paper is concerned with the development and design of the GUI.

4.3.1. Density plots

To summarize the graphical sound image descriptions elicited by the listeners in the experiment, we use density plots. Density plots have been used in similar experiments to show where sound is heard by different listeners for a given sound scene (or where sound is heard by the same listener for different presentations of the same scene). The graphical responses for the same scene are overlaid, and the regions where the responses coincide are weighted according to the number of times sound was heard at the location. Overlaying the graphical image maps may be accomplished by photography [28], by manually summing the responses [18], or using a computer [1, 29]. Figure 3 shows density plots for each of the three pieces of music used in the experiment. In this experiment we analyse the responses only within the same definition category (i.e., diffuse, fuzzy, or defined). As alluded to in [1], we cannot assume that the sound image attribute of definition is a continuous percept (that is, we cannot represent the perceived definition of part of an image on a continuous numerical scale). If we were to

overlay responses from different categories, then this could lead us to conclusions such as: if diffuse sound is heard at the same location on two different presentations of the same sound scene, then this is equivalent to hearing a fuzzy image at this location on a single occasion; which is obviously nonsense. Also, because the GUI allows users to describe single musical instruments within the sound scene, we can use density plots to visualize the sound imagery associated with just a single instrument in a complex musical listening experience. We might therefore summarize the density plots shown in figure 3 as a within-category analysis of sound imagery of single instruments in a sound scene. In order to show differences between density plots of the same sound scene, we must normalize the scaling of the density plots in a meaningful way. We do this by scaling the third dimension of the density plot (that is, its height or density) relative to the maximum value which is possible. This maximum is proportional to the number of instances in which it is possible for sound of a certain category to have been heard (or rather, shown to have been heard using the GUI) at a particular location in the virtual sound scene. In our experiment, this value is equal to the number of subjects multiplied by the number of repeats for each subject. Therefore, when we have a normalized density of 0.5, this means that for half of the cases when this sound scene was described (by the same listener or by different listeners) it was shown that sound of this category was heard at this location in the sound scene. Of course, this highlights a major assumption upon which this analysis rests: that differences in image descriptions from different listeners can be ignored. The validity of this assumption is discussed in section 4.3.2. We will now discuss how differences between density plots such as those shown in figure 3 can be investigated.
We do this by arithmetically subtracting two density plots, giving what we will call a differential density plot. These are shown in figures 4 and 5.

4.3.2. Using the GUI to show how the lateral spread of sound is heard

As Olive says in [30], controlled listening tests for audio research purposes are generally conducted with a small panel of highly trained listeners. He further discusses the need for a metric in such listening tests to gauge the reliability of the listeners' judgements about their listening experience. Unfortunately, research suggests that such indices (notably the intra-individual reliability index devised by Gabrielsson et al. [16]) show listeners generally report spatial hearing attributes of an audio experience less consistently than other attributes, such as timbral character descriptions. Specifically, we found in a previous study using the GUI [1] that listeners can report the lateral distribution of a sound image to a degree of consistency that is comparable to the ear's lateral resolution for sound localization, and that this elicitation is consistent for repeat presentations of the sound scene both to the same listener and to different listeners. However, in another study [2] we found that listeners' representation of a sound image's (egocentric) range varies to a much higher degree in between-subject comparisons, even though the general within-subject trends are observed to be similar across subjects. It is for the above reasons that we attempt to denoise the data from the density plots, to look at the sound imagery with this noisy range dimension removed. This was first undertaken in [1] by calculating the strength of the density plot as a function of the lateral angle from the listening position.
In practice this strength is calculated by integrating the density plot along a line from the listening position to the edge of the virtual environment (which is in theory infinite, but we call this edge the furthest distance from the listener at which sound is drawn in the virtual environment). This lateral image strength analysis allows us to suggest a quantitative answer to the three questions posed in section 4.3. In theory, the units of scale for angular image strength should be a two-dimensional area, as we are integrating the density plot along a line. We implement this calculation by spatially quantising the density plot into a grid of squares, which are slightly less than 1 x 1 cm. Therefore, the units of the vector plots in figure 6 correspond to a volume.
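The lateral image strength calculation described above can be sketched as follows. This is a minimal illustration (not the authors' MATLAB implementation), assuming a density plot already quantized onto a grid of equal-area cells, with a hypothetical listener position and 10-degree angular bins:

```python
# Sketch of lateral image strength: the normalized density plot is
# summed along each direction from the listening position. Because the
# grid cells have equal area, a plain sum over the cells falling in an
# angular sector is proportional to the integral along that ray.

import math
from collections import defaultdict

def lateral_image_strength(density, listener, bin_deg=10):
    """Strength vs. lateral angle for a density plot {(x, y): density}.
    Angle 0 is straight ahead (+y); positive angles are to the right."""
    strength = defaultdict(float)
    lx, ly = listener
    for (x, y), d in density.items():
        angle = math.degrees(math.atan2(x - lx, y - ly))
        sector = bin_deg * round(angle / bin_deg)  # nearest 10-deg bin
        strength[sector] += d
    return dict(strength)

# Hypothetical density: some defined sound ahead, diffuse sound behind.
density = {(0, 2): 0.8, (0, 3): 0.5, (1, -2): 0.3}
s = lateral_image_strength(density, listener=(0, 0))
# s[0] sums the two cells straight ahead; the cell behind and to the
# right lands in the 150-degree sector.
```

Repeating this per listener and per repeat gives the per-angle samples from which the means and 95% confidence intervals of figures 6 and 7 are drawn.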

Fig. 3: Density plots arranged as a function of definition category for three instruments, one from each of the musical pieces used in the experiment. Panels: (a) Piano from Ave Maria: diffuse; (b) Piano from Ave Maria: fuzzy; (c) Piano from Ave Maria: defined; (d) Piano from Beethoven: diffuse; (e) Piano from Beethoven: fuzzy; (f) Piano from Beethoven: defined; (g) Organ from Henri Tomasi: diffuse; (h) Organ from Henri Tomasi: fuzzy; (i) Organ from Henri Tomasi: defined.
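The overlay-and-normalize procedure of section 4.3.1, and the differential density plots, can be sketched as follows. This is a hypothetical illustration (not the authors' MATLAB code), with each response reduced to the set of grid cells a listener marked for one definition category:

```python
# Sketch of within-category density plots and differential density
# plots. Each response is the set of grid cells where one listener
# marked sound of a given category ("diffuse", "fuzzy" or "defined").

from collections import Counter

def density_plot(responses, n_subjects, n_repeats):
    """Normalized density: the fraction of the n_subjects * n_repeats
    descriptions in which this category was heard at each cell."""
    counts = Counter()
    for cells in responses:         # one response = one drawn region
        counts.update(set(cells))   # a cell counts once per response
    n_max = n_subjects * n_repeats  # maximum possible count per cell
    return {cell: c / n_max for cell, c in counts.items()}

def differential_density(d_ref, d_other):
    """Differential density plot: d_ref minus d_other, cell by cell.
    Positive values mean more sound of this category was heard at the
    cell for the reference instrument."""
    cells = set(d_ref) | set(d_other)
    return {c: d_ref.get(c, 0.0) - d_other.get(c, 0.0) for c in cells}

# Hypothetical example: 2 subjects x 2 repeats = 4 descriptions.
diffuse_organ = [[(0, 1), (0, 2)], [(0, 1)], [(0, 1), (1, 1)], [(0, 2)]]
d = density_plot(diffuse_organ, n_subjects=2, n_repeats=2)
# d[(0, 1)] == 0.75: diffuse sound was marked at cell (0, 1) in three
# of the four descriptions of this scene.
```

Note that responses are only ever combined within a single category, in line with the argument of section 4.3.1 that definition is not a continuous percept.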

Fig. 4: Comparison of density plots between instruments from the same musical piece. Differential density plots arranged by definition category: (a) Piano and voice from Ave Maria: defined; (b) Organ and trumpet from Henri Tomasi: diffuse. Each comparison is for the same musical piece, and the scaling of the data is the same for all sub-figures. The reference image is the first instrument mentioned (e.g., the organ compared to the trumpet). If the sound contribution from the first instrument mentioned in the sub-figure caption is greater than the contribution from the other instrument, then the difference will be positive, and the density plot will be shaded darker red. The scale is normalized as discussed in section 4.3.1.

Fig. 5: Comparison of density plots between instruments from different musical pieces. Two density plots are compared in each sub-figure, for different instruments from separate musical pieces as shown in figure 3 (there are eight possible comparisons for each of the three categories): (a) Piano from Ave Maria and Piano from Beethoven: diffuse; (b) Piano from Beethoven and Organ: defined. Scale is the same as in figure 4.

Fig. 6: Polar plots showing lateral image strength as a function of angle for each of the three definition categories (diffuse, fuzzy, well defined): (a) Piano from Ave Maria; (b) Piano from Beethoven; (c) Organ from Henri Tomasi. Mean and 95% confidence intervals for the integrand are shown (95% CIs are the thin solid lines of the same colour as the mean line). Locations of the loudspeakers are indicated with black dots. The image strength scale is arbitrary, but the same for all figures (see section 4.3.2 for details of scaling).

Fig. 7: Polar plots showing lateral image strength as a function of angle for different people ('bm', 'dh', 'fc', 'on', 'qh', 'tz'), arranged by song name and definition category: (a) Organ from Henri Tomasi: diffuse; (b) Organ from Henri Tomasi: fuzzy; (c) Organ from Henri Tomasi: defined. Mean and 95% confidence intervals for the integrand are shown, from 4 repeats of the same stimulus. Locations of the loudspeakers are indicated with black dots. The image strength scale is arbitrary, but the same for all sub-figures and figure 6 (see section 4.3.2 for details of scaling).

Fig. 8: Time (in seconds) taken by different users of the GUI to describe the spatial imagery of a single instrument (e.g. the voice for Ave Maria). 95% confidence limits shown.
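The response-time comparison of figure 8 is tested in section 5 with a one-way ANOVA across listeners. A minimal sketch of that statistic, with hypothetical timings (this is not the authors' analysis code, and the data below do not reproduce the reported F value):

```python
# Sketch of a one-way ANOVA F statistic for comparing per-listener
# response times: the ratio of between-listener to within-listener
# mean squares. Hypothetical timings only.

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA, where each group holds the
    response times (in seconds) of one listener."""
    k = len(groups)                          # number of listeners
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical response times for three listeners (seconds); the
# second listener is consistently much slower than the other two.
times = [[60, 65, 62], [90, 95, 92], [61, 64, 63]]
F = one_way_anova_F(times)  # large F: listeners differ systematically
```

A large F relative to the critical value for the corresponding degrees of freedom indicates that the listeners' mean response times differ by more than their trial-to-trial scatter would explain.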

5. DISCUSSION

As this paper is not about a detailed comparison of sound imagery in sound scenes, we will only highlight a selection of interesting findings shown by the density plots, differential density plots, and lateral image strength analyses.

Looking at the density plots in figure 3, we see that the maximum normalized density is generally around 0.5. However, it is the size of this high-density region which tells us about the sound imagery of a particular definition category. We can see from sub-figure 3(i) that there was not much image with a defined definition associated with the organ in the Henri Tomasi piece, and that the diffuse-sounding part of the image was generally confined to the area within the outer curtain of the listening room. Furthermore, sub-figure 4(b) tells us that this diffuse sound from the organ was heard in this inner area less than the diffuse sound from the trumpet in the same musical piece.

The differential density plots reveal to the eye spatial differences between the auditory spatial imagery of different instruments. Sub-figure 4(a) shows this nicely: according to the GUI results, the voice was heard closer and more centred than the piano in the Ave Maria piece. The lateral image spread of this piano differs from that of the piano in the Beethoven piece, as sub-figure 6(b) shows: the diffuse sound from the Beethoven piano is heard to be stronger directly behind the listener than that from the Ave Maria piano, but the lateral spread of the diffuse sound in both of these pieces is not as even as that of the organ, as shown in the polar image-strength plots of figure 6. In particular, from the vector plots for both pianos we can see that the diffuse sound is generally heard in the direction of the rear loudspeakers, so we have a visualization of what Gerzon calls the detent effect [31, 32] (the pulling of a sound image in the direction of a loudspeaker).
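The differential density plots of figures 4 and 5 are formed by subtracting one instrument's normalized density map from the other's, cell by cell, so the sign of each cell indicates which instrument dominated at that location. A minimal sketch (the function name and the tiny 2x2 maps are ours, purely illustrative; the real maps are normalized as in section 4.3.1):

```python
def differential_density(ref, other):
    """Cell-by-cell difference of two equally sized, equally normalized
    density maps.  Positive cells mean the reference instrument
    contributed more sound at that location (shaded darker red in the
    paper's figures); negative cells mean the other instrument
    dominated there."""
    return [[r - o for r, o in zip(ref_row, other_row)]
            for ref_row, other_row in zip(ref, other)]

# e.g. comparing a "voice" map against a "piano" map
voice = [[0.4, 0.1],
         [0.0, 0.2]]
piano = [[0.1, 0.1],
         [0.3, 0.2]]
diff = differential_density(voice, piano)
# diff[0][0] > 0: more voice than piano heard in that cell;
# diff[1][0] < 0: the piano dominated in that cell
```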
Regarding the imagery descriptions of the same sound scene by different listeners, we find from the within-subject comparison of lateral image strength in figure 7 that particular listeners consistently report differently from the others. This is particularly true for subject bm, who for the organ piece reports hearing less of a particular image category in a certain direction than the others do. This difference is also statistically significant, as the 95% confidence intervals for the image strength vector do not overlap the other subjects' polar vectors. The within-subject analysis also reveals how subjects nevertheless agree with each other about hearing a certain category of image definition from a particular lateral direction in their sound scene. The subject response times shown in figure 8 further the idea that different listeners undertake the sound-mapping task differently, and we find that the response times are statistically different for different listeners [F(5,11) = 9.59, p < .1].

6. CONCLUSIONS

In this paper we investigate auditory spatial imagery using the sound character called sound image definition, which is defined as: "Definition is that temporal attribute of a spatial sound image, or part of a sound image, which tells a listener if that part of the sound image represents precedent sound." We have also described three categories of this attribute: diffuse, fuzzy, and defined.

To investigate auditory spatial imagery in a loudspeaker sound scene, we have developed a computational sound-mapping tool (a Graphical User Interface, or GUI) for visualizing our perception of these images. The tool can be used to reveal differences in our perception of images for different sound scenes (for example, different sound recordings, or different transducer configurations).
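The listener effect on response time reported in section 5 is a one-way analysis of variance. As a reminder of the statistic involved, here is a stdlib-only sketch with invented response times (not the experiment's data); the F ratio is the between-listener mean square over the within-listener mean square:

```python
def one_way_anova_F(groups):
    """F = between-group mean square / within-group mean square,
    with k - 1 and N - k degrees of freedom."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# response times (seconds) for three hypothetical listeners, 4 repeats
# each; the middle listener is consistently much slower
times = [[60, 65, 58, 62], [95, 90, 99, 92], [61, 64, 59, 63]]
F = one_way_anova_F(times)
# a large F means between-listener variation dominates repeat-to-repeat
# variation, as found for the real response times shown in figure 8
```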
Furthermore, the GUI can be used to describe not just the spatial extent of a sound image, but also how we hear an image in terms of its definition. We have tested the GUI in a formal listening test with three surround recordings of live musical instruments, and present a variety of methods to analyse the data so as to give a visualization of spatial sound imagery as represented with the new mapping tool. The analyses of density plots have been tailored for use with small numbers of participants in listening tests, where the new GUI will be used for describing perceived spatial sound imagery in multichannel sound scenes. We find that the analysis techniques can be used for between-subject and within-subject comparisons of how we hear the direct and indirect sound in a reproduced multi-channel sound recording. In a further paper we will present the results of the experiment in full.

7. ACKNOWLEDGEMENTS

The authors thank all who took part in the experiment, as well as William Martens for helpful advice and Martha De Francisco for suggestions for the music used in the tests. This work was sponsored with financial grants from Bang & Olufsen, the Natural Sciences and Engineering Research Council of Canada, and Valorisation-Recherche Québec, to whom the authors are grateful.

References

[1] J. Usher and W. Woszczyk. Design and testing of a graphical mapping tool for analyzing spatial audio scenes. In Proceedings of the Audio Engineering Society 24th International Conference, Banff, Canada, July 2003.
[2] J. Usher, W. Martens, and W. Woszczyk. The influence of the presence of multiple sources on auditory spatial imagery. In Proceedings of the 18th International Congress on Acoustics, Kyoto, Japan, April 2004.
[3] T. Letowski. Sound quality assessment: concepts and criteria. In Proceedings of the Audio Engineering Society 87th International Convention, Preprint 2825.
[4] R. A. Rasch and R. Plomp. The listener and the acoustic environment. In D. Deutsch, editor, The Psychology of Music. Academic Press.
[5] S. R. Ellis. Presence of mind. Presence, 5(2).
[6] R. I. Godøy and H. Jørgensen. Musical Imagery, chapter: Theoretical perspectives. Swets and Zeitlinger, 2001.
[7] D. Griesinger. General overview of spatial impression, envelopment, localization, and externalization. In Proceedings of the 15th International Audio Engineering Society Conference, Copenhagen.
[8] A. S. Bregman. Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press, Cambridge, Mass., 1990.
[9] S. Lakatos, S. McAdams, and R. Causse. The representation of auditory source characteristics: Simple geometric form. Perception & Psychophysics, 59(8).
[10] A. J. Kunkler-Peck and M. T. Turvey. Hearing shape. Journal of Experimental Psychology: Human Perception and Performance, 26(1), 2000.
[11] F. Rumsey. Spatial Audio. Focal Press, 2001.
[12] M. Morimoto. How can auditory spatial impression be generated and controlled? In Proceedings of the 2001 International Workshop on Spatial Media, 2001.
[13] M. Morimoto and Z. Maekawa. Auditory spaciousness and envelopment. In Proceedings of the 13th International Congress on Acoustics, volume 2.
[14] D. Griesinger. Spaciousness and envelopment in musical acoustics.
In Proceedings of the 101st Convention of the Audio Engineering Society, Preprint no. 44.
[15] W. L. Martens. Two-subwoofer reproduction enables increased variation in auditory spatial imagery. In Proceedings of the 2001 International Workshop on Spatial Media, October 2001.
[16] R. E. Gabrielsson, U. Rosenburg, and H. Sjogren. Judgements and dimension analysis of perceived sound quality of sound-reproducing systems. Journal of the Acoustical Society of America, 55.
[17] T. Lund. Enhanced localization in 5.1 production. In Proceedings of the 109th Convention of the Audio Engineering Society, Los Angeles, Preprint 5243, 2000.
[18] R. I. Chernyak and N. A. Dubrovsky. Pattern of the noise images and the binaural summation of loudness for the different interaural correlation of noise. In Proceedings of the 6th International Congress on Acoustics, pages A53–A56.
[19] J. Blauert. Spatial Hearing: The Psychophysics of Human Sound Localization. MIT Press, Cambridge, Mass., revised edition.
[20] E. Rosch. Principles of categorization. In E. Margolis and S. Laurence, editors, Concepts: Core Readings. MIT Press, 1978.

[21] F. Rumsey. Spatial quality evaluation for reproduced sound: Terminology, meaning, and a scene-based paradigm. J. Audio Eng. Soc., 50(9), 2002.
[22] J. A. Corey. An integrated system for dynamic control of auditory perspective in a multichannel sound field. PhD thesis, Department of Sound Recording, McGill University, 2002.
[23] ITU-R. Multichannel stereophonic sound system with and without accompanying picture. Recommendation BS.775-1, International Telecommunication Union Radiocommunication Assembly.
[24] Franz Schubert. Ave Maria. SACD, PentaTone Classics, 2003.
[25] L. v. Beethoven. Piano Sonata no. 23 in F minor, Op. 57, Allegro ma non troppo. SACD, PentaTone PTC, 2003.
[26] Henri Tomasi. Semaine Sainte à Cuzco. BIS-SACD-119, 2000.
[27] G. A. Soulodre, M. C. Lavoie, and S. G. Norcross. Objective measurements of listener envelopment in multichannel surround systems. J. Audio Eng. Soc., 51(9):826–840, 2003.
[28] B. Wagener. Räumliche Verteilungen der Hörrichtungen in synthetischen Schallfeldern. Acustica, 25:203–219.
[29] R. Mason. Elicitation and measurement of auditory spatial attributes in reproduced sound. PhD thesis, University of Surrey, England, School of Performing Arts, February 2002.
[30] S. E. Olive. Differences in performance and preference of trained versus untrained listeners in loudspeaker tests: a case study. J. Audio Eng. Soc., 51(9):806–825, 2003.
[31] H. D. Harwood. Stereophonic image sharpness. Wireless World, 74:207–211.
[32] M. A. Gerzon. Panpot laws for multispeaker stereo. In Proceedings of the 92nd Convention of the Audio Engineering Society, Vienna, Preprint 338.

1. INSTRUCTIONS GIVEN TO PARTICIPANTS IN THE EXPERIMENT

You are possibly going to take part in an experiment about where you hear sound images of musical instruments. By sound images we mean the spatial image created by all the sound from the instrument, including the sound reflections from the room the instrument seems to be in. The music will be presented in surround sound, from a number of loudspeakers behind the curtains.

We have made a computer program which you will use to describe where you hear these sound images. The program, running on the computer in front of you, shows a top-down view of the room you are in; for example, the two white curtains and the coloured strings are shown, as well as where you are sitting now. In the future this program will be used to see how different recording techniques and different sound reproduction systems affect how we hear these sound images.

In the experiment you will hear 3 pieces of recorded music. Two recordings are of two instruments, and the other is of a piano solo. As you are listening to the music, you should draw ellipses with the computer program to show the locations where sound from each instrument is heard. As mentioned, the sound image created by each instrument is composed of all the sound which seems to come from that instrument, so the image consists of the direct sound and the sound echoes in the recording.

Now, for each instrument image, we want you to describe how defined different parts of the image seem. The sound image definition can be thought of as varying from being not defined whatsoever to being defined. That is, if a sound image is not defined at all, we do not hear any part of the instrument image coming from this location in the listening room. On the other hand, part of a sound image may be very well defined. When an image is defined, it may seem that the virtual instrument could be situated at this location in the room.
More precisely, we are saying that the sound image of an instrument is defined at locations in the sound scene where the instrument seems to be located. Remember, the sound scene is an artificial environment which only exists as an abstract idea in the head of the listener. Of course, you might hear part of a sound image as defined even though there is not a loudspeaker at this point in the listening room. A defined part of an image is much clearer than a poorly defined image part, in the same way that a singer's voice becomes clearer when we move closer to him or her.

When you draw the ellipse to represent where you hear this defined image, just draw the size of the image you hear; try not to be influenced by how big the instrument is in real life. For instance, a grand piano may have a much smaller, more defined image than a violin. Furthermore, you may hear these defined parts separated from each other in the sound field. This would be represented by multiple ellipses of the type well-defined.

At the other end of the image definition scale, we have poorly defined images. This kind of image might be created by reflected sound in the recording environment (that is, reverberation). We call this kind of image a diffuse sound image, as in a real sound environment diffuse sound arrives at the listener's head from all directions, and we cannot localize the sound source by hearing just this diffuse sound. We are familiar with this sound as reverberation, and if part of the sound image sounds like this then you should indicate this by drawing a diffuse category ellipse at this region in the sound scene. As the sound could be coming from anywhere around you, a diffuse sound image may sound like it is surrounding you, but it may not seem like you are actually within the image. If this is the case, then show this by drawing a doughnut ring of diffuse ellipses around the head shown on the computer screen.
On the other hand, it may seem that this diffuse-sounding image is coming from just a small region of space, for instance if we were to reproduce a single channel of artificial reverb through a single loudspeaker.

The third category of image definition which we use to describe the sound image is a definition between very defined and diffuse. We call this category fuzzy, or blurred. A sound image may seem fuzzy if it has a clarity like that of a defined image, but the image moves or comes and goes as the music plays. You can also think of a fuzzy image region as a region of space within which you are uncertain whether the instrument exists in the sound scene. The fuzzy image may sound a bit like a diffuse sound image; for example, musical notes from this part of the sound image may seem sustained due to the echoes in the room in which the instrument was recorded.

It is often easier to describe a single instrument at a time. To do this, you will have to concentrate hard on the sound from just one instrument, and try to mentally block out the sound from the other instrument. You might wish to think first about where the image is defined, and to draw an ellipse on the computer screen to show this region, before deciding where the partly undefined and diffuse parts of the sound image are (fear not! you can change any ellipse you draw at any time by selecting and dragging it). You may decide that you can't hear any diffuse sound component at all, or any defined sound image, and it is possible that you would represent the entire sound image for an instrument as being composed of a single definition type, that is, well-defined, fuzzy, or diffuse. As we mentioned before, you may also hear multiple defined parts in an image; if this happens, then just show this using the computer program by drawing multiple ellipses.

Remember: there are no wrong answers in this experiment! We only want to know where you hear the sound images in the sound scene. You can spend as long as you want to describe the sound images in each musical presentation, and you can get up and go home if you decide you've had enough at any time!

You will hear a piece of music played by a number of loudspeakers behind the white curtains, and this piece will repeat until you have finished describing the sound image for each instrument in the music, using the computer program. After a break, the 3 pieces of music will then be replayed in a different order. In this run you may hear the sound images to be different than in the last run, so treat each repeat as if you were hearing the music for the first time. There are 4 runs in total, and you can take a break between runs for 5-15 minutes for an ear-break (but please try to finish a run once you have started it). You are free to move your head from side to side, but please remain seated and keep your head directly beneath the cross on the ceiling when you make your final decision about where you hear the sound images.

Finally: enjoy listening! Think hard about where you hear the sound image to be located, and then think about how defined the image is in different parts of the image.
To make it easier to map where you hear sound in the room to where this is located on the computer screen, use the coloured strings on the curtain which look like Christmas decorations. You may want to read these instructions again to be sure you know what you have to do in this experiment, and if you are unclear about anything at all, please tell the experiment supervisor now. Thank you for your time and effort in this experiment.


Common assumptions in color characterization of projectors Common assumptions in color characterization of projectors Arne Magnus Bakke 1, Jean-Baptiste Thomas 12, and Jérémie Gerhardt 3 1 Gjøvik university College, The Norwegian color research laboratory, Gjøvik,

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Architectural Acoustics Session 3aAAb: Architectural Acoustics Potpourri

More information

TO HONOR STEVENS AND REPEAL HIS LAW (FOR THE AUDITORY STSTEM)

TO HONOR STEVENS AND REPEAL HIS LAW (FOR THE AUDITORY STSTEM) TO HONOR STEVENS AND REPEAL HIS LAW (FOR THE AUDITORY STSTEM) Mary Florentine 1,2 and Michael Epstein 1,2,3 1Institute for Hearing, Speech, and Language 2Dept. Speech-Language Pathology and Audiology (133

More information

Audio Metering Measurements, Standards, and Practice (2 nd Edition) Eddy Bøgh Brixen

Audio Metering Measurements, Standards, and Practice (2 nd Edition) Eddy Bøgh Brixen Audio Metering Measurements, Standards, and Practice (2 nd Edition) Eddy Bøgh Brixen Some book reviews just about write themselves. Pick the highlights from the table of contents, make a few comments about

More information

A PSYCHOACOUSTICAL INVESTIGATION INTO THE EFFECT OF WALL MATERIAL ON THE SOUND PRODUCED BY LIP-REED INSTRUMENTS

A PSYCHOACOUSTICAL INVESTIGATION INTO THE EFFECT OF WALL MATERIAL ON THE SOUND PRODUCED BY LIP-REED INSTRUMENTS A PSYCHOACOUSTICAL INVESTIGATION INTO THE EFFECT OF WALL MATERIAL ON THE SOUND PRODUCED BY LIP-REED INSTRUMENTS JW Whitehouse D.D.E.M., The Open University, Milton Keynes, MK7 6AA, United Kingdom DB Sharp

More information

White Paper Measuring and Optimizing Sound Systems: An introduction to JBL Smaart

White Paper Measuring and Optimizing Sound Systems: An introduction to JBL Smaart White Paper Measuring and Optimizing Sound Systems: An introduction to JBL Smaart by Sam Berkow & Alexander Yuill-Thornton II JBL Smaart is a general purpose acoustic measurement and sound system optimization

More information

PLACEMENT OF SOUND SOURCES IN THE STEREO FIELD USING MEASURED ROOM IMPULSE RESPONSES 1

PLACEMENT OF SOUND SOURCES IN THE STEREO FIELD USING MEASURED ROOM IMPULSE RESPONSES 1 PLACEMENT OF SOUND SOURCES IN THE STEREO FIELD USING MEASURED ROOM IMPULSE RESPONSES 1 William D. Haines Jesse R. Vernon Roger B. Dannenberg Peter F. Driessen Carnegie Mellon University, School of Computer

More information

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 AN HMM BASED INVESTIGATION OF DIFFERENCES BETWEEN MUSICAL INSTRUMENTS OF THE SAME TYPE PACS: 43.75.-z Eichner, Matthias; Wolff, Matthias;

More information

A SIMPLE ACOUSTIC ROOM MODEL FOR VIRTUAL PRODUCTION AUDIO. R. Walker. British Broadcasting Corporation, United Kingdom. ABSTRACT

A SIMPLE ACOUSTIC ROOM MODEL FOR VIRTUAL PRODUCTION AUDIO. R. Walker. British Broadcasting Corporation, United Kingdom. ABSTRACT A SIMPLE ACOUSTIC ROOM MODEL FOR VIRTUAL PRODUCTION AUDIO. R. Walker British Broadcasting Corporation, United Kingdom. ABSTRACT The use of television virtual production is becoming commonplace. This paper

More information

Loudspeakers and headphones: The effects of playback systems on listening test subjects

Loudspeakers and headphones: The effects of playback systems on listening test subjects Loudspeakers and headphones: The effects of playback systems on listening test subjects Richard L. King, Brett Leonard, and Grzegorz Sikora Citation: Proc. Mtgs. Acoust. 19, 035035 (2013); View online:

More information

the 106th Convention 1999 May 8-11 Munich,Germany

the 106th Convention 1999 May 8-11 Munich,Germany An Investigation of Microphone Techniques for Ambient Sound in Surround Sound Systems 4912 (15) Russell Mason and Francis Rumsey, University of Surrey, Guildford, England Presented at the 106th Convention

More information

All-digital planning and digital switch-over

All-digital planning and digital switch-over All-digital planning and digital switch-over Chris Nokes, Nigel Laflin, Dave Darlington 10th September 2000 1 This presentation gives the results of some of the work that is being done by BBC R&D to investigate

More information

Auditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are

Auditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are In: E. Bruce Goldstein (Ed) Encyclopedia of Perception, Volume 1, Sage, 2009, pp 160-164. Auditory Illusions Diana Deutsch The sounds we perceive do not always correspond to those that are presented. When

More information

Quantify. The Subjective. PQM: A New Quantitative Tool for Evaluating Display Design Options

Quantify. The Subjective. PQM: A New Quantitative Tool for Evaluating Display Design Options PQM: A New Quantitative Tool for Evaluating Display Design Options Software, Electronics, and Mechanical Systems Laboratory 3M Optical Systems Division Jennifer F. Schumacher, John Van Derlofske, Brian

More information

Lab P-6: Synthesis of Sinusoidal Signals A Music Illusion. A k cos.! k t C k / (1)

Lab P-6: Synthesis of Sinusoidal Signals A Music Illusion. A k cos.! k t C k / (1) DSP First, 2e Signal Processing First Lab P-6: Synthesis of Sinusoidal Signals A Music Illusion Pre-Lab: Read the Pre-Lab and do all the exercises in the Pre-Lab section prior to attending lab. Verification:

More information

MAKE ROOM FOR BETTER MEETINGS. The IT & Facility Manager s guide

MAKE ROOM FOR BETTER MEETINGS. The IT & Facility Manager s guide MAKE ROOM FOR BETTER MEETINGS The IT & Facility Manager s guide Create the ultimate meeting room experience This guide is for anyone who s responsible for designing or operating company meeting facilities.

More information

Effect of room acoustic conditions on masking efficiency

Effect of room acoustic conditions on masking efficiency Effect of room acoustic conditions on masking efficiency Hyojin Lee a, Graduate school, The University of Tokyo Komaba 4-6-1, Meguro-ku, Tokyo, 153-855, JAPAN Kanako Ueno b, Meiji University, JAPAN Higasimita

More information

PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF)

PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF) PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF) "The reason I got into playing and producing music was its power to travel great distances and have an emotional impact on people" Quincey

More information

Evolution of sound reproduction from mechanical solutions to digital techniques optimized for human hearing

Evolution of sound reproduction from mechanical solutions to digital techniques optimized for human hearing Communication acoustics: Paper ICA2016-109 Evolution of sound reproduction from mechanical solutions to digital techniques optimized for human hearing Ville Pulkki 1 Aalto University, Finland, Ville.Pulkki@aalto.fi

More information

(Refer Slide Time 1:58)

(Refer Slide Time 1:58) Digital Circuits and Systems Prof. S. Srinivasan Department of Electrical Engineering Indian Institute of Technology Madras Lecture - 1 Introduction to Digital Circuits This course is on digital circuits

More information

AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY

AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY Eugene Mikyung Kim Department of Music Technology, Korea National University of Arts eugene@u.northwestern.edu ABSTRACT

More information

THE DIGITAL DELAY ADVANTAGE A guide to using Digital Delays. Synchronize loudspeakers Eliminate comb filter distortion Align acoustic image.

THE DIGITAL DELAY ADVANTAGE A guide to using Digital Delays. Synchronize loudspeakers Eliminate comb filter distortion Align acoustic image. THE DIGITAL DELAY ADVANTAGE A guide to using Digital Delays Synchronize loudspeakers Eliminate comb filter distortion Align acoustic image Contents THE DIGITAL DELAY ADVANTAGE...1 - Why Digital Delays?...

More information

METHODS TO ELIMINATE THE BASS CANCELLATION BETWEEN LFE AND MAIN CHANNELS

METHODS TO ELIMINATE THE BASS CANCELLATION BETWEEN LFE AND MAIN CHANNELS METHODS TO ELIMINATE THE BASS CANCELLATION BETWEEN LFE AND MAIN CHANNELS SHINTARO HOSOI 1, MICK M. SAWAGUCHI 2, AND NOBUO KAMEYAMA 3 1 Speaker Engineering Department, Pioneer Corporation, Tokyo, Japan

More information

Temporal coordination in string quartet performance

Temporal coordination in string quartet performance International Symposium on Performance Science ISBN 978-2-9601378-0-4 The Author 2013, Published by the AEC All rights reserved Temporal coordination in string quartet performance Renee Timmers 1, Satoshi

More information

Methods to measure stage acoustic parameters: overview and future research

Methods to measure stage acoustic parameters: overview and future research Methods to measure stage acoustic parameters: overview and future research Remy Wenmaekers (r.h.c.wenmaekers@tue.nl) Constant Hak Maarten Hornikx Armin Kohlrausch Eindhoven University of Technology (NL)

More information

EFFECT OF REPETITION OF STANDARD AND COMPARISON TONES ON RECOGNITION MEMORY FOR PITCH '

EFFECT OF REPETITION OF STANDARD AND COMPARISON TONES ON RECOGNITION MEMORY FOR PITCH ' Journal oj Experimental Psychology 1972, Vol. 93, No. 1, 156-162 EFFECT OF REPETITION OF STANDARD AND COMPARISON TONES ON RECOGNITION MEMORY FOR PITCH ' DIANA DEUTSCH " Center for Human Information Processing,

More information

LISTENERS RESPONSE TO STRING QUARTET PERFORMANCES RECORDED IN VIRTUAL ACOUSTICS

LISTENERS RESPONSE TO STRING QUARTET PERFORMANCES RECORDED IN VIRTUAL ACOUSTICS LISTENERS RESPONSE TO STRING QUARTET PERFORMANCES RECORDED IN VIRTUAL ACOUSTICS SONG HUI CHON 1, DOYUEN KO 2, SUNGYOUNG KIM 3 1 School of Music, Ohio State University, Columbus, Ohio, USA chon.21@osu.edu

More information

Extending Interactive Aural Analysis: Acousmatic Music

Extending Interactive Aural Analysis: Acousmatic Music Extending Interactive Aural Analysis: Acousmatic Music Michael Clarke School of Music Humanities and Media, University of Huddersfield, Queensgate, Huddersfield England, HD1 3DH j.m.clarke@hud.ac.uk 1.

More information

Applied Acoustics 73 (2012) Contents lists available at SciVerse ScienceDirect. Applied Acoustics

Applied Acoustics 73 (2012) Contents lists available at SciVerse ScienceDirect. Applied Acoustics Applied Acoustics 73 (2012) 1282 1288 Contents lists available at SciVerse ScienceDirect Applied Acoustics journal homepage: www.elsevier.com/locate/apacoust Three-dimensional acoustic sound field reproduction

More information

THE PSYCHOACOUSTICS OF MULTICHANNEL AUDIO. J. ROBERT STUART Meridian Audio Ltd Stonehill, Huntingdon, PE18 6ED England

THE PSYCHOACOUSTICS OF MULTICHANNEL AUDIO. J. ROBERT STUART Meridian Audio Ltd Stonehill, Huntingdon, PE18 6ED England THE PSYCHOACOUSTICS OF MULTICHANNEL AUDIO J. ROBERT STUART Meridian Audio Ltd Stonehill, Huntingdon, PE18 6ED England ABSTRACT This is a tutorial paper giving an introduction to the perception of multichannel

More information

Psychomusicology: Music, Mind, and Brain

Psychomusicology: Music, Mind, and Brain Psychomusicology: Music, Mind, and Brain The Preferred Level Balance Between Direct, Early, and Late Sound in Concert Halls Aki Haapaniemi and Tapio Lokki Online First Publication, May 11, 2015. http://dx.doi.org/10.1037/pmu0000070

More information

Precedence-based speech segregation in a virtual auditory environment

Precedence-based speech segregation in a virtual auditory environment Precedence-based speech segregation in a virtual auditory environment Douglas S. Brungart a and Brian D. Simpson Air Force Research Laboratory, Wright-Patterson AFB, Ohio 45433 Richard L. Freyman University

More information

BeoVision Televisions

BeoVision Televisions BeoVision Televisions Technical Sound Guide Bang & Olufsen A/S January 4, 2017 Please note that not all BeoVision models are equipped with all features and functions mentioned in this guide. Contents 1

More information

Evaluation of a New Active Acoustics System in Performances of Five String Quartets

Evaluation of a New Active Acoustics System in Performances of Five String Quartets Audio Engineering Society Convention Paper 8603 Presented at the 132nd Convention 2012 April 26 29 Budapest, Hungary This paper was peer-reviewed as a complete manuscript for presentation at this Convention.

More information

Speech Recognition and Signal Processing for Broadcast News Transcription

Speech Recognition and Signal Processing for Broadcast News Transcription 2.2.1 Speech Recognition and Signal Processing for Broadcast News Transcription Continued research and development of a broadcast news speech transcription system has been promoted. Universities and researchers

More information

Psychoacoustics. lecturer:

Psychoacoustics. lecturer: Psychoacoustics lecturer: stephan.werner@tu-ilmenau.de Block Diagram of a Perceptual Audio Encoder loudness critical bands masking: frequency domain time domain binaural cues (overview) Source: Brandenburg,

More information

System Satellites Acoustimass Module. 2.5" (64 mm) full-range driver (per satellite) 5.25" (133 mm) dual voice coil low frequency driver

System Satellites Acoustimass Module. 2.5 (64 mm) full-range driver (per satellite) 5.25 (133 mm) dual voice coil low frequency driver Key Features Subwoofer/satellite systems that deliver high fidelity and extendedbandwidth reproduction of voice and music for a wide range of installed applications, including retail, restaurant and hospitality

More information

Pitch Perception and Grouping. HST.723 Neural Coding and Perception of Sound

Pitch Perception and Grouping. HST.723 Neural Coding and Perception of Sound Pitch Perception and Grouping HST.723 Neural Coding and Perception of Sound Pitch Perception. I. Pure Tones The pitch of a pure tone is strongly related to the tone s frequency, although there are small

More information

Spatial Audio Quality Perception (Part 1): Impact of Commonly Encountered Processes

Spatial Audio Quality Perception (Part 1): Impact of Commonly Encountered Processes PAPERS Spatial Audio Quality Perception (Part 1): Impact of Commonly Encountered Processes ROBERT CONETTA 1, 2, TIM BROOKES, 1 AES Member, FRANCIS RUMSEY, 1, 3 AES Fellow, (robertc@sandybrown.com) (t.brookes@surrey.ac.uk)

More information

Laboratory Assignment 3. Digital Music Synthesis: Beethoven s Fifth Symphony Using MATLAB

Laboratory Assignment 3. Digital Music Synthesis: Beethoven s Fifth Symphony Using MATLAB Laboratory Assignment 3 Digital Music Synthesis: Beethoven s Fifth Symphony Using MATLAB PURPOSE In this laboratory assignment, you will use MATLAB to synthesize the audio tones that make up a well-known

More information

2018 Fall CTP431: Music and Audio Computing Fundamentals of Musical Acoustics

2018 Fall CTP431: Music and Audio Computing Fundamentals of Musical Acoustics 2018 Fall CTP431: Music and Audio Computing Fundamentals of Musical Acoustics Graduate School of Culture Technology, KAIST Juhan Nam Outlines Introduction to musical tones Musical tone generation - String

More information

Witold MICKIEWICZ, Jakub JELEŃ

Witold MICKIEWICZ, Jakub JELEŃ ARCHIVES OF ACOUSTICS 33, 1, 11 17 (2008) SURROUND MIXING IN PRO TOOLS LE Witold MICKIEWICZ, Jakub JELEŃ Technical University of Szczecin Al. Piastów 17, 70-310 Szczecin, Poland e-mail: witold.mickiewicz@ps.pl

More information

EDDYCHEK 5. Innovative eddy current testing for quality and process control. Touchscreen. Networking. All major applications.

EDDYCHEK 5. Innovative eddy current testing for quality and process control. Touchscreen. Networking. All major applications. EDDYCHEK 5 Innovative eddy current testing for quality and process control All major applications 2 channel testing Touchscreen Reporting Networking Eddy current testing: essential Full-body testing with

More information

Module 8 VIDEO CODING STANDARDS. Version 2 ECE IIT, Kharagpur

Module 8 VIDEO CODING STANDARDS. Version 2 ECE IIT, Kharagpur Module 8 VIDEO CODING STANDARDS Lesson 27 H.264 standard Lesson Objectives At the end of this lesson, the students should be able to: 1. State the broad objectives of the H.264 standard. 2. List the improved

More information

CS2401-COMPUTER GRAPHICS QUESTION BANK

CS2401-COMPUTER GRAPHICS QUESTION BANK SRI VENKATESWARA COLLEGE OF ENGINEERING AND TECHNOLOGY THIRUPACHUR. CS2401-COMPUTER GRAPHICS QUESTION BANK UNIT-1-2D PRIMITIVES PART-A 1. Define Persistence Persistence is defined as the time it takes

More information

Colour Reproduction Performance of JPEG and JPEG2000 Codecs

Colour Reproduction Performance of JPEG and JPEG2000 Codecs Colour Reproduction Performance of JPEG and JPEG000 Codecs A. Punchihewa, D. G. Bailey, and R. M. Hodgson Institute of Information Sciences & Technology, Massey University, Palmerston North, New Zealand

More information

Multichannel source directivity recording in an anechoic chamber and in a studio

Multichannel source directivity recording in an anechoic chamber and in a studio Multichannel source directivity recording in an anechoic chamber and in a studio Roland Jacques, Bernhard Albrecht, Hans-Peter Schade Dept. of Audiovisual Technology, Faculty of Electrical Engineering

More information

Temporal summation of loudness as a function of frequency and temporal pattern

Temporal summation of loudness as a function of frequency and temporal pattern The 33 rd International Congress and Exposition on Noise Control Engineering Temporal summation of loudness as a function of frequency and temporal pattern I. Boullet a, J. Marozeau b and S. Meunier c

More information

Precision testing methods of Event Timer A032-ET

Precision testing methods of Event Timer A032-ET Precision testing methods of Event Timer A032-ET Event Timer A032-ET provides extreme precision. Therefore exact determination of its characteristics in commonly accepted way is impossible or, at least,

More information

The acoustics of the Concert Hall and the Chinese Theatre in the Beijing National Grand Theatre of China

The acoustics of the Concert Hall and the Chinese Theatre in the Beijing National Grand Theatre of China The acoustics of the Concert Hall and the Chinese Theatre in the Beijing National Grand Theatre of China I. Schmich a, C. Rougier b, P. Chervin c, Y. Xiang d, X. Zhu e, L. Guo-Qi f a Centre Scientifique

More information

The Development of a Synthetic Colour Test Image for Subjective and Objective Quality Assessment of Digital Codecs

The Development of a Synthetic Colour Test Image for Subjective and Objective Quality Assessment of Digital Codecs 2005 Asia-Pacific Conference on Communications, Perth, Western Australia, 3-5 October 2005. The Development of a Synthetic Colour Test Image for Subjective and Objective Quality Assessment of Digital Codecs

More information

The interaction between room and musical instruments studied by multi-channel auralization

The interaction between room and musical instruments studied by multi-channel auralization The interaction between room and musical instruments studied by multi-channel auralization Jens Holger Rindel 1, Felipe Otondo 2 1) Oersted-DTU, Building 352, Technical University of Denmark, DK-28 Kgs.

More information

A BEM STUDY ON THE EFFECT OF SOURCE-RECEIVER PATH ROUTE AND LENGTH ON ATTENUATION OF DIRECT SOUND AND FLOOR REFLECTION WITHIN A CHAMBER ORCHESTRA

A BEM STUDY ON THE EFFECT OF SOURCE-RECEIVER PATH ROUTE AND LENGTH ON ATTENUATION OF DIRECT SOUND AND FLOOR REFLECTION WITHIN A CHAMBER ORCHESTRA A BEM STUDY ON THE EFFECT OF SOURCE-RECEIVER PATH ROUTE AND LENGTH ON ATTENUATION OF DIRECT SOUND AND FLOOR REFLECTION WITHIN A CHAMBER ORCHESTRA Lily Panton 1 and Damien Holloway 2 1 School of Engineering

More information

ISOMET. Compensation look-up-table (LUT) and How to Generate. Isomet: Contents:

ISOMET. Compensation look-up-table (LUT) and How to Generate. Isomet: Contents: Compensation look-up-table (LUT) and How to Generate Contents: Description Background theory Basic LUT pg 2 Creating a LUT pg 3 Using the LUT pg 7 Comment pg 9 The compensation look-up-table (LUT) contains

More information

Why do some concert halls render music more expressive and impressive than others?

Why do some concert halls render music more expressive and impressive than others? Evaluation of Concert Halls / Opera Houses : ISMRA216-72 Why do some concert halls render music more expressive and impressive than others? Tapio Lokki Aalto University, Finland, Tapio.Lokki@aalto.fi Abstract

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 7.5 BALANCE OF CAR

More information

University of Huddersfield Repository

University of Huddersfield Repository University of Huddersfield Repository Kassier, Rafael, Lee, Hyunkook, Brookes, Tim and Rumsey, Francis An informal comparison between surround microphone techniques Original Citation Kassier, Rafael, Lee,

More information