User Preference on Artificial Reverberation and Delay Time Parameters


Journal of the Audio Engineering Society, Vol. 65, No. 1/2, January/February 2017 (© 2017), DOI:

User Preference on Artificial Reverberation and Delay Time Parameters

PEDRO D. PESTANA,1 AES Member (ppestana@porto.ucp.pt), JOSHUA D. REISS,2 AES Member (joshua.reiss@qmul.ac.uk), AND ÁLVARO BARBOSA3 (abarbosa@usj.edu.mo)

1 Universidade Católica Portuguesa, CITAR, Lisbon, Portugal
2 C4DM, Queen Mary University of London, London, UK
3 University of Saint Joseph, Macau, China

It is a common belief that settings of artificial reverb and delay time in music production are strongly linked to musical tempo and related factors, but this relationship, if it exists, is not yet understood. We present the results of two subjective tests that evaluate the preferences of young adults with formal training in audio engineering regarding artificial reverb and delay time, while trying to relate choice to tempo and other low-level explanatory factors. Results show a conclusive relationship between musical tempo and delay time preference as described by users. Reverb time setting preference, however, cannot be explained as strongly. On this latter aspect, the present work has nevertheless uncovered some ideas on how to proceed in order to quantify the phenomenon.

0 INTRODUCTION

The current work consists of the analysis of two subjective tests, performed with knowledgeable practitioners, that strive to explain the relationship between the choice of the time parameter in artificial temporal processing units and the underlying musical content. Specifically, we hypothesize, following technical literature [1, 2], that there is a relationship between a song's musical tempo and the setting of artificial reverb and delay times. A delay, or echo, consists of a discrete repetition of the signal after a given period of time. This repetition can be individual or can have sequels, which are frequently (but not necessarily) evenly spaced in time.
Below a delay time of about 30 ms the human ear does not perceive a repetition and instead integrates the dry and delayed sounds, so we will consider the processing we call delay to consist of times greater than this interval. Artificial reverberation is a process that strives to emulate and complement the real phenomenon of room reverberation. The physical effect depends upon the numerous reflections springing from the room's boundaries, creating a series of differently timed echoes that blend into a tail that prolongs the sound. It is typical to distinguish between early reflections (sparse and coloring) and reverberant sound (dense and statistically uncorrelated). It is reverberation that provides the sonic footprint enabling one to identify the sound of a room. One crucial parameter is the reverberation time (RT60), which for historical reasons is given as the time it takes for the tail to decay 60 dB after the original sound has ceased.

In the following section we contextualize the current work, looking also into the reasons and possible applications while highlighting its differences from previous approaches. In Sec. 2 we discuss the subjective test methodology and statistical approach common to both tests. In Sec. 3 we present and interpret the results and further comments by test subjects, leading to some post-hoc analysis in Sec. 4. Some tentative conclusions are drawn in Sec. 5, along with indications for future work.

1 PRIOR WORK AND MOTIVATION

In [3], 60 successful practicing sound engineers were interviewed and no conscious method for regulating artificial delay and reverb parameters was suggested, other than the idea that slower tempos lead to longer reverbs and that there is a stylistic aspect to choice. In the current work we change the approach from producer to listener, looking for patterns in the subjective preference of time-constant settings in delay and reverb.
User preference should be content-, context-, and epoch-dependent, and the test does not propose to seek rigid correlations, merely trends that can lead to further investigation.

We acknowledge that extensive work has been done on listener preference regarding acoustic spaces [4-6] (a particularly good overview in [7]), but highlight the fact that setting artificial time constants in production is a related but very different problem. On the one hand, the acoustic approach always starts from the space (given a specific space, how does the music fit?), while the production approach always starts with the music (given a specific song, how should the parameters be set?). There is also a strong difference between the perception of spatial cues in three-dimensional spaces such as live halls and through a stereophonic setup, where all cues collapse to two points in space [8].

A note as to the terminology in this work: we are looking for relationships between musical tempo and processing time constants as set by users. In terms of delay processing the idea of a link is immediate, as the repetition falls within the sense of musical beat; in terms of reverb it is not, as the vague idea of decaying 60 dB does not imply that a decay time equaling the quarter note would mean the listener has ceased to hear the reverberant field by the time the next beat settles in. This does not preclude us from searching for a relationship, as some authors have indicated there may be one [2, 3]. For brevity's sake we have chosen the term "coupled" to refer to a relationship where the time constant implies a subdivision of the beat, knowing there might not be a consensual term here.

Trend knowledge from the subjective tests presented herein can be useful in the automatic setting of reverb time by assisting mixing systems; in the adaptive parametrization of presets in artificial reverberation plug-ins; or in any adaptive/automatic system that may have access to acoustic cues of the listening environment in order to remix audio content in real time.
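One concrete shape such an adaptive system could take is a simple tempo-to-time mapping. The sketch below is only an illustration of the idea (the function names are ours, not from any cited system): it converts a song's tempo in bpm into the beat-coupled delay times studied later in the tests.

```python
def beat_ms(bpm: float) -> float:
    """Duration of one quarter-note beat in milliseconds."""
    return 60000.0 / bpm

def coupled_delays(bpm: float) -> dict:
    """Beat-coupled delay-time candidates (in ms) for a given tempo."""
    beat = beat_ms(bpm)
    return {
        "eighth": beat / 2,   # half the beat
        "quarter": beat,      # the beat itself
        "half": beat * 2,     # double the beat
    }
```

At 120 bpm this yields 250, 500, and 1000 ms; at 180 bpm the eighth-note comes to roughly 167 ms, the lower end of the range quoted in Sec. 3.1.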
2 METHODOLOGY

The tests were performed by a pool of experienced listeners from three different training centers: the School of Arts of the Catholic University of Oporto, the Communication Science department at the Lusíada University in Lisbon, and the professional audio school Restart, also in Lisbon. Both students and faculty members volunteered to collaborate, and an isolated room in each institution was chosen for the testing apparatus. Due to logistic concerns there was no formal pre-screening session, but the first runs of each test included redundancy testing used as a post-hoc pre-screening method: examples were repeated, user consistency was checked, and subjects who did not perform well on repeated tests were rejected. With this process around 10% of the participating subjects were rejected and an unofficial listening panel began to emerge. An identified problem is that ours is a convenience sample, something that usually afflicts audio testing, and we can only suggest future replications of the test in different settings, so that a meta-analysis can be used to synthesize results.

Test design and procedures followed closely the recommendations in [9]: the individual duration of a test was targeted at under 20 minutes; the subjects were well informed of the test procedure, and exploratory interviews were conducted at the end. Song excerpts were kept under 30 seconds, and the listener was able to listen as often as needed and allowed to answer at any time. Tests were performed with professional-grade circumaural headphones (Sennheiser HD650), previously calibrated with a dummy head. The signal chain was consistent and the listening level stable across the procedures. Works such as [10] have confirmed that relative level setting, for example, differs between headphones and loudspeakers, with no consistent tendency as to the sign or magnitude of the difference.
Our choice was mainly related to the necessity of running the test at three facilities while maintaining consistency and repeatability in further tests. The choice of presentation level is essential not only for repeatable results but also to keep the influence of level out of some perceptual attributes, as defined in [11]. Most recommendations oscillate between levels of 83 and 85 dB(A) SPL [12], and we opted for the lower figure for all tests that did not deal with loudness control themselves. As a music production problem, we strove to reduce ambiguity to a minimum: all songs presented were recorded in acoustically dead rooms (RT60 < 0.15 s), so the amount of temporal processing artificially introduced is much larger than the recorded acoustic footprint. This is similar to the majority of contemporary recording practice [1, 2, 13].

The two critical tests presented here followed a multi-stimulus approach, where subject j is allowed to rate condition k of song i on a quality preference scale ranging from 0 to 100. The different conditions typically vary in exactly one parameter, and subjects are simply asked to rate based on either quality or clarity, with no indication of the aspect they are differentiating, except where noted. All tests were double-blind, with a completely randomized full-factorial design, where song order and condition order varied arbitrarily between subjects (who were informed of this). To minimize end-point effects and subjective use of the scale, it was suggested that respondents screen all conditions for the best one on a first pass and rate it at 100, so that all others could be rated relative to it, never exceeding the scale. Test instructions were pre-screened with colleagues and presented verbally before the test, and a very brief sentence describing the task was featured on the test interface. The instructions were always straightforward and the procedure was well discussed with the subjects.
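The randomized full-factorial presentation described above can be sketched as follows. This is our own illustration (the actual tests were run in Max/MSP): each subject gets a shuffled song order, and each song gets its own shuffled condition order.

```python
import random

def presentation_order(songs, conditions, seed=None):
    """
    Completely randomized presentation for one subject: song order is
    shuffled, and each song gets an independently shuffled condition order.
    Returns a list of (song, [conditions in presentation order]) pairs.
    """
    rng = random.Random(seed)
    order = list(songs)
    rng.shuffle(order)
    # rng.sample with k == len(...) returns a shuffled copy
    return [(s, rng.sample(conditions, k=len(conditions))) for s in order]
```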
Whenever doubts remained, a training run was performed. A mandatory training procedure was not part of the test design, as pilot tests showed that accuracy was stable from the beginning and would have declined after a period, due to listening fatigue, had the procedure been made longer. We find that this is peculiar to the type of comparison proposed. The test interfaces were custom-built in the software Max/MSP, and an example is shown in Fig. 1. The question was posed as "Please evaluate according to the quality of the mix on a scale of 0 to 100."

Fig. 1. Multi-stimulus test interface for the second test. Each test has a label box (removed here) reminding the subject of the proposed task.

Table 1. Overall characteristics of the two multi-stimulus tests.

                                        Reverb    Delay
Number of conditions                    8         6
Number of subjects
Number of songs                         6         6
Ā_k (condition)
Ā_j (subject)
Ā_i (song)
Subject non-randomness (Friedman)       86%       100%
Mean Spearman ρ
Mean answer time (min:s)                18:29     17:11
Mean answer time per song (min:s)       03:05     02:52
Mean subject age (yrs)
Mean subject experience (yrs)
Mean identification difficulty
Mean judgment difficulty

The evaluation of condition k of song i by subject j¹ typically leads to a multivariate matrix that can be analyzed on a per-song basis, but also on a per-subject basis, or, if song differences prove to be irrelevant, the matrix collapses to two dimensions by bundling every observation of the same condition together. An overview of characteristics for each test is presented in Table 1. The first three rows are self-explanatory. Six songs enabled us to keep the total test durations under the estimated 20 minutes for most subjects. The three values Ā_k, Ā_j, and Ā_i are mean range indicators. Let the range of the confidence interval for condition k, integrating all songs and subjects, be given by

    A_k = 2 t s_k / √(I·J),    (1)

with s_k the standard deviation. The mean range per condition is then simply

    Ā_k = (1/K) Σ_{k=1}^{K} A_k,    (2)

something that can easily be extended to i and j, giving some idea of how wide the confidence intervals typically are within each independent variable. Ā_k, Ā_j, and Ā_i show how consistently conditions were evaluated by subject/song pairs, how closely subjects agreed between themselves for song/condition pairs, and how close songs were to each other in subject/condition pairs, respectively. We had designed our sampling strategy for an A_k of around ±5, given 25 subjects, which is roughly achieved.

¹ We shall consider K conditions, I songs, and J subjects in total.
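As a sketch of Eqs. (1) and (2) (our own illustration, not the authors' code), the mean range indicator can be computed as follows, substituting the large-sample value 1.96 for the exact t critical value:

```python
import statistics

def mean_ci_range(x):
    """
    Mean 95% confidence-interval range over conditions, per Eqs. (1)-(2).
    x is a nested list indexed x[k][i][j]: K conditions, I songs, J subjects.
    Uses z = 1.96 as a large-sample stand-in for the t critical value.
    """
    ranges = []
    for cond in x:                                     # one slice per condition k
        flat = [v for song in cond for v in song]      # pool all I*J ratings
        s = statistics.stdev(flat)                     # sample std dev s_k
        ranges.append(2 * 1.96 * s / len(flat) ** 0.5) # A_k
    return sum(ranges) / len(ranges)                   # mean over k
```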
To test for non-randomness we used the Friedman test [14]. This is a rank test where we first order each subject's rating of each condition on a per-song basis, turning {x_jk}_{J×K} into {r_jk}_{J×K}. We then calculate the rank sum for each condition:

    R_k = Σ_{j=1}^{J} r_jk,   k = 1, 2, ..., K.

For cases where there are no equal ranks (which happened to account for all our cases, not by design but by serendipity), we can simplify the test statistic to

    Q = [12 / (J K (K + 1))] Σ_{k=1}^{K} R_k² − 3 J (K + 1).    (3)

As our values for K and J are large enough, this can be approximated by a chi-squared distribution, with p-value given by P(χ²_{K−1} ≥ Q). For p-values below the typical significance level of 0.05 we reject the null hypothesis H₀: subjects' judgments are arbitrarily attributed.

We have also followed the suggestion in [15] to check the correlation of each subject with the average subject and red-flag low scores. This has been done with Spearman's rho, which, in the case of no rank ties, is given by

    ρ = 1 − 6 Σ_{j=1}^{n} d_j² / [n (n² − 1)],    (4)

where d_j is the rank difference between the j-th subject and what we call the typical independent (TI) subject.²

In Table 1, subject non-randomness and average Spearman's ρ follow the discussions above and indicate how reliable the subjects' judgments were. For each test we performed I + 1 Friedman tests, one for the judgments of each song and one for the overall mean judgments across songs. The percentage indicated is how many tests rejected the null hypothesis that evaluations were random. The mean Spearman ρ is the average of correlations between each subject and the TI subject. The last two rows of Table 1 pertain to subjects being asked how easy it was to identify the differentiation parameter in the test and how easy it was to judge those differences. The mean score for those questions is presented, 0 being easiest and 100 hardest.
² The typical independent subject is calculated by averaging the observations across subjects, excluding the subject being analyzed, prior to ranking. This preserves the independence of the two sets of ranks being compared.
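Eqs. (3) and (4) are straightforward to implement. The following sketch is our own illustration, assuming tie-free ratings as in the text:

```python
def friedman_q(scores):
    """
    Friedman test statistic, Eq. (3). scores[j][k] is the rating of
    condition k by subject j; ratings are assumed tie-free per subject.
    """
    J = len(scores)
    K = len(scores[0])
    rank_sums = [0] * K
    for row in scores:
        # rank conditions within this subject, 1 = lowest rating
        order = sorted(range(K), key=lambda k: row[k])
        for rank, k in enumerate(order, start=1):
            rank_sums[k] += rank
    return 12.0 / (J * K * (K + 1)) * sum(r * r for r in rank_sums) - 3 * J * (K + 1)

def spearman_rho(a, b):
    """Spearman correlation between two tie-free score lists, Eq. (4)."""
    n = len(a)
    ra = {v: i for i, v in enumerate(sorted(a), start=1)}  # value -> rank
    rb = {v: i for i, v in enumerate(sorted(b), start=1)}
    d2 = sum((ra[x] - rb[y]) ** 2 for x, y in zip(a, b))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

With perfectly concordant subjects Q reaches its maximum J(K − 1); with identical rankings spearman_rho returns 1 and with reversed rankings −1.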

Table 2. Characteristics of each song for both tests presented herein. The tempo determines the durations of the several subdivisions (all times in ms).

Song   bpm   beat (β)   /2 (γ)   dbl (δ)   φ      /5 (ε)   π
1      67    896        448      1791      1449   716      1407
2      84    714        357      1429      1156   571      1122
3      105   571        286      1143      925    457      898
4      120   500        250      1000      809    400      785
5      143   420        210      839       679    336      659
6      180   333        167      667       539    267      524

3 TEST CONDITIONS AND RESULTS

3.1 Delay Time Preferences

For the setting of delay times it is customary to lock precisely to tempo, to the extent that many mix engineers use delay charts or calculators [1, 2]. It is interesting to confirm this connection with blind subjective evaluation, and for this we used six songs which, ordered by tempo, run at 67, 84, 105, 120, 143, and 180 bpm. We compared the following conditions:

Condition α: completely dry, unprocessed mix of all tracks, performed by a mixing engineer.
Condition β: as in α, but the vocal is sent through a delay unit set to a quarter-note, with 33% feedback and an 80/20 dry/wet level. These two parameters are kept constant through all remaining examples.
Condition γ: as in α, but the vocal is sent through a delay unit set to an eighth-note.
Condition δ: as in α, but the vocal is sent through a delay unit set to a half-note.
Condition π: the vocal delay time is now uncoupled from tempo and set to a quarter-note multiplied by π/2.
Condition ε: the vocal delay time is now uncoupled from tempo and set to four-fifths of a quarter-note.

The values are detailed in Table 2, which includes additional values that were used for the second test below. It is also important to note that the songs were of different genres: song 1 a slow pop ballad, song 2 groove rock from the 70s, song 3 smooth jazz, song 4 funk/rock crossover, song 5 uptempo classic rock, and song 6 electronic synthpop.

Fig. 2. Mean and confidence intervals for the evaluation of each condition, considering inter-song differences to be irrelevant. Condition order is changed so that left-to-right corresponds to decreasing tempi.

An overview of the overall results is given in Fig.
2, where conditions are ordered by increasing delay time. There is a clear separation between times that are coupled and times that are not, and the former are clearly preferred. It seems that no delay on the vocals is subjectively better than a delay that is off-subdivision. Considering only beat-coupled settings, there is also an evident preference for faster subdivisions. Table 2 can be consulted for the absolute meaning of the preferred eighth-note subdivision: it lies somewhere between 167 and 448 ms, slower than typical slapback echoes, suggesting we could even have tried faster subdivisions.

How relevant is song content in this scenario? Fig. 3 shows the inter-song relationships for each condition. The slowest song scores highly in all conditions where delay is present, but it is considered the worst when left dry. Condition γ is very homogeneous between songs, and it is always rated the best condition, except for song 1, where timing to the beat (β) is rated higher.

Fig. 3. Mean and confidence intervals for the evaluation of each condition, divided by songs.

Subjects are fairly balanced in their replies, as can be seen in Fig. 4. There are no clusters of subjects, and an analysis of the correlation between subjects and the average subject shows that only subjects 7 and 19 deviate from the norm, simply because subject 7 prefers the half-note (δ) to the eighth-note (γ) and subject 19 prefers the quarter-note (β). The results on the whole seem rather clear and robust; the Friedman test scores given in Table 3 indicate that evaluations are far from random. Most subjects found the test simple and the differentiation parameter readily understandable. Two subjects went

so far as to state that they were aware they had chosen first the short delay, followed by the dry version, followed by the beat version. Subject 6 mentioned feeling that it depends on genre: for rock the choice would be short delays, while for the jazzy theme a quarter-note delay sounded well. Subjects 3 and 4, the two most experienced subjects, also reported that it was crucial how the delay time fit the phrasing of the melodic line, but for the evaluated pieces their judgments were fairly standard and consistent between songs. Subject 9 stated that echoes are annoying in being reminiscent of dated production values; this type of observation could result in a marked bi-polarization of opinions, but it did not, and subject 9 is the only case where this stance is reflected in the results.

Fig. 4. Bar charts of the mean evaluation of each condition by each subject, averaged over the six songs.

Table 3. Results of the Friedman test for randomness of evaluations.

                    Reverb                 Delay
           Test Stat   p-value    Test Stat   p-value
Song 1
Song 2
Song 3
Song 4
Song 5
Song 6
All data   39.2        <0.001                 <0.001

3.2 Reverberation Time Preferences

Again, we presented subjects with eight different conditions timing reverb decay to tempo, over the same six songs of different tempos. Reverb decay time cannot be disassociated from reverb level, as the ability to hear the tail very much depends on the loudness of the reverberant field in short musical gaps. We ran pilot studies to get a rough idea of preference so that we could lock one parameter at a comfortable level while varying the other. The reverb loudness was thus set 9 Loudness Units (LU) lower than the direct sound, and it was applied equally across all elements except for kick drum, bass guitar, and overheads. The reverb unit was a TC Electronic 4000 with a hall algorithm, no pre-delay, and all other settings left as in preset 1.
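The decay times for the eight conditions of this test follow directly from each song's tempo. As an illustration (our own sketch, not the authors' tooling), they can be generated as:

```python
import math

GOLDEN = (1 + math.sqrt(5)) / 2  # golden ratio, used in condition phi

def decay_conditions(bpm: float) -> dict:
    """Decay times in seconds for the eight reverb conditions at a tempo."""
    beat = 60.0 / bpm  # quarter-note duration in seconds
    return {
        "alpha": 0.0,             # dry reference
        "gamma": beat / 2,        # eighth-note (half the beat)
        "beta": beat,             # quarter-note (the beat)
        "delta": beat * 2,        # half-note (double the beat)
        "phi": beat * GOLDEN,     # golden-ratio multiple of the beat
        "epsilon": beat * 4 / 5,  # four fifths of the beat (uncoupled)
        "pi": beat * math.pi / 2, # pi/2 multiple (uncoupled)
        "zeta": 2.0,              # fixed 2 s concert-hall reference
    }
```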
One should refer back to Table 2, which indicates song characteristics and decay times for each of the conditions and each of the songs. The conditions used in this test instance are:

Condition α: all tracks dry.
Condition β: specified tracks sent to the unit with the decay time set to the beat (quarter-note) of the musical tempo.
Condition γ: specified tracks sent to the unit with the decay time set to half the beat (eighth-note) of the musical tempo.
Condition δ: specified tracks sent to the unit with the decay time set to double the beat (half-note) of the musical tempo.
Condition φ: specified tracks sent to the unit with the decay time set to a quarter-note multiplied by the golden ratio. This was suggested by a renowned engineer (in [3]) as a personal approach and was one of the most quantitative responses we had, so we wanted to test for it. We are, however, unsure whether we should consider this a coupling to tempo or not.
Condition ε: decay time uncoupled from musical tempo, obtained by subdividing each beat into five parts and taking the time of four of them.
Condition π: decay time uncoupled from musical tempo, obtained by multiplying the beat by π/2, clearly an irrational measure. This is very close to φ in practice.
Condition ζ: decay time set to 2 seconds. This is typical of the best music halls in the world for classical reproduction [4]: the Amsterdam Concertgebouw, the Vienna Großer Musikvereinssaal, and Boston Symphony Hall. We are aware it is considered good for orchestral music but not for the pop/rock styles we are evaluating.

Overall results are shown in Fig. 5, illustrating that, other than subjects clearly disliking the dry option (α), and

moderately disliking the long two-second option (ζ), no substantial differences emerge.

Fig. 5. Mean and confidence intervals for the evaluation of each condition, assuming no inter-song difference. The conditions are ordered by increasing decay time.

Condition π interrupts a very soft bell-shaped curve, which could indicate that there is an optimal tempo-to-decay relationship unattached to beat-coupling. Furthermore, options β, γ, and δ, the coupled ones, show no clear advantage over the uncoupled ε and π. The figure shows a smooth arching trend, marred only by condition π. Even considering that the homogeneity of the means and confidence intervals is too high to make bold statements, it is tempting to suggest that the optimal decay time lies between the quarter- and the half-note, but the lower status of the π condition means some degree of coupling to the beat is preferred. It is tempting to imagine that inter-song and inter-subject differences could produce this blurred overall picture, but our analysis shows otherwise: both subjects and songs show the same indistinct behavior.

One interesting conclusion that emerges from Fig. 6 is that the evaluation of the fixed condition ζ decreases monotonically with increasing tempo. This is a clear reinforcement of the initial assertion that slow songs allow for longer reverbs. Another interesting observation from the figure is that the evaluation of the dry version's quality is markedly higher for songs 1, 3, and 5 than for songs 2, 4, and 6. Perhaps surprisingly, this is not related to tempo; but looking at Table 2, a pattern emerges: the songs that work well dry are those with more syncopation, as opposed to the three straight, strong-pulsed songs. The Friedman test report in Table 3 also raises a curious question: why is song 1 the only one rated arbitrarily, particularly considering how low the remaining p-values were?
Looking at the raw subject data, we confirm that there is clear disagreement between subjects on how to evaluate the different conditions in the case of song 1. For example, subject 1 only seems to care for the quarter-note decay time, whereas subject 20 has a strong preference for an uncoupled decay and subject 7 for a half-note decay. These are polarized cases, but most subjects are more blurred in their judgments.

Fig. 6. Mean and confidence intervals for the evaluation of each condition, separated by song.

Table 4. Correlation between each subject's evaluation and the average subject's evaluation of the conditions for song 1.

#    ρ        #    ρ        #    ρ        #    ρ

This difference is emphasized in Table 4, showing the Spearman rank correlation between each subject's judgments and the TI subject for song 1. This dramatic variation in preference is not seen in the other songs. The open interview at the end of the test revealed confusion about the parameter being differentiated. Even though 15 subjects were right that it was reverberation, they were not confident about which aspect of reverberation it was. The remaining 5 subjects thought timbre and equalization, reinforcing the idea that single-parameter testing with complex material can lead to much confounding. Most subjects stated that there was one condition radically different from the rest; from the results, we would say it was the dry condition α.

4 POST HOC ANALYSIS

For the reverberation case there might be better criteria than tempo, and we followed the tests with an exploration of the correlation of our results with a large number of audio descriptors. We found interesting links to two features: the signal's autocorrelation function (ACF) and spectral flux (feature calculations and definitions in [3, p. 107]). The first feature's relevance may relate to Ando's work [6], where the author finds that, in good concert halls, the preferred

delay time of a single reflection could be estimated from the ACF of the signal, the delay being determined from r_xx(τ₁) = 0.1 r_xx(0).³ The preferred reverberation time is then RT60 = 23 τ₁.

Table 5. Other potential explaining factors for decay time preference. T_sub is the prediction according to [6], flux the spectral flux, and BPM our original explanation proposal, the beats per minute.

#        Pref. RT60    T_sub = 23 (τ_e)_min    flux (×10⁶)    BPM
Song 1
Song 2
Song 3
Song 4
Song 5
Song 6

Our test design did not plan for this sort of conclusion, but we can measure both features a posteriori and see whether user preference is better justified by them. We are thus assuming that the preferred decay time for each song is the value whose condition scored highest, regardless of whether it was a significant or an overlapping maximum. These values are shown in the second column of Table 5. The next column shows Ando's prediction of preferred reverberation time; as can be seen, this is completely exaggerated by calculating the auto-correlation on wet signals. The third column shows the spectral flux and the fourth our original explaining factor. If we calculate the correlation between subjective preference and each of these explanations, we get r = for the auto-correlation, r = for the spectral flux, and r = for the BPM. A logarithmic transformation of the flux or BPM improves r ( and , respectively), but it still looks like auto-correlation could be the best explanation for this sort of choice.

5 CONCLUSIONS

Two similar tests were presented, evaluating user preference for reverb and delay time parameters and their relation to song tempo. In terms of reverb time, the proposed relationship does not hold true.
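Returning to the post-hoc analysis, Ando's prediction in Table 5 can be approximated a posteriori from the signal alone. The sketch below is our own simplification of the footnoted definition (it takes the first lag where the ACF magnitude drops to a tenth, over the whole excerpt, instead of the envelope of a running 2 s ACF), so it is not the authors' exact feature extractor:

```python
def tau_e(signal, sr):
    """
    Crude effective-duration estimate of the normalized autocorrelation
    function: the first lag (in seconds) at which |ACF| falls to one
    tenth of its zero-lag value.
    """
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    r0 = sum(v * v for v in x)  # zero-lag autocorrelation
    for lag in range(1, n):
        r = sum(x[t] * x[t + lag] for t in range(n - lag)) / r0
        if abs(r) <= 0.1:
            return lag / sr
    return n / sr  # never decayed within the excerpt

def ando_rt60(signal, sr):
    """Ando's preferred reverberation time, RT60 = 23 * tau_e [6]."""
    return 23.0 * tau_e(signal, sr)
```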
Decay time is still related to tempo, as both the interview process and the subjective test showed a negative rank correlation between tempo and RT60, and the homogeneity of results across all conditions reinforces the fact that the conditions were themselves tied to the tempo. Unlike reverb time, the user test on delay time showed stronger results, and there are two quite definite conclusions we can draw from it: coupled delay times work better perceptually than uncoupled ones when attention is drawn to them (as in placing them on a vocal), and faster delays are preferred over longer ones under the same conditions.

Post-test interviews with subjects helped us understand that the setting of reverb time is seen as too multi-dimensional to be correlated with a single factor (namely song tempo), as it was hinted that stylistic concerns and song genre have a bearing on users' choices. However, results still seem to indicate that even if song tempo is not the main correlate, there may be other low-level factors that strongly explain this variable. A new test design is needed to bring those to light, particularly because the decision to offer songs of different genres, production values, and instrumentation may be sensible in terms of mimicking real-world situations, but was seen to bring too much confusion into the test. Further work is also needed in analyzing how several parameters interact, especially in what relates reverb time to reverb level. Here a more interactive method-of-adjustment test might prove more adequate for explaining the underlying factors. While aspects related to delay time proved conclusive, much more work is needed on reverb time before establishing a definitive model.

³ The time it takes for the envelope of the normalized autocorrelation function to decay to one tenth of its value at zero. The actual value used is the minimum of the running ACF, 2T = 2 s, with an interval of 100 ms.
We have provided enough information to ground explorative approaches to automating the time parameter of temporal processors in an intelligent audio mixing context, and an alternate and foreseeable route for further work would be to try an initial implementation where the mapping [16, 17] follows the broad rules hinted at herein, and to evaluate its success.

6 ACKNOWLEDGMENT

Author P. D. Pestana was sponsored by national funds through the Fundação para a Ciência e a Tecnologia, Portugal, in projects PEst-OE/EAT/UI0622/2014 and PEst-OE/MAT/UI2006/2014. Author J. D. Reiss is supported by EPSRC Platform Grant: Digital Music, EP/K.

REFERENCES

[1] B. Owsinski, The Mixing Engineer's Handbook, 1st ed. (Vallejo, CA: Mix Books, 1999).
[2] R. Izhaki, Mixing Audio: Concepts, Practices and Tools, 1st ed. (Oxford: Elsevier Science & Technology, 2008).
[3] P. D. Pestana, Automatic Mixing Systems Using Adaptive Digital Audio Effects, Ph.D. thesis, Universidade Católica Portuguesa (2013).
[4] L. Beranek, Concert Halls and Opera Houses: Music, Acoustics, and Architecture (New York: Springer, 2004).
[5] M. Long, Architectural Acoustics, 1st ed. (Oxford: Elsevier Science & Technology, 2006).
[6] Y. Ando, Concert Hall Acoustics, 1st ed. (Berlin: Springer Verlag, 1990).
[7] Y. Ando, "Concert Hall Acoustics Based on Subjective Preference Theory," in Springer Handbook of Acoustics, T. D. Rossing, Ed. (New York: Springer, 2007), ch. 10.
[8] D. Griesinger, "The Theory and Practice of Perceptual Modeling: How to Use Electronic Reverberation

to Add Depth and Envelopment Without Reducing Clarity," Proceedings of the Tonmeister Conference, Hannover (2000).
[9] S. Bech and N. Zacharov, Perceptual Audio Evaluation: Theory, Method and Application (Chichester: John Wiley & Sons, 2006).
[10] R. King, B. Leonard, and G. Sikora, "The Effects of Monitoring Systems on Balance Preference: A Comparative Study of Mixing on Headphones Versus Loudspeakers," presented at the 131st Convention of the Audio Engineering Society (2011 Oct.), convention paper.
[11] A. Gabrielsson and H. Sjögren, "Perceived Sound Quality of Sound-Reproduction Systems," J. Acoust. Soc. Amer., vol. 65, no. 4 (1979).
[12] SMPTE, RP 200:2012, "Relative and Absolute Sound Pressure Levels for Motion-Picture Multichannel Sound Systems Applicable for Analog Photographic Film Audio, Digital Photographic Film Audio and D-Cinema," Tech. Rep. (2012).
[13] P. Newell, Recording Studio Design, 2nd ed. (Oxford: Focal Press, 2008).
[14] F. Mosteller and R. E. Rourke, Sturdy Statistics: Nonparametrics and Order Statistics, 1st ed. (Boston: Addison Wesley, 1973).
[15] T. Sporer, J. Liebetrau, and S. Schneider, "Statistics of MUSHRA Revisited," presented at the 127th Convention of the Audio Engineering Society (2009 Oct.), convention paper.
[16] J. D. Reiss and E. P. Gonzalez, "Automatic Mixing," in DAFX: Digital Audio Effects, 2nd ed., U. Zölzer, Ed. (Chichester: John Wiley & Sons, 2011).
[17] A. T. Sabin, Z. Rafii, and B. Pardo, "Weighted-Function-Based Rapid Mapping of Descriptors to Audio Processing Parameters," J. Audio Eng. Soc., vol. 59 (2011 Jun.).

THE AUTHORS

Pedro Duarte Pestana is currently teaching at the School of Arts of the Catholic University of Portugal (UCP), where he is also the director of the Research Center for Science and Technology of the Arts (CITAR). He received his Ph.D. in computer music from UCP, specializing in adaptive digital audio effects for automatic mixing.
He has investigated topics pertaining to perception and cognition in audio, machine learning systems, acoustics, interactive sound design, and digital audio effects. He has been an active member of the AES for over a decade and won the best paper award at the 134th AES Convention in Rome.

Dr. Josh Reiss is a Senior Lecturer with the Centre for Digital Music at Queen Mary University of London. He received his Ph.D. in physics from Georgia Tech, specializing in the analysis of nonlinear systems. He has published over 150 scientific papers, serves on several steering and technical committees, and is a co-founder of the company MixGenius. He has investigated music retrieval systems, time-scaling and pitch-shifting techniques, loudspeaker design, automatic mixing, and digital audio effects, among other topics. His primary research focus is the use of state-of-the-art signal processing techniques for professional sound engineering.

Álvaro Barbosa is an Associate Professor and Dean of the Faculty of Creative Industries at the University of Saint Joseph (USJ) in Macau SAR, China. He holds a Ph.D. in computer science and digital communication from UPF in Spain, and his academic activity focuses mainly on design for audio and music technology. His recent R&D work on experimental network music and interactive sound-design systems was largely fostered in 2010 during a post-doctoral research position at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). His current projects place special emphasis on sound and music design pieces, design thinking, and systematic creativity.

J. Audio Eng. Soc., Vol. 65, No. 1/2, 2017 January/February


More information

ANALYSIS of MUSIC PERFORMED IN DIFFERENT ACOUSTIC SETTINGS in STAVANGER CONCERT HOUSE

ANALYSIS of MUSIC PERFORMED IN DIFFERENT ACOUSTIC SETTINGS in STAVANGER CONCERT HOUSE ANALYSIS of MUSIC PERFORMED IN DIFFERENT ACOUSTIC SETTINGS in STAVANGER CONCERT HOUSE Tor Halmrast Statsbygg 1.ammanuensis UiO/Musikkvitenskap NAS 2016 SAME MUSIC PERFORMED IN DIFFERENT ACOUSTIC SETTINGS:

More information

Precision testing methods of Event Timer A032-ET

Precision testing methods of Event Timer A032-ET Precision testing methods of Event Timer A032-ET Event Timer A032-ET provides extreme precision. Therefore exact determination of its characteristics in commonly accepted way is impossible or, at least,

More information

Koester Performance Research Koester Performance Research Heidi Koester, Ph.D. Rich Simpson, Ph.D., ATP

Koester Performance Research Koester Performance Research Heidi Koester, Ph.D. Rich Simpson, Ph.D., ATP Scanning Wizard software for optimizing configuration of switch scanning systems Heidi Koester, Ph.D. hhk@kpronline.com, Ann Arbor, MI www.kpronline.com Rich Simpson, Ph.D., ATP rsimps04@nyit.edu New York

More information

Pitch Perception and Grouping. HST.723 Neural Coding and Perception of Sound

Pitch Perception and Grouping. HST.723 Neural Coding and Perception of Sound Pitch Perception and Grouping HST.723 Neural Coding and Perception of Sound Pitch Perception. I. Pure Tones The pitch of a pure tone is strongly related to the tone s frequency, although there are small

More information

PERCEPTUAL QUALITY OF H.264/AVC DEBLOCKING FILTER

PERCEPTUAL QUALITY OF H.264/AVC DEBLOCKING FILTER PERCEPTUAL QUALITY OF H./AVC DEBLOCKING FILTER Y. Zhong, I. Richardson, A. Miller and Y. Zhao School of Enginnering, The Robert Gordon University, Schoolhill, Aberdeen, AB1 1FR, UK Phone: + 1, Fax: + 1,

More information

1. BACKGROUND AND AIMS

1. BACKGROUND AND AIMS THE EFFECT OF TEMPO ON PERCEIVED EMOTION Stefanie Acevedo, Christopher Lettie, Greta Parnes, Andrew Schartmann Yale University, Cognition of Musical Rhythm, Virtual Lab 1. BACKGROUND AND AIMS 1.1 Introduction

More information

Music Curriculum Kindergarten

Music Curriculum Kindergarten Music Curriculum Kindergarten Wisconsin Model Standards for Music A: Singing Echo short melodic patterns appropriate to grade level Sing kindergarten repertoire with appropriate posture and breathing Maintain

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 4aPPb: Binaural Hearing

More information

CS229 Project Report Polyphonic Piano Transcription

CS229 Project Report Polyphonic Piano Transcription CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project

More information

Chord Classification of an Audio Signal using Artificial Neural Network

Chord Classification of an Audio Signal using Artificial Neural Network Chord Classification of an Audio Signal using Artificial Neural Network Ronesh Shrestha Student, Department of Electrical and Electronic Engineering, Kathmandu University, Dhulikhel, Nepal ---------------------------------------------------------------------***---------------------------------------------------------------------

More information

Before I proceed with the specifics of each etude, I would like to give you some general suggestions to help prepare you for your audition.

Before I proceed with the specifics of each etude, I would like to give you some general suggestions to help prepare you for your audition. TMEA ALL-STATE TRYOUT MUSIC BE SURE TO BRING THE FOLLOWING: 1. Copies of music with numbered measures 2. Copy of written out master class 1. Hello, My name is Dr. David Shea, professor of clarinet at Texas

More information

MELODIC AND RHYTHMIC CONTRASTS IN EMOTIONAL SPEECH AND MUSIC

MELODIC AND RHYTHMIC CONTRASTS IN EMOTIONAL SPEECH AND MUSIC MELODIC AND RHYTHMIC CONTRASTS IN EMOTIONAL SPEECH AND MUSIC Lena Quinto, William Forde Thompson, Felicity Louise Keating Psychology, Macquarie University, Australia lena.quinto@mq.edu.au Abstract Many

More information

Gyorgi Ligeti. Chamber Concerto, Movement III (1970) Glen Halls All Rights Reserved

Gyorgi Ligeti. Chamber Concerto, Movement III (1970) Glen Halls All Rights Reserved Gyorgi Ligeti. Chamber Concerto, Movement III (1970) Glen Halls All Rights Reserved Ligeti once said, " In working out a notational compositional structure the decisive factor is the extent to which it

More information

PRESCOTT UNIFIED SCHOOL DISTRICT District Instructional Guide January 2016

PRESCOTT UNIFIED SCHOOL DISTRICT District Instructional Guide January 2016 Grade Level: 9 12 Subject: Jazz Ensemble Time: School Year as listed Core Text: Time Unit/Topic Standards Assessments 1st Quarter Arrange a melody Creating #2A Select and develop arrangements, sections,

More information

Behavioral and neural identification of birdsong under several masking conditions

Behavioral and neural identification of birdsong under several masking conditions Behavioral and neural identification of birdsong under several masking conditions Barbara G. Shinn-Cunningham 1, Virginia Best 1, Micheal L. Dent 2, Frederick J. Gallun 1, Elizabeth M. McClaine 2, Rajiv

More information