FPFV-285/585 PRODUCTION SOUND Fall 2018 CRITICAL LISTENING Assignment


PREPARATION

Track 1) Headphone check -- Left, Right, Left, Right.
Track 2) A music excerpt for setting a comfortable listening level.

Read the explanatory material before listening to each track.

FREQUENCY

Frequency is how fast a sound-producing body (a string, a reed, an oscillator, etc.) vibrates. The more vibrations (or cycles) per second, the higher the frequency (in musical terms, the pitch). Frequency is measured in Hertz, abbreviated Hz, after the 19th-century German physicist whose pioneering research into wave propagation led to the invention of radio. (The original unit of frequency, cycles per second (cps), is no longer used.)

[Graphic: one cycle of each of two very low-frequency waves; for purposes of illustration, we'll say that each of these tones is one second long.]

If we played a recording of these two tones, no one would be able to hear them, even with the most expensive high-end sound system. This is because our ears cannot respond to frequencies this low. The frequency range of human hearing is 20 Hz to 20,000 Hz (aka 20 kilohertz, or 20 kHz). Newborns can hear up to 20 kHz, but as we age the highest frequency we can hear drops. Welcome to life...

A low note on a piano vibrates slowly and is low frequency; a high note on a piano vibrates rapidly and is high frequency. Distant thunder is low frequency; a buzzing mosquito is high frequency. The bass control on your stereo boosts or cuts low frequencies, the treble control boosts or cuts high frequencies. In between are the mid-range frequencies. The intelligibility of the human voice lies primarily in the mid-range. There is no exact boundary between low frequency and mid-range or between mid-range and high frequency.

[Chart: low / mid / high ranges on a frequency scale in Hertz (Hz): 20, 31, 62, 125, 250, 500, 1k, 2k, 4k, 8k, 16k, 20k]
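"Cycles per second" can be made concrete with a few lines of code. The sketch below (Python, which the handout itself doesn't use; the sample rate and function names are my own) generates a pure tone and then recovers its frequency from the samples by counting sign changes -- two per cycle:

```python
import math

def sine_tone(freq_hz, duration_s=1.0, sample_rate=44100):
    """One channel of a pure tone, as a list of samples in [-1, 1]."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

# A 440 Hz tone vibrates 440 times per second. Each cycle crosses zero
# twice, so counting sign changes recovers the frequency.
tone = sine_tone(440.0)
crossings = sum(1 for a, b in zip(tone, tone[1:]) if a * b < 0)
print(crossings // 2)  # roughly 440 cycles in one second
```

The same function at 10 Hz would produce a perfectly valid list of samples -- it's our ears, not the math, that can't respond below 20 Hz.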

It's very useful to split the mid-range into lo-mid, mid, and hi-mid; again, these are only approximate boundaries.

[Chart: low / lo-mid / mid / hi-mid / high ranges on the same frequency scale in Hertz (Hz), 20 to 20k]

Tracks 3-11 are electronically generated tones starting at 55 Hz, then successively doubling in frequency. Most of these tracks also have examples of instruments playing a note at the same frequency; some have graphics (not totally precise, but close enough to make the point...). The instruments sound different than the electronic tones and each other because instruments generate not only the fundamental pitch, but also additional mathematically-related pitches (overtones) of various strengths. The relationship of the overtones to the fundamental, which you can see in the graphics, is what gives each instrument its particular and easily identifiable sound.

Track 3) Tone 55 Hz, fretless bass, pipe organ.
Track 4) Tone 110 Hz, tuba, fretless bass. [Graphic: fundamental and overtones]
Track 5) Tone 220 Hz, piano, guitar. [Graphic: fundamental and overtones]

Track 6) Tone 440 Hz, clarinet, trumpet. [Graphic: fundamental and overtones]
Track 7) Tone 880 Hz, electric piano, xylophone.
Track 8) Tone 1760 Hz, electric guitar, violin.
Track 9) Tone 3520 Hz, piano, piccolo.
Track 10) Tone 7040 Hz, piano, synthesized bell.
Track 11) Tone 14080 Hz. You may not be able to hear it, because of the lack of high-frequency response of either your ears or the playback equipment...
Track 12) A tone rising in frequency from 20 Hz to 20 kHz. As the frequency rises there are more vibrations per second, and they get closer together.

[Graphic: a short segment, at reduced scale, of the rising tone over time. At full scale it would have to be a quarter-mile long to show the whole 20-20k sweep -- one cycle of a 20 Hz wave is 56 feet long, one cycle of a 20 kHz wave is 5/8 inch long.]

[Chart: the ranges of the fundamental pitches of a variety of musical instruments.]
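The overtone idea behind Tracks 3-11 lends itself to a small synthesis sketch. The Python below is illustrative only -- the overtone amplitude lists are invented for the example, not measured from any real instrument:

```python
import math

def complex_tone(f0, overtone_amps, duration_s=0.25, sample_rate=44100):
    """A fundamental f0 plus overtones at integer multiples of f0.

    overtone_amps[0] scales the fundamental (1 x f0),
    overtone_amps[1] the overtone at 2 x f0, and so on.
    """
    n = int(duration_s * sample_rate)
    samples = [
        sum(a * math.sin(2 * math.pi * f0 * (k + 1) * t / sample_rate)
            for k, a in enumerate(overtone_amps))
        for t in range(n)
    ]
    peak = max(abs(s) for s in samples)
    return [s / peak for s in samples]  # normalize so the tone never clips

# The same 220 Hz note with two different overtone recipes sounds like
# two different "instruments," even though the fundamental is identical:
brighter = complex_tone(220.0, [1.0, 0.7, 0.5, 0.4, 0.3])  # strong overtones
mellower = complex_tone(220.0, [1.0, 0.2, 0.05])           # weak overtones
```

Feeding each list to a sound card would make the point audibly: same pitch, different tone color.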

AMPLITUDE

Amplitude is how tall a wave is -- the higher the amplitude, the louder it sounds.

Track 13) 1 kHz tone increasing in amplitude. We perceive this as getting louder.

As a descriptive term, amplitude applies not only to electrical signals such as the tone above but to real-life, real-time events causing the air to vibrate -- musical instruments, a ball as it bounces, a dog's vocal cords as it barks, etc.

Track 14) A recording of a snare drum being hit increasingly harder. The sound waves generated in the air are increasing in amplitude, and what we hear changes as the amplitude changes. With each successively harder hit, the overtones change as well. A note on the piano played softly has subtly different overtones than the same note played strongly. An amplified whisper will never be a believable substitute for a yell.

Amplitude is measured in decibels, abbreviated dB. Just as there are different temperature scales (Fahrenheit, Celsius, and Kelvin), all measured in degrees, there are different amplitude scales, all measured in dB. For the moment we'll focus on dB SPL -- sound pressure level in decibels. This is how the intensity of sound moving through air -- and how our ears and brain perceive it as loudness -- is measured.

0 dB SPL is the threshold of hearing -- any softer sound doesn't register in the brain because it gets buried in the body's system noise (the high-frequency noise of the nervous system and / or the low-frequency noise of blood in the circulatory system). 125 dB SPL is the threshold of pain -- so loud it hurts.
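The dB SPL scale can be sketched numerically. One caveat: the 20-micropascal reference pressure below is the standard definition of 0 dB SPL, an assumption I'm supplying rather than something stated in the handout:

```python
import math

P_REF = 20e-6  # pascals; the conventional reference pressure for 0 dB SPL

def spl_db(pressure_pa):
    """Sound pressure level in dB: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

def pressure_ratio(delta_db):
    """How much the sound pressure changes for a level change of delta_db."""
    return 10 ** (delta_db / 20)

print(spl_db(P_REF))                  # 0.0 -- the threshold of hearing
print(round(pressure_ratio(1), 3))    # 1.122 -- +1 dB is about a 12% pressure increase
```

The logarithmic scale is why dB is convenient: every +20 dB is a tenfold increase in pressure, so the enormous range from threshold of hearing to threshold of pain fits in a three-digit number.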

To put this chart into film-making context, 83 dB SPL is considered the proper average level of normal conversational dialogue in a properly calibrated movie theater. 1 dB SPL is defined as the smallest change in loudness that a human can perceive; your mileage may vary...

Track 15) A sustained chord on a pipe organ. The volume is increased by 1 dB then returned to the original volume three times. Can you tell the difference?

[Diagram: seven segments -- 3 sec at the starting volume, then alternating 2-sec segments at +1 dB and the starting volume (three times each), ending with 3 sec at the starting volume.]

Track 16) The same chord starting softer. At the same timings as above, the volume is increased by 2 dB. The final 3-second segment is 12 dB louder than the first segment.

Track 17) A sportscast consisting of an intro and the scores of seven basketball games. After each score, the volume is reduced by 2 dB. When we speak, the amplitude isn't consistent (as it was with the pipe organ) -- we stress some words or syllables and soften others -- so the difference might not be immediately apparent, but it should be obvious that the last segment is significantly lower in volume than the first, specifically by 12 dB.

Track 18) A foley recording of footsteps. There are 24 footsteps, performed and recorded so that they are all basically the same amplitude. However, a fade-out has been applied, so that the last footstep is 24 dB lower than the first. What does this illustrate about the relationship between volume and perceived distance?

Track 19) QUESTION #1 -- multiple choice. Use the answer sheet.
Track 20) QUESTION #2 -- multiple choice. Use the answer sheet.
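Track 18's fade -- 24 footsteps with the last one 24 dB below the first -- works out to roughly 1 dB per step. A minimal sketch of the per-step gains, assuming a linear-in-dB fade (the handout doesn't specify the fade curve used):

```python
import math

def fade_gains(n_steps, total_db):
    """Linear gain for each step of a fade whose last step ends up
    total_db below the first."""
    return [10 ** (-total_db * i / (n_steps - 1) / 20) for i in range(n_steps)]

gains = fade_gains(24, 24.0)              # one gain per footstep
print(gains[0])                           # 1.0 -- first footstep at full level
print(round(20 * math.log10(gains[-1])))  # -24 -- last footstep is 24 dB down
```

Multiplying each recorded footstep by its gain would reproduce the effect: equal performances that nonetheless seem to walk away from us.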

STEREO and MONO

We have two ears, which gives us binaural (stereo) hearing. It takes two microphones to make a true stereo recording, a two-channel amplifier to deliver the electrical signal to speakers or headphones, and two speakers or a pair of headphones to deliver stereo sound to our ears. A stereo audio recording has two channels of signal, called left and right, corresponding to our left and right ears. BUT JUST BECAUSE YOU HEAR SOMETHING IN BOTH EARS IT DOESN'T MEAN THAT IT'S STEREO -- more on this later. First, though:

Track 21) A live concert recording, using two microphones arranged to emulate the way our ears pick up sound. There are things happening from the far left to the far right and everywhere in between in the stereo field. Note the violins on the left, the basses on the right, the oboes mid-way to the right, etc. This is because the violins are picked up primarily by the left mic and very little by the right mic, the basses are picked up primarily by the right mic and very little by the left mic, and the oboes are picked up 2/3 or so by the right mic and 1/3 or so by the left mic.

Track 22) The same recording, but something is different. QUESTION #3: Describe the difference on the answer sheet.

Track 23) Another live recording, this time with the microphones placed close to the instruments, a common practice. While it's less like how we would hear the instruments in the concert, it offers the advantage of being able to adjust the balance between them.

[Diagram: left mic feeding the left channel (left ear), right mic feeding the right channel (right ear)]

As you can see from the screen snapshot of the waveforms, the instrument on the right is playing more notes. What you *can't* see but can only hear is that most of them are at higher pitches (frequencies) than the instrument on the left.

Track 24) Just the left mic. Note that the right instrument can still be heard, but only faintly, due to a) being out of the primary pickup pattern of the left mic and b) being further away from the left mic. This single channel of sound is monaural, aka mono.

Track 25) Just the right mic. The left instrument can be heard faintly. This is also mono.

Track 26) Just the left mic, but in both ears. It's still mono, from a single microphone. Because it's in both ears it sounds as if it's playing in the top of your head (or some people perceive it as between their eyes).

Track 27) Just the right mic, but in both ears, also mono.

Track 28) Both the left and right mics in both ears, once again mono. It's not possible to locate the individual instruments on the stage because the differences between the two mics have been eliminated, and the left and right channels (each comprised of both mics) are identical.

[Diagram: both mics summed together and fed equally to the left channel (left ear) and the right channel (right ear)]

Track 29) The original stereo recording again for comparison.

Track 30) A stereo (two mics) recording of a car passing by. In the background there are birds chirping, some on the left and some on the right. Note that the birds stay in their same locations but the car moves in the stereo field.

Track 31) The same recording, but now in mono. Note the lack of left-right movement and that the birds are now in the center.

Track 32) The mono recording from Track 31, but panned left to right to simulate the car passing. Note that all of the birds (and the rest of the background) move across the stereo field with the car. Not exactly convincing as a true stereo recording...

Track 33) More car fun! A stereo recording of a race car.

Track 34) The same recording of the race car, but something is different. QUESTION #4: Describe / explain the difference on the answer sheet.

Track 35) The same recording again, but different in yet another way than Track 34. QUESTION #5: Describe / explain the difference on the answer sheet. (If you had trouble with QUESTION #3, this would be a good time to go back and listen closely again to Tracks 21 and 22...)
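The mono fold-down of Track 31 and the pan of Track 32 can both be sketched in a few lines. This is a generic average fold-down and a textbook constant-power pan, not the specific tools used to make the tracks:

```python
import math

def to_mono(left, right):
    """Fold a stereo pair down to one channel by averaging the two --
    any left/right differences are eliminated."""
    return [(l + r) / 2 for l, r in zip(left, right)]

def pan(mono, position):
    """Place a mono signal in the stereo field.

    position runs from -1.0 (hard left) to +1.0 (hard right); the
    cosine/sine gains keep the perceived level roughly constant as
    the source moves (a constant-power pan law).
    """
    angle = (position + 1) * math.pi / 4
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    return ([s * left_gain for s in mono], [s * right_gain for s in mono])

# Panning a mono signal, as Track 32 does, moves *everything* together --
# the birds can't stay put because they're baked into the one channel:
mono = to_mono([0.5, 0.5], [0.1, 0.3])
left, right = pan(mono, 1.0)  # hard right: the left channel is essentially silent
```

A true stereo recording keeps two independent channels, so the car can move while the birds hold their positions; once folded to mono, that independence is gone for good.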

REVERBERATION and ECHO

What's the difference?

Reverberation is the persistence of sound in a particular space after the original sound has stopped. When sound is produced in a space, it reflects off of the surfaces of the space, scattering around (aka diffusing), ringing out until it decays, a result of the energy of the reflections being absorbed by air and non-reflective surfaces. Reverb is a naturally-occurring characteristic of a space that gives aural clues about its size and shape and what kind of reflective or absorptive surfaces there are. A closet and a car interior may enclose the same volume of space, but they will sound distinctly different because of their geometries and surfaces. The larger the space, usually the longer it takes for the reverberation to decay. Cathedrals, gymnasiums, caves, concert halls, etc. are examples of spaces with long reverb times.

[Reverb can also be produced artificially, whether using a spring (as in vintage guitar amplifiers), a metal plate (the plate reverbs in recording studios of the '70s and '80s), or in software. The state of the art in artificial reverb is called convolution reverb. This is a process in which someone goes into a space with high-end speakers and plays a 20 Hz to 20 kHz sweep (like the rising frequency tone we heard in class) and records it with high-end microphones in various locations. Then the recording is processed with special software that removes the sweep but retains the way the space responded to it -- leaving, in effect, the sonic signature of an actual space.]

Some rooms are built with absorptive materials on the walls, ceiling, and floor that reduce or eliminate reflections in order to achieve a dry (non-reverberant) sound. Rooms like this are typically found in recording and radio studios. Outdoors, away from buildings and trees, the sound is dry.

Echo is discrete, non-diffuse repetitions of sound. Echoes occur under certain conditions in spaces with just the right (or wrong...) geometry. Like reverb, echo can also be produced artificially in software.

[Diagram: dry (no reflections), reverb (diffuse reflections), and echo (discrete reflections: Echo... Echo... Echo...)]

Track 36) A recording of a hand drum. The first segment is dry, the second segment has reverb, and the third segment has echo.

Track 37) QUESTION #6 -- multiple choice. Identify on the answer sheet the spatial characteristic of each of the three segments.
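The discrete-versus-diffuse distinction shows up clearly in code. Below is a minimal echo sketch -- a simple delay with decaying repeats, far simpler than any real echo plugin; a reverb would instead smear thousands of overlapping reflections together rather than produce countable copies:

```python
def add_echo(dry, delay_samples, feedback=0.5, repeats=4):
    """Discrete echoes: delayed copies of the signal, each repeat
    quieter than the last -- the 'Echo... Echo... Echo...' case."""
    out = list(dry) + [0.0] * (delay_samples * repeats)
    gain = feedback
    for r in range(1, repeats + 1):
        offset = r * delay_samples
        for i, s in enumerate(dry):
            out[offset + i] += s * gain
        gain *= feedback
    return out

# A single impulse (one hand-drum hit, say) comes back as a decaying
# train of separate, countable repeats:
echoed = add_echo([1.0], delay_samples=10)
print(echoed[0], echoed[10], echoed[20])  # 1.0 0.5 0.25
```

Each repeat is a distinct event you could point to on a waveform display -- exactly what makes echo different from the continuous wash of reverb in Track 36.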