
RAZPRAVE IV. RAZREDA SAZU, XLVII-3, 125-138, LJUBLJANA 2006

ACOUSTIC FEATURES OF RED DEER (CERVUS ELAPHUS) STAGS VOCALIZATIONS IN THE CANSIGLIO FOREST (NE ITALY, 2001-2002)

AKUSTIČNE ZNAČILNOSTI OGLAŠANJA JELENJIH SAMCEV (CERVUS ELAPHUS) V GOZDU CANSIGLIO (SV ITALIJA, 2001-2002)

ANDREA FAVARETTO, RENZO DE BATTISTI, GIANNI PAVAN & ALBERTO PICCIN

ABSTRACT

Acoustic features of Red Deer (Cervus elaphus) stags vocalizations in the Cansiglio Forest (NE Italy, 2001-2002)

During the rut in the years 2001-2002 in the Cansiglio Forest (NE Italy), more than 1300 vocalizations of red deer stags were recorded and analyzed. The acoustic analysis showed an evident spectrographic and temporal heterogeneity, so that we could classify the vocalizations into 11 different classes. In particular, for the analyzed population, we found a clear distinction among three principal temporal classes, which allowed us to describe the acoustic repertoire of the stag population during the rutting seasons considered.

Keywords: Red deer, free-ranging population, grunt roars and coughs.

IZVLEČEK

Akustične značilnosti oglašanja jelenjih samcev (Cervus elaphus) v gozdu Cansiglio (SV Italija, 2001-2002)

Med jelenjim rukom v letih 2001-2002 je bilo v gozdu Cansiglio (SV Italija) posnetih in analiziranih več kot 1300 oglašanj samcev. Zvočna analiza je pokazala značilno spektrografsko in časovno heterogenost, tako da smo oglašanja lahko razporedili v 11 skupin. Še posebej jasno so se na osnovi časovnih parametrov razlikovale tri skupine, zato smo lahko opisali zvočni nabor populacije samcev med obravnavano sezono ruka.

Ključne besede: jelen, prosto živeča populacija, rukanje in kašljanje.

Addresses - Naslovi

Andrea FAVARETTO, University of Padova, Via Belle Gambe 2/a, 31100 Treviso, Italy. E-mail: dejano@libero.it
Gianni PAVAN, CIBRA, University of Pavia, Via Taramelli 24, 27100 Pavia, Italy. E-mail: gpavan@cibra.unipv.it
Renzo DE BATTISTI, Corpo Forestale dello Stato, Via Cavalieri di Vittorio Veneto 21, 35129 Padova, Italy. E-mail: redeba@tin.it
Alberto PICCIN, Corpo Forestale dello Stato, Uffici Amm.ne FF.DD. del Cansiglio, via Lioni 137, 31029 Vittorio Veneto (TV), Italy. E-mail: redeba@tin.it

INTRODUCTION

This work deals with the Red Deer (Cervus elaphus) stag rutting calls in the Cansiglio Forest (North-East Italy, Veneto region). Although some research on the acoustic behaviour of Fallow Deer (Dama dama) and Red Deer has been carried out in captivity or in semi-domesticated conditions (FITCH & REBY 2001, LONG et al. 1998, PÉPIN et al. 2001, REBY et al. 1998), studies on free-ranging populations (REBY & MCCOMB 2003a, MCCOMB 1991) are scarce. The goal of this work was to study and characterize the acoustic features of the roaring stags in the Cansiglio population (VAZZOLA et al. 2005). The question to answer was whether it was possible to develop an analysis procedure to identify general and individual acoustic features, and to separate those features related to the dimensions of the anatomical structures involved in sound emission (source-filter theory) from those more probably related to the personality and motivational status of each individual. In this work we describe the acoustic repertoire of the Cansiglio stag population during the 2001-2002 rutting seasons and identify the acoustic parameters that allow individual classification (FAVARETTO et al. 2005).

MATERIALS AND METHODS

More than 60 hours of red deer stag vocalizations were recorded in the Cansiglio Forest (Alps, North Italy, altitude 1000 m a.s.l.) during the 2001 and 2002 rutting seasons (September-October), using a Beyerdynamic MC-737 shotgun microphone connected to an Apple iBook G3 laptop. The acoustic signals were recorded on the laptop through a Roland UA-30 USB external audio interface and the Bias Peak 2.6 TDM software, in monophonic mode with 44.1 kHz sampling rate and 16 bit resolution. The recordings were browsed, selected, divided into categories and, whenever possible, classified according to recognized individual emitters. Analyses were made with Praat (v. 4.0.12, P. Boersma and D. Weenink, University of Amsterdam, The Netherlands, www.praat.org), a software package originally developed for speech analysis. We discarded from the analyses all the sounds showing acoustic overlap of different roars, a bad spectrographic display (acoustic signal level too low), or environmental noise (caused by rain, wind, airplanes, cars, etc.). More than 1300 sound units, commonly called roars, belonging to 7 different stags, were analyzed, measured and categorized (Tab. 1). We then analyzed how these sound units were sequenced (organized temporally) in order to identify and classify higher-level structures, the bouts. Temporal variables (sound unit duration, total duration, number of units, pause between two consecutive units) and spectrographic variables (fundamental frequency F0 and formants F1-F8) were measured with PRAAT. Temporal variables were measured by selecting every sound unit in the PRAAT spectrographic window. F0 was measured at every 20 ms time step in the frequency range between 50 and 250 Hz.
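The measurements described here were made interactively in the Praat interface. Purely as an illustration, the same per-unit measurements (duration, plus F0 statistics at a 20 ms step in the 50-250 Hz range) could be scripted, for example with the praat-parselmouth Python interface to Praat; the file name below is hypothetical and the snippet is a sketch, not the authors' workflow.

```python
import parselmouth

# Load one selected sound unit; "roar_unit.wav" is a hypothetical file name.
snd = parselmouth.Sound("roar_unit.wav")
duration = snd.xmax - snd.xmin                 # temporal variable: unit duration (s)

# F0 every 20 ms in the 50-250 Hz range, as in the analysis described above.
pitch = snd.to_pitch(time_step=0.02, pitch_floor=50.0, pitch_ceiling=250.0)
f0 = pitch.selected_array["frequency"]         # 0 Hz marks unvoiced frames
f0 = f0[f0 > 0]                                # keep voiced (harmonic) frames only

if f0.size:                                    # at least one harmonic segment present
    print(f"duration = {duration:.2f} s, "
          f"F0min = {f0.min():.1f} Hz, F0max = {f0.max():.1f} Hz, "
          f"F0med (mean) = {f0.mean():.1f} Hz")
else:
    print(f"duration = {duration:.2f} s, no harmonic segment: F0 not measured")
```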

Then we measured three values to characterize F0: the highest value (F0max), the lowest (F0min) and the average value (F0med). The first eight formants were measured by processing spectra with the cepstral smoothing command (bandwidth: 100 Hz). F0 was measured in all the sounds showing at least one harmonic segment; the formants, instead, were measured in those sounds showing at least one harsh plateau of stable, non-modulated frequencies, corresponding to the maximum elongation of the vocal tract (REBY & MCCOMB 2003b, FITCH et al. 2002, WILDEN et al. 1998).

RESULTS

By analysing the spectrograms, we identified four basic acoustic types: sounds containing both harmonic and chaotic structures (REBY & MCCOMB 2003a), sounds exclusively harmonic, sounds exclusively harsh, and sounds in which we were not able to distinguish any clear acoustic structure. Considering their temporal aggregation, the sounds are emitted in bouts: every bout is composed of a variable number of sound units, typically ranging from one to more than 10. The analysis of duration showed a clear division of all the sounds into three principal categories (Fig. 1 & 2): the common roar, the grunt roar and the cough. The average duration of the common roar was 0.89 s (SD = 0.3), with a repetition rate within a bout of 0.91/s. The grunt roars are shorter than the common roars; they are emitted in fast bouts, usually consisting of 3 to 9 units, with a strong harsh characterization. The average duration of the grunt roar was 0.18 s (SD = 0.08), with a repetition rate within a bout of 1.82/s. The cough is shorter than the grunt; it lacks a clear tonal structure and resembles a human cough (Fig. 10). It has an average duration of 0.076 s (SD = 0.032) and is emitted in fast series, typically when the stag runs after another stag or a hind. It is normally repeated 3 to 5 times, with a repetition rate of 4.13/s. We named this kind of vocalization, which had not been described before, the cough; a simple duration-based rule for assigning a unit to one of these three classes is sketched after the list below.

Based on the duration measures, we identified the following bout categories:

1. common roar bout (Fig. 5): bout composed only of common roars (average duration 2.88 s; average number of roars = 2.6, minimum = 1, maximum = 12; SD = 1.93).
2. grunt roar bout (Fig. 11): bout composed of grunt roars emitted in series, harsh in most cases (average bout duration = 4.64 s; SD = 1.7; average number of roars = 6.22); some common roars often also appear in this bout, especially in the final position but sometimes also in the initial one.
3. cough bout (Fig. 10): bout composed of coughs. In rare cases this bout can end with a common roar.
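Because the three duration classes reported above are well separated, assigning a measured sound unit to a class needs little more than a threshold rule on its duration. A minimal sketch follows; the cut-off values (0.12 s and 0.45 s) are illustrative assumptions placed between the reported mean durations (0.076 s coughs, 0.18 s grunt roars, 0.89 s common roars), not thresholds given in this study.

```python
# Illustrative only: assign a sound unit to one of the three duration classes
# described above. The cut-offs are assumptions, not values from the paper.
def classify_unit(duration_s: float) -> str:
    if duration_s < 0.12:
        return "cough"
    if duration_s < 0.45:
        return "grunt roar"
    return "common roar"

for d in (0.07, 0.20, 0.95):
    print(f"{d:.2f} s -> {classify_unit(d)}")
```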

Grunt roar bouts are emitted more frequently as the rutting season approaches its climax; overall, the common roar bouts are the most numerous throughout the season, at a ratio of roughly 10:1 (FAVARETTO 2004). The three categories of bout described above may show a variable composition of roars. Combining the duration measures and the spectrographic analyses, we divided all the sounds into 8 subcategories, obtaining the following possibilities of bout composition:

Common roar bout composition
1. harmonic common roar: sound completely harmonic (Fig. 3, 5).
2. harsh common roar: sound completely harsh (Fig. 4).
3. mixed common roar:
   3.1. harmonic part followed by a harsh one (Fig. 5, 8).
   3.2. two or more harsh segments (Fig. 6).
   3.3. first harsh, then harmonic (Fig. 9).
   3.4. first harmonic, then harsh, then harmonic again (Fig. 7).
4. vague common roar: sometimes emitted as the last sound in a bout. The acoustic structure is not clear; the average duration is 0.6 s (Fig. 5).

Grunt roar bout composition
5. incipit: the sound that sometimes begins a grunt roar bout; it is a common roar longer than a grunt, normally with a harsh structure (Fig. 11).
6. grunt roar.
7. closing roar: the roar that closes a grunt roar bout, normally with a harsh structure. Its duration is quite long and it shows formant stability (Fig. 11).

Cough bout composition
8. cough (Fig. 10), rarely ending with a common roar.

During the two years of fieldwork we measured 1346 sounds, organized into ca. 500 bouts. Table 2 shows the average values of the time-related variables. Once we had classified the different sounds and bouts, we were able to analyze the pool of vocalizations with advanced statistical procedures to test our individual identification data. By applying discriminant analysis (SPSS 11.0) to the three categories of bouts, we found that the grunt roar bouts exhibited the highest degree of separation into identifiable clusters that matched our field observations of individually recognized stags. Using the grunt roar bouts it was possible to correctly classify all 7 individuals with a high rate of correct classification (94.8%) (Fig. 12).
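The discriminant analysis reported above was run in SPSS 11.0. Purely as an illustration of the approach, a comparable linear discriminant classification of grunt roar bouts could be sketched in Python with scikit-learn; the feature set, the file "grunt_roar_bouts.csv" and its column names, and the leave-one-out evaluation are assumptions for the sketch, not the authors' procedure.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# One row per grunt roar bout; file and column names are hypothetical.
df = pd.read_csv("grunt_roar_bouts.csv")
features = ["bout_duration", "n_units", "mean_unit_duration", "mean_pause"]
X = df[features].values
y = df["stag_id"].values                     # individually recognized emitter

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=LeaveOneOut())
print(f"correct classification rate: {scores.mean():.1%}")
```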

CONCLUSIONS

The data gathered show that the acoustic repertoire of C. elaphus is more complex than expected. We have described the repertoire of the Cansiglio population, which is based on acoustic units variably combined to generate different bouts. In particular, we found that grunt roar bouts convey individual features that may play an important role in the communication system of this species. This study may lead to important applications in applied zoology, widening the knowledge of the species considered, with the prospect of concrete use in demo-ecology, wildlife management and the monitoring of free-ranging animals.

REFERENCES

FAVARETTO, A., 2004: Esperienze sull'individuazione di maschi in una popolazione di cervo mediante analisi acustica delle vocalizzazioni (Foresta del Cansiglio).- Tesi di laurea in Scienze Forestali e Ambientali, pp. 1-112. Università di Padova.

FAVARETTO, A., DE BATTISTI, R. & PAVAN, G., 2005: Acoustic individuality of free-ranging red deer (Cervus elaphus L.) stags.- XXVII Congress of the International Union of Game Biologists, 28th August - 3rd September 2005, Hannover, Germany.

FITCH, W.T., NEUBAUER, J. & HERZEL, H., 2002: Calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production.- Animal Behaviour, 63, 407-418.

FITCH, W.T. & REBY, D., 2001: The descended larynx is not uniquely human.- Proc. R. Soc. Lond. B, 268, 1669-1675.

LONG, A.M., MOORE, N.P. & HAYDEN, T.J., 1998: Vocalization in red deer (Cervus elaphus), sika deer (Cervus nippon), and red x sika hybrids.- J. Zool., Lond., 244, 123-134.

MCCOMB, K., 1991: Female choice for high roaring rates in red deer, Cervus elaphus.- Animal Behaviour, 41, 79-88.

PÉPIN, D., CARGNELUTTI, B., GONZALES, G., JOACHIM, J. & REBY, D., 2001: Diurnal and seasonal variations of roaring activity of farmed red deer stags.- Applied Animal Behaviour Science, 74, 233-239.

REBY, D. & MCCOMB, K., 2003a: Vocal communication and reproduction in deer.- Advances in the Study of Behavior, 33, 231-264.

REBY, D. & MCCOMB, K., 2003b: Anatomical constraints generate honesty: acoustic cues to age and weight in the roars of red deer stags.- Animal Behaviour, 65, 519-530.

REBY, D., JOACHIM, J., LAUGA, J., LEK, S. & AULAGNIER, S., 1998: Individuality in the groans of fallow deer (Dama dama) bucks.- J. Zool., Lond., 245, 79-84.

VAZZOLA, C., DE BATTISTI, R., DI GANGI, E., CAMPAGNARO, M. & PICCIN, A., 2005: Indagini demoecologiche della popolazione di cervo (Cervus elaphus L., 1758) in Cansiglio (Prealpi Venete). Anni 1995-2003. In: BON, M., DAL LAGO, A. & FRACASSO, G. (Eds.): Atti 4° Convegno Faunisti Veneti.- Associazione Faunisti Veneti, Natura Vicentina, 7, 1-288.

WILDEN, I., HERZEL, H., PETERS, G. & TEMBROCK, G., 1998: Subharmonics, biphonation, and deterministic chaos in mammal vocalization.- Bioacoustics, 9, 171-196.

Figure 1: Duration of roars.

Figure 2: Duration of coughs and grunt roars.

Figure 3: Vocalic common roar.

Figure 4: Harsh common roars.

Figure 5: Common roar bout.

Figure 6: Common roar with two chaotic events.

Figure 7: Mixed common roar, twice vocalic (harmonic, then harsh, then harmonic).

Figure 8: Normal common roar.

Figure 9: Common roar, first harsh, then harmonic.

Figure 10: Cough bout.

Figure 11: Grunt roar bout with incipit and closing roar.

Figure 12: Canonical discriminant function.

Table 1: Average values of time-related variables.

Discriminant analysis classification results (ID_TEST, original vs. predicted group membership):

                      Predicted group membership
Original ID       1      2      3      4      5      6      7    Total
Count
  1              72      0      0      0      0      0      0       72
  2               1     11      0      0      0      0      7       19
  3               0      0     62      0      0      0      3       65
  4               0      0      0      5      0      0      0        5
  5               1      0      0      0     20      0      0       21
  6               0      0      0      0      0     13      0       13
  7               0      0      0      0      0      0     37       37
%
  1             100      0      0      0      0      0      0      100
  2             5.3   57.9      0      0      0      0   36.8      100
  3               0      0   95.4      0      0      0    4.6      100
  4               0      0      0    100      0      0      0      100
  5             4.8      0      0      0   95.2      0      0      100
  6               0      0      0      0      0    100      0      100
  7               0      0      0      0      0      0    100      100

94.8% of original grouped cases correctly classified.
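As a quick arithmetic check, the overall rate reported under the classification table follows directly from the counts above: the diagonal sums to 220 correctly classified bouts out of a total of 232, i.e. 94.8%. A minimal sketch of the computation:

```python
import numpy as np

# Counts from the classification table above (rows: original stag 1-7,
# columns: predicted stag 1-7).
counts = np.array([
    [72,  0,  0, 0,  0,  0,  0],
    [ 1, 11,  0, 0,  0,  0,  7],
    [ 0,  0, 62, 0,  0,  0,  3],
    [ 0,  0,  0, 5,  0,  0,  0],
    [ 1,  0,  0, 0, 20,  0,  0],
    [ 0,  0,  0, 0,  0, 13,  0],
    [ 0,  0,  0, 0,  0,  0, 37],
])
accuracy = np.trace(counts) / counts.sum()   # 220 / 232
print(f"{accuracy:.1%}")                     # -> 94.8%
```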