
Affective response to a set of new musical stimuli

W. Trey Hill & Jack A. Palmer

Psychological Reports, 106, 581-588 (2010)

This is an author's copy of the manuscript published in Psychological Reports. The full-text publisher's version can be found using the following citation:

Hill, W. T., & Palmer, J. A. (2010). Affective response to a set of new musical stimuli. Psychological Reports, 106(2), 581-588. doi: 10.2466/pr0.106.2.581-588

Affective response to a set of new musical stimuli 1

W. Trey Hill, Kansas State University
Jack A. Palmer, University of Louisiana at Monroe

1 This article may not exactly replicate the final version published in Psychological Reports. It is not the copy of record; this is only the author's version of that paper. The official citation for this paper is: Hill, W. T., & Palmer, J. A. (2010). Affective response to a set of new musical stimuli. Psychological Reports, 106, 581-588. doi: 10.2466/pr0.106.2.581-588. The publisher's full-text version is available at: http://www.amsciepub.com/doi/abs/10.2466/pr0.106.2.581-588?journalcode=pr0

Summary. Recently, a novel set of musical stimuli was developed in an attempt to bring more rigor to a paradigm which often falls under scientific scrutiny. Although these musical clips were validated in terms of recognition for emotion, valence, and arousal, the clips were not specifically tested for their ability to elicit certain affective responses. The present study examined self-reported elation among 82 participants after listening to one of two types of the musical clips; 47 listened to happy music and 35 listened to sad music. Individuals who listened to happy music reported significantly higher elation than individuals who listened to the sad music. These results support the idea that music can elicit certain affective state responses.

Infants as young as 6 months of age have shown the ability to distinguish happy music from sad music (Flom, Gentile, & Pick, 2008), as well as to develop a preference for consonant over dissonant music (Zentner & Kagan, 1998). Rhythm and melody are used in every human culture, and music has been considered to be addicting much like other pleasures such as prescription drugs and sex (Huron, 2001). Several researchers agree that humans are inclined to musical behavior (e.g., Miller, 2000; Wallin, Merker, & Brown, 2000; Mithen, 2006). The premise of this research is that the prevalence of music in all cultures must have a psychological basis.

Research on the psychology of music has been the subject of several criticisms as well. Specifically, claims that listening to music can alter mood states have been undermined by three possible problems: (1) the cognitivist view that music merely expresses, but does not induce, mood; (2) lack of evidence or explanation for how music elicits emotional reactions; and (3) the use of subjective methodologies (self-report) to measure the supposed induced mood.

The first problem may be a problem of misattribution (e.g., Meyer, 1956; Kivy, 1989); the cognitivist view proposes that participants misattribute the recognized emotional expression of the music for their own feelings. This problem is often addressed methodologically by specifying to participants that they should report how they feel and not how the music sounds. Scherer and Zentner (2001) observed a difference in self-reported affect when participants were told to report how they felt as opposed to how the music sounded.

The second problem suggests that up to six cognitive processes (e.g., episodic memory, visual imagery, expectation) may occur during the act of listening to music, any combination of which could induce feelings of emotion in individuals. Juslin and Västfjäll (2008) proposed that these processes are not exclusive to music, and therefore emotional reactions to music may be related to ordinary emotional reactions to the extent that they employ the same processes.

The focus of the present research is the third problem facing research on music and emotions, which has to do with the gathering of affect data via self-report. This method is the norm for typical research on affect, and as such has often been applied to music-based emotion research. Problems with self-report methodology (e.g., the possibility of demand characteristics) must be suspected in any such research involving music and affect, although Kenealy (1988) found that experimentally manipulated demand characteristics had no effect on participants' self-reported moods in music-based research.

Various researchers have conducted isolated studies related to music that may not be generalizable or even comparable to each other. This difficulty in comparison between studies has primarily been due to variations in methods and in stimuli, both of which can have severe implications for theory development and verification. To address this lack of consistent methodology and provide a set of standard stimuli, Vieillard, Peretz, Gosselin, and Khalfa (2008) developed 56 musical clips to be used in music-based emotion research. These 56 clips were composed of four groups of 14 clips, with each group representing a different intended emotion. Vieillard, et al. developed clips intended to sound happy, sad, peaceful, and scary. The items were both categorical (e.g., Ekman, 1982) and dimensional (e.g., Russell, 1980), i.e., they can be treated as bipolar elements on an affective dimension. Thus, although peaceful is generally not considered a basic emotion, it was added by Vieillard, et al. to provide the dimensional opposite of the scary music clips. Similarly (but here conforming to two basic emotions), the happy and sad music clips were created as dimensional opposites.

The Vieillard, et al. (2008) musical clips were digitally constructed in piano timbre on computer software. Construction of the musical clips was based on the rules of the Western tonal system. The happy clips were in a major mode with fast tempo; sad clips were in a minor mode with slow tempo; scary clips were in a minor mode with intermediate tempo; and peaceful clips were in a major mode with slow tempo. Also, although most of the scary clips were composed with regular rhythm and consonance, a few were irregular and dissonant (similar to the piano music from older horror films). These descriptions are generalizations, of course. Readers who are musicians can refer to the appendixes of Vieillard, et al. (2008) for musical scores.

Vieillard, et al. (2008) tested participants' abilities to correctly recognize the intended emotion in the music. A detail important to the present study is that Vieillard, et al. also used two sets of instructions before their experiment: one set of instructions told participants to merely attempt to recognize the intended emotion, whereas another group was told to focus on their emotional experience while listening to the music. Participants more often correctly recognized the intended emotion when they were instructed to focus on their emotional experience. Although not the purpose of that study, Vieillard, et al. suggested that this is supportive of the notion that emotional recognition and emotional experience differ only in strength. Further, they suggested that the musical clips may have induced a congruent affective state in some of the participants. Evaluating this claim is the topic of the present research.

For the present research, only the happy and sad musical clips were used (28 total clips). Since happy and sad are often considered basic emotions and are also often considered dimensional opposites, these two were judged most useful for evaluation. Using a between-groups design comparing happy and sad musical clips, the hypothesis of this study was that happy-sounding musical clips would elicit higher scores on the Elated–Depressed subscale of the Semantic Differential Feeling and Mood Scale than would sad-sounding musical clips.

Method

Participants

Participants were 82 undergraduates enrolled in a regional university in the Southern United States. The mean age was 22.8 yr. (SD = 6.9), with a mix of ethnicities and sexes: 67% Caucasian, 23 (28%) African American, 3 (4%) Asian, and 1 (1%) Hispanic; 18 (22%) men and 64 (78%) women. Participants were recruited from introductory psychology courses and given extra credit in their course for participating in the study. Those who did not wish to participate were given an alternative option for extra credit.

Materials

Mood scale. The Semantic Differential Feeling and Mood Scale (SDFMS; Lorr & Wunderlich, 1988) was used in this study. The SDFMS is a measure of feeling and mood states along five bipolar dimensions (see below). Past research has suggested that affective states, including emotions, moods, attitudes, etc. (Scherer, 2000), are bipolar in nature (Russell, 1979). The SDFMS consists of 35 items, each rated on a 5-point scale labeled 1: Quite, 2: Slightly, 3: Neutral, 4: Slightly, and 5: Quite; to the left of each item's rating scale was one bipolar adjective (e.g., Dejected) and to the right of the boxes was the dimensionally opposite adjective (e.g., Cheerful). Participants were given specific instructions to place a check in the box indicative of how they feel "right now." The scale can be subdivided into five bipolar dimensions of general affect or feeling, with each subscale having an adjective-based label and a letter assigned to it; this allows scoring as a total questionnaire or by subscales, which can be analyzed separately. The first subscale is defined by the dimension Elated–Depressed (hereafter called the Elation scale), with higher scores indicating higher general cheerfulness. The remaining subscales and their names are as follows: Relaxed–Anxious (Relaxed), with higher scores denoting greater relaxation; Confident–Unsure (Unsure), with higher scores signifying higher uncertainty; Energetic–Fatigued (Fatigue), with higher scores denoting higher fatigue; and Good Natured–Grouchy (Grouchy), with higher scores indicating more grouchiness or hostility.

Scherer and Zentner (2001) argue that the common adjective-based measures used in most research may not be appropriate for research on musical emotions. Thus, for the present study a different approach (i.e., the SDFMS) was taken regarding the listeners' general affective experiences. There is some debate as to whether the emotional experiences of music mirror normal emotions (see Peretz, 2001; Scherer & Zentner, 2001). Although the SDFMS is an adjective-based measure, it was not designed to measure emotional states specifically, so it can be used to gain additional knowledge on affective responses in general (i.e., outside the traditional boundaries of basic emotions).
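To make the scoring described above concrete, the following sketch (in Python) shows one way the five SDFMS subscale scores could be computed from the 35 item ratings. The item-to-subscale assignment used here is a hypothetical placeholder, since the actual key is specified by Lorr and Wunderlich (1988); only the general logic, averaging the 1-5 bipolar ratings within each of the five subscales, follows the description above.

    # Hypothetical sketch of SDFMS subscale scoring. The item assignments below are
    # placeholders, not the published key from Lorr & Wunderlich (1988).
    from typing import Dict, List

    # Placeholder mapping: 35 items split evenly across the five bipolar subscales.
    SUBSCALE_ITEMS: Dict[str, List[int]] = {
        "Elation": list(range(1, 8)),    # Elated-Depressed
        "Relaxed": list(range(8, 15)),   # Relaxed-Anxious
        "Unsure":  list(range(15, 22)),  # Confident-Unsure
        "Fatigue": list(range(22, 29)),  # Energetic-Fatigued
        "Grouchy": list(range(29, 36)),  # Good Natured-Grouchy
    }

    def score_sdfms(ratings: Dict[int, int]) -> Dict[str, float]:
        """Average the 1-5 ratings of the items belonging to each subscale."""
        return {
            subscale: sum(ratings[i] for i in items) / len(items)
            for subscale, items in SUBSCALE_ITEMS.items()
        }

    # Example: a participant who checked "4: Slightly" (toward the right-hand
    # adjective) on every item receives 4.0 on all five subscales.
    example_ratings = {item: 4 for item in range(1, 36)}
    print(score_sdfms(example_ratings))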

Musical stimuli. The happy and sad musical clips developed by Vieillard, et al. (2008) were used in this experiment. In the remainder of this paper, Vieillard, et al.'s happy musical clips are referred to collectively as happy music and the sad musical clips as sad music. Clips were downloaded from the Peretz Lab website at the University of Montreal.2 Although free for academic use, permission to use the clips was first obtained from an author of the publication (i.e., Vieillard, et al., 2008). Each set of clips, when played through once, was approximately 150 sec. long. Thus, in the between-groups design, during the 5-min. listening session, participants would hear one complete set of either the happy clips or the sad clips twice.

Procedures

The experiment was implemented in introductory psychology classrooms of approximately 40 students. Prior to the study, permission was obtained from professors to perform the experiment in two separate classes. The classes were randomly assigned to the different music groups, a Happy music group (n = 47; 10 men, 37 women) and a Sad music group (n = 35; 8 men, 27 women), using the flip of a coin. Because the participants were obtained via convenience sampling, there were unequal numbers of men and women in both conditions, a limitation of the study. The two classes were tested on the same day in the same classroom. One class was tested prior to the other, but both were tested in the morning.

Prior to the beginning of class, the musical clips were loaded onto the playlist of Windows Media Player and set to loop (i.e., continuously play without stopping). For each class, participants listened to the music clips in a group via the classroom's external audio system; one class listened to happy music and one class listened to sad music. When all students had arrived for class, the general nature of the experiment was explained, participants were given informed consent sheets, and any questions were answered. Participants who wished to discontinue their participation or had chosen not to participate were given alternate opportunities for extra credit at the end of class. Participants' demographic information (ethnicity, age, sex, and year in school) was requested.

For each class, the researcher then began playing either the happy or sad musical clips. The musical clips were allowed to play for a total of 5 min., during which participants were instructed to sit and listen to the music. After 5 min. of listening, while the music continued to play, participants were given copies of the Semantic Differential Feeling and Mood Scale (Lorr & Wunderlich, 1988) and were asked to answer the questions pertaining to how they feel "right now" and not how the music sounded. The musical clips continued to play as participants completed the SDFMS (approximately another 15 min.). Volume was held constant for participants in both classes.

Following the completion of the Semantic Differential Feeling and Mood Scale, participants were debriefed as to the purpose of the study. Questions were answered pertaining to the hypothesis, and contact information was provided if the participants wished to follow up on the results of the study. Although not controlled for, intergroup discussion about the experiment was unlikely; only the class professors, but not potential participants, knew of the experiment before the beginning of class.

Analysis

The data were first analyzed using a one-way between-subjects multivariate analysis of variance (MANOVA) to examine possible sex differences on the SDFMS. An additional one-way between-subjects MANOVA was used to analyze possible group differences between the Happy and Sad music groups on the SDFMS. Further univariate analyses were used to test for group differences on specific SDFMS subscales; however, because there were no significant sex differences (see Results), sex was excluded from any further analysis.

2 http://www.brams.umontreal.ca/plab/publications/article/96#downloads
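As a rough illustration of this analysis plan, the sketch below shows how a one-way between-subjects MANOVA on the five SDFMS subscales, followed by univariate follow-up ANOVAs, could be run in Python with pandas, statsmodels, and scipy. The data file and column names are invented for the example, and nothing here implies that the original analysis was conducted with these tools.

    # Illustrative analysis sketch only; file and column names are hypothetical.
    import pandas as pd
    from scipy import stats
    from statsmodels.multivariate.manova import MANOVA

    # Assumed layout: one row per participant, the five SDFMS subscale scores,
    # and a 'group' column coded "happy" or "sad".
    df = pd.read_csv("sdfms_scores.csv")  # placeholder file name

    # One-way between-subjects MANOVA: do the subscales jointly differ by group?
    manova = MANOVA.from_formula(
        "Elation + Relaxed + Unsure + Fatigue + Grouchy ~ group", data=df
    )
    print(manova.mv_test())  # reports Wilks' lambda, F, and p for the group effect

    # Follow-up univariate one-way ANOVAs on each subscale.
    for subscale in ["Elation", "Relaxed", "Unsure", "Fatigue", "Grouchy"]:
        happy = df.loc[df["group"] == "happy", subscale]
        sad = df.loc[df["group"] == "sad", subscale]
        f_value, p_value = stats.f_oneway(happy, sad)
        print(f"{subscale}: F(1, {len(df) - 2}) = {f_value:.2f}, p = {p_value:.3f}")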

Results and Discussion

Results of the one-way MANOVA for Sex were not significant (Wilks' λ = 0.87, F(5, 76) = 2.26, p = .06, partial η² = .13), suggesting that overall, men and women did not differ significantly in their self-reported affect. The lack of sex differences in this study justified excluding sex from further analyses. Results of the one-way MANOVA for Music Group were statistically significant (Wilks' λ = 0.73, F(5, 76) = 5.55, p < .001, partial η² = .27), warranting further testing of the omnibus effect of music group on the SDFMS. To this end, univariate ANOVAs were used to examine the effect of music group on each SDFMS subscale. Statistically significant differences were found between the Happy and Sad music groups for Elation (F(1, 80) = 21.06, p < .001, partial η² = .21) and for Unsure (F(1, 80) = 6.64, p = .01, partial η² = .08). No other significant differences were found (see Table 1).
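As a check on how the reported effect sizes relate to the test statistics, the short snippet below recomputes partial eta squared from the reported F values and degrees of freedom using the standard identity partial η² = (F x df_effect) / (F x df_effect + df_error); it reproduces the .21 and .08 given above and uses only the numbers reported in this paragraph.

    # Recompute partial eta squared from the reported univariate F tests (df = 1, 80).
    def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
        return (f_value * df_effect) / (f_value * df_effect + df_error)

    print(round(partial_eta_squared(21.06, 1, 80), 2))  # Elation: 0.21
    print(round(partial_eta_squared(6.64, 1, 80), 2))   # Unsure: 0.08

For the two-group MANOVAs, where the effect has a single degree of freedom, the reported multivariate partial η² values likewise equal 1 minus Wilks' λ (1 - 0.87 = .13 and 1 - 0.73 = .27).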

The present research assessed the changes in affect associated with listening to the new musical stimuli developed by Vieillard, et al. (2008), at the same time striving to minimize general concerns common within music and emotion research regarding possible misattributions of affective states and other self-report biases. The results of the present study documented a difference in self-reported state affect between individuals listening to happy versus sad music. Confidence in the statement that these musical clips elicit some type of affective response certainly depends in part on one's confidence in the present method of measuring state affect. The participants were instructed to distinguish between how the music sounded and how they felt. In further research, the extent to which individuals can differentiate their feelings from their cognitive responses to the music could be explored. The results extend the possible uses of the Vieillard, et al. (2008) musical clips and lend some support to the notion that listening to music can elicit affective responses in individuals. Although this phenomenon has been regularly noted by the general public, it has much less frequently been subject to scientific scrutiny. It is hoped that the development of the Vieillard, et al. musical clips, along with an added rigor in the field of music-based psychological research, will help further this particular line of investigation.

Further research should utilize the other two musical types developed by Vieillard, et al. (2008): peaceful-sounding music and scary-sounding music. Other measures of affective responses could be tested and used as validity checks, perhaps also to control for trait emotional tendencies. Scherer and Zentner (2001) suggested that several varying measures of emotion should be used in emotion research, especially when dealing with music. This could perhaps help in the identification of the variables which seem to make the affective response to music such a unique experience.

References

Ekman, P. (1982) Emotion in the human face. Cambridge, UK: Cambridge Univer. Press.

Flom, R., Gentile, D. A., & Pick, A. D. (2008) Infants' discrimination of happy and sad music. Infant Behavior and Development, 31, 716-728.

Huron, D. (2001) Is music an evolutionary adaptation? Annals of the New York Academy of Sciences, 930, 51.

Juslin, P. N., & Västfjäll, D. (2008) Emotional responses to music: the need to consider underlying mechanisms. Behavioral & Brain Sciences, 31, 559-621.

Kenealy, P. (1988) Validation of a music induction procedure: some preliminary findings. Cognition & Emotion, 2, 41-48.

Kivy, P. (1989) Sound sentiment: an essay on the musical emotions. Philadelphia: Temple Univer. Press.

Lorr, M., & Wunderlich, R. A. (1988) A semantic differential mood scale. Journal of Clinical Psychology, 44, 33-35.

Meyer, L. B. (1956) Emotion and meaning in music. Chicago: Univer. of Chicago Press.

Miller, G. (2000) Evolution of human music through sexual selection. In N. L. Wallin, B. Merker, & S. Brown (Eds.), The origins of music. Cambridge, MA: MIT Press. Pp. 329-360.

Mithen, S. (2006) The singing Neanderthals: the origins of music, language, mind, and body. Cambridge, MA: Harvard Univer. Press.

Peretz, I. (2001) Listen to the brain: a biological perspective on musical emotions. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: theory and research. Oxford, UK: Oxford Univer. Press. Pp. 105-134.

Russell, J. A. (1979) Affective space is bipolar. Journal of Personality and Social Psychology, 37, 345-356.

Russell, J. A. (1980) A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.

Scherer, K. R. (2000) Psychological models of emotion. In J. Borod (Ed.), The neuropsychology of emotion. New York: Oxford Univer. Press. Pp. 137-162.

Scherer, K. R., & Zentner, M. R. (2001) Emotional effects of music: production rules. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: theory and research. Oxford, UK: Oxford Univer. Press. Pp. 361-392.

Vieillard, S., Peretz, I., Gosselin, N., & Khalfa, S. (2008) Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion, 22, 720-752.

Wallin, N. L., Merker, B., & Brown, S. (Eds.) (2000) The origins of music. Cambridge, MA: MIT Press.

Zentner, M. R., & Kagan, J. (1998) Infants' perception of consonance and dissonance in music. Infant Behavior and Development, 21, 483-492.

Accepted April 14, 2010.