Infosound: An Audio Aid to Program Comprehension. Bellcore, 331 Newman Springs Road, Red Bank, NJ


Infosound: An Audio Aid to Program Comprehension

Diane H. Sonnenwald, B. Gopinath, Gary O. Haberman, William M. Keese III, John S. Myers
Bellcore, 331 Newman Springs Road, Red Bank, NJ

Abstract

We have explored ways to enhance users' comprehension of complex applications, using music and special sound effects to present application program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and special sound effects, to associate stored musical sequences and sound effects with application events, and to have real-time, continuous auditory control of sounds during application execution. Infosound has been used to create auditory interfaces for two applications: a telephone network service simulation and a parallel computation simulation. The auditory interfaces in these applications helped users detect rapid, multiple event sequences that were difficult to detect visually using text and graphical interfaces. This paper describes the architecture of Infosound, use of the system, and lessons we have learned.

1. Introduction

We have explored ways to enhance users' comprehension of complex applications using music and special sound effects to present application program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and special sound effects, to associate stored musical sequences and sound effects with application events, and to have real-time, continuous auditory control of sounds during application execution through a standard programming interface. Infosound can be used in conjunction with other interface toolkits, e.g., a graphics toolkit, to enable users to simultaneously view and listen to a presentation. Infosound has been used to create auditory interfaces for two applications: a telephone network service simulation and a parallel computation simulation.
Sounds were used by the applications to signal program events or program states that were momentary or sustained, sequential or parallel. Sounds used in the auditory interfaces of the applications included imitations of everyday sounds and speech, and musical sequences. Imitations of everyday sounds and speech represented everyday events, e.g., a telephone ringing sound represented an incoming telephone call, and musical sequences represented abstract events or states that do not have an everyday sound association, e.g., parallel computer processing. Experience showed that the auditory interfaces helped users detect rapid, multiple event sequences that were difficult to detect visually using text and graphical interfaces.

This work builds upon previous research in auditory interfaces that has focused on using sound to convey information about numerical data and to provide cues about events and options in computer application environments. In particular, Bly [1] used musical sound to represent multivariate data. Each data value was represented by a single note; parameters of the data value were mapped into frequency, amplitude, duration, and waveshape properties of the note. Mezrich et al. [2] used musical sound to present multivariate time series. Variables in a time series were mapped to musical notes such that each time series had its own melody over time. Morrison and Lunney [3] used musical chords to present chemical spectra data to the visually handicapped; each note represented a major feature of a substance's infrared spectrum. In the Sonic Finder [4], Gaver uses auditory icons (or special effect sounds) based on everyday sounds to provide auditory feedback on the status of common computer operations such as file copying. Edwards [5] used musical sound and synthesized speech to adapt the mouse-based interface of a word processing application for visually handicapped users.
Infosound is an interface toolkit that provides a mechanism to design both music and everyday sounds, and to have real-time, continuous control over music and everyday sounds from a parallel or sequential application. Infosound is part of the IC* project [6], whose objective is to create an environment for the design and development of complex computer systems, such as telephone networks and services. The project consists of a model of computation that provides a rigorous mathematical foundation for the IC* parallel programming languages, and tools [7] to create, view and manipulate parallel programs written in the IC* languages. Using Infosound suggests new ways in which sound can be useful in human-computer interaction.

In the following sections, the architecture of Infosound is presented. Two applications of Infosound are described and lessons learned from the applications are discussed. Future directions for Infosound are also presented.

2. Infosound Architecture

Infosound is an audio interface toolkit that allows application developers to design and develop audio interfaces. The Infosound system architecture consists of six major components:

- sound composition system
- sound storage system
- playback system
- application program interface
- sound generation system
- sound amplification system

The components and their relationships are illustrated in Figure 1. Infosound is implemented on dedicated, peripheral hardware and communicates through a standard interface with application software executing on Unix¹ workstations, MS-DOS² personal computers (PCs), or an IC* parallel processor. Using dedicated hardware maximizes audio response time without degrading application performance. The hardware implementation of Infosound is illustrated in Figure 2.

The sound composition system enables musical sequences, motifs, and sound effects to be created and stored for later recall by applications. Music is composed using a Musical Instrument Digital Interface (MIDI)
1. Unix is a trademark of AT&T Bell Laboratories.
2. MS-DOS is a trademark of Microsoft, Inc.

Figure 1: Infosound Architecture. Users compose sounds through the Sound Composition System. Sounds are stored for future retrieval in the Sound Storage System. Applications request sounds to be played through the Application Program Interface in the Playback System. The Sound Generation System and Sound Amplification System produce the sounds received from the Playback System.

Figure 2: Infosound Implementation. Infosound is implemented using commercially-available hardware and software (MIDI interface, MIDI keyboard, IBM-compatible PC AT). It interfaces with applications executing on a Unix workstation, MS-DOS PC, or IC* Parallel Processor.

electronic keyboard connected to a MIDI interface device that is in turn connected to a Macintosh³ personal computer. To compose a musical composition, a user plays a musical sequence on the MIDI electronic keyboard, and the composition, including information about the pitch and loudness of each note, is automatically stored in MIDI format [8] in the Macintosh. The compositions may also be edited directly on the Macintosh using commercially-available composition software. The act of composition may be thought of as playing a digitized player piano whose output is a composition written in the MIDI protocol and stored in electronic form. Musical compositions can be written for multiple instruments, each with a range of 127 notes, with the velocity and pitch of each note specified. Sound effects are recorded through the recording option of the electronic synthesizer and sampler. During the composition process, a sequence or sound effect can be heard by sending its MIDI information to electronic synthesizers and samplers that imitate a variety of musical instruments (e.g., flute, drums, piano, violin, marimba, xylophone) and sounds (e.g., speech, hand clapping, automobile crashes, telephone ringing). Finished compositions are transferred to the sound storage system from the Macintosh via the MIDI interface device, or from the electronic synthesizer and sampler via the MIDI personal computer (PC) interface card.

The sound storage system is a library system that stores the musical sequences and sound effects in MIDI format. It is implemented on a dedicated IBM Personal Computer (PC) AT or its equivalent. The sound storage system includes software that allows compositions to be retrieved for editing or playback purposes.

The playback system enables application programs to play musical compositions and sound effects. Through high-level function calls to the application program interface of the
playback system, an application has real-time, continuous control over sound sequences, including control of the following auditory properties:

- duration: one or more sound sequences can be played, stopped or restarted simultaneously
- song orchestration: e.g., a piano sequence may be changed to a flute sequence; such changes are constrained only by the instrumentation capability of the multi-timbral synthesizer
- amplitude: the volume of individual instruments in sound sequences may be altered
- frequency: the tempo of a sequence may be altered
- stereo panning: the sound orientation within the speaker system may be altered (similar to balance control in stereophonic systems)

Since an application program may dynamically alter the orchestration, amplitude, tempo and stereo panning of any sound sequence, a small number of stored sounds can be used to dynamically create a large number of generated sounds. These capabilities reduce system memory requirements yet provide sonic richness.

The sound generation system converts MIDI messages sent from the playback system into actual sounds. MIDI messages are sent from the playback component via a MIDI interface card to a multi-timbral synthesizer and electronic sampler that generate sounds. The generated sounds are then combined into stereo output by a mixer. The stereo output from the mixer is processed by the sound amplification system, which amplifies and plays the sounds through stereophonic speakers.

3. Infosound Applications

Human perception of sound and the ability to discriminate pitches, rhythmic patterns and melodic phrases has received extensive study over the years [9], [10], [11]. These studies show that auditory stimuli can be used as recognizable cues. Cues in the form of auditory icons based on users' world knowledge of everyday sounds have been used by Gaver [4, 12] to provide feedback on the status of computer operations such as copying and deleting files.
Edwards [5] used musical sound and synthesized speech to adapt a mouse-based application interface for visually handicapped users. In these applications, the audio interface was dynamically controlled by real-time events, but audio was not used to provide information that is difficult for non-impaired users to detect visually. In contrast, Bly [1], Mezrich et al. [2], and Morrison and Lunney [3] used audio to present complex numerical data that is difficult for users to understand visually, but the audio interface was not dynamically controlled by real-time events. Infosound builds upon both approaches, using audio to present complex data, i.e., program events that occur in rapid succession and/or in parallel, and providing programs real-time, continuous control of the auditory interface.

When using Infosound to design auditory interfaces, rapid, parallel program events are represented by two classes of auditory stimuli: everyday sounds and music. In the Sonic Finder [12], Gaver used everyday sounds to increase users' feeling of direct engagement, or mimesis, with the computer environment. He mapped everyday sounds to computer operations and relied on users making an analogy between the sound and the computer operation. For example, a pouring sound was used to denote the copying operation. Infosound applications that simulated everyday events further increased users' feelings of direct engagement by using the same sounds associated with the real event to represent the simulated event. For example, a simulated incoming telephone call was represented by a telephone ringing sound. When Infosound applications did not simulate everyday events or concepts that had associated sounds, music was used to represent the event or program state. Musical composition techniques, such as harmonization and style, were used to increase the users' feeling of direct engagement. For example, a six-part harmony represented six synchronized parallel processors.
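The two-class design rule above (reuse an everyday sound when one exists, otherwise fall back to a composed musical sequence) can be sketched as a small selection function. This is an illustrative sketch only; the function and naming convention are not part of Infosound:

```python
def stimulus_for(event, everyday_sounds):
    """Choose the class of auditory stimulus for an application event:
    reuse the real event's everyday sound when one is known, otherwise
    fall back to a composed musical motif (here just a naming scheme)."""
    if event in everyday_sounds:
        return ("everyday", everyday_sounds[event])
    return ("music", f"motif_{event}")

# An incoming call has a well-known real-world sound; processor
# synchronization does not, so it gets a composed motif.
call = stimulus_for("incoming_call", {"incoming_call": "telephone_ringing"})
sync = stimulus_for("six_processor_sync", {})
```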
Infosound is used by application programs to indicate when an event has occurred during program execution, i.e., sounds are associated with application events. The application event(s) may be momentary, occurring for a brief instant in time, e.g., a telephone receiver pickup, or sustained, occurring for a long duration, e.g., talking on a telephone. Momentary and sustained events may occur sequentially and in parallel. Whenever an event occurs during program execution, a message (or messages) to play an associated sound (or sounds) for one unit of time⁴ is sent to Infosound using the application program interface software functions described previously. Messages can be sent, and the sound played, each unit of time for the duration of the event. Thus application events that occur momentarily for one unit of time can be represented by a momentary sound that lasts one unit of time, and application events that have a longer duration can be represented by a sustained sound, i.e., a sound that is played each unit of time while the event lasts. For example, during execution of a telephone network simulation program, when a phone changes state from "not busy" to "ringing," a ringing sound can be played. To do this, the application program sends a message to the playback function to play the "ringing" sequence when it detects that the phone (currently not in use) has received an incoming call. The application program repeatedly sends this message until it detects that the ringing phone has been answered or that the originating calling party has hung up. When parallel events occur in applications, applications send simultaneous messages to Infosound. Infosound processes the concurrent messages and plays multiple sounds simultaneously. Associating sounds with events is similar to the visual gauge paradigm used to display state or value changes in object-oriented programming [13].
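The per-unit-of-time messaging model described above can be sketched as a discrete-time loop. The names (`run_simulation`, `play_once`) and the event representation are hypothetical illustrations, not the actual Infosound application program interface:

```python
def run_simulation(active_events, play_once, horizon):
    """Drive a discrete-time simulation: each unit of time, send one
    play message for every event active during that tick. A momentary
    event sounds exactly once; a sustained event repeats its play
    message each tick until the event no longer occurs."""
    for tick in range(horizon):
        for sound, ticks in active_events.items():
            if tick in ticks:
                play_once(sound)  # one play message per active event per tick

# Example: a phone rings for three ticks (sustained), then is answered
# (momentary receiver pickup at tick 4).
log = []
run_simulation(
    {"ringing": {1, 2, 3}, "receiver_pickup": {4}},
    log.append,
    horizon=6,
)
```

Parallel events would simply appear as several play messages issued within the same tick, which Infosound plays simultaneously.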
These techniques have been used to create audio interfaces for application programs that simulate real-time systems, including a telephone network, computer queues, and parallel computation. Infosound may also be used in conjunction with a visual display to further enhance the interface. In the following sections, the telephone network and parallel computation simulations are described.

3. Macintosh is a trademark of Apple Computer, Inc.
4. In our model of computation, time is assumed to be discrete and is described by a succession of non-negative integers.

Figure 3: Presentation of Telephone Network Simulation. Visual icons represent objects and events. Everyday sounds that imitate common sounds associated with the telephone are used to denote events.

3.1 Telephone Network Simulation

The telephone network simulation illustrates plain old telephone service with a user who randomly accesses the telephone network. In the simulation, all usual telephone activities are represented: the user can pick up the phone, wait for a dialtone, dial a phone number, wait for a network connection, receive a connection (or not receive a connection), talk, or hang up. To realistically portray telephone usage patterns, random delays and random events were included in the simulation.

The graphical display of the simulation is illustrated in Figure 3. User actions and the current state of the user's phone, destination phone, and network are displayed in text gauges. Two (graphical) phone icons and one network icon represent the user and destination phones and the network connection, respectively. The icons are animated to show the phone receivers being picked up or put down, and a network connection being established or disconnected.

The simulation's auditory display maps directly to the user's world knowledge of everyday telephone sounds. For example, the following sound sequences and speech are used in the simulation:

- telephone ringing
- telephone dialtone
- touchtone dialing
- telephone busy signal
- people talking
- telephone receiver hangup
- telephone receiver pickup

These sounds are momentary (e.g., telephone receiver hangup) as well as sustained (e.g., people talking). In the simulation, the duration of people talking was dynamic, since it was based on the output of a random number generator. Each unit of time these events occur during program execution, the simulation application sends a message to Infosound to play the associated sounds once.
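One way to organize such an auditory display is as a table from phone state transitions to the everyday sounds listed above. This is a sketch of the idea; the state names and table are illustrative, not taken from the paper's implementation:

```python
# Hypothetical (previous_state, new_state) -> sound-sequence
# associations mirroring the everyday telephone sounds listed above.
EVENT_SOUNDS = {
    ("not_busy", "ringing"):  "telephone_ringing",
    ("on_hook", "off_hook"):  "receiver_pickup",
    ("off_hook", "dialtone"): "dialtone",
    ("dialtone", "dialing"):  "touchtone_dialing",
    ("connecting", "busy"):   "busy_signal",
    ("connected", "talking"): "people_talking",
    ("talking", "on_hook"):   "receiver_hangup",
}

def sound_for(prev_state, new_state):
    """Return the sound sequence to request from the playback system
    for a phone state transition, or None if the transition is silent."""
    return EVENT_SOUNDS.get((prev_state, new_state))
```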
A message is sent each unit of time until the event no longer occurs. Simultaneously, the application sends a message to the graphics interface software to display visual icons. This allows the visual interface and the audio interface to be synchronized.

The audio interface helped the application program developer⁵ debug the simulation application, and engaged observers in the simulation. Even though the text gauges correctly identified the telephone state, call status, and telephone user activity, visual observation and comprehension of events and event sequences was reported to be difficult, since there were multiple types of events and rapid event transitions. It was easier for the developer to hear event sequences to evaluate the correctness of the simulation. The developer first debugged the simulation application by viewing its animated graphical representation and reading the (printed) program execution trace. Although the application program was judged to be correct, additional errors were detected when sound was added; e.g., the developer heard a dialtone continue after a hangup, which had not been detected visually even though the word "dialtone" stayed on the screen after the word "hangup" was displayed, and the "dialtone" event occurred in the printed trace after the "hangup" event. The auditory stimuli enabled the developer to detect an application program logic error that was not visually observed.

3.2 Parallel Computation Simulation

The parallel computation application simulated parallel computation by six processors, each calculating a side of a rotating cube. Each processor was responsible for calculating and presenting one side. When the six processors are not working synchronously, the six sides are graphically represented as disjoint planes floating on the screen independently, as illustrated in Figure 4.
When the processors are given a command to synchronize their computation, the six sides join together and a rotating cube is graphically displayed, as shown in Figure 5. Because there is no everyday sound associated with parallel computation, a musical sequence was created to correspond to each of the six sides of the cube. For example, one side was associated with a percussion sequence, one with a bass line, one with a piano melody, one with a flute, one with a violin, and one with a voice. When the processors are working asynchronously, the planes float independently, and the six musical sequences are not synchronized with each other and sound cacophonous. However, when the processors synchronize, the cube comes together and the music also synchronizes, becoming a six-part harmony. It is difficult to graphically represent a cube such that all six sides are simultaneously visible; however, the six sound sequences, each associated with a side of the cube, were distinguished auditorily. By combining music with graphics, synchronous parallel processing was more clearly dramatized for viewers. Music provided a natural representation of parallel computation concepts.

5. The developer was an experienced software developer with over ten years of experience.
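The synchronization effect can be modeled abstractly: give each processor's sequence a playback start offset, let asynchronous processors drift independently, and have the synchronize command zero all offsets so the six parts align as one harmony. This is an illustrative model of the effect, not the simulation's code:

```python
import random

def sequence_offsets(synchronized, n_processors=6, seed=7):
    """Per-processor start offsets (in time units) for the six musical
    sequences. Asynchronous processors carry independent nonzero
    offsets, so the sequences clash; synchronization zeroes every
    offset, so the six parts sound together as a six-part harmony."""
    if synchronized:
        return [0] * n_processors
    rng = random.Random(seed)  # seeded for a reproducible illustration
    return [rng.randrange(1, 16) for _ in range(n_processors)]

async_offsets = sequence_offsets(False)  # six differing offsets: cacophony
sync_offsets = sequence_offsets(True)    # all zero: six-part harmony
```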

Figure 4: Presentation of Asynchronous Parallel Computation. Asynchronous parallel computation is illustrated by six independently floating planes while six musical sequences that are not synchronized are played.

Figure 5: Presentation of Synchronous Parallel Computation. Synchronous parallel computation is illustrated by six planes united as a rotating cube while a six-part musical harmony is played.

4. Future Directions

We would like to integrate sound into the IC* program development environment to help software engineers understand and manipulate complex data structures by incorporating auditory stimuli into graphical data structure editors and displays. Data structures such as trees and semantic networks can quickly become large and complex, and users can be visually overwhelmed by their complexity. For example, in IC* programs, data and program states are represented in trees that change each unit of time. That is, each unit of time (during program execution) a new tree is created that contains the current program data and state. In complex programs with large amounts of frequently changing data that execute over a long period of time, it is difficult for program developers to remember the multiple parent and sibling relationships while navigating trees. Graphical editors with zooming, highlighting, and fisheye-viewing techniques [14] aid the user, but do not solve all viewing comprehension problems. We would like to incorporate sound into such editors to learn if sound can help users differentiate complex structures. Singular musical sequences could be used to identify hierarchical, temporal, and component relationships in structures. For example, structure hierarchies could be represented by different octaves; sound tempo might indicate passage of time; and singular tunes might indicate component relationships.
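The octave idea can be made concrete with a small sketch: render each level of a structure hierarchy one octave (12 semitones) apart, clamped to the 0-127 MIDI note range. The base pitch and the clamping policy are our assumptions, not a design from the paper:

```python
def pitch_for_depth(depth, base_pitch=36):
    """Map a node's depth in a structure hierarchy to a MIDI pitch:
    each level sounds one octave (12 semitones) above the level
    before it, clamped to the valid MIDI note range 0-127."""
    return min(base_pitch + 12 * depth, 127)

# Root through great-grandchildren sound at pitches [36, 48, 60, 72],
# so nesting depth is audible as register.
pitches = [pitch_for_depth(d) for d in range(4)]
```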
When users wished to understand a structure, they could listen to the structure's description instead of viewing multiple graphical and/or textual descriptions. There might also be an auditory equivalent to the fisheye-viewing technique, using foreground and background sounds. For example, a component structure under inspection could be represented by one musical instrument (e.g., a trumpet), and a complementary orchestration could be playing in the background to remind the user of the superstructure(s). These techniques are analogous to Prokofiev's use of musical themes to identify characters in Peter and the Wolf and the use of leitmotivs in Wagnerian operas.

Since constant auditory stimuli (such as the "beeping" capabilities used in many systems today) may become annoying or habitually ignored, we want to discover what effects sound and rhythm patterns have on people psychologically and physiologically. For example, playing faster and slower drumming patterns has been found to cause certain individuals to accelerate or decelerate corresponding heart rhythms [15]. Analogies drawn from research in the use of sound to enhance graphical displays of multivariate, time-varying and logarithmic data [1], [2], [3], and methods of auditory icon design [4, 16], may provide

additional solutions.

In early Infosound prototypes, application developers expressed interest in designing and building auditory interfaces but found the toolkit difficult to use. This problem was corrected with the current prototype, but another problem was discovered. Application developers not trained in music theory expressed difficulty in composing and altering musical sequences. In the current prototype, application developers easily learned what to do to create musical compositions, but they did not know how to compose musical sequences. In our applications, an experienced composer created every music sequence used in the auditory interfaces; software developers could not hear tonal differences and/or did not have the skills to compose music sequences. Further investigation is needed to determine how application developers not trained in music composition can be aided by automatic music composition algorithms or, alternatively, by a music library retrieval system that could match an application developer's music needs to stored musical sequences.

Sound appears to be a potentially useful modality for conveying information back to the user in an attention-capturing manner, and further investigation, including empirical validation of user preferences, is needed to determine the value of these exploratory techniques.

5. Conclusion

Infosound is an auditory interface toolkit for designing musical sequences and everyday sounds, storing designed sounds, and associating sounds with application events, allowing applications to have real-time, continuous control over sounds during execution. Infosound has been used in applications that simulate a telephone network and parallel computation. In these applications, everyday sounds were used to present program events that have well-known associated everyday sounds in the real world, and musical sequences were used to present abstract events or concepts that do not have an everyday sound association.
Program events represented auditorily included momentary and sustained events that occurred sequentially and/or in parallel. Our experience with Infosound suggests that auditory interfaces can help application developers and observers detect program events and event sequences that either occur in rapid succession, and hence are hard to perceive visually, or are so complicated that they are not easily represented visually. Using Infosound has suggested new ways in which sound can be useful for human-computer interaction and has identified problems in creating auditory interfaces. Future research includes incorporating auditory stimuli into graphical data structure editors to learn how sound can aid navigation and understanding of complex data structures, investigation of methods to aid program developers in composing musical sequences, and empirical evaluation of user preferences for auditory versus non-auditory interfaces.

5.1 Acknowledgments

We would like to thank T. Reingold and J. Vollaro, who helped develop the graphic displays, and colleagues who reviewed this paper, including Sara Bly, Dennis Egan, Judy List and Lynn Streeter.

REFERENCES

[1] Bly, S., "Presenting Information in Sound," Proceedings of the CHI '82 Conference on Human Factors in Computer Systems, New York: ACM, 1982.
[2] Mezrich, J.J., S. Frysinger, and R. Slivjanovski, "Dynamic Representation of Multivariate Time Series Data," Journal of the American Statistical Association 79 (1984).
[3] Bly, S. (Ed.), "Communicating with Sound," Proceedings of the CHI '85 Conference on Human Factors in Computer Systems, New York: ACM, 1985.
[4] Gaver, W., "The Sonic Finder: An Interface That Uses Auditory Icons," Human-Computer Interaction 4(1), Spring 1989.
[5] Edwards, A., "Soundtrack: An Auditory Interface for Blind Users," Human-Computer Interaction 4(1), Spring 1989.
[6] Cameron, E.J., D.M. Cohen, B. Gopinath, W.M. Keese III, L. Ness, P. Uppaluru, and J.R. Vollaro, "The IC* Model and Environment," IEEE Transactions on Software Engineering 14, 3 (March 1988).
[7] Cameron, E.J., B. Gopinath, P.
Metzger, and T. Reingold, "INFOPROBE, a Utility for the Animation of IC* Programs," Proceedings of the 22nd Annual Hawaii International Conference on System Sciences, Hawaii, January 1989.
[8] DeFuria, S. and J. Scacciaferro, The MIDI Resource Book, Pompton Lakes, NJ: Third Earth Publishing, Inc.
[9] Deutsch, D., "Organizational Processes in Music," in Music, Mind and Brain, M. Clynes, Ed., New York: Plenum Press.
[10] Fraisse, P., "Time and Rhythm Perception," in Handbook of Perception, E. Carterette and M. Friedman, Eds., New York: Academic Press.
[11] Gabrielsson, A., "Experimental Research on Rhythm," Human. Assoc. Review 30, 1979.
[12] Gaver, W., "Auditory Icons: Using Sound in Computer Interfaces," Human-Computer Interaction 2 (1986).
[13] Stefik, M. and D. Bobrow, "Object-Oriented Programming: Themes and Variations," AI Magazine 7, 3 (Fall 1986).
[14] Furnas, G.W., "Generalized Fisheye Views," Human Factors in Computing Systems CHI '86 Conference Proceedings, Boston, April 13-17, 1986.
[15] Harrer and Harrer, "Music, Emotion and Autonomic Function," in Music and the Brain, M. Critchley and R. Henson, Eds., London: Wm. Heinemann.
[16] Blattner, M., D. Sumikawa, and R. Greenberg, "Earcons and Icons: Their Structure and Common Design Principles," Human-Computer Interaction 4(1), Spring 1989.


More information

ACTION! SAMPLER. Virtual Instrument and Sample Collection

ACTION! SAMPLER. Virtual Instrument and Sample Collection ACTION! SAMPLER Virtual Instrument and Sample Collection User's Manual Forward Thank You for choosing the Action! Sampler Virtual Instrument, Loop, Hit, and Music Collection from CDSoundMaster. We are

More information

Creative Computing II

Creative Computing II Creative Computing II Christophe Rhodes c.rhodes@gold.ac.uk Autumn 2010, Wednesdays: 10:00 12:00: RHB307 & 14:00 16:00: WB316 Winter 2011, TBC The Ear The Ear Outer Ear Outer Ear: pinna: flap of skin;

More information

Tiptop audio z-dsp.

Tiptop audio z-dsp. Tiptop audio z-dsp www.tiptopaudio.com Introduction Welcome to the world of digital signal processing! The Z-DSP is a modular synthesizer component that can process and generate audio using a dedicated

More information

THE INTERACTION BETWEEN MELODIC PITCH CONTENT AND RHYTHMIC PERCEPTION. Gideon Broshy, Leah Latterner and Kevin Sherwin

THE INTERACTION BETWEEN MELODIC PITCH CONTENT AND RHYTHMIC PERCEPTION. Gideon Broshy, Leah Latterner and Kevin Sherwin THE INTERACTION BETWEEN MELODIC PITCH CONTENT AND RHYTHMIC PERCEPTION. BACKGROUND AND AIMS [Leah Latterner]. Introduction Gideon Broshy, Leah Latterner and Kevin Sherwin Yale University, Cognition of Musical

More information

24-29 April1993 lnliiirchr9

24-29 April1993 lnliiirchr9 24-29 April1993 lnliiirchr9 An Evaluation of Earcons for Use in Auditory Human-Computer nterfaces Stephen A. Brewster, Peter C. Wright and Alistair D. N. Edwards Department of Computer Science University

More information

Music Representations

Music Representations Advanced Course Computer Science Music Processing Summer Term 00 Music Representations Meinard Müller Saarland University and MPI Informatik meinard@mpi-inf.mpg.de Music Representations Music Representations

More information

NOTICE. The information contained in this document is subject to change without notice.

NOTICE. The information contained in this document is subject to change without notice. NOTICE The information contained in this document is subject to change without notice. Toontrack Music AB makes no warranty of any kind with regard to this material, including, but not limited to, the

More information

Auditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are

Auditory Illusions. Diana Deutsch. The sounds we perceive do not always correspond to those that are In: E. Bruce Goldstein (Ed) Encyclopedia of Perception, Volume 1, Sage, 2009, pp 160-164. Auditory Illusions Diana Deutsch The sounds we perceive do not always correspond to those that are presented. When

More information

Edit Menu. To Change a Parameter Place the cursor below the parameter field. Rotate the Data Entry Control to change the parameter value.

Edit Menu. To Change a Parameter Place the cursor below the parameter field. Rotate the Data Entry Control to change the parameter value. The Edit Menu contains four layers of preset parameters that you can modify and then save as preset information in one of the user preset locations. There are four instrument layers in the Edit menu. See

More information

y POWER USER MUSIC PRODUCTION and PERFORMANCE With the MOTIF ES Mastering the Sample SLICE function

y POWER USER MUSIC PRODUCTION and PERFORMANCE With the MOTIF ES Mastering the Sample SLICE function y POWER USER MUSIC PRODUCTION and PERFORMANCE With the MOTIF ES Mastering the Sample SLICE function Phil Clendeninn Senior Product Specialist Technology Products Yamaha Corporation of America Working with

More information

SP-500 Main Features. EasyStart CONTENTS

SP-500 Main Features. EasyStart CONTENTS EasyStart 88 key RH2 (Real Weighted Hammer Action 2) keyboard. Different degrees of resistance from top to bottom. Velocity sensitive with 6 touch curves for custom response. TouchView Graphical user interface.

More information

Bite-Sized Music Lessons

Bite-Sized Music Lessons Bite-Sized Music Lessons A series of F-10 music lessons for implementation in the classroom Conditions of use These Materials are freely available for download and educational use. These resources were

More information

TABLE OF CONTENTS CHAPTER 1 PREREQUISITES FOR WRITING AN ARRANGEMENT... 1

TABLE OF CONTENTS CHAPTER 1 PREREQUISITES FOR WRITING AN ARRANGEMENT... 1 TABLE OF CONTENTS CHAPTER 1 PREREQUISITES FOR WRITING AN ARRANGEMENT... 1 1.1 Basic Concepts... 1 1.1.1 Density... 1 1.1.2 Harmonic Definition... 2 1.2 Planning... 2 1.2.1 Drafting a Plan... 2 1.2.2 Choosing

More information

Keyboard Music. Operation Manual. Gary Shigemoto Brandon Stark

Keyboard Music. Operation Manual. Gary Shigemoto Brandon Stark Keyboard Music Operation Manual Gary Shigemoto Brandon Stark Music 147 / CompSci 190 / EECS195 Ace 277 Computer Audio and Music Programming Final Project Documentation Keyboard Music: Operating Manual

More information

Brain.fm Theory & Process

Brain.fm Theory & Process Brain.fm Theory & Process At Brain.fm we develop and deliver functional music, directly optimized for its effects on our behavior. Our goal is to help the listener achieve desired mental states such as

More information

S I N E V I B E S FRACTION AUDIO SLICING WORKSTATION

S I N E V I B E S FRACTION AUDIO SLICING WORKSTATION S I N E V I B E S FRACTION AUDIO SLICING WORKSTATION INTRODUCTION Fraction is a plugin for deep on-the-fly remixing and mangling of sound. It features 8x independent slicers which record and repeat short

More information

Communicating graphical information to blind users using music : the role of context

Communicating graphical information to blind users using music : the role of context Loughborough University Institutional Repository Communicating graphical information to blind users using music : the role of context This item was submitted to Loughborough University's Institutional

More information

Course Overview. Assessments What are the essential elements and. aptitude and aural acuity? meaning and expression in music?

Course Overview. Assessments What are the essential elements and. aptitude and aural acuity? meaning and expression in music? BEGINNING PIANO / KEYBOARD CLASS This class is open to all students in grades 9-12 who wish to acquire basic piano skills. It is appropriate for students in band, orchestra, and chorus as well as the non-performing

More information

DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL

DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL Jonna Häkkilä Nokia Mobile Phones Research and Technology Access Elektroniikkatie 3, P.O.Box 50, 90571 Oulu, Finland jonna.hakkila@nokia.com Sami Ronkainen

More information

The Keyboard. Introduction to J9soundadvice KS3 Introduction to the Keyboard. Relevant KS3 Level descriptors; Tasks.

The Keyboard. Introduction to J9soundadvice KS3 Introduction to the Keyboard. Relevant KS3 Level descriptors; Tasks. Introduction to The Keyboard Relevant KS3 Level descriptors; Level 3 You can. a. Perform simple parts rhythmically b. Improvise a repeated pattern. c. Recognise different musical elements. d. Make improvements

More information

Arts Education Essential Standards Crosswalk: MUSIC A Document to Assist With the Transition From the 2005 Standard Course of Study

Arts Education Essential Standards Crosswalk: MUSIC A Document to Assist With the Transition From the 2005 Standard Course of Study NCDPI This document is designed to help North Carolina educators teach the Common Core and Essential Standards (Standard Course of Study). NCDPI staff are continually updating and improving these tools

More information

Topic 10. Multi-pitch Analysis

Topic 10. Multi-pitch Analysis Topic 10 Multi-pitch Analysis What is pitch? Common elements of music are pitch, rhythm, dynamics, and the sonic qualities of timbre and texture. An auditory perceptual attribute in terms of which sounds

More information

Fraction by Sinevibes audio slicing workstation

Fraction by Sinevibes audio slicing workstation Fraction by Sinevibes audio slicing workstation INTRODUCTION Fraction is an effect plugin for deep real-time manipulation and re-engineering of sound. It features 8 slicers which record and repeat the

More information

Pitch. The perceptual correlate of frequency: the perceptual dimension along which sounds can be ordered from low to high.

Pitch. The perceptual correlate of frequency: the perceptual dimension along which sounds can be ordered from low to high. Pitch The perceptual correlate of frequency: the perceptual dimension along which sounds can be ordered from low to high. 1 The bottom line Pitch perception involves the integration of spectral (place)

More information

Ben Neill and Bill Jones - Posthorn

Ben Neill and Bill Jones - Posthorn Ben Neill and Bill Jones - Posthorn Ben Neill Assistant Professor of Music Ramapo College of New Jersey 505 Ramapo Valley Road Mahwah, NJ 07430 USA bneill@ramapo.edu Bill Jones First Pulse Projects 53

More information

E X P E R I M E N T 1

E X P E R I M E N T 1 E X P E R I M E N T 1 Getting to Know Data Studio Produced by the Physics Staff at Collin College Copyright Collin College Physics Department. All Rights Reserved. University Physics, Exp 1: Getting to

More information

Lab experience 1: Introduction to LabView

Lab experience 1: Introduction to LabView Lab experience 1: Introduction to LabView LabView is software for the real-time acquisition, processing and visualization of measured data. A LabView program is called a Virtual Instrument (VI) because

More information

Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical tension and relaxation schemas

Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical tension and relaxation schemas Influence of timbre, presence/absence of tonal hierarchy and musical training on the perception of musical and schemas Stella Paraskeva (,) Stephen McAdams (,) () Institut de Recherche et de Coordination

More information

Elements of Music. How can we tell music from other sounds?

Elements of Music. How can we tell music from other sounds? Elements of Music How can we tell music from other sounds? Sound begins with the vibration of an object. The vibrations are transmitted to our ears by a medium usually air. As a result of the vibrations,

More information

Subjective Similarity of Music: Data Collection for Individuality Analysis

Subjective Similarity of Music: Data Collection for Individuality Analysis Subjective Similarity of Music: Data Collection for Individuality Analysis Shota Kawabuchi and Chiyomi Miyajima and Norihide Kitaoka and Kazuya Takeda Nagoya University, Nagoya, Japan E-mail: shota.kawabuchi@g.sp.m.is.nagoya-u.ac.jp

More information

Pitch Perception. Roger Shepard

Pitch Perception. Roger Shepard Pitch Perception Roger Shepard Pitch Perception Ecological signals are complex not simple sine tones and not always periodic. Just noticeable difference (Fechner) JND, is the minimal physical change detectable

More information

In total 2 project plans are submitted. Deadline for Plan 1 is on at 23:59. The plan must contain the following information:

In total 2 project plans are submitted. Deadline for Plan 1 is on at 23:59. The plan must contain the following information: Electronics II 2014 final project instructions (version 1) General: Your task is to design and implement an electric dice, an electric lock for a safe, a heart rate monitor, an electronic Braille translator,

More information

K-12 Performing Arts - Music Standards Lincoln Community School Sources: ArtsEdge - National Standards for Arts Education

K-12 Performing Arts - Music Standards Lincoln Community School Sources: ArtsEdge - National Standards for Arts Education K-12 Performing Arts - Music Standards Lincoln Community School Sources: ArtsEdge - National Standards for Arts Education Grades K-4 Students sing independently, on pitch and in rhythm, with appropriate

More information

After Direct Manipulation - Direct Sonification

After Direct Manipulation - Direct Sonification After Direct Manipulation - Direct Sonification Mikael Fernström, Caolan McNamara Interaction Design Centre, University of Limerick Ireland Abstract The effectiveness of providing multiple-stream audio

More information

Vocal Processor. Operating instructions. English

Vocal Processor. Operating instructions. English Vocal Processor Operating instructions English Contents VOCAL PROCESSOR About the Vocal Processor 1 The new features offered by the Vocal Processor 1 Loading the Operating System 2 Connections 3 Activate

More information

Music 175: Pitch II. Tamara Smyth, Department of Music, University of California, San Diego (UCSD) June 2, 2015

Music 175: Pitch II. Tamara Smyth, Department of Music, University of California, San Diego (UCSD) June 2, 2015 Music 175: Pitch II Tamara Smyth, trsmyth@ucsd.edu Department of Music, University of California, San Diego (UCSD) June 2, 2015 1 Quantifying Pitch Logarithms We have seen several times so far that what

More information

The Keyboard. An Introduction to. 1 j9soundadvice 2013 KS3 Keyboard. Relevant KS3 Level descriptors; The Tasks. Level 4

The Keyboard. An Introduction to. 1 j9soundadvice 2013 KS3 Keyboard. Relevant KS3 Level descriptors; The Tasks. Level 4 An Introduction to The Keyboard Relevant KS3 Level descriptors; Level 3 You can. a. Perform simple parts rhythmically b. Improvise a repeated pattern. c. Recognise different musical elements. d. Make improvements

More information

PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF)

PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF) PSYCHOACOUSTICS & THE GRAMMAR OF AUDIO (By Steve Donofrio NATF) "The reason I got into playing and producing music was its power to travel great distances and have an emotional impact on people" Quincey

More information

Connecticut State Department of Education Music Standards Middle School Grades 6-8

Connecticut State Department of Education Music Standards Middle School Grades 6-8 Connecticut State Department of Education Music Standards Middle School Grades 6-8 Music Standards Vocal Students will sing, alone and with others, a varied repertoire of songs. Students will sing accurately

More information

Perspectives on the Design of Musical Auditory Interfaces

Perspectives on the Design of Musical Auditory Interfaces Perspectives on the Design of Musical Auditory Interfaces Grégory Leplâtre and Stephen A. Brewster Department of Computing Science University of Glasgow Glasgow, UK Tel: (+44) 0141 339 8855 Fax: (+44)

More information

Real-time Granular Sampling Using the IRCAM Signal Processing Workstation. Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France

Real-time Granular Sampling Using the IRCAM Signal Processing Workstation. Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France Cort Lippe 1 Real-time Granular Sampling Using the IRCAM Signal Processing Workstation Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France Running Title: Real-time Granular Sampling [This copy of this

More information

Chapter 4. Logic Design

Chapter 4. Logic Design Chapter 4 Logic Design 4.1 Introduction. In previous Chapter we studied gates and combinational circuits, which made by gates (AND, OR, NOT etc.). That can be represented by circuit diagram, truth table

More information

BUSES IN COMPUTER ARCHITECTURE

BUSES IN COMPUTER ARCHITECTURE BUSES IN COMPUTER ARCHITECTURE The processor, main memory, and I/O devices can be interconnected by means of a common bus whose primary function is to provide a communication path for the transfer of data.

More information

Chapter Five: The Elements of Music

Chapter Five: The Elements of Music Chapter Five: The Elements of Music What Students Should Know and Be Able to Do in the Arts Education Reform, Standards, and the Arts Summary Statement to the National Standards - http://www.menc.org/publication/books/summary.html

More information

Piano Teacher Program

Piano Teacher Program Piano Teacher Program Associate Teacher Diploma - B.C.M.A. The Associate Teacher Diploma is open to candidates who have attained the age of 17 by the date of their final part of their B.C.M.A. examination.

More information

Training Document for Comprehensive Automation Solutions Totally Integrated Automation (T I A)

Training Document for Comprehensive Automation Solutions Totally Integrated Automation (T I A) Training Document for Comprehensive Automation Solutions Totally Integrated Automation (T I A) MODULE T I A Training Document Page 1 of 66 Module This document has been written by Siemens AG for training

More information

XYNTHESIZR User Guide 1.5

XYNTHESIZR User Guide 1.5 XYNTHESIZR User Guide 1.5 Overview Main Screen Sequencer Grid Bottom Panel Control Panel Synth Panel OSC1 & OSC2 Amp Envelope LFO1 & LFO2 Filter Filter Envelope Reverb Pan Delay SEQ Panel Sequencer Key

More information

& Ψ. study guide. Music Psychology ... A guide for preparing to take the qualifying examination in music psychology.

& Ψ. study guide. Music Psychology ... A guide for preparing to take the qualifying examination in music psychology. & Ψ study guide Music Psychology.......... A guide for preparing to take the qualifying examination in music psychology. Music Psychology Study Guide In preparation for the qualifying examination in music

More information

6.5 Percussion scalograms and musical rhythm

6.5 Percussion scalograms and musical rhythm 6.5 Percussion scalograms and musical rhythm 237 1600 566 (a) (b) 200 FIGURE 6.8 Time-frequency analysis of a passage from the song Buenos Aires. (a) Spectrogram. (b) Zooming in on three octaves of the

More information

Bite-Sized Music Lessons

Bite-Sized Music Lessons Bite-Sized Music Lessons A series of F-10 music lessons for implementation in the classroom Conditions of use These Materials are freely available for download and educational use. These resources were

More information

PRESCOTT UNIFIED SCHOOL DISTRICT District Instructional Guide January 2016

PRESCOTT UNIFIED SCHOOL DISTRICT District Instructional Guide January 2016 Grade Level: 9 12 Subject: Jazz Ensemble Time: School Year as listed Core Text: Time Unit/Topic Standards Assessments 1st Quarter Arrange a melody Creating #2A Select and develop arrangements, sections,

More information

SHORT TERM PITCH MEMORY IN WESTERN vs. OTHER EQUAL TEMPERAMENT TUNING SYSTEMS

SHORT TERM PITCH MEMORY IN WESTERN vs. OTHER EQUAL TEMPERAMENT TUNING SYSTEMS SHORT TERM PITCH MEMORY IN WESTERN vs. OTHER EQUAL TEMPERAMENT TUNING SYSTEMS Areti Andreopoulou Music and Audio Research Laboratory New York University, New York, USA aa1510@nyu.edu Morwaread Farbood

More information

A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation

A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation A Composition for Clarinet and Real-Time Signal Processing: Using Max on the IRCAM Signal Processing Workstation Cort Lippe IRCAM, 31 rue St-Merri, Paris, 75004, France email: lippe@ircam.fr Introduction.

More information

MELODIC AND RHYTHMIC CONTRASTS IN EMOTIONAL SPEECH AND MUSIC

MELODIC AND RHYTHMIC CONTRASTS IN EMOTIONAL SPEECH AND MUSIC MELODIC AND RHYTHMIC CONTRASTS IN EMOTIONAL SPEECH AND MUSIC Lena Quinto, William Forde Thompson, Felicity Louise Keating Psychology, Macquarie University, Australia lena.quinto@mq.edu.au Abstract Many

More information

Music BCI ( )

Music BCI ( ) Music BCI (006-2015) Matthias Treder, Benjamin Blankertz Technische Universität Berlin, Berlin, Germany September 5, 2016 1 Introduction We investigated the suitability of musical stimuli for use in a

More information

Getting started with music theory

Getting started with music theory Getting started with music theory This software allows learning the bases of music theory. It helps learning progressively the position of the notes on the range in both treble and bass clefs. Listening

More information

Liquid Mix Plug-in. User Guide FA

Liquid Mix Plug-in. User Guide FA Liquid Mix Plug-in User Guide FA0000-01 1 1. COMPRESSOR SECTION... 3 INPUT LEVEL...3 COMPRESSOR EMULATION SELECT...3 COMPRESSOR ON...3 THRESHOLD...3 RATIO...4 COMPRESSOR GRAPH...4 GAIN REDUCTION METER...5

More information

NCEA Level 2 Music (91275) 2012 page 1 of 6. Assessment Schedule 2012 Music: Demonstrate aural understanding through written representation (91275)

NCEA Level 2 Music (91275) 2012 page 1 of 6. Assessment Schedule 2012 Music: Demonstrate aural understanding through written representation (91275) NCEA Level 2 Music (91275) 2012 page 1 of 6 Assessment Schedule 2012 Music: Demonstrate aural understanding through written representation (91275) Evidence Statement Question with Merit with Excellence

More information

J-Syncker A computational implementation of the Schillinger System of Musical Composition.

J-Syncker A computational implementation of the Schillinger System of Musical Composition. J-Syncker A computational implementation of the Schillinger System of Musical Composition. Giuliana Silva Bezerra Departamento de Matemática e Informática Aplicada (DIMAp) Universidade Federal do Rio Grande

More information

The Human Features of Music.

The Human Features of Music. The Human Features of Music. Bachelor Thesis Artificial Intelligence, Social Studies, Radboud University Nijmegen Chris Kemper, s4359410 Supervisor: Makiko Sadakata Artificial Intelligence, Social Studies,

More information

The CAITLIN Auralization System: Hierarchical Leitmotif Design as a Clue to Program Comprehension

The CAITLIN Auralization System: Hierarchical Leitmotif Design as a Clue to Program Comprehension The CAITLIN Auralization System: Hierarchical Leitmotif Design as a Clue to Program Comprehension James L. Alty LUTCHI Research Centre Department of Computer Studies Loughborough University Loughborough

More information

Music Theory: A Very Brief Introduction

Music Theory: A Very Brief Introduction Music Theory: A Very Brief Introduction I. Pitch --------------------------------------------------------------------------------------- A. Equal Temperament For the last few centuries, western composers

More information

Copyright 2015 Scott Hughes Do the right thing.

Copyright 2015 Scott Hughes Do the right thing. tonic. how to these cards: Improvisation is the most direct link between the music in your head and the music in your instrument. The purpose of Tonic is to strengthen that link. It does this by encouraging

More information

ALGORHYTHM. User Manual. Version 1.0

ALGORHYTHM. User Manual. Version 1.0 !! ALGORHYTHM User Manual Version 1.0 ALGORHYTHM Algorhythm is an eight-step pulse sequencer for the Eurorack modular synth format. The interface provides realtime programming of patterns and sequencer

More information

EMERGENT SOUNDSCAPE COMPOSITION: REFLECTIONS ON VIRTUALITY

EMERGENT SOUNDSCAPE COMPOSITION: REFLECTIONS ON VIRTUALITY EMERGENT SOUNDSCAPE COMPOSITION: REFLECTIONS ON VIRTUALITY by Mark Christopher Brady Bachelor of Science (Honours), University of Cape Town, 1994 THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

More information

(12) Patent Application Publication (10) Pub. No.: US 2006/ A1

(12) Patent Application Publication (10) Pub. No.: US 2006/ A1 (19) United States US 20060288846A1 (12) Patent Application Publication (10) Pub. No.: US 2006/0288846A1 Logan (43) Pub. Date: Dec. 28, 2006 (54) MUSIC-BASED EXERCISE MOTIVATION (52) U.S. Cl.... 84/612

More information

HBI Database. Version 2 (User Manual)

HBI Database. Version 2 (User Manual) HBI Database Version 2 (User Manual) St-Petersburg, Russia 2007 2 1. INTRODUCTION...3 2. RECORDING CONDITIONS...6 2.1. EYE OPENED AND EYE CLOSED CONDITION....6 2.2. VISUAL CONTINUOUS PERFORMANCE TASK...6

More information

Assessment Schedule 2017 Music: Demonstrate aural understanding through written representation (91275)

Assessment Schedule 2017 Music: Demonstrate aural understanding through written representation (91275) NC Level 2 Music (91275) 2017 page 1 of 7 ssessment Schedule 2017 Music: emonstrate aural understanding through written representation (91275) ssessment Criteria with with emonstrating aural understanding

More information

SigPlay User s Guide

SigPlay User s Guide SigPlay User s Guide . . SigPlay32 User's Guide? Version 3.4 Copyright? 2001 TDT. All rights reserved. No part of this manual may be reproduced or transmitted in any form or by any means, electronic or

More information

Hip Hop Robot. Semester Project. Cheng Zu. Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich

Hip Hop Robot. Semester Project. Cheng Zu. Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich Distributed Computing Hip Hop Robot Semester Project Cheng Zu zuc@student.ethz.ch Distributed Computing Group Computer Engineering and Networks Laboratory ETH Zürich Supervisors: Manuel Eichelberger Prof.

More information

diptych Paul Schuette Fall 2008 a DUET for harpsichord, electric piano and live electronics

diptych Paul Schuette Fall 2008 a DUET for harpsichord, electric piano and live electronics diptych a DUET for harpsichord, electric piano and live electronics Paul Schuette Fall 2008 Program Note! diptych explores the differences beteen ho human beings and machines can process and understand

More information

The Digital Audio Workstation

The Digital Audio Workstation The Digital Audio Workstation The recording studio traditionally consisted of a large collection of hardware devices that were necessary to record, mix and process audio. That paradigm persisted until

More information

Enhancing Music Maps

Enhancing Music Maps Enhancing Music Maps Jakob Frank Vienna University of Technology, Vienna, Austria http://www.ifs.tuwien.ac.at/mir frank@ifs.tuwien.ac.at Abstract. Private as well as commercial music collections keep growing

More information

THE EFFECT OF EXPERTISE IN EVALUATING EMOTIONS IN MUSIC

THE EFFECT OF EXPERTISE IN EVALUATING EMOTIONS IN MUSIC THE EFFECT OF EXPERTISE IN EVALUATING EMOTIONS IN MUSIC Fabio Morreale, Raul Masu, Antonella De Angeli, Patrizio Fava Department of Information Engineering and Computer Science, University Of Trento, Italy

More information

Central Valley School District Music 1 st Grade August September Standards August September Standards

Central Valley School District Music 1 st Grade August September Standards August September Standards Central Valley School District Music 1 st Grade August September Standards August September Standards Classroom expectations Echo songs Differentiating between speaking and singing voices Using singing

More information

UNIT III. Combinational Circuit- Block Diagram. Sequential Circuit- Block Diagram

UNIT III. Combinational Circuit- Block Diagram. Sequential Circuit- Block Diagram UNIT III INTRODUCTION In combinational logic circuits, the outputs at any instant of time depend only on the input signals present at that time. For a change in input, the output occurs immediately. Combinational

More information

PLANE TESSELATION WITH MUSICAL-SCALE TILES AND BIDIMENSIONAL AUTOMATIC COMPOSITION

PLANE TESSELATION WITH MUSICAL-SCALE TILES AND BIDIMENSIONAL AUTOMATIC COMPOSITION PLANE TESSELATION WITH MUSICAL-SCALE TILES AND BIDIMENSIONAL AUTOMATIC COMPOSITION ABSTRACT We present a method for arranging the notes of certain musical scales (pentatonic, heptatonic, Blues Minor and

More information

Getting started with music theory

Getting started with music theory Getting started with music theory This software allows to learn the bases of music theory. It helps learning progressively the position of the notes on the range and piano keyboard in both treble and bass

More information

Applying lmprovisationbuilder to Interactive Composition with MIDI Piano

Applying lmprovisationbuilder to Interactive Composition with MIDI Piano San Jose State University From the SelectedWorks of Brian Belet 1996 Applying lmprovisationbuilder to Interactive Composition with MIDI Piano William Walker Brian Belet, San Jose State University Available

More information

Getting Started Guide for the V Series

Getting Started Guide for the V Series product pic here Getting Started Guide for the V Series Version 9.0.6 March 2010 Edition 3725-24476-003/A Trademark Information POLYCOM, the Polycom Triangles logo and the names and marks associated with

More information

A computer-controlled system for the recording modification and presentation of two-channel musical stirnuli

A computer-controlled system for the recording modification and presentation of two-channel musical stirnuli Behavior Research Methods & Instrumentanon 1976, Vol. 8(1), 24-28 COMPUTER TECHNOLOGY A computer-controlled system for the recording modification and presentation of two-channel musical stirnuli R. BIRD

More information

CS8803: Advanced Digital Design for Embedded Hardware

CS8803: Advanced Digital Design for Embedded Hardware CS883: Advanced Digital Design for Embedded Hardware Lecture 4: Latches, Flip-Flops, and Sequential Circuits Instructor: Sung Kyu Lim (limsk@ece.gatech.edu) Website: http://users.ece.gatech.edu/limsk/course/cs883

More information

SMS Composer and SMS Conductor: Applications for Spectral Modeling Synthesis Composition and Performance

SMS Composer and SMS Conductor: Applications for Spectral Modeling Synthesis Composition and Performance SMS Composer and SMS Conductor: Applications for Spectral Modeling Synthesis Composition and Performance Eduard Resina Audiovisual Institute, Pompeu Fabra University Rambla 31, 08002 Barcelona, Spain eduard@iua.upf.es

More information