Application of a Musical-based Interaction System to the Waseda Flutist Robot WF-4RIV: Development Results and Performance Experiments


The Fourth IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Roma, Italy, June 24-27, 2012

Klaus Petersen (IEEE Member), Jorge Solis (IEEE Member) and Atsuo Takanishi (IEEE Member)

Abstract: During several years of development, the hardware of the anthropomorphic flutist robot Waseda Flutist WF-4RIV has been continuously improved. The robot is currently able to play the flute at the level of an intermediate human player. Recently we have focused our research on the interactivity of the robot's performance. Initially the robot was only able to play a static performance that could not be actively controlled by a partner musician. In a realistic performance set-up, in a band or an orchestra, musicians need to interact in order to create a performance that gives a natural and dynamic impression to the audience. In this publication we present the latest developments on the integration of a Musical-based Interaction System (MbIS) with the WF-4RIV. This human-robot interaction system allows human musicians to communicate musically with the flutist robot in a natural way through audio-visual cues. Here we summarize our previous results, present the latest extensions to the system, and concentrate in particular on experimental applications of the system. We evaluate our interactive performance system using three different methods: a comparison of a passive (non-interactive) and an interactive performance, an evaluation of the technical functionality of the interaction system as a whole, and an examination of the MbIS from a user perspective through a user survey including amateur and professional musicians.
We present experimental results showing that our Musical-based Interaction System extends the anthropomorphic design of the flutist robot to allow increasingly interactive, natural musical performances with human musicians.

I. INTRODUCTION

A. Research Objective

One type of robot that performs rich communication with humans is the musical performance robot. Anthropomorphic musical performance robots have the ability to mechanically emulate the human way of playing a musical instrument. These technically very complex robots reach high performance levels that are comparable to the skill of professional human musicians. A feature that in most cases they still lack, however, is the ability to interact with other musicians. Playing a fixed, invariable sequence, they may be able to perform together with other players, but as soon as there is a spontaneous variation in the musical performance, the human and the robot will become desynchronized.

This work has been kindly supported by the GCOE Global Robot Academia program of Waseda University. Klaus Petersen is with the Graduate School of Advanced Science and Engineering, Waseda University, Ookubo, Shinjuku-ku, Tokyo, Japan (e-mail: klaus@aoni.waseda.jp). Jorge Solis and Atsuo Takanishi are with the Department of Mechanical Engineering, Waseda University. Atsuo Takanishi is one of the core members of the Humanoid Robotics Institute, Waseda University (e-mail: takanisi@waseda.jp).

The Waseda Flutist Robot WF-4RIV has been under development for several years. With its bio-inspired design it emulates the functionality of the human organs involved in playing the flute. In its most recent version the flutist robot is able to play the flute at the level of an intermediate human flute player ([1]).
Regarding the mechanical development of the Waseda flutist robot, our research purpose is to better understand the dexterity and motor control abilities that humans need in order to play the flute ([2]). In recent research efforts we have worked to integrate the flutist robot into a human band. We intend to give the robot the interactive capabilities to actively play together with human musicians or other musical performance robots. By doing so we would like to develop new means of musical expression and also gain a deeper understanding of the process of communication that takes place between human musicians.

In previous work we introduced a so-called Musical-based Interaction System (MbIS). This system contains several modules for audio-visual sensor processing and for mapping the results of the sensor processing to the musical performance parameters of the robot ([3], [4]). Using this system we have enabled a human musician to control the performance of the flutist robot in a natural way. Generally, the MbIS serves two main purposes: first, to reduce the complexity of using the flutist robot by providing easily usable controllers and involving feedback from the mechanical state of the robot in the interaction process; second, through teach-in capabilities, to give the human musician a high degree of flexibility, allowing him to freely determine the connection between a musical melody pattern and an instrument gesture ([5]).

A variety of work on musical performance robots has been published. This includes the MuBot string instrument robot series developed at the University of Electro-Communications ([6]). Regarding interaction with musical performance robots, in [7] the drumming robot Haile was introduced, which is able to play an improvised performance together with other musicians using acoustic rhythmical cues.
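The core idea of the MbIS mapping modules, routing processed sensor results to the robot's musical performance parameters, can be illustrated with a minimal sketch. All names below (the function, the feature and parameter keys) are our own illustrative assumptions, not the actual MbIS implementation.

```python
def apply_mapping(features, mapping, robot_params):
    """Route processed sensor features to robot performance parameters.

    features:     processed sensor output, e.g. {"instrument_angle": 0.4}
    mapping:      which feature drives which parameter (hypothetical names)
    robot_params: the robot's current performance parameter set (updated in place)
    """
    for feature_name, param_name in mapping.items():
        if feature_name in features:
            robot_params[param_name] = features[feature_name]
    return robot_params
```

In the basic interaction level this mapping is fixed (one controller, one parameter); in the extended level it is built up during the teach-in phase.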
This research has been extended with the development of the robotic marimba player Shimon ([8]), which additionally has the ability to analyze and react to the harmonic characteristics of the musical performance of its human partner players. In [9] an approach is presented to control the tempo of the accompaniment of an anthropomorphic robot playing a Theremin using audio-visual cues. Although the techniques for detecting audio-visual cues and the mapping methods used are similar, the referenced musical robot systems follow a slightly different research approach than our system. They focus on developing an efficient way to assist human music production. In our research we concentrate on the anthropomorphic reproduction of sound production and the development of a human-like interactive behavior of the robot.

Fig. 1. Diagram of the proposed Musical-based Interaction System (MbIS) implemented in the Waseda Flutist Robot WF-4RIV. The system captures the performance actions of the human musician and, after the experience level selection stage, maps the processed sensor information into musical performance parameters for the robot. The robot's performance provides musical feedback to the human musician.

An important point that seems to be missing in the majority of previously published work is the evaluation of the proposed interaction system. In some cases the system is used in an on-stage environment to demonstrate its suitability for a real performance. In other cases user-survey results or technical measurements are provided. From an engineering point of view, a more detailed evaluation would be desirable: of the performance improvements that the introduced interaction systems achieve from a listener perspective, of how users judge the usability of such systems, and of the technical functionality itself. Therefore, in this paper we concentrate especially on the evaluation of our interaction system. In the first part of this evaluation, we perform a comparative analysis to determine, by a listener survey, the different characteristics of the passive and the active performance system. This is followed by a technical system evaluation based on interaction experiments with an intermediate-level musician. In the third part of the evaluation, we look more closely at experiments from the non-technical user perspective: we perform a user survey to characterize the practical usability of the system.

B. Implementation Concept

We propose the Musical-based Interaction System (MbIS) to allow interaction between the flutist robot and musicians based on two levels of interaction (Figure 1): the basic interaction level and the extended interaction level. The purpose of the two-level design is to make the system usable for people with different experience levels in human-robot interaction.

In the basic interaction level we focus on enabling a user who does not have much experience in communicating with the robot to understand the device's physical limitations. We use a simple visual controller with a fixed correlation to the performance parameter of the robot that it modulates, in order to make this level suitable for beginner players. The WF-4RIV is built with the intention of emulating the parts of the human body that are necessary to play the flute. Therefore it has artificial lungs with a limited volume. Other sound modulation parameters, like the vibrato frequency (generated by an artificial vocal cord), also have a certain dynamic range in which they operate. To account for these characteristics, the user's input to the robot via the sensor system has to be modified so that it does not violate the physical limits of the robot.

With the extended interaction level interface, our goal is to give the user the possibility to interact with the robot more freely than at the basic level of interaction. To achieve this, we propose a simplified learning (teach-in) system that allows the user to link instrument gestures with melody patterns. Here, the correlation of sensor input to output is not fixed. Furthermore, we allow for more degrees of freedom in the instrument movements of the user. As a result this level is more suitable for advanced players. We use a particle filter-based instrument gesture detection system and histogram-based melody detection algorithms.
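The constraint that user input must never violate the robot's physical limits can be sketched as a simple clamp-and-scale step. The function name and the linear scaling below are assumptions on our part; the WF-4RIV uses its own calibrated ranges.

```python
def map_to_parameter(norm_value, lo, hi):
    """Map a normalized controller value (0..1) into a performance
    parameter's safe dynamic range, clamping out-of-range input.

    Example parameter: vibrato frequency with measured limits lo..hi.
    """
    # clamp first, so the command can never exceed the physical limits
    norm_value = max(0.0, min(1.0, norm_value))
    return lo + norm_value * (hi - lo)
```

A fader value of 0.5 with a vibrato range of, say, 4 to 8 Hz would thus command 6 Hz, while any out-of-range input saturates at the nearest limit.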
In a teaching phase the musician can thus assign instrument gestures to certain melody patterns. In a performance phase the robot re-plays these melodies according to the taught-in information.

II. COMPARATIVE EVALUATION OF PASSIVE AND INTERACTIVE MUSICAL PERFORMANCE

We performed a qualitative validation of the distinction between passive and active performance by a user survey. The survey was done using the same musical material as in the previous section. As experimental subjects, we chose 15 amateur musicians and 2 professional musicians. While we had access to a relatively high number of amateur musician subjects within our university, the number of available professional musicians was low. We developed a questionnaire that consisted of 7 adjective pairs to characterize a musical performance by the flutist robot. The questionnaire was developed according to a concept proposed in [10]. The purpose of the survey was to determine the impression the two performance modes make on the listener. For each adjective pair in the questionnaire, the survey subject was asked to express his impression of the performance on a 5-point Likert-type scale. Applied to the adjective pair interesting / boring, a 1 would account for a very boring performance and a 5 for a really interesting one. If a listener was undecided between the two adjectives, he could choose a 3 to emphasize neither one.

The results of the survey for all adjective pairs are shown in Fig. 2. Especially in the case of the 15 amateur musician subjects, the survey shows promising results. The active performance scored significantly higher (t-test, p < 0.05) for the classifications interesting, varied, natural and emotional.

Fig. 2. In the two graphs above, the results of the listener survey to compare the active and passive performance are shown. In a) the averaged questionnaire scoring by the amateur musicians is shown; b) shows the survey results for the professional musicians. Filled rectangles point to an adjective category for which there is a significant difference between the two performance modes. Red boxes show the scoring for the passive performance and blue boxes display the results for the active performance.

Fig. 3. The figure displays a basic interaction level setup. On the left-hand side the flutist robot WF-4RIV is displayed. The right-hand side shows the robot's view of the interacting musician and the virtual fader that the player manipulates. The resulting data and the fill status of the lung are shown in the graph below.

We have already shown with the performance index that the active performance bears a stronger correlation between visible actions by the musician and musical performance output. This results in the impression of a more interesting and varied performance on the listener. Considering that the active performance gives a more natural impression to the listener and the musical performer, the results for the adjective pairs natural / artificial and emotional / rational can be explained. The active performance was attributed a higher score for naturalness and emotionality than the passive performance. The additional physical movement, resulting in stronger synchronicity between the two performers, shows more human-like features that a static performance without further exchange of information might not display. This leads the listener / viewer to the conclusion that the active performance is more natural (human-like) and conveys more emotional content than the passive performance.

III. TECHNICAL EVALUATION OF THE MBIS

A. Basic Level of Interaction

To demonstrate the technical functionality of the basic interaction system, we asked an intermediate-level saxophone player to improvise over a repetitive musical pattern from the theme of the jazz standard piece The Autumn Leaves. By moving his instrument, the musician was able to adjust the tempo of the sequence performed by the flutist robot. While the musician controlled the performance of the robot, his input was modulated by the physical state of the robot. The relevant state parameter could be deliberately selected by the user. In this case we chose the state of the robot's lung to modulate the values transmitted from a visual controller (Figure 3). This fader was used to continuously control the speed of a pre-defined melody sequence. The speed of the performed pattern was continuously reduced once the lung reached a fill level of 80%.

Fig. 4. In the beginner level interaction system, the user controls the tempo of a pattern performed by the robot. The lung fill level, plotted in the top graph, modulates the input data from the virtual fader, resulting in the robot performance displayed by the pitch and amplitude curves.

To perform the experiment, the saxophone player stood in front of the robot (within the viewing angle of the robot's cameras). After introducing the functionality of the basic level interaction system to the player, we recorded the sound output of the robot, its lung fill level, the virtual fader level and the modulated virtual fader level for the resulting interaction of the robot and the musician. A graph of the result of the experiment is shown in Fig. 4. Before the lung reached a fill level of 80%, the performance tempo of the robot was controlled by the unmodulated fader level. With a fill level above 80%, the fader value actually transmitted to the robot (the modulated fader value) was faded out before the lung was completely empty.
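The fade-out behaviour described above can be sketched as a small modulation function. This is an illustrative sketch only: the function name, the linear ramp and the normalized units are our assumptions, not the WF-4RIV implementation.

```python
def modulate_fader(fader_value, lung_fill, threshold=0.8):
    """Scale the virtual-fader value by the robot's lung state.

    Below the threshold the fader passes through unchanged; past it,
    the value is linearly faded toward zero so that the tempo command
    is reduced before the lung runs out of air entirely.
    lung_fill and fader_value are normalized to 0..1.
    """
    if lung_fill < threshold:
        return fader_value
    # linear fade: full value at the threshold, zero when the lung state
    # reaches its limit
    fade = max(0.0, (1.0 - lung_fill) / (1.0 - threshold))
    return fader_value * fade
```

With the 80% threshold used in the experiment, a fader value recorded at a lung state of 90% would be halved before being sent to the robot.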
This adjustment can be observed at 17.5 s-22.5 s in the fader value plot, the modulated fader value plot and the robot output volume plot. As the fader value was faded out rapidly, the resulting performance tempo of the robot decreased from fast (160 bpm) to slow (70 bpm). This variation can be seen in the data shown in the robot performance pitch plot. At 23.5 s and 24 s the robot refilled its lungs for a duration of approx. 0.5 s. These breathing points have a time distance of approx. 40 s (in the displayed graph one breathing cycle is shown). During the breathing points no sound was produced by the robot.

Fig. 5. In this screenshot an interactive performance using the extended level of interaction is displayed. In the left part, the flutist robot WF-4RIV is shown. The image on the right-hand side displays the interaction partner as seen by the robot. The interaction partner can select melody patterns or performance states (state 1, 2, 3) by changing the orientation of his instrument.

Fig. 6. In the extended level interaction system's teach-in phase the user associates instrument motion with melody patterns. A melody pattern m performed by the musician is repeated by the robot for confirmation r. The robot state table shows the association being set up by the teach-in system.

B. Extended Interaction Level

Similar to the technical evaluation of the basic level interaction system, we concentrate on a proof-of-concept demonstration of the functionality of the extended level interaction system, rather than calculating numerical error values for the separate system components. In the experimental setup, an intermediate-level (instrument skill) saxophone player controlled the robot for improvisation on the theme of the same song chosen for the previous survey experiment, the jazz standard The Autumn Leaves. The musician controlled the performance parameters of the robot using the mapping module of the extended level interaction system. The experiment had two phases, the teaching phase and the performance phase. In the first phase the interacting musician taught a movement-performance parameter relationship to the robot. In this particular case we related one of three melody patterns to the inclination angle of the instrument of the robot's partner musician. From this information the robot built a state-space table that relates instrument angles to musical patterns. In the second stage (Figure 5) the interaction partner controlled the robot with these movements. When a certain instrument state is detected, the robot plays the musical pattern that relates to the current instrument angle.

Fig. 7. In the extended level interaction system's performance phase the user controls the robot's output tone by changing the orientation of his instrument. In the graph the detected instrument orientation, the associated musical pattern and the output of the robot are shown.

The transition from the teaching phase to the performance phase is determined by the number of melody patterns associated by the robot. In this experiment, the switch occurred after 3 melody patterns had been recorded. The experiment was performed by the intermediate-level saxophone player. One teaching phase and one performance phase were done for each experiment run. The recorded data for the teaching phase are displayed in Fig. 6. In the first part (from T = 0 s), the instrument player moved his instrument to an angle of approximately 125° (state I) and played melody pattern A. The flutist robot confirmed the detection of the pattern by repeating the melody. This is displayed in the robot performance pitch graph and marked with m (musician) and r (robot). The association of pattern A and instrument state I was written to the robot state table. At T = 18 s the player changed his instrument position to approximately 100° (state II) and played the next melody pattern, which was recognized and confirmed as melody pattern B. The association of state II and pattern B was memorized in the robot state table. Finally (T = 22 s), the instrumentalist moved his instrument to state III (approximately 75°) and played melody pattern C. The association of instrument state III and melody pattern C was saved in the association table.

The results for the performance phase of the extended level interaction experiment are shown in Fig. 7. In the teaching phase the musician had associated the three melody patterns A, B, C with instrument states I, II, III. In the performance phase he recalled the melody patterns in order to build an accompaniment for an improvisation. In the graph, each time the musician shifted his instrument to a new angle (instrument orientation graph), the detected instrument state changed. As a result of this change the robot played the answer melody that had been associated in the teaching phase. This happens several times in the displayed graph. At 15 s the musician moved his instrument to an angle of 150° (state I) and the robot immediately played the associated answer melody (pattern A). At 20 s he shifted the instrument to 100° and triggered melody pattern B. When moving the instrument to 50° at 23 s, the robot answered with melody pattern C. It remains to note that, after one pattern had been performed, the robot automatically refilled its lung until the next pattern was commanded. These short breathing spots can be seen throughout the robot performance volume plot, notably at t = 5 s, t = 9 s and t = 13 s.

The proposed technical evaluation experiments cover only two cases of a musician-robot interaction configuration. These configurations were suggested as a conceptual test of the basic and extended level interaction systems by the professional musician with whom we worked when planning the presented experiments. Also, experiments have only been performed for relatively simple types of user input. In a realistic performance, more extreme movements and musical expression than proposed here might occur.
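The teach-in and recall behaviour of the state-space table described in this section can be sketched as follows. The class and method names are our own simplified stand-ins; in particular, resolving a detected angle to the nearest taught state is an assumption about how the table is queried.

```python
class StateTable:
    """Associates instrument-orientation angles with melody patterns."""

    def __init__(self, max_patterns=3):
        self.table = {}  # taught angle (degrees) -> melody pattern
        self.max_patterns = max_patterns

    def teach(self, angle, pattern):
        """Teaching phase: store one angle -> pattern association.

        Returns True once enough patterns have been recorded,
        signalling the switch to the performance phase.
        """
        if len(self.table) < self.max_patterns:
            self.table[angle] = pattern
        return len(self.table) >= self.max_patterns

    def recall(self, angle):
        """Performance phase: pattern taught at the nearest angle."""
        nearest = min(self.table, key=lambda taught: abs(taught - angle))
        return self.table[nearest]
```

Taught at 125°, 100° and 75°, such a table resolves a detected angle of 150° to pattern A and 50° to pattern C, matching the behaviour seen in the experiment.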
As a result of the presented experiments, the basic interaction level can be characterized as functional from the technical point of view for a certain performance scenario. Absolute certainty that the system works with every possible input was not achieved; the system has been evaluated on a case-by-case basis. We tried to choose the situation proposed here as an example of a typical scenario. To further assess how far this holds also from the point of view of a real user, we present an evaluation in the next section.

IV. SURVEY EVALUATION FROM THE INTERACTING USER PERSPECTIVE

The goal of the development of the Musical-based Interaction System is to provide the user with an intuitive, natural interface to interact with a musical robot. To find out about the acceptance of the system and its general usability, the system needs to be tested with a variety of users; the experiences and comments of the users need to be recorded and analyzed. Users were asked to do a musical performance with the flutist robot, first using the basic level interaction system and second using the extended level interaction system. We asked the users to fill out a questionnaire for each of these performances to characterize their experience with the system. This experimental method was applied to professional musicians as well as amateur musicians and the results were statistically analyzed.

Fig. 8. This figure shows the results of the user survey for the basic interaction system and the extended interaction system. In a) the averaged questionnaire scoring by the amateur musicians is shown; b) shows the survey results for the professional musicians. Filled rectangles in the graph point to an adjective category for which there is a significant difference between the results for the basic and extended level systems. Red boxes show the scoring for the basic interaction level and blue boxes display the results for the extended interaction level.

With the results we try to show that the system provides a natural user experience to users of different experience levels. In the survey experiment we again asked 15 male amateur musicians and 2 male professional musicians to use the basic and the extended interaction system with the WF-4RIV. Regarding the low number of professional musician subjects, the same limitations as described for the previous survey apply. For each interaction level one questionnaire needed to be filled out. Re-trials of the interaction system experiment runs, if requested by the user, were allowed. Each questionnaire consisted of 8 pairs of adjectives, similar to the approach proposed in [10]. As a scoring system, similarly to the previously described survey, we used a 5-point Likert-type scale. Applied to the adjective pair natural / artificial, a score of 1 would account for a very natural, human-like interaction and a score of 5 for a very machine-like, static one.

The results of the user survey for the basic and extended level interaction systems are shown in Fig. 8. Amateur musicians were asked to characterize the differences between the use of the basic level of interaction and the extended level of interaction. The subjects for the survey described in this section were the same as in the previous survey experiment; this survey was done after the previously described experiment. The users were asked to attribute 8 pairs of adjectives to the two levels after interacting with the robot using each of these levels. The results of the survey show that for the adjective pairs natural / artificial, free movement / constrained, emotional / rational, expressive / unexpressive and easy / difficult, there is a significant difference (t-test, p < 0.05) between the basic and extended interaction levels. A Student's t-distribution is assumed for the survey results and therefore the t-test was chosen to determine statistical significance. For the pair natural / artificial, the amateur users on average gave a score of 1.7 to the basic and a score of 4 to the extended interaction level. A similar result was achieved for the adjective pair free movement / constrained, with a score of 2 for the basic interaction level and 4.5 for the extended interaction level. In the case of the adjective pair emotional / rational, the basic level scored 2 and the extended level 4. The basic interaction level was attributed a higher score of 3.8, versus 2 for the extended interaction level, for the adjective pair easy / difficult.

Furthermore, we asked the 2 professional musicians to use the basic and extended interaction levels and rate their impression with the previously described adjective pairs. The outcome of the experiment is similar to the results for the amateur musicians. In the case of the adjective pair natural / artificial, the basic interaction level scored 2, whereas the extended interaction level achieved 4. For the adjective pair free movement / constrained, a score of 2.2 was attributed to the basic level of interaction and a score of 4.5 to the extended interaction level. In the case of the adjective pair emotional / rational, the professional musicians on average evaluated the basic level with a score of 2.5 and the extended level with a score of 4. A score of 4.5 for the basic and 2.5 for the extended interaction system were attributed for the adjective pair easy / difficult.
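The significance test applied to the survey scores can be reproduced with a standard two-sample Student's t statistic. A self-contained sketch follows; the paper does not specify the exact test variant, so the pooled-variance form below is an assumption.

```python
from math import sqrt

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic for two groups of
    Likert scores (e.g. basic vs. extended level ratings)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    # pooled standard deviation across both groups
    pooled = sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / (pooled * sqrt(1 / na + 1 / nb))
```

Comparing the resulting |t| against the critical value of the t-distribution with na + nb - 2 degrees of freedom at the 5% level then decides significance for each adjective pair.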
On average, for the amateur musician survey subjects as well as the professional musician subjects, the extended interaction level was evaluated to be more natural and more emotional in its usage. This might be related to the additional freedom of expression given by the teach-in system and the use of the particle filter-based tracking.

V. CONCLUSIONS AND FUTURE WORK

In previous publications we have introduced a Musical-based Interaction System (MbIS) for the Waseda flutist robot WF-4RIV. So far we had only performed a preliminary evaluation of the technical and usability characteristics of the system. We also considered that the evaluation strategies introduced in other published work related to musical performance robot interaction systems left room for improvement. Therefore, in this paper we evaluated the interaction system in three different categories. First, the difference between a passive performance and an active performance was analyzed: through a user study, we concluded that, to a certain degree, the active performance is more similar to a performance between humans than a passive performance between robot and human. This led to the next section, in which we presented experimental results to demonstrate the technical functionality of the basic and extended levels of interaction. Although these results did not cover all possible cases of usage of the system, we further evaluated the system in a user survey. The results of the user survey show that the interaction system levels are characterized differently by the amateur and professional musician users. The basic level interaction system, on the one hand, is evaluated as more constrained and as providing a generally more artificial feel, but is easy to use. The extended level interaction system, on the other hand, is more complicated in its usage, but due to its greater flexibility leads to a more natural and expressive performance.
In future work, the evaluation of the system is to be continued, with a focus on in-depth analysis of the results, for example using a different method of determining the statistical significance of the survey results instead of the t-test used here. In the surveys presented in this paper, the number of professional musician survey subjects in particular was very low. We intend to perform further surveys with a larger number of professional subjects.

REFERENCES

[1] J. Solis, K. Chida, K. Suefuji, K. Taniguchi, S. Hashimoto, and A. Takanishi, "The Waseda flutist robot WF-4RII in comparison with a professional flutist," Computer Music Journal, vol. 30.
[2] J. Solis and A. Takanishi, "The Waseda flutist robot No. 4 Refined IV: Enhancing the sound clarity and the articulation between notes by improving the design of the lips and tonguing mechanisms," in IROS.
[3] K. Petersen, J. Solis, and A. Takanishi, "Toward enabling a natural interaction between human musicians and musical performance robots: Implementation of a real-time gestural interface," in Robot and Human Interactive Communication (RO-MAN), The 17th IEEE International Symposium on. IEEE, 2008.
[4] K. Petersen, J. Solis, and A. Takanishi, "Development of an aural real-time rhythmical and harmonic tracking to enable the musical interaction with the Waseda flutist robot," in Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on. IEEE, 2009.
[5] K. Petersen, J. Solis, and A. Takanishi, "Implementation of a musical performance interaction system for the Waseda Flutist Robot: Combining visual and acoustic sensor input based on sequential Bayesian filtering," in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on. IEEE, 2010.
[6] M. Kajitani, "Development of musician robots in Japan," in Australian Conference on Robotics and Automation.
[7] G. Weinberg and S. Driscoll, "Towards robotic musicianship," Computer Music Journal, vol. 30.
[8] G. Hoffman and G. Weinberg, "Gesture-based human-robot jazz improvisation," in Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010.
[9] A. Lim, T. Mizumoto, L. Cahier, T. Otsuka, T. Takahashi, K. Komatani, T. Ogata, and H. Okuno, "Robot musical accompaniment: integrating audio and visual cues for real-time synchronization with a human flutist," in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on. IEEE, 2010.
[10] C. Bartneck, D. Kulic, and E. Croft, "Measuring the anthropomorphism, animacy, likability, perceived intelligence, and perceived safety of robots," in Workshop on Metrics for Human-Robot Interaction, 2008.


More information

Ben Neill and Bill Jones - Posthorn

Ben Neill and Bill Jones - Posthorn Ben Neill and Bill Jones - Posthorn Ben Neill Assistant Professor of Music Ramapo College of New Jersey 505 Ramapo Valley Road Mahwah, NJ 07430 USA bneill@ramapo.edu Bill Jones First Pulse Projects 53

More information

Outline. Why do we classify? Audio Classification

Outline. Why do we classify? Audio Classification Outline Introduction Music Information Retrieval Classification Process Steps Pitch Histograms Multiple Pitch Detection Algorithm Musical Genre Classification Implementation Future Work Why do we classify

More information

K-12 Performing Arts - Music Standards Lincoln Community School Sources: ArtsEdge - National Standards for Arts Education

K-12 Performing Arts - Music Standards Lincoln Community School Sources: ArtsEdge - National Standards for Arts Education K-12 Performing Arts - Music Standards Lincoln Community School Sources: ArtsEdge - National Standards for Arts Education Grades K-4 Students sing independently, on pitch and in rhythm, with appropriate

More information

Interacting with a Virtual Conductor

Interacting with a Virtual Conductor Interacting with a Virtual Conductor Pieter Bos, Dennis Reidsma, Zsófia Ruttkay, Anton Nijholt HMI, Dept. of CS, University of Twente, PO Box 217, 7500AE Enschede, The Netherlands anijholt@ewi.utwente.nl

More information

About Giovanni De Poli. What is Model. Introduction. di Poli: Methodologies for Expressive Modeling of/for Music Performance

About Giovanni De Poli. What is Model. Introduction. di Poli: Methodologies for Expressive Modeling of/for Music Performance Methodologies for Expressiveness Modeling of and for Music Performance by Giovanni De Poli Center of Computational Sonology, Department of Information Engineering, University of Padova, Padova, Italy About

More information

Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension

Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension Musical Entrainment Subsumes Bodily Gestures Its Definition Needs a Spatiotemporal Dimension MARC LEMAN Ghent University, IPEM Department of Musicology ABSTRACT: In his paper What is entrainment? Definition

More information

Computational Modelling of Harmony

Computational Modelling of Harmony Computational Modelling of Harmony Simon Dixon Centre for Digital Music, Queen Mary University of London, Mile End Rd, London E1 4NS, UK simon.dixon@elec.qmul.ac.uk http://www.elec.qmul.ac.uk/people/simond

More information

Versatile EMS and EMI measurements for the automobile sector

Versatile EMS and EMI measurements for the automobile sector EMC/FIELD STRENGTH EMC Measurement Software R&S EMC32-A Versatile EMS and EMI measurements for the automobile sector EMC Measurement Software R&S EMC32-A (automotive) from Rohde & Schwarz is a powerful

More information

Sound visualization through a swarm of fireflies

Sound visualization through a swarm of fireflies Sound visualization through a swarm of fireflies Ana Rodrigues, Penousal Machado, Pedro Martins, and Amílcar Cardoso CISUC, Deparment of Informatics Engineering, University of Coimbra, Coimbra, Portugal

More information

AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY

AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY Eugene Mikyung Kim Department of Music Technology, Korea National University of Arts eugene@u.northwestern.edu ABSTRACT

More information

PLOrk Beat Science 2.0 NIME 2009 club submission by Ge Wang and Rebecca Fiebrink

PLOrk Beat Science 2.0 NIME 2009 club submission by Ge Wang and Rebecca Fiebrink PLOrk Beat Science 2.0 NIME 2009 club submission by Ge Wang and Rebecca Fiebrink Introduction This document details our proposed NIME 2009 club performance of PLOrk Beat Science 2.0, our multi-laptop,

More information

Devices I have known and loved

Devices I have known and loved 66 l Print this article Devices I have known and loved Joel Chadabe Albany, New York, USA joel@emf.org Do performing devices match performance requirements? Whenever we work with an electronic music system,

More information

Simple motion control implementation

Simple motion control implementation Simple motion control implementation with Omron PLC SCOPE In todays challenging economical environment and highly competitive global market, manufacturers need to get the most of their automation equipment

More information

Music Understanding and the Future of Music

Music Understanding and the Future of Music Music Understanding and the Future of Music Roger B. Dannenberg Professor of Computer Science, Art, and Music Carnegie Mellon University Why Computers and Music? Music in every human society! Computers

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 7.9 THE FUTURE OF SOUND

More information

The Tone Height of Multiharmonic Sounds. Introduction

The Tone Height of Multiharmonic Sounds. Introduction Music-Perception Winter 1990, Vol. 8, No. 2, 203-214 I990 BY THE REGENTS OF THE UNIVERSITY OF CALIFORNIA The Tone Height of Multiharmonic Sounds ROY D. PATTERSON MRC Applied Psychology Unit, Cambridge,

More information

Kansas State Music Standards Ensembles

Kansas State Music Standards Ensembles Kansas State Music Standards Standard 1: Creating Conceiving and developing new artistic ideas and work. Process Component Cr.1: Imagine Generate musical ideas for various purposes and contexts. Process

More information

Zooming into saxophone performance: Tongue and finger coordination

Zooming into saxophone performance: Tongue and finger coordination International Symposium on Performance Science ISBN 978-2-9601378-0-4 The Author 2013, Published by the AEC All rights reserved Zooming into saxophone performance: Tongue and finger coordination Alex Hofmann

More information

Flip-Flops. Because of this the state of the latch may keep changing in circuits with feedback as long as the clock pulse remains active.

Flip-Flops. Because of this the state of the latch may keep changing in circuits with feedback as long as the clock pulse remains active. Flip-Flops Objectives The objectives of this lesson are to study: 1. Latches versus Flip-Flops 2. Master-Slave Flip-Flops 3. Timing Analysis of Master-Slave Flip-Flops 4. Different Types of Master-Slave

More information

Interactive Visualization for Music Rediscovery and Serendipity

Interactive Visualization for Music Rediscovery and Serendipity Interactive Visualization for Music Rediscovery and Serendipity Ricardo Dias Joana Pinto INESC-ID, Instituto Superior Te cnico, Universidade de Lisboa Portugal {ricardo.dias, joanadiaspinto}@tecnico.ulisboa.pt

More information

Automatic Projector Tilt Compensation System

Automatic Projector Tilt Compensation System Automatic Projector Tilt Compensation System Ganesh Ajjanagadde James Thomas Shantanu Jain October 30, 2014 1 Introduction Due to the advances in semiconductor technology, today s display projectors can

More information

PAK 5.9. Interacting with live data.

PAK 5.9. Interacting with live data. PAK 5.9 Interacting with live data. Realize how beneficial and easy it is to have a continuous data stream where you can decide on demand to record, view online or to post-process dynamic data of your

More information

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes

Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes Instrument Recognition in Polyphonic Mixtures Using Spectral Envelopes hello Jay Biernat Third author University of Rochester University of Rochester Affiliation3 words jbiernat@ur.rochester.edu author3@ismir.edu

More information

Correlated Receiver Diversity Simulations with R&S SFU

Correlated Receiver Diversity Simulations with R&S SFU Application Note Marius Schipper 10.2012-7BM76_2E Correlated Receiver Diversity Simulations with R&S SFU Application Note Products: R&S SFU R&S SFE R&S SFE100 R&S SFC R&S SMU200A Receiver diversity improves

More information

SWITCHED INFINITY: SUPPORTING AN INFINITE HD LINEUP WITH SDV

SWITCHED INFINITY: SUPPORTING AN INFINITE HD LINEUP WITH SDV SWITCHED INFINITY: SUPPORTING AN INFINITE HD LINEUP WITH SDV First Presented at the SCTE Cable-Tec Expo 2010 John Civiletto, Executive Director of Platform Architecture. Cox Communications Ludovic Milin,

More information

Quantify. The Subjective. PQM: A New Quantitative Tool for Evaluating Display Design Options

Quantify. The Subjective. PQM: A New Quantitative Tool for Evaluating Display Design Options PQM: A New Quantitative Tool for Evaluating Display Design Options Software, Electronics, and Mechanical Systems Laboratory 3M Optical Systems Division Jennifer F. Schumacher, John Van Derlofske, Brian

More information

Full Disclosure Monitoring

Full Disclosure Monitoring Full Disclosure Monitoring Power Quality Application Note Full Disclosure monitoring is the ability to measure all aspects of power quality, on every voltage cycle, and record them in appropriate detail

More information

1ms Column Parallel Vision System and It's Application of High Speed Target Tracking

1ms Column Parallel Vision System and It's Application of High Speed Target Tracking Proceedings of the 2(X)0 IEEE International Conference on Robotics & Automation San Francisco, CA April 2000 1ms Column Parallel Vision System and It's Application of High Speed Target Tracking Y. Nakabo,

More information

Welcome to Vibrationdata

Welcome to Vibrationdata Welcome to Vibrationdata Acoustics Shock Vibration Signal Processing February 2004 Newsletter Greetings Feature Articles Speech is perhaps the most important characteristic that distinguishes humans from

More information

Next Generation Software Solution for Sound Engineering

Next Generation Software Solution for Sound Engineering Next Generation Software Solution for Sound Engineering HEARING IS A FASCINATING SENSATION ArtemiS SUITE ArtemiS SUITE Binaural Recording Analysis Playback Troubleshooting Multichannel Soundscape ArtemiS

More information

SPL Analog Code Plug-in Manual

SPL Analog Code Plug-in Manual SPL Analog Code Plug-in Manual EQ Rangers Vol. 1 Manual SPL Analog Code EQ Rangers Plug-in Vol. 1 Native Version (RTAS, AU and VST): Order # 2890 RTAS and TDM Version : Order # 2891 Manual Version 1.0

More information

Subjective Similarity of Music: Data Collection for Individuality Analysis

Subjective Similarity of Music: Data Collection for Individuality Analysis Subjective Similarity of Music: Data Collection for Individuality Analysis Shota Kawabuchi and Chiyomi Miyajima and Norihide Kitaoka and Kazuya Takeda Nagoya University, Nagoya, Japan E-mail: shota.kawabuchi@g.sp.m.is.nagoya-u.ac.jp

More information

Robert Alexandru Dobre, Cristian Negrescu

Robert Alexandru Dobre, Cristian Negrescu ECAI 2016 - International Conference 8th Edition Electronics, Computers and Artificial Intelligence 30 June -02 July, 2016, Ploiesti, ROMÂNIA Automatic Music Transcription Software Based on Constant Q

More information

TV Synchronism Generation with PIC Microcontroller

TV Synchronism Generation with PIC Microcontroller TV Synchronism Generation with PIC Microcontroller With the widespread conversion of the TV transmission and coding standards, from the early analog (NTSC, PAL, SECAM) systems to the modern digital formats

More information

A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation

A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation Gil Weinberg, Mark Godfrey, Alex Rae, and John Rhoads Georgia Institute of Technology, Music Technology Group 840 McMillan St, Atlanta

More information

The software concept. Try yourself and experience how your processes are significantly simplified. You need. weqube.

The software concept. Try yourself and experience how your processes are significantly simplified. You need. weqube. You need. weqube. weqube is the smart camera which combines numerous features on a powerful platform. Thanks to the intelligent, modular software concept weqube adjusts to your situation time and time

More information

Speech Recognition and Signal Processing for Broadcast News Transcription

Speech Recognition and Signal Processing for Broadcast News Transcription 2.2.1 Speech Recognition and Signal Processing for Broadcast News Transcription Continued research and development of a broadcast news speech transcription system has been promoted. Universities and researchers

More information

Music Curriculum Kindergarten

Music Curriculum Kindergarten Music Curriculum Kindergarten Wisconsin Model Standards for Music A: Singing Echo short melodic patterns appropriate to grade level Sing kindergarten repertoire with appropriate posture and breathing Maintain

More information

Improving music composition through peer feedback: experiment and preliminary results

Improving music composition through peer feedback: experiment and preliminary results Improving music composition through peer feedback: experiment and preliminary results Daniel Martín and Benjamin Frantz and François Pachet Sony CSL Paris {daniel.martin,pachet}@csl.sony.fr Abstract To

More information

Powerful Software Tools and Methods to Accelerate Test Program Development A Test Systems Strategies, Inc. (TSSI) White Paper.

Powerful Software Tools and Methods to Accelerate Test Program Development A Test Systems Strategies, Inc. (TSSI) White Paper. Powerful Software Tools and Methods to Accelerate Test Program Development A Test Systems Strategies, Inc. (TSSI) White Paper Abstract Test costs have now risen to as much as 50 percent of the total manufacturing

More information

Playful Sounds From The Classroom: What Can Designers of Digital Music Games Learn From Formal Educators?

Playful Sounds From The Classroom: What Can Designers of Digital Music Games Learn From Formal Educators? Playful Sounds From The Classroom: What Can Designers of Digital Music Games Learn From Formal Educators? Pieter Duysburgh iminds - SMIT - VUB Pleinlaan 2, 1050 Brussels, BELGIUM pieter.duysburgh@vub.ac.be

More information

Quantifying the Benefits of Using an Interactive Decision Support Tool for Creating Musical Accompaniment in a Particular Style

Quantifying the Benefits of Using an Interactive Decision Support Tool for Creating Musical Accompaniment in a Particular Style Quantifying the Benefits of Using an Interactive Decision Support Tool for Creating Musical Accompaniment in a Particular Style Ching-Hua Chuan University of North Florida School of Computing Jacksonville,

More information

Curriculum Standard One: The student will listen to and analyze music critically, using vocabulary and language of music.

Curriculum Standard One: The student will listen to and analyze music critically, using vocabulary and language of music. Curriculum Standard One: The student will listen to and analyze music critically, using vocabulary and language of music. 1. The student will analyze the uses of elements of music. A. Can the student analyze

More information

VR5 HD Spatial Channel Emulator

VR5 HD Spatial Channel Emulator spirent Wireless Channel Emulator The world s most advanced platform for creating realistic RF environments used to test highantenna-count wireless receivers in MIMO and beamforming technologies. Multiple

More information

DAT335 Music Perception and Cognition Cogswell Polytechnical College Spring Week 6 Class Notes

DAT335 Music Perception and Cognition Cogswell Polytechnical College Spring Week 6 Class Notes DAT335 Music Perception and Cognition Cogswell Polytechnical College Spring 2009 Week 6 Class Notes Pitch Perception Introduction Pitch may be described as that attribute of auditory sensation in terms

More information

SPL Analog Code Plug-in Manual

SPL Analog Code Plug-in Manual SPL Analog Code Plug-in Manual EQ Rangers Manual EQ Rangers Analog Code Plug-ins Model Number 2890 Manual Version 2.0 12 /2011 This user s guide contains a description of the product. It in no way represents

More information

6 th Grade Instrumental Music Curriculum Essentials Document

6 th Grade Instrumental Music Curriculum Essentials Document 6 th Grade Instrumental Curriculum Essentials Document Boulder Valley School District Department of Curriculum and Instruction August 2011 1 Introduction The Boulder Valley Curriculum provides the foundation

More information

Reducing False Positives in Video Shot Detection

Reducing False Positives in Video Shot Detection Reducing False Positives in Video Shot Detection Nithya Manickam Computer Science & Engineering Department Indian Institute of Technology, Bombay Powai, India - 400076 mnitya@cse.iitb.ac.in Sharat Chandran

More information

A QUERY BY EXAMPLE MUSIC RETRIEVAL ALGORITHM

A QUERY BY EXAMPLE MUSIC RETRIEVAL ALGORITHM A QUER B EAMPLE MUSIC RETRIEVAL ALGORITHM H. HARB AND L. CHEN Maths-Info department, Ecole Centrale de Lyon. 36, av. Guy de Collongue, 69134, Ecully, France, EUROPE E-mail: {hadi.harb, liming.chen}@ec-lyon.fr

More information

Authors: Kasper Marklund, Anders Friberg, Sofia Dahl, KTH, Carlo Drioli, GEM, Erik Lindström, UUP Last update: November 28, 2002

Authors: Kasper Marklund, Anders Friberg, Sofia Dahl, KTH, Carlo Drioli, GEM, Erik Lindström, UUP Last update: November 28, 2002 Groove Machine Authors: Kasper Marklund, Anders Friberg, Sofia Dahl, KTH, Carlo Drioli, GEM, Erik Lindström, UUP Last update: November 28, 2002 1. General information Site: Kulturhuset-The Cultural Centre

More information

Interactive Virtual Laboratory for Distance Education in Nuclear Engineering. Abstract

Interactive Virtual Laboratory for Distance Education in Nuclear Engineering. Abstract Interactive Virtual Laboratory for Distance Education in Nuclear Engineering Prashant Jain, James Stubbins and Rizwan Uddin Department of Nuclear, Plasma and Radiological Engineering University of Illinois

More information

MUSICAL INSTRUMENT RECOGNITION WITH WAVELET ENVELOPES

MUSICAL INSTRUMENT RECOGNITION WITH WAVELET ENVELOPES MUSICAL INSTRUMENT RECOGNITION WITH WAVELET ENVELOPES PACS: 43.60.Lq Hacihabiboglu, Huseyin 1,2 ; Canagarajah C. Nishan 2 1 Sonic Arts Research Centre (SARC) School of Computer Science Queen s University

More information

y POWER USER MUSIC PRODUCTION and PERFORMANCE With the MOTIF ES Mastering the Sample SLICE function

y POWER USER MUSIC PRODUCTION and PERFORMANCE With the MOTIF ES Mastering the Sample SLICE function y POWER USER MUSIC PRODUCTION and PERFORMANCE With the MOTIF ES Mastering the Sample SLICE function Phil Clendeninn Senior Product Specialist Technology Products Yamaha Corporation of America Working with

More information

CMS Conference Report

CMS Conference Report Available on CMS information server CMS CR 1997/017 CMS Conference Report 22 October 1997 Updated in 30 March 1998 Trigger synchronisation circuits in CMS J. Varela * 1, L. Berger 2, R. Nóbrega 3, A. Pierce

More information

Logisim: A graphical system for logic circuit design and simulation

Logisim: A graphical system for logic circuit design and simulation Logisim: A graphical system for logic circuit design and simulation October 21, 2001 Abstract Logisim facilitates the practice of designing logic circuits in introductory courses addressing computer architecture.

More information

Multimodal databases at KTH

Multimodal databases at KTH Multimodal databases at David House, Jens Edlund & Jonas Beskow Clarin Workshop The QSMT database (2002): Facial & Articulatory motion Clarin Workshop Purpose Obtain coherent data for modelling and animation

More information

APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC

APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC APPLICATIONS OF A SEMI-AUTOMATIC MELODY EXTRACTION INTERFACE FOR INDIAN MUSIC Vishweshwara Rao, Sachin Pant, Madhumita Bhaskar and Preeti Rao Department of Electrical Engineering, IIT Bombay {vishu, sachinp,

More information

Hidden Markov Model based dance recognition

Hidden Markov Model based dance recognition Hidden Markov Model based dance recognition Dragutin Hrenek, Nenad Mikša, Robert Perica, Pavle Prentašić and Boris Trubić University of Zagreb, Faculty of Electrical Engineering and Computing Unska 3,

More information

Evaluating Oscilloscope Mask Testing for Six Sigma Quality Standards

Evaluating Oscilloscope Mask Testing for Six Sigma Quality Standards Evaluating Oscilloscope Mask Testing for Six Sigma Quality Standards Application Note Introduction Engineers use oscilloscopes to measure and evaluate a variety of signals from a range of sources. Oscilloscopes

More information

Automatic Music Clustering using Audio Attributes

Automatic Music Clustering using Audio Attributes Automatic Music Clustering using Audio Attributes Abhishek Sen BTech (Electronics) Veermata Jijabai Technological Institute (VJTI), Mumbai, India abhishekpsen@gmail.com Abstract Music brings people together,

More information

However, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene

However, in studies of expressive timing, the aim is to investigate production rather than perception of timing, that is, independently of the listene Beat Extraction from Expressive Musical Performances Simon Dixon, Werner Goebl and Emilios Cambouropoulos Austrian Research Institute for Artificial Intelligence, Schottengasse 3, A-1010 Vienna, Austria.

More information

Effects of Auditory and Motor Mental Practice in Memorized Piano Performance

Effects of Auditory and Motor Mental Practice in Memorized Piano Performance Bulletin of the Council for Research in Music Education Spring, 2003, No. 156 Effects of Auditory and Motor Mental Practice in Memorized Piano Performance Zebulon Highben Ohio State University Caroline

More information

Psychoacoustic Evaluation of Fan Noise

Psychoacoustic Evaluation of Fan Noise Psychoacoustic Evaluation of Fan Noise Dr. Marc Schneider Team Leader R&D - Acoustics ebm-papst Mulfingen GmbH & Co.KG Carolin Feldmann, University Siegen Outline Motivation Psychoacoustic Parameters Psychoacoustic

More information

Illumination-based Real-Time Contactless Synchronization of High-Speed Vision Sensors

Illumination-based Real-Time Contactless Synchronization of High-Speed Vision Sensors Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics Bangkok, Thailand, February 21-26, 2009 Illumination-based Real-Time Contactless Synchronization of High-Speed Vision Sensors

More information

ECE 4220 Real Time Embedded Systems Final Project Spectrum Analyzer

ECE 4220 Real Time Embedded Systems Final Project Spectrum Analyzer ECE 4220 Real Time Embedded Systems Final Project Spectrum Analyzer by: Matt Mazzola 12222670 Abstract The design of a spectrum analyzer on an embedded device is presented. The device achieves minimum

More information

BIG IDEAS. Music is a process that relies on the interplay of the senses. Learning Standards

BIG IDEAS. Music is a process that relies on the interplay of the senses. Learning Standards Area of Learning: ARTS EDUCATION Music: Instrumental Music (includes Concert Band 10, Orchestra 10, Jazz Band 10, Guitar 10) Grade 10 BIG IDEAS Individual and collective expression is rooted in history,

More information

Indiana Music Standards

Indiana Music Standards A Correlation of to the Indiana Music Standards Introduction This document shows how, 2008 Edition, meets the objectives of the. Page references are to the Student Edition (SE), and Teacher s Edition (TE).

More information

2018 Indiana Music Education Standards

2018 Indiana Music Education Standards 2018 Indiana Music Education Standards Introduction: Music, along with the other fine arts, is a critical part of both society and education. Through participation in music, individuals develop the ability

More information

Topic: Instructional David G. Thomas December 23, 2015

Topic: Instructional David G. Thomas December 23, 2015 Procedure to Setup a 3ɸ Linear Motor This is a guide to configure a 3ɸ linear motor using either analog or digital encoder feedback with an Elmo Gold Line drive. Topic: Instructional David G. Thomas December

More information

homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition

homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition INSTITUTE FOR SIGNAL AND INFORMATION PROCESSING homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition May 3,

More information

HEAD. HEAD VISOR (Code 7500ff) Overview. Features. System for online localization of sound sources in real time

HEAD. HEAD VISOR (Code 7500ff) Overview. Features. System for online localization of sound sources in real time HEAD Ebertstraße 30a 52134 Herzogenrath Tel.: +49 2407 577-0 Fax: +49 2407 577-99 email: info@head-acoustics.de Web: www.head-acoustics.de Data Datenblatt Sheet HEAD VISOR (Code 7500ff) System for online

More information

DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL

DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL DYNAMIC AUDITORY CUES FOR EVENT IMPORTANCE LEVEL Jonna Häkkilä Nokia Mobile Phones Research and Technology Access Elektroniikkatie 3, P.O.Box 50, 90571 Oulu, Finland jonna.hakkila@nokia.com Sami Ronkainen

More information

CHARACTERIZATION OF END-TO-END DELAYS IN HEAD-MOUNTED DISPLAY SYSTEMS

CHARACTERIZATION OF END-TO-END DELAYS IN HEAD-MOUNTED DISPLAY SYSTEMS CHARACTERIZATION OF END-TO-END S IN HEAD-MOUNTED DISPLAY SYSTEMS Mark R. Mine University of North Carolina at Chapel Hill 3/23/93 1. 0 INTRODUCTION This technical report presents the results of measurements

More information

The Measurement Tools and What They Do

The Measurement Tools and What They Do 2 The Measurement Tools The Measurement Tools and What They Do JITTERWIZARD The JitterWizard is a unique capability of the JitterPro package that performs the requisite scope setup chores while simplifying

More information

Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL

Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL Jam Tomorrow: Collaborative Music Generation in Croquet Using OpenAL Florian Thalmann thalmann@students.unibe.ch Markus Gaelli gaelli@iam.unibe.ch Institute of Computer Science and Applied Mathematics,

More information

PRESCOTT UNIFIED SCHOOL DISTRICT District Instructional Guide January 2016

PRESCOTT UNIFIED SCHOOL DISTRICT District Instructional Guide January 2016 Grade Level: 9 12 Subject: Jazz Ensemble Time: School Year as listed Core Text: Time Unit/Topic Standards Assessments 1st Quarter Arrange a melody Creating #2A Select and develop arrangements, sections,

More information

Musicians Adjustment of Performance to Room Acoustics, Part III: Understanding the Variations in Musical Expressions

Musicians Adjustment of Performance to Room Acoustics, Part III: Understanding the Variations in Musical Expressions Musicians Adjustment of Performance to Room Acoustics, Part III: Understanding the Variations in Musical Expressions K. Kato a, K. Ueno b and K. Kawai c a Center for Advanced Science and Innovation, Osaka

More information
