ASSISTANCE FOR NOVICE USERS ON CREATING SONGS FROM JAPANESE LYRICS

Satoru Fukayama, Daisuke Saito, Shigeki Sagayama
The University of Tokyo, Graduate School of Information Science and Technology
7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
{fukayama,dsaito,sagayama}@hil.t.u-tokyo.ac.jp

ABSTRACT

This paper describes a system designed to assist users in creating original songs from Japanese lyrics with ease. Although software that helps in accomplishing this task has advanced recently, assisting users through the difficulties of composition is still a challenging task. We discuss a possible solution for assisting composers through three approaches: designing a system with direction functionality for generating songs, formulating composition as an optimization problem, and integrating a synthesis and analysis engine for vocals and lyrics. After 54 days of operation of our implemented web-based system, 15,139 songs were automatically generated by 5,098 distinct users. On average, 2.33 songs were generated per access to the website per user, and a wide variety of composition parameters were chosen for song generation. The results indicate that our method is able to greatly assist users in generating original songs from Japanese lyrics.

1. INTRODUCTION

With recent advances in audio manipulation technologies and the widespread use of video sites on the Internet, it has become commonplace to create original songs and broadcast them to the world. This situation can motivate potential users who are not skilled at composing or mixing music but have a desire to create their own original music.

Many difficulties of creating music have been overcome by the introduction of technologies for music content manipulation. Digital audio workstations (DAWs) have enabled users to obtain tracks with high-quality instrument sounds and to edit music through cut and paste and audio effects. Focusing on the more detailed aspects of composition, our research investigates how computers can assist users in overcoming the difficulties of composing melodies, especially when they do not have much proficiency in, or knowledge pertaining to, composition.

Since early research on automatic composition with computers [1], discussions have been held on how computers can assist the user's creation of musical compositions [2]. Graphical interfaces have been exploited for composition systems as well [3], making it possible for users to handle more abstract commands for music generation. New computer languages and data structures have also been proposed for composition [4]; these languages provide environments that enable users to generate music with algorithmic procedures. Computer systems for editing acoustic events and music scores have also been proposed [5, 6]. These attempts raise the question of what kind of interface is user-friendly in such systems. Interpreting music-theoretical knowledge and compositional conventions in a form that computers can handle has also been explored in several approaches: expert-knowledge-based systems [7], a system based on constraint satisfaction [8], systems with genetic algorithms [9], imitation of musical styles with example-based programming [10], probabilistic modeling of music [11, 13] and machine learning [14]. These approaches support users who do not have enough technical background in music or composition.

The aim of this research is to design a system which assists novice users in creating original songs easily.
Reviewing previous automatic composition methods from the viewpoint of assisting novice composers, three problems arise: (1) how to give directions on generating songs, (2) how to maintain consistency with musical theories, and (3) what the most easy-to-use interface would be. In the following sections, we discuss the approaches taken to deal with these problems. They are (1) to design a system which combines direction functionality for generating songs, (2) to formulate composition as an optimization problem, and (3) to integrate a synthesis and analysis engine for vocals and lyrics.

2. SYSTEM DESIGN FOR ASSISTANCE ON CREATING SONGS FROM JAPANESE LYRICS

2.1. Design for giving directions on generating music

2.1.1. Direction based on decomposed components

In order to give directions on generating the melodies of songs, providing examples of existing songs can be effective. Since music can be broken down into components such as melody, harmony and rhythm, songs can also be decomposed into the rhythm of the melody, the chord sequence, the accompaniment and others. In addition, these musical components correlate with one another: for instance, when the chord sequence is in a sad mood, the melody over that chord sequence tends to also be in a sad mood. Hence, directions on generating melodies can be given by referring to the musical components found in existing songs.

Let us define \hat{s} as a song with the melody \hat{m} that the user would like to generate, with a direction that reflects the mood of the chord sequence and the rhythm of the melody appearing in a song s_1, and the accompaniment in another song s_2. Representing the composition of a song as f, the composition can be formulated as follows:

  \hat{s} = (\hat{m}, c_1, r_1, a_2)   (1)
          = f(c_1, r_1, a_2),          (2)

where c_1, r_1 and a_2 are the chord sequence in s_1, the rhythm of the melody in s_1 and the accompaniment pattern in s_2, respectively. These are obtained with:

  (c_1, r_1, a_1) = f^{-1}(s_1),   (3)
  (c_2, r_2, a_2) = f^{-1}(s_2),   (4)

where f^{-1} represents the decomposition of music. Methods for designing f will be discussed in Section 3.

Since the decomposition of songs is a difficult task, we propose preparing libraries which contain typical patterns of chord progressions, melody rhythms and accompaniments. Variety can be expected in the generated results by taking advantage of the vast number of possible combinations of patterns in the libraries. For instance, if 20 patterns were prepared for chord sequences, melody rhythms and accompaniments respectively, 20^3 = 8,000 types of melodies would be possible (a code sketch at the end of Section 2.1 illustrates this combination). Although duplication might occur in the musical styles of the melodies due to poor variety in the libraries, a careful design of the libraries should be able to handle this. In addition, in order not to confuse users by having them choose a lot of parameters, preset parameter sets can be prepared for each musical style.

2.1.2. Editing functionality for composition parameters

Although prepared libraries provide users with the ability to impose directions easily for generating melodies, they may limit song creation, as compositions will be restricted to the patterns available in the libraries. A possible solution to this problem is to install pattern editing functionality with capabilities such as editing chord sequence patterns, rhythm patterns for the melodies, accompaniment patterns, and the analysis results of the accent and phonetics of the lyrics. These interfaces can assist users in composing songs with more specific directions, minus the difficulties of writing totally new chord sequences, rhythm patterns and so forth.

2.1.3. Editing tentatively generated results

After songs are generated by the system, a user may want to change details of the generated songs. Here, it can be hypothesized that users will find editing an existing result less cumbersome than creating a song from scratch. This concept of user assistance can be implemented by keeping the composition parameters accessible once the songs are generated, and by providing a resume button in the interface for setting the parameters.
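The combinatorial reach of such libraries can be shown with a minimal Python sketch of Eqs. (1)-(4). The library contents, the names CHORD_LIB, RHYTHM_LIB and ACCOMP_LIB, and the compose function are hypothetical stand-ins invented for this illustration, not part of the actual implementation:

```python
import itertools
import random

# Hypothetical pattern libraries standing in for c, r and a in Eqs. (1)-(4);
# the deployed system (Section 4.3) ships about 30 chord progressions,
# 65 rhythm patterns and 37 accompaniment patterns.
CHORD_LIB = {"pop": ["C", "G", "Am", "F"], "sad": ["Am", "F", "C", "G"]}
RHYTHM_LIB = {"even_8ths": [0.5] * 8, "dotted": [0.75, 0.25] * 4}
ACCOMP_LIB = {"arpeggio": "arp", "block_chords": "block"}

def compose(chords, rhythm, accomp, lyrics):
    """Stand-in for the composition function f of Eq. (2): a song is
    assembled from independently chosen components plus the lyrics."""
    return {"chords": chords, "rhythm": rhythm, "accomp": accomp,
            "lyrics": lyrics}

# Every (chord, rhythm, accompaniment) triple is a possible direction, so
# the number of reachable combinations is the product of the library sizes
# (20 patterns each would give 20^3 = 8,000, as noted above).
directions = list(itertools.product(CHORD_LIB, RHYTHM_LIB, ACCOMP_LIB))
print(len(directions), "direction triples available")

c, r, a = random.choice(directions)
song = compose(CHORD_LIB[c], RHYTHM_LIB[r], ACCOMP_LIB[a], "yume no naka")
```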
2.2. Design of a user-friendly interface

Lyrics are an easy-to-use input for users who are novices in music, since they require little or no musical knowledge to write (Guido of Arezzo, for instance, proposed a method to create pitches corresponding to the vowels in the lyrics [12]). In case the user cannot find appropriate lyrics to attach a melody to, an automatic lyric generator based on natural language processing techniques can be employed, for example one that interpolates between specified keywords using N-gram models. In general, much of the information contained in the lyrics that could be reflected in a song is difficult to extract (e.g. semantic information). However, the prosody of the lyrics is relatively easy to estimate from the lyrics input with the language-processing frontend of a text-to-speech engine. The structure of a song is often related to how its lyrics are structured; therefore, the linefeed codes in the lyrics input can be used for generating the structural segments of the music (see the sketch at the end of Section 2).

It is possible to display the generated results as scores in musical notation by means of a music notation language. Assuming there are users who are not necessarily capable of reading musical scores, the results should also be audible. The accompaniment audio track can be generated from MIDI data with a MIDI synthesizer, and the singing voice with a singing voice synthesizer. A variety of voice qualities can be obtained by varying the training data set or the vocal tract length parameter of the model.

2.3. Maintenance of consistency between musical components

Since there are dependencies between the musical components and the melody, maintaining consistency between musical components during melody generation is necessary. Directions given on the melody are: (1) lyrics, (2) chord progression, (3) accompaniment, (4) rhythm patterns for the melody, and (5) musical theory such as contrapuntal conventions. Maintaining consistency beyond the given directions can be handled with probabilistic modeling and optimization, as described in Section 3.
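As a small illustration of the interface convention described in Section 2.2, the sketch below derives structural segments from the linefeed codes in the lyrics. It is a minimal reading of the text, assuming the hypothetical markup tokens "[intro]" and "[ending]" for the reserved instrumental symbols, whose actual spelling the paper does not give:

```python
def lyrics_to_segments(raw_lyrics: str):
    """Split lyrics into structural segments at linefeed codes.

    '[intro]' and '[ending]' are invented placeholders for the reserved
    instrumental markup symbols mentioned in Sections 2.2 and 4.2.
    """
    segments = []
    for line in raw_lyrics.splitlines():
        line = line.strip()
        if not line:
            continue  # skip empty lines
        if line in ("[intro]", "[ending]"):
            segments.append({"type": "instrumental", "lyrics": None})
        else:
            segments.append({"type": "vocal", "lyrics": line})
    return segments

demo = "[intro]\nharu no kaze fuku\nsakura mau sora\n[ending]"
print(lyrics_to_segments(demo))
```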

3. ALGORITHM FOR MELODY COMPOSITION FROM JAPANESE LYRICS

In this section, we briefly review the method for composing a melody from Japanese lyrics given a chord sequence pattern, a rhythm pattern for the melody and an accompaniment pattern. This method is mainly based on our previous research [13].

Figure 1. Pitch contours of "ka'mi" and "kami'": the pitch accent of a Japanese word is lexically contrastive, as in "ka'mi" (A) 'god' vs. "kami'" (B) 'paper'.

3.1. Japanese Prosody and its Role in Composition

The Japanese pitch accent is said to have a fixed shape consisting of a sharp decline around the accented syllable, a decline that is usually analysed as a drop from a high (H) tone to a low (L) tone [15]. Furthermore, as shown in Fig. 1, the place of the accent is lexically contrastive, as in "ka'mi" 'god' vs. "kami'" 'paper' [15]. A melody attached to the lyrics causes an effect similar to the accent. Therefore we can assume that the prosody of Japanese lyrics imposes constraints on the pitch motions of the melody.

3.2. Composition of Rhythm

3.2.1. Allocation of Lyrics to the Melody

We assume that a melody consists of segments, each of which corresponds to a phrase structure, and that the lyrics should be divided into segments accordingly. For instance, 2 bars can be treated as one segment for a song with a length of 8 bars. Furthermore, in most classical Japanese songs, one syllable (mora) corresponds to one note in the melody, so the number of notes in each segment is determined by the number of syllables. For dividing the lyrics, the following 3 criteria can be assumed: (1) a similar number of syllables in each segment is preferred, (2) the borders of the segments should not cross word boundaries, and (3) overly short lyrics should be iterated prior to allocation. Under these constraints, we can solve the syllable allocation problem using dynamic programming (a sketch follows at the end of Section 3.2).

3.2.2. Keeping the Unity of Rhythm in a Melody

Even when the number of notes in each segment has been decided, there still exists a large degree of freedom in the rhythm. One possible way to constrain the rhythm is to require that the generated rhythms belong to the same family: two rhythms share similar features when one can be derived from the other by uniting or dividing notes. In practice, tree-structured rhythm templates, as shown in Fig. 2, are prepared beforehand by hand.

Figure 2. By using the rhythm tree (above), a rhythm corresponding to the number of syllables is generated while keeping the unity of the rhythmic features within the same song.
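A minimal sketch of the allocation step in Section 3.2.1 follows, under stated assumptions: the mora counts per word are already known (in the real system they come from the prosody analysis), criterion (1) is scored as the squared deviation of each segment's length from the mean, and criterion (3) is left out. The paper does not specify the exact cost function, so this is an illustration of the dynamic program, not the implementation:

```python
def allocate_syllables(word_moras, n_segments):
    """Divide lyrics into n_segments melody segments by dynamic programming.

    word_moras: mora count of each word, e.g. [2, 3, 2, 4]. Segment
    borders may only fall between words (criterion 2), and segments of
    similar length are preferred (criterion 1), scored here as the squared
    deviation from the mean segment length.
    """
    n = len(word_moras)
    target = sum(word_moras) / n_segments
    prefix = [0]                      # prefix[i] = moras in words[0:i]
    for m in word_moras:
        prefix.append(prefix[-1] + m)

    INF = float("inf")
    # cost[k][i]: best cost of splitting the first i words into k segments
    cost = [[INF] * (n + 1) for _ in range(n_segments + 1)]
    back = [[0] * (n + 1) for _ in range(n_segments + 1)]
    cost[0][0] = 0.0
    for k in range(1, n_segments + 1):
        for i in range(k, n + 1):
            for j in range(k - 1, i):
                seg_len = prefix[i] - prefix[j]
                c = cost[k - 1][j] + (seg_len - target) ** 2
                if c < cost[k][i]:
                    cost[k][i], back[k][i] = c, j

    bounds, i = [], n                 # trace back the optimal borders
    for k in range(n_segments, 0, -1):
        bounds.append(i)
        i = back[k][i]
    return sorted(bounds)

# six words with mora counts 2,3,2,4,3,2 split into 3 segments:
print(allocate_syllables([2, 3, 2, 4, 3, 2], 3))  # -> [2, 4, 6] (5+6+5 moras)
```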
3.3. Composition of Pitches

3.3.1. Composition with Probabilistic Inference

Certain trends can often be observed in melodies. In the case of songs, for instance, the pitches of the melody are constrained by the usual voice range of the singer. The prosody of the lyrics also imposes constraints on the pitch motions of the melody: as reviewed in Section 3.1, the pitch motions of Japanese songs largely follow the upward and downward motions of the prosody of the lyrics. Furthermore, the chord progression, the bass line of the accompaniment part, and the durations of the notes impose constraints on the occurrence and transition of pitches on the basis of the écriture of composition, such as harmony and counterpoint. Any melody obtained from such a song satisfies these constraints; conversely, we can compose a song by finding the melody which optimally meets the conditions.

Let the pitch sequence, as a sequence of MIDI note numbers, be X_1^N = x_1 x_2 ... x_N, and let the sequence of conditions on the pitch sequence be Y_1^N = y_1 y_2 ... y_N, where each y_n consists of a chord label annotated with scale and tonality (c_n), the duration of the note (d_n), the MIDI note number of the accompaniment bass (b_n), and the pitch accent information (a_n), i.e. y_n = (c_n, d_n, b_n, a_n). Let P(X_1^N | Y_1^N) denote the conditional probability of X_1^N given Y_1^N, which represents the tendency of pitch sequences X_1^N under the condition Y_1^N. The composition of pitches for the melody can then be considered as finding an optimal sequence \hat{X}_1^N which maximizes P(X_1^N | Y_1^N):

  \hat{X}_1^N = \arg\max_{X_1^N} P(X_1^N | Y_1^N).   (5)

By assuming P(x_n | X_1^{n-1}, Y_1^N) = P(x_n | x_{n-1}, Y_1^N), equation (5) becomes:

  \hat{X}_1^N = \arg\max_{X_1^N} \prod_{n=1}^{N} P(x_n | X_1^{n-1}, Y_1^N)   (6)
              = \arg\max_{X_1^N} \prod_{n=1}^{N} P(x_n | x_{n-1}, Y_1^N),    (7)

where P(x_1 | x_0, Y_1^N) = P(x_1 | Y_1^N). Since there are 128^N possible pitch sequences, it is computationally unfeasible to search all of them for the optimal one. However, obtaining the optimal pitch sequence becomes O(N) by using dynamic programming [16].
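Equation (7) can be maximized exactly with a Viterbi-style dynamic program. The sketch below assumes a caller-supplied log-probability function; how the actual system derives P(x_n | x_{n-1}, Y_1^N) from chords, bass notes, durations and pitch accents is described in [13] and is not reproduced here, so the toy scoring function at the bottom is purely illustrative:

```python
def viterbi_pitches(n_notes, logp, pitch_range=range(48, 73)):
    """Maximize Eq. (7) by dynamic programming [16].

    logp(n, prev, x) stands in for log P(x_n = x | x_{n-1} = prev, Y_1^N);
    for n == 0 it must ignore prev, matching P(x_1 | x_0, Y_1^N) =
    P(x_1 | Y_1^N). Runtime is O(N) in the number of notes, with a
    constant factor of |pitch_range|^2 per step.
    """
    delta = {x: logp(0, None, x) for x in pitch_range}  # best score ending at x
    back = []                                           # backpointers per note
    for n in range(1, n_notes):
        new_delta, ptr = {}, {}
        for x in pitch_range:
            prev = max(pitch_range, key=lambda p: delta[p] + logp(n, p, x))
            new_delta[x] = delta[prev] + logp(n, prev, x)
            ptr[x] = prev
        delta = new_delta
        back.append(ptr)
    x = max(delta, key=delta.get)                       # best final pitch
    path = [x]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# toy score: stay near MIDI note 60 and avoid large leaps
print(viterbi_pitches(
    8, lambda n, p, x: -0.1 * abs(x - 60) - (0 if p is None else abs(x - p))))
```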

Figure 3. System flow of the web-based automatic song composition system from Japanese lyrics and a choice of parameters.

4. ORPHEUS: AUTOMATIC COMPOSITION SYSTEM FROM JAPANESE LYRICS

4.1. Overview of Orpheus version 3

Orpheus version 3 is a web-based system for automatic composition in which users can create songs from Japanese lyrics with a choice of composition parameters. The system design and the composition algorithm discussed in Sections 2 and 3 have been implemented. Figure 3 shows the flow of the system. This is the third version of our Orpheus series of automatic composition systems.

4.2. Lyrics input and choosing a preset

The lyric input interface appears when the user accesses the web site; it is shown in Fig. 4. Here, users can input their lyrics in the text field. Linefeed codes are used for setting the structure, and reserved symbols are provided for marking up instrumental segments, used for generating an intro or ending for the song. In case users cannot decide what lyrics to input, the system provides an automatic lyrics generator, which generates lyrics from 1 to 5 input keywords and interpolates between the keywords with an N-gram model trained on a lyrics database. Radio buttons for choosing a preset set of composition parameters are available at the bottom of the interface, in order to avoid irritating users by having them set a lot of composition parameters.

4.3. Giving directions on composition

When the user proceeds past the lyric input interface, the interface for giving directions on composing songs appears. This second interface is shown in Fig. 5. Each segment of the song is represented by a box, and users can choose composition parameters for each segment. In the latest version (version 3) of our system, around 30 chord progressions, 65 rhythm patterns for the melody (including 10 patterns able to generate melodies with an auftakt), and 37 accompaniment patterns are installed for the user to give directions on generating songs.

Figure 4. Interface for lyrics input and choosing a preset or set of composition parameters. (http://www.orpheusmusic.org/v3)

The prosody of the lyrics is analyzed with the text-to-speech engine Galatea Talk [17] and shown in the text fields located in the boxes. Users can manually correct the prosody by editing the string in the text field. Composition parameters such as the key settings and the upper and lower bounds of the melody pitches can be set with the pull-down menu options. Further parameters are provided to add variety to the generated songs: the user's name and the title of the song in the score (two text fields at the top of the interface), tempo (10 choices from 40 to 180 beats per minute), voice (11 choices obtained by varying the vocal tract length parameter), the number of accompaniment tracks (two tracks at maximum), the choice of musical instrument for each accompaniment track, and the choice of drums (24 patterns).

4.4. Composition results and their dissemination

As a result of the composition algorithm, users obtain songs satisfying the constraints given by the parameters, the musical theories and the upward and downward pitch motions of the lyrics. This interface is shown in Fig. 6.
Figure 5. Interface for giving directions on generating songs. Each box represents an 8-bar segment. Composition parameters can be specified with the pull-down menus in each box. The text field located in each box is for editing the analyzed prosody of the Japanese lyrics. The box with no text field is for an instrumental segment.

Figure 6. Interface for showing and downloading the results. Users can listen to generated songs with a synthesized singing voice accompanied by instrumental tracks. The score data and an mp3 file can be downloaded. Functionality for tweeting the results on Twitter and for sending comments to the system administrator is also installed.

Vocals are generated with a vocal synthesizer based on a hidden Markov model [18]. Scores are generated with the music notation language LilyPond. Users can download the score and the audio file of the generated songs. Dissemination of the results via Twitter is also possible. If the user is not satisfied with the result, it is always possible to return to the previous webpage, change the parameters and execute the composition again.

5. DISCUSSION OF OPERATION RESULTS

During 54 days of operation, 5,098 distinct users tried Orpheus version 3, and 15,139 songs were generated over 11,578 accesses to the composition server (Table 1). A daily comparison of the numbers of generated songs, server accesses and distinct users is shown in Fig. 7; on average, 280 songs were generated daily during operation. The number of distinct users was counted by detecting and removing duplicate IP addresses in the access log.

In order to analyze how well our system assisted users in composing songs, we calculated the average number of generated songs per access, R, with the following equation:

  R = \frac{1}{N} \sum_{i=1}^{N} \frac{S_i}{A_i},   (8)

where N is the total number of users, i is the index of each user, S_i is the number of songs which user i generated and A_i is the access count of user i. We excluded the counts of generated songs and accesses related to our research team members by checking the IP addresses, and accesses in which users did not compose were not counted. The result indicates that 2.33 songs on average were generated per access to the system, suggesting that our system provides an adequate means for novice users to compose their original songs.

Table 1. Statistics on the 54-day operation of the web-based automatic composition service. N is the number of distinct users, detected by checking for duplicated IP addresses; A_i and S_i are the numbers of accesses and generated songs for each distinct user, respectively. Accesses in which users did not compose are not counted.

  Distinct users (N)                                   5,098
  Access counts (\sum_i A_i)                          11,578
  Generated songs (\sum_i S_i)                        15,139
  Generated songs per access ((1/N) \sum_i S_i/A_i)     2.33

Statistics on the chosen composition parameters are shown in Table 2. The results show that a variety of parameters were chosen for composition, which may indicate that users were able to give directions on composing songs with our prepared parameter sets and preset styles. However, more investigation is needed to find out whether the generated results satisfied the users' intentions.
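For concreteness, the statistic R of Eq. (8) can be recomputed from an access log along the following lines. The log format, the field layout and the songs_per_access helper are assumptions made for this sketch; only the filtering rules (dropping the research team's IP addresses and accesses without composition) come from the text above:

```python
from collections import defaultdict

def songs_per_access(log_entries, excluded_ips=frozenset()):
    """Recompute R of Eq. (8) from (ip, songs_generated) access records.

    Accesses with no composition, and entries from excluded IP addresses
    (e.g. the research team), are dropped, following Section 5.
    """
    accesses = defaultdict(int)   # A_i, keyed by IP address
    songs = defaultdict(int)      # S_i
    for ip, n_songs in log_entries:
        if ip in excluded_ips or n_songs == 0:
            continue
        accesses[ip] += 1
        songs[ip] += n_songs
    users = list(accesses)        # distinct users approximated by distinct IPs
    return sum(songs[ip] / accesses[ip] for ip in users) / len(users)

log = [("10.0.0.1", 3), ("10.0.0.1", 2), ("10.0.0.2", 0), ("10.0.0.2", 2)]
print(songs_per_access(log))      # (5/2 + 2/1) / 2 = 2.25
```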
In future versions, in order to remove the limitations that the pre-installed composition parameters place on user creation, functionality for users to upload patterns through the web can be provided. Furthermore, a function to rank those uploaded parameters or generated results may encourage users to compose more original songs. This may bring about a social network of musical composition on the web.

Table 2. Top 10 presets (sets of composition parameters) during the 54 days of operation.

  Style preset (top 10)                  Frequency
  Singing with guitar style (default)         3622
  Rock 1 style                                2484
  Old Japanese Meiji-era songs style          1658
  Nursery songs 1 style                       1417
  Ballade style                               1285
  Japanese typical school song style          1145
  Pops style                                   951
  Rock 2 style                                 781
  Yesterday (from The Beatles) style           583
  Nursery songs 2 style                        576

Figure 7. Numbers of generated songs, server accesses and distinct users per day (data for 49 of the 54 days is shown): on average, 280 songs were generated per day. Distinct users were counted by detecting and removing duplicate IP addresses in the access log.

6. CONCLUSION

We discussed a method to assist novice users in the creation of original songs from Japanese lyrics, and introduced our system Orpheus version 3. In order to help navigate the user through the difficult process of composition, we proposed three system designs as solutions: (1) to design a system with direction functionality for generating songs, (2) to formulate composition as an optimization problem, and (3) to integrate a synthesis and analysis engine for vocals and lyrics. Our method was evaluated through the web service of our automatic composition system. The average number of generated songs per access to the web server was 2.33, and in total 15,139 songs were generated automatically during 54 days of operation. Various presets of composition parameters were chosen for giving directions on generating songs. These results indicate that our method is a possible solution for encouraging novice users to compose their own original songs. For future work, we plan to add functionality for uploading music components and recommending generated songs.

7. REFERENCES

[1] L. Hiller and L. Isaacson, Experimental Music, McGraw-Hill, 1959.
[2] S. Kaske, "A Conversation with Clarence Barlow," Computer Music Journal, vol. 9, no. 1, 1985.
[3] M. Mathews and L. Rosler, "Graphical language for the scores of computer-generated sounds," Perspectives of New Music, vol. 6, no. 2, pp. 92-118, 1968.
[4] M. Balaban, et al., "Abstraction as a Means for End-User Computing in Creative Applications," IEEE Trans. on Systems, Man, and Cybernetics, Part A, vol. 32, no. 6, pp. 640-653, 2002.
[5] R. Baker, et al., "MUSICOMP: MUsic Simulator-Interpreter for COMpositional Procedures for the IBM 7090 electronic digital computer," Urbana: University of Illinois, Experimental Music Studio, Tech. Rep., 1963.
[6] G. M. Koenig, "Project 1: a programme for musical composition," Electronic Music Reports, vol. 2, pp. 32-44, 1970.
[7] K. Ebcioglu, "An Expert System for Harmonizing Four-Part Chorales," Computer Music Journal, vol. 12, no. 3, pp. 43-51, 1988.
[8] M. Henz, et al., "COMPOzE - intention-based music composition through constraint programming," in Proc. ICTAI, pp. 118-121, 1996.
[9] J. A. Biles, "GenJam: A Genetic Algorithm for Generating Jazz Solos," in Proc. ICMC, 1994.
[10] D. Cope, "Computer Model of Music Composition," Machine Models of Music, pp. 403-425, 1992.
[11] F. Pachet, "The Continuator: Musical Interaction With Style," Journal of New Music Research, vol. 32, no. 3, pp. 333-341, 2003.
[12] G. Nierhaus, Algorithmic Composition, Springer-Verlag Wien, 2009.
[13] S. Fukayama, et al., "Automatic song composition from the lyrics exploiting prosody of the Japanese lyrics," in Proc. SMC, 2010.
[14] S. M. Schwanauer, "A Learning Machine for Tonal Composition," Machine Models of Music, pp. 512-532, 1992.
[15] M. E. Beckman and J. B. Pierrehumbert, "Intonational structure in Japanese and English," Phonology Yearbook 3, pp. 255-309, 1986.
[16] R. E. Bellman, Dynamic Programming, Princeton University Press, 1957.
[17] S. Kawamoto, et al., "Galatea: Open-Source Software for Developing Anthropomorphic Spoken Dialog Agents," Life-Like Characters, Springer-Verlag, pp. 187-212, 2004.
[18] S. Sako, et al., "A Singing Voice Synthesis System Based on Hidden Markov Model," Transactions of Information Processing Society of Japan, pp. 719-727, 2004.