Melodic Outline Extraction Method for Non-note-level Melody Editing

Yuichi Tsuchiya, Nihon University, tsuchiya@kthrlab.jp
Tetsuro Kitahara, Nihon University, kitahara@kthrlab.jp

ABSTRACT

In this paper, we propose a method for extracting a melodic outline from a note sequence and a method for re-transforming the outline into a note sequence for non-note-level melody editing. Many systems have been developed that automatically create melodies. When the melody output by an automatic music composition system is not satisfactory, the user has to modify it either by re-executing the composition system or by editing the melody on a MIDI sequencer. The former option has the disadvantage that it is impossible to edit only part of the melody, and the latter is difficult for non-experts, i.e., musically untrained people. To solve this problem, we propose a melody editing procedure based on a continuous curve of the melody called a melodic outline. The melodic outline is obtained by applying the Fourier transform to the pitch trajectory of the melody and extracting the low-order Fourier coefficients. Once the user redraws the outline, it is transformed back into a note sequence by the inverse of the extraction procedure together with a hidden Markov model. Experimental results show that non-experts can edit melodies with reasonable ease and satisfaction.

1. INTRODUCTION

Automatic music composition systems [1-6] give the user original music without requiring musically difficult operations. These systems are useful, for example, when a musically untrained person wants original (copyright-free) background music for a movie. They automatically generate melodies and backing tracks from user input such as lyrics and style parameters. In most cases, however, the generated pieces do not completely match those desired or expected by the user, because it is difficult to express such desires as style parameters. The common approach to this problem is to manually edit the generated pieces with a MIDI sequencer, but this is not an easy operation for musically untrained people.

The goal of this study is to achieve an environment that enables musically untrained users to explore satisfactory melodies through repeated trial-and-error editing of melodies generated by automatic music composition systems. There are two reasons why it is difficult for musically untrained people to use a conventional MIDI sequencer. The first is that musically untrained listeners understand music without mentally representing audio signals as musical scores [7]. The melody representation used for editing should therefore not be based on musical notes; it should capture the coarse structure of the melody that an untrained person would recognize in an audio signal. The second is that it is difficult for untrained people to avoid dissonant notes in a MIDI sequencer, so computational support is needed to avoid such notes.

In this paper, we propose a new sub-symbolic melody representation called a melodic outline. The melodic outline represents only the coarse temporal characteristics of the melody; the note-wise information of the melody is hidden.
This representation is obtained by applying the Fourier transform to the pitch trajectory of the melody. Because low-order Fourier coefficients represent coarse melodic characteristics and high-order ones represent fine characteristics, the melodic outline is obtained by applying the inverse Fourier transform to only the low-order coefficients. Once the melodic outline is obtained, the user can redraw it with a mouse. The redrawn outline is transformed into a sequence of notes by the inverse procedure of melodic outline extraction; in this process, notes dissonant to the accompaniment are avoided by using a hidden Markov model (HMM).

The rest of the paper is organized as follows. In Section 2, we describe the concept of the melodic outline. In Section 3, we present a method for melodic outline extraction and for conversion of the outline back into a sequence of notes. In Section 4, we report experimental results. Finally, we conclude the paper in Section 5.

2. BASIC CONCEPT OF MELODIC OUTLINE

A melodic outline is a melody representation in which the melody is represented as a continuous curve; an example is shown in Figure 1. A melodic outline is mainly used for editing a melody in a three-step process: (1) the target melody, represented as a sequence of notes, is automatically transformed into a melodic outline, (2) the melodic outline is redrawn by the user, and (3) the redrawn outline is transformed back into a sequence of notes. The key technology for achieving this is the mutual transform between a note-level melody representation and a melodic outline. We think this mutual transform should satisfy the following requirements:

1. A melodic outline does not explicitly represent the pitch and note value of each note.
2. When a melodic outline is inversely transformed into a note sequence without any editing, the result should be equivalent to the original melody.
3. When a melodic outline edited by a user is transformed into a note sequence, musically inappropriate notes (e.g., notes causing dissonance) should be avoided.

Figure 1. Example of melodic outline: (a) input melody, (b) melodic outline.

No previous studies have proposed melody representations satisfying all these requirements. Various methods for transforming a melody into a lower-resolution representation have been proposed, such as [8], but these representations are designed for melody matching in query-by-humming music retrieval, so they cannot be inversely transformed into a sequence of notes. OrpheusBB [9] is a human-in-the-loop music composition system that enables users to edit automatically generated content when it does not satisfy their desire. When the user edits some part of the content, this system automatically regenerates the remaining part, but the editing is performed at the note level.

The flow of melody editing is shown in Figure 2. The method supposes that the user composes a melody with an automatic music composition system. The melody is transformed into a melodic outline with the method described in Section 3.1. The user can freely redraw the melodic outline. Using the method described in Section 3.2, the melodic outline is inversely transformed into a note sequence. If the user is not satisfied with the result, the user edits the melodic outline again. The user can repeat this editing process until a satisfactory melody is obtained.

Figure 2. Flow of melody editing.

3. METHOD FOR MUTUAL TRANSFORM OF MELODIC OUTLINE AND NOTE SEQUENCE

In this section, we describe our melody editing method, which follows the process described above (Figures 3 and 4). It consists of three steps: (1) transform of a note sequence into a melodic outline, (2) editing of the melodic outline, and (3) inverse transform of the edited melodic outline into a note sequence.

3.1 Transform of a Note Sequence into a Melodic Outline

The given MIDI sequence of a melody (Figure 3 (a)) is transformed into a pitch trajectory (Figure 3 (b)). The pitch is represented logarithmically, where middle C is 60.0 and a semitone corresponds to 1.0. (The difference from MIDI note numbers is that non-integer values are allowed.) Regarding the pitch trajectory as a periodic signal, the Fourier transform is applied to it. Note that the input to the Fourier transform is not an audio signal, so the result does not represent a sound spectrum; because the transform is applied to the pitch trajectory of a melody, the result represents the temporal motion of the melody. Low-order Fourier coefficients represent slow motion in the melody, while high-order coefficients represent fast motion. By extracting the low-order Fourier coefficients and applying the inverse Fourier transform to them, a rough pitch contour of the melody, i.e., the melodic outline, is obtained (Figure 3 (c)).

Figure 3. Overview of extracting a melodic outline from a note sequence: (a) MIDI sequence of the melody, (b) pitch trajectory, (c) melodic outline.
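To make the extraction step concrete, the following is a minimal Python/NumPy sketch of how Section 3.1 could be realized. It is not the authors' implementation: the 50 ms sampling step, the cutoff of eight retained coefficients, the handling of rests, and all function names are our own assumptions.

```python
import numpy as np

def notes_to_pitch_trajectory(notes, step=0.05):
    """Sample a note sequence of (onset_sec, duration_sec, pitch) triplets into
    a pitch trajectory with one value every `step` seconds.
    Pitch is logarithmic: middle C = 60.0, one semitone = 1.0."""
    total = max(onset + dur for onset, dur, _ in notes)
    traj = np.zeros(int(round(total / step)))   # rests are left at 0 for simplicity
    for onset, dur, pitch in notes:
        start = int(round(onset / step))
        end = int(round((onset + dur) / step))
        traj[start:end] = pitch
    return traj

def extract_outline(traj, n_low=8):
    """Keep only the lowest-order Fourier coefficients of the pitch trajectory
    (treated as one period of a periodic signal) and invert them, yielding a
    smooth melodic outline. The full coefficient vector is also returned,
    since the high-order part is needed later to reconstruct unedited parts."""
    coeffs = np.fft.rfft(traj)
    low = np.zeros_like(coeffs)
    low[:n_low] = coeffs[:n_low]        # low order = slow melodic motion
    outline = np.fft.irfft(low, n=len(traj))
    return outline, coeffs

if __name__ == "__main__":
    # four quarter notes at 120 BPM: C4, E4, G4, E4
    notes = [(0.0, 0.5, 60), (0.5, 0.5, 64), (1.0, 0.5, 67), (1.5, 0.5, 64)]
    traj = notes_to_pitch_trajectory(notes)
    outline, coeffs = extract_outline(traj)
    print(np.round(outline[:8], 2))
```

Keeping more low-order coefficients yields an outline that follows the melody more closely; keeping fewer yields a smoother curve.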
3.2 Inverse Transform of a Melodic Outline into a Note Sequence

Once part of the melodic outline has been redrawn, the redrawn outline is transformed into a note sequence. An overview of the procedure is shown in Figure 4. First, the Fourier transform is applied to the redrawn outline (Figure 4 (a)). Then, the high-order Fourier coefficients of the original pitch trajectory, which were removed when the melodic outline was extracted, are added to the Fourier coefficients of the redrawn outline, so that the non-redrawn part of the outline reproduces the same pitch trajectory as the original melody. Next, the inverse Fourier transform is applied, producing the post-edit pitch trajectory (Figure 4 (b)).

Figure 4. Overview of transforming a melodic outline into a note sequence: (a) edited melodic outline, (b) generated pitch trajectory, (c) generated melody.
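Continuing the sketch above (with the same assumed cutoff `n_low` and helper names, which are ours rather than the paper's), the reconstruction of the post-edit pitch trajectory could look as follows: the low-order coefficients come from the redrawn outline, and the high-order coefficients come from the original trajectory.

```python
import numpy as np

def rebuild_pitch_trajectory(edited_outline, original_coeffs, n_low=8):
    """Combine the low-order Fourier coefficients of the redrawn outline with
    the high-order coefficients that were removed during outline extraction,
    then invert to obtain the post-edit pitch trajectory."""
    merged = original_coeffs.copy()              # high-order part of the original
    edited = np.fft.rfft(np.asarray(edited_outline, dtype=float))
    merged[:n_low] = edited[:n_low]              # low-order part from the edit
    return np.fft.irfft(merged, n=len(edited_outline))

# Example (continuing the previous sketch):
#   outline, coeffs = extract_outline(traj)
#   outline[len(outline)//2:] += 3.0             # crude stand-in for a user edit
#   new_traj = rebuild_pitch_trajectory(outline, coeffs)
```

If the outline is returned unedited, the merged spectrum equals the original one, so the original trajectory is recovered up to numerical precision, in line with requirement 2 of Section 2.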

Next, the pitch trajectory is transformed into a note sequence. In this process, notes that cause dissonance with the accompaniment should be avoided; this is achieved using a hidden Markov model (HMM). The HMM used here is shown in Figure 5. The model is formulated based on the idea that the observed pitch trajectory O = o_1 o_2 ... o_N is emitted with random deviation from a hidden sequence of note numbers H = h_1 h_2 ... h_N that does not cause dissonance. The HMM consists of hidden states {s_i}, each of which corresponds to a note number (therefore, each h_n takes an element of {s_i}). Each state s_i emits a pitch value following a normal distribution N(i, σ²). For example, the state s_60, corresponding to note number 60, follows a normal distribution with a mean of 60.0 and a variance of σ². The variance σ² is common to all states and is determined experimentally; it is set to 13 in the current implementation. In the current implementation, 36 states, from s_48 to s_84, are used.

The transition probability P(s_j | s_i) is determined as P(s_j | s_i) = p_1(s_j) p_2(s_i, s_j), where p_1(s_j) is the probability that each note number appears in the target key (C major in the current implementation). It is defined experimentally, based on the idea of avoiding non-diatonic notes, as follows:

p_1(s_i) = 16/45 (C), 2/45 (D), 8/45 (E), 3/45 (F, A), 12/45 (G), 1/45 (B), 0 (non-diatonic notes).

In addition, p_2(s_i, s_j) is the probability that note numbers i and j appear in succession. It is also defined experimentally, based on the pitch interval between the two note numbers, as follows:

p_2(s_i, s_j) = 1/63 (augmented fourth, diminished fifth, major sixth, minor seventh, major seventh), 2/63 (perfect prime), 4/63 (minor sixth), 6/63 (perfect fourth, perfect fifth), 10/63 (minor second, major second, minor third, major third).

Currently, the editing targets only the diatonic scale. These transition probabilities are applied only at note boundaries, and no transitions are accepted between the onset and offset times of a note, because only pitch editing is currently supported for simplicity. As described above, the transition probabilities are manually determined so that non-diatonic notes in the C major scale are avoided. However, the transition probabilities could also be learned from a melody corpus; if they were learned from melodies of a particular genre (e.g., jazz), they would reflect the melodic characteristics of that genre.

By using the Viterbi algorithm on this HMM, we obtain a sequence of note numbers H = h_1 h_2 ... h_N (which would not contain dissonant notes) from the pitch trajectory O = o_1 o_2 ... o_N. Finally, the result is output in the MIDI format.

Figure 5. Overview of the HMM for estimating a note sequence from the post-edit pitch trajectory.
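The following sketch shows a Viterbi decoder consistent with the description above, using the state range s_48 to s_84, the emission variance of 13, and the p_1/p_2 values given in the text. Everything else is an assumption on our part: one observation per note (e.g., the mean of the post-edit trajectory over the note's duration), p_1 reused as the initial distribution, and octave folding for intervals larger than a major seventh.

```python
import numpy as np

STATES = np.arange(48, 85)            # note numbers s_48 ... s_84
SIGMA2 = 13.0                         # emission variance sigma^2 from the text

# p_1: prior of each pitch class in C major (non-diatonic notes get 0)
P1_PC = {0: 16/45, 2: 2/45, 4: 8/45, 5: 3/45, 7: 12/45, 9: 3/45, 11: 1/45}

# p_2: probability of the interval (in semitones) between successive notes
P2_INTERVAL = {0: 2/63, 1: 10/63, 2: 10/63, 3: 10/63, 4: 10/63,
               5: 6/63, 6: 1/63, 7: 6/63, 8: 4/63,
               9: 1/63, 10: 1/63, 11: 1/63}

def log_p1(note):
    p = P1_PC.get(note % 12, 0.0)
    return np.log(p) if p > 0.0 else -np.inf

def log_transition_matrix():
    """log P(s_j | s_i) = log p_1(s_j) + log p_2(s_i, s_j)."""
    n = len(STATES)
    logA = np.full((n, n), -np.inf)
    for a, si in enumerate(STATES):
        for b, sj in enumerate(STATES):
            # folding intervals beyond an octave back into one octave is our assumption
            p2 = P2_INTERVAL.get(abs(int(sj) - int(si)) % 12, 0.0)
            if p2 > 0.0:
                logA[a, b] = log_p1(sj) + np.log(p2)
    return logA

def decode_notes(observed_pitches):
    """Viterbi decoding with one pitch observation per note."""
    logA = log_transition_matrix()
    obs = np.asarray(observed_pitches, dtype=float)
    # Gaussian log-likelihood up to a constant shared by all states
    log_emit = -((obs[:, None] - STATES[None, :]) ** 2) / (2.0 * SIGMA2)
    init = np.array([log_p1(s) for s in STATES])  # p_1 as initial distribution (assumption)
    n_obs, n_states = log_emit.shape
    delta = init + log_emit[0]
    back = np.zeros((n_obs, n_states), dtype=int)
    for t in range(1, n_obs):
        scores = delta[:, None] + logA            # rows: previous state, cols: current state
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(n_states)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(n_obs - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [int(STATES[i]) for i in reversed(path)]

if __name__ == "__main__":
    # one (possibly non-integer) pitch value per note of the post-edit trajectory
    print(decode_notes([60.3, 63.6, 66.8, 71.9]))
```

Because p_1 assigns zero probability to non-diatonic pitch classes, the decoder can never output them, which realizes the avoidance of inappropriate notes described above.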
4. IMPLEMENTATION AND EXPERIMENTS

4.1 Implementation

We implemented a melody editing system based on the proposed method. In this system, the original melody is assumed to be an output of Orpheus [4]. After creating a melody with Orpheus, the user inputs the melody's ID given by Orpheus into our system. The system then obtains a MIDI file from the Orpheus web server and displays the melody both in a note-level representation and as a melodic outline (Figure 6 (a)). Once the user redraws the melodic outline, the system immediately regenerates the melody with the method described in Section 3 and updates the display (Figure 6 (b)). If the user is not satisfied after listening to the regenerated melody, the user can redraw the melodic outline repeatedly until a satisfactory melody is obtained.

Figure 6. The user interface of the editing display: (a) input melody, (b) edited melodic outline.

4.2 Example of Melody Editing

We demonstrate an example of melody editing using a melodic outline. As the target of editing, we used a four-measure melody generated by Orpheus [4], which generates a melody based on the prosody of Japanese lyrics. We input a sentence, "Yume mita mono wa hitotsu no kofuku / Negatta mono wa hitotsu no ai" (literally, "All I dream is a piece of happiness. All I hope is a piece of love."), taken from the Japanese poem "Yume mita mono wa..." by Michizo Tatehara, and obtained the melody shown in Figure 7 (a). Figure 7 (b) shows the melodic outline extracted from this melody. From this outline, we can see the following: (1) the melody has disjunct motion in the second measure, (2) the pitch rises gradually from the third measure to the fourth measure, and (3) the melody ends with a downward motion in pitch.

We edited this melody through the melodic outline. The last half of the outline was redrawn so that the center of gravity of the pitch motion is higher than that of the original melody. The redrawn melodic outline and the melody generated from it are shown in Figures 7 (c) and (d), respectively. The generated melody reflects the editing: it rises to higher pitches than the original melody.

Figure 7. Example of melody editing: (a) input melody, (b) melodic outline of (a), (c) edited melodic outline, (d) note representation of the generated melody.

4.3 User Test

We asked human subjects to use this melody editing system. As in the previous section, the melody to be edited was prepared by giving Orpheus a sentence, "Osake wo nondemo ii / Sorega tanosii kotodattara" (literally, "You may drink alcohol, if it makes you happy."), taken from the Japanese poem "Clover no harappa de..." by Junko Takahashi. The melody is shown in Figure 8 (a). We asked the subjects to edit this melody in two ways: the first was based on the instruction to make all notes in the last measure lower; the second was free editing. After each editing task, we asked the subjects to answer the following questions:

Q1 Were you satisfied with the output?
Q2 Did you edit the melody without difficulty?
Q3 Were you able to edit the melody as desired?
(7: strongly agree, 6: agree, 5: weakly agree, 4: neutral, 3: weakly disagree, 2: disagree, 1: strongly disagree)

The subjects were six musically untrained people (20-21 years old).

The results of the questionnaire for the instructed editing are listed in Table 1. Almost every subject agreed on all three questions.

Table 1. Questionnaire results (instructed editing).
      A  B  C  D  E  F  average
Q1    6  5  7  6  7  7  6.3
Q2    6  7  5  6  7  6  6.1
Q3    5  6  6  6  6  6  5.8

Figures 8 (b) and (c) show the melodies generated by Subjects C and F, respectively. The melody in Figure 8 (b), as instructed, has lower pitches in the last measure than the original melody, and is musically acceptable. Although the melody in Figure 8 (c) has some higher notes in the last measure than the original melody, it is also musically acceptable.

The results of the questionnaire for the free editing are listed in Table 2. Most subjects agreed on all the questions.

Table 2. Questionnaire results (free editing).
      A  B  C  D  E  F  average
Q1    6  6  6  5  6  5  5.6
Q2    6  7  7  3  6  6  5.8
Q3    6  3  6  3  7  6  5.1

Figures 8 (d) and (e) show the melodies generated by Subjects A and E, which are mostly musically acceptable. The third measure of Subject E's melody starts with a non-diatonic note, which might cause a sense of incongruity. The subject, however, is probably satisfied with this output, because the subject's answer to Q1 is 7. Two subjects answered 3 for Q3, which could be because the time available for the experiment was limited. In the future, we will conduct a long-term experiment.

Figure 8. Melodies created by subjects.

5. CONCLUSION

In this paper, we proposed a method that enables musically untrained people to edit a melody at the non-note level by transforming the melody into a melodic outline. The melodic outline is obtained by applying the Fourier transform to the pitch trajectory of the melody and extracting only the low-order Fourier coefficients. After the outline is redrawn by the user, it is transformed back into a note sequence; in this transform, a hidden Markov model is used to avoid notes dissonant to the accompaniment. Experimental results show that both the editing user interface and its results are satisfactory to some extent for human subjects.

In the content design field, it is said that controllers for editing content should be based on the cognitive structure of the content and operate at an appropriate abstraction level [10]. A user interface for editing content that satisfies this requirement is called directable. Melodic outlines are designed based on the insight that non-professional listeners cognize melodies without mentally obtaining note-level representations; the melody editing interface based on melodic outlines is therefore considered to achieve directability in melody editing.

We have several future issues. First, we plan to extend the method to edit the rhythmic aspect of melodies. Second, we will try to learn the state transition probability matrix from a music corpus; in particular, we will try to obtain a matrix that reflects the characteristics of a particular genre by training on a corpus of that genre. Finally, we plan to conduct a long-term user experiment to investigate how users acquire or develop melodic schemata through our system.

Acknowledgments

We thank Dr. Hiroko Terasawa and Dr. Masaki Matsubara (University of Tsukuba) for their valuable comments.

6. REFERENCES

[1] L. Hiller and L. Isaacson, "Musical composition with a high-speed digital computer," Journal of the Audio Engineering Society, 1958.

[2] C. Ames and M. Domino, "Cybernetic composer: An overview," in Understanding Music with AI, M. Balaban, K. Ebcioglu, and O. Laske, Eds. Association for the Advancement of Artificial Intelligence Press, pp. 186-205, 1992.

[3] D. Cope, Computers and Musical Style, Oxford University Press, 1991.

[4] S. Fukayama, K. Nakatsuma, S. Sako, T. Nishimoto, and S. Sagayama, "Automatic song composition from the lyrics exploiting prosody of the Japanese language," in Proc. Sound and Music Computing Conference, 2010.

[5] D. Ando, P. Dahlstedt, M. G. Nordahl, and H. Iba, "Computer aided composition by means of interactive GP," in Proc. International Computer Music Conference, pp. 254-257, 2006.

[6] J. A. Biles, "GenJam: A genetic algorithm for generating jazz solos," in Proc. International Computer Music Conference, 1994.

[7] M. Goto, "A real-time music-scene-description system: Predominant-F0 estimation for detecting melody and bass lines in real-world audio signals," Speech Communication, 2004.

[8] M. Marolt, "A mid-level representation for melody-based retrieval in audio collections," IEEE Transactions on Multimedia, pp. 1617-1625, 2008.

[9] T. Kitahara, S. Fukayama, H. Katayose, S. Sagayama, and N. Nagata, "An interactive music composition system based on autonomous maintenance of musical consistency," in Proc. Sound and Music Computing Conference, 2011.

[10] H. Katayose and M. Hashida, "Discussion on directability for generative music systems," SIG Technical Reports of Information Processing Society of Japan, pp. 99-104, 2007.