µtunes: A Study of Musicality Perception in an Evolutionary Context

Kirill Sidorov, Robin Hawkins, Andrew Jones, David Marshall
Cardiff University, UK
K.Sidorov@cs.cardiff.ac.uk
http://ontario.cs.cf.ac.uk/mutunes

ABSTRACT

We have conducted an experiment with the intent to determine and quantify what properties of monophonic melodies humans perceive as appealing. This was done in an evolutionary setting: a population of melodies was subjected to Darwinian selection, with popular human vote serving as the basis for the fitness function. We describe the experimental procedure and the measures taken to avoid or minimise possible experimental biases, and address the problem of extracting maximum fitness information from sparse measurements. We have rigorously analysed the course of the resulting evolutionary process and have identified several important trends. In particular, we have observed a decline in the complexity of melodies over time; an increase in diatonicity, consonance, and rhythmic variety; well-defined principal directions of evolution; and even rudimentary evidence of speciation and genre-forming. We discuss the relevance of these effects to the question of what is perceived as a pleasant melody. Such an analysis has not been done before; hence the novel contribution of this paper is the study of the psychological biases and preferences at play when popular vote is used as the fitness function in an evolutionary process.

1. INTRODUCTION

The evolutionary approach to music composition is well described in the literature, whether the fitness information is provided by human evaluation [1, 2] or otherwise [3, 4]. Recently, the importance of consumers' preferences in driving the evolution of music was demonstrated in [2]. While their conclusions have been criticised (especially as to the role of biases in selection [5]), the experiment of [2] was the first large-scale attempt at music evolution with popular vote serving as the fitness function.
Further, [5, 6, 7] argue that the recombination and transformation of information according to the psychological biases of individuals are the crucial element of cultural evolution. In contrast to [2], where the process of evolution itself was examined, this work concentrates on, and attempts to measure, the above-mentioned psychological and cultural biases that guide the evolution of music, and hence attempts to quantify which aspects of music the respondents find appealing.

Copyright: © 2014 Kirill Sidorov et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

2. EXPERIMENTAL SETUP

In this experiment, we maintain and evolve a population of melodies. To minimise model bias, we have adopted the simplest representation of the population, in which the phenotypes and genotypes of individuals are identical. Each individual is represented by two lists: a list of intervals (in semitones) between successive notes in the melody, and a list of note durations (as integer multiples of the time quantum, in this case Δt = 1/16th note). The total duration of each melody is capped at 64 1/16th notes, which is equivalent to four bars of 4/4. At the start of evolution, the population is initialised with randomly generated melodies in which the intervals are drawn from an integer Gaussian distribution with µ = 0 and σ = 7 semitones (middle C is chosen as the first note of every melody). Gaussian sampling is used to avoid biasing the respondents towards diatonicity. The note durations in the initial population are all equal (crotchets). Thus, the initial melodies are the results of Gaussian random walks, in which the expected root-mean-square deviation of the last note after n = 15 steps is σ√n ≈ 27 semitones.
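As a concrete sketch of this initialisation, assuming the interval/duration representation just described: the function names and the exact wrap-around used to confine melodies to one octave (described below) are illustrative assumptions, not the authors' code.

```python
import random

QUANTUM = 1        # one 1/16th note
TOTAL_LEN = 64     # four bars of 4/4, in 1/16th-note units
CROTCHET = 4       # a crotchet spans four 1/16th notes

def random_melody(rng, sigma=7.0):
    """Initial genome: a Gaussian random walk (mu = 0, sigma = 7 semitones)
    encoded as inter-note intervals, with equal crotchet durations."""
    n_notes = TOTAL_LEN // CROTCHET                      # 16 notes
    intervals = [round(rng.gauss(0.0, sigma)) for _ in range(n_notes - 1)]
    durations = [CROTCHET] * n_notes
    return intervals, durations

def confine_pitches(intervals, start=60, low=48, high=72):
    """Realise the walk from middle C (MIDI 60) and fold it back into
    +/- one octave of the start note ('taking the modulo')."""
    pitches, p = [start], start
    for iv in intervals:
        p = low + (p + iv - low) % (high - low)          # wrap into [low, high)
        pitches.append(p)
    return pitches
```

With `sigma = 7` and 15 intervals, the unconfined walk has an RMS deviation of about 7·√15 ≈ 27 semitones, which is why the confinement step matters.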
The melodies were then confined (by taking the modulo) to the range of ±1 octave from the starting note. The population size is kept constant at every generation (N = 100 exemplars), with the entire evolutionary history being recorded for later analysis. The choice of population size depends on two factors. First, the population should ideally be large enough for emergent phenomena, such as speciation, to be observed. Second, too large a population would not allow us to observe a substantial number of generations: indeed, if ranking the population requires O(N log N) comparisons, then given a budget of C comparisons the number of generations we can observe is at most C/(N log N).

To sample popular opinion, we set up a website (http://ontario.cs.cf.ac.uk/mutunes; the name µtunes is pronounced to rhyme with "mutants") which offers visitors two melodies from the current population. The visitors are prompted to play back the melodies and select the one they prefer. Their response is recorded and serves to update the population rank. In contrast to [2], where respondents choose between five categories, in our experiment they were presented with two options (select the better of two melodies). The reason for

this is twofold. First, the problem of optimal ranking using expensive pairwise comparisons is well studied in the literature [8, 9]. This is a non-trivial task, since we need to find the best estimate of the population rank given a limited number of pairwise comparisons which, in addition, may be noisy (contradictory). In [9], an efficient algorithm is described which ranks the population using adaptive pairwise queries; owing to its robustness under noise, this is the algorithm we used in our experiment. After each comparison, a new generation is produced with probability 1/(γN log(γN)), where N = 100 is the population size and γ = 0.5 is a parameter controlling the rate of evolution. This is equivalent to triggering a new generation, on average, once every O(N log N) comparisons, which is the number required to rank N individuals.

When a new generation is triggered, the highest-ranked α = 20% of the individuals take part in sexual reproduction. Pairs are formed by uniform random sampling from this top α = 20% of the population. Breeding involves a one-point crossover operation: a time t_f in the female melody is selected uniformly at random (at any point, including in the middle of notes), with t_f quantised to 1/16th notes; similarly, t_m is selected for the male individual. The melodies (intervals and note durations) are sliced at t_f and t_m. Breeding produces two offspring: one shares its beginning with the female parent and its end with the male parent, and vice versa for the other offspring. Such a crossover can result in a note being broken in two, since the slicing time is not guaranteed to coincide with a note boundary. When this occurs, with probability β = 50% the notes at the cut-point are fused. This way, we ensure that the rhythmic granularity does not unnecessarily increase due to crossover. A better approach would be to estimate β dynamically, to explicitly ensure that the crossover operation does not affect the average granularity in the population.
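A minimal sketch of this crossover on the duration lists (the interval lists, omitted for brevity, would be cut at the same note indices); `slice_at` and `crossover` are illustrative names, and details such as edge handling are assumptions rather than the authors' implementation.

```python
import random

def slice_at(durations, t):
    """Split a duration list (1/16th units) at time t; if t falls inside
    a note, that note is broken in two. Returns (head, tail, was_split)."""
    head, elapsed = [], 0
    for i, d in enumerate(durations):
        if elapsed + d <= t:
            head.append(d)
            elapsed += d
        else:
            if t > elapsed:                               # cut lands inside note i
                return head + [t - elapsed], [elapsed + d - t] + durations[i + 1:], True
            return head, durations[i:], False             # cut on a note boundary
    return head, [], False

def crossover(dur_f, dur_m, rng, beta=0.5):
    """One-point crossover in time: the child keeps the female parent's
    opening and the male parent's ending; with probability beta the two
    notes at a broken cut-point are fused into one."""
    t_f = rng.randrange(1, sum(dur_f))                    # cut points, quantised
    t_m = rng.randrange(1, sum(dur_m))
    head, _, split_f = slice_at(dur_f, t_f)
    _, tail, split_m = slice_at(dur_m, t_m)
    if (split_f or split_m) and rng.random() < beta:
        return head[:-1] + [head[-1] + tail[0]] + tail[1:]  # fuse at the cut
    return head + tail
```

The mirror-image sibling (the second offspring the paper describes) is obtained by swapping the roles of the two parents.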
Finally, mutation occurs in one of the offspring. The offspring to be mutated is chosen uniformly at random. Mutation affects only the intervals: one interval, again selected uniformly at random, is incremented or decremented by one semitone. The bottom α = 20% of the population are removed and replaced with the newly generated offspring. Rhythm was not explicitly subjected to mutation, to simplify the experiment; however, variability in rhythm arises naturally as a result of the crossover operation, which splices the melodies as described above.

3. ANALYSIS AND RESULTS

So far into the experiment, we have registered 7,000 comparisons, which have resulted in 45 generations of evolution. We have also carried out a control experiment in which the evolution proceeded under the same conditions (and from the same initial population), except that the responses were replaced with random Bernoulli-distributed values (p = 1/2, a fair coin). Below we describe the various features of the population that we measured over time as the evolution progressed: entropy of melody and rhythm, repetitiveness, properties of the melodic contour, and pitch distribution.

Figure 1. Change in average entropy of melody (above) and rhythm (below).

Figure 2. Average number of consecutive repetitions of pitches (above) and rhythmic values (below).
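The mutation and replacement policy described above can be sketched as follows; `mutate`, `next_generation`, and the toy `breed` function are illustrative names, and the genome here is reduced to just the interval list.

```python
import random

def mutate(intervals, rng):
    """Point mutation: one interval, chosen uniformly at random, is
    incremented or decremented by one semitone."""
    out = list(intervals)
    i = rng.randrange(len(out))
    out[i] += rng.choice((-1, 1))
    return out

def next_generation(ranked, breed, rng, alpha=0.2):
    """ranked: genomes sorted best-first; breed: any crossover function
    returning two offspring. The bottom alpha fraction is replaced with
    offspring of pairs sampled uniformly from the top alpha fraction;
    mutation hits exactly one offspring of each pair."""
    n = len(ranked)
    k = max(2, int(alpha * n))            # individuals replaced per generation
    top, survivors = ranked[:k], ranked[:n - k]
    children = []
    while len(children) < k:
        mother, father = rng.sample(top, 2)
        a, b = breed(mother, father)
        if rng.random() < 0.5:            # mutate one offspring, chosen uniformly
            a = mutate(a, rng)
        else:
            b = mutate(b, rng)
        children.extend([a, b])
    return survivors + children[:k]
```

Any crossover function with the right signature can be plugged in as `breed`, which keeps the selection policy independent of the genome's details.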

Figure 3. Average number of extrema in the melodic contour.

ENTROPY. It has been hypothesised [10, 11] that simplicity in art is appealing. It hence seems natural to measure what happens to the complexity of the melodies in the population. To do so, we estimate the average Shannon entropy [12] of the melodies and their rhythms, treating the intervals between notes and the note durations as symbols from a finite alphabet. The entropy is simply

H(X) = − Σ_i P(x_i) log₂ P(x_i),    (1)

where X is a string of intervals (or durations) and P(x_i) is the probability of occurrence of symbol x_i (estimated from its frequency). Figure 1 shows the average population entropy over time (in this and other figures, the bold blue line indicates the difference between the experiment and the control). We observe that the melodic entropy is noticeably decreasing. This indicates that the respondents tend to select simpler melodies, in line with the predictions of [10, 11]. Remarkably, the opposite result is observed for rhythm: the entropy is noticeably increasing, which suggests a preference for more rhythmically varied melodies. (The rhythm entropy in the control experiment is not constant, due to a defect in the crossover operator: it does not ensure that the average granularity remains the same. However, the difference from the control shows a significant increase in rhythm entropy over time.)

REPETITIVENESS. We also investigated whether repetitiveness is selected for. To measure repetitiveness, we count the number of adjacent identical intervals (and note durations) and normalise by the length of the melody. Figure 2 illustrates the trend in the average repetitiveness thus computed. Again, we observe an increase in melodic, and a decrease in rhythmic, repetitiveness.
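The entropy and repetitiveness measures described above can be written down directly, with symbol frequencies serving as the probability estimates; the function names are illustrative.

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy of a string of intervals or durations, with
    P(x_i) estimated from the frequency of each symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def repetitiveness(symbols):
    """Number of adjacent identical symbols, normalised by length."""
    return sum(a == b for a, b in zip(symbols, symbols[1:])) / len(symbols)
```

Note how a trill (a repeated alternation between two intervals) scores low on both entropy and repetitiveness, which is exactly the distinction drawn in the text.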
We note that higher repetitiveness implies lower entropy, but not the other way round: a repeated alternation between two notes (e.g. a trill) has low entropy, but not high repetitiveness.

Figure 4. Above: histogram of interval classes (vertically) over time (horizontally); red = high frequency, blue = low frequency. Below: the interval class histogram at the last generation.

Figure 5. Histogram of pitch classes (vertically) over time (horizontally); red colour indicates higher frequency. Below: the pitch class histogram at the last generation.

Figure 6. Embedding of the melodies into R² over time. The generation numbers are shown in the corner. Pale grey dots represent all melodies that have ever been alive; large white dots, the melodies alive in the current generation. The background colour shows the estimated density (using KDE [15]).

MELODIC CONTOUR. To investigate whether the shape of the melodic contour is significant in the respondents' selection, we use a simple measure of monotonicity: we compute the number of local extrema in a melody and normalise by the length of the melody. A high number of extrema is indicative of an oscillating melodic contour; a low number indicates a more monotonically ascending or descending contour. Figure 3 shows the change in average monotonicity over time and illustrates a noticeable decline in this parameter, indicating a negative preference for complex, undulating melodies.

PITCH AND INTERVAL DISTRIBUTION. At each generation, we measured the distribution of intervals in the melodies, and of the resulting pitch classes. Figure 4 (top) shows the evolution of the interval histogram over time. The intervals in Figure 4 are shown modulo 12, and inversions of intervals are placed in the same bin (e.g. the perfect fourth and the perfect fifth, denoted 5/7 in Figure 4, share a bin, and similarly for the other intervals). We observe a marked preference for consonant, diatonic intervals and for the semitone (0, 1/11, 5/7); the augmented fourth (6, bottom row) is actively selected against; interestingly, the prevalence of the major third (4/8) is smaller than that of the major second (2/10) and the minor third (3/9). Figure 4 (bottom) compares the distribution of intervals to that in the control group, as well as to those in a corpus of Western classical music [13] and of English folk music [14]. We show the correlation matrix for these distributions in Table 1. We observe that the resulting distribution of pitch classes appears to be more characteristic of Western classical music than of folk music.
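The contour measure and the inversion-binned interval classes used above might be computed as follows; this is a sketch, and skipping repeated pitches when counting extrema is an assumption rather than a detail stated in the paper.

```python
def count_extrema(pitches):
    """Local extrema in the melodic contour, normalised by melody length;
    plateaus (repeated pitches) are collapsed before looking for a
    direction change."""
    contour = [p for i, p in enumerate(pitches) if i == 0 or p != pitches[i - 1]]
    extrema = sum(1 for a, b, c in zip(contour, contour[1:], contour[2:])
                  if (b - a) * (c - b) < 0)        # sign change => extremum
    return extrema / len(pitches)

def interval_class(interval):
    """Fold an interval into a class modulo 12, binning inversions
    together: the perfect fourth (5) and fifth (7) both map to 5."""
    k = abs(interval) % 12
    return min(k, 12 - k)
```

An oscillating contour such as C-D-C-D-C scores high on this measure, while a monotone ascent scores zero, matching the interpretation given in the text.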
Figure 5 shows the evolving histogram of pitch classes (the top row corresponds to C, the second to D-flat, etc.). Again we observe a marked tendency towards diatonicity: note, for example, the high values on the notes of the F-minor triad (F-Ab-C).

Table 1. Correlation between the difference-from-control interval class distributions: this experiment (Exp.) vs. Western classical music (W.C.) vs. English folk music (E.F.).

        Exp.    W.C.    E.F.
Exp.    -       .532    .497
W.C.    .532    -       .7787
E.F.    .497    .7787   -

EUCLIDEAN EMBEDDING AND CLUSTERING. We regard the space of all melodies as a metric space, with the Levenshtein (edit) distance [16], applied to the strings of intervals forming the melodies (and to the strings of note durations), as the metric. Having computed the pairwise distances between all melodies in the population, we can compute an embedding of the melodies into the Euclidean space R^n. Multidimensional scaling [17] delivers such an embedding, as well as optimal (in the least-squares sense) embeddings into spaces of reduced dimensionality R^m, m < n. In particular, for visualisation it is convenient to embed the melodies into R². Figure 6 shows such an embedding. All melodies that ever lived are shown (pale grey); the ones currently alive are marked by large white circles. To illustrate the tendency, in the background we show the distribution density obtained using a kernel density estimator [15]. We remark that, having started from the initial cluster (Figure 6, top left), the evolution diverges into well-defined directions (downwards and to the right in Figure 6), and new, mutually dissimilar, stable clusters of melodies are formed. We speculate that this phenomenon is analogous to speciation in biological evolution. Admittedly, a larger-scale experiment is required to study this effect more accurately. To illustrate the principal modes in which the evolution progresses, we have clustered all the melodies by similarity (using the classic k-means algorithm [18] on the melodies embedded in R^n).
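One plausible realisation of this pipeline: the Levenshtein distance by dynamic programming, followed by a classical-scaling embedding. The paper does not specify which MDS variant was used; `classical_mds` below is the textbook eigendecomposition form, which is least-squares optimal when the distances are Euclidean.

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two symbol strings (dynamic programming,
    keeping only the previous row of the DP table)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                      # deletion
                           cur[j - 1] + 1,                   # insertion
                           prev[j - 1] + (x != y)))          # substitution
        prev = cur
    return prev[-1]

def classical_mds(D, m=2):
    """Embed points with pairwise distance matrix D into R^m by classical
    multidimensional scaling: eigendecompose the doubly centred
    squared-distance matrix and keep the m largest eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)                     # eigenvalues, ascending
    order = np.argsort(w)[::-1][:m]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

Applied to interval strings, `levenshtein` gives the pairwise distance matrix, and `classical_mds(D, 2)` yields the R² coordinates of the kind plotted in Figure 6.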
Figure 7 shows the resulting clusters, together with the melodies nearest to the cluster centroids. These correspond to relatively stable melodic "species" in our experiment.

Figure 7. Clusters in the population (above) and the melodies nearest to the cluster centres (below).

Figure 8. The top-ranking melody over the generations.

4. DISCUSSION AND CONCLUSIONS

We have conducted an experiment to evolve a population of melodies by popular vote (incidentally, Figure 8 shows the best-ranking melodies over the generations). We have performed a statistical analysis of the resulting evolutionary history. We have thus used the evolutionary setting to examine popular vote as a fitness function, in terms of the perceptual biases of the respondents. By measuring the average entropy, repetitiveness, and variability in melody and rhythm over time, we have come to the conclusion that the respondents are biased towards melodically straightforward, consonant, and diatonic, yet rhythmically varied, melodies. We have observed that, out of chaos, a preference for the Western diatonic scale emerges. We also speculate that the formation of stable clusters in diverging branches of the evolution may be an effect analogous to speciation. While the above results may already be intuitively familiar to expert musicians, we have for the first time demonstrated that the evolutionary setting is a useful tool for studying psychological perceptual biases and æsthetic preferences in humans. A larger-scale experiment would merit a more accurate analysis of the socio-cultural background of the respondents; it could potentially reveal interesting correlations between background and musical preferences. It would also be very interesting to conduct a similar experiment with other musical systems, for example those not based on the 12-tone scale. Further work would also include a more extensive analysis of the emergent phenomena related to tonality, mode, and key.
Although this experiment is ongoing, and the corpus of data is continuously growing, we believe our preliminary findings may be of interest to the computational music community.

5. REFERENCES

[1] B. E. Johanson and R. Poli, "GP-music: An interactive genetic programming system for music generation with automated fitness raters," U. of Birmingham, Tech. Rep. CSRP-98-13, May 1998.

[2] R. M. MacCallum, M. Mauch, A. Burt, and A. M. Leroi, "Evolution of music by public choice," PNAS, vol. 109, no. 30, pp. 12081–12086, 2012.

[3] Y. M. A. Khalifa, H. Shi, and G. Abreu, "Evolutionary music composer," in Late Breaking Papers at the 2004 Genetic and Evolutionary Computation Conference, M. Keijzer, Ed., Seattle, Washington, USA, 26 Jul. 2004.

[4] J. D. Fernandez and F. J. Vico, "AI methods in algorithmic composition: A comprehensive survey," J. Artif. Intell. Res. (JAIR), vol. 48, pp. 513–582, 2013.

[5] N. Claidière, S. Kirby, and D. Sperber, "Effect of psychological bias separates cultural from biological evolution," PNAS, vol. 109, no. 51, p. E3526, 2012.

[6] D. Sperber and L. A. Hirschfeld, "The cognitive foundations of cultural stability and diversity," Trends in Cognitive Sciences, vol. 8, no. 1, pp. 40–46, Jan. 2004.

[7] S. Kirby, H. Cornish, and K. Smith, "Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language," PNAS, vol. 105, no. 31, pp. 10681–10686, Aug. 2008.

[8] K. G. Jamieson and R. D. Nowak, "Active ranking using pairwise comparisons," in NIPS, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, Eds., 2011, pp. 2240–2248.

[9] F. L. Wauthier, M. I. Jordan, and N. Jojic, "Efficient ranking from pairwise comparisons," in ICML (3), ser. JMLR Proceedings, vol. 28, 2013, pp. 109–117.

[10] J. Schmidhuber, "Low-complexity art," Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, vol. 30, no. 2, pp. 97–103, 1997.

[11] N. Hudson, "Musical beauty and information compression: Complex to the ear but simple to the mind?" BMC Research Notes, vol. 4, no. 1, p. 9, 2011.

[12] C. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.

[13] D. Huron, "Tone and voice: A derivation of the rules of voice-leading from perceptual principles," Music Perception, vol. 19, no. 1, 2001.

[14] W. J. Dowling, "Rhythmic fission and the perceptual organization of tone sequences," 1967.

[15] Z. I. Botev, J. F. Grotowski, and D. P. Kroese, "Kernel density estimation via diffusion," Annals of Statistics, 2010.

[16] D. S. Hirschberg, "Serial computations of Levenshtein distances," in Pattern Matching Algorithms, A. Apostolico and Z. Galil, Eds. Oxford University Press, 1997, pp. 123–141.

[17] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, "Generalized multidimensional scaling: A framework for isometry-invariant partial surface matching," PNAS, vol. 103, no. 5, pp. 1168–1172, 2006.

[18] J. MacQueen, "Some methods for classification and analysis of multivariate observations," in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics. University of California Press, 1967, pp. 281–297.