Orchestral Composition Steven Yi early release 2003.12.20


Table of Contents

Introduction
Part I - Analysis
  Observations
    Musical Information
    Musical Information Flow
      Model One
      Model Two
      Model Three
      Model Four
  Musical Data
    Instructions
    Techniques
  Instruments
    Properties
  Performers
    Properties
  Performer Groups
    Properties
Part II - Resynthesis
Part III - Techniques
  Note Techniques
    Durational Change
    Apply Algorithm with Note as Parameter
  Line Techniques
  Performer Techniques
  Performer Group Techniques

Introduction

I have come to realize a strong affinity for orchestral music these days, finding myself more and more drawn to the performance capabilities, temporal and timbral, that the large ensemble has within its means. Through attending concerts by ensembles of varying quality, I often wrote notes to myself about what it was that made orchestral composition what it is. I am writing this as I develop a script library, for my own purposes, for exploring the techniques and methods that I find within orchestral music.

Part I - Analysis and Modelling

Observations

Musical Information
- as data for an algorithm
- as configuration for an algorithm

Musical Information Flow

Model One
The basic music model has musical information flowing to one sound generator. The relationship of musical input to sound generators is one-to-one.

Model Two
Model Two is only slightly more advanced. Musical input is still mapped one-to-one to sound generators. With this model, however, the output of the sound generators is mapped to a sound modifier. The sound generators now coexist within a single acoustical space.

Model Three
Musical data first reaches a performer and is filtered by the performer. The performer may have properties set that affect the musical input: parameters like spatial location, accuracy, and dynamic range may alter the notes to be played. The performer then passes the musical data on to the sound generator to perform, and the resultant sound is mixed into a sound modifier acoustical space.
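The pipeline of Model Three can be sketched in code. This is a minimal illustration, not an existing library: all class and method names here (Note, Performer, SoundGenerator, SoundModifier) are assumptions made for the sketch.

```python
# Model Three sketch: musical data -> performer (filters it) ->
# sound generator -> sound modifier (shared acoustical space).
from dataclasses import dataclass, field

@dataclass
class Note:
    pitch: float      # e.g. a MIDI note number
    start: float      # in beats
    duration: float   # in beats
    amplitude: float  # 0.0 - 1.0

@dataclass
class Performer:
    accuracy: float = 1.0        # 1.0 = plays exactly as written
    dynamic_range: float = 1.0   # scales written amplitudes

    def interpret(self, note: Note) -> Note:
        # The performer filters musical data before it reaches
        # the sound generator (spatial location, accuracy, etc.
        # could be applied here as well).
        return Note(note.pitch, note.start, note.duration,
                    note.amplitude * self.dynamic_range)

@dataclass
class SoundGenerator:
    def render(self, note: Note) -> str:
        # Stand-in for actual sound synthesis.
        return f"sound(pitch={note.pitch}, amp={note.amplitude:.2f})"

@dataclass
class SoundModifier:
    mixed: list = field(default_factory=list)

    def mix(self, sound: str) -> None:
        # All generators' output coexists in one acoustical space.
        self.mixed.append(sound)

space = SoundModifier()
performer = Performer(dynamic_range=0.5)
generator = SoundGenerator()
note = Note(pitch=60, start=0.0, duration=1.0, amplitude=0.8)
space.mix(generator.render(performer.interpret(note)))
print(space.mixed)  # one sound, amplitude scaled from 0.8 to 0.40
```

Model One corresponds to calling the generator directly on the note; Models Two and Three add the shared modifier and the interpreting performer, respectively.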

Model Four
- Musical Data is information that, in conjunction with a technique as well as instructions on performance, determines most of the musical output.
- Instructions are given to the performer along with musical data.
- Techniques are aspects of a performer or performer group.

Instruments
Instruments actually create sound.
Properties
- instruments have a variety of sound-producing methods and parameters
- not all instruments have the same sound methods/techniques

Performers
Properties
- performers have performance techniques
- given musical data and musical instructions, they apply their own properties to the data
- are located in space and have individual properties (no two performers alike)
- have techniques to perform musical data
- have instruments to perform on

Performer Groups
Properties
- groups have performance techniques (Xenakis surfaces)
- one-to-many data relationships
- are made up of performers
- take musical data and instructions to perform that data
- have techniques to perform musical data
- are given instructions as to how many should play
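The performer-group properties above, in particular the one-to-many data relationship and the "how many should play" instruction, can be sketched as follows. The names and the cents-based pitch representation are assumptions made for this illustration, not an existing API.

```python
# Performer group sketch: one stream of musical data fans out to many
# performers, and an instruction limits how many of them play it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Performer:
    name: str
    detune_cents: int = 0  # no two performers alike: individual offset

    def perform(self, midi_pitch: int) -> int:
        # Result in cents, so each performer's detuning stays exact.
        return midi_pitch * 100 + self.detune_cents

@dataclass
class PerformerGroup:
    performers: List[Performer] = field(default_factory=list)

    def perform(self, midi_pitch: int, how_many: int) -> List[int]:
        # One-to-many: the same data goes to several performers; an
        # instruction says how many of them should actually play it.
        return [p.perform(midi_pitch) for p in self.performers[:how_many]]

section = PerformerGroup([
    Performer("violin 1", 0),
    Performer("violin 2", 3),
    Performer("violin 3", -2),
])
print(section.perform(60, how_many=2))  # [6000, 6003]
```

Group-level techniques such as the Xenakis-style surfaces mentioned above would live on PerformerGroup, transforming the data before it fans out to the individual performers.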

Part II - Resynthesis

Orchestral Data Flow (diagram)

Part III - Techniques

Note Techniques

Durational Change
- Staccato, etc.

Apply Algorithm with Note as Parameter
- Tremolo
- Trill

Line Techniques

Performer Techniques

Performer Group Techniques
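The two kinds of note techniques listed above can be sketched as functions over a simple note representation: a durational change rewrites one note in place, while an algorithm applied with the note as parameter expands one note into several. The Note class and function signatures are assumptions made for this sketch.

```python
# Note technique sketches: staccato (durational change) and
# tremolo (algorithm applied with the note as parameter).
from dataclasses import dataclass
from typing import List

@dataclass
class Note:
    start: float     # in beats
    duration: float  # in beats
    pitch: int       # e.g. a MIDI note number

def staccato(note: Note, factor: float = 0.5) -> Note:
    # Durational change: shorten the note, keep its onset.
    return Note(note.start, note.duration * factor, note.pitch)

def tremolo(note: Note, subdivisions: int = 4) -> List[Note]:
    # Algorithm with note as parameter: split one note into rapid
    # repetitions that fill the same time span.
    step = note.duration / subdivisions
    return [Note(note.start + i * step, step, note.pitch)
            for i in range(subdivisions)]

n = Note(start=0.0, duration=2.0, pitch=60)
print(staccato(n).duration)   # 1.0
print(len(tremolo(n)))        # 4 notes of 0.5 beats each
```

A trill follows the same pattern as tremolo but alternates the pitch between the written note and its upper neighbor on each subdivision.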
