Brian Carty - Robotune: An iOS Application Built with Csound

Paper Session 1 - Friday 25th November 2016, 15:00-17:00, Bewerunge Room, Maynooth University

Dr. Richard Boulanger - Three Decades with Csound: The Roots, Birth, and Early Years

Looking back at more than 30 years of working side-by-side with Professor Barry Vercoe at The MIT Media Lab and on MIT Csound, Berklee College of Music's Professor of Electronic Production and Design, Dr. Richard Boulanger, author and editor of The Csound Book (MIT Press, 2000), will share his personal stories on Csound's roots (1969-1985), birth (1986) and early years (1986-1991), and will highlight some seminal projects and breakthroughs along the way (ADI SharCsound, OLPCsound, MIDI, real-time audio I/O, GPL open-source licensing, and Csound5). In this talk, Dr. B. will share his wonderful collaborative and mentoring experiences with Barry Vercoe, dating back to the pre-dawn of the program and continuing through the early years of this amazing three-decade Csound saga. What were some of the fun, funny, and amazing Csound-based projects upon which Richard and Barry collaborated? What led to Csound? Where did it come from? What were some of the early hurdles and breakthroughs? Who were the key developers along the way, and in what ways did they help to carry the torch and blaze a new path?

For me, music is a medium through which the inner spiritual essence of all things is revealed and shared. Compositionally, I am interested in extending the voice of the traditional performer through technological means to produce a music that connects with the past, lives in the present and speaks to the future. Educationally, I am interested in helping students see technology as the most powerful instrument for the exploration, discovery, and realization of their essential musical nature, their inner voice.

Brian Carty - Robotune: An iOS Application Built with Csound

Robotune is an iOS vocal processing application (App) built with Csound. The App was envisaged primarily as a teaching tool designed to analyse contemporary vocal processing techniques such as tuning, harmonisation and common time-domain processes. Background and motivation will be discussed, followed by an overview and demonstration of the App. Reflections on the development process will be offered, from critical listening and referencing to algorithm emulation and development, testing and release. The primarily 'trial and error' approach taken to the process will be highlighted throughout.

Brian Carty is the Director of Education at the Sound Training College in Temple Bar, Dublin. Before this, he completed his Irish Research Council-funded PhD on binaural audio at NUI Maynooth. Brian's background is primarily academic; he has presented his research work around the world and published in a number of books and journals. He also works professionally as a musician, producer and audio programmer.

Steven Yi - On Csound and Blue

As we celebrate the 30th anniversary of Csound, I have been reflecting on Blue's own 15-year history and all that has developed within the two programs. In this paper, I would like to revisit the history of Csound from my perspective as the developer of Blue. I will discuss key developments in Csound and their impact upon Csound use and practice. I will also look at how these developments influenced my own approach to designing Blue and how my views have evolved over time. Finally, I will look at new features planned for Csound 7 and what they mean for future Blue development.

Steven Yi is a composer and programmer. He is the author of the Blue integrated music environment, author of the Pink and Score music libraries, and a core developer of Csound. He holds a PhD in Digital Arts and Humanities from the National University of Ireland, Maynooth. In his free time, he enjoys practicing and studying T'ai Chi.

Dr. Richard Boulanger - Boulanger Labs: Writing Apps and Building Hardware Synths with Csound Inside

Dr. Richard Boulanger, President and CEO of Boulanger Labs, will discuss how his collaborations with undergraduate students, born out of a number of courses in the Electronic Production and Design Department at the Berklee College of Music, have led to the development of a number of successful iPad apps. These projects have helped the students involved to land high-profile jobs at Apple, Vimeo, and iZotope (to name a few) and research appointments at Georgia Tech and Stanford, and have led to the spin-off of a commercially successful, internationally distributed, Eurorack-based modular synthesizer company, Qu-Bit.

Paper Session 2 - Saturday 26th November 2016, 10:00-12:00, Bewerunge Room, Maynooth University

Martin Crowley, James Kelly - Educational Tools for Live Sound Engineering Built Using Cabbage and Csound

Live sound engineering is a resource-heavy topic to teach. Practical experience is essential, ideally in a professional performance venue, but this is often unfeasible due to factors such as expense and a lack of resources or training facilities. The three tools developed aim to give the learner an introduction to common live sound engineering tasks in a virtual learning environment, thus maximising the time available in actual venues. A signal routing tool, a virtual Room Tuner (using impulse responses) and a feedback demonstration tool are presented.

Martin Crowley has been a student of audio production for several years. He has a background in radio production and DJing, but his consuming passion is live electronic music performance. He is currently devising a technology-enabled live performance installation for an 8-channel ambisonic speaker array with the help of Csound and Pure Data.

James Kelly holds an HND in Audio Production & Sound Engineering and is currently studying for a BA in Creative Music Production at IADT/STC Temple Bar. His major project focuses on developing software tools using Csound and Cabbage.

Federico Russo - MIDI Controlled Audio Effects: Real-Time Reshaping of the Audio Signal

Explanation of the general structure of the instrument: it is a MIDI-controlled audio effect divided into four modules. Each module processes the audio signal in a different way, except for the audio input module, which is needed to capture the source signal. The audio signal flows through the modules via a dry/wet system based on internal busses.

Explanation of each module of the instrument (code order):
- Module 1: audio input.
- Module 2: audio shift, warp and freeze, applied separately to spectral amplitudes and frequencies and controlled via MIDI, with an LFO-controlled binaural movement.
- Module 3: audio resynthesis, using a vocoder-like structure, with a selectable waveform to blend with the incoming signal.

Explanation of the workflow: the instrument was created module by module, testing correct functionality each time a line of code was written. References were used creatively, examining how other developers had used opcodes in their instruments in order to understand their functions and their possible applications in this instrument.
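By way of illustration only (this is not Federico Russo's actual code), a minimal Csound sketch of the Module 2 idea, a MIDI-controlled spectral freeze with independent amplitude and frequency freezing, might look like the following. The FFT parameters, the MIDI controller numbers and the simple LFO-driven stereo movement are assumptions; the instrument described above triggers the freeze from the MIDI keyboard and uses a binaural rather than a plain stereo movement.

sr     = 44100
ksmps  = 64
nchnls = 2
0dbfs  = 1

        instr 1
asig    inch      1                           ; live audio input
kfrza   ctrl7     1, 20, 0, 1                 ; hypothetical CC 20: freeze spectral amplitudes
kfrzf   ctrl7     1, 21, 0, 1                 ; hypothetical CC 21: freeze spectral frequencies
kfrza   =         (kfrza > 0.5 ? 1 : 0)       ; treat each controller as an on/off switch
kfrzf   =         (kfrzf > 0.5 ? 1 : 0)
fsig    pvsanal   asig, 1024, 256, 1024, 1    ; streaming phase-vocoder analysis
ffrz    pvsfreeze fsig, kfrza, kfrzf          ; freeze amplitudes and frequencies separately
aout    pvsynth   ffrz                        ; resynthesise the (possibly frozen) spectrum
klfo    lfo       0.5, 0.2                    ; slow LFO for a simple left/right movement
        outs      aout * (0.5 + klfo), aout * (0.5 - klfo)
        endin
; run with real-time audio and MIDI enabled (e.g. -odac -M0) and a score line such as "i 1 0 3600"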

Applications: the instrument finds its usefulness in cinematic sound effects for voices and in non-tonal music, owing to the random behaviour introduced by the pvsfreeze opcode (the freeze of the frequencies is triggered from the keyboard).

Federico Russo began to study the electric bass at the age of 13 under Andrea Grossi, enrolling in the courses of the Accademia Nova of Rome. He also studied cello with Matteo Scarpelli, concert performer and professor at the Istituto Superiore di Studi Musicali Gaetano Braga of Teramo. During his studies he also played, composed and arranged music with his band, Libra: the band played on some of the more important stages of the Roman scene, and in other cities thanks to the promotional tour for the album Sottopelle (Volcan Records), which landed the band on MTV New Generation. He then began to study sound engineering at the Saint Louis College of Music in Rome, where he founded, with two colleagues, the One Got Fat Collective for experimental electronic music and videos. During his sound engineering studies he also met the experimental music composer Luca Spagnoletti, who introduced him to audio programming, especially object-based programming in Max/MSP.

Shane Byrne - Csound as a Tool for Enabling Musicians

This paper explores the use of Csound as a means of developing compositional and performance tools for musicians with complex disabilities. It discusses how Csound can be used to develop instruments that are designed specifically around the individual musician's needs and tastes. This includes not only the design of software instruments but also the hardware interfaces used to interact with the software. The paper will also address the methodologies employed when running Csound on embedded computing devices such as the Intel Galileo, and provide a case study of a recent build completed on behalf of The Drake Music Project.

Shane Byrne is a composer of acoustic and electronic music and is currently a PhD researcher at Maynooth University, focusing on interactivity and participation within electronic music composition. In 2013 he completed his BA in Music Technology with first-class honours, and in 2014 he completed an MA in Creative Music Technologies, also with first-class honours. His current work focuses on physical computing and the potential for human interaction to add to an overall immersive musical experience for both the performer and the audience. His work has more recently led him to investigate the potential for such interaction to facilitate and encourage learning amongst the learning impaired and the autistic community. He also works as a sound designer, foley artist and mixing engineer. His first love is performance, and he regularly takes part in improvisation nights and occasionally plays gigs with several noise and progressive rock bands in Dublin.

Thom McDonnell - Development of the Csound HRTF Opcodes to Allow Use of Any Dataset, Utilising the SOFA Standard

Binaural hearing allows humans to determine the direction of a sound by detecting small differences between the signals arriving at the two ears. These differences are the interaural time differences (ITD) and interaural intensity differences (IID), which together form the Head-Related Transfer Function (HRTF). Both are frequency dependent, largely due to physiology; this implies a different frequency and phase profile for each direction in the 360-degree space around the head. As each listener's physiology differs, the HRTF varies between listeners. The Csound HRTF opcodes use the MIT dataset to provide a well-performing generalised HRTF as a solution to the difficulty of acquiring fully personalised HRTFs. Several studies conclude that localisation ability degrades when non-individualised HRTFs are used, with confusions and reversals occurring more often. The ability to use any dataset (potentially a hacked, individualised dataset) is therefore desirable, as it offers a more robust solution, allowing users to find a more closely matched dataset to improve immersion and localisation. The Audio Engineering Society (AES) has recently put forward a solution by defining a standardised spatial audio data file format (SOFA). This allows further development of the current opcodes, retaining backwards compatibility while allowing users to audition and use any dataset. The current opcodes have proven robust; the algorithms for interpolation of HRTFs work well and are therefore retained in this development. The HRTF reverb opcodes could benefit from some tonal improvements based on an analysis of non-binaural, industry-standard products; this will also be addressed.

Thom McDonnell is a Dublin-based producer and lecturer. He specialises in teaching music technology, synthesis, studio engineering, audio production and electronics.
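For context, the existing Csound HRTF opcodes discussed in the abstract above are typically used along the following lines. This is a minimal illustrative sketch, not code from the paper: the source file name is a placeholder, the two data files are the MIT-derived sets distributed with Csound, and the proposed SOFA-based loading of arbitrary datasets is not shown.

sr     = 44100
ksmps  = 64
nchnls = 2
0dbfs  = 1

        instr 1
asrc    diskin2   "voice.wav", 1              ; placeholder mono source file
kaz     line      -90, p3, 90                 ; sweep the azimuth from hard left to hard right
al, ar  hrtfmove2 asrc, kaz, 0, "hrtf-44100-left.dat", "hrtf-44100-right.dat"
        outs      al, ar                      ; binaural output, intended for headphones
        endin
; "hrtf-44100-left.dat" / "hrtf-44100-right.dat" are the MIT-derived data files shipped with Csound

A fixed source position would use hrtfstat in place of hrtfmove2; under the SOFA extension described above, the two data-file arguments would presumably point at whichever dataset the user has chosen instead.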