MIDI AND ELECTRONIC MUSIC TECHNOLOGY

Chapter 1 : MIDI music technology

MIDI (/ˈmɪdi/; short for Musical Instrument Digital Interface) is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices.

Audio sequencer and digital audio workstation. Sequencing software provides a number of benefits to a composer or arranger. It allows recorded MIDI to be manipulated using standard computer editing features such as cut, copy, paste, and drag and drop. Keyboard shortcuts can streamline the workflow, and editing functions are often selectable via MIDI commands. The sequencer allows each channel to be set to play a different sound and gives a graphical overview of the arrangement. A variety of editing tools are available, including a notation display that can be used to create printed parts for musicians. Tools such as looping, quantization, randomization, and transposition simplify the arranging process. Realistic expression can be added through the manipulation of real-time controllers. Mixing can be performed, and MIDI can be synchronized with recorded audio and video tracks. Work can be saved and transported between different computers or studios. Cue-list sequencing is used to trigger dialogue, sound-effect, and music cues in stage and broadcast production.

Patch editors allow users to program their equipment through the computer interface. These became essential with the appearance of complex synthesizers such as the Yamaha FS1R, [49] which contained several thousand programmable parameters but had an interface that consisted of fifteen tiny buttons, four knobs, and a small LCD.

Auto-accompaniment programs create a full band arrangement in a style that the user selects, and send the result to a MIDI sound-generating device for playback. The generated tracks can be used as educational or practice tools, as accompaniment for live performances, or as a songwriting aid.
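One of the arranging tools named above, quantization, is simple enough to sketch. The snippet below is a hypothetical illustration; the tick resolution, function name, and event times are invented for the example, not taken from any particular sequencer:

```python
# Hypothetical sketch of one sequencer edit: quantizing note start
# times to the nearest 16th note. Times are in MIDI ticks.

PPQ = 480             # ticks per quarter note (a common resolution)
SIXTEENTH = PPQ // 4  # 120 ticks

def quantize(times, grid=SIXTEENTH):
    """Snap each event time to the nearest grid line."""
    return [round(t / grid) * grid for t in times]

# A slightly sloppy performance of four 16th notes:
played = [3, 118, 247, 361]
print(quantize(played))  # -> [0, 120, 240, 360]
```

Real sequencers add refinements such as partial-strength quantization and swing, but the core operation is this snap-to-grid rounding.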
Software synthesizer and software sampler. Computers can use software to generate sounds, which are then passed through a digital-to-analog converter (DAC) to a power amplifier and loudspeaker system. Because computers use an audio buffer that delays playback and disrupts MIDI timing, software synthesizers exhibit a noticeable delay, known as latency, in their sound generation, and interruptions to sample playback can cause synchronization problems, clicks, and pops. Early sound cards used FM synthesis, which generates sound through modulation of sine waves; some manufacturers used lower-resolution samples and padded them to 16 bits. VJs and turntablists use MIDI to cue clips and to synchronize equipment, and recording systems use it for synchronization and automation.

Standard applications use only three of the five conductors in a MIDI cable. Most devices do not copy messages from their input to their output port. A third type of port, the "thru" port, emits a copy of everything received at the input port, allowing data to be forwarded to another instrument. [10] A MIDI merger can combine the input from multiple devices into a single stream, allowing multiple controllers to be connected to a single device. A MIDI switcher allows switching between multiple devices and eliminates the need to physically repatch cables. MIDI patch bays combine all of these functions: they contain multiple inputs and outputs and allow any combination of input channels to be routed to any combination of output channels. Routing setups can be created using computer software, stored in memory, and selected by MIDI program change commands. MIDI data processors are used for utility tasks and special effects.

A three-byte MIDI message requires nearly 1 millisecond for transmission. If an event is sent on two channels at once, the event on the higher-numbered channel cannot transmit until the first one is finished, and so is delayed by about 1 ms.
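The "nearly 1 millisecond" figure can be checked with a little arithmetic, assuming the standard MIDI rate of 31,250 bits per second and 10 bit-times per byte (one start bit, eight data bits, one stop bit):

```python
# Back-of-envelope check of MIDI message timing.
BAUD = 31250        # standard MIDI bit rate (bits per second)
BITS_PER_BYTE = 10  # 1 start bit + 8 data bits + 1 stop bit

byte_time_ms = 1000 * BITS_PER_BYTE / BAUD  # time to send one byte
msg_time_ms = 3 * byte_time_ms              # a three-byte message

print(round(byte_time_ms, 2))  # -> 0.32
print(round(msg_time_ms, 2))   # -> 0.96

# Two events sent back-to-back on one cable: the second one waits a
# full message time, which is the roughly 1 ms delay mentioned above,
# and is why multi-port interfaces improve timing.
```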
This contributed to the rise of MIDI interfaces with multiple in and out ports, because timing improves when events are spread between multiple ports rather than between multiple channels on the same port.

MIDI controllers. A MIDI controller provides a variety of real-time controls, which can manipulate sound-design parameters of computer-based or standalone hardware instruments, effects, mixers, and recording devices. Controllers may be general-purpose devices designed to work with a variety of equipment, or they may be designed to work with a specific piece of software; many devices are some combination of the two types. Keyboards are by far the most common type of MIDI controller. Software synthesizers offer great power and versatility, but some players feel that dividing attention between a MIDI keyboard and a computer keyboard and mouse robs some of the immediacy from the playing experience. The MS-20ic controller, for example, includes patch cables that can be used to control signal routing in its virtual reproduction of the MS-20 synthesizer, and can also control third-party devices.

Sound module. These devices are highly portable, but their limited programming interface requires computer-based tools for comfortable access to their sound parameters. The operating system and factory sounds are often stored in a read-only memory (ROM) unit. Typically, a MIDI module includes a large screen, so the user can view information for the currently selected function.

Synthesizer. Synthesizers may employ any of a variety of sound-generation techniques. They may include an integrated keyboard, or they may exist as "sound modules" or "expanders" that generate sounds when triggered by an external controller, such as a MIDI keyboard. Sound modules are typically designed to be mounted in a 19-inch rack.

Sampler. A sampler can record and digitize audio, store it in random-access memory (RAM), and play it back. Samplers typically allow a user to edit a sample and save it to a hard disk, apply effects to it, and shape it with the same tools that synthesizers use. They may be available in either keyboard or rack-mounted form. Samplers did not become established as viable MIDI instruments as quickly as synthesizers did, because of the expense of memory and processing power at the time.

Drum machine. Drum machines are typically sample-playback devices that specialize in drum and percussion sounds. They commonly contain a sequencer that allows drum patterns to be created and arranged into a song. There are often multiple audio outputs, so that each sound or group of sounds can be routed to a separate output. The individual drum voices may be playable from another MIDI instrument or from a sequencer.
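The sample-playback idea behind samplers and drum machines can be shown with a toy sketch. This is not any real instrument's code; the function name and interpolation scheme are invented for the example. A sample is simply an array of amplitude values in RAM, and reading through it at a different rate shifts its pitch:

```python
# Illustrative sketch of sample playback: reading a stored waveform
# at a different rate changes its pitch.

def resample(sample, rate):
    """Read through `sample` at `rate` times normal speed
    (rate=2.0 plays an octave up, rate=0.5 an octave down)."""
    out, pos = [], 0.0
    while pos < len(sample) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighbouring sample points.
        out.append(sample[i] * (1 - frac) + sample[i + 1] * frac)
        pos += rate
    return out

original = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
octave_up = resample(original, 2.0)  # half as many samples
print(len(octave_up))                # -> 4
```

Hardware samplers of the period did essentially this in dedicated circuitry, with memory size limiting how much audio could be held at once.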

Chapter 2 : How MIDI works

MIDI stands for Musical Instrument Digital Interface. The development of the MIDI system has been a major catalyst in the recent unprecedented explosion of music technology. MIDI has put powerful computer instrument networks and software in the hands of less technically versed musicians and amateurs.

Early developments in electronic instruments

Precursors of electronic instruments. Electricity was used in the design of musical instruments as early as 1761, when J. Delaborde of Paris invented an electric harpsichord. Experimental instruments incorporating solenoids, motors, and other electromechanical elements continued to be invented throughout the 19th century. Complex and impractical, the Telharmonium nevertheless anticipated electronic organs, synthesizers, and background-music technology.

Early electronic instruments. The dawn of electronic technology was marked by the invention of the triode vacuum tube in 1906 by Lee De Forest. The triode gave musical instrument developers unprecedented ability to design circuits that would produce repetitive waveforms (oscillators) and circuits that would strengthen and articulate waveforms that had already been produced (amplifiers). In the period between World Wars I and II, many new musical instruments using electronic technology were developed. These may be classified as follows:

Instruments that produce vibrations in familiar mechanical ways (the striking of strings with hammers, the bowing or plucking of strings, the activation of reeds) but with the conventional acoustic resonating agent, such as a sounding board, replaced by a pickup system, an amplifier, and a loudspeaker, which enable the performer to modify both the quality and the intensity of the tone. These instruments include electric pianos; electric organs employing vibrating reeds; electric violins, violas, cellos, and basses; and electric guitars, banjos, and mandolins.
Instruments that produce waveforms by electric or electronic means but use conventional performer interfaces, such as keyboards and fingerboards, to articulate the tones. The most successful of these was the Hammond organ, which implemented the same technical principles as the Telharmonium but used tiny rotary generators in conjunction with electronic amplification in place of large, high-power generators. The Hammond organ was placed on the market in 1935, and it remained a commercially important keyboard instrument for more than 40 years. Other, more experimental early electronic keyboard instruments used rotating electrostatic generators, rotating optical disks in conjunction with photoelectric cells, or vacuum-tube oscillators to produce sound.

Instruments that were designed for performance in the conventional sense but implemented novel forms of performer interfaces.

Instruments that were not intended for conventional live performance but instead were designed to read an encoded score automatically. The first of these was the Coupleux-Givelet synthesizer, which the inventors introduced in 1929 at the Paris Exposition. Unlike a player piano, the Coupleux-Givelet instrument provided for control of pitch, tone colour, and loudness, as well as note articulation. The principles of score encoding and sound control embodied in this instrument have become increasingly important to contemporary composers as electronic musical instrument technology has continued to develop.

The tape recorder as a musical tool. The next stage of development in electronic instruments dates from the discovery of magnetic tape recording techniques and their refinement after World War II. These techniques enable the composer to record any sounds whatever on tape and then to manipulate the tape to achieve desired effects. Sounds can be superimposed upon each other (mixed), altered in timbre by means of filters, or reverberated. Repeating sound patterns can be created by means of tape loops.
Tape splicing can be used to rearrange the attack (beginning) portion and decay (ending) portion of a sound, or to combine portions of two or more sounds to form striking juxtapositions of arbitrarily great length and complexity. Thus, the composer can exercise precise control over every aspect of his original sound material. These sounds were shaped, processed, and then put together (composed) to form a unified artistic whole. In 1951 a studio for elektronische Musik was founded at Cologne, West Germany. While the composers associated with this studio used many of the same techniques of tape manipulation as did the French group, they favoured electronically generated rather than natural sound sources. In particular, they synthesized complex tones from sine waveforms, which are pure tones with no overtones.

(Carlton Gamer; Robert A. Moog)

These instruments used small keyboards and were designed to mount immediately under the keyboard of a piano. They were capable of simulating a wide variety

of traditional orchestral timbres, which the player selected by setting an array of tablet-shaped switches along the front of the instrument. Also during this postwar period, electronic organs became one of the largest segments of the musical instrument industry. These multikeyboard, polyphonic chord-playing instruments were first modeled after traditional pipe organs, but they later evolved into a new class of musical instruments for domestic use. The electronic home organ offered a variety of timbres oriented toward popular music, as well as such performance assists as automatic rhythm production, easily enabling it to replace the player piano in popularity.

Unlike commercial keyboard-controlled organs and related instruments, the score-reading instruments were large, experimentally oriented devices. The scanner, which was mounted on a carriage that rolled along a foot table, read an encoded score drawn on cardboard cards that covered the table. The RCA synthesizer was capable of producing four musical tones simultaneously. Pitches, tone colours, vibrato intensities, envelope shapes, and portamento of the four tones were encoded in binary form on a perforated paper roll.

The development of tape music as a compositional medium, the advancement of the technology of score-reading music systems, and the commercial proliferation of electronic organs and other keyboard-controlled electronic instruments all set the stage for the appearance of the electronic music synthesizer in the 1950s. Other contributing factors were the advancement of electronic technology itself and the domination of popular music by the electric guitar and other amplified instruments.

The electronic music synthesizer. The word synthesize means to produce by combining separate elements. Thus, synthesized sound is sound that a musician builds from component elements. A synthesized sound may resemble a traditional acoustic musical timbre, or it may be completely novel and original.
However, the notions that synthesized music is intended to imitate a more traditional entity and that it is generated by automated, mechanical means without control by a musician are generally untrue. A traditional musical instrument is a collection of acoustic elements whose interrelationships are fixed by the instrument builder. The violinist brings the strings into contact with the fingerboard and a bow to cause the strings to vibrate, but he does not change the position of the strings relative to the bridge, the position of the bridge relative to the body, or the configuration of the body itself. A synthesist, on the other hand, views his instrument as a collection of parts that he configures to produce the desired timbre and response.

The elements, or parts, that a synthesist works with depend on the design of the instruments he is using. Generally, synthesizers include oscillators to generate repetitive waveforms, mixers to combine waveforms, filters to increase the strength of some overtones while reducing the strength of others, and amplifiers to shape the loudness contours of sounds. Other sound-producing and sound-processing elements, which can exist as electronic circuits or as built-in computer programs, may also be available. To facilitate the musical control of these elements, a synthesizer may have any combination of a conventional keyboard; other manual control devices such as wheels, sliders, or joysticks; electronic pattern generators; or a computer interface.

The appearance of high-quality, low-cost silicon transistors in the early 1960s enabled electronic instrument designers to incorporate all the basic synthesizer features in relatively small, convenient instruments. The Synket, built by the Italian engineer Paolo Ketoff, was designed for live performance of experimental music.
It had three small, closely spaced, touch-sensitive keyboards, each of which controlled a single tone. The Moog and Buchla instruments differed primarily in the control interfaces they offered. The Buchla instruments did not feature keyboards with movable keys; instead, they had touch-sensitive contact pads that could be used to initiate sounds and sound patterns. Switched-On Bach, the music of J.S. Bach transcribed for Moog synthesizer and recorded by Walter Carlos and Benjamin Folkman in 1968, achieved a dramatic commercial success. In the years following its appearance, many synthesizer recordings of traditional and popular music appeared, and synthesizer music was frequently heard in movie soundtracks and advertising commercials.

Most electronic music synthesizers designed in this era are called analog synthesizers, because their circuits directly produce electric waveforms that are analogous to the sound waveforms of acoustic instruments. This is in contrast to digital synthesizers and music systems, whose circuits produce series of numbers that must then be converted to waveforms. The first digital music synthesis systems were general-purpose computers.

The computer as a musical tool. The direct synthesis of sound by computer was first described in 1957 by Max Mathews and coworkers at the Bell Telephone Laboratories,

Murray Hill, N.J. Computer sound synthesis involves the description of a sound waveform as a sequence of numbers representing the instantaneous amplitudes of the wave over very small successive intervals of time. The waveform itself is then generated by the process of digital-to-analog conversion, in which the numbers are first converted to voltage steps in sequence, and the steps are then smoothed to produce the final waveform. The synthesis algorithm is written by a composer or programmer as a series of instructions stored in digital media. The composer then also writes a score that specifies the properties of the individual sound events that make up the composition. A great variety of sound-synthesis and music-composition algorithms have been developed at research institutions around the world. Music V, created in 1967-68, is the most widely used sound-synthesis program to have been developed at Bell Laboratories. Music V consists of computer models of oscillator and amplifier modules, plus procedures for establishing interactions among the modules. By the end of the 1970s, computer music systems had surpassed tape-studio techniques and analog synthesizers as the electronic composition medium of choice among modern and experimental music composers.

Digital synthesizers, the music workstation, and MIDI

Digital synthesizers. During the 1980s, commercial electronic instrument manufacturers introduced many performance-oriented keyboard instruments that used digital computer technology in combination with built-in sound-synthesis algorithms. Introduced in 1983, the DX-7 was polyphonic, had a five-octave touch-sensitive keyboard, and offered a wide choice of timbres, which the player could adjust or change to suit his requirements. Well over 100,000 DX-7s were sold, and Yamaha adapted its FM technology to a line of instruments ranging from portable, toylike keyboards to rack-mounted modules for studio and experimental use.
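The waveform-as-a-sequence-of-numbers idea described above can be sketched in a few lines. The sample rate and converter word size below are toy values chosen purely for illustration:

```python
# Sketch of computer sound synthesis: a waveform becomes a sequence
# of numbers (here 8-bit integers) giving instantaneous amplitudes at
# successive small time intervals.
import math

SAMPLE_RATE = 1000  # intervals per second (toy value)
BITS = 8            # word size of our imaginary converter

def to_numbers(freq, duration_s):
    """Instantaneous amplitudes of a sine wave, as integers."""
    n = int(SAMPLE_RATE * duration_s)
    full_scale = 2 ** (BITS - 1) - 1  # 127 for 8 bits
    return [round(full_scale * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            for i in range(n)]

numbers = to_numbers(freq=50, duration_s=0.02)  # one 50 Hz cycle
print(len(numbers))  # -> 20
print(max(numbers))  # -> 127
```

A real system then hands such a number sequence to a digital-to-analog converter, which turns the numbers into voltage steps and smooths them into the final waveform, exactly as described above.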
Another important early digital synthesizer was the Casio CZ, a battery-powered four-voice keyboard instrument using simple algorithms modeled after the capabilities of analog synthesizers. The CZ was introduced in 1984 at a price approximately one-quarter that of the DX-7 and achieved widespread popularity.

Sampling instruments; music workstations. A sound waveform from a microphone or tape recorder can be digitized, or converted to a sequence of numbers that is the digital representation of the waveform. Instruments that enable a musician to digitize a sound waveform and then process it and play it back under musical control are called sampling instruments. The Fairlight CMI was a general-purpose computer with peripheral devices that allowed the musician to digitize sounds, store them, and then play them back from a keyboard. Roger Linn introduced the Linn Drum, an instrument containing digitized percussion sounds that could be played in patterns determined by the musician. In 1984 Raymond Kurzweil introduced the Kurzweil 250, a keyboard-controlled instrument containing digitally encoded representations of grand piano, strings, and many other orchestral timbres. Both the Linn and the Kurzweil instruments were intended for composition as well as performance, since they contained digital memories into which the musician could enter a score. By the end of the 1980s, many instrument manufacturers had combined the technologies of the digital computer, digital sound synthesis, and sampling (digital sound recording) into integrated composition and sound-processing systems called music workstations.

Musical instrument digital interface. In 1983 several commercial instrument manufacturers agreed on a way of interconnecting instruments so that they could work together or in conjunction with a personal computer. MIDI embodies the means for transmitting commands that tell which notes are being played, what timbre is desired, what nuances are being produced, and so forth.
With a personal computer and the appropriate software, MIDI-equipped instruments can perform as a system similar to the larger music workstations. By the end of the 1980s, MIDI systems had become very popular with amateur as well as professional musicians.

Assessment. Electronic instruments have contributed to a tremendous expansion of musical resources. Their increasing sophistication has made available to the composer a palette of sounds ranging from pure tones at one extreme to the most complex sonic structures at the other. In addition, it has made possible the rhythmic organization of music to a degree of subtlety and complexity previously unattainable. One consequence of the use of electronic instruments has been the wide acceptance of a new definition of music as organized sound. Another consequence is the acceptance of the notion that the composer may communicate directly with an audience without the need for a performer as interpreter. Yet another consequence is the democratization of both experimental and traditional music composition through the availability of high-quality, reasonably priced instruments and computer software. Some observers have felt that the elimination of the performer as

interpreter, while it may enable the composer to realize his intentions perfectly, is nevertheless a serious loss. Performance, it is argued, is a creative discipline complementary to that of composition itself, and varieties of interpretation add richness to the musical experience; moreover, the physical presence of the performer infuses drama into what would otherwise be a purely aural, intellectual, and, by implication, somewhat lifeless event. But in fact many compositions for electronic instruments may be performed live with virtuosity and drama. With contemporary electronic instrument technology, the composer is free to choose whether or not the creative contribution of a performer will serve his artistic goals.

Chapter 3 : MIDI

MIDI is a technology standard allowing electronic musical instruments to communicate with one another and with computers. By the beginning of the 1980s, affordable digital synthesizer keyboards offering a wide range of instrument sounds and effects were widely available. Roland developed the DCB interface, which was integrated throughout much of the company's product line. It used a parallel-style interface, which was fast but expensive to implement; performers disliked dealing with the bulky multi-conductor cables, and options for building networks of devices were limited. Roland also introduced the DIN sync interface to synchronize different electronic musical instruments; it first appeared on the Roland TR-808 in 1980, followed by other Roland equipment. Sequential offered a high-speed serial interface that allowed a synth to be played remotely via a keytar-style controller keyboard; it did not sell well, and Sequential did not develop any other uses for it.

With some input from Tom Oberheim, Dave Smith of Sequential sketched out an interface that would be less expensive to implement, using off-the-shelf communications integrated circuits and inexpensive 5-pin DIN cables already widely used to connect tape decks to amplifiers in semi-pro audio equipment. Based on this discussion, Sequential built a prototype, which the other participants used to evaluate ideas over the next two years. The manufacturers involved formed a trade group, the MIDI Manufacturers Association, which became the owner of the rights to the specification, as well as a means of pooling research money. MIDI allowed communication between different instruments and let general-purpose computers play a role in music production. Each cable carries data in one direction only; two cables are required for two-way communication.
The baud rate (the rate at which state transitions can occur on the line, which along with the byte format determines the data rate) is 31,250 baud, a somewhat unfortunate choice, since the computer industry was settling on 38,400 baud for this speed range. With an 8-bit byte plus one start-bit time and one stop-bit time, this equates to 3,125 data bytes, or 25,000 data bits, per second. The cable has identical male connectors at both ends; all jacks on all devices are female. All conductors are wired straight through, with a pin at one end connected to the same-numbered pin at the other end; there is no need for "crossover" cables. Only three pins of the 5-pin connector are used. Pin 4 is the "positive" pin, on which the transmitter places the signal; pin 5 is the "return". Pins 4 and 5 form a current loop that is isolated from ground. Pin 2 at the cable ends is connected to the cable shield, and the transmitting device grounds pin 2. Pins 1 and 3 are not used. However, many cables have all five pins wired; this allows the cable to be used for other purposes, such as DIN sync. At the receiving end, the standard mandates that an optoisolator be used. This electrically isolates the cable, and the transmitter connected at its other end, from the receiving device; pin 2 is not to be connected at the receiving end. These measures prevent ground loops and help reduce noise on the line by creating a balanced line.

A fully equipped standard MIDI device has three jacks. An "in" jack receives data from a transmitter. An "out" jack sends data to a receiver. A "thru" jack electrically retransmits everything that appears at the "in" jack; this allows several receiving devices to be daisy-chained to a single transmitter. The transmitter can use MIDI channel assignments to direct data to specific receiving devices. Not all devices have all three jacks; for instance, an effects unit that does not produce any data may not have an "out" jack.
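The throughput figures above follow directly from the bit rate, assuming 10 bit-times per byte on the wire:

```python
# Deriving the data-rate figures: 31,250 baud, 10 bit-times per byte
# (1 start bit + 8 data bits + 1 stop bit).
BAUD = 31250

bytes_per_second = BAUD // 10             # -> 3125 data bytes/s
data_bits_per_second = bytes_per_second * 8  # -> 25000 data bits/s

print(bytes_per_second, data_bits_per_second)  # -> 3125 25000
```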
All devices that have an "in" jack are supposed to also have a "thru" jack, but not every manufacturer follows the standard in this regard. Depending on the numeric value, a byte is categorized as either a "command" byte or a "data" byte. A command byte has its most significant bit set, so that in hexadecimal, its value is between 80 and FF. A data byte has a value between 0 and 7F. Most command bytes divide the byte into two four-bit chunks, each of which can be thought of as a single hex digit. The first digit is the message type, and the second digit is a channel number. The concept of a channel allows several receiving devices to be chained off of a single transmitter, such that the transmitter can send data to each receiver individually, without confusion. By convention, channel numbers are referred to as being from 1 to 16, although in the actual command byte, these translate in hex to values from 0 to F (15 in decimal). Generally, the channel number that a receiving device receives on is set by the performer using a setup menu selection on the device, although some devices may also use a rotary switch or a set of DIP switches. Multitimbral synths usually have the capability to receive on more than one channel. Some very old devices have their receive channel fixed. There are some message types which are not channel-specific; they are received by every device connected to the transmitter. For these types of messages, all 8 bits of the command byte specify the message type. The list below describes the most commonly used basic message types, with the command byte value (in hex) for that message type.

Note on (9x), which tells a synth to begin playing a note. Corresponds to a key being pressed on a keyboard. Each note on a conventional music keyboard is assigned a note number, with middle C being note 60. The message format also conveys a velocity value, which corresponds to how sharply the key is struck.

Note off (8x), which tells a synth to end playing a note (or, more specifically, transition the note to its release phase). This corresponds to a key being released on a keyboard. The message format also allows for the conveyance of a release velocity.

Program change (Cx), which instructs the synth to load a patch.

Pitch wheel (Ex), which corresponds to the movement of a pitch bend lever or wheel. The message format allows for a 14-bit value, or 16,384 individual steps.

Channel pressure, better known as aftertouch (Dx). This corresponds to the force applied to the keyboard on an aftertouch-sensitive keyboard.

Key pressure, better known as polyphonic aftertouch (Ax).

Control change (Bx). This message format allows for the transmission of a variety of control message values corresponding to either common synth features, or to modes of MIDI operation. The format allows for a Controller number and a 7-bit value. In the MIDI standard, when the word Controller appears capitalized, it refers specifically to a controller number or name that is assigned a fixed meaning in the standard. To extend the range of possible values, some Controller types have been assigned two numbers, and the value is transmitted as two separate messages when the range extension is needed.
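The byte categories and message layout described above are easy to see in code. A minimal sketch (the function names are my own, not part of any MIDI library):

```python
def classify(byte: int) -> str:
    """A byte with the top bit set (0x80-0xFF) is a command/status byte;
    0x00-0x7F is a data byte."""
    return "command" if byte & 0x80 else "data"

def parse_channel_message(status: int) -> tuple:
    """Split a channel-voice status byte into (message type, channel).
    The high nibble is the message type; the low nibble is the channel
    (0-15, shown to the performer as 1-16)."""
    return status >> 4, status & 0x0F

def pitch_bend_value(lsb: int, msb: int) -> int:
    """Combine the two 7-bit data bytes of an Ex message into the
    14-bit value (0-16383, centre at 8192)."""
    return (msb << 7) | lsb

# A Note On for middle C (note 60) at velocity 100 on channel 1:
note_on = bytes([0x90, 60, 100])

msg_type, channel = parse_channel_message(note_on[0])
print(classify(note_on[0]), hex(msg_type), channel + 1)  # command 0x9 1
```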
MIDI Clock (F8), used for synchronization of devices with a recording device, sequencer, or master timing device. MIDI Time Code is a more advanced synchronization mechanism. Active sensing (FE) is sometimes used to prevent stuck notes in the event that a cable becomes disconnected while playing. System exclusive, commonly referred to as sysex (F0), allows individual synth and device manufacturers to define device-specific message formats on top of the standard formats. Sysex formats are totally under the control of the individual manufacturer. Each manufacturer is assigned a manufacturer number, which they are required to use in their sysex formats, to prevent conflicts. End of system exclusive (F7) is needed to indicate the end of a system exclusive message, since the standard allows such messages to be of variable length. Over time, a number of smaller additions and extensions to the data protocols have been added, such as the MIDI Sample Dump Standard (SDS), which is a means of using sysex messages to transfer sample data between devices. In several places, further definition has been put onto areas which were originally left open or undefined, such as the standard definitions for continuous controller numbers, and the mechanism for extending the range of manufacturer ID numbers. Many change pages and edits were published for the original standard through the 1980s and 1990s. In 1996, the MMA finally decided to codify all of the changes in a new version of the standard. For some reason, they decided to keep referring to this as the "MIDI 1.0 Detailed Specification". It is still known by that name today, despite having been revised several times since. It can be downloaded from the MMA Web site (registration is required). Several extensions and revisions of the MIDI standard have been developed for control of theatrical lighting.
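The sysex framing just described (F0, manufacturer ID, data, F7) can be sketched as follows. The payload bytes are arbitrary illustration values; the one-byte manufacturer ID 0x7D is the ID the standard reserves for non-commercial/educational use, so it is the safe one for examples:

```python
def build_sysex(manufacturer_id: int, payload: bytes) -> bytes:
    """Wrap a payload in System Exclusive framing: F0, manufacturer ID,
    data bytes, F7.  Every data byte must be 0x00-0x7F, or it would be
    mistaken for a status byte by the receiver."""
    if any(b & 0x80 for b in payload):
        raise ValueError("sysex data bytes must be 0x00-0x7F")
    return bytes([0xF0, manufacturer_id]) + payload + bytes([0xF7])

msg = build_sysex(0x7D, bytes([0x01, 0x02, 0x03]))
print(msg.hex())  # f07d010203f7
```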
One extension approved by the MMA is called MIDI Show Control; it consists of a set of defined system exclusive messages that are allocated for the purpose, which allows music and lighting applications to exist within the same MIDI system (although this is seldom done). Some other de facto standards redefine MIDI standard messages and continuous controller numbers. MIDI has also seen some use in home automation applications.

Computer interfaces

Although the possibility of computers having MIDI interfaces was envisioned in the original early-1980s development, it was not known at the time what form such an interface might take, or what it might be required to do, so the original specification had no specific provisions for computer interfaces. The Atari ST shipped with built-in MIDI ports, and as a result, many of the first software sequencer applications appeared on that platform. Originally these were just "write-through" devices, but soon the hardware vendors, in response to the increasingly large MIDI studios and rigs being built by some performers, began building MIDI interfaces that included multiple in and out ports. These were intended to solve two problems: the 16-channel limit of a single MIDI cable, and the congestion that results from forcing all data through one port. Sequencer manufacturers began collaborating with the hardware vendors to devise protocols for addressing the multiple ports. None of these were standardized, and protocols proliferated. As data rates became higher, a new problem appeared: performers began to complain of sloppy timing when executing complex sequences with larger setups. This problem got worse with the advent of USB interfaces on computers; USB is capable of high data rates but tends to have highly variable latency. So the next addition to the sequencer-to-interface protocol was time stamping. This allowed a sequencer application, when playing back a sequence, to "look ahead" and send data to the MIDI interface device with a time stamp saying, "Send this data at time X". When the interface was properly designed and time synchronized with the computer, time stamping improved timing accuracy when playing back sequences. Today, computer MIDI interfaces are available which range from one-in-one-out basic interfaces, to large rack-mount interfaces with eight or more ports of each kind. Some of the lower-end devices no longer have traditional MIDI jacks. Such devices usually conform to the "class compliant" standard published by the USB trade association in 1999, which allows them to interface with most Windows, OS X, or Linux computers without needing a device-specific driver.
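The "look ahead" time-stamping idea can be sketched as a queue of (timestamp, message) pairs drained when their time arrives. This is a toy illustration only; the class and method names are hypothetical, and real interface protocols handle clock synchronization in hardware:

```python
import heapq

class TimestampedSender:
    """Toy sketch of look-ahead scheduling: the sequencer pushes events
    ahead of time; flush() emits every event whose timestamp has passed."""
    def __init__(self):
        self._queue = []   # min-heap of (timestamp, midi_bytes)
        self.sent = []

    def schedule(self, timestamp: float, message: bytes):
        heapq.heappush(self._queue, (timestamp, message))

    def flush(self, now: float):
        # A real interface would transmit on the wire at exactly each
        # timestamp; here we just collect everything that is due.
        while self._queue and self._queue[0][0] <= now:
            _, msg = heapq.heappop(self._queue)
            self.sent.append(msg)

sender = TimestampedSender()
sender.schedule(2.0, bytes([0x80, 60, 0]))    # note off, due later
sender.schedule(1.0, bytes([0x90, 60, 100]))  # note on, due sooner
sender.flush(now=1.5)
print(len(sender.sent))  # 1 -- only the note on is due so far
```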

Chapter 4 : music technology dictionary, MIDI, audio and electronics terms explained! Electronic and digital music technology is the use of electronic or digital instruments, computers, electronic effects units, software or digital audio equipment by a musician, composer, sound engineer, DJ or record producer to make, perform or record music.

Echo: not to be confused with reverberation, which is more of a gradual decay of a sound resulting from the mixture of multiple echoes, rather than a "bounce-back" where each echo may be heard distinctly.

Effects send: enables the level of the "wet" (effected) signal to be adjusted via a potentiometer on the mixing desk. See also Sound Synthesis, envelope generator.

Error correction: the process of replacing bits lost from a digital signal.

Expansion: when lower-level signals are lowered (attenuated) and higher-level signals are raised (boosted), this is expansion. A device that achieves this through circuitry or software is called an expander. This is the opposite of compression.

Fader: usually associated with a mixing desk.

Feedback: when feedback reaches a certain level it causes an exponential rise in the level of certain frequencies, such as the screaming howl familiar to guitarists and microphone users, or, in the case of lower frequencies, a kind of ever-increasing rumble.

Figure of eight microphone: a microphone which picks up sound from the front and rear but rejects sound from the sides.

Frequency: measured in Hertz, named after Heinrich Hertz, the man who devised this form of measurement. Our ears become progressively less sensitive to low-frequency sounds, and they are at their most sensitive to sounds with a frequency of around 3 to 4 kHz.

Gain: sound professionals tend to use this term where the layman may say "volume".

Generation loss: not a problem with digital recording, so long as the signal is not converted into analogue. This is due to the fact that, at its most basic level, the copying of a digital recording is a copying of a vast amount of binary numbers, which can be checked.

Ground loop: its symptom is a low hum at 50-60 Hertz (see Frequency).
If you are in the US it will be at 60 Hertz; in Europe, 50.

Guide vocal: a rough vocal recorded early as a guide; when the other tracks are recorded, this is usually then replaced with a "final vocal".

Harmonics: to use an example, say there was a sound with a given fundamental frequency (or, if we were being musical, a note or pitch). If this were a note, its pitch would correspond to the fundamental frequency of the sound; however, the subtle interaction of the higher-frequency harmonics, of varying amplitudes (levels), will give the sound its "timbre" or "colour". Hence a guitar playing E4 will sound considerably different from a piano playing E4, although the fundamental frequency, or pitch, will still be the same.

Hard disk recorder: any digital audio recording device which is based on a computer-type hard disk storage device, rather than DAT (Digital Audio Tape), CD, etc.

Hysteresis: in recording, this term is used mostly in relation to magnetic tape and other devices which utilize magnetism (e.g. the magnetic fields of a microphone). With magnetic tape, it is when the magnetization of the tape lags slightly behind the electro-magnetic field produced by the recording head, creating a kind of distortion. This problem is partially overcome with the use of a high-frequency, inaudible signal, which is known as an AC "bias".

Impedance: commonly represented by the mathematical symbol "Z", and measured in Ohms. Some people may have had an unwelcome brush with the laws of impedance when (usually drunk) at a party! Keeping with speakers and amplifiers, impedance in terms of Ohms is usually measured nominally (an average of the various levels of impedance measured throughout the scale of sound frequencies); e.g. a typical loudspeaker with a nominal impedance of 8 Ohms may vary in impedance between 3 and 30 Ohms! The lower the nominal impedance of the load, the higher the power in terms of Watts delivered to the speaker.
Returning to the "drunken party" scenario, if you connect a 4 speakers with a nominal impedance of 8 Ohms each to one channel of of your amplifier, collectively, they will have an nominal impedance load of 2Ohms, which is beyond the range of most domestic amps. However if it is capable of driving this load comfortably, the amp will deliver a much higher wattage than it would to an 8Ohm load. However finally, this must be qualified by reminding readers that nominal impedance is merely an average see above, so some "margin of error" is required when matching speakers with an amp which may be operating at the limits of its impedance "envelope". So as they used to say in a famous TV cop programme "be careful out there! Opposite of " conductor ". Slightly later, a more advanced device, utilzing silicon as the semiconductive material, and known at the time as a "Unitary Circuit" Later "Integrated", was patented by Kenneth Intel Noyce when he was working at Fairchild Semiconductor. In modern "plug and play" systems each dedicated IRQ line is automatically assigned a number, however in older systems this had to be set manually by the user. A full Page 10

sized Jack plug has a spur of a quarter of an inch, but there are also smaller "mini" jack plugs. The distortion is more pronounced at the higher end of the frequency spectrum. Jitter also refers to timing errors where the word clock is an embedded part of the datastream self-clocking. For example, a piece in the key of C major uses mostly notes of the C major scale. A piece of music can have several key changes in it, and this is known as "modulation" see "key change" below. The first note of a scale is known as the tonic and is the note that tells us the name of the key. Saves the musician the bother of writing sharps or flats on every "accidental" note within the piece, and saves cluttering up the stave. If there are no symbols, then the piece will be in the key of C Major or A Minor, or it could possibly be a piece of arty "atonal" music by Karlheinz Stockhausen or someone similar, which has no key. Also known as "modulation". The impedance is the measurement of this resistance in terms of Ohms. The final mix of a piece of music. Hence, a 3GHz, 32 bit microprocessor can execute 3 billion cycles of 32 bits per second. Hence, if a companies fastest microprocessor clockspeed is say, 3. A sound set of sounds is the standard. Rather than recording sound or "audio" however it records the parameters of the note. The sounds triggered are dependent on the MIDI instrument or sampler supplying the sound. There are up to 16 channels per MIDI loop operating within increments of 0 - See also Timbre and Harmonic s. Page 11
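The parallel-speaker arithmetic in the impedance entry above can be checked with the standard reciprocal formula; a quick sketch:

```python
def parallel_impedance(*loads_ohms: float) -> float:
    """Nominal impedance of loads wired in parallel:
    1/Z_total = 1/Z1 + 1/Z2 + ..."""
    return 1.0 / sum(1.0 / z for z in loads_ohms)

# Four 8-Ohm speakers on one amplifier channel, as in the example:
print(parallel_impedance(8, 8, 8, 8))  # 2.0
```

For identical loads this reduces to dividing by the count: two 8-Ohm speakers present 4 Ohms, four present 2 Ohms.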

Chapter 5 : Electronic instrument music blog.quintoapp.com CHAPTER 7. MIDI and Electronic Music Technology. Today, professional and nonprofessional musicians are using the language of the musical instrument digital interface (MIDI) to perform an expanding range of music and automation tasks within audio production, audio for video and film post, stage production, etc.

Drum machines

A Yamaha RY30 Drum Machine. A drum machine is an electronic musical instrument designed to imitate the sound of drums, cymbals, other percussion instruments, and often basslines. Drum machines are most commonly associated with electronic dance music genres such as house music, but are also used in many other genres. They are also used when session drummers are not available or when the production cannot afford the cost of a professional drummer. Most modern drum machines are sequencers with a sample-playback (rompler) or synthesizer component that specializes in the reproduction of drum timbres. Though features vary from model to model, many modern drum machines can also produce unique sounds, and allow the user to compose unique drum beats and patterns. Electro-mechanical drum machines were first developed in the late 1940s, with the invention of the Chamberlin Rhythmate. Transistorized electronic drum machines later appeared in the 1960s. The most iconic drum machine was the Roland TR-808, widely used in hip hop and dance music.

Sampler (musical instrument)

Digital sampling technology, introduced in the 1980s, has become a staple of music production. Devices that use sampling record a sound digitally (often a musical instrument, such as a piano or flute, being played), and replay it when a key or pad on a controller device (e.g. an electronic keyboard or drum pad) is pressed or struck. Samplers can alter the sound using various audio effects and audio processing. Sampling has its roots in France with the sound experiments carried out by Musique Concrète practitioners.
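The basic sampler trick described above, replaying stored samples at a different rate to change their pitch, can be sketched with simple index arithmetic. This is a naive nearest-neighbour sketch in plain Python (no audio library assumed), and the sample data is made up for illustration:

```python
def resample(samples, rate: float):
    """Naive variable-rate playback: step through the source at `rate`
    source-samples per output-sample, nearest-neighbour lookup.
    rate=2.0 plays one octave up, at half the duration."""
    out = []
    pos = 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

# One "recorded" cycle of a triangle-ish wave:
recording = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
octave_up = resample(recording, 2.0)
print(octave_up)  # [0.0, 1.0, 0.0, -1.0]
```

Real samplers interpolate between samples rather than skipping them, but the pitch/duration trade-off is the same.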
In the 1980s, when the technology was still in its infancy, digital samplers cost tens of thousands of dollars and were only used by the top recording studios and musicians; they were out of the price range of most musicians. Before affordable sampling technology was readily available, DJs would use a technique pioneered by Grandmaster Flash to manually repeat certain parts of a song by juggling between two separate turntables. This can be considered an early precursor of sampling. In turn, this turntablism technique originates from Jamaican dub music in the 1960s, and was introduced to American hip hop in the 1970s. Today, most professional recording studios use digital technologies, and many samplers exist only as software. This new generation of digital samplers is capable of reproducing and manipulating sounds. New genres of music have formed which would be impossible without sampling. Advanced sample libraries have made possible complete performances of orchestral compositions that sound similar to a live performance.

MIDI

MIDI allows multiple instruments to be played from a single controller (often a keyboard, as pictured here), which makes stage setups much more portable. Such a system can fit into a single rack case, where prior to the advent of MIDI it would have required several separate keyboard instruments. MIDI has been the musical instrument industry standard interface since the 1980s through to the present day. A demonstration at the 1983 Winter NAMM convention showed two previously incompatible analog synthesizers, the Sequential Circuits Prophet-600 and the Roland Jupiter-6, communicating with each other, enabling a player to play one keyboard while getting the output from both of them. This was a massive breakthrough in the 1980s, as it allowed synths to be accurately layered in live shows and studio recordings. MIDI enables different electronic instruments and electronic music devices to communicate with each other and with computers. The advent of MIDI spurred a rapid expansion of the sales and production of electronic instruments and music software.
This newly founded association standardized the MIDI protocol by generating and disseminating all the documents about it. Since the 1980s, personal computers have developed and become the ideal system for utilizing the vast potential of MIDI. With universal MIDI protocols, electronic keyboards, sequencers, and drum machines can all be connected together. Current developments in computer hardware and specialized software continue to expand MIDI applications.

Computers in music technology

Computer and synthesizer technology joining together changed the way music is made, and is one of the fastest-changing aspects of music technology today. Max Mathews also pioneered a cornerstone of music technology: analog-to-digital conversion. The first generation of professional, commercially available computer music instruments, or workstations as some companies later called them, were very sophisticated, elaborate systems that cost a great deal of money when they first appeared. It was not until the advent of MIDI that general-purpose computers started to play a role in music production. Advancements in technology have increased the speed of hardware processing and the capacity of memory units. Software developers write new, more powerful programs for sequencing, recording, notating, and mastering music. Such programs allow the user to record acoustic sounds with a microphone, and to mix tracks (recorded audio or MIDI musical sequences), which may then be organized along a timeline and edited on the flat-panel display of a computer or Digital Audio Workstation. Musical segments, once recorded, can be copied and duplicated ad infinitum, without any loss of fidelity or added noise (a major contrast with analog recording, in which every copy leads to a loss of fidelity and added noise). Digital music can be edited and processed using a multitude of audio effects. Classical and other notated types of music are frequently written on scorewriter software. Music technology includes many forms of music reproduction. Music and sound technology refer to the use of sound engineering in a commercial, experimental or amateur hobbyist manner. Music technology and sound technology may sometimes be classed as the same thing, but they actually refer to different fields of work. Sound engineering refers primarily to the use of sound technology for sound recording or in sound reinforcement systems used in concerts and live shows.

Chapter 6 : 7 MIDI and Electronic Music Technology - Modern Recording Techniques, 6th Edition [Book] It's 30 years since the development of technology that allowed synthesisers and drum machines to be connected to computers - and since then MIDI has revolutionised the world of music recording. Indeed, popular music today, from indie rock to hip-hop to house, would not be the same without innovations in computer science and technology. The following article is an exploration of the pioneering inventions and innovations in music technology that, through the use of computers, continue to define the musical experience of today.

Making Music in the 20th Century

1931 marks the year that the technological roots of modern popular music were formed. In that fateful year the world welcomed its first drum machine while the revolutionary electric guitar took the music scene by storm. Perhaps more important, however, was that these two innovations inspired and challenged others to experiment with electric instruments and to test how technology could continue to enhance the musical experience. Then, in 1966, producer George Martin was faced with a dilemma: the Beatles wanted to combine two takes of "Strawberry Fields Forever" that had been recorded in different keys and tempos. Without the technological innovations available today, Martin ingeniously solved the problem by mechanically slowing one take while speeding up the other, then splicing the two takes together to produce one of the most celebrated popular music recordings in history. Without major advancements in computer technology, however, much of what followed would not have been possible. Once monolithic, computers greatly diminished in size through the late 1970s and early 80s while major improvements were being made in processing power. Personal computers were made accessible for the first time in history and, watching closely, the music industry quickly responded.
As the Beatles were walking Abbey Road and the Rolling Stones were licking their way to chart-topping heights, brilliant innovations on old technologies surfaced, from sampling to the drum machine to the Musical Instrument Digital Interface (MIDI), giving rise to whole new genres like hip-hop and electronic music while altering the trajectory of popular music itself. The following is a brief run-through of some of those major developments in computer technology.

Sampling

Sampling allows musicians to borrow snippets of past tracks, and even entire recordings, and incorporate them into original creations. Using synthesizer technology, artists can also alter the tone of the sample by speeding up or slowing down the track; later iterations of samplers would actually come in the form of synthesizers, as synths became more sophisticated and were able to adopt sampling technology. The first sampler, the Mellotron, appeared in the late 60s and early 70s and was a tape-replay keyboard that stored recordings on analog tape. Although its genius was widely recognized, it was soon improved upon with the emergence of the memory-based digital sampler. Developed by a trio of computer scientists and software engineers, the first digital sampler, the EMS Musys system, ran on two minicomputers (PDP-8s), giving birth to the first digital music studio. As musicians began realizing the need for and benefit of sound synthesis for sampling purposes, sampling synthesizers soon emerged. Today, sampling technology is either software-based or appears as part of the music workstation.

Digital Drum Machine

Beginning with the Rhythmicon, the father of all drum machines, first produced in 1931, the drum machine has had a strong impact on music through the years. Machines such as the Roland TR-808 and TR-909 are icons of the early hip-hop, underground dance and techno genres. Digital drum machines, otherwise known as drum computers, also figure heavily in the development of pop music in the 80s.
Starting with the Linn LM-1, digital samples of drum sounds and drum sound synthesis were both used with increasing frequency, appearing in works from the soundtrack of Scarface to the recordings of Prince. In music today the physical drum machine is a rare sight, its use having been rendered largely obsolete by MIDI and digital music workstations.

Digital Synthesizer

The digital synthesizer produces a stream of numbers at a certain rate that is then converted to analog form, allowing speakers to produce sound. Synthesizer-aided music is some of the most identifiable of the 70s and 80s. Not only did the Beatles and Rolling Stones utilize its capacity to produce unique and spacey sounds, but a whole new genre, synth-pop, arose from its use. Today, the synthesizer is a major element of the music workstation.
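The "stream of numbers at a certain rate" that the passage describes is simply a sampled waveform, one value per sample period, handed to a digital-to-analog converter. A minimal sketch of generating such a stream (the 44,100 Hz rate is the CD standard; the function name is my own):

```python
import math

def sine_samples(freq_hz: float, sample_rate: int, n: int):
    """Generate n samples of a sine wave: the digital synthesizer's
    'stream of numbers', one value every 1/sample_rate seconds."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# Roughly one full cycle of concert A (440 Hz) at CD sample rate:
wave = sine_samples(440.0, 44100, 100)
print(len(wave), round(max(wave), 2))  # 100 1.0
```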