
Auto-Tune
Collection Editors: Navaneeth Ravindranath, Tanner Songkakul, Andrew Tam
Authors: Navaneeth Ravindranath, Blaine Rister, Tanner Songkakul, Andrew Tam
Online: <http://cnx.org/content/col11474/1.1/>
CONNEXIONS, Rice University, Houston, Texas

This selection and arrangement of content as a collection is copyrighted by Navaneeth Ravindranath, Tanner Songkakul, Andrew Tam. It is licensed under the Creative Commons Attribution 3.0 license (http://creativecommons.org/licenses/by/3.0/).
Collection structure revised: December 20, 2012
PDF generated: December 20, 2012
For copyright and attribution information for the modules contained in this collection, see the Attributions section.

Table of Contents
1 Auto-Tune: Introduction (p. 1)
2 Auto-Tune: Challenges (p. 3)
3 Auto-Tune: Implementation (p. 7)
4 Auto-tune: Experimental Results (p. 9)
5 Auto-Tune: Installation Instructions (p. 11)
Index (p. 13)
Attributions (p. 14)


Chapter 1 Auto-Tune: Introduction

(This content is available online at <http://cnx.org/content/m45355/1.1/>.)

Motivation

Music is governed by a chromatic scale of pitches whose frequencies stand in exact ratios and which sound pleasing to the human ear when arranged in song. Unfortunately, most individuals, even those with pleasant singing voices, struggle to sing at exactly the correct frequency for a given note of the scale. Because the human ear can precisely detect small deviations in pitch, a singer who lands even slightly off a chromatic tone can produce music that is audibly displeasing to the listener. To achieve correct pitches in their recordings, many contemporary popular artists such as T-Pain and Rebecca Black use automatic pitch correction software, or Auto-Tune, to ensure that their vocal tracks are perfectly in tune. These programs shift the pitch of each individual note up or down to match a note on the chromatic scale, producing an output that is more pleasing to the listener. The corrected notes retain the musical character of the original because the higher harmonics of each note are shifted as well. However, many amateur musicians do not have access to this type of software, which can be expensive and difficult to use.

Solution

We have created a software solution which, given a vocal track loaded into MATLAB, quickly and precisely detects the pitch of each note and corrects it to the nearest note on the chromatic scale. However, the phase of the original signal is not completely preserved, resulting in distortion in the output. Our solution is quick and easy to use, requires no musical knowledge, and produces an in-tune, if somewhat distorted, signal.


Chapter 2 Auto-Tune: Challenges

(This content is available online at <http://cnx.org/content/m45378/1.2/>.)

2.1 Challenges

There were several challenges we had to consider when implementing our auto-tune function in MATLAB.

1. Isolating a single note

We need to divide the time-domain signal into windows small enough that they cannot contain more than one note. Once we have performed windowing in the time domain, we can take the FFT of each window to get the isolated spectrum of the note being sung. We then compare it to the closest note on the chromatic scale to determine the amount of shift needed.
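As an illustration of this first step, the sketch below divides a vocal track into fixed-length windows and inspects the spectrum of one of them to locate its dominant peak. The file name, window length, and variable names are assumptions made for this example, not the collection's actual code.

% Assumed example: isolate one window of a vocal track and inspect its spectrum.
[x, Fs] = wavread('voice.wav');        % hypothetical input file
x = mean(x, 2);                        % collapse stereo to mono
w = 256;                               % window length in samples (assumed)
n = 1;                                 % index of the window to inspect
seg = x((n-1)*w + (1:w));              % extract one 256-sample window
seg = seg(:) .* hann(w);               % Hanning taper reduces spectral leakage
S = abs(fft(seg, 512));                % zero-padded FFT for finer resolution
f = (0:511) * Fs / 512;                % frequency of each FFT bin, in Hz
[~, k] = max(S(1:256));                % largest peak in the lower half-spectrum
dominantFreq = f(k)                    % estimated pitch of this window, in Hz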

Figure 2.1: Spectral content of an entire song. Note the many peaks.

Figure 2.2: Spectral content of two notes and their harmonics. Note the distinct peaks. Figure reused from the "Finding Piano Note Frequencies" Connexions module by Scott Steger, http://cnx.org/content/m14191/latest/?collection=col10462/latest

2. Transform Distortions

Windowing the signal in the time domain requires the use of a filter. All filters create distortions in the frequency domain because of their non-ideal frequency response in the stopband.

Figure 2.3: Ideal time-domain window. Source: http://accad.osu.edu/~smay/rmannotes/regularpatterns/transitions.html

Figure 2.4: Frequency response of the ideal time window. Note the nonzero amplitude outside the main lobe. Source: https://ccrma.stanford.edu/~jos/sasp/rectangular_window.html

In addition to these magnitude distortions, windows can have nonlinear phase. This changes the relative phase between the tones in each note. For any note comprised of more than a single tone, this phase change will create audible distortion.

3. Time Duration

We change the pitch of our signals via the "chipmunk effect": notes are shifted up or down by re-sampling, effectively playing the signal back at a lower or higher sampling rate. This method creates minimal phase distortion, but it changes the time duration of the signal. To counteract this effect, we must stretch or compress each portion of the song so that it retains its original duration after re-sampling to change the pitch. We can then achieve the desired note shift by playing the entire song back at its original sampling frequency.
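To make this trade-off concrete, the sketch below (with assumed parameter values, not the collection's actual code) raises a test tone by one semitone purely by re-sampling: the pitch rises, but the re-sampled signal becomes correspondingly shorter, which is why a separate time-stretching stage is needed.

% Assumed example: pitch shift by re-sampling alone (the "chipmunk effect").
Fs = 44100;                      % sampling rate, in Hz
t = (0:Fs-1) / Fs;               % one second of time samples
x = sin(2*pi*440*t);             % A4 test tone
ratio = 2^(1/12);                % shift up by one semitone
[p, q] = rat(ratio, 1e-4);       % rational approximation for resample()
y = resample(x, q, p);           % fewer samples -> higher pitch at playback
durIn  = length(x) / Fs          % original duration: 1.00 s
durOut = length(y) / Fs          % re-sampled duration: about 0.94 s
% A time stretch (e.g., overlap-add of Hanning-windowed frames) must restore
% the original duration before playback at Fs gives a pure pitch shift.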

Chapter 3 Auto-Tune: Implementation

(This content is available online at <http://cnx.org/content/m45377/1.2/>.)

3.1 Overview of the algorithm

When pitch shifting is mentioned, most people immediately associate it with frequency shifting. Frequency shifting can easily be achieved by modulating the input signal with a sinusoid; however, this creates a ring-modulation effect, which is not what we want here. Thus, pitch shifting and frequency shifting are not the same thing. A true pitch shift can be realized by resampling the input signal. Unfortunately, this method changes the duration of the input signal, which is also undesirable. It turns out that a slight modification of the second method (resampling) can be used to accurately pitch shift a signal. To implement (crude) auto-tuning, we just need to break the input signal into small windows and pitch shift each window by an appropriate amount. More sophisticated phase correction algorithms are required to remove the distortions that result. The schematic below summarizes our implementation.

Figure 3.1

We will now outline our MATLAB implementation of auto-tuning. The algorithm can be broken down into three major steps.

3.2 1. Determining the shift ratio for a window

The input signal is first divided into windows of length 256, modulated by Hanning windows. To increase frequency resolution, each window is zero-padded to length 512. The frequency spectrum of each window is then computed using a 512-point FFT. To find the dominant note in the window, the largest peak

within a specified frequency range is selected. It does not matter whether we select the peak corresponding to the fundamental frequency or to a harmonic, since both are expected to be out of tune by the same ratio. The frequency of the note is easily found from the index of the peak by a linear mapping: the first bin corresponds to a frequency of 0 Hz, and the last bin corresponds (approximately) to the sampling frequency. The next step is to find the frequency on the chromatic scale (440 Hz multiplied by integer powers of the twelfth root of 2) that the identified peak should be shifted to. To do this, we simply map the identified peak to the closest key on the piano and find the corresponding frequency of that note. The shift ratio is the frequency of the closest piano key divided by the dominant frequency in the spectrum.

3.3 2. Pitch shifting a window

To pitch shift a window, we must first stretch or compress the window in time and then resample it. To raise the pitch, we need to expand the window, since we would like to resample at a higher frequency; similarly, lowering the pitch requires shrinking the window. For clarity, we will assume for the remainder of this section that we are interested in raising the pitch of a given window. The steps involved in lowering the pitch are analogous. To expand the window, we subdivide it into smaller overlapping frames, each of length 64 with 75% overlap, modulated by Hanning windows. Thus, each frame begins 16 samples after the previous frame. For a window of length 256, this results in 13 frames. The 13 frames are then spaced out and added together so that the expanded window is longer than the original window by a factor of the shift ratio determined in the previous section. We have now managed to stretch the window in time, but in doing so we have destroyed its linear phase, so the phase must be reconstructed. This is done by taking the FFT of each frame, adding the expected linear phase offset to the FFT coefficients in each frame based on the phase difference between the current frame and the previous frame, and finally taking an inverse FFT to get the corrected frame in the time domain. We used an external package to handle these phase corrections. To complete the pitch shift, we resample the window at a rate higher by a factor of the shift ratio, using simple linear interpolation. Note that the original length of the window is preserved, since we have expanded and resampled the window using the same ratio.

3.4 3. Recombining the windows

Finally, the pitch-shifted windows are combined. Currently, there is no phase correction after recombination, and as a result there is audible distortion in the output. Resolving the phase discrepancies for the entire signal is a challenging problem, since the phase is nonlinear. We encourage others to expand on and improve our implementation of this final stage of the algorithm by adding phase correction.
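As a concrete illustration of step 1, the sketch below computes the shift ratio for a single window under the parameters described above (a 256-sample Hanning window and a 512-point FFT). The function name and interface are assumptions for this example rather than the collection's exact code.

% Assumed example (shiftRatio.m): shift ratio for one window.
function ratio = shiftRatio(win, Fs, pmin, pmax)
    % win        : one window of the signal (e.g., 256 samples)
    % Fs         : sampling frequency, in Hz
    % pmin, pmax : frequency range in which to search for the dominant peak
    N = 512;                                   % zero-padded FFT length
    S = abs(fft(win(:) .* hann(length(win)), N));
    f = (0:N-1)' * Fs / N;                     % frequency of each FFT bin
    band = find(f >= pmin & f <= pmax);        % restrict the search to the vocal range
    [~, i] = max(S(band));
    fPeak = f(band(i));                        % dominant frequency in this window
    % Nearest chromatic frequency: 440 Hz times an integer power of 2^(1/12).
    k = round(12 * log2(fPeak / 440));
    fTarget = 440 * 2^(k / 12);
    ratio = fTarget / fPeak;                   % >1 shifts the window up, <1 shifts it down
end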

Chapter 4 Auto-tune: Experimental Results

(This content is available online at <http://cnx.org/content/m45379/1.2/>.)

4.1 Experimental Results

Our implementation was able to correct the pitch of vocal samples while leaving the timbre of the voice mostly intact. Figures 4.1 and 4.2 show spectrograms from a vocal sample of the song "Rolling in the Deep", originally recorded by Adele, as sung by an anonymous woman and posted to YouTube. To listen to the results, use the links below.

input: http://www.mediafire.com/?rh8lkcpvcsh1cq9
output: http://www.mediafire.com/?ucbo1ivb7xzbj8z

Figure 4.1: "Rolling in the Deep" input spectra. (External image: http://i.imgur.com/t1t6j.png)

Figure 4.2: Pitch-corrected spectra. (External image: http://i.imgur.com/oz79x.png)

The figures show that pitch correction shifts the spectrum slightly, without significant distortion to its overall shape. This similarity is desirable, as we wish to correct the pitch without significantly distorting the sound. Figure 4.3 supports this claim by zooming in on the spectra of one window. We can see that a slight frequency shift has occurred, yet the shapes of the input and corrected spectra are identical. Also note that higher frequencies are shifted by a greater amount, which is necessary to account for the human brain's logarithmic perception of pitch. Recall that oversampling allows for pitch detection accurate to within 0.5 Hz, with higher accuracy possible through more aggressive oversampling and quadratic regression, at the cost of increased execution time.

Figure 4.3: Spectra of one window. (External image: http://i.imgur.com/ofcsj.jpg)

We admit one major drawback of our design: the sound acquires a somewhat robotic quality, partly due to the strict correction of tones to an unnatural level of accuracy. Many commercial systems correct the pitch only partially in some areas, such as in transitions between notes, to yield a more natural sound. This would be difficult, but not impossible, to implement with the degree of automation that we desire. Still, the best results require a human to work alongside the computer to decide where notes start and end and how strictly to correct different segments of the track, but that was not the goal of our user-friendly design. Another serious issue causing the robotic sound is phase. While the phase vocoder attempts to preserve phase within the sub-windows during pitch shifting, we do not preserve phase from window to window, where different shifts occur. Thus, there is a random incoherence every 256 samples that clearly affects the sound. While the spectral content of the signals has been shown to be accurate, the distorted phase remains a significant issue. Sophisticated algorithms have been developed to overcome this obstacle, but time constraints prevented us from applying them across different windows, which undergo different shifts. Finally, a less avoidable distortion comes from spectral leakage due to windowing. Switching from rectangular to Hamming windows reduced this effect, but even with much more sophisticated designs it is impossible to eliminate leakage completely. The corrected voice, however, remains quite recognizable, and we have achieved our primary goal of correcting pitch, with minimizing distortion only a secondary concern.

4.2 Conclusions and Future Work

We implemented auto-tune in MATLAB with a high degree of accuracy, requiring almost no input from the user. Oversampling yielded sufficient frequency resolution, while windowing resolved different notes in the time domain. Dominant frequencies were efficiently detected and matched to the nearest note on the chromatic scale, from which a shift ratio was determined. Using a phase vocoder and resampling, we achieved an accurate frequency shift in each window. In the future, much of the remaining distortion could be mitigated by using phase-locking techniques between windows. Our open-source design provides easy-to-use and accurate pitch correction. An informative poster explaining our design can be downloaded at http://www.filedropper.com/poster_2

4.3 Individual Contributions

The contributions of each group member are listed below:

Navaneeth Ravindranath: wrote the 'find nearest note' function, drafted a preliminary version of the algorithm, wrote the 'Implementation' Connexions module.
Blaine Rister: helped with algorithms and code, poster editing, and plots; wrote the 'Experimental Results' and 'Installation Instructions' Connexions modules.
Tanner Songkakul: prototyped helper functions, helped with algorithms and code, created the poster, wrote the 'Introduction' Connexions module.
Andrew Tam: prototyped helper functions, created and managed poster graphics, wrote the 'Challenges' Connexions module.

Chapter 5 Auto-Tune: Installation Instructions

(This content is available online at <http://cnx.org/content/m45380/1.1/>.)

5.1 Installation Instructions

The .zip archive (available at http://www.mediafire.com/?m627iuf74xxauij) contains the main function, autotune.m, as well as the helper routines pitchshift.m, fusionframes.m, and createframes.m. To install the program, first extract it from the archive. In Windows, this can be done by right-clicking on the .zip file and selecting "Extract here". Similar methods exist for other operating systems. Next, copy all of the files to your MATLAB directory, or any other directory in your MATLAB path. If you copy the files as part of a folder, then that folder must be included in your MATLAB path for MATLAB to find the autotune function. The pitch shifting code is a slight modification of the Guitar Pitch Shifter from http://www.guitarpitchshifter.com/matlab.html. Note that while it is open source, there may be legal issues with redistributing this code. Do so at your own risk. These issues can be avoided by substituting your own pitch shifting code, replacing the call to pitchshift() in autotune.m.

5.2 Using the Program

The autotune function takes five arguments: X, pmin, pmax, w, and Fs. X is the input signal, a matrix of amplitudes in either mono (one column) or stereo (two columns). This matrix can be extracted from a .wav file with the wavread command in MATLAB. pmin is the minimum frequency, in Hz, in which we expect to find a peak, and pmax is the maximum. We suggest passing in 60 for pmin and 1050 for pmax to cover the human vocal range, but this interval may need to be expanded for other instruments. In general, the narrowest possible interval covering all played notes is preferable, as this gives the greatest chance of detecting the correct note. w is the width of the window to correct as if it were an individual note. Note that the current version of the code rounds w to the next power of two, for simplicity. Theoretically, higher values of w would result in less distortion but also poorer time resolution, making the boundaries between notes less precise. We found that 256 was a good value for this parameter. Finally, Fs is the sampling frequency, in Hz, of the input signal X. This value can be obtained as the second return value of the wavread command, and is commonly either 22050 or 44100. Here is an example of the proper MATLAB commands to auto-tune a vocal sample:

[X, Fs] = wavread('path to input file');      % Read input from .wav
out = autotune(X, 60, 1050, 256, Fs);         % Process signal
wavwrite(out, Fs, 'path to output file');     % Write output to .wav

Alternatively, the output can be played from the MATLAB command line with the following command:

soundsc(out, Fs);

Note that the current version of the code compresses stereo input to mono output, and that execution time can be greatly reduced by processing only a portion of the input song. Please enjoy our code, and feel free to modify and improve upon it!
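For instance, to try the program on just the first ten seconds of a track (an assumed snippet, using the parameter values suggested above), one might run:

% Assumed example: auto-tune only the first ten seconds to save time.
[X, Fs] = wavread('path to input file');    % full track
X = X(1 : min(10*Fs, size(X, 1)), :);       % keep at most the first 10 seconds
out = autotune(X, 60, 1050, 256, Fs);       % suggested pmin, pmax, and w values
soundsc(out, Fs);                           % listen to the excerpt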

Index of Keywords and Terms

Keywords are listed by the section with that keyword (page numbers are in parentheses). Keywords do not necessarily appear in the text of the page; they are merely associated with that section. Ex. apples, § 1.1 (1). Terms are referenced by the page they appear on. Ex. apples, 1.

2012, § 2(3)
301 project, § 2(3)
auto tune, § 4(9), § 5(11)
autotune, auto-tune, § 3(7)
fast fourier transform, § 5(11)
MATLAB, § 4(9), § 5(11)
music, § 4(9)
phase vocoder, § 5(11)
pitch correction, § 4(9), § 5(11)
signal processing, § 4(9)

Attributions

Collection: Auto-Tune
Edited by: Navaneeth Ravindranath, Tanner Songkakul, Andrew Tam
URL: http://cnx.org/content/col11474/1.1/
License: http://creativecommons.org/licenses/by/3.0/

Module: "Introduction"
Used here as: "Auto-Tune: Introduction"
By: Tanner Songkakul
URL: http://cnx.org/content/m45355/1.1/
Page: 1
Copyright: Tanner Songkakul
License: http://creativecommons.org/licenses/by/3.0/

Module: "Auto-Tune: Challenges"
By: Andrew Tam
URL: http://cnx.org/content/m45378/1.2/
Pages: 3-6
Copyright: Andrew Tam
License: http://creativecommons.org/licenses/by/3.0/

Module: "Auto-Tune: Implementation"
By: Navaneeth Ravindranath
URL: http://cnx.org/content/m45377/1.2/
Pages: 7-8
Copyright: Navaneeth Ravindranath
License: http://creativecommons.org/licenses/by/3.0/

Module: "Auto-tune: Experimental Results"
By: Blaine Rister
URL: http://cnx.org/content/m45379/1.2/
Pages: 9-10
Copyright: Blaine Rister
License: http://creativecommons.org/licenses/by/3.0/

Module: "Auto-Tune: Installation Instructions"
By: Blaine Rister
URL: http://cnx.org/content/m45380/1.1/
Pages: 11-12
Copyright: Blaine Rister
License: http://creativecommons.org/licenses/by/3.0/

Auto-Tune
ELEC 301 Final Project

About Connexions

Since 1999, Connexions has been pioneering a global system where anyone can create course materials and make them fully accessible and easily reusable free of charge. We are a Web-based authoring, teaching and learning environment open to anyone interested in education, including students, teachers, professors and lifelong learners. We connect ideas and facilitate educational communities. Connexions's modular, interactive courses are in use worldwide by universities, community colleges, K-12 schools, distance learners, and lifelong learners. Connexions materials are in many languages, including English, Spanish, Chinese, Japanese, Italian, Vietnamese, French, Portuguese, and Thai. Connexions is part of an exciting new information distribution system that allows for Print on Demand Books. Connexions has partnered with innovative on-demand publisher QOOP to accelerate the delivery of printed course materials and textbooks into classrooms worldwide at lower prices than traditional academic publishers.