
Robust Multilingual Voice Biometrics using Optimum Frames

Kala A 1, Anu Infancia J 2, Pradeepa Natarajan 3
1,2 PG Scholar, SNS College of Technology, Coimbatore-641035, India
3 Assistant Professor, SNS College of Technology, Coimbatore-641035, India
1 kalaalwar@gmail.com, 2 anuinfancia.uit@gmail.com, 3 pradeepa.natarajan@gmail.com

Abstract - In this paper, a multilingual speaker identification system based on an optimum energy frame selection approach is presented. The fixed frame rate adopted in most state-of-the-art speaker identification systems faces several problems, such as occasional noisy frames, equal importance assigned to every frame, and pitch-asynchronous representation. The proposed energy frame method detects dynamic regions in the speech signal and adapts the frame size to local conditions, which improves speaker identification accuracy. The method uses Mel Frequency Cepstral Coefficients (MFCC) for feature extraction; Vector Quantization and Gaussian Mixture Model techniques are used for speaker modeling to minimize the amount of data to be handled. The effect of segmental features of different lengths on speaker identification was also investigated. Performance was evaluated on 53 speakers in 3 different languages (Tamil, English, and Hindi). Experimental analysis shows that the proposed multilingual speaker identification system yields identification accuracy 19% and 25% higher than the existing method when using Vector Quantization and Gaussian Mixture Model speaker modeling, respectively.

Keywords - Mel Frequency Cepstral Coefficient, Speaker Modeling, Vector Quantization, Gaussian Mixture Model, Variable Frame Rate, False Acceptance, False Rejection.

1. INTRODUCTION

Voice recognition is the automatic recognition of a speaker on the basis of individual information available in the speech waveform. It makes it possible to use a person's voice to verify their identity and to control access to services such as voice dialing, telephone banking, mobile shopping, database access, and authentication. Speaker recognition can be classified into identification and verification. Speaker identification is the process of determining which registered speaker provided a given voice sample; speaker verification, on the other hand, is the process of accepting or rejecting the identity claim of a speaker [1]. In the conventional approach, the same language is used for both the training and testing phases, which may not be the best choice, as it leads to a language-constrained system. To avoid this, a multilingual setup can be used, in which training is done in one language and testing in another, to obtain the best speaker identification system [2].

A speaker identification method can be divided into three modules: preprocessing, feature extraction, and speaker modeling. The purpose of preprocessing is to offset the attenuation caused by physiological characteristics of the speech production system, to enhance the higher frequencies, and to improve the efficiency of the speech analysis [3].

Figure 1: General block diagram of a speaker identification system.

2. PRE-PROCESSING

The main objective of speech pre-processing is to make the speech signal more tractable for further processing. The pre-processing stage converts the analog speech signal into digital samples at a sampling frequency of 8000 Hz. It consists of three modules, namely pre-emphasis, framing, and windowing [5].

Pre-emphasis is used to boost the energy of the high-frequency components, since voiced speech typically carries more energy at low frequencies than at high frequencies. A simple method of pre-emphasis is processing with a first-order FIR filter:

y[n] = x[n] - α·x[n-1]    (1)

where x[n] is the input speech signal, y[n] is the output pre-emphasized speech, and α = 0.95 is an adjustable parameter.

After pre-emphasis, the speech signal is divided into frames of N = 256 samples, with successive frames overlapping by M = 128 samples [6]. After frame segmentation, windowing is carried out to reduce the side effects caused by the signal discontinuities at the beginning and end of each frame. A Hamming window is used:

w(n) = 0.54 - 0.46·cos(2πn/(N-1)),  0 ≤ n ≤ N-1    (2)

where N is the number of samples in each frame. The next step is the application of the Fast Fourier Transform (FFT), which converts each frame of N samples from the time domain into the frequency domain [7]. In the final step, the log mel spectrum is converted back to the time domain; the result is the set of Mel Frequency Cepstral Coefficients (MFCC).

Figure 2: MFCC processor.

2.1 MFCC

After windowing and Fourier transformation, the signal is warped in the frequency domain using a 24-filter bank. The filter bank is designed according to the behavior of human auditory perception: each tone of a voice signal with an actual frequency f, measured in Hz, can also be described by a subjective pitch on the mel frequency scale [8]. The mel frequency scale has a linear frequency relationship below 1000 Hz and a logarithmic relationship above 1000 Hz. For frequencies above 1000 Hz,

Mel(f) = 2595·log10(1 + f/700)    (3)

3. PROPOSED METHOD

In the proposed method, variable frame rate analysis is based on the first-order difference of the frame energy, ΔE, which is used to decide at which points a new feature vector should be extracted. The criterion is to retain the current frame if the change in energy ΔE is greater than a fixed threshold T, and to discard it if ΔE < T. The steps involved in finding the optimum energy frames are:

Step 1: Calculate MFCC vectors with a frame length of n samples and a step size of m samples.
Step 2: Calculate b(i), the per-frame energy, using the equation

b(m) = Σ_n x_m(n)²    (4)

where m is the frame number, n indexes the samples within the frame, and x_m(n) is the n-th sample of speech in the m-th frame.
Step 3: Find the average energy T from b(i).
Step 4: Calculate the first-order energy difference between consecutive frames, using the equation

ΔE(i) = |b(i) - b(i-1)|    (5)

Step 5: If ΔE > T, the current frame is retained; if ΔE < T, it is discarded.
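To make Steps 1-5 concrete, the following is a minimal NumPy sketch of the pre-processing chain and the optimum energy frame selection. It assumes the frame energy of eq. (4) is computed on the windowed samples and that ΔE is compared against the average energy T, as described above; all function names are illustrative.

```python
import numpy as np

def preemphasize(x, alpha=0.95):
    # Eq. (1): y[n] = x[n] - alpha * x[n-1], with alpha = 0.95
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_and_window(y, N=256, M=128):
    # N = 256 samples per frame; successive frames overlap by M = 128 samples
    n_frames = 1 + (len(y) - N) // M
    idx = np.arange(N)[None, :] + M * np.arange(n_frames)[:, None]
    # Eq. (2): Hamming window reduces discontinuities at the frame edges
    return y[idx] * np.hamming(N)

def select_optimum_frames(frames):
    b = np.sum(frames ** 2, axis=1)        # Eq. (4): per-frame energy b(m)
    T = b.mean()                           # Step 3: threshold = average energy
    dE = np.abs(np.diff(b, prepend=b[0]))  # Eq. (5): first-order energy difference
    return frames[dE > T]                  # Step 5: retain only frames with dE > T

# Usage: kept = select_optimum_frames(frame_and_window(preemphasize(signal)))
```

MFCC vectors would then be computed only on the retained frames, which is what reduces the frame counts reported in Table 1.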

Figure 3: Block diagram of the speaker identification system.

4. GAUSSIAN MIXTURE MODEL

Gaussian mixture models (GMM) [4] are similar to codebooks in that clusters in feature space are estimated. In addition to the mean vectors, the covariances of the clusters and the mixture weights are also computed, resulting in a more detailed speaker model when a sufficient amount of training speech is available. One common approach to identification is to compute the probability of each speaker model given the features and then choose the speaker with the highest probability. The GMM is a statistical model that can be viewed as a universal density estimator and has been widely applied in speaker recognition to model speaker characteristics. A GMM is specified as

p(x|λ) = Σ_{i=1}^{M} w_i b_i(x)    (6)

where the w_i are mixture coefficients subject to Σ_{i=1}^{M} w_i = 1, and the b_i(x) are component Gaussian densities:

b_i(x) = (2π)^{-D/2} |Σ_i|^{-1/2} exp(-(1/2)(x - μ_i)ᵀ Σ_i⁻¹ (x - μ_i))    (7)

Here λ = {w_i, μ_i, Σ_i}, D is the dimension of x, M is the number of components, μ_i is the mean of component i, and Σ_i is its covariance matrix. Consequently, the log-likelihood function is defined as

log p(X|λ) = Σ_{t=1}^{T} log p(x_t|λ)    (8)

where x_t is an input vector and T is the number of input vectors. Parameter estimation in a GMM is usually performed with the EM (Expectation-Maximization) algorithm [7].

5. VECTOR QUANTIZATION

Vector Quantization (VQ) [3] is the process of mapping vectors from a large vector space onto a finite number of regions in that space. Each region is called a cluster and is represented by its center, called a codeword; the collection of all codewords is called a codebook. Each speaker is represented by a codebook of spectral templates representing that speaker's phonetic sound clusters: the training material of a speaker is used to estimate a codebook, which serves as the model for that speaker. Classification of unknown test signals is based on the quantization error. For an identification decision, the error of the test feature vector sequence with respect to all codebooks is computed; the identified speaker is the one whose codebook has the smallest error between the test vectors and the corresponding nearest codebook vectors. The key advantages of VQ are:
- reduced storage for spectral analysis information;
- reduced computation for determining the similarity of spectral analysis vectors (in speech recognition, a major component of the computation is the determination of spectral similarity between a pair of vectors; with a VQ representation this is often reduced to a table lookup of similarities between pairs of codebook vectors);
- a discrete representation of speech sounds.
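As a sketch of how eqs. (6)-(8) are used for identification, the snippet below fits one GMM per enrolled speaker and scores a test utterance by total log-likelihood. It uses scikit-learn's EM-based GaussianMixture as a stand-in for the paper's training procedure; the diagonal covariance type and 32 components are illustrative assumptions, not values from the paper.

```python
from sklearn.mixture import GaussianMixture

def train_speaker_models(train_mfcc, n_components=32):
    # train_mfcc: dict mapping speaker id -> (num_frames, num_coeffs) MFCC array
    models = {}
    for speaker, feats in train_mfcc.items():
        # EM estimation of lambda = {w_i, mu_i, Sigma_i} from eqs. (6)-(7)
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        models[speaker] = gmm.fit(feats)
    return models

def identify_speaker(test_mfcc, models):
    # Eq. (8): total log-likelihood of the test vectors under each speaker model;
    # score() returns the per-frame average, so multiply by the frame count
    scores = {spk: gmm.score(test_mfcc) * len(test_mfcc)
              for spk, gmm in models.items()}
    return max(scores, key=scores.get)  # speaker with the highest likelihood
```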

Figure 4: Block diagram of the basic VQ training and classification structure.

In this approach, the speaker of a particular utterance is considered an information source that can be modeled using the standard source coding method called vector quantization. Figure 5 shows the flow diagram of the VQ-LBG algorithm [9].

Figure 5: Flow chart of the VQ-LBG algorithm.

6. RESULTS AND DISCUSSION

Performance evaluation: speaker identification accuracy is calculated using the formula

Speaker Identification Accuracy = (correct no. of identifications / total no. of speakers) × 100

Figure 6: Input signal.

Figure 7: Signal after framing.

Figure 8: Logarithmic power spectrum.
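The VQ branch can be sketched in a few lines. Here plain k-means (via SciPy) stands in for the LBG splitting procedure of Figure 5, which converges to a codebook of the same form; the codebook size of 32 and the helper names are illustrative.

```python
from scipy.cluster.vq import kmeans, vq

def train_codebook(train_mfcc, size=32):
    # Estimate the speaker's codebook from that speaker's training vectors
    codebook, _ = kmeans(train_mfcc.astype(float), size)
    return codebook

def avg_quantization_error(test_mfcc, codebook):
    # Mean distance from each test vector to its nearest codeword
    _, dists = vq(test_mfcc, codebook)
    return dists.mean()

def identify_speaker(test_mfcc, codebooks):
    # Identified speaker = codebook with the smallest quantization error
    return min(codebooks,
               key=lambda s: avg_quantization_error(test_mfcc, codebooks[s]))

def identification_accuracy(num_correct, num_speakers):
    # Accuracy formula from Section 6
    return 100.0 * num_correct / num_speakers
```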

Figure 9: Power spectrum.

Figure 10: Mel scale filter bank.

In the power spectrum plot, the areas containing the highest level of energy are displayed in red. The red area is located between 0.3 and 0.7 seconds, and most of the energy is concentrated in the lower frequencies (between 50 Hz and 1 kHz). For a more detailed plot, run the demo script on the CD-ROM. In the feature vector figure, only a few feature vectors are shown; each column is one feature vector, and the elements of each column are the corresponding MFCCs. Since the first 24 DCT coefficients were chosen, each column has 24 elements. In this project the total number of frames is reduced by the feature extraction stage. Table 1 gives the optimum selection of frames.

Table 1. Optimum selection of frames

Speaker   Total no. of frames   Optimum selection of frames
1         196                   90
2         180                   85
3         196                   92
4         200                   102
6         256                   125
7         355                   150
8         400                   189
9         190                   85
10        186                   80
11        180                   70
12        522                   300
13        400                   198
14        255                   156
15        256                   135
16        222                   100
17        422                   200
18        512                   302
19        321                   189
20        256                   123
21        158                   85
22        128                   70
23        422                   212
24        365                   189
25        365                   179
26        254                   145
27        356                   156
28        354                   162
29        184                   123
30        176                   100
31        196                   102
32        190                   89
33        485                   235
34        192                   90
35        245                   156
36        354                   186
37        254                   165
38        258                   145
39        265                   123
40        412                   221
41        435                   200
42        126                   65
43        196                   78
44        185                   89
45        180                   80
46        172                   79
47        160                   90
48        258                   156
49        324                   215
50        256                   153
51        278                   124
52        298                   156
53        285                   145
Average   274                   141

The experimental results show that optimum frame selection outperforms the existing method: the total number of frames per utterance is reduced by 133 on average. For example, for speaker 41 the frame count is reduced from 435 to 200.

Table 2. Comparison of speaker identification accuracy (%) between the existing method and the proposed method (columns 16/24/32 give the codebook size)

                               Existing method              Proposed method
                               GMM          VQ              GMM          VQ
Language         Window        16  24  32   16  24  32      16  24  32   16  24  32
Train: English   Blackman      55  55  72   79  83  85      68  68  53   79  85  85
Test: Tamil      Hamming       45  45  53   72  77  87      60  60  45   79  85  87
                 Rectangular   51  51  58   68  81  85      57  60  47   79  81  85
Train: English   Blackman      49  49  57   51  60  70      53  53  64   57  64  70
Test: Hindi      Hamming       49  49  66   49  64  68      57  57  58   57  64  68
                 Rectangular   55  55  60   47  57  64      55  55  62   47  51  57
Train: Hindi     Blackman      53  53  58   55  55  55      60  65  62   51  57  60
Test: Tamil      Hamming       64  64  64   49  57  55      60  60  64   55  66  72
                 Rectangular   51  51  57   49  49  58      58  58  57   60  62  72
Train: Hindi     Blackman      43  43  49   51  53  58      51  51  57   51  57  62
Test: English    Hamming       51  51  51   42  57  55      53  53  60   60  62  72
                 Rectangular   45  45  47   51  55  58      51  51  60   55  58  72
Train: Tamil     Blackman      64  64  72   65  55  75      68  68  75   75  65  75
Test: English    Hamming       58  58  66   60  55  65      79  79  75   60  55  65
                 Rectangular   53  53  72   35  55  45      66  66  72   35  55  40
Train: Tamil     Blackman      38  38  49   70  60  60      43  43  66   75  65  60
Test: Hindi      Hamming       42  42  51   65  65  55      51  51  57   65  60  55
                 Rectangular   40  40  51   35  50  50      42  42  51   40  50  50

Figure 11: Optimum energy frame.

Figure 12: English train.

Figure 13: Hindi train.

Figure 14: Tamil train.

From Figures 12, 13, and 14, GMM outperforms VQ in identification accuracy.

7. CONCLUSION

The proposed optimum energy frame algorithm shows that identification accuracy increases while size and space complexity are reduced. The system gives better acoustic signal modeling in regions with fast spectral changes. The proposed multilingual speaker identification system was analyzed under different windowing schemes and with varying frame lengths and overlaps; from our initial experiments, 512 samples per frame with 60% overlap was chosen as optimal. The proposed system's performance was compared with standard Mel Frequency Cepstral Coefficients (the existing method) for both VQ and GMM. From the analysis of the two modeling techniques, it is clear that GMM outperformed VQ in every case. For a database of 53 speakers, a maximum accuracy of 87% is achieved, which makes this system capable of performing at a reasonable level in real time. The proposed system was analyzed without any speech enhancement technique; system efficiency can be further improved by adding a speech enhancement technique as a pre-processor. By reducing the number of frames (through optimum frame selection), the system is suitable for environments with reduced time and space complexity requirements.

REFERENCES

[1] Piyush Lotia and M. R. Khan, "Multistage VQ Based GMM for Text Independent Speaker Identification System," International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, vol. 1, no. 2, May 2011.
[2] Manjot Kaur Gill, Reet Kamal Kaur, and Jagdev Kaur, "Vector Quantization based Speaker Identification," International Journal of Computer Applications (0975-8887), vol. 4, no. 2, July 2010.
[3] H. S. Jayanna and S. R. Mahadeva Prasanna, "Analysis, Feature Extraction, Modeling and Testing Techniques for Speaker Recognition," IETE Technical Review, vol. 26, no. 3, pp. 181-190, 2009.
[4] Khalid Saeed and Mohammad Kheir Nammous, "A Speech-and-Speaker Identification System: Feature Extraction, Description, and Classification of Speech-Signal Image," IEEE Transactions on Industrial Electronics, vol. 54, no. 2, April 2007.
[5] Md. Rashidul Hasan, Mustafa Jamil, Md. Golam Rabbani, and Md. Saifur Rahman, "Speaker Identification Using Mel Frequency Cepstral Coefficients," 3rd International Conference on Electrical & Computer Engineering (ICECE 2004), Dhaka, Bangladesh, 28-30 December 2004.
[6] J. Macias-Guarasa, J. Ordonez, et al., "Revisiting scenarios and methods for variable frame rate analysis in automatic speech recognition," in Proc. Eurospeech, pp. 1809-1812, 2003, ISSN: 1018-4074.
[7] D. A. Reynolds, "An Overview of Automatic Speaker Recognition Technology," in Proc. IEEE ICASSP, pp. 4072-4075, 2002.

[8] D. Durou, "Multilingual text-independent speaker identification," in Proc. Multilingual Interoperability in Speech Technology (MIST), Leusden, The Netherlands, 1999, pp. 115-118.
[9] D. A. Reynolds and R. C. Rose, "Robust Text-Independent Speaker Identification using Gaussian Mixture Speaker Models," IEEE Transactions on Speech and Audio Processing, vol. 3, no. 1, pp. 72-83, 1995.
[10] Philippe Le Cerf and Dirk Van Compernolle, "A new variable frame rate analysis method for speech recognition," IEEE Signal Processing Letters, vol. 1, no. 12, pp. 185-187, December 1994.
[11] S. M. Peeling and K. M. Ponting, "Variable frame rate analysis in the ARM continuous speech recognition system," Speech Communication, vol. 10, pp. 155-162, 1991.
[12] K. M. Ponting and S. M. Peeling, "The use of variable frame rate analysis in speech recognition," Computer Speech and Language, vol. 5, no. 2, pp. 169-179, April 1991.
[13] S. J. Young and D. Rainton, "Optimal frame rate analysis for speech recognition," IEE Colloquium on Techniques for Speech Processing (Digest No. 181), London, UK, 17 December 1990, pp. 5/1-5/3.
[14] J. S. Bridle and M. D. Brown, "A data-adaptive frame rate technique and its use in automatic speech recognition," in Proc. Inst. Acoustics Autumn Conference, 1982, pp. C2.1-C2.6.
[15] Lawrence Rabiner and Biing-Hwang Juang, Fundamentals of Speech Recognition, Prentice Hall, ISBN: 0130151572.

Anu Infancia J was born in Tamil Nadu, India, in 1992. She completed her B.E. in Electronics and Communication Engineering at United Institute of Technology, Coimbatore, and is currently pursuing her M.E. in Electronics and Communication Engineering at SNS College of Technology, Coimbatore. Her areas of interest include digital signal processing.

Pradeepa Natarajan was born in Tamil Nadu, India. She received her B.E. degree in Electronics and Communication Engineering from SNS College of Technology, Coimbatore, under Anna University, Chennai, in 2009, and her M.E. degree in Applied Electronics from Dr. Mahalingam College of Engineering and Technology, Coimbatore, under Anna University, Chennai, in 2012. She is now working as an Assistant Professor in the Department of Electronics and Communication Engineering at SNS College of Technology, Coimbatore, Tamil Nadu, India. Her areas of interest include digital image processing and image restoration.

Kala A was born in Tamil Nadu, India, in 1990. She completed her B.E. degree in Electronics and Communication Engineering at Mepco Schlenk Engineering College, Sivakasi, and is currently pursuing her M.E. degree in Electronics and Communication Engineering at SNS College of Technology, Coimbatore. Her areas of interest include signal processing.