PRODUCTION MACHINERY UTILIZATION MONITORING BASED ON ACOUSTIC AND VIBRATION SIGNAL ANALYSIS


8th International DAAAM Baltic Conference "INDUSTRIAL ENGINEERING", 19-21 April 2012, Tallinn, Estonia

Astapov, S.; Preden, J. S.; Aruväli, T. & Gordon, B.

Abstract: Real-time monitoring of machinery and systems at the shop floor is essential for many tasks in the manufacturing context. One potential application area of machinery and system monitoring is machine utilization monitoring, which provides the source data for planning. Optimal schedule planning is a critical step in maximizing the efficiency of a manufacturing facility. The paper considers a set of signal processing and analysis procedures that enable machinery monitoring by employing audio and acceleration sensors which can be easily installed. Testing results of the proposed system show good quality of machine state identification.

Key words: machinery monitoring, acoustic signal processing, vibration signal processing, process state classification

1. INTRODUCTION

Machinery monitoring on the shop floor is useful from several perspectives. It can provide information both for machine utilization optimization and for preventive and predictive maintenance. Monitoring machine utilization allows the efficiency of the production facility to be increased, while predictive maintenance decreases the number of accidental failures, thereby further increasing machine utilization and efficiency.

One solution for machinery monitoring is wireless sensor network technology [1], which makes it possible to attach the monitoring equipment to the monitored devices at minimum expense. Most recent research articles address acoustic and vibration monitoring for inspecting work piece and cutting tool quality [2] or the machinery working mode [3], while disregarding basic feedback about machinery utilization. Higher-level comprehensive information about working modes has so far improved the process mostly in theory, whereas raw, unfiltered information about machinery activity can give faster, more practical and more efficient results on the shop floor. Objective feedback is also important, as data can be corrupted when handled by several workers, each with his or her own understanding of the process.

Most modern machinery already has built-in automatic utilization detection together with data acquisition and storage capabilities. The problem is that every single machine collects only its own data, and analysis of this scattered data is inefficient and time consuming. Furthermore, the data stored in production machinery often disappears after a restart. Organized machinery utilization feedback for the entire shop floor, accumulated in one application, would therefore be a useful tool for production planning in a company (and on a larger scale also in a supply chain or cluster).

This paper introduces and compares several signal processing and classification methods applied to acoustic and vibration signals in order to detect Computer Numerical Control (CNC) machinery utilization.

2. SIGNAL PROCESSING METHODS

Digital Signal Processing (DSP) aims to extract from (typically periodic) signals the information relevant for a specific application [4]. For the task of shop floor machinery monitoring, the DSP scheme presented in Fig. 1 is applied, which combines signal processing with classification of the signal's parameters.

Fig. 1. Signal processing system: input signal frame -> feature extraction -> classification (against a knowledge base) -> class label

The system operates frame by frame, processing sections (frames of length N) of the digitized signal from a memory buffer. For every processed signal frame the system outputs a class label, which specifies the estimated state of the monitored machine. The first step is the feature extraction (FE) procedure, which derives the signal parameters relevant for identifying specific machinery operation states. The resulting set of features is much shorter than the raw signal frame and contains no redundant signal information, which is important in a resource-constrained system. The vector of extracted features is then analyzed by a classification algorithm, which estimates the most probable class label (corresponding to a process state) by applying a pre-defined knowledge base of the monitored process. This knowledge base is derived from a training signal containing the relevant machine operation states, which provides references for the classification algorithm.

2.1 Feature Extraction Methods

In the paper two FE methods are considered, both of which derive the parameters of the signal in the frequency domain. The first method is analytical and is based on the selection of effective frequency intervals. The simplified scheme of the method, which we will call Spectral means, is presented in Fig. 2. The incoming signal frame is transformed by the Fast Fourier Transform (FFT), which decomposes the temporal signal into the set of its frequency components, called the frequency spectrum (we use a specific FFT implementation [5]). The most distinguishable frequency intervals of the signal pattern are specified during system analysis, and at run time the mean values of those intervals are calculated and concatenated into the feature vector.

Fig. 2. Feature extraction of spectral means: signal frame -> FFT -> spectrum -> interval mean(.) -> spectral means

The second method (Fig. 3) is a popular feature representation of audio signals [6], called Mel-Frequency Cepstral Coefficients (MFCC). The frequency power spectrum, which is the squared absolute frequency spectrum, is scaled to the mel scale and the cepstral coefficients are calculated by applying a Discrete Cosine Transform (DCT). The mel scale is given by

    Mel(f) = 2595 log10(1 + f / 700),    (1)

where f is the linear frequency in Hz. It models human sound perception: it is almost linear up to 1 kHz and logarithmic thereafter, imitating the decreasing acuity of human hearing at higher frequencies. The cepstral coefficients of the resulting mel energies provide information about the harmonic frequencies present in the power spectrum. In this paper we apply the MFCC implementation by Ellis [7]. The mel-scaled energies and the cepstral coefficients are concatenated to form the feature vector.

Fig. 3. MFCC feature extraction: signal frame -> FFT -> |.|^2 -> mel scaling -> log(.) -> DCT -> mel energies and cepstral coefficients
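
As an illustration, a minimal Python/NumPy sketch of the two building blocks of Section 2.1 is given below. The frequency intervals shown are hypothetical placeholders rather than the intervals used in the experiments, and the random frame merely stands in for a buffered signal frame; the actual system relies on FFTW [5] for the FFT and on the MFCC code by Ellis [7].

```python
import numpy as np

def mel_scale(f_hz):
    """Eq. (1): map linear frequency in Hz to the mel scale."""
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def spectral_means(frame, fs, intervals_hz):
    """'Spectral means' features: mean magnitude of the FFT spectrum over
    a set of pre-selected frequency intervals (given in Hz)."""
    spectrum = np.abs(np.fft.rfft(frame))           # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs) # bin centre frequencies in Hz
    features = []
    for f_lo, f_hi in intervals_hz:
        band = spectrum[(freqs >= f_lo) & (freqs < f_hi)]
        features.append(band.mean() if band.size else 0.0)
    return np.asarray(features)

# 16384-sample audio frame at 44.1 kHz, three illustrative intervals
fs = 44100
frame = np.random.randn(16384)
print(spectral_means(frame, fs, [(200, 800), (800, 2000), (2000, 8000)]))
print(mel_scale(1000.0))   # ~1000 mel; the scale is near-linear below 1 kHz
```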

2.2 Classification Methods

Two conceptually different classification methods are chosen for comparison: a correlation-based and a fuzzy algorithm. The knowledge base of the correlation-based classifier consists of C reference vectors, at least one per class. The algorithm makes its class estimate by calculating the correlation between the incoming feature vector and each of the C reference vectors; the reference vector with the maximal correlation determines the class label. The algorithm is very simple, but neither very robust nor noise tolerant.

The fuzzy classification algorithm is more sophisticated and robust. For its decision-making it uses a rule base derived from a large number of reference feature vectors [8], thus accounting for the variance in process dynamics. The inference mechanism is based on the principles of fuzzy logic: it calculates the degrees of membership of the unknown feature vector in the feature subspaces corresponding to the different classes, and the feature subspace with the highest membership value defines the winning class.
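
A minimal sketch of the correlation-based decision rule is given below, assuming the knowledge base is a mapping from class labels to reference feature vectors; the numeric vectors are hypothetical, and the fuzzy rule-based classifier of [8] is deliberately not reproduced here.

```python
import numpy as np

def correlation_classifier(feature_vec, references):
    """Correlation-based classifier: 'references' maps class label -> reference
    feature vector; the label whose reference correlates most strongly with
    the incoming feature vector wins."""
    best_label, best_corr = None, -np.inf
    for label, ref in references.items():
        corr = np.corrcoef(feature_vec, ref)[0, 1]   # Pearson correlation
        if corr > best_corr:
            best_label, best_corr = label, corr
    return best_label

# Hypothetical knowledge base: one averaged reference vector per class
references = {
    1: np.array([0.20, 0.10, 0.05, 0.01]),   # e.g. "idle"
    2: np.array([0.90, 0.70, 0.40, 0.20]),   # e.g. "cutting"
}
print(correlation_classifier(np.array([0.85, 0.65, 0.35, 0.22]), references))  # -> 2
```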

3. TEST SIGNAL ACQUISITION

3.1 Experiment Setup

The experiments took place on the shop floor of a small manufacturing facility under common operational conditions, i.e. full staff and standard machinery operation cycles. Two pieces of manufacturing equipment were chosen for testing: the two-degree-of-freedom Vytek 200 W CNC laser cutting machine LST4896 and the three-degree-of-freedom AXYZ CNC-router 6020.

During the experiment the laser was cutting small rectangular pieces from a 1 mm polystyrene sheet. The cutting proceeded row-wise, with the carriage returning the laser head to the beginning of the previous row, one piece position lower. The laser cutting process consists of the following states:
1) Laser is idle.
2) Laser is cutting.

During signal acquisition the CNC-router was cutting a sheet of 21 mm plywood. The spindle did not stop spinning during the whole process of cutting several shapes from the plywood, so the procedure is regarded as a continuous working cycle. The router possesses several operation states:
1) Router is idle.
2) Compressed air supply enabled.
3) In addition to 2): vacuum pump enabled.
4) In addition to 3): dust collector enabled.
5) In addition to 4): cutting process.

In total, two types of signals were analyzed during the cutting experiments: the acoustic signal acquired by a microphone and the vibration signal acquired by an acceleration sensor. Both types of signals were acquired in parallel and thus correspond to the same events.

3.2 Acoustic Signal Acquisition

Audio signals were measured using a Shure SM58 microphone and converted to digital form with a Roland Edirol UA-25EX audio interface at a 44.1 kHz sampling rate in mono channel mode, and saved in 16-bit Waveform Audio File (WAV) format; the audio signal is thus normalized to the scale [-1, 1]. In both the laser and router cases the microphone was placed beside the apparatus, directed towards it, approximately 1.2 m above the floor. Thus no direct contact was made between the sensor and the machine.

3.3 Acceleration Signal Acquisition

Vibration measurements were made with an analog dual-axis accelerometer ADXL311 with a measurement range of ±2 g, a 0 g bias of 1.5 V and a sensitivity of 174 mV/g at an operating voltage of VDD = 3 V. The signals were digitized at a sampling frequency of 1 kHz using an Agilent U2354A data acquisition device. For the laser test the accelerometer was firmly attached on top of the x-axis carriage, with the x axis pointed perpendicular to the movement of the carriage and the y axis parallel to it. During the routing experiment the transducer was attached to the spindle parallel to the Earth's surface, with the x and y axes pointed along the first two degrees of freedom of the carriage. As in both cases the sensor is placed parallel to the ground, the gravitational component is not present in the axes' readings and the 0 g bias is easily subtracted from the signal. Signal analysis was performed on just one axis: the one that shows the greater deviation of the signal across the different process stages.

4. SIGNAL PROCESSING AND CLASSIFICATION RESULTS

Signal processing starts with an analysis step in which it is determined whether the events of interest are at all distinguishable in the signal and, if they are, their corresponding time intervals are specified. The analysis is performed by examining the signals in the frequency domain using spectrograms (i.e. three-dimensional plots of successively lined-up signal frame spectra). Class labels are assigned to all the distinguishable process states of interest, and sample datasets are generated for reference and testing. After the datasets have been generated, analysis and testing ensue.

4.1 Signal Analysis, Processing and Reference Class Label Assignment

For signal processing a frame size of 16384 samples is chosen for the audio signals, which corresponds to 0.372 s at the 44.1 kHz sampling frequency, and 256 samples for the acceleration signals, which corresponds to 0.256 s at the 1 kHz sampling frequency. Every signal is analyzed by its amplitude and shape in the time domain and by its spectral pattern in the frequency domain. A plot of the laser audio signal and its spectrogram is presented as an example in Fig. 4. As can be seen, the idle time intervals (0-10, 104-107, 201-204, 298-304 s) and the cutting cycle intervals (10-104, 107-201, 204-298 s) are well separable and thus identifiable.

Fig. 4. Laser audio signal and spectrogram

The identification of both states of laser operation described in Section 3.1 is thus possible. Class labels are assigned accordingly: class 1 to state 1) and class 2 to state 2). For the acceleration signal analysis the y axis (the one parallel to the carriage movement) is chosen. The vibration levels are very low for this machine, so the identification is based more on the carriage movement. Class labels are assigned as for the audio signal.

For the router all five modes of operation can be identified using the audio signal; the assigned class labels are identical to the mode of operation indexes (Section 3.1). The acceleration sensor is able to differentiate only the fifth state (cutting) from all the others. As the sensor is placed on the spindle, it detects only the spindle's vibration; all other vibrations appear to be well damped. Thus two class labels are assigned: class 1 to states 1) - 4) and class 2 to state 5).
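
The frame durations above follow directly from the frame sizes and sampling rates, and the spectrogram-based analysis can be reproduced with standard tools. A minimal Python/NumPy/SciPy sketch is given below; the random signal merely stands in for recorded data, and the accelerometer conversion assumes the ADXL311 figures quoted in Section 3.3.

```python
import numpy as np
from scipy.signal import spectrogram

# Frame sizes from Section 4.1 and the durations they imply
fs_audio, n_audio = 44100, 16384   # 16384 / 44100 ~= 0.372 s per audio frame
fs_accel, n_accel = 1000, 256      # 256 / 1000     = 0.256 s per acceleration frame
print(n_audio / fs_audio, n_accel / fs_accel)

def volts_to_g(v):
    """Remove the 1.5 V zero-g bias and apply the 174 mV/g sensitivity."""
    return (v - 1.5) / 0.174

# Spectrogram used to locate distinguishable process states before labelling
signal = np.random.randn(60 * fs_accel)   # stands in for one minute of vibration data
f, t, Sxx = spectrogram(signal, fs=fs_accel, nperseg=n_accel, noverlap=0)
# Sxx[:, k] is the power spectrum of frame k; 10*log10(Sxx) gives the dB spectrogram.
```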

4.2 Classification Results

The knowledge base of the correlation-based classifier consists of reference feature vectors, one per class. Each vector is derived by averaging 20 successive vectors belonging to the same class from the signal dataset. The rule base of the fuzzy classifier is trained on half of the dataset feature vectors. For both classification methods the whole datasets are used for classification accuracy testing. The classification accuracy is estimated as the percentage of correctly classified frames, i.e. the ratio between the number of frames whose estimated class labels concur with the reference and the total number of frames in the signal (a short computational sketch of this metric is given after Section 4.3).

The classification accuracy of the correlation-based classifier is presented in Table 1. The majority of the estimates are above 95%, the average being 91.7%, which is an excellent result considering the low robustness of the classification algorithm. For the laser experiment the results are, however, ambiguous, since the laser does not cut the material continuously and stops emission during the transition from one cut piece to the next. This results in some frames during the cutting stage being classified as non-operation, which is ultimately correct. The actual classification quality is therefore higher than estimated by the ratio metric.

Table 1. Correlation classifier results (%)
                 Sp. means    MFCC
Laser audio          92.68    97.56
Laser accel.         91.80    65.80
Router audio         91.76    95.72
Router accel.        98.39    99.85

The classification results for the fuzzy classification algorithm are listed in Table 2. As expected, the accuracy is higher than for the correlation classifier, with the average value being 93.1%. The results for the laser experiment are similar to the previous case, with the ratios being even lower, signifying a greater sensitivity of the algorithm to the process dynamics. These ambiguous results are presented in the upper subplot of Fig. 5.

Table 2. Fuzzy classifier results (%)
                 Sp. means    MFCC
Laser audio          89.02    98.05
Laser accel.         73.40    87.65
Router audio         98.68    98.72
Router accel.        99.85    99.49

Fig. 5. Signal classification result example plots (upper: fuzzy classifier, laser acceleration, spectral means, 73.40%; lower: fuzzy classifier, router (mill) audio, MFCC, 98.72%); circles denote reference and crosses denote estimated class labels

4.3 General Quality Assessments

The set of experiments with the test data can be considered successful. High machine state identification accuracy was achieved for almost all test signals. The fuzzy classification algorithm shows better performance, as expected; the correlation-based algorithm is, however, likely to produce more erroneous results with increased background noise. Both feature extraction algorithms have proven applicable. An increase in the number of classes does not affect classification quality (lower plot of Fig. 5) as long as the classes are well enough separable.
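
The frame-ratio accuracy metric defined in Section 4.2 is straightforward to compute; a minimal sketch follows, using hypothetical label sequences rather than the experimental data.

```python
import numpy as np

def frame_accuracy(estimated, reference):
    """Share (in %) of frames whose estimated class label concurs with the reference."""
    estimated = np.asarray(estimated)
    reference = np.asarray(reference)
    return 100.0 * np.mean(estimated == reference)

# Hypothetical per-frame label sequences for a short signal
ref = [1, 1, 2, 2, 2, 2, 1, 1]
est = [1, 1, 2, 2, 1, 2, 1, 1]      # one frame misclassified
print(frame_accuracy(est, ref))      # 87.5
```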

5. FURTHER RESEARCH

The experiments were conducted using only one measurement point per signal type. An obvious direction for future research is multipoint measurement and the development of methods that enable cooperation and complementation between several monitoring system procedures for more concise and robust decision-making. The methods should also be evaluated in the presence of greater background noise. From the production planning perspective, knowing the reason behind pauses in production is important; automatic methods for estimating the cause of interruptions are therefore worthy of research.

6. CONCLUSION

Several signal processing and classification methods applicable to a manufacturing machinery monitoring system were evaluated in the paper. Combinations of these methods were tested on signals acquired by acoustic and acceleration sensors on the shop floor of an industrial facility under common operating conditions. Two machines were considered for the experiments: a laser cutting machine and a CNC-router. Testing results have shown that the proposed system is capable of identifying the operation states of these machines with high efficiency.

7. ACKNOWLEDGEMENT

The work presented in this paper was partially supported by the Innovative Manufacturing Engineering Systems Competence Centre IMECC, co-financed by Enterprise Estonia and the European Union Regional Development Fund (project EU30006), by Artemis JU Project Simple (grant agreement number 100261), by Research Project SF0140113Bs08 and by the Estonian Science Foundation (grant F7852).

8. REFERENCES

1. Wright, P.; Dornfeld, D. & Ota, N. Condition monitoring in end-milling using wireless sensor networks (WSNs). Transactions of NAMRI/SME, 2008, 36.
2. Guo, Y. B. & Ammula, S. C. Real-time acoustic emission monitoring for surface damage in hard machining. International Journal of Machine Tools and Manufacture, 2005, 45, 1622-1627.
3. Aruväli, T.; Preden, J.; Serg, R. & Otto, T. In-process determining of the working mode in CNC turning. Estonian Journal of Engineering, 2011, 17, 4-16.
4. Vaseghi, S. V. Multimedia Signal Processing: Theory and Applications in Speech, Music and Communications. John Wiley & Sons Ltd., UK, 2007.
5. Frigo, M. & Johnson, S. G. FFTW: an adaptive software architecture for the FFT. IEEE International Conference on Acoustics, Speech and Signal Processing, 1998, 3, 1381-1384.
6. Peeters, G. A large set of audio features for sound description (similarity and classification) in the CUIDADO project. CUIDADO I.S.T. Project Report, 2004.
7. Ellis, D. PLP and RASTA (and MFCC, and inversion) in Matlab using melfcc.m and invmelfcc.m. http://labrosa.ee.columbia.edu/matlab/rastamat/, last updated Aug. 2010, last visited May 2011.
8. Riid, A. & Rüstern, E. An Integrated Approach for the Identification of Compact, Interpretable and Accurate Fuzzy Rule-Based Classifiers from Data. 15th IEEE International Conference on Intelligent Engineering Systems, 2011, 101-107.

9. ADDITIONAL DATA ABOUT AUTHORS

Sergei Astapov, Research Laboratory for Proactive Technologies, Dep. of Computer Control, sergei.astapov@dcc.ttu.ee
Jürgo Preden, Research Laboratory for Proactive Technologies, Dep. of Computer Control, jurgo.preden@ttu.ee
Tanel Aruväli, Dep. of Machinery, tanelaruvali@hot.ee
Boris Gordon, Dep. of Computer Control, borsi.gordon@dcc.ttu.ee
All authors are with Tallinn University of Technology, Ehitajate tee 5, 19086 Tallinn, Estonia.