An Efficient Multi-Target SAR ATR Algorithm
L.M. Novak, G.J. Owirka, and W.S. Brower
MIT Lincoln Laboratory

Abstract

MIT Lincoln Laboratory has developed the ATR (automatic target recognition) system for the DARPA-sponsored SAIP program. The baseline ATR system recognizes 10 GOB (ground order of battle) targets; the enhanced version of SAIP requires the ATR system to recognize 20 GOB targets. This paper compares ATR performance results for 10- and 20-target MSE (mean-squared error) classifiers using medium-resolution SAR (synthetic aperture radar) imagery.

Introduction

MIT Lincoln Laboratory is responsible for developing the ATR system for the DARPA-sponsored SAIP program. SAIP supports new sensor platforms such as the GLOBAL HAWK system, which gathers wide-area SAR stripmap imagery at medium resolution (1.0 m × 1.0 m) and SAR spotlight imagery at high resolution (0.3 m × 0.3 m). The classification stage of the SAIP ATR provides target recognition at both medium and high resolution. In high-resolution spotlight mode, conventional 2-D FFT image-formation processing is used to construct the 0.3 m × 0.3 m resolution SAR imagery used to perform target recognition. In medium-resolution stripmap mode, a superresolution image-formation algorithm is used to enhance SAR image resolution prior to performing target recognition; this algorithm enhances the resolution of the 1.0 m × 1.0 m imagery to approximately 0.5 m × 0.5 m. Reference [1] compared ATR performance results for the 10- and 20-target MSE classifiers using high-resolution (0.3 m × 0.3 m) SAR imagery to perform target recognition. This paper focuses on a comparison of ATR performance for the 10- and 20-target MSE classifiers using medium-resolution (1.0 m × 1.0 m) SAR imagery.

The Lincoln Laboratory baseline ATD/R system, depicted in Figure 1, consists of three basic data-processing stages: (1) detection, (2) discrimination, and (3) classification.
In the detection and discrimination stages, the goal is to eliminate from further consideration any portion of the input imagery that does not contain targets, while allowing all portions of the imagery containing targets to pass through to the classification stage. Classification is performed on each input subimage by finding the best-matching target image from a database of stored target reference images, or templates. The purpose of classification is to categorize the object in the input image either as a target of interest (of which there may be many types, e.g., T72 tank, M109 howitzer) or as an uninteresting clutter object. Objects in the former category are labeled with the target type corresponding to the best-matching template, whereas objects in the latter category are simply labeled unknown.

When the basic classification algorithm is performed, the best-matching template is declared to be the one that yields the smallest MSE value with respect to the input image. However, as implied in Figure 1, this template-matching algorithm is actually performed twice within the new, efficient multiresolution classification stage. The initial MSE preclassifier operates on imagery at the inherent sensor resolution (assumed in this paper to be 1.0 m × 1.0 m). This preclassifier provides coarse classification information that is used to reduce the final, higher-resolution template search space (target type, aspect angle, and spatial offset). After preclassification, a superresolution technique [2] known as high-definition imaging (HDI) is applied to the input image before the image is passed to the final high-resolution MSE classification algorithm. This multiresolution architecture reduces the computational expense of the classification subsystem.

Background and Data Description

The synthetic aperture radar imagery used in these studies was provided to Lincoln Laboratory by Wright Laboratories, WPAFB, Dayton, Ohio.
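As an illustrative sketch (not the paper's implementation), the minimum-MSE template-matching rule with an unknown-rejection threshold can be written as follows; the function name, template layout, and threshold are assumptions, and the search over spatial offsets described in the text is omitted for brevity:

```python
import numpy as np

def mse_classify(chip, templates, reject_threshold):
    """Score an input image chip against every stored template and return
    the label of the best (smallest-MSE) match, or "unknown" if even the
    best match exceeds the rejection threshold.

    `templates` maps (target_type, aspect_index) -> 2-D reference image.
    Illustrative sketch only: the system described in the text also
    searches over spatial offsets, which this omits.
    """
    best_label, best_mse = "unknown", np.inf
    for (target_type, aspect), ref in templates.items():
        mse = np.mean((chip - ref) ** 2)
        if mse < best_mse:
            best_label, best_mse = target_type, mse
    if best_mse > reject_threshold:
        return "unknown", best_mse
    return best_label, best_mse
```

A chip that matches no stored template closely enough is labeled unknown, which is how the confuser vehicles and off-configuration T72s discussed later end up rejected.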
The data were gathered by the Sandia X-band, HH-polarization SAR sensor at two different sites in support of the DARPA-sponsored MSTAR program [3]. The first MSTAR collection (MSTAR #1) took place in fall 1995 at Redstone Arsenal, Huntsville, Alabama; the second MSTAR collection (MSTAR #2) took place in fall 1996 at Eglin AFB, Ft. Walton Beach, Florida. In each collection, a large number of military targets were imaged in spotlight mode, over 360° of target aspect, at 0.3 m × 0.3 m resolution. For the studies presented in this paper, these SAR data were processed to 1.0 m × 1.0 m resolution.

Our initial studies [4] evaluated the performance of a 10-target MSE classifier using imagery of the 18 distinct targets contained in the MSTAR #1 data set. Figure 2 shows a typical SAR spotlight image (processed to 1.0 m × 1.0 m resolution) of the Redstone Arsenal target array. We used 15° depression target images to construct a 10-target classifier. The classifier was trained by constructing classifier templates from SAR images of the following targets: BMP2#1, M2#1, T72#1, BTR60, BTR70, M1, M109, M110, M113, and M548 (see Figure 3). The target array shown in Figure 2 includes three versions each of the BMP2 armored personnel carrier, the M2 infantry fighting vehicle, and the T72 main battle tank. The three T72 tanks varied significantly from tank to tank: T72#1, used in training the classifier, had skirts along both sides of the target; T72#2 had fuel drums (barrels) mounted on the rear of the tank; T72#3 had neither skirts nor barrels. The classifier was tested using the remaining 8 targets that were not used in training: two BMP2s, two M2s, two T72s, the HMMWV, and the M35. In our initial studies, the HMMWV and the M35 were used as confuser vehicles (i.e., vehicles not included in the set of 10 targets that the classifier was trained to recognize); the other 6
test targets provided independent classifier testing data (data not used in classifier training). One important conclusion gleaned from these initial studies [4] was that the ability to correctly classify the independent T72 targets depended strongly on how closely the target configuration matched that of the tank used in training the classifier. Because of the fuel drums on the rear of the tank, T72#2 was declared unknown a significant number of times. Using additional T72 tank data from MSTAR #2, we demonstrated that intraclass variability is a very important issue for classifier design [1].

This paper compares the performance of the 10-target MSE classifier with that of the 20-target MSE classifier using medium-resolution (1.0 m × 1.0 m) SAR imagery. To implement the 20-target classifier we combined 11 target types imaged during the MSTAR #1 collection with 9 target types imaged during the MSTAR #2 collection (both data sets at 15° depression). Figure 4 shows a typical SAR spotlight image (processed to 1.0 m × 1.0 m resolution) of the MSTAR #2 Eglin AFB target array. Photographs of the targets used to implement the 20-target classifier are shown in Figure 5.

The 10- and 20-target classifiers were implemented by constructing 72 templates per target. These templates were obtained from target images gathered every degree in aspect around the target: five consecutive images were averaged to form 72 averaged images per target, and the templates were then obtained by isolating the clutter-free target pixels from each 5°-average image, providing templates spanning a total of 360° of aspect coverage per target.

Efficient Classifier Implementation

We developed a computationally efficient implementation of the MSE classifier for the SAIP system to provide significantly increased speed in the ATR function with only a marginal loss in ATR performance.
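The template-construction procedure described above (360 one-degree-spaced images, five-image averages, clutter-free pixel isolation) can be sketched as follows; the function name and the use of a binary mask to stand in for the paper's clutter-free pixel isolation are assumptions:

```python
import numpy as np

def build_templates(images_by_degree, target_masks=None):
    """Form 72 aspect templates from 360 one-degree-spaced target images.

    Five consecutive images are averaged into one 5-degree template, as
    described in the text. The optional per-template binary mask is a
    stand-in for isolating the clutter-free target pixels; the actual
    masking procedure is not detailed here.
    """
    assert len(images_by_degree) == 360
    templates = []
    for k in range(72):
        group = images_by_degree[5 * k : 5 * (k + 1)]
        avg = np.mean(group, axis=0)          # 5-image average
        if target_masks is not None:
            avg = avg * target_masks[k]       # zero out clutter pixels
        templates.append(avg)
    return templates
```

Run once per target and per resolution, this yields the 72-template-per-target sets (720 templates for 10 targets, 1440 for 20) referenced later in the text.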
As shown in Figure 1, the high-resolution MSE classifier is preceded by a preclassifier stage that performs a coarse MSE classification on 1.0 m × 1.0 m resolution data. This reduced-resolution MSE preclassifier provides an estimate of the pose (aspect angle) of the target and an estimate of the target's true class. This information is passed to the more computationally intensive high-resolution MSE classifier and is used to limit the search space over target aspect and target type, resulting in a more computationally efficient ATR algorithm [4].

Figure 6 shows a cumulative error probability curve of the 20-target MSE preclassifier pose-estimation error in degrees for 1.0 m × 1.0 m resolution target data. Because each template represents 5° of aspect angle, a pose error of 20° corresponds to ±4 templates. The curve in Figure 6 indicates that approximately 95% of the time the correct pose is contained in the ±4-template search space. Note that a 180° ambiguity is included in the pose estimates because these targets are nearly symmetric when facing forward or backward. Therefore, a ±4-template pose estimate with the 180° ambiguity yields a total of 18 templates to be searched at higher resolution. Thus the higher-resolution MSE classifier does not have to search all 72 templates per target; rather, it searches a much smaller subset of the high-resolution template set.

Figure 7 presents a plot of the probability that the correct target class is contained in the top N MSE scores for 1.0 m × 1.0 m resolution imagery. For this study, the top score gave the correct class only 31.7% of the time, but the correct class for the 20-target classifier was contained in the top 10 scores approximately 94.2% of the time. The curves in Figures 6 and 7 show that the 20° pose-error window and the top 10 scores from the preclassifier can be used to prune the high-resolution MSE classification search space with only a small degradation in performance.
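The aspect-pruning arithmetic above can be made concrete with a short sketch (function name and index convention are illustrative): a ±4-template window around the preclassifier's pose estimate, duplicated 180° away to cover the front/back ambiguity, selects 2 × (2 × 4 + 1) = 18 of the 72 templates.

```python
def pruned_template_indices(pose_index, half_width=4, n_templates=72):
    """Aspect-template indices the high-resolution stage must search,
    given the preclassifier's pose estimate (a template index, 0-71).

    A +/-half_width window around the estimate, plus the same window
    shifted 180 degrees (n_templates // 2 templates) for the front/back
    ambiguity, gives 18 of the 72 templates for half_width = 4.
    """
    window = [(pose_index + d) % n_templates
              for d in range(-half_width, half_width + 1)]
    flipped = [(i + n_templates // 2) % n_templates for i in window]
    return sorted(set(window + flipped))
```

Combined with keeping only the top 10 of 20 candidate classes, the high-resolution stage searches roughly 10 × 18 = 180 of the 1440 stored templates, which is the source of the computational savings claimed in the text.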
Performance Results

This section summarizes the ATR performance achieved by the 10- and 20-target MSE classifiers using 1.0 m × 1.0 m resolution SAR imagery. Both classifiers were initially tested using the 6 independent targets from the MSTAR #1 collection. The results of these evaluations are summarized in Table 1, which presents the classifier confusion matrices for the 10-target classifier trained on MSTAR #1 data and tested on the 6 MSTAR #1 independent test targets (top) and for the 20-target classifier tested on the same 6 targets (bottom). When the 10-target classifier was tested using the independent MSTAR #1 test data, an average probability of correct classification of 66.2% was achieved against the 6 independent targets. Note, however, that performance for the T72 tank with fuel drums on the rear (T72#2) was somewhat reduced: 123 of the 195 images were correctly classified, while 44 of the 195 were declared unknown by the classifier. When the 20-target classifier was tested using the same independent MSTAR #1 test data, the average probability of correct classification degraded slightly to 60.7%. The number of T72#2 targets correctly classified by the 20-target classifier was only 104 of 195, while 31 images were declared unknown.

Both classifiers were then tested using independent test data (three BTR70s and four M109s) in controlled configurations from the MSTAR #2 collection. Table 2 presents the classifier confusion matrices for the original 10-target classifier tested on the 7 MSTAR #2 independent test targets (top) and for the 20-target classifier tested on the same 7 targets (bottom). The probabilities of correct classification against these independent test data are 77.3% and 70.3% for the 10- and 20-target classifiers, respectively.
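The probability-of-correct-classification figures quoted throughout this section are derived from confusion matrices like those in Tables 1 and 2. A minimal sketch of that computation (the function name and the exact column layout, true types in rows and declared types plus a final unknown column, are assumptions about the tables' format):

```python
import numpy as np

def correct_classification_rate(confusion, labels):
    """Overall probability of correct classification from a confusion
    matrix whose rows are true target types and whose columns are the
    declared types (in the same order) plus a final "unknown" column.
    Illustrative layout; the published tables may be arranged differently.
    """
    confusion = np.asarray(confusion, dtype=float)
    correct = sum(confusion[i, i] for i in range(len(labels)))
    return correct / confusion.sum()
```

For example, a single-row "matrix" with 123 of 195 T72#2 images correct reproduces the 63.1% per-target rate implied by the counts quoted above.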
This test illustrates that classifier templates developed from the MSTAR #1 collection work equally well when tested against these independent test target images from the MSTAR #2 collection.

The MSTAR #2 collection imaged eight T72 tanks in a variety of configurations, as described in Table 3. We tested the 10- and 20-target classifiers using target images of seven of the independent T72 tanks from the MSTAR #2 collection. Note that a single T72 tank from the MSTAR #1 collection was used to train both classifiers; its configuration was skirts/no barrels (S/NB; i.e., skirts along both sides of the tank but no fuel drums mounted at the rear). When both classifiers were tested against seven of the independent T72 tanks from the MSTAR #2 collection,
significantly degraded classifier performance was observed. As shown in Table 4, the probabilities of correct classification against these test data are 52.3% and 36.4% for the 10- and 20-target classifiers, respectively. The 10-target classifier rejected a large number of T72 tank images (424 of the 1918 total), declaring them unknown; the confusion matrix indicates that 93 images of T72 #7, 93 images of T72 #6, and 83 images of T72 #5 were rejected. Note that T72 #5, #6, and #7 were all configured with fuel barrels, and T72 #7 was also configured with reactive armor. The 20-target classifier confusion matrix presented in Table 4 shows that only 698 of the 1918 T72 test images were correctly classified; in addition, 263 test images were declared unknown. Increasing the number of target classes from 10 to 20 resulted in many more T72 test images being incorrectly classified. Table 4 (bottom) indicates that for the 20-target classifier the M60 and T62 tank classes were a considerable source of confusion for the T72 test inputs; for T72 #5 and #6 alone, 149 T72 test inputs were incorrectly classified as T62 tanks.

Because the T72 class was trained using only the S/NB-configuration target from the MSTAR #1 collection, we decided that the various T72 configurations should be investigated more carefully. SAR images and optical photographs of the MSTAR #1 and MSTAR #2 targets were used to examine target configuration, especially for the many T72 variants and the T62. We compared T72 test images from MSTAR #2 with the T72 training images from MSTAR #1 and with the T62 target images. Many test images of the T72 tanks configured with fuel barrels, such as MSTAR #2 T72 #5, #6, and #7, had scattering signatures that were more similar to the T62 templates than to the T72 templates.
This discrepancy was explained by observing that the images used to construct the T62 templates were configured with skirts/barrels, whereas the images used to construct the T72 templates were configured with skirts/no barrels. These observations prompted an experimental modification of the classifiers: we speculated that augmenting the classifier template sets with an additional template set for a T72 tank in the no skirts/barrels configuration would improve overall classification performance. The study using the MSTAR #2 T72 tanks (summarized in Table 4) was repeated for the 10- and 20-target classifiers with the additional (NS/B) T72 templates. For clarity, we refer to these classifiers as the 11- and 21-target classifiers even though they still identify only 10 and 20 unique target types. The 10-target classifiers discussed earlier (Tables 1, 2, 4) used 720 preclassifier templates and 720 high-resolution templates; the 20-target classifiers used 1440 preclassifier templates and 1440 high-resolution templates; the 11- and 21-target classifiers use an additional 144 templates for the T72 variant. Table 5 summarizes the results of the 11- and 21-target classifiers: the probabilities of correct classification improved to 75.6% and 63.9%, respectively.

A full evaluation of the 11- and 21-target classifier implementations was performed by combining the independent test inputs from the MSTAR #1 and MSTAR #2 data sets. Table 6 presents confusion matrices for 5195 independent test inputs. The leftmost column denotes the target type, followed by the number of different-serial-numbered targets of each type used in the performance evaluation; for example, 9 different-serial-numbered T72s were included in this final performance summary. Since the total number of test inputs varies with each target type, the confusion-matrix entries have been converted to percentages.
As Table 6 shows, the average probabilities of correct classification are 74.4% and 66.2% for the 11- and 21-target classifiers, respectively. The results of this evaluation indicate that classification performance can be maintained even with significant target configuration variability, provided additional templates with the appropriate configuration are incorporated into the classifier. Of course, adding classification templates to compensate for a target configuration variation does increase the overall storage requirement and computational load of the classifier.

Summary

This paper compared the performance of 10- and 20-target, template-based MSE classifiers. Both classifiers were developed at Lincoln Laboratory in support of the SAIP program. The classifiers use medium-resolution (1.0 m × 1.0 m) data processed with a new superresolution imaging technique, high-definition imaging, in an efficient multiresolution architecture. High-definition imaging improves overall classification performance, while the multiresolution implementation reduces the computational load. System performance was evaluated using a significant number of tactical military targets (5195 test images). The results of these evaluations show that the number of target classes can be increased from 10 to 20 with only a small decrease in target recognition performance; the correct classification performance for the final 10- and 20-target classifiers was 74.4% and 66.2%, respectively. The results also show that significant target configuration variability can decrease interclass separability and degrade performance; however, additional reference templates can be used to mitigate these effects.

References

1. L.M. Novak, et al., "Performance of a 20-Target MSE Classifier," SPIE Conference, Orlando, Fla., April.
2. G.R. Benitz, "High-Definition Vector Imaging for Synthetic Aperture Radar," Asilomar Conf., Pacific Grove, Calif., November.
3. MSTAR Program Technology Review, Denver, Colo., November.
4. L.M. Novak, et al., "The ATR System in SAIP," Lincoln Laboratory Journal, Vol. 10, No. 2, 1997.
Figure 1. Block diagram of the Lincoln Laboratory baseline ATR system.

Figure 2. Typical SAR spotlight image (1.0 m × 1.0 m resolution) of the Redstone Arsenal target array.

Figure 3. Photographs of the 18 targets from the MSTAR #1 collection. The classifier was trained with 10 targets (BMP2#1, M2#1, T72#1, BTR60, BTR70, M548, M1, M109, M110, M113). Six independent targets (BMP2#2, M2#2, T72#2, BMP2#3, M2#3, T72#3) and two confuser targets (M35, HMMWV) provided test data for the classifier.

Figure 4. Typical SAR spotlight image (1.0 m × 1.0 m resolution) of the Eglin AFB target array.

Figure 5. The targets from the MSTAR #1 and #2 collections used to train the 20-target classifier.

Table 3. Intraclass variability matrix (seven T72 tanks from the MSTAR #2 data set).

Notation   Configuration of target
S/B        Skirts/barrels (fuel drums)
S/NB       Skirts/no barrels
NS/B       No skirts/barrels
NS/NB      No skirts/no barrels
S/B/A      Skirts/barrels/reactive armor
Figure 6. Cumulative error probability versus pose error with 1.0 m × 1.0 m resolution target data.

Figure 7. The probability that the correct target class is contained in the top N MSE scores with 1.0 m × 1.0 m resolution target data.

Table 1. Confusion matrices for the 10- and 20-target classifiers using 1.0 m × 1.0 m resolution imagery (test inputs are six independent targets from the MSTAR #1 data set).
Table 2. Confusion matrices for the 10- and 20-target classifiers (test inputs are seven independent targets from the MSTAR #2 data set).

Table 4. Confusion matrices for the 10- and 20-target classifiers (test inputs are seven T72 tanks from the MSTAR #2 data set).
Table 5. Confusion matrices for the 11- and 21-target classifiers (test inputs are seven T72 tanks from the MSTAR #2 data set).

Table 6. Confusion matrices for the 11- and 21-target classifiers (test inputs are a composite of the MSTAR #1 and MSTAR #2 data sets).
PERFORMANCE OF 10- AND 20-TARGET MSE CLASSIFIERS

Leslie M. Novak, Gregory J. Owirka, and William S. Brower
Lincoln Laboratory, Massachusetts Institute of Technology
Wood Street, Lexington, MA
More informationADVANCES in semiconductor technology are contributing
292 IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 14, NO. 3, MARCH 2006 Test Infrastructure Design for Mixed-Signal SOCs With Wrapped Analog Cores Anuja Sehgal, Student Member,
More informationMachine Vision System for Color Sorting Wood Edge-Glued Panel Parts
Machine Vision System for Color Sorting Wood Edge-Glued Panel Parts Q. Lu, S. Srikanteswara, W. King, T. Drayer, R. Conners, E. Kline* The Bradley Department of Electrical and Computer Eng. *Department
More informationImplementation of an MPEG Codec on the Tilera TM 64 Processor
1 Implementation of an MPEG Codec on the Tilera TM 64 Processor Whitney Flohr Supervisor: Mark Franklin, Ed Richter Department of Electrical and Systems Engineering Washington University in St. Louis Fall
More informationDETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION
DETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION H. Pan P. van Beek M. I. Sezan Electrical & Computer Engineering University of Illinois Urbana, IL 6182 Sharp Laboratories
More informationNational Park Service Photo. Utah 400 Series 1. Digital Routing Switcher.
National Park Service Photo Utah 400 Series 1 Digital Routing Switcher Utah Scientific has been involved in the design and manufacture of routing switchers for audio and video signals for over thirty years.
More informationMATH& 146 Lesson 11. Section 1.6 Categorical Data
MATH& 146 Lesson 11 Section 1.6 Categorical Data 1 Frequency The first step to organizing categorical data is to count the number of data values there are in each category of interest. We can organize
More informationDetecting Musical Key with Supervised Learning
Detecting Musical Key with Supervised Learning Robert Mahieu Department of Electrical Engineering Stanford University rmahieu@stanford.edu Abstract This paper proposes and tests performance of two different
More information2. AN INTROSPECTION OF THE MORPHING PROCESS
1. INTRODUCTION Voice morphing means the transition of one speech signal into another. Like image morphing, speech morphing aims to preserve the shared characteristics of the starting and final signals,
More informationThe SmoothPicture Algorithm: An Overview
The SmoothPicture Algorithm: An Overview David C. Hutchison Texas Instruments DLP TV The SmoothPicture Algorithm: An Overview David C. Hutchison, Texas Instruments, DLP TV Abstract This white paper will
More informationA video signal consists of a time sequence of images. Typical frame rates are 24, 25, 30, 50 and 60 images per seconds.
Video coding Concepts and notations. A video signal consists of a time sequence of images. Typical frame rates are 24, 25, 30, 50 and 60 images per seconds. Each image is either sent progressively (the
More informationA Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique
A Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique Dhaval R. Bhojani Research Scholar, Shri JJT University, Jhunjunu, Rajasthan, India Ved Vyas Dwivedi, PhD.
More informationHow To Stretch Customer Imagination With Digital Signage
How To Stretch Customer Imagination With Digital Signage INTRODUCTION Digital signage is now the standard wherever people shop, travel, gather, eat, study and work. It is used to increase sales, improve
More informationCOMP 249 Advanced Distributed Systems Multimedia Networking. Video Compression Standards
COMP 9 Advanced Distributed Systems Multimedia Networking Video Compression Standards Kevin Jeffay Department of Computer Science University of North Carolina at Chapel Hill jeffay@cs.unc.edu September,
More informationDrum Sound Identification for Polyphonic Music Using Template Adaptation and Matching Methods
Drum Sound Identification for Polyphonic Music Using Template Adaptation and Matching Methods Kazuyoshi Yoshii, Masataka Goto and Hiroshi G. Okuno Department of Intelligence Science and Technology National
More informationFast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264
Fast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264 Ju-Heon Seo, Sang-Mi Kim, Jong-Ki Han, Nonmember Abstract-- In the H.264, MBAFF (Macroblock adaptive frame/field) and PAFF (Picture
More informationAn Efficient Low Bit-Rate Video-Coding Algorithm Focusing on Moving Regions
1128 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 11, NO. 10, OCTOBER 2001 An Efficient Low Bit-Rate Video-Coding Algorithm Focusing on Moving Regions Kwok-Wai Wong, Kin-Man Lam,
More informationWhite Paper : Achieving synthetic slow-motion in UHDTV. InSync Technology Ltd, UK
White Paper : Achieving synthetic slow-motion in UHDTV InSync Technology Ltd, UK ABSTRACT High speed cameras used for slow motion playback are ubiquitous in sports productions, but their high cost, and
More informationLecture 2 Video Formation and Representation
2013 Spring Term 1 Lecture 2 Video Formation and Representation Wen-Hsiao Peng ( 彭文孝 ) Multimedia Architecture and Processing Lab (MAPL) Department of Computer Science National Chiao Tung University 1
More informationComputer Coordination With Popular Music: A New Research Agenda 1
Computer Coordination With Popular Music: A New Research Agenda 1 Roger B. Dannenberg roger.dannenberg@cs.cmu.edu http://www.cs.cmu.edu/~rbd School of Computer Science Carnegie Mellon University Pittsburgh,
More informationThreshold Tuning of the ATLAS Pixel Detector
Haverford College Haverford Scholarship Faculty Publications Physics Threshold Tuning of the ATLAS Pixel Detector P. Behara G. Gaycken C. Horn A. Khanov D. Lopez Mateos See next page for additional authors
More informationFeature-Based Analysis of Haydn String Quartets
Feature-Based Analysis of Haydn String Quartets Lawson Wong 5/5/2 Introduction When listening to multi-movement works, amateur listeners have almost certainly asked the following situation : Am I still
More informationAn Overview of the Performance Envelope of Digital Micromirror Device (DMD) Based Projection Display Systems
An Overview of the Performance Envelope of Digital Micromirror Device (DMD) Based Projection Display Systems Dr. Jeffrey B. Sampsell Texas Instruments Digital projection display systems based on the DMD
More informationOther funding sources. Amount requested/awarded: $200,000 This is matching funding per the CASC SCRI project
FINAL PROJECT REPORT Project Title: Robotic scout for tree fruit PI: Tony Koselka Organization: Vision Robotics Corp Telephone: (858) 523-0857, ext 1# Email: tkoselka@visionrobotics.com Address: 11722
More informationSonarWiz Layback - Cable-Out Tutorial
SonarWiz Layback - Cable-Out Tutorial Revision 6.0,4/30/2015 Chesapeake Technology, Inc. email: support@chesapeaketech.com Main Web site: http://www.chesapeaketech.com Support Web site: http://www.chestech-support.com
More informationA Look-up-table Approach to Inverting Remotely Sensed Ocean Color Data
A Look-up-table Approach to Inverting Remotely Sensed Ocean Color Data W. Paul Bissett Florida Environmental Research Institute 4807 Bayshore Blvd. Suite 101 Tampa, FL 33611 phone: (813) 837-3374 x102
More informationSupervised Learning in Genre Classification
Supervised Learning in Genre Classification Introduction & Motivation Mohit Rajani and Luke Ekkizogloy {i.mohit,luke.ekkizogloy}@gmail.com Stanford University, CS229: Machine Learning, 2009 Now that music
More informationFigure 1: Feature Vector Sequence Generator block diagram.
1 Introduction Figure 1: Feature Vector Sequence Generator block diagram. We propose designing a simple isolated word speech recognition system in Verilog. Our design is naturally divided into two modules.
More informationDAT335 Music Perception and Cognition Cogswell Polytechnical College Spring Week 6 Class Notes
DAT335 Music Perception and Cognition Cogswell Polytechnical College Spring 2009 Week 6 Class Notes Pitch Perception Introduction Pitch may be described as that attribute of auditory sensation in terms
More informationDrift Compensation for Reduced Spatial Resolution Transcoding
MERL A MITSUBISHI ELECTRIC RESEARCH LABORATORY http://www.merl.com Drift Compensation for Reduced Spatial Resolution Transcoding Peng Yin Anthony Vetro Bede Liu Huifang Sun TR-2002-47 August 2002 Abstract
More informationTesting Production Data Capture Quality
Testing Production Data Capture Quality K. Bradley Paxton, Steven P. Spiwak, Douglass Huang, and James K. McGarity ADI, LLC 200 Canal View Boulevard, Rochester, NY 14623 brad.paxton@adillc.net, steve.spiwak@adillc.net,
More informationImproving Performance in Neural Networks Using a Boosting Algorithm
- Improving Performance in Neural Networks Using a Boosting Algorithm Harris Drucker AT&T Bell Laboratories Holmdel, NJ 07733 Robert Schapire AT&T Bell Laboratories Murray Hill, NJ 07974 Patrice Simard
More informationChord Classification of an Audio Signal using Artificial Neural Network
Chord Classification of an Audio Signal using Artificial Neural Network Ronesh Shrestha Student, Department of Electrical and Electronic Engineering, Kathmandu University, Dhulikhel, Nepal ---------------------------------------------------------------------***---------------------------------------------------------------------
More information4. TITLE AND SUBTITLE 5a. CONTRACT NUMBER. 6. AUTHOR(S) 5d. PROJECT NUMBER
REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions,
More informationNEW APPROACHES IN TRAFFIC SURVEILLANCE USING VIDEO DETECTION
- 93 - ABSTRACT NEW APPROACHES IN TRAFFIC SURVEILLANCE USING VIDEO DETECTION Janner C. ArtiBrain, Research- and Development Corporation Vienna, Austria ArtiBrain has installed numerous incident detection
More informationEfficient Implementation of Neural Network Deinterlacing
Efficient Implementation of Neural Network Deinterlacing Guiwon Seo, Hyunsoo Choi and Chulhee Lee Dept. Electrical and Electronic Engineering, Yonsei University 34 Shinchon-dong Seodeamun-gu, Seoul -749,
More informationCS229 Project Report Polyphonic Piano Transcription
CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project
More informationDESIGNING OPTIMIZED MICROPHONE BEAMFORMERS
3235 Kifer Rd. Suite 100 Santa Clara, CA 95051 www.dspconcepts.com DESIGNING OPTIMIZED MICROPHONE BEAMFORMERS Our previous paper, Fundamentals of Voice UI, explained the algorithms and processes required
More informationDistortion Analysis Of Tamil Language Characters Recognition
www.ijcsi.org 390 Distortion Analysis Of Tamil Language Characters Recognition Gowri.N 1, R. Bhaskaran 2, 1. T.B.A.K. College for Women, Kilakarai, 2. School Of Mathematics, Madurai Kamaraj University,
More informationUC San Diego UC San Diego Previously Published Works
UC San Diego UC San Diego Previously Published Works Title Classification of MPEG-2 Transport Stream Packet Loss Visibility Permalink https://escholarship.org/uc/item/9wk791h Authors Shin, J Cosman, P
More information5) The transmission will be able to be done in colors, grey scale or black and white ("HF fax" type).
Patrick Lindecker (F6CTE) Bures-sur-Yvette the 9 th of may 2005 Editing by Bill Duffy ( KA0VXK) In this paper, I describe a digital picture transmission protocol named Run which has the main originalities,
More informationHuman Hair Studies: II Scale Counts
Journal of Criminal Law and Criminology Volume 31 Issue 5 January-February Article 11 Winter 1941 Human Hair Studies: II Scale Counts Lucy H. Gamble Paul L. Kirk Follow this and additional works at: https://scholarlycommons.law.northwestern.edu/jclc
More informationTemporal Error Concealment Algorithm Using Adaptive Multi- Side Boundary Matching Principle
184 IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.12, December 2008 Temporal Error Concealment Algorithm Using Adaptive Multi- Side Boundary Matching Principle Seung-Soo
More informationNOTICE: This document is for use only at UNSW. No copies can be made of this document without the permission of the authors.
Brüel & Kjær Pulse Primer University of New South Wales School of Mechanical and Manufacturing Engineering September 2005 Prepared by Michael Skeen and Geoff Lucas NOTICE: This document is for use only
More information19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007
19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 AN HMM BASED INVESTIGATION OF DIFFERENCES BETWEEN MUSICAL INSTRUMENTS OF THE SAME TYPE PACS: 43.75.-z Eichner, Matthias; Wolff, Matthias;
More informationAnalysis of WFS Measurements from first half of 2004
Analysis of WFS Measurements from first half of 24 (Report4) Graham Cox August 19, 24 1 Abstract Described in this report is the results of wavefront sensor measurements taken during the first seven months
More informationVideo coding standards
Video coding standards Video signals represent sequences of images or frames which can be transmitted with a rate from 5 to 60 frames per second (fps), that provides the illusion of motion in the displayed
More informationCorrelation to the Common Core State Standards
Correlation to the Common Core State Standards Go Math! 2011 Grade 4 Common Core is a trademark of the National Governors Association Center for Best Practices and the Council of Chief State School Officers.
More informationReconfigurable Neural Net Chip with 32K Connections
Reconfigurable Neural Net Chip with 32K Connections H.P. Graf, R. Janow, D. Henderson, and R. Lee AT&T Bell Laboratories, Room 4G320, Holmdel, NJ 07733 Abstract We describe a CMOS neural net chip with
More informationInSync White Paper : Achieving optimal conversions in UHDTV workflows April 2015
InSync White Paper : Achieving optimal conversions in UHDTV workflows April 2015 Abstract - UHDTV 120Hz workflows require careful management of content at existing formats and frame rates, into and out
More informationAnalysis of a Two Step MPEG Video System
Analysis of a Two Step MPEG Video System Lufs Telxeira (*) (+) (*) INESC- Largo Mompilhet 22, 4000 Porto Portugal (+) Universidade Cat61ica Portnguesa, Rua Dingo Botelho 1327, 4150 Porto, Portugal Abstract:
More informationAnalysis and Clustering of Musical Compositions using Melody-based Features
Analysis and Clustering of Musical Compositions using Melody-based Features Isaac Caswell Erika Ji December 13, 2013 Abstract This paper demonstrates that melodic structure fundamentally differentiates
More informationSingle-sided CZT strip detectors
University of New Hampshire University of New Hampshire Scholars' Repository Space Science Center Institute for the Study of Earth, Oceans, and Space (EOS) 2004 Single-sided CZT strip detectors John R.
More informationResearch Article. ISSN (Print) *Corresponding author Shireen Fathima
Scholars Journal of Engineering and Technology (SJET) Sch. J. Eng. Tech., 2014; 2(4C):613-620 Scholars Academic and Scientific Publisher (An International Publisher for Academic and Scientific Resources)
More information