PERFORMANCE OF 10- AND 20-TARGET MSE CLASSIFIERS


Leslie M. Novak, Gregory J. Owirka, and William S. Brower
Lincoln Laboratory, Massachusetts Institute of Technology
244 Wood Street, Lexington, MA 02420-9185

ABSTRACT

MIT Lincoln Laboratory is responsible for developing the ATR (automatic target recognition) system for the DARPA-sponsored SAIP program; the baseline ATR system recognizes 10 GOB (ground order of battle) targets; the enhanced version of SAIP requires the ATR system to recognize 20 GOB targets. This paper presents ATR performance results for 10- and 20-target MSE classifiers using high-resolution SAR (synthetic aperture radar) imagery.

Keywords: automatic target recognition, synthetic aperture radar, mean square error classifier, tactical targets, stationary targets.

This work was sponsored by the Defense Advanced Research Projects Agency under Air Force Contract F19628-95-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.

1. INTRODUCTION

MIT Lincoln Laboratory is responsible for developing the ATR system for the DARPA-sponsored SAIP program [1,2]. SAIP supports new sensor platforms such as the Global Hawk system [3,4], which gathers wide-area SAR stripmap imagery at medium resolution (1.0 m × 1.0 m) and SAR spotlight imagery at high resolution (0.3 m × 0.3 m). The classification stage of the SAIP ATR provides target recognition at both medium and high resolution. In high-resolution spotlight mode, conventional 2-D FFT image formation processing is used to construct the 0.3 m × 0.3 m resolution SAR imagery. In medium-resolution stripmap mode, a superresolution image formation algorithm is used to enhance SAR image resolution. This new image formation algorithm enhances the resolution of the 1.0 m × 1.0 m

imagery to approximately 0.5 m × 0.5 m. Reference [5] focuses on ATR performance of the SAIP MSE classifier using medium-resolution (1.0 m × 1.0 m) SAR imagery; this paper focuses on a comparison of ATR performance for 10- and 20-target MSE classifiers using high-resolution (0.3 m × 0.3 m) SAR imagery.

A brief overview of the paper follows. Section 2 discusses some initial classifier studies and describes the SAR imagery used in these studies; photographs and high-resolution SAR images of the ground order of battle (GOB) targets are shown. Section 3 describes classifier training and testing, and presents evaluations of the performance of the 10- and 20-target MSE classifiers. In Section 4 we consider the effects of severe target configuration variabilities (extended operating conditions), including classifier testing using images of targets placed in revetments, and images of an M109 target with severe turret rotations (turret rotated 90 deg from the forward position). Finally, Section 5 summarizes the important results from these studies.

2. BACKGROUND AND DATA DESCRIPTION

The synthetic aperture radar imagery used in these studies was provided to Lincoln Laboratory by Wright Laboratories, WPAFB, Dayton, Ohio. The data were gathered by the Sandia X-band, HH-polarization SAR sensor at two different sites in support of the DARPA-sponsored MSTAR program [6]. The first MSTAR collection (MSTAR #1) took place in fall 1995 at Redstone Arsenal, Huntsville, Alabama; the second MSTAR collection (MSTAR #2) took place in fall 1996 at Eglin AFB, Ft. Walton Beach, Florida. In each collection, a large number of military targets were imaged in spotlight mode, over 360 deg of target aspect, and at 0.3 m × 0.3 m resolution. Our initial studies [1,2] evaluated the performance and summarized the results of a 10-target MSE classifier using imagery of the 18 distinct targets contained in the MSTAR #1 data set. Figure 1 shows a typical SAR spotlight image (0.3 m × 0.3 m resolution) of the Redstone Arsenal target array.
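As an aside for readers unfamiliar with the processing chain, the conventional 2-D FFT image formation mentioned in the Introduction can be illustrated with a toy example. This sketch is not the SAIP processing code: it ignores polar reformatting, windowing, and autofocus, and every scene parameter is invented.

```python
import numpy as np

# Toy spotlight-mode SAR image formation: two ideal point scatterers
# synthesized directly in the phase-history (spatial-frequency) domain,
# then focused with a 2-D FFT. Real spotlight processing adds polar
# reformatting, windowing, and autofocus; all parameters are invented.
n = 128
ky, kx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
phase_history = (np.exp(2j * np.pi * (30 * kx + 40 * ky) / n) +
                 np.exp(2j * np.pi * (80 * kx + 90 * ky) / n))
image = np.abs(np.fft.fft2(phase_history)) / n**2
# Each ideal scatterer focuses to a single bright pixel.
peaks = np.argwhere(image > 0.5)   # pixels (40, 30) and (90, 80)
```

Because each complex exponential is orthogonal to every other DFT bin, each scatterer focuses to exactly one pixel of unit magnitude; real clutter and target returns spread over many pixels.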
We used 15-deg depression target images to construct a 10-target classifier. The classifier was trained by constructing classifier templates using SAR images of the following targets: BMP2 #1, M2 #1, T72 #1, BTR60, BTR70, M1, M109, M110, M113, and M548 (nomenclature explained in Figure 2). The target array shown in Figure 1 includes three versions each of the BMP2 armored personnel carrier, the M2

infantry fighting vehicle, and the T72 main battle tank. The three T72 tanks varied significantly from tank to tank; T72 #1, used in training the classifier, had skirts along both sides of the target; T72 #2 had fuel drums (barrels) mounted on the rear of the tank; T72 #3 had neither skirts nor barrels. The classifier was tested using the remaining 8 targets that were not used in training the classifier: two BMP2s, two M2s, two T72s, the HMMWV, and the M35. In our initial studies, the HMMWV and the M35 were used as confuser vehicles (i.e., vehicles not included in the set of 10 targets that the classifier was trained to recognize); the other 6 test targets provided independent classifier testing data (data not used in classifier training). One important conclusion gleaned from these initial studies was that the ability to correctly classify the independent T72 targets depended strongly on how closely the target configuration matched that of the tank used in training the classifier. Because of the presence of the fuel drums located on the rear of the tank, T72 #2 was called unknown a significant number of times. Using additional T72 tank data from the MSTAR #2 data set, we demonstrate in this paper that intraclass variability is a very important issue for classifier design.

Figure 1. Typical SAR spotlight image (0.3 m × 0.3 m resolution) of the Redstone Arsenal target array.

This paper compares the performance of the 10-target classifier with the performance obtained from the newly implemented 20-target MSE classifier. To implement the 20-target MSE classifier we combined 11 target types imaged during the MSTAR #1 collection with 9 target types imaged during the MSTAR #2 collection (both data sets at 15-deg depression). Figure 3 shows a typical SAR spotlight image (0.3 m × 0.3 m resolution) of the Eglin AFB target array. Photographs of the targets used to implement the 20-target MSE classifier are shown in Figure 4.

Figure 2. Photographs of the 18 targets from the MSTAR #1 collection: BMP2 #1-#3, M2 #1-#3, T72 #1-#3, BTR60, BTR70, M548, M1, M109, M110, M113, M35, and HMMWV. The classifier was trained with 10 targets (BMP2 #1, M2 #1, T72 #1, BTR60, BTR70, M548, M1, M109, M110, M113). Six independent targets (BMP2 #2, M2 #2, T72 #2, BMP2 #3, M2 #3, T72 #3) and two confuser targets (M35, HMMWV) provided test data for the classifier.

Figure. Typical SAR spotlight image (0. m 0. m resolution) of the Eglin AFB target array. 5

Figure. Photographs of the 0 targets from the MSTAR # and MSTAR # collections used to train the 0-target classifier.. PERFORMANCE RESULTS This section of the paper summarizes the ATR performance achieved by the MSE classifier using 0. m 0. m resolution SAR imagery. The 0-target and 0-target classifiers were implemented by constructing 7 templates per target. These templates were obtained by generating target images every degree in aspect around the target. Five consecutive images were then averaged to form 7 average images per target. The templates were then obtained by isolating the clutter-free target pixels from each average image, providing 7 templates spanning a total 60-deg aspect coverage per target. Both classifiers were initially tested using the 6 independent targets from the MSTAR # collection. The results of these evaluations are summarized in Table, which presents the classifier confusion matrices for the 0-target classifier trained using MSTAR # data and tested on the 6 MSTAR # independent 6

test targets (top) and for the 0-target classifier trained using MSTAR # and MSTAR # data and tested on 6 MSTAR # independent test targets (bottom). Table Confusion Matrices for the 0-and 0 Target Classifiers Using 0. M 0. M Resolution Imagery (Six Independent Targets From the MSTAR # Data Set) Number of Targets Classified As 00-9B BMP BTR60 BTR70 M09 M0 M M M M58 T7 Unk BMP # BMP # 87 9 0-Target Classifier Pcc = 95.% M # 96 M # 9 T7 # T7 # 58 6 86 5 Number of Targets Classified As 0-Target Classifier Pcc = 9.0% BMP BTR60 BTR70 M09 M0 M M M M58 T7 M5 S M50 M577 M60 M88 M978 T6 ZIL ZSU Unk BMP # BMP # M # M # T7 # 86 9 96 9 6 9 5 T7 # 86 5 When the 0-target classifier was tested using the independent MSTAR # test data, nearly perfect performance was achieved. Probability of correct classification is 95.% against the 6 independent targets. Note, however, that the performance for the T7 tank with fuel drums on the rear (T7#) was somewhat reduced; 6 images of the 95 total were declared unknown by the classifier. When the 0- target classifier was tested using the same independent MSTAR # test data, the probability of correct classification degraded slightly to 9.0%. The number of targets declared unknown by the 0-target classifier was approximately the same as for the 0-target classifier (5 images of the total 70). 7
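The template-construction procedure described in this section (1-deg aspect images, five-image averaging, clutter-free pixel isolation) can be sketched as follows. This is an illustrative reconstruction, not the Laboratory's code: the input layout, the dB floor used to isolate target pixels, and the function name are assumptions.

```python
import numpy as np

def build_templates(chips, group=5, target_db_floor=-10.0):
    """Build MSE-classifier templates from aspect-sorted SAR chips.

    chips: (360, H, W) array of dB-scale image chips, one per degree of
           target aspect (an assumed input format).
    Returns (360/group, H, W) masked templates; non-target pixels are NaN.
    """
    n = chips.shape[0] // group                # 360 / 5 = 72 templates
    templates = np.empty((n,) + chips.shape[1:])
    for k in range(n):
        # Average five consecutive 1-deg aspect images.
        avg = chips[k * group:(k + 1) * group].mean(axis=0)
        # Isolate the (assumed) clutter-free target pixels: keep only
        # pixels above a dB floor; the actual procedure used a target mask.
        avg[avg < target_db_floor] = np.nan
        templates[k] = avg
    return templates
```

With 10 targets this yields 10 × 72 = 720 templates, and with 20 targets, 1440.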

Both classifiers were then tested using independent test data (three BTR70s and four M109s) in controlled configurations from the MSTAR #2 collection. Table 2 presents the classifier confusion matrices for the original 10-target classifier tested on the 7 MSTAR #2 independent test targets (top) and for the 20-target classifier tested on the 7 MSTAR #2 independent test targets (bottom). The probabilities of correct classification against these independent test data are 96.8% and 9.% for the 10- and 20-target classifiers, respectively. This test illustrates that classifier templates developed from the MSTAR #1 collection work equally well when tested against independent test target images from the MSTAR #2 collection. Note that for the 20-target classifier, a small number of BTR70s were incorrectly classified as 2S1s (6 images of the total 8) and a small number of M109s were incorrectly classified as M577s (images of the total 095).

Table 2. Confusion Matrices for the 10- and 20-Target Classifiers (Seven Independent Targets From the MSTAR #2 Data Set). Rows are the test targets (BTR70 #1-#3, M109 #1-#4); columns are the trained target classes plus an "unknown" declaration.

The MSTAR #2 collection imaged eight different-serial-numbered T72 tanks in a variety of configurations, as described in Table 3. We tested the 10- and 20-target classifiers using target images of seven of the independent T72 tanks from the MSTAR #2 collection. Note that a single T72 tank from the MSTAR #1 collection was used to train both classifiers; its configuration was skirts/no-barrels (S/NB; i.e., skirts along both sides of the tank but no fuel drums mounted at the rear).

Table 3. Intraclass Variability Matrix (Eight T72 Tanks From the MSTAR #2 Data Set)

    Notation   Configuration of Target
    S/B        Skirts/barrels (fuel drums)
    S/NB       Skirts/no barrels
    NS/B       No skirts/barrels
    NS/NB      No skirts/no barrels
    S/B/A      Skirts/barrels/reactive armor

As shown in Table 4, the 10-target classifier rejected a large number of T72 tank images (08 of the total 98), declaring them unknown. The confusion matrix indicates that 00 images of T72 #7 were rejected, 0 images of T72 #6 were rejected, and 8 images of T72 #5 were rejected. We investigated the various T72 configurations more carefully. For example, Figures 5 and 6 are photographs of the tank with and without the reactive armor; this variability is expected to cause a large increase in the mean-square error calculated by the classifier. Figure 7 shows MSE histograms for two targets. One is a T72 in the S/NB configuration; since the T72 used to train the classifier had the same configuration, the majority of its test inputs fall below the MSE baseline threshold of 55. Figure 7 also shows the MSE histogram for a T72 with the skirts/barrels/reactive-armor configuration; the curve indicates that a large fraction of these scores is above the threshold of 55, and these test inputs are rejected as unknown by the classifier.
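The decision rule behind Figure 7 — score a test input against every stored template, take the minimum mean-square error, and declare unknown when even the best score exceeds the baseline threshold of 55 — can be sketched as follows. This is a simplified illustration; the data layout and function name are assumptions, and only the threshold value comes from the paper.

```python
import numpy as np

def mse_classify(test_chip, template_sets, threshold=55.0):
    """Minimum-MSE classification with an 'unknown' rejection rule.

    template_sets: dict mapping class name -> (N, H, W) template stack.
    Returns (best_class, best_score); best_class is 'unknown' when the
    smallest mean-square error exceeds the baseline threshold.
    """
    best_class, best_score = "unknown", np.inf
    for name, templates in template_sets.items():
        # MSE of the chip against each stored aspect template; NaN
        # template pixels (masked clutter) are excluded from the error.
        err = np.nanmean((templates - test_chip) ** 2, axis=(1, 2))
        score = err.min()
        if score < best_score:
            best_class, best_score = name, score
    if best_score > threshold:
        return "unknown", best_score
    return best_class, best_score
```

Under this rule a configuration mismatch (e.g., added reactive armor) raises every template's error, so the minimum score drifts above the threshold and the input is rejected rather than misclassified.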

The 20-target-classifier confusion matrix presented in Table 4 illustrates that only 0 test images were correctly classified of the total 98; also, 0 test images were declared unknown. Increasing the number of target classes from 10 to 20 resulted in many T72 test images being declared T62s or M60s by the 20-target classifier; this happened because the T72 target training was done using only the S/NB configuration target data from the MSTAR #1 collection.

Table 4. Confusion Matrices for the 10- and 20-Target Classifiers (Seven Tanks From the MSTAR #2 Data Set). Rows are the seven independent T72 tanks, grouped by configuration (S/NB, NS/NB, S/B, NS/B, S/B/A) as defined in Table 3; columns are the trained target classes plus an "unknown" declaration.

Figure 5. Top view of a typical T72 tank without reactive armor.

Figure 6. Top view of a T72 tank with reactive armor covering a significant portion of the tank surface.

Figure 7. MSE score distributions of two independent T72 test inputs against the T72 templates used to train the classifier. The solid line is the distribution of scores from a tank with the same configuration as the training tank (skirts, no barrels). The dashed line is the distribution of scores from a tank configured with skirts, barrels, and reactive armor. A large fraction of the latter scores are above the baseline MSE classifier threshold of 55 (dotted line) and are thus rejected as unknown.

Figure 8 compares T72 test images (center) with the T72 training images from MSTAR #1 (left) and the T62 target images from MSTAR #2 (right). The center T72 test images appear visually more similar to the T62 target images because the configuration of each of these targets is no-skirts/barrels (NS/B). These observations prompted an experimental classifier augmentation, leading to the confusion matrices presented in Table 5. We determined that by augmenting the classifier template sets with an additional template set for a T72 tank having an NS/B configuration, the probabilities of correct classification improved to 9.9% and 90.% for the 10- and the 20-target classifiers, respectively. (Although we have added additional T72 templates to each of the classifiers, we still denote these classifiers as 10- and 20-target classifiers because they identify only 10 and 20 unique target types.) The 10- and 20-target classifiers discussed earlier (Tables 1, 2, and 4) used 720 templates and 1440 templates,

respectively; the 10- and 20-target classifiers presented in Table 5 used 792 templates and 1512 templates, respectively.

Figure 8. SAR images of a T72 with skirts and without barrels (left column), a T72 without skirts and with barrels (center column), and a T62 without skirts and with barrels (right column) at two aspect angles, 0 deg (top row) and 70 deg (bottom row). The T72 on the left and the T62 on the right are used to train the 20-target classifier, while the center T72 provides independent test data. The center T72 is visually more similar to the identically configured T62 than to the differently configured T72, thus prompting the addition of further T72 templates to mitigate the T72 intraclass variability.

Table 5. Confusion Matrices for the 10- and 20-Target Classifiers with Additional T72 Templates (Seven Tanks From the MSTAR #2 Data Set). Rows are the seven independent T72 tanks, grouped by configuration as in Table 3; columns are the trained target classes plus an "unknown" declaration.

To summarize the performance of the 10- and 20-target classifier implementations described earlier, Table 6 presents confusion matrices combining the results using 595 independent test inputs. The leftmost column denotes the target type, followed by the number of different-serial-numbered targets of each type used in the performance evaluation; for example, there were 9 different-serial-numbered T72s included in this final performance summary. Since the total number of test inputs varies with each target type, the performance numbers have been converted to percent. As Table 6 shows, the probabilities of correct classification are 95.8% and 9.6% for the 10- and 20-target classifiers, respectively.
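The bookkeeping behind Table 6 — converting each row of raw declaration counts to percent and forming a composite probability of correct classification — can be sketched as below. The counts in the usage test are invented, and treating the composite Pcc as total-correct over total-inputs is our reading of the table, not a formula stated in the paper.

```python
import numpy as np

def confusion_percent_and_pcc(counts):
    """Summarize a raw confusion matrix as row percentages plus Pcc.

    counts: (K, K+1) array; rows are the K true target types, the first
    K columns are the declared classes in the same order, and the last
    column is the 'unknown' declaration. Because the number of test
    inputs varies by target type, each row is converted to percent of
    that row's total. The composite Pcc is taken as total correct
    declarations over total test inputs (an assumed reading).
    """
    counts = np.asarray(counts, dtype=float)
    k = counts.shape[0]
    percent = 100.0 * counts / counts.sum(axis=1, keepdims=True)
    pcc = 100.0 * np.trace(counts[:, :k]) / counts.sum()
    return percent, pcc
```

For example, two target types with count rows [90, 10, 0] and [0, 80, 20] give a composite Pcc of 85%.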

Table 6. Confusion Matrices for the 10- and 20-Target Classifiers with Additional T72 Templates (Composite of MSTAR #1 and MSTAR #2 Data Sets). Rows are the independent test target types (BMP2, BTR70, M109, M2, T72) with the number of different-serial-numbered targets of each type in parentheses; because the total number of test inputs varies by target type, entries are the percent of test inputs assigned to each trained class or declared unknown.

4. EXTENDED OPERATING CONDITIONS

This section of the paper summarizes MSE classifier performance evaluations using test images of targets gathered during the MSTAR #2 collection in various extended-operating-condition configurations [7]. Preliminary performance results are presented for targets that (1) are obscured by revetments, and (2) have significant turret rotations. For testing the effects of revetments on classifier performance, test target imagery from the MSTAR #2 collection was used, including the BTR70, M109, and T72 targets. Figure 9 is a photograph of the BTR70 target in a half-revetment configuration; Figure 10 shows the M109 target in a full-revetment configuration; and Figure 11 shows the T72 target in a half-revetment configuration with the turret rotated 5 deg right.

Figure 9. Photograph of a BTR70 in a half-revetment configuration. A half-revetment is designed to obscure objects below the vehicle axle.

We evaluated the MSE classifier scores for the M109 target in various revetted configurations. Figure 12 shows the MSE scores for testing against independent M109 targets (1) in the open, (2) in half-revetments, and (3) in full-revetments (the M109 training templates were from the MSTAR #1 data). The figure indicates that the effect of the revetments is to partially obscure the target due to blockage of the radar energy, resulting in an increase in the MSE scores. We determined that increasing the MSE threshold to 65 would provide quite good classifier performance against these revetted targets. Table 7 summarizes the results of this study for the 10-target classifier using the MSE threshold of 65; the probabilities of correct classification are 50.8% and 75.7% against the fully and half-revetted targets, respectively. Using an MSE threshold of 60, the probabilities of correct classification are reduced to .9% and 9.% against the fully and half-revetted targets, respectively (see Table 8). Using our baseline MSE threshold of 55 (Tables 1, 2, 4, 5, and 6), the probabilities of correct classification are further reduced to .% and .9% against the fully and half-revetted targets (see Table 9). Although Pcc against these revetted targets may be improved significantly by increasing the MSE threshold, the ability of the classifier to reject confuser vehicles would be significantly decreased. Figure 13 shows MSE classifier confuser rejection versus the MSE threshold; from these curves it is seen that the baseline

threshold of 55 provides good confuser rejection, whereas an MSE threshold of 65 provides essentially no confuser rejection capability.

Figure 10. Photograph of an M109 in a full-revetment configuration. A full revetment is designed to obscure objects below the vehicle deck.

Figure 11. Photograph of a T72 in a half-revetment configuration with the turret rotated 5 deg.
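The threshold trade-off just described — raising the MSE threshold recovers partially obscured targets while eroding confuser rejection — can be illustrated with synthetic score distributions. Every number below is invented for illustration; only the candidate thresholds 55, 60, and 65 come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic MSE scores (invented): revetted targets score higher than
# open ones because obscuration raises the error; confusers higher still.
target_scores = rng.normal(58.0, 6.0, 1000)    # revetted true targets
confuser_scores = rng.normal(72.0, 6.0, 1000)  # confuser vehicles

for threshold in (55.0, 60.0, 65.0):
    accepted = (target_scores <= threshold).mean() * 100
    rejected = (confuser_scores > threshold).mean() * 100
    # Higher thresholds accept more revetted targets but reject
    # fewer confusers -- the trade-off shown by the rejection curves.
    print(f"threshold {threshold:.0f}: "
          f"targets accepted {accepted:.1f}%, confusers rejected {rejected:.1f}%")
```

Sweeping the threshold over two overlapping score populations like this traces out exactly the kind of operating-point curve the paper uses to justify keeping the baseline threshold of 55.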

Figure 12. MSE score distributions of three independent M109 test inputs against the M109 templates used to train the classifier. The solid line is the distribution of scores from the M109 with the same configuration, in the open, as the M109 used in training. The dashed line is the distribution of scores from the M109 in the half-revetment configuration. The dotted line is the distribution of scores from the M109 in the full-revetment configuration. The MSE scores increase as the amount of obscuration of the test targets increases.

Table 7. Confusion Matrix for the 10-Target Classifier, Threshold = 65 (Six Targets From the MSTAR #2 Data Set). Rows are the BTR70, M109, and T72 (#5) test targets in full- and half-revetment configurations; columns are the trained target classes plus an "unknown" declaration.

Table 8. Confusion Matrix for the 10-Target Classifier, Threshold = 60 (Six Targets From the MSTAR #2 Data Set).

Table 9. Confusion Matrix for the 10-Target Classifier, Threshold = 55 (Six Targets From the MSTAR #2 Data Set).

Figure. Probability of rejecting confuser vehicles (i.e., vehicles not included in the set of targets that the classifier was trained to recognize) versus MSE classifier threshold. A classifier threshold of 55 provides good confuser rejection. We have investigated the effect of turret rotations for the M09 target. The turret rotations investigated were quite severe, with the maximum rotation as large as 90 deg from the nominal forward turret position (see Figure ). The templates used in this study were constructed using the M09 target (with no turret rotation) from the MSTAR # data. As Figure 5 indicates, for turret rotations of 0 deg or more, the probability of correctly classifying this target degrades very rapidly. Classifier performance could be improved by including extra templates from M09s having turret rotations at various angles; we do not have independent training data to test this hypothesis, but we speculate that it could require template sets every 0 deg or so of turret rotation to achieve good classifier performance. 0

Figure 14. Photograph of an M109 with the turret rotated 90 deg from the forward position.

Figure 15. Probability of correct classification of an M109 in the open as a function of M109 turret rotation, for the 10- and 20-target classifiers. The M109 used to train the 10- and 20-target classifiers was articulated with no turret rotation. The turret of the M109 represents a significant fraction of the total radar-illuminated surface. The classifier performance degrades rapidly but may be improved with the introduction of training data for M109s with rotated turrets.

5. SUMMARY

This paper has presented an evaluation of the performance of 10-target and 20-target, template-based MSE classifiers; both classifiers were developed at Lincoln Laboratory in support of the SAIP program. Classifier performance was evaluated using a large number of images of tactical military targets (595 test images) at high resolution. The results of these performance evaluations show that the number of target classes can be increased from 10 targets to 20 targets with only a small decrease in target recognition performance; the correct-classification performance for the final 10- and 20-target classifiers was 95.8% and 9.6%, respectively. These evaluations also show that significant target configuration variability can decrease interclass separability and degrade performance; however, additional reference templates can be used to mitigate these effects. Finally, these evaluations show that classifier performance degrades significantly when tested against targets in revetments; this is because the revetments obscure significant portions of the target's signature.

6. REFERENCES

1. L.M. Novak, G.J. Owirka, W.S. Brower, and A.L. Weaver, "The Automatic Target Recognition System in SAIP," Lincoln Laboratory Journal, Vol. 10, No. 2, 1997.
2. L.M. Novak, G.J. Owirka, and A.L. Weaver, "Automatic Target Recognition Using Enhanced Resolution SAR Data," IEEE Transactions on Aerospace and Electronic Systems, Vol. 35, No. 1, January 1999.
3. W.P. Delaney, "The Changing World, The Changing Nature of Conflicts: A Critical Role for Military Radar," 1995 IEEE National Radar Conference, Alexandria, VA, May 1995.
4. "Global Hawk Relays Images Via Commercial Satellite," Aviation Week and Space Technology, 5 February 1999.
5. L.M. Novak, G.J. Owirka, and W.S. Brower, "An Efficient Multi-Target SAR ATR Algorithm," Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, November 1998.
6. Moving and Stationary Target Acquisition and Recognition (MSTAR), Program Technology Review, Denver, CO, November 1996.
7. E.R. Keydel and S.W. Lee, "MSTAR Extended Operating Conditions: A Tutorial," SPIE Vol. 2757, Orlando, FL, April 1996.