Digital Video Telemetry System


Item Type: text; Proceedings
Authors: Thom, Gary A.; Snyder, Edwin
Publisher: International Foundation for Telemetering
Journal: International Telemetering Conference Proceedings
Rights: Copyright International Foundation for Telemetering
Download date: 07/06/2018 11:05:11
Link to Item: http://hdl.handle.net/10150/608381

Digital Video Telemetry System

Gary A. Thom, Delta Information Systems, 300 Welsh Rd., Bldg. 3, Horsham, PA 19044
Edwin Snyder, Aydin Vector, 47 Friends Lane, Newtown, PA 18940

ABSTRACT

The ability to acquire real-time video from flight test platforms is becoming an important requirement in many test programs. Video is often required to give the flight test engineers a view of critical events during a test, such as instrumentation performance or weapons separation. Digital video systems are required because they allow encryption of the video information during transmission. This paper describes a Digital Video Telemetry System that uses improved video compression techniques which typically offer at least a 10:1 improvement in image quality over currently used techniques. This improvement is the result of interframe coding and motion compensation, which other systems do not use. The result is better quality video at the same bit rate, or the same quality video at a lower bit rate. The Digital Video Telemetry System also provides for multiplexing the video information with other telemetered data prior to encryption.

KEY WORDS: Video Compression, Video Transmission, Codec.

INTRODUCTION

As a leading manufacturer of airborne data acquisition systems, Aydin Vector Division (Aydin) has recognized the growing need for the inclusion of real-time video in the list of measurements to be acquired for transmission to the ground station. Analog video has long been used in the field of flight test, being transmitted over a wide-band data link. However, the growing scarcity of available bandwidth, the need for data security, and the efficiencies gained both

economically and operationally from the use of digital multiplexed systems have driven the development of the Digital Video Telemetry System. Aydin Vector has formed an alliance with Delta Information Systems (Delta) to develop an enhanced airborne compressed digital video encoder and decoder. Delta is an internationally recognized leader in the development of video compression standards and their implementation in both hardware and software. After careful consideration of the requirements and possible solutions, a system implementation has been developed which applies advanced video compression standards to airborne video data links. Aydin now offers a series of standard product modules which incorporate compressed digital video into time division multiplexed data links.

BACKGROUND

The inclusion of video and encrypted video into a telemetry data link poses some unique challenges to the system designer. An analog video signal of 4 MHz bandwidth could be included on a data link sub-carrier; but if the signal requires encryption, it must be digitized. A monochrome video signal digitized at a resolution of 640 pixels/line, 480 lines/frame, and 8 bits/pixel, at a rate of 30 frames/second, results in a transmitted bit rate in excess of 73 Mbps. A color video signal at the same resolution and frame rate may require two to three times that bit rate. Numbers of this magnitude are what drove the development of video data compression techniques.

Several video encoding methods have been explored. In 1993 the U.S. Range Commanders Council introduced IRIG-STD-210-93, which describes a method of encoding an RS-170 black-and-white video stream using Differential Pulse Code Modulation (DPCM) coding. This is strictly an intra-frame coding technique, in which each frame of digitized video information is compression coded independently of other frames.
No advantage is taken of the fact that the differences between successive frames may be small. The IRIG-STD-210 technique results in an average of 3 bits per pixel and a transmitted bit rate, for conditions similar to those described above, of between 10 Mbps and 28 Mbps depending upon the extent of entropy coding. The telemetry data links on the national test ranges are typically limited to handling data rates up to 5 Mbps. Although there is a move to upgrade these capabilities to 10 Mbps and even higher, one can see that bit rate (bandwidth) is a

premium commodity. Unless video is the primary information source, it takes a back seat to the "measurement" telemetry. The goal is to transmit more with less. Multiplexing compressed video with measurement telemetry in a single link provides significant advantages in bandwidth utilization and telemetry system simplification.

COMPRESSION ALGORITHM

Today, commercial applications of video compression for entertainment, video teleconferencing, and multimedia products have resulted in the development of highly efficient compression methods. Video compression standards with acronyms such as MPEG and JPEG have become part of our everyday lexicon. The JPEG technique requires approximately 0.1 times the bits per Nyquist sample of DPCM video; likewise, MPEG requires one tenth that of JPEG. Other less familiar standards such as T.4, T.6, JBIG, and H.261 have emerged as developments proceed in the direction of transmitting more with less. These algorithms (summarized in Table 1) each have characteristics which determine their applicability to the Digital Video Telemetry System.

The video data compression techniques commonly used to date (as defined by IRIG-STD-210) make use of intra-frame technology. This method only removes data redundancies from within the current video frame; each frame is coded independently. Within the frame, the difference between the previous pixel and the current pixel is PCM encoded to three bits. This may then be entropy encoded, bringing the average bits per pixel to about 1.7. Other algorithms such as JPEG, Wavelets, and Fractals also use intra-frame coding. Newer techniques such as MPEG and H.261, which are specifically designed for motion video compression, use inter-frame coding. Inter-frame coding not only removes redundancies within a frame, but also removes redundancies from frame to frame. These algorithms also use Motion Compensation to account for the motion of objects from frame to frame.
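The bit rates quoted in the preceding sections are easy to sanity-check. The short script below reproduces them from the figures given in the text (monochrome 640x480, 8 bits/pixel, 30 frames/second, and the 3 and ~1.7 bits/pixel DPCM averages):

```python
# Sanity check of the bit rates quoted above; figures come from the text,
# not from a separate specification.
pixels_per_frame = 640 * 480

raw_bps = pixels_per_frame * 8 * 30          # uncompressed monochrome PCM
dpcm_bps = pixels_per_frame * 3 * 30         # IRIG-STD-210 DPCM, 3 bits/pixel
entropy_bps = pixels_per_frame * 1.7 * 30    # DPCM + entropy coding, ~1.7 bits/pixel

print(f"raw:     {raw_bps / 1e6:.1f} Mbps")      # ~73.7 Mbps ("in excess of 73 Mbps")
print(f"DPCM:    {dpcm_bps / 1e6:.1f} Mbps")     # ~27.6 Mbps (upper end of 10-28 Mbps)
print(f"entropy: {entropy_bps / 1e6:.1f} Mbps")  # ~15.7 Mbps
```

The raw rate lands just above 73 Mbps, and the 3 bits/pixel DPCM rate matches the upper end of the 10-28 Mbps range quoted for IRIG-STD-210.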
The H.261 algorithm (Figure 1) was selected for use in the Digital Video Telemetry System. Primary considerations for this selection included the high quality, high compression rate, and real-time nature of the H.261 algorithm. This algorithm uses the Discrete Cosine Transform (DCT) to convert spatial (pixel) information into the frequency domain. Visual energy is concentrated into a few frequency domain coefficients, allowing higher compression than using the original spatial information. This is because much of the frequency domain information

TABLE 1 - COMPRESSION ALGORITHMS

ALGORITHM  TRANSFORM                 INTER/INTRA-FRAME CODING    COLOR    BITS/PIXEL  LATENCY
DPCM       PCM                       Intra-frame                 BW       3 to 1.7    Low
T.4        Run-Length                Intra-frame                 Bilevel  0.1         Low
T.6        2D Run-Length             Intra-frame                 Bilevel  0.08        Low
JBIG       Predictive                Intra-frame                 Bilevel  0.05        Low
JPEG       DCT                       Intra-frame                 Color    0.3         Low
MPEG-1/2   DCT, Motion Compensation  Bi-directional Inter-frame  Color    0.01        High
H.261      DCT, Motion Compensation  Inter-frame                 Color    0.01        Low
Wavelets   Wavelet                   Intra-frame                 Color    n/a         Low
Fractals   Affine                    Intra-frame                 Color    n/a         High

FIGURE 1 - H.261 Compression Algorithm
(Block diagram. Legend: DCT - Discrete Cosine Transform; QUANT - Quantizer; HUFF - Huffman Encoding; IDCT - Inverse DCT; IQUANT - Inverse Quantizer; MC - Motion Compensation; SW - Switch)

is not significant in reconstructing the spatial information. An 8x8 two-dimensional DCT is used, which minimizes processing requirements while maximizing block size.

The H.261 algorithm increases compression by using inter-frame coding. This technique allows the redundancies between frames to be removed, providing a significant increase in compression. This is done by creating a prediction of the current frame from the decoded previous frame. The difference between the current frame and the predicted frame is then taken, and the DCT is applied to this difference frame. The decoded previous frame is used instead of the actual previous frame in order to remove any quantization error that might otherwise accumulate in the decoder. Typically a large portion of the frame has no change, so this difference approaches zero.

In areas of the frame where there is motion, the frame-to-frame difference may be large. In these areas Motion Compensation is used to improve the predicted frame prior to calculating the frame differences. Motion Compensation relies on the fact that, from frame to frame, much of the scene does not change in content: the position of objects may change, but the significant content is similar. This process compares an area of the current frame with an offset area of the previous frame. The offset area is the frame location from which an object has moved. A motion vector is used to indicate the changed position of the object and replaces the transform coefficients, thereby significantly reducing the number of bits required for transmission. This technique improves inter-frame performance by accounting for the motion of objects from frame to frame. By accounting for motion, the frame differences are reduced, allowing less information to be sent to represent each new video frame.

Quantization of the transform coefficients allows removal of insignificant coefficients, resulting in the compression gains described above.
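The block-matching search behind Motion Compensation can be sketched in a few lines. This is a generic full-search example, not the H.261 implementation; the 8x8 block size and the ±4-pixel search range are illustrative assumptions:

```python
import numpy as np

def find_motion_vector(prev, cur, by, bx, bsize=8, rng=4):
    """Full-search block matching: find the offset (dy, dx) into the
    previous frame that best predicts the current frame's block at
    (by, bx), minimizing the sum of absolute differences (SAD)."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best_mv, best_sad = (0, 0), None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue  # candidate block would fall outside the frame
            cand = prev[y:y + bsize, x:x + bsize].astype(int)
            sad = np.abs(cand - block).sum()
            if best_sad is None or sad < best_sad:
                best_mv, best_sad = (dy, dx), sad
    return best_mv, best_sad

# A bright square moves down 2 pixels and right 3 pixels between frames;
# the search recovers the offset back to its previous position, and the
# motion-compensated difference block is all zeros.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[10:18, 10:18] = 200
cur = np.roll(prev, shift=(2, 3), axis=(0, 1))
mv, sad = find_motion_vector(prev, cur, by=12, bx=13)
print(mv, sad)  # -> (-2, -3) 0
```

With a perfect match, only the motion vector needs to be transmitted in place of the transform coefficients, which is exactly the bit saving the text describes.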
The step size of the quantizer determines how much error is introduced in the image. The quantizer is adjusted to trade off picture quality for available bandwidth. H.261 allows different areas of the video frame to be quantized differently. This allows more bits to be allocated to areas that are changing and fewer bits to be used in areas that are relatively static. The quantizer also varies from frame to frame. As the inter-frame differences become smaller, the quantizer step size is reduced, thus reducing the quantization error and improving the picture quality.
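The energy-compaction and quantization behavior described above can be demonstrated with a toy 8x8 DCT. The smooth test block and the step sizes below are illustrative choices, not H.261's actual quantizer tables:

```python
import numpy as np

def dct_basis(n=8):
    """Rows are the orthonormal DCT-II basis vectors."""
    u = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * u / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_basis()
block = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth ramp: a low-detail region
coeffs = C @ block @ C.T                              # 2D DCT via separable transform

# Energy compaction: nearly all signal energy lands in a few low-frequency terms.
low = np.sum(coeffs[:2, :2] ** 2) / np.sum(coeffs ** 2)
print(f"energy in the 4 lowest-frequency coefficients: {low:.4f}")

# A coarser quantizer step zeroes more coefficients, so fewer values need to
# be transmitted -- at the cost of up to step/2 error per coefficient.
counts = [int(np.count_nonzero(np.round(coeffs / step))) for step in (2, 8, 64)]
print("nonzero coefficients at steps 2, 8, 64:", counts)
```

For this smooth block, over 99% of the energy sits in the four lowest-frequency coefficients, and widening the step size steadily shrinks the number of coefficients that survive quantization.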

Finally, additional compression gain is achieved by Huffman entropy coding the quantized coefficients and motion vectors.

Inter-frame coding techniques may be more sensitive to errors than intra-frame techniques. Two mechanisms are implemented in the algorithm to combat these errors: intra-refresh and forward error correction. Intra-refresh is a process by which small areas of the frame are intra-frame coded. Over a programmable number of frames the entire frame is refreshed. These intra-frame coded areas do not depend on prior frames; they are completely updated with new information. This technique corrects the effects of previous transmission errors as well as any accumulated processing errors. Since intra-frame coding requires significantly more bits than inter-frame coding, intra-refresh uses only a small area of each frame in order to maintain the highest compression ratio with the highest picture integrity.

Forward Error Correction is also applied to the compressed data stream in order to reduce the effects of transmission errors. A (511,493) BCH code is used, which provides 18 parity bits for every 493 data bits. This results in a 3.7% overhead but provides for correction of up to two random errors per block, thus supporting bit error rates as high as 4 in 1000. Other methods are optionally available, the most common being the use of a Viterbi encoder. Although Viterbi encoding is highly robust (up to 6 dB of coding gain can be realized), it also requires more transmission bandwidth. The most commonly used Viterbi encoder has a constraint length of 7 and rate 1/2; this requires an output bandwidth double that of the un-encoded bit stream.

The result is that the H.261 algorithm provides significant compression gains over the DPCM and motion JPEG algorithms without the high data latency associated with the MPEG and Fractal algorithms. The use of Wavelets for motion video is still under development.
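The overhead and error-tolerance figures for the BCH code follow directly from its parameters. A quick check, using the (511,493) code and the two-error correction capability stated above:

```python
# (511,493) BCH code: n = 511 total bits per block, k = 493 data bits,
# t = 2 correctable random errors per block (per the text).
n, k, t = 511, 493, 2

parity = n - k                 # 18 parity bits per block
overhead = parity / k          # ~3.7% overhead on the data bits
tolerable_ber = t / n          # ~0.0039, i.e. roughly 4 errors in 1000 bits

print(f"parity bits per block: {parity}")
print(f"overhead: {overhead:.1%}")
print(f"tolerable BER: {tolerable_ber:.4f}")
```

Both quoted figures check out: 18/493 is about 3.7% overhead, and 2 correctable errors per 511-bit block corresponds to a bit error rate of roughly 4 in 1000.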
THE DIGITAL TELEMETRY SYSTEM

The Digital Video Telemetry System consists of an Aydin Vector Video Compression Encoder (VCE-810) and Video Decoder (VCD-810). These devices are based on a Delta-designed codec which incorporates the H.261 technology described above.

A block diagram of the VCE-810 Video Encoder is shown in Figure 2. An analog color or black-and-white video signal is input to the Video Decoder, where it is filtered and digitized. This analog video can be NTSC, PAL or S-Video (Y/C). The output of the digitizer is passed to the Compression Encoder.

FIGURE 2 - Video Encoder Block Diagram
(Block diagram: external clock in; Decoder; Compression Encoder; Serial Interface; data and clock out)

The encoder applies the H.261 compression algorithm to the video data. This consists of the inter-frame difference, Motion Compensation, DCT transform, quantization, Huffman encoding, and BCH coding. Programmable parameters allow selection of picture resolution and maximum quantization level. Two picture resolutions are currently provided: Normal and Low. The Low resolution mode provides one quarter the resolution of the Normal mode, which is useful for low bit rate applications. The quantization level is a value from 1 to 32 which specifies the quantizer step size. Lower values provide higher picture quality, while larger values yield higher frame rates. Selection of these parameters allows the user to trade off picture quality for frame rate at a given compressed data bit rate. Other parameters of the encoding process are adjusted for specific applications. These include the inter/intra decision threshold, motion vector search range, and intra-refresh rate.

Compressed data is clocked out of the Compression Encoder through the Serial Interface using an internal or external clock. This clock may be at any bit rate from about 9.6 Kbps to 3 Mbps. Output clock and data are available at either TTL or RS-422 signal levels. The compressed data stream is then presented to a telemetry transmitter.
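The programmable parameters above can be captured in a small configuration record with range checks. The field names and validation logic below are illustrative assumptions for exposition, not the VCE-810's actual programming interface:

```python
from dataclasses import dataclass

@dataclass
class EncoderConfig:
    """Hypothetical VCE-810-style parameter set; names are illustrative."""
    resolution: str = "normal"    # "normal", or "low" (one quarter resolution)
    max_quant_level: int = 16     # 1 (finest step) .. 32 (coarsest step)
    output_bps: int = 1_000_000   # serial clock rate, ~9.6 Kbps to 3 Mbps

    def validate(self):
        if self.resolution not in ("normal", "low"):
            raise ValueError("resolution must be 'normal' or 'low'")
        if not 1 <= self.max_quant_level <= 32:
            raise ValueError("max quantization level must be 1..32")
        if not 9_600 <= self.output_bps <= 3_000_000:
            raise ValueError("output clock must be ~9.6 Kbps to 3 Mbps")
        return self

# A low-resolution, low-rate setup for a narrow telemetry link.
cfg = EncoderConfig(resolution="low", max_quant_level=32, output_bps=64_000).validate()
```

The point of the sketch is simply that every user-facing knob the text lists (resolution, maximum quantization level, output clock) has a bounded legal range that a setup tool would enforce before programming the hardware.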

The encoder is designed to adapt automatically so as to always transmit the highest quality picture under the constraints issued by the user. Since the system is designed to operate in a PCM data link, the bit rate is predetermined and fixed by the user. The encoder therefore varies picture quality by automatically adjusting the quantizer step size. Once the maximum permissible value of quantization has been reached, frame rate reduction is imposed. However, these degradation situations are rare and occur only at very low transmitted bit rates, usually below 100 Kbps. Little or no picture degradation is noticeable in a well designed data link.

The functional flow of the video decoder (VCD-810) is shown in Figure 3. A compressed data stream with clock is input through the Serial Interface to the Compression Decoder. This may be at either TTL or RS-422 signal levels. The Compression Decoder converts the compressed data stream into video information according to the H.261 algorithm. The input data stream contains all of the information necessary to automatically program the decoder to de-compress and reproduce the original video; for this reason there are no programmable parameters associated with the decoder. The Compression Decoder corrects errors detected by the BCH forward error coding, Huffman decodes the corrected data, inverse quantizes, inverse DCT transforms, motion corrects, and inter-frame processes the data stream to yield the video frame information.

FIGURE 3 - Video Functional Diagram
(Block diagram: data and clock in; Serial Interface; Compression Decoder; Video Encoder; video out)

Video information from the Compression Decoder is applied to a Video Encoder which regenerates the analog video signal. The Video Encoder output is a composite video signal (NTSC or PAL) and a component video signal (S-Video).
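The adaptation policy described above, widening the quantizer step first and falling back to frame-rate reduction only once the step is at its maximum, can be sketched as a simple control loop. This is a toy model under assumed names, not the actual VCE-810 firmware:

```python
def adapt_rate(bits_last_frame, bits_budget, qstep, frame_skip, qmax=32):
    """One toy rate-control step for a fixed-bit-rate PCM link:
    over budget  -> coarsen the quantizer; once qstep hits qmax,
                    start skipping frames (frame-rate reduction).
    under budget -> recover frame rate first, then refine the quantizer."""
    if bits_last_frame > bits_budget:
        if qstep < qmax:
            qstep += 1
        else:
            frame_skip += 1
    elif bits_last_frame < bits_budget:
        if frame_skip > 0:
            frame_skip -= 1
        elif qstep > 1:
            qstep -= 1
    return qstep, frame_skip

# A burst of complex frames drives the quantizer to its cap before any
# frames are skipped; easier frames would let quality recover afterwards.
qstep, skip = 30, 0
for _ in range(5):                       # five frames over budget in a row
    qstep, skip = adapt_rate(120_000, 100_000, qstep, skip)
print(qstep, skip)  # -> 32 3
```

The ordering matters: picture quality is sacrificed gradually through the quantizer, and frame rate, the more visible degradation, is only reduced as a last resort, matching the behavior the text describes.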

IN A TELEMETRY LINK

The inclusion of digitized video in a dedicated communication link poses only the challenge of compressing the signal and adding the overhead of forward error correction (FEC) and, in some cases, additional coding for the purpose of data security. However, most data telemetering applications include the measurement and transmission of other parameters in addition to the video. It is desirable to merge all of the measured data, including the video, into a single communications link, thus saving hardware and operational costs.

Figure 4 depicts the Aydin Vector PCU-800 (Signal Conditioner and PCM Multiplexer) including a VCE-800 video sub-system. Multiple video inputs can be merged with multiple measurement data sources. Space in the transmitted PCM time division multiplexed (TDM) stream is automatically allocated by the pre-mission setup and control activity governed by the Aydin ADASWARE software. Using this software, the user defines the nature and resolution of the data parameters, the aggregate PCM bit rate, and the video quality parameters (among other information). ADASWARE automatically allocates the time slot period required for each video stream and provides the necessary control information to the PCU-800.

FIGURE 4 - Video Telemetry Encoder System
(Block diagram: cameras feed VCE-800 Video Encoders; the encoders and transducers feed the PCU-800 PCM Encoder, which drives the transmitter)

At the receiving site a single PCM stream is received and separated into its component parts for processing and display. Figure 5 shows a typical ground station implementation. The received PCM data stream is first bit synchronized, Viterbi decoded if applicable, and frame synchronized. The frame synchronizer

identifies the location in the stream of each measurement source. Most, if not all, of the "conventional" measurements are extracted at this level for display and preprocessing. The time slots containing video information are extracted and serially applied to the VCD-800. Total decoding of the video and conversion to an NTSC, PAL or S-Video output is accomplished by the VCD sub-system components. An appropriate video monitor is connected, and the system user is presented with a clear reproduction of the original video.

FIGURE 5 - Video Telemetry Decoder System
(Block diagram: the GSR-200 Receiver feeds the PCA-800 PCM Decom; video time slots go to VCD-800 Video Decoders, each driving a monitor)

CONCLUSION

By making use of robust and efficient video compression techniques and appropriate digital multiplexing, video information may be added to a telemetering measurement list with minimal impact. The H.261 algorithm provides the needed efficiency, allowing the inclusion of visual information in a telemetry stream while remaining within the transmission bandwidth restrictions imposed by the Range authorities. Video quality demands vary by application, and there is a trade-off between bit rate and video quality. The systems described in this paper offer operation from below 16 Kbps to above 3 Mbps, providing a wide range of quality selections and allowing video information to be included in almost any telemeter. These systems provide excellent quality video at very modest data rates, outperforming most other techniques.