AN OVERVIEW OF FLAWS IN EMERGING TELEVISION DISPLAYS AND REMEDIAL VIDEO PROCESSING


AN OVERVIEW OF FLAWS IN EMERGING TELEVISION DISPLAYS AND REMEDIAL VIDEO PROCESSING

Gerard de Haan, Senior Member IEEE, and Michiel A. Klompenhouwer
Philips Research Laboratories, Eindhoven, The Netherlands

ABSTRACT

New display principles aim at supreme image quality. The temporal aspects of these devices, however, sometimes remain underexposed in the literature. This paper presents an overview of the new artifacts and of possible remedies with signal processing.

1 INTRODUCTION

Over the years, a large number of new display principles have emerged from the search for flat, high quality, or low cost alternatives to the successful Cathode Ray Tube (CRT). The properties of a good display include high static resolution, high peak brightness, high contrast, high lumen efficacy, and favourable dynamic behaviour. Of these, it is the last item that gains the least exposure. Nevertheless, the temporal aspects of a video display determine important properties like flicker, dynamic resolution, and motion portrayal.

In this paper, we shall discuss typical artifacts due to unfavourable properties of emerging television displays. We introduce a processing model that eliminates, or at least reduces, the various artifacts that result from temporal imperfections of CRTs with alternative scanning, Liquid Crystal Displays (LCDs), tiled displays, Plasma Display Panels (PDPs), and colour sequential displays. We shall conclude that knowledge of the motion in the scene, i.e. motion estimation, is essential to at least partially repair the often unfavourable temporal behaviour of these displays. Such repair is realistic, as these displays have appeared on the market at the moment motion vector estimation has come to maturity for consumer applications [1,2,3,4].

In Section 2, we model the display artifacts due to scanning mismatches, and in Section 3 the artifacts due to integration of the display. Section 4 introduces a general processing model for artifact reduction, which is elaborated in Section 5 for the individual display types. In Section 6, we draw our conclusions.

2 TEMPORAL ARTIFACTS IN DISPLAYS RESULTING FROM SCANNING MISMATCHES

In the early days of television, both the imaging and the display device used an electron beam to scan the scene, and it was only logical that the scanning had the same parameters, i.e. line time, picture time, and interlace factor. In such a system, the delay between the registration of an input pixel and the display of an output pixel is the same for all pixels. Over time, the technologies for imagers and displays diverged, and the delay no longer had the same value for every displayed pixel. Figure 1 illustrates the scanning process in imager and display.

2.1 Linear delay variations

In a first category of displays, we can model the delay variation Δt resulting from the scanning mismatch as a linear function of the position on the screen. To determine the scanning mismatch, we first have to model the scanning process, i.e. the time of recording/displaying each pixel in a video frame. The scanning mismatch is then defined for each pixel as the delay between recording and displaying (neglecting any constant contribution). The scanning process of imaging and display devices that use electron beams, as described above, is a linear function of the display co-ordinates:

    t = S_0 + S_1 x + S_2 y    (1)

where the scanning parameters S_n are constants, and x and y describe the horizontal and vertical position of the pixel, respectively.
For example, the scanning process of a CRT display is described by:

    t_CRT = (t_l / w) x + (t_f / h) y    (2)

where t_f and t_l = t_f / h are the frame and line times, and w and h are the numbers of pixels horizontally (width) and vertically (height). The electron beam fly-back times are neglected for simplicity, and the pixel at the co-ordinate origin is defined to have a recording/displaying time of zero by neglecting the constant delay contribution (S_0 = 0). A constant delay between imager and display has no influence on image quality; it merely determines to which extent the video is said to be "live".
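To make the scanning model concrete, the following minimal sketch (our illustration, not from the paper) evaluates Equation 2 for an assumed 50 Hz, 720x576 raster:

    # Per-pixel display time of a CRT raster scan (Eq. 2).
    # Assumed parameters: 50 Hz frame rate, 720x576 pixels; fly-back
    # times and the constant delay S_0 are neglected, as in the text.

    def crt_scan_time(x, y, t_f=0.02, w=720, h=576):
        t_l = t_f / h                  # line time = frame time / number of lines
        return (t_l / w) * x + (t_f / h) * y

    print(crt_scan_time(0, 0))        # 0.0 s: first pixel of the frame
    print(crt_scan_time(719, 575))    # ~0.02 s: last pixel, almost one frame later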

Fig.1 The straightforward way to transmit a picture (top figure) is to send the luminance variations of the individual picture elements (pixels) over time via parallel channels from imager to display. Rather than using many low bandwidth channels, however, a single wide bandwidth channel is common. Here, the luminance values of the individual pixels are multiplexed in the imager (scene scanning) and demultiplexed by the display scanning process. In the early days of television, the scanning of imager and display were complementary. Nowadays they are uncoupled, which leads to delay variations between the individual pixels.

When both the imaging and the display device have identical scanning parameters, their scanning mismatch is zero (again neglecting a constant delay). However, when the scanning parameters are not identical, there will be delay variations across the screen due to the scanning mismatch:

    Δt = C_0 + C_1 x + C_2 y    (3)

where C_n = S_n,display − S_n,imager. A typical example is the scanning mismatch between a CCD camera (S_n = 0) and a CRT display, leading to Δt_CCD/CRT = t_CRT (Eq. 2). Due to motion in the scene, delay variations are translated into spatial variations via the local velocity v of the objects in the scene:

    Δx = v Δt    (4)

and, as a consequence, the positional errors are a function of the velocity:

    Δx = C_0 v + C_1 x v + C_2 y v    (5)

In this first category of display and imager combinations, motion results in a geometrical distortion of the moving object. This distortion varies smoothly as a function of the spatial position, and is therefore considered mild and acceptable, at least for consumer vision applications. Figure 2a illustrates the effect of scanning mismatches in this category. The figure shows the perceived image when making a horizontal pan over a scene with a frame-transfer CCD camera and displaying the resulting video on a scanned display (e.g. a CRT). The smoothly varying delay error as a function of the screen position causes a skew distortion of the image content. For higher speeds this effect can be quite noticeable, but it is seldom annoying.

2.2 Non-linear delay variations

In a second category of imager and display combinations, the parameters C_n are not constant (and C_0 can therefore no longer be neglected). The relation may vary per colour (for colour sequential displays, as shown in Figure 2b), per picture portion (for tiled displays like vidiwall screens, see Figure 2c), or may depend on the picture number (for broadcast film material on any known display, and for video on PCs, see Figure 2d). Another example is interlaced video on a progressive display, where the delay depends on the line number (odd/even). As these delay variations and, from Equation 5, positional errors do not necessarily vary smoothly, correction gives a clear improvement that is almost necessary for acceptable image quality. The delay may even depend on the displayed data, as is typical for displays with an on/off character that realize gray scale representation with pulse-width modulation (e.g. PDPs).
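The skew of Section 2.1 follows directly from Equations 3-5. As a numerical illustration (our own, with assumed numbers), consider a 300 pixels/second pan displayed on a 50 Hz, 576-line CRT fed by a frame-transfer CCD:

    # Skew distortion of a panning scene (Eqs. 3-5): the delay, and hence
    # the horizontal shift, grows linearly with the line number y.

    t_f, h = 0.02, 576            # assumed frame time (50 Hz) and line count
    v = 300.0                     # assumed horizontal pan speed, pixels/second

    for y in (0, 288, 575):       # top, middle and bottom line
        dt = (t_f / h) * y        # vertical delay term C_2 * y (Eqs. 2-3)
        dx = v * dt               # positional error (Eq. 4)
        print(f"line {y:3d}: delay {dt*1e3:5.2f} ms -> shift {dx:4.1f} px")

    # The shift grows from 0 px at the top to about 6 px at the bottom:
    # vertical lines in the scene are perceived as slanted (Fig. 2a).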

Fig.2 The perceived effect of scanning mismatches and temporal integration for various imager and display combinations. In all cases the camera pans over the image, so that the image moves from left to right across the screen. Top-left to bottom-right: (a) frame-transfer CCD imager and CRT display, (b) imager tube and colour sequential CRT display, (c) frame-transfer CCD imager and tiled (vidiwall) CRT display, (d) 25 Hz film on a 50 Hz CRT, (e) frame-transfer CCD imager and liquid crystal display (LCD), (f) frame-transfer imager and plasma display panel (PDP).

In this type of display, each bit-slice has a different delay. The artifacts are even stronger than in the previous examples, as can be seen in Figure 2f and in Figure 8. Typically, brightly coloured dynamic contours appear in low detail areas. Repair therefore leads to an obvious quality improvement. In a more elaborate paper dedicated to the artifacts of PDPs [5], we have shown that this improvement, although feasible, is more difficult to achieve. Section 5.5 gives a summary of these artifacts and improvements. Recently, correction algorithms have reached a level of maturity allowing the introduction of dedicated ICs [1, 2] for consumer video equipment, and of software packages enabling real-time correction on digital signal processors (DSPs) [4].

3 TEMPORAL ARTIFACTS DUE TO INTEGRATION

The CRT is a stroboscopic display device: the light for an individual pixel is generated as a pulse which is very short compared to the picture time. On non-stroboscopic displays, including many emerging types like LCDs and PDPs, each image F(x, n), where n is the picture number, is displayed during a display time t_i in a sample-and-hold manner. For an LCD this display time equals the picture time T, while for a PDP it varies with the brightness of the pixel. (Other effects are left out for simplicity, but lead to similar artifacts.) When there is motion in the image, the viewer tracks the motion and hence integrates the intensity produced by each image along the motion trajectory:

    F_out(x, n) = (1/t_i) \int_0^{t_i} F(x + (t/T) D, n) dt    (6)

The displacement vector D is the product of the object velocity and the picture period:

    D = v T    (7)

If t_i is constant, the integration can also be written as a convolution of the original image F(x, n) with a motion tracking / temporal sample-and-hold function h(α):

    F_out(x, n) = (T/t_i) \int_0^{t_i/T} F(x + α D, n) dα = \int F(x + α D, n) h(α) dα    (8)

where

    h(α) = T/t_i for 0 ≤ α < t_i/T, and 0 otherwise    (9)

is a 1D block function oriented along the motion vector D. It is therefore actually a 2D function h(x), which has value zero outside the line segment x = k D (0 ≤ k < t_i/T), and whose 2D integral (area) is normalized to 1.

Fig.3 The artifacts due to the temporal behaviour of displays, modelled as a cascade of a filter to represent non-stroboscopic rendering and a variable delay to represent a mismatch with the scanning of the imaging device. The general remedy includes inverse filtering and delay compensation, i.e. motion compensated interpolation.

The 2D spatial Fourier transform of Equation 8 is:

    F_out(f, n) = \int F_out(x, n) e^{-j2π f·x} dx = F(f, n) H(f)    (10)

with F(f, n) the 2D spatial Fourier transform of the original signal F(x, n), and H(f) the 2D spatial Fourier transform of h(x):

    H(f) = sinc(π (t_i/T) D·f)    (11)

We can now see that the effect of the motion tracking / temporal sample-and-hold characteristic is a low-pass filtering with a sinc frequency response, where the width of the pass-band decreases proportionally to the quantity (t_i/T) D. Figure 2e shows the perceived artifact for our panning scene on an LCD. The blurring is clearly visible.

4 TEMPORAL ARTIFACT REDUCTION

As the previous sections showed, weaknesses in the temporal behaviour of emerging displays result in spatial impairments such as distortion, blurring, or even tearing of moving image parts. Moreover, the effects are proportional to the velocity of the motion. Any reduction method must therefore have knowledge of the motion of the objects in the scene. This knowledge can be obtained from motion estimation algorithms. Devices implementing these algorithms calculate whether there is motion at a certain part of the screen and, if so, in which direction and how fast. Evidently, the ability to design high quality true-motion estimators is essential for remedial processing.

The model for the temporal artifacts can be divided into two parts: the variable delay and the integration filter, at the right of Figure 3. Consequently, the remedy for all the described artifacts consists of a cascaded inverse variable delay and inverse integration filter, shown at the left of Figure 3, to counteract the effects of integration of non-stroboscopic displays. The inverse variable delay is provided by a motion compensated picture interpolation module, e.g.:

    F_MC(x, n) = 1/2 ( F_in(x + β D, n) + F_in(x − (1 − β) D, n−1) )    (12)

where β is determined by the delay to be compensated. The principle of inverse filtering has been suggested by Bitzakidis [6]. From the analysis in the Fourier domain (Eqs. 10 and 11), we can see that this inverse filtering is defined by:

    F(f, n) = F_out(f, n) / H(f) = F_out(f, n) / sinc(π (t_i/T) D·f)    (13)

In practice, the inverse filtering cannot be perfect, because the transfer H(f) resulting from the integration along the motion trajectory has zeroes, and because the dynamic range of the display is limited. Nevertheless, accurate knowledge of the motion in the scene provides the key to counteracting the motion blurring introduced by these non-stroboscopic displays.
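As a minimal sketch (our own; the products referenced in the paper use block-based vector fields and the robust interpolators of [7]), the motion compensated interpolation of Equation 12 can be written for one scanline as follows. The picture indices and the sign of the displacement d follow the reconstruction of Equation 12 above:

    # Motion compensated interpolation (Eq. 12) on a 1D luminance signal.
    # 'd' is the displacement per picture period, 'beta' the delay fraction.
    import numpy as np

    def mc_interpolate(prev, curr, d, beta):
        x = np.arange(len(curr))
        fwd = np.interp(x + beta * d, x, curr)           # from picture n
        bwd = np.interp(x - (1.0 - beta) * d, x, prev)   # from picture n-1
        return 0.5 * (fwd + bwd)

    prev = np.zeros(16); prev[4:8] = 1.0    # object at x = 4..7 in picture n-1
    curr = np.zeros(16); curr[8:12] = 1.0   # moved by d = 4 pixels in picture n
    print(mc_interpolate(prev, curr, d=4, beta=0.5).round(2))
    # both terms align at x = 6..9: the object midway along its trajectory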
In addition to the imperfections of the inverse filter that are due to the nature of the integration filter, the pulse-width modulation techniques used for gray level generation cause data dependencies of the artifacts, which can be reduced but not completely eliminated. This will be explained in Section 5.5.

5 ELABORATION OF THE REQUIRED PROCESSING FOR INDIVIDUAL DISPLAY TYPES

In the previous sections, we have derived a general model that describes the temporal behaviour of a display. We extended this model with two modules, a variable delay

and an inverse filter, that describe the ideal video display processing to remedy the imperfect temporal characteristics of emerging display principles. In this section, we elaborate these two modules for some interesting display types belonging to the non-linear category introduced in Section 2.2. The linear devices mentioned in Section 2.1 will not be discussed in more detail, since the perceived defects of these displays are hardly ever annoying.

5.1 Colour sequential displays

In a colour sequential display, the light of the 3 primary colours is not emitted simultaneously for a given pixel, but rather alternates in separate pictures. We shall use R(x, n), G(x, n), and B(x, n) to indicate the red, green, and blue video input signals at position x in picture number n. As an example, we shall assume that the picture rate of the colour sequential display is 3 times the input picture rate. So, one of the colours, e.g. red, is emitted at the correct position in time, while green and blue are emitted with a delay of one-third and two-thirds of the picture period, respectively. (We neglect here the possible delay between pixels in an image due to the increased vertical scanning frequency per colour, as this only gives rise to the mild geometrical distortion discussed before.) To compensate for the temporal mis-positioning of green and blue, we have to interpolate the green and blue video signals to the display times using motion compensated interpolation. Applying Equation 12 to this specific example leads to:

    G_out(x, n) = 1/2 ( G(x + (1/3) D, n) + G(x − (2/3) D, n−1) )
    B_out(x, n) = 1/2 ( B(x + (2/3) D, n) + B(x − (1/3) D, n−1) )    (14)

It is possible, and advisable, to use an interpolation technique that is more robust against erroneous motion vectors, as discussed in [7]. From the case discussed, however, it is clear that the remedy for the typical artifact of a colour sequential display (i.e. colour break-up at moving edges in the scene) is straightforward and absolutely feasible with current video processing technology.
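A hedged illustration of Equation 14 (our own code, with the sign conventions of the reconstruction above; a product implementation would use the robust interpolation of [7]) treats the three colour planes as follows:

    # Colour sequential compensation (Eq. 14): red passes through, green and
    # blue are re-interpolated to their emission times (beta = 1/3 and 2/3).
    import numpy as np

    def mc(prev, curr, d, beta):
        # Eq. 12 along one scanline (cf. the sketch in Section 4)
        x = np.arange(len(curr))
        return 0.5 * (np.interp(x + beta * d, x, curr)
                      + np.interp(x - (1 - beta) * d, x, prev))

    def compensate_rgb(prev_rgb, curr_rgb, d):
        r_prev, g_prev, b_prev = prev_rgb
        r_curr, g_curr, b_curr = curr_rgb
        return (r_curr,                      # red: emitted at the correct time
                mc(g_prev, g_curr, d, 1/3),  # green: delayed by T/3
                mc(b_prev, b_curr, d, 2/3))  # blue: delayed by 2T/3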
5.2 Tiled displays

In a tiled display, adjacent parts of the picture are shown on individual display units tiled together to form the total screen. If the individual display units are of the scanning type, like the CRTs in a vidiwall, moving objects in the image break up at the boundary of two vertically neighbouring display elements, as shown in Figure 2c.

Fig.4 A tiled display consisting of 4 CRTs; the scanning times (from t = 0 for the first line, via t = t_l, up to t = t_f for the last line) are indicated for each tile.

Figure 4 shows a tiled display formed of 4 CRTs, each showing a quadrant of the picture. Let us assume that the CRTs are synchronously scanned from top to bottom in exactly one picture period, i.e. we neglect the vertical fly-back interval. In this case, a delay step of exactly one picture period occurs between the last lines of the upper two display elements and the first lines, half way down the picture, of the lower two display elements. Similarly, a delay step of one line time occurs between the last pixel of every line on the left two display elements and the first pixel of every line on the right two display elements. Since this last delay step is roughly 3 orders of magnitude smaller than the first (Eq. 2), the resulting artifact is negligible. To compensate for the delay step between neighbouring lines on the upper and lower display elements, we can interpolate the video signal for the lower display elements, F_bottom(x, n), using motion compensated interpolation. Applying again Equation 12 to this specific example leads to:

    F_bottom(x, n) = 1/2 ( F(x + (1/2) D, n) + F(x − (1/2) D, n−1) )    (15)

Again, a more robust interpolation technique as discussed in [7] can be used, but the correction of the artifact is straightforward and absolutely feasible at a consumer price.

5.3 PC-monitors and other CRTs with an increased picture frequency

To prevent large area flicker, PC-monitors are usually operated at a picture frequency that lies well above the picture rate of broadcast video. The straightforward way to deal with this scanning mismatch is to repeat pictures until a new one becomes available.

Evidently, the repeated images are delayed in time, which leads to a spatial mis-positioning of moving objects. For moderate speeds this can be perceived as a more or less acceptable blurring, but for somewhat higher speeds the artifact is perceived as annoying echoes behind moving edges. Again, Equation 12 provides the recipe to eliminate this artifact:

    F_out(x, n) = 1/2 ( F_in(x + β(n) D, n) + F_in(x − (1 − β(n)) D, n−1) )    (16)

but now β changes dynamically over time. The same technique is used in flicker-free television display driver ICs, like the one discussed in [2], and in studio equipment for standards conversion between 50 Hz and 60 Hz material. However, PCs still use the (in our opinion) outdated technique of picture repetition [8].

5.4 Liquid crystal displays

In contrast with the previous examples, the LCD is not a stroboscopic display: it does not emit the luminance of a given pixel at a discrete point in time, but rather emits the light during the entire picture time. The result can be interpreted as a stroboscopic display with an infinitely high picture rate, using picture repetition to deal with the scanning mismatch. Unfortunately, this interpretation does not help us, as motion compensation cannot be applied to the individually repeated images to perfectly correct the delay errors in the repeated images. The inverse filtering according to Equation 13, where t_i equals the picture period, can be considered as a theoretical remedy. In practice, a modified filter is required, since the sinc function in Equation 13 contains zeroes in the transfer characteristic. A general approach that can be realized in practice is a filter that boosts the high frequencies that will be attenuated by the sinc behaviour of the display, as defined by:

    F_out(x, n) = F(x, n) + G(D) \sum_{(k,l) ∈ N} C(D, k, l) F(x + (k, l), n)    (17)

where the second term on the right-hand side is a high-pass FIR filter, the coefficients of which depend on the local motion D. The risk of using a high-frequency boosting filter is that variations of the filter in areas where the motion vector is unreliable become visible as a modulation of the noise. This is particularly a danger in flat areas, where most motion estimators have difficulty in finding a valid vector, since such a region is invariant to shifts. Figure 5 shows the resulting modulation.

Fig.5 The drawback of motion dependent peaking in areas where motion vectors are undefined. The upper picture shows a noisy flat image section, the lower one the output of the peaking filter. The different motion vectors found in this region result, on a block-by-block basis, in visible differences in the noise pattern.

To prevent such noise modulation, Equation 17 can be modified to suppress peaking in areas that cannot suffer from blurring because they contain no detail:

    F_out2(x, n) = F(x, n),           if |F_out(x, n) − F(x, n)| ≤ Th
                 = F_out(x, n) + Th,  if F_out(x, n) − F(x, n) < −Th
                 = F_out(x, n) − Th,  if F_out(x, n) − F(x, n) > +Th    (18)

where Th is a threshold value that prevents peaking in low contrast areas. Figure 6 illustrates this processing with a block diagram of a possible implementation. Alternatives to remedy the motion blurring effect of LCDs generally do not use video processing, but rely on methods like a flashing/scanning back-light. This eliminates the integration (it reduces t_i), but brings back the flicker that was so conveniently absent thanks to the sample-and-hold character of the LCD.
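A minimal 1D sketch of this motion dependent peaking with coring (our own; the kernel and gain are assumptions, whereas a real design would derive the coefficients C(D, k, l) from the inverse of the sinc transfer of Equation 11):

    # Motion dependent peaking (Eq. 17) with threshold coring (Eq. 18).
    import numpy as np

    def lcd_peaking(frame, d, gain=0.5, th=4.0):
        k = max(1, int(round(abs(d))))     # filter reach grows with |d|
        hp = frame - 0.5 * (np.roll(frame, k) + np.roll(frame, -k))
        boosted = frame + gain * hp        # illustrative high-pass boost (Eq. 17)
        diff = boosted - frame
        return np.where(np.abs(diff) <= th, frame,       # no correction (Eq. 18)
               np.where(diff > th, boosted - th, boosted + th))

The coring keeps the output continuous: small differences (mostly noise in undetailed areas) leave the input untouched, while larger corrections are reduced by Th.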

Fig.6 A correction only results where the difference between the original input signal and the output of the inverse filter is considered significant, i.e. larger than a pre-defined threshold (block diagram: the input video feeds a motion estimator and the inverse filtering, followed by suppression of the inverse filtering in undetailed areas). This remedies the modulation of noise in undetailed areas, where motion vectors are often unreliable.

As a final example, the motion artifacts of LCDs can be reduced by increasing the picture frequency. This does not influence the quantity t_i/T in Equation 11, but reduces the blurring by decreasing D (Eq. 7), since the speed of the objects in pixels per second does not change. However, while doubling the picture frequency halves D, it at the same time introduces a variable delay depending on the picture number (odd/even). If this delay is not compensated, e.g. when picture repetition is applied, increasing the picture rate has no effect: each input image is still sampled-and-held for the same amount of time. Using the methods introduced in this paper, we now know that motion compensated interpolation is a prerequisite to counteract these motion artifacts.

5.5 Plasma display panels

Motion artifacts in PDPs are caused by the sub-field driving method used for generating gray levels. Each pixel in the image is distributed over a number of sub-field pixels, and each sub-field pixel can be switched on to emit light during a period depending on the sub-field weight. The viewer is unable to perceive the individual sub-fields, because their light is integrated for each pixel into the correct luminance value. Figure 7 shows the sub-field principle and the integration for a still image. However, when there is motion, the viewer will track the moving object and hence integrate the sub-fields along the motion trajectory instead of along the temporal axis. So:

    I_out(x, n) = \sum_{j=1}^{N_SF} W_j SF_j(x − (t_j/T) D(x, n), n)    (19)

where I_out(x, n) is the perceived intensity, while W_j and SF_j are respectively the weight and the state (0 or 1) of sub-field j at position x in picture n. The total number of sub-fields is N_SF, and the delay of sub-field j is given by t_j.

Fig.7 The dynamic false contour motion artifact on a display with sub-field weights 1, 2, 4, 8, 16, 32 (SF0...SF5). For a still image, the integration over pictures n and n+1 yields the intended luminance; when tracking a moving object across the screen, the viewer integrates sub-fields from different pixels, which results in an unintended perceived luminance.

In terms of the model, each sub-field ("bit-slice") has a different delay that causes a spatial mis-positioning due to the motion (Eq. 5). Consequently, the bit-slices corresponding to each image in the sequence are perceived at different positions. This leads to the artifact described as dynamic false contours, illustrated in Fig.7 and Fig.8b. Although this contouring artifact is far more annoying, the mis-positioning also causes a blurring effect, similar to the double image in Figure 8d. To prevent these motion artifacts, we have to compensate for the delay of each sub-field. The problem of motion compensation for PDPs is discussed in more detail in [5]; here we only give a short summary in terms of the model introduced in this paper.
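The false contour of Equation 19 is easy to reproduce numerically. The sketch below (our illustration; equally spaced sub-field delays t_j = jT/8 are an assumption) shows how a tracked ramp around the 127/128 code value produces large perceived errors:

    # Perceived intensity on a PDP for a viewer tracking motion d (Eq. 19).
    import numpy as np

    WEIGHTS = [1, 2, 4, 8, 16, 32, 64, 128]        # 8 binary sub-fields

    def perceived(frame, d):
        x = np.arange(len(frame))
        out = np.zeros(len(frame))
        for j, w in enumerate(WEIGHTS):
            sf = (frame >> j) & 1                   # bit-slice SF_j
            shift = int(round((j / 8.0) * d))       # assumed delay t_j = j*T/8
            out += w * sf[np.clip(x - shift, 0, len(frame) - 1)]
        return out

    frame = np.array([126, 127, 128, 129] * 4)      # ramp crossing the MSB change
    print(perceived(frame, d=4))  # strong deviations appear: the false contour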
The motion compensation, i.e. the compensation of the delay of each sub-field, is not as straightforward as in the examples of Sections 5.1 to 5.3. Equation 20 shows the use of Equation 12 for each sub-field separately:

    SF_j,MC(x, n) = 1/2 SF_j(x + (t_j/T) D(x, n), n) + 1/2 SF_j(x − (1 − t_j/T) D(x, n), n−1)    (20)

From Equation 20 we can see the first problem: since a pixel in a sub-field can only be on or off, we cannot simply interpolate (spatially and temporally) a value for the motion compensated sub-field. Such an interpolation would cause a mixing of bits from two different words, which can produce quite unexpected results, since even the smallest differences in the value of the words can completely change the values of the individual bits. This problem can be circumvented by modifying the motion compensation:

    SF_j,MC2(x, n) = SF_j(round(x + (t_j/T) D(x, n)), n)    (21)

The interpolation in the temporal direction is removed, and the spatial interpolation is reduced to a rounding operation (a small sketch of this operation follows at the end of this sub-section). This will always result in a binary value for SF_j,MC2, but the rounding errors can be quite visible (Fig.8c). These rounding errors can be prevented by re-calculating the value of each sub-field from the full-intensity motion compensated value (Eq. 12). Nevertheless, although they are now aligned along the motion vector, the sub-fields are still taken from different words (due to the rounding), and there is still a chance that the sub-fields along the motion vector do not produce the intended luminance. In [5] we explain this, and introduce a feedback mechanism that controls the intensity that will be seen by a motion-tracking viewer, to give the optimum sub-field delay compensation. Pictures showing the typical PDP motion artifact, and the results of the remedial processing discussed in this sub-section, are shown in Figure 8.

A second problem is the sensitivity to motion estimation errors. Since the artifacts are very annoying, any error in the estimated motion vectors will lead to (similar) artifacts that are consequently also annoying, even more so than in the case of LCDs (Fig.5). An effective remedy against this sensitivity cannot easily be found with motion compensation alone; the severity of these artifacts should be reduced by using more suitable sub-field distributions. Nevertheless, as argued in the previous sections, motion compensation is the only way to correct the sub-field delay variations.

A third problem with PDPs is the data dependency of the duration of the light emission. The processing shown in Figure 8 only compensates for the delay variations, i.e. it aligns the sub-fields along the motion trajectory. On top of that, the light pulses have a finite length, which causes motion blur. In Section 4, we introduced inverse filtering as the remedy for this non-stroboscopic image rendering. However, the blurring due to the integration of a certain image detail depends on the length of the light pulses that carry the information of this detail, i.e. on the bit-slices involved. If we assume that the main blurring is caused by the most significant bit that varies due to the image detail, we can use this information in combination with the motion vector to determine the coefficients of the inverse filter. In a possible implementation, shown in Figure 9, the mean signal value and the peak-to-peak value of a high-frequency video component are used to determine the most significant active bit. Clearly, the correction for the motion blur of a PDP can only be an approximation. Fortunately, the length of the pulses is relatively short compared to the picture time, and therefore the de-blurring is not too critical.

Fig.8 Results on the "face" sequence for a PDP with 8 sub-fields; (a): the original, or the perceived image in case of a still image, (b): picture moving to the right, the perceived artifact without remedial processing, (c): motion compensated algorithm suffering from rounding errors, (d): an optimal algorithm [5].

Fig.9 Possible implementation of the inverse filtering for a PDP (block diagram: the input video feeds a motion estimator and detectors for the peak-to-peak AC value and the mean signal value, which address a LUT supplying the inverse filter coefficients; the inverse filtering is again suppressed in undetailed areas). The approximation is worse than for an LCD, as the required filtering depends on the sub-fields involved in displaying the detail.
Here, this information is estimated from the mean signal value and the amplitude of the detail signal. These two pieces of information and the motion vector are translated by the Look-Up Table (LUT) into a suitable set of coefficients for the de-blurring filter.
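As a final sketch (ours, under the same assumptions as the previous fragment), the rounding-based compensation of Equation 21 fetches every bit-slice from the position the tracked object occupies at that sub-field's emission time, so that the slices line up along the motion trajectory:

    # Rounding-based sub-field motion compensation (Eq. 21).
    import numpy as np

    def compensate_subfields(frame, d):
        x = np.arange(len(frame))
        slices = []
        for j in range(8):                      # 8 sub-fields, weights 1..128
            sf = (frame >> j) & 1
            pos = np.clip(np.round(x + (j / 8.0) * d).astype(int),
                          0, len(frame) - 1)    # spatial rounding, no temporal mix
            slices.append(sf[pos])              # binary by construction
        return slices

Tracking the motion through Equation 19 now recovers the intended pixel values, at the cost of the rounding errors discussed above (Fig.8c).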

6 CONCLUSIONS

In this paper, we presented an overview of the artifacts resulting from emerging display principles. We modelled these devices, and introduced a general approach, consisting of motion compensated picture interpolation and inverse filtering, to repair the image quality using signal processing. We elaborated this general correction concept for the individual display types, giving various examples. We concluded that motion estimation plays an enabling role in the reduction of display artifacts. Based on years of research in the motion estimation area [9], these algorithms have reached a level of maturity that has enabled the implementation of high quality true-motion estimators in consumer electronics devices.

REFERENCES

[1] G. de Haan, J. Kettenis, and B. Deloore, "IC for motion compensated 100 Hz TV, with a smooth motion movie-mode", IEEE Transactions on Consumer Electronics, Vol. 42, May 1996, pp. 165-174.

[2] G. de Haan, "IC for motion compensated de-interlacing, noise reduction and picture rate conversion", IEEE Transactions on Consumer Electronics, Aug. 1999, pp. 617-624.

[3] G. de Haan, "Large-display video format conversion", Journal of the SID, Vol. 8, No. 1, 2000, pp. 79-87.

[4] R.J. Schutten and G. de Haan, "Real-time 2-3 pull-down elimination applying motion estimation/compensation on a programmable device", IEEE Transactions on Consumer Electronics, Vol. 44, No. 3, Aug. 1998, pp. 930-938.

[5] M.A. Klompenhouwer and G. de Haan, "Optimal reduction of dynamic contours in plasma panel displays", Digest of the SID '00, May 2000, Long Beach, pp. 388-391.

[6] S. Bitzakidis, "Matrix video display system and method of operating such systems", European Patent Application No. EP657860A2, Jun. 14, 1995.

[7] O.A. Ojo and G. de Haan, "Robust motion-compensated video up-conversion", IEEE Transactions on Consumer Electronics, Vol. 43, No. 4, Nov. 1997, pp. 1045-1055.

[8] G. de Haan, "Judder-free video on PCs", Proc. of the WinHEC '98, Mar. 1998, Orlando (CD-ROM).

[9] G. de Haan, "Progress in motion estimation for video format conversion", IEEE Transactions on Consumer Electronics, Vol. 46, No. 3, Aug. 2000, pp. 449-459.

BIOGRAPHY

Gerard de Haan received B.Sc., M.Sc., and Ph.D. degrees from Delft University of Technology in 1977, 1979, and 1992, respectively. He joined Philips Research in 1979. Currently, he is a Research Fellow in the Video Processing & Visual Perception group of Philips Research Eindhoven and a Professor at the Eindhoven University of Technology. He has a particular interest in algorithms for motion estimation, scan rate conversion, and image enhancement. His work in these areas has resulted in several books, about 70 papers (www.ics.ele.tue.nl/dehaan/publications.html), some 50 patents and patent applications, and several commercially available ICs. He was the first place winner in the 1995 ICCE Outstanding Paper Awards program, the second place winner in 1997 and in 1998, and the 1998 recipient of the Gilles Holst Award. The Philips Natural Motion Television concept, based on his Ph.D. study, received the European Innovation Award of the Year '95/'96 from the European Imaging and Sound Association. Gerard de Haan is a Senior Member of the IEEE.

Michiel A. Klompenhouwer received an M.Sc. degree in Applied Physics from the University of Twente in 1998. He joined Philips Research in 1998, and is currently a Research Scientist in the Video Processing and Visual Perception group of Philips Research Laboratories.
He has a particular interest in video processing for emerging television displays, and is working towards a Ph.D. in this field. He has written a number of papers and his work has resulted in several patent applications.