Motion Perception in Displays. Scott Daly, Dolby Laboratories. Simon Fraser University, Oct. 2011


Motion Perception in Displays. Scott Daly, Dolby Laboratories. Simon Fraser University, Oct. 2011

Outline: LCTV basics (transmission modulation, spatial, color, etc.); basics of spatiotemporal vision (motion, eye movements, eccentricity); LCD temporal issues (overdrive, dynamic gamma, display temporal rendering function); analysis of temporal LCD approaches; perceptual appearance: the motion sharpness effect; standardized metrics; conclusions/summary; what's next: other temporal artifacts; what does motion really look like?

LCTV Basics

Light Modulation via Liquid Crystals. LCD is a transmissive display: light is not created by the liquid crystals themselves. A light source behind the panel (CCFL or LED) shines through the display, and a diffusion panel behind the LCD scatters and redirects the light evenly. Layer stack, in the backlight direction: polarizing filter with retardation film, glass substrate, transparent electrode, alignment layer, liquid crystals, R/G/B color filter, glass substrate, polarizing filter with retardation film. Driving the display: two polarizing transparent panels (one vertical, one horizontal) with the liquid crystal solution sandwiched in between. Liquid crystals are rod-shaped molecules: they bend light in response to an electric current and act like a shutter, allowing light to pass through or blocking (or attenuating) it.

Pixels to resolution. 45" LCD full HDTV: physical resolution vs. pixel dimensions; 6.22 million pixels. Full HD (1920 x 1080 progressive) was achieved in 2003; Full HD has now been shown up to 108" for LCTV. 1,920(H) x RGB x 1,080(V), progressive. 4k x 2k pixel resolution has been shown by several manufacturers (65"; 24 million pixels). Pixel pitch 0.5135 mm x 0.5135 mm. Usually, pixel physical resolution for LCTV is near 45 ppi.

Salient Characteristics of LCD: MTF & PSF. The LCD MTF does not vary with gray level or spatial neighbors: a rigid pixel via a fixed aperture plus a steady backlight. The MTF is a sinc function based on the subpixel dimensions (width, length, horizontal gap, vertical gap); color crosstalk correction is sometimes needed. CRT: spatially nonlinear, so the MTF is hard to assess; spatial superadditivity in the H direction, and spatial sub-additivity if the power supply is not powerful enough.

Salient Characteristics of LCD: MTF details. Comparison to the visual system MTF (the CSF): the sinc is only a gradual LPF within the HVS CSF window. Viewing distance = 2000 pixels (~2H for HDTV, ~4H for VGA). [Figure 13A: sinc MTF for the MI-6 (G plane), H and V spatial frequency in cy/pix. Figure 13B: MI-6 MTF and CSF.]
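The sinc-shaped aperture MTF described above can be sketched numerically. This is an illustrative calculation only; the function name and the fill-factor parameter are mine, not from the talk:

```python
import math

def aperture_mtf(f_cyc_per_pixel, fill_factor=1.0):
    """MTF of a rigid rectangular pixel aperture: |sinc(a*f)|,
    where a is the aperture width as a fraction of the pixel pitch.
    Uses the normalized sinc, sin(pi*x)/(pi*x)."""
    x = fill_factor * f_cyc_per_pixel
    if x == 0:
        return 1.0
    return abs(math.sin(math.pi * x) / (math.pi * x))

# A full-aperture pixel at the Nyquist frequency (0.5 cy/pixel):
print(round(aperture_mtf(0.5), 3))   # 0.637, i.e. 2/pi: a gradual roll-off
```

Note how little the aperture attenuates even at Nyquist, which is why the slide calls it "only a gradual LPF" relative to the CSF window.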

Current Challenges for LC TV. High dynamic range at consumer cost. Wide color gamut at consumer cost. Ultra-high resolution: 4k x 2k and up. Achieving perfect motion fidelity: 1. Speeding up the pixel response time (how fast a pixel can change color without blurring); 2-4 milliseconds is currently cited, but not for all gray-level transitions. 2. Hold-response blur (a problem with plasma also). 3. Judder (a frame-rate issue; a problem with CRTs and plasma also). Human visual perception plays a role in performance.

Some Basics in Spatiotemporal Vision

Properties of the Visual System Properties generally dissected along these dimensions: Luminance Level Spatial Frequency Local Spatial Content Temporal Frequency Motion Global Color Eccentricity Depth

Properties of the Visual System Properties generally dissected along these dimensions: Spatial Frequency Temporal Frequency Motion

Engineering vs. Physiological Models of the Visual System. Engineering models of visual behavior aim for mathematical descriptions of key functionality. Psychophysics and black-box modeling have gotten the most mileage for practical applications. While physiological plausibility is helpful, simplification is desired: no need to model down to the neurotransmitter. How is more important than where.

Spatial Frequency Spatial behavior constant with visual angle (degrees) Spatial frequencies specified in cycles/degree (cpd, cy/deg) Spatial frequency behavior described with CSF (contrast sensitivity function) Similar to OTF of optics, MTF of electrical systems, but it is nonlinear and adaptive Measured with psychophysics One of the most useful, and widely used properties of visual system Campbell and Robson 66

Spatial Frequency Sensitivity. [Figure: CSF curves at a high (>1 cd/m2) and a low (0.1 cd/m2) luminance level.]

Spatial Frequency Sensitivity. [Figure: log contrast sensitivity vs. log spatial frequency.]

Spatial Frequency in Application. The max spatial frequency that can be displayed digitally is the Nyquist frequency: half the sampling frequency (e.g., 500 pixels can display at most 250 cycles). The common max frequency seen by humans (i.e., the CSF limit) is 30 cy/deg at medium brightness; the highest ever reported is 60 cy/deg (very high brightness, Carlson @ RCA). Examples of visual Nyquist frequencies and viewing distances for common displays:
NTSC (425 lines) at 6H (2550 pixels): 22 cy/deg
NTSC (425 lines) at 3H (1275 pixels): 11 cy/deg
XGA (1024x768) at 3H (2304 pixels): 20 cy/deg
SXGA (1280x1024) at 1H (1024 pixels): 9 cy/deg
1366 x 768 HDTV at 3H (2160 pixels): 19 cy/deg
Full HDTV (1920x1080) at 6H (6480 pixels): 57 cy/deg
Full HDTV (1920x1080) at 3H (3240 pixels): 28 cy/deg
Full HDTV (1920x1080) at 2H (2160 pixels): 19 cy/deg
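These visual-Nyquist figures follow from simple geometry: at a viewing distance of d picture heights, the picture subtends 2*atan(0.5/d) degrees vertically. A small Python check (function name mine) reproduces the table to rounding:

```python
import math

def visual_nyquist_cpd(active_lines, viewing_distance_in_H):
    """Nyquist frequency in cycles/degree for a display with the given
    number of active lines, viewed at a distance of d picture heights."""
    angle_deg = 2 * math.degrees(math.atan(0.5 / viewing_distance_in_H))
    return (active_lines / 2) / angle_deg

# matches the 22, 11, 28, 57 cy/deg entries above (to rounding)
for lines, dist in [(425, 6), (425, 3), (1080, 3), (1080, 6)]:
    print(lines, "lines at", str(dist) + "H:",
          round(visual_nyquist_cpd(lines, dist), 1), "cy/deg")
```

The same function shows why 2H viewing of 1080-line material (19 cy/deg) leaves visible headroom below the 30 cy/deg CSF limit.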

2D Spatial Frequency. 2D frequencies are important for images. The 2D CSF is not rotationally symmetric (isotropic): there is a lack of sensitivity near 45 degrees, called the oblique effect. [Figure: 2D spatial CSF, H and V spatial frequency axes in cy/deg.]

Temporal Frequency. The CSF for temporal frequencies has also been measured and modeled. At right is shown the temporal CSF at different light-adaptation levels for luminance; the top curve is most relevant for mid-bright display applications. The temporal CSF for opponent-color signals has about 1/2 the bandwidth and sensitivity of the luminance one. DeLange 52, Kelly 60s-70s, Watson 80s.

Spatiotemporal Frequency. Psychophysical measurement of the spatio-temporal CSF is common: Robson 66 (standing wave); Van Nes, Koenderink, Bouman 67; Kelly 79; Kelly and Burbeck 80. The test signal is the product of spatial and temporal frequency modulation: a standing wave (counterphase flicker). [Figure: standing-wave stimulus over spatial position and time.]

Spatiotemporal CSF. Spatiotemporal CSF (measured with counterphase flicker): the window of visibility. The data show a max visible temporal frequency (CFF) near 50 cy/sec; CFF = critical fusion frequency = the max temporal frequency that can be seen. [Figures: spatiotemporal CSF surfaces and contours; D.H. Kelly 79; Koenderink & van Doorn 79 (bimodal); Burbeck & Kelly 80 (excitatory-inhibitory separable version).] Thus 60 fps usually causes no visible flicker (foveal). Movie film at 24 fps causes visible flicker, so projectors shutter each frame 2 or 3 times to raise the fundamental temporal frequency. Before the 1920s, movies were called the flickers.

Brightness and Light Adaptation effects on the T-CSF. Higher brightness: increase in peak sensitivity of the temporal CSF. Higher brightness: increase in bandwidth of the temporal CSF, the Ferry-Porter law. CFF = critical fusion frequency (the CSF bandwidth cut-off).

CFF and Eccentricity. CFF = critical fusion frequency, defined as the frequency at which a 100% modulation signal looks identical to a flat field: the viewer does not see any flicker. For the fovea at typical display light levels, the CFF is around 55 Hz; for the periphery at the same light levels, it can increase to over 80 Hz.

Motion

Motion and Retinal Velocity. For objects in the real world, velocity is more important than flicker. Standing waves can be decomposed into traveling waves. Smooth tracking eye movements can reduce image velocity on the retina. [Figure: standing wave = two opposing travelling waves; SF = 6/512 cy/pix, TF = 1/8 cy/sec, V = (1/8)/(6/512) = 11 pix/sec.] Retinal velocities & stabilization: spatiovelocity CSF by Watanabe 68; retinal-velocity CSFs by Kelly from the Motion & Vision series, 79. [Figure: CSFs for different retinal velocities, roughly 0.1 to 128 deg/sec.]
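The standing-wave decomposition on this slide is easy to verify numerically. A sketch using the slide's SF and TF values (function names mine):

```python
import math

SF = 6 / 512   # spatial frequency, cycles/pixel
TF = 1 / 8     # temporal frequency, cycles/second

def standing(x, t):
    """Counterphase flicker: spatial carrier times temporal modulation."""
    return math.cos(2 * math.pi * SF * x) * math.cos(2 * math.pi * TF * t)

def two_travelling(x, t):
    """Rightward plus leftward travelling wave, each at half amplitude."""
    return (0.5 * math.cos(2 * math.pi * (SF * x - TF * t))
          + 0.5 * math.cos(2 * math.pi * (SF * x + TF * t)))

# the trig identity holds at arbitrary (x, t)
assert all(abs(standing(x, t) - two_travelling(x, t)) < 1e-12
           for x in range(0, 512, 37) for t in (0.0, 0.4, 1.3))

# speed of each travelling component: V = TF / SF
print(round(TF / SF, 2), "pixels/second")   # ~10.67, the slide's ~11 pix/sec
```

The printed speed is the V = TF/SF relation the slide uses.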

Sampled Motion and the Window of Visibility Watson, Ahumada, Farrell 86

Sampled Motion and the Window of Visibility. Watson, Ahumada, Farrell 86. The rectangular support shown is the window of visibility (idealized separable version): max spatial = 50 cy/deg (depending on conditions; well studied), max temporal = 30 cy/sec (depending on conditions and visual eccentricity; well studied). Undersampled motion: replications due to sampling = temporal aliases. Note: this would look awful.

Sampled Motion and the Window of Visibility Camera constrained window of visibility (not HVS) Aliasing vs. Blur tradeoffs at image capture via Temporal LPF prefilter via exposure aperture length via illumination duration Andrew Davidhazy @ RIT

Sampled Motion and the Window of Visibility. Watson, Ahumada, Farrell 86. Example of smoothly perceived motion: increasing the sampling rate spreads out the replications. Keeping aliases outside the window of visibility results in smooth, true motion. The required sampling rate depends on object speed and spatial content (i.e., bandwidth).

Sampled Motion and the Window of Visibility. Watson, Ahumada, Farrell 86. Now that we have smooth motion by keeping aliases out of the window of visibility, we still need to worry about motion blur due to the capture aperture; thus the use of a capture time shorter than the frame duration.

Relations between Temporal, Spatial, and Motion. Translational motion can be defined as l(x, y, t) = l(x - v_x t, y - v_y t, 0). Its 3D Fourier spectrum is given by L(f_x, f_y, f_t) = L(f_x, f_y) delta(f_x v_x + f_y v_y + f_t), so L(f_x, f_y, f_t) is non-zero only on the plane defined by f_x v_x + f_y v_y + f_t = 0. The motion of an object causes a temporal component in the spatiotemporal spectrum; the temporal component is proportional to spatial frequency and velocity.

Relations between Temporal, Spatial, Motion, and MTF. The spatio-temporal spectrum is low-pass filtered by the ST CSF as well as the display MTF (combined ST system MTF T): L_s(f_x, f_t) = L(f_x) delta(f_x v_x + f_t) T(f_x) T(f_t). When the eye accurately tracks the motion, the retinal image is purely spatial: L_s(f_x) = L(f_x) T(f_x) T(v_x f_x). The spatial transfer function due to the display's spatio-temporal MTF is therefore T_d(f_x) = T(f_x) T(v_x f_x): the spatial MTF times the temporal MTF evaluated at f_t = v_x f_x.

Advanced Issues in Spatiotemporal Vision

Properties of Visual System: Motion: retinal velocity. With no eye movements, image velocity = retinal velocity. [Figures: spatiovelocity CSF (stabilized retina); CSFs for different retinal velocities, roughly 0.1 to 128 deg/sec; surface and contour plots of the spatiovelocity CSF, log SF (cy/deg) vs. log velocity (deg/sec).]

Properties of Visual System: Motion: Eye Movements. Eye-movement tracking changes the window of visibility. B. Girod 93: perfect (and mandatory) object tracking.

Properties of Visual System: Motion: Eye Movements. Types of eye movements:
Saccadic eye movements (jumps): usually 160-300 deg/sec. With a larger display, larger saccades will still fit on screen, giving more of a feeling of being in the real world.
Smooth pursuit eye movements (tracking): up to 80 deg/sec for a 90-degree field of view (+-45 deg); 30 deg/sec for a 30-deg field of view.
Drift eye movements (very small): responsible for preventing image fading due to the low sensitivity of the spatial & temporal CSFs; approximately 0.1 to 0.15 deg/sec. No expected consequences of large screens on these.
Other small eye movements: tremor, microsaccades.
[Figure: smooth-pursuit eye-tracking velocity vs. target velocity; data from Meyer 85, observer KC; some retinal slippage (slope = 0.9). The red line is the model we use for eye movements: smooth pursuit plus baseline drift as a minimum.]

Properties of Visual System: Motion. Problem: spatial CSFs vs. velocity are narrower than the usual CSF. Static CSF viewing does not result in a stabilized image on the retina: eye drifts and small pursuit movements cause retinal velocities during CSF measurement. [Figures: CSF models, drift range (0.1-2.0 deg/s) vs. natural static viewing; modified Kelly model with the drift range, and the envelope of the drift range.] This gives us more confidence in the model for spatial attributes.

Eye Movement Model. Use best-case eye movements for detection of moving targets. The eye movement model shifts image velocities to retinal velocities that are low. [Figures: spatiovelocity CSF with the eye movement model; surface and contour plots, log SF (cy/deg) vs. log velocity (deg/sec).] Daly 98 (SPIE HVEI).

Eye Movement Model Spatiovelocity CSF. Spatiovelocity CSF using the eye movement model: omega = v rho, i.e., (cy/sec) = (deg/sec)(cy/deg), where omega = temporal frequency, v = velocity, rho = spatial frequency. Rotation back into a spatiotemporal CSF including the effects of eye movements; this can be used to assess the smoothness of motion. [Figures: spatiovelocity and spatiotemporal CSF surface and contour plots.]

Spatiotemporal visibility demos

Application of the SV EMM model: Analysis of Digital Video Formats. Analysis of interlace, flicker and resolution issues: use the spatiotemporal CSF to analyze progressive and interlace parameters. 720 lines progressive @ 60 fps, 1080 lines progressive @ 30 fps, and 1080 lines interlace @ 60 fields/sec all have similar uncompressed data rates. Viewing distance = 3H. [Figures: video Nyquist boundaries for 720P, 1080P, and 1080I at 3H, plotted against the ST CSF and the EMM ST CSF.]

Different Viewing Distances. Analysis of interlace, flicker and resolution issues: use the spatiotemporal CSF to analyze progressive and interlace parameters (720 lines progressive @ 60 fps, 1080 lines progressive @ 30 fps, 1080 lines interlace @ 60 fields/sec). An SD signal of 480P was also considered (some DVDs). Increasing the viewing distance to 6H and 9H: the interlace advantage is lost. [Figures: video format Nyquist boundaries at 6H (480P, 720P, 1080I) and at 9H.]

Speculative Video Format. [Figure: Nyquist boundary for a 36 Hz 1080I format at 3H, against the spatiotemporal CSF.] Auxiliary issues: interlace is more difficult to compress; 2H viewing is becoming more common with large displays, so 1080 lines are not enough; cost. Trumbull's Showscan (explored up to 100 Hz): some considered it too realistic and not cinematic.

Closer Examination of Spatiovelocity CSF via Eye Tracking

Verification of Eye Movement Model & SV CSF. Laird, Pelz, Rosen, Montag and Daly (2006). The spatiovelocity model is based on Kelly's experiments, which used retinal stabilization to control velocities on the retina: no directed eye movements. However, in real image-viewing applications the eyes will actually be in motion, and generally be directed as well. The spatiovelocity model may not be valid when the eyes are actually in motion, if auxiliary signals from the eye-control circuitry to V5, the motion area, have an effect(??). Goals: build/optimize a 2D spatio-velocity CSF model; further refine the Daly (Kelly + EMM) model; incorporate calculated retinal velocities; study effects of eye movements on retinal-velocity sensitivity.

Experimental Setup. Equipment & methodology: Sony Trinitron MultiScan G420 CRT; ASL Series 504 remote eyetracker; 2IFC. Stimuli: Gabor (contrast, frequency, velocity); disembodied edge (contrast, velocity). Mean luminance of screen: 6 cd/m2. Distance of observer from screen: 84 cm. Horizontal span of screen: 23.95 deg. Size of stimulus: 2.46 deg x 2.46 deg.

Eye tracking velocity calculations Conversion from position to degrees: Conversion from degrees to velocity:
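The two conversion formulas were not captured in the transcription; the following is a standard reconstruction, not the authors' exact method. Function names and the first-difference velocity estimate are my assumptions, using the 84 cm viewing distance from the setup slide:

```python
import math

def screen_pos_to_deg(x_cm, distance_cm=84.0):
    """Convert horizontal screen position (cm from screen center) to
    visual angle in degrees, for an observer at the given distance."""
    return math.degrees(math.atan(x_cm / distance_cm))

def gaze_velocity_deg_per_s(pos_deg, timestamps_s):
    """First-difference estimate of eye velocity from sampled gaze angles."""
    return [(p1 - p0) / (t1 - t0)
            for p0, p1, t0, t1 in zip(pos_deg, pos_deg[1:],
                                      timestamps_s, timestamps_s[1:])]

# example: gaze sweeping 1 deg every 100 ms gives 10 deg/sec
v = gaze_velocity_deg_per_s([0.0, 1.0, 2.0], [0.0, 0.1, 0.2])
print([round(x, 6) for x in v])   # [10.0, 10.0]
```

Retinal velocity is then the difference between this gaze velocity and the stimulus velocity.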

Tested spatiotemporal frequencies (v = omega / rho: deg/sec = (cycles/sec) / (cycles/deg)):

Spat Freq (cy/deg)   10 Hz    20 Hz    30 Hz
 4                   2.5      5.0      7.5   deg/sec
 8                   1.25     2.5      3.75  deg/sec
16                   0.625    1.25     -

Retinal velocities with and without directed eye movements. The experiment tests 4 cases, mixtures of Gabor velocity, fixation point, and envelope:
Trad CSF: the Gabor does not move.
ST CSF: the sine moves, but envelope & fixation do not: retinal velocity.
SV CSF, no eye tracking: the Gabor moves, but fixation does not: retinal velocity, if the observer can ignore the envelope?
SV CSF, eye tracking: fixation moves with the Gabor: retinal velocity depends on eye tracking.

Retinal velocities without directed eye movements Eye fixation is good (able to ignore moving object, if requested) Moving sines (fixed envelope) = moving gabor (moving envelope)

Retinal stasis with and without directed eye movements. Eye tracking is good; results are similar to static. No signals related to eye motion affect neural processing (no intercedent). Data are shifted horizontally for separability, since they superimpose closely.

Retinal velocities with and without directed eye movements Eye tracking removes the decrease in Sensitivity with increasing temporal frequency, for all tested spatial frequencies Maybe motion sharpening at 4 cpd?

Sensitivity results on the Spatiovelocity CSF model. The red dots correspond to the points in the table; the velocities result from the particular spatial and temporal frequency combination:

          10 Hz          20 Hz         30 Hz
 4 cpd    2.5 deg/sec    5 deg/sec     7.5 deg/sec
 8 cpd    1.25 deg/sec   2.5 deg/sec   3.75 deg/sec
16 cpd    0.625 deg/sec  1.25 deg/sec  -

Fine tuning parameters of the SV model. The SV CSF in retinal velocity v_R and spatial frequency rho:

CSF(rho, v_R) = k c0 c2 v_R (c1 2 pi rho)^2 exp( -(c1 4 pi rho) / rho_max )

where:
k = s1 + s2 |log( c2 v_R / 3 )|^3
rho_max = p1 / (c2 v_R + 2)

The Kelly model is modified to fit the data (Kelly's model was measured only at a low light-adaptation level, and on the noisier displays of the past). A non-linear least-squares routine fits the model's sensitivity values to the experimental results.
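A sketch of this model in Python, assuming Kelly's published constants (s1 = 6.1, s2 = 7.3, p1 = 45.9) and leaving c0, c1, c2 at 1, which recovers Kelly's original model; the study's fitted c-values are not reproduced here:

```python
import math

S1, S2, P1 = 6.1, 7.3, 45.9   # Kelly's constants (assumed here)

def sv_csf(rho, v_r, c0=1.0, c1=1.0, c2=1.0):
    """Spatiovelocity CSF: rho in cy/deg, retinal velocity v_r in deg/sec.
    c0, c1, c2 are the tuning parameters from the slide (1 = Kelly)."""
    k = S1 + S2 * abs(math.log10(c2 * v_r / 3)) ** 3
    rho_max = P1 / (c2 * v_r + 2)
    return (k * c0 * c2 * v_r * (c1 * 2 * math.pi * rho) ** 2
            * math.exp(-c1 * 4 * math.pi * rho / rho_max))

# sensitivity is band-pass in spatial frequency for a given velocity:
peak = max(range(1, 50), key=lambda f: sv_csf(f, 2.0))
print(peak)   # -> 2 (cy/deg), for a 2 deg/sec retinal velocity
```

The rho^2 * exp(-rho) shape gives the band-pass behavior, and rho_max shrinking with velocity moves the peak toward lower spatial frequencies as the target speeds up.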

Test of model on combined frequencies Experiment 5 Moving edge results Sensitivity to blurring of edge As a function of edge contrast

Test of the SV model. Revised SV CSF model based on the new parameters (inset). Prediction results for moving edges via the model, based on Watson & Ahumada JOV 05: use the 2D integral of CSF x signal spectrum to model sensitivity; perfect eye tracking is assumed; channels are not needed since there is no masking(??). OK, but could be better (facilitation?).

Verification of Eye Movement Model & SV CSF - Summary Sensitivity determined by retinal velocity Not affected by eye movements Sensitivity similar for 2 types of motion Moving sinusoids within Gabor Gabor moving across field of view Optimized 2D spatio-velocity CSF model More applicable to TV imagery Use of retinal velocity and unstabilized stimuli

LCD Temporal Basics

LCD Temporal Basics Why does LCD motion blur happen? LCD Temporal MTF components Temporal-response blur & Hold-type blur (temporal rendering function) LCD motion blur modeling LCD motion blur analysis Slow-response blur vs. hold-type blur Analysis of Proposed solutions

Slow-response blur: LCD Temporal Characteristics. Input vs. output temporal responses are shown; overall speed and asymmetry are important. Slower responses have more temporal LPF and lead to motion blur; asymmetric responses lead to HSF flicker, motivating overdrive. [Figure: input steps between gray levels and the measured output responses; response time in ms.]

Spatial consequences of Temporal LPF. Klompenhouwer 04. [Figure: original image spectrum; spectrum of the motion sequence; temporal MTF (the temporal low-pass of the display); spectrum after the temporal lowpass of the display; spectrum at the retina with eye tracking.] The motion of an object causes a temporal component in the spatial/temporal spectrum. This spectrum is low-pass filtered by the display's spatial/temporal transfer function. Eye tracking causes the retinal image to have a purely spatial spectrum without any temporal component, but the temporal low-pass filtering in the display reduces the spatial bandwidth of the retinal image, which causes the perception of motion blur.

Overdrive

Improving LCD Temporal Characteristics with Overdrive. Slower responses lead to motion blur. Overdrive: a LUT from gray level to gray level (an intended-to-necessary map). [Figure: with no overdrive, the response creeps from the start level toward the target; with overdrive, a value beyond the target is driven so the target is reached within one frame.] Okumura 01, Sekiya 02.

LCD Temporal Response and its Temporal MTF. [Figures: temporal responses and the corresponding temporal MTFs for 32 ms, 8 ms, and 2 ms panels and an overdriven (OD) panel.] Temporal overdrive can effectively improve the temporal MTF, thus reducing the motion blur. At the peak of the HVS temporal CSF (8 Hz), overdrive can even exceed a 2 ms temporal response.

Designing a temporal overdrive algorithm. [Block diagram: target values d_n-3, d_n-2, d_n-1, d_n; a frame buffer holding the previous value; an LCD model; LCD outputs Y_1, Y_2, Y_3 driven toward the target value.]

Overdrive algorithm results. [Figures: temporal responses without and with overdrive for transitions among input levels 8, 40, 72, 104, 136, 168, 200, 232, and 255.] Note that overdrive makes the temporal responses more symmetrical; this essentially eliminates the flickering artifacts. Temporal responses with OD are generally in the range of 3-5 ms.
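A toy version of such an overdrive driver, per the frame-buffer-plus-LUT structure above. The LUT construction here is a stand-in, not the algorithm from the talk; a real LUT would be derived from measured panel responses:

```python
def make_overdrive_lut(gain=0.5, levels=256):
    """Toy 2-D LUT: boost the drive past the target by a fraction of the
    step (positive or negative), clipped to the valid drive range."""
    lut = [[0] * levels for _ in range(levels)]
    for prev in range(levels):
        for target in range(levels):
            drive = target + gain * (target - prev)
            lut[prev][target] = min(levels - 1, max(0, int(round(drive))))
    return lut

def drive_sequence(targets, lut):
    """Per frame: look up drive from (previous target, current target)."""
    prev, out = targets[0], []
    for target in targets:
        out.append(lut[prev][target])
        prev = target            # one-frame buffer: remember the last target
    return out

lut = make_overdrive_lut()
print(drive_sequence([64, 64, 192, 192, 64], lut))
```

Note the symmetry: rising transitions are boosted above the target and falling transitions below it, which is the symmetrizing effect credited above with removing flicker.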

Dynamic Gamma Method for Overdrive Analysis

Dynamic Gamma Approach. (Static) display gamma: y = dc^gamma, where dc is the input digital count. [Figure: display output vs. input digital counts, on linear and log axes.] Feng et al. 04 & 05.

Definition of first order Dynamic Gamma. [Diagram: driving waveform stepping to a target; the output crossing the 90% point defines the response time, compared with the frame period; the dynamic gamma value Z_n is read at the end of the first frame.] The LCD input/output relationship changes with time when displaying motion. Dynamic gamma value: the output value measured at the end of the first frame. It can use the same equipment as a response-time measurement, and has advantages over the use of response times.

Representation: Table vs. Figure. First-order dynamic gamma: z_n = f(d_n, d_n-1). [Table/figure: measured output values for each combination of driving value and starting value; each curve represents a different previous value.] First-order D-gamma models edge motion.

Derivation of the overdrive lookup table. [Figure: output level vs. driving value, one curve per target level from 16 to 255.] Example: to go from 32 (previous frame) to 64 needs an OD value of 130.
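The derivation can be sketched as an inversion of the first-order dynamic gamma: for each (previous, target) pair, search for the drive value whose measured one-frame output lands closest to the target. The panel model below is a stand-in for real measurements, not the panel from the talk:

```python
def panel_response(drive, start, speed=0.5):
    """Stand-in first-order dynamic gamma z = f(d, start): the panel only
    covers a fraction `speed` of the commanded step within one frame."""
    return start + speed * (drive - start)

def od_value(prev, target, levels=256):
    """Overdrive LUT entry: argmin over d of |f(d, prev) - target|."""
    return min(range(levels),
               key=lambda d: abs(panel_response(d, prev) - target))

print(od_value(32, 64))   # a drive well beyond 64 is needed to land on 64
```

With this toy panel (half the step per frame), going from 32 to 64 requires driving to 96; a slower panel model would push the required drive value higher still, and clipping at 255 is what limits overdrive for near-white targets.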

Application for comparing LCD systems. Dynamic gamma is useful for comparing LCD systems (overdrive + inherent temporal response). Assessment of overdrive performance with dynamic gamma: [Figures: slow LCD without OD; slow LCD with OD; fast LCD with OD; fast LCD with OD, 2nd-order dynamic gamma.] First-order dynamic gamma models edge motion; second order has advantages for real video.

Current Overdrive algorithm results
[3-D plots: response time (ms) vs. input and output levels, without and with overdrive]

Current Overdrive algorithm results
Example of visual consequences: conventional driver vs. high-performance overdrive.

Display Temporal Rendering Function

Comparative Display Basics of Temporal Aperture
[Figure: real-world temporal waveform; CRT temporal output (impulse-type); LCD temporal output (hold-type)]

LCD Motion Blur
LCD's slow response → slow-response blur: physical; can be captured by a fixed-position camera.
[Figure: ideal LCD temporal output (zero response time) vs. practical LCD temporal output (slow response)]
LCD's hold-type rendition + HVS smooth pursuit & lowpass → hold-type blur: perceptual; only happens when the eyes are tracking; can NOT be captured by a fixed-position camera.
Lindholm 96, Parker 97, Kurita 98, Kurita 01

Role of eye tracking in LCD hold-type blur
The eye integrates along the tracking path (1-5 ms, LA).
For a CRT display (impulse), integration along the eye-tracking path causes no mixing of black and white displayed elements along the path: the result is a sharp moving edge.
For an LCD display (hold), the eye-tracking path goes through regions of white and black displayed elements, so signals mix due to the temporal integration of the eye: the result is a blurred moving edge.
[Figure: frames 0-2 with the eye tracking the moving edge; the perceived frame shows a sharp edge for the CRT and a blurred edge for the LCD]

Eye tracking Demo

Image examples
Retina image of a moving edge on a hold-type display with eye tracking: hold blur.
Retina image of a moving edge on an impulse-type display with eye tracking: no motion blur.
Quantitative Simulation

Motion Blur Due to Hold: Temporal MTF
The effect of the temporal hold is very similar to the spatial aperture effect of a CCD sensor. The temporal MTF is given by a sinc function:
MTF(f_t) = sinc(f_t · T_h)
[Plot: temporal MTF vs. temporal frequency (Hz) for 32 ms, 8 ms, and 2 ms OD responses; plot includes LCD temporal response + hold aperture]
For a faster LCD panel, the temporal MTF is limited by the hold effect: there is diminishing gain in further improving the LCD temporal response. For a given temporal sampling, the only way to reduce hold blur is to reduce the temporal aperture.
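The sinc-shaped temporal MTF above is easy to evaluate directly; a minimal sketch (hold times chosen for illustration, with ~16.7 ms standing in for a full-frame hold at 60 Hz):

```python
import numpy as np

# Temporal MTF of a hold aperture of width T_h seconds: |sinc(f_t * T_h)|.
# Shorter holds (backlight flashing, frame-rate doubling) push the first
# null to higher temporal frequencies, i.e. less attenuation of motion detail.
def hold_mtf(f_t, T_h):
    return np.abs(np.sinc(f_t * T_h))  # np.sinc(x) = sin(pi*x)/(pi*x)

f = np.linspace(0.0, 60.0, 7)          # temporal frequencies in Hz
for T_h in (1 / 60, 0.008, 0.002):     # full-frame hold at 60 Hz, 8 ms, 2 ms
    print(round(T_h * 1e3, 2), "ms:", np.round(hold_mtf(f, T_h), 3))
```

The full-frame hold has its first null at 60 Hz, while the 2 ms aperture stays near unity over the plotted range, matching the slide's point that reducing the aperture, not the response time, is what recovers temporal MTF.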

Analysis of Temporal LCD System Issues Pan et al. 05

Simplified display-perception chain
I_d(x,y,t) → [S&H, h_t(t): LCD/CRT] → I_s(x,y,t) → [smooth pursuit] → I_m(x,y,t) → [HVS lowpass filter] → I_o(x,y,t)
Input: assume the dynamic discrete content I_d(x,y,t) is an image I_c(x,y,t) moving at a constant speed:
(1) I_d(x,y,t) = I_c(x + v_x t, y + v_y t, t)
Sample-and-hold, where h_t(t) is the temporal reconstruction function of an LCD or CRT:
(2) I_s(x,y,t) = I_d(x,y,t) * h_t(t)
Smooth pursuit eye movement, which compensates the motion to make the object still on the retina:
(3) I_m(x,y,t) = I_s(x − v_x t, y − v_y t, t)
Lowpass filter, where Λ_xy(x,y) and Λ_t(t) are the spatial and temporal impulse response functions of the filter:
(4) I_o(x,y,t) = I_m(x,y,t) * Λ_xy(x,y) * Λ_t(t)

The general LCD motion blur model
The input-output relationship of the chain:
I_o(x,y,t) = ∫ [ ∫∫∫ I_c(x − v_x t′ − p, y − v_y t′ − q, t − t′ − t″) Λ_xy(p,q) Λ_t(t″) dp dq dt″ ] h_t(t′) dt′
The spatial and temporal lowpass impulse functions Λ_xy(x,y) and Λ_t(t) are unknown.
Assuming the HVS has the same lowpass impulse functions for LCD and CRT, and using the image perceived on the CRT as a reference:
I_o^LCD(x,y,t) = ∫ I_o^CRT(x − v_x t′, y − v_y t′, t − t′) h_t^LCD(t′) dt′

The general LCD motion blur model
I_o^LCD(x,y,t) = ∫ I_o^CRT(x − v_x t′, y − v_y t′, t − t′) h_t^LCD(t′) dt′
The LCD temporal reconstruction function h_t^LCD(t) blurs both spatially and temporally.
The reconstruction function h_t^LCD(t) can be measured directly or derived from the temporal waveform.
When h_t^LCD(t) is a δ-function, LCD and CRT give the same result (motion blur does not exist).
In general, the model is not in the form of a convolution.
The motion speed v_x and the LCD reconstruction function jointly determine motion blur: the faster the motion, the more blurred the perceived image; the wider the reconstruction function, the more blurred the perceived image.
The reconstruction function is the key.

Blur width calculated by the model
1. Assume a virtual horizontally moving sharp edge (intensity L to L+D) perceived on the CRT.
2. Calculate the perceived edge using the reconstruction function of the LCD:
I_o^LCD(x) = I_o^CRT(x) * h_t^LCD(x / v_x)
3. Calculate the blur width (BW) of the calculated perceived edge, measured between the L+10%D and L+90%D points.
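The three steps above can be sketched numerically for a full-hold reconstruction function (the edge travel per hold time, in pixels, is an assumed illustration value):

```python
import numpy as np

# Perceived edge = ideal step convolved with the eye-trajectory footprint of
# the reconstruction function; for a full hold this footprint is a box of
# width v*T_h pixels. Blur width is measured between the 10% and 90% points.
v_Th = 16          # assumed edge travel per hold time, in pixels
x = np.arange(-100, 100)
step = (x >= 0).astype(float)

hold = np.ones(v_Th) / v_Th                  # box footprint of a full hold
edge = np.convolve(step, hold, mode="same")  # perceived (blurred) edge

def blur_width(profile, lo=0.1, hi=0.9):
    """Distance in samples between the 10% and 90% crossings of a
    monotonically rising edge profile."""
    return int(np.searchsorted(profile, hi) - np.searchsorted(profile, lo))

print(blur_width(edge))   # close to 0.8 * v*T_h for a pure hold aperture
```

The measured 10-90% width comes out near 0.8·v·T_h, consistent with the 0.8 vT hold-blur figure quoted on the later slides.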

Traditional LCD (slow-response blur + hold-type blur)
[Figures: temporal waveforms with linear and sine transitions between frames, and the corresponding reconstruction functions over 0..2T_h]
[Plot: perceived edge intensity vs. position (pixels) for CRT, LCD (first-order hold model), and LCD (sine hold model)]
Blur width: 1.1 vT

The ideal LCD temporal response (hold-type blur only)
[Figures: (a) ideal rectangular reconstruction function of height 1/T and width T_h = T; (b) perceived edge intensity vs. position (pixels) for the ideal LCD vs. the current LCD]
Blur width: 0.8 vT

Hold-type blur vs. slow-response blur
Hold-type + slow-response blur: 1.1 vT
Hold-type blur alone: 0.8 vT
So slow-response blur: 1.1 vT − 0.8 vT = 0.3 vT
Roughly 70% vs. 30%: hold-type blur is the major factor.

Four key proposed solutions
1) Black Data Insertion (BDI) — Hong 04, Kimura 05
2) Backlight Flashing and Scrolling (BF) — Fisekovic 01, Sluyterman 05; adaptive backlight flashing (60 and 120 Hz), Feng 06
3) Frame Rate Doubling (FRD) — Sekiya 02, Kurita 05
4) Motion-Compensated Inverse Filtering (MCIF) — Klompenhouwer 01, 05

Reconstruction functions of the four proposals
[Figures: reconstruction functions over 0..2T for BDI, FRD, BF (flash of width T_1 at the end of each frame period), and MCIF (height 1/T over the full frame)]

Frame Rate Conversion based on Motion Compensation
Input image (60 frame/sec, one frame per 1/60 sec) → converted sequence (120 frame/sec); the in-between images are created by frame rate conversion.
Motion vector estimation: detect the motion vector between frame #1 and frame #2 to estimate the object's motion in between.
Frame interpolation: from frames #1 and #2, create a new picture that shifts the object's position by 1/120 sec along the estimated direction and speed (the interpolated picture).
Input 60p → preprocessing → motion vector estimation → interpolation → up-converted 120p output.
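The two steps above (estimate a motion vector, then place the object halfway along it) can be sketched on a toy 1-D signal with a single global shift; real up-converters use block-based estimation, occlusion handling, etc.

```python
import numpy as np

# Toy motion-compensated 60p -> 120p: estimate a global shift between two
# frames by brute-force search, then synthesize the in-between frame by
# shifting frame 1 halfway along the estimated motion vector.
w = 64
f1 = (np.arange(w) >= 20).astype(float)  # frame #1: edge at x = 20
f2 = np.roll(f1, 8)                      # frame #2: object moved 8 px in 1/60 s

# Crude full-search motion estimation: the shift minimizing the SSD
mv = min(range(-16, 17), key=lambda s: float(np.sum((np.roll(f1, s) - f2) ** 2)))

# Interpolated 1/120 s frame: object placed halfway along the motion vector
interp = np.roll(f1, mv // 2)
print("motion vector:", mv, "px/frame")
```

With an exact global shift the SSD search recovers the true vector, and the interpolated frame holds the edge at the halfway position, which is what removes the hold-type blur at doubled rate.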

Comparison between different approaches (BDI / BF / FRD / MCIF):
Requirement on LCD temporal response: High / Medium / High / No
Requirement on backlight temporal response: No / High / No / No
Other requirement: No / Sync between LCD and backlight / No / Accurate motion estimation
Ghosting artifact: Likely / Likely / No / No
Luminance reduction artifact: Yes / Yes / No / No
Flickering artifact: Yes / Yes / No / No
Reduction of motion blur (the smaller the number, the better): 50% (limited by LCD temporal response) / 25% or less (limited by backlight temporal response) / 50% (limited by LCD temporal response) / depends on motion estimation

Perceptual Motion Sharpening

Motion Sharpening
Ramachandran 74 (observations on blurred movie frames): things tend to look blurred when they are moving fast, but blurred edges look sharper when they are moving than when stationary.
Poor tracking → blurred retina image.
Motion sharpening effect: perceived motion is sharper than the still images, which suggests that perception during smooth pursuit differs from still-image perception.
Less understood, higher-order effect: sharpness constancy vs. deblurring.
If the motion sharpening effect is involved, the previous analysis based on retinal image blur is insufficient.
[Plot: CSFs for different retinal velocities, log visual sensitivity vs. log spatial frequency (cy/deg)]

Motion-induced Blur and Sharpening
Westerink 90: studied perceived sharpness of images with varying degrees of blur, as a function of translational motion speed.

Conclusion: Evaluation of Motion Blur Reduction
Motion blur characterization:
Objective method: measured retina image using a simulated tracking camera, assuming perfect tracking.
Subjective method: compared the perceived blur with the blurred edge of a still image.
Motion blur perception:
The subjective method agrees with the objectively derived motion blur.
Perception of motion blur is similar to perception of still-image blur.
Backlight flashing can significantly reduce the perception of motion blur.

Other Key Studies

Distribution of Moving Object Velocity
Analysis of ITU-R BT.1210-3 test material for HDTV sequences (No. 1 - No. 32). Average: 16.5 deg/sec.
[Histogram: frequency (%) vs. velocity in viewing angle (deg/sec) at 3H viewing distance, bins 0-10 through 50-60; the upper region is where blur caused by the camera can be clearly seen]
In terms of the fastest motion within each sequence, 70% of the sequences fall below 20 deg/sec.
T. Fujine et al.: Real-Life In-Home Viewing Conditions for FPDs and Statistical Characteristics of Broadcast Video Signal, Digest AM-FPD 06

Observer Study of Picture Quality Improvement
[Plot: quality scale vs. velocity in viewing angle (deg/sec, 0-30) for double-rate 50% aperture vs. 100% aperture (hold type), with the threshold of perception and the acceptance limit marked]
T. Kurita: Moving Picture Quality Improvement for Hold-type AM-LCDs, SID 01, 35.1, pp. 986-989 (2001)

Analysis of Methods to Overcome Hold-Blur
[Figure: hold-type drive (ASV, response times 15 ms and 12 ms), pseudo-impulse drive (ASV 7 ms + pseudo-impulse drive, 5 ms double-rate), and motion-compensated 120 Hz double-rate LCD]
Pan, Feng & Daly: "Quantitative Analysis of LCD Motion Blur and Performance of Existing Approaches", ICIP 2005.
(Comparison table of BDI, BF, and FRD requirements and artifacts as on the earlier slide.)

Standardized Metrics

Motion Picture Response Time (MPRT)
Steps to measure MPRT:
1. Move an edge across the screen. The edge is a transition from one gray level to another; 6 levels (in digital counts or L* space) give a total of 30 transitions.
2. Use the pursuit camera to measure the blur width.
3. MPRT is the average blurred edge width (BEW) normalized by the moving speed; also referred to as E-BET (extended blurred edge time).
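The averaging and normalization in steps 1-3 reduce to a few lines; the per-transition widths below are invented placeholders, where real values would come from the pursuit camera.

```python
# MPRT sketch: average the blurred-edge width over all ordered gray-to-gray
# transitions, then normalize by the edge speed to get a time (E-BET).
levels = [0, 51, 102, 153, 204, 255]   # 6 test levels -> 30 ordered transitions
speed_ppf = 16.0                       # assumed edge speed, pixels per frame
frame_s = 1 / 60                       # 60 Hz frame period

# bew[(a, b)] would be the measured blurred-edge width for transition a -> b;
# a flat dummy value of 14 px stands in for pursuit-camera measurements.
bew = {(a, b): 14.0 for a in levels for b in levels if a != b}

mean_bew = sum(bew.values()) / len(bew)            # average BEW in pixels
mprt_ms = mean_bew / speed_ppf * frame_s * 1e3     # normalize by speed -> ms
print(len(bew), "transitions, MPRT =", round(mprt_ms, 2), "ms")
```

The 6-level grid indeed yields 30 ordered transitions, and dividing the pixel width by pixels-per-frame converts the spatial blur back into the time units the metric reports.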

MPRT basics and issues
Motion blur can be characterized by the motion picture response time (MPRT) metric, which is measured with a tracking camera that simulates the eye tracking of a moving edge. The system is expensive and time-consuming.
Theoretically, motion blur is a purely temporal issue that can be uniquely determined by the temporal response function (via LTI: linear systems theory).
Nonlinearities (of both LCD and HVS) are the only reasons for failure of LTI; small-amplitude signals may be within the linear-region approximation of both.
Still, MPRT does not consider HVS effects (too much normalization).
In Q&A with Someya (Mitsubishi) at IDW 05, he thought that MPRT from temporal measurement is only accurate for hold displays, but not for impulse displays such as displays using backlight flashing and black data insertion.
At SID 06, Klompenhouwer described advanced motion blur measurement schemes as inventing a complex system to measure a simple temporal response.

Motion Blur Measurement with Simulated Tracking Camera
The simulated retina image is the integration of a sequence of temporally captured frames along the motion-tracking trajectory:
E_e(x) = ∫ E_LCD(x − vt) dt ≈ Σ_{i=1}^{N} E_CCD(x − i·v·Δt, i) Δt
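The discrete sum above can be sketched with synthetic captures: a hold-type display shows the same edge during every sub-frame of one display frame, while the tracked position keeps moving (the sub-frame count and speed are illustration values).

```python
import numpy as np

# Simulated tracking-camera integration: E_e(x) = sum_i E(x - i*v*dt) * dt.
# Sum the high-speed captures shifted back along the tracking trajectory.
n_sub, width, v = 10, 64, 1             # sub-frames per display frame, px, px/sub-frame
edge = (np.arange(width) >= 20).astype(float)  # the frame held on the display

retina = np.zeros(width)
for i in range(n_sub):
    retina += np.roll(edge, -i * v)     # shift against the eye's tracking motion
retina /= n_sub                          # average luminance on the simulated retina
# retina now ramps over ~n_sub*v pixels: the hold-type blur of a tracked edge
```

Because every sub-frame shows the identical held image while the shift advances, the averaged result is a ramp rather than a step, reproducing the hold blur that a fixed camera would never see.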

Captured Frames in one Display Frame Period
Via high-speed digital camera (900 fps): frames 1-10.

Summary

Summary
Basic Spatiotemporal Vision
Spatiotemporal Vision with Eye Movements
LCD Motion Issues: Temporal Response, Overdrive, Temporal Rendering Function
Observer study of LCTV motion sharpness matching

What s next : Other Temporal Artifacts

Other Temporal Artifacts
Motion Blur and Sharpness: as discussed
Flicker: mentioned; asymmetrical temporal response; periphery & brightness issues
Judder: stepper-like motion, seen with slower steady motions; CRT's fast temporal response is not desired
Multiple Edges: examples from backlight flashing + mismatched overdrive
Hollywood is happy with 24 fps! (looks cinematic)
o DCI
o Aliasing control via cameraman & editors

What does real-world Movement really look like? Human eye is poor for seeing motion

What does real-world Movement really look like?
Multiple edges can be seen with some types of tracking & saccade combinations??
Marcel Duchamp (1912), Nude Descending a Staircase

What does real-world Movement really look like?
Multiple edges can be seen with some types of tracking & saccade combinations??

What does real-world Movement really look like?
Motion blur can be seen if attentive??

What does real-world Movement really look like? Motion blur can be seen if attentive?? Csuri (1984) @ OSU

What does real-world Movement really look like?
Human eye is poor for seeing motion (Muybridge, 1870s →).
Multiple edges can be seen with some types of tracking & saccade combinations??
Motion blur can be seen if attentive??
Experience and attention have large effects on motion perception.

Understanding Motion Blur & LCD TV
References:
Spatiotemporal analysis of displayed perceived object motion:
Frequency-domain analysis (Watson 85, Girod 93, Klompenhouwer 04)
Spatiovelocity analysis (Watanabe 68, Kelly 79, Adelson & Bergen 85, Daly 98, Laird 06)
Time-domain analysis (Adelson & Bergen 85, Pan 05)
Motion blur perception (Ramachandran 74, Parker 81, Westerink 90, Bex 95, Takeuchi 05, Laird 06)
Motion blur in LCD: caused by the hold-type temporal rendering method of LCDs combined with the smooth pursuit eye movement of the human visual system (HVS) (Lindholm 96, Parker 97, Kurita 98, 01, Klompenhouwer 05, Pan 05)
Motion blur reduction approaches:
Temporal overdrive (Okumura 01, Sekiya 02)
Temporal aperture reduction: black data insertion, BDI (Hong 04, Kimura 05); backlight flashing (Fisekovic 01, Sluyterman 05)
Frame rate doubling, FRD (Sekiya 02, Kurita 05)
Motion-compensated inverse filtering, MCIF (Klompenhouwer 01)

Thank you for your interest and patience

Reference: Capability of Conference Projector (8-bit ramp)

Bit Depth, Contrast, and Spatial CSF
Display with: 10 bits, C_max = 0.95, L_max = 100 cd/m² (CSF peak = 500; upper 5%)

Active Matrix Drive Structure
[Diagram: X and Y electrodes (X1, X2, Y1, Y2) with an active element (transistor) at each pixel]
Transistors are attached to each subpixel for precise and faster switching of its gray value.
X and Y electrodes are formed on the same substrate as the TFT array. Switching signals are applied to the Y electrodes; video is applied to the X electrodes.

LCD Tonescale
[Plot: transmission vs. applied voltage (VT-curve, device-level tonescale) and its derivative, with corresponding code values 0-255]
Fig. 5: S-curve of the LCD and its derivative.
The inherent LCD tonescale is S-shaped, like film.

LCD System Tonescale
Gamma correction via LUT is usually performed for a quality desktop LCD monitor; for LCTV there are other goals.

Salient Characteristics of LCD: Brightness
No brightness dependence on area: independent backlight + transmissive modulation. You can see the difference when displaying images whose entire area is bright.
Large-area brightness: CRT & Plasma TV 58 cd/m², AQUOS LCD TV 470 cd/m²
Peak brightness: CRT & Plasma TV 230 cd/m², AQUOS LCD TV 470 cd/m²

Assessment along Image Quality Dimensions
[Radar charts over: power consumption (W), life (hours), pixel density (PPI), set price (yen/inch), size (inch), response time (msec), contrast (dark), contrast (bright, 300 lx), brightness (all screen, cd/m²), brightness (peak, cd/m²), color gamut (vs. NTSC); comparing LCD (ASV), CRT, PDP, and OLED (EL)]
High-speed response technology; wide-color backlight system / high-transmissivity panel; optically optimized panel (higher contrast).

Aside: Spatial Superadditivity of CRT
Superadditivity is a nonlinearity that makes it hard to determine the MTF (and to apply linear systems theory in image processing applications).
These measurements were made on a b/w CRT without a shadow mask, which would introduce phase complications as well.
Data from Naiman 92 (SPIE HVEI).

Salient Characteristics of LCD: Low Reflection
Comparison of reflection: at a room brightness of 300 lux (normal brightness), the liquid crystal's low reflectance greatly reduces mirror effects. (Plasma or CRT TV vs. LCD TV)

LC TV Status 2006 (usual sales caveat)
[Radar chart: viewing angle, power consumption, response time (room temp.), response time (low temp.), contrast (bright), contrast (dark), brightness (avg.), brightness (peak), color gamut (EBU), black shift, color depth (10 bit); NOW vs. 2004 vs. CRT]
Other remaining challenges: high dynamic range, wide color gamut, high frame rate, cost.

Eccentricity and Periphery
Eccentricity: position in the visual field, in degrees. 0 degrees eccentricity refers to where your eyes are pointed, corresponding to the fovea in the retina; 90 degrees eccentricity is near the edge of the visual field (periphery).
The spatial bandwidth of the eye is reduced in the periphery: cones are densely packed in the fovea (high spatial sampling → high bandwidth) and become less dense as eccentricity increases.
[Plot: CSFs for the eccentricity range 0-25 deg, contrast sensitivity vs. spatial frequency (cy/deg)]

Properties of Visual System: Eccentricity
How eccentricity changes across the image as viewing distance changes (left), assuming the viewer looks at the center of the image (pixel = 320).
Eccentricity-model predictions of how visual spatial sensitivity varies across the image (right).
[Plots for a 640x480 image: eccentricity (deg) vs. image location (pixel), and sensitivity vs. image location, for viewing distances of 1H, 2H, 4H, and 6H, plus one 2H curve fixating at pixel 160]

Eye Movement Model: maximum smooth pursuit velocity
Rotation into the spatiotemporal CSF with the eye movement model.
Max smooth pursuit velocity is condition-dependent, with some unknowns: predictive vs. non-predictive motion; oscillation (eye movements can actually lead); field of view.
[Plots: ST CSF with eye movement model for maximum smooth pursuit velocities of 20 deg/sec, 80 deg/sec, and 30 deg/sec (Westheimer); log temporal frequency (cy/sec) vs. log spatial frequency (cy/deg)]

Eye Movement Model: maximum smooth pursuit velocity
Rotation into the spatiotemporal CSF with the eye movement model.
Max smooth pursuit velocity is condition-dependent: predictive vs. non-predictive motion; oscillation (eye movements can actually lead); field of view.
If pursuit could be as high as 120 deg/sec, the result looks like the traditional ST CSF, but with higher flicker fusion.
[Plot: ST CSF with eye movement model, max 120 deg/sec]

Measured eyetracks Transfer relative eye positions to degrees:

Eye tracking data of a person tracking a moving Gabor velocity variations due to instrumentation or physiology?

LCD Flickering
Asymmetric temporal responses lead to HSF (high spatial frequency) flicker.
[Figure: moving image pattern; the signal seen at a single pixel; input signal and input mean; display responses (slow, fast, faster) and the resulting display mean luminance over time]

SLA's Hi-speed Image Measurement System
The camera captures image sequences at at least 4 times the LC TV frame rate (4 x 60). {Dalsa, Uniq}
One computer drives both the camera and the LC TV to synchronize data capture and processing.
The algorithm & software developed by SLA make data capture and processing 100% automatic and real-time.
The measurement is fast (about 2 minutes for each temperature) and the hardware system is compact.

Application for comparing types of Overdrive
[Plots: non-model-based OD vs. model-based OD, evaluated with first-order and second-order dynamic gamma (output level vs. driving/target level)]

Definition of second-order dynamic gamma
d_n = f(d_{n-2}, z_n, z_{n+1})
[Figure: first-order vs. second-order dynamic gamma; temporal response over frames z_{n-2}, z_{n-1}, z_n, z_{n+1}]
Second-order DG models the motion of more complex images, so it is a more realistic measure of motion blur for real video.

Examples: second-order responses
[Plots: temporal response (output level) vs. time in frames]

Analysis of Backlight Flashing

Further analysis for backlight flashing
With backlight flashing, motion blur is greatly reduced; the moving image looks clear and shows more realism.
Objective evaluation: the display output was captured with a 240 Hz camera; the retina image was derived by integration along the motion trajectory (assuming perfect eye tracking, a la Laird).
Subjective evaluation: quantify the amount of motion blur relative to the perception of still-image blur at various backlight flashing widths.

Predicting Retinal Image
Derive the retina image from the captured frames by integration along the motion trajectory.

No Overdrive
No overdrive, no flashing vs. no overdrive, flashing at 1/8 duty cycle: multiple edges!

Using Overdrive
Overdrive, no flashing vs. overdrive, flashing at 1/8 duty cycle.

Motion Sharpening Study
Hammett & Bex 95 tried to parse out:
Sharpness constancy: you don't see blur until the error signal of the blur is visible.
Deblurring: neural processing generates high spatial frequencies.
Blur-matching experiment at 1 cy/deg: moving sine vs. blurrable square wave; unadapted vs. adapted to high SF (adaptation assumed to knock out, or at least reduce, high SFs). A decrease on the plot means more motion sharpening.
Sharpness constancy (the top-down version) should not be affected by high-SF adaptation; the data indicate some deblurring processing.
Experiment flaw: sine waves don't blur with LPF.

Observer Study on Perceived Motion Blur with Backlight Flashing

Observer study for motion blur matching with an edge
Subjective experiment: compare the perceived motion blur with still-image blur; characterize the effectiveness of backlight flashing in reducing motion blur; can determine whether a motion sharpening effect occurs.
Experiment conditions: 4 moving speeds (27, 53, 80, 94 degrees per second); 4 flashing duty cycles (1 (hold), 1/2, 1/4, and 1/8); 6 observers.
Feng, HVEI 06 (SPIE Electronic Imaging Conference)

Observer study task
A perfectly sharp moving edge is simulated; the measured retina blur assumes perfect tracking:
bluredge(x) = step(x) * tri(x/v) * rect(x/v_b) * gaus(x/σ)   (* denotes convolution)
A Gaussian edge is simulated with edge width parameter σ. The observer adjusts σ to increase the blur width and/or adjusts v to reduce the blur width.
The moving edge is compared against a still edge.

Results of Motion Blur Matching
[Scatter plot: subjectively matched width vs. modeled width; regression y = 0.9286x − 0.9576, R² = 0.9919]
Modeled width = simulated retinal blur, assuming perfect eye tracking; overdrive with different combinations of speed and flashing duty cycles (including no flashing).
The regression line shows good correlation with a slope of 0.93, very close to unit slope: the perceived blur closely matches the retinal blur.
Motion sharpening effects did not occur (too much masking due to the edge?).

Summary of motion blur study
LCD motion blur is modeled using Fourier analysis: it is the combination of display temporal lowpass filtering and eye tracking.
To reduce motion blur, improve the display temporal MTF: temporal overdrive to improve the LCD temporal response; reduce the temporal aperture function to reduce motion blur due to hold.
Implemented overdrive and backlight flashing on an LCD with an LED backlight; the LED can be flashed at various duty cycles in sync with the LCD driving.

Distribution of Text Scroll Velocity in TV Broadcasting
From 71 programs of BS-digital broadcasting. Average: 13.8 deg/sec; max: 35.9 deg/sec.
[Histogram: frequency (%) vs. velocity in viewing angle (deg/sec) at 3H, bins 0-10 through 50-60]
Telop (scrolling on-screen text, which doesn't contain camera blur) is mostly distributed below 20 deg/sec.
Fujine et al.: Real-Life In-Home Viewing Conditions for FPDs and Statistical Characteristics of Broadcast Video Signal, Digest AM-FPD 06

MPRT measurement equipment
Photal (Otsuka Electronics) MPRT-2:
A pursuit CCD camera enables evaluation close to that of eye perception.
A unique time-based normalization algorithm permits comparison among different types of displays.
Measurement and analysis processes are computer-controlled, including the moving-picture display on the sample FPD.
Moving-picture characteristic estimation for arbitrary bitmap pictures.
Expanded lineup for many purposes: a color CCD camera system in addition to the conventional luminance-filtered CCD camera system; a pursuit color camera for small displays in addition to the conventional system for medium and large displays.