Video processing (course 5P530), G. de Haan: scope of the course and schedule of lectures.


Slide 2 - Video processing (G. de Haan): Scope of the course.

Slide 3 - Scope of the course ("This is our field"): video exists in many formats that all have to be handled: film and video sources at 24, 25 and 30 Hz; broadcast at 50 Hz 2:1 and 60 Hz 2:1 interlace; CIF, QCIF and 1-25 Hz web video; PC formats at 72, 85 and 95 Hz (VGA, SVGA, XVGA, etc.); 50 Hz 2:1, 60 Hz 2:1 and 100 Hz 2:1 CRT television; flat-panel displays (FPDs) at 50-600 Hz. The field combines theory (most repetition), applications, tools and compression.

Slide 4 - Scope of this course:
- Make image sequences more beautiful: image enhancement (sharpness, contrast, colour, noise/artifact reduction).
- Adapt image sequences to specific display types: picture quality and display principles (CRT, LCD, PDP, MMD, ...), picture-rate conversion, de-interlacing, resolution up-conversion.
- Important tools: motion estimation, object detection.
- We need some basic background first: basics of digital video processing.

Slide 5 - Outline of the basics part:
- Digital video basics: sampling image data, the sampling theorem, alias in images, the spectrum of a video signal (for stationary and for moving images).
- Relate the parameters of a video format to characteristics of the HVS: how many samples horizontally and vertically, how many (amplitude) levels are required for digital processing, how many images per second (flicker, motion portrayal), the relation between spatial and temporal frequencies for moving images, and what about colour?
- Image filtering: linear (FIR, IIR, inverse filter) and non-linear (rank-order, adaptive, neighbourhood selection).

Slide 6 - Schedule of lectures 5P530: Week 1: Basics (Ch 2, 3); Week 2: Video displays (Ch 9); Week 3: Filtering (Ch 4); Week 4: Picture-Rate Conversion (Ch 7). Applications: Week 5: De-interlacing (Ch 8); Week 6: Motion Estimation (Ch 10); Week 7: Object Detection (Ch 11); Week 8: X.

Slide 7 - Preparation for the examination.

Slide 8 - Available material:
- Lectures: 2x2 hours per week during 7 weeks.
- Book "Digital Video Post Processing", version June 2010 or later, except Chapter 6; questions in every chapter to exercise for the exam; available from Marja de Mol, Flux 4.131 (Eu. 50).
- Demo software (VidProc), downloadable from w3.ics.ele.tue.nl/~dehaan/bookssoftware (password).
- Slides: w3.ics.ele.tue.nl/~dehaan/slides/
- Your notes (hardly necessary when you learn from the book and do the exercises).
- You may bring the book to the exam!

Analysing signals in the frequency domain (the spectrum):

Slide 9 - How to determine the presence of a sine wave? Our signal: s_i = sin(ω1·t).
1. Determine the correlation of the input signal s_i with all possible analysis sine waves: multiply s_i with sin(ω·t) for ω_min < ω < ω_max.
2. The maximum absolute correlation, c, occurs for ω = ω1. [Figure: log10(c) versus ω, peaking at ω1.]

Slide 10 - What if we don't know the phase of our input signal? Our signal: s_i = cos(ω1·t + α).
1. Determine the correlation of s_i with all possible analysis sine waves: multiply with sin(ω·t), ω_min < ω < ω_max.
2. Determine the correlation of s_i with all possible analysis cosine waves: multiply with cos(ω·t), ω_min < ω < ω_max.
3. Combine the two: since sin²α + cos²α = 1, the correlation magnitude c = sqrt(c_sin² + c_cos²) no longer depends on the phase α, only on the amplitude of the input signal (which is 1 in the example).
4. The maximum magnitude again occurs for ω = ω1.

Slide 11 - What really matters: we can analyse every signal with this method. Interesting: a signal is completely defined by its magnitude and phase spectra, which implies that an inverse operation exists to obtain the signal from its spectrum. The analysis is called Fourier analysis (Jean Baptiste Joseph Fourier, 1768-1830, France).

Slide 12 - Some example signals:
- s_i = sin(ω1·t): a single spectral line at ω1.
- s_i = block(ω1·t): spectral lines at ω1, 3ω1, 5ω1, 7ω1, ... Consequence: we can write a block wave (in fact every signal) as a series of sine waves (with phase information).
- s_i = delta(ω1·t): spectral lines of equal magnitude at all multiples of ω1.
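
The correlation recipe of slides 9 and 10 can be written out directly. Below is a minimal numpy sketch (my own illustration, not part of the course software; the signal length, frequency grid and test frequency are arbitrary choices) that correlates an input cosine of unknown phase with analysis sine and cosine waves and locates the peak of the correlation magnitude.

    import numpy as np

    # Test signal: cos(w1*t + alpha) with unknown phase (slide 10).
    fs = 1000.0                          # samples per second (arbitrary)
    t = np.arange(0.0, 1.0, 1.0 / fs)    # one second of signal
    w1 = 2 * np.pi * 50.0                # true angular frequency (50 Hz)
    alpha = 0.7                          # unknown phase
    s_i = np.cos(w1 * t + alpha)

    # Analysis frequencies between w_min and w_max (here 1..200 Hz).
    freqs_hz = np.arange(1.0, 200.0, 0.5)
    c = np.empty_like(freqs_hz)
    for k, f in enumerate(freqs_hz):
        w = 2 * np.pi * f
        c_sin = np.mean(s_i * np.sin(w * t))   # correlation with a sine
        c_cos = np.mean(s_i * np.cos(w * t))   # correlation with a cosine
        c[k] = np.hypot(c_sin, c_cos)          # magnitude: the phase drops out

    print("peak at %.1f Hz" % freqs_hz[np.argmax(c)])   # -> peak at 50.0 Hz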

Slide 13 - Analysing signals in the amplitude domain (the histogram).

Slide 14 - The image histogram: for each pixel value, count the number of pixels having that value (possibly group values into bins and count the pixels per bin). The result is the histogram, or grey-level distribution, of the image: H(k) = number of pixels with grey level k. If we normalise the result by dividing by the number of pixels in the image, N, we get the normalised histogram, which is an estimate of the probability density function (p.d.f.): P(k) = H(k)/N, with Σ_k P(k) = 1.

Slide 15 - The image histogram: [Figure: example image with its histogram, probability of grey level (number of pixels) versus grey level.]

Slide 16 - The image histogram (continued): an unbalanced histogram does not fully utilise the dynamic range of the system. Under-exposed image: concentrated on the dark side. Over-exposed image: concentrated on the bright side. Low-contrast image: concentrated in a narrow range. A balanced histogram is more pleasant and gives a rich look. [Figure: P(k) for under-exposed, over-exposed, low-contrast and balanced images.]

Slide 17 - Modulation, 1-D sampling and the spectrum.

Slide 18 - Amplitude modulation as a basis to understand sampling. From geometry: cos(a+b) = cos a·cos b − sin a·sin b and cos(a−b) = cos a·cos b + sin a·sin b, so cos(a+b) + cos(a−b) = 2·cos a·cos b. Consequence: multiplying a signal cos(b) with a carrier 2·cos(a) yields cos(a+b) + cos(a−b). [Figure: spectrum with the input component at b and the modulation products at a−b and a+b.]
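
As a small illustration of slide 14, the sketch below (my own example, not course code) computes H(k) and the normalised histogram P(k) = H(k)/N for an 8-bit image and checks that the P(k) sum to 1.

    import numpy as np

    def histogram(img, bins=256):
        """Grey-level distribution H(k) and its normalised version P(k)."""
        h = np.bincount(img.ravel(), minlength=bins)   # H(k): #pixels with level k
        p = h / img.size                               # P(k) = H(k) / N
        return h, p

    # Example: a random 8-bit test image as a stand-in for a real picture.
    img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
    H, P = histogram(img)
    print(H[:4], P.sum())    # P(k) sums to 1: an estimate of the p.d.f.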

Slide 19 - The difference between sampling and modulation.
Modulation: cos(b) × 2·cos(a) → cos(a+b) + cos(a−b).
Sampling: cos(b) × delta(a) → cos(b) + cos(a+b) + cos(a−b) + cos(2a+b) + cos(2a−b) + ..., because the sampling pulse train delta(a) = cos(0) + cos(a) + cos(2a) + ... contains all harmonics of the sampling frequency a. [Figure: spectra with repeats around 0, a, 2a.]

Slide 20 - This is the effect of sampling with sampling frequency a: [Figure: baseband spectrum around 0 plus repeat spectra around a and 2a.]

Slide 21 - Reconstruction of the input signal through post-filtering: [Figure: the stopband of the reconstruction filter suppresses the repeat spectra around a and 2a.]

Slide 22 - Reconstruction of the input signal: Input → Sampling (Fs) → LPF → Output. [Figure: the reconstruction low-pass filter passes only the baseband.]

Slide 23 - What is the maximum frequency that can be reconstructed? As soon as b > a/2, the repeat spectra overlap the baseband and we get a reconstruction error (alias)! Sampling theorem: a continuous signal can be reconstructed from its discrete representation, provided its bandwidth B is smaller than Fs/2 (= a/2).

Slide 24 - Prevention of alias and reconstruction: Input → LPF → Sampling (Fs) → LPF → Output. Both the pre-filter (anti-alias) and the post-filter (reconstruction) have a passband that stops at Fs/2, the Nyquist frequency (Harry Nyquist, 1889-1976).
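
Slides 19 to 24 can be checked numerically: if a cosine at frequency b is sampled at rate a and b > a/2, the sample values are identical to those of a lower-frequency cosine at a − b. A minimal sketch (frequencies chosen arbitrarily for the example, not taken from the slides):

    import numpy as np

    a = 100.0            # sampling frequency (Hz)
    b = 70.0             # input frequency above the Nyquist limit a/2 = 50 Hz
    n = np.arange(256)
    t = n / a            # sampling instants

    sampled = np.cos(2 * np.pi * b * t)
    alias   = np.cos(2 * np.pi * (a - b) * t)   # a 30 Hz cosine at the same instants

    # Identical sample values: the 70 Hz input folds back to 30 Hz.
    print(np.allclose(sampled, alias))          # True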

Slide 25 - The sampling theorem: we have a continuous signal and sample it (with period Ts = 1/fs) to obtain a discrete representation. Sampling theorem: we can reconstruct the continuous signal from its discrete representation, provided it contained no frequencies above half the sampling frequency (fs/2).

Slide 26 - 1-D sampling grids and spectra: [Figure: a sampling grid with pitch 1/fs and the corresponding spectrum with repeats spaced fs apart.]

Slide 28 - Video processing (G. de Haan): The 3D spectrum of a video signal.

Slide 29 - A video signal describes a series of images. [Figure: a stack of frames along the time axis.]

Slide 30 - A multidimensional world in a single electric signal? Colour varies as a function of the horizontal position (Hpos) and the vertical position (Vpos), and it is also not constant over time. How do we transmit all of that, from transmission to reception, as a single time-varying voltage?

Slide 31 - How to transmit/store multi-dimensional data in a 1-D signal? Scanning as a solution: traverse image n line by line, then image n+1, and so on; the signal is continuous along a line, while the lines (and later the pixels) are spatially discrete. As a consequence, we have to choose an appropriate number of lines per image and number of images per second.

Slide 32 - Therefore, we go to a 3-D spectrum analysis. Frequencies are expressed relative to the picture: a low vertical frequency, e.g. 1 cycle per picture height (c/ph); a higher vertical frequency in c/ph; a higher horizontal frequency in cycles per picture width (c/pw); and a high temporal frequency, e.g. 0.5 cycle per picture period (c/pp). [Figure: example patterns for each case.]

Slide 33 - The 3D video spectrum, an interpretation: [Figure: spectrum along the axes f_h (c/pw), f_v (c/ph) and f_t (c/pp).]

Slide 34 - What if a pattern has a non-zero f_h AND f_v is also non-zero? It is a diagonal pattern.

Slide 35 - What if a horizontal pattern moves? [Figure: the spectral component acquires a temporal frequency f_t (c/pp).]

Slide 36 - Discrete image sequences (from 1-D to 3-D sampling).

Slide 37 - What is a black & white image? Brightness is a continuous function of the horizontal and vertical position. [Figure: brightness versus H-pos and V-pos.]

Slide 38 - However, since we have to multiplex into a 1-D signal, brightness becomes a function of the discrete horizontal and vertical position. [Figure: brightness along discrete positions.]

Slide 39 - Complete analogy: frequencies in time or in the H-pos (space) dimension; the spatial frequency F is expressed in cycles per picture width (c/pw). [Figure: brightness versus H-position for increasing F.]

Slide 40 - Complete analogy: frequencies in time or in the V-pos (space) dimension; the spatial frequency F is expressed in cycles per picture height (c/ph).

Slide 41 - The higher the frequency (c/pw), the finer the detail.

Slide 42 - The sampling theorem: we have a continuous signal and sample it (period Ts) to obtain a discrete representation; we can reconstruct the continuous signal from its discrete representation, provided it contained no frequencies above half the sampling frequency.

Slide 43 - Consequences: the sampling frequency in the horizontal dimension (c/pw) must be at least twice the highest horizontal frequency we want to reconstruct on the screen (fs_h > 2·F_h,max). In other words: the number of samples (pixels) on every row (line) must be at least 2 × the number of cycles of the finest horizontal sine wave we want to display (at least 2 pixels are needed to make a cycle).

Slide 44 - Consequences: the sampling frequency in the vertical dimension (c/ph) must be at least twice the highest vertical frequency we want to reconstruct on the screen (fs_v > 2·F_v,max). In other words: the number of samples (pixels) in every column must be at least 2 × the number of cycles of the finest vertical sine wave we want to display (at least 2 pixels are needed to make a cycle).

Slide 45 - 2-D sampling grid and spectrum: [Figure: the 1-D situation (repeat spectra spaced fs apart, grid pitch 1/fs) extended to 2-D: grid pitches 1/f_h and 1/f_v, a baseband bounded by f_h/2 and f_v/2, and first repeat spectra centred on the points of the reciprocal grid.]

Slide 46 - Alias artifacts in images.

Slide 47 - What happens if the number of samples is insufficient? [Figure: the repeat spectra overlap the baseband; the lowest frequency is the most visible (alias).]

Slide 48 - Illustration of alias (moiré) when a resolution wedge interferes with the line scanning pattern. The lowest frequency is the most visible (alias). [Figure: test wedge with moiré.]

Slide 49 - Old test image used to check the resolution of TV. [Figure: resolution test card.]

Slide 50 - This is what happens if we halve the number of lines. [Figure.]

Slide 51 - What does alias look like on a natural image? [Figure: the same picture at 250x400 pixels and at 125x200 pixels: features disappear, edges show serration, and the lowest (alias) frequency dominates.]

Slide 52 - The required sampling grid.

Slide 53 - The human eye: the image is spatially sampled by the retina! [Figure: anatomy of the eye.]

Slide 54 - Resolution is not constant over the field of vision.
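
Slides 47 to 51 (together with the pre-filter of slide 24) can be mimicked in a few lines: halving the number of lines by simply dropping every other line lets a pattern above the new Nyquist limit alias into a coarse pattern, while averaging pairs of lines first (a crude anti-alias pre-filter) clearly attenuates it. A sketch with an artificial fine line pattern (my own toy example, not from the course material):

    import numpy as np

    lines = 300
    f_orig = 0.4                              # cycles per line: a fine vertical pattern
    y = np.arange(lines)
    col = np.sin(2 * np.pi * f_orig * y)      # one image column

    # (a) Drop every other line (no pre-filter): 0.4 c/line becomes 0.8 c/sample
    #     and aliases to a coarse 0.2 c/sample pattern.
    dropped = col[::2]

    # (b) Average pairs of lines first: a crude anti-alias pre-filter that
    #     attenuates the fine pattern before decimation.
    filtered = 0.5 * (col[::2] + col[1::2])

    def dominant(x):
        spec = np.abs(np.fft.rfft(x))
        k = np.argmax(spec[1:]) + 1           # strongest component, skipping DC
        return k / len(x), spec[k] / (len(x) / 2)

    print(dominant(dropped))    # ~(0.2, 1.0): a strong alias at a low frequency
    print(dominant(filtered))   # same alias frequency, but clearly weaker amplitude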

Slide 55 - Measuring the limits of vision: a contrast grating is used to analyse contrast sensitivity. We can vary: the spatial frequency (bar spacing) in cycles per degree (c/deg), the contrast (amplitude) and the orientation.

Slide 56 - What is the required sampling grid? (vertically): we can resolve about 30 cycles/degree. At a viewing distance of 6 × the picture height, the vertical viewing angle is about 10 degrees, so the finest pattern on the screen is 10 × 30 = 300 cycles. Sampling theorem: we need at least 600 samples (lines). Is this conclusion correct? What exactly does the sampling theorem state? (A small worked-out sketch follows after slide 60 below.)

Slide 57 - The sampling theorem (repeated): a continuous signal can be reconstructed from its discrete representation, provided it contained no frequencies above half the sampling frequency.

Slide 58 - 1-D sampling grids and spectra (repeated). [Figure: grid pitch 1/fs and repeat spectra spaced fs apart.]

Slide 59 - Prevention of alias and reconstruction: Input → LPF → Sampling (Fs) → LPF → Output. But in a real video chain, where are the anti-alias pre-filter and where is the reconstruction filter? At the camera: the CCD imaging device; at the output: a matrix type of display.

Slide 60 - What if the reconstruction filters are absent? Without a post-filter the repeat spectra are ALSO available, so in addition to a frequency A we also see its repeat B. Since cos(A) + cos(B) = 2·cos((A+B)/2)·cos((A−B)/2), the difference frequency (B−A)/2 is also visible as a beat, and a coarse beat is more visible than fine detail. Consequence: the Kell factor, K = 0.7; only frequencies up to about 0.35 × the sampling frequency are used.
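
The count on slide 56 is easy to reproduce; the sketch below is just that arithmetic (the 2·atan(0.5/6) step is my own derivation of the "about 10 degrees"; the slide simply states the rounded value):

    import math

    cycles_per_degree = 30     # finest grating we can resolve (slide 56)
    distance_in_H     = 6      # viewing distance: 6 x picture height

    # Vertical angle subtended by the screen: 2*atan(0.5/6) ~ 9.5 degrees,
    # which the slide rounds to "about 10 degrees".
    angle = math.degrees(2 * math.atan(0.5 / distance_in_H))
    angle_rounded = 10

    cycles = angle_rounded * cycles_per_degree   # finest pattern: 300 cycles
    lines  = 2 * cycles                          # sampling theorem: >= 600 lines
    print(round(angle, 1), cycles, lines)        # 9.5 300 600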

Slide 61 - Kell's factor in practice. [Figure: usable band from 0 to about 0.35·fs instead of the full 0 to 0.5·fs.]

Slide 62 - What is the required sampling grid? (horizontally): once we have calculated the required number of lines, n_l = 600, at the given viewing distance (6 × H), the number of pixels per line, n_p, follows as R × n_l, with R the aspect ratio. For R = 4/3 this means we need 800 pixels/line if Kell's factor also applies in the horizontal dimension (pixelated displays), and 560 otherwise. For R = 16/9 we need 1067 (746 for non-pixelated displays).

Slide 64 - Video processing (G. de Haan): Short recapitulation.

Slide 65 - Schedule of lectures 5P530: Week 1: Basics (Ch 2, 3); Week 2: Video displays (Ch 9); Week 3: Filtering (Ch 4); Week 4: Picture-Rate Conversion (Ch 7); Week 5: De-interlacing (Ch 8); Week 6: Motion Estimation (Ch 10); Week 7: Object Detection (Ch 11); Week 8: X.

Slide 66 - Preparation for the examination. Available material: lectures (2x2 hours during 7 weeks); the book "Digital Video Post Processing", edition Dec. 2014, except Chapter 6, with questions in every chapter to exercise for the exam, available from Marja de Mol, Flux 4.131 (Eu. 50); demo software (VidProc), downloadable from www.ics.ele.tue.nl/~dehaan/bookssoftware (password); slides: www.ics.ele.tue.nl/~dehaan/slides/; your notes (hardly necessary when you learn from the book and do the exercises). You may bring the book to the exam!
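
Slide 62's numbers follow from the same logic; a small sketch that reproduces them (the aspect ratios and the Kell factor of 0.7 are the values given on the slides):

    n_l  = 600      # required number of lines (vertical, slide 56)
    kell = 0.7      # Kell factor (slide 60)

    for R in (4 / 3, 16 / 9):
        n_p_pixelated = R * n_l          # Kell factor also applies horizontally
        n_p_analog    = kell * R * n_l   # non-pixelated (continuous) display
        print(round(n_p_pixelated), round(n_p_analog))
    # prints 800 560 for R = 4:3 and 1067 747 for R = 16:9
    # (the slide quotes 1067 and 746; the difference is rounding).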

Slide 67 - The sampling theorem (recapitulation): we can reconstruct a continuous signal from its discrete representation, provided it contained no frequencies above half the sampling frequency (sampling period Ts = 1/fs).

Slide 68 - Reconstruction of the input signal through post-filtering (recapitulation). [Figure: the stopband of the reconstruction filter suppresses the repeat spectra around a and 2a.]

Slide 69 - What is the required sampling grid? (vertically, recapitulation): we can resolve about 30 cycles/degree; at 6 × the picture height the viewing angle is about 10 degrees; the finest pattern on the screen is 10 × 30 = 300 cycles; the sampling theorem then asks for at least 600 samples (lines).

Slide 70 - The temporal sampling grid (flicker).

Slide 71 - Video is discrete in the temporal domain. More pictures per second affects: motion portrayal and flicker.

Slide 72 - 3D frequencies in video. [Figure: axes f_h (c/pw), f_v (c/ph) and f_t (c/pp), with the directions associated with motion and flicker indicated.]

Slide 73 - Frequency response in the temporal domain. [Figure: temporal contrast sensitivity versus temporal frequency.]

Slide 74 - It depends on the brightness level and on the viewing angle. The flicker threshold shifts to higher frequencies in the periphery of the visual field, which allows us to rapidly recognise approaching danger.

Slide 75 - Consequences for the design of a video system: the upper limit of the temporal contrast-sensitivity curve determines the picture rate required to prevent visible flicker. A TV can have a lower picture rate (smaller viewing angle) than a PC monitor. Our low sensitivity to slow brightness variations implies that we hardly notice the ageing of displays, even if the brightness drops by 50%.

Slide 76 - The temporal sampling grid (how many images per second?).

Slide 77 - What about the temporal sampling density? Temporal response of the average viewer + sampling theorem → 100 Hz? Unfortunately, there is a complication.

Slide 78 - A moving scene causes (high) temporal frequencies. [Figure: axes f_h (c/pw), f_v (c/ph) and f_t (Hz).]

Slide 79 - Relation between spatial and temporal frequencies: a pattern with horizontal spatial frequency Fx that moves produces a temporal frequency proportional to its speed; in the figure, velocities v1 and v2 turn the same spatial frequency into temporal frequencies f1 and f2. [Figure: Fx, v1, v2, f1, f2.]

Slide 80 - The sampling theorem and video systems: no motion → temporal frequency of 0 Hz; after temporal sampling with e.g. 50 Hz → components at n·50 Hz.

Slide 81 - The sampling theorem and video systems: slow motion → temporal frequency of e.g. 5 Hz; after temporal sampling with 50 Hz → components at 5 + n·50 Hz.

Slide 82 - The sampling theorem and video systems: a fine, fast-moving pattern → temporal frequency of e.g. 120 Hz; after temporal sampling with 50 Hz → components at 120 + n·50 Hz.

Slide 83 - So, do we need a very high picture rate for motion? A fine moving pattern gives a high temporal frequency f_t. If f_t > ½ × picture rate → temporal alias! However, only occasional alias is visible (backwards-turning carriage wheels). How is this possible? (A small numerical sketch follows after slide 84 below.)

Slide 84 - Object tracking with the eye. [Figure: position on screen versus time for an object tracked by the eye.]
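
Slides 79 to 83 combine two facts: a pattern with horizontal frequency F_x (c/pw) that moves over v picture widths per second has a temporal frequency f_t = v · F_x, and sampling at the picture rate folds f_t back into the band ±(picture rate)/2. A sketch of that folding (the frequencies are the examples used on slides 80-82; the folding formula is my own compact way of writing the repeat spectra):

    def alias_frequency(f_t, picture_rate):
        """Fold a temporal frequency back into the baseband +-picture_rate/2."""
        return (f_t + picture_rate / 2) % picture_rate - picture_rate / 2

    picture_rate = 50.0                      # Hz (slides 80-82)
    for f_t in (0.0, 5.0, 120.0):            # stationary, slow, fine fast pattern
        print(f_t, "Hz ->", alias_frequency(f_t, picture_rate), "Hz")
    # 0 -> 0 Hz, 5 -> 5 Hz, 120 -> 20 Hz: the 120 Hz pattern is indistinguishable
    # from a 20 Hz one after 50 Hz sampling (the wagon-wheel effect).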

Slide 85 - A moving ball on the retina of the tracking eye. [Figure: position on screen and position on the retina versus picture number n−1, n, n+1, n+2 for a tracking eye.]

Slide 86 - Temporal frequency above the Nyquist limit of the imaging system: temporal sampling with e.g. 50 Hz while the pattern has a temporal frequency of e.g. 120 Hz. [Figure.]

Slide 87 - Conclusion for the picture rate: ambiguity arises if f_t > ½ × picture rate. If the high f_t is due to motion, correct eye tracking removes the alias. This requires that coarser patterns (below Nyquist) are part of the same moving object (demo). In practice there is little ambiguity, and the picture rate is chosen to prevent flicker only.

Slide 88 - The visible part of the video spectrum is defined by the eye motion.

Slide 89 - Let's look at the 2D spectrum. [Figure.]

Slide 90 - And at the diamond-shaped contour plot. [Figure: contour plot with the spatial frequency Fs in c/degree on one axis.]

Slide 91 - Viewer response for stationary and moving images. [Figure: spatial frequency Fs (cycles/degree, up to about 30) versus temporal frequency Ft (Hz, up to about 50), showing the video spectrum and the repeat spectrum; the tilt of the spectrum is proportional to the speed.]

Slide 92 - Brightness quantisation.

Slide 93 - Number of levels relevant for digital video processing. [Figure: A/D converter principle: the input is compared against a ladder of reference levels; the comparator outputs are coded into binary words (digital out).]

Slide 94 - How many levels are required for digital processing? Experiments show that we can distinguish about 200 levels in an image; we shall use an 8-bit representation of luminance. [Figure: the same picture quantised with 8, 4 and 2 bits; a sketch of such a requantisation follows below, after slide 96.]

Slide 95 - The dynamic range of human vision is enormous! [Figure: luminance scale from starlight, moonlight, office light and daylight up to a flashbulb, roughly 10^-6 to 10^+8, with the scotopic range, the photopic range and the display range indicated.]

Slide 96 - Adaptation takes a while though (Ferwerda et al., 1996).
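
Slide 94's comparison is easy to reproduce by requantising 8-bit luminance to fewer bits. A minimal sketch (my own, using a random stand-in picture rather than a real image):

    import numpy as np

    def requantize(img8, bits):
        """Keep only the 'bits' most significant bits of an 8-bit image."""
        step = 2 ** (8 - bits)                     # quantisation step size
        return (img8 // step) * step + step // 2   # representative level per bin

    img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in picture
    for bits in (8, 4, 2):
        q = requantize(img, bits)
        print(bits, "bit:", len(np.unique(q)), "levels")   # 256, 16 and 4 levels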

Slide 97 - Weber-Fechner law: perceived brightness as a function of the light level (approximately logarithmic: equal relative steps in luminance appear as equal steps in brightness).

Slide 98 - This is what it means, and how it is measured: [Figure: just-noticeable luminance steps dL1 at brightness levels dB1, dB2 along a continuous luminance axis; the step size increases with brightness, so the highest accuracy is needed in the dark areas.]

Slide 99 - So, this is what we need: [Figure: quantized output brightness versus continuous input luminance, with a step size that increases with brightness.]

Slide 100 - Brightness as a function of the video signal (CRT gamma): the required non-uniform step size occurs almost automatically, due to the gamma of the CRT.

Slide 101 - Combined effect of gamma and the Weber-Fechner law: CIE 1976 lightness: B = 116·(L/Ln)^(1/3) − 16 for L/Ln > 0.008856, and B = 903.3·(L/Ln) below that. [Figure: output (normalized) versus input (normalized, 0-100), showing B = 116·(input/100)^(1/3) − 16, B = 116·(L/100)^(1/3) − 16 with L = 100·(input/100)^2.8, and B = 903.3·(input/100) near black.]

Slide 102 - Consequences for quantization in the ADC: on gamma-corrected video, 256 equidistant levels (8 bit) are just enough to guarantee invisible quantisation. On linear video (e.g. directly from the camera, or the signal driving a plasma display panel), 8 bits is insufficient in the dark areas of the picture; with equidistant quantization, 10 up to 14 bits are necessary (1024-16K levels).
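
The curves of slide 101 can be evaluated directly. The sketch below applies the CIE 1976 lightness formula from the slide to a linear signal and to a signal that first passes through the CRT gamma of 2.8; that pairing (linear video versus gamma-corrected video on a CRT) is my reading of the figure, and the steepness near black is what drives the bit-depth conclusion of slide 102.

    import numpy as np

    def cie_lightness(L, Ln=100.0):
        """CIE 1976 lightness of a luminance L (formula on slide 101)."""
        r = L / Ln
        return np.where(r > 0.008856, 116.0 * np.cbrt(r) - 16.0, 903.3 * r)

    signal = np.linspace(0.0, 100.0, 11)     # video signal, 0..100 percent

    # Gamma-corrected video: the CRT turns it into L = 100*(signal/100)^2.8,
    # and the resulting lightness roughly follows the signal itself.
    B_gamma_corrected = cie_lightness(100.0 * (signal / 100.0) ** 2.8)

    # Linear video: the signal is the luminance, so lightness rises very
    # steeply near black; equidistant 8-bit steps are too coarse there.
    B_linear = cie_lightness(signal)

    print(np.round(B_gamma_corrected, 1))
    print(np.round(B_linear, 1))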

Slide 104 - Video processing (G. de Haan): Quantisation and the frequency domain.

Slide 105 - Quantisation is not visible in all picture parts: errors in the HF part of the spectrum are less visible, and the frequency of the quantization error depends on the content. [Figure: the same picture at 4 bit and at 8 bit.]

Slide 106 - Dithering to remove DC errors: noise is added to the input before the quantizer (input + noise → quantizer → output), so the modified value is quantized instead of the original one; averaged over neighbouring pixels the output then follows the input, and the quantisation error becomes noise instead of a fixed (DC) offset. [Figure: modified value versus quantized value, 4-bit example.]

Slide 107 - Increase the perceived number of grey levels: error diffusion. Simulate intermediate grey levels by introducing high-frequency patterns. [Figure: flat patches at levels 32, 64, 96, 127 rendered from 8-bit input.]

Slide 108 - Error diffusion (noise shaping): move LF errors to HF. The quantization error, i.e. the difference between the modified (desired) value and the quantized value, is weighted and distributed over not-yet-processed pixels through an error-feedback filter H (RGB input → quantizer → e.g. 3-bit RGB output). A common choice for H is the Floyd-Steinberg filter, which, following the scanning direction, sends 7/16 of the error to the pixel to the right of the current pixel X, and 3/16, 5/16 and 1/16 to the three neighbours on the next line.
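
Slide 108's error-diffusion loop, with the Floyd-Steinberg weights, looks roughly like the sketch below (a straightforward single-channel implementation of my own; serpentine scanning and other refinements are omitted):

    import numpy as np

    def floyd_steinberg(img, bits=3):
        """Quantise to 2**bits levels, diffusing the error to unprocessed pixels."""
        levels = 2 ** bits - 1
        out = img.astype(float) / 255.0
        h, w = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                new = round(old * levels) / levels       # coarse quantisation
                out[y, x] = new
                err = old - new                          # quantisation error
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16        # right neighbour
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16        # next line
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return np.clip(out * 255, 0, 255).astype(np.uint8)

    ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # test gradient
    print(len(np.unique(floyd_steinberg(ramp))))
    # only a handful of output levels, yet local averages follow the ramp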

Slide 109 - Dithering and error diffusion compared. [Figure.]

Slide 110 - Dithering and error diffusion compared. [Figure.]

Slide 111 - Image compression based on frequency-dependent accuracy: more pixels per image means better resolution, but also more storage capacity. [Figure: the same picture at 400 pixels/line x 300 lines and at 133 pixels/line x 100 lines.]

Slide 112 - Quantisation is not visible in all picture parts: errors in the HF part of the spectrum are less visible, and the frequency of the quantization error depends on the content. [Figure: 4 bit versus 8 bit.]

Slide 113 - JPEG principle: divide the image into blocks. Multiply each pixel block B, e.g.
B = [ p11 p12 p13 p14 ; p21 p22 p23 p24 ; p31 p32 p33 p34 ; p41 p42 p43 p44 ],
with an invertible matrix, the DCT, giving F = DCT·B. Not all entries of F are equally important for the image quality, so the less important entries are quantized more coarsely to obtain F_q from F. (A sketch of this principle follows after slide 114 below.)

Slide 114 - JPEG compression: [Figure: a 400 pixels/line picture with the data reduced to 20% and to 10%.]
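
The "multiply the block with an invertible matrix, then quantise the less important entries more coarsely" idea of slide 113 can be sketched in a few lines. This shows only the principle, not JPEG itself: the 8x8 block size, the orthonormal DCT matrix and the frequency-dependent step sizes below are illustrative choices of mine.

    import numpy as np

    N = 8
    k = np.arange(N)
    # Orthonormal DCT-II matrix C, so that F = C @ B @ C.T is the 2-D DCT of B.
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)

    B = np.fromfunction(lambda i, j: 8.0 * (i + j), (N, N))   # a smooth pixel block
    F = C @ B @ C.T                                           # invertible transform

    # Frequency-dependent accuracy: coarser steps for higher frequencies.
    step = 1.0 + 4.0 * (k[None, :] + k[:, None])
    Fq = np.round(F / step) * step                            # quantised coefficients

    B_rec = C.T @ Fq @ C                                      # inverse transform
    print(np.abs(B - B_rec).max())   # modest error for this smooth block;
                                     # fine detail would be hit harder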

Slide 115 - Colour vision and the sampling grid.

Slide 116 - Colour and the eye.

Slide 117 - The human eye: the retina has two types of receptors, rods and cones. We perceive colour only with the cones, at higher brightness levels. Rods and cones are at the back of the retina; the nerves are at the front.

Slide 118 - What about colour? Does it need the same sampling grid as luminance? Colour: tri-chromatic vision; the impression is a function of the wavelength. [Figure: sensitivity of the three cone types versus wavelength.]

Slide 119 - Cone and rod vision (Ferwerda et al., 1996). [Figure.]

Slide 120 - It is a continuum, and the resolution also changes. [Figure.]

Slide 121 - Tri-stimulus colour vision phenomena: the ratio of red, green and blue stimulation determines the perceived colour and its strength.

Slide 122 - To see yellow, no wavelength of 575 nm is necessary: the same colour perception is caused by a mix of light of 650 nm and 530 nm, with no 440 nm at all.

Slide 123 - Consequences of colour vision for technology.

Slide 124 - Additive colour mixing: used in displays and in fluorescent lamps. Based on a red, a green and a blue primary = the RGB model; sometimes white is added for improved efficiency (RGBY). The primaries are defined by the emission of phosphors or LEDs (lamps, CRT, PDP, OLED), or by colour filters (LCD).

Slide 125 - Subtractive colour mixing: used in printing and photography. Based on a cyan, a magenta and a yellow primary = the CMY model; sometimes a black primary is added for improved black = the CMYK model.

Slide 126 - Colour vision and display technology.

Slide 127 - Additive colour mixing: only 3 primary colours are required to create every colour sensation.

Slide 128 - Colour synthesis: optical superposition (projection).

Slide 129 - Colour synthesis: temporal synthesis (colour sequential). [Demo: "Circle colseq".]

Slide 130 - Colour synthesis: spatial synthesis (CRT, LCD, PDP): the shadow mask (colour CRT) and the matrix display panel.

Slide 131 - Colour matrixing.

Slide 132 - Changing coordinates with linear matrixing: other colour coordinates are obtained as 3 linear combinations of R, G and B, i.e.
[X Y Z]^T = [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ] · [R G B]^T.
This is a linear operation (reversible). It is useful for compatibility with black-and-white TV (matrix to Y + U + V) and useful for bandwidth reduction:
Y =  0.30·R + 0.59·G + 0.11·B
U = -0.1686·R - 0.3314·G + 0.50·B
V =  0.50·R - 0.4214·G - 0.0786·B
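
Slide 132's matrix can be applied, and inverted again since the operation is linear and reversible, as in the short sketch below (the coefficients are the ones on the slide; the example colour is arbitrary):

    import numpy as np

    # Y, U, V as linear combinations of R, G, B (coefficients from slide 132).
    M = np.array([[ 0.30,    0.59,    0.11  ],
                  [-0.1686, -0.3314,  0.50  ],
                  [ 0.50,   -0.4214, -0.0786]])

    rgb = np.array([0.8, 0.5, 0.2])          # an arbitrary example colour
    yuv = M @ rgb                            # forward matrixing
    rgb_back = np.linalg.solve(M, yuv)       # reverse: solve the linear system

    print(np.round(yuv, 4), np.allclose(rgb, rgb_back))   # ... True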

Slide 133 - Hue and saturation in the YUV colour space. [Figure: hue in the U-V plane.]

Slide 134 - Video bandwidth: [Figure: the original picture, the picture with reduced colour bandwidth, and the picture with reduced luminance bandwidth.] Such a reduction is not possible in the red, green, blue colour space, as these signals ALL carry luminance information.

Slide 135 - Required bandwidth in luminance and chrominance: colour versus luminance contrast sensitivity: the colour sensitivity is higher at lower frequencies, and high colour frequencies are less visible (8:5:3).

Slide 136 - RGB versus YUV 4:2:2: RGB needs equal bandwidth for all three components, i.e. 3 × the full bandwidth; YUV can use unequal bandwidths, i.e. 1 × the full bandwidth (Y) plus 2 × half the bandwidth (U and V). [Figure: sampling grids and spectra with pitch 1/fs.]

Slide 137 - PAL and NTSC also profit from the reduced UV bandwidth: U and V are modulated in quadrature on the colour sub-carrier, with bandwidth of UV ≈ 0.25 × bandwidth of Y. [Figure: H(f), luminance and chrominance bands versus frequency.] When digitized we speak of a 4:1:1 signal (4 corresponds to 13.5 MHz).
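
Slides 136 and 137 amount to giving U and V a lower bandwidth and sample rate than Y. The toy sketch below subsamples the colour-difference signals of one scan line by a factor of 4 (the 4:1:1 ratio of slide 137), using a plain average as a stand-in for the low-pass filter; the pixel values are invented for the example.

    import numpy as np

    factor = 4                               # 4:1:1 -> U, V at 1/4 of the Y rate
    y = np.random.rand(720)                  # luminance samples of one line
    u = np.random.rand(720) - 0.5            # colour-difference samples
    v = np.random.rand(720) - 0.5

    def subsample(x, n):
        """Average groups of n samples (crude low-pass) and keep one per group."""
        return x.reshape(-1, n).mean(axis=1)

    def upsample(x, n):
        """Sample-and-hold reconstruction back to the full pixel rate."""
        return np.repeat(x, n)

    u_rec = upsample(subsample(u, factor), factor)
    v_rec = upsample(subsample(v, factor), factor)

    # Y keeps its full bandwidth; the error left in U and V is far less visible
    # than the same loss in luminance would be (slides 134-135).
    print(y.size, u_rec.size, float(np.abs(u - u_rec).mean()))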