Multimedia Communication Systems 1 MULTIMEDIA SIGNAL CODING AND TRANSMISSION DR. AFSHIN EBRAHIMI


1 1 Multimedia Communication Systems 1 MULTIMEDIA SIGNAL CODING AND TRANSMISSION DR. AFSHIN EBRAHIMI

2 Basics: Video and Animation 2 Video and Animation Basic concepts Television standards MPEG Digital Video Broadcasting Computer-based animation

3 Video Signal Representation Video signal representation includes: Visual representation Transmission Digitization Several important measures for the visual representation: 1. Vertical Detail and Viewing Distance Smallest detail that can be reproduced is a pixel Ideally: One pixel for every detail of a scene In practice: Some details fall between scanning lines Kell factor: only 70% of the vertical details are represented, due to the fact that some of the details of the scene fall between the scanning lines (determined by experience, measurements) 3

4 Visual Representation The Kell factor of 0.7 is independent of the way of scanning, i.e. whether the scanning is progressive (sequential scanning of lines) or interlaced (alternate scanning: line 1, line 3, ..., line n-1, then line 2, line 4, ...) 4 (Figure: scan lines crossing a detail of which only 2 out of 3 components can be represented)

5 Visual Representation 2. Horizontal Detail and Picture Width Picture width for conventional television service is 4/3 of the picture height 5 Geometry of the television image is based on the aspect ratio, i.e. the ratio of the picture width W to the height H (W:H). The conventional aspect ratio for television is 4:3 = 1.33; modern systems use 16:9 = 1.77. The viewing distance D determines the angle α subtended by the picture height H. This angle is measured by the ratio of the picture height to viewing distance: tan(α) = H/D.

6 Visual Representation 6 3. Total Detail Content of the Image Total number of picture elements = number of vertical elements × number of horizontal elements, where the number of horizontal elements = vertical resolution × aspect ratio = 525 × 4/3 = 700 (for NTSC) 4. Perception of Depth (3D impression) In natural vision: angular separation of the images received by the two eyes Television image: perspective appearance of objects, choice of focal length of camera lens, changes in depth of camera focus

7 Visual Representation 5. Luminance and Chrominance Usually not RGB, but YUV (or a variant) is used 7 6. Temporal Aspects of Illumination Motion is represented by a rapid succession of slightly different still pictures (frames). A discrete sequence of pictures is perceived as a continuous sequence (due to a fortunate weakness of the human visual system). Between frames, the light is cut off briefly. For a realistic presentation, two conditions are required: the repetition rate must be high enough to guarantee smooth motion, and the persistence of vision must extend over the interval between flashes 7. Continuity of Motion To perceive continuous motion, the frame rate must be higher than 15 frames/sec. For smooth motion the frame rate should be 25-30 frames/sec.

8 Visual Representation 8 8. Flickering At low repetition rates a periodic fluctuation of brightness, a flicker effect, arises. How to avoid this disturbing effect? A first trick: display each picture several times. E.g. 16 pictures per second give a very disturbing flicker effect; displaying every picture 3 times yields a refresh rate of 3 × 16 = 48 Hz To avoid flicker, a refresh rate of at least 50 Hz is needed Computer displays achieve a 70 Hz refresh rate by the use of a refresh buffer The TV picture is divided into two half-pictures by line interleaving A refresh rate of 25 Hz (PAL) for the full TV picture therefore requires a scan rate of 2 × 25 Hz = 50 Hz.

9 Visual Representation 9. Temporal Aspect of Video Bandwidth The eye requires a video frame to be scanned every 1/25 second Scan rate and resolution determine the video bandwidth needed for transmission During one cycle of video frequency (i.e. 1 Hz) at most two horizontally adjacent pixels can be scanned Vertical resolution and frame rate determine the horizontal (line) scan frequency: vertical lines (b) × frame rate (c) = horizontal scan frequency Horizontal resolution and scan frequency determine the video bandwidth: horizontal lines (a) × scan frequency / 2 = video bandwidth, i.e. video bandwidth = a × b × c / 2 (since 2 horizontally adjacent pixels can be represented simultaneously during one cycle of video frequency) 9 A computer system with a resolution of a = 1312 and b = 800 pixels (of which only a part is visible) and a frame rate c = 100 Hz needs: a horizontal scan frequency of 800 × 100 Hz = 80 kHz and a video bandwidth of 1312 × 80 kHz / 2 = 52.48 MHz
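The bandwidth relation above can be checked with a few lines of code; a minimal sketch in Python, using only the example values and formulas stated on this slide.

```python
# Sketch: video bandwidth from the relations on this slide.
#   horizontal scan frequency = vertical lines (b) * frame rate (c)
#   video bandwidth           = horizontal lines (a) * scan frequency / 2

a = 1312   # horizontal resolution (pixels per line), example value from the slide
b = 800    # vertical resolution (lines per frame)
c = 100    # frame rate in Hz

scan_frequency_hz = b * c                        # 800 * 100 Hz = 80 kHz
video_bandwidth_hz = a * scan_frequency_hz / 2   # 1312 * 80 kHz / 2 = 52.48 MHz

print(f"horizontal scan frequency: {scan_frequency_hz / 1e3:.1f} kHz")
print(f"video bandwidth:           {video_bandwidth_hz / 1e6:.2f} MHz")
```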

10 Digitization 10 For image processing or transmission, the analog picture or video must be converted to a digital representation. Digitization consists of: Sampling, where the color/grey level in the picture is measured at an M × N array of pixels. Quantizing, where the values from a continuous range are divided into k intervals. For a satisfactory reconstruction of a picture from quantized samples, 100 or more quantizing levels may be needed Very often 256 levels are used, which are representable within 8 bits A digital picture consists of an array of integer values representing pixels (as in a simple bitmap image format)
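A minimal sketch of the quantization step described above, in Python with NumPy; the 256-level, 8-bit choice follows the slide, while the sample picture values are made up.

```python
import numpy as np

levels = 256   # 8 bits per pixel, as on the slide

# Hypothetical 4x4 "analog" picture: grey values in the continuous range [0, 1).
analog = np.random.default_rng(0).random((4, 4))

# Uniform quantization: map the continuous range onto the integers 0 .. 255.
digital = np.minimum((analog * levels).astype(int), levels - 1).astype(np.uint8)

print(digital)   # an array of integers, i.e. a simple bitmap
```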

11 Video Controller Standards (Figure: table of video controller/display standards with horizontal resolution A, vertical resolution B and total pixel count C = A × B; table not reproduced) 11

12 Television - NTSC 12 Video format standard for conventional television systems in the USA (since 1954): NTSC (National Television Systems Committee) Picture size: 525 rows, aspect ratio 4:3, refresh rate of 30 frames/sec Uses a YIQ signal (in principle nothing but a slight variation of the YUV scheme) Y = 0.30 R + 0.59 G + 0.11 B I = 0.60 R - 0.28 G - 0.32 B Q = 0.21 R - 0.52 G + 0.31 B Composite signal for transmission of the signal to receivers: Individual components (YIQ) are composed into one signal Basic information consists of luminance information and chrominance difference signals Use appropriate modulation methods to eliminate interference between luminance and chrominance signals
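The YIQ conversion can be written as a small matrix product; a sketch in Python using the standard NTSC coefficients given above, with RGB values assumed to be normalized to [0, 1].

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (the coefficients shown on the slide).
RGB_TO_YIQ = np.array([
    [0.30,  0.59,  0.11],   # Y: luminance
    [0.60, -0.28, -0.32],   # I: in-phase chrominance
    [0.21, -0.52,  0.31],   # Q: quadrature chrominance
])

def rgb_to_yiq(rgb):
    """Convert one RGB triple (components in 0..1) to YIQ."""
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

print(rgb_to_yiq([1.0, 1.0, 1.0]))   # white: Y = 1.0, I = Q = 0.0
```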

13 A Typical NTSC Encoder 13

14 NTSC 14 Required bandwidth to transmit NTSC signals is 4.2 MHz, 6 MHz including sound The luminance (Y) or monochrome signal occupies most of this 4.2 MHz bandwidth The I-signal uses 1.5 MHz of bandwidth, the Q-signal 0.5 MHz The I-signal is In-phase with the 3.58 MHz color subcarrier, the Q-signal is in Quadrature (90 degrees out of phase) with the 3.58 MHz color subcarrier.

15 Television - PAL 15 PAL (Phase Alternating Line, invented by W. Bruch/Telefunken, 1963) Frame rate of 25 Hz, delay between frames: 1000 ms / 25 frames per sec = 40 ms 625 lines, aspect ratio 4:3 Quadrature amplitude modulation similar to NTSC Bandwidth: 5.5 MHz Phase of the R-Y (V) signal is reversed by 180 degrees from line to line, to reduce color errors that arise from amplitude and phase distortion during transmission. The chrominance signal C for PAL transmission can be represented as C = U sin(ωt) ± V cos(ωt), where ω is the color subcarrier frequency and the sign of the V component alternates from line to line.

16 Television Standards 16

17 Television Standards 17 -i interlaced, -p progressive (non-interlaced) More modern Television Standards: SDTV (Standard Definition TV): low resolution, aspect ratio not specified EDTV (Enhanced Definition TV): minimum of 480 lines, aspect ratio not specified HDTV (High Definition TV): minimum of 720 lines, aspect ratio of 16:9

18 Enhanced Definition TV (EDTV) EDTV systems are conventional systems which offer improved vertical and/or horizontal resolution by some tricks Comb filters improve horizontal resolution by more than 30% according to the literature Separating black-and-white from color information eliminates rainbow effects while extending resolution Progressive (non-interlaced) scanning improves vertical resolution Insertion of blank lines in between active lines, which are filled with information from: the line above, the line below, or the same line in the previous picture 18 Another EDTV development is IDTV (Improved Definition Television) Intermediate level between NTSC and HDTV (High-Definition Television) in the U.S. Improves the NTSC image by using digital memory to double the scanning lines from 525 to 1050. One 1050-line image is displayed in 1/60 sec (60 frames/sec). Digital separation of chrominance and luminance signals prevents cross-interference

19 High Definition Systems (HDTV) HDTV is characterized by: Higher resolution, approx. twice as many horizontal and vertical pixels as conventional systems 24-bit pixels Bandwidth 5-8 times larger than for NTSC/PAL Aspect Ratio: 16/9 = 1.78 Preferred Viewing Distance: between 2.4 and 3.3 meters Digital Coding is essential in the design and implementation of HDTV: Composite Coding (sampling of the composite analog video signal, i.e. all signal components are converted together into a digital form) is the straightforward and easiest alternative, but: cross-talk between luminance and chrominance in the composite signal composite coding depends on the television standard sampling frequency cannot be adapted to the bandwidth requirements of the components sampling frequency is not coupled with the color carrier frequency 19

20 High Definition Systems (HDTV) 20 Alternative: Component Coding (separate digitization of the various image components): the more important luminance signal is sampled with 13.5 MHz, the chrominance signals (R-Y, B-Y) are sampled with 6.75 MHz each. Global bandwidth: 13.5 MHz + 6.75 MHz > 19 MHz Luminance and chrominance signals are quantized uniformly with 8 bits. Due to the high data rates (1050 lines × 600 pixels/line × 30 frames/sec), the bandwidth is approx. 19 MHz and therefore substandards (systems which need a lower data rate) have been defined for transmission.

21 High Definition Systems (HDTV) Worldwide, 3 different HDTV systems are being developed: United States Full-digital solution with 1050 lines (960 visible) and a scan rate of about 60 Hz Compatible with NTSC through IDTV Europe HD-MAC (High Definition Multiplexed Analog Components) 1250 lines (1000 visible), scan rate of 50 Hz Halving of the lines (625 of 1250) and of full-picture motion allows simple conversion to PAL The HD-MAC receiver uses digital image storage to show full resolution and motion Japan MUSE is a modification of the first NHK (Japan Broadcasting Company) HDTV standard MUSE is a Direct-Broadcast-from-Satellite (DBS) system, where the 20 MHz bandwidth is reduced by compression to the 8.15 MHz available on the satellite channel Full detail of the 1125-line image is retained for stationary scenes; with motion the definition is reduced by approx. 50% 21

22 High Definition Systems (HDTV) 22 -i interlaced, -p progressive (non-interlaced)

23 Television - Transmission (Substandards) HDTV data rates for transmission: total picture elements = horizontal resolution × vertical resolution USA: 720,000 pixels × 24 Bits/pixel × 60 frames/sec = 1036.8 MBits/sec! Europe: 870,000 pixels × 24 Bits/pixel × 50 frames/sec = 1044 MBits/sec! HDTV with 1920 × 1080 pixels, 24 Bits/pixel, 30 frames/sec: approx. 1.5 GBits/sec! Reduction of the data rate is unavoidable, since the required rates do not fit the standard capacities provided by broadband networks (e.g. 155 or 34 MBits/sec) Different substandards for data reduction are defined: 23

24 Television: Transmission Further reduction of data rates is required for picture transmission Sampling gaps are left out (only visible areas are coded): Luminance has 648 sample values per line, but only 540 of them are visible. Chrominance has 216 sample values per line, but only 180 are visible. 575 visible lines: (540 + 180 + 180) samples/line × 575 lines/frame = 517,500 samples/frame 517,500 samples/frame × 8 Bits/sample × 25 frames/sec = 103.5 MBits/sec Reduction of vertical chrominance resolution: Only the chrominance signals of every second line are transmitted. 575 visible lines: (540 + 90 + 90) samples/line × 575 lines/frame = 414,000 samples/frame 414,000 samples/frame × 8 Bits/sample × 25 frames/sec = 82.8 MBits/sec Different source coding: Using an intra-frame ADPCM with 3 instead of 8 Bits/sample: 414,000 samples/frame × 3 Bits/sample × 25 frames/sec = 31.05 MBits/sec 24
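The arithmetic of these three reduction steps can be reproduced directly; a minimal Python sketch using only the numbers given (or derivable) on this slide.

```python
# Sketch: data-rate reduction steps for 625-line component video (PAL timing).

def mbit_per_sec(samples_per_frame, bits_per_sample, frames_per_sec):
    return samples_per_frame * bits_per_sample * frames_per_sec / 1e6

visible_lines = 575

# Step 1: code only the visible samples (540 Y + 180 Cr + 180 Cb per line).
s1 = (540 + 180 + 180) * visible_lines          # 517,500 samples/frame
print(mbit_per_sec(s1, 8, 25))                  # 103.5 Mbit/s

# Step 2: halve the vertical chrominance resolution.
s2 = (540 + 90 + 90) * visible_lines            # 414,000 samples/frame
print(mbit_per_sec(s2, 8, 25))                  # 82.8 Mbit/s

# Step 3: intra-frame ADPCM with 3 instead of 8 bits per sample.
print(mbit_per_sec(s2, 3, 25))                  # 31.05 Mbit/s
```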

25 Compression Techniques 25 The data rate is still very high: video should be about 1.5 MBits/sec to fit within CD technology, and audio should be on the order of 100 kbit/sec per channel. Hence the need for further compression techniques, e.g. MPEG for video and audio

26 Classification of Applications Dialogue Mode Applications Interaction among human users via multimedia information Requirements for compression and decompression: End-to-end delay lower than 150 ms End-to-end delay of 50 ms for face-to-face dialogue applications 26 Retrieval Mode Applications A human user retrieves information from a multimedia database Requirements: Fast forward and backward data retrieval with simultaneous display Fast search for information in multimedia databases Random access to single images and audio frames with an access time less than 0.5 second Decompression should be possible without a link to other data units in order to allow random access and editing

27 Dialogue and Retrieval Mode Requirements for both dialogue and retrieval mode: Supporting scalable video in different systems Format must be independent of frame size and video frame rate Support of various audio and video data rates This will lead to different quality, thus data rates should be adjustable Synchronization of audio and video data Lip synchronization Economy (i.e. reasonably cheap solutions): Software realization: cheap, but low speed and low quality Hardware realization (VLSI chips): more expensive (at first), but high quality Compatibility It should be possible to generate multimedia data on one system and to reproduce the data on another system Programs available on CD can be read on different systems 27

28 Encoding Mechanisms for Video Basic encoding techniques as used in JPEG 28 Differential encoding for video 1. For newscasts, video telephone applications, and soap operas The background often remains the same for a long time Very small difference between subsequent images Run-length coding can be used 2. Motion compensation Blocks of N × N pixels are compared in subsequent images Useful for objects moving in one direction, e.g. from left to right Other basic compression techniques: Color Look-Up Tables (CLUT) for data reduction in video streams Often used in distributed multimedia systems Silence suppression for audio Data are only encoded if the volume level exceeds a certain threshold Can be interpreted as a special case of run-length encoding
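A minimal sketch of differential encoding combined with run-length coding, in Python; the two greyscale frames and the run-length scheme are illustrative assumptions, not part of any standard.

```python
import numpy as np

def run_length_encode(values):
    """Very simple run-length code: a list of (value, run_length) pairs."""
    out, run_val, run_len = [], values[0], 1
    for v in values[1:]:
        if v == run_val:
            run_len += 1
        else:
            out.append((int(run_val), run_len))
            run_val, run_len = v, 1
    out.append((int(run_val), run_len))
    return out

# Two hypothetical 8-bit greyscale frames with a static background:
prev = np.zeros((4, 8), dtype=np.int16)
curr = prev.copy()
curr[1, 2:5] = 200                    # a small object appears

# Differential encoding: only the change w.r.t. the previous frame is coded.
diff = (curr - prev).flatten()
print(run_length_encode(diff))        # long zero runs compress very well
```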

29 MPEG Growing need for a common format for representing compressed video and audio for data rates up to 1.5 Mbit/sec (typical rate of CD-ROM transfer: 1.2 Mbit/sec) Moving Pictures Expert Group (MPEG) Generic Approach can be used widely Maximum data rate for video in MPEG is very high: 1,856,000 bit/sec Data rates for audio between 32 and 448 Kbit/sec Video and audio compression of acceptable quality Suitable for symmetric as well as asymmetric compression: Asymmetric compression: more effort for coding (once) than for decoding (often) Symmetric compression: equal effort for compression and decompression, restricted end-to-end delay (e.g. interactive dialogue applications) 29

30 MPEG Today MPEG-1: Coding for VCD (Video CD) quality, data rate of about 1.5 Mbit/sec MPEG-2: Super-set of MPEG-1, rates up to 8 Mbit/sec, can do HDTV MPEG-4: Coding of objects, not frames; lower bandwidth (multimedia for the web and mobility) MPEG-7: Allows multimedia content description (ease of searching) MPEG-21: Content identification and management MP3: For coding audio only (MPEG Audio Layer 3) 30

31 The MPEG Family 31

32 First: MPEG-1 The exact image format of MPEG is defined in the image preparation phase (which is similar to JPEG) Video is seen as a sequence of images (video frames) Each image consists of 3 components (YUV format, called Y, CB, CR) The luminance component has twice as many samples in the horizontal and vertical axes as the other two (chrominance) components: 4:2:0 scheme Resolution of the luminance component: maximal 768 × 576 pixels (8 bit per pixel) The data stream includes further information, e.g.: Aspect ratio of a pixel (14 different pixel aspect ratios provided), e.g. 1:1 (square pixel), 16:9 (European and US HDTV) etc. Image refresh frequency (number of images per second; 8 frequencies between 23.976 Hz and 60 Hz defined, among them the European standard of 25 Hz) The encoding is basically as in JPEG: DCT, quantization, entropy encoding, but considering several images 32

33 Compression Steps 33 Subsampling of chrominance information - human visual system is less sensitive to chrominance than to luminance information only 1 chrominance pixel for each 2 × 2 neighborhood of luminance pixels Image preparation - form blocks of 8 × 8 pixels and group them into macro blocks Frequency transformation - discrete cosine transform converts an 8 × 8 block of pixel values to an 8 × 8 matrix of horizontal and vertical spatial frequency coefficients most of the energy is concentrated in the low-frequency coefficients, esp. in the DC coefficient Quantization for suppressing high frequencies Variable-length coding - assigning codewords to the values to be encoded Additionally to the techniques used in JPEG, the following is used before performing the DCT: Predictive coding - code a frame as a prediction based on the previous / the following frame Motion compensation - prediction of the values of a pixel block by relocating the block from a known picture
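A minimal sketch of the 2 × 2 chrominance subsampling step, in Python with NumPy; averaging over each 2 × 2 neighborhood is one common choice, and even frame dimensions are assumed.

```python
import numpy as np

def subsample_420(chroma):
    """Keep one chrominance value per 2x2 luminance neighborhood (average)."""
    h, w = chroma.shape                            # assumed to be even
    blocks = chroma.reshape(h // 2, 2, w // 2, 2)  # group into 2x2 neighborhoods
    return blocks.mean(axis=(1, 3))

# Hypothetical 4x4 chrominance plane:
cb = np.arange(16, dtype=float).reshape(4, 4)
print(subsample_420(cb))   # 2x2 result: one value per 2x2 neighborhood
```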

34 Macro Blocks For still image regions, temporal prediction yields a considerable compression ratio For moving images, non-translational motion patterns (e.g. rotations, waves, ...) would require storing a large amount of information due to irregular motion Therefore: predictive coding makes sense only for parts of the image - division of each image into areas called macro blocks Macro blocks turn out to be suitable for compression based on motion estimation Partition of a macro block into 4 blocks for luminance and one block for each chrominance component; each block consists of 8 × 8 pixels. The size of a macro block is a compromise between the (storage) cost of prediction and the resulting compression 34 (Figure: a macro block = four 8 × 8 luminance (Y) blocks plus one 8 × 8 block per chrominance component)

35 Motion Compensation Prediction Motion Compensation Prediction is made between successive frames Idea: coding a frame purely as a difference to the previous frame is of little use for fast-changing sequences Thus: account for moving objects by searching for the new position of a macro block from the previous frame Code the difference to the macro block of the previous frame together with the motion vector 35

36 Motion Compensation Prediction 36 How to find the best fitting position in the new image? Search only a given window around the old position, not the whole image! Consider only the average of all pixel values, not the detailed values! Set a fault threshold: stop searching if a found macro block fits well enough The search pattern can be a spiral: this procedure in general does not give the best result, but it is fast
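A minimal sketch of block matching within a search window, in Python with NumPy; for clarity it uses a plain exhaustive search with the sum of absolute differences (SAD) instead of the spiral pattern and early-exit threshold described above.

```python
import numpy as np

def best_match(prev, block, top, left, search=4):
    """Displacement (dy, dx) within +/-search pixels that minimizes the sum of
    absolute differences (SAD) between `block` and the previous frame."""
    n = block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > prev.shape[0] or x + n > prev.shape[1]:
                continue                      # candidate block outside the frame
            sad = np.abs(prev[y:y+n, x:x+n].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Hypothetical frames: an 8x8 bright square moves 2 pixels to the right.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[8:16, 8:16] = 255
curr = np.zeros_like(prev)
curr[8:16, 10:18] = 255

mv, sad = best_match(prev, curr[8:16, 10:18], top=8, left=10)
print(mv, sad)   # (0, -2): the block content came from 2 pixels further left
```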

37 Motion Compensation Prediction Alternative search procedure: Store the motion vector for a macro block of the previous image Move the search window for the next search by the old vector Start searching the window with a coarse pattern Refine the search around the best patterns found 37 Left: the lighter grey blocks are the blocks searched on the coarse scale. The best matching fields are examined in more detail by also searching the 8 neighboring blocks. Right: if all blocks were refined, such a picture would result; the lighter a block, the better the match; the lightest block is taken for motion prediction. [Note: in reality not all blocks are refined, only the lightest ones; because of this the best block might not be found.]

38 Types of Frames 4 different types of frame coding are used in MPEG for efficient coding with fast random access I-frames: Intra-coded frames (moderate compression but fast random access) P-frames: Predictive-coded frames (with motion compensation, prediction from the previous I- or P-frame) B-frames: Bi-directionally predictive-coded frames (referencing the previous and the following I- or P-frames) D-frames: DC frames (limited use: encode only the DC components of intraframe coding) 38 A B-frame can be decoded only after the subsequent I- or P-frame has been decoded

39 Types of Images 39 (Figure: I-, P- and B-frames along the time axis with prediction and bi-directional prediction arrows; typical frequency ratio I : P : B = 1 : 2 : 6)

40 I-Frames I-Frames are self-contained, i.e. they represent a full image Coded without reference to other images Treated as still images Use of JPEG, but compression in real-time I-Frames may serve as points of random access in MPEG streams DCT on 8 × 8 blocks within macro blocks + DPCM coding of the DC coefficients Typically, an I-frame may occur 3 times per second to give reasonably fast random access Typical data allocation: I-frames allocate up to 3 times as many bits as P-frames P-frames allocate 2-5 times as many bits as B-frames In case of little motion in the video, a greater proportion of the bits should be assigned to I-frames, since P- and B-frames then only need a very low number of bits 40

41 P-Frames and B-Frames P-Frame Requires information about the previous I-frame and/or previous P-frame Motion estimation is done for the macro blocks of the coded frame: The motion vector (difference between the locations of the macro blocks) is specified The (typically small) difference in content of the two macro blocks is computed and DCT/entropy encoded That means: P-frames consist of I-frame macro blocks (if no prediction is possible) and predictive macro blocks B-Frame Requires information about the previous and the following I- and/or P-frame B-frame = difference from a prediction based on the past image and the following P-/I-frame Quantization and entropy encoding of the macro blocks are very efficient on such double-predicted frames The highest compression rate is obtained Decoding is only possible after receiving the following I- or P-frame 41

42 Group of Pictures (GOP) 42 MPEG does not prescribe the order in which the different frame types are coded; it can be specified by a user parameter. But: each stream of MPEG frames shows a fixed pattern, the Group of Pictures: It typically starts with an I-frame It typically ends with the frame right before the next I-frame An open GOP ends in a B-frame, a closed GOP in a P-frame Very flexible: GOPs can be decoded independently, but they may also reference the next GOP Typical patterns: I B B P B B P B B I and I B B P B B P B B P B B I Why not use only P- and B-frames? Because after the loss of one frame, a new full image (I-frame) is needed to allow the receiver to recover from the information loss
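Because a B-frame can only be decoded after its later reference frame, the transmission order differs from the display order; a simplified Python sketch of this reordering for a typical IBBP pattern (not the exact MPEG stream syntax).

```python
def transmission_order(display_order):
    """Reorder frames so that every B-frame follows both of its references.
    Simplified rule: emit each I/P reference frame before the B-frames that
    precede it in display order."""
    out, pending_b = [], []
    for frame in display_order:
        if frame.startswith("B"):
            pending_b.append(frame)     # waits for its later reference
        else:                           # I- or P-frame: a reference frame
            out.append(frame)
            out.extend(pending_b)
            pending_b = []
    return out + pending_b

gop = ["I1", "B2", "B3", "P4", "B5", "B6", "P7", "B8", "B9", "I10"]
print(transmission_order(gop))
# ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6', 'I10', 'B8', 'B9']
```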

43 D-Frames 43 Intraframe encoded, but only lowest frequencies of an image (DC coefficients) are encoded Used (only) for fast forward or fast rewind mode Could also be realized by suitable order of I-frames Slow rewind playback requires a lot of storage capacity: Thus all images in a "group of pictures" (GOP) are decoded in the forward mode and stored, after that rewind playback is possible

44 Coding Process 44

45 Layers of MPEG Data Streams 1. Sequence Layer Sequence header + one or more groups of pictures Header contains parameters like picture size, data rate, aspect ratio, DCT quantization matrices 2. Group of Pictures Layer Contains at least one I-frame for random access Additionally timing info and user data 3. Picture Layer I-, P-, B- or D-frame, with synchronization info, resolution, range of motion vectors 4. Slice Layer Subdivision of a picture providing a certain immunity to data corruption 5. Macro Block Layer Basic unit for motion compensation and quantizer scale changes 6. Block Layer Basic coding unit (8 × 8 pixels): DCT is applied at this block level 45

46 Hierarchical Structure of Data 46 Sequence Layer > GOP Layer > Picture Layer > Slice Layer > Macroblock Layer > Block Layer

47 Audio Encoding Audio Encoding within MPEG Picture encoding principles can be modified for use with audio as well Sampling rates of 32, 44.1 and 48 kHz Transformation into the frequency domain by a Fast Fourier Transform (FFT), similar to the technique used for video The audio spectrum is split into 32 non-interleaved subbands (for each subband, the audio amplitude is calculated); noise-level determination by a psychoacoustic model The psychoacoustic model takes human auditory perception into account, e.g. recognizing only a single tone if two similar tones are played very close together, or not perceiving the quieter tone if two tones of very different loudness are played simultaneously Each subband has its own quantization granularity Higher noise level: rougher quantization (and vice versa) Single channel, two independent channels or stereo are possible (in the case of stereo, the redundancy between the two signals is used for a higher compression ratio) 47

48 Audio Encoding 48

49 Audio Encoding 49 3 different layers of encoder and decoder complexity are used Quantized spectral portions of layers 1 and 2 are PCM encoded Quantized spectral portions of layer 3 are Huffman encoded MPEG Layer 3 is known as mp3 14 fixed bit rates for the encoded audio data stream on each layer minimal rate: 32 Kbit/sec for each layer maximal rate: 448 Kbit/sec (layer 1), 384 Kbit/sec (layer 2), 320 Kbit/sec (layer 3) Variable bit rate support is possible only on layer 3

50 Audio Data Stream 50 General Background on the MPEG Audio Data Stream MPEG specifies a syntax for interleaved audio and video streams, e.g. synchronization information The audio data stream consists of frames, divided into audio access units composed of slots Slots consist of 4 bytes (layer 1, lowest complexity) or 1 byte (other layers) A frame always consists of a fixed number of samples Audio access unit: the smallest possible audio sequence of compressed data that can be decoded independently of all other data The audio access units of one frame lead to a playing time between 8 msec (48 kHz) and 12 msec (32 kHz)
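The 8 ms and 12 ms figures follow from the frame length in samples divided by the sampling rate; a small Python sketch, assuming the standard MPEG-1 Layer 1 frame length of 384 samples.

```python
# Playing time of one MPEG-1 Layer 1 audio frame at the supported sampling rates.
samples_per_frame = 384   # standard Layer 1 frame length (assumed here)

for rate_hz in (32_000, 44_100, 48_000):
    duration_ms = 1000 * samples_per_frame / rate_hz
    print(f"{rate_hz} Hz -> {duration_ms:.1f} ms per frame")
# 48 kHz gives 8.0 ms, 32 kHz gives 12.0 ms, matching the range on the slide.
```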

51 MPEG-2 Why another MPEG standard? Higher data rate than MPEG-1, but a compatible extension of MPEG-1 Target rate of 40 Mbit/sec Higher resolution, as needed for HDTV Support for a larger number of applications - definition of MPEG-2 in terms of extensible profiles and levels for each important application class, e.g. Main Profile: for digital video transmission (2 to 80 Mbit/sec) over cable, satellite and other broadcast channels, digital storage, HDTV etc. High Profile: HDTV Scalable Profile: compatible with terrestrial TV/HDTV, packet-network video systems, backward compatibility with MPEG-1 and other standards, e.g. H.261 The encoding standard should be a toolkit rather than a flat procedure Interlaced and non-interlaced frames Different color subsampling modes, e.g. 4:2:2, 4:2:0 Flexible quantization schemes (can be changed at picture level) Scalable bit-streams 51

52 MPEG-2 - Profiles and Levels 52

53 Scalable Profiles 53 A signal is composed of several streams (layers): Base (Lower) layer is a fully decodable image Enhancement (Upper) layer gives additional information Better resolution Higher frame rate Better quality Corresponds to JPEG hierarchical mode

54 Scalable Profiles 54 Scaling can be done on different parameters: Spatial scaling: Frames are given in different resolutions Base layer frames are used in any case Upper layer frames are stored as prediction from base layer frames Single data stream can include different image formats (CIF, CCIR 601, HDTV...) SNR scaling: The error on the lower layer given by quantization is encoded and sent on the upper layer

55 MPEG-2: Effects of Interlacing 55 Prediction Modes and Motion Compensation Frame prediction: current frame predicted from the previous frame Dual prime motion compensation: The top field of the current frame is predicted with two motion vectors coming from the top and bottom fields of the reference frame The bottom field of the previous frame and the top field of the current frame predict the bottom field of the current frame 16 × 8 motion compensation mode: a macro block may have two such motion vectors; a B-picture macro block may have four

56 MPEG-2 Audio Standard 56 Low bit rate coding of multi-channel audio Up to five full-bandwidth channels (left, right, center, 2 surround) plus an additional low-frequency enhancement channel and/or up to 7 commentary/multilingual channels Extension of MPEG-1 stereo and mono coding to half sampling rates (16-24 kHz), improving quality for bit rates at 64 kbit/sec per channel MPEG-2 Audio Multi-channel Coding Standard: Backward compatibility with the existing MPEG-1 Audio Standard Organizes formal testing of proposed MPEG-2 multi-channel audio codecs and non-backward-compatible codecs

57 MPEG-2 Streams The MPEG-2 system layer defines how to combine audio, video and other data into single or multiple streams suitable for storage and transmission syntactic and semantic rules for synchronizing the decoding and presentation of video and audio information and for avoiding buffer over- or underflow Streams include timestamps for decoding, presentation and delivery Basic multiplexing step: system-level information is added to each stream and it is packetized into a Packetized Elementary Stream (PES) PESs are combined into a Program or Transport Stream (supporting a large number of applications): Program Stream: similar to an MPEG-1 stream error-free environments, variable packet lengths, constant end-to-end delay Transport Stream: combines PESs and independent time bases into a single stream for use in lossy or noisy media, packet length 188 bytes including header suited for digital TV and videophony over fiber, satellite, cable, ISDN, ATM Conversion between Program and Transport Stream is possible (and sometimes reasonable) 57
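A minimal sketch of reading the fixed 4-byte header of a 188-byte Transport Stream packet in Python (sync byte 0x47, 13-bit packet ID); the packet bytes used below are made up for illustration.

```python
def parse_ts_header(packet: bytes):
    """Parse the 4-byte header of a 188-byte MPEG-2 Transport Stream packet."""
    assert len(packet) == 188 and packet[0] == 0x47, "not a TS packet"
    pid = ((packet[1] & 0x1F) << 8) | packet[2]     # 13-bit packet identifier
    payload_unit_start = bool(packet[1] & 0x40)     # start of a PES packet?
    continuity_counter = packet[3] & 0x0F           # helps detect lost packets
    return pid, payload_unit_start, continuity_counter

# Hypothetical packet: sync byte 0x47, PID 0x0100, start flag set, counter 5.
pkt = bytes([0x47, 0x41, 0x00, 0x15]) + bytes(184)
print(parse_ts_header(pkt))   # (256, True, 5)
```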

58 MPEG-4 Originally an MPEG-3 standard for HDTV was planned But MPEG-2 scaling was sufficient, so the development of MPEG-3 was cancelled The MPEG-4 initiative started in September 1993 Very low bit rate coding of audio-visual programs February 1997: description of the requirements for the MPEG-4 standard approved Idea: development of fundamentally new algorithmic techniques New sorts of interactivity (dynamic instead of static objects) Integration of natural and synthetic audio and video material Simultaneous use of material coming from different sources Model-based image coding of human interaction with multimedia environments Low bit rate speech coding, e.g. for use in GSM Basic elements: Coding tools for audio-visual objects: efficient compression, support of object-based interactivity, scalability and error robustness Formal methods for syntactic description of coded audio-visual objects 58

59 Core Idea of MPEG-4 Object based Representation Representation of the video scene is understood as a composition of video objects with respect to their spatial and temporal relationship (same with audio!) Individual objects in a scene can be coded with different parameters, at different quality levels and with different coding algorithms 59

60 MPEG-4: Objects and Scenes 60 A/V object A video object within a scene The background An instrument or voice Coded independently A/V scene Mixture of objects Individual bitstreams are multiplexed and transmitted One or more channels Each channel may have its own quality of service Synchronization information

61 Objects of a Scene 61 Scene Graph A graph without cycles Embeds objects in a coordinate system (including synchronization information) MPEG-4 provides a language for describing objects (oriented towards VRML, the Virtual Reality Modeling Language) Usable for video objects as well as animated objects

62 An Example MPEG-4 Scene 62

63 MPEG-4 Stream Composition and Delivery 63

64 Linking Streams into the Scene 64

65 MPEG-7 Objectives A flexible, extensible, and multi-level standard framework for describing (not coding!) multimedia content and synchronizing content with its descriptions Enable fast and efficient content searching, filtering and identification Define low-level features, structure, semantics, models, collections, creation, etc. Goal: to search, identify, filter and browse audiovisual content Description of contents: Descriptors describe basic characteristics of audiovisual content Examples: shape, color, texture, ... Description Schemes describe combinations of descriptors Example: Spoken Content 65

66 Simple Description 66

67 MPEG-21 MPEG-21: Solution for access to and management of digital media E.g. offering, searching, buying, Digital Rights Management, ...

68 Digital Video Broadcasting 1991: foundation of the ELG (European Launching Group) Goal: development of digital television in Europe 1993: renamed to DVB (Digital Video Broadcasting) Goal: introduction of digital television based on satellite transmission (DVB-S) and cable network technology (DVB-C), later also terrestrial transmission (DVB-T) 68

69 DVB Container DVB transmits an MPEG-2 container High flexibility for the transmission of digital data No restrictions regarding the type of information DVB Service Information specifies the content of a container: NIT (Network Information Table): lists the services of a provider, contains additional information for set-top boxes SDT (Service Description Table): list of names and parameters for each service within an MPEG multiplex channel EIT (Event Information Table): status information about the current transmission, additional information for set-top boxes TDT (Time and Date Table): update information for set-top boxes 69

70 DVB Worldwide 70

71 Computer-based Animation To animate = to bring to life Animation covers changes in: time-varying positions (motion dynamics) shape, color, transparency, structure and texture of an object (update dynamics) as well as lighting, camera position, camera orientation and focus Basic concepts of animation: Input process Key frames, where animated objects are at extreme or characteristic positions, must be digitized from drawings Often post-processing by a computer is required Composition stage Inbetween process Changing colors 71

72 Composition Stage Foreground and background figures are combined to generate an individual frame Placing several low-resolution frames of an animation in an array leads to a trial film (pencil test) by use of the pan-zoom feature (available in some frame buffers) The frame buffer can take a part of an image (pan) and enlarge it to full screen (zoom) Continuity is achieved by repeating the pan-zoom process fast enough 72

73 Inbetween Process 73 Composition of intermediate frames between key frames Performed by linear interpolation (lerping) between start and end positions To achieve more realistic results, cubic spline interpolation can be used (Figure: key frames with linearly interpolated frames in between; the resulting motion is rather unrealistic in most cases)

74 Inbetween Process 74 More realistic motion is achieved by cubic splines through the key positions x_0, x_1, ..., x_n. A function s is called a cubic interpolating spline for the points a = x_0 < x_1 < ... < x_n = b if: 1. s is twice continuously differentiable, and 2. on each interval [x_i, x_(i+1)], i = 0, ..., n-1, it is a polynomial of degree 3. The resulting curve is smooth because adjacent polynomials have equal first and second derivatives at the points x_1, ..., x_(n-1).

75 Inbetween Process Calculation of the successive cubic spline segments: 75 The segments s_i(x) are polynomials of degree 3. Let s_(i-1)(x) be given. Then s_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i is constructed as follows: s_i(x_i) = a_i x_i^3 + b_i x_i^2 + c_i x_i + d_i = f(x_i) s_i(x_(i+1)) = a_i x_(i+1)^3 + b_i x_(i+1)^2 + c_i x_(i+1) + d_i = f(x_(i+1)) s_i'(x_i) = 3 a_i x_i^2 + 2 b_i x_i + c_i = s_(i-1)'(x_i) s_i''(x_i) = 6 a_i x_i + 2 b_i = s_(i-1)''(x_i) 4 equations for the 4 unknowns a_i, b_i, c_i, d_i
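A small Python sketch of inbetweening a single coordinate between key frames, first by lerping and then with a cubic interpolating spline (here via SciPy's CubicSpline, which satisfies the smoothness conditions above); the key-frame values are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical key frames: frame number -> x position of an animated object.
key_frames = np.array([0, 10, 20, 30])
key_x = np.array([0.0, 5.0, 2.0, 8.0])

frames = np.arange(0, 31)

# Inbetweening by linear interpolation (lerping) between key frames:
x_lerp = np.interp(frames, key_frames, key_x)

# Inbetweening by a cubic interpolating spline (twice continuously
# differentiable, a degree-3 polynomial on each interval):
x_spline = CubicSpline(key_frames, key_x)(frames)

print(x_lerp[5], x_spline[5])   # the object's position at frame 5, both methods
```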

76 Changing Colors 76 Two techniques are possible 1. CLUT animation Changing the Color Look-Up Table (CLUT) of the frame buffer changes the colors of the image. 2. New color information for each frame Frame buffer: 640 × 512 pixels × 8 Bits/pixel × 30 frames per sec = 78.6 MBits/sec data rate for a complete update The first technique is much faster than the second, since changing the CLUT requires the transmission of only a few hundred bytes per frame (here technique 1 needs more than 300 times less data than technique 2)
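The comparison between the two techniques can be checked with the frame-buffer parameters on this slide; a Python sketch, where the 256-entry, 3-bytes-per-entry CLUT size is an assumption.

```python
# Sketch: CLUT animation vs. full-frame update (parameters from this slide).
width, height, bits_per_pixel, fps = 640, 512, 8, 30

full_update_bps = width * height * bits_per_pixel * fps
print(full_update_bps / 1e6)               # ~78.6 Mbit/s for complete updates

# Assumed CLUT size: 256 entries of 3 bytes (R, G, B) each.
clut_bits = 256 * 3 * 8
clut_update_bps = clut_bits * fps          # sending a new CLUT every frame
print(full_update_bps / clut_update_bps)   # several hundred times less data
```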

77 Animation Languages Categories for Animation languages Linear-list Notations Events are described by starting and ending frame number and an action (event) 17, 31, C, ROTATE HOUSE, 1, 45 means: Between frames 17 and 31 rotate the object HOUSE around axis 1 by 45 degrees, determining the amount of rotation at each frame from table C General-purpose Languages Embed animation capability within programming languages Values of variables as parameters to the routines that perform animation e.g. ASAS, which is built on top of LISP: (grasp my-cube): cube becomes current object (cw 0.05): spin it clockwise, by a small amount Graphical Languages Describe animation in a more visual way than textual languages Express, edit and comprehend the changes in an animation Explicit descriptions of actions are replaced by a picture of the action 77

78 Controlling of Animation Techniques for controlling animations (independent of the language which describes the animation): Full Explicit Control Complete way of control, because all aspects are defined: Simple changes (scaling, translation, rotation) are specified or key frames and interpolation methods (either explicit or by direct manipulations by mouse, joystick, data glove) are provided Procedural Control Communication between objects to determine properties Physically-based systems: position of one object may influence motion of another (ball cannot pass a wall) Actor-based systems: actors pass their position to other actors to affect their behavior (actor A stays behind actor B) 78

79 Controlling of Animation Constraint-based Systems The natural way of moving from A to B is via a straight line, i.e. linearly. However, very often the motion is more complicated The movement of objects is determined by other objects that they are in contact with Compound motion may not be linear and is modeled by constraints (a ball follows a pathway) Tracking Live Action The trajectories of animated objects are generated by tracking live action Rotoscoping: a film with real actors is used as a template; designers draw over the film, change the background and replace the human actors with animated counterparts Attach indicators to key points of an actor's body; tracking the indicator positions provides key points in the animation model Another example: a data glove measures position and orientation of the hand as well as flexion and extension of the fingers and finger parts From this information we can calculate actions, e.g. movements 79

80 Controlling of Animation Kinematics: Description using the positions and velocities of objects. E.g. at time t = 0 the CUBE is at the origin. It moves with a constant acceleration of 0.5 m/s^2 for 2 sec in the direction (1, 1, 4). (Figure: kinematic description of the motion of a cube, from (0, 0, 0) in direction (1, 1, 4), b = 0.5 m/s^2, 2 seconds) 80 Dynamics: Takes into consideration the physical laws that underlie the kinematics E.g. at time t = 0 the CUBE is at position (0 meters, 100 meters, 0 meters) and has a mass of 5 kg. The force of gravity acts on the cube (result in this case: the cube falls down) (Figure: cube at (0, 100, 0))
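The kinematic description above can be turned into a few lines of Python; treating (1, 1, 4) as a direction to be normalized is an assumption, and the cube is assumed to start at rest.

```python
import numpy as np

# Kinematic description from the slide: the cube starts at the origin (at rest)
# and accelerates with b = 0.5 m/s^2 for 2 s in the direction (1, 1, 4).
b = 0.5                                  # acceleration magnitude, m/s^2
direction = np.array([1.0, 1.0, 4.0])
direction /= np.linalg.norm(direction)   # unit direction vector (assumed)

for t in (0.0, 1.0, 2.0):
    position = 0.5 * b * t**2 * direction   # s(t) = 1/2 * b * t^2, starting at rest
    print(t, position.round(3))
```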

81 Display of Animation For the display of animations on raster systems, the animated objects have to be scan-converted into their pixmap in the frame buffer. This has to be done at least 10 (better: 20) times per second in order to give a reasonably smooth effect. Problem A frame rate of 20 pictures/sec requires manipulation, scan-conversion and display of an object in only 50 msec. Scan conversion should only use a small fraction of these 50 msec, since other operations (erasing, redrawing, etc.) have to be done, too 81 Solution Double-buffering: the frame buffer is divided into two images, each with half of the bits of the overall frame buffer (pipelining). While the operation (like rotating) and the scan-conversion are processed for the second half of the pixmap, the first half is displayed, and vice versa.
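A minimal sketch of the double-buffering loop in Python; the buffer size, the render stand-in and the 50 ms frame time are illustrative assumptions.

```python
import time

# Two halves of the frame buffer ("pipeline"): one is displayed while the
# next frame is rendered (manipulated and scan-converted) into the other.
buffers = [bytearray(640 * 512), bytearray(640 * 512)]   # assumed pixmap size
front = 0                                                # index of the displayed half

def render(buf, frame_no):
    buf[0] = frame_no % 256      # stand-in for rotate + scan-convert

for frame_no in range(3):
    back = 1 - front
    render(buffers[back], frame_no)   # draw into the hidden half
    front = back                      # swap: the newly drawn half is displayed
    time.sleep(0.05)                  # 50 ms per frame, i.e. 20 frames/sec
```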

82 Transmission of Animation 82 Symbolic Representation Graphical descriptions (e.g. circle) of an animated object (ball) + operations (roll) The animation is displayed at the receiver by scan-conversion of the objects to a pixmap The transmission rate is context dependent: it depends on the size of the symbolic representation structure, the size of the operation structure, and the number of animated objects and commands Pixmap Representation Longer data transmission times than with the symbolic representation, because of the large data size of a pixmap Shorter display times, because no scan-conversion is necessary at the receiver side Transmission rate = size of pixmap × frame rate (a fixed transmission rate)

83 Conclusions NTSC and PAL as television standards Widespread, but only belong to Enhanced Definition TV systems Needed for better quality: High Definition TV (HDTV) Problem: compression is needed for HDTV systems MPEG as standard for video and audio compression High-quality video/audio compression based on JPEG-techniques Additionally: Motion prediction between video frames Newer versions (MPEG-4) achieve further compression by considering objects Video Transmission DVB as one standard for broadcasting SDTV, EDTV, HDTV or any MPEG content to the customer Animation Technique for artificially creating videos 83


More information

PAL uncompressed. 768x576 pixels per frame. 31 MB per second 1.85 GB per minute. x 3 bytes per pixel (24 bit colour) x 25 frames per second

PAL uncompressed. 768x576 pixels per frame. 31 MB per second 1.85 GB per minute. x 3 bytes per pixel (24 bit colour) x 25 frames per second 191 192 PAL uncompressed 768x576 pixels per frame x 3 bytes per pixel (24 bit colour) x 25 frames per second 31 MB per second 1.85 GB per minute 191 192 NTSC uncompressed 640x480 pixels per frame x 3 bytes

More information

Colour Reproduction Performance of JPEG and JPEG2000 Codecs

Colour Reproduction Performance of JPEG and JPEG2000 Codecs Colour Reproduction Performance of JPEG and JPEG000 Codecs A. Punchihewa, D. G. Bailey, and R. M. Hodgson Institute of Information Sciences & Technology, Massey University, Palmerston North, New Zealand

More information

5.1 Types of Video Signals. Chapter 5 Fundamental Concepts in Video. Component video

5.1 Types of Video Signals. Chapter 5 Fundamental Concepts in Video. Component video Chapter 5 Fundamental Concepts in Video 5.1 Types of Video Signals 5.2 Analog Video 5.3 Digital Video 5.4 Further Exploration 1 Li & Drew c Prentice Hall 2003 5.1 Types of Video Signals Component video

More information

A Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique

A Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique A Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique Dhaval R. Bhojani Research Scholar, Shri JJT University, Jhunjunu, Rajasthan, India Ved Vyas Dwivedi, PhD.

More information

Understanding Compression Technologies for HD and Megapixel Surveillance

Understanding Compression Technologies for HD and Megapixel Surveillance When the security industry began the transition from using VHS tapes to hard disks for video surveillance storage, the question of how to compress and store video became a top consideration for video surveillance

More information

Overview: Video Coding Standards

Overview: Video Coding Standards Overview: Video Coding Standards Video coding standards: applications and common structure ITU-T Rec. H.261 ISO/IEC MPEG-1 ISO/IEC MPEG-2 State-of-the-art: H.264/AVC Video Coding Standards no. 1 Applications

More information

Chapter 6 & Chapter 7 Digital Video CS3570

Chapter 6 & Chapter 7 Digital Video CS3570 Chapter 6 & Chapter 7 Digital Video CS3570 Video, Film, and Television Compared Movie : a story told with moving images and sound The word motion picture and movie are the same thing The word film seems

More information

MPEG + Compression of Moving Pictures for Digital Cinema Using the MPEG-2 Toolkit. A Digital Cinema Accelerator

MPEG + Compression of Moving Pictures for Digital Cinema Using the MPEG-2 Toolkit. A Digital Cinema Accelerator 142nd SMPTE Technical Conference, October, 2000 MPEG + Compression of Moving Pictures for Digital Cinema Using the MPEG-2 Toolkit A Digital Cinema Accelerator Michael W. Bruns James T. Whittlesey 0 The

More information

EECS150 - Digital Design Lecture 12 Project Description, Part 2

EECS150 - Digital Design Lecture 12 Project Description, Part 2 EECS150 - Digital Design Lecture 12 Project Description, Part 2 February 27, 2003 John Wawrzynek/Sandro Pintz Spring 2003 EECS150 lec12-proj2 Page 1 Linux Command Server network VidFX Video Effects Processor

More information

Communication Theory and Engineering

Communication Theory and Engineering Communication Theory and Engineering Master's Degree in Electronic Engineering Sapienza University of Rome A.A. 2018-2019 Practice work 14 Image signals Example 1 Calculate the aspect ratio for an image

More information

Joint Optimization of Source-Channel Video Coding Using the H.264/AVC encoder and FEC Codes. Digital Signal and Image Processing Lab

Joint Optimization of Source-Channel Video Coding Using the H.264/AVC encoder and FEC Codes. Digital Signal and Image Processing Lab Joint Optimization of Source-Channel Video Coding Using the H.264/AVC encoder and FEC Codes Digital Signal and Image Processing Lab Simone Milani Ph.D. student simone.milani@dei.unipd.it, Summer School

More information

10 Digital TV Introduction Subsampling

10 Digital TV Introduction Subsampling 10 Digital TV 10.1 Introduction Composite video signals must be sampled at twice the highest frequency of the signal. To standardize this sampling, the ITU CCIR-601 (often known as ITU-R) has been devised.

More information

MPEG has been established as an international standard

MPEG has been established as an international standard 1100 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 9, NO. 7, OCTOBER 1999 Fast Extraction of Spatially Reduced Image Sequences from MPEG-2 Compressed Video Junehwa Song, Member,

More information

Improvement of MPEG-2 Compression by Position-Dependent Encoding

Improvement of MPEG-2 Compression by Position-Dependent Encoding Improvement of MPEG-2 Compression by Position-Dependent Encoding by Eric Reed B.S., Electrical Engineering Drexel University, 1994 Submitted to the Department of Electrical Engineering and Computer Science

More information

H.261: A Standard for VideoConferencing Applications. Nimrod Peleg Update: Nov. 2003

H.261: A Standard for VideoConferencing Applications. Nimrod Peleg Update: Nov. 2003 H.261: A Standard for VideoConferencing Applications Nimrod Peleg Update: Nov. 2003 ITU - Rec. H.261 Target (1990)... A Video compression standard developed to facilitate videoconferencing (and videophone)

More information

Video (Fundamentals, Compression Techniques & Standards) Hamid R. Rabiee Mostafa Salehi, Fatemeh Dabiran, Hoda Ayatollahi Spring 2011

Video (Fundamentals, Compression Techniques & Standards) Hamid R. Rabiee Mostafa Salehi, Fatemeh Dabiran, Hoda Ayatollahi Spring 2011 Video (Fundamentals, Compression Techniques & Standards) Hamid R. Rabiee Mostafa Salehi, Fatemeh Dabiran, Hoda Ayatollahi Spring 2011 Outlines Frame Types Color Video Compression Techniques Video Coding

More information

Introduction to image compression

Introduction to image compression Introduction to image compression 1997-2015 Josef Pelikán CGG MFF UK Praha pepca@cgg.mff.cuni.cz http://cgg.mff.cuni.cz/~pepca/ Compression 2015 Josef Pelikán, http://cgg.mff.cuni.cz/~pepca 1 / 12 Motivation

More information

The Development of a Synthetic Colour Test Image for Subjective and Objective Quality Assessment of Digital Codecs

The Development of a Synthetic Colour Test Image for Subjective and Objective Quality Assessment of Digital Codecs 2005 Asia-Pacific Conference on Communications, Perth, Western Australia, 3-5 October 2005. The Development of a Synthetic Colour Test Image for Subjective and Objective Quality Assessment of Digital Codecs

More information

Module 1: Digital Video Signal Processing Lecture 5: Color coordinates and chromonance subsampling. The Lecture Contains:

Module 1: Digital Video Signal Processing Lecture 5: Color coordinates and chromonance subsampling. The Lecture Contains: The Lecture Contains: ITU-R BT.601 Digital Video Standard Chrominance (Chroma) Subsampling Video Quality Measures file:///d /...rse%20(ganesh%20rana)/my%20course_ganesh%20rana/prof.%20sumana%20gupta/final%20dvsp/lecture5/5_1.htm[12/30/2015

More information

In MPEG, two-dimensional spatial frequency analysis is performed using the Discrete Cosine Transform

In MPEG, two-dimensional spatial frequency analysis is performed using the Discrete Cosine Transform MPEG Encoding Basics PEG I-frame encoding MPEG long GOP ncoding MPEG basics MPEG I-frame ncoding MPEG long GOP encoding MPEG asics MPEG I-frame encoding MPEG long OP encoding MPEG basics MPEG I-frame MPEG

More information

INTERNATIONAL TELECOMMUNICATION UNION. SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS Coding of moving video

INTERNATIONAL TELECOMMUNICATION UNION. SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS Coding of moving video INTERNATIONAL TELECOMMUNICATION UNION CCITT H.261 THE INTERNATIONAL TELEGRAPH AND TELEPHONE CONSULTATIVE COMMITTEE (11/1988) SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS Coding of moving video CODEC FOR

More information

Essence of Image and Video

Essence of Image and Video 1 Essence of Image and Video Wei-Ta Chu 2009/9/24 Outline 2 Image Digital Image Fundamentals Representation of Images Video Representation of Videos 3 Essence of Image Wei-Ta Chu 2009/9/24 Chapters 2 and

More information

Lecture 1: Introduction & Image and Video Coding Techniques (I)

Lecture 1: Introduction & Image and Video Coding Techniques (I) Lecture 1: Introduction & Image and Video Coding Techniques (I) Dr. Reji Mathew Reji@unsw.edu.au School of EE&T UNSW A/Prof. Jian Zhang NICTA & CSE UNSW jzhang@cse.unsw.edu.au COMP9519 Multimedia Systems

More information

Technical Bulletin 625 Line PAL Spec v Digital Page 1 of 5

Technical Bulletin 625 Line PAL Spec v Digital Page 1 of 5 Technical Bulletin 625 Line PAL Spec v Digital Page 1 of 5 625 Line PAL Spec v Digital By G8MNY (Updated Dec 07) (8 Bit ASCII graphics use code page 437 or 850) With all this who ha on DTV. I thought some

More information

The H.263+ Video Coding Standard: Complexity and Performance

The H.263+ Video Coding Standard: Complexity and Performance The H.263+ Video Coding Standard: Complexity and Performance Berna Erol (bernae@ee.ubc.ca), Michael Gallant (mikeg@ee.ubc.ca), Guy C t (guyc@ee.ubc.ca), and Faouzi Kossentini (faouzi@ee.ubc.ca) Department

More information

Rec. ITU-R BT RECOMMENDATION ITU-R BT PARAMETER VALUES FOR THE HDTV STANDARDS FOR PRODUCTION AND INTERNATIONAL PROGRAMME EXCHANGE

Rec. ITU-R BT RECOMMENDATION ITU-R BT PARAMETER VALUES FOR THE HDTV STANDARDS FOR PRODUCTION AND INTERNATIONAL PROGRAMME EXCHANGE Rec. ITU-R BT.79-4 1 RECOMMENDATION ITU-R BT.79-4 PARAMETER VALUES FOR THE HDTV STANDARDS FOR PRODUCTION AND INTERNATIONAL PROGRAMME EXCHANGE (Question ITU-R 27/11) (199-1994-1995-1998-2) Rec. ITU-R BT.79-4

More information

Digital television The DVB transport stream

Digital television The DVB transport stream Lecture 4 Digital television The DVB transport stream The need for a general transport stream DVB overall stream structure The parts of the stream Transport Stream (TS) Packetized Elementary Stream (PES)

More information

HDTV compression for storage and transmission over Internet

HDTV compression for storage and transmission over Internet Proceedings of the 5th WSEAS Int. Conf. on DATA NETWORKS, COMMUNICATIONS & COMPUTERS, Bucharest, Romania, October 16-17, 26 57 HDTV compression for storage and transmission over Internet 1 JAIME LLORET

More information

ITU-T Video Coding Standards

ITU-T Video Coding Standards An Overview of H.263 and H.263+ Thanks that Some slides come from Sharp Labs of America, Dr. Shawmin Lei January 1999 1 ITU-T Video Coding Standards H.261: for ISDN H.263: for PSTN (very low bit rate video)

More information

Video Compression Basics. Nimrod Peleg Update: Dec. 2003

Video Compression Basics. Nimrod Peleg Update: Dec. 2003 Video Compression Basics Nimrod Peleg Update: Dec. 2003 Video Compression: list of topics Analog and Digital Video Concepts Block-Based Motion Estimation Resolution Conversion H.261: A Standard for VideoConferencing

More information

COPYRIGHTED MATERIAL. Introduction to Analog and Digital Television. Chapter INTRODUCTION 1.2. ANALOG TELEVISION

COPYRIGHTED MATERIAL. Introduction to Analog and Digital Television. Chapter INTRODUCTION 1.2. ANALOG TELEVISION Chapter 1 Introduction to Analog and Digital Television 1.1. INTRODUCTION From small beginnings less than 100 years ago, the television industry has grown to be a significant part of the lives of most

More information

Chrominance Subsampling in Digital Images

Chrominance Subsampling in Digital Images Chrominance Subsampling in Digital Images Douglas A. Kerr Issue 2 December 3, 2009 ABSTRACT The JPEG and TIFF digital still image formats, along with various digital video formats, have provision for recording

More information

1 Overview of MPEG-2 multi-view profile (MVP)

1 Overview of MPEG-2 multi-view profile (MVP) Rep. ITU-R T.2017 1 REPORT ITU-R T.2017 STEREOSCOPIC TELEVISION MPEG-2 MULTI-VIEW PROFILE Rep. ITU-R T.2017 (1998) 1 Overview of MPEG-2 multi-view profile () The extension of the MPEG-2 video standard

More information

AN MPEG-4 BASED HIGH DEFINITION VTR

AN MPEG-4 BASED HIGH DEFINITION VTR AN MPEG-4 BASED HIGH DEFINITION VTR R. Lewis Sony Professional Solutions Europe, UK ABSTRACT The subject of this paper is an advanced tape format designed especially for Digital Cinema production and post

More information

MPEG-1 and MPEG-2 Digital Video Coding Standards

MPEG-1 and MPEG-2 Digital Video Coding Standards Heinrich-Hertz-Intitut Berlin - Image Processing Department, Thomas Sikora Please note that the page has been produced based on text and image material from a book in [sik] and may be subject to copyright

More information

Digital Representation

Digital Representation Chapter three c0003 Digital Representation CHAPTER OUTLINE Antialiasing...12 Sampling...12 Quantization...13 Binary Values...13 A-D... 14 D-A...15 Bit Reduction...15 Lossless Packing...16 Lower f s and

More information

SUMMIT LAW GROUP PLLC 315 FIFTH AVENUE SOUTH, SUITE 1000 SEATTLE, WASHINGTON Telephone: (206) Fax: (206)

SUMMIT LAW GROUP PLLC 315 FIFTH AVENUE SOUTH, SUITE 1000 SEATTLE, WASHINGTON Telephone: (206) Fax: (206) Case 2:10-cv-01823-JLR Document 154 Filed 01/06/12 Page 1 of 153 1 The Honorable James L. Robart 2 3 4 5 6 7 UNITED STATES DISTRICT COURT FOR THE WESTERN DISTRICT OF WASHINGTON AT SEATTLE 8 9 10 11 12

More information

HEVC: Future Video Encoding Landscape

HEVC: Future Video Encoding Landscape HEVC: Future Video Encoding Landscape By Dr. Paul Haskell, Vice President R&D at Harmonic nc. 1 ABSTRACT This paper looks at the HEVC video coding standard: possible applications, video compression performance

More information

Analog and Digital Video Basics

Analog and Digital Video Basics Analog and Digital Video Basics Nimrod Peleg Update: May. 2006 1 Video Compression: list of topics Analog and Digital Video Concepts Block-Based Motion Estimation Resolution Conversion H.261: A Standard

More information

Presented by: Amany Mohamed Yara Naguib May Mohamed Sara Mahmoud Maha Ali. Supervised by: Dr.Mohamed Abd El Ghany

Presented by: Amany Mohamed Yara Naguib May Mohamed Sara Mahmoud Maha Ali. Supervised by: Dr.Mohamed Abd El Ghany Presented by: Amany Mohamed Yara Naguib May Mohamed Sara Mahmoud Maha Ali Supervised by: Dr.Mohamed Abd El Ghany Analogue Terrestrial TV. No satellite Transmission Digital Satellite TV. Uses satellite

More information

VIDEO 101: INTRODUCTION:

VIDEO 101: INTRODUCTION: W h i t e P a p e r VIDEO 101: INTRODUCTION: Understanding how the PC can be used to receive TV signals, record video and playback video content is a complicated process, and unfortunately most documentation

More information

Impact of scan conversion methods on the performance of scalable. video coding. E. Dubois, N. Baaziz and M. Matta. INRS-Telecommunications

Impact of scan conversion methods on the performance of scalable. video coding. E. Dubois, N. Baaziz and M. Matta. INRS-Telecommunications Impact of scan conversion methods on the performance of scalable video coding E. Dubois, N. Baaziz and M. Matta INRS-Telecommunications 16 Place du Commerce, Verdun, Quebec, Canada H3E 1H6 ABSTRACT The

More information

Audiovisual Archiving Terminology

Audiovisual Archiving Terminology Audiovisual Archiving Terminology A Amplitude The magnitude of the difference between a signal's extreme values. (See also Signal) Analog Representing information using a continuously variable quantity

More information