Glossary

1-pass encoding a technique that encodes a file as it is read.
2-pass encoding a technique in which a file is first analyzed, and then compressed in a second pass. 2-pass encoding is slower but yields higher quality than 1-pass.
4:1:1 the color space used in NTSC DV format video. It uses one chroma sample per 4×1 block of pixels.
4:2:0 the color space used by most delivery codecs. It uses one chroma sample per 2×2 block of pixels (see the sketch following this group of entries).
4:2:2 the color space used by many authoring formats. It uses one chroma sample per 2×1 block of pixels, thus never sharing a chroma sample between fields.
5.1 Dolby Digital a surround-sound format developed by Dolby Labs. 5.1 refers to five discrete audio channels, each going to its own speaker, plus a single low-frequency channel that goes to a subwoofer.
AAC (Advanced Audio Coding) an audio codec originally designed for use in MPEG-2 and standard in MPEG-4.
AAF (Advanced Authoring Format) a protocol and file format developed jointly by Microsoft, Avid, and others to define metadata describing certain aspects of a production in a manner that can be read by any AAF-compliant application. Not yet used in any mainstream products.
ACELP (Adaptive Code-Excited Linear Prediction) a low bitrate speech codec.
adaptive deinterlacing the technique in which some data from the discarded field is used to construct a higher-quality progressive output.
ADPCM (Adaptive Differential Pulse Code Modulation) an audio compression technique that encodes the difference between samples, not the value of the sample itself. The basis of the IMA codec.
alpha channel (also key channel) an additional channel of the image that can be used to key such things as transparency. 32-bit video normally includes an 8-bit alpha channel.
AltiVec a SIMD architecture built into Motorola G4 CPUs that dramatically improves the performance of media processing with software that is optimized for it. See SIMD.
animated GIF a very early Web video format. Essentially, a series of 8-bit or fewer GIF images bundled into a single file. It offers poor compression efficiency for natural images, but can do very well with flat colors. Often used for Web banner advertisements.
API (Application Programming Interface) a set of programming hooks that provide a means for developers to write programs that connect to core functionality within someone else's application.
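The 4:2:0 entry above reduces chroma to one sample per 2×2 block of pixels. As an illustration only (this sketch is not from the glossary), here is one simple way to produce that subsampling in Python by averaging each 2×2 block of a chroma plane; it assumes the plane has even width and height and is stored as nested lists of 0-255 values.

```python
# Illustrative sketch of 4:2:0 chroma subsampling: one chroma sample
# per 2x2 block of pixels, produced here by averaging the block.
# Assumes a chroma plane with even width and height.

def subsample_420(chroma):
    """chroma: list of rows (lists) of 0-255 values; returns the half-size plane."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (chroma[y][x] + chroma[y][x + 1] +
                     chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(block // 4)  # average of the 2x2 block
        out.append(row)
    return out

# A 4x4 plane becomes 2x2: four chroma samples instead of sixteen.
print(subsample_420([[10, 12, 200, 202],
                     [14, 16, 204, 206],
                     [50, 52, 90, 92],
                     [54, 56, 94, 96]]))
```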

artifact a visible defect in a compressed image. Typical artifacts are ringing or blockiness.
ASF (Advanced Streaming Format) the file format of Windows Media.
.asf the file extension originally used for Windows Media files. See .wmv and .wma.
aspect ratio the ratio of width to height of pictures. Standard television is 4:3; widescreen video is 16:9. Feature films often use 1.85:1, 2.35:1, and other values.
.asx the original file extension for Windows Media streaming metafiles that reside on Web servers. See .wax and .wvx.
ATSC (Advanced Television Systems Committee) the group tasked with developing the technical standards of digital television. Also used to refer to the U.S. standard for digital television. www.atsc.org
AVI (Audio Video Interleave) a format developed by Microsoft that originally allowed audio and video to be combined in a single file that could play off a CD-ROM. Technically obsolete, but still widely used.
banding the visible distortion caused by insufficient bit depth. So called because it manifests itself as visible bands in what should be smooth color or tonal gradients (see the sketch following this group of entries).
bandwidth a measure of the amount of data per second, analog or digital, that a type of transmission can carry. On the Internet, typically measured in Kbps.
bartending the monitoring of the progress bar on a compression machine. While immensely aggravating on a deadline, with the right attitude bartending can make compression a contemplative experience. Bartending is an excellent time for zen meditation, email, and rereading this book.
bit (short for binary digit) a bit can have one of two possible values, one or zero. A byte is a group of eight bits. Two bytes (16 bits) grouped together are called a word. Among audio people, it's common to hear phrases such as 16-bit word and 20-bit word. In these phrases, the term word describes a single sample, no matter how many bits are used to measure it.
block the basic unit of DCT and other kinds of compression. Typical codecs use 8×8 blocks.
blocking the distortion in which different macroblocks exhibit different errors, so the edge between them becomes visible.
byte a group of eight bits.
CBR (Constant Bit Rate) an encoding method in which the bitrate remains constant across the length of the file being encoded. Note that while audio codecs are often truly CBR, the size of video frames will vary somewhat.
CCD (Charge Coupled Device) the light-sensitive elements used to convert light to a corresponding electrical charge in digital imaging systems such as scanners and digital video cameras.
CELP (Code Excited Linear Prediction) a common low bitrate speech codec in MPEG-4.
chroma the nonlinear color component of a video signal, not independent of luma.
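To make the banding entry concrete, here is a small illustrative Python sketch (not from the glossary): a smooth 8-bit gradient quantized down to 3 bits collapses into a handful of flat steps, which is exactly what shows up on screen as bands.

```python
# Illustrative sketch of banding: a smooth 8-bit gradient quantized to
# 3 bits collapses into a few flat steps, which appear as visible bands.

def quantize(value, bits):
    """Reduce an 8-bit value to the given bit depth and expand it back."""
    levels = (1 << bits) - 1
    step = round(value / 255 * levels)      # nearest coarse level
    return round(step / levels * 255)       # re-expanded 8-bit value

gradient = list(range(0, 256, 16))          # a smooth ramp
banded = [quantize(v, 3) for v in gradient] # only 8 distinct output values
print(banded)  # long runs of identical values = visible bands
```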

chroma subsampling (or color subsampling) the process of reducing color resolution by taking fewer color samples than luminance samples. 4:2:0 is a Y'CbCr subsampled color space.
codec (COmpressor/DECompressor) the technology used to compress and play back video and audio.
color space the various ways color is described mathematically. RGB and Y'CbCr 4:2:0 are color spaces.
compression efficiency refers to how well a codec preserves the quality of the source at a given bitrate. Codecs with better compression efficiency can deliver better quality at a given data rate or lower data rates at a given quality.
Corona the code name for Windows Media 9.
DCT (Discrete Cosine Transform) a mathematical technique in which a series of numbers is turned into coefficients for a series of cosines. In compression, it's widely used as the first stage of encoding digital video. DCT operations work on pixel blocks, usually in multiples of 8 (normally 8×8). See the sketch following this group of entries.
deinterlacing the process of converting interlaced video to progressive.
DirectShow the current Windows API for encoding AVI files and playing back all kinds of media files. Largely replaced VfW. Can have compatibility problems with some VfW video codecs.
dither a method to reduce banding by randomly distributing the error between the source pixel values and the pixel values possible in the output color space. Dithering can improve quality before compression, but makes compression much more difficult.
DRM (Digital Rights Management) the technology used to restrict access to content to licensed users. Different varieties are provided by many different vendors.
.divx the extension for an AVI file using MPEG-4 for video and typically MP3 for audio. Typically authored with the DivX codec.
DVD (often referred to as Digital Video Disc or Digital Versatile Disc, but without any official meaning for the acronym) the name of a series of technologies based around a high-density 5-inch optical disc format.
DVD-ROM (DVD-Read Only Memory) one of several DVD formats.
DVD-R (DVD-Recordable) one of several DVD formats, and the one most compatible with the widest range of DVD players for recording.
DVD-RW (DVD-ReWriteable) one of several rewriteable DVD formats.
DVD+RW (DVD+ReWriteable) one of several rewriteable DVD formats.
ECL (Edit Control Lists) used to store information such as where to inject I-frames, what filters to run, and inverse telecine cadence in MPEG encoders.
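As a rough illustration of the DCT entry (not from the glossary), the sketch below computes a plain 8×8 two-dimensional DCT-II in Python. It is the textbook formula, not an optimized codec routine; a flat gray block is used to show that all of the energy lands in the single DC coefficient.

```python
# Illustrative sketch of the 8x8 DCT used as the first stage of many video
# codecs: each block of pixels becomes 64 cosine coefficients, with most of
# the energy gathered into the low frequencies.
import math

N = 8

def dct_2d(block):
    """block: 8x8 list of pixel values; returns 8x8 DCT-II coefficients."""
    def c(k):                       # orthonormal scaling factor
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    coeffs = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            coeffs[u][v] = c(u) * c(v) * s
    return coeffs

flat = [[128] * N for _ in range(N)]       # a flat gray block
print(round(dct_2d(flat)[0][0]))           # all energy lands in the DC term: 1024
```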

field the odd and even lines of a frame of interlaced video are each one field. Typically referred to as top and bottom, with top being every other line from the first line, and bottom being every other line from the second.
FireWire the Apple trade name for IEEE 1394. (See IEEE 1394, i-link.)
frame rate a measurement of frames per second in an image. NTSC default is 29.97, PAL is 25, and film is 24.
fps (Frames Per Second) the unit of measurement for frame rate.
full screen traditionally 640×480, from the standard resolution of the Mac II computer.
gamma the name for the exponential value used to control the relationship between stored nonlinear luma and displayed linear luminance. Changing gamma values changes the brightness in the middle of the luma range, but not the ends. Different display technologies (Macintosh and Windows monitors, film, and so on) have different gammas, and video must be encoded to target its eventual display gamma (see the sketch following this group of entries).
GOP (Group of Pictures) in MPEG video, an I-frame followed by P- and B-frames. A closed GOP is a self-contained unit.
HD (High Definition) an image format that is higher resolution than SD (Standard Definition) television. Images of 720 lines of resolution or more are usually considered HD.
HSV (Hue, Saturation, Value); sometimes HSL (Hue, Saturation, Luminance) a color space system. It is used within content creation applications, but not for storage.
I-frame the MPEG terminology for a keyframe. An I-frame is self-contained, and starts every GOP.
IEEE 1394 a serial digital interface standard capable of running at speeds of up to 800Mbps or more.
i-link the Sony trade name for IEEE 1394. (See IEEE 1394, FireWire.)
interframe encoding the technique in which consecutive frames of video are compared so redundant elements can be removed. Interframe encoding is core to all delivery codecs.
interlace a method of capturing and displaying video in which each video frame consists of two fields, referred to as upper and lower. As each frame is scanned onto a display such as a television screen, first one field then the other is shown. The second field consists of scan lines that fall in between the first field's scan lines, hence the term interlaced. The technique makes fast motion appear smoother and reduces flicker. The opposite of interlace is progressive scan, in which each line of video is scanned onto the display in successive order.
inverse telecine the reverse of the 3:2 pulldown process, in which excess frames that were created to generate 60-field-per-second video are removed.
IPMP (Intellectual Property Management and Protection)
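To illustrate the gamma entry (a sketch of my own, not from the glossary), the functions below apply a single-exponent transfer curve with an assumed gamma of 2.2 to values normalized to 0.0-1.0; real display transfer functions differ in detail, but the midtone-only effect is the same.

```python
# Illustrative sketch of gamma: stored luma is related to displayed light
# by an exponent, so changing gamma shifts the midtones while the end
# points (0.0 and 1.0) stay fixed. Values here are normalized to 0..1.

def decode(luma, gamma=2.2):
    """Nonlinear stored luma -> linear displayed luminance."""
    return luma ** gamma

def encode(luminance, gamma=2.2):
    """Linear luminance -> nonlinear luma for storage."""
    return luminance ** (1 / gamma)

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(v, round(decode(v), 3))   # mid-range values change, 0 and 1 do not
```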

IRE a unit of measurement defined as 1 percent of the video range between blanking and peak white.
ISMA (Internet Streaming Media Alliance) the industry organization formed to accelerate the adoption of open standards for streaming rich media video, audio, and associated data over the Internet. Its members include Apple, Cisco, IBM, Kasenna, Philips, and Sun.
ISO (International Organization for Standards, yes, that doesn't match) an international body governing standards. Among others, they are responsible for the MPEG formats.
ITU-R BT.601 (formerly CCIR 601) the standard that defines the parameters of encoded digital video signals. Typically thought of as 4:2:2, sampled at 13.5MHz, with 720 luminance samples per active line digitized at 8 or 10 bits. ITU stands for International Telecommunications Union; R is for Radio spectrum, and BT is for Broadcast Television. (See the sketch following this group of entries.)
JPEG (Joint Photographic Experts Group) the organization that creates digital still image technologies. Also, the JPEG file format, which uses DCT and is very common on the Internet.
JPEG 2000 a wavelet-based still image codec from the creators of JPEG.
JMF (Java Media Framework) the API enabling audio, video, and other time-based media to be added to Java applications.
JVM (Java Virtual Machine) the core of Sun's popular Java language. The virtual machine refers to processor architecture emulation that allows Java applets to be run on a variety of platforms, regardless of CPU.
KBps (kilobytes per second)
Kbps (kilobits per second)
keyframe in compression, a self-contained frame which doesn't require a reference to any other frame. Immediate access in a compressed file is only possible at a keyframe. Called I-frame in MPEG. Also, a term from cel animation in which a single image plays an important (i.e., key) role in defining the action that follows. It has also come to refer to a group of parameters that define a set of actions that might change over time, as the program interpolates between one snapshot of the values and another.
LFE (Low Frequency E----) the .1 low-frequency channel in Dolby Digital 5.1 surround systems. The E in the acronym can mean a number of different things depending on its use: Extension, Effects, or Enhancement.
lossless a type of codec that preserves all of the information contained within the original file. Depending on the source, lossless compression can result in dramatic space savings or none at all.
lossy a type of codec that discards some data contained in the original file during compression. Lossy compression offers much smaller data rates than lossless.
luminance (luma) a video component pertaining to brightness, referred to as Y in YUV and Y'CbCr.
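As an illustration of the ITU-R BT.601 entry (not from the glossary), the sketch below converts nonlinear R'G'B' values in the 0.0-1.0 range to 8-bit studio-range Y'CbCr using the BT.601 luma weights. It is a minimal example, not a full implementation of the standard.

```python
# Illustrative sketch of the BT.601 R'G'B' -> Y'CbCr conversion, using 8-bit
# studio range (Y' 16-235, Cb/Cr 16-240). Inputs are nonlinear R', G', B'
# normalized to 0.0-1.0.

def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b      # BT.601 luma weights
    cb = (b - y) / 1.772                        # scaled blue difference
    cr = (r - y) / 1.402                        # scaled red difference
    return (round(16 + 219 * y),                # map to studio-range code values
            round(128 + 224 * cb),
            round(128 + 224 * cr))

print(rgb_to_ycbcr(1.0, 1.0, 1.0))  # white -> (235, 128, 128)
print(rgb_to_ycbcr(0.0, 0.0, 0.0))  # black -> (16, 128, 128)
print(rgb_to_ycbcr(1.0, 0.0, 0.0))  # red   -> (81, 90, 240)
```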

macroblock a group of four (usually) 8×8 pixel blocks (yielding a 16×16 block) used for motion estimation during encoding.
MP@ML (MainProfile@MainLevel) in MPEG, a 4:2:0 profile that covers broadcast television formats up to 720×486 at 30fps (NTSC) or 720×576 at 25fps (PAL), with data rates ranging from 2Mbps to 9Mbps.
MP@HL (MainProfile@HighLevel) in MPEG-2, a 4:2:2 profile at 720×486 (NTSC) or 720×576 (PAL), with data rates up to 50Mbps.
MBR (Multiple Bit Rate) a method of encoding in which files scale to match the available bandwidth during real-time streaming.
M-JPEG (Motion-JPEG) a 4:2:2 codec that stores each field in its own bitmap. Used for authoring; prevalent in the mid-90s.
MMX (MultiMedia extensions) an early SIMD instruction set added to Intel's Pentium processors.
MNG (Multiple-image Network Graphics) the motion graphics counterpart to PNG. A lossless RGB format that can encode to a variety of bit depths from 32-bit on down.
Moore's Law (named for Intel founder Gordon Moore) in its modern use, it states that the performance of CPUs will double every 18 months at the same price.
motion estimation a technique used to calculate if and where elements of an image have moved between one frame and the next. Motion estimation is a big differentiator between codecs, and the major consumer of CPU cycles during compression (see the sketch following this group of entries).
motion search the name given to algorithms used in motion estimation.
Motion-JPEG see M-JPEG.
MPEG (Moving Pictures Experts Group) a group working under the auspices of the ISO, formed to develop international compression/decompression standards.
MPEG-1 the original MPEG format. It is used in Video CD and a variety of multimedia tasks.
MPEG-2 an enhanced version of MPEG that added support for interlaced frames, as well as enhanced compression. It is used for DVDs, digital satellite video transmission, and digital cable.
MPEG-3 the working name for the HD MPEG format. It was eventually implemented as an extension to MPEG-2, and there is no MPEG-3 standard.
MPEG-4 a major new version of MPEG with much enhanced support for streaming, low bitrates, and compression efficiency.
MPEG-7 not a video format, MPEG-7 is a metadata solution for video, somewhat analogous to AAF. It is not yet supported in any products.
MPEG-21 a forthcoming format for rich, interactive media delivery. Still some years away from being used in real products.
.mov the standard QuickTime file extension.
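To illustrate the motion estimation entry, here is a hypothetical sketch (not from the glossary) of exhaustive block matching: it searches a small offset range in the previous frame for the block that minimizes the sum of absolute differences (SAD). Real encoders use far faster, smarter searches; frames are assumed to be nested lists of luma values.

```python
# Illustrative sketch of the block matching behind motion estimation: find
# the offset in the previous frame whose block best matches the current
# block, by minimizing the sum of absolute differences (SAD).

def sad(cur, prev, bx, by, dx, dy, size=8):
    """SAD between the current block at (bx, by) and the previous-frame
    block displaced by (dx, dy)."""
    total = 0
    for y in range(size):
        for x in range(size):
            total += abs(cur[by + y][bx + x] - prev[by + y + dy][bx + x + dx])
    return total

def best_motion_vector(cur, prev, bx, by, search=4, size=8):
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if (0 <= by + dy and by + dy + size <= len(prev)
                    and 0 <= bx + dx and bx + dx + size <= len(prev[0])):
                cost = sad(cur, prev, bx, by, dx, dy, size)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best  # (SAD, dx, dy) of the best match

prev = [[0, 0, 0, 0],
        [0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0]]
cur = [[9, 9, 0, 0],           # the bright square moved up and left by one pixel
       [9, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(best_motion_vector(cur, prev, 0, 0, search=1, size=2))  # -> (0, 1, 1)
```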

OHCI 1394 a particular kind of IEEE 1394 card for Windows. All modern cards are OHCI, but older ones like the Digital Origin Lynx cards aren't OHCI, and won't work with DirectShow applications.
PCI (Peripheral Component Interconnect) a high-speed bus for connecting peripherals to a CPU, developed by Intel in 1993. It is used for capture cards in both Mac OS and Windows computers.
perceptual noise shaping (PNS) an encoding scheme that reduces redundancies and irrelevancies within audio signals.
pixel doubled a technique used to scale an image up by repeating each pixel; thus a 160×120 image that is pixel doubled would become a 320×240 image.
premultiply a technique in which opacity is removed from an alpha channel. The term is derived from the math involved in the process: the opacity is multiplied by the value of the color against which an alpha channel is composited. Premultiplied alpha channels are opaque (see the sketch following this group of entries).
progressive download a technique in which a file is transmitted and will begin playing back before the entire file has been sent.
progressive scan a method of capturing and displaying video in which the signal is displayed in consecutive scan lines, as opposed to interlaced. Video meant for playback on computers is almost always progressive. Also, it has become common to use the letter p to indicate progressive scan content. For example: 24p (24fps, progressive) or 1080p (1,080 lines of resolution, progressive).
progressive GIF a GIF file that will be displayed in greater and greater detail as the file is received.
progressive JPEG a JPEG file that will be displayed in greater and greater detail as the file is received.
QT Apple QuickTime.
QTVR Apple QuickTime VR, as in Virtual Reality. Apple's 360-degree panoramic image technology.
.qti the QuickTime Image file extension.
quarter screen canonically 320×240, or half the height and width of the 640×480 resolution used by the first generation of color Macintosh computers.
QuickTime the first and most complete video authoring, distribution, and playback architecture. From Apple Computer.
raster the pattern of horizontal scan lines that make up a video picture.
RealMedia the native streaming media type used with RealNetworks products.
Red Book the spec for CD audio, first published in 1982.
resolution the height and width of an image.
RGB (Red, Green, Blue) the color space of computer displays.
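As an illustration of the premultiply entry (not from the glossary), the sketch below shows the math in the simplest case: multiply each color value by its opacity, after which compositing over an opaque background is just an add.

```python
# Illustrative sketch of premultiplied alpha: each color value is multiplied
# by its opacity up front, so compositing over a background reduces to a
# simple add. Values are 0.0-1.0.

def premultiply(color, alpha):
    return tuple(c * alpha for c in color)

def composite_premultiplied(fg_premul, alpha, bg):
    """Composite a premultiplied foreground over an opaque background."""
    return tuple(f + b * (1 - alpha) for f, b in zip(fg_premul, bg))

fg = (1.0, 0.5, 0.0)          # straight (non-premultiplied) orange
a = 0.25                      # 25% opaque
print(composite_premultiplied(premultiply(fg, a), a, (0.0, 0.0, 1.0)))
# -> (0.25, 0.125, 0.75): a quarter of the orange over three quarters of the blue
```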

ringing the compression errors that occur around sharp edges.
RLE (Run Length Encoding) the compression technique that depends on long strings of identical pixels, and as many as possible identical lines between frames. File sizes are reduced by replacing those pixel strings with a simple code that instructs the decoder to repeat color x, y times. RLE encoding is mostly used on black and white line art (see the sketch following this group of entries).
.ra the older file extension for RealAudio, supplanted by the .rm extension.
.rm the RealMedia file extension.
RTP (Real Time Protocol) the protocol on which RTSP is based.
RTSP (Real Time Streaming Protocol) the standard client-server control protocol for streaming.
safe area the portion of the image area guaranteed to be displayed on all televisions. The most common of these are title safe and action safe.
SD TV (Standard Definition TV) 720×486 pixels at 29.97fps (NTSC) or 720×576 at 25fps (PAL).
SDI (Serial Digital Interface) a lossless, digital interconnection format used in high-end facilities for video capture from DVCPRO50, Digital Betacam, and similar systems.
SDK (Software Development Kit) a set of developer's programming tools for writing code that takes advantage of an application's API.
SIMD (Single Instruction Multiple Data) the processor architecture that simultaneously performs a single operation on multiple items of data. SIMD can radically improve the speed of codecs and other media processing for software that uses it. (See AltiVec, MMX, SSE, and SSE2.)
SMIL (Synchronized Multimedia Integration Language, pronounced "smile") a programming language for authoring interactive presentations.
spatial compression the technique that reduces file size by reducing redundancy within a frame.
SSE (Streaming SIMD Extensions) the group of 70 instructions added to the Pentium III CPU to improve media processing performance over Intel's initial SIMD offering, MMX.
SSE2 (Streaming SIMD Extensions 2) the group of 144 instructions added to the Pentium 4 CPU to further enhance performance. These extensions are of less utility in video compression.
statistical multiplexing an inter-channel VBR used in cable and satellite delivery systems. Because the total bandwidth of cable and satellite systems is fixed, and any given channel's bandwidth can vary within that overall bandwidth limit, massive hardware statistical multiplexors simultaneously analyze all channels at once, dynamically adjusting the bandwidth among them to offer the highest average quality possible.
streaming real-time transmission of video and/or audio data. Streaming requires a particular kind of server software.
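The RLE entry describes replacing runs of identical pixels with repeat instructions. A minimal illustrative encoder and decoder, not from the glossary, might look like this:

```python
# Illustrative sketch of run length encoding: runs of identical pixels are
# replaced with (count, value) pairs. Works well on line art, poorly on
# noisy natural images.

def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1           # extend the current run
        else:
            runs.append([1, p])        # start a new run
    return [(count, value) for count, value in runs]

def rle_decode(runs):
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
encoded = rle_encode(row)
print(encoded)                          # [(4, 0), (2, 255), (3, 0)]
print(rle_decode(encoded) == row)       # True
```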

subsampling the process of reducing spatial resolution by taking samples that cover larger areas than the original samples, or of reducing temporal resolution by taking samples that cover more time than the original samples.
telecine the process of converting film to video. In NTSC, each film frame, which is essentially a progressive scan, is converted to fields, with the first frame becoming three fields, the next frame two fields, then three, then two, and repeating. This is called 3:2 pulldown (see the sketch following this group of entries). Reversing this process is called inverse telecine. In PAL, telecine simply speeds the film up to 25fps and it is captured progressive.
temporal compression the technique that reduces file size by reducing redundancy between frames. A critical feature of all delivery codecs.
UDF (Universal Disc Format) the standard describing a practical, recordable, random access file system. Typically used on DVD media.
UDP (User Datagram Protocol) a connectionless transport protocol that runs on top of IP networks. UDP offers only light error checking. It is used by all streaming formats.
UI (user interface) the means by which a user interacts with software- or hardware-based devices.
VBR (Variable Bit Rate) an encoding technique that allows the bitrate to fluctuate according to the complexity of the content being encoded.
VBV (Video Buffering Verifier) the portion of the MPEG-1 specification defining a minimum buffer size to allow MPEG-1 to work on memory-limited devices.
vector a method of storing information about graphical elements such as lines and curves as data describing lengths and angles. Vector graphics have the advantage of taking relatively little space, yet they can be scaled to any size without the usual distortion associated with scaling a fixed-resolution image too large.
vector quantization a video compression technique used in Cinepak and Sorenson Video 1 and 2.
Velocity Engine Apple's trademark for the AltiVec architecture used in G4 processors.
VfW (Video for Windows) the original Windows API for authoring and playing back AVI files. Largely replaced by DirectShow, although some applications still use VfW.
VLB an outdated high-speed bus used in Intel 486-based computers. It has been supplanted by the PCI bus on Pentium-class systems.
VOB files (Video Object files) the DVD-Video files that hold video and audio information.
VRML (Virtual Reality Markup Language) an early attempt at a standard description for 3D scenes and objects. Never succeeded as a format, but was used as the basis for MPEG-4 BIFS.
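To make the telecine entry's 3:2 pulldown cadence concrete, here is an illustrative sketch (not from the glossary) that expands four film frames into ten fields, then pairs the fields back into the five video frames the cadence produces.

```python
# Illustrative sketch of the 3:2 pulldown cadence: four film frames
# (A, B, C, D) become ten video fields (five frames of two fields each),
# turning 24fps film into roughly 30fps / 60-field video. Inverse telecine
# discards the duplicated fields.

def pulldown_32(film_frames):
    """Expand film frames into a field sequence using 3:2 pulldown."""
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 3 if i % 2 == 0 else 2    # alternate 3 fields, then 2
        fields.extend([frame] * copies)
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(fields)                                  # ['A','A','A','B','B','C','C','C','D','D']
print(list(zip(fields[0::2], fields[1::2])))   # paired into video frames: AA, AB, BC, CC, DD
```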

wavelet compression a compression technique in which a signal is converted into a series of frequency bands. It is processor-intensive, and so is considered better suited to still images than video (see the sketch following this group of entries).
.wax the Windows Media file extension for streaming audio metafiles that reside on Web servers.
White Book the document written by Sony, Philips, and JVC that extended the Red Book audio CD format to include digital video. The result is commonly referred to as Video CD.
WMA (Windows Media Audio) the name of the default audio codec in Windows Media.
.wma the file extension used for audio-only Windows Media files.
WMV (Windows Media Video) the name of the default video codec in Windows Media.
.wmv the file extension used for Windows Media Video files.
word see bit.
.wvx the Windows Media file extension for streaming video metafiles that reside on Web servers.
Y'CbCr the luminance and color difference signals in digital video. Note that it isn't a curly quote mark after the Y, but a straight single quote (').
YUV when used properly, Y refers to luminance, and U and V refer to subcarrier modulation axes in NTSC color coding, in which U stands for blue minus luminance and V for red minus luminance. However, YUV has become shorthand for any luma-plus-subsampled-chroma color space. When referring to files that reside in a computer, Y'CbCr is the proper term and is what this book uses.
YUV-9 the obsolete 9-bit-per-pixel color space that uses one chroma sample per 4×4 block of pixels.
YUV-12 another name for the 4:2:0 color space. It uses 12 bits per pixel.
YUV overlay the special hardware in a video card that allows it to take Y'CbCr data directly and convert it to RGB inside the card. This is faster and of higher quality than requiring the computer to handle the conversion itself. All modern video cards include a YUV overlay.
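As a rough illustration of the wavelet compression entry (not from the glossary), the sketch below performs one level of the simple Haar transform: the signal splits into a low-frequency band of averages and a high-frequency band of differences, and the step is exactly reversible. Repeating it on the averages yields the series of frequency bands the entry describes.

```python
# Illustrative sketch of the idea behind wavelet compression: a one-level
# Haar transform splits a signal into a low-frequency band (averages) and a
# high-frequency band (differences).

def haar_step(signal):
    """One level of the Haar wavelet transform; signal length must be even."""
    averages = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    details = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return averages, details

def haar_inverse(averages, details):
    signal = []
    for a, d in zip(averages, details):
        signal.extend([a + d, a - d])
    return signal

samples = [10, 12, 14, 18, 50, 52, 40, 38]
low, high = haar_step(samples)
print(low, high)                                 # smooth band, detail band
print(haar_inverse(low, high) == samples)        # lossless reconstruction: True
```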