Composite Video vs. Component Video

Composite video is a clever combination of color and black & white information. Component video keeps these two image components separate. Proper handling of each type of video is essential when optimizing preserved video quality.

Transitioning from Black & White to Color

When color television was introduced, the television receivers in most homes were only capable of receiving and displaying black & white images. With the advent of color television, it became necessary to configure the new color broadcast signal so that these black & white televisions would continue to display black & white images while new color televisions would display color images. This was a challenge because black & white televisions required a single video signal indicating the overall brightness of light at each point on the screen, whereas color televisions required three video signals, each representing the intensity of red, green, and blue light at each point on the screen.

The new color broadcast signal could not simply be a transmission of the red, green, and blue signals, since none of those signals could be used by existing black & white receivers to produce a satisfactory image. Instead, the black & white signal, as produced by a black & white camera, would still need to be transmitted. Fortunately, a simple process can be used to combine the red, green, and blue signals generated by a color camera into a single black & white signal, known as the luminance signal, or simply luminance. Today, this signal is commonly referred to as the Y signal.

The Component Video Concept

By combining the red, green, and blue signals using another simple process, two new signals can be generated. These two signals are called color difference signals. Today they are commonly identified as R-Y and B-Y. Since the luminance signal and both color difference signals each contain specific proportions of the red, green, and blue signals, the luminance and color difference signals can be used to accurately reform the red, green, and blue signals. With this reversible reconfiguration of the red, green, and blue signals, full color information is retained while the black & white information is separated out. This set of three signals (Y, R-Y, and B-Y) collectively forms component video.
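To make the relationship concrete, here is a minimal sketch of the conversion in both directions, in Python. The luma weights 0.299, 0.587, and 0.114 are the conventional broadcast values; the article does not state them, so treat the specific numbers as an assumption of this sketch.

```python
# Minimal sketch: forming Y, R-Y, and B-Y from R, G, B and reforming R, G, B.
# The 0.299/0.587/0.114 luma weights are the conventional broadcast values
# (an assumption; the article does not state them).

def rgb_to_component(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (the "black & white" signal)
    return y, r - y, b - y                   # Y, R-Y, B-Y

def component_to_rgb(y, r_minus_y, b_minus_y):
    r = y + r_minus_y
    b = y + b_minus_y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # solve the luminance equation for G
    return r, g, b

# Round trip: the three component signals fully recover the original R, G, B.
y, ry, by = rgb_to_component(0.8, 0.4, 0.2)
print(component_to_rgb(y, ry, by))           # -> approximately (0.8, 0.4, 0.2)
```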

Basing color television broadcasts on these three component signals not only allowed black & white televisions to operate properly when receiving color transmissions, but also allowed color televisions to correctly display black & white transmissions.

Adaptations for Limited Channel Space

A second problem confronted color television engineers. Since component video consists of three signals, three times as much information must be conveyed as was needed for black & white video. Since the spacing between television channels had already been established, the capacity of each channel could not be tripled. To solve this problem, engineers examined the characteristics of the human eye.

There are two types of photoreceptor cells in the human retina, called rods and cones. These cells are activated by light and give us the ability to see. Rods are smaller, more tightly spaced, and more sensitive to light than cones; there are approximately 20 times more rods than cones. Cones, on the other hand, are divided into three different types, each sensitive to a different color of light. Because rods are more tightly spaced and more light-sensitive than cones, they allow humans to see in dim light and give sharpness to our vision, but they cannot process color. In contrast, cones are less sensitive, but each type of cone allows us to see a different color of light. In short, rods provide us with detail while cones provide us with color.

Since color vision is less distinct than black & white vision, color information can be conveyed with less detail than black & white information without any perceptible loss. Television engineers took advantage of this insensitivity to color detail: it meant that the amount of information needed for a color image could be reduced to not much more than that needed for a black & white image. Because the three component video signals isolated the black & white information, the color information could be processed differently. The two color difference signals were filtered to remove some detail and lessen their need for channel space. However, for the three signals to be transmitted on a single channel, or through a single wire, it was still necessary to further combine them into one.
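As a rough illustration of the filtering step described above, the following sketch keeps the luminance at full resolution while halving the horizontal detail of the two color difference signals. The 2:1 reduction factor is purely illustrative; the article does not say how much detail was removed.

```python
import numpy as np

def reduce_color_detail(y, r_minus_y, b_minus_y):
    """Keep Y at full resolution; halve the horizontal detail of R-Y and B-Y.

    Each signal is a 2-D array (rows = scan lines, columns = samples).
    Averaging pairs of neighboring samples is a crude low-pass filter;
    the 2:1 reduction factor is illustrative, not taken from the article.
    """
    def halve_horizontal(c):
        c = c[:, : c.shape[1] // 2 * 2]           # drop an odd trailing sample
        return (c[:, 0::2] + c[:, 1::2]) / 2.0    # average adjacent pairs

    return y, halve_horizontal(r_minus_y), halve_horizontal(b_minus_y)

# Example: a 480-line frame with 640 samples per line.
y  = np.random.rand(480, 640)
ry = np.random.rand(480, 640)
by = np.random.rand(480, 640)
y2, ry2, by2 = reduce_color_detail(y, ry, by)
print(y2.shape, ry2.shape, by2.shape)   # (480, 640) (480, 320) (480, 320)
```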

Forming a Composite Video Signal

The process of combining the three component video signals into one is neither simple nor easily reversible. It takes place in two steps. The first step involves a complex process that combines the two filtered color difference signals together with another signal called the subcarrier. This sophisticated combination of signals (called quadrature modulation) results in a single signal known as chrominance. The chrominance signal alone carries all the color information.

This stage of the combination is essentially reversible. The reversal process is not simple, but the two color difference signals can be dependably reformed from the chrominance signal. If the luminance and chrominance signals are kept separate, the red, green, and blue signals can be recreated with only minor distortions. Together, the luminance and chrominance signals constitute a quasi-component form of video commonly referred to as S-Video. The amount of information required for color is reduced to two signals, yet the original red, green, and blue signals can still be faithfully recreated with much of the perceivable image quality intact.

In the final stage of combining the component video signals, the luminance and chrominance signals are simply mixed together. Once this last step is complete, one signal contains both the color and black & white information. This single color video signal is called composite video.
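The two-step combination just described might be sketched as follows. The subcarrier frequency used here (about 3.58 MHz) is the NTSC value and is an assumption of the sketch, as are the omissions of scaling, sync, and burst. The simple sum in the last step also hints at why, as the next section explains, the process is hard to reverse.

```python
import numpy as np

FSC = 3.579545e6        # color subcarrier frequency (NTSC value; an assumption)
FS  = 4 * FSC           # sample rate for this sketch

def make_composite(y, r_minus_y, b_minus_y):
    """Sketch of composite encoding for one scan line of samples.

    Step 1: quadrature-modulate the two color difference signals onto the
            subcarrier to form a single chrominance signal.
    Step 2: add chrominance to luminance to form composite video.
    (Real encoders also scale, filter, and add sync and burst; omitted here.)
    """
    t = np.arange(len(y)) / FS
    chroma = r_minus_y * np.sin(2 * np.pi * FSC * t) \
           + b_minus_y * np.cos(2 * np.pi * FSC * t)   # quadrature modulation
    return y + chroma                                   # simple sum = composite

# One synthetic scan line: a luminance ramp with constant color difference values.
n = 910
line = make_composite(np.linspace(0, 1, n), np.full(n, 0.1), np.full(n, -0.2))
print(line.shape)
```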

Composite Video Image Distortions

While the final mixing step in creating composite video is simple, it is not easily reversed. This is a significant problem because, in order to recreate the red, green, and blue signals, the luminance and chrominance signals must be separated. The subcarrier, which is combined with the color difference signals when creating the chrominance signal, helps facilitate the un-mixing process. It differentiates the chrominance signal from the luminance signal, although there remains a degree of overlap. This overlap exists because of each television channel's constrained capacity, and it means it is not always possible to perfectly separate the luminance and chrominance signals once combined.

Imperfect separation of luminance and chrominance results in visible distortions when a composite video signal undergoes the transition back to red, green, and blue. Patterns of alternating black and white can be reproduced as color, for example. This so-called cross-color effect may appear as rainbows on striped clothing or on textured surfaces. Sharp transitions from one color to another can also create dots which appear to crawl along the edge between the two colors. This is known as a cross-luminance effect, or dot crawl. Consequently, both color and black & white televisions displayed slightly distorted images when receiving composite video transmissions.

While composite video played a crucial role in the development of color television, it clearly has its disadvantages. Not only does it hinder reconstruction of the original red, green, and blue signals, it is also difficult to record onto magnetic tape.

The Challenge of Recording Composite Video

The first two widely adopted videotape recording systems embodied extraordinary efforts to faithfully record color video. They produced reasonably good recordings, but these systems were bulky and expensive. They used heavy open reels of tape which required powerful motors to spin them. Making such equipment portable meant the machines had to break apart into pieces, some of which required two people to carry. However, these recorders could record and play back a composite video signal with minimal distortion. Since these systems could directly record a color composite video signal, their system of recording became known as direct color.

In order to develop smaller, lighter, and less expensive color recorders able to record on cassette tapes, the quality of the recording had to be further compromised. Early video recorders faced a limitation similar to the one created by the spacing between television channels. The width of any communication channel, whether over the air or within a video recorder, is called bandwidth. The design goals for the second generation of video recorders required that their bandwidth be reduced; they simply could not record all of the detail present in a composite video signal.

Reducing the detail of the luminance signal can be done using a simple filter. However, due to the nature of the subcarrier, the chrominance signal cannot be filtered without completely corrupting it. These recorders therefore separated the luminance and chrominance signals so that each could be processed individually: the luminance signal was filtered as needed, and the chrominance signal was subjected to a process known as heterodyning to reduce the color information's bandwidth requirement. Early videocassette recorders all used this method to record composite video signals. This system of recording is called heterodyne color, or color-under. UMATIC, Betamax, VHS and Video 8 are examples of recorders which fall into this category.
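Heterodyning can be pictured as a frequency shift: the chrominance signal, centered on the broadcast subcarrier, is mixed with a local oscillator and low-pass filtered so that it ends up centered on a much lower color-under frequency that the recorder can handle. The specific frequencies below (a 3.58 MHz subcarrier and a roughly 629 kHz color-under frequency, as used by NTSC VHS) are assumptions for illustration only; the article gives no numbers.

```python
import numpy as np

FSC     = 3.579545e6    # broadcast color subcarrier (NTSC value; assumed)
F_UNDER = 629e3         # approximate VHS color-under frequency (assumed)
FS      = 17.9e6        # sample rate for this sketch (about 5 x FSC)

def heterodyne_down(chroma):
    """Shift chrominance from FSC down to F_UNDER (a color-under sketch).

    Mixing with a local oscillator at FSC + F_UNDER produces sum and
    difference frequencies; a crude FFT low-pass keeps only the difference
    component, which sits at F_UNDER.
    """
    t = np.arange(len(chroma)) / FS
    mixed = chroma * np.cos(2 * np.pi * (FSC + F_UNDER) * t)

    # Crude low-pass: zero every frequency bin above 1.5 * F_UNDER.
    spectrum = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(mixed), d=1 / FS)
    spectrum[freqs > 1.5 * F_UNDER] = 0
    return np.fft.irfft(spectrum, n=len(mixed))

# A bare subcarrier in, a tone near 629 kHz out.
t = np.arange(4096) / FS
downconverted = heterodyne_down(np.cos(2 * np.pi * FSC * t))
```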

Eventually, UMATIC-SP (UMATIC Superior Performance), S-VHS (Super VHS), ED Beta (Extended Definition Betamax) and Hi8 (High-Band Video 8) were developed to improve the recorded image quality by increasing these recorders' bandwidth. This improvement was made possible by a new generation of videotape known as metal-particle tape, as opposed to the earlier ferric-oxide tape. This generation of recorders also included S-Video input and output connectors in addition to composite video connectors. S-Video connectors support a two-wire system that keeps the luminance and chrominance signals separate. While the improved bandwidth of these recorders remained somewhat limited, the use of S-Video connectors removed the problematic need to separate the chrominance and luminance signals.

Component Video Recording

Just prior to the development of metal-particle tape, another approach to improving recorded image quality was taken. This approach abandoned recording composite video in favor of recording component video. Two new recording formats, Betacam and MII, moved from a single-wire to a three-wire interconnection system. This not only kept the luminance signal separate, but also eliminated the need to combine the two color difference signals. The bandwidth needed to record the two color difference signals was halved through a process known as Compressed Time Division Multiplexing (CTDM).

These new component video recorders were intended for the broadcast market and its close cousin, the ENG (Electronic News Gathering) market. They not only recorded component video for the first time, they also introduced the first camcorders. While the MII format never saw much use, the Betacam format was widely adopted. Shortly after Betacam's introduction, Betacam-SP was released, offering greater bandwidth through the use of metal-particle tape. While recordings made using the Betacam-SP format were excellent, component video did have a major downside: since three wires are needed to carry the video information instead of one, major infrastructure changes were needed at post-production and broadcast facilities. This limited the scope of component video's adoption.

Digital Component Video

Today, digital technology has allowed component video to become the preferred method of representing video information. Advances in electronics and in digital technology not only allow the three component video signals to pass through a single wire without being combined, they also allow the same wire to simultaneously carry 16 channels of audio. Component video therefore forms the basis of all commonly used digital video equipment.
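As background not stated in the article, digital component video carries these same three signals, usually written Y', Cb, and Cr, where Cb and Cr are simply scaled versions of the B-Y and R-Y color difference signals discussed earlier. The scale factors in this sketch are the ones defined for standard-definition digital component video (ITU-R BT.601).

```python
# Digital component video (ITU-R BT.601 scaling; background assumption, not
# from the article): Cb and Cr are scaled B-Y and R-Y color difference signals.

def analog_to_digital_component(y, r_minus_y, b_minus_y):
    cb = b_minus_y / 1.772   # equivalent to 0.564 * (B - Y)
    cr = r_minus_y / 1.402   # equivalent to 0.713 * (R - Y)
    return y, cb, cr
```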

While composite digital equipment does exist and was somewhat popular during the analog-to-digital transition period, its use was brief.

Preserving Image Quality

Component video can be transformed into composite video through the process already described, which is referred to as encoding. Component video equipment invariably provides composite video connections for compatibility with composite video equipment. However, the use of these connections should be avoided whenever possible: even if video information is originally in component form, once it is encoded into composite form, much of the benefit of component video is lost.

Since all modern digital standards are based on component video, when migrating analog component video or S-Video recordings to digital, the use of composite video at any point in the process should be avoided; otherwise, image quality will be reduced. Furthermore, when converting analog composite video to a digital form, it will be necessary to transform the composite video signal into the three component video signals. This process is known as decoding, and as previously noted, it is an imperfect process.

Since decoding composite video has long been important to retaining good image quality, much effort has gone into developing the best methods possible. The most successful decoding methods are sophisticated but expensive to implement, so not all decoders produce the same results. Where image quality is important, it is essential to use a high quality decoder. The goal of using the best decoder also holds true when archiving digital composite video, but this is even more of a challenge: for best results, a digital decoder should be used, yet few digital decoders were made and even fewer survive today.
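To illustrate why decoders vary so much in quality, the sketch below shows one of the simplest separation methods, a two-line comb filter for NTSC-style composite video: because the subcarrier inverts its phase from one scan line to the next, averaging adjacent lines cancels most of the chrominance, while differencing them cancels most of the luminance. This is only the most basic class of decoder, and the details are standard practice rather than anything stated in the article; high quality decoders use adaptive two- and three-dimensional filtering.

```python
import numpy as np

def comb_filter_separate(composite):
    """Very basic 1-H comb filter Y/C separation for NTSC-style composite video.

    `composite` is a 2-D array (rows = scan lines). The color subcarrier is
    assumed to invert phase on each successive line, so:
      average of adjacent lines  -> chrominance cancels -> luminance estimate
      difference of adjacent lines -> luminance cancels -> chrominance estimate
    Vertical luminance detail still leaks into chrominance (cross-color), which
    is why better decoders switch adaptively between filtering strategies.
    """
    upper = composite[:-1, :]
    lower = composite[1:, :]
    luma   = (upper + lower) / 2.0
    chroma = (upper - lower) / 2.0
    return luma, chroma

frame = np.random.rand(480, 910)       # stand-in for one composite field
y_est, c_est = comb_filter_separate(frame)
print(y_est.shape, c_est.shape)        # (479, 910) (479, 910)
```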