
CMOS Video Cameras and EECS 452

K. Metzger, February 24, 2009

Figure 1: C3088/OV6620 CMOS video camera mounted on the Spartan-3 Starter Board via connector B1.

1 Introduction

Several past EECS 452 projects have made use of video cameras. A significant amount of project time was consumed researching possibilities, choosing a camera, physically interfacing it to either a DSK or an FPGA, and then programming and/or designing support logic. Only at that point was a team ready to get on with its project in the time remaining in the semester. Usually not much. There are a number of significant steps between saying "I'm going to locate tennis balls" and actually getting an image on which to start trying.

The primary purpose of this note is to gather in one place useful overview information about video options that might be useful to an EECS 452 project. The information in this note can be augmented by studying the relevant data sheets,

searching out information on the web and experimenting with actual hardware. I have learning-exercise-level OV6620 VHDL support that can be had for the asking. It was written mostly to help me learn what is involved in working with these small cameras. A useful source of information for vision-oriented student projects can be found at http://www.cs.cmu.edu/~cmucam/home.html.

I have had at least three groups come by my office this semester to ask about cameras and their use. I've given out copies of a CD holding materials that I've collected, along with the VHDL for my camera/monitor experimental setup, which I also demonstrated. Unfortunately there isn't any real way to open the hood and see what's going on inside. It seemed to me that a more coherent presentation would be useful, hence this document. The writing is a bit on the rough side and the content could be more complete. It was written/assembled in a two-day period at the start of Spring Break. If it proves useful and desirable, more effort will be put into it for possible use by future teams. For now, it contains much more information on the topic than was available to any previous EECS 452 video-oriented project team at the start of their project. By a considerable amount.

2 A short walk through

Scene -> Camera -> Interface -> Frame Buffer -> DSP -> Frame Buffer -> Interface -> Display

1. Scene content and lighting (sun, incandescent, fluorescent, etc.) will have a major effect on an image's color quality. What the camera sees is generally not what the user sees, color-wise. This is because of the adaptability of the human eye/brain: we see what we expect to see. In the old days, photographers used films matched to the color temperature of the expected lighting; there were separate films for daylight and for incandescent lighting. Modern digital cameras monitor the light characteristics and adjust their color sensitivity accordingly. Generally, but not always, this works well.

2.
A camera has a lens that focuses an image onto a CMOS imaging surface. The surface contains photosensitive diodes that accumulate charge depending on the light falling on them. A picture is started by clearing the stored charge, after which the diodes are allowed to accumulate charge again. These charges can then be read and transferred into other, non-light-sensitive diodes or transistors, effecting an electronic shutter.

3. The image frame is organized in terms of rows of pixels. Some of the photodiodes are sensitive to red light, some to green and the remainder to blue. The color sensitivities are commonly arranged in a specific pattern

(Bayer). The Bayer pattern uses 50% green-sensitive pixels, 25% red-sensitive pixels and 25% blue-sensitive pixels. The camera electronics interpolates these pixels in order to generate a full-color image. There is a loss of resolution compared to a black-and-white image generated using the same pixel information. "The Bayer arrangement of color filters on the pixel array of an image sensor." Image and text from Wikipedia.

4. The image frame is scanned out to the outside world, typically by the camera itself. This has to be done in a timely manner, before the charges on the photodiodes deteriorate significantly. The data is interpolated and reformatted in this process.

5. A typical CMOS camera-to-user interface has 8 or 16 lines for pixel data, a clock, and synchronization signals. Synchronization signals typically include start of frame and start of scan line (CMOS sensor row). Most CMOS cameras are configured using I2C or a similar (proprietary) interface. A reset line is also likely present.

6. The simple CMOS cameras available to us (i.e., left over from prior projects) encode the pixel color/intensity values using 8 bits of luminance information and 8 bits of chrominance information. It takes a bit of work to convert these values back to RGB. It is possible to scan the RGB values out as well, but these generally use the Bayer pattern; in this case a pixel is red only, green only or blue only. The luminance values come from each pixel and can be used to generate a black-and-white image with higher resolution than a full-color picture has. The chrominance (color) information results from an interpolation of nearby pixels. In some sense one is given 16 bits of combined information and is asked to extract three 8-bit color values. This is not likely to be an exact process.

7. If the image frame is not immediately consumed, it must be placed into a buffer memory where it can be worked on. Quite often this memory is sized to hold two image frames, one being loaded while the other is being worked on. If the camera outpaces the processing, one needs either to get a more powerful processor or perhaps to drop the data rate by skipping frames.

8. The processing of an image is application dependent. In normal operation (frame after frame) there isn't a lot of time to do much using our hardware. Moving an image into the C5510 can be accomplished either using the S3SB via the McBSP (running a fire hose into a straw) or via a DIY or purchased DMA interface (a garden hose). Either way, frames likely will need to be skipped.

9. The remaining steps involve generating a display frame to be sent to a video monitor.

10. Displays can be generated on the fly, from information contained in a display buffer, or from some combination. Most monitors have three analog color inputs (red, green and blue) and two inputs for synchronization signals (vertical sync and horizontal sync). The vertical sync waveform signals the start of a display frame. The horizontal sync waveform signals the start of a row of pixels.

11. Almost all modern LCD monitors support a wide range of display sizes and have significant image synchronization abilities. If driven using somewhat off-standard timing they generally will complain. The old analog monitors often could be easily damaged by out-of-tolerance timings.

The above is a reasonably accurate broad-stroke description of what happens going from camera to display via a processing application. Lots and lots of details need to be considered when creating a working system. When implementing your own system, your best friends will be the relevant data sheets/manuals and a systematic, step-by-step implement-and-test process.
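The Bayer interpolation mentioned in step 3 can be made concrete with a minimal sketch. The crudest possible "demosaic" collapses each 2x2 cell to one RGB pixel, averaging the two green samples. Real cameras interpolate to full resolution instead, but the underlying trade-off (color is sampled more sparsely than luminance) is the same. This is an illustrative sketch only; the RGGB cell ordering is an assumption, and actual sensors vary.

```python
def demosaic_2x2(raw):
    """Collapse each 2x2 Bayer cell of an RGGB mosaic into one RGB pixel.

    raw: list of rows (even row/column counts), laid out as
         R G R G ...
         G B G B ...
    The two green samples in each cell are averaged.  The result has
    half the resolution of the mosaic in each direction.
    """
    out = []
    for i in range(0, len(raw), 2):
        row = []
        for j in range(0, len(raw[0]), 2):
            r = raw[i][j]
            g = (raw[i][j + 1] + raw[i + 1][j]) / 2.0
            b = raw[i + 1][j + 1]
            row.append((r, g, b))
        out.append(row)
    return out
```

For example, a single cell [[10, 20], [30, 40]] yields one pixel with R = 10, G = 25.0 (the average of 20 and 30) and B = 40.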
3 CMOS video cameras on hand

OV6620: CMOS, nominally quarter VGA
OV7620: CMOS, nominally VGA
Samsung/Sparkfun E700 (MagnaChip HV7131GP): VGA

We also have a Digilent VDEC1 composite-video-in to digital data stream converter. This has been used several times in conjunction with the Digilent Virtex-2 FPGA board.

When reading data sheets (and any code that I supply) stay alert for inconsistencies and errors. For example, the OV6620 data sheet claims an image size of 101,376 pixels but specifies a frame size of 356 x 292 = 103,952 pixels. These values tripped up a student who didn't check the product and assumed that the values were consistent.

Projects are not required to use these specific cameras or camera interfaces. Other choices exist. In particular, the Altera DE2 board used in EECS 270 has a composite video input. The follow-on version, the DE2-70, has two such inputs. Potential image capture/processing units are: DIY frame buffer/SPI, Spartan-3 Starter Board, Digilent XUP Virtex-2 Pro, Altera DE2 and DE2-70, other?

Some standard image sizes:

Format       h     v    pixel count
VGA          640   480  307,200
Quarter VGA  320   240  76,800
XVGA         1024  768  786,432

When working with images, keep in mind that what you see and perceive in terms of color and what a camera sees and perceives are quite often different.

Cameras and their image sizes:

Camera  frame dimensions  pixels   pixel clock
OV6620  356 x 292         103,952  8.86 MHz
OV7620  664 x 492         326,688  13.5 MHz
E700    652 x 488         318,176  12.5 (?) MHz

The pixel clock numbers for the OmniVision parts are for 16-bit pixels and are generated by the included interface board. The clock value doubles when using 8-bit pixels. Camera-generated pixel rates are asynchronous to the S3SB clock. A clock boundary is crossed moving pixel data from the camera into the FPGA. Metastability can be a problem and should be taken into account.
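The table entries and the data-sheet inconsistency above are easy to check with a few lines of arithmetic, which also answers the frame-buffer sizing question raised in Section 4.3. This is a sketch; the scheme of packing two 8-bit pixels per 16-bit word is the one described later in the text.

```python
# Pixel counts for the cameras on hand.
cameras = {"OV6620": (356, 292), "OV7620": (664, 492), "E700": (652, 488)}
counts = {name: h * v for name, (h, v) in cameras.items()}

# The OV6620 data sheet's "image size" of 101,376 pixels is not the
# product of its stated 356 x 292 frame; it does factor as 352 x 288,
# so the two figures evidently describe different things.
assert counts["OV6620"] == 103952
assert 352 * 288 == 101376

# S3SB SRAM: two banks of 256K 16-bit words.  Packing two 8-bit pixels
# per word, how many OV6620 frames fit?
sram_words = 2 * 256 * 1024
words_per_frame = counts["OV6620"] // 2
frames_that_fit = sram_words // words_per_frame
# 10 frames fit in principle; the text's "up to eight with reasonable
# care" leaves headroom for addressing convenience.
```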

Figure 2: C3088 camera board with OV6620 camera mounted, along with the camera adapter board used to connect to the Spartan-3 Starter Board B1 connector.

3.1 OV6620 and OV7620

These cameras require a 5 Volt supply. However, the output drivers can be powered by 3.3 Volts, allowing direct interfacing to the Spartan-3 Starter Board. Camera boards need to be modified to allow powering the output drivers from 3.3 Volts; the camera itself still needs to be powered from a 5 Volt supply. I have been modifying cameras on an as-needed basis. The camera is configured using an I2C (?) interface.

3.2 Samsung E700

The camera is configured using an I2C interface. The user supplies the clock. A maximum 25 MHz clock is allowed, resulting in a 30 frames per second image rate. The pixel clock is likely 12.5 MHz; the data is multiplexed, giving a 25 MHz 8-bit byte rate. I think. Lower clocks can be used to drop the frame and pixel rates. I couldn't locate a minimum value.
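The E700 clock and frame-rate figures above can be related with quick arithmetic. A sketch: the 25 MHz clock, the byte multiplexing and the 30 frames per second come from the text; the blanking-fraction inference at the end is my own and should be checked against the HV7131GP data sheet.

```python
mclk = 25e6              # maximum user-supplied clock, Hz
pixel_clock = mclk / 2   # two multiplexed bytes (Y, C) per pixel
active_pixels = 652 * 488
clocks_per_frame = pixel_clock / 30   # pixel clocks per frame at 30 fps
blanking_fraction = 1 - active_pixels / clocks_per_frame
# Roughly a quarter of each frame period is overhead (blanking
# intervals between rows and frames).
```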

Figure 3: Block diagram showing the internal organization of the OV6620 CMOS camera (356 x 292 image array, analog processing, ADCs, video timing generator, exposure and white balance control, SCCB interface). From the OV6620 data manual. Note from the diagram: the UV(7:0) outputs are not available on the OV6120.

Figure 4: Sparkfun breakout board with E700 camera mounted. Image from the Sparkfun catalog page.

4 Capturing frames

One needs either to process on the fly or to capture a frame, or perhaps a portion of a frame. The Spartan-3 Starter Board has 1 Mbyte of 10 ns static RAM. This can be used to capture an image. Then the S3SB-McBSP link can be used to transfer the image into a file on the PC. One can use the USB-PC link to move images directly to a PC file and/or MATLAB. Students have captured images in the past. I presently don't have any VHDL and/or C and/or MATLAB code for doing this, though I do have pieces that I would use were I to do it. At present, this is a task left as an exercise for the student.

4.1 Data formats

The default format consists of Y/UV values. Y refers to luminance (brightness) information and the UV values are chrominance values: U is blue minus luminance and V is red minus luminance. Pixels arrive from the camera as 16-bit values of the form YU, YV, YU, YV, .... In order to generate an RGB display, pairs of these are used to generate two RGB pixels. One needs to study the camera data sheet intently to understand how to do this correctly. The luminance information changes pixel to pixel. The same B and R information is used to generate the R and B components of two RGB pixels. The G value

Figure 5: E700 camera mounted on the adapter board used to connect it to the Spartan-3 Starter Board B1 connector.

is generated using the same U and V values and so also affects two RGB output pixels. Essentially the color information has less resolution than the intensity information. I found the data sheets confusing and had my problems with implementing the conversion from YUV to RGB. I found the task a challenge.

The OV6620 cameras allow acquiring the raw RGB pixels. One could use these pretty much directly, taking into account the Bayer pattern. I've not tried this. I've not checked the E700 data manual to see if this is an option, but I expect it likely is.

4.2 Signals and their timing

The basic set consists of:

Reset: A signal to the camera placing it into a known state. The OmniVision cameras are put into 16-bit bus mode by reset.

Pixel clock: Generated by the camera in order to allow clocking the data off of the pixel bus.

Pixel bus: The pixel information. The OV cameras can be operated in either 16- or 8-bit mode. The 8-bit bus data is clocked at twice the rate used with the 16-bit bus. The E700 uses an 8-bit bus.
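The Y/UV-to-RGB conversion discussed in Section 4.1 can be sketched in the same nearest-1/16th fixed-point style the practice VHDL uses (Section 6.2). The coefficients below are the common BT.601 full-range constants rounded to the nearest sixteenth; they are an assumption, since the exact values from the Xilinx application note are not reproduced here.

```python
def yuv_to_rgb(y, u, v):
    """Convert one Y/U/V triple (each 0..255, U and V offset by 128)
    to 8-bit R, G, B using 1/16-step fixed-point coefficients.
    Assumed roundings: 1.402 -> 22/16, 0.344 -> 6/16,
    0.714 -> 11/16, 1.772 -> 28/16."""
    cu, cv = u - 128, v - 128
    r = y + (22 * cv) // 16
    g = y - (6 * cu + 11 * cv) // 16
    b = y + (28 * cu) // 16
    clamp = lambda x: max(0, min(255, x))
    return clamp(r), clamp(g), clamp(b)
```

Since each YUYV group carries two Y values but only one U/V pair, this function would be called twice per group with the same chrominance, which is exactly the shared-color behavior described above.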

Figure 6: Block diagram showing the internal organization of the HV7131GP CMOS camera used in the Samsung E700 cell phone camera (652 x 492 pixel array, timing control, I2C slave, ADC, auto white balance and exposure control, color interpolation, color space conversion, output formatting). From the HV7131GP data manual. The Y and C lines are time multiplexed onto a single set of 8 pins.

Figure 7: OV6620 pixel timing diagram (PCLK, HREF and the Y[7:0] data bus; the rising edge of PCLK latches the data bus). From the OV6620 data manual.

HREF: Indicates the start of a row of pixels.

VSYNC: Indicates the start of a new frame.

Figure 7 shows the timing associated with scanning out a single row using 8-bit pixels. The pixel data is sent out starting UYVY.... The pixel clock period is nominally 56 ns and the maximum value of Tsu and Thd is 15 ns.

4.3 Frame buffer

For the OV6620 the pixel rate is nominally 8.9 MHz and a new frame is generated approximately every 0.0166 seconds (60.1 Hz frame rate). A frame contains 103,952 pixels. How much memory is needed? Should one implement a DIY frame buffer on a separate board? Should one use two buffers, one to hold a frame while the other is being loaded? Things to think about.

The SRAM memory on the S3SB is organized as two banks of 256K 16-bit words. With reasonable care it should be possible to store up to eight OV6620 frames, or two E700 or OV7620 frames. If two 8-bit pixels are placed into a 16-bit word, then the data rate into memory is about 5M words per second for the OV6620.

4.4 Digilent VDEC1 Video Decoder Board

A camera must be supplied separately. Uses a Hirose FX2 data connector. Compatible with the Virtex-2 FPGA board and the Nexys-2. This board has been used by three past projects, with reasonable success. The ADV7183 is configured... To be added someday.

WARNING: be very careful when assigning pins in your FPGA design based on the pins on the VDEC1 schematic! I've not checked, but they are likely to be mirror imaged compared to the pin names on the FPGA board, with even and odd possibly being interchanged as well. There is actually a good reason why this might have been done.

4.5 Altera DE2 and DE2-70

These boards have not been previously used by an EECS 452 project. The DE2 has support for one video input channel. The DE2-70 has support for two input video channels. The Altera DE2 is used in EECS courses and there are boards available that can be borrowed for use by an EECS 452 project. The DE2 uses the... video decoder chip.

Figure 8: VDEC1 Video Decoder board and associated block diagram. A compatible camera is to be supplied by the user. Images from the Digilent web pages.

The DE2-70 uses the... video decoder chip.

5 Displaying an image

Something left to write about. It is not much more complicated than was done in lab. A single entity was used to generate the sync pulses and the addresses of the next pixel to be sent to the monitor.
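The sync-and-address generation mentioned above boils down to two counters compared against 640 x 480 timing parameters. A sketch of the comparisons (the porch and sync widths are the common VESA-style numbers for 640x480 at 60 Hz; the lab's exact parameters may differ slightly):

```python
# 640x480@60 Hz timing, in pixel clocks (horizontal) and lines (vertical).
H_VISIBLE, H_FP, H_SYNC, H_BP = 640, 16, 96, 48
V_VISIBLE, V_FP, V_SYNC, V_BP = 480, 10, 2, 33
H_TOTAL = H_VISIBLE + H_FP + H_SYNC + H_BP   # 800 clocks per line
V_TOTAL = V_VISIBLE + V_FP + V_SYNC + V_BP   # 525 lines per frame

def sync_signals(h, v):
    """Return (hsync, vsync, visible) for pixel column h and line v.
    VGA sync pulses are active low."""
    hsync = not (H_VISIBLE + H_FP <= h < H_VISIBLE + H_FP + H_SYNC)
    vsync = not (V_VISIBLE + V_FP <= v < V_VISIBLE + V_FP + V_SYNC)
    visible = h < H_VISIBLE and v < V_VISIBLE
    return hsync, vsync, visible

# 800 x 525 clocks per frame: a 25 MHz pixel clock gives just over
# 59.5 frames per second (the nominal VGA clock is 25.175 MHz).
```

In the VHDL this becomes two counters and the same three comparisons, with the (h, v) pair doubling as the address of the next pixel to fetch.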

Figure 9: Block diagram of the Analog Devices ADV7183 Multiformat SDTV Video Decoder (input mux for CVBS/S-video/YPrPb, 10-bit A/D converters, sync processing and clock generation, standard definition processor, VBI data recovery, output formatter). From the Analog Devices ADV7183 data manual.

6 My practice video VHDL

Figure 10: OV6620 camera-in to video display VHDL modules: Camera0Top, CamCom, OV6620, VGAtiming, DisplayMan, MemManager. Seven-segment display not shown.

The goal was to use the C3088/OV6620 as a video camera with the video being displayed on a computer monitor. The main challenges were:

moving pixels from the camera output into a display memory,
converting the camera Y/UV pixel format into RGB pixels for the monitor,
moving data from the frame memory to the display, and
managing the frame memory.

I cheated and did not synchronize the framing of the data between the camera and the display monitor. Pixels are written into the frame memory as they arrive from the camera and are sent to the display as needed by the monitor. In a real sense the frame buffer is used to convert between frame rates. It was felt that this was a reasonable first-cut solution for dealing with non-synchronous pixel rates. The result works well and, thinking more about it, I don't know what else one would rationally do.

My original effort used a 16-bit pixel interface adapter to go from the camera board to the S3SB B1. This was later replaced by an 8-bit pixel interface board which also allows generation of 8-bit VGA output, something the S3SB does not have provision for. This board was also designed to allow use with the E700 camera.

6.1 CamCom

This entity is used to interact with a user via the S3SB slide switches, push buttons, LEDs and seven-segment display.

When the OmniVision cameras are first powered or reset they go into 16-bit mode. The camera control subsystem is initialized such that all that needs to be done in order to switch into 8-bit mode is to press push button 0. A somewhat creaky and inelegant procedure, but it was quick and expedient. Someday it will be replaced. This interface was intended to allow an experimenter to have read and write access to the command/control registers contained on the OV cameras. It was meant to allow one to ask and answer questions along the lines of "What happens if...?".

6.2 OV6620

Handles the transition between the clock used by the camera and the clock used by the S3SB. Accepts two 16-bit pixel values and converts them into two 8-bit RGB pixels that are sent to the memory manager as a 16-bit word to be written into the frame memory. This module also generates the frame address, assuming a 640 x 480 display. It also flips the image in order to make it show right side up. A simple, nearest-1/16th approximation to values found in a Xilinx application note is used to convert Y/UV values to RGB values. Values were sign extended and added/subtracted in combinatorial fashion in order to keep the computational time to a minimum.

6.3 VGAtiming

The display parameters are pretty much those used in lab: a 60 Hz frame rate and a 25 MHz pixel clock. This module generates the vertical sync and horizontal sync pulses as well as the horizontal and vertical components of the address of the next pixel to be displayed.

6.4 DisplayMan

Picks up 16-bit words containing pixel pairs and sends them to the VGA sequentially. The 8-bit values are unpacked into 3 bits of red, 3 bits of green and 2 bits of blue.

6.5 MemManager

The Spartan-3 Starter Board contains 1M bytes of static RAM, organized as two banks of 256K 16-bit words. The two banks are not totally independent and

Figure 11: Lighting and whatever caused my grey sweater to show as blue. It shows grey/green at night when the sun has gone down. The image is placed into the top left corner of the display buffer. Whatever was left over in memory from whenever was used to generate the rest of the display. The C5510 DSK just happened to be sitting there when I took the picture; it had no other involvement.

can be used to form 32-bit words. The SRAM chips have a nominal minimum 10 ns cycle time. In actual practice the cycle time needs to be somewhat larger. The frame memory was configured as 640 x 480 pixels. The memory manager allows the camera to view memory as write-only and the display manager to view it as read-only. The camera pixel rate is 8.86 MHz. The monitor pixel rate is 25 MHz. The memory manager deals with it.

7 Summary

Capturing an image from a video camera is a non-trivial task. It is a task that has been accomplished a number of times in EECS 452, though almost never without significant effort. The data rates are high and the amounts of data are relatively large compared to the resources available on the lab equipment. Asking for high resolution either on input or output comes with an associated

price. We have become accustomed to high quality video on our PCs. There is a significant history and infrastructure that has evolved behind this. The processors used in a modern graphics interface are so powerful that researchers are networking these processors together to create state-of-the-art supercomputers. All that we have in the lab are low-end FPGAs and a garden-variety DSP system. These can accomplish a lot, but they are not in the same league.

What follows is advice. It might be good advice, it might be bad advice. I have good intentions, but the decisions (and the consequences) are yours to make.

Choose your camera carefully. Generally this will determine the number of pixels that need to be contained in the frame buffer and the pixel data rate. One does not have to accept all frames.

Don't build the system as a single entity. Divide and conquer. Try to organize the design in self-contained units that share a minimum amount of information with each other.

Design for testing. Whenever reasonable, test individual sections separately. Start simple, then add complexity, evolving the system into its final form.

Read the data sheets/manuals carefully BEFORE you start designing. Try to understand what is being said, and perhaps what isn't. Pay close attention to timings, to which edges are doing what, and to set-up and hold times.

8 References

OmniVision Advanced Information Preliminary, OV6620/OV6120.

CMOS Image Sensor with Image Signal Processing HV7131GP V3.4, MagnaChip.

Multiformat SDTV Video Decoder ADV7183B data manual, Analog Devices.

Chu's book about VHDL on the Spartan-3 Starter Board. There is at least one copy on reserve in the library.

Pedroni's VHDL book. A must to own! Copies are on reserve in the library.

Google. Choosing the right key words can lead to a wealth of information.

A Camera adapter board for S3SB

A PC board was designed and manufactured to interface the OmniVision cameras as well as the E700 to the S3SB B1 connector.
In addition, an 8-bit resistor-only multichannel D/A converter is included for generating multi-bit color. (The S3SB

Figure 12: Schematic for the OV/E700 adapter board. Plugs into Spartan-3 Starter Board connector B1.

uses one bit each for R, G and B.) The red D/A channel has 3 bits, the green channel has 3 bits and the blue channel has 2 bits. The resulting display is reasonably acceptable.

Figure 13: The OV/E700 adapter board; left, camera side view; right, B1 plug side view.

Four B1 camera interface boards are available. Two are configured for use with the OmniVision cameras; these have been tested. One board is configured for use with the E700 camera; this has not been tested. The last board does not have any components mounted. The boards can be set up to work with either camera, I just haven't done so.

JP1 contains the pins that plug into the S3SB socket B1. Because of physical laws, the even pins on JP1 connect to the odd pins on B1. As should be expected, the odd pins on JP1 connect to the even pins on B1.

JP3 and JP4 are used for the E700. Be careful to plug the Sparkfun adapter board in correctly! You have two choices.

JP2 is used for the OV camera. The camera goes up. Plugging in the camera should be hazard free. Note: only a camera board modified for 3.3 Volt drivers should be used! If you purchase your own, modify it!

JP5 allows connecting the analog composite luminance signal to a monitor. This allows black-and-white monitoring of the camera's operation.

X1 is the 16-pin connector that goes to the monitor.

The board is designed to support only 8-bit transfers for the OV. The OV comes up in 16-bit mode, so it is necessary to use the serial configuration interface to set 8-bit mode.

resistor  function  value (ohms)
R1        red2      511
R2        red1      1000
R3        red0      2000
R4        grn0      2000
R5        grn1      1000
R6        grn2      511
R7        blu0      1000
R8        blu1      511
R9        H sync    100
R10       V sync    100
R11       SDA       200
R12       SCL       200

Figure 14: Resistor signals and Ohm values.

The E700 uses only 8-bit pixel transfers. This camera should simply power up and work.

I have not worked out the UCF pin assignments for the E700 camera. To be done. It was my intention that the pin assignments and usage would be very similar.
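DisplayMan's 3-3-2 unpacking (Section 6.4) and the resistor ladder above fit together: each color field drives a binary-weighted resistor network. Below is a sketch of the packing plus a check that the listed red-channel resistor values are indeed approximately binary weighted. The RRRGGGBB bit ordering, with red in the top bits, is an assumption.

```python
def pack_rgb332(r, g, b):
    """Pack 8-bit R, G, B into one RRRGGGBB byte (assumed ordering)."""
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def unpack_rgb332(byte):
    """Expand an RRRGGGBB byte back to approximate 8-bit channels by
    scaling each field to the 0..255 range."""
    r, g, b = (byte >> 5) & 0x7, (byte >> 2) & 0x7, byte & 0x3
    return (r * 255) // 7, (g * 255) // 7, (b * 255) // 3

# Binary weighting check: each bit's current contribution is
# proportional to 1/R, so each resistor should conduct about twice as
# much as the next bit down.  Values from Figure 14, red channel.
r2, r1, r0 = 511, 1000, 2000   # red2 (MSB), red1, red0
assert abs((1 / r1) / (1 / r0) - 2) < 0.01   # red1 carries 2x red0
assert abs((1 / r2) / (1 / r1) - 2) < 0.05   # red2 ~ 2x red1 (511 ~ 500)
```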

# OV Camera board in B1
#
NET "camera<0>" LOC = "E10" IOSTANDARD = LVCMOS33; # Y0
NET "camera<1>" LOC = "T3"  IOSTANDARD = LVCMOS33; # Y1
NET "camera<2>" LOC = "C11" IOSTANDARD = LVCMOS33; # Y2
NET "camera<3>" LOC = "N11" IOSTANDARD = LVCMOS33; # Y3
NET "camera<4>" LOC = "D11" IOSTANDARD = LVCMOS33; # Y4
NET "camera<5>" LOC = "P10" IOSTANDARD = LVCMOS33; # Y5
NET "camera<6>" LOC = "C12" IOSTANDARD = LVCMOS33; # Y6
NET "camera<7>" LOC = "R10" IOSTANDARD = LVCMOS33; # Y7
#
# NET "cam_pclk" CLOCK_DEDICATED_ROUTE = FALSE;
NET "cam_pwdn" LOC = "D12" IOSTANDARD = LVCMOS33; # PWDN
NET "cam_rst"  LOC = "T7"  IOSTANDARD = LVCMOS33; # RST
NET "cam_sdas" LOC = "E11" IOSTANDARD = LVCMOS33 PULLUP; # SDAS
#NET "cam_fodd" LOC = "R7" IOSTANDARD = LVCMOS33; # FODD
NET "cam_scl"  LOC = "N6"  IOSTANDARD = LVCMOS33; # SCL
NET "cam_href" LOC = "B16" IOSTANDARD = LVCMOS33; # HREF
NET "cam_pclk" LOC = "R3"  IOSTANDARD = LVCMOS33; # PCLK
NET "cam_vsyn" LOC = "M6"  IOSTANDARD = LVCMOS33; # VSYN
#
# Camera board VGA connector
#
NET "red2<0>" LOC = "C15" IOSTANDARD = LVCMOS33;
NET "red2<1>" LOC = "C16" IOSTANDARD = LVCMOS33;
NET "red2<2>" LOC = "D15" IOSTANDARD = LVCMOS33;
NET "grn2<0>" LOC = "D16" IOSTANDARD = LVCMOS33;
NET "grn2<1>" LOC = "E15" IOSTANDARD = LVCMOS33;
NET "grn2<2>" LOC = "E16" IOSTANDARD = LVCMOS33;
NET "blu2<1>" LOC = "F15" IOSTANDARD = LVCMOS33;
NET "blu2<2>" LOC = "G15" IOSTANDARD = LVCMOS33;
NET "hs2" LOC = "G16" IOSTANDARD = LVCMOS33;
NET "vs2" LOC = "H15" IOSTANDARD = LVCMOS33;

Figure 15: S3SB UCF file additions for use with the OV camera boards.