INF5410 Spring 2011
Lab Assignment 2: Simulation and Image Processing

Lab goals

- Implementation of a bus functional model to test bus peripherals.
- Implementation of a simple video overlay module.
- Implementation of a human skin color detection module.

Deliverables

To pass this lab, a zip archive named in the format lab2_UserName1[_UserName2].zip (e.g., lab2_hsimpson.zip) has to be sent to koch@ifi.uio.no. You may work in groups of up to two and submit a group report. The full names and usernames of the group members shall be listed on the cover page of the report. The submission deadline is March 19th.

The archive has to contain:

- your commented source code (only the files you have changed),
- the configuration bitstream of the system containing the skin color detection module and the video overlay module,
- the corresponding place & route report generated by the ISE tools (static.par),
- the corresponding simulated result bitmap file generated by ModelSim,

- a report analogous to the one submitted for lab assignment 1. Note that all questions have to be answered.

Figure 1: Project hierarchy

Lab setup

In this lab we will use the VHDL-based verification framework presented in the lectures. The principles of reusable design development discussed in the lectures will also be applied; this includes the naming conventions, programming practices, and file hierarchy. We will use the same SoC system as in lab 1. In lab 1 the verification framework was used to verify the MIPS CPU; in this lab, we verify peripherals connected to the SoC bus.

Go to the course website and download the archive containing the projects for the ModelSim simulation and Xilinx implementation tools. The project is illustrated in Figure 1. The main difference from the system used in the last lab is the DVI decoder, which converts an optional HDMI video input into an internal video stream. The top-level file (static.vhd) is as far as possible complete.

This lab consists of four tasks. In Task 2, a simulation environment has to be extended in order to execute bus transactions to our modules. In Task 3 and Task 4, the simulation environment will be used to implement and verify two video modules. The video modules shall also be tested on the development board. For the report, both modules can be simulated together.

Task 1 (Questions)

a) In the provided system, the modules and the CPU are multiplexed to the video source clock. Consider now that our video modules and the CPU subsystem are connected to different clock domains. How would you synchronize the processing part of the system with the video stream?

b) Switching between two clocks is critical and should not be performed with normal user logic. Describe the reason for this problem! Note that we use a special primitive (BUFGMUX) for this purpose.

c) Let us assume that we use the other HDMI input to connect a second camera to the system in order to perform depth map computation for a stereo vision application. How would you synchronize the two video streams?
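
For reference, the BUFGMUX primitive mentioned in the note to question b) is a dedicated glitch-free clock multiplexer from the Xilinx UNISIM library. The following is only a minimal instantiation sketch; the entity and the signal names around the primitive are illustrative and not taken from static.vhd.

library ieee;
use ieee.std_logic_1164.all;

library unisim;
use unisim.vcomponents.all;

entity clk_select is
  port (
    clk_video : in  std_logic;  -- e.g., the clock recovered from the DVI/HDMI input
    clk_sys   : in  std_logic;  -- e.g., the on-board system clock
    sel       : in  std_logic;  -- '0' selects clk_video, '1' selects clk_sys
    clk_out   : out std_logic   -- glitch-free multiplexed clock
  );
end entity clk_select;

architecture rtl of clk_select is
begin
  -- BUFGMUX switches between its two clock inputs without producing glitches
  -- or runt pulses, which a plain LUT-based multiplexer in user logic cannot
  -- guarantee.
  u_bufgmux : BUFGMUX
    port map (
      O  => clk_out,
      I0 => clk_video,
      I1 => clk_sys,
      S  => sel
    );
end architecture rtl;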

Task 2 (Simulation Environment)

To verify peripherals in isolation from the CPU we need a bus functional model (BFM). A BFM implements bus transactions. A bus transaction consists of setting the address, data, and control signals in the right sequence in order to generate a transaction such as a write or a read. Note that the simulation environment substitutes the CPU in order to speed up simulation.

To add BFM functionality to the pacman module you need to do the following:

1. Add a bus interface to the pacman module (soc/core/pacman/hdl/pacman_(ent|rtl).vhdl).
2. Implement the BFM model procedures (soc/core/pacman/tb/tb_chip_beh.vhd).
3. Add BFM commands to the include procedure in soc/core/pacman/tb/tb_chip_beh.vhd.
4. Add test vectors to a command file (start.cmd).

The following diagram shows read and write transactions on the SoC bus:

Figure 2: Bus transactions (bus read and bus write) on the SoC bus, showing the memory module signals rst, clk, cs, address, write_en, readdata, and writedata.

Your task is to:

1. Implement read (busr), write (busw), and check (busc) procedures in the tb_chip_beh.vhdl file according to the timing diagram.
2. Add read, write, and check to the command parser in the tb_chip_beh.vhdl file.
3. Use the procedures to perform read, write, and check operations from a stimuli file.
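
The listing below is a minimal, non-authoritative sketch of how the busw and busr procedures could look, assuming the bus signals named in Figure 2, 32-bit data, and a registered read with one cycle of latency; the exact handshake must be taken from the timing diagram. The package wrapper and the word_t subtype are only there to make the sketch self-contained; in the lab the procedures belong in tb_chip_beh.vhdl, and busc can be built by calling busr and comparing the result against an expected value.

library ieee;
use ieee.std_logic_1164.all;

package bfm_sketch_pkg is

  subtype word_t is std_logic_vector(31 downto 0);

  -- Bus write: drive address and data with cs/write_en asserted for one cycle.
  procedure busw (
    constant addr      : in  word_t;
    constant data      : in  word_t;
    signal   clk       : in  std_logic;
    signal   cs        : out std_logic;
    signal   write_en  : out std_logic;
    signal   address   : out word_t;
    signal   writedata : out word_t);

  -- Bus read: drive the address with cs asserted, then sample readdata after
  -- the registered read latency.
  procedure busr (
    constant addr     : in  word_t;
    variable data     : out word_t;
    signal   clk      : in  std_logic;
    signal   cs       : out std_logic;
    signal   write_en : out std_logic;
    signal   address  : out word_t;
    signal   readdata : in  word_t);

end package bfm_sketch_pkg;

package body bfm_sketch_pkg is

  procedure busw (
    constant addr      : in  word_t;
    constant data      : in  word_t;
    signal   clk       : in  std_logic;
    signal   cs        : out std_logic;
    signal   write_en  : out std_logic;
    signal   address   : out word_t;
    signal   writedata : out word_t) is
  begin
    wait until rising_edge(clk);
    cs        <= '1';
    write_en  <= '1';
    address   <= addr;
    writedata <= data;
    wait until rising_edge(clk);
    cs        <= '0';
    write_en  <= '0';
  end procedure busw;

  procedure busr (
    constant addr     : in  word_t;
    variable data     : out word_t;
    signal   clk      : in  std_logic;
    signal   cs       : out std_logic;
    signal   write_en : out std_logic;
    signal   address  : out word_t;
    signal   readdata : in  word_t) is
  begin
    wait until rising_edge(clk);
    cs       <= '1';
    write_en <= '0';
    address  <= addr;
    wait until rising_edge(clk);
    cs       <= '0';
    wait until rising_edge(clk);   -- allow for the one-cycle registered read
    data := readdata;
  end procedure busr;

end package body bfm_sketch_pkg;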

Task 3 (Video Overlay Module)

In this task, the provided pacman module shall be extended such that the pacman color and position can be set by bus write transactions to the register file. To make the modules interchangeable, the following address mapping shall be used (note that byte addresses are mapped to 4-byte addresses in static.vhd; therefore, base + 1 becomes base + 4):

  address     function          format (16 bit)                    mode
  base + 0    red               [15...8]: - & [7...0]: value       write only
  base + 1    green             [15...8]: - & [7...0]: value       write only
  base + 2    blue              [15...8]: - & [7...0]: value       write only
  base + 4    horizontal pos    [15...0]: value                    read/write
  base + 5    vertical pos      [15...0]: value                    read/write

All read operations should be registered. This means that the readdata output should be driven directly by a register, which implies that reading has a latency of one clock cycle. (A minimal register file sketch is given at the end of this task.)

Questions

a) Why is it useful to hide registers (i.e., write-only mode) by not making them readable by the CPU, when possible?

b) Could a synthesis tool automatically detect registers that are used in write mode only? Please give reasons for your answer!

c) What do you have to change to read the position registers directly (i.e., without the readdata register)? What would be the impact on the system?

Optional tasks, which are not mandatory for passing this lab but help to improve programming skills or are simply for having some programming fun:

- Change the sprite bitmap: take a normal black & white BMP image that is flipped in the horizontal direction (turn the image upside down) and truncate the header in a hex editor. This can easily be done with the editor HxD.exe (included in the lab archive file). That editor has nice copy functions and allows you to set the number of bytes displayed per row. In many cases, the result looks a bit like ASCII art.

- Change the color of the pacman if a center pixel of the pacman slides over a skin-colored region, as detected by the module from Task 4.

- Extend the auto-move direction from left-to-right to support all four directions by mirroring and rotating the pacman image (which can be achieved by reading the image transposed).

- Advanced: Implement a random generator that changes the auto-move direction and distance for the entire next move. Good randomness would make it possible to instantiate multiple instances of the pacman module while giving each pacman an individual behavior, so a simple (deterministic) approach would not work. Easier, but less HW fun, would be to use the CPU to set random move commands for the different instances.
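
Tying the address map and the registered-read requirement above together, the following is a minimal sketch of how such a register file could be described. The entity, port, and signal names are illustrative and are not the names used in the provided pacman sources, and the bus data width is assumed to be 16 bit to match the register format.

library ieee;
use ieee.std_logic_1164.all;

entity pacman_regs_sketch is
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    cs        : in  std_logic;
    write_en  : in  std_logic;
    address   : in  std_logic_vector(2 downto 0);   -- word address: base + 0..5
    writedata : in  std_logic_vector(15 downto 0);
    readdata  : out std_logic_vector(15 downto 0);
    red       : out std_logic_vector(7 downto 0);
    green     : out std_logic_vector(7 downto 0);
    blue      : out std_logic_vector(7 downto 0);
    pos_x     : out std_logic_vector(15 downto 0);
    pos_y     : out std_logic_vector(15 downto 0)
  );
end entity pacman_regs_sketch;

architecture rtl of pacman_regs_sketch is
  signal r_reg, g_reg, b_reg : std_logic_vector(7 downto 0);
  signal x_reg, y_reg        : std_logic_vector(15 downto 0);
  signal readdata_reg        : std_logic_vector(15 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        r_reg <= (others => '0');
        g_reg <= (others => '0');
        b_reg <= (others => '0');
        x_reg <= (others => '0');
        y_reg <= (others => '0');
        readdata_reg <= (others => '0');
      else
        -- write port: color registers are write only, position is read/write
        if cs = '1' and write_en = '1' then
          case address is
            when "000"  => r_reg <= writedata(7 downto 0);   -- base + 0: red
            when "001"  => g_reg <= writedata(7 downto 0);   -- base + 1: green
            when "010"  => b_reg <= writedata(7 downto 0);   -- base + 2: blue
            when "100"  => x_reg <= writedata;               -- base + 4: horizontal pos
            when "101"  => y_reg <= writedata;               -- base + 5: vertical pos
            when others => null;
          end case;
        end if;
        -- registered read port: readdata is driven directly by a register,
        -- so reading has a latency of one clock cycle
        if cs = '1' and write_en = '0' then
          case address is
            when "100"  => readdata_reg <= x_reg;
            when "101"  => readdata_reg <= y_reg;
            when others => readdata_reg <= (others => '0');
          end case;
        end if;
      end if;
    end if;
  end process;

  red      <= r_reg;
  green    <= g_reg;
  blue     <= b_reg;
  pos_x    <= x_reg;
  pos_y    <= y_reg;
  readdata <= readdata_reg;
end architecture rtl;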

Task 4 (Human Skin Color Detection Module)

Detecting humans within a video stream is useful for many applications, including camera-based human-machine interfaces or observation tasks. Human skin color has a specific color spectrum which can be classified by range checks on the color values. There are several approaches for this task, which can basically be distinguished by the color space model they use (e.g., RGB, YCrCb, or TSL). Vezhnevets et al. give an overview of this topic in "A Survey on Pixel-Based Skin Color Detection Techniques". A relatively simple but well working classification scheme that uses the RGB color space model was proposed by Peer et al. in "Human Skin Colour Clustering for Face Detection". For 8-bit R, G, B color ranges (0-255) the classifier works as follows:

(R, G, B) is classified as skin if:

  R > 95  and  G > 40  and  B > 20  and
  max{R, G, B} - min{R, G, B} > 15  and
  |R - G| > 15  and  R > G  and  R > B

The task of this lab is to implement, simulate, and test a VHDL module that uses this classification scheme. The module shall mark a skin-colored pixel by changing its RGB value to red (x"ff0000"). In addition, the module should count the number of skin-colored pixels per frame and provide the result in a register that can be read by our MIPS CPU from lab 1 at (base address + 0). For this task, we are not using the write port of the module. However, by using internal registers for the compare values, the classifier could be made generic and adaptive.

Questions

a) How many pixels are classified as skin color in the Lena test picture? Use the simulation and read the pixel counter after one run of the module.

b) Estimate the number of operations per pixel a software implementation running on your MIPS CPU would take! Assuming a 1080p50 HD video stream (1920 x 1080 @ 50 Hz), how many GOPS (billion operations per second) does your module provide?

c) Taking the result from the previous question, how many instructions could the total system perform if the remaining area contributes with the same density (instructions per time and area, in terms of slices or LUTs)?
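
Purely as an illustration of the classification rule above (not a reproduction of the provided skin_color sources), a minimal combinational sketch for classifying a single pixel could look as follows. The entity and port names are made up, and the full lab module additionally has to recolor the classified pixel and maintain the per-frame counter.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity skin_classify_sketch is
  port (
    r, g, b : in  unsigned(7 downto 0);  -- 8-bit R, G, B of the current pixel
    is_skin : out std_logic              -- '1' if the pixel is classified as skin
  );
end entity skin_classify_sketch;

architecture rtl of skin_classify_sketch is
begin
  process (r, g, b)
    variable max_c, min_c : unsigned(7 downto 0);
  begin
    -- max{R,G,B} and min{R,G,B}
    max_c := r;
    if g > max_c then max_c := g; end if;
    if b > max_c then max_c := b; end if;
    min_c := r;
    if g < min_c then min_c := g; end if;
    if b < min_c then min_c := b; end if;

    -- Peer et al. rule; since R > G is required anyway, |R - G| > 15
    -- reduces to R - G > 15 here.
    if (r > 95) and (g > 40) and (b > 20) and
       (max_c - min_c > 15) and
       (r > g) and (r - g > 15) and
       (r > b) then
      is_skin <= '1';
    else
      is_skin <= '0';
    end if;
  end process;
end architecture rtl;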

Appendix A. Additional information and common pitfalls:

- The Digilent Adept tool does not automatically reload configuration bitfiles that have been changed by the ISE tools. Use the file dialog (Browse) and re-select the configuration bitstream (static.bit).

- We use the bitmap format for video in and out. In this format, the picture is stored upside down, row by row, starting from the lower left corner, while the video raster scan starts from the upper left corner. In many cases, as for the skin color detection, this makes no difference; otherwise, both the input picture and the result might have to be flipped for a true simulation.

- There is an issue that the video modules might require pressing the reset button after reconfiguration or after changing the state of the HDMI input.

- Most HDMI video cameras expect an answer from the connected device (e.g., our board or a monitor) over a serial two-signal I2C connection. The supplied design has no corresponding I2C modules, and we use the I2C of the monitor connected to the board's HDMI output instead. This requires setting jumpers JP6 and JP7 (located close to the Ethernet plug). It is possible to take the SCL and SDA jumpers for this purpose (located at the diagonally opposite corner from the Ethernet plug).

Appendix B. Additional information on project files:

In addition to the directory structure presented in the lectures, we have two additional directories:

  directory    description
  project      project files (put your project here)
  target       board- or FPGA-specific directories (UCF files etc.)

When creating projects, make sure you reference the files rather than copying them. Make sure you add all files to the project (core/lab2div/hdl, core/mips_ngc/ngc, core/pacman/hdl, core/skin_color/hdl, target/atlys).

The compile script (compile.tcl) in core/pacman/sim/case_video_overlay can be used to build the libraries required for testing the pacman module. In ModelSim, type "source compile.tcl" and "do setup.do" to start the simulation. A ModelSim project for pacman is provided in the core/pacman/sim/case_video_overlay directory. The canvas.tcl script can be used to generate graphical output in ModelSim.

The read, write, and check procedures shall be implemented in the tb_chip_beh.vhdl file. The parser is the include procedure in the tb_chip_beh.vhdl file. Stimuli can be added to the start.cmd file.

If you start with an ISE project from scratch, add the following synthesis option: -change_error_to_warning "HDLCompiler:1511";