Surveillance Robot based on Image Processing

Anjini Ratish P, Darshan Sindhe D, Nagaraj K, Rajeshwar N S, Ravindra V. Asundi
Electronics and Communication Engineering, BMS Institute of Technology and Management, Bengaluru, India

Abstract: This project attempts to provide surveillance of a particular area with the help of a chained robot that roams the entire given area at random. The module uses image processing techniques to observe the environment and notify the control station. The three main aspects of the project are to wander a given area at random, to observe changes in the environment through captured images, and to provide parking surveillance. The robot is equipped with ultrasonic and proximity sensors for self-guidance. Once powered, the robot autonomously follows a random path. A camera module captures images of the environment and detects changes, notifying the control station when relevant changes are found. The parking surveillance feature identifies the number plate of a vehicle approaching the parking lobby and checks for wrong parking in a parking spot.

Keywords: Surveillance, Image Processing, Background Subtraction, Motion Detection

I. Introduction

The importance of surveillance robots is increasing as security becomes a top priority in today's society. With advances in technology, we are able to build more robust robots that handle hazardous situations and save lives. Introducing image processing into a surveillance robot gives an effective way to handle hazardous situations and report them to the control station. The main purpose of the robot is to provide visual information about a selected area and thereby provide surveillance of that area. Hence the main feature of the robot is an onboard video camera; the robot must also be compact and self-contained, in the sense that it carries an onboard battery pack and a wireless interface to the human controller.

This project attempts to address the need for a self-contained security system. Currently, security systems require many costly components and a complicated installation process [1]. Two basic types of systems are currently available. The first is a wired system; its drawbacks are that installation takes considerable time and money, and that it becomes a permanent part of the home, so if the owner moves, the security system must stay. The second type is wireless. Its components are also costly [2], and although wireless systems are more mobile, they require batteries that must be changed periodically.

II. Related Work

Huang [2] proposes a novel and accurate approach to motion detection for automatic video surveillance systems. The method achieves complete detection of moving objects through three proposed modules: a background modeling (BM) module, an alarm trigger (AT) module, and an object extraction (OE) module. Its disadvantage is that it applies only to static cameras. Bokade and Ratnaparkhe [8] propose a method for controlling a wireless surveillance robot through an application built on the Android platform, but it does not employ image processing algorithms and is used only for live streaming, so a human must monitor the surveillance feed continuously.

III. Methodology
A. Motion Detection

Background subtraction (BS) is a common and widely used technique for generating a foreground mask (that is, a binary image containing the pixels that belong to moving objects in the scene) with static cameras. As the name suggests, BS computes the foreground mask by subtracting the current frame from a background model that contains the static part of the scene or, more generally, everything that can be considered background given the characteristics of the observed scene.
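The paper does not list its motion-detection code; the following is only a minimal sketch of the idea in the project's Python/OpenCV setting, assuming a previously captured background frame stored as "background.jpg", a camera exposed as video device 0, and illustrative variable names.

import cv2

# Hedged sketch (not the authors' code): the foreground mask is the thresholded
# absolute difference between the current frame and a stored background frame.
background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)  # assumed static-scene model, same size as live frames

def foreground_mask(frame_bgr, threshold=30):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)                  # |current - background|
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return mask                                           # binary image of changed pixels

cap = cv2.VideoCapture(0)                                 # camera index 0 is an assumption
ok, frame = cap.read()
if ok:
    mask = foreground_mask(frame)
    changed = cv2.countNonZero(mask)                      # simple measure of scene change
    print("changed pixels:", changed)
cap.release()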

Background modelling consists of two main steps: (1) background initialization and (2) background update. In the first step, an initial model of the background is computed; in the second step, that model is updated in order to adapt to possible changes in the scene.

Fig. 1: Background subtraction

B. Character Recognition

C. Object Tracking

First it is necessary to capture (or receive) the frame containing the image. The frame size used is 160x120 pixels; larger frames (e.g. 640 pixels wide and 480 pixels high) caused slowdowns in the recognition process when the image was transmitted remotely. The webcam delivers frames in the RGB colour system, in which each pixel is a three-component vector over the basic channels red, green and blue; for example, pure green is represented by the values (0, 255, 0), one value per channel.

After capture, the image is converted from the RGB colour system to the HSV (hue, saturation and value) system, since this model describes colours in a way closer to how the human eye perceives them: RGB defines colours as combinations of the primary colours, whereas HSV defines them by hue, saturation and value (brightness), which makes the relevant information easier to extract. Step 2 of the processing chain performs this conversion with OpenCV's native "cvtColor" function, which converts an input image from one colour system to another.

With the image in the HSV model, it is necessary to find the correct minimum and maximum HSV values of the colour of the object to be followed. These values are stored in two vectors, one with the minimum and one with the maximum HSV values of the object colour: minimum hue (42), minimum saturation (62), minimum value (63); maximum hue (92), maximum saturation (255), maximum value (235). The next step generates a binary image in which the relevant information is limited to this range: a function compares each pixel against the minimum and maximum vectors, and the result is a binary image with a single value per pixel. These thresholds restrict the segmentation to the colour pattern of the tracked object; a sketch of this part of the pipeline is given below.
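The exact tracking code is not given in the paper; the following is a minimal sketch, in the project's Python/OpenCV setting, of the colour-tracking steps of this subsection: conversion to HSV, thresholding against the stated minimum/maximum vectors, and the erosion and moment computation described in the paragraphs that follow. All variable names and the use of cv2.resize for the 160x120 working size are illustrative assumptions.

import cv2
import numpy as np

hsv_min = np.array([42, 62, 63])      # minimum hue, saturation, value from the text
hsv_max = np.array([92, 255, 235])    # maximum hue, saturation, value from the text

def track_object(frame_bgr):
    frame_bgr = cv2.resize(frame_bgr, (160, 120))          # small frame for remote transmission
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)        # convert to the HSV colour model
    mask = cv2.inRange(hsv, hsv_min, hsv_max)               # binary image: pixels within the thresholds
    kernel = np.ones((3, 3), np.uint8)
    img_erode = cv2.erode(mask, kernel, iterations=1)       # erosion removes stray noise pixels
    m = cv2.moments(img_erode, True)                        # moments of the positive (white) contour
    if m["m00"] == 0:
        return None                                         # object not found in this frame
    cx = int(m["m10"] / m["m00"])                           # centroid x of the tracked object
    cy = int(m["m01"] / m["m00"])                           # centroid y of the tracked object
    return cx, cy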

After this segmentation, noise is still present in the binary frame. This noise consists of elements that hinder the segmentation of the object (including the estimation of its actual size). To correct, or attempt to correct, this problem, a morphological transformation is applied to the frame so that pixels that do not meet the desired pattern are removed. For this, the morphological EROSION operator is used; it "cleans" the frame by reducing the noise contained in it. The "Moments" function is then applied; it calculates the moments of the positive (white) contour by integrating over all pixels in the contour. This is only feasible on a frame that is already binarized and free of noise, so that the size of the object's contour is not altered by stray pixels, which would hinder the computation and introduce redundant information.

moments = cv2.moments(imgerode, True)

Fig. 2: Robot

IV. Hardware

A. Raspberry Pi

A Raspberry Pi 3 Model B is used, which delivers six times the processing capacity of its previous version. The board has an upgraded Broadcom BCM2837 ARM Cortex-A53 processor running at 1.2 GHz and an increased memory of 1 GB LPDDR2-900 RAM. It boots from a micro SD card running a version of the Linux operating system and takes power from a micro USB socket at 5 V, 2 A. The board provides a 10/100 BaseT Ethernet socket and four USB 2.0 connectors. A 40-pin 2x20 expansion header, with pins spaced 2.54 mm apart, exposes 27 GPIO pins for connecting input and output devices as well as +3.3 V, +5 V and GND supply lines. The board also has a 15-pin MIPI Camera Serial Interface for connecting the Raspberry Pi camera module.

B. Robot

Fig. 3: Raspberry Pi 3 Model B
Fig. 4: Robot hardware

Fig. 2 shows the architecture of the robot hardware. The robot is built from mechanical and electrical components, with the Raspberry Pi as the main controlling unit. A power supply circuit provides power to the Pi board and to the motor driver IC (L293D). Two DC motors control the forward, backward, left and right movement of the robot, and a camera module captures the picture frames for video. The tilt motion of the camera is controlled by servomotors to widen the capture area.

C. Motor Driver Circuit

Fig. 5: Motor driver circuit

The motor driver IC allows a DC motor to run in either the clockwise or the anticlockwise direction. The L293D works on the H-bridge [3] principle and contains two H-bridges. In Fig. 5, the two enable pins are connected to logic 1. There are two supply pins: VSS, connected to +5 V, and VS, connected to +12 V; the higher +12 V supply provides the current for the DC motors. There are four input pins, and each pair of pins controls a single DC motor. By switching the logic levels on a pair of input pins between 0/1 and 1/0, the direction of motor rotation is controlled.

V. Software

A. For Embedding Code in Raspberry Pi

Python [4] is used for programming the Raspberry Pi. Python is easy to learn since it has a small set of keywords and clearly defined syntax. Its diverse set of libraries is portable and compatible with almost all operating systems, including Windows, UNIX and Macintosh. Python programs are typically about four times shorter than equivalent Java programs; there is no need to declare the types of arguments or variables, and simple functions and variables can be used without defining classes.

B. OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. It was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

C. SSH

Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. It is used to log in to the Pi remotely from a laptop or mobile device.
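To connect the motor-driver logic of Section IV.C with the Python environment of Section V.A, the following is a minimal sketch of direction control through the Pi's GPIO pins. The RPi.GPIO library and the BCM pin numbers are assumptions for illustration; the paper does not specify them.

import RPi.GPIO as GPIO   # assumed library; the paper does not name one

# Illustrative BCM pin numbers for one L293D channel (not taken from the paper).
IN1, IN2, EN1 = 17, 27, 22

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN1], GPIO.OUT)
GPIO.output(EN1, GPIO.HIGH)          # enable pin tied to logic 1, as in Fig. 5

def forward():
    GPIO.output(IN1, GPIO.HIGH)      # input pair 1/0 -> one rotation direction
    GPIO.output(IN2, GPIO.LOW)

def reverse():
    GPIO.output(IN1, GPIO.LOW)       # input pair 0/1 -> opposite rotation direction
    GPIO.output(IN2, GPIO.HIGH)

def stop():
    GPIO.output(IN1, GPIO.LOW)       # both inputs low -> motor stops
    GPIO.output(IN2, GPIO.LOW)

Steering with two motors (left and right) would follow the same pattern on the second H-bridge channel of the L293D.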

VI. Future Scope

The robot can be designed to walk on uneven surfaces and stairs, artificial intelligence can be included in the robot, and the robot can be connected to the cloud so that data is shared between similar robots on surveillance.

VII. Conclusion

This system can be used effectively in many applications, in offices, and as a home security system, and it offers wide opportunities for extension according to user requirements. Since it is connected to a network, it can be accessed from wherever the user is within a given radius, and if the user moves out of range, a partner can take over control using the portable device software. It provides real-time video streaming for reconnaissance and surveillance operations, and by acting on the received data, any desired surveillance operation can be performed. Although it is not a finished product in some respects, it can serve as a prototype for developing better and more effective devices (e.g. equipped with GSM, onboard intelligence or cloud connectivity) that provide higher security levels, greater range and a more robust body.

VIII. References

[1] Elgammal, Ahmed, David Harwood, and Larry Davis. "Non-parametric model for background subtraction." Computer Vision - ECCV 2000 (2000): 751-767.
[2] Huang, Shih-Chia. "An advanced motion detection algorithm with video quality analysis for video surveillance systems." IEEE Transactions on Circuits and Systems for Video Technology 21.1 (2011): 1-14.
[3] Goldberger, Jacob, Sam Roweis, Geoffrey Hinton, and Ruslan Salakhutdinov. "Neighbourhood components analysis." NIPS 2004 (2004).
[4] Soucy, Pascal, and Guy W. Mineau. "A simple KNN algorithm for text categorization." Data Mining, 2001. ICDM 2001, Proceedings IEEE International Conference on. IEEE, 2001.
[5] Du, Shan, et al. "Automatic license plate recognition (ALPR): A state-of-the-art review." IEEE Transactions on Circuits and Systems for Video Technology 23.2 (2013): 311-325.
[6] Park, Myoungkuk, et al. "Performance guarantee of an approximate dynamic programming policy for robotic surveillance." IEEE Transactions on Automation Science and Engineering 13.2 (2016): 564-578.
[7] Yadav, Sanjana, and Archana Singh. "An image matching and object recognition system using webcam robot." Parallel, Distributed and Grid Computing (PDGC), 2016 Fourth International Conference on. IEEE, 2016.
[8] Bokade, Ashish U., and V. R. Ratnaparkhe. "Video surveillance robot control using smartphone and Raspberry Pi." Communication and Signal Processing (ICCSP), 2016 International Conference on. IEEE, 2016.