BuddyCam
Team 7: Joseph Cao, CSE, Steven Gurney, CSE, Saswati Swain, EE, and Kyle Wright, CSE


Abstract

Unmanned aircraft systems (UAS) incorporate an unmanned aerial vehicle (UAV, or drone) and a base controller, along with a means of communication between the two. UAS are rising in popularity and have provided useful information in monitoring, security, and search and rescue. One of their main areas of use is law enforcement, where a UAS can provide tactical surveillance, subject tracking, and assistance in investigations. BuddyCam is a deployable UAS capable of autonomously identifying, tracking, and recording federal, state, and local law enforcement officers in high-stress situations. The system consists of a quadcopter equipped with a fixed camera, to capture aerial footage of an officer (the subject of the frame), and a base station that performs real-time image processing, object identification, and tracking through computer vision. Data is transmitted between Google Compute Engine and a Raspberry Pi over a 4G network.

I. INTRODUCTION

There is growing concern about the relationship between law enforcement officers and the public. Events such as riots, shootings of unarmed civilians, and violence against officers have made it difficult to evaluate officer performance, due to a lack of information and video evidence. A review of cases shows countless wrongful accusations against officers; citizens and officers alike often have trouble remembering important details after an adrenaline-fueled event.

One case revolving around life and death for police officers and the public involved a couple, Mendez and Garcia, who were expecting a child. They were just a step above homelessness, living in a rat-infested shack. One day deputies came to the property searching for a man who had violated the terms of his parole. Not knowing it was the police, Mendez picked up a BB gun and began to rise. The deputies opened fire, hitting Mendez 14 times and Garcia once in her back. The couple sued L.A. County for violation of their Fourth Amendment rights [1].

Legal practitioners are adopting a critical eye toward forensic evidence and the role it plays in convictions. Shortfalls in the current system include operational problems, unreliable tests, and bias among legal representatives [2]. One challenge in court cases is the reliability of admitted evidence: the lack of comparable data or ground truth available to the public can force a court to drop a case or side with the party at fault [2]. This issue is summed up by Edmond et al. [3]: "The absence of a database or some other credible method of assigning significance to purported similarities means the observer has no reasonable basis on which to draw conclusions about identity."

Video surveillance has been used for years as a means of preventing crime [4]. In court cases involving video surveillance, it must be ascertained how the video was recorded and whether transporting the video compromised its reliability. Body-worn cameras (BWCs) are a new and growing addition to police-citizen encounters. With the use of BWCs, use-of-force complaints decreased by 68 percent. It is unknown whether the decline was due to the recording technology itself, but such a drop is most likely explained by false complaints being preempted by the presence of a live recording [5].
Not all BWC footage will be used in every case or determine the outcome of a case, though the video provides circumstantial evidence. Even when the video is low-light and barely visible, the audio from a BWC can provide courts with sufficient evidence. To tackle this concern of evidence reliability and the growing concern of false accusations against officers, we propose BuddyCam, a deployable unmanned aerial system (UAS) capable of autonomously identifying, tracking, and recording officers. The quadcopter is equipped with a camera that captures aerial video of the officer and wirelessly transmits it to a base station. The base station performs real-time image processing, object identification, and tracking through computer vision. The UAS moves based on the location of the officer, which is determined through the use of an infrared (IR) beacon. Flight instructions are derived from the image processing and sent back to the UAS. While this system provides valuable real-time information, the video footage is also readily available to officers and their superiors, improving situational awareness.

To begin the design of the UAS, we analyzed the limitations of current BWCs, which include limited visibility, shaky footage, and footage biased by the first-person perspective. Our UAS will be fully autonomous after lift-off, enabling the officer to focus on the situation at hand. It will track and keep the subject in the middle of the video frame, with the subject out of frame for at most 1.6 seconds; this figure was calculated from the latency of transferring data back and forth through Google Compute Engine. The UAS will maintain a minimum height of 10 feet, which prevents it from interfering with the officers on scene, and maintain a line of sight to the subject within a radial distance of 15 feet. This allows for unbiased footage and ensures all interactions are captured in the recording. The last specification is that the UAS be able to operate for at least 10 minutes, which was determined from the capabilities of the UAS itself and may change based on funding and the power added to the UAS.
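The origin of the 1.6 second figure can be illustrated with a simple round-trip latency budget. Every term in the sketch below is an assumed placeholder; the report states only that the total derives from transferring data back and forth through Google Compute Engine.

```python
# Illustrative latency budget behind the 1.6 s out-of-frame specification.
# The individual terms are assumptions; only the 1.6 s total is from the report.
uplink_s = 0.6      # assumed: drone -> GCE video latency over 4G
processing_s = 0.4  # assumed: per-frame identification and tracking on GCE
downlink_s = 0.6    # assumed: GCE -> drone flight-command latency over 4G

worst_case_out_of_frame_s = uplink_s + processing_s + downlink_s
print(worst_case_out_of_frame_s)  # 1.6, matching the specification
```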

These specifications are summarized in Table 1. The UAS will be deployed by the officer and, through the use of a unique signal (the IR beacon), will keep track of the officer and follow them throughout the scene. Once the officer has finished addressing the situation, the UAS will be turned off and allowed to return to its starting location.

TABLE 1. Specifications

Requirement: System should be simple and easy to deploy.
Specification: Fully autonomous after initial lift-off.

Requirement: Operate so as not to interfere with the officer's duties.
Specification: Minimum height of 10 feet.

Requirement: Maintain a view of the officer during deployment.
Specification: Radial line of sight, 15 feet from subject; track and keep subject in frame, no more than 1.6 seconds out of frame.

Requirement: Record for as long as any conflict or response would take to be resolved.
Specification: Operational time, more than 10 minutes.

II. DESIGN

A. Overview

The current BuddyCam system consists of three major subsystems: the UAV (Fig. 8, orange), a wearable tracker (Fig. 8, green), and a remote processing station (Fig. 8, blue). The UAV subsystem can be further subdivided into the Base UAV and the Added Sensing Array. The Base UAV consists of the bare features included with any UAV: a flight controller, a power distribution and battery system, rotors, and manual controls (pilot remote). The drone we are using (the 3DR IRIS+) also has elementary accident-avoidance and proximity systems. On top of these essential UAV features, we will be building an Added Sensing Array housed on the Base UAV. The Added Sensing Array includes a wide-angle infrared camera and a Raspberry Pi 3. These components were chosen because they meet the system requirements of tracking and recording a police officer in an emergency response situation. The wide-angle infrared camera will be able to view the unique infrared beacon located on the officer, providing tracking, while accommodating a wide field of view of the situation. The Raspberry Pi will interface with the camera and act as a data passthrough using a wireless 4G modem. This passthrough will be used to deliver the live video stream and to receive motion controls from an offsite Google Compute Engine. The wearable tracker subsystem consists of two main components: an array of high-power IR LEDs and a corresponding IR receiver on the sensing array of the drone. The remote processing station subsystem will be hosted on Google Compute Engine and will take in the video feed and sensor data from the wearable tracker and calculate the flight commands necessary to keep the police officer in the center of the frame. Following this brief overview is a more in-depth discussion of BuddyCam's subsystems.

B. Block 1: IR Beacon

This part of the system consists of an infrared (IR) beacon, a portable device that transmits a unique signal to help the UAS track the officer (Fig. 1). The IR beacon will be used simultaneously with the OpenCV identification and tracking. To complete this part of the system, it is important to understand how IR communication works.

Fig. 1: Flashing LED schematic. This IR beacon operates at 15 volts, using a 555 timer IC to create a blinking series of IR LEDs when the switch is turned on.

This wireless communication technology is very similar to visible light, except that it has a slightly longer wavelength [6]. IR radiation is undetectable to the human eye, making it well suited to wireless communication.
IR signals are modulated (patterned) data, making each signal unique to its receiver. Most IR communication works at a 38 kHz modulation frequency, but other frequencies can be used as well. When the switch is on, the transmitting IR LED blinks rapidly, for fractions of a second, to transmit data to the receiving device. The pulse-width-modulated signal (Fig. 2) can be handled by a microcontroller, which allows the waveform to be read on an input pin and decoded as a serial bit stream.

Fig. 2: Pulse-width-modulated signal (square wave). The output of the IR beacon continually switches state from high to low without intervention from the user, giving the beacon an intermittent signal by switching the IR LEDs between on and off.
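As a minimal sketch of reading that waveform on an input pin, the following Python snippet times the high/low intervals on a Raspberry Pi GPIO pin wired to an IR receiver's output. The pin number and the crude busy-polling approach are assumptions for illustration; the actual decoding logic will depend on the beacon's final pulse pattern.

```python
# Hypothetical sketch: timing the demodulated beacon output on a Pi GPIO pin.
# The pin number is an assumption, not the project's actual wiring.
import time
import RPi.GPIO as GPIO

IR_PIN = 17  # assumed BCM pin connected to the IR receiver output

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)

def read_pulses(n_edges=64, timeout_s=2.0):
    """Record the durations of successive high/low intervals on IR_PIN."""
    pulses = []
    last_level = GPIO.input(IR_PIN)
    last_t = time.monotonic()
    deadline = last_t + timeout_s
    # Busy-poll for edges; crude, but enough to characterize the beacon.
    while len(pulses) < n_edges and time.monotonic() < deadline:
        level = GPIO.input(IR_PIN)
        if level != last_level:
            now = time.monotonic()
            pulses.append((last_level, now - last_t))  # (level, seconds)
            last_level, last_t = level, now
    return pulses

try:
    for level, width in read_pulses():
        print(f"level={level} width={width * 1e6:.0f} us")
finally:
    GPIO.cleanup()
```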

The IR beacon will be built around a 555 timer chip that sends a pulse-modulated IR signal at a 38 kHz frequency [7]. The beacon will be connected to a 9 volt battery and operated through an on/off switch. The benefits of using an IR beacon include detection in daylight and a unique signal, so that multiple officers can use the UAS on scene. The beacon circuit is shown in Fig. 3. Currently, the IR beacon can be detected from about 10 feet by lenses without IR filters. The next steps for the beacon are making it portable and replacing the single IR LED with a high-power LED for a stronger signal that can be detected at a greater distance. After the IR beacon is made portable, it will be attached to an individual to test and analyze how well it can be detected from the UAS. If the UAS can determine the subject of the frame by detecting the pulsing IR signal, this block is complete; otherwise, the IR beacon will be adjusted to address the issues found in testing.

Fig. 3: IR LED beacon demoed at MDR, viewed using a camera with the IR filter removed. This beacon currently runs at 9 volts but will be modified to run at 15 volts using an external (portable) battery source.
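As a quick sanity check on the 38 kHz target, the standard 555 astable approximation f ≈ 1.44 / ((R1 + 2·R2)·C) can be evaluated for candidate component values. The values below are illustrative assumptions; the report does not list the beacon's actual resistors and capacitor.

```python
# Worked check of the 555 astable frequency targeted by the beacon.
# R1, R2, and C are assumed values chosen to land near 38 kHz.
R1 = 1_000   # ohms (assumed)
R2 = 18_000  # ohms (assumed)
C = 1e-9     # farads (assumed)

f = 1.44 / ((R1 + 2 * R2) * C)  # standard 555 astable approximation
print(f"{f / 1e3:.1f} kHz")     # ~38.9 kHz, near the 38 kHz IR carrier
```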
C. Block 2: Google Compute Engine/4G Interface

To deliver a high-performance system that meets the requirements of law enforcement personnel, a strong processing component must be used for the image processing. While the Raspberry Pi 3 is a great platform for general-purpose computation and hobbyists, its onboard VideoCore IV GPU lacks the raw graphics processing power this system calls for [8]. Were it to be used, it could severely limit the achievable frame rates and analysis time, which is unacceptable: the effectiveness of the application depends on on-time data delivery. If the video or flight instructions are not delivered on time, evidence could be lost, defeating the purpose of the system. For this reason, the Raspberry Pi serves simply as a communications device, interfacing the system components and passing data between them.

Because of the weight and power constraints imposed by the onboard battery, implementing a more powerful processor on the UAS itself would cause a multitude of issues, severely limiting our options and degrading operating time. Instead, we harness Google Compute Engine (GCE), a cloud computing solution configured to meet the requirements of the user. Using Google's cloud platform provides several benefits: the platform comprises a large collection of scalable virtual machines that can be reconfigured to meet the individual needs of each client [9]; compared to other cloud implementations, Google's price-performance ratio is top-tier, allowing a powerful implementation that still suits our limited budget; the robust fiber network put in place by Google will not pose a bottleneck in our solution; and a scalable performance solution gives us the graphics processing capability needed for our OpenCV implementation to work efficiently. The Debian-based remote server will run the Python code and OpenCV libraries needed to do the image processing.

Upon receiving the video transmission, the stream will be fed to our algorithm, and flight instructions will be generated as output. These will be stored and retrieved by the UAS periodically, instructing it where to move based on the person in frame. The video transmission will also be available in a live monitoring buffer on the server, and stored in a long-term archival container for later review.

The use of a 4G LTE modem does raise some concern about the latency of video transmission from the UAS to the server, and there are ways in which we look to minimize it. The first is the use of a video codec designed for transmission over wireless networks. A well-known codec is H.264/MPEG-4 AVC, a video compression standard used across many industries [10]. The original intention of this codec was to provide good image quality in the transmitted video at lower bit rates than most other codecs; it is, however, still robust enough to vary the quality and bit rate as the application demands. Though this standard and others like it could serve our task, with the limited time we have, our solution may benefit from a prebuilt camera solution. The Sky Drone FPV 2 uses a custom UART protocol and 4G LTE technology to stream video footage at latencies under 150 ms [11].

Fig. 4: Sky Drone FPV 2 4G camera/transmitter. The device uses a custom video codec to provide quality footage at some of the lowest latencies for video over 4G networks.
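On the server side, pulling the compressed feed into the processing pipeline can be sketched in a few lines of Python with OpenCV. The stream URL below is a placeholder assumption; the real endpoint depends on how the camera/transmitter is configured.

```python
# Minimal sketch of ingesting the drone's compressed video feed on the server.
# The URL is an assumed placeholder, not the project's actual endpoint.
import cv2

STREAM_URL = "udp://0.0.0.0:5000"  # assumed H.264-over-UDP endpoint

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("could not open video stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a real system would attempt to reconnect
    # frame (a BGR numpy array) is handed to the tracking pipeline here

cap.release()
```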

With a technology capable of delivering the video feed fast enough for the image processing to be done, the UAS will be able to maintain a line of sight on the tracked individual.

D. Block 3: Object Tracking

At the heart of the BuddyCam system is a computer vision system capable of detecting and tracking an object as it moves through its environment. This system is completely software based and interacts directly with the Sensing Array block: live video captured by the drone-mounted camera is processed by the Object Tracking block, and movement logic is returned to the drone to dictate which direction it should move.

Once the live video stream has been sent from the Sensing Array block to the Object Tracking block, software processing of the feed begins. To start, the video is broken down into 30 frames per second, and each frame is processed individually in real time. The frame is first segmented using image thresholding. This creates a mask on top of the frame, segmenting out one specific shade of blue. This shade of blue corresponds to the average police officer's uniform, allowing the system to differentiate between civilians and officers. To perform the segmentation, each pixel value in the image is recalculated according to a mask matrix, and the color blue in the HSV color space is segmented for: all colors that are not blue are turned to black, and colors that are blue are turned to white. An example is shown in Fig. 5.

Fig. 5: Image mask in the HSV color space using image thresholding.

The largest pixel region in the image that falls within the specified HSV boundaries is selected via blob analysis. Next, the distance between the center of this segmented subject and the center of the frame is calculated with a trivial distance formula, as displayed in Fig. 6. This determines whether the subject is in the left, right, top, or bottom of the frame. From this information, the necessary logic is derived to tell the drone which direction to move to keep the center of the subject as close to the center of the frame as possible (a sketch of this per-frame step follows below).

Fig. 6: Overlaid relative movement from center. The circle depicts the result of the blob analysis after applying the HSV mask; the red marker tracks the approximate center of that region.

Testing of the Object Tracking block will take place by analyzing the results of this image processing in a variety of environments. Variables to consider include lighting, the background of the environment, the number of people in the frame, and how fast the people are moving. These variables can be examined individually by manually controlling the others in a controlled testing environment.
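The per-frame step just described maps naturally onto a short OpenCV routine: threshold in HSV, take the largest blob, and compute its centroid's offset from the frame center. The HSV bounds below are illustrative assumptions, not the calibrated values for actual uniforms.

```python
# Sketch of the per-frame tracking step: threshold for an assumed
# uniform-blue HSV range, take the largest blob, and compute its offset
# from the frame center. HSV bounds are illustrative assumptions.
import cv2
import numpy as np

LOWER_BLUE = np.array([100, 120, 60])   # assumed lower HSV bound
UPPER_BLUE = np.array([130, 255, 255])  # assumed upper HSV bound

def track_subject(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)  # blue -> white, rest -> black
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # subject not visible in this frame
    blob = max(contours, key=cv2.contourArea)  # largest blue region (blob analysis)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # blob centroid
    h, w = frame.shape[:2]
    dx, dy = cx - w / 2, cy - h / 2    # offset from the frame center
    dist = (dx ** 2 + dy ** 2) ** 0.5  # the "trivial distance formula"
    return dx, dy, dist
```

The sign of dx and dy tells the controller whether the subject sits left/right or above/below center, which is exactly the information translated into flight instructions in Block 4.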

E. Block 4: GCE Component Integration

Once the target has been identified and tracked within the video frame, the subject's movements relative to the center of the frame, combined with information from tracking the IR beacon, will be translated into flight instructions that move the drone to keep the officer in frame. These commands will be stored in a database on GCE, and the Raspberry Pi will constantly query this database for new movement instructions (a sketch of this polling loop follows Fig. 8). When the Raspberry Pi receives a new movement instruction, it will pass it to the onboard flight controller, which then sends commands to the rotors. Other commands, such as manual controls, emergency landing, or subject switching, can also be sent through this data path. The operational flow of the system is summarized in Fig. 8.

Note that our original design at PDR, shown in Fig. 7, used a nearby base station, which has been replaced by the GCE server. This design change allows immediate viewing and archival of video, as well as scaling computation across multiple drones and officers. As evaluators Polizzi and Koren noted, the processing station would also have severely limited the range of the drone's operation, and existing 4G infrastructure would be sufficient for sending compressed video. In addition to replacing the processing station with GCE, a wearable IR beacon and data from cellular GPS were added. These two additional sources of information will improve reliability: while the IR beacon and visual tracking allow fine movement of the drone to keep the officer in the center of the video frame, if the view of the officer becomes too obstructed and the officer moves outside the camera's field of view, GPS location data will allow the drone to move back within a distance at which it can regain a visual of the officer.

Fig. 7: Original block diagram presented at PDR.

Fig. 8: Updated block diagram presented at MDR. Note the addition of the wearable tracker and a control server hosted on GCE instead of a nearby base station.
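A minimal sketch of that polling loop on the Raspberry Pi might look like the following. The endpoint URL, payload shape, and flight-controller hook are all assumptions for illustration; the report specifies only that the Pi repeatedly queries a database on GCE and forwards new instructions to the flight controller.

```python
# Hedged sketch of the Pi's instruction-polling loop. The URL, the payload
# fields, and send_to_flight_controller are assumed placeholders.
import time
import requests

SERVER_URL = "http://example-gce-host/commands/latest"  # placeholder endpoint

def send_to_flight_controller(cmd):
    # Stand-in for the real link to the onboard flight controller.
    print("forwarding", cmd)

def poll_commands(period_s=0.2):
    last_id = None
    while True:
        try:
            cmd = requests.get(SERVER_URL, timeout=1.0).json()
            if cmd.get("id") != last_id:  # only act on new instructions
                last_id = cmd.get("id")
                send_to_flight_controller(cmd)
        except (requests.RequestException, ValueError):
            pass  # transient network/parse error; keep polling
        time.sleep(period_s)

if __name__ == "__main__":
    poll_commands()
```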
III. PROJECT MANAGEMENT

At the Preliminary Design Review, our group laid out a series of tangible, deliverable goals for the future of the project. By the Midway Design Review, our goal was to have the Object Tracking block completely finished. This included a live demonstration of video processing and object tracking, using a fixed camera and the software tracking methods explored above. Through much hard work and development, we successfully delivered on this goal for MDR. While we did change a few specifications of the project relating to hardware and the location of processing, we fully demonstrated a fixed camera obtaining a live feed and tracking a subject through the frame.

At our preliminary design review, project management was broken up in the following way: Saswati was tasked with implementing the wireless transmission of analog video, a radio-frequency communication link from a drone-mounted camera to the Raspberry Pi. Joseph was responsible for implementing object recognition in the image processing software, which included researching possible methods, such as image segmentation, for determining which object in the video frame is the desired subject. Finally, Steven and Kyle were tasked with implementing object tracking through the frame: software that both determines the subject's relative location in frame and implements the logic telling the drone where to move to keep the subject centered.

Following our preliminary design review, several changes were made to the technical implementation of the project, explained above, that moved video processing from a Raspberry Pi base station to the Google Compute Engine cloud (see the differences between Fig. 7 and Fig. 8). Further, it was determined that an infrared beacon was necessary to detect the subject of the video frame more accurately.

Per these specification changes, Saswati's responsibilities were changed from video transmission to the implementation of the infrared beacon.

IV. CONCLUSION

BuddyCam is proceeding on schedule and meeting our expectations. We intended to have our object recognition and tracking program written and running on a stationary camera for MDR; taking on this objective first would lay most of the groundwork needed to complete the project. This goal was achieved, along with preliminary implementations of both the infrared beacon circuit and the communication interface between the flight controller and the Raspberry Pi. With one of the most challenging aspects of the project completed, we have moved on to our CDR deliverables: completing the work on the IR beacon, configuring Google Compute Engine to run our OpenCV program, initializing the remote connection between the Raspberry Pi and the server via 4G, and beginning to format flight controls based on the outputs of the program. The schedule for our intended goals is shown in the Gantt chart (Fig. 9), which indicates that the aforementioned tasks will be complete by the CDR presentation.

Fig. 9: Gantt chart through FDR.

Once our CDR objectives are complete, the majority of the remaining work is integration of subsystems. The IR beacon array must be combined with the OpenCV tracking, the flight controller must be interfaced with the Raspberry Pi and connected to the server via 4G, and testing and debugging of any issues that arise will take a large amount of time to ensure that our system does not fail. By continuing to hold our group and advisor meetings, staying on schedule, and remaining dedicated to finishing what we've started, the future of BuddyCam looks successful.

V. ACKNOWLEDGMENT

We would like to extend our appreciation and gratitude to Professor Pishro-Nik, who throughout this process has helped keep us motivated and on track, and has provided us with the resources and tools necessary to complete each iteration of the project. We would also like to thank Professors Polizzi and Koren for taking the time to meet with us and give us feedback on the current achievements of the project. Their comments and recommendations have helped shape the project to its current state and greatly influenced our thinking about how we look to finish out the year.

VI. REFERENCES

[1] Bains, Chiraag. "Can Cops Use Force With Impunity When They've Created an Unsafe Situation?" Slate Magazine, The Slate Group, 15 June 2017, www.slate.com/articles/news_and_politics/jurisprudence/2017/06/the_supreme_court_suggests_cops_use_of_force_is_always_justified.html.
[2] O'Brien, Éadaoin, et al. "Science in the Court: Pitfalls, Challenges and Solutions." Philosophical Transactions of the Royal Society B: Biological Sciences, The Royal Society, 5 Aug. 2015, www.ncbi.nlm.nih.gov/pmc/articles/pmc4581010/.
[3] Edmond, G., Kemp, R., Porter, G., Hamer, D., Burton, M., Biber, K., and San Roque, M. 2010. "Atkins v The Emperor: the cautious use of unreliable expert opinion." Int. J. Evid. Proof 14, 146-166.
[4] "Using Video Surveillance as Evidence in Court." SecurityBros, securitybros.com/using-video-surveillance-as-evidence-in-court/.
[5] "When Body-Worn Cameras Become a Matter of the Courts." PoliceOne, 23 Mar. 2017, www.policeone.com/policing-in-the-video-age/articles/320408006-when-body-worn-cameras-become-a-matter-of-the-courts/.
[6] A1RONZO. "IR Communication." SparkFun, learn.sparkfun.com/tutorials/ir-communication#res.
[7] Jayant. "IR Transmitter and Receiver Circuit Diagram." Circuit Digest, circuitdigest.com/electronic-circuits/ir-transmitter-and-receiver-circuit.
[8] "Raspberry Pi 3 Benchmarks." The MagPi, RaspberryPi.org, Dec. 2016, www.raspberrypi.org/magpi/raspberry-pi-3-specs-benchmarks/.
[9] "Compute Engine." Google Cloud Platform, Google, Inc., cloud.google.com/compute/.
[10] "An Overview of H.264 Advanced Video Coding." Vcodex, www.vcodex.com/an-overview-of-h264-advanced-video-coding/.
[11] "Sky Drone FPV 2." Sky Drone, www.skydrone.aero/products/sky-drone-fpv.