Tone Insertion To Indicate Timing Or Location Information


Technical Disclosure Commons
Defensive Publications Series, Art. 973 [2017]
December 12, 2017

Tone Insertion To Indicate Timing Or Location Information
Peter Doris

Follow this and additional works at: http://www.tdcommons.org/dpubs_series

Recommended Citation: Doris, Peter, "Tone Insertion To Indicate Timing Or Location Information", Technical Disclosure Commons, (December 12, 2017)

This work is licensed under a Creative Commons Attribution 4.0 License. This Article is brought to you for free and open access by Technical Disclosure Commons. It has been accepted for inclusion in Defensive Publications Series by an authorized administrator of Technical Disclosure Commons.

TONE INSERTION TO INDICATE TIMING OR LOCATION INFORMATION

ABSTRACT

Disclosed herein is a mechanism for inserting a series of tones that indicate timing information or location information. For example, the mechanism can include inserting tones into audio data associated with a video to indicate timing and/or location information.

BACKGROUND

Video content providers often host video content recorded by individual users. Such video, uploaded to a server by the user, often lacks timing information that indicates when the video was captured or location information that indicates where it was captured. The lack of timing and location information can make it difficult to, for example, synchronize multiple videos, synchronize separate audio content with a video, or identify other videos captured by other users at a similar time or a similar location. As a more particular example, multiple users might record videos of an event, such as a concert, but without timestamps associated with the recorded video, it can be difficult to identify the videos as capturing the same event at the same time.

DESCRIPTION

A playback system can use the mechanism to insert periodic tones that identify the time along with the place of origin or other desired information. A recording system can receive captured video data (e.g., recorded from a video camera) and corresponding audio data (e.g., recorded from a microphone). Using the mechanism, the recording system can retrieve the audio data associated with the video and examine the tones of the audio data that indicate timing information (e.g., timestamps and/or any other suitable timing information) or location information. In the absence of universal tones from a playback system, the data can be inserted by the recording system. Additionally or alternatively, in some instances, the audio data and the video data can be transmitted, for example, to a server that hosts the audio data and the video data. The audio data that includes the inserted tones can be used for any suitable purpose: for example, to allow the server to block transmission of the audio and/or video until a predetermined duration of time has elapsed since the recording, to identify one or more other videos that were captured at a similar time, to replace the audio data for a particular video with higher-quality audio data capturing the same event, and/or for any other suitable purpose.

FIG. 1 illustrates an example method for inserting tones to indicate timing information. The method can be performed by a device that records video and/or audio data, such as a video camera with an associated microphone.
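The overall flow of FIG. 1 can be sketched as a capture loop. All function names, the frequency mapping, and the once-per-second cadence below are illustrative assumptions; the disclosure leaves these choices open.

```python
# A minimal sketch of the FIG. 1 loop (steps 102-108). Function names,
# the tone mapping, and the one-tone-per-second cadence are assumptions
# made for illustration only.

def receive_av(second: int):
    # Step 102 stand-in: one second of silent 16-bit audio at 8 kHz,
    # plus a placeholder for the corresponding video chunk.
    return [0] * 8000, f"video-frame-set-{second}"

def identify_tone(timestamp: int) -> float:
    # Step 104: map the capture time into a near-ultrasonic band
    # (18-20 kHz) that most listeners cannot hear.
    return 18_000.0 + (timestamp % 200) * 10.0

def insert_tone(audio, tone_hz: float):
    # Step 106 placeholder: a real system would mix a short burst at
    # tone_hz into the samples; here we just record the chosen tone.
    return {"samples": audio, "tone_hz": tone_hz}

def store_or_transmit(modified_audio, video):
    # Step 108 stand-in: persist locally or upload to the hosting server.
    return (modified_audio, video)

def record_event(start_ts: int, seconds: int):
    stored = []
    for s in range(seconds):  # loop back to step 104 once per second
        audio, video = receive_av(s)
        tone = identify_tone(start_ts + s)
        stored.append(store_or_transmit(insert_tone(audio, tone), video))
    return stored

clips = record_event(start_ts=1_512_950_400, seconds=3)
```

Because each pass through the loop derives the tone from the current timestamp, a decoder that recovers the tone frequencies can recover the capture times without any side channel.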

At step 102, the device can receive audio data and video data in any suitable manner. For example, the audio data can be received from a microphone and the video data can be received from a video camera. In some instances, the device can be any suitable type of device, such as a mobile device (e.g., a mobile phone, a tablet computer, a wearable computer, a laptop computer, etc.), a desktop computer, a web camera associated with a laptop or desktop computer, a video camera, and/or any other suitable type of device.

At step 104, the device can identify a tone to be inserted or determine that a tone should be inserted at a time point of the audio data. The tone can be identified or determined in any suitable manner. For example, in some instances, the tone can be determined using a calculation or table based on a date and time corresponding to the time point. As a more particular example, the tone can have a frequency, a frequency sweep, or a combination of frequencies that encodes the date and time and/or the location. In some instances, the device can identify the tone based on any suitable existing specification, such as Society of Motion Picture and Television Engineers (SMPTE) time codes.

The identified tone can be at any suitable frequency, combination of frequencies, or frequency range. For example, the tone can be in a frequency range that is generally inaudible to human listeners (e.g., above 20 kHz, an ultrasonic tone, and/or at any other suitable frequency). As another example, the tone can be placed at a frequency that is generally masked by the audio content at the time point. As a more particular example, in instances where the audio content includes portions of a concert, the tone can be at a frequency near the dominant frequencies of the audio content.

In some instances, the tone can indicate a location associated with the audio data and the video data, for example, a location where the audio data and the video data were captured. As a more particular example, the tone can encode the location based on Global Positioning System (GPS) coordinates, a name of a city or town, and/or any other suitable location information. In some instances, the tone can be inserted in response to receiving a request from the user recording the audio data and the video data to insert information indicating the location.

At step 106, the device can insert the identified tone at the time point. The tone can be inserted using any suitable technique or combination of techniques.
For example, the received audio data from step 102 can be transmitted to a unit for tone insertion (e.g., via a transmission unit of a playback system), and the unit can insert the tone at the time point. The modified audio data can then be output from the unit for storage and/or transmission in connection with the corresponding video data.

At step 108, the device can store and/or transmit the modified audio data in connection with the corresponding video data. In instances where the modified audio data and the corresponding video data are stored on the device, they can be stored in any suitable type of memory on the device. Additionally or alternatively, in some instances, the modified audio data and the corresponding video data can be transmitted to a server, for example, a server that hosts video content and streams the video content to a user device in response to receiving a request from the user device.

The device can then loop back to step 104 and can identify a second tone to be inserted at a second time point of the audio data. The device can loop through steps 104-108 at any suitable frequency (e.g., once per second, once every five seconds, and/or at any other suitable frequency). Additionally, in instances where the modified audio data and the corresponding video data are transmitted to a server for storage, they can be transmitted at any suitable time. For example, the modified audio data and the corresponding video data can be transmitted from the device in response to receiving an indication that the device has finished capturing a particular event (e.g., based on an input from a user of the device). As another example, they can be transmitted from the device periodically (e.g., every minute, every two minutes, and/or at any other suitable frequency) during capture of the event.
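One way step 106's insertion could work is to additively mix a short sine burst into the PCM samples at the chosen time point. The sample rate, burst length, amplitude, and the 19 kHz frequency below are assumptions for illustration; the disclosure does not prescribe them.

```python
# A sketch of tone insertion (step 106): mix a short sine burst into
# 16-bit PCM samples at a chosen time point. Sample rate, burst length,
# amplitude, and frequency are illustrative assumptions.
import math

def mix_tone(samples, sample_rate, tone_hz, at_second,
             burst_s=0.1, amplitude=1000):
    out = list(samples)
    start = int(at_second * sample_rate)
    n = int(burst_s * sample_rate)
    for i in range(n):
        if start + i >= len(out):
            break
        t = i / sample_rate
        mixed = out[start + i] + int(amplitude * math.sin(2 * math.pi * tone_hz * t))
        # Clamp to the signed 16-bit range to avoid wrap-around distortion.
        out[start + i] = max(-32768, min(32767, mixed))
    return out

audio = [0] * 44100  # one second of silence at 44.1 kHz
marked = mix_tone(audio, 44100, tone_hz=19_000, at_second=0.5)
```

A low amplitude relative to full scale, combined with a near-ultrasonic frequency, keeps the burst unobtrusive to listeners while remaining detectable by a decoder (e.g., via a short-time Fourier transform over the marked region).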
The server can use the modified audio data and the corresponding video data in any suitable manner. For example, the server can aggregate multiple videos, each with corresponding modified audio data uploaded by multiple users, based on the inserted tones. In some such instances, the multiple videos can be identified based on the inserted tones, for example, to identify videos that were captured at a similar location (e.g., within a predetermined distance) and/or at a similar time (e.g., on the same date, within the same hour, and/or within any other suitable window). As a more particular example, the server can identify multiple videos that were captured at a similar location and at a similar time, can identify for each scene one view captured from one camera (e.g., a view identified as having a clear or interesting angle), and can stitch together the identified views from the multiple videos based on the time points associated with the inserted tones. As another more particular example, the server can identify, for a first video, a second video captured from a similar location and at a similar time, and can synchronize the modified audio data corresponding to the second video with the first video based on the time points associated with the inserted tones, to provide a clearer or better audio track for the first video.

As another example, the server can use the time points associated with the inserted tones to determine when the video is to become available to users. As a more particular example, in some instances, a creator of the video content can specify that the video is not to be available for viewing until at least a predetermined duration of time (e.g., one hour, two hours, one day, and/or any other suitable duration) has elapsed after capture of the video content. The server can then compare the current time with the timestamps indicated by the inserted tones to determine whether the video can be provided to viewers at the current time.
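The server-side uses above can be sketched as two small routines: grouping uploads whose decoded tones place them at a similar time and location, and holding a video until its creator's embargo has elapsed. The thresholds, the tuple layout, and the decoded-tone format below are illustrative assumptions, not part of the disclosure.

```python
# A sketch of the server-side uses of decoded tones: event grouping and
# an availability (embargo) check. Data layout and thresholds are
# illustrative assumptions.
from math import hypot

uploads = [
    # (video_id, decoded_unix_time, decoded_xy_position_km)
    ("cam-a", 1_512_950_400, (0.0, 0.0)),
    ("cam-b", 1_512_950_405, (0.1, 0.0)),
    ("cam-c", 1_512_999_999, (40.0, 40.0)),
]

def same_event(a, b, max_dt_s=60, max_dist_km=1.0):
    # Two uploads capture the same event if their decoded times and
    # positions fall within the chosen windows.
    _, ta, pa = a
    _, tb, pb = b
    return (abs(ta - tb) <= max_dt_s
            and hypot(pa[0] - pb[0], pa[1] - pb[1]) <= max_dist_km)

def group_events(items):
    # Greedy grouping: join the first group whose anchor matches.
    groups = []
    for item in items:
        for g in groups:
            if same_event(item, g[0]):
                g.append(item)
                break
        else:
            groups.append([item])
    return groups

def viewable(decoded_time, now, embargo_s=3600):
    # Only release the video once the embargo since capture has elapsed.
    return now - decoded_time >= embargo_s

groups = group_events(uploads)
```

Within each group, the decoded time points can then drive view stitching or audio-track substitution, since all members share a common clock recovered from the tones.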
Accordingly, a mechanism for inserting a series of tones that indicate timing information or location information is provided.