Digital Imaging and Communications in Medicine (DICOM) Supplement 202: Real Real-Time Video


Digital Imaging and Communications in Medicine (DICOM)

Supplement 202: Real Real-Time Video

Prepared by:

DICOM Standards Committee, Working Group 13
1300 N. 17th Street
Rosslyn, Virginia 22209 USA

VERSION: Draft First Read, April 24, 2017

Developed in accordance with: DICOM Workitem 2016-12-D

This is a draft document. Do not circulate, quote, or reproduce it except with the approval of NEMA.

Copyright 2016 NEMA

Table of Contents

TO BE COMPLETED

**** Editorial content to be removed before Final Text ****

TODO:

Editor's Notes

External sources of information

Editorial Issues and Decisions

Closed Issues
(none)

Open Issues
1. Name of the supplement ("Real Real-Time Video" proposed)? [Open]
2. Do we specify use case(s), and at which level of detail? [Open]
3. Do we also embrace multi-frame medical imaging (e.g. live US, live RF), or only (visible light) video? [Open]
4. How shall we deal with proper understanding and proper referencing of SMPTE/VSF documents? [Open]
5. How do we proceed with the medical metadata: using a VSF/SMPTE-defined mechanism or a pure RTP one, respecting the classical DICOM encoding? [Open]
6. Provide a table listing the kinds of information to convey in the metadata along with the video. Look at Part 18 (how to define recording, e.g. media type/dicom) and enhanced CT/MR objects (list of information which is constant vs. variable). [Open]
7. Selection of the metadata to be conveyed and why (justified based on the use cases). Be very selective. Which frequency for sending the metadata (every frame?). [Open]
8. Is there a mechanism to register (in SMPTE or elsewhere) for domain-specific options? [Open]

**** End of Editorial content to be removed before Final Text ****

Scope and Field of Application

This Supplement describes a new standard for the transport of real-time video and associated medical data, titled Real-Time Video Transport Protocol.

DICOM has developed several standards for the storage of medical video, typically in endoscopy, microscopy or echography. But medical theaters such as the operating room (OR) are for the moment still using proprietary solutions to handle communication of real-time video and associated information such as patient demographics, study description or 3D localization of imaging sources. The new Real-Time Video standard will enable the deployment of interoperable devices inside the OR and beyond, enabling better management of imaging information and directly improving the quality of care.

Providers and users of professional video equipment (e.g. TV studios) have defined a new standardized approach for conveying video and associated information (audio, ancillary data, metadata, etc.), enabling the deployment of equipment in a distributed way (vs. peer-to-peer).

The supplement defines a new IP-based DICOM [SOP Class++, Services TO BE COMPLETED] for the transport of real-time video with quality compatible with communication inside the operating room (OR). The SMPTE ST 2110 suite, elaborated on the basis of Technical Recommendation TR-03 originated by the VSF (Video Services Forum), is used as a platform. The specific level of requirements (size and complexity of metadata, quality of image, ultra-low latency, variety of image resolutions, restriction to a pixel ratio of 1 TO BE CHECKED AND COMPLETED) introduces some necessary restrictions of the SMPTE ST 2110 suite recommendations. In addition to these recommendations, DICOM is defining a mechanism for conveying specific medical metadata along with the video while respecting the architecture defined in TR-03.
This proposed Supplement includes a number of Addenda to existing Parts of DICOM:
- PS 3.1 Introduction and Overview (will add an introduction of the new protocol)
- PS 3.2 Conformance (will add conformance for real-time communication)
- PS 3.3 Information Object Definitions (may add new Information Object Definitions if existing IODs are not sufficient)
- PS 3.4 Service Class Specifications (will add new Service Class Specifications for real-time communication)
- PS 3.5 Data Structures and Encoding (may add new Data Structures and Semantics for data related to real-time communication)
- PS 3.6 Data Dictionary (may add new Data definitions related to real-time communication and video description)
- PS 3.7 Message Exchange (will add new Message Exchange definitions for real-time communication)
- PS 3.8 Network Communication Support for Message Exchange (will add new Network Communication Support for Message Exchange (e.g. synchronization))
- PS 3.17 Explanatory Information (may add new explanatory information (e.g. video transport standards))

- PS 3.18 Web Services (may add new Web Services for supporting the real-time communication (e.g. configuration))

Potentially, a new Part may be created for specifying the real-time communication Services.

PS3.17: Add a new Annex "Real-Time Video Use Cases" as indicated.

XX Real-Time Video Use Cases (Informative)

Figure XX-1: Overview diagram of the operating room

As shown in Figure XX-1, the DICOM Real-Time Video (DICOM-RTV) communication is used to connect various video or multi-frame sources to various destinations, through a standard IP switch.

Figure XX-2: Real-Time Video flow content overview

As shown in Figure XX-2, the DICOM Real-Time Video flow typically comprises three different sub-flows ("essences") for video, audio and medical metadata information, respectively. Using the intrinsic capability of IP to convey different flows on the same medium, the information conveyed on the Ethernet cable will include three kinds of blocks for video (thousands for each video frame), audio (hundreds for each frame) and medical metadata (units for each frame), respectively represented as V (video), A (audio) and M (metadata) in Figure XX-3. The information related to one frame will comprise alternating blocks of the three types, the video-related ones being by far the most frequent.

Figure XX-3: Real-Time Video flow details
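The relative packet counts per frame can be sanity-checked with a rough sketch. The payload sizes and RTP payload limit below are illustrative assumptions, not values mandated by this supplement or by SMPTE ST 2110; real counts depend on sampling, bit depth, channel count and audio packet time.

```python
# Rough per-frame RTP packet counts for the three essences described
# above. All sizes are illustrative assumptions.

def packets_per_frame(payload_bytes_per_frame: int, rtp_payload_size: int = 1200) -> int:
    """Number of RTP packets needed to carry one frame's worth of payload."""
    return -(-payload_bytes_per_frame // rtp_payload_size)  # ceiling division

# Uncompressed 1080p video, 4:2:2 sampling, 10 bits per component:
# 20 bits (2.5 bytes) per pixel on average.
video_frame_bytes = int(1920 * 1080 * 2.5)      # 5,184,000 bytes per frame
# Audio: 2 channels, 48 kHz, 24-bit PCM, over one 1/60 s frame period.
audio_frame_bytes = int(2 * 48000 * 3 / 60)     # 4,800 bytes per frame period
# Metadata: a small, hypothetical per-frame payload.
metadata_frame_bytes = 512

print(packets_per_frame(video_frame_bytes))     # thousands of packets
print(packets_per_frame(audio_frame_bytes))     # far fewer than video
print(packets_per_frame(metadata_frame_bytes))  # typically a single packet
```

Video dominates by three orders of magnitude, which is why the interleaved flow in Figure XX-3 shows mostly V blocks.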

XX.1 Generic Use Case 1: Duplicating video on additional monitors

Figure XX-4: Duplicating on an additional monitor

In the context of image-guided surgery, two operators directly contribute to the procedure:
- a surgeon performing the operation itself, using the relevant instruments;
- an assistant controlling the imaging system (e.g. coelioscope).

In some situations, both operators cannot stand on the same side of the patient. Because the control image has to be in front of each operator, two monitors are required: a primary one, directly connected to the imaging system, and a second one on the other side. Additional operators (e.g. the surgery nurse) also have to see what is happening in order to anticipate actions (e.g. providing an instrument). The live video image has to be transferred to the additional monitors with minimal latency, without modifying the image itself (resolution, etc.). The latency between the two monitors (see Figure XX-4) should be compatible with collaborative surgical activity in which the surgeon operates based on the second monitor and the assistant controls the endoscope based on the primary monitor. This supplement addresses only the communication aspects, not the presentation. Some XXscopes now produce UHD video, with the perspective of also supporting HDR (High Dynamic Range) for larger color gamut management (up to 10 bits per channel) as well as HFR (High Frame Rate), i.e. up to 120 Hz.

XX.2 Generic Use Case 2: Post Review by Senior

Figure XX-5: Recording multiple video sources

A junior surgeon performs a procedure which apparently goes well. The next day, the patient's state has deteriorated, requiring the surgeon to refer the patient to a senior surgeon. In order to decide what to do, the senior surgeon:
- has to review and understand what happened;

- takes the decision whether or not to re-operate on the patient;
- if a new operation is performed, needs access to the suspect sequence of the first operation.

Moreover, the junior surgeon has to review her/his own work in order to prevent a new mistake. A good-quality recording of the video needs to be kept, at least for a certain duration, including all the video information (endoscopy, overhead, monitoring, etc.) and associated metadata (see Figure XX-5). Storing the video has to be feasible in real time. The recording has to maintain time consistency between the different video channels. The format of the recording is out of the scope of this supplement, as is the way of replaying the recorded videos. Only the method for feeding the recorder with the synchronized videos and associated metadata is specified by the present supplement.

XX.3 Generic Use Case 3: Automatic display in the Operating Room (OR)

Figure XX-6: Displaying multiple sources on one unique monitor

ORs are increasingly equipped with large monitors displaying all the necessary information. Depending on the stage of the procedure, the information to display changes. In order to improve the quality of the real-time information shared inside the OR, it is relevant to automate the set-up of such a display, based on the metadata conveyed along with the video (e.g. displaying the XXXscope image only when relevant). All the video streams have to be transferred with the relevant information (patient, study, equipment, etc.), as shown in Figure XX-6. The mechanisms for selecting and executing the layout of images on the large monitor are out of the scope of this supplement. Only the method for conveying the multiple synchronized videos along with the metadata, used as parameters for controlling the layout, is specified in the present supplement.
XX.4 Generic Use Case 4: Augmented Reality

Figure XX-7: Application combining multiple real-time video sources

Image-guided surgery is gradually becoming mainstream, mainly because it is minimally invasive. In order to guide the surgeon's gesture, several procedures are based on the 3D display of the patient's anatomy reconstructed from MR or CT scans. But real-time medical imaging (typically 3D ultrasound) can also be used as a reference. Display devices (glasses, tablets, etc.) will be used to show the real-time composite image

merged from the main video imaging (endoscopy, overhead, microscopy, etc.) and the multi-frame medical imaging. The real-time composite image could also be exported as a new video source, through the DICOM Real-Time Video protocol. All video streams have to be transferred with ultra-low latency and very strict synchronization between frames (see Figure XX-7). Metadata associated with the video has to be updated at the frame rate (e.g. the 3D position of the US probe). The mechanisms used for combining multiple video sources or for detecting and following the 3D position of devices are out of the scope of this supplement. Only the method for conveying the multiple synchronized video/multi-frame sources along with the parameters, which may change at every frame, is specified in the present supplement.

XX.5 Generic Use Case 5: Robotic-aided surgery

Robot-assisted surgery is emerging. Image-guided robots or cobots are gradually being used for different kinds of procedures. In the near future, different devices will have to share the information provided by the robot, synchronized with the video produced by the imaging sources. In order to be able to process the information provided by the robot properly, it should be possible to convey such information at a frequency higher than the video frequency, e.g. 400 Hz vs. 60 Hz for present HD.

PS3.17: Add a new Annex "Transport of Elementary Stream over IP" as indicated.

YY Transport of Elementary Stream over IP (Informative)

Carriage of audiovisual signals in their digital form across television plants has historically been achieved using coaxial cables that interconnect equipment through Serial Digital Interface (SDI) ports. The SDI technology provides a reliable transport method to carry a multiplex of video, audio and metadata with strict timing relationships.

The features and throughput of IP networking equipment having improved steadily, it has become practical to use IP switching and routing technology to convey and switch video, audio, and metadata essence within television facilities. Existing standards such as SMPTE ST 2022-6:2012 have seen significant adoption in this type of application, where they have brought distinct advantages over SDI, albeit only performing circuit emulation of SDI (i.e. perfect bit-accurate transport of the SDI signal contents).

However, the essence multiplex proposed by the SDI technology may be considered somewhat inefficient in many situations, since a significant part of the signal is left unused if little or no audio and/or ancillary data has to be carried along with the video raster, as depicted in Figure YY-1 below.

Figure YY-1: Structure of a High Definition SDI signal (active lines and blanking)

As new image formats such as UHD are introduced, the corresponding SDI bit-rates increase well beyond 10 Gb/s, and the cost of the equipment that needs to be used at different points in a TV plant to embed, de-embed, process, condition, distribute, etc., the SDI signals becomes a major concern. Consequently, there has been a desire in the industry to switch and process different essence elements separately, leveraging the flexibility and cost-effectiveness of commodity networking gear and servers.
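The bit-rate concern can be illustrated with simple arithmetic. The sampling and bit-depth figures below are common broadcast choices used here only as an assumption, and they cover active video alone; SDI blanking overhead pushes the link rate higher still.

```python
# Back-of-the-envelope bit rates for uncompressed active video, showing
# why UHD outgrows a 10 Gb/s link. Sampling and bit depth are
# illustrative assumptions (4:2:2 at 10 bits per component, i.e. an
# average of 20 bits per pixel).

def active_video_gbps(width: int, height: int, fps: int, bits_per_pixel: int) -> float:
    """Active-video bit rate in Gb/s (decimal gigabits)."""
    return width * height * fps * bits_per_pixel / 1e9

hd  = active_video_gbps(1920, 1080, 60, 20)   # ~2.49 Gb/s
uhd = active_video_gbps(3840, 2160, 60, 20)   # ~9.95 Gb/s, active video alone

print(f"1080p60: {hd:.2f} Gb/s, 2160p60: {uhd:.2f} Gb/s")
```

Even before blanking is added, 2160p60 nearly saturates a 10 Gb/s link, which is what motivates carrying essences separately over commodity IP networks.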

The Video Services Forum (VSF) has authored its Technical Recommendation #3 (a.k.a. VSF TR-03), describing the principles of a system where streams of different essences (namely video, audio and metadata, to begin with) can be carried over an IP-based infrastructure whilst preserving their timing characteristics. VSF TR-03 leverages heavily on existing technologies such as RTP, AES67 and PTP, mostly defining how they can be used together to build the foundations of a working ecosystem.

The TR-03 work prepared by VSF has been handed off to the Society of Motion Picture & Television Engineers (SMPTE) for the due standardization process. The 32NF-60 Drafting Group has broken the TR-03 work down into different documents addressing distinct aspects of the system. This family of standards (once approved, which is not yet the case at the time of this writing) bears the ST 2110 prefix. The initial documents identified in the family are:

- ST 2110-10: System Timing and definitions;
- ST 2110-20: Uncompressed active video;
- ST 2110-30: Uncompressed PCM audio;
- ST 2110-40: Ancillary data;
- ST 2110-50: ST 2022-6 as an essence.

The system is intended to be extensible to a variety of essence types, its pivotal point being the use of the RTP protocol. In this system, essence streams are encapsulated separately into RTP before being individually forwarded through the IP network. A system is built from devices that have senders and/or receivers. Streams of RTP packets flow from senders to receivers. RTP streams can be either unicast or multicast, in which case multiple receivers can receive the stream over the network.
Devices may be adapters that convert from/to existing standard interfaces like HDMI or SDI, or they may be processors that receive one or more streams from the IP network, transform them in some way and transmit the resulting stream(s) to the IP network. Cameras and monitors may transmit and receive elementary RTP streams directly through an IP-connected interface, eliminating the need for legacy video interfaces.

Proper operation of the ST 2110 environment relies on a solid timing infrastructure that has been largely inspired by the one used in AES67 for Audio over IP. Inter-stream synchronization relies on timestamps in the RTP packets that are sourced by the senders from a common Reference Clock. The Reference Clock is distributed over the IP network to all participating senders and receivers via PTP (Precision Time Protocol version 2, IEEE 1588-2008). Synchronization at the receiving device is achieved by the comparison of RTP timestamps with the common Reference Clock. The timing relationship between different streams is determined by their relationship to the Reference Clock.

Each device maintains a Media Clock which is frequency-locked to its internal timebase and advances at an exact rate specified for the specific media type. The media clock is used by senders to sample media and by receivers when recovering digital media streams. For video and ancillary data, the rate of the media clock is 90 kHz, whereas for audio it can be 44.1 kHz, 48 kHz, or 96 kHz. For each specific media type RTP stream, the RTP Clock operates at the same rate as the Media Clock.
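The mapping from the PTP-distributed Reference Clock to RTP timestamps can be sketched as follows. The 90 kHz video rate and 48 kHz audio rate come from the text above; the helper function itself is an illustrative assumption, not code from any standard.

```python
# Sketch: deriving 32-bit RTP timestamps from the common PTP Reference
# Clock. Media-clock rates follow the text above (90 kHz for video and
# ancillary data, 48 kHz as one of the permitted audio rates).

VIDEO_CLOCK_HZ = 90_000
AUDIO_CLOCK_HZ = 48_000

def rtp_timestamp(ptp_seconds: float, media_clock_hz: int) -> int:
    """Map a PTP time (seconds since the PTP epoch) to a 32-bit RTP timestamp."""
    return int(ptp_seconds * media_clock_hz) % (1 << 32)

# Two essences sampled at the same instant reference the same PTP time,
# so a receiver can realign them by comparing their RTP timestamps
# against the common Reference Clock.
t = 10.0  # an example instant on the PTP timeline
print(rtp_timestamp(t, VIDEO_CLOCK_HZ))  # 900000
print(rtp_timestamp(t, AUDIO_CLOCK_HZ))  # 480000
```

Because the timestamp field is only 32 bits, it wraps around; receivers therefore compare timestamps modulo 2^32 against the recovered Reference Clock rather than as absolute times.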

ST 2110-20 proposes a very generic mechanism for RTP encapsulation of a video raster. It supports arbitrary resolutions and frame rates, and proposes a clever pixel packing accommodating an extremely wide variety of bit depths and sampling modes. It is heavily inspired by IETF RFC 4175.

ST 2110-30 provides a method to encapsulate PCM digital audio using AES67, to which it applies a number of constraints.

ST 2110-40 provides a simple method to tunnel packets of SDI ancillary data present in a signal over the IP network, and enables a receiver to reconstruct an SDI signal that embeds the ancillary data at the exact same places it occupied in the original stream.

Devices that contain one or more senders have to construct one SDP (Session Description Protocol) object per RTP stream. These SDP objects are made available through the management interface of the device, thereby publishing the characteristics of the streams they encapsulate. This provides the basic information a system needs to gather in order to identify the available signal sources on the network.

It is worth noting that although ST 2110 currently describes the method for transporting video and audio as uncompressed essence, the same principles may be applied to other types of media by selecting the appropriate RTP payload encapsulation scheme and complying with the general principles defined by ST 2110-10.
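As a rough illustration of such an SDP object, the sketch below assembles a minimal description for a hypothetical ST 2110-20 video stream. All addresses, ports and identifiers are placeholders, and the exact attribute set is an assumption based on common ST 2110 practice, not a normative example from this supplement.

```python
# Sketch: a minimal SDP object a sender might publish for one
# uncompressed video RTP stream. Every concrete value below
# (addresses, port, grandmaster identity, format parameters) is a
# placeholder chosen for illustration.

def make_video_sdp(mcast_addr: str, port: int) -> str:
    """Assemble an SDP description for a hypothetical ST 2110-20 stream."""
    lines = [
        "v=0",
        "o=- 123456 11 IN IP4 192.0.2.10",   # example sender address
        "s=Example ST 2110-20 video stream",
        f"c=IN IP4 {mcast_addr}/32",
        "t=0 0",
        f"m=video {port} RTP/AVP 96",
        "a=rtpmap:96 raw/90000",             # 90 kHz video media clock
        ("a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; "
         "exactframerate=60000/1001; depth=10; colorimetry=BT709; "
         "PM=2110GPM; SSN=ST2110-20:2017;"),
        # Reference clock: PTP grandmaster identity (placeholder) and domain.
        "a=ts-refclk:ptp=IEEE1588-2008:39-A7-94-FF-FE-07-CB-D0:37",
        "a=mediaclk:direct=0",
    ]
    return "\r\n".join(lines) + "\r\n"

print(make_video_sdp("239.100.9.10", 5004))
```

A management system that collects these objects from every device can enumerate the available sources and their formats without probing the streams themselves.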