KEY INDICATORS FOR MONITORING AUDIOVISUAL QUALITY

Proceedings of the Seventh International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), January 30 - February 1, 2013, Scottsdale, Arizona

Mikołaj Leszczuk, Mateusz Hanusiak, Ignacio Blanco (AGH University), Emmanuel Wyckens (Orange Labs), Silvio Borer (SwissQual)

September 14, 2012

Abstract

Automated quality checking is currently based on detecting major video and audio artefacts. The Monitoring Of Audiovisual quality by key Indicators (MOAVI) subgroup of the Video Quality Experts Group (VQEG) is an open collaborative project for developing no-reference models for monitoring audiovisual service quality. MOAVI is a complementary, industry-driven alternative to overall Quality of Experience (QoE) models, measuring audiovisual quality automatically by means of simple indicators of perceived degradation. The goal is to develop a set of key indicators describing service quality in general, including blocking, blurring, freeze/jerkiness, ghosting, slice (video stripe) errors, aspect ratio problems, field order problems, photosensitive epilepsy flashing, silence, and clipping (the list is not final, but it covers the most important artefacts), and to select subsets of these indicators for each potential application. The MOAVI project therefore concentrates on models based on key indicators, in contrast to models predicting overall quality.

1 INTRODUCTION

Current No-Reference (NR) Quality of Experience (QoE) models, such as those reported in related research work [1], address measuring the quality of networked multimedia using objective parametric models. These models may have difficulty predicting overall audiovisual QoE. A complementary, industry-driven alternative that measures quality automatically by means of simple indicators of perceived degradation can therefore now be proposed. Consequently, the Monitoring Of Audio-Visual quality by key Indicators (MOAVI) [2] subgroup of the Video Quality Experts Group (VQEG) [3], an open collaborative project for developing NR models for monitoring audiovisual service quality, is developing such a set of key indicators.

This paper is organized as follows. Section 2 describes the limitations of existing models. Section 3 presents MOAVI's key indicators. Section 4 concludes the paper and outlines future work.

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 218086 (INDECT).

2 BACKGROUND AND STATE-OF-THE-ART

This section presents the limitations of the state-of-the-art Full-Reference (FR), Reduced-Reference (RR) and NR metrics of the standardized models. Most of the models in the Recommendations have been validated under the following assumptions:

- frame freezes of up to 2 seconds;
- no degradation at the beginning or at the end of the video sequence;
- no skipped frames;
- a clean video reference (no spatial or temporal distortions);
- a minimum delay between the video reference and the processed video (sometimes a constant delay);
- up- or down-scaling operations are not always taken into account.

Most models are based on measuring conventional blurriness, blockiness and jerkiness artefacts to produce predicted Mean Opinion Scores (MOS). Most algorithms producing MOS scores combine blur, blockiness and jerkiness metrics, and the weighting between the indicators can be a simple mathematical function. If one of the indicators is incorrect, the global predicted score is completely wrong. The other indicators addressed by MOAVI (e.g. ghosting, slice errors) are not taken into account when producing the MOS.
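To make this limitation concrete, the following minimal sketch (in Python) shows how a predicted MOS built as a simple weighted combination of blur, blockiness and jerkiness indicators is corrupted when a single indicator misfires. The weights and indicator values are invented for this illustration; they are not taken from MOAVI or from any ITU-T Recommendation.

    # Hypothetical illustration only: the weights and indicator values below are
    # invented for this sketch; they do not come from MOAVI or any Recommendation.

    def predict_mos(blur, blockiness, jerkiness):
        """Toy NR model: map three artefact indicators (0 = clean, 1 = worst case)
        to a predicted MOS on the usual 1-5 scale via a fixed linear weighting."""
        weights = {"blur": 1.5, "blockiness": 1.5, "jerkiness": 1.0}  # assumed weights
        degradation = (weights["blur"] * blur
                       + weights["blockiness"] * blockiness
                       + weights["jerkiness"] * jerkiness)
        return max(1.0, 5.0 - degradation)  # clamp at the bottom of the MOS scale

    # A mildly degraded clip: every indicator reports a small degradation.
    print(round(predict_mos(blur=0.2, blockiness=0.1, jerkiness=0.1), 2))  # 4.45

    # The same clip, but the blur indicator misfires (e.g. it is confused by soft,
    # defocused content): one faulty value alone drops the prediction to 3.25.
    print(round(predict_mos(blur=1.0, blockiness=0.1, jerkiness=0.1), 2))  # 3.25

MOAVI avoids this failure mode by reporting each key indicator separately instead of collapsing all of them into a single score.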

The history of the ITU-T Recommendations is summarized in Table 1, while the metrics based on the video signal only are summarized in Table 2.

Table 1: History of ITU-T Recommendations.

Type of Model   Format         Recommendation   Year
FR              SD             J.144 [4]        2004
FR              QCIF/CIF/VGA   J.247 [5]        2008
RR              QCIF/CIF/VGA   J.246 [6]        2008
RR              SD             J.249 [7]        2010
FR              HD             J.341 [8]        2011
RR              HD             J.342 [9]        2011
Bitstream       VGA/HD         In progress      Expected 2013
Hybrid          VGA/HD         In progress      Expected 2013

Table 2: Synthesis of the FR, RR and NR MOS models (ITU-T Recommendations by resolution and model type).

Resolution   FR          RR          NR
HDTV         J.341 [8]   n/a         n/a
SDTV         J.144 [4]   n/a         n/a
VGA          J.247 [5]   J.246 [6]   n/a
CIF          J.247 [5]   J.246 [6]   n/a
QCIF         J.247 [5]   J.246 [6]   n/a

The related research work [10] addresses measuring multimedia quality in mobile networks with an objective parametric model. Current standardization activity at ITU-T SG12 on models for multimedia and IPTV based on bit-stream information is also closely related. SG12 is now working on models for IPTV; Q.14/12 is responsible for these projects, provisionally called P.NAMS (non-intrusive parametric model for assessment of performance of multimedia streaming) and P.NBAMS (non-intrusive bit-stream model for assessment of performance of multimedia streaming). P.NAMS uses packet-header information only (e.g. from IP through MPEG2-TS), while P.NBAMS is also able to use the payload information, i.e. the coded bit-stream [11]. However, this work has so far focused on overall quality (in MOS units), whereas MOAVI focuses on Key Performance Indicators (KPI). The MOAVI project could also be used to study human behaviour over longer periods and to propose an adapted model with enhanced SSCQE methods.

Most of the recommended models are based on a global quality evaluation of the video sequences, as in the P.NAMS and P.NBAMS projects. The predicted score is correlated with the subjective score obtained using a global evaluation method (SAMVIQ, DSCQS, ACR, etc.). Generally, the duration of the video sequences is limited to 10 s or 15 s in order to avoid a forgiveness effect: the observer cannot assess the video correctly after 30 s and is prone to giving more weight to artefacts occurring at the end of the sequence. When a single model is used for monitoring video services, the global scores are provided for fixed temporal windows and without any acknowledgement of the previous scores.

3 MOAVI'S KEY INDICATORS FOR AUTOMATED QUALITY CHECKING

Automated quality checking is currently based on detecting major video and audio artefacts. The processing is performed on the video signal and/or the bit-stream. Quality checking can be conducted before, during, and/or after the encoding process. However, in MOAVI, no MOS is provided (a minimal sketch of what an indicator-based report could look like follows the list below). MOAVI key artefact indicators are classified into four directories based on their origins:

1. Capturing
2. Processing
3. Transmission
4. Displaying
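The paper does not prescribe an output format for the indicators. Purely as an illustration, the sketch below (the report structure, class names and field names are our assumptions, not part of MOAVI) shows how per-indicator flags over fixed temporal windows could replace a single global MOS.

    # Illustrative sketch only: the report structure and indicator names below are
    # assumptions made for this example; MOAVI does not prescribe this format.
    from dataclasses import dataclass, field

    @dataclass
    class IndicatorResult:
        name: str            # e.g. "blocking", "freezing", "silence"
        origin: str          # "capturing" | "processing" | "transmission" | "displaying"
        window_start_s: float
        window_end_s: float
        detected: bool       # flag raised when the artefact exceeds its threshold

    @dataclass
    class MonitoringReport:
        """Per-window key-indicator flags instead of a single global MOS."""
        results: list = field(default_factory=list)

        def add(self, result: IndicatorResult) -> None:
            self.results.append(result)

        def alarms(self):
            """Return only the indicators that actually fired."""
            return [r for r in self.results if r.detected]

    # Example usage with invented values:
    report = MonitoringReport()
    report.add(IndicatorResult("freezing", "transmission", 0.0, 10.0, detected=True))
    report.add(IndicatorResult("blocking", "processing", 0.0, 10.0, detected=False))
    print([r.name for r in report.alarms()])   # ['freezing']

Such a structure keeps each indicator independent, so a misfiring indicator raises (or misses) one alarm without corrupting the others, in contrast to the weighted MOS combination discussed in Section 2.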

3.1 Capturing

These artefacts are introduced during video recording. Images and video are captured using cameras comprising an optical system and a sensor with processing circuitry; reflected light from the object or scene forms an image on the sensor [12]. Capture artefacts affect both analogue and digital systems, since they occur at the front end of image acquisition. Artefacts include: blurring, flickering, exposure time distortions, ghosting, mute, shaking, rainbow effect, lip sync, blackout, clipping, and vignetting.

3.2 Processing

Processing is required to meet constraints such as bandwidth limitations imposed by the medium and to provide immunity against medium noise. There are many coding techniques for removing the redundancies in images and video. Coding can introduce artefacts such as reduced spatial and temporal resolution, which are the common and dominant undesirable visible effects [12]. Artefacts include: blocking, blurring, flickering, freezing/jerkiness (jerky motion), ghosting, ringing/mosquito noise, colour bleeding, lip sync, clipping, and framing (pillar-boxing/letter-boxing).

3.3 Transmission

When data is transmitted through a medium, some of the data may be lost or distorted, or may arrive multiple times due to reflections. When data arrives through many paths in addition to the direct path, the distortion is known as multipath distortion and affects both analogue and digital communications [12]. Artefacts include: blocking, blurring, flickering, freezing/jerkiness (jerky motion), ghosting, ringing/mosquito noise, mute, block missing, stripe noise, colour bleeding, lip sync, and blackout.

3.4 Displaying

As display technology developed, different display systems began to offer different subjective quality at the same resolution. With the latest display screens, the differences between OLED, LCD and SED technologies have been reduced to a minimum. Artefacts include: block missing, stripe noise, aspect ratio error, photosensitive epilepsy flashing effect, lip sync, blackout, and framing (pillar-boxing/letter-boxing).
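As an illustration of how one of the indicators listed above could operate directly on the decoded video signal, the following minimal sketch flags freezing when consecutive frames stay nearly identical for too long. The detector, its threshold and its window length are illustrative assumptions, not MOAVI's; validated thresholds are precisely the kind of result the psychophysical work outlined in the next section is meant to provide.

    # Illustrative no-reference freeze indicator; the thresholds are invented for
    # this sketch and are not taken from MOAVI or any ITU-T Recommendation.
    import numpy as np

    def detect_freeze(frames, diff_threshold=0.5, min_frozen_frames=50):
        """Return (start, end) index pairs of runs where consecutive frames are
        nearly identical (mean absolute luma difference below diff_threshold)
        for at least min_frozen_frames frames. `frames` is an iterable of 2-D
        numpy arrays (luma planes of equal size)."""
        runs = []
        run_start = None
        prev = None
        last_index = -1
        for i, frame in enumerate(frames):
            last_index = i
            if prev is not None:
                diff = np.mean(np.abs(frame.astype(np.int16) - prev.astype(np.int16)))
                if diff < diff_threshold:
                    if run_start is None:
                        run_start = i - 1          # run starts at the earlier frame
                else:
                    if run_start is not None and i - run_start >= min_frozen_frames:
                        runs.append((run_start, i - 1))
                    run_start = None
            prev = frame
        if run_start is not None and last_index - run_start + 1 >= min_frozen_frames:
            runs.append((run_start, last_index))   # run extends to the end of the clip
        return runs

    # Example with synthetic frames: 30 moving frames, then 100 frozen copies.
    rng = np.random.default_rng(0)
    moving = [rng.integers(0, 256, (72, 128), dtype=np.uint8) for _ in range(30)]
    frozen = [moving[-1].copy() for _ in range(100)]
    print(detect_freeze(moving + frozen))          # [(29, 129)]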

4 CONCLUSIONS AND NEXT STEPS

This project is still in its infancy; questions should be submitted to the MOAVI Co-Chairs. Nine people have been involved in the activity so far. In the next step, methods for measuring the distortions will be analysed. Psychophysical experiments will then be conducted for the distortions for which quantitative thresholds are missing. The resulting thresholds will be contributed to the research community by means of a published scientific paper.

REFERENCES

[1] R. Venkatesh Babu, Ajit S. Bopardikar, Andrew Perkis, and Odd Inge Hillestad, "No-reference metrics for video streaming applications," 2002.
[2] Emmanuel Wyckens, Silvio Borer, and Mikołaj Leszczuk, "MOAVI (Monitoring of Audio Visual Quality by Key Indicators) Project," VQEG, July 2012, http://www.its.bldrdoc.gov/vqeg/projects/moavi/moavi.aspx.
[3] VQEG, "The Video Quality Experts Group," July 2012, http://www.vqeg.org/.
[4] ITU-T Rec. J.144, "Objective perceptual video quality measurement techniques for digital cable television in the presence of a full reference," 2004.
[5] ITU-T Rec. J.247, "Objective perceptual multimedia video quality measurement in the presence of a full reference," 2008.
[6] ITU-T Rec. J.246, "Perceptual visual quality measurement techniques for multimedia services over digital cable television networks in the presence of a reduced bandwidth reference," 2008.
[7] ITU-T Rec. J.249, "Perceptual video quality measurement techniques for digital cable television in the presence of a reduced reference," 2010.
[8] ITU-T Rec. J.341, "Objective perceptual multimedia video quality measurement of HDTV for digital cable television in the presence of a full reference," 2011.
[9] ITU-T Rec. J.342, "Objective multimedia video quality measurement of HDTV for digital cable television in the presence of a reduced reference signal," 2011.
[10] J. Gustafsson, G. Heikkila, and M. Pettersson, "Measuring multimedia quality in mobile networks with an objective parametric model," in Proc. 15th IEEE International Conference on Image Processing (ICIP 2008), Oct. 2008, pp. 405-408.
[11] Akira Takahashi, Kazuhisa Yamagishi, and Ginga Kawaguti, "Global standardization activities: Recent activities of QoS/QoE standardization in ITU-T SG12," NTT Technical Review, vol. 6, no. 9, pp. 1-5, 2008.
[12] Amal Punchihewa and Donald G. Bailey, "Artefacts in Image and Video Systems: Classification and Mitigation," in Proc. Image and Vision Computing New Zealand, 2002.