
UC San Diego Previously Published Works

Title: Classification of MPEG-2 Transport Stream Packet Loss Visibility
Permalink: https://escholarship.org/uc/item/9wk791h
Authors: Shin, J; Cosman, P C
Publication Date: 2010-03-01
Peer reviewed

escholarship.org, powered by the California Digital Library, University of California

CLASSIFICATION OF MPEG-2 TRANSPORT STREAM PACKET LOSS VISIBILITY

Jihyun Shin and Pamela C. Cosman
Dept. of Electrical and Computer Engineering, University of California, San Diego

ABSTRACT

We classify the visibility of Transport Stream (TS) packet losses for SDTV and HDTV MPEG-2 compressed video streams. TS packet losses can cause various temporal and spatial losses. The visual effect of a TS packet loss depends on many factors, in particular whether the loss causes a whole frame loss or a partial frame loss. We develop models for predicting loss visibility at both SDTV and HDTV resolutions for the frame loss and partial frame loss cases, and we compare the dominant predictive factors and the results for the two resolutions. We achieve more than 85% classification accuracy.

Index Terms: TS packet, support vector machine, classification, packet loss visibility

1. INTRODUCTION

Most prior work on the visual effects of packet loss studied average perceptual scores of videos subject to average packet loss rates. In contrast, our prior work focused on predicting the visibility of individual packet losses [1, 2, 3], and provided loss visibility models for MPEG-2 and H.264 compressed videos. These models assumed that one horizontal slice of macroblocks was put in one packet. However, MPEG-2 video streams are often packetized into Transport Stream (TS) packets of 188 bytes. The goal of this work is to predict the loss visibility for these common fixed-size packets. This would be useful for visibility-based unequal error protection, or for packet-dropping strategies during congestion. The visual effect of a TS packet loss varies considerably depending on many factors. We ignore the extreme case of sequence loss from losing a portion of the sequence header, and focus on frame loss (FL), when the packet contains a frame header, and partial frame loss (PFL), when it does not. A lost TS packet can cause various temporal errors: if a reference frame is lost, the error can propagate.
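As a rough illustration of the FL/PFL distinction, a lost 188-byte TS packet can be checked for an MPEG-2 picture (frame) header by scanning its payload for the picture start code 0x00000100. The sketch below is a simplification and not the paper's method: it ignores PES headers, adaptation fields, and start codes that straddle packet boundaries.

```python
# Sketch: classify a lost MPEG-2 TS packet as frame loss (FL) vs
# partial frame loss (PFL) by checking for a picture start code.
# Simplified: no PES/adaptation-field parsing, no cross-packet codes.

PICTURE_START_CODE = b"\x00\x00\x01\x00"  # MPEG-2 picture header start code

def loss_type(ts_packet: bytes) -> str:
    """Return 'FL' if the 188-byte packet carries a picture header, else 'PFL'."""
    assert len(ts_packet) == 188 and ts_packet[0] == 0x47  # TS sync byte
    payload = ts_packet[4:]  # skip the 4-byte TS header (no adaptation field)
    return "FL" if PICTURE_START_CODE in payload else "PFL"

# Toy packets: one whose payload contains a picture start code, one without.
header = bytes([0x47, 0x40, 0x00, 0x10])
pkt_fl = header + PICTURE_START_CODE + bytes(180)
pkt_pfl = header + bytes(184)
```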
We conducted a subjective experiment with human observers to determine the visibility of individual TS packet losses with different characteristics. Experiments were conducted at two frame sizes: SDTV (720x480) and HDTV (1920x1088). Depending on the fraction of viewers who saw each loss, the packet is classified as leading to a visible or an invisible loss, and we developed computable classifiers to predict this class. In this paper, Section 2 describes the design and ground-truth results of our subjective test, Section 3 describes the factors used for classification as well as the support vector machine classifier, and Section 4 presents results and conclusions. (This work was supported by Futurewei Technologies, Inc. and by the Center for Wireless Communications at UCSD.)

2. SUBJECTIVE EXPERIMENT

For the subjective tests, we conducted a single-stimulus test, where only the lossy video being evaluated is shown (no original video). The observer was situated approximately six picture heights from the screen for the SD videos and three picture heights for the HD videos, conforming to ITU and DSL Forum recommendations [4, 5]. A Group of Pictures (GOP) consists of 15 frames, and the frame rate was 30 frames per second. Within the GOP, we refer to the P frames and B frames as follows:

  I B1 B2 P1 B1 B2 P2 B1 B2 P3 B1 B2 P4 B1 B2 I

DVD-quality MPEG-2 video was used, without audio. The nine raw video sources, with widely varying motions and textures, are in HD format; SD versions were obtained by downscaling with bicubic interpolation. The videos were concatenated to form a 20-minute sequence. Each subject watches one set: a lossy 20-minute SD video and the corresponding HD version. The experiment typically takes one hour, which includes an introductory session and a break. When viewers see a glitch, they press the space bar. Three different lossy video sets were generated from the original video set by dropping TS packets randomly, with pre-determined numbers for the loss types.
The videos are each observed by ten people. Table 1 shows the pre-determined numbers for the loss types. To create the lossy videos, one TS packet was dropped in every 4-second interval. The packet was dropped during the first 3 seconds of the interval, and the last 1 second was used as a guard interval. This gave the observer time to react to each individual loss, and also allowed the decoder to terminate any possible error propagation from the loss, thereby isolating each individual TS packet loss. For each resolution, data is gathered for a total of 900 TS packet losses over 60 minutes of MPEG-2 video. Of the 900 losses, 188 affected an entire frame and 712 led to a partial frame loss.

978-1-4244-4296-6/10/$25.00 (c)2010 IEEE, ICASSP 2010
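The drop schedule above (one packet loss per 4-second interval, placed in the first 3 seconds, with a 1-second guard) can be sketched as follows. The uniform placement within the 3-second window is an assumption for illustration; the paper only says the packet was dropped during the first 3 seconds.

```python
import random

# Sketch of the loss schedule: one drop per 4 s interval, drawn
# (assumed) uniformly within the first 3 s; the last 1 s is a guard
# interval that lets error propagation die out before the next drop.

def drop_times(duration_s: float, interval_s: float = 4.0,
               window_s: float = 3.0, seed: int = 0) -> list[float]:
    """Return one packet-drop instant per full interval in the video."""
    rng = random.Random(seed)
    times = []
    t = 0.0
    while t + interval_s <= duration_s:
        times.append(t + rng.uniform(0.0, window_s))  # drop in first 3 s
        t += interval_s
    return times

times = drop_times(60 * 60)  # 60 minutes per resolution -> 900 drops
```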

Table 1. Numbers of frame losses (top) and partial frame losses (bottom)

  Frame loss type:           I    P1   P2   P3   P4   B1   B2
  Number of losses:          6    8    14   32   48   40   40

  Partial frame loss type:   I    P1   P2   P3   P4   B
  Number of losses:          104  108  120  124  124  132

[Fig. 1. Distributions of the number of observers who reacted, for SDTV (left bars) and HDTV (right bars): (a) partial frame losses, (b) frame losses.]

The lossy MPEG-2 compressed video streams were decoded by ffmpeg [6]. When confronted with a frame header loss, this decoder produces errors in the previous frame as well, because bits of the currently damaged frame are overwritten onto the previous frame (the frame header loss prevents the decoder from realizing that data from a new frame is being decoded). Because of the encoding frame order, a header loss of an I-frame, P-frame, or B1-frame damages a reference frame, and errors propagate until a new I-frame. For the PFL case, the ffmpeg decoder conceals the loss using a weighted motion-compensated error concealment strategy.

A TS packet loss was classified as invisible if two or fewer of the ten viewers saw the loss; if seen by three or more, the loss was classified as visible. (This definition is arbitrary; it would be of interest to examine results using a more stringent definition.) For SDTV, 76.1% of the data are invisible, and for HDTV, 71.6%. Fig. 1 shows the number of observers who reacted to packet losses for the PFL (Fig. 1(a)) and FL (Fig. 1(b)) cases. The distributions of visible PFL are similar for the two resolutions: over 80% of PFL packets are invisible for both. However, the FL cases for HDTV are more visible than they are for SDTV; Fig. 1(b) shows that the distribution of observed frame losses for HDTV is shifted to the right compared to the SDTV data. The fraction of invisible data is 51.6% for SDTV and 36.2% for HDTV.

As a preliminary exploration of the differences in observer responses, the data for whole frame losses was divided into reference frame losses and non-reference frame losses. Fig. 2(a) shows the subtraction of the distributions for SDTV and HDTV reference frame losses; Fig. 2(b) is the corresponding figure for non-reference frame losses. Fig. 2(b) appears to indicate that the data of the two resolutions are similar for non-reference frame losses. However, the distribution of reference frame losses shows a clear resolution-dependent trend: reference frame losses are more visible in HDTV.

[Fig. 2. SDTV distribution minus HDTV distribution: (a) reference frame losses, (b) non-reference frame losses.]

3. SUPPORT VECTOR MACHINE CLASSIFIER

Tables 2 and 3 list the factors for the PFL and FL cases, which are extracted at the encoder from an individual TS packet to predict its visibility. Macroblocks which are part of the lost packet but which are perfectly concealed by the ffmpeg error concealment approach are not counted as lost; therefore the factors IMSE, MBnum, avex, and avey, which consider lost macroblocks, exclude perfectly concealed macroblocks from the computation. The only factor common to the FL and PFL classifiers is TMDR, the number of frames affected by the TS packet loss. For B2 frames, TMDR = 1. For other frames, TMDR depends on the distance to the next I-frame. The maximum value TMDR can take is 17: our GOP has 15 frames, so an I packet loss can corrupt all frames in the GOP as well as the last two B-frames of the previous GOP.
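Under the 15-frame GOP described above, TMDR can be sketched as a lookup on the display-order position of the lost frame. The values for I-frames (17) and B2-frames (1) come from the text; the handling of P- and B1-frames as "frames remaining until the next I-frame" is an assumed reading of "TMDR depends on the distance to the next I frame".

```python
# Display-order GOP: I B1 B2 P1 B1 B2 P2 B1 B2 P3 B1 B2 P4 B1 B2
GOP = ["I", "B1", "B2", "P1", "B1", "B2", "P2", "B1", "B2",
       "P3", "B1", "B2", "P4", "B1", "B2"]

def tmdr(pos: int) -> int:
    """Number of frames affected by losing the frame at display position pos.

    B2 frames are not used as references, so TMDR = 1. Otherwise the error
    propagates up to the next I-frame; for an I-frame loss we add 2 for the
    trailing B-frames of the previous GOP that reference it (assumed model).
    """
    ftype = GOP[pos % len(GOP)]
    if ftype == "B2":
        return 1
    remaining = len(GOP) - (pos % len(GOP))  # frames up to the next I-frame
    return remaining + 2 if ftype == "I" else remaining
```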

Table 2. Extracted factors for PFL classifiers

  TMDR    Temporal duration
  IMSE    Mean square error per lost pixel
  MBnum   Number of lost MBs
  avex    Average horizontal location of lost MBs
  avey    Average vertical location of lost MBs

Table 3. Extracted factors for FL classifiers

  TMDR      Temporal duration
  ResEng    Average residual energy of the luminance component per pixel after motion compensation in the lost MBs
  avemvx    Average motion in the horizontal direction
  avemvy    Average motion in the vertical direction
  magmv     Average of motion vector magnitudes
  angmv     Average of motion vector angles
  stdmvmag  Variance of motion vector magnitudes
  stdmvang  Variance of motion vector angles

MBnum, avex, and avey are meaningful for PFLs, but not useful for FLs, as they take constant values for FL packets (for SDTV, MBnum = 1350, avex = 360, avey = 240). IMSE (Initial Mean Square Error) is the mean squared error per pixel between the decoded videos with and without a TS packet loss, evaluated over the pixels in lost macroblocks. IMSE, a factor used for PFL, is ill-defined for FL since the lost frame is discarded; instead of attempting to estimate a value for IMSE, the FL classifier uses ResEng, the average residual energy of the luminance component per pixel after motion compensation. Since there are no motion vectors in an I-frame, when the packet being evaluated occurs in an I-frame, the motion vectors of the previous (non-I) frame are used for the six motion-vector-related factors. We divide the data into four data sets, based on resolution (SDTV/HDTV) and loss type (FL/PFL), and extract the factors for each data set.
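The IMSE definition above can be sketched directly: a per-pixel squared error between the loss-free and lossy decodes, restricted to the pixels of macroblocks flagged as lost. The 16x16 macroblock grid and the boolean lost-MB mask are assumptions for illustration.

```python
import numpy as np

# Sketch of IMSE: mean squared error per pixel between the loss-free
# and lossy decoded luma frames, over pixels of lost macroblocks only.

MB = 16  # macroblock size in pixels (assumed 16x16 grid)

def imse(ref: np.ndarray, lossy: np.ndarray, lost_mb: np.ndarray) -> float:
    """ref/lossy: HxW luma frames; lost_mb: (H/16)x(W/16) bool mask."""
    # expand the MB-level mask to a pixel-level mask
    pixel_mask = np.kron(lost_mb.astype(int), np.ones((MB, MB), dtype=int)).astype(bool)
    err = (ref.astype(float) - lossy.astype(float)) ** 2
    return float(err[pixel_mask].mean())

# Toy example: one lost macroblock whose pixels are off by 10 everywhere.
ref = np.zeros((32, 32))
lossy = ref.copy()
lossy[:16, :16] += 10.0
lost = np.array([[True, False], [False, False]])
```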
Instead of using these factors directly as features of the support vector machine (SVM) classifier [7], we calculated a feature matrix

  FM[j, i] = sum_{n=1}^{N} ( C_n * (P_{i,n} - P_{j,n}) )^2,   for 1 <= i, j <= K,

where K is the number of data points, N is the number of factors, C_n is the absolute correlation coefficient between the loss visibility and the n-th factor, and P_{i,n} denotes the value of the n-th factor for the i-th data point. The values of C_n are listed in Table 4. For PFL, N = 5 and K = 712; for FL, N = 8 and K = 188. After calculating the feature matrix, each column is normalized to zero mean and unit variance. We train an SVM classifier on the feature matrix under a radial basis function kernel using the LIBSVM software [8].

Table 4. Absolute values of the correlation coefficients

  Factor of PFL   SDTV    HDTV
  TMDR            .25     .183
  IMSE            .468    .1541
  MBnum           .432    .3985
  avex            .149    .27
  avey            .444    .345

  Factor of FL    SDTV    HDTV
  TMDR            .4662   .5183
  ResEng          .1766   .147
  avemvx          .688    .313
  avemvy          .566    .411
  magmv           .49     .33
  angmv           .4367   .3332
  stdmvmag        .684    .524
  stdmvang        .1671   .185

4. RESULTS

The performance of a classifier is measured by 8-fold cross-validation. We calculated performance for all combinations of factors; the best set of i+1 factors always included the best set of i factors. Figs. 3 and 4 show classification accuracy as more factors are added; the order in which factors are added is given in Tables 5 and 6. The first line of Tables 5 and 6 shows the accuracy of the default classifier, which uses no factors and classifies all PFL packets and all SDTV FL packets as invisible, and all HDTV FL packets as visible.

Frame Loss Classifiers: Only three factors, TMDR, magmv, and stdmvmag, contribute significant performance improvement in both resolutions. If an SDTV FL classifier uses only TMDR, its performance is merely 2.7% lower than with the magmv factor used alone. However, the HDTV FL classifier performance using only magmv is 11.7% worse than with TMDR alone.
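The feature-matrix construction above can be sketched with NumPy: weight each factor by its absolute correlation with the visibility labels, take squared weighted differences between every pair of data points, and normalize each column. The toy data is illustrative only.

```python
import numpy as np

# Sketch of the correlation-weighted feature matrix:
# FM[j, i] = sum_n (C_n * (P[i, n] - P[j, n]))**2,
# followed by per-column normalization to zero mean and unit variance.

def feature_matrix(P: np.ndarray, y: np.ndarray) -> np.ndarray:
    """P: K x N factor matrix; y: visibility labels (one per data point)."""
    K, N = P.shape
    # absolute correlation of each factor with the visibility labels
    C = np.array([abs(np.corrcoef(P[:, n], y)[0, 1]) for n in range(N)])
    W = P * C  # scale each factor by its correlation weight
    diff = W[:, None, :] - W[None, :, :]   # pairwise differences, K x K x N
    FM = (diff ** 2).sum(axis=-1)          # FM[j, i]
    # normalize each column to zero mean, unit variance before the SVM
    return (FM - FM.mean(axis=0)) / FM.std(axis=0)

rng = np.random.default_rng(0)
P = rng.normal(size=(20, 5))          # 20 data points, 5 factors (as for PFL)
y = (P[:, 0] > 0).astype(float)       # toy visibility labels
FM = feature_matrix(P, y)
```

The normalized K x K matrix is then fed to an RBF-kernel SVM (the paper uses LIBSVM [8]).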
Therefore, TMDR is a very important factor in both FL classifiers, although the listed orders differ slightly. TMDR values differ between reference and non-reference frames, and also among reference frames. The HDTV tests showed that only 5% of non-reference frame (B2) losses are visible, while 79.7% of reference frame losses are visible. The performance improvement of the final FL classifiers is 36.6% for SDTV and 27.1% for HDTV.

Partial Frame Loss Classifiers: The performance of the classifiers is 86.5% and 83.4% for the SDTV and HDTV resolutions. This represents improvements of only 3.8% and 2.6% from using five factors, relative to the default classifier, which classifies all TS PFL packets as invisible. Because the PFL training sets are so unbalanced towards the invisible class, the SVM does not have sufficient opportunity to learn about visible data. Fig. 4 shows the performance improvement from adding factors: four factors are useful for the SDTV case, and only two for HDTV. The number of lost macroblocks
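The factor ordering implied above ("the best set of i+1 factors always included the best set of i") is a greedy forward selection. A generic sketch follows, with a toy scoring function standing in for the paper's 8-fold cross-validated SVM accuracy; the gain values are illustrative, not the paper's numbers.

```python
# Greedy forward factor selection: at each step, add the factor that
# most improves the evaluation score of the current set.

def forward_select(factors, evaluate):
    """Return (factor, score) pairs in the order factors are greedily added."""
    chosen, order = [], []
    remaining = list(factors)
    while remaining:
        best = max(remaining, key=lambda f: evaluate(chosen + [f]))
        chosen.append(best)
        order.append((best, evaluate(chosen)))
        remaining.remove(best)
    return order

# Toy score: a base accuracy plus a fixed gain per factor (hypothetical).
gains = {"TMDR": 0.30, "magmv": 0.12, "stdmvmag": 0.05, "angmv": 0.0}
score = lambda subset: 0.516 + sum(gains[f] for f in subset)
order = forward_select(gains, score)
```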

is the most significant factor for both resolutions.

Discussion and Conclusions: TS packet losses are more visible for HD than for SD. This is probably because, under our viewing conditions (which follow the DSL Forum recommendations), HD pictures occupy a much larger field of view than SD pictures. One would expect it to be harder to detect glitches in pictures with a smaller angular field of view; this intuition is borne out by the SD FL data. Prediction accuracy for the SD PFL classifier is slightly higher than for the HD PFL classifier. This may be because, for our encoding parameters, the 188-byte TS packet covers a higher fraction of the frame in the SD case than in the HD case, and prediction may be slightly easier with a higher fraction of the frame data available.

The subjective experiments show that, for whole frame losses, there is a substantial difference between HDTV and SDTV in the visibility of reference frame losses, but not of non-reference frame losses. Reference frame losses in HD are mostly visible, whereas reference frame losses in SD are not. Over 80% of partial frame losses are invisible in both resolutions. The ability of a PFL classifier to extract useful parameters from a TS packet and improve on the baseline visibility prediction is very limited. In contrast, for TS packets which lead to whole frame loss, extracting only three simple factors from the TS packet (the time duration of the loss, and two factors that depend on motion vectors) can hugely improve the prediction of packet importance (loss visibility).

[Fig. 3. FL classification accuracy (8-fold cross-validation) vs. number of factors, for SDTV and HDTV.]

Table 5. Performance from adding factors in FL classifiers

  SD Factors   Accuracy    HD Factors   Accuracy
  None (inv)   51.6%       None (vis)   63.8%
  magmv        73.9%       TMDR         82.9%
  TMDR         85.6%       stdmvmag     89.3%
  stdmvmag     88.2%       magmv        90.4%
  stdmvang     88.2%       stdmvang     90.9%
  angmv        88.2%       angmv        90.9%
  avemvy       88.2%       avemvy       90.9%
  avemvx       88.2%       avemvx       90.9%
  ResEng       88.2%       ResEng       90.9%

[Fig. 4. PFL classification accuracy (8-fold cross-validation) vs. number of factors, for SDTV and HDTV.]

Table 6. Performance from adding factors in PFL classifiers

  SD Factors   Accuracy    HD Factors   Accuracy
  None (inv)   82.7%       None (inv)   80.8%
  MBnum        84.8%       MBnum        82.5%
  TMDR         85.5%       TMDR         83.4%
  avex         86.2%       avex         83.4%
  avey         86.5%       avey         83.4%
  IMSE         86.5%       IMSE         83.4%

5. REFERENCES

[1] S. Kanumuri, P. Cosman, and A. Reibman. A generalized linear model for MPEG-2 packet-loss visibility. Intl. Packet Video Workshop, Dec 2004.
[2] S. Kanumuri, P. Cosman, A. Reibman, and V. Vaishampayan. Modeling packet-loss visibility in MPEG-2 video. IEEE Transactions on Multimedia, 8:341-355, Apr 2006.
[3] T.-L. Lin, Y. Zhi, S. Kanumuri, P. Cosman, and A. Reibman. Perceptual quality based packet dropping for generalized video GOP structures. ICASSP, 2009.
[4] ITU-R BT.710-4, Subjective Assessment Methods for Image Quality in High-Definition Television. Jan 1998.
[5] DSL Forum Technical Report TR-126: Triple-play Services Quality of Experience (QoE) Requirements. Dec 2006.
[6] The official website of ffmpeg: http://ffmpeg.org/.
[7] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, pages 273-297, 1995.
[8] C.-C. Chang and C.-J. Lin. The official website of LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/.