A HYBRID METRIC FOR DIGITAL VIDEO QUALITY ASSESSMENT

Mylène C.Q. Farias 1, Marcelo M. Carvalho 2, Hugo T.M. Kussaba 1, and Bruno H.A. Noronha 1
1 Department of Computer Science, 2 Department of Electrical Engineering
University of Brasília (UnB), Brasília, DF, 70910-900, Brazil
{mylene, carvalho}@ieee.org

ABSTRACT

In this paper, we present a hybrid no-reference video quality metric. The proposed metric blindly estimates the quality of videos degraded by compression and transmission artifacts. The metric is composed of two no-reference artifact metrics that estimate the strength of blockiness and blurriness artifacts. A combination model adds packet loss rate information to the quality estimate and eliminates the disturbance that packet losses cause in the artifact metric values.

Index Terms — video quality metrics, artifacts, quality assessment, no-reference quality metrics, packet loss, quality of service.

1. INTRODUCTION

Digital video communication has evolved into an important field in the past few years. There have been significant advances in compression and transmission techniques, which have made it possible to deliver high-quality video to the end user. In particular, the advent of new technologies has enabled many new telecommunication services (e.g., direct broadcast satellite, digital television, high-definition TV, Internet video). In these services, the acceptability and popularity of a given multimedia application are closely related to the reliability of the service and the quality of the content provided. In this context, the term quality of experience (QoE) describes the quality of the multimedia service provided to the end user. Although there has been some debate regarding the actual meaning of this term, it is generally agreed that QoE encompasses different aspects of the user experience, such as video and audio quality, user expectation, display type, and viewing conditions.
In this work, we are interested in estimating video quality according to user perception. (This work was supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil, in part by a grant from Fundação de Empreendimentos Científicos e Tecnológicos (Finatec), Brazil, and in part by a grant from DPP - University of Brasília (UnB).)

The most accurate way to determine the quality of a video is to measure it in psychophysical experiments with human subjects (subjective metrics) [1]. Unfortunately, these experiments are expensive, time-consuming, and hard to incorporate into a design process or an automatic quality-of-service control. Therefore, the ability to measure video quality accurately and efficiently, without human observers, is highly desirable in practical applications. With this in mind, fast algorithms that compute a physical measure of video quality (objective metrics) are needed to estimate the quality of a video as it is transmitted, received, or displayed.

As far as quality metrics are concerned, the networking community has been using simple metrics to quantify the quality of service (QoS) delivered to a given application, such as bit error rate (BER) or packet loss rate (PLR). Likewise, within the signal processing community, quality measurements have largely been limited to a few objective measures, such as peak signal-to-noise ratio (PSNR) and total squared error (TSE). Although these metrics are relevant for data links and generic signals in which every bit in the bitstream is considered equally important, they are not considered good estimates of the user's opinion of the received multimedia content [2, 3]. As a result, there is an ongoing effort to develop video quality metrics that can accurately detect impairments and estimate their annoyance as perceived by human viewers.
To date, most of the quality metrics proposed in the literature are full-reference (FR) metrics [3], i.e., metrics that use the original video to compute a quality estimate. FR metrics have limited applicability and cannot be used in most real-time video transmission applications, such as broadcasting and video streaming. In such cases, the undistorted signal (reference) is not available or not accessible at the receiver side, so requiring the reference video, or even a small portion of it, becomes a serious impediment. To measure video quality in such applications, it is essential to use a no-reference (NR) or reduced-reference (RR) video

quality metric, i.e., a metric that blindly estimates the quality of the video using no information (NR) or only limited information (RR) about the original [4]. Most NR metrics have limited performance because estimating quality or degradation without the original is a difficult task. In fact, the Final Report of the Video Quality Experts Group (VQEG) Multimedia Phase I found that the correlations for the submitted FR, RR, and NR metrics were around 80%, 78%, and 56%, respectively, corroborating the observation that, to date, NR metrics still perform poorly [5].

In this paper, we propose a hybrid quality metric that combines a QoS metric with an NR objective quality metric: the QoS metric takes into account packet loss rates, while the NR metric consists of two no-reference artifact metrics. The main advantage of this approach is that it estimates quality without requiring the reference while, at the same time, using additional network-related information to improve the NR metric's performance. To assess the effectiveness of our hybrid metric, we evaluate the quality of H.264/AVC digital video transmissions subjected to packet loss patterns typical of the Internet backbone.

2. ARTIFACT METRICS

In this work, we focus on three of the most common artifacts present in digital videos: blockiness, blurriness, and packet loss. Blockiness is an artifact characterized by a block pattern visible in the picture. It is due to the independent quantization of individual blocks (usually 8×8 pixels in size) in block-based DCT coding schemes, which leads to discontinuities at the boundaries of adjacent blocks. The blocking effect is often the most visible artifact in a compressed video, given its periodicity and the extent of the pattern. Modern codecs, such as H.264, use a deblocking filter to reduce the annoyance caused by this artifact.
Blurriness is characterized by a loss of spatial detail and a reduction of edge sharpness. In the compression stage, blurriness is introduced by the suppression of high-frequency coefficients due to coarse quantization.

In video transmission over IP networks, video packets typically traverse a number of links to reach their destination. Packet losses may occur due to buffer overflow at network routers (caused by network congestion) or signal transmission/reception errors at the physical layer. Typical impairments caused by these errors are packet loss, jitter, and delay; among these, packet loss is probably the most annoying. As the name suggests, packet-loss impairments are caused by the complete loss of a transmitted packet, with the result that parts (blocks) of the video are missing for several frames. Figures 1(a) and 1(b) depict two sample frames of videos affected by packet loss and by a combination of blockiness and blurriness, respectively. The severity of the impairments can vary tremendously depending on the bitrate and network conditions.

The strength of blockiness and blurriness can be estimated using specific artifact metrics, while the strength of packet loss artifacts can be roughly estimated by measuring the packet loss rate for the video at the receiver (a QoS parameter). In this section, we present the two no-reference artifact metrics used to estimate blockiness and blurriness.

Fig. 1. Sample video frames containing medium and severe intensity packet-loss impairments.

2.1. Blockiness Metric

Vlachos' algorithm estimates the blockiness signal strength by comparing the cross-correlation of pixels inside (intra) and outside (inter) the borders of the coding block structure of a frame [6]. The algorithm assumes that the encoding blocks have size b_s × b_s, with b_s = 8.
The frame Y(i,j) is partitioned into blocks and sampled to yield sub-images, given by:

s(m,n) = { Y(i,j) : m = i mod b_s, n = j mod b_s },   (1)

where (i,j) are frame pixel coordinates and x mod y denotes congruence (the remainder of the integer division x/y). The sub-image s(m,n) contains the subset of pixels that are congruent with respect to the block size. We can think of s(m,n) as a sub-image obtained by sub-sampling the frame Y by b_s pixels in both the horizontal and vertical directions. Clearly, if a shift is applied to the frame Y before downsampling, i.e., Y_s = Y(i+m, j+n), different sub-images are generated. This shift can be understood as a sampling phase. We denote a sub-image with sampling phase (m,n) by s_{m,n}.

To estimate blockiness, seven sub-images with different sampling phases are considered. Figure 2 displays a zoom of this sampling structure, where each symbol represents a pixel of a different sub-image. The pixels in sub-images s_{0,0}, s_{0,7}, s_{7,0}, and s_{7,7} make up the set of inter-block pixels, while the pixels in s_{0,0}, s_{0,1}, s_{1,0}, and s_{1,1} make up the set of intra-block pixels.

The correlation between a pair of images provides a measure of their similarity. To measure the correlation between two given images, x and y, we first calculate the correlation surface [7] using the following expression:

C_{x,y} = F^{-1} ( F(x) F*(y) / |F(x) F*(y)| ),   (2)

Fig. 2. Frame sampling structure for the correlation-based blockiness metric in both the horizontal and vertical directions.

where F and F^{-1} denote the forward and inverse two-dimensional discrete Fourier transforms, respectively, and * denotes the complex conjugate. For identical images, the correlation surface has a unique peak, the two-dimensional Dirac delta function. For non-identical images, which is usually the case, several peaks can be present simultaneously. The magnitude of the highest peak is used as the measure of correlation between x and y [7]:

p(x,y) = max_{i,j} { C_{x,y}(i,j) },   (3)

where (i,j) are the horizontal and vertical coordinates. One problem with this equation is that the periodic nature of the Fourier transform introduces sharp transitions at the borders [8]. So, before the maximum is taken, C_{x,y} is filtered with a Hamming window to force the elements toward a constant value around the borders.

To estimate the blockiness signal strength, we measure the correlation between the intra- and inter-block sub-images. In other words, we find the highest peaks of the phase correlation surfaces computed between pairs of sub-images. Considering the sub-images s_0 = s(0,0), s_1 = s(0,1), s_2 = s(1,0), s_3 = s(1,1), s_4 = s(0,1), s_5 = s(7,7), s_6 = s(0,7), s_7 = s(7,0), and s_8 = s(0,0), the blockiness measure is given by the ratio between a measure of intra-block similarity and a measure of inter-block similarity:

Block = P_intra / P_inter,   (4)

where P_intra = Σ_{i=1}^{3} p(s_0, s_i) and P_inter = Σ_{i=6}^{8} p(s_5, s_i). The more blockiness is introduced, the smaller the values of P_inter become and, consequently, the larger the value of Block. The blockiness measure for the whole video is obtained by taking the median of the per-frame measures.

2.2. Blurriness Metric

Most existing blur metrics are based on the idea that blur makes edges wider or less sharp [9, 10].
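Stepping back to Section 2.1 for a moment, the per-frame blockiness computation of Eqs. (1)-(4) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the function names are ours, and the fftshift/Hamming-window handling is one reasonable reading of the border-filtering step described above.

```python
import numpy as np

def correlation_peak(x, y):
    # Phase-correlation surface (Eq. 2): normalized cross-power spectrum,
    # back-transformed. fftshift moves the zero-shift peak to the center,
    # and a Hamming window damps the borders before taking the max (Eq. 3).
    cross = np.fft.fft2(x) * np.conj(np.fft.fft2(y))
    surface = np.fft.fftshift(np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12))))
    window = np.outer(np.hamming(surface.shape[0]), np.hamming(surface.shape[1]))
    return float(np.max(surface * window))

def blockiness(frame, bs=8):
    # Per-frame blockiness (Eq. 4): ratio of intra-block to inter-block
    # sub-image similarity.
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    frame = frame[:h - h % bs, :w - w % bs]       # crop to whole blocks
    sub = lambda m, n: frame[m::bs, n::bs]        # sub-image, phase (m, n) (Eq. 1)
    p_intra = sum(correlation_peak(sub(0, 0), sub(m, n))
                  for m, n in [(0, 1), (1, 0), (1, 1)])
    p_inter = sum(correlation_peak(sub(7, 7), sub(m, n))
                  for m, n in [(0, 7), (7, 0), (0, 0)])
    return p_intra / p_inter
```

For a whole sequence, the per-frame values would then be pooled with the median, as described above.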
In this work, we implemented a no-reference blurriness metric based on this same simple idea: the algorithm measures blurriness by measuring the width of the edges in the frame. The first step is to find strong edges using the Canny edge detector. The output of the Canny algorithm gives the magnitude of the edge pixels, M(i,j), and their orientation, O(i,j); we select only the strong edges of the frame (M(i,j) > 25). The width of an edge is defined as the distance between the two local extremes, P_1 and P_2, on each side of the edge, as shown in Figure 3.

Fig. 3. The width of the edge is used as a measure of the blurriness signal strength.

P_1 is the first local extreme and P_2 is the second one. If the edge is horizontal, P_1 is located above the edge pixel and P_2 below it. If the edge is vertical, P_1 is located to the left of the edge pixel and P_2 to the right of it. The width of the edge at position (i,j), width(i,j), is given by the distance between the two extremes P_1(i,j) and P_2(i,j). The blurriness signal strength measure for a frame is obtained by averaging the widths over all strong edges of that frame. So, given that an N × M frame Y has L strong edge pixels, the blurriness signal strength measure for this frame is given by:

Blur = (1/L) Σ_{i=0}^{N} Σ_{j=0}^{M} width(i,j),   (5)

where width(i,j) is taken as zero at non-edge pixels, so that the sum effectively runs over the L strong edge pixels. The blurriness signal strength measure for the whole video is obtained by taking the median of the per-frame measures.

3. THE HYBRID QUALITY METRIC

To obtain a hybrid quality metric, we investigate the performance of each individual quality metric across a set of typical video sequences subjected to different bitrates and packet loss levels. Once their individual performance is assessed, we propose a final combination model for the hybrid quality metric. Figure 4 summarizes the overall idea of the proposed hybrid metric.
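The edge-width measurement of Section 2.2 (Eq. 5) can also be sketched compactly. In this sketch, which is ours and makes simplifying assumptions, a plain gradient-magnitude threshold stands in for the Canny detector, and only predominantly vertical edges are traced left and right to their local extremes P1 and P2:

```python
import numpy as np

def blur_metric(frame, mag_thresh=25.0):
    # Per-frame blurriness (Eq. 5): average width of strong edges.
    # Assumption: gradient-magnitude threshold instead of Canny, and only
    # predominantly vertical edges are traced along their row profile.
    frame = np.asarray(frame, dtype=float)
    gy, gx = np.gradient(frame)
    mag = np.hypot(gx, gy)
    h, w = frame.shape
    widths = []
    for i in range(h):
        row = frame[i]
        for j in range(1, w - 1):
            if mag[i, j] <= mag_thresh or abs(gx[i, j]) < abs(gy[i, j]):
                continue  # not a strong, predominantly vertical edge pixel
            sign = np.sign(row[j + 1] - row[j - 1])
            if sign == 0:
                continue
            p1 = j
            while p1 > 0 and (row[p1 - 1] - row[p1]) * sign < 0:
                p1 -= 1  # walk left while the profile keeps rising/falling
            p2 = j
            while p2 < w - 1 and (row[p2 + 1] - row[p2]) * sign > 0:
                p2 += 1  # walk right likewise
            widths.append(p2 - p1)  # edge width = distance between P1 and P2
    return float(np.mean(widths)) if widths else 0.0
```

On a sharp step edge this returns a width of about one pixel, while low-pass filtering the same edge stretches the measured width accordingly.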

Fig. 4. Block diagram of the proposed hybrid quality metric.

For our study, we used publicly available videos in CIF format (352 × 288 pixels), YUV 4:2:0 color format, with 300 frames each. The videos were foreman, mother, mobile, news, and paris, all compressed with target bitrates of 50, 100, 150, 200, 250, 300, 350, and 400 kbps. To simulate packet losses in a given bitstream, we used the transmitter simulator [11], a software tool that simulates the transmission of H.264/AVC bitstreams over error-prone channels. For the simulation of packet losses, the transmitter simulator uses error pattern files based on actual experiments carried out on the Internet backbone. The error pattern files correspond to packet loss rates (PLR) of 0.1%, 0.4%, 1%, 3%, 5%, and 10%. For the analysis, we considered H.264/AVC bitstreams packetized according to the Real-time Transport Protocol (RTP). In the simulations, all packets were treated equally with regard to their susceptibility to loss (i.e., we did not single out specific types of packets, such as those carrying intra-coded slices).

To illustrate the quality range of the videos used in this work, Figures 5 and 6 show the peak signal-to-noise ratio (PSNR) values for different bitrates and PLR values for the videos foreman and paris. Observe that, for PLR values less than or equal to 1%, the PSNR values increase with the target bitrate. But, for PLR values greater than 1%, the PSNR values do not necessarily increase with the target bitrate.

Figures 7 and 8 depict the blockiness metric output values for the videos foreman and paris, respectively, under different target bitrates, PSNR, and PLR values. Once again, notice that for PLR values less than or equal to 1%, the blockiness strength values decrease with the target bitrate (and PLR). But, for PLR values greater than 1%, the blockiness strength values are disturbed by the packet losses and no longer behave reliably.
Figures 9 and 10 depict the blurriness metric output values for the videos foreman and paris, respectively, under different target bitrates, PSNR, and PLR values. We can notice from these graphs that the blurriness metric is more robust to the influence of PLR than the blockiness metric.

Fig. 5. PSNR values for the foreman video under different bit rates and packet loss rates.

Figure 11 depicts the relationship between the packet loss rate (PLR) and PSNR values for the foreman video sequence. In this graph, each point corresponds to a different bitrate. As we can clearly observe, a zero PLR (or a very low PLR value such as 0.1%, 0.4%, or 1.0%) does not necessarily mean a high-quality video, since quality also depends on the coding scheme, as expressed by the wide range of PSNR values observed for these cases. On the other hand, as the PLR increases (to 3%, 5%, and 10%), not only do the PSNR values decrease across all bitstreams, but their variability also decreases, indicating that PLR becomes a more consistent quality measure within this range, regardless of the coding scheme (similar behavior is observed for the other video sequences). Therefore, it is exactly where the blockiness and blurriness metrics perform worst that the PLR becomes a more consistent measure of overall video quality.

Based on these observations, we propose the hybrid quality metric Q, which is given by

Q = (1 - β) f_1(Blur, Block) + β f_2(PLR),   (6)

where β is a weighting factor, and f_1(Blur, Block) and f_2(PLR) are quality estimators based on the blockiness/blurriness metrics and on the PLR, respectively, given by

f_1(Blur, Block) = -14.7 Block - 1.1 Blur + 42.2   (7)
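For concreteness, the combination model of Eqs. (6)-(8) can be sketched as follows. This is our own minimal sketch, not the authors' code: the function and constant names are ours, the negative signs of the fitted coefficients follow from the requirement that the PSNR-like estimate decrease as artifact strength and PLR grow, and clamping β to [0, 1] (for PLR above α) plus the small floor inside the logarithm are our assumptions.

```python
import math

ALPHA = 11.0  # scaling factor from the paper: PLR (%) above which loss dominates

def f1(block, blur):
    # Picture-metric quality estimator (Eq. 7), fitted against PSNR.
    return -14.7 * block - 1.1 * blur + 42.2

def f2(plr):
    # PLR-based quality estimator (Eq. 8); the floor keeps ln() defined
    # at PLR = 0 (our assumption, since the fit used PLR >= 0.1%).
    return -2.1 * math.log(max(plr, 1e-3)) + 29.1

def hybrid_q(block, blur, plr):
    # Eq. (6): the weight shifts from the picture metrics to the PLR
    # term as losses grow; beta = PLR / alpha, clamped to [0, 1].
    beta = min(max(plr / ALPHA, 0.0), 1.0)
    return (1.0 - beta) * f1(block, blur) + beta * f2(plr)
```

With these forms, a low-PLR video is scored almost entirely by the artifact metrics, while at 10% PLR the score is dominated by the loss rate, mirroring the behavior discussed above.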

Fig. 6. PSNR values for the paris video under different bit rates and packet loss rates.

Fig. 7. Blockiness values for the foreman video under different bit rates, PSNR values, and PLR values.

Fig. 8. Blockiness values for the paris video under different bit rates and PSNR values.

Fig. 9. Blurriness values for the foreman video under different bit rates and PSNR values.

and

f_2(PLR) = -2.1 ln(PLR) + 29.1,   (8)

where the functional forms of f_1 and f_2 were found by fitting the artifact metrics and PLR values to the PSNR values. The hybrid quality metric Q takes into account the fact that, at low PLR values, quality is well predicted by the picture metrics. But, as the PLR increases, the results of the no-reference artifact metrics start to degrade, because the packet losses introduce new content into the video in a highly nonlinear way. For this reason, we introduce the weighting factor β = PLR/α, where α acts as a scaling factor expressing the value above which the PLR becomes unbearable for streaming video. In our tests, we found α = 11.

Figure 12 depicts the results of applying the hybrid quality metric Q to the videos foreman and paris, each compressed with target bitrates of 50, 100, 150, 200, 250, 300, 350, and 400 kbps, and with PLR values of 0.1%, 0.4%, 1%, 3%, 5%, and 10%. The combination model achieves a correlation of r = 78.94%, which is a good performance for a no-reference quality metric.

4. CONCLUSIONS

In this paper, we presented a hybrid no-reference video quality metric targeted at the transmission of videos over the Internet. The proposed metric blindly estimates the quality of videos degraded by compression and digital transmission artifacts.
The metric is composed of two no-reference artifact metrics that estimate the strength of blockiness and blurriness artifacts. A combination model adds the packet loss rate information to the metric, eliminating the disturbance in the artifact metric values caused by higher packet loss rates. Further studies are needed to better understand and characterize the interactions among the different types of artifacts and their relation to video quality.

Fig. 10. Blurriness values for the paris video under different bit rates and PSNR values.

Fig. 12. Output values of the proposed hybrid metric for the videos foreman and paris (linear fit f(x) = 0.4210 x + 15.9137).

Fig. 11. Packet loss rate values for the foreman video under different bit rates and PSNR values.

5. REFERENCES

[1] ITU-R Recommendation BT.500-8, "Methodology for the subjective assessment of the quality of television pictures," 1998.

[2] B. Girod, "What's wrong with mean-squared error?," in Digital Images and Human Vision, Andrew B. Watson, Ed., pp. 207-220. MIT Press, Cambridge, MA, 1993.

[3] S. Winkler, "A perceptual distortion metric for digital color video," in Proc. SPIE Conference on Human Vision and Electronic Imaging, San Jose, CA, USA, 1999, vol. 3644, pp. 175-184.

[4] M.C.Q. Farias and S.K. Mitra, "No-reference video quality metric based on artifact measurements," in Proc. IEEE Intl. Conf. on Image Processing, 2005, pp. III:141-144.

[5] Video Quality Experts Group, "Final report from the Video Quality Experts Group on the validation of objective models of multimedia quality assessment," Tech. Rep., http://ftp.crc.ca/test/pub/crc/vqeg/, 2008.

[6] T. Vlachos, "Detection of blocking artifacts in compressed video," Electronics Letters, vol. 36, no. 13, pp. 1106-1108, 2000.

[7] J.J. Pearson, D.C. Rines, S. Coldsman, and C.D. Kuglin, "Video rate image correlation processor," in Proc. SPIE Conference on Application of Digital Image Processing, San Diego, CA, 1977, vol. 119, pp. 197-205.

[8] G.A. Thomas, "Television motion measurement for DATV and other applications," Tech. Rep. 1987/11, BBC Research Department, 1987.

[9] P. Marziliano, F. Dufaux, S. Winkler, and T. Ebrahimi, "Perceptual blur and ringing metrics: Application to JPEG2000," Signal Processing: Image Communication, vol. 19, no. 2, pp. 163-172, 2004.

[10] E.-P. Ong, W. Lin, Z. Lu, S. Yao, X. Yang, and L. Jiang, "No-reference JPEG2000," in Proc. IEEE International Conference on Multimedia and Expo, Baltimore, USA, 2003, vol. 1, pp. 545-548.

[11] F. De Simone, M. Tagliasacchi, M. Naccari, S. Tubaro, and T. Ebrahimi, "A H.264/AVC video database for the evaluation of quality metrics," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2010, pp. 2430-2433.