AUTOMATIC QUALITY ASSESSMENT OF VIDEO FLUIDITY IMPAIRMENTS USING A NO-REFERENCE METRIC

Ricardo R. Pastrana-Vidal and Jean-Charles Gicquel
France Telecom R&D, TECH/QVP/MAI, 4 rue du Clos Courtel, 35512 Cesson-Sévigné, France
(Further author information: ricardo.pastrana@francetelecom.com)

ABSTRACT

Apparent motion discontinuities, caused by image dropping, are a common temporal degradation in real-time applications over the Internet. The end-user perceives a fluidity break in the visual information, which has an impact on his quality assessment of the delivered sequence. We present a new metric to detect and evaluate the impact of image dropping on user quality perception. The measure is based on picture dropping detection, perceptual thresholding, a novel psychovisual quality function and a temporal summation function (temporal pooling) modeling the assessment mechanism of human assessors. This new assessment model integrates the abrupt temporal variation occurring at the end of a fluidity impairment as a second factor for quality estimation. The metric predictions show a high correlation with the observers' ratings under several fluidity break conditions: isolated, regular and irregular, and sporadic, with different discontinuity durations, distributions and densities.

1. INTRODUCTION

Over the past few years there has been increasing interest in real-time video services over packet networks, due to the constant improvement of compression techniques and communication protocols and the availability of higher bit rates. For real-time video applications, streaming is the technology of choice because the data need to be transmitted as soon as they are generated in order to deliver continuous media playout. These applications can tolerate only a short delay in signal restitution. However, video streaming is highly resource intensive in terms of processing power, networking and storage. Furthermore, packets of media data are transmitted over unreliable networks, and packet loss can produce significant spatiotemporal impairments in the received video. The Internet is not the only source of video degradation: source coding can also introduce spatial and temporal artifacts.

Severe motion discontinuities are among the most common degradations in video streaming. The end-user perceives jerky motion when the discontinuities are uniformly distributed over time, and an instantaneous fluidity break when the motion loss is isolated or irregularly distributed. Bit rate adaptation techniques, cell loss in the packet network or the restitution strategy can be at the origin of this perceived jerkiness. At the source coding stage, temporal down-sampling is one of the most widely used techniques for bit rate adaptation; the sequence undergoes an image dropping operation that affects the motion information. Packet loss or jitter can cause sporadic or non-uniform image discarding at the decoding stage because of the buffering time limit [1]. The last picture that was received is then displayed until a new image has been reconstructed, so the end-user perceives a frozen image followed by an abrupt displacement of the objects. When considering quality, it is essential to quantify user perception of the received sequence. Subjective assessment methods are used to quantify the impact of signal distortion on human quality perception.
Nowadays, psychovisual experiments are the only recognized way to characterize perceived quality, but they are complex to design, time-consuming to run, and must be carried out in a controlled (laboratory) environment. The need for automatic measurements at the end-user level is therefore evident. Signal impairment metrics based on human perception and assessment models may be a well-adapted approach to estimate the performance of visual communication systems. The design of no-reference metrics is motivated by the fact that quality can then be measured even when the original sequence is unavailable. Recently, a no-reference perceptual metric of video fluidity impairments caused by image dropping was proposed by the authors [2]. The metric uses a judgment model obtained from several subjective quality assessment tests [3] [4]. This model estimates subjective quality using two parameters: fluidity impairment duration and density. In this first approach the influence of the abrupt displacement of the objects immediately after a frozen image was not taken into account (inter- and intra-content variability). Indeed, for this model, a fluidity break appearing in the middle of a low-motion action has the same effect as one appearing in a high-motion event. In this paper we present a new metric to detect and evaluate the impact of image dropping on user quality perception. This metric takes into account the abrupt temporal variation occurring at the end of a fluidity impairment caused by frame dropping.

Fig. 1. Fluidity break: a description from an inter-image mean motion point of view.

2. ASSESSMENT MODEL

In the introduction we highlighted that frame dropping, caused at several stages of the communication service, is perceived as a frozen image followed by an abrupt displacement of the objects (Figure 1). This apparent motion discontinuity can be seen as a degradation along the perceptual fluidity axis. In the following text we use the terms temporal discontinuity and image freeze as synonyms of fluidity break.

Previous work on the perceptual characterization of fluidity breaks [4] [3] has shown that quality impairment mainly depends on the duration and temporal density of the discontinuities (bursts of dropped frames), in a non-linear manner. These findings led us to propose a model to calculate the effect on quality of several frame dropping conditions: regular and non-regular discarding processes; sporadically dropped pictures of different burst durations, distribution profiles and densities. This model [3] estimates subjective quality using two parameters: fluidity impairment duration and density. Basically, the model estimates quality by integrating the degradation effect of every fluidity break. The impact of each fluidity break is calculated by a function called the quality response for an isolated impairment. This function accounts for the effect of each fluidity break independently of the others, and is based only on the duration of the temporal discontinuity. In this first approach the influence of the abrupt displacement of the objects immediately after a frozen image was not taken into account.

Our new model combines a modified version of the quality function for a single fluidity break, the density of discontinuities over an analysis window of 10 s, and a power function depending on the impairment density. The quality is given by the difference between the MOS of an unimpaired sequence (mos_ref) and the total quality degradation (d_total):

Q = mos_{ref} - d_{total},   (1)

d_{total} = \min\{ d_{pooling}, d_{max} \},   (2)

d_{pooling} = \Big[ \sum_{t_g = T_{min}}^{T_{max}} d_{t_g} \Big]^{1/2},   (3)

d_{t_g} = \sum_{i} \big[ \hat{e}(t_{g,i}, \delta_{t_{g,i}}) \big]^{p(n(t_g))},   (4)

\hat{e}(t_{g,i}, \delta_{t_{g,i}}) = mos_{ref} - q(t_{g,i}, \delta_{t_{g,i}}),   (5)

q(t_{g,i}, \delta_{t_{g,i}}) = m_{max} - \frac{m_{max} - m_{min}}{1 + (b/t_g)^s}\, \delta_{t_{g,i}},   (6)

p(n(t_g)) = p_{max} - \frac{p_{max} - p_{min}}{1 + (c/n(t_g))^r}.   (7)

The overall degradation d_total is limited to d_max to account for the boundary scale effect [5]: observers tend to avoid the extreme values of the assessment scale. d_pooling is the overall degradation over the 10 s sequence, t_g is the freeze duration, T_min is the minimal duration of a perceptible discontinuity (T_min > threshold) and T_max corresponds to the analysis time window of 10 s. d_{t_g} is the contribution calculated from all fluidity breaks having a duration of t_g, and the term n(t_g) corresponds to the distribution of discontinuity durations. The expression q(t_g, δ_{t_g}) is the quality function for an isolated fluidity break of duration t_g.

This new function integrates the abrupt temporal variation δ_{t_g}, occurring at the end of the i-th fluidity impairment, as a second factor for quality estimation. The constants m_max and m_min are the extreme quality values found in the experimental results. The abrupt temporal variation δ is estimated using a normalized RMSE over the luminance component of two consecutive images:

\delta = f\Big[ \frac{RMSE(I_o, I_{o+1})}{val_{peak}} \Big],   (8)

where I_o = I(x, y, t_o) is the image at the end of the freeze, I_{o+1} = I(x, y, t_o + 1) is the first image appearing after the freeze, and val_peak corresponds to the pixel peak value. f represents a transformation function used to modify the variation range. p(n(t_g)) is the power (exponent) function that depends on the distribution of burst durations. This function was obtained by fitting optimized exponents for several burst densities of constant duration [6]. The power function accounts for the fact that subjects are more sensitive to variations in dropping than to uniform discarding. By means of this variable exponent, the contribution of each fluidity break is less significant when the number of discontinuities of the same duration is high. The exponent value can vary from 1 to 2.

The proposed model expresses our main hypothesis: the overall degradation caused by a given frame dropping distribution may be calculated as the integration (summation) of the individual effects. The term q(t_g, δ_{t_g}) takes into account the non-linear negative reaction to a single discontinuity (burst of dropped images) found in our previous subjective experiments.
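To make the pooling concrete, the following sketch implements equations (1)-(7) for a list of detected freezes. It is a minimal illustration under stated assumptions, not the authors' code: the fitted constants (m_max, m_min, b, s, p_max, p_min, c, r, d_max) are not given numerically in this section, so the values below are placeholders.

```python
# Illustrative sketch of equations (1)-(7); all numerical constants are
# placeholders, not the values fitted by the authors.
from collections import defaultdict

MOS_REF = 100.0            # MOS of the unimpaired sequence
M_MAX, M_MIN = 100.0, 0.0  # extreme quality values (assumed)
B, S = 0.4, 2.0            # quality-function constants (assumed), B in seconds
P_MAX, P_MIN = 2.0, 1.0    # exponent bounds (the paper states the exponent varies from 1 to 2)
C, R = 3.0, 2.0            # power-function constants (assumed)
D_MAX = 100.0              # boundary-scale limit on the total degradation

def q_single(t_g, delta):
    """Quality response for one isolated freeze of duration t_g (s), eq. (6)."""
    return M_MAX - (M_MAX - M_MIN) / (1.0 + (B / t_g) ** S) * delta

def p_exponent(n_tg):
    """Variable pooling exponent depending on the number of freezes n(t_g), eq. (7)."""
    return P_MAX - (P_MAX - P_MIN) / (1.0 + (C / n_tg) ** R)

def predicted_quality(freezes):
    """freezes: list of (t_g, delta) pairs for the perceptible freezes in a 10 s window."""
    by_duration = defaultdict(list)          # n(t_g): group freezes by duration
    for t_g, delta in freezes:
        by_duration[t_g].append(delta)
    d_pooling_sq = 0.0
    for t_g, deltas in by_duration.items():
        p = p_exponent(len(deltas))
        # eq. (4)-(5): sum the per-freeze degradations raised to the variable exponent
        d_tg = sum((MOS_REF - q_single(t_g, d)) ** p for d in deltas)
        d_pooling_sq += d_tg
    d_pooling = d_pooling_sq ** 0.5          # eq. (3)
    d_total = min(d_pooling, D_MAX)          # eq. (2)
    return MOS_REF - d_total                 # eq. (1)

# Example: three 200 ms freezes with moderate end-of-freeze motion (delta = 0.3)
print(predicted_quality([(0.2, 0.3)] * 3))
```

In practice the constants would be obtained by fitting subjective data, as was done for the exponent function in [6].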

3. METRIC

The main goal of our work was to propose a metric that predicts the quality assessment of a group of users when a video is impaired by frame dropping, taking into account the dynamic context of the fluidity impairments. The proposed no-reference metric is based on fluidity break detection, perceptual thresholding, the novel psychovisual quality function (Section 2) and a temporal summation function (temporal pooling) modeling the assessment mechanism of the human assessors. The metric produces a quality score between 0 and 100. Scores are expressed on a numerical scale related to five quality categories (bad, poor, fair, good and excellent) that are linearly distributed. Figure 2 shows the flow chart of the metric process.

The fluidity break detection evaluates whether an image freeze is present in the sequence. An image freeze is detected when the temporal derivative of the image luminance is null, i.e. detec = 1 when I(x, y, t) - I(x, y, t + 1) = 0 for all pixels. A freeze is considered perceptible when its duration is greater than the detection threshold (τ_threshold). When a freeze is perceptible, the variation of temporal information occurring at the end of the freeze (eq. 8) is also computed. After the perceptual detection of all freezes, an a posteriori distribution of impairment durations (histogram) is calculated. Finally, the assessment model (Section 2) produces a quality score for the sequence impaired by image dropping.

Fig. 2. Flow chart of the no-reference metric process.
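The detection stage described above can be sketched as follows. This is an illustration under stated assumptions (8-bit luminance frames stored as numpy arrays, an assumed threshold value, and the identity used for the transformation f), not the authors' implementation.

```python
# Sketch of freeze detection, perceptual thresholding and eq. (8); illustrative only.
import numpy as np

TAU_THRESHOLD = 0.08   # minimal perceptible freeze duration in seconds (assumed)
VAL_PEAK = 255.0       # pixel peak value for 8-bit luminance

def detect_freezes(frames, fps=25.0):
    """Return (duration_s, delta) pairs for every perceptible freeze."""
    freezes, start = [], None
    for t in range(len(frames) - 1):
        frozen = np.array_equal(frames[t], frames[t + 1])   # null temporal derivative
        if frozen and start is None:
            start = t                                        # freeze begins
        elif not frozen and start is not None:
            duration = (t - start) / fps                     # repeated frames, in seconds
            if duration > TAU_THRESHOLD:                     # perceptual thresholding
                # eq. (8): normalized RMSE between the frozen frame and the next one,
                # with f taken here as the identity transformation
                diff = frames[t].astype(np.float64) - frames[t + 1].astype(np.float64)
                delta = np.sqrt(np.mean(diff ** 2)) / VAL_PEAK
                freezes.append((duration, delta))
            start = None
    return freezes
```

The resulting (duration, δ) pairs, together with their duration histogram, feed the pooling model of Section 2.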
4. EVALUATION

Metric performance was evaluated by comparing the metric predictions against quality assessment scores from a group of observers. Results from eleven tests make up the database. Before analysing metric performance, the subjective evaluation method is reviewed and the experimental conditions are described.

4.1. Observers and method

The quality ratings from subjects (about 20 observers per test) were gathered using the SAMVIQ method (Subjective Assessment Methodology for VIdeo Quality), originally developed by the QVP laboratory, France Telecom R&D. This method has been adopted by the EBU (European Broadcasting Union) [7] and its standardization is in progress at ITU-R Study Group 6Q (Quality assessment). The conception of SAMVIQ was motivated by the advent of multimedia applications that required new methods for the subjective quality evaluation of their coded signals. Most of the existing standardized evaluation methods were mainly designed for conventional television pictures [5].

Nevertheless, multimedia is markedly different from the TV domain in terms of kind of access, signal transmission and restitution systems, image formats and viewing distances. Information access is performed by fixed or mobile receivers, the video format is progressive, frame rates can be fixed or variable and the image size ranges widely, from SQCIF to HDTV. Advanced codecs are able to compress video from very low bit rates (a few kbps) up to high bit rates (several Mbps). Moreover, in multimedia the viewing distance may vary significantly depending on the application. In TV the viewing distance used for subjective evaluations is well defined and can be 4H or 6H (H is the picture height). In addition, TV monitors are not well adapted to displaying multimedia images because they use interlaced scanning and exhibit flicker. The SAMVIQ method was therefore proposed in order to cope with these differences in evaluation conditions.

SAMVIQ is a multi-stimulus continuous quality scale method using explicit and hidden references. The observer is invited to assess several versions of a sequence, and can stop, review and modify the score of each version as desired. The method includes an explicit reference sequence (unprocessed) as well as several versions of the same sequence, comprising both processed and unprocessed (hidden reference) sequences. Each sequence is displayed singly and rated using a continuous quality scale similar to the one used in the DSCQS method. A numerical scale (0-100) is related to five quality categories (bad, poor, fair, good and excellent) that are linearly distributed. Sequence access is randomized, which prevents the observers from voting in an identical way according to an established order. When all the versions of the current sequence have been rated by the viewer, a new sequence is presented. A test organization example is given in Figure 3. This multi-stimulus method with random access allows the efficient discrimination of different quality levels in low or intermediate quality ranges. SAMVIQ shows high intra- and inter-laboratory repeatability and rating stability [8].

Fig. 3. Test organization example for the SAMVIQ method.

4.2. Apparatus

The test was carried out in a luminance-controlled environment. The background luminance conditions were based on the ITU-R BT.500-9 recommendation [5]. All videos were stored on and displayed by a PC workstation. The display was a 17-inch NEC Multisync FE750+ CRT. The maximum luminance of the screen was set to 70 cd/m² using the PLUGE test signal, following the ITU-R BT.814 recommendation [9]. This setting, proposed for TV evaluation tests, was used because no recommendation for PC screens had yet been proposed. The distance between the subject and the computer screen was approximately 60 cm, a distance frequently observed in home and professional computer viewing situations. The SEOVQ (Subjective Evaluation and Optimization of Video Quality) software tool [10] for the quality evaluation of multimedia images was used to carry out the quality tests. The tool can play audiovisual sequences in proprietary or non-proprietary formats and gather the participants' answers by means of an evaluation interface. For each video content the observer is asked to assess the explicit reference and N test conditions using a software slider placed on the right side of the evaluation interface. The buttons under the display zone are used to select one of the test conditions; the selection buttons are randomly associated with the test sequences in order to avoid an ordering effect.

4.3. Selected contents

A wide set of video contents representative of audiovisual services was used: sport (basketball and tennis), entertainment (an action film, a jazz concert and two motorcycle races), news, advertisement clips, videoconference (four sequences) and two MPEG test sequences (Barcelona and Mobile&Calendar). The sequences were 10 or 15 s long to avoid the forgiveness effect [11]. The image format was CIF (352×288), RGB 24 bits, at 25 images per second.

4.4. Stimuli

The impaired sequences present different profiles of fluidity breaks placed in different motion contexts. The fluidity breaks, caused by image dropping, were simulated using the algorithm described in [4]; a simplified sketch of such a simulation is given below. The durations of the impairments were greater than or equal to the detection thresholds, and they were selected with sufficient perceptual distance between them. Spatial degradation was not introduced, in order to avoid a trade-off between acuity and fluidity in the quality assessment.
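For illustration only, a freeze burst can be simulated by repeating the last correctly displayed frame. The sketch below is a simplified stand-in, not the image dropping algorithm of [4]; the burst position and length are arbitrary example values.

```python
# Minimal illustration of simulating a fluidity break by frame repetition.
def simulate_drops(frames, bursts):
    """bursts: list of (start_index, n_dropped) pairs with start_index >= 1.
    Dropped frames are replaced by copies of the last correctly displayed frame,
    producing a frozen image followed by an abrupt displacement."""
    out = list(frames)
    for start, n_dropped in bursts:
        for k in range(start, min(start + n_dropped, len(out))):
            out[k] = out[start - 1]
    return out

# Example: one 5-frame burst (200 ms at 25 fps) starting at frame 50
# impaired = simulate_drops(frames, bursts=[(50, 5)])
```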

Degradation profiles comprised: isolated bursts of dropped pictures varying in duration from 80 to 5040 ms; sporadic bursts of dropped pictures (1, 3, 5, 8, 15) with different burst durations and temporal distributions; regular image down-sampling (12.5, 8.33, 6.25, 5 and 3.57 images per second); and image rate reductions varying in duration. A differential discontinuity produced by periodic temporal down-sampling and an isolated fluidity break was also evaluated.

5. RESULTS

In order to evaluate the accuracy and monotonicity of the model predictions [12], Pearson's correlation coefficient (r), the determination coefficient (r²), Spearman's rank-order correlation (r_s) and the standard prediction error (e) were used. Figure 4 shows the metric predictions as a function of the MOS scores. The metric predictions show a high correlation with the observers' ratings: the Pearson correlation (r = 0.92), Spearman rank-order correlation (r_s = 0.92) and standard error (e = 7.17) confirm the metric performance. Correlations were calculated over 372 samples (mean opinion scores and metric estimations). The evaluation database was obtained from several tests conducted over the last four years.

Fig. 4. Scatter plot of metric predictions vs. MOS for the complete data set.

Looking at Figure 4, we note that the dispersion is greater in the low quality range. Most of the dispersed points correspond to a test evaluating the impact of image dropping on videoconference contents alone. It seems that the participants in that test showed a stronger negative reaction to fluidity breaks of long duration; nevertheless, this result must be confirmed by further work. When the videoconference conditions are excluded, the correlations increase (r = 0.95 and r_s = 0.95) and the standard error is reduced (e = 5.88). The dispersion confirms (Figure 5) that the metric is coherent with the subjective evaluation along the whole quality scale.

Fig. 5. Scatter plot excluding videoconference conditions.

A metric for fluidity break impairments is not enough to evaluate the overall video quality: it must be combined with other perceptual measures of spatial degradation in order to estimate the global sequence quality.
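Performance indicators of this kind can be computed with standard tools. The snippet below is illustrative only: the arrays are placeholders rather than the 372-sample data set used above, and the standard prediction error is taken here to be the RMSE between predictions and MOS, one common definition.

```python
# Illustrative computation of accuracy and monotonicity indicators.
import numpy as np
from scipy import stats

mos = np.array([15.0, 32.0, 48.0, 61.0, 74.0, 88.0])    # subjective scores (example values)
pred = np.array([18.0, 30.0, 51.0, 58.0, 77.0, 85.0])   # metric predictions (example values)

r, _ = stats.pearsonr(pred, mos)          # accuracy: Pearson correlation
r2 = r ** 2                               # determination coefficient
rs, _ = stats.spearmanr(pred, mos)        # monotonicity: Spearman rank correlation
e = np.sqrt(np.mean((pred - mos) ** 2))   # standard prediction error, taken as RMSE

print(f"r={r:.2f}  r2={r2:.2f}  rs={rs:.2f}  e={e:.2f}")
```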

6. CONCLUSION

A new metric to evaluate the impact on user quality perception of fluidity breaks caused by image dropping was presented. The proposed no-reference metric is able to calculate the effect on quality under several image dropping conditions: regular and non-regular discarding processes; sporadically dropped pictures with different burst durations, distribution profiles and densities. Furthermore, the metric takes into account the influence of the spatiotemporal luminance variations due to motion. Potential applications of the proposed metric include video quality metric design, perceptual image discard algorithms and temporal granularity optimization in source coding.

7. ACKNOWLEDGEMENTS

We would like to thank Nicolas Ramin, PhD student, for his technical contribution in providing scoring results from the videoconference evaluations. Our thanks go to Bernard Letertre at the QVP Lab, France Telecom R&D, who was involved in observer selection and the organization of the subjective laboratory. We also thank the PhD students who participated in the set-up of the preliminary tests. Special thanks go to Emily Watts for her corrections.

8. REFERENCES

[1] M. Kalman, E. Steinbach, and B. Girod, "R-D optimized media streaming enhanced with adaptive media playout," in Proc. IEEE International Conference on Multimedia and Expo (ICME), 2002, vol. 1, pp. 869-872.

[2] R. R. Pastrana-Vidal, J.-C. Gicquel, C. Colomes, and H. Cherifi, "Métrique perceptuelle de rupture de fluidité vidéo sans référence" (No-reference perceptual metric of video fluidity breaks), in Proc. CORESA, Lille, 2004.

[3] R. R. Pastrana-Vidal, J.-C. Gicquel, C. Colomes, and H. Cherifi, "Frame dropping effects on user quality perception," in Proc. Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Lisbon, 2004.

[4] R. R. Pastrana-Vidal, J.-C. Gicquel, C. Colomes, and H. Cherifi, "Sporadic frame dropping impact on quality perception," in Human Vision and Electronic Imaging IX, Proc. SPIE, vol. 5292, San José, 2004, pp. 182-193.

[5] ITU-R Recommendation BT.500-9, "Methodology for the subjective assessment of the quality of television pictures," ITU, November 1998.

[6] R. R. Pastrana-Vidal, "Vers une métrique perceptuelle de qualité audiovisuelle dans un contexte à service non garanti" (Towards a perceptual audiovisual quality metric in a non-guaranteed-service context), Ph.D. thesis, Université de Bourgogne, 2005.

[7] J.-L. Blin, "SAMVIQ - Subjective assessment methodology for video quality," Tech. Rep. BPN 056, EBU Project Group B/VIM (Video In Multimedia), May 2003.

[8] J.-L. Blin, "Stability of the SAMVIQ method according to 2 groups of observers and 2 types of display," Tech. Rep. 6Q/83-E, ITU-R, Question 102/6, October 2004.

[9] ITU-R Recommendation BT.814-1, "Specifications and alignment procedures for setting of brightness and contrast of displays," ITU, July 1994.

[10] J.-L. Blin, "SEOVQ software tool for quality, preference and acceptability evaluation of multimedia images," Tech. Rep. FT.BD.FTR&D/DIH/EQS/462/02/JLB, France Telecom R&D, October 2002.

[11] R. Aldridge, J. Davidoff, M. Ghanbari, D. Hands, and D. Pearson, "Recency effect in the subjective assessment of digitally-coded television pictures," in Proc. Fifth International Conference on Image Processing and its Applications, Edinburgh, UK, 1995, pp. 336-339.

[12] A. M. Rohaly, J. Libert, P. Corriveau, and A. Webster, "Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment," Tech. Rep., VQEG, March 2000.