ACHIEVING HIGH QOE ACROSS THE COMPUTE CONTINUUM: HOW COMPRESSION, CONTENT, AND DEVICES INTERACT


Proceedings of the Seventh International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), January-February 2013, Scottsdale, Arizona

Yiting Liao, Audrey Younkin, Jeffrey Foerster, Philip Corriveau
Intel Labs, Intel Corporation, Hillsboro, OR, USA

ABSTRACT

Streaming video for entertainment, information, and communication is rising sharply due to proliferating technology, new services, and a strong consumer desire to view content. Managing quality of experience (QoE) under cost constraints is key to satisfying consumers and monetizing services. A set of subjective tests was developed to investigate video quality under the interaction of content type (spatial/motion levels), compression settings (bitrate and resolution), display resolution, and device size (HDTV, tablet, and smartphone). The resulting data show how QoE correlates with objective quality metrics, video content characteristics, and device features. An estimation model was designed to estimate video QoE, and the results demonstrate performance gains over traditional video quality assessment (VQA) tools.

1. INTRODUCTION

With video traffic growing exponentially, providing high video quality at minimal storage, bandwidth, and power cost becomes increasingly important [1]. One of the most challenging problems in designing an intelligent and adaptable video delivery ecosystem is the lack of video quality assessment tools that map across multiple devices, and hence usages. The ultimate goal is to determine the experience an end user can expect from video within and across the compute continuum. To make this a reality, an ecosystem needs to be built that enables the provisioning of a seamless and consistent user video experience across multiple devices.

Worldwide, mobile data traffic is predicted to increase 18x by the year 2016 [1]. If the current infrastructure does not evolve to meet this expansion, potential customers will experience degradations in a number of new services. This increase in mobile traffic is driven by mobile technology and applications that are becoming more prevalent, from peer-to-peer video conferencing to real-time media sharing. There are several key market drivers:

1. High Demand for Video: People want video content, whether for entertainment or information. Now more than ever it permeates our interactive culture.
2. Communication Through Video: Contact via video telephony, video messaging, or video sharing is increasingly abundant [3]. People are craving face-to-face interaction that works [4].
3. Abundance of Mobile Devices: There is a proliferation of devices that can capture, receive, and transmit video.
4. Social Media Impact: Web 2.0, social media, and user-generated content sites provide a platform for users to post video [5].
5. Merging of Traditional TV with Streaming: In addition to basic and paid cable TV channels, consumers are shifting to streaming or download services [6].

With sophisticated devices, emerging services, and social motivations all on the rise, the current environment needs to shift from its present instantiation. Transmission of real-time video is typically based on bandwidth allocation [7]. The current best solutions do not offer a way to manage video services based on QoE.
Adapting video to the available throughput rather than to QoE can lead either to poor video QoE or to unnecessary bandwidth consumption. One of the most important gaps in enabling video QoE management is understanding human perception of video quality and developing objective, automatic methods that approximate perceived quality. Researchers have conducted various subjective studies to investigate the factors that impact video quality. In [8], subjective experiments were conducted to compare codec performance and the effects of transmission errors on visual quality. Reference [9] built the LIVE Video Quality Database to evaluate distortions such as compression and wireless packet loss. In addition, a LIVE mobile VQA database was created to study dynamically varying distortions such as frame freezes [10].

The goal of our study is to investigate human-perceived video quality under the impact of compression settings (bitrate and resolution), video characteristics (spatial details and motion level), and display devices (display resolution and screen size), and to design an automatic quality assessment method that predicts subjective quality under these impact factors. The subjective testing methodology and test video settings are introduced in Section 2. Section 3 summarizes the observations and learnings from the subjective results. Section 4 presents a subjective quality estimation algorithm and demonstrates its performance gains compared to other VQA tools.

2. SUBJECTIVE TESTING

2.1 Subjective Testing Methodology

Experimental Design: A within-subjects design using a single-stimulus method was employed, ensuring that all participants were exposed to each of the variables for one device only. This design reduces the subjective error variance, and fewer participants are needed to obtain valid statistical results compared to a between-subjects design [12]. A between-subjects design was used across the different devices (HDTV, tablet, and smartphone).

Subjective Rating Scale: The Mean Opinion Score (MOS) is an internationally standardized rating method for quantifying the perceived quality of images and video [11]. The MOS is a quantitative value calculated as the arithmetic mean of all the individual participants' ratings of the images/videos under test. The rating scale associates each numerical value with a qualitative descriptor, as shown in Table 1.

Table 1: MOS Quality Scale
Numerical Value   Quality Anchor   Artifact Descriptor
5                 Excellent        There are no artifacts
4                 Good             Artifacts are slightly noticeable, but do not bother me
3                 Fair             Artifacts are noticeable, and they bother me a little
2                 Poor             Artifacts are very noticeable, and they bother me a lot
1                 Bad              Artifacts are severely noticeable, and I would not continue to watch
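As a small illustration of this aggregation step (not taken from the paper), the per-clip MOS on the 1-5 scale of Table 1 is simply the arithmetic mean of the individual ratings; the Python sketch below assumes ratings are stored as (participant, clip, score) records, with hypothetical clip names:

```python
from collections import defaultdict

def mean_opinion_scores(ratings):
    """Compute per-clip MOS as the arithmetic mean of participant ratings.

    `ratings` is an iterable of (participant_id, clip_id, score) tuples,
    where score is on the 1 (Bad) .. 5 (Excellent) scale of Table 1.
    Returns {clip_id: mos}.
    """
    totals = defaultdict(lambda: [0.0, 0])   # clip_id -> [sum of scores, count]
    for _participant, clip, score in ratings:
        totals[clip][0] += score
        totals[clip][1] += 1
    return {clip: s / n for clip, (s, n) in totals.items()}

# Hypothetical example: three participants rating two encoded clips.
ratings = [
    ("p1", "Aspen_2Mbps_720p", 4), ("p2", "Aspen_2Mbps_720p", 5),
    ("p3", "Aspen_2Mbps_720p", 4),
    ("p1", "RedKayak_500kbps_480p", 2), ("p2", "RedKayak_500kbps_480p", 3),
    ("p3", "RedKayak_500kbps_480p", 2),
]
print(mean_opinion_scores(ratings))  # e.g. {'Aspen_2Mbps_720p': 4.33, ...}
```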
Viewing Conditions: All evaluations took place in a usability lab at the Intel facilities in Hillsboro, Oregon, USA. Participants were seated at a fixed distance from the HDTV and from the tablet, and chose their own viewing distance for the smartphone (which was held in the participant's hand). To ensure a proper viewing experience, the viewing angle off the center axis was kept within the recommended limit. Ergonomically adjustable chairs were provided so that each participant's line of sight was vertically centered on the display. In addition, the room lighting was dimmed to a constant low level to ensure that participants' pupils did not dilate or constrict between content ratings, and there was no glare from overhead lights that could interfere with the displays.

Equipment: Table 2 shows the display characteristics of the three devices used in this research.

Table 2: Equipment Details
                     HDTV        Tablet     Smartphone
Display Resolution   1920x1080   1280x800   1280x720
Display Type         LCD         TFT        AMOLED

Participants: Participants were recruited in roughly equal groups for each device type and were randomly selected from the Intel Jones Farm campus in Hillsboro, OR, with a mix of ages and genders. Each participant was screened for visual acuity (Snellen Eye Chart) and color deficiency (Ishihara Test Plates) [14, 15].

Procedure: Participants were asked to read and sign a consent form and then completed the vision screenings. The facilitator gave instructions on the flow of the study, the tasks, and the rating scale. Familiarization and practice trials were carried out at the beginning of each test session, and participants were encouraged to ask questions during this time. Participants were reminded that there were no right or wrong answers and that their opinions were highly valued. Each test session lasted approximately 60 minutes.

2.2 Videos for the Subjective Testing

A total of eight source videos were used for the HDTV test and twelve for the tablet and smartphone tests. All source videos were uncompressed, high-quality sequences in 4:2:0 progressive format with a resolution of 1920x1080. Table 3 describes the video sequences used in the subjective test. The first nine sequences have a frame rate of 30 fps, while the last five are at 25 fps. The test sequences are each roughly ten seconds long, except AspenLeaves, which is longer. As shown in Table 3, the videos marked A were used for the HDTV test, while the videos marked B were used for the tablet and smartphone tests. A wide assortment of content with varying levels of spatial complexity, motion, and scene depiction was selected.
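The paper does not state how spatial complexity and motion level were quantified; a common proxy, and the assumption behind this sketch, is a spatial/temporal information (SI/TI) measure in the spirit of ITU-T P.910, computed here with NumPy and OpenCV over grayscale frames:

```python
import numpy as np
import cv2  # OpenCV is assumed to be available

def si_ti(frames):
    """Rough spatial information (SI) and temporal information (TI), in the
    spirit of ITU-T P.910, for a list of grayscale float32 frames."""
    si_values, ti_values = [], []
    prev = None
    for frame in frames:
        # SI: standard deviation of the Sobel-filtered frame (edge energy).
        sobel_x = cv2.Sobel(frame, cv2.CV_32F, 1, 0)
        sobel_y = cv2.Sobel(frame, cv2.CV_32F, 0, 1)
        si_values.append(np.hypot(sobel_x, sobel_y).std())
        # TI: standard deviation of the frame-to-frame difference.
        if prev is not None:
            ti_values.append((frame - prev).std())
        prev = frame
    return max(si_values), (max(ti_values) if ti_values else 0.0)
```

Under such a proxy, a clip like FrontEnd (described later as low spatial detail and low motion) would be expected to yield small SI/TI values, while a busy sequence such as CrowdRun would score high on both.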

Table 3: Video Content Description and Categorization
Sequence             Description
Aspen (A, B)         Aspen trees with their leaves turned yellow for fall.
AspenLeaves (A, B)   Aspen leaves blowing gently in the breeze.
BackSneak (B)        An evening high-school football game.
ControlledBurn (A, B) The controlled burn of a house.
FrontEnd (A, B)      Two men showing a hardware box.
RedKayak (A, B)      A man rowing a red kayak on a stream, including different views, zoom, and water-flow patterns.
SpeedBag (A, B)      A boxer hitting a speed bag and giving some advice.
TouchdownPass (A)    A football game ends in a touchdown.
WestWindEasy (A, B)  A poem scrolling vertically on the left and a scene of grasses blowing in the wind on the right.
BbScore (B)          A basketball game with score overlay, camera pans, and zooms.
CrowdRun (B)         A crowd of people running across a grass field.
PedestrianArea (B)   People walking in a street intersection, still camera.
Sunflower (B)        A bee moving over a sunflower, still camera.
Tractor (B)          A tractor moving in a field, camera following the movement.

The source videos were encoded using Microsoft Expression Encoder with the H.264 codec at different encoding bitrates and resolutions to evaluate the compression distortion of the video [16]. The encoding bitrates and resolutions were carefully selected for each video sequence and each device in order to obtain a wide range of video quality. Table 4 shows the encoding bitrate and resolution ranges used for the different devices.

Table 4: Encoding Settings
Device       Bitrate Range   Resolution Range
HDTV         0kbps~6Mbps     8x~1920x1080
Tablet       77kbps~6Mbps    8x~080x70
Smartphone   0kbps~Mbps      8x~080x70

For the HDTV, ten coded video clips were generated for each video sequence; for the tablet and smartphone, eight coded clips were generated per sequence. Note that, in order to achieve the desired range of video quality, the bitrate and resolution sets for each video sequence were chosen differently based on the content characteristics and the display device. In total, 80 compressed videos were generated for the HDTV test and two sets of 96 videos for the tablet and smartphone tests.

Figure 1: Objective quality scores (PSNR, SSIM, and MS-SSIM) vs. MOS.

3. RESULTS ANALYSIS

The performance of three full-reference objective quality metrics was evaluated first: PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural SIMilarity) [17], and MS-SSIM (Multi-Scale SSIM) [18]. Figure 1 plots the objective scores of PSNR, SSIM, and MS-SSIM against MOS for all videos evaluated in the subjective test. The Pearson linear correlation coefficients (PCC) between the objective scores and MOS were 0.660 for SSIM and 0.700 for MS-SSIM, with PSNR showing the lowest correlation of the three; the PSNR data points also scatter most randomly in the figure. Even though MS-SSIM has the highest PCC and its points scatter over a smaller area, it still does not correspond monotonically to MOS. For example, the dotted line in the figure shows that for an MS-SSIM value of 0.87, the MOS scores span a wide range.

Additional factors that might improve the performance of the objective quality metrics were then evaluated. MS-SSIM is used in the rest of the paper since it had the best performance among the metrics studied. Figure 2 shows the data points grouped by device and plots a linear fitting curve between MS-SSIM and MOS for each device. The results demonstrate that human perception of video quality is affected by the device: for the same piece of content, participants tend to give a lower quality score on a larger screen for clips with the same MS-SSIM. This indicates that people may have higher expectations of video quality on devices with a larger screen, either because of the surrounding environment or because quality degradation is more visible on a bigger screen.
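For reference, the correlation analysis reported in this section reduces to a Pearson coefficient over per-clip (objective score, MOS) pairs; a minimal sketch, assuming SciPy is available and using made-up numbers:

```python
import numpy as np
from scipy.stats import pearsonr

def pcc(objective_scores, mos_values):
    """Pearson linear correlation coefficient between an objective metric
    (e.g., PSNR, SSIM, or MS-SSIM) and the subjective MOS, computed per clip."""
    r, _p_value = pearsonr(np.asarray(objective_scores), np.asarray(mos_values))
    return r

# Hypothetical per-clip scores for one device.
msssim = [0.90, 0.93, 0.96, 0.98, 0.99]
mos    = [2.1, 2.8, 3.5, 4.2, 4.6]
print(f"PCC(MS-SSIM, MOS) = {pcc(msssim, mos):.3f}")
```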

Figure 2: Device-based MS-SSIM and MOS mapping.
Figure 3: Content- and device-based MS-SSIM and MOS mapping (FrontEnd and WestWindEasy on different devices).

Figure 2 shows that performing device-based MS-SSIM to MOS mapping can improve the estimation accuracy of the subjective quality. The PCC values between MS-SSIM and MOS for the HDTV, tablet, and smartphone are 0.889, 0.870, and 0.7989, respectively. Furthermore, the PCC values between MS-SSIM and MOS were calculated separately for each video content displayed on each device, as shown in Table 5. The correlation between MS-SSIM and MOS becomes very high (~0.98) when considered in a content-specific and device-specific manner.

Table 5: Pearson Linear Correlation Coefficients between MS-SSIM and MOS
Sequence         HDTV     Tablet   Smartphone
Aspen            0.9867   0.9860   0.976
AspenLeaves      0.96     --       --
BackSneak        --       0.99     0.996
BbScore          --       0.980    0.9907
ControlledBurn   0.97     0.980    0.967
FrontEnd         0.996    0.996    0.987
PedestrianArea   --       0.990    0.969
RedKayak         0.987    0.97     0.9700
SpeedBag         0.9796   0.99     0.98
Sunflower        --       0.98     0.989
Tractor          --       0.98     0.980
TouchdownPass    0.99     --       --
WestWindEasy     0.998    0.9898   0.996

Figure 3 shows the MS-SSIM vs. MOS results of FrontEnd and WestWindEasy on different devices. The FrontEnd sequence has a steeper mapping function than WestWindEasy, and the trend stays consistent across devices. A likely reason is that FrontEnd is a relatively easy-to-encode video with low spatial detail and low motion, while WestWindEasy has more spatial detail and is harder to encode. Given the same MS-SSIM value, the distortion in the FrontEnd sequence may be more noticeable since there are fewer spatial details to focus on. The results show that directly mapping an objective quality score to subjective quality leads to inaccurate estimates, but the estimation is greatly improved if impact factors such as video characteristics and device information are taken into account.
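To make the content- and device-specific mapping concrete, the sketch below (an illustration, not the authors' code) fits a separate linear MS-SSIM-to-MOS line for each (content, device) group with NumPy; the sample values are made up:

```python
import numpy as np

def fit_linear_mappings(samples):
    """Fit MOS ~ a * MSSSIM + b separately for each (content, device) group.

    `samples` is an iterable of (content, device, msssim, mos) tuples.
    Returns {(content, device): (a, b)}.
    """
    groups = {}
    for content, device, msssim, mos in samples:
        groups.setdefault((content, device), []).append((msssim, mos))
    mappings = {}
    for key, points in groups.items():
        x = np.array([p[0] for p in points])
        y = np.array([p[1] for p in points])
        a, b = np.polyfit(x, y, 1)  # least-squares line: slope a, intercept b
        mappings[key] = (a, b)
    return mappings

def predict_mos(mappings, content, device, msssim):
    a, b = mappings[(content, device)]
    return a * msssim + b

# Hypothetical data for one (content, device) group.
mappings = fit_linear_mappings([
    ("FrontEnd", "HDTV", 0.90, 2.4), ("FrontEnd", "HDTV", 0.95, 3.3),
    ("FrontEnd", "HDTV", 0.99, 4.5),
])
print(predict_mos(mappings, "FrontEnd", "HDTV", 0.97))
```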

4. SUBJECTIVE QUALITY ESTIMATION

Figure 4: MOS estimator based on MS-SSIM, video characteristics, and device.

A MOS estimator was designed to estimate subjective quality from an objective quality score, video characteristics, and device features, as shown in Figure 4. The estimator takes inputs from an objective quality calculator, a content analyzer, and a device detector. The objective quality calculator computes an objective quality score, such as MS-SSIM, for the input video; the content analyzer estimates video characteristics such as the amount of spatial detail (S) and the motion level (M); and the device detector gathers device information such as display resolution (R) and device type (D). The estimated MOS of the input video is then calculated using the following equation:

MOS = α · MSSSIM + β    (1)

where α and β are functions of the four impact factors S, M, R, and D introduced above.

Figure 5: Estimated MOS vs. actual MOS using the proposed estimation algorithm (PCC = 0.9866).

Figure 5 shows the estimated MOS vs. the actual MOS using the proposed estimator. The results shown here assume that α and β can be estimated accurately from the proposed impact factors, so this is probably the best the estimator can do. Our future work will include (1) applying machine learning techniques to build a regression model that calculates α and β from the impact factors; (2) validating the designed estimator on more video datasets; and (3) investigating video optimization techniques that can benefit from the proposed quality estimator.
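The paper leaves the estimation of α and β to future work; purely to illustrate the structure of equation (1), the following sketch learns α(S, M, R, D) and β(S, M, R, D) with two simple linear regressions. The feature encoding and the choice of regressor are assumptions, not the authors' model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

class MosEstimator:
    """MOS ~ alpha(S, M, R, D) * MSSSIM + beta(S, M, R, D), per equation (1)."""

    def __init__(self):
        self.alpha_model = LinearRegression()
        self.beta_model = LinearRegression()

    def fit(self, features, alphas, betas):
        # `features`: rows of [S, M, R, D] impact factors (device type D
        # encoded numerically); `alphas`, `betas`: per-(content, device)
        # coefficients, e.g. from the linear fits in the previous sketch.
        X = np.asarray(features, dtype=float)
        self.alpha_model.fit(X, np.asarray(alphas, dtype=float))
        self.beta_model.fit(X, np.asarray(betas, dtype=float))
        return self

    def predict(self, s, m, r, d, msssim):
        x = np.array([[s, m, r, d]], dtype=float)
        alpha = self.alpha_model.predict(x)[0]
        beta = self.beta_model.predict(x)[0]
        return alpha * msssim + beta
```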
K., "Study of Subjective and Objective Quality Assessment of Video," Image Processing, IEEE Transactions on, vol.9, no.6, pp.7-, June 00. 0. A. K. Moorthy, L. K. Choi, A. C. Bovik and G. deveciana, Video Quality Assessment on Mobile Devices: Subjective, Behavioral and Objective Studies, IEEE Journal of Selected Topics in Signal Processing, to appear in October 0.. International Telecommunication Union. Methodology for the Subjective Assessment of the Quality of Television Pictures. ITU-R Recommendation BT.00-. 00.. Bibby, P., With-in Subjects Overview Lecture, C8MST Statistical Methods, Department of Psychology, University of Nottingham. 006.. Younkin, A. and Corriveau, P., Predicting and Average End- User s Experience of Video Playback, VPQM Conference Proceedings. January 007.. Snellen Eye Chart. 006. http://www.allegromedical.com/snelleneye-chart-89809.html. Ishihara Testing Plates. 006. http://www.allegromedical.com/official-ishihara-colorblindnesstest-906.html 6. Microsoft Expression Encoder (version ) [Software]. 0. http://www.microsoftstore.com/store/msstore/en_us/pd/productid. 6900?siteID=SRi0yYDlqd0-rfcJq766JL..jWFuYYlA 7. Z. Wang, A.C. Bovik, H.R. Sheikh and E.P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol., no.pp. 600-6, April 00. 8. Z. Wang, E. P. Simoncelli and A. C. Bovik, "Multi-scale structural similarity for image quality assessment," IEEE Asilomar Conference Signals, Systems and Computers, Nov. 00. 9