Extreme Experience Research Report


Contents

1 Introduction
  1.1 Key Findings
2 Research Summary
  2.1 Project Purpose and Contents
    2.1.2 Theory Principle
    2.1.3 Research Architecture
    2.1.4 Research Methodology
3 Research Result
  3.1 Web Page Loading
    3.1.1 Zero Waiting Time for Loading Texts and Images Is 270 ms
  3.2 Video Streaming
    3.2.1 Zero Waiting Time for Video Streaming Is 80 ms
    3.2.2 Data Verification for Video Streaming Zero Waiting Time
    3.2.3 Zero Waiting Time for Video Asynchronization Is 220 ms
    3.2.4 The Extreme Experience of Definition Is Affected by Screen Sizes and Illumination Conditions
  3.3 Video Call
    3.3.1 Zero Waiting Time for Interaction During Video Calls Is 210 ms
4 Appendix
  4.1 About This Report
  4.2 Contact Us
  4.3 Disclaimer

Figures

Figure 2-1 Project purpose
Figure 2-2 Research principle
Figure 2-3 Research architecture
Figure 2-4 Zero waiting time for loading texts and images
Figure 3-1 Zero waiting time for loading texts and images
Figure 3-2 Zero waiting time for video streaming
Figure 3-3 Zero waiting time for video asynchronization
Figure 3-4 Zero waiting time for interaction during video calls

Tables

Table 3-1 Human visual perception ability
Table 3-2 Materials used in the experiment of zero waiting time for video asynchronization
Table 3-3 Extreme experience of definition
Table 3-4 Materials used in the experiment of the extreme experience of definition

1 Introduction

The experience baseline is obtained through service tests on the existing network. Owing to constraints such as the network environment and test methods, the test results represent only the current status of network and service deployment rather than the actual experience of users. From the perspective of human factors engineering, this research obtains the extreme experience of human perception through a professional data collection and analysis system (covering video, audio, bioelectricity, eye movement, brain wave, and pressure distribution data). Based on the data concerning users' extreme experience, this document provides guidelines for improving network products and solutions and a theoretical basis for the ultimate goal of future network development. In this report, "zero waiting time" describes the extreme experience in which a user cannot perceive the waiting time of a service.

1.1 Key Findings

The zero waiting time for loading texts and images is 270 ms, and for video buffering it is 80 ms. Users cannot perceive waiting times below these thresholds. If audio and video are out of synchronization by more than 220 ms, users perceive the asynchronization when making video calls or watching videos. During a video call, if the interaction delay exceeds 210 ms, users perceive the delay in the conversation. The extreme experience of video definition is affected by the screen size of the display device and by illumination conditions. For mainstream smartphones with screen sizes between 4.3 and 6 inches, degradation of video definition is perceived at video bit rates ranging from 400 kbit/s to 1.1 Mbit/s.

2 Research Summary

2.1 Project Purpose and Contents

Figure 2-1 Project purpose (for each service indicator — webpage text and image refreshing, voice distortion and noise level, video buffering, continuity, synchronization, and definition, and video-call interaction — an extreme experiment, a verification experiment, and an unendurable experiment grade the experience as unperceived, acceptable, or unacceptable)

2.1.2 Theory Principle

The tests in this research focus on the user experience of the current mainstream Internet services (webpage, video, social networking, and VoIP). User experience falls mainly into two categories: time-related (waiting time for text and image refreshing, video streaming, video continuity, interaction during video calls, and audio-video synchronization) and quality-related (image resolution and sound clarity). Users' final experience is graded into three levels: unperceived, acceptable, and unacceptable. In addition, factors such as noise level, illuminance, and the screen sizes of the test terminals are all considered in this research.

For many years, the pupil reaction has been applied in visual studies. It has long been believed that the diameter of the pupil can be calculated as a function of the total luminous flux the eye receives. Recent studies show that the pupillary light reflex system responds not only to illumination changes but also to changes in the spatial distribution of photic stimulation. It has been proven that simple graphic changes can trigger a graphic-pupil reflex. Therefore, slight eye movements such as pupil dilation can be used to detect photic stimulation changes in real time.

Figure 2-2 Research principle

Fluctuations in human perception are manifested in changes of the pupil diameter. This experiment captures users' extreme perceptions, such as zero perception and zero waiting time, by using an eye-tracking device to detect changes in pupil size. Compared with other physiological indicators, such as reaction time, galvanic skin response, and sweat gland secretion, the measurement of pupil size is more accurate. With an error range within 5 ms, this method provides accurate data for measuring extreme experience.

2.1.3 Research Architecture

Figure 2-3 shows the architecture of the research.

Figure 2-3 Research architecture

2.1.4 Research Methodology

To illustrate the research methodology, the extreme experiment of zero waiting time for loading texts and images is used as an example. The procedure of the experiment is shown in Figure 2-4. Within time T after pressing Enter (entering the URL), the texts are displayed on the page while the images are still being loaded; the images appear after another interval T. This process composes one segment of the experimental material, and the next segment is triggered by pressing Enter after the completion of the former segment. The length of T is gradually extended, ranging from 50 ms to 2000 ms.

Figure 2-4 Zero waiting time for loading texts and images

Based on the original data recorded by the eye-tracking device, the value of T at which eye movements change abruptly, indicating the occurrence of the extreme experience, can be identified. The accuracy of the result can be improved by narrowing the range of T through multiple phases of the experiment. Indicators of the extreme experience of other services can be obtained using a similar experimental procedure.
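The phase-by-phase narrowing described above can be sketched as follows. This is a minimal illustration, not the authors' actual tooling: the report does not specify how candidate delays are chosen per phase, so `narrow_threshold`, its step logic, and the `perceives` predicate (standing in for the eye-tracking measurement) are our own assumptions.

```python
def run_phase(candidates, perceives):
    """Return the smallest candidate delay T that is perceived.

    `perceives` is a stand-in for the real measurement: a predicate that
    is True when the eye-tracking data shows an abrupt pupil change.
    """
    for t in sorted(candidates):
        if perceives(t):
            return t
    return None  # no delay in this range was perceived


def narrow_threshold(lo, hi, perceives, phases=3, points=7):
    """Shrink [lo, hi] (ms) around the perception threshold over several phases."""
    for _ in range(phases):
        step = max((hi - lo) // (points - 1), 1)
        candidates = range(lo, hi + 1, step)
        t = run_phase(candidates, perceives)
        if t is None:
            return lo, hi
        # The threshold lies between the last unperceived and first perceived T.
        lo, hi = max(lo, t - step), t
    return lo, hi


# Toy check with a hypothetical true threshold of 270 ms:
lo, hi = narrow_threshold(50, 2000, lambda t: t >= 270)
```

With this toy predicate, three phases shrink the initial 50-2000 ms range to 266-275 ms, mirroring how the report converges on 270 ms.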

3 Research Result

3.1 Web Page Loading

3.1.1 Zero Waiting Time for Loading Texts and Images Is 270 ms

This experiment studies users' perception of the waiting time for loading texts and images. If the waiting time is below a certain threshold, users cannot perceive it; this threshold is termed the zero waiting time.

Figure 3-1 Zero waiting time for loading texts and images

The zero waiting time is gradually narrowed down to an accurate range through three phases of experiments. In the first phase, the zero waiting time lies between 200 ms and 800 ms; in the second phase, between 250 ms and 350 ms; in the third phase, between 250 ms and 300 ms. The range of the zero waiting time for loading texts and images is thus finalized at 250-300 ms, with a mean value of 270 ms.

3.2 Video Streaming

3.2.1 Zero Waiting Time for Video Streaming Is 80 ms

This experiment studies users' perception of the waiting time of video buffering. When the buffering time is below a certain threshold, users cannot perceive the waiting time for video streaming. This experience is called zero waiting.

Figure 3-2 Zero waiting time for video streaming

In the first phase of the experiment, the zero waiting time lies between 50 ms and 200 ms. In the second phase, it lies between 40 ms and 160 ms (the second phase proceeds in increments of 10 ms). The mean value across the 18 participants is 70 ms. Since the zero waiting time for video streaming (70 ms) differs significantly from that for texts and images (270 ms), more participants are involved in a third phase to test the validity of the data. Eight more participants are added. After excluding one outlier (280 ms), the range is finalized between 50 ms and 100 ms, preliminarily verifying the validity of the preceding data. Based on the results obtained in the second phase, it is concluded that the zero waiting time for video streaming is 80 ms.

3.2.2 Data Verification for Video Streaming Zero Waiting Time

The zero waiting time for video streaming is 80 ms (ranging from 40 ms to 120 ms). Considering that it differs significantly from that of text and image loading (270 ms) and from that of video calls (220 ms), the validity of the data needs to be verified.

1. Repetition of the experiment. Eight more participants are added to the renewed experiment. After excluding one outlier (280 ms), the range is finalized between 50 ms and 100 ms, preliminarily verifying the validity of the preceding data.
2. Statistical verification. The zero waiting time of video streaming is closely related to that of video continuity, which is 17 frames, equaling 58-59 ms and falling into the time range of the former data.
3. Theoretical verification. According to Partrick's speech at the ACM in 1997, humans' cognitive speed at perceiving videos is 3 to 6 times faster than at perceiving static images. This theory can explain the difference between the zero waiting time of video streaming and that of text and image loading.
Table 3-1 Human visual perception ability

Objects of Visual Perception   Extreme Perception (Seconds)
Texts                          0.812
Images                         0.65
Animated graphics              0.21
Videos                         0.23

Source: Partrick L. & Xin F., Motion Effects towards Cognitive Speed, ACM-Graphics Open Lecture, U.S., 1997
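The outlier screening used in these verifications (the exclusion of the 280 ms reading above, and the "3SD principle" named in Section 3.2.3) can be sketched as follows. This is an assumed reading of the report's method in its simplest form; the data values are hypothetical.

```python
# Minimal 3SD outlier screen (an assumed interpretation of the report's
# "3SD principle"): flag any sample lying more than three sample
# standard deviations away from the sample mean.
from statistics import mean, stdev


def three_sd_outliers(samples):
    """Return the values outside mean +/- 3*SD of the full sample."""
    m, sd = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - m) > 3 * sd]


# Hypothetical per-participant thresholds (ms) with one gross outlier:
data = [200, 215, 230, 210, 225, 220, 205, 235, 218, 222,
        212, 228, 208, 232, 216, 1000]
```

Here `three_sd_outliers(data)` flags only the 1000 ms reading. Note that with very few samples a single outlier inflates the standard deviation enough to hide itself, which is presumably why the report pairs the 3SD rule with a comparison of adjacent points.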

3.2.3 Zero Waiting Time for Video Asynchronization Is 220 ms

This experiment studies users' perception of asynchronization between sound and images during video streaming and video calls. When the asynchronization between sound and images exceeds a certain threshold, users perceive that the two are out of sync.

Figure 3-3 Zero waiting time for video asynchronization

The parameters of the video materials used in this experiment are listed in Table 3-2. In the first phase of the experiment, the maximum time for any of the 11 participants to perceive asynchronization is 1000 ms, and the minimum is 50 ms. To narrow this range and improve accuracy, a second phase is conducted. Based on the narrowed range obtained from the first phase, the extreme values of the zero waiting time are obtained for 16 participants. Based on the 3SD principle and a comparison of adjacent points, it is concluded that there is no outlier among the obtained data, and the calculated mean value is 220 ms.

Table 3-2 Materials used in the experiment of zero waiting time for video asynchronization

Asynchronization offset (ms): 50, 100, 150, 200, 250, 300, 350, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000

3.2.4 The Extreme Experience of Definition Is Affected by Screen Sizes and Illumination Conditions

This experiment studies users' perception of video definition during video streaming and video calls. It aims to identify the threshold at which users start to perceive the degradation of video definition. The data on the extreme experience of definition in relation to screen sizes and illumination conditions are listed in Table 3-3.

Table 3-3 Extreme experience of definition (screen size in inches)

Environment  3.5       4.3       5          6          7         7.9       9.7       14
Outdoor      240p_low  240p_low  240p_high  360p_low   360p_low  360p_low  360p_low  576p
Office       240p_low  360p_low  360p_low   360p_high  480p      720p_low  576p      720p_high
Night        240p_low  360p_low  360p_low   480p       576p      576p      576p      720p_high

In terms of screen size, the larger the screen, the higher the video bit rate at which users perceive the degradation of video definition. For a 3.5-inch screen, participants perceive the degradation when the definition of the video clip is 240p_low. For mainstream smartphones with screen sizes between 4.3 and 6 inches, degradation of video definition is perceived mainly when the definitions of the video clips are between 360p and 480p, with video bit rates ranging from 400 kbit/s to 1.1 Mbit/s. In terms of environmental conditions, as the illumination decreases, the definition at which participants perceive degradation rises.

Note 1: The parameters of the video materials used in this experiment are listed in Table 3-4. Since the perceived definition of a video clip is closely related to its genre, video codec, and video bit rate, the quantitative conclusions drawn in this test have their limitations.

Note 2: In this experiment, the illumination environment is simulated in a laboratory and set to three typical lighting conditions (outdoor 1000 lx, office 300-400 lx, night 1-2 lx), measured with an illuminometer.

Table 3-4 Materials used in the experiment of the extreme experience of definition

Experiment Material  Resolution  Video Bit Rate (kbit/s)  Video Codec (H.264)
240p_low             426*240     250                      Main@2.1
240p_high            426*240     400                      High@3.0
360p_low             640*360     400                      Main@2.1
360p_high            640*360     750                      High@3.0
480p                 854*480     1100                     High@3.1
576p                 1024*576    1600                     High@3.1
720p_low             1280*720    1100                     High@3.1
720p_high            1280*720    2400                     High@3.1
1080p_low            1920*1080   2200                     High@4.0
1080p_high           1920*1080   5000                     High@4.0
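Tables 3-3 and 3-4 can be combined into a simple lookup. The data below are taken from the two tables, but the function, its names, and the nearest-screen-size policy are illustrative assumptions of ours, not part of the report.

```python
# Illustrative lookup built from Tables 3-3 and 3-4: given a screen size
# in inches and a lighting environment, return the clip at which
# degradation was first perceived, plus that clip's bit rate.

SIZES = [3.5, 4.3, 5, 6, 7, 7.9, 9.7, 14]  # column headers of Table 3-3

THRESHOLDS = {  # Table 3-3, one row per environment
    "outdoor": ["240p_low", "240p_low", "240p_high", "360p_low",
                "360p_low", "360p_low", "360p_low", "576p"],
    "office":  ["240p_low", "360p_low", "360p_low", "360p_high",
                "480p", "720p_low", "576p", "720p_high"],
    "night":   ["240p_low", "360p_low", "360p_low", "480p",
                "576p", "576p", "576p", "720p_high"],
}

BITRATE_KBPS = {  # Table 3-4
    "240p_low": 250, "240p_high": 400, "360p_low": 400, "360p_high": 750,
    "480p": 1100, "576p": 1600, "720p_low": 1100, "720p_high": 2400,
    "1080p_low": 2200, "1080p_high": 5000,
}


def definition_threshold(screen_inches, environment):
    """Nearest-size lookup of the perceived-degradation threshold."""
    idx = min(range(len(SIZES)), key=lambda i: abs(SIZES[i] - screen_inches))
    clip = THRESHOLDS[environment][idx]
    return clip, BITRATE_KBPS[clip]
```

For example, `definition_threshold(5, "office")` returns `("360p_low", 400)`, matching the report's 400 kbit/s to 1.1 Mbit/s range for mainstream smartphone screens.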

3.3 Video Call

3.3.1 Zero Waiting Time for Interaction During Video Calls Is 210 ms

This experiment studies the zero waiting time of interaction during video calls. If the waiting time of interaction is below a certain threshold, users cannot perceive any delay in image transmission.

Figure 3-4 Zero waiting time for interaction during video calls

The zero waiting time is gradually narrowed down to an accurate range through three phases of experiments. In the first phase, the zero waiting time lies between 100 ms and 800 ms. In the second phase, it lies between 150 ms and 300 ms. In the third phase, the range is narrowed to between 170 ms and 260 ms, with a mean value of 210 ms.

4 Appendix

4.1 About This Report

This research is the first of its kind in applying human factors engineering to the study of user experience. The research may be incomplete due to the limited number of participants and experimental materials; the results of the experiments are for reference only. More research on the key indicators will be conducted in the future. Due to the length of this report, only some of the important conclusions are listed. You can contact us to obtain the complete report and experiment details.

To create a baseline standard for user experience regardless of RAT (radio access technology), you can refer to the MOS for voice quality and establish MOS systems for web pages, video streaming, and video calls. In these MOS systems, the scores are based only on human perception, where 5 equals zero waiting time and 1 equals the maximum level of endurance. Theoretically, depending on network conditions, there are certain maximum achievable values of zero waiting time for different RATs; achieving these values can serve as the development goal for each RAT.

4.2 Contact Us

Author: Wang Bin, Email: i.wangbin@huawei.com
Contact mlab (MBB lab): MBBlab@huawei.com

4.3 Disclaimer

This report is a product of Huawei mlab. The information provided in this report is for reference only. This research may be incomplete due to the limited number of participants and experimental materials. The rights of revision regarding the content of this report are reserved by Huawei, and Huawei holds no responsibility for the consequences of revision. Information contained in this report, express or implied, constitutes neither a basis for investment purposes nor a warranty of any kind. mlab may add to, correct, and amend information in this report without notice, but does not guarantee immediate release of the revised version. Huawei will not be liable for any direct or indirect investment profit or loss caused thereby.

This document is the intellectual property of Huawei mlab. No part of it may be reproduced or transmitted in any form or by any means without the prior written consent of Huawei. If any content of this report is released by any other party in the form of a reference, it should be noted as the intellectual property of Huawei mlab. There shall be no citation, deletion, or modification that violates the original meaning of this report.