Color Quantization of Compressed Video Sequences
Wan-Fung Cheung and Yuk-Hee Chan, Member, IEEE
IEEE Transactions on Circuits and Systems for Video Technology (CSVT)


Abstract—This paper presents a novel color quantization algorithm for compressed video data. The proposed algorithm extracts the DC coefficients and motion vectors of the blocks in a shot to estimate a cumulative color histogram of the shot and, based on the estimated histogram, designs a color palette for displaying the video sequence in the shot. It significantly reduces the complexity of palette generation by effectively reducing the number of training vectors used to train a palette, without sacrificing quality. The palette obtained provides a good display quality even when zooming and panning occur in a shot. Experimental results show that the proposed method achieves a significant PSNR improvement over conventional video color quantization schemes when zooming and panning are encountered in a shot.

Index Terms—Color quantization, color palette, compressed-domain video processing, color display, MPEG, cumulative color histogram, compressed video, DC sequence.

I. INTRODUCTION

Many currently available image display devices can only display a limited number of colors simultaneously. Accordingly, in order to display a full-color video sequence, color quantization [1] is necessary: the display device produces a limited set of colors based on the contents of the sequence and then uses this set to represent all colors appearing in the sequence. The set of limited colors is generally called a palette. Either a fixed palette or an adaptive palette can be used to display a video sequence. The former approach is easy to implement, but its output quality is generally poor. For the latter approach, many implementation schemes exist. One of the simplest is to produce a palette for each frame of the sequence.
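For illustration, displaying a frame with a palette amounts to mapping each pixel to its nearest palette entry. Below is a minimal sketch of that mapping (the helper names are ours, not from the paper), using squared Euclidean distance in RGB space:

```python
def nearest_palette_index(pixel, palette):
    """Return the index of the palette entry closest to `pixel` (an RGB
    tuple), measured by squared Euclidean distance in RGB space."""
    return min(range(len(palette)),
               key=lambda i: sum((p - q) ** 2 for p, q in zip(pixel, palette[i])))

def quantize_image(pixels, palette):
    """Map every pixel of an image (here a flat list of RGB tuples) onto
    its nearest palette color."""
    return [palette[nearest_palette_index(px, palette)] for px in pixels]

# Toy example: a 4-color palette and two pixels.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(quantize_image([(10, 5, 5), (200, 30, 30)], palette))
```

The quality of the result depends entirely on how well the palette represents the colors of the sequence, which is the problem the rest of the paper addresses.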
However, though a per-frame palette provides the optimum distortion performance in terms of MSE, this approach is seldom used: its computational cost is very high, and the frequent changing of the color palette results in screen flicker. In practice, an alternative in which a palette is produced for each shot is used instead, as flicker between shots is usually not detectable by human eyes.

A number of color quantization schemes have been proposed to generate a palette based on some selected colors called training vectors [1-4]. Examples include the median-cut algorithm [1], the octree quantization algorithm [2] and the variance-minimization algorithm [4]. The training vectors involved in the palette-generation process play a significant role: they are treated as the representatives of all colors to be handled. Hence, they should be unbiased and cannot be too few. However, the complexity of the process grows exponentially with the number of training vectors, so a compromise has to be reached in order to produce a palette at a reasonable computational cost. Techniques have been proposed to reduce the computational effort of producing a palette by carefully reducing the number of training vectors. For example, one can select a key frame from a shot and use the colors that appear in the key frame as training vectors. One can also use the so-called DC sequence of a shot, or even the DC image of the key frame of a shot, to generate training vectors [5,6].

1 Manuscript received May 7, 2002. This work was substantially supported by a grant (A046) from the Center for Multimedia Signal Processing, HKPolyU. W. F. Cheung and Y. H. Chan are with the Center for Multimedia Signal Processing, Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region, China (phone: 852-2766-6264; fax: 852-2362-8439; e-mail: enyhchan@polyu.edu.hk).
Using key frames to generate color palettes reduces the number of training vectors significantly, but it does not work when the shot of interest involves zooming or panning: some colors may be missing before the zoom but appear after it, so the colors of a key frame in such a shot cannot represent all colors appearing in the shot.

In the past, color palettes were generated from uncompressed data. However, most video sequences nowadays come in compressed formats such as [7] and [8]. Hence, it would be attractive to have a simple algorithm that can produce a color palette from the compressed data directly, as this saves a considerable amount of computational effort for video decompression. In this paper, we propose a novel color palette design method for compressed video sequences. The method extracts DC coefficients and motion vectors from an MPEG bit-stream to produce a color palette for a shot, such that one can quantize a video sequence efficiently.

II. DC SEQUENCE

Most current video coding standards such as MPEG divide the frames in a sequence into I-, P- and B-frames [8]. All frames are partitioned into a number of 8×8 blocks, and each block is then either intracoded or intercoded. Specifically, all

blocks in an I-frame are intracoded, while blocks in a P- or B-frame can be either intracoded or intercoded.

In [5], a DC image is defined as the image consisting of the DC coefficients of all blocks in an image. A video sequence contains a number of frames, and its corresponding DC sequence is defined accordingly as the sequence of DC images associated with those frames. An encoded video sequence contains a number of blocks which are either intracoded or intercoded, and the exact DC sequence cannot be obtained without decompressing the video sequence. A method was suggested in [5] to obtain an approximated DC sequence as follows. For an intracoded block, its DC coefficient is directly extracted from the bit-stream. For an intercoded block, its best matching block in the reference frame is first identified with the associated motion vector of the block. Its DC coefficient is then estimated as the weighted sum of the DC coefficients of the blocks which are partially covered by the best matching block in the reference frame. In formulation, we have

    P(dc) = Σ_{i=1}^{4} w_i · P_{r,i}(dc)    (1)

where P(dc) denotes the DC coefficient of an intercoded block, P_{r,i}(dc) denotes the DC coefficient of the i-th block covered by the associated best matching block in the reference frame, and w_i is the corresponding area proportion covered by the best matching block.

After obtaining the approximated DC sequence of a shot, all DC coefficients are used as training vectors to generate a palette. This method can effectively reduce the number of training vectors without ignoring any frame in a shot. Accordingly, it can provide an unbiased palette for the shot. However, as it generates imaginary colors during the estimation of the DC coefficients of intercoded blocks, the number of training vectors can be as large as the total number of blocks in a shot, and the method is sometimes still very computation-intensive.

III. THE PROPOSED ALGORITHM

In our proposed algorithm, a cumulative color histogram of the shot is estimated and the palette is generated based on the estimated cumulative histogram. A cumulative color histogram is the distribution of colors in a shot; it gives the frequency of occurrence of each color appearing in the shot. In our algorithm, the cumulative histogram is estimated in two steps. In the first step, the DC coefficients of all intracoded blocks in a shot are taken as the representative colors of their blocks and extracted to construct a color histogram. In the second step, intercoded blocks are handled. For each intercoded block, its representative color is considered as a composite of some existing colors previously extracted from the intracoded blocks, and the frequency of occurrence of each involved color is increased by its proportion in the composition. The proportion is determined by the area of overlap with the best matching block. Figure 1 shows an example of how to extract the colors of an intercoded block and update the cumulative histogram. In this example, all blocks other than blocks n and c are intracoded. As all intracoded blocks were handled in the first stage, the representative colors of blocks a, b, d, p and q, say C_a, C_b, C_d, C_p and C_q, are already included in the histogram. The best matching block of block n in the reference frame covers 3 intracoded blocks and 1 intercoded block, whose own best matching block in the corresponding reference frame in turn covers 2 intracoded blocks. As a result, the frequencies of occurrence of C_a, C_b and C_d should each be increased by 0.25, and those of C_p and C_q by 0.125, in the cumulative histogram, according to the area proportions of the blocks in the composition of block n. In practice, the estimation of the cumulative histogram can be implemented in a GOP-by-GOP or even frame-by-frame manner as long as the block dependency is taken care of.
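The two-step histogram estimation above can be sketched as follows. This is our own illustration under simplifying assumptions: the block representation and bookkeeping are hypothetical, and the actual parsing of DC coefficients and motion vectors from an MPEG bit-stream is omitted.

```python
from collections import Counter

def estimate_cumulative_histogram(blocks):
    """Two-step cumulative color histogram estimation (sketch).

    Each block is either ('intra', dc_color) or
    ('inter', [(referenced_block, area_weight), ...]) where the area
    weights of an intercoded block sum to 1.
    """
    hist = Counter()

    # Step 1: every intracoded block contributes its DC color with weight 1.
    for blk in blocks:
        if blk[0] == 'intra':
            hist[blk[1]] += 1.0

    # Step 2: an intercoded block distributes a unit weight over the colors
    # of the blocks its best matching block overlaps; chains of intercoded
    # blocks multiply the area proportions, so frequencies can be fractional.
    def resolve(blk, weight):
        if blk[0] == 'intra':
            hist[blk[1]] += weight
        else:
            for ref, w in blk[1]:
                resolve(ref, weight * w)

    for blk in blocks:
        if blk[0] == 'inter':
            resolve(blk, 1.0)

    return hist
```

On the configuration of Figure 1 (block n composed of blocks a, b, c and d at 0.25 each, block c composed of blocks p and q at 0.5 each), resolving block n adds 0.25 to C_a, C_b and C_d and 0.25 × 0.5 = 0.125 to C_p and C_q, matching the paper's example.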
Implemented in such a GOP-by-GOP or frame-by-frame manner, it is not necessary to process all intracoded blocks in a shot before handling an intercoded block, which reduces the time needed to obtain the estimate. After all intercoded blocks in a shot have been handled, a cumulative color histogram is constructed. This histogram differs from a conventional color histogram in that the frequency of occurrence of a color can be a fractional number. Any color whose frequency of occurrence is larger than zero is considered a representative color of the shot and used as one of the training vectors in the training process, and the frequency of occurrence is taken into account during training.

The proposed approach significantly reduces the number of training vectors used in color quantization. The maximum number of training vectors equals the total number of intracoded blocks in a shot, and in practical applications only a few of the blocks in a shot are intracoded. Table 1 shows the total number of different representative colors involved in the generation of a color palette for a shot when different approaches are used. It shows that the proposed method reduces the total number of representative colors to 20~40% of that required by the approach using an approximated DC sequence. Simulation results also show that, with the proposed approach, only one third to one fifth of the time is required to generate a color palette as compared with the approach using an approximated DC sequence.

IV. SIMULATION RESULTS

Simulations have been carried out to evaluate the performance of the proposed approach. Two movies in DVD format, Negotiator and Vertical Limit, were used as the source input. Camera breaks were detected and the movies were split into a number of shots accordingly. Each shot was then compressed with an MPEG-1 video coder [7]. The ITU-T Recommendation H.263 [8] quantization method

was adopted, and the motion search range was [-16, 15.5] during the compression. To simplify the testing process, only the first frame was encoded as an I-frame and all other frames in a shot were encoded as P-frames with a fixed quantization parameter (QP). Color palettes of 256 colors each were then generated with various schemes. Specifically, the schemes studied are as follows.

VQ-EF: A color palette is generated for each frame in a shot with all colors in the frame.
VQ-AF: A single color palette is generated for a shot with all colors in the shot.
VQ-DCS: A single color palette is generated for a shot with the DC sequence of the shot.
VQ-DCH: A single color palette is generated for a shot with the proposed approach.
VQ-DCSa: A single color palette is generated for a shot with the approximated DC sequence of the shot.
CQ-DCKF: A single color palette is generated for a shot with the approach proposed in [6].
UQ-EF: Fixed uniform quantization of the R, G and B components, with bit allocation (3,3,2).

The prefix 'VQ-' implies that the LBG algorithm [9] is used to generate a color palette with the training colors of a particular scheme.

Figures 2 and 3 show the PSNR performance of the various schemes at different QPs in different cases. Here, PSNR is defined as

    PSNR = 10 log_{10} (255² / MSE)    (2)

where MSE is the mean square error of the colors between a particular frame and its color-quantized output. Table 2 shows the average PSNR per frame in a shot for the different schemes.

The output of UQ-EF suffers from very severe degradation and contouring; the results of the proposed and other schemes are significantly better. VQ-EF provides an optimized PSNR performance as it minimizes the color quantization error for each frame. However, its complexity is very high, not only because of its frame-oriented nature but also because of the huge number of training colors involved in the training process of palette generation.
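The PSNR of Eq. (2) can be computed directly from a frame and its color-quantized output. A small sketch, assuming frames are flat lists of RGB tuples (the pixel layout is our assumption, not the paper's):

```python
import math

def psnr(original, quantized):
    """PSNR in dB as in Eq. (2): 10 * log10(255^2 / MSE), where the MSE is
    averaged over every color component of every pixel."""
    se = 0.0
    n = 0
    for po, pq in zip(original, quantized):   # pixels as RGB tuples
        for co, cq in zip(po, pq):            # R, G, B components
            se += (co - cq) ** 2
            n += 1
    mse = se / n
    return 10.0 * math.log10(255.0 ** 2 / mse)
```

Note that identical frames give MSE = 0, for which the PSNR is undefined (infinite); a practical implementation would guard against that case.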
In addition, VQ-EF requires decompression before its training colors can be extracted, which makes it inapplicable to real-time applications, and its frequent change of color palette leads to screen flicker.

All shot-oriented schemes reduce screen flicker. Among them, at the cost of a huge complexity that makes it impossible for real-time applications, VQ-AF provides the best PSNR performance, as all colors in the shot are taken into account during its training process. It is used here as a reference for evaluating how close a particular shot-oriented scheme comes to the best achievable performance. CQ-DCKF uses the DC image of a key frame to generate a color palette, and hence its computational cost is very low. Comparatively, however, it cannot handle cases such as zooming and panning well, as in such cases the shot cannot be well represented by a single key frame. This is reflected in Figures 2 and 3: the PSNR of the output sequence deteriorates progressively when the shot involves panning or zooming. The problem is that the training color set is too small; the palette obtained is strongly biased toward the key frame, and hence the output quality is very sensitive to the scene change detection algorithm.

VQ-DCS, VQ-DCSa and the proposed VQ-DCH can all handle zooming and panning. The proposed scheme is superior to the other two as its complexity is much lower; as shown in Table 1, the number of training colors involved in the proposed scheme is much smaller. This superiority is not gained at the cost of quality. As shown in Figures 2 and 3, the PSNR performance of the proposed scheme is better than that of VQ-DCSa and very close to that of VQ-DCS. The inferiority of VQ-DCSa might be due to the estimation error of the DC coefficients of intercoded blocks when VQ-DCSa is used. In general, the proposed method achieves a PSNR improvement of 4~9 dB and 1~5 dB over CQ-DCKF and VQ-DCSa, respectively.
Figures 4 and 5 show some simulation results for subjective evaluation. One can see that the output of the proposed scheme reproduces the colors more faithfully. This can be observed by comparing the orange and blue colors on the right of the pictures in Figure 4. In Figure 5, the results of VQ-DCS and CQ-DCKF are biased toward blue while ours is not. Using the cumulative color histogram estimated with the proposed approach is generally superior to using the DC sequence for extracting training colors for palette generation.

The proposed algorithm can work with palette generation algorithms other than the LBG algorithm and still provide a good PSNR performance. Figure 6 shows the case in which the median-cut algorithm [1] is used; in this figure, the prefix 'MC-' implies that the median-cut algorithm is used to obtain the palette from the training vectors. The complexity of the median-cut algorithm is much lower than that of the LBG algorithm, and real-time color quantization can easily be achieved with a typical Pentium-II computer.

V. CONCLUSIONS

Limited-color-palette devices are popular nowadays, and color quantization has to be carried out to display true-color video sequences with these devices. In this paper, a novel color quantization scheme for compressed video sequences is proposed. The scheme generates a color palette for each shot based on an estimated cumulative color histogram of the shot. It operates in the compressed domain directly and hence saves a considerable amount of computational effort for decompression. It produces no screen flicker and can handle both zooming and panning. Unlike the approach used in [6], the proposed approach generates a color palette which is not biased toward a single frame. As compared with the approach using DC sequences, the number of training colors involved in palette generation is significantly reduced, which in turn reduces the complexity of the process

significantly. Simulation results show that the quality of the output obtained with the proposed scheme is better than that obtained with other conventional shot-oriented color quantization schemes operating in the compressed domain.

Table 1: Total number of different representative colors in different shots

Video Sequence          QP  Total no. of frames  DC-sequence  Approximated DC-sequence  Ours
Negotiator Shot-1       16  144                  82673        48064                      6768
Negotiator Shot-2       16  144                  82323        26868                      6222
Negotiator Shot-3        8  115                  46321        12368                      3112
Negotiator Shot-4       12  201                  68513        25841                     11350
Vertical-Limit Shot-1   16  152                  72179        43486                      9378
Vertical-Limit Shot-2    8   68                  47738        36242                      8356

Table 2: Average PSNR per frame in a shot (dB)

Video sequence              QP  UQ-EF  VQ-EF  VQ-AF  VQ-DCS  VQ-DCSa  VQ-DCH  CQ-DCKF  MC-DCS  MC-DCSa  MC-DCH
Negotiator Shot-1            8  23.29  37.80  36.65  33.92   31.65    33.52   28.94    32.70   28.16    32.50
  (Zooming In)              12  23.19  37.13  36.05  33.17   31.58    32.84   28.79    32.42   28.57    31.94
                            16  23.07  36.71  35.70  32.67   31.15    32.35   28.66    31.81   28.67    31.51
Negotiator Shot-2            8  23.35  37.93  36.67  33.90   33.07    33.66   29.17    33.50   30.05    32.62
  (Zooming Out)             12  23.22  37.36  36.17  33.30   32.83    33.11   28.93    33.03   30.05    32.25
                            16  23.16  36.93  35.82  32.90   32.41    32.69   28.67    32.39   28.98    32.02
Negotiator Shot-3            8  24.02  41.11  40.01  38.28   36.61    38.30   33.58    37.82   34.95    37.65
  (Zooming Out + Panning)   12  23.98  40.97  39.79  38.17   36.21    37.36   33.56    37.72   34.89    37.54
                            16  23.95  40.97  39.88  38.34   36.80    37.98   33.54    37.52   35.05    37.50
Vertical Limit Shot-2        8  20.50  39.57  38.11  35.97   31.74    36.80   27.12    35.49   32.39    34.98
  (Panning)                 12  20.48  39.41  37.68  35.77   31.70    36.56   27.09    35.63   32.22    35.90
                            16  20.47  39.24  37.56  35.78   31.41    36.41   27.07    35.11   32.10    35.60
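For reference, the median-cut palette generation used by the 'MC-' schemes can be sketched as follows. This is our simplified, unweighted rendering of Heckbert's algorithm [1]; the fractional frequencies of occurrence from the proposed cumulative histogram are not used here.

```python
def median_cut(colors, n_colors):
    """Simplified median-cut palette generation: repeatedly split the box
    with the widest channel range at the median of that channel, then
    represent each box by its mean color."""
    boxes = [list(colors)]

    def widest_range(box):
        # largest (max - min) over the R, G and B channels
        return max(max(c[ch] for c in box) - min(c[ch] for c in box)
                   for ch in range(3))

    while len(boxes) < n_colors:
        box = max(boxes, key=widest_range)
        if len(box) < 2:
            break  # nothing left to split
        # channel with the widest range in the chosen box
        ch = max(range(3),
                 key=lambda k: max(c[k] for c in box) - min(c[k] for c in box))
        box.sort(key=lambda c: c[ch])
        mid = len(box) // 2
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]

    # each box is represented by the mean of its colors
    return [tuple(sum(c[ch] for c in box) // len(box) for ch in range(3))
            for box in boxes]
```

As the paper notes, this is far cheaper than LBG training; a weighted variant would sort and split using the histogram's fractional frequencies rather than raw color lists.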

Figure 1: Extracting the colors of an intercoded block to update the cumulative histogram. Block n (in frame m, the block of interest) and block c (in frame i) are intercoded; all other blocks are intracoded. Frame j is the reference frame of frame i, and frame i is the reference frame of frame m. The best matching block of block n covers blocks a, b, c and d; the best matching block of block c covers blocks p and q. Composition of block c: 0.5(blk. p + blk. q); composition of block n: 0.25(blk. a + blk. b + blk. c + blk. d). Resulting histogram updates: C_a, C_b, C_d each +0.25; C_p, C_q each +0.125, where C_x is the DC coefficient of block x.

Figure 2: The PSNR performance of various palette generation schemes at QP=8 under different conditions: zooming in, zooming out, zooming out + panning, and panning.

Figure 3: The PSNR performance of various palette generation schemes at QP=16 under different conditions: zooming in, zooming out, zooming out + panning, and panning.

Figure 4: Performance of various color quantization schemes in the zooming-in case: original of the 90th frame of the MPEG-encoded Negotiator Shot-1 (QP=16); VQ-DCS output; VQ-DCH output; and CQ-DCKF output.

Figure 5: Performance of various color quantization schemes in the panning case: original of the 66th frame of the MPEG-encoded Vertical Limit Shot-2 (QP=16); VQ-DCS output; VQ-DCH output; and CQ-DCKF output.

Figure 6: The PSNR performance of various palette generation schemes at QP=12 under different conditions: zooming in, zooming out, zooming out + panning, and panning.

REFERENCES

[1] P. Heckbert, "Color image quantization for frame buffer display," Computer Graphics, vol. 16, no. 3, pp. 297-307, July 1982.
[2] M. Gervautz and W. Purgathofer, "A simple method for color quantization: Octree quantization," in Graphics Gems, A. Glassner, Ed. New York: Academic Press, 1990, pp. 287-293.
[3] M. T. Orchard and C. A. Bouman, "Color quantization of images," IEEE Transactions on Signal Processing, vol. 39, no. 12, pp. 2677-2690, Dec. 1991.
[4] S. Wan, S. Wong, and P. Prusinkiewicz, "An algorithm for multidimensional data clustering," ACM Trans. Math. Software, vol. 14, no. 2, pp. 153-162, 1988.
[5] B. L. Yeo and B. Liu, "Rapid scene analysis on compressed video," IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, no. 6, pp. 533-544, Dec. 1995.
[6] S. C. Pei, C. M. Cheng and L. F. Ho, "Limited color display for compressed image and video," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 6, pp. 913-922, Sep. 2000.
[7] ISO/IEC 11172-2, Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s - Part 2: Video, 1993.
[8] ITU-T Recommendation H.263, Video coding for low bit-rate communication, Mar. 1996.
[9] Y. Linde, A. Buzo and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun., vol. 28, pp. 84-95, Jan. 1980.