Using Motion-Compensated Frame-Rate Conversion for the Correction of 3 : 2 Pulldown Artifacts in Video Sequences


IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 10, NO. 6, SEPTEMBER 2000

Using Motion-Compensated Frame-Rate Conversion for the Correction of 3 : 2 Pulldown Artifacts in Video Sequences

Kevin Hilman, Hyun Wook Park, Senior Member, IEEE, and Yongmin Kim, Fellow, IEEE

Abstract: Currently, the most popular method of converting 24 frames per second (fps) film to 60 fields/s video is to repeat each odd-numbered frame for three fields and each even-numbered frame for two fields. This method is known as 3 : 2 pulldown and is an easy and inexpensive way to perform 24 fps to 60 fields/s frame-rate conversion. However, 3 : 2 pulldown introduces artifacts, which are especially visible on progressive displays and during slow-motion playback. We have developed a motion-compensated frame-rate conversion algorithm to reduce the 3 : 2 pulldown artifacts. By using frame-rate conversion with interpolation instead of field repetition, mean square error and blocking artifacts are reduced significantly. The techniques developed here can also be applied to the general frame-rate conversion problem.

Index Terms: Film-to-video conversion, frame-rate conversion, motion compensation, spatiotemporal interpolation, video processing, 3 : 2 pulldown.

I. INTRODUCTION

THE POPULARITY of movies on video tape and digital versatile disk (DVD) that can be watched at home has helped motion pictures find their way to a large television audience. However, in order to view movies or filmed programs on television, film-to-video conversion is necessary. This is because film is produced at a rate of 24 frames per second (fps), whereas National Television Standards Committee (NTSC) televisions require a rate of 60 fields/s. Since NTSC televisions use the interlaced scanning mechanism, we use the term field for each half of an interlaced image and the term frame for a progressively-scanned image.
For conventional applications with low and medium resolutions, such as video conferencing or NTSC television, relatively straightforward solutions have been shown to suffice for film-to-video conversion, such as the 3 : 2 pulldown method with fixed spatiotemporal interpolation [1]. Fig. 1 shows the 3 : 2 pulldown technique for 24 fps to 60 fields/s conversion. In Fig. 1, frames are separated by thick vertical lines, and fields are separated by dotted lines. Each even-numbered frame in the 24-fps sequence corresponds to two fields in the 60 fields/s sequence, while each odd-numbered frame corresponds to three fields. This repetition of fields leads to frames with field-motion artifacts, because the two fields of a newly created frame come from two separate original frames, as shown in the shaded frames of Fig. 1. When shown on progressive displays, these frames with field-motion artifacts are especially noticeable because two fields from different original frames are displayed together to form a single progressive frame. For other applications, such as HDTV and slow-motion playback, simple approaches such as 3 : 2 pulldown have consistently led to noticeable degradations in video quality [1].

Fig. 1. Conversion of 24-fps film to 60 fields/s video using 3 : 2 pulldown. Superscripts e and o indicate the even and odd fields, respectively.

Manuscript received February 4, 1999; revised October 18. This paper was recommended by Associate Editor R. Lancini. K. Hilman and Y. Kim are with the Image Computing Systems Laboratory, Departments of Electrical Engineering and Bioengineering, University of Washington, Seattle, WA USA (e-mail: ykim@u.washington.edu). H. W. Park was with the Image Computing Systems Laboratory, Departments of Electrical Engineering and Bioengineering, University of Washington, Seattle, WA USA. He is now with the Korea Advanced Institute of Science and Technology, Seoul, Korea.
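The field-repetition pattern of 3 : 2 pulldown is easy to state in code. The sketch below (Python; the function name and field representation are illustrative, not from the paper) emits the (frame index, field parity) sequence that pulldown produces, making visible how four film frames become ten video fields:

```python
def pulldown_32(frames):
    """Map 24-fps film frames to a 60-fields/s sequence via 3:2 pulldown.

    The 1st, 3rd, ... film frames each contribute 3 fields and the 2nd,
    4th, ... contribute 2, so 4 film frames become 10 fields (24 -> 60).
    Each field is a (frame_index, parity) pair, with parity alternating
    'o' (odd) and 'e' (even) in display order.
    """
    fields = []
    parity = ['o', 'e']
    for i, _ in enumerate(frames):
        repeat = 3 if i % 2 == 0 else 2   # 3 fields, then 2, then 3, ...
        for _ in range(repeat):
            fields.append((i, parity[len(fields) % 2]))
    return fields

# Four film frames yield ten fields; the frames repeated for three fields
# are where a displayed frame ends up mixing fields from two different
# film frames (the shaded frames of Fig. 1).
fields = pulldown_32(['A', 'B', 'C', 'D'])
```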
To reduce these motion artifacts, linear interpolation or decimation could be used to generate frames at intermediate temporal locations with regard to the original frame sequence. However, it is well known that such techniques require a compromise between blurring and motion jerkiness [2]. It has been found that the use of motion information in standards-conversion problems is generally required in order to reduce the blurring and motion jerkiness [1], [3], [4]. We have developed a motion-compensated frame-rate conversion technique to obtain high-quality results without excessive computation requirements. To reduce blocking artifacts, we use an overlapping-block motion-compensation (OBMC) technique similar to that developed for H.263 [5] and MPEG-4 [6]. In this paper, Section II describes our development of the motion estimation (ME) and motion-compensated frame-rate conversion algorithm. Simulation results, comparisons in terms of mean-square-error (MSE) and blockiness, and computation requirements are described in Section III. Finally, conclusions are given in Section IV.

II. FRAME-RATE CONVERSION

Fig. 2 shows an example of the temporal distribution of frames in 24-to-30 fps frame-rate conversion. Two out of five new frames can be obtained by copying original frames, while the other three are interpolated. Since the quality of

the original frames is supposed to be better than that of the interpolated frames, it is a disadvantage that only half of the originally available frames are displayed. Also, the temporal locations of the interpolated frames are not symmetric with respect to the original frames. To linearly interpolate a frame located at distances d1 and d2 from the two original frames in Fig. 2, the weights to be used are d2/(d1 + d2) and d1/(d1 + d2), respectively. In 24-to-30 fps conversion, as shown in Fig. 2, the weights would be 1/5, 2/5, 3/5, and 4/5, and they require division by five, which is a more difficult task compared to division by two or four. This problem also has to be faced when a noninteger conversion factor is involved [7].

Fig. 2. Temporal distribution of frames in 24-to-30 fps conversion.

Castagno et al. [7] proposed an alternative time-shifted temporal distribution of frames for a 1.25 up-conversion factor. The time-shifted temporal distribution for 24-to-30 fps conversion is shown in Fig. 3. In this time-shifted scheme, two out of five frames are interpolated and the remaining three are replicas of the original frames. The first new frame is copied, the second is copied and time-shifted, the third is interpolated from original frames 2 and 3 and time-shifted, the fourth is interpolated from original frames 3 and 4 and time-shifted, and finally, the fifth is copied and time-shifted. Here, the interpolation weights are all 1/2, which is much easier to support than the weights of Fig. 2. Using this method, the new frames are then displayed every 33.3 ms for 30-fps display. It is possible that the time-shifted method could introduce a certain amount of motion jerkiness since some of the frames undergo a small temporal displacement. However, subjective testing has shown that this artifact is not disturbing even in the most critical condition of slow uniform motion [7].
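The time-shifted distribution reduces to a simple per-group mapping: every four input frames become five output frames, three copied and two averaged with weight 1/2. A minimal Python sketch (frames are treated as numbers so the averaging is visible; the function name is illustrative):

```python
def time_shifted_24_to_30(frames):
    """Time-shifted 24-to-30 fps conversion (after Castagno et al. [7]).

    Each group of four input frames becomes five output frames: three
    are replicas (possibly time-shifted) and two are averages of
    neighboring inputs, so every interpolation weight is exactly 1/2.
    """
    out = []
    for g in range(0, len(frames) - 3, 4):
        a, b, c, d = frames[g:g + 4]
        out.extend([
            a,            # copied
            b,            # copied and time-shifted
            (b + c) / 2,  # interpolated from originals 2 and 3, shifted
            (c + d) / 2,  # interpolated from originals 3 and 4, shifted
            d,            # copied and time-shifted
        ])
    return out

# Four "frames" (here just sample values) become five output frames.
print(time_shifted_24_to_30([1.0, 2.0, 3.0, 4.0]))  # [1.0, 2.0, 2.5, 3.5, 4.0]
```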
In this paper, the developed motion-compensated frame-rate conversion method adopts the approach of Fig. 3.

Fig. 3. Temporal distribution of frames for time-shifted 24-to-30 fps conversion.

A. Spatiotemporal-Correlation ME

The first step in motion-compensated interpolation is to determine the motion of the objects in a scene. This problem is known as ME. The block-matching technique has been widely used for ME due to its simplicity and ease of implementation in hardware. Our approach focuses mainly on taking advantage of spatial and temporal correlations among motion vectors (MVs). It has been observed that block MVs are correlated with the MVs of their spatially adjacent blocks [8]. This is due to the fact that in many natural video scenes, rigid bodies are typically large with respect to the block size. The main idea is to select a set of initial MV candidates from spatially and/or temporally neighboring blocks and choose the best one (according to a certain rule) as the initial estimate for further refinement.

MV Candidate Selection: For the first few blocks in a frame, there are no good spatial predictors. One approach is to perform a full search on several blocks in the top-left corner. Another method is to assume zero motion for the first blocks and let the refinement process find good MVs. After initialization, the algorithm continues along the row from left to right, and then proceeds to the next row. For each block, there are two spatial candidate MVs: one from the block immediately to the left of the current block, and the other from the block immediately above. There is also a temporal candidate MV from the block in the same position of the previous frame. A sum of absolute differences (SAD) is calculated between the current block and each of the three blocks determined by these candidate MVs. An SAD is also computed with the zero MV. Of these four, the vector that gives the minimum block SAD is chosen as the best candidate MV.
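The candidate-selection step can be sketched as follows (a minimal Python sketch; frames are plain 2-D lists, out-of-frame references are clamped to the border, and the function names are illustrative, not from the paper):

```python
def sad(cur, ref, bx, by, mvx, mvy, bs=16):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by the candidate MV (mvx, mvy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(bs):
        for x in range(bs):
            ry = min(max(by + y + mvy, 0), h - 1)  # clamp at frame borders
            rx = min(max(bx + x + mvx, 0), w - 1)
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def best_candidate(cur, ref, bx, by, left_mv, above_mv, temporal_mv, bs=16):
    """Pick the initial MV estimate by minimum block SAD among the spatial
    neighbors (left, above), the temporal neighbor (co-located block of
    the previous frame), and the zero vector."""
    candidates = [left_mv, above_mv, temporal_mv, (0, 0)]
    return min(candidates, key=lambda mv: sad(cur, ref, bx, by, mv[0], mv[1], bs))
```

The chosen vector then seeds the refinement stage, so only four SADs are spent before refinement begins.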
MV Refinement Process: After selecting the best candidate vector, it is refined further using the three-step search. The refinement process begins with the best candidate MV: the new search center is set at that vector in MV space, and, starting from the new center, a three-step search [9] is performed.

B. Frame Interpolation

Once MVs for all the blocks are determined, this motion information is utilized in motion-compensated frame interpolation. As shown in Fig. 3, the frames to be interpolated lie halfway between adjacent frames. To find the blocks in the

(k-1)th and kth frames to be used for interpolation, we start with the estimated MV, v, as shown in Fig. 4(a). This vector represents the motion of a block between frames k-1 and k, and it is calculated using our block-based ME algorithm. Using the assumption of linear motion, we estimate that each block will have moved one half of its MV distance in the interpolated frame. Before interpolating a block in the interpolated frame, one block from each of the (k-1)th and kth frames is fetched, as shown in Fig. 4(b). The interpolation is performed by averaging these two motion-compensated blocks from frames k-1 and k. This can be thought of as the interpolation stage of an MPEG-2 decoder for B frames [9] without the addition of any residual error signal.

C. Reduction of Blocking Artifacts

A common side effect of block-based processing is blocking artifacts on the block boundaries. As an attempt to reduce the blocking artifacts, we developed a technique similar to the overlapping-block motion-compensation (OBMC) methods in video coding standards, such as H.263 [5] and MPEG-4 [6]. Fig. 5 shows how the interpolation is performed to reduce blockiness, where the grid size is 16 x 16. This algorithm is performed in two passes. In pass 1, each block on the standard grid is copied from a motion-compensated block in the (k-1)th frame, shown as the thick-outlined block in Fig. 5(a). The MVs used for this motion compensation are derived from the block MVs calculated during ME in the same way as in Section II-B. In pass 2, each block on a shifted grid is copied from a motion-compensated block in the kth frame, shown as the thin-outlined block in Fig. 5(b). The origin of the shifted block grid is at (8, 8) instead of (0, 0), causing blocks on the shifted grid to overlap with blocks on the standard grid. Each block on the shifted grid overlaps with four blocks on the standard grid. These four blocks are marked as 1, 2, 3, and 4 in Fig. 5(a).
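The two-pass overlapped combination can be illustrated with a simplified one-dimensional sketch (Python). The triangular weighting below is an assumption for illustration; the paper's actual weight matrices are those of Fig. 6. What matters is the structure: one prediction per grid, weights that fall off toward each grid's own block boundaries, and weights that sum to one at every pixel:

```python
def obmc_combine(pass1, pass2, bs=16):
    """Combine two motion-compensated predictions of one interpolated row.

    `pass1` comes from blocks on the standard grid (the (k-1)th frame)
    and `pass2` from blocks on a grid shifted by bs/2 (the kth frame).
    Each prediction's weight ramps down toward its own grid's block
    boundaries, so the other prediction dominates there and block edges
    are de-emphasized; the two weights always sum to 1.
    """
    half = bs // 2
    out = []
    for x in range(len(pass1)):
        # distance from the nearest standard-grid block boundary
        d1 = min(x % bs, bs - 1 - x % bs) + 0.5
        # distance from the nearest shifted-grid block boundary
        xs = (x + half) % bs
        d2 = min(xs, bs - 1 - xs) + 0.5
        w1 = d1 / (d1 + d2)
        out.append(w1 * pass1[x] + (1 - w1) * pass2[x])
    return out
```

At a standard-grid boundary the pass-2 prediction dominates, and vice versa at a shifted-grid boundary, which is exactly the smoothing behavior described above.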
Shifting by 8 in each direction allows the block from the kth frame to contribute equally to the four blocks that it overlaps on the standard grid. The motion compensation for pass 2 is performed in the same way as in pass 1. However, our ME algorithm only computes MVs on the standard grid, so MVs on the shifted grid must also be generated to perform motion compensation. Based on the spatial correlation among MVs, interpolating these vectors from neighboring MVs reduces computational complexity without significantly reducing the quality of the interpolated images. Therefore, the MVs on the shifted grid are interpolated from the four neighboring MVs on the standard grid using bilinear interpolation rather than computed explicitly using ME. The overlapping blocks are then combined using a weighted combination, as shown in Fig. 5(c). For example, the weight matrices used to combine the overlapping section of the outlined blocks [shaded area in Fig. 5(c)] are shown in Fig. 6. The weights used to combine the block from the kth frame and the overlapping portions of the other three blocks are similar. For the (k-1)th frame, the weights decrease as the block boundary is approached in the standard grid, and for the kth frame, the weights decrease as the block boundary is approached in the shifted grid. This type of weighting causes smooth transitions across block boundaries and less pronounced block edges because the contribution of the (k-1)th and kth frames to the interpolated frame varies depending on the pixel location within the block.

Fig. 4. (a) Estimated MV, v, computed during ME. (b) Interpolation using blocks from the (k-1)th and kth frames.

Fig. 5. Interpolation using overlapping blocks. (a) Motion-compensated block from the (k-1)th frame (standard grid). (b) Overlapping motion-compensated block from the kth frame (shifted grid). (c) Combination of the two motion-compensated blocks.

Fig. 6. Weight matrices for combining two overlapping blocks [shaded area in Fig. 5(c)] from (a) the (k-1)th frame and (b) the kth frame.

III. SIMULATION RESULTS

A. Frame Doubling

Several quantitative measures have been used in an attempt to shed some light on the quality of the interpolated images. One

method consists of using our frame-rate conversion algorithm to perform frame-rate doubling on a progressive sequence, as shown in Fig. 7. Every other frame of the original sequence was used as input to the frame-rate doubler, and the interpolated frames with a dashed outline were generated. An MSE could then be computed between the original and interpolated frames. Fig. 8 shows the MSE for the interpolated frames with two different progressive sequences. Fig. 8(a) shows the results of the Football sequence, while Fig. 8(b) shows the results of the Miss America sequence. The interpolated frame numbers on the horizontal axis are defined in Fig. 7. The Football sequence was used to test the algorithm when applied to sequences with large and complicated motion, and the Miss America sequence was used to evaluate how the algorithm performs on sequences with little motion. Each figure has two error measures: the first using full-search ME and the second using our ME algorithm based on spatiotemporal correlation among MVs. In both sequences, it is clear that our spatiotemporal correlation ME algorithm produces interpolated frames with less error than the full-search ME algorithm. Over the 50 interpolated frames of the Football sequence, the average MSE is 71.2 for our algorithm and 120.3 for the full-search algorithm, giving 41% less error with our algorithm. For the Miss America sequence, the average MSE from both algorithms is 10.5, showing no improvement in MSE with our algorithm. For the Football sequence, the difference in MSE between algorithms is much larger than that for the Miss America sequence because the Football sequence has much more motion and is more challenging to interpolate than the Miss America sequence.
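The frame-doubling evaluation can be sketched as follows (Python; `interpolate` stands in for any frame-rate doubler, frames are plain 2-D lists, and the names are illustrative, not from the paper):

```python
import math

def mse(a, b):
    """Mean square error between two equally-sized frames (2-D lists)."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(m, peak=255.0):
    """PSNR in dB of an 8-bit sequence, from its mean square error."""
    return float('inf') if m == 0 else 10.0 * math.log10(peak * peak / m)

def evaluate_doubling(original, interpolate):
    """Drop every other frame, re-create it, and score the result.

    `interpolate(prev, nxt)` is given frames 2i and 2i+2 and must
    predict frame 2i+1; the returned value is the average MSE of the
    predictions against the withheld frames.
    """
    errors = []
    for i in range(0, len(original) - 2, 2):
        predicted = interpolate(original[i], original[i + 2])
        errors.append(mse(predicted, original[i + 1]))
    return sum(errors) / len(errors)
```

This is the harness behind Fig. 7: the withheld odd frames serve as the gold standard that plain frame interpolation otherwise lacks.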
It was difficult to compare our results to others in the literature because none have given numerical comparisons of frame-interpolated sequences to 3 : 2 pulldown sequences. Because of the lack of a gold standard in frame interpolation, many researchers did not perform objective measurements at all. Many simply made subjective claims about the quality of the interpolated images [3], [4], while others just showed interpolated frames [2], [10]. However, we can compare the results of our frame-interpolation algorithm with the results of other researchers when performing frame doubling. Wong and Au [11] used their algorithm to perform frame doubling on the Miss America and Claire sequences that were temporally subsampled by a factor of two. They reported average PSNRs of 37.7 and 41.1 dB for Miss America and Claire, respectively, while our spatiotemporal correlation ME algorithm provides average PSNRs of 37.9 and 45.7 dB, respectively.

Fig. 7. Calculation of error in interpolated frames.

B. Correlation of MV Fields

Our spatiotemporal correlation ME algorithm produces MVs that are more correlated than those of the full-search algorithm. To illustrate this MV correlation, we can analyze the MV differences computed between adjacent block MVs from 50 frames of the Football sequence. First, we compute the differences between the horizontal components and the vertical components of spatially-adjacent block MVs. The distance is then defined as the sum of the absolute differences of the two components. Fig. 9 shows the distance histograms for our ME algorithm and the full-search ME algorithm. The average vector distance is 5.9 for our algorithm and 9.1 for the full-search algorithm. Also, the histograms in Fig. 9 show that there are more MV differences close to zero with our algorithm. MVs generated by our algorithm are more correlated with their spatial neighbors than those from full-search ME because our ME algorithm uses neighboring spatial and temporal MVs as starting points for the MV search.

C. Artifacts of Motion-Compensated Interpolation

For video coding, such as MPEG, correlated MV fields are not as important as they are when used for frame interpolation. In MPEG, when a less-optimal MV is found, the penalty is that a larger residual error needs to be coded in the bitstream (requiring more bits), but the resulting displayed frame can still have good quality because of the additional error information. With block-based frame interpolation, no residual error information is used. Blocking artifacts in the interpolated frames are typically caused by discontinuities in MV fields. Because our algorithm is designed to produce correlated MV fields, the likelihood of these blocking artifacts is reduced. However, some blocking artifacts may still exist due to the nature of block-based processing. In order to quantitatively measure the blockiness in the interpolated images, a blockiness factor was calculated by first computing all horizontal and vertical edges in a frame. The horizontal and vertical edge images were computed as absolute differences between neighboring pixels:

E_h(x, y) = |f(x, y + 1) - f(x, y)|, 0 <= x < W, 0 <= y < H - 1 (1)

E_v(x, y) = |f(x + 1, y) - f(x, y)|, 0 <= x < W - 1, 0 <= y < H (2)

where f is the frame, W and H are the width and height of the frame, respectively, E_h is the horizontal-edge image, and E_v is the vertical-edge image. The total edge image

was derived by thresholding the sum of the horizontal and vertical edge images to reduce noise, where

E(x, y) = 1 if E_h(x, y) + E_v(x, y) > T, and 0 otherwise. (3)

The threshold T used for the Football sequence was 10, so that edges from the grass in the background were eliminated. When the threshold was varied from 5 to 20, the results were not significantly different. We then computed one value summing all edges that lie on block boundaries and another value summing all edges in the frame that do not lie on block boundaries. The blockiness factor is the ratio of the strength of the edges on block boundaries to that of the rest of the edges in the frame:

B = (sum of E(x, y) over pixels on block boundaries) / (sum of E(x, y) over all other pixels) (4)

where a pixel (x, y) lies on a block boundary if x = 16*floor(x/16) or y = 16*floor(y/16), and floor(z) is the largest integer less than or equal to z.

Fig. 8. MSE calculations comparing our spatiotemporal correlation ME algorithm to full-search ME. (a) Football sequence, with average MSEs of 72.1 and 120.3, respectively. (b) Miss America sequence, with an average MSE of 10.5 for both.

Fig. 10 shows the blockiness factor of the interpolated images produced by our algorithm without and with OBMC. It also

Fig. 9. Histograms of MV differences for our ME algorithm and the full-search ME algorithm.

Fig. 10. Blockiness in interpolated images. The average is 0.12 for the original sequence, 0.28 for our algorithm without OBMC, 0.14 for our algorithm with OBMC, and 0.37 for the full-search algorithm.

shows the blockiness factor with the full-search ME algorithm. The correlated MV fields used for interpolation by our algorithm cause less blockiness than those of the full-search algorithm: the average blockiness of our algorithm without OBMC and of the full-search algorithm is 0.28 and 0.37, respectively. However, the blockiness was still disturbing in both cases, especially in scenes with large motion. When the blocking-artifact reduction algorithm employing OBMC is applied to our frame interpolation algorithm, the blockiness is reduced significantly, as shown in Fig. 10. With OBMC, the average blockiness factor of 0.14 is only slightly higher than that of the original sequence (0.12), but 50% lower than that of our algorithm without OBMC (0.28). The reduction in the blockiness factor is due to the fact that the weighting parameters used in combining blocks make the boundaries of overlapping blocks less conspicuous and the block edges less pronounced. Fig. 11 compares the MSEs of our algorithm with and without OBMC on the Football sequence. The OBMC has little impact on the interpolated frames in terms of MSE, considering that the average MSE increases from 72.1 without OBMC to 75.1 with OBMC. Fig. 12(a)-(d) shows a frame-interpolated image from 3 : 2 pulldown, full-search ME, our method without OBMC, and our method with OBMC, respectively. The image from 3 : 2 pulldown shows many field-motion artifacts, and the images from full-search ME and our method without OBMC show some blocking artifacts around the foot area.
The image from full-search ME shows slightly more blocking artifacts than our method without OBMC. However, our method with OBMC shows the best image quality, without the field-motion and blocking artifacts. The blocking-artifact reduction algorithm was designed to reduce the artifacts of block-based processing without significantly increasing computational complexity or reducing the image quality. Although pixel-level MVs have also been shown to reduce blocking artifacts, they introduce a significant increase in computational complexity [12], [13].
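The blockiness measure described above can be sketched as follows (Python; the simple neighboring-pixel absolute differences used as the edge operators are consistent with the description, and a ratio near that of the original sequence indicates few block-grid artifacts):

```python
def blockiness(frame, bs=16, threshold=10):
    """Blockiness factor: strength of thresholded edges lying on block
    boundaries relative to edges everywhere else in the frame."""
    h, w = len(frame), len(frame[0])
    on, off = 0, 0
    for y in range(h - 1):
        for x in range(w - 1):
            eh = abs(frame[y + 1][x] - frame[y][x])  # horizontal edge
            ev = abs(frame[y][x + 1] - frame[y][x])  # vertical edge
            e = 1 if eh + ev > threshold else 0      # thresholded edge map
            if x % bs == 0 or y % bs == 0:           # pixel on block boundary
                on += e
            else:
                off += e
    return on / off if off else float('inf')
```

On a frame with edges distributed uniformly, the factor settles near the ratio of boundary pixels to interior pixels; blocking artifacts push it well above that baseline.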

Fig. 11. MSE computation using overlapping-block techniques. Average MSE is 72.1 for our algorithm without OBMC, and 75.1 with OBMC.

Fig. 12. Frame-interpolated images from: (a) 3 : 2 pulldown; (b) full-search ME; (c) our method without OBMC; and (d) our method with OBMC. These images are a magnified portion of the Football sequence.

D. Computation Requirements

The ME task is the most compute-intensive part of the frame-rate conversion algorithm. There are three operations (one subtract, one absolute, and one add) for each pixel when computing a block SAD. Since only two new frames are interpolated for every four original frames, ME needs to be done only for these two frames, resulting in an effective frame rate of 12 fps.

This gives a total operation count of 3 operations/pixel x 256 pixels/block x (number of search locations) x 1350 blocks/frame x 12 frames/s, where the frame and block sizes are 720 x 480 and 16 x 16, respectively. For full-search ME with a search range of +/-p, there are (2p + 1)^2 search locations. For p of 7 and 15, the ME operations require 2.79 and 11.96 billion operations per second (BOPS), respectively. For the traditional three-step search algorithm, which is more suitable for real-time applications, there are 25 search locations, resulting in 311 million operations per second (MOPS). For our ME algorithm, there are 4 search locations for the initial MV and 24 search locations for the three-step search, giving a total of 28 search locations, which leads to 348 MOPS. The computation requirement of our ME algorithm is slightly higher than that of the three-step search, but significantly less than that of the full-search method. On the other hand, the three-step search produces much lower PSNRs than the full search [14], not to mention our ME algorithm. For the interpolation stage, we employ the time-shifted temporal distribution proposed by Castagno et al. [7] and described in Section II. Using this method, only simple averaging is needed for the interpolation of the motion-compensated pixels. There are three operations (one addition, one rounding, and one shift) per pixel to perform the averaging. With 4:2:0 YCbCr video, the total number of pixels is 1.5 times the number of luminance pixels. This results in 3 operations/pixel x (1.5 x 720 x 480) pixels/frame x 12 frames/s = 18.7 MOPS. With the addition of the OBMC method, a weighted sum needs to be computed instead of an average. The weighting can be performed by either a fixed-point multiply or a lookup table, resulting in 2 operations/pixel for the weighting and 1 operation/pixel for the addition. Thus, interpolation using OBMC also uses a total of 3 operations/pixel and 18.7 MOPS, just as in the original interpolation algorithm.
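The operation counts in this section are straightforward to reproduce. A small Python sketch (assuming a 720 x 480 frame, 16 x 16 blocks, and the effective 12-fps ME rate noted above; the frame and block sizes are assumptions used for illustration):

```python
def me_ops(search_locations, width=720, height=480, block=16, fps=12):
    """Operations/s for block-matching ME: 3 ops (subtract, absolute,
    add) per pixel of each block, per search location, per block in the
    frame, at the effective 12-fps rate (ME is needed for only two of
    every five output frames, i.e., two per four original frames)."""
    blocks = (width // block) * (height // block)   # 45 x 30 = 1350 blocks
    return 3 * block * block * search_locations * blocks * fps

full_7 = me_ops((2 * 7 + 1) ** 2)   # full search, range +/-7: 2.79 BOPS
three_step = me_ops(25)             # 1 center + 8 points x 3 steps
ours = me_ops(28)                   # 4 candidate SADs + 24 refinement SADs

# Averaging interpolation: 3 ops/pixel over 1.5x the luminance pixels
# (4:2:0 chrominance) at 12 interpolated frames/s.
interp = 3 * int(1.5 * 720 * 480) * 12
```

Printing these confirms the ratios in the text: the full search costs roughly an order of magnitude more than either fast method, while interpolation itself is comparatively negligible.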
Therefore, our 3 : 2 pulldown correction algorithm takes a total of about 367 MOPS of sustained performance. This is well within the computing power of current programmable mediaprocessors, such as the Philips TriMedia with a peak performance of 4 BOPS [15], the Texas Instruments C80 and C62x series with peak performances of 2.4 BOPS [16] and 2 BOPS [17], respectively, and the Equator Technologies/Hitachi MAP with a peak performance of 20 BOPS [18].

IV. CONCLUSION

We have combined techniques of fast ME and motion-compensated frame-rate conversion into an integrated framework for correcting 3 : 2 pulldown artifacts in video sequences that originate from film. This algorithm was designed for a video playback environment where real-time processing is required. Therefore, keeping computational complexity low has been given high priority. Our fast ME algorithm was shown to produce MVs suitable for frame interpolation and to generate fewer blocking artifacts than MVs from the full-search ME algorithm. This occurs because our algorithm takes into account the spatial and temporal correlation among MVs, which results in smooth motion-compensated interpolation in intermediate frames. The algorithm developed is also much less compute-intensive than the full-search algorithm, which allows for easier and less-expensive implementations in hardware or in software on programmable mediaprocessors. The blocking artifacts generated by block-based processing were reduced substantially by the blocking-artifact reduction algorithm. The methods developed here can be useful in many application areas. Most movies recorded onto DVD have been stored using 3 : 2 pulldown or using the MPEG-2 repeat_first_field and top_field_first flags. Our algorithm could be incorporated in a DVD playback device to correct the 3 : 2 pulldown artifacts that would otherwise be present in the displayed video.
The frame-rate interpolation stage could be used in the same device to enhance slow-motion playback by interpolating intermediate frames for smoother slow motion. Our frame-rate interpolation can also be employed in video conferencing or other video applications where an increase in the frame rate is desired. For example, many video conferencing applications use low frame rates, such as 10 or 15 fps, due to the limited communications bandwidth and computing power available. Using our algorithm, the frame rate of a video conferencing session could easily be doubled, resulting in video with smoother motion.

REFERENCES

[1] R. L. Lagendijk and M. I. Sezan, "Motion compensated frame rate conversion of motion pictures," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, vol. 3.
[2] T. T. Chao and C. L. Huang, "Motion-compensated spatio-temporal interpolation for frame rate up-conversion of interlaced or progressive image sequences," in Proc. SPIE: Visual Communications and Image Processing Conf., vol. 2308.
[3] B. R. Mason and R. L. Robinson, "Applications of motion compensation to standards conversion and film transfer," SMPTE J., vol. 102.
[4] H. Sonehara, Y. Nojiri, K. Iguchi, Y. Sugiura, and H. Hirabayashi, "Reduction of motion judder on video images converted from film," SMPTE J., vol. 106.
[5] Video Coding for Low Bit Rate Communication, ITU-T Recommendation H.263 Version 2.
[6] Generic Coding of Audio-Visual Objects (Part 2: Visual), ISO/IEC JTC1/SC29/WG11 N2553.
[7] R. Castagno, P. Haavisto, and G. Ramponi, "A method for motion adaptive frame rate up-conversion," IEEE Trans. Circuits Syst. Video Technol., vol. 6, Oct. 1996.
[8] J. Chalidabhongse and C.-C. J. Kuo, "Fast motion vector estimation using multiresolution-spatio-temporal correlations," IEEE Trans. Circuits Syst. Video Technol., vol. 7, June 1997.
[9] V. Bhaskaran and K. Konstantinides, Image and Video Compression Standards. Norwell, MA: Kluwer.
[10] C. Cafforio, F. Rocca, and S. Tubaro, "Motion compensated image interpolation," IEEE Trans. Commun., vol. 38.
[11] C. K. Wong and O. C. Au, "Modified motion compensated temporal frame interpolation for very low bit rate video," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, vol. 4.
[12] K. Xie, L. Van Eycken, and A. Oosterlinck, "Determining accurate and reliable motion fields for motion-compensated interpolation," IEEE Trans. Circuits Syst. Video Technol., vol. 5.
[13] T. Wuyts, E. Van Eycken, and A. Oosterlinck, "Calculating motion vectors for an interpolated motion field," in Proc. SPIE: Advanced Image and Video Communications and Storage Technologies, vol. 2451, 1995.
[14] R. Li, B. Zeng, and M. L. Liou, "A new three-step search algorithm for block motion estimation," IEEE Trans. Circuits Syst. Video Technol., vol. 4, 1994.
[15] Philips Corp. (1998) Trimedia. [Online]. Available:
[16] Texas Instruments. (1998) TMS320C8x Executive Summary. [Online]. Available:

[17] Texas Instruments. (1998) TMS320C62x Executive Summary. [Online]. Available:
[18] C. Basoglu, R. J. Gove, K. Kojima, and J. O'Donnell, "A single-chip processor for media applications: The MAP1000," Int. J. Imaging Syst. Technol., vol. 10.

Kevin Hilman received the B.S. degree in computer engineering from the University of Minnesota, Duluth, in May 1996, and the M.S. degree in electrical engineering from the University of Washington, Seattle, in October 1998. He is interested in the area of digital video technologies. Currently, he is a Technical Staff member of Equator Technologies, Seattle, WA.

Hyun Wook Park (A'93-SM'99) received the B.S. degree in electrical engineering from Seoul National University, Seoul, Korea, in 1981, and the M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Seoul, Korea, in 1983 and 1988, respectively. He has been an Associate Professor in the Department of Electrical Engineering, KAIST, Taejon, Korea. He was a Research Associate from 1989 to 1992 and a Visiting Associate Professor from 1998 to 1999 at the University of Washington, Seattle. His current research interests include image computing systems, image compression, image segmentation, medical imaging, and multimedia systems. Dr. Park is a member of SPIE.

Yongmin Kim (S'79-M'82-SM'87-F'96) received the B.S. degree in electronics engineering from Seoul National University, Seoul, Korea, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Wisconsin, Madison. He is Professor and Chair of Bioengineering, Professor of Electrical Engineering, and Adjunct Professor of both Radiology and Computer Science and Engineering at the University of Washington, Seattle.
His research interests include algorithms and systems for multimedia, image processing, computer graphics, medical imaging, high-performance programmable processor architecture, and modeling and simulation. His group has filed 64 invention disclosures and 40 patents, and 20 commercial licenses have been signed. Dr. Kim is a member of the Editorial Board of the IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE, the IEEE Press series in Biomedical Engineering, and Annual Reviews of Biomedical Engineering. He received the Early Career Achievement Award from the IEEE Engineering in Medicine and Biology Society in 1988.


More information

(a) (b) Figure 1.1: Screen photographs illustrating the specic form of noise sometimes encountered on television. The left hand image (a) shows the no

(a) (b) Figure 1.1: Screen photographs illustrating the specic form of noise sometimes encountered on television. The left hand image (a) shows the no Chapter1 Introduction THE electromagnetic transmission and recording of image sequences requires a reduction of the multi-dimensional visual reality to the one-dimensional video signal. Scanning techniques

More information

Multimedia Communications. Image and Video compression

Multimedia Communications. Image and Video compression Multimedia Communications Image and Video compression JPEG2000 JPEG2000: is based on wavelet decomposition two types of wavelet filters one similar to what discussed in Chapter 14 and the other one generates

More information

Overview: Video Coding Standards

Overview: Video Coding Standards Overview: Video Coding Standards Video coding standards: applications and common structure ITU-T Rec. H.261 ISO/IEC MPEG-1 ISO/IEC MPEG-2 State-of-the-art: H.264/AVC Video Coding Standards no. 1 Applications

More information

DWT Based-Video Compression Using (4SS) Matching Algorithm

DWT Based-Video Compression Using (4SS) Matching Algorithm DWT Based-Video Compression Using (4SS) Matching Algorithm Marwa Kamel Hussien Dr. Hameed Abdul-Kareem Younis Assist. Lecturer Assist. Professor Lava_85K@yahoo.com Hameedalkinani2004@yahoo.com Department

More information

THE TRANSMISSION and storage of video are important

THE TRANSMISSION and storage of video are important 206 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 21, NO. 2, FEBRUARY 2011 Novel RD-Optimized VBSME with Matching Highly Data Re-Usable Hardware Architecture Xing Wen, Student Member,

More information

DETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION

DETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION DETECTION OF SLOW-MOTION REPLAY SEGMENTS IN SPORTS VIDEO FOR HIGHLIGHTS GENERATION H. Pan P. van Beek M. I. Sezan Electrical & Computer Engineering University of Illinois Urbana, IL 6182 Sharp Laboratories

More information

International Journal for Research in Applied Science & Engineering Technology (IJRASET) Motion Compensation Techniques Adopted In HEVC

International Journal for Research in Applied Science & Engineering Technology (IJRASET) Motion Compensation Techniques Adopted In HEVC Motion Compensation Techniques Adopted In HEVC S.Mahesh 1, K.Balavani 2 M.Tech student in Bapatla Engineering College, Bapatla, Andahra Pradesh Assistant professor in Bapatla Engineering College, Bapatla,

More information

An Efficient Reduction of Area in Multistandard Transform Core

An Efficient Reduction of Area in Multistandard Transform Core An Efficient Reduction of Area in Multistandard Transform Core A. Shanmuga Priya 1, Dr. T. K. Shanthi 2 1 PG scholar, Applied Electronics, Department of ECE, 2 Assosiate Professor, Department of ECE Thanthai

More information

Design of a Fast Multi-Reference Frame Integer Motion Estimator for H.264/AVC

Design of a Fast Multi-Reference Frame Integer Motion Estimator for H.264/AVC http://dx.doi.org/10.5573/jsts.2013.13.5.430 JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, VOL.13, NO.5, OCTOBER, 2013 Design of a Fast Multi-Reference Frame Integer Motion Estimator for H.264/AVC Juwon

More information

Adaptive reference frame selection for generalized video signal coding. Carnegie Mellon University, Pittsburgh, PA 15213

Adaptive reference frame selection for generalized video signal coding. Carnegie Mellon University, Pittsburgh, PA 15213 Adaptive reference frame selection for generalized video signal coding J. S. McVeigh 1, M. W. Siegel 2 and A. G. Jordan 1 1 Department of Electrical and Computer Engineering 2 Robotics Institute, School

More information

A Novel Macroblock-Level Filtering Upsampling Architecture for H.264/AVC Scalable Extension

A Novel Macroblock-Level Filtering Upsampling Architecture for H.264/AVC Scalable Extension 05-Silva-AF:05-Silva-AF 8/19/11 6:18 AM Page 43 A Novel Macroblock-Level Filtering Upsampling Architecture for H.264/AVC Scalable Extension T. L. da Silva 1, L. A. S. Cruz 2, and L. V. Agostini 3 1 Telecommunications

More information

THE VIDEO CODEC plays an important role in the

THE VIDEO CODEC plays an important role in the IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 6, NO. 3, JUNE 2004 489 A Region-Based H.263+ Codec and Its Rate Control for Low VBR Video Hwangjun Song and C.-C. Jay Kuo, Fellow, IEEE Abstract This paper presents

More information