FRAME RATE CONVERSION OF INTERLACED VIDEO


Zhi Zhou, Yeong Taeg Kim
Samsung Information Systems America, Digital Media Solution Lab
3345 Michelson Dr., Irvine, CA 92612

Gonzalo R. Arce
University of Delaware, Dept. of Electrical & Computer Engineering
140 Evans Hall, Newark, DE 19716

ABSTRACT

Because various video systems with different temporal resolutions coexist, frame rate conversion (FRC) is often required to convert video from one standard to another. Converting the frame rate of interlaced video generally involves two stages: deinterlacing and frame rate conversion. To obtain higher quality results, methods utilizing motion compensation have been proposed with the goal of eliminating motion blur artifacts. If separate motion estimations are performed for deinterlacing and FRC, however, the overall system suffers from very high computational complexity. To address this problem, a novel approach is proposed in this paper, in which deinterlacing and FRC share a single motion estimation unit. In addition, an extended motion estimation method is proposed to track fast motion in video sequences. In our simulations, very high quality results are obtained.

1. INTRODUCTION

Various video standards with different temporal resolutions are in use in the industry. For instance, the North American TV system (NTSC) uses 60 fields/s interlaced video, and the European TV system (PAL) uses 50 fields/s interlaced video. Interlaced video scans every other line in one half of the frame time and then scans the remaining lines in the other half: each frame is scanned as two fields, and each field contains half the number of lines in a frame. FRC is required to convert video from one standard to another.

One of the simplest frame rate conversion methods is frame/field repetition. For example, 3:2 pull-down is the most popular method to convert 24 frames/s film (progressive) to 60 fields/s video (interlaced), in which each odd-numbered frame is repeated over 3 fields and each even-numbered frame over 2 fields. This 3:2 pull-down method often leads to visible artifacts such as motion blur.
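For concreteness, the repetition cadence just described can be written out in a few lines of Python. This tiny sketch is ours, not part of the paper, and it ignores the top/bottom field parity that a real pulldown must alternate:

    def pulldown_3_2(frames):
        # Repeat the 1st, 3rd, 5th, ... frame over 3 fields and the
        # 2nd, 4th, 6th, ... frame over 2 fields, so every 4 film frames
        # (1/6 s at 24 frames/s) become 10 fields (1/6 s at 60 fields/s).
        fields = []
        for k, frame in enumerate(frames):
            fields.extend([frame] * (3 if k % 2 == 0 else 2))
        return fields

    # Four film frames A, B, C, D yield the field sequence AAABBCCCDD.
    assert "".join(pulldown_3_2("ABCD")) == "AAABBCCCDD"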
Generally, frame rate conversion of interlaced video includes two stages: deinterlacing and frame rate conversion. Deinterlacing converts interlaced video to progressive video, and the frame rate is then converted to match the target video system. To obtain higher quality results, methods utilizing motion compensation have been proposed with the goal of eliminating motion blur artifacts. Motion estimation is, as expected, critical to the performance of motion compensated deinterlacing (MCD) and motion compensated frame rate conversion (MCFRC). If separate motion estimations are performed for deinterlacing and FRC, the overall system suffers from very high computational complexity.

Unlike video compression, MCD and MCFRC require true motion estimation. If motion estimation fails in video compression, more bits can be spent to compensate for the loss; if it fails in MCD or MCFRC, visible artifacts may appear. Recently, many true motion estimation methods have been proposed [1, 2, 3]. Since these methods search for motion vectors within a pre-defined search region, they are not well suited to tracking scenes with fast-moving objects. Furthermore, using motion vectors anchored to the blocks of the current frame leads to overlapped and empty regions in the interpolated frames, caused by multi-passing and no-passing motion trajectories, respectively. Methods that correct these problems introduce artifacts of their own [4]. Other algorithms directly search the bidirectional motion vectors of the blocks in the missing frame [5, 6]; these motion estimators, however, are easily trapped in local minima since the pixels of the missing frame do not exist.

2. THE PROPOSED ALGORITHM

To address the problems above, a new method is proposed to convert the frame rate of interlaced video, as illustrated in Fig. 1. Like conventional methods, the proposed method is a two-stage approach: deinterlacing first, followed by frame rate conversion. In the first stage, an extended motion estimation searches the motion vectors of the interlaced video. The obtained motion vectors are not only used in the motion-compensated pixel interpolation of deinterlacing, but are also transferred to the second stage to estimate pixel-wise bidirectional motion vectors of the missing frame.

Fig. 1. Frame rate conversion of interlaced video. (Block diagram: input video → deinterlacing, where the extended motion estimation feeds MCD → motion vectors passed to frame rate conversion, where bidirectional motion vector estimation feeds MCFRC → output video.)

The bidirectional motion vectors define the motion trajectories from the missing frame to the current and previous frames, so the pixels in the missing frame can be motion-compensated as well.

2.1. Motion Compensated Deinterlacing

To describe the deinterlacing problem systematically, denote by f_n the incoming interlaced video at time instant n and by f_n(i, j) the pixel value at row i, column j. By the definition of the interlaced video signal, the pixel values of f_n are available only on the even lines (i = 0, 2, 4, ...) if f_n is a top field, and only on the odd lines (i = 1, 3, 5, ...) if f_n is a bottom field. Top and bottom fields alternate along the time axis. Deinterlacing can then be stated as the process of reconstructing or interpolating the unavailable pixels in each field: the odd lines of a top field and the even lines of a bottom field. If f_n is the current field to be interpolated, f_{n-1} has already been interpolated and all of its pixels are available.

Numerous deinterlacing algorithms have been developed and studied comprehensively during the last decade. It is well known that a 3-D (spatio-temporal) deinterlacing algorithm based on motion detection performs better than a 2-D (spatial) one [7]. The key point of a 3-D deinterlacing algorithm is how to precisely detect motion in interlaced video. If f_n(i, j) is the current pixel to be interpolated, the local difference d_n(i, j) is calculated as

    d_n(i,j) = \frac{1}{9} \sum_{p=-1}^{1} \sum_{q=-1}^{1} \left| f_{n-1}(i+2p, j+q) - f_{n+1}(i+2p, j+q) \right|.    (1)

The local difference is very small if there is no motion between f_{n-1}(i,j) and f_{n+1}(i,j), and large if there is motion. The motion signal m_n(i,j) can therefore be calculated as

    m_n(i,j) = \max\left(0, \min\left(\frac{d_n(i,j) - th_1}{th_2 - th_1}, 1\right)\right),    (2)

where th_1 and th_2 are two constant thresholds satisfying th_1 < th_2. A motion signal of 0 indicates no motion, while 1 indicates motion. If there is no motion, the simple temporal interpolation of Eq. (3) can be applied:

    f^T_n(i,j) = \frac{1}{2}\left(f_{n-1}(i,j) + f_{n+1}(i,j)\right).    (3)

Otherwise, the motion compensated interpolation of Eqs. (4)-(5) is used:

    f^M_n(i,j) = \mathrm{Median}\left(f_{n-1}(i+dy, j+dx),\ f_n(i+1, j),\ f_n(i-1, j)\right),    (4)-(5)

where (dx, dy) denotes the corresponding motion vector. The motion signal m_n(i,j) then mixes the two interpolations and outputs the final result:

    f_n(i,j) = m_n(i,j)\, f^M_n(i,j) + (1 - m_n(i,j))\, f^T_n(i,j).    (6)

To obtain the motion vector (dx, dy), the extended motion estimation is proposed, as illustrated in Fig. 2.

Fig. 2. The extended motion estimation. (Nested search regions 1-4, with successive best matching positions A, B, and C.)
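A minimal per-pixel transcription of Eqs. (1)-(6) in Python/NumPy might look as follows. This is our sketch, not the authors' implementation: the threshold values are illustrative, the motion vector (dx, dy) is assumed to be supplied by the extended motion estimation described next, and border handling is omitted:

    import numpy as np

    def deinterlace_pixel(f_prev, f_cur, f_next, i, j, dx, dy, th1=4.0, th2=12.0):
        # Eq. (1): mean absolute difference over a 3x3 window taken on
        # lines of the same parity (hence the row stride of 2).
        d = sum(abs(float(f_prev[i + 2*p, j + q]) - float(f_next[i + 2*p, j + q]))
                for p in (-1, 0, 1) for q in (-1, 0, 1)) / 9.0
        # Eq. (2): soft motion signal clipped to [0, 1].
        m = max(0.0, min((d - th1) / (th2 - th1), 1.0))
        # Eq. (3): temporal average, reliable when nothing moves.
        f_T = 0.5 * (float(f_prev[i, j]) + float(f_next[i, j]))
        # Eqs. (4)-(5): median of the motion-compensated sample and the
        # two vertical neighbours within the current field.
        f_M = float(np.median([f_prev[i + dy, j + dx],
                               f_cur[i + 1, j],
                               f_cur[i - 1, j]]))
        # Eq. (6): fade between the two interpolants by the motion signal.
        return m * f_M + (1.0 - m) * f_T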
In essence, the extended motion estimation applies block matching to search for a motion vector in a pre-defined search region, such as search region 1 in Fig. 2. Full search, 3-step search, or any currently available true motion vector search can be used in this step. If the matching position lies on the edge of the search region, such as pixel A, a new local search is performed around that position, in search region 2. This procedure is iterated until no better matching position can be found in a new local search region: a better matching position B is found in search region 2, so the search is extended to search region 3, and it is extended again to search region 4 when another better matching position C is found. The search stops when no better matching position is detected in search region 4. In this way, reliable large motion vectors can be detected as well as regular ones.

To apply the motion estimation on the current field f_n and the previous deinterlaced frame f_{n-1}, the missing lines of f_n can either be vertically interpolated first or simply excluded during block matching. If the input video is progressive, deinterlacing is switched off, but the motion estimation must still be performed for the frame rate conversion; the extended motion estimation applies to both interlaced and progressive video.
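The iterated search could be sketched as follows. This is our illustration, using exhaustive SAD block matching as the inner search (any true-motion search could replace it); the block and window sizes are arbitrary, and out-of-bounds candidates are simply skipped:

    import numpy as np

    def sad(ref, cur, bi, bj, dy, dx, B=8):
        # Sum of absolute differences between the BxB block of `cur` at
        # (bi, bj) and the co-sited block of `ref` displaced by (dy, dx).
        a = cur[bi:bi + B, bj:bj + B].astype(np.int64)
        b = ref[bi + dy:bi + dy + B, bj + dx:bj + dx + B].astype(np.int64)
        return int(np.abs(a - b).sum())

    def extended_me(ref, cur, bi, bj, B=8, R=8):
        # Full search inside a (2R+1)x(2R+1) window; while the best match
        # lies on the window border (pixels A, B, C in Fig. 2), re-centre
        # the window there and search again, so motion larger than R is
        # still tracked.  Each re-centre strictly lowers the best SAD,
        # so the loop terminates.
        H, W = ref.shape
        cy = cx = 0                                  # current window centre
        while True:
            best = (sad(ref, cur, bi, bj, cy, cx, B), cy, cx)
            for dy in range(cy - R, cy + R + 1):
                for dx in range(cx - R, cx + R + 1):
                    if 0 <= bi + dy <= H - B and 0 <= bj + dx <= W - B:
                        c = sad(ref, cur, bi, bj, dy, dx, B)
                        if c < best[0]:
                            best = (c, dy, dx)
            _, by, bx = best
            if (by, bx) == (cy, cx) or (abs(by - cy) < R and abs(bx - cx) < R):
                return by, bx                        # settled inside the window
            cy, cx = by, bx                          # extend: new local search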

2.2. Motion Compensated Frame Rate Conversion

Converting the frame rate of an input video sequence requires generating frames at time instants where no original frames exist. The frame rate conversion problem can therefore be summarized as the interpolation of a frame f_{n-\tau} (0 < \tau < 1) from frames f_{n-1} and f_n. Let (i, j) be the current pixel to be interpolated. As in deinterlacing, the pixel (i, j) can be classified into a non-motion or a motion region. Since f_{n-\tau} lies between f_{n-2} and f_n, and between f_{n-1} and f_{n+1} as well, the motion signal m_{n-1}(i,j) or m_n(i,j), obtained during the deinterlacing of f_{n-1} or f_n respectively, can be reused as the motion signal m_{n-\tau}(i,j) of pixel (i, j). Formally,

    m_{n-\tau}(i,j) = \begin{cases} m_{n-1}(i,j) & \text{if } m_{n-1}(i,j) \text{ is available}, \\ m_n(i,j) & \text{if } m_n(i,j) \text{ is available}. \end{cases}    (7)

It is guaranteed that exactly one of the two motion signals m_{n-1}(i,j) and m_n(i,j) is available, since f_{n-1} and f_n are two alternate fields. If the pixel (i, j) is in a non-motion region, the simple linear temporal interpolation of Eq. (8) is applied:

    f^T_{n-\tau}(i,j) = \tau\, f_{n-1}(i,j) + (1-\tau)\, f_n(i,j),    (8)

where the time intervals \tau and 1-\tau serve as the weights. If the pixel (i, j) is in a motion region, the motion compensated interpolation of Eq. (9) is applied:

    f^M_{n-\tau}(i,j) = \tau\, f_{n-1}(i+dy_1, j+dx_1) + (1-\tau)\, f_n(i+dy_2, j+dx_2),    (9)

where (dx_1, dy_1) and (dx_2, dy_2) are the bidirectional motion vectors of the pixel (i, j). The motion signal m_{n-\tau}(i,j) then mixes the two interpolations and outputs the final result:

    f_{n-\tau}(i,j) = m_{n-\tau}(i,j)\, f^M_{n-\tau}(i,j) + (1 - m_{n-\tau}(i,j))\, f^T_{n-\tau}(i,j).    (10)

The bidirectional motion vectors can be constructed from the motion vectors obtained in the deinterlacing of f_n. Assume (dx, dy) is the motion vector of pixel f_n(p, q), whose trajectory passes through the pixel (p + \tau\, dy, q + \tau\, dx) of the missing frame. With i = p + \tau\, dy and j = q + \tau\, dx, the bidirectional motion vectors (dx_1, dy_1) and (dx_2, dy_2) of pixel (i, j) can be approximated as

    dx_1 = (1-\tau)\, dx, \quad dy_1 = (1-\tau)\, dy, \quad dx_2 = -\tau\, dx, \quad dy_2 = -\tau\, dy.    (11)

This construction may leave pixel (i, j) with multi-passing or no-passing motion trajectories. Fig. 3 shows an example of a pixel with multi-passing motion trajectories; in this case a filter such as a median or mean can be applied to the passing trajectories, and the bidirectional motion vectors are obtained from the filtered result. Fig. 4 shows an example of a pixel with a no-passing motion trajectory; in this case a similar filter is applied to the neighbouring motion trajectories in a local window, and the bidirectional motion vectors are again obtained from the filtered result.

Fig. 3. Filtering the multi-passing motion trajectories.

Fig. 4. Filtering the neighborhood motion trajectories if no motion trajectory passes through (i, j).
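At a block-level simplification (the paper operates pixel-wise, with vectors in pixel units), the construction of Eq. (11) together with the median handling of Figs. 3 and 4 could be sketched as follows; the function name and the block-unit convention are ours:

    import numpy as np

    def bidirectional_mvs(mv, tau):
        # mv[i, j] = (dy, dx): motion vector of block (i, j) of f_n,
        # pointing towards f_{n-1}, in block units.  Returns the two
        # vector fields of the missing frame f_{n-tau}: towards f_{n-1}
        # and towards f_n, per Eq. (11).
        nby, nbx, _ = mv.shape
        hits = [[[] for _ in range(nbx)] for _ in range(nby)]
        for i in range(nby):
            for j in range(nbx):
                dy, dx = mv[i, j]
                ti, tj = int(round(i + tau * dy)), int(round(j + tau * dx))
                if 0 <= ti < nby and 0 <= tj < nbx:
                    hits[ti][tj].append((dy, dx))    # trajectory passes (ti, tj)
        to_prev = np.zeros((nby, nbx, 2))
        to_next = np.zeros((nby, nbx, 2))
        for i in range(nby):
            for j in range(nbx):
                cand = hits[i][j]
                if not cand:                         # Fig. 4: no-passing cell,
                    for di in (-1, 0, 1):            # borrow trajectories from
                        for dj in (-1, 0, 1):        # the 3x3 neighbourhood
                            if 0 <= i + di < nby and 0 <= j + dj < nbx:
                                cand = cand + hits[i + di][j + dj]
                if cand:                             # Fig. 3: median-filter the
                    dy = float(np.median([c[0] for c in cand]))  # candidates
                    dx = float(np.median([c[1] for c in cand]))
                    to_prev[i, j] = ((1 - tau) * dy, (1 - tau) * dx)  # (dy1, dx1)
                    to_next[i, j] = (-tau * dy, -tau * dx)            # (dy2, dx2)
        return to_prev, to_next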
The obtained bidirectional motion vectors can be further filtered to produce a smoother motion field for the interpolated frame. Since constructing the bidirectional motion vectors requires only a small amount of computation, the computational complexity is greatly reduced compared to a system with two separate motion estimation units. Another system is proposed in [8], where deinterlacing and frame rate conversion share a single bidirectional motion estimation unit. If multiple frames must be interpolated between two original frames, that system has to run an independent bidirectional motion estimation for each interpolated frame, whereas in our method all of those bidirectional motion vectors are constructed from the one motion estimation shared with deinterlacing.

3. THE SIMULATIONS

Performance evaluation of frame rate conversion is difficult because the original video at the target frame rate is not available. To overcome this limitation, a simple round-trip measurement is used: PAL video (50 fields/s) is up-converted to NTSC video (60 fields/s) and then down-converted back to PAL by the same method. The peak signal-to-noise ratio (PSNR) of the reconstructed PAL video relative to the original serves as the quality measure.
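This round-trip protocol is easy to script. In the sketch below, `convert` stands for any rate converter under test; it is a hypothetical callable of our own devising, not an API from the paper:

    import numpy as np

    def psnr(ref, test, peak=255.0):
        # Peak signal-to-noise ratio in dB between two 8-bit pictures.
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)

    def round_trip_psnr(pal_fields, convert):
        # Up-convert 50 fields/s PAL to 60 fields/s, down-convert back to
        # 50 fields/s with the same method, and score the reconstruction
        # against the original sequence.
        ntsc = convert(pal_fields, src_rate=50, dst_rate=60)
        back = convert(ntsc, src_rate=60, dst_rate=50)
        scores = [psnr(a, b) for a, b in zip(pal_fields, back)]
        return sum(scores) / len(scores)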
In our simulation, the proposed method is compared to method 1, described in [4], and method 2, described in [5]. The test set consists of 8 interlaced videos with different characteristics: akiyo shows slow motion on a stationary background; bigjug shows many fast horizontal motions on a slowly, horizontally moving background; canadianflag shows fast motions in different directions; foreman shows slow motions with a fast-moving background; handshake shows very fast vertical motions; patterntracer shows a zooming scene with slow motions; scroll shows vertically moving characters; and skistop shows slow horizontal motions on a fast, horizontally moving background. The proposed deinterlacing method is applied in all three cases, since methods 1 and 2 operate only on progressive video.

The simulation results are shown in Table 1. The proposed method yields more robust results, generally gaining 1-4 dB over the other methods.

Table 1. PSNR (dB) of simulation results.

    Video          Method 1   Method 2   Proposed
    akiyo           36.273     38.922     41.607
    bigjug          27.484     27.699     29.348
    canadianflag    28.991     30.210     32.301
    foreman         30.791     30.083     32.712
    handshake       25.072     29.495     30.542
    patterntracer   33.708     32.235     36.234
    scroll          25.403     25.061     31.562
    skistop         29.377     29.078     31.873

Fig. 5 shows the interpolated results of video bigjug using the different methods. In Fig. 5(c), interpolated by method 1, obvious artifacts are introduced by copying pixels from the previous frame into the empty regions of the missing frame [4]. Blurred edges are observed in Fig. 5(d), due to the overlapped-block motion compensation of [5]. The proposed method produces high quality interpolated frames, such as the one shown in Fig. 5(e).

Fig. 5. (a) Previous field (field 3 in 50 Hz video), (b) following field (field 4 in 50 Hz video), (c) interpolated field using method 1 (field 4 in 60 Hz video), (d) interpolated field using method 2 (field 4 in 60 Hz video), (e) interpolated field using the proposed method (field 4 in 60 Hz video), (f) sub-image cut from (c), (g) sub-image cut from (d), (h) sub-image cut from (e).

In addition, the extended motion estimation is very effective at finding fast motions, as illustrated in Fig. 6. In the top left of Fig. 6(c), zoomed in Fig. 6(e), ghosting appears on the edge of the white gate because the fast motion was not detected. In the top left of Fig. 6(d), zoomed in Fig. 6(f), the fast motion is captured and no such ghosting occurs.

Fig. 6. (a) Previous field (field 57 in 50 Hz video), (b) following field (field 58 in 50 Hz video), (c) interpolated field without the extended motion estimation (field 69 in 60 Hz video), (d) interpolated field using the extended motion estimation (field 69 in 60 Hz video), (e) sub-image cut from (c), (f) sub-image cut from (d).

4. CONCLUSION

In this paper, a method for frame rate conversion of interlaced video is proposed, in which deinterlacing and frame rate conversion share a single motion estimation unit to reduce computational complexity. An extended motion estimation is also proposed to track fast motions. Very reliable simulation results are obtained.

5. REFERENCES

[1] G. de Haan, P. W. A. C. Biezen, H. Huijgen, and O. A. Ojo, "True-motion estimation with 3-D recursive search block matching," IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, no. 5, pp. 368-379, Oct. 1993.

[2] A. Pelagotti and G. de Haan, "High quality picture rate up-conversion for video on TV and PC," Proc. Philips Conf. on Digital Signal Processing, pp. 857-865, Nov. 1999.

[3] S. Lee, Y. Han, and J. Kim, "Motion compensated frame interpolation by new block-based motion estimation algorithm," IEEE Transactions on Consumer Electronics, vol. 50, pp. 752-759, May 2004.

[4] K. A. Bugwadia, E. D. Petajan, and N. N. Puri, "Progressive-scan rate up-conversion of 24/30 Hz source materials for HDTV," IEEE Transactions on Consumer Electronics, vol. 42, no. 3, pp. 312-321, Aug. 1996.

[5] K. Hilman, H. W. Park, and Y. Kim, "Using motion-compensated frame-rate conversion for the correction of 3:2 pulldown artifacts in video sequences," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 6, pp. 869-877, Sept. 2000.

[6] B. T. Choi, S. H. Lee, and S. J. Ko, "New frame rate up-conversion using bi-directional motion estimation," IEEE Transactions on Consumer Electronics, vol. 46, no. 3, pp. 603-609, Aug. 2000.

[7] Y.-T. Kim and S.-H. Kim, "Method of detecting motion in an interlaced video sequence utilizing region by region motion information and apparatus for motion detection," U.S. Patent Application No. 003391, Oct. 2001.

[8] G. de Haan, P. W. A. C. Biezen, and O. A. Ojo, "An evolutionary architecture for motion-compensated 100 Hz television," IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, no. 3, pp. 207-217, June 1995.