Video Color Conceptualization using Optimization


Cao Xiaochun, Zhang Yujie, Guo Xiaojie
School of Computer Science and Technology, Tianjin University, China

Yiu-ming Cheung
Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China

Color conceptualization aims to propagate color concepts from a library of natural color images to the input image by changing its main color. However, the existing method may lead to spatial discontinuities in images because of the absence of a spatial consistency constraint. In this paper, to solve this problem, we present a novel method that forces neighboring pixels with similar intensities to have similar colors. Using this constraint, color conceptualization is formalized as an optimization problem with a quadratic cost function. Moreover, we further extend two-dimensional (still-image) color conceptualization to three dimensions (video), and use the information of neighboring pixels in both space and time to improve the consistency between neighboring frames. The performance of our proposed method is demonstrated for a variety of images and video sequences.

Keywords: color conceptualization, color discontinuity, optimization, color correspondence, video sequence

Images and videos provide visual perceptions. There are many aspects to the content of an image, each providing different information. An important aspect providing much of the visual perception of an image is the composition of colors. Csurka et al. [1] abstracted concepts of look and feel (e.g., capricious, classic, cool, and delicate impressions) from an image according to color combinations. In practice, one may want to edit an image or video according to different task demands or personal preferences. Generally, altering the color of the image or video is a popular and intuitive way to meet such requirements [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Reinhard et al. [5] proposed a method of borrowing the color characteristics of an image via simple statistical analysis.

Figure 1: (a) Verbal terms extracted by clustering many images into different moods according to their hue distributions. The left three columns are some of the clustered images, while the right column shows the hue distributions of the images. (b) The input image and its hue distribution.

Researchers [2, 3, 4] have proposed different colorization or color transfer methods that obtain colors from given reference images using a color correspondence approach. Automatic colorization methods that search for reference images on the Internet using various filtering algorithms have also been proposed [6, 7]. The success of these methods [2, 3, 4, 5, 6, 7] depends heavily on finding a suitable reference image, which can be a rigorous and time-consuming task. The colorization methods employed in [8, 9] are based on a set of chrominance scribbles; the process is tedious and does not always provide natural-looking results. Cohen-Or et al. [10] and Tang et al. [11] changed the colors of pictures to give the sense of a more harmonious state using empirical harmony templates of color distribution. However, this technique cannot change colors flexibly to meet the demands of users. Hou and Zhang [12] first introduced a technique for changing image color intentionally, called image color conceptualization. In their work [12], prototypes of color distributions were generated by clustering a vast number of images, and the mood of the input image was then changed by transferring a color distribution to it. Xu et al. [13] also proposed a method for changing the emotion conveyed by images. They used a learning framework for discovering emotion-related knowledge, such as color and texture, and constructed emotion-specific models from features of image super-pixels. To change the conveyed emotion, they defined a piece-wise linear transformation that aligns the feature distribution of the target image to the statistical model. The goal of their method was to change the high-level emotion, while the method that we propose here focuses on changing color using low-level features.

The method proposed in this paper is most closely related to the work of Hou and Zhang [12]. Hou and Zhang designed a clustering model to generate prototypes of color distributions from an input library of natural landscape and architectural images, and labeled each distribution with a verbal term such as warm or cold (see Fig. 1 (a)). The main component of each color distribution (i.e., the color concept), which corresponds to the representative color mood of the image, is then extracted. The propagation of a certain color concept to the target image is manipulated by adopting the peak-mapping method. However, since the hue wheel is shifted without consideration of spatial information, some artifacts may be introduced during the propagation.

In this paper, we use an optimization method to solve this problem, employing a simple premise: neighboring pixels in space-time with similar intensities should have similar colors [8]. By taking spatial information into account, the spatial continuity of colors in the generated image is ensured. Moreover, the optimization plays an important role in extending the color conceptualization technique to three dimensions (i.e., video).

Figure 2: The left half of each picture is the original input image, and the right half is the color-conceptualized result.

Color conceptualization for video is much more attractive and challenging than that for a still image. However, to the best of our knowledge, no such system exists. The most straightforward idea is to apply the color conceptualization technique to each frame individually. However, this does not exploit the coherence between adjacent frames. In fact, a video may contain many different shots and, even in the same shot, there are many significant changes such as varying illumination and the movement of objects. Therefore, the result obtained by simple application to individual frames is often far from satisfactory. Even in the same shot, adjacent frames probably differ in terms of their color conceptualization, which results in flickering. Another possible solution for color conceptualization of video is video object segmentation [14, 15], in which changes are made across frames of the same shot to avoid flickering. Unfortunately, current three-dimensional segmentation techniques are not precise enough. In this paper, we alternatively apply the optimization method to video color conceptualization to ensure the continuity of colors both spatially and temporally, and thus provide a pleasant experience when watching the output video. Levin et al. [8] proposed a method of coloring image sequences making the simple assumption that nearby pixels in space-time with similar gray levels should have similar colors. This assumption is further employed by us in the most important step of the video color conceptualization. The proposed method is demonstrated to be effective for solving the discontinuity problem in experiments.

The paper is organized as follows. In Section 1, we discuss the existing method for image color conceptualization and formulate the problem of color conceptualization. In Section 2, we propose an optimization method to improve the existing technique, and extend the proposed color conceptualization method to three dimensions (i.e., video). We also detail problems arising in implementation and the corresponding solutions. Various experiments are carried out in Section 3 for both images and video frames. Section 4 presents conclusions.

1 Related Work

The main goal of color conceptualization is to extract color concepts by clustering images and to change the mood of an input image by propagating the expected color concept to it (as shown in Fig. 2). All the work in this paper is conducted in the HSV [16] color space and is based on the hue wheel representation.

1.1 Hue Wheel Representation of an Image

In the HSV color space, the hue is distributed around a circle, starting with the red primary at 0, passing through the green primary at 120 and the blue primary at 240, and then wrapping back to red at 360 (as shown in Fig. 1 (b)). Given an input image I, we first convert it into the HSV color space. The hue wheel H_I(i) of the input image is then defined as [12]

H_I(i) = Σ_{i−1 ≤ H(p) < i} S(p) V(p),   (1)

where H(p), S(p) and V(p) are the hue, saturation and value of pixel p from image I, and i ∈ [1, 360] is an integer. The range of hue is divided into 360 orientations; we thus obtain 360 bins around the hue wheel. Subsequently, by calculating the value of H_I(i) for every i, the histogram of the hue wheel representation H_I is obtained, as shown in Fig. 1 (b). The expression respects the fact that pixels with high saturation and high brightness always attract more attention.

For one image, there might be multiple peaks in the hue wheel. However, only the dominant color is represented by the dominant peak; therefore, we choose the strongest peak as the main color of the image. To cut the main hue peak at the proper position in the hue wheel, we adopt the following three steps (as shown in the upper half of Fig. 3).

1. Fit the peak Pk(·) by a Gaussian function G(μ, σ), where μ and σ are the mean and standard deviation of G.
2. Set the left cut position α = μ − 2.5σ and the right cut position β = μ + 2.5σ (since about 98.76% of the Gaussian distribution is within μ ± 2.5σ).
3. Save Pk(α ≤ k ≤ β) as the main hue peak of the image.
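The following is a minimal sketch (not the authors' code) of the hue-wheel representation in equation (1) and the peak-cutting steps above. It assumes an 8-bit BGR input image and uses OpenCV only for the color-space conversion; the fitted mean and standard deviation are estimated from the weighted hue samples around the strongest bin rather than by a full non-linear Gaussian fit, and the ±45-degree fitting window is an assumption.

```python
import cv2
import numpy as np

def hue_wheel(image_bgr):
    """Return the 360-bin hue wheel H(i) = sum of S(p)*V(p) over pixels whose hue falls in bin i."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hue = hsv[..., 0] * 2.0            # OpenCV stores 8-bit hue in [0, 180); rescale to degrees
    sat = hsv[..., 1] / 255.0
    val = hsv[..., 2] / 255.0
    bins = np.clip(hue.astype(np.int32), 0, 359)
    return np.bincount(bins.ravel(), weights=(sat * val).ravel(), minlength=360)

def main_peak_cut(wheel, k=2.5):
    """Approximate the Gaussian fit of the dominant peak; return (mu, sigma, alpha, beta)."""
    peak = int(np.argmax(wheel))
    idx = np.arange(peak - 45, peak + 46) % 360     # hypothetical +/- 45 degree window
    w = wheel[idx]
    offs = np.arange(-45, 46, dtype=np.float64)
    mu = peak + np.average(offs, weights=w)         # weighted mean around the strongest bin
    sigma = np.sqrt(np.average((offs - (mu - peak)) ** 2, weights=w))
    alpha, beta = mu - k * sigma, mu + k * sigma    # cut positions at mu +/- 2.5 sigma
    return mu % 360, sigma, alpha, beta
```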

Figure 3: (Upper) The hue wheel of the input image. (Lower) The hue wheel of the color concept. There are two alternatives for the hue values in [α−d, α): a shift to the range [α_c, β_c] or no change; likewise, there are two alternatives for the hue values in (β, β+d]: a shift to the range [α_c, β_c] or no change.

1.2 Clustering Images

Numerous color naming models intend to relate a numerical color space to semantic color names in natural language, such as grass green and light sky blue [17, 18, 19]. The terms relate to color impressions; e.g., light sky blue distinguishes a particular color mood from other color distributions [12]. Most images convey an atmosphere through a main color. By clustering images using the Kullback-Leibler (KL) divergence of the distributions of hue wheels, we can extract typical moods. The KL divergence of hue wheels is defined as

D(H_I || H_C) = Σ_{i=1..360} H_I(i) log( H_I(i) / H_C(i) ),   (2)

where H_I is the hue wheel of the input image and H_C is the hue wheel of an image category. Given an image library, we use the algorithm proposed in [12] to cluster images into different categories. Images in the same category have the same mood, and we label each category with a subjective description such as warm or cold. For an image category, the hue wheels of all the images in the category are calculated according to equation (1) and are then combined to form the hue wheel of the category, which is represented by H_C. The dominant peak of H_C represents the current main mood, which we call the color concept.
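Below is a minimal sketch (not the authors' code) of equation (2): the KL divergence between an image's hue wheel and a category's hue wheel, used to assign the image to its closest mood cluster. The `hue_wheel` histograms are assumed to be 360-bin arrays as in the previous sketch, and the small epsilon is an assumption added to avoid division by zero and log(0).

```python
import numpy as np

def kl_divergence(wheel_img, wheel_cat, eps=1e-8):
    """KL divergence between two normalized hue wheels, as in equation (2)."""
    p = wheel_img / (wheel_img.sum() + eps) + eps
    q = wheel_cat / (wheel_cat.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)))

def nearest_category(wheel_img, category_wheels):
    """Index of the category whose hue wheel is closest to the image in KL divergence."""
    divs = [kl_divergence(wheel_img, w) for w in category_wheels]
    return int(np.argmin(divs))
```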

1.3 Propagating the Color Concept

Color conceptualization is the process of replacing the hue peak of the input image with the desired color concept. Here we normalize the hue peak according to

Φ(i, H) = Σ_{t=α..i} H(t) / Σ_{t=α..β} H(t),   (3)

and then use the algorithm in [12] (which we call the color mapping algorithm for convenience) to propagate the color concept as follows.

1. For each i ∈ (α, β), calculate Φ(i, H_I).
2. For each i, find the j that satisfies Φ(i, H_I) = Φ(j, H_C).
3. Assign i → j.

Figure 4: (Left) The input image. (Middle) The result obtained using the method in [12], with color discontinuities on the petal. (Right) The result obtained using our method.

In the color manipulations made using the color mapping algorithm, the peak of a hue wheel is uniformly cut off at i = α and i = β. However, in a real implementation, this may introduce artifacts caused by the splitting of a contiguous region of the image [10]. An example is presented in Fig. 4 (middle). The splitting occurs in regions with similar color: part of a region falls within the peak while the other part falls outside it, which leads to a discontinuity of color after the color transformation. To mitigate this problem, Hou and Zhang [12] used a local minimum position to cut the peak, and achieved good results for most images. However, the method is not always effective (Fig. 4 (middle)). In many cases, directly cutting off the hue peak at any position will similarly result in discontinuity. Therefore, it is necessary to explore a new approach that uses spatial information to enforce spatial continuity.
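Before turning to the optimization, a minimal sketch (not the authors' code) of the color mapping algorithm built on equation (3) is given below: the cumulative mass of the input peak is matched against the cumulative mass of the concept peak, yielding a bin-to-bin correspondence i → j. The function name and the assumption that the bin ranges do not wrap around 0/360 are illustrative only.

```python
import numpy as np

def peak_mapping(wheel_in, a_in, b_in, wheel_con, a_con, b_con):
    """Map every hue bin in [a_in, b_in] of the input peak to a bin in [a_con, b_con]."""
    bins_in = np.arange(a_in, b_in + 1)
    bins_con = np.arange(a_con, b_con + 1)
    # Normalized cumulative histograms Phi(i, H) over each peak, as in equation (3).
    phi_in = np.cumsum(wheel_in[bins_in]);  phi_in /= phi_in[-1]
    phi_con = np.cumsum(wheel_con[bins_con]); phi_con /= phi_con[-1]
    # For each input bin, take the first concept bin whose cumulative value reaches it.
    j = np.searchsorted(phi_con, phi_in, side="left")
    j = np.clip(j, 0, len(bins_con) - 1)
    return dict(zip(bins_in.tolist(), bins_con[j].tolist()))
```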

2 Color Conceptualization using Optimization

2.1 Spatially Consistent Image Color Conceptualization

Inspired by image and video colorization assisted by optimization [8], we combine a cost function and optimization of the hue wheel to solve the peak-boundary problem. The main steps are elaborated below.

1. Fit the hue peak Pk(·) of the input image with a Gaussian function G, as in [12] (see the red fitting line in the upper half of Fig. 3). The left cut position is initialized as α = μ − 2.5σ, and the right cut position as β = μ + 2.5σ. The hue peak falling in [α, β] is changed to the desired color concept using the color mapping algorithm mentioned above.

2. Define two new cut positions, α − d and β + d, and keep the hue values falling in [0, α−d) and (β+d, 360] (i.e., to the left of α − d and to the right of β + d in the upper half of Fig. 3) unchanged. The parameter d will be discussed in Section 3.

3. There are two alternatives for the pixels with hue values falling in [α−d, α) or (β, β+d] (the parts below the black curly braces in the upper half of Fig. 3): to change to the color concept or not. In the case that the color concept is adopted, the hue values of pixels falling in [α−d, α) are changed to α_c, and the hue values of pixels falling in (β, β+d] are changed to β_c. Here α_c and β_c are respectively the left and right borders of the desired color concept (as shown in Fig. 3). The optimal scheme B(·) for the given image is determined by minimizing the following function over the choices for all undetermined pixels:

B(·) = arg min Σ_p ( H(p) − Σ_{q ∈ N(p)} w_pq H(q) )²,   (4)

where H(p) is the hue value of pixel p in the input image, and N(p) is the set of eight neighbors of pixel p. Note that w_pq is a weight coefficient satisfying [20]

w_pq ∝ exp( −d(x_p, x_q) / (2σ_p²) ),   (5)

where d(x_p, x_q) is the squared difference between the intensities of pixels p and q, and σ_p² is the variance of the intensity in a window around pixel p. Obviously, w_pq increases as the difference between intensities decreases. For a given pixel p, Σ_{q ∈ N(p)} w_pq = 1. The minimization in equation (4) guarantees that neighboring pixels have similar colors if their intensities are similar.
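The following is a minimal sketch (not the authors' code) of the spatial-consistency step in equations (4)-(5) for a single image. Pixels whose hue is already decided (inside the main peak, or far outside it) are treated as hard constraints, and the hue of the remaining undetermined pixels is obtained by solving the sparse linear system H(p) = Σ w_pq H(q), which is one common way to realize a quadratic objective of this form in practice, not necessarily the authors' exact solver. The 8-neighbor weights follow equation (5); hue is assumed to have already been linearized as described in Section 2.4.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_hue(intensity, hue_fixed, undetermined, eps=1e-6):
    """intensity, hue_fixed: HxW float arrays; undetermined: HxW boolean mask."""
    h, w = intensity.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    b = hue_fixed.astype(np.float64).ravel().copy()

    for y in range(h):
        for x in range(w):
            p = idx[y, x]
            if not undetermined[y, x]:
                rows.append(p); cols.append(p); vals.append(1.0)   # keep the fixed hue
                continue
            # Neighbors in a 3x3 window (8-connectivity), clipped at the image border.
            nbrs = [(yy, xx) for yy in range(max(0, y - 1), min(h, y + 2))
                             for xx in range(max(0, x - 1), min(w, x + 2))
                             if (yy, xx) != (y, x)]
            patch = np.array([intensity[yy, xx] for yy, xx in nbrs] + [intensity[y, x]])
            var = patch.var() + eps
            wgt = np.array([np.exp(-(intensity[y, x] - intensity[yy, xx]) ** 2 / (2 * var))
                            for yy, xx in nbrs])
            wgt /= wgt.sum()                                       # sum of w_pq is 1
            rows.append(p); cols.append(p); vals.append(1.0)       # H(p) - sum w_pq H(q) = 0
            for (yy, xx), wq in zip(nbrs, wgt):
                rows.append(p); cols.append(idx[yy, xx]); vals.append(-wq)
            b[p] = 0.0

    A = sp.csr_matrix((vals, (rows, cols)), shape=(h * w, h * w))
    return spla.spsolve(A, b).reshape(h, w)
```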

2.2 Video Color Conceptualization

Compared with still-image color conceptualization, video color conceptualization is much more attractive and challenging because it involves the ties and changes between adjacent frames. In addition, there may be various scenes in one video, and their theme contents and main colors can vary. If conceptualized uniformly, the video will likely appear awkward and distorted. Moreover, the color conceptualization one desires should be based on the video content, rather than being arbitrary. Therefore, scene segmentation is essential. State-of-the-art shot-detection methods [22, 23] can be used in our framework. To demonstrate the performance of our method, we use a simple and effective method to distinguish different scenes in the video, based on the squared absolute difference of the gray values. In practice, we compute the average value of the squared absolute difference between adjacent frames, starting from the first frame:

M_f = (1/n) Σ_{k=1..n} ( I_f(k) − I_{f−1}(k) )²,   (6)

where I_f(k) is the gray value of pixel k in frame f and n is the number of pixels per frame. Frame f is treated as the beginning of a new scene if M_f is equal to or greater than a pre-defined threshold. The remaining work then concentrates on each single scene.

Even for the same scene, video color conceptualization cannot be as simple as image color conceptualization. Applying image color conceptualization to each frame individually usually leads to flickering artifacts in the output video; e.g., Fig. 5 (b). There are two main reasons for this. First, the hue wheels of two adjacent frames are highly unlikely to be exactly the same, which results in different colors needing to be changed in the two frames. Second, the edges of objects changing during the conceptualization process are unstable because of the absence of a temporal consistency constraint.

Instead of calculating the hue wheel representation of each single frame separately, a hue wheel representation of the whole shot can be computed using equation (1). Similar to the first two steps of the propagation of the color concept described in Section 2.1, the hue peak of the video shot is fitted with a Gaussian function G, and the left and right borders are α_v = μ_v − 2.5σ_v and β_v = μ_v + 2.5σ_v, respectively. Two additional cut positions are α_v − d and β_v + d. Subsequently, the hue values falling in [α_v, β_v] are changed to [α_c, β_c] according to the color mapping algorithm mentioned above, while the hue values falling in [0, α_v − d) or (β_v + d, 360] remain unchanged. There are also two options for pixels with hue values falling in [α_v − d, α_v) or (β_v, β_v + d]: a shift in the hue value or no shift. However, as opposed to the case of image color conceptualization, we use both spatial and temporal information to structure the optimization problem so that the best scheme for the whole shot can be obtained.
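A minimal sketch (not the authors' code) of the scene-segmentation step from equation (6) earlier in this subsection is given below: each frame is compared with its predecessor by the mean squared difference of gray values, and a new scene starts whenever the score meets a pre-defined threshold. The threshold value here is an arbitrary placeholder.

```python
import numpy as np

def scene_boundaries(gray_frames, threshold=400.0):
    """gray_frames: iterable of HxW float arrays; returns frame indices that start new scenes."""
    starts = [0]
    prev = None
    for f, frame in enumerate(gray_frames):
        if prev is not None:
            m_f = np.mean((frame - prev) ** 2)     # M_f from equation (6)
            if m_f >= threshold:
                starts.append(f)                   # frame f begins a new scene
        prev = frame
    return starts
```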

Analogously, according to the principle that neighboring pixels in space-time with similar intensities are expected to have similar colors, the objective function to be minimized can be formalized as

B(·) = arg min Σ_{p ∈ V} ( H(p) − Σ_{q ∈ N(p)} w_pq H(q) )²,   (7)

where H(p) is the hue value of pixel p in the input video V, and w_pq is the weight coefficient satisfying equation (5). As opposed to the case of image manipulation, N(p) here represents the 26 neighboring pixels in spatio-temporal space [24].

Figure 5: (a) Four successive frames of an input video. (b) Color conceptualization results obtained using Hou and Zhang's method [12] for each frame individually, with discontinuous and varying red regions on a leaf. (c) (d) Color conceptualization results obtained using our method.

2.3 Color Correspondence

In the case that the hue of a pixel is to be changed, we have so far changed the hue to α_c if the hue value falls in [α−d, α), and to β_c if the hue value falls in (β, β+d]. However, this can still produce artifacts, since pixels with different hue values may all change to the same value. Consequently, instead of changing all such pixels to the same value, we employ a more elaborate scheme [10] to achieve correspondence of color appearance [21]:

H'(p) = μ_c + r σ_c ( 1 − G( H(p) − μ ) ),   (8)

where p is a pixel with a hue value falling in (β, β+d], H'(p) is the hue value that pixel p will change to if it needs to change, H(p) is the original hue value of pixel p, and r is a parameter that will be discussed later. μ and μ_c are the mean values of the Gaussian functions fitting the hue peak of the shot and the concept peak, respectively, and σ_c is the standard deviation of the concept peak. G is a Gaussian-shaped function with mean zero and standard deviation σ_G, and it ranges continuously from 0 to 1.
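The following is a minimal sketch (not the authors' code) of the color-correspondence mapping in equation (8), for pixels whose hue falls in (β, β+d]. The standard deviation of the zero-mean Gaussian G is chosen so that the hue value β maps exactly onto the concept border β_c, following the substitution described in Section 3; the function name and parameter names simply mirror the notation used in the text and are assumptions.

```python
import numpy as np

def correspondence_map(hue, mu, sigma, mu_c, sigma_c, r=3.0):
    """Map original hue values above the input peak toward the concept peak, per equation (8)."""
    beta, beta_c = mu + 2.5 * sigma, mu_c + 2.5 * sigma_c
    # Choose sigma_g so that G(beta - mu) = 1 - 2.5/r, i.e. hue beta maps exactly to beta_c.
    g_target = 1.0 - 2.5 / r                      # requires r > 2.5, as stated in the text
    sigma_g = (beta - mu) / np.sqrt(-2.0 * np.log(g_target))
    g = np.exp(-((hue - mu) ** 2) / (2.0 * sigma_g ** 2))   # zero-mean Gaussian, in (0, 1]
    return mu_c + r * sigma_c * (1.0 - g)
```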

From equation (8), we find that the hue values of pixels falling in (β, β+d] become distributed near β_c in the same order as their original values, and become compact (as shown in Fig. 3). The hue values of pixels falling in [α−d, α) are changed to values near α_c using a similar method.

2.4 Circle Problem of Hue

The main principle of our method is that neighboring pixels in space-time that have similar intensities should have similar colors. Under this assumption, we decide the hue value of each undetermined pixel according to the weighted sum of its adjacent pixels. However, hue values are distributed on a circular ring, where hue 0 and hue 360 represent the same color. As an extreme example, suppose the hue of an undetermined pixel depends on two neighboring pixels with weighting coefficients w_1 = w_2 = 0.5, and the hue values of the two pixels lie close to 0 and 360, respectively (i.e., both are nearly red). Then the weighted average places the expected hue value of the undetermined pixel near the middle of the hue circle, far from red. This means that the color of a pixel in a pile of red pixels may change to green, which is obviously unreasonable.

The simplest solution is to make the hue distribution linear by disconnecting the hue wheel at an appropriate point chosen according to the specific input pictures. The undetermined points and their neighboring points are always in or near the hue peaks of the input image and the color concept. We should therefore find a cutoff point as far from both hue peaks as possible; we can then guarantee that there is only one distance between any two neighboring pixels among the undetermined points, and that nearby hue values will not be pulled apart. Let A_1 and A_2 be the middle points between the two peak centers, measured along the two directions of the hue circle, respectively; the one with the larger distance to the peaks is the farthest point from the two main peaks. Therefore, the middle point A_1 or A_2 with the larger distance is selected as the cutoff point.
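Below is a minimal sketch (not the authors' code) of the cutoff-point choice described in Section 2.4: the hue circle is opened at whichever of the two mid-points between the input peak (mu) and the concept peak (mu_c) lies farther from both peaks, so that averaging neighboring hues never jumps across a peak. The function name is an assumption.

```python
def cutoff_point(mu, mu_c):
    """Return the hue (in degrees) at which the circular hue axis is disconnected."""
    a, b = sorted((mu % 360.0, mu_c % 360.0))
    mid_inside = (a + b) / 2.0                       # mid-point of the direct arc from a to b
    mid_outside = ((a + b) / 2.0 + 180.0) % 360.0    # mid-point of the arc going the other way

    def dist(x):
        # Circular distance from a candidate point to the nearer of the two peaks.
        return min(min(abs(x - p), 360.0 - abs(x - p)) for p in (a, b))

    return mid_inside if dist(mid_inside) >= dist(mid_outside) else mid_outside
```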

3 Experimental Results

Figure 6: (a) The input image. (b) (c) (d) The resulting images obtained using our method with d = 10, 140, and 60, respectively.

In this section, we present various image and video results obtained using our proposed method. First, we note that color conceptualization differs from color transfer in two respects. First, the main purpose of color conceptualization is to change the mood of a picture, which is not the case for color transfer. Second, color conceptualization generates color concepts by clustering a number of pictures once, while a color transfer method has to find a suitable reference picture for each target image.

We experimentally investigate the performance of our method for a variety of pictures and videos. The parameter d (introduced in Section 2.1) is crucial because it decides the number of pixels with undetermined hue values. If the value of d is too small, too few pixels are labeled as undetermined (as shown in Fig. 6 (b), the color of the mountain on the left is not consistent). On the other hand, if the value of d is too large, some background pixels are wrongly labeled as undetermined (as shown in Fig. 6 (c), almost the whole image becomes the same color). In our implementation, we set d = 60, as shown in Fig. 6 (d).

Figure 7: (a) The upper image is the input image; the white areas in the bottom image are undetermined pixels. (b) (c) (d) Resulting images obtained using our method with r = 2.5, r = 3, and r = 4, respectively.

The parameters r and σ_G (introduced in Section 2.3) jointly decide the closeness of the hue distribution of the undetermined pixels. The hue values in (β, β+d] change into [β_c, μ_c + rσ_c], where r decides the maximum distribution width and σ_G decides the specific distribution, as shown in Fig. 7. The value of r must be larger than 2.5 because the distribution range must extend beyond β_c = μ_c + 2.5σ_c according to our method. On the other hand, the value of r cannot be arbitrarily large: if it is too large, unexpected colors may appear among the undetermined pixels because the distribution width of the hue is too wide. Figure 7 shows results for an image with different values of r. The results show that our method is not sensitive to the parameter r; even the magnified views show only minor differences with respect to varying r. We use r = 3 throughout our experiments.

Figure 8: The first picture is the input image and the other pictures are the output conceptualized images.

The value of σ_G should be chosen to guarantee that the hue value β changes exactly to β_c. Therefore, we obtain the value of σ_G by substituting r = 3, H(p) = β and H'(p) = β_c into equation (8).

For image color conceptualization, we choose a certain color concept from the existing concepts (here we cluster six color concepts using the image database of [25] as the image library). Figure 2 shows two examples of image color conceptualization. For the first picture, the change in color concept implies a change of season, because leaves can be yellow in autumn and tend to be green in spring. The different colors in the second picture suggest different weather. Figure 8 shows another natural scene conceptualized using our method. As we improve Hou and Zhang's method [12] by taking spatial information into consideration, our proposed method performs better in some cases, especially when there are color differences within the same region of an object. In Fig. 9, the magnified images show the performance improvement over the existing method. Moreover, this technique is applicable not only to the field of image processing, but also to the previewing of artwork coloring. An example is shown in Fig. 10.

Figure 9: (a) Three input images. (b) Output images obtained using Hou and Zhang's image color conceptualization [12]. (c) Output images obtained using our image color conceptualization method.

Experiments further demonstrate that our method performs well for video. Simply applying Hou and Zhang's image color conceptualization [12] to each frame individually leads to color discontinuity and flickering, as demonstrated in Fig. 5 (b); for a better view, see our supplemental video material. Since we take the temporal information into account, the results obtained using our method, as shown in Fig. 5 (c) and Fig. 5 (d), are significantly better. Figure 11 presents more comparisons, not only between our video color conceptualization and Hou and Zhang's image method applied to individual video frames, but also between our video color conceptualization and our new image method applied to video frames individually. This comparison helps us observe the role of temporal information in overcoming the flickering problem and shows the advantage of the video method. Figure 11 (b) shows frames of an input video, and Fig. 11 (a) shows the hue wheel representations of the three frames and of the whole video. We see a difference in the hue wheel representation between the frames. Figure 11 (c), (d) and (e) then show three groups of frames of the resulting video, obtained using Hou and Zhang's image method, our image method considering only the spatial information, and our video method considering both spatial and temporal information, respectively.

Some artifacts are observed in the magnified views of (c) and (d).

Figure 10: (a) The input image of a crocus artwork and its hue wheel representation. (b) The effect of coloring the crocus yellow, and the hue wheel representation of the output image. (c) The effect of coloring the crocus green, and the hue wheel representation of the output image.

Figure 12 shows further video examples. Color conceptualization can be applied in many fields, such as image and video processing, advertising and music television processing, and mood-consistency regulation in image cut and paste.

4 Discussion and Conclusions

We proposed an image color conceptualization method based on an existing method [12] and an optimization algorithm [8], and expanded it to video processing. Our main contributions include taking spatial information into account to improve color continuity, and expanding our image-based method to video color conceptualization by enforcing spatio-temporal consistency. Experiments carried out for both images and videos demonstrated the performance of our proposed method.

Figure 11: (a) Hue wheel representations of the three frames in (b) and of the whole video. (b) Three frames of the input video. (c), (d) and (e) The resulting frames obtained using Hou and Zhang's image method [12], our image method, and our video method, respectively.

Figure 12: Three groups of video color conceptualization results. In each group, the upper row shows five frames of the input video, and the lower row shows the output.

References

1. Csurka G, Skaff S, Marchesotti L, et al. Building look & feel concept models from color combinations. The Visual Computer, vol. 27, no. 1, 2011.
2. Welsh T, Ashikhmin M and Mueller K. Transferring color to greyscale images. ACM Transactions on Graphics (TOG) - Proceedings of ACM SIGGRAPH, vol. 21, no. 3, 2002.
3. Irony R, Cohen-Or D and Lischinski D. Colorization by example. In Eurographics Symposium on Rendering, 2005.
4. Charpiat G, Hofmann M and Scholkopf B. Automatic image colorization via multimodal predictions. In Proc. ECCV, 2008.
5. Reinhard E, Ashikhmin M, Gooch B, et al. Color transfer between images. IEEE Computer Graphics and Applications, vol. 21, no. 5, 2001.
6. Liu X, Wan L, Qu Y, et al. Intrinsic colorization. ACM Transactions on Graphics (SIGGRAPH Asia 2008 issue), vol. 27, no. 5, 2008.
7. Chia A, Zhuo S, Gupta R, et al. Semantic colorization with internet images. ACM Transactions on Graphics, vol. 30, no. 6, pp. 156:1-156:7, 2011.

8. Levin A, Lischinski D and Weiss Y. Colorization using optimization. In Proceedings of ACM SIGGRAPH, 2004.
9. Yatziv L and Sapiro G. Fast image and video colorization using chrominance blending. IEEE Transactions on Image Processing, vol. 15, no. 5, 2006.
10. Cohen-Or D, Sorkine O, Gal R, et al. Color harmonization. ACM Transactions on Graphics (TOG), vol. 25, no. 3, 2006.
11. Tang Z, Miao Z, Wan Y, et al. Color harmonization for images. Journal of Electronic Imaging, vol. 20, no. 2, 2011.
12. Hou X and Zhang L. Colour conceptualization. In Proceedings of the Fifteenth ACM International Conference on Multimedia, 2007.
13. Xu M, Ni B, Tang J and Yan S. Image re-emotionalizing. In PCM, 2011.
14. Lee Y, Kim J and Grauman K. Key-segments for video object segmentation. In ICCV, 2011.
15. Zhang B, Zhao H and Cao X. Video object segmentation with shortest path. In The 20th Anniversary ACM Multimedia, 2012.
16. Hanbury A. Constructing cylindrical coordinate colour spaces. Pattern Recognition Letters, vol. 29, no. 4, 2008.
17. Liu Y, Zhang D, Lu G, et al. Region-based image retrieval with high-level semantic color names. In Proc. of the IEEE 11th International Multi-Media Modelling Conference, 2005.
18. Goldstein E. Sensation and Perception (5th Edition), Brooks/Cole.
19. Berk T, Brownston L and Kaufman A. A new color-naming system for graphics languages. IEEE Computer Graphics and Applications, vol. 2, no. 3, 1982.
20. Weiss Y. Segmentation using eigenvectors: A unifying view. In International Conference on Computer Vision, 1999.

21. Morovic J and Luo M R. The fundamentals of gamut mapping: A survey. Journal of Imaging Science and Technology, vol. 45, no. 3, 2001.
22. Lee H, Yu J, Im Y, et al. A unified scheme of shot boundary detection and anchor shot detection in news video story parsing. Multimedia Tools and Applications, vol. 51, no. 3, 2011.
23. Amudha J, Radha D and Naresh P. Video shot detection using saliency measure. International Journal of Computer Applications, vol. 45, no. 2, pp. 17-24, 2012.
24. Shi J and Malik J. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
25. Oliva A and Torralba A. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, vol. 42, no. 3, pp. 145-175, 2001.


More information

The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng

The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng The Research of Controlling Loudness in the Timbre Subjective Perception Experiment of Sheng S. Zhu, P. Ji, W. Kuang and J. Yang Institute of Acoustics, CAS, O.21, Bei-Si-huan-Xi Road, 100190 Beijing,

More information

CS229 Project Report Polyphonic Piano Transcription

CS229 Project Report Polyphonic Piano Transcription CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project

More information

Role of Color Processing in Display

Role of Color Processing in Display Advances in Computational Sciences and Technology ISSN 0973-6107 Volume 10, Number 7 (2017) pp. 2183-2190 Research India Publications http://www.ripublication.com Role of Color Processing in Display Mani

More information

Video coding standards

Video coding standards Video coding standards Video signals represent sequences of images or frames which can be transmitted with a rate from 5 to 60 frames per second (fps), that provides the illusion of motion in the displayed

More information

Rec. ITU-R BT RECOMMENDATION ITU-R BT PARAMETER VALUES FOR THE HDTV STANDARDS FOR PRODUCTION AND INTERNATIONAL PROGRAMME EXCHANGE

Rec. ITU-R BT RECOMMENDATION ITU-R BT PARAMETER VALUES FOR THE HDTV STANDARDS FOR PRODUCTION AND INTERNATIONAL PROGRAMME EXCHANGE Rec. ITU-R BT.79-4 1 RECOMMENDATION ITU-R BT.79-4 PARAMETER VALUES FOR THE HDTV STANDARDS FOR PRODUCTION AND INTERNATIONAL PROGRAMME EXCHANGE (Question ITU-R 27/11) (199-1994-1995-1998-2) Rec. ITU-R BT.79-4

More information

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS)

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) International Journal of Emerging Technologies in Computational

More information

Module 8 VIDEO CODING STANDARDS. Version 2 ECE IIT, Kharagpur

Module 8 VIDEO CODING STANDARDS. Version 2 ECE IIT, Kharagpur Module 8 VIDEO CODING STANDARDS Lesson 27 H.264 standard Lesson Objectives At the end of this lesson, the students should be able to: 1. State the broad objectives of the H.264 standard. 2. List the improved

More information

SIDRA INTERSECTION 8.0 UPDATE HISTORY

SIDRA INTERSECTION 8.0 UPDATE HISTORY Akcelik & Associates Pty Ltd PO Box 1075G, Greythorn, Vic 3104 AUSTRALIA ABN 79 088 889 687 For all technical support, sales support and general enquiries: support.sidrasolutions.com SIDRA INTERSECTION

More information

A Real Time Infrared Imaging System Based on DSP & FPGA

A Real Time Infrared Imaging System Based on DSP & FPGA A Real Time Infrared Imaging ystem Based on DP & FPGA Babak Zamanlooy, Vahid Hamiati Vaghef, attar Mirzakuchaki, Ali hojaee Bakhtiari, and Reza Ebrahimi Atani Department of Electrical Engineering Iran

More information

homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition

homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition INSTITUTE FOR SIGNAL AND INFORMATION PROCESSING homework solutions for: Homework #4: Signal-to-Noise Ratio Estimation submitted to: Dr. Joseph Picone ECE 8993 Fundamentals of Speech Recognition May 3,

More information

Improving Color Text Sharpness in Images with Reduced Chromatic Bandwidth

Improving Color Text Sharpness in Images with Reduced Chromatic Bandwidth Improving Color Text Sharpness in Images with Reduced Chromatic Bandwidth Scott Daly, Jack Van Oosterhout, and William Kress Digital Imaging Department, Digital Video Department Sharp aboratories of America

More information

VGA Controller. Leif Andersen, Daniel Blakemore, Jon Parker University of Utah December 19, VGA Controller Components

VGA Controller. Leif Andersen, Daniel Blakemore, Jon Parker University of Utah December 19, VGA Controller Components VGA Controller Leif Andersen, Daniel Blakemore, Jon Parker University of Utah December 19, 2012 Fig. 1. VGA Controller Components 1 VGA Controller Leif Andersen, Daniel Blakemore, Jon Parker University

More information

Film Grain Technology

Film Grain Technology Film Grain Technology Hollywood Post Alliance February 2006 Jeff Cooper jeff.cooper@thomson.net What is Film Grain? Film grain results from the physical granularity of the photographic emulsion Film grain

More information

Fast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264

Fast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264 Fast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264 Ju-Heon Seo, Sang-Mi Kim, Jong-Ki Han, Nonmember Abstract-- In the H.264, MBAFF (Macroblock adaptive frame/field) and PAFF (Picture

More information

52 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 7, NO. 1, FEBRUARY 2005

52 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 7, NO. 1, FEBRUARY 2005 52 IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 7, NO. 1, FEBRUARY 2005 Spatially Localized Image-Dependent Watermarking for Statistical Invisibility and Collusion Resistance Karen Su, Student Member, IEEE, Deepa

More information

APPLICATIONS OF DIGITAL IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVED

APPLICATIONS OF DIGITAL IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVED APPLICATIONS OF DIGITAL IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVED ULTRASONIC IMAGING OF DEFECTS IN COMPOSITE MATERIALS Brian G. Frock and Richard W. Martin University of Dayton Research Institute Dayton,

More information

An Overview of Video Coding Algorithms

An Overview of Video Coding Algorithms An Overview of Video Coding Algorithms Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Video coding can be viewed as image compression with a temporal

More information

Efficient Implementation of Neural Network Deinterlacing

Efficient Implementation of Neural Network Deinterlacing Efficient Implementation of Neural Network Deinterlacing Guiwon Seo, Hyunsoo Choi and Chulhee Lee Dept. Electrical and Electronic Engineering, Yonsei University 34 Shinchon-dong Seodeamun-gu, Seoul -749,

More information

Error Concealment for SNR Scalable Video Coding

Error Concealment for SNR Scalable Video Coding Error Concealment for SNR Scalable Video Coding M. M. Ghandi and M. Ghanbari University of Essex, Wivenhoe Park, Colchester, UK, CO4 3SQ. Emails: (mahdi,ghan)@essex.ac.uk Abstract This paper proposes an

More information

AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY

AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY AN ARTISTIC TECHNIQUE FOR AUDIO-TO-VIDEO TRANSLATION ON A MUSIC PERCEPTION STUDY Eugene Mikyung Kim Department of Music Technology, Korea National University of Arts eugene@u.northwestern.edu ABSTRACT

More information

REDUCING DYNAMIC POWER BY PULSED LATCH AND MULTIPLE PULSE GENERATOR IN CLOCKTREE

REDUCING DYNAMIC POWER BY PULSED LATCH AND MULTIPLE PULSE GENERATOR IN CLOCKTREE Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 5, May 2014, pg.210

More information