Image-Based Synchronization in Mobile TV for a Multi-Broadcast-Receiver
Tobias Tröger¹, Henning Heiber², Andreas Schmitt² and André Kaup¹

¹ Multimedia Communications and Signal Processing, University of Erlangen-Nuremberg, Cauerstr. 7, Erlangen, Germany, {troeger, kaup}@lnt.de
² Development Infotainment, Audi AG, Ingolstadt, Germany, {henning.heiber, andreas.schmitt}@audi.de

Abstract

In this paper we propose a novel technique for image-based synchronization of two mobile received TV sequences having different spatial resolutions, deviant image qualities, and possibly lost image blocks. The sequences are aligned spatially and temporally based on an affine transform. For complexity reasons, the optimal transformation parameters are determined numerically by the Levenberg-Marquardt algorithm. The presented algorithm leads to accurate synchronization of mobile TV and is robust even in case of image cropping and arbitrarily shaped error patterns.

Keywords: DVB-H, DVB-T, Mobile TV, Multi-Broadcast Reception, Synchronization, T-DMB

INTRODUCTION

Currently, more and more TV programs are broadcast simultaneously based on terrestrial transmission standards like DVB-T, DVB-H, or T-DMB. Digital Video Broadcasting - Terrestrial (DVB-T) provides terrestrial broadcasting of SDTV or HDTV video, audio streams, and data. The DVB-T technique was developed for stationary and portable reception. Although mobile reception was not originally targeted by the standard, mobile DVB-T receivers are available in the meantime. Based on the Internet Protocol, low-resolution digital multimedia can be transmitted by Digital Video Broadcasting - Handheld (DVB-H). DVB-H is an adapted version of the DVB-T technique which is especially optimized for mobile reception conditions. Terrestrial Digital Multimedia Broadcasting (T-DMB) is an extension of the well-known Digital Audio Broadcasting (DAB) standard providing terrestrial transmission of digital low-resolution TV signals.

Perfect temporal and spatial synchronization of video sequences is required for various multimedia applications like multi-view video coding [1] or inter-sequence error concealment of TV signals [2]. In this paper, our focus is on synchronization of mobile TV sequences received in a multi-broadcast scenario. Fig. 1 shows such a scenario for simultaneous reception of DVB-T and T-DMB signals with a car-mounted device. Assuming multi-broadcast reception of DVB-T, DVB-H and T-DMB signals, redundant TV programs having similar image content are available on different broadcast channels. This redundancy offers new possibilities to enhance the robustness of mobile TV reception. In particular, errors which occur in TV broadcasts transmitted over terrestrial channels can be concealed by pixel-wise combination of corresponding TV sequences. Assuming a prospective multi-broadcast receiver, however, these TV sequences are usually shifted in time against each other as they are transmitted according to different broadcasting standards. In detail, the processing times of source coding, error protection and packetization differ, for instance. Further aspects are buffer sizes and burst-wise transmission. Summing up, the delay is hard to predict in advance for a given reception scenario. As a-priori information about the delay between corresponding TV programs is not available at a future multi-broadcast receiver (MBR), it has to be determined periodically for successful synchronization. Due to the deviant broadcasting standards, an image-based approach seems natural. In the following, some aspects are listed which have to be taken into account for image-based synchronization of mobile TV: First, packet errors usually occur while broadcasting digital TV to mobile receivers over terrestrial channels.
As a consequence of block-based video coding, blocks, slices or frames are finally distorted in the decoding process (see Fig. 1). Image-based synchronization therefore has to cope with lost image information.

Figure 1: Multi-broadcast reception scenario of mobile TV

Second, TV sequences broadcast according to the mentioned standards typically have different image resolutions. Taking into account that DVB-H and T-DMB sequences are shown on mobile receivers with small displays, both transmission standards usually apply low spatial image resolutions like QVGA or CIF (see Fig. 1). As DVB-T supports higher spatial image resolutions like SDTV and HDTV, however, the resolutions have to be matched for image-based synchronization of mobile TV. Third, the particular image qualities of mobile received TV sequences vary significantly with the given data rate. DVB-T sequences usually have high image quality. However, DVB-H and T-DMB sequences are typically coded with a low bit rate and therefore have moderate image quality. As a consequence, image-based synchronization must be robust against coding artifacts. Last, image cropping has to be considered for image-based synchronization of mobile TV. Compared to DVB-T signals, DVB-H and T-DMB sequences have possibly been cropped at the image margins by the sender-site video generation process. Fig. 2 shows a resized and cropped image of the sequence crew on the left. The lost parts are marked red in the corresponding image on the right, which is just resized. Cropped TV sequences do not necessarily contain identical but similar images in a multi-broadcast reception scenario. Therefore, the synchronization method has to cope with cropped image parts.

Figure 2: Image cropping in mobile TV

In general, synchronization of video sequences is a well-known problem which has been studied in the literature in recent years. In 2004, Tuytelaars and Van Gool presented a technique for the synchronization of arbitrarily moving cameras and general 3D scenes [3]. In the same year, Yan and Pollefeys proposed another synchronization method which is based on space-time interest points [4]. Whitehead et al. introduced a formalization of the video synchronization problem in 2005 [5]. One year later, Wolf and Zomet presented a synchronization algorithm in the context of reconstructing non-rigid scenes from two non-synchronized but fixed video cameras [6]. Also in 2006, Ukrainitz and Irani described the synchronization of two video sequences containing a dynamic scene as finding the spatio-temporal coordinate transformation [7]. Finally, Barkowsky et al. compared matching strategies for temporal alignment of two video sequences in 2006 [8]. However, to the best of our knowledge, image-based synchronization of two mobile received TV sequences characterized by cropped image parts, lost image blocks, different image resolutions and deviant image qualities has not been considered up to the present. We therefore propose a novel technique for image-based synchronization of two mobile received TV sequences multi-broadcast according to either both DVB-T and DVB-H or both DVB-T and T-DMB. The algorithm is generally introduced for a high-resolution sequence (HRS) and a corresponding low-resolution sequence (LRS) having similar image content and identical frame rates. In contrast to the HRS, the LRS is considered error-free due to its higher error protection. Based on a realistic future multi-broadcast reception scenario, we assume the LRS to be delayed in comparison to the HRS.

SYNCHRONIZATION METHOD

Problem Formulation

In Fig. 3, a multi-broadcast TV sequence is shown which is received as a HRS and a corresponding delayed LRS. t_0 is the starting point of the reception, d is the delay between the HRS and the LRS given in frames. At t_0, we take the first received frame of the HRS as reference frame A_0. Based on the image transformation method described below, we compare A_0 with several candidate frames B_c of the LRS to find the most similar frame of the LRS. The candidate frames B_c lie within the search range SR = [t_B, t_B + L - 1]. t_B indicates the point in time when the first candidate frame B_c of the LRS arrives at the receiver.
L is the length of the search range SR and defines the index c ∈ [0, L - 1] of the candidate frames B_c. As we assume the LRS to be delayed in our case, t_B is always greater than or equal to t_0:

t_B ≥ t_0    (1)

Image Transformation

The image resolutions of the HRS and LRS differ and therefore have to be aligned first. In the proposed image-based synchronization method, this is done by an image transformation process. Considering the sender-site video generation step, the LRS can be understood as a resized and possibly cropped version of the corresponding HRS. Therefore, the desired image transformation has to cope with scaling and shifting.

Figure 3: Multi-broadcast reception scenario of mobile TV comprising a HRS and a delayed LRS
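As a small illustration of the notation above, the candidate frames B_c can be enumerated on a frame-indexed timeline. The following sketch is ours and not part of the paper; in particular, the relation between the matched index and the delay d is an assumption based on the definitions of t_0 and t_B:

```python
def candidate_indices(t_B, L):
    """Frame indices of the LRS candidates B_c within the search
    range SR = [t_B, t_B + L - 1], i.e. c = 0, ..., L - 1."""
    return [t_B + c for c in range(L)]

def delay_from_match(t_0, t_B, c_hat):
    """Assumed relation: once the most similar candidate index c_hat
    is found, the delay d in frames follows from the offset t_B - t_0."""
    return (t_B - t_0) + c_hat

print(candidate_indices(5, 4))    # -> [5, 6, 7, 8]
print(delay_from_match(0, 5, 2))  # -> 7
```

When t_B = t_0, as in the simulations later in the paper, the delay equals the matched index itself.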
An image transformation capable of this is the affine transform introduced in [9]. It can handle translation, rotation, non-uniform as well as uniform scaling, and shear. In general, an arbitrary vector x is mapped onto a vector y by the affine transform:

y = Ax + b    (2)

In our case, an arbitrary image B(r,s) of the LRS shall be mapped onto the corresponding image A(m,n) of the HRS, where r ∈ {1,...,R}, s ∈ {1,...,S}, m ∈ {1,...,M} and n ∈ {1,...,N} depict the pixel positions (M > R, N > S), respectively. Then, (2) can be rewritten as follows:

[ m ]   [ a_1  a_2 ] [ r ]   [ b_1 ]
[ n ] = [ a_3  a_4 ] [ s ] + [ b_2 ]    (3)

As we only have to consider scaling and translation, the affine transform can be simplified by setting a_2 and a_3 to zero (a_2 = a_3 = 0). The transformation parameters a are then defined as follows:

a = [a_1  a_4  b_1  b_2]^T    (4)

With respect to a given metric, the best-fitting transformation parameters have to be determined. The metric is introduced in (5) as the mean squared error (MSE) between A(m,n) and a transformed image B_a(m,n). For robustness, the mapping procedure only takes error-free samples into account, which are indicated by the error mask W(m,n):

MSE(a) = ( Σ_{m,n} W(m,n) [A(m,n) - B_a(m,n)]^2 ) / ( Σ_{m,n} W(m,n) )    (5)

Image Registration

We have now introduced a rule to map an arbitrary image B(r,s) onto an arbitrary image A(m,n) according to a given metric. To find the exact delay between the HRS and the corresponding LRS, we have to register A_0(m,n) to the frame B(r,s) of the LRS which is most similar. Therefore, the affine transform is performed for A_0(m,n) and each candidate frame B_c(r,s) within the search range SR separately by minimizing the MSE in (5). In doing so, the image transformation method provides a minimum residual r(B_c) for each frame B_c(r,s) of the LRS due to the given metric:

r(B_c) = min_{a ∈ D} MSE(a)    (6)

Minimizing MSE(a) can also be understood as finding the best-fitting transformation parameters out of a set D for each frame B_c(r,s):

â = arg min_{a ∈ D} MSE(a)    (7)

(7) is solved numerically as described below for each candidate frame B_c(r,s) within SR.
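The masked error metric of Eqs. (3)-(5) can be sketched as follows. This is an illustrative implementation under simplifying assumptions (nearest-neighbour resampling rather than the linear interpolation the paper uses, and all function names are ours):

```python
import numpy as np

def masked_mse(a, A, B, W):
    """MSE between HRS frame A(m,n) and LRS frame B(r,s) mapped by the
    scaling/translation parameters a = [a1, a4, b1, b2] (cf. Eqs. (3)-(5)).
    W is a binary error mask: 1 marks an error-free HRS sample."""
    a1, a4, b1, b2 = a
    m, n = np.mgrid[0:A.shape[0], 0:A.shape[1]]
    # invert [m; n] = diag(a1, a4) [r; s] + [b1; b2] to sample B
    r = (m - b1) / a1
    s = (n - b2) / a4
    valid = (W > 0) & (r >= 0) & (r <= B.shape[0] - 1) \
                    & (s >= 0) & (s <= B.shape[1] - 1)
    # nearest-neighbour sampling keeps the sketch short
    Bw = B[r[valid].round().astype(int), s[valid].round().astype(int)]
    return float(np.mean((A[valid].astype(float) - Bw) ** 2))
```

For a = [1, 1, 0, 0] and A = B the residual is zero; sender-site cropping shows up as a non-zero translation (b_1, b_2).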
Then, we can determine the frame which is most similar to A_0(m,n) by finding the minimum residual r(B_c) and taking the argument. The delay d can be read off directly from the index ĉ of the resulting candidate frame:

ĉ = arg min_{c ∈ [0, L-1]} r(B_c)    (8)

Once the correct delay d between the LRS and HRS is found, it only has to be recalculated in case of a varying delay. The synchronization of the HRS and the LRS can finally be achieved by delaying each frame of the HRS according to d. Amongst others, the overall complexity of the proposed image-based synchronization method is significantly determined by the number of candidate images in the image registration step. Therefore, the length L of the search range SR should be chosen reasonably small and t_B reasonably close to the actual delay based on prior knowledge. Additionally, several enhancements of the algorithm can be applied to reduce complexity besides simply shortening L. First, the number of candidate frames B_c(r,s) within the search range SR can be restricted by a feature-based selection prior to the synchronization step. Second, the affine transform can be performed only for characteristic image parts of A_0(m,n) and B_c(r,s).

Optimization Technique

As shown above, image registration is an important step of the proposed algorithm for synchronization of mobile TV sequences. According to (7), it is based on the minimization of MSE(a) in an affine image transformation process. As the unknown transformation parameters a lie in an unbounded set D, the minimization problem is computationally complex for each candidate frame B_c(r,s). Therefore, it seems natural to utilize a numerical optimization technique. Considering (5) and (7), the error function MSE(a) being minimized is expressed as a sum of squares of non-linear functions. We use the Levenberg-Marquardt algorithm (LMA) here, as it has become a standard method for non-linear least-squares problems ([10], [11]).
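The registration step amounts to a one-dimensional search over the candidate frames. A minimal sketch of this search follows; the `register` callback stands in for the full affine optimization and is our own simplification:

```python
import numpy as np

def estimate_delay(A0, candidates, register):
    """Return the index c_hat of the candidate frame B_c with the
    minimum registration residual r(B_c)."""
    residuals = [register(A0, Bc) for Bc in candidates]
    return int(np.argmin(residuals))

# toy usage: the exact copy of the reference frame wins
base = np.arange(64, dtype=float).reshape(8, 8)
cands = [base + 5.0, base + 2.0, base, base + 9.0]
plain_mse = lambda A, B: float(np.mean((A - B) ** 2))  # stand-in for the affine registration
print(estimate_delay(base, cands, plain_mse))  # -> 2
```

In the paper's setting, `register` would itself minimize the masked MSE over the transform parameters, so each residual is the result of a full optimization run.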
The LMA can be thought of as a blend of the gradient descent and Gauss-Newton methods. It is a robust gradient method which converges with high probability even for inaccurate start values. Adopting gradient methods for our algorithm, the minimum of the error function MSE(a) can be found by iteratively stepping down along the gradient of the function with step size λ. Intuitively, this leads to the parameter update of the gradient descent method

a_{i+1} = a_i + λ d    (9)

where d is the negative gradient of the error function at a_i.
In general, the error function from (5) can be approximated by its Taylor series expansion

MSE(a) ≈ c + g^T (a - a_i) + 1/2 (a - a_i)^T D (a - a_i)    (10)

where D is the Hessian matrix evaluated at a_i, g is the gradient and c denotes the function value at a_i. Assuming MSE(a) to be quadratic around a_i, higher-order terms can be truncated. The corresponding rule for updating the transformation parameters a is given in [10] under consideration of the gradient information and curvature of MSE(a):

a_{i+1} = a_i - D^{-1} g    (11)

This Gauss-Newton iteration and the update rule for the gradient descent method given in (9) have complementary advantages, which can be combined by the approach from Levenberg ([10]):

a_{i+1} = a_i - (D + λI)^{-1} g    (12)

However, the Hessian matrix D is not taken into consideration for large λ. As a consequence, each component of the gradient is scaled according to the curvature by replacing the identity matrix I with the diagonal of the Hessian D:

a_{i+1} = a_i - (D + λ diag(D))^{-1} g    (13)

Finally, the update rule from Levenberg and Marquardt in (13) has the desired characteristics: low curvature leads to a large update step, high curvature results in a small update step. The LMA is performed iteratively in four steps:

1. Compute MSE(a_i).
2. Define a moderate λ.
3. Update the parameter vector a_i according to (13) and evaluate the error function at the new parameter a_{i+1}.
4. If the error function has declined, the quadratic assumption was correct. Then, the influence of the gradient descent method has to be attenuated by decreasing λ (e.g. by a factor of 10, [10]). If the error function has grown, increase λ (e.g. by a factor of 10, [10]). Go to step 3.

In our algorithm for image-based synchronization of mobile TV, the LMA is executed in two stages as described in [2]. In the first stage, the LMA is applied to a subsampled version of frame A_0(m,n). We define the start value a_i for the optimization according to the LMA as the ratio of the HRS and the LRS frame sizes (a_i = [M/R, N/S, 0, 0]^T). The transformation parameters resulting from the first optimization stage are fairly coarse.
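The four LMA steps above can be sketched as a compact loop. This is a generic Levenberg-Marquardt implementation on a toy least-squares problem, not the paper's two-stage image registration; it uses the common Gauss-Newton approximation of the Hessian via the Jacobian of the residuals:

```python
import numpy as np

def lm_fit(f, jac, a0, y, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: damping with the diagonal of
    J^T J blends gradient descent and Gauss-Newton."""
    a = np.asarray(a0, dtype=float)
    err = float(np.sum((y - f(a)) ** 2))
    for _ in range(iters):
        J = jac(a)                       # Jacobian of the model
        r = y - f(a)                     # current residuals
        H = J.T @ J                      # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        a_new = a + step
        err_new = float(np.sum((y - f(a_new)) ** 2))
        if err_new < err:                # quadratic assumption held: trust it more
            a, err, lam = a_new, err_new, lam / 10.0
        else:                            # overshoot: lean towards gradient descent
            lam *= 10.0
    return a

# toy usage on a 1-D affine model y = a1*x + b1, a hypothetical stand-in
# for the four transform parameters of the paper
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.5
f = lambda a: a[0] * x + a[1]
jac = lambda a: np.stack([x, np.ones_like(x)], axis=1)
a = lm_fit(f, jac, [1.0, 0.0], y)
```

Because low curvature yields a large step and high curvature a small one, the loop typically converges even from rough start values, which matches the paper's motivation for choosing the LMA.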
They are taken as start values for the second optimization stage, in which they are refined using the full resolution of A_0(m,n). When performing the synchronization algorithm for consecutive frames A(m,n), temporal outliers of the transformation parameters a may be discarded by limiting the maximum deviation from the temporal mean. Outliers can occur if the degree of distortion is too high for a particular frame.

EXPERIMENTAL RESULTS

Simulation Setup

The proposed algorithm for image-based synchronization of mobile received TV sequences was tested on the well-known sequences crew, discovery city and shuttle. Crew is characterized by slow motion but sporadic flashlights. Discovery city contains fast motion and scene cuts. By contrast, shuttle has very little temporal activity. Some passages can even be considered a sequence of consecutive still images. The synchronization algorithm is performed independently for the first 70 frames of each of the three mentioned test sequences. We use the YUV color space and consider only the luminance Y. Aiming at powerful synchronization of mobile received TV sequences in a future multi-broadcast receiver for DVB-T, DVB-H and T-DMB, our focus is on a realistic simulation scenario. Possibly, the DVB-H and T-DMB signals are generated based on a DVB-T signal. Therefore, we assume the LRS to be delayed in comparison to the HRS. Here, we simulate a constant delay d of 15 frames. The search range SR is set to [0, 20]. Due to the powerful error protection schemes of DVB-H and T-DMB, the LRS is assumed to be error-free. Keeping in mind that DVB-T was developed for stationary and portable reception, mobile reception of DVB-T will usually lead to block and slice errors in the decoding process. The HRS therefore contains randomly distributed block losses comprising 5%, 10%, 20% and 40% of each image. However, DVB-H and T-DMB sequences usually have poor image quality due to a limited data rate.
We thus coded the LRS with the reference implementation of the H.264/AVC standard. The applied coding scheme is IBPBP. As the coding process of TV sequences is not part of the DVB-H or T-DMB standard, we simulate a wide quality range. The quantization parameter (QP) is varied from 15 to 45 in steps of 5. Furthermore, a GOP length of 16 is used. As DVB-T applies high image quality in general, the HRS stays uncompressed here. Each image of the HRS has 720x576 samples, which typically is the spatial resolution of mobile TV according to the DVB-T standard. DVB-H and T-DMB signals, however, usually have QVGA or CIF format. LRS images therefore have a size of either 320x240 or 352x288 samples in our case. As mentioned before, the LRS is always a resized version of the HRS containing similar images. Possibly, it is additionally cropped at the image margins. Summing up, we consider the LRS to be available in four versions in our simulations: format CIF with pure resizing (case 1), format CIF with resizing and cropping (case 2), format QVGA with pure resizing (case 3), and format QVGA with resizing and cropping (case 4). Considering the LMA, the stopping criteria of the optimization method are crucial parameters. A limit of 100 iterations has been sufficient in our tests. Further aspects are the termination tolerance of the MSE and the termination tolerance of the affine transform parameters a, which both have to be restricted reasonably. Finally, the resampling step in the optimization process has to be considered. It is performed as linear interpolation in our simulations.

Figure 4: Correct detection rates of sequence crew plotted in percent against the quantization parameter of the LRS (5%, 10%, 20% and 40% of each HRS image are lost): a) LRS case 1: CIF (resized), b) LRS case 2: CIF (resized & cropped), c) LRS case 3: QVGA (resized), d) LRS case 4: QVGA (resized & cropped)

Some restrictions for image-based synchronization of mobile received TV sequences are introduced in this paper by the absence of frame drops and the assumption of identical frame rates for HRS and LRS. If the frame rates differ in practice, an up- or down-conversion technique has to be utilized prior to the algorithm.

Performance Evaluation

Depending on the application, accurate or approximate synchronization of mobile received TV sequences may be necessary. For performance evaluation of the proposed algorithm, we therefore distinguish between the correct detection rate (CDR) and the synchronization error (SE). The CDR is the percentage of delays determined correctly with respect to the processed HRS frames. The SE is defined as the mean absolute difference (MAD) in frames between the computed and the correct delay, taking only inaccurate matches into account. Let us first consider exact image-based synchronization of two TV sequences. Fig. 4a-d shows the correct detection rates of sequence crew for all LRS cases. In each subfigure, the CDRs are plotted against the quantization parameter for an image loss of 5%, 10%, 20% and 40%. As can be clearly seen, the fundamental characteristics of the CDR are comparable for all considered LRS cases and are therefore discussed in general terms. Fig. 4a-d shows that the CDR is strongly connected with the image quality of the LRS. High image quality, which is indicated by a low QP, leads to a high CDR and vice versa. For a QP in the interval of 15 to 35, the CDR is relatively constant between 97% and 100% for all considered image loss percentages and LRS cases. For LRS cases 2 and 4, the CDR is even exactly 100% in the given interval.
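The two evaluation measures can be computed directly from the estimated and true delays. A short sketch under the definitions above (function and variable names are ours):

```python
import numpy as np

def cdr_and_se(estimated, true_delay):
    """Correct detection rate (CDR) in percent and synchronization
    error (SE) as the mean absolute difference in frames, the latter
    evaluated over the inaccurate matches only."""
    est = np.asarray(estimated)
    hits = est == true_delay
    cdr = 100.0 * int(np.count_nonzero(hits)) / est.size
    misses = est[~hits]
    se = float(np.mean(np.abs(misses - true_delay))) if misses.size else 0.0
    return cdr, se

# toy usage: 5 processed HRS frames, true delay of 15 frames
print(cdr_and_se([15, 15, 14, 15, 18], 15))  # -> (60.0, 2.0)
```

Restricting the SE to the inaccurate matches, as the paper does, keeps the two measures complementary: the CDR counts exact hits, the SE quantifies how far off the remaining estimates are.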
Coding crew according to H.264/AVC with a QP of 40 or 45 leads to an objective image quality of about 31 or 29 dB in terms of PSNR_Y, respectively. For such degraded image quality, the CDR decreases significantly. In LRS cases 1 and 2, the CDR is around 50% to 60% for QP 45. Cases 3 and 4 even show a CDR of 35% to 40%. Using the LRS with resolution QVGA (case 3) and the HRS being characterized by an image loss of 20% as an example (see Fig. 4c), it can be seen that the CDR is not always a strictly monotonic function. Determining the minimum MSE between a frame of the HRS and a transformed candidate frame of the LRS according to (7) is not necessarily a convex problem. This aspect occurs in general due to the spatial characteristics of the considered frames and is aggravated by the presence of lost image parts and compression artifacts. Therefore, the LMA possibly converges towards a local minimum of the MSE. As a consequence, the minimum residual within the search range SR may lead to a synchronization mismatch in (8). Summing up, the proposed algorithm for image-based synchronization of mobile TV can provide excellent results in terms of CDR for high and moderate image qualities. Only in case of poor image quality, which is mostly not acceptable for the viewer, is the CDR no longer sufficient. Assuming a QP greater than 45, the constant delay introduced by digital TV transmission is possibly varied by frame drops in the coding process. Then, a reasonable evaluation of the proposed synchronization method is no longer possible. As the LMA is a powerful optimization method which mostly converges even for inaccurate start values, it can handle cropped image parts reliably (see Fig. 4b/d). The CDR is therefore relatively independent of the applied LRS case.

Table 1: Correct detection rate and synchronization error for sequences crew, discovery city and shuttle against the quantization parameter of the LRS, regarding image losses of 5%, 10%, 20% and 40% and LRS cases 1-4

sequence        measure   QP = 15  QP = 20  QP = 25  QP = 30  QP = 35  QP = 40  QP = 45
crew            CDR, SE
discovery city  CDR, SE
shuttle         CDR, SE

The same holds for the image loss percentage of the HRS. The optimization is robust against lost image blocks, as it performs well even for 40% image loss. The CDR determined for 40% image loss differs only marginally from that for 5% image loss and is even higher in some cases (see Fig. 4b). Simulations show that the image content usually is not a crucial parameter for image-based synchronization of mobile TV. This becomes clear as the CDR is exactly or approximately 100% for high and moderate image quality in each LRS case (see Fig. 4a-d). In other words, an arbitrary frame of the HRS can be used in a future MBR for exact synchronization of the whole sequence, assuming a constant delay. In case of a slightly changing delay, however, a periodic recalculation is required. As outlined before, the CDRs have a high correlation for all considered LRS cases and all image loss percentages. Therefore, a combined CDR is shown in Tab. 1 for sequences crew, discovery city and shuttle regarding an image loss of 5%, 10%, 20% and 40% and LRS cases 1-4. Crew performs best in terms of CDR, as the sporadic flashlights enhance exact synchronization with the image-based algorithm. Discovery city shows comparable results for high and moderate image quality. However, shuttle performs significantly weaker for all image qualities due to its low temporal activity, which aggravates exact synchronization in combination with lost blocks. The SE was defined above as the MAD between the determined and the correct delay, evaluated only for inaccurately registered HRS frames.
Tab. 1 shows the SE averaged over all image loss percentages and all LRS cases. Unlike the CDR, the SE has only a weak correlation with the QP indicating the image quality. Only for shuttle does the SE rise with decreasing image quality. It is further noticeable that the SE is below 4 frames on average for all sequences, even in case of low image quality, although the maximum could be 15 in our simulations. Considering crew at QP 45, for instance, the correct delay is not found in about 56% of all cases. However, a mean error of 1.16 frames still might be acceptable depending on the application.

SUMMARY AND CONCLUSIONS

We proposed a new technique for image-based synchronization of two TV sequences which are received in a multi-broadcast scenario. The first TV sequence is characterized by lost blocks, a high spatial resolution and a high image quality. The second TV sequence is error-free and typically has a low spatial resolution and moderate image quality. The presented synchronization method is based on an image transformation step which is solved numerically by an optimization method. Experimental results show that the algorithm is robust even in case of arbitrarily shaped errors and cropped image parts.

REFERENCES

[1] Garbas, J.-U., Fecker, U., Tröger, T., Kaup, A., 4D Scalable Multi-View Video Coding Using Disparity Compensated View Filtering and Motion Compensated Temporal Filtering, Intern. Workshop on Multimedia Signal Processing, Victoria, Canada (2006).
[2] Tröger, T., Heiber, H., Schmitt, A., Kaup, A., Inter-Sequence Error Concealment of High-Resolution Video Sequences in a Multi-Broadcast-Reception Scenario, Proc. Europ. Sign. Processing Conf., Lausanne, Switzerland (2008).
[3] Tuytelaars, T., Van Gool, L., Synchronizing Video Sequences, Proc. IEEE Computer Society Conf. on Comp. Vision and Pattern Recog., Washington, DC (2004).
[4] Yan, J., Pollefeys, M., Video Synchronization via Space-Time Interest Point Distribution, Proc.
of Advanced Concepts for Intelligent Vision Systems, Brussels, Belgium (2004).
[5] Whitehead, A., Laganiere, R., Bose, P., Temporal Synchronization of Video Sequences in Theory and in Practice, Proc. IEEE Workshop on Motion and Video Computing, Breckenridge, CO (2005).
[6] Wolf, L., Zomet, A., Wide Baseline Matching between Unsynchronized Video Sequences, Intern. Journal of Computer Vision, Vol. 68, No. 1, Springer (2006).
[7] Ukrainitz, Y., Irani, M., Aligning Sequences and Actions by Maximizing Space-Time Correlations, Lecture Notes in Comp. Science, Vol. 3953, Springer (2006).
[8] Barkowsky, M., Bitto, R., Bialkowski, J., Kaup, A., Comparison of Matching Strategies for Temporal Frame Registration in the Perceptual Evaluation of Video Quality, Proc. Intern. Workshop Video Processing and Quality Metrics, Scottsdale, AZ (2006).
[9] Ohm, J.-R., Multimedia Communication Technology: Representation, Transmission and Identification of Multimedia Signals, Springer, Berlin (2004).
[10] Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T., Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, U.K. (1988).
[11] Nocedal, J., Wright, S.J., Numerical Optimization, Springer, New York (1999).
Chapter 10 Basic Video Compression Techniques 10.1 Introduction to Video compression 10.2 Video Compression with Motion Compensation 10.3 Video compression standard H.261 10.4 Video compression standard
More informationContents. xv xxi xxiii xxiv. 1 Introduction 1 References 4
Contents List of figures List of tables Preface Acknowledgements xv xxi xxiii xxiv 1 Introduction 1 References 4 2 Digital video 5 2.1 Introduction 5 2.2 Analogue television 5 2.3 Interlace 7 2.4 Picture
More informationMultimedia Communications. Image and Video compression
Multimedia Communications Image and Video compression JPEG2000 JPEG2000: is based on wavelet decomposition two types of wavelet filters one similar to what discussed in Chapter 14 and the other one generates
More informationResearch Article. ISSN (Print) *Corresponding author Shireen Fathima
Scholars Journal of Engineering and Technology (SJET) Sch. J. Eng. Tech., 2014; 2(4C):613-620 Scholars Academic and Scientific Publisher (An International Publisher for Academic and Scientific Resources)
More informationDual Frame Video Encoding with Feedback
Video Encoding with Feedback Athanasios Leontaris and Pamela C. Cosman Department of Electrical and Computer Engineering University of California, San Diego, La Jolla, CA 92093-0407 Email: pcosman,aleontar
More informationCharacterization and improvement of unpatterned wafer defect review on SEMs
Characterization and improvement of unpatterned wafer defect review on SEMs Alan S. Parkes *, Zane Marek ** JEOL USA, Inc. 11 Dearborn Road, Peabody, MA 01960 ABSTRACT Defect Scatter Analysis (DSA) provides
More informationIntra-frame JPEG-2000 vs. Inter-frame Compression Comparison: The benefits and trade-offs for very high quality, high resolution sequences
Intra-frame JPEG-2000 vs. Inter-frame Compression Comparison: The benefits and trade-offs for very high quality, high resolution sequences Michael Smith and John Villasenor For the past several decades,
More informationMultimedia Communications. Video compression
Multimedia Communications Video compression Video compression Of all the different sources of data, video produces the largest amount of data There are some differences in our perception with regard to
More informationError Resilience for Compressed Sensing with Multiple-Channel Transmission
Journal of Information Hiding and Multimedia Signal Processing c 2015 ISSN 2073-4212 Ubiquitous International Volume 6, Number 5, September 2015 Error Resilience for Compressed Sensing with Multiple-Channel
More informationImproved Error Concealment Using Scene Information
Improved Error Concealment Using Scene Information Ye-Kui Wang 1, Miska M. Hannuksela 2, Kerem Caglar 1, and Moncef Gabbouj 3 1 Nokia Mobile Software, Tampere, Finland 2 Nokia Research Center, Tampere,
More informationCODING EFFICIENCY IMPROVEMENT FOR SVC BROADCAST IN THE CONTEXT OF THE EMERGING DVB STANDARDIZATION
17th European Signal Processing Conference (EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 CODING EFFICIENCY IMPROVEMENT FOR SVC BROADCAST IN THE CONTEXT OF THE EMERGING DVB STANDARDIZATION Heiko
More informationAnalysis of MPEG-2 Video Streams
Analysis of MPEG-2 Video Streams Damir Isović and Gerhard Fohler Department of Computer Engineering Mälardalen University, Sweden damir.isovic, gerhard.fohler @mdh.se Abstract MPEG-2 is widely used as
More informationUC San Diego UC San Diego Previously Published Works
UC San Diego UC San Diego Previously Published Works Title Classification of MPEG-2 Transport Stream Packet Loss Visibility Permalink https://escholarship.org/uc/item/9wk791h Authors Shin, J Cosman, P
More informationModule 8 VIDEO CODING STANDARDS. Version 2 ECE IIT, Kharagpur
Module 8 VIDEO CODING STANDARDS Lesson 27 H.264 standard Lesson Objectives At the end of this lesson, the students should be able to: 1. State the broad objectives of the H.264 standard. 2. List the improved
More informationPrinciples of Video Compression
Principles of Video Compression Topics today Introduction Temporal Redundancy Reduction Coding for Video Conferencing (H.261, H.263) (CSIT 410) 2 Introduction Reduce video bit rates while maintaining an
More informationA Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique
A Novel Approach towards Video Compression for Mobile Internet using Transform Domain Technique Dhaval R. Bhojani Research Scholar, Shri JJT University, Jhunjunu, Rajasthan, India Ved Vyas Dwivedi, PhD.
More informationOBJECT-BASED IMAGE COMPRESSION WITH SIMULTANEOUS SPATIAL AND SNR SCALABILITY SUPPORT FOR MULTICASTING OVER HETEROGENEOUS NETWORKS
OBJECT-BASED IMAGE COMPRESSION WITH SIMULTANEOUS SPATIAL AND SNR SCALABILITY SUPPORT FOR MULTICASTING OVER HETEROGENEOUS NETWORKS Habibollah Danyali and Alfred Mertins School of Electrical, Computer and
More informationERROR CONCEALMENT TECHNIQUES IN H.264 VIDEO TRANSMISSION OVER WIRELESS NETWORKS
Multimedia Processing Term project on ERROR CONCEALMENT TECHNIQUES IN H.264 VIDEO TRANSMISSION OVER WIRELESS NETWORKS Interim Report Spring 2016 Under Dr. K. R. Rao by Moiz Mustafa Zaveri (1001115920)
More informationLecture 2 Video Formation and Representation
2013 Spring Term 1 Lecture 2 Video Formation and Representation Wen-Hsiao Peng ( 彭文孝 ) Multimedia Architecture and Processing Lab (MAPL) Department of Computer Science National Chiao Tung University 1
More informationSystematic Lossy Error Protection of Video Signals Shantanu Rane, Member, IEEE, Pierpaolo Baccichet, Member, IEEE, and Bernd Girod, Fellow, IEEE
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 18, NO. 10, OCTOBER 2008 1347 Systematic Lossy Error Protection of Video Signals Shantanu Rane, Member, IEEE, Pierpaolo Baccichet, Member,
More informationUsing enhancement data to deinterlace 1080i HDTV
Using enhancement data to deinterlace 1080i HDTV The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published Publisher Andy
More informationAdvanced Computer Networks
Advanced Computer Networks Video Basics Jianping Pan Spring 2017 3/10/17 csc466/579 1 Video is a sequence of images Recorded/displayed at a certain rate Types of video signals component video separate
More informationConstant Bit Rate for Video Streaming Over Packet Switching Networks
International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Constant Bit Rate for Video Streaming Over Packet Switching Networks Mr. S. P.V Subba rao 1, Y. Renuka Devi 2 Associate professor
More informationBit Rate Control for Video Transmission Over Wireless Networks
Indian Journal of Science and Technology, Vol 9(S), DOI: 0.75/ijst/06/v9iS/05, December 06 ISSN (Print) : 097-686 ISSN (Online) : 097-5 Bit Rate Control for Video Transmission Over Wireless Networks K.
More informationTERRESTRIAL broadcasting of digital television (DTV)
IEEE TRANSACTIONS ON BROADCASTING, VOL 51, NO 1, MARCH 2005 133 Fast Initialization of Equalizers for VSB-Based DTV Transceivers in Multipath Channel Jong-Moon Kim and Yong-Hwan Lee Abstract This paper
More informationCONSTRAINING delay is critical for real-time communication
1726 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 16, NO. 7, JULY 2007 Compression Efficiency and Delay Tradeoffs for Hierarchical B-Pictures and Pulsed-Quality Frames Athanasios Leontaris, Member, IEEE,
More informationCompressed-Sensing-Enabled Video Streaming for Wireless Multimedia Sensor Networks Abstract:
Compressed-Sensing-Enabled Video Streaming for Wireless Multimedia Sensor Networks Abstract: This article1 presents the design of a networked system for joint compression, rate control and error correction
More informationFLEXIBLE SWITCHING AND EDITING OF MPEG-2 VIDEO BITSTREAMS
ABSTRACT FLEXIBLE SWITCHING AND EDITING OF MPEG-2 VIDEO BITSTREAMS P J Brightwell, S J Dancer (BBC) and M J Knee (Snell & Wilcox Limited) This paper proposes and compares solutions for switching and editing
More informationChapter 2 Introduction to
Chapter 2 Introduction to H.264/AVC H.264/AVC [1] is the newest video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). The main improvements
More informationSynchronization-Sensitive Frame Estimation: Video Quality Enhancement
Multimedia Tools and Applications, 17, 233 255, 2002 c 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. Synchronization-Sensitive Frame Estimation: Video Quality Enhancement SHERIF G.
More informationParameters optimization for a scalable multiple description coding scheme based on spatial subsampling
Parameters optimization for a scalable multiple description coding scheme based on spatial subsampling ABSTRACT Marco Folli and Lorenzo Favalli Universitá degli studi di Pavia Via Ferrata 1 100 Pavia,
More information1022 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 4, APRIL 2010
1022 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 4, APRIL 2010 Delay Constrained Multiplexing of Video Streams Using Dual-Frame Video Coding Mayank Tiwari, Student Member, IEEE, Theodore Groves,
More informationRobust Transmission of H.264/AVC Video using 64-QAM and unequal error protection
Robust Transmission of H.264/AVC Video using 64-QAM and unequal error protection Ahmed B. Abdurrhman 1, Michael E. Woodward 1 and Vasileios Theodorakopoulos 2 1 School of Informatics, Department of Computing,
More informationA Study of Encoding and Decoding Techniques for Syndrome-Based Video Coding
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com A Study of Encoding and Decoding Techniques for Syndrome-Based Video Coding Min Wu, Anthony Vetro, Jonathan Yedidia, Huifang Sun, Chang Wen
More information1. INTRODUCTION. Index Terms Video Transcoding, Video Streaming, Frame skipping, Interpolation frame, Decoder, Encoder.
Video Streaming Based on Frame Skipping and Interpolation Techniques Fadlallah Ali Fadlallah Department of Computer Science Sudan University of Science and Technology Khartoum-SUDAN fadali@sustech.edu
More informationMPEG-2. ISO/IEC (or ITU-T H.262)
1 ISO/IEC 13818-2 (or ITU-T H.262) High quality encoding of interlaced video at 4-15 Mbps for digital video broadcast TV and digital storage media Applications Broadcast TV, Satellite TV, CATV, HDTV, video
More informationWYNER-ZIV VIDEO CODING WITH LOW ENCODER COMPLEXITY
WYNER-ZIV VIDEO CODING WITH LOW ENCODER COMPLEXITY (Invited Paper) Anne Aaron and Bernd Girod Information Systems Laboratory Stanford University, Stanford, CA 94305 {amaaron,bgirod}@stanford.edu Abstract
More informationError Resilient Video Coding Using Unequally Protected Key Pictures
Error Resilient Video Coding Using Unequally Protected Key Pictures Ye-Kui Wang 1, Miska M. Hannuksela 2, and Moncef Gabbouj 3 1 Nokia Mobile Software, Tampere, Finland 2 Nokia Research Center, Tampere,
More informationUNIVERSAL SPATIAL UP-SCALER WITH NONLINEAR EDGE ENHANCEMENT
UNIVERSAL SPATIAL UP-SCALER WITH NONLINEAR EDGE ENHANCEMENT Stefan Schiemenz, Christian Hentschel Brandenburg University of Technology, Cottbus, Germany ABSTRACT Spatial image resizing is an important
More informationOn the Characterization of Distributed Virtual Environment Systems
On the Characterization of Distributed Virtual Environment Systems P. Morillo, J. M. Orduña, M. Fernández and J. Duato Departamento de Informática. Universidad de Valencia. SPAIN DISCA. Universidad Politécnica
More informationA Framework for Advanced Video Traces: Evaluating Visual Quality for Video Transmission Over Lossy Networks
Hindawi Publishing Corporation EURASIP Journal on Applied Signal Processing Volume, Article ID 3, Pages DOI.55/ASP//3 A Framework for Advanced Video Traces: Evaluating Visual Quality for Video Transmission
More informationMULTIVIEW DISTRIBUTED VIDEO CODING WITH ENCODER DRIVEN FUSION
MULTIVIEW DISTRIBUTED VIDEO CODING WITH ENCODER DRIVEN FUSION Mourad Ouaret, Frederic Dufaux and Touradj Ebrahimi Institut de Traitement des Signaux Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015
More informationDELTA MODULATION AND DPCM CODING OF COLOR SIGNALS
DELTA MODULATION AND DPCM CODING OF COLOR SIGNALS Item Type text; Proceedings Authors Habibi, A. Publisher International Foundation for Telemetering Journal International Telemetering Conference Proceedings
More informationVideo compression principles. Color Space Conversion. Sub-sampling of Chrominance Information. Video: moving pictures and the terms frame and
Video compression principles Video: moving pictures and the terms frame and picture. one approach to compressing a video source is to apply the JPEG algorithm to each frame independently. This approach
More informationSCALABLE video coding (SVC) is currently being developed
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 16, NO. 7, JULY 2006 889 Fast Mode Decision Algorithm for Inter-Frame Coding in Fully Scalable Video Coding He Li, Z. G. Li, Senior
More informationTHE CAPABILITY of real-time transmission of video over
1124 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 15, NO. 9, SEPTEMBER 2005 Efficient Bandwidth Resource Allocation for Low-Delay Multiuser Video Streaming Guan-Ming Su, Student
More informationPACKET-SWITCHED networks have become ubiquitous
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 7, JULY 2004 885 Video Compression for Lossy Packet Networks With Mode Switching and a Dual-Frame Buffer Athanasios Leontaris, Student Member, IEEE,
More informationSERIES J: CABLE NETWORKS AND TRANSMISSION OF TELEVISION, SOUND PROGRAMME AND OTHER MULTIMEDIA SIGNALS Measurement of the quality of service
International Telecommunication Union ITU-T J.342 TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (04/2011) SERIES J: CABLE NETWORKS AND TRANSMISSION OF TELEVISION, SOUND PROGRAMME AND OTHER MULTIMEDIA
More informationPEVQ ADVANCED PERCEPTUAL EVALUATION OF VIDEO QUALITY. OPTICOM GmbH Naegelsbachstrasse Erlangen GERMANY
PEVQ ADVANCED PERCEPTUAL EVALUATION OF VIDEO QUALITY OPTICOM GmbH Naegelsbachstrasse 38 91052 Erlangen GERMANY Phone: +49 9131 / 53 020 0 Fax: +49 9131 / 53 020 20 EMail: info@opticom.de Website: www.opticom.de
More informationA video signal consists of a time sequence of images. Typical frame rates are 24, 25, 30, 50 and 60 images per seconds.
Video coding Concepts and notations. A video signal consists of a time sequence of images. Typical frame rates are 24, 25, 30, 50 and 60 images per seconds. Each image is either sent progressively (the
More informationObjective video quality measurement techniques for broadcasting applications using HDTV in the presence of a reduced reference signal
Recommendation ITU-R BT.1908 (01/2012) Objective video quality measurement techniques for broadcasting applications using HDTV in the presence of a reduced reference signal BT Series Broadcasting service
More informationModule 8 VIDEO CODING STANDARDS. Version 2 ECE IIT, Kharagpur
Module 8 VIDEO CODING STANDARDS Lesson 24 MPEG-2 Standards Lesson Objectives At the end of this lesson, the students should be able to: 1. State the basic objectives of MPEG-2 standard. 2. Enlist the profiles
More informationRobust Transmission of H.264/AVC Video Using 64-QAM and Unequal Error Protection
Robust Transmission of H.264/AVC Video Using 64-QAM and Unequal Error Protection Ahmed B. Abdurrhman, Michael E. Woodward, and Vasileios Theodorakopoulos School of Informatics, Department of Computing,
More informationVideo Codec Requirements and Evaluation Methodology
Video Codec Reuirements and Evaluation Methodology www.huawei.com draft-ietf-netvc-reuirements-02 Alexey Filippov (Huawei Technologies), Andrey Norkin (Netflix), Jose Alvarez (Huawei Technologies) Contents
More informationP SNR r,f -MOS r : An Easy-To-Compute Multiuser
P SNR r,f -MOS r : An Easy-To-Compute Multiuser Perceptual Video Quality Measure Jing Hu, Sayantan Choudhury, and Jerry D. Gibson Abstract In this paper, we propose a new statistical objective perceptual
More informationMinimax Disappointment Video Broadcasting
Minimax Disappointment Video Broadcasting DSP Seminar Spring 2001 Leiming R. Qian and Douglas L. Jones http://www.ifp.uiuc.edu/ lqian Seminar Outline 1. Motivation and Introduction 2. Background Knowledge
More informationRECOMMENDATION ITU-R BT.1203 *
Rec. TU-R BT.1203 1 RECOMMENDATON TU-R BT.1203 * User requirements for generic bit-rate reduction coding of digital TV signals (, and ) for an end-to-end television system (1995) The TU Radiocommunication
More informationA New Standardized Method for Objectively Measuring Video Quality
1 A New Standardized Method for Objectively Measuring Video Quality Margaret H Pinson and Stephen Wolf Abstract The National Telecommunications and Information Administration (NTIA) General Model for estimating
More informationInterleaved Source Coding (ISC) for Predictive Video Coded Frames over the Internet
Interleaved Source Coding (ISC) for Predictive Video Coded Frames over the Internet Jin Young Lee 1,2 1 Broadband Convergence Networking Division ETRI Daejeon, 35-35 Korea jinlee@etri.re.kr Abstract Unreliable
More informationRegion Adaptive Unsharp Masking based DCT Interpolation for Efficient Video Intra Frame Up-sampling
International Conference on Electronic Design and Signal Processing (ICEDSP) 0 Region Adaptive Unsharp Masking based DCT Interpolation for Efficient Video Intra Frame Up-sampling Aditya Acharya Dept. of
More informationMPEG has been established as an international standard
1100 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 9, NO. 7, OCTOBER 1999 Fast Extraction of Spatially Reduced Image Sequences from MPEG-2 Compressed Video Junehwa Song, Member,
More informationError-Resilience Video Transcoding for Wireless Communications
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Error-Resilience Video Transcoding for Wireless Communications Anthony Vetro, Jun Xin, Huifang Sun TR2005-102 August 2005 Abstract Video communication
More informationSystematic Lossy Error Protection of Video based on H.264/AVC Redundant Slices
Systematic Lossy Error Protection of based on H.264/AVC Redundant Slices Shantanu Rane and Bernd Girod Information Systems Laboratory Stanford University, Stanford, CA 94305. {srane,bgirod}@stanford.edu
More informationROBUST ADAPTIVE INTRA REFRESH FOR MULTIVIEW VIDEO
ROBUST ADAPTIVE INTRA REFRESH FOR MULTIVIEW VIDEO Sagir Lawan1 and Abdul H. Sadka2 1and 2 Department of Electronic and Computer Engineering, Brunel University, London, UK ABSTRACT Transmission error propagation
More informationEvaluation of video quality metrics on transmission distortions in H.264 coded video
1 Evaluation of video quality metrics on transmission distortions in H.264 coded video Iñigo Sedano, Maria Kihl, Kjell Brunnström and Andreas Aurelius Abstract The development of high-speed access networks
More informationPerformance Evaluation of Error Resilience Techniques in H.264/AVC Standard
Performance Evaluation of Error Resilience Techniques in H.264/AVC Standard Ram Narayan Dubey Masters in Communication Systems Dept of ECE, IIT-R, India Varun Gunnala Masters in Communication Systems Dept
More informationError concealment techniques in H.264 video transmission over wireless networks
Error concealment techniques in H.264 video transmission over wireless networks M U L T I M E D I A P R O C E S S I N G ( E E 5 3 5 9 ) S P R I N G 2 0 1 1 D R. K. R. R A O F I N A L R E P O R T Murtaza
More informationError Concealment for SNR Scalable Video Coding
Error Concealment for SNR Scalable Video Coding M. M. Ghandi and M. Ghanbari University of Essex, Wivenhoe Park, Colchester, UK, CO4 3SQ. Emails: (mahdi,ghan)@essex.ac.uk Abstract This paper proposes an
More informationReduced complexity MPEG2 video post-processing for HD display
Downloaded from orbit.dtu.dk on: Dec 17, 2017 Reduced complexity MPEG2 video post-processing for HD display Virk, Kamran; Li, Huiying; Forchhammer, Søren Published in: IEEE International Conference on
More informationMotion Video Compression
7 Motion Video Compression 7.1 Motion video Motion video contains massive amounts of redundant information. This is because each image has redundant information and also because there are very few changes
More informationAn Efficient Low Bit-Rate Video-Coding Algorithm Focusing on Moving Regions
1128 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 11, NO. 10, OCTOBER 2001 An Efficient Low Bit-Rate Video-Coding Algorithm Focusing on Moving Regions Kwok-Wai Wong, Kin-Man Lam,
More informationContent storage architectures
Content storage architectures DAS: Directly Attached Store SAN: Storage Area Network allocates storage resources only to the computer it is attached to network storage provides a common pool of storage
More informationCS229 Project Report Polyphonic Piano Transcription
CS229 Project Report Polyphonic Piano Transcription Mohammad Sadegh Ebrahimi Stanford University Jean-Baptiste Boin Stanford University sadegh@stanford.edu jbboin@stanford.edu 1. Introduction In this project
More informationColor Image Compression Using Colorization Based On Coding Technique
Color Image Compression Using Colorization Based On Coding Technique D.P.Kawade 1, Prof. S.N.Rawat 2 1,2 Department of Electronics and Telecommunication, Bhivarabai Sawant Institute of Technology and Research
More informationOptimized Color Based Compression
Optimized Color Based Compression 1 K.P.SONIA FENCY, 2 C.FELSY 1 PG Student, Department Of Computer Science Ponjesly College Of Engineering Nagercoil,Tamilnadu, India 2 Asst. Professor, Department Of Computer
More informationRate-distortion optimized mode selection method for multiple description video coding
Multimed Tools Appl (2014) 72:1411 14 DOI 10.1007/s11042-013-14-8 Rate-distortion optimized mode selection method for multiple description video coding Yu-Chen Sun & Wen-Jiin Tsai Published online: 19
More informationFast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264
Fast MBAFF/PAFF Motion Estimation and Mode Decision Scheme for H.264 Ju-Heon Seo, Sang-Mi Kim, Jong-Ki Han, Nonmember Abstract-- In the H.264, MBAFF (Macroblock adaptive frame/field) and PAFF (Picture
More informationStudy of White Gaussian Noise with Varying Signal to Noise Ratio in Speech Signal using Wavelet
American International Journal of Research in Science, Technology, Engineering & Mathematics Available online at http://www.iasir.net ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629
More informationInterlace and De-interlace Application on Video
Interlace and De-interlace Application on Video Liliana, Justinus Andjarwirawan, Gilberto Erwanto Informatics Department, Faculty of Industrial Technology, Petra Christian University Surabaya, Indonesia
More informationScalable multiple description coding of video sequences
Scalable multiple description coding of video sequences Marco Folli, and Lorenzo Favalli Electronics Department University of Pavia, Via Ferrata 1, 100 Pavia, Italy Email: marco.folli@unipv.it, lorenzo.favalli@unipv.it
More information