Impact Of ATM Traffic Shaping On MPEG-2 Video Quality*
IJCA, Vol. 10, No. 3, Sept.

Yongdong Wang and Michael Jurczyk
University of Missouri - Columbia, Columbia, Missouri 65211, USA

Abstract

This paper presents the impact of traffic shaping on the quality of MPEG-2 video transmissions over ATM networks. The performance evaluation is accomplished by simulating an ATM network using real MPEG-2 video streams from video broadcast applications. First, it is shown how cell loss influences the perceived video quality. Cell loss can result in either frame corruption or frame loss, which cause differing video quality impairments. Next, a simulation study is conducted showing how leaky-bucket traffic shaping influences video quality. It is discussed how shaper parameters have to be chosen to obtain the highest video quality. In general, underdimensioning as well as overdimensioning of the shaper should be avoided because both result in video quality degradation. Also, it is shown that the optimal choice of the shaper's token buffer size depends on the overall motion/burstiness in the video. A well-dimensioned shaper is able to increase video quality significantly as compared to not shaping at all. Most of the research on traffic shaping centers on the characteristics of the shaped traffic rather than on the resulting application performance. This study clearly shows that traffic shaping can actually enhance video quality significantly while it shapes the traffic to adhere to the negotiated QoS (Quality of Service) parameters.

Key Words: ATM networks, networked multimedia, MPEG video coding, traffic shaping, video quality

1 Introduction

Digital video transmission is a difficult and challenging issue in the area of communication. The bandwidth requirements for transmitting digital video are several orders of magnitude greater than for regular data transmission [25]. This drives the evolution of Broadband Integrated Service Digital Networks (B-ISDN) [6].
The underlying technology that makes B-ISDN possible is the Asynchronous Transfer Mode (ATM). Due to its high bandwidth and QoS guarantees, ATM networks are able to provide high-quality digital video transmission [16, 23]. (* This research was supported by the University of Missouri Research Board under Grant RB. Department of Computer Engineering and Computer Science, 121 Engineering Building West, mjurczyk@cecs.missouri.edu.) Numerous works have been reported on video transmission over ATM networks. It is possible to control the video coding procedure and generate a constant bit rate (CBR) video stream [10], which can be transmitted efficiently using ATM's AAL-1. However, this results in varying video quality. Videos can also be encoded using fixed quantization (fixed video quality), resulting in variable bit rate (VBR) video streams [20]. On the one hand, when several independent VBR video streams share the same ATM switch buffers, the low bit rate of one source will be offset by the high bit rates of other sources, resulting in high network utilization. On the other hand, buffer overflow might occur if multiple sources transmit at high rates at the same time, resulting in cell loss. Due to the MPEG coding, losing a data unit can cause information loss not only in the current frame but also in the dependent frames. In addition, VBR video exhibits significant burstiness due to scene changes and the compression algorithm. These properties make multiplexed VBR video transmission over ATM a challenging issue. To combat the cell loss problem, two control mechanisms are present in ATM networks: (1) connection setup negotiation and (2) traffic shaping/policing.
During connection setup negotiation, traffic descriptors (peak and average cell rate, burst length and duration) and service requirements (maximum tolerable delay and desired delay jitter) are taken into account to determine whether to accept the connection and the amount of resources to reserve for a particular connection. The second method is to use traffic shaping/policing to change the behavior of the VBR video stream. Numerous works about traffic shaping/policing were recently introduced in the literature, e.g., [1, 2, 5, 7, 12, 14, 18, 19, 26-28]. A traffic policer sits at the network side and acts like a policeman to prevent the end-users from breaking the QoS contracts. If the measured traffic exceeds the negotiated parameters, the traffic policer will drop cells or mark them as being low priority [1, 12, 14, 26]. Traffic shaping is done at the user side. The user traffic is shaped by delaying cells rather than dropping cells to help meet the QoS contract [18, 19]. While it was shown in [12] that the choice of the traffic policing algorithm has a profound impact on the video quality, the influence of traffic shaping on video quality is still not well studied (most of the research on traffic shaping centers on the characteristics of the shaped traffic rather than on the resulting application performance).

ISCA Copyright 2003

This paper therefore studies the influence of traffic shaping on video quality. It is shown that leaky bucket traffic shaping is able to increase broadcast video quality significantly without breaking the QoS contract. The paper makes the following three contributions: 1) The perceptual impact of cell loss on video quality is discussed. Cell loss can result in either frame corruption or frame loss, which cause differing video quality impairments. 2) The influence of traffic shaping on the resulting application performance, and not just on the characteristics of the shaped traffic itself, is studied. It is shown that traffic shaping, which is normally used to make the traffic adhere to the negotiated QoS parameters, can also help to achieve higher video quality. 3) It is shown what kind of tradeoffs are involved in choosing optimal shaper parameters and how these parameters can be calculated from the video statistics. The rest of the paper is structured as follows. In the next section, MPEG coding is reviewed, while in Section 3, ATM basics and the leaky bucket traffic shaper are presented. In Section 4, the perceptual impact of cell loss on video quality is discussed. The simulation setup is introduced in Section 5. Section 6 presents a simulation study on how traffic shaping influences the quality of video transmitted over ATM networks, while optimum shaper parameters are derived in Section 7. Conclusions are presented in Section 8.

2 MPEG Video Coding

The acronym MPEG stands for Moving Picture Experts Group, which worked to generate the specification under ISO [11]. The specification includes three standards: MPEG-1, MPEG-2, and MPEG-4. The MPEG-1 and MPEG-2 standards are similar in basic concepts. Both of them are based on motion compensation and DCT (Discrete Cosine Transform) coding. MPEG-2 supports several profiles and resolution levels.
MPEG-4 deviates from these traditional approaches and introduces the notion of objects that can be coded and transmitted independently. Because MPEG-2 is a finalized standard and is presently being utilized in more and more applications, this study concentrates on MPEG-2 video. MPEG video is composed of a hierarchy of layers, which are used for error control, random access, and synchronization. From the top, the first layer is the video sequence layer. The second layer is the group of pictures (GOP) layer, which is composed of one intraframe (I frame) and some non-intraframes (P or B frames). The third layer is the picture layer and the layer beneath is the slice layer. Each slice is a contiguous sequence of macroblocks (typically, a slice is a picture row). Macroblocks can be further divided into blocks, which are 8x8 arrays of pixels. Each of these layers has its own unique 32-bit start code. As in the standard for still image compression, the Discrete Cosine Transform (DCT) is used in MPEG coding. DCT decomposes the signal into underlying spatial frequencies. It is applied to each 8x8 block. A frequency-adaptive quantization of the DCT coefficients, according to human visual characteristics, is performed. As a result of DCT and quantization, most of the higher-frequency coefficients are quantized to zero. Since most of the non-zero DCT coefficients are typically concentrated in the upper left-hand corner of the coefficient matrix, a zigzag scanning pattern together with run-length coding of the DCT coefficients is used to optimize compression. As a final step, Huffman-like entropy coding is applied. Considerable compression efficiency can be achieved if temporal redundancy is considered. Temporal processing exploiting this redundancy uses motion-compensated prediction that is applied to each macroblock. The principle of motion-compensated prediction is that consecutive video frames are typically similar.
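The zigzag-scan-plus-run-length step described above can be sketched as follows. This is our own illustrative code, not the standard's reference implementation: it walks the anti-diagonals of an 8x8 quantized block so that low-frequency coefficients come first, then collapses runs of zeros.

```python
def zigzag_order(n=8):
    """(row, col) pairs in zigzag scan order: walk the anti-diagonals,
    alternating direction on each diagonal."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length(coeffs):
    """Collapse scanned coefficients into (zero_run, value) pairs,
    with an end-of-block marker once only zeros remain."""
    pairs, run = [], 0
    for v in coeffs:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append("EOB")
    return pairs

# Quantized block with the typical energy concentration in the upper left:
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0] = 16, 5, -3
scanned = [block[r][c] for r, c in zigzag_order()]
print(run_length(scanned))  # [(0, 16), (0, 5), (0, -3), 'EOB']
```

Because the 61 trailing zeros collapse into a single end-of-block marker, a mostly-zero block compresses to just a few symbols, which is exactly why the zigzag ordering is chosen.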
A two-dimensional spatial search is performed on each luminance macroblock of the reference frame. When a relatively good match with the current macroblock is found, the encoder assigns a motion vector to the macroblock. After obtaining the motion vector, the difference (residual error) between the current and the referenced macroblock is coded and appended to the motion vector. If no match is found, the current macroblock is coded using the intraframe coding technique. A video frame can be encoded as an intra-frame (I frame), forward-predicted frame (P frame), or bi-directionally predicted frame (B frame). An I frame is encoded as a single image with no reference to any past or future frames, while motion compensation is used in B and P frames. In P frames only forward prediction is used, while B frames are coded based on a backward prediction from succeeding I or P frames, as well as on a forward prediction from a previous frame. Backward prediction requires the future frame to be encoded and transmitted first. Each video sequence is composed of a series of Groups of Pictures (GOPs). The GOP structure is intended to assist random access into a sequence. A GOP is an independently decodable unit that can be of any size as long as it begins with an I frame. Figure 1 shows a GOP pattern in which the arrows represent the inter-frame dependencies. A typical, widely used GOP is the sequence IBBPBBPBBPBB, which is also used in this study.

Figure 1: A GOP encoding pattern and dependencies.

3 ATM Networks

An ATM network is a cell-switching network that uses a fixed-length packet (cell) as the transmission unit [16]. The length of a cell is 53 bytes, with a 5-byte header and a 48-byte payload. ATM networks operate in a connection-oriented
mode: first, a logical/virtual connection is set up, over which data is then transferred. During the connection setup phase, quality of service (QoS) parameters such as sustained cell rate, peak cell rate, and burst length can be defined for a specific connection. The call admission control (CAC) then tries to set up the connection adhering to these QoS parameters. Once the connection is up, these parameters are guaranteed by the network for the lifetime of the connection. The core of an ATM network consists of ATM switches interconnected by fiber links. An ATM switch consists of a switch fabric and cell buffers that are used to eliminate cell contention. Three main switch architectures exist that differ mainly in the placement of the internal cell buffers [3, 4, 13]. In input-buffered switches, a buffer is located at each switch input port; in output-buffered switches, a buffer is located at each switch output port; while in central-memory buffered switches, cells are buffered in a central memory. Output queueing is widely used, and the ATM switch used in this study is based on this buffering strategy. Symmetric output-buffered switches with B inputs and B outputs are assumed, in which queues are located at each output port of the switching element. Cells arriving simultaneously at input ports that are destined for the same output are queued in the buffer of that output port. The switch is able to write up to B cells (at most one cell from each input) to a specific output queue during one switch cycle time to avoid cell loss. The cells in the output queue are served on a FIFO basis to maintain the integrity of the cell sequence. Cells will be dropped if they reach a full output buffer. Many different traffic shapers were introduced in the literature. In contrast to traffic policers that discard nonconforming cells, traffic shapers space cell departures by delaying cells in a buffer. To accomplish this, a leaky bucket is most often used.
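The token-regulated shaper variant used in this paper (described in detail below: token buffer of size TB drained at rate TBR, input buffer drained at rate IBR, with each departing cell depositing one token and a full token buffer stalling departures) can be sketched in discrete time. All parameter values here are illustrative.

```python
def shape(arrivals, TB, TBR, IBR, dt=1.0):
    """arrivals[t] = cells arriving in step t; returns cells released per step."""
    input_buf, tokens, released = 0.0, 0.0, []
    for a in arrivals:
        input_buf += a
        tokens = max(0.0, tokens - TBR * dt)                   # tokens leak at rate TBR
        # Release is limited by waiting cells, the drain rate IBR,
        # and the free space left in the token buffer:
        can_send = max(0.0, min(input_buf, IBR * dt, TB - tokens))
        input_buf -= can_send
        tokens += can_send                                     # each departing cell adds a token
        released.append(can_send)
    return released

# Per the paper's rule IBR = TB/T + TBR (here with T = 1 time unit):
print(shape([10, 0, 0, 0, 0], TB=4, TBR=2, IBR=4 / 1 + 2))
# [4.0, 2.0, 2.0, 2.0, 0.0]
```

Note how a burst of 10 cells is released as an initial burst of at most TB cells followed by the sustained rate TBR, which is exactly the smoothing behavior the shaper is meant to provide.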
The general model of a leaky bucket traffic shaper is depicted in Figure 2 [14, 17, 24]. The leaky bucket consists of an input buffer of size IB and a token buffer of size TB. Cells drain out of the input buffer at a rate of IBR and tokens drain out of the token buffer at a rate of TBR. To guarantee that no cells are lost in the shaper, the input buffer size is chosen very large. TB and TBR can be adjusted, while IBR is set to IBR = TB/T + TBR, with T the frame duration of 33 ms [14]. Every cell entering the network (leaving the input buffer) places one token into the token buffer. If the token buffer is full, cells will wait in the input buffer until an empty slot in the token buffer is available.

4 Perceptual Impact of Cell Loss on Video Quality

Losing information in an MPEG stream results in video quality degradation. The level of degradation depends on what information is lost and to which frame the information belongs. Information loss results in two distinct video impairments, depending on whether information within a frame or a frame itself is lost.

4.1 Loss of Information within a Frame

The loss of information within a frame will impact the frame slice the lost information belongs to. As an example, consider Figure 3. Figure 3a shows an original I frame of the flower garden movie, while Figure 3b shows the decoded frame when losing one ATM cell. The video sequence was encoded with a video slice corresponding to a horizontal strip of an image. The loss of a cell causes image corruption up to the next resynchronization point (i.e., the next slice header). This is referred to as spatial loss propagation. The extent of frame corruption depends on the relative position of the lost information within a slice. Due to the predictive nature of the MPEG-2 algorithm, when losses occur in a reference frame (I or P), image corruption will remain until the next resynchronization point (i.e., the next I frame) is received.
This results in impairment propagation across multiple frames, which is known as temporal loss propagation. This is shown in Figure 4, where the original last B frame of the GOP (Figure 4a) and the corrupted last B frame of the GOP (Figure 4b) are shown, assuming cell loss in the GOP's I frame. Figures 3b and 4b also show that the MPEG decoder used implicitly implements some error concealment. Without any error concealment, the depicted cell loss would result in a black block starting from the point of cell loss and ending at the slice end (cell loss and spatial loss propagation in Figures 3b and 4b) [8]. This happens if, after a frame was decoded, the frame buffer space in the decoder is reset to all zeros. When the next frame is decoded into this buffer and part of a slice is lost, the corresponding zeros will not be overwritten, resulting in the black block. In the decoder used, buffer space is not reset before a new frame is decoded. Thus, when part of a slice is lost, the corresponding buffer section is not overwritten and will contain decoded macroblocks from the previous frame, alleviating the impact of cell loss on video quality.

Figure 2: General model of a leaky bucket.

4.2 Loss of Frame

Each encoded frame in an MPEG stream starts with a START_OF_PICTURE (SOP) code. When processing a bit stream, the decoder will search the stream until an SOP occurs. After receiving the SOP, the decoder starts to decode the
information that follows to assemble the frame. If an SOP is lost during stream transmission, the decoder re-synchronizes by reading in the bit stream and discarding the bits until it recognizes the SOP of the next frame in the stream. During this resynchronization, the decoder will continue displaying the previous frame. Thus, when an SOP is lost, the corresponding frame will be skipped and a previous frame will be displayed instead. The effect of frame loss on video quality depends on the type of frame lost. Because no other frames depend on B frames, no frames will be corrupted if a B frame is lost. However, because a different frame will be displayed, a slight jerk that depends on the overall motion in that scene might be visible when watching the movie. When an I or P frame is lost, all P and B frames that depend on the lost frame will most likely be corrupted. Motion vectors in the P and B frames point to macroblocks in the lost frame. The decoder uses the corresponding macroblocks from the previous I or P frame instead, which results in the corruption of moving objects within the frame. Because other P and B frames within the GOP depend on the corrupted frame, those frames will be corrupted as well (temporal loss propagation). The decoder will resynchronize after the next I frame is received, ending the video corruption. This impairment is shown in Figure 3c. In the scene, the tree is a moving object (moving from right to left through the picture), while the background is moving only very slowly. In Figure 3c, the decoded B frame following a lost I frame is shown. The moving object (tree) is corrupted and, although not really visible in Figure 3c, parts of the background are corrupted as well. Figure 4c shows the last B frame of the GOP, which is also corrupted because of the I frame loss (temporal loss propagation).
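Impairments like these are quantified in this paper via the mean square error between each displayed frame and its original. A minimal sketch, with frames represented as flat lists of luminance samples and made-up data:

```python
def frame_mse(original, decoded):
    """Average squared per-pixel difference between two equally sized frames."""
    return sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)

# A single corrupted sample (10 -> 30) in a tiny 4-pixel frame:
print(frame_mse([10, 10, 10, 10], [10, 10, 10, 30]))  # 100.0
```

An uncorrupted frame yields MSE = 0, while a skipped frame (replaced by the previous displayed frame) contributes the full frame-to-frame difference, which is why frame loss tends to dominate the averaged MSE.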
In this paper, video quality is judged by the average mean square error (MSE), calculated by comparing each displayed frame with its corresponding original frame, over all displayed video frames. Because the loss of I and P frames results in the corruption of dependent P and B frames in a GOP, frame loss and MSE are correlated. However, this is not true if a B frame is lost (a previous frame will be displayed instead, resulting in the difference of only one frame), so the percentage of frame loss is also used as a video quality measure. The quality of the corrupted videos was also judged subjectively by watching the video. A subjectively lower video quality coincided with a higher measured MSE, as will be shown further in Section 6. For example, the video quality resulting from losing a frame (Figures 3c and 4c) was judged as being lower than the video quality resulting from losing a cell within a frame (Figures 3b and 4b). This coincided with an MSE = 400 for the lost frame case and an MSE = 20 for the lost cell case (measured over the affected GOP).

Figure 3: (a) original I frame, (b) effect of cell loss within I frame, (c) effect in following B frame when I frame is lost.

Figure 4: (a) original B frame, (b) effect of cell loss within I frame, (c) effect of loss of I frame; last frame (B) of GOP shown.

5 Simulation Setup

In our study, the system under consideration shown in Figure 5 is used. It includes an ATM switch, a number of
video sources, traffic shapers, and a destination. In this study, we consider video broadcasting applications. Video broadcasting has a maximum end-to-end delay requirement of 500 ms [9] (including encoding, cell packaging, transporting, decoding, and displaying of frames). Cells received by the destination that would result in a higher end-to-end delay will be dropped by the destination.

Figure 5: ATM system diagram.

Eight different real MPEG-2 video clips are used as the video sources for video broadcasting applications. The movies used were downloaded from different web sites, including [15], and converted to MPEG-2. Essential simulation results for all eight movies are shown in Section 7, while in-depth results for three of those eight movies are shown in Section 6. The first of these three videos is a low motion video clip that is part of an interview. It is approximately 42 seconds long (consisting of 1,297 frames) with all low motion scenes. In the following discussions, we call this video clip the low motion video. The second video is a movie clip from the trailer of the movie Titanic. It is approximately 45 seconds long (consisting of 1,408 frames) with a mix of high and medium motion scenes. In the following discussions, we call this video clip the medium motion video. The third movie depicts an animated high-speed flight over a surface. It is approximately 25 seconds long (consisting of 733 frames) with high motion scenes and is therefore called the high motion video. All three video clips were encoded with the group of pictures (GOP) pattern IBBPBBPBBPBB. Video can be characterized by, among others, its average cell rate (ACR), which is defined as the total number of ATM cells in the video clip divided by the playing time of the video.
The low motion video clip used in this study has an ACR of 2,640 cells/s, the medium motion video clip has an ACR of 4,800 cells/s, while the high motion video clip has an ACR of 5,310 cells/s. Frame statistics of the three movies are listed in Table 1. For transmission, the video stream is packaged into AAL-1 layer cells with a 47-byte payload. The number of video sources in a simulation under consideration ranges from 10 to 16 to make a reasonable traffic load range. The start time of each video source is randomly selected within 360 ms (the time period of one GOP). Our ATM simulator is based on the ATM simulator developed by the National Institute of Standards and Technology (NIST) [22] and was adapted to incorporate real video sources and the various traffic shapers discussed in this study. All simulation results shown are the average of 60 independent simulation runs. All ATM switches studied have output buffers with a length of 100 cells each. For the low motion video simulations, the switch speed is set to 25 Mb/s and the switch is connected to 25 Mb/s physical links with a length of 1 km each. For the medium and high motion video simulations, the switch speed is set to 50 Mb/s and the switch is connected to 50 Mb/s physical links with a length of 1 km each. These rather low switch/link rates were chosen to obtain a reasonable simulation run time (fewer video sources have to be used to generate realistic switch loads). Longer simulation runs with faster switches/links and more video sources show that the results presented and the conclusions drawn are also valid for higher switch and link rates (e.g., OC-3). The MPEG-2 encoder/decoder pair we used is the MPEG-2 Video Codec developed by the MPEG Software Simulation Group (MSSG) [21]. If the frame start code of a frame is lost during transmission, the decoder will not decode the frame but skip it. Thus, frames might be lost.
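The AAL-1 packaging and the average cell rate described above can be computed directly from per-frame sizes. A sketch with made-up frame sizes; the 47-byte payload is from the text, and the frame rate of 30 frames/s is an assumption consistent with the 33 ms frame duration used in the simulations:

```python
import math

PAYLOAD = 47  # AAL-1 payload bytes per 53-byte ATM cell

def cells_per_frame(frame_bytes):
    """Number of AAL-1 cells needed to carry one encoded frame."""
    return math.ceil(frame_bytes / PAYLOAD)

def acr(frame_sizes_bytes, fps=30):
    """Average cell rate: total cells divided by playing time (cells/s)."""
    total_cells = sum(cells_per_frame(b) for b in frame_sizes_bytes)
    return total_cells * fps / len(frame_sizes_bytes)

print(cells_per_frame(470))            # 10
print(acr([4700, 470, 470], fps=30))   # (100 + 10 + 10) * 30 / 3 = 1200.0
```

The large gap between the I-frame and B-frame cell counts in such a trace is the burstiness that the shaper's token buffer must absorb, which is the subject of Sections 6 and 7.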
If other parts of a frame are lost, the decoder is able to decode that frame but will introduce errors in that frame. To capture the quality of a transmitted video, we used three measures: (1) the cell loss ratio (number of lost cells divided by the number of all cells transmitted), (2) the frame loss ratio (number of lost frames divided by the number of all frames), and (3) the mean square error (MSE) of the received frames compared to their corresponding original frames. During a simulation run, a sending node sends the MPEG-coded video file frame by frame to the shaper/network with a frame duration of 33 ms. An MPEG-coded video frame i to be sent out is divided into n_i ATM cells; the cells are equally spaced during the frame period of 33 ms and are sent out. At the receiver side, received video data is combined into an MPEG video file. Video data lost in the network is therefore not included in the reconstructed MPEG video file. The received video file is then decoded into individual YUV frame components using the MPEG decoder. The decoder was slightly modified to indicate frames that were skipped during the decoding process to calculate the lost-frame ratio.

Table 1: Minimum, maximum, and average number of cells per frame of the movies used

            | Low motion movie | Medium motion movie | High motion movie
Frame Type  | I    B    P      | I    B    P         | I    B    P
Min         |                  |                     |
Max         |                  |                     |
Average     |                  |                     |

Each Y, U, and V frame component is then compared to its original
(non-corrupted) counterpart to calculate the resulting MSE.

6 Effect of Regular Traffic Shaping on Broadcast Video Quality

In this section we investigate the effects of traffic shaping on the quality of broadcast video, which is not very delay sensitive and allows a maximum end-to-end delay of 500 ms. The influence of the traffic shaper's TB and TBR on the video quality is studied. TB adjusts the burst size and peak cell rate of the cell stream. TBR adjusts the sustained cell rate (SCR) of the cell stream leaving the shaper and is therefore related to the ACR of the video clip. In all simulations of this study, we set TBR = S * ACR, with S being an overdimensioning factor with 1 ≤ S ≤ 5 [14] that is used to scale the TBR (note that when S is larger than 5, the traffic shaper loses its effect and the video transmission becomes VBR). The overdimensioning factor S determines the rate at which the video stream is released into the network. If a large factor (S > 5) is chosen, arriving cells will be sent into the network instantaneously without any shaping. The smaller S is, the longer cells have to wait in the input buffer before being released into the network. First we investigate the effect of TBR on the video quality. Sixteen video sources were used to generate a relatively high network load. The token buffer size was set to TB = 100 cells. Figures 6, 7, and 8 show the resulting cell loss ratio, frame loss ratio, and MSE for the low motion movie (Figure 6), the medium motion movie (Figure 7), and the high motion movie (Figure 8). It can be seen that if TBR is smaller than the ACR (S < 1), the video quality is the worst (high frame loss and MSE): more and more cells pile up in the input buffer of the leaky bucket traffic shaper, which results in high cell delay, so cells cannot meet the end-to-end delay requirement. If TBR is higher than the ACR (S > 1), video quality (frame loss and MSE) begins to degrade with increasing S.
Especially in the range 1.4 ≤ S ≤ 2.4, the video quality degrades sharply. From those curves, it can be seen that the degradation comes from the increased cell loss rate. It turns out that for TBR > ACR, the higher the sustained cell rate, the higher the cell loss rate and the lower the video quality. The study shows that, for S > 1, all cells are dropped in the switch because of congestion in the network, while no cells are dropped for exceeding the end-to-end delay. This shows that the dominant factor of cell loss for video broadcasting applications is network congestion. When the traffic is heavily overdimensioned (S = 3.0), the video quality degrades little and becomes stable with increasing S. That is because for S > 3.0, the video source traffic resembles VBR more closely, so that a further increase of S has little impact on video quality. Furthermore, with increasing S, both the frame loss rate and the MSE increase. For 1.4 ≤ S ≤ 5, approximately 2 percent of the lost frames are I frames, 22 percent are P frames, and 75 percent are B frames. Thus, around 25 percent of the lost frames are reference frames, and the loss of these frames will result in a higher MSE because of the temporal loss propagation discussed in Section 4. Therefore, the increase in MSE when increasing S stems from cell loss within frames and from reference frame loss. To investigate the effect of the token buffer size TB on video broadcast performance, we select S = 1.0, 1.2, and 1.4, under which relatively high video quality is obtained (see Figures 6 to 8), and vary TB. The results are shown in Figures 9 to 11 for the low, medium, and high motion video cases. The highest video quality (lowest frame loss rate and MSE) is achieved with a token buffer size of around 100 for the low motion movie, about 50 for the medium motion movie, and around 10 for the high motion movie.
For the low motion video, the size of I frames is much larger than the size of P and B frames as compared to the medium and high motion video cases (see Table 1). The lower the motion in a video, the more effective the motion compensation mechanism, producing smaller B and P frames. The low motion movie is therefore burstier than the medium motion video, which in turn is burstier than the high motion video. To account for this varying burstiness, the token buffer size has to be increased for decreasing motion in a video to increase the burstiness of the traffic stream leaving the shaper. Figures 12 to 14 show a comparison of video broadcasting performance with shaping and without shaping (VBR) under different network loads (numbers of video sources). It can be concluded that a leaky bucket traffic shaper with TBR and TB chosen properly improves the broadcast video quality significantly, while it is able to shape the traffic to conform to the QoS contract. How to properly choose TB and TBR is further discussed in Section 7. To further illustrate the gain in video quality through traffic shaping, Figures 15 and 16 depict (a) the original frames, (b) received and decoded frames under the traffic scenario of 16 VBR video sources, and (c) received and decoded frames under the traffic scenario of 16 shaped video sources (with TB = 10 and S = 1.0) from the high motion movie. In both the VBR and shaped traffic cases, the simulations were run with the same random number generation seed, resulting in the same start times of the movies between the two simulations (while the start times of the 16 movies belonging to a specific simulation run differed). Thus, the only difference between the two simulations was the traffic shaping algorithm used (either shaping or no shaping). In many cases, while frames in the VBR traffic case were highly distorted, the corresponding frames in the shaped traffic case were free of distortion.
To consider a worst-case scenario, frames were selected in Figures 15 and 16 that had high distortion in the shaped traffic case. Figure 14 suggests a 20-fold reduction in MSE when using shaping. This is also evident from the actual decoded frames. While in the VBR case the frames shown have unacceptable quality, the frames in the shaping case show much less, and more acceptable, distortion. The findings can be summarized as follows. For video broadcasting applications, the dominant factor leading to cell loss is network congestion, not the end-to-end delay. To gain the best video broadcasting performance, TBR should be set to the ACR (S = 1; i.e., the video stream's sustained cell rate is equal to the average cell rate of the movie), which largely reduces the network congestion and lowers the cell loss rate. The optimal TB depends on the motion/burstiness of the video. In
our scenarios, a TB of around 10 is suitable for the high motion video, a TB of around 50 is suitable for the medium motion video, while a TB of around 100 is suitable for the low motion video. With these reasonably selected TBR and TB, the leaky bucket traffic shaper significantly improves the video broadcasting performance (a decrease in frame loss by a factor of up to 75 and a decrease in MSE by a factor of up to 65) compared with direct VBR video broadcasting. This results in a substantial increase in perceived video quality.

Figure 6: Low motion video broadcasting performance under varying TBR, 16 video sources, TB = 100, TBR = ACR * S.

Figure 7: Medium motion video broadcasting performance under varying TBR, 16 video sources, TB = 100, TBR = ACR * S.

Figure 8: High motion video broadcasting performance under varying TBR, 16 video sources, TB = 100, TBR = ACR * S.

7 Leaky Bucket Shaper Dimensioning

In the previous section it was shown that for reasonably selected TBR and TB, the leaky bucket traffic shaper significantly improved the video broadcasting performance. In this section, how to calculate the optimum token buffer rate and size is studied. A multimedia system is assumed where movies stored on a video server are streamed out to users. In this case, characteristics of a video such as its average cell rate and burstiness can be calculated off-line, and these parameters can be used to dimension its associated traffic shaper before the video is streamed out. As was shown in the previous section, the token buffer rate TBR of the leaky bucket traffic shaper should be set to the average cell rate (averaged over all frames) of the movie
(S = 1.0). Assuming a movie with N frames and denoting the size (in cells) of frame i by c_i, TBR (in cells per frame) is calculated as

TBR = ACR = \frac{1}{N} \sum_{i=1}^{N} c_i .  (1)

Figure 9: Low motion video broadcasting performance under varying TB and S, 16 video sources, TBR = ACR * S

Figure 10: Medium motion video broadcasting performance under varying TB and S, 16 video sources, TBR = ACR * S

Figure 11: High motion video broadcasting performance under varying TB and S, 16 video sources, TBR = ACR * S

The optimum token buffer size TB depends on the burstiness of the movie, as shown in the previous section. There are several ways to specify the burstiness of a movie; one is to compare the average size of I frames to the average size of B frames. Because the cells belonging to a frame are assumed to be spaced evenly over the frame duration, a burst change occurs only at a frame change. This burst change is largest when a frame consisting of a small number of cells is followed by a frame consisting of a large number of cells or vice versa, which normally happens when an I frame is transmitted following a B frame or vice versa. Comparing the average I frame size to the average B frame size therefore gives an indication of the burstiness of the movie, and this characteristic is used here to calculate the optimum token buffer size TB. Assuming a movie with an average I frame size of I_a and an average B frame size of B_a, the following
simple approximation is used to calculate the optimum TB (in tokens):

TB = F \cdot \frac{I_a}{B_a} ,  (2)

with F being a constant. The constant F was determined empirically by using the aforementioned eight movies of different sizes and content/motion listed in Table 2 and obtaining the optimum TB through simulation. Simulations were run with 16 video sources and with TB ranging from 0 to 200 tokens in increments of 10 tokens. It was found that a good approximation was achieved with F = 8 tokens. The optimum TB obtained by simulation and the TB calculated using Equation (2) with F = 8 are shown in Table 2 for these movies. Equation (2) tracks the optimum simulated token buffer size quite well. Note, however, that Equation (2) is only an approximation; a more rigorous study of how to calculate an optimum TB is needed to derive a more accurate optimum token buffer size.

Figure 12: Effect of traffic shaping on low motion video broadcasting, TB = 100, S = 1.0

Figure 13: Effect of traffic shaping on medium motion video broadcasting, TB = 50, S = 1.0

Figure 14: Effect of traffic shaping on high motion video broadcasting, TB = 10, S = 1.0
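The dimensioning rules in Equations (1) and (2) can be sketched in code as follows. This is an illustrative sketch, not the authors' implementation; the frame sizes are hypothetical example values, and the conformance routine is a coarse frame-granularity simplification (a real shaper buffers non-conforming cells rather than merely counting conforming ones).

```python
# Illustrative sketch of leaky bucket shaper dimensioning per Equations
# (1) and (2); frame sizes below are hypothetical, not measured data.

def token_buffer_rate(frame_sizes_cells):
    """Equation (1): TBR = ACR = (1/N) * sum of per-frame cell counts."""
    return sum(frame_sizes_cells) / len(frame_sizes_cells)

def token_buffer_size(avg_i_frame_cells, avg_b_frame_cells, f=8):
    """Equation (2): TB = F * (I_a / B_a), with F = 8 tokens (empirical)."""
    return f * avg_i_frame_cells / avg_b_frame_cells

def conforming_cells(cell_arrivals, tbr, tb):
    """Coarse token-bucket pass: count cells (per frame slot) that find a
    token, with `tbr` tokens added per slot and at most `tb` stored."""
    tokens = tb                          # start with a full token buffer
    conforming = 0
    for cells in cell_arrivals:          # one entry per frame slot
        tokens = min(tb, tokens + tbr)   # replenish, capped at TB
        ok = min(cells, int(tokens))     # cells covered by tokens
        conforming += ok
        tokens -= ok
    return conforming

# Hypothetical high-motion stream: I and B frames are similar in size,
# so Equation (2) yields a small token buffer (around 10 tokens).
frame_sizes = [120, 100, 95, 110, 105, 98]   # cells per frame
tbr = token_buffer_rate(frame_sizes)         # ACR, in cells per frame
tb = token_buffer_size(avg_i_frame_cells=120, avg_b_frame_cells=96)
print(f"TBR = {tbr:.1f} cells/frame, TB = {tb:.0f} tokens")
```

A low-motion movie, whose I frames are much larger than its B frames, would yield a correspondingly larger TB under Equation (2), matching the trend observed in the simulations above.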
Figure 15: Effect of traffic shaping on video quality: (a) original frame, (b) received frame without shaping (VBR), (c) received frame with shaping

Figure 16: Effect of traffic shaping on video quality: (a) original frame, (b) received frame without shaping (VBR), (c) received frame with shaping

Table 2: Simulated and calculated optimum TB for the movies under investigation (F = 8)

Movie | Frame size (pixels x pixels) | Average I frame size (cells) | Average B frame size (cells) | Simulated optimum TB (cells) | Calculated optimum TB (cells)
Low motion movie | | | | |
Medium motion movie | | | | |
High motion movie | | | | |
Quelle | | | | |
DH | | | | |
Storm | | | | |
Epson | | | | |
Mystic | | | | |

8 Conclusions

The influence of traffic shaping on the quality of broadcast video transmitted over ATM networks was studied. First, the perceptual impact of cell loss on video quality was discussed; it was shown how losing cells within a frame and losing a frame's start code influence video quality. Then the effects of a leaky bucket traffic shaper on video quality were studied under different token buffer rates and token buffer sizes, and the tradeoffs involved were discussed. In general, underdimensioning the shaper (i.e., injecting cells into the network at an average rate lower than the average video cell rate) as well as overdimensioning it (i.e., injecting cells at an average rate higher than the average video cell rate) should be avoided because both result in video quality degradation. It was also shown that the optimal choice of the shaper's token buffer size depends on the overall motion/burstiness of the video, and it was studied how the optimum token buffer size can be approximated from video characteristics such as average frame sizes. Furthermore, it
was shown that traffic shaping is able to decrease frame loss by a factor of up to 75 and MSE by a factor of up to 65 compared to direct VBR video broadcasting, which results in a substantial visible video quality improvement for broadcast applications. This study clearly shows that a traffic shaper customized to video applications is able to significantly enhance the quality of broadcast video streams transmitted over ATM networks, while shaping the stream to conform to negotiated QoS parameters.

References

[1] C. V. N. Albuquerque, M. Faerman, and O. Duarte, Implementations of Traffic Control Mechanisms for High-Speed Networks, IEEE Proceedings of the Telecommunications Symposium, 1: , August.
[2] A. F. Atlasis, G. I. Stassinopoulos, and A. V. Vasilakos, Leaky Bucket Mechanism with Learning Algorithm for ATM Traffic Policing, IEEE Proceedings on Computers and Communications, pp. , July.
[3] T. R. Banniza, G. Eilenberger, B. Pauwels, and Y. Therasse, Design and Technology Aspects of VLSI's for ATM Switches, IEEE Journal on Selected Areas in Communications, 9(8): , October.
[4] P. Barri and J. A. O. Goubert, Implementation of a 16 by 16 Switching Element for ATM Exchange, IEEE Journal on Selected Areas in Communications, 9(5): , June.
[5] A. Catsoulis, Y. Kamaras, K. Kavidopoulos, and N. Mitrou, An Adaptive Shaper with Effective-Rate Enforcement for ATM Traffic, Third IEEE Symposium on Computers and Communications, pp. , July.
[6] CCITT, Recommendation I.150: B-ISDN ATM Functional Characteristics, CCITT, Geneva.
[7] W.-T. Chen, W.-S. Huang, and C.-H. Lin, A Policing Algorithm for MPEG Streams on ATM Network, IEEE International Conference on Communications, 1: , June.
[8] P. Cuenca, L. Orozco-Barbosa, F. Quiles, and A. Garrido, Loss-Resilient ATM Protocol Architecture for MPEG-2 Video Communications, IEEE Journal on Selected Areas in Communications, 18(6): , June.
[9] I. Dalgic and F. A. Tobagi, Performance Evaluation of ATM Networks Carrying Constant and Variable Bit-Rate Video Traffic, IEEE Journal on Selected Areas in Communications, 15(6): , August.
[10] N. G. Duffield, K. K. Ramakrishnan, and A. R. Reibman, Issues of Quality and Multiplexing when Smoothing Rate Adaptive Video, IEEE Transactions on Multimedia, 1(4): , December.
[11] Generic Coding of Moving Pictures and Associated Audio Information: Video, ISO/IEC JTC1/SC29/WG11, MPEG-2 Draft International Standard.
[12] J. Gu, M. Jurczyk, and C. W. Chen, Impact of ATM Traffic Control on MPEG-2 Video Quality, IEEE International Symposium on Circuits and Systems, pp. , May.
[13] M. G. Hluchyj and M. J. Karol, Queueing in High-Performance Packet Switching, IEEE Journal on Selected Areas in Communications, 6(9): , December.
[14] J. S. M. Ho, H. Uzunalioglu, and I. F. Akyildiz, Cooperating Leaky Bucket for Average Rate Enforcement of VBR Video Traffic in ATM Networks, IEEE INFOCOM, pp. , April.
[15]
[16] J. Ivanova and M. Jurczyk, Computer Networks, in The Encyclopedia of Physical Science and Technology, Third Edition, R. A. Meyers, ed., Academic Press, San Diego, California.
[17] V. G. Kulkarni and N. Gautam, Leaky Buckets: Sizing and Admission Control, IEEE International Conference on Decision and Control, 1: , December.
[18] M. Li and Z. Tsai, Design and Analysis of the GCRA Traffic Shaper for VBR Services in ATM Networks, IEEE International Conference on Communications, 1: , August.
[19] M. Li and Z. Tsai, An ATM Traffic Shaper for Delay-Sensitive and Delay-Insensitive VBR Services, IEEE International Conference on Information Networking, pp. , January.
[20] W. Lou and M. E. Zarki, Quality Control for VBR Video over ATM Networks, IEEE Journal on Selected Areas in Communications, 15(6): , August.
[21] MPEG Software Simulation Group, MPEG-2 Video Codec, Version 1.2, webpage: org/mpeg/MSSG/#source, July.
[22] NIST ATM Network Simulator, Version 2.0, webpage: .
[23] M. Orzessek and P. Sommer, ATM & MPEG-2: Integrating Digital Video into Broadband Networks, Prentice Hall, Upper Saddle River, NJ.
[24] P. Pancha and M. El Zarki, Leaky Bucket Access Control for VBR MPEG Video, IEEE INFOCOM'95, pp. , April.
[25] S. V. Raghavan and S. K. Tripathi, Networked Multimedia Systems, Prentice-Hall, Upper Saddle River, NJ.
[26] E. Rathgeb, Policing of Realistic VBR Video Traffic in ATM Network, International Journal of Digital and Analog Communication Systems, 6(5): , May.
[27] J. Rexford, F. Bonomi, A. Greenberg, and A. Wong, Scalable Architecture for Integrated Traffic Shaping and Link Scheduling in High-Speed ATM Switches, IEEE Journal on Selected Areas in Communications, 15(5): , June.
[28] Y. Wang and M. Jurczyk, Impact of Traffic Shaping in ATM Networks on Video Quality, ICPP Workshop on Parallel and Distributed Multimedia Systems, pp. , August 2000.
Yongdong Wang obtained his M.S. in Computer Science from the University of Missouri-Columbia. In 1997, he graduated from Tsinghua University, China, with an M.S. in Computer Engineering. He is currently a Principal Member of Technical Staff at Celox Networks, Inc., headquartered in Boston, MA. His research interests include high-performance edge networking with QoS, MPEG-2 over ATM networks, ATM internetworking, and parallel/distributed systems.

Michael Jurczyk obtained his Ph.D. in Electrical Engineering from the University of Stuttgart, Germany. In 1996, he was a visiting assistant professor at the School of Electrical and Computer Engineering at Purdue University. He is currently an assistant professor in the Computer Engineering and Computer Science Department at the University of Missouri-Columbia. His research interests include parallel and distributed systems, interconnection networks for parallel and communication systems, ATM networking, and networked multimedia.
Feasibility Study of Stochastic Streaming with 4K UHD Video Traces Joongheon Kim and Eun-Seok Ryu Platform Engineering Group, Intel Corporation, Santa Clara, California, USA Department of Computer Engineering,
More informationModeling and Evaluating Feedback-Based Error Control for Video Transfer
Modeling and Evaluating Feedback-Based Error Control for Video Transfer by Yubing Wang A Dissertation Submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE In partial fulfillment of the Requirements
More informationA Novel Macroblock-Level Filtering Upsampling Architecture for H.264/AVC Scalable Extension
05-Silva-AF:05-Silva-AF 8/19/11 6:18 AM Page 43 A Novel Macroblock-Level Filtering Upsampling Architecture for H.264/AVC Scalable Extension T. L. da Silva 1, L. A. S. Cruz 2, and L. V. Agostini 3 1 Telecommunications
More informationCONSTRAINING delay is critical for real-time communication
1726 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 16, NO. 7, JULY 2007 Compression Efficiency and Delay Tradeoffs for Hierarchical B-Pictures and Pulsed-Quality Frames Athanasios Leontaris, Member, IEEE,
More informationDual Frame Video Encoding with Feedback
Video Encoding with Feedback Athanasios Leontaris and Pamela C. Cosman Department of Electrical and Computer Engineering University of California, San Diego, La Jolla, CA 92093-0407 Email: pcosman,aleontar
More informationII. SYSTEM MODEL In a single cell, an access point and multiple wireless terminals are located. We only consider the downlink
Subcarrier allocation for variable bit rate video streams in wireless OFDM systems James Gross, Jirka Klaue, Holger Karl, Adam Wolisz TU Berlin, Einsteinufer 25, 1587 Berlin, Germany {gross,jklaue,karl,wolisz}@ee.tu-berlin.de
More informationAdaptive Key Frame Selection for Efficient Video Coding
Adaptive Key Frame Selection for Efficient Video Coding Jaebum Jun, Sunyoung Lee, Zanming He, Myungjung Lee, and Euee S. Jang Digital Media Lab., Hanyang University 17 Haengdang-dong, Seongdong-gu, Seoul,
More informationAudio and Video II. Video signal +Color systems Motion estimation Video compression standards +H.261 +MPEG-1, MPEG-2, MPEG-4, MPEG- 7, and MPEG-21
Audio and Video II Video signal +Color systems Motion estimation Video compression standards +H.261 +MPEG-1, MPEG-2, MPEG-4, MPEG- 7, and MPEG-21 1 Video signal Video camera scans the image by following
More informationINFORMATION THEORY INSPIRED VIDEO CODING METHODS : TRUTH IS SOMETIMES BETTER THAN FICTION
INFORMATION THEORY INSPIRED VIDEO CODING METHODS : TRUTH IS SOMETIMES BETTER THAN FICTION Nitin Khanna, Fengqing Zhu, Marc Bosch, Meilin Yang, Mary Comer and Edward J. Delp Video and Image Processing Lab
More informationPAPER Wireless Multi-view Video Streaming with Subcarrier Allocation
IEICE TRANS. COMMUN., VOL.Exx??, NO.xx XXXX 200x 1 AER Wireless Multi-view Video Streaming with Subcarrier Allocation Takuya FUJIHASHI a), Shiho KODERA b), Nonmembers, Shunsuke SARUWATARI c), and Takashi
More informationAn Improved Fuzzy Controlled Asynchronous Transfer Mode (ATM) Network
An Improved Fuzzy Controlled Asynchronous Transfer Mode (ATM) Network C. IHEKWEABA and G.N. ONOH Abstract This paper presents basic features of the Asynchronous Transfer Mode (ATM). It further showcases
More informationDigital Video Engineering Professional Certification Competencies
Digital Video Engineering Professional Certification Competencies I. Engineering Management and Professionalism A. Demonstrate effective problem solving techniques B. Describe processes for ensuring realistic
More informationMULTI-STATE VIDEO CODING WITH SIDE INFORMATION. Sila Ekmekci Flierl, Thomas Sikora
MULTI-STATE VIDEO CODING WITH SIDE INFORMATION Sila Ekmekci Flierl, Thomas Sikora Technical University Berlin Institute for Telecommunications D-10587 Berlin / Germany ABSTRACT Multi-State Video Coding
More informationH.261: A Standard for VideoConferencing Applications. Nimrod Peleg Update: Nov. 2003
H.261: A Standard for VideoConferencing Applications Nimrod Peleg Update: Nov. 2003 ITU - Rec. H.261 Target (1990)... A Video compression standard developed to facilitate videoconferencing (and videophone)
More informationError Concealment for Dual Frame Video Coding with Uneven Quality
Error Concealment for Dual Frame Video Coding with Uneven Quality Vijay Chellappa, Pamela C. Cosman and Geoffrey M. Voelker University of California, San Diego, vchellap@ucsd.edu,pcosman@ucsd.edu Abstract
More informationFree Viewpoint Switching in Multi-view Video Streaming Using. Wyner-Ziv Video Coding
Free Viewpoint Switching in Multi-view Video Streaming Using Wyner-Ziv Video Coding Xun Guo 1,, Yan Lu 2, Feng Wu 2, Wen Gao 1, 3, Shipeng Li 2 1 School of Computer Sciences, Harbin Institute of Technology,
More informationCERIAS Tech Report Preprocessing and Postprocessing Techniques for Encoding Predictive Error Frames in Rate Scalable Video Codecs by E
CERIAS Tech Report 2001-118 Preprocessing and Postprocessing Techniques for Encoding Predictive Error Frames in Rate Scalable Video Codecs by E Asbun, P Salama, E Delp Center for Education and Research
More informationPACKET-SWITCHED networks have become ubiquitous
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 13, NO. 7, JULY 2004 885 Video Compression for Lossy Packet Networks With Mode Switching and a Dual-Frame Buffer Athanasios Leontaris, Student Member, IEEE,
More information