Error Resilience

Error resilience and recovery from data loss are important for achieving robust video transmission. On the one hand, errors can be suppressed by increasing the reliability of the transport channel; increased reliability comes at the cost of added overhead (bandwidth) and/or added latency due to retransmissions. On the other hand, the effect of errors can be minimized in the media plane by using coding schemes that stop error propagation. As video compression relies heavily on prediction that exploits correlations, error propagation is a serious problem that needs to be addressed. Several tools are available in all video coding standards that constrain error propagation and facilitate recovery from errors.

The simplest technique to stop temporal error propagation is to refresh the decoder by inserting an I (or IDR) picture, which cuts the prediction chain from earlier pictures. This technique introduces a trade-off between latency and quality degradation, because I pictures are considerably larger than P pictures of comparable quality. To avoid latency at a constant channel bandwidth, the encoder either has to reduce the quality of the I picture or skip several P pictures. Another possibility is to refresh only part of a picture and thereby spread the cost of intra coding over several pictures. All codecs discussed so far support segments of one kind or another. In H.261 each frame of a video sequence is divided into a number of segments called Groups of Blocks (GOBs), where each GOB contains 33 macroblocks arranged in 3 rows by 11 columns. H.263 also uses GOBs, but here they correspond to single rows of macroblocks. Instead of refreshing the entire picture at one time instant, an encoder can distribute the cost over the sequence by refreshing GOBs or just particular macroblocks. The time it takes before the decoder has recovered depends on the refresh algorithm used by the encoder.
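As a concrete illustration, the sketch below schedules a cyclic intra refresh: one GOB is intra-coded per frame in round-robin order, so the whole picture is refreshed once every cycle without ever sending a full I picture. The function name, the loop, and the choice of 18 GOBs (one per macroblock row, as in H.263 at CIF resolution) are illustrative assumptions rather than part of any codec API.

```python
# Sketch of a cyclic intra-refresh schedule (illustrative, not a codec API).
# One GOB is intra-coded per frame; after gobs_per_picture frames the whole
# picture has been refreshed and temporal error propagation is stopped.

def gob_to_intra_code(frame_index: int, gobs_per_picture: int) -> int:
    """Return the index of the GOB to intra-code in the given frame."""
    return frame_index % gobs_per_picture


if __name__ == "__main__":
    GOBS_PER_PICTURE = 18  # assumed: H.263 CIF, one GOB per macroblock row
    for frame in range(2 * GOBS_PER_PICTURE):
        gob = gob_to_intra_code(frame, GOBS_PER_PICTURE)
        print(f"frame {frame:2d}: intra-code GOB {gob}")
```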

In order to limit spatial error propagation, different ways of segmenting a picture can be employed. A slice has no dependencies on other slices in a picture, i.e. there is no prediction from one slice to another. The loss of a slice can thus be isolated to one part of the screen. Moreover, by aligning slice boundaries with transport layer boundaries, it is possible to limit the effect of the loss of one transport unit. The MPEG-1 and MPEG-2 video standards support slices that contain a variable number of consecutive macroblocks in scanning order. Slices were also introduced in the second version of H.263 and are supported by Profile 3. An advantage of slices is that they can have variable lengths and end at an arbitrary macroblock position. This makes it possible to align slice sizes with the data units of the underlying transport protocol. It is, for instance, possible to optimize the usage of slices for RTP by ending a slice before the maximum payload size of an RTP packet is reached.
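A minimal sketch of such size-aligned slicing is shown below, assuming the encoder knows the coded size of each macroblock in bytes. A slice is closed just before it would exceed the payload budget, so each slice fits in one RTP packet and a packet loss affects exactly one slice. The 1400-byte budget (roughly an Ethernet MTU minus IP/UDP/RTP headers) and the function name are assumptions for illustration.

```python
import random
from typing import List

# Sketch of slice packetization aligned to an RTP payload budget (illustrative).
def pack_slices(mb_sizes: List[int], max_payload: int = 1400) -> List[List[int]]:
    """Group consecutive macroblock indices into slices that fit max_payload."""
    slices: List[List[int]] = []
    current: List[int] = []
    current_size = 0
    for mb_index, size in enumerate(mb_sizes):
        # Close the current slice if adding this macroblock would overflow it.
        if current and current_size + size > max_payload:
            slices.append(current)
            current, current_size = [], 0
        current.append(mb_index)
        current_size += size
    if current:
        slices.append(current)
    return slices


if __name__ == "__main__":
    random.seed(0)
    mb_sizes = [random.randint(20, 400) for _ in range(99)]  # e.g. QCIF: 99 MBs
    print([len(s) for s in pack_slices(mb_sizes)])
```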

Slices in H.264/AVC work similarly and can also be combined with several other tools that increase error resilience. Flexible macroblock ordering, also known as slice groups, and arbitrary slice ordering make it possible to shape and reorder slices. Redundant slices provide alternative representations of slices in case the original slices are lost.
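To illustrate the idea behind flexible macroblock ordering, the sketch below assigns macroblocks to two slice groups in a checkerboard pattern, so that if one slice group is lost every missing macroblock still has received neighbours available for concealment. The checkerboard function is a simplified illustration and does not reproduce the exact slice group map types defined by H.264/AVC.

```python
# Sketch of a checkerboard-style slice group assignment (two groups), inspired
# by FMO's dispersed patterns but not the normative H.264/AVC map types.

def checkerboard_slice_group(mb_x: int, mb_y: int) -> int:
    """Assign the macroblock at (mb_x, mb_y) to slice group 0 or 1."""
    return (mb_x + mb_y) % 2


if __name__ == "__main__":
    WIDTH_MBS, HEIGHT_MBS = 11, 9  # assumed: QCIF is 11 x 9 macroblocks
    for y in range(HEIGHT_MBS):
        print("".join(str(checkerboard_slice_group(x, y)) for x in range(WIDTH_MBS)))
```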
