In data transmission, one of the most important design goals is to minimize the error rate. Error rate may be defined as the ratio of the number of bits received in error to the total number of bits transmitted, or referred to a round number such as 1000, 1,000,000, and so on. CCITT set a design objective of better than one error in one million bits transmitted, expressed as 1 × 10⁻⁶. Many circuits in industrialized nations provide error performance two or more orders of magnitude better than this.
ITU-T has revised its Recommendation G.821 (Ref. 28) to reflect considerably better performance. Error performance is now stated in terms of ESR (errored second ratio) and SESR (severely errored second ratio), with a recommended measurement period of one month. We should expect an ESR of <0.08 and an SESR of <0.002.
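These three measures can be computed from a log of per-second error counts. The sketch below is illustrative only: the bit rate, the severely-errored-second threshold (a one-second BER worse than 1 × 10⁻³, the classic G.821 criterion for 64-kbit/s paths), and the function names are assumptions for the example, not prescribed by the text.

```python
# Sketch: computing BER, ESR, and SESR from per-second bit-error counts.
# An errored second (ES) contains at least one bit error; a severely
# errored second (SES) is assumed here to have a one-second BER worse
# than 1e-3 (the classic G.821 criterion for a 64-kbit/s path).

BIT_RATE = 64_000  # bits per second, assumed for illustration


def error_performance(errors_per_second):
    """Return (ber, esr, sesr) for a list of per-second bit-error counts."""
    total_seconds = len(errors_per_second)
    total_bits = total_seconds * BIT_RATE
    ber = sum(errors_per_second) / total_bits
    es = sum(1 for e in errors_per_second if e >= 1)
    ses = sum(1 for e in errors_per_second if e / BIT_RATE > 1e-3)
    return ber, es / total_seconds, ses / total_seconds


# Ten seconds of observation: mostly clean, one badly errored second.
counts = [0, 0, 1, 0, 0, 0, 120, 0, 0, 0]
ber, esr, sesr = error_performance(counts)
```

In this example only the second with 120 errors exceeds the SES threshold (120/64,000 ≈ 1.9 × 10⁻³), so two seconds are errored but only one is severely errored.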
One method for minimizing the error rate would be to provide a "perfect" transmission channel, one that introduces no errors in the transmitted information at the output of the receiver. However, that perfect channel can never be achieved. Besides improving the channel transmission parameters themselves, the error rate can be reduced by some form of systematic redundancy. In old-time Morse code, words on a bad circuit were often sent twice; this is redundancy in its simplest form. Of course, it took twice as long to send a message; this is not very economical if the number of useful words per minute received is compared to channel occupancy.
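The "send it twice" idea generalizes to a repetition code. The sketch below, a minimal illustration rather than anything prescribed by the text, repeats each bit three times so that a majority vote at the receiver corrects any single bit error within a triple, at the cost of tripling channel occupancy:

```python
# Sketch: a rate-1/3 repetition code, redundancy in its simplest form.
# Each information bit is transmitted three times; the receiver takes a
# majority vote over each triple, correcting any single error in it.

def encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]


def decode(coded):
    """Majority-vote each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]


message = [1, 0, 1, 1]
sent = encode(message)     # 12 channel bits carry 4 information bits
corrupted = sent[:]
corrupted[4] ^= 1          # flip one bit in transit
assert decode(corrupted) == message  # the single error is corrected
```

The price is plain: only one channel bit in three is an information bit, which previews the efficiency tradeoff discussed next.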
This illustrates the tradeoff between redundancy and channel efficiency. Redundancy can be increased so that the error rate approaches zero; meanwhile, the information transfer across the channel would also approach zero. Thus unsystematic redundancy is wasteful and merely lowers the rate of useful communication. On the other hand, maximum efficiency could be obtained in a digital transmission system if all redundancy and other code elements, such as "start" and "stop" elements, parity bits, and other "overhead" bits, were removed from the transmitted bit stream. In other words, the channel would be 100% efficient if all bits transmitted were information bits. Obviously, there is a tradeoff of cost and benefits somewhere between maximum efficiency on a data circuit and systematically added redundancy.
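The efficiency side of this tradeoff is simple arithmetic: the fraction of transmitted bits that are information bits. The sketch below uses an asynchronous character with start, stop, and parity bits as one illustrative overhead pattern; the framing chosen is an assumption for the example, not a format specified in the text.

```python
# Sketch: channel efficiency as the fraction of transmitted bits that
# actually carry information, the rest being "overhead" (start/stop
# elements, parity bits, and the like).

def efficiency(info_bits, overhead_bits):
    """Information bits as a fraction of total transmitted bits."""
    return info_bits / (info_bits + overhead_bits)


# One asynchronous character: 7 information bits framed by a start bit,
# a stop bit, and a parity bit -> 7 of 10 channel bits are useful.
eff = efficiency(7, 3)

# Stripping all overhead would give 100% efficiency, at the cost of
# losing framing and error detection entirely.
assert efficiency(7, 0) == 1.0
```

With this framing the circuit runs at 70% efficiency, which is exactly the kind of cost-versus-benefit figure the tradeoff above weighs against the error protection the overhead buys.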
There is an important concept here that should not be missed. An entirely error-free channel does not exist; it is against the laws of nature. We may have a channel with excellent error performance, but a few errors will persist. The applicable performance parameter in these circumstances is the residual error rate.