channel coding theorem

channel coding theorem In communication theory, the statement that any channel, however affected by noise, possesses a specific channel capacity – a rate of conveying information that can never be exceeded without error, but that can, in principle, always be attained with an arbitrarily small probability of error. The theorem was first expounded and proved by Claude Elwood Shannon in 1948.

Shannon showed that an error-correcting code always exists that will reduce the probability of error below any predetermined level. He did not, however, show how to construct such a code (this remains the central problem of coding theory), although he did show that randomly chosen codes are as good as any others, provided they are extremely long.
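As a rough illustration of how redundancy can drive the error probability below any chosen level (this is not Shannon's random-coding argument, and repetition codes are not capacity-achieving), the following sketch computes the residual error probability of an n-fold repetition code with majority-vote decoding over a binary symmetric channel; the crossover probability p and the block lengths are assumed values chosen only for illustration.

```python
from math import comb

def repetition_error_prob(n: int, p: float) -> float:
    """Probability that majority-vote decoding of an n-fold repetition code fails."""
    # A decoding error occurs when more than half of the n transmitted copies
    # are flipped by the binary symmetric channel (n assumed odd).
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1))

if __name__ == "__main__":
    p = 0.1  # assumed crossover probability of the channel
    for n in (1, 3, 5, 11, 21):
        print(f"n = {n:2d}   rate = {1/n:.3f}   P(error) = {repetition_error_prob(n, p):.2e}")
```

Note that the error probability falls only as the rate 1/n falls; the force of the theorem is that suitably chosen long codes achieve the same effect at any fixed rate below capacity.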

Among Shannon's results for specific channels, the most celebrated is that for a power-limited continuous-amplitude channel subject to white Gaussian noise. If the signal power is limited to P_S and the noise power is P_N, the capacity of such a channel is C = ½ν log2(1 + P_S/P_N) bit/s.
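A minimal numerical sketch of this formula is given below; the epoch rate ν, signal power P_S, and noise power P_N are assumed values used only to show the arithmetic.

```python
from math import log2

def gaussian_capacity(nu: float, p_s: float, p_n: float) -> float:
    """C = (1/2) * nu * log2(1 + P_S/P_N), in bit/s."""
    return 0.5 * nu * log2(1 + p_s / p_n)

if __name__ == "__main__":
    nu = 8000.0   # epochs (samples) per second -- assumed value
    p_s = 10.0    # signal power, arbitrary units -- assumed value
    p_n = 1.0     # noise power, same units -- assumed value
    print(f"C = {gaussian_capacity(nu, p_s, p_n):.0f} bit/s")  # about 13.8 kbit/s
```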

If it is a discrete-time channel, ν is the number of epochs per second; if it is a continuous-time channel, ν is the minimum number of samples per second necessary to acquire all the information from the channel. In the latter case, if ν is to be finite, the channel must be band-limited; if W is its bandwidth (in Hz), then, by Nyquist's criterion, ν = 2W and the capacity becomes C = W log2(1 + P_S/P_N) bit/s.
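For the band-limited form, the sketch below uses assumed, textbook-style numbers (roughly those of an analogue telephone line) purely to show the scale of the resulting capacity.

```python
from math import log2

def shannon_hartley(w_hz: float, snr: float) -> float:
    """C = W * log2(1 + P_S/P_N), in bit/s, for a band-limited Gaussian channel."""
    return w_hz * log2(1 + snr)

if __name__ == "__main__":
    w = 3100.0    # bandwidth W in Hz -- assumed value
    snr = 1000.0  # power ratio P_S/P_N (30 dB) -- assumed value
    print(f"C = {shannon_hartley(w, snr):.0f} bit/s")  # roughly 31 kbit/s
```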

This is sometimes called the Shannon–Hartley law, and is often applied, erroneously, in circumstances less restricted than those described. This and other expressions for the capacity of specific channels should not be confused with the channel coding theorem, which states only that there is a finite capacity (which may be zero) and that it can be attained with an arbitrarily small probability of error.

See also Shannon's model, source coding theorem.