One If By Land

William L. Weaver
Published in TL;DR Innovation
5 min read · Feb 2, 2018

Data Transmission and Error Correction

Early railroad signals used a white light to signal “go” and a red light to signal “stop”. This wireless communication system between stationmasters and engineers worked well until that fateful day when a red cover lens fell out of its housing, signaling both oncoming trains to proceed at full speed. Thankfully, communication errors rarely result in such disasters. However, given the volume of digital information pumped through wires, broadcast over the earth, and bounced off satellites each day, even a small number of errors per transmission adds up to a staggering loss of global throughput. Several error detection and correction techniques have been developed to minimize the effect of data transmission errors, and their complexity and sophistication have kept pace with soaring transmission rates and ever-increasing sources of error-generating interference.

Photo by dldusdn on Pixabay

There are two fundamental approaches to detecting and correcting errors in a stream of digital data. In the first approach, known as “backward error correction” (BEC), the transmitter appends a predetermined code to the data it sends to the receiver. By examining this code, the receiver can determine if the data was received without error. If an error is detected, the receiver responds to the transmitter with a request to resend the data, known as an “automatic repeat request” (ARQ). In the second approach, termed “forward error correction” (FEC), the data is encoded such that a transmission error can not only be detected, but also corrected by the receiver, thus obviating the need to assert an ARQ.
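The BEC/ARQ exchange can be sketched in a few lines of Python. This is purely illustrative: the function names, the toy checksum, and the scripted "noisy channel" are assumptions for the sake of the example, not part of any real protocol.

```python
def transmit(data, channel, checksum, max_tries=5):
    """BEC/ARQ sketch: the sender appends a check code; the receiver
    recomputes it and issues an automatic repeat request on mismatch."""
    sent_check = checksum(data)
    for attempt in range(1, max_tries + 1):
        received = channel(data)            # the channel may corrupt the data
        if checksum(received) == sent_check:
            return received, attempt        # check passed: accept the packet
        # check failed: loop back around, which models the ARQ resend
    raise RuntimeError("ARQ retry limit exceeded")

# A scripted channel: the first copy arrives corrupted, the second is clean.
noisy = iter([b"hellp", b"hello"])
data, tries = transmit(b"hello", lambda d: next(noisy), lambda d: sum(d) % 256)
print(data, tries)  # the second attempt succeeds
```

The key design point is that the receiver never needs to understand the data, only to recompute the check code and compare.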

In the days of extremely low transmission rates through dedicated low-noise wire connections, BEC could be implemented using signal “parity”. The error detection code, known as the “parity bit”, is generated by counting the binary ones in the data packet and setting the parity bit so that the total number of ones, including the parity bit itself, is even or odd, depending on the use of “even-” or “odd-parity”. A single flipped bit changes the parity and is detected, but if multiple errors are present there is a 50% chance the packet retains the correct parity. To reduce this problem, the “checksum” method was developed. Assume a packet contains four eight-bit data bytes, each representing a value between 0 and 255. The four values are summed and the total is divided by the number of possible values in each byte (256). The remainder of this division, termed the checksum, is appended to the data bytes. Although more resilient to multi-bit errors, the checksum method is defeated if the value of one byte is increased while a second byte is decreased by the same amount.

Since the bits are indistinguishable among bytes in the checksum method, a technique was devised that preserves the contribution of each individual bit, making it difficult for the flip of a second bit in the packet to compensate for the first. This cyclic redundancy check (CRC) method interprets each data bit in a packet as a coefficient of a very large polynomial. The resulting polynomial is divided by a carefully selected standard polynomial, and the remainder of the division is transmitted with the data as the CRC checksum. Each standard CRC polynomial varies in its ability to detect large combinations of errors. Popular standards include XMODEM, ARC compression, and CRC-32, which is used by Ethernet, PKZip, and FDDI.
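The polynomial long division behind CRC can be illustrated with a toy generator polynomial; the `crc_remainder` function below is a teaching sketch, not a production CRC, while the real CRC-32 is available in Python's standard `zlib` module.

```python
import zlib

def crc_remainder(data_bits, poly_bits):
    """Toy CRC: treat the bits as polynomial coefficients and return the
    remainder of division by the generator polynomial (mod-2 arithmetic)."""
    # Append len(poly) - 1 zero bits, then perform XOR long division.
    bits = data_bits + [0] * (len(poly_bits) - 1)
    for i in range(len(data_bits)):
        if bits[i]:
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p
    return bits[-(len(poly_bits) - 1):]

# Example: data 1101 divided by the generator x^3 + x + 1 (bits 1011).
print(crc_remainder([1, 1, 0, 1], [1, 0, 1, 1]))  # remainder [0, 0, 1]

# The standard CRC-32 polynomial used by Ethernet and PKZip:
print(hex(zlib.crc32(b"hello")))
```

The receiver repeats the same division over data plus checksum; a nonzero remainder signals an error.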

In the case of a remote transmitter, such as a deep-space probe, or the streaming of real-time voice or video, an ARQ may not be available. The FEC method intersperses the data transmission with the codes required to correct a transmission error. The heavily anticipated Bluetooth wireless data communication protocol uses two methods of forward error correction, namely 1/3 FEC and 2/3 FEC.

In 1/3 FEC, the transmitter simply repeats each data bit three times in succession. For example, the data value 101 would be sent as 111000111 in 1/3 FEC mode. If the triplet for a 1 (111) is corrupted to 110 during transmission, a majority vote still interprets it as a 1. The 1/3 designation indicates that one bit of information is received for every three bits transmitted. This reduces throughput to 33% of maximum but removes the need for an ARQ. Due to the low throughput, 1/3 FEC is used to encode only the relatively short header of each transmission session.
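The repetition scheme above can be sketched directly (function names are illustrative):

```python
def encode_13fec(bits):
    """Repeat each bit three times in succession (the 1/3 FEC rate)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_13fec(bits):
    """Majority-vote each group of three received bits."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

sent = encode_13fec([1, 0, 1])   # [1,1,1, 0,0,0, 1,1,1]
received = sent[:]
received[2] = 0                  # corrupt one bit: the first triplet becomes 110
print(decode_13fec(received))    # majority voting still recovers [1, 0, 1]
```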

While working at Bell Labs in 1948, Richard W. Hamming developed an FEC method wherein simple parity bits are interleaved with the data bits. Each data bit is covered by a unique combination of parity bits, and by examining the parity-bit pattern (known as a Hamming Code), the exact bit in error can be identified and corrected. Bluetooth uses a Hamming Code that adds five code bits to every ten data bits. The resulting 15-bit packet consists of two-thirds data and one-third code bits, earning the designation 2/3 FEC and raising throughput to 66% of maximum. Hamming Codes can correct only a single bit in error but can detect multi-bit errors; the short 15-bit packet length minimizes noise exposure, and 2/3 FEC asserts an ARQ if more than one error is found.
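The syndrome idea can be illustrated with the classic Hamming(7,4) code, a smaller cousin of Bluetooth's 15-bit code. This sketch is not the Bluetooth encoding; it shows how recomputed parities point directly at the bit in error.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

def hamming74_decode(c):
    """Recompute the parities; the syndrome gives the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3             # 0 means no single-bit error
    if pos:
        c = c[:]
        c[pos - 1] ^= 1                    # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]        # extract the data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                               # flip one bit in transit
print(hamming74_decode(code))              # corrected back to [1, 0, 1, 1]
```

Because each data bit sits under a unique combination of parity checks, the three recomputed parities read out the error position in binary, with no retransmission needed.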

Additional FEC methods, including Reed-Solomon and Viterbi, are available, and nascent techniques are under development. As data transmission rates increase and wireless channels become congested, error correction will continue to serve a critical role. Both lanterns were functional that famous night in 1775 when Paul Revere saw them in the church bell tower. Had one of them failed, America’s favorite pastime might be cricket.

This material originally appeared as a Contributed Editorial in Scientific Computing and Instrumentation 17:10 September 2000, pg. 14.

William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.
