Intro to Convolutional Coding — Part I

Yair Mazal
Published in Nerd For Tech
May 22, 2021
Convolutional encoder, Teridon at English Wikipedia, Public domain, via Wikimedia Commons

This post is the first of two serving as a quick introduction to convolutional coding. If you're new to the subject, convolutional coding is a class of error correction codes used to enhance the reliability of digital communication. In this post, I'll cover the basics, common representations, and the encoding process. The following post covers the decoding process using the Viterbi algorithm.

So What is it, and How Does it Work?

Convolutional codes have been widely used in wireless communications (WiFi, cellular, and satellite) and are the building blocks of Turbo codes. Unlike block codes, convolutional codes don't have a finite block length and instead may be considered linear filters, either IIR or FIR.

A convolutional encoder utilizes linear shift registers (LSRs) to encode k input bits into n output bits, thus yielding a code of rate R=k/n. Each output bit depends on the current input bits and on the last L input bits, where L is called the "constraint length." L describes the amount of "memory" the encoder has (most of the literature uses capital K, but I find this confusing because of small k).

For instance, consider the following rate R=½ encoder:

Convolutional encoder with rate R=½ and constraint length L=2

The blocks labeled D are delay elements, so each output depends on the current input bit and the two previous input bits. The outputs are dictated by the parity check equations:

All addition operators should be read as modulo-2 additions (as long as we consider a binary alphabet), which can be implemented via XORs.
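The shift-register view translates directly into code. Below is a minimal sketch of a rate-½, constraint-length-2 encoder; the tap positions are an assumption (the common choice where the first output XORs the input with both delayed bits, and the second XORs the input with only the older one), so the actual figure may wire its taps differently.

```python
def encode(bits):
    """Encode a bit list with a rate-1/2 shift-register encoder (assumed taps)."""
    d1 = d2 = 0          # the two delay elements, initially zero
    out = []
    for b in bits:
        out.append(b ^ d1 ^ d2)  # first parity bit: input XOR both delays
        out.append(b ^ d2)       # second parity bit: input XOR older delay
        d2, d1 = d1, b           # shift the register by one position
    return out

print(encode([1, 0, 1, 1]))  # two output bits per input bit
```

Each input bit produces two output bits, giving the rate R=½ stated above.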

According to standard terminology, the relation of output bits to inputs depicted above is expressed using generator polynomials, where powers of the variable x stand for delays. For the above encoder, it reads:

generator polynomial for the above encoder

These are sometimes also expressed as binary vectors whose elements are the coefficients of various powers, such that for our example, it reads:

Vector representation of generators

The latter formulation is the impulse response of the FIR filter, and suggests expressing the output of the j-th filter for the i-th input stream using a convolution:
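To make the convolution view concrete, the sketch below convolves the input bits with a generator's coefficient vector and reduces mod 2. The generators are assumed for illustration (the classic constraint-length-2 pair g₁ = 1 + x + x², g₂ = 1 + x²); substitute the figure's actual coefficients as needed.

```python
def conv_mod2(x, g):
    """Binary convolution of bit sequence x with tap vector g, reduced mod 2."""
    y = [0] * (len(x) + len(g) - 1)
    for i, xi in enumerate(x):
        for j, gj in enumerate(g):
            y[i + j] ^= xi & gj  # XOR accumulates the mod-2 sum
    return y

g1 = [1, 1, 1]  # coefficients of 1 + x + x^2 (assumed)
g2 = [1, 0, 1]  # coefficients of 1 + x^2 (assumed)

bits = [1, 0, 1, 1]
out1 = conv_mod2(bits, g1)  # first output stream
out2 = conv_mod2(bits, g2)  # second output stream
```

Interleaving `out1` and `out2` reproduces the shift-register output (plus the tail bits as the register empties), which is exactly the FIR-filter reading of the encoder.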

More generally, for an encoder that accepts k input bits per clock cycle, outputs n bits, and has constraint length L, the output at time t will be:
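In symbols, this general relation can be sketched as follows (a reconstruction following the notation above, where $g_j^{(i)}[\tau]$ denotes the coefficient of the generator connecting input stream $i$ to output stream $j$ at delay $\tau$):

$$
y_t^{(j)} = \bigoplus_{i=1}^{k} \bigoplus_{\tau=0}^{L} g_j^{(i)}[\tau]\, x_{t-\tau}^{(i)},
\qquad j = 1, \dots, n,
$$

where $\oplus$ denotes modulo-2 addition, $x^{(i)}$ is the i-th input stream, and $y^{(j)}$ is the j-th output stream.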

Other Representations of the Convolutional Encoder

Finite State Machine

In addition to the block diagram shown above, there are other ways to think of the encoder that yield simpler forms for implementation, especially in software. An encoder with constraint length L can be considered a finite state machine and depicted as a directed graph with 2ᴸ vertices, representing the possible states of the LSRs. Each vertex has 2ᵏ outgoing edges, representing the transitions caused by the 2ᵏ possible inputs. The above encoder can be expressed using the following FSM:

FSM representation of the encoder

Along each edge, the single digit indicates the input bit and the pair of digits indicates the resulting output bits. Using this representation, a stream can be encoded simply by stepping the FSM.
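The FSM view is easy to implement as a transition table mapping (state, input) to (next state, output). The sketch below builds that table for a constraint-length-2, rate-½ encoder, with the same assumed taps as in the earlier sketch; the state is the pair of delay-element contents.

```python
def build_fsm():
    """Transition table: (state, input) -> (next_state, output_bits)."""
    fsm = {}
    for d1 in (0, 1):
        for d2 in (0, 1):
            for b in (0, 1):
                out = (b ^ d1 ^ d2, b ^ d2)   # the two output bits (assumed taps)
                fsm[((d1, d2), b)] = ((b, d1), out)  # input shifts into the state
    return fsm

def encode_fsm(bits, fsm):
    """Encode a bit stream by walking the FSM from the all-zero state."""
    state, out = (0, 0), []
    for b in bits:
        state, o = fsm[(state, b)]
        out.extend(o)
    return out
```

The table has 2ᴸ·2ᵏ = 8 entries here, and walking it reproduces the shift-register encoder's output exactly.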

Trellis

Another representation is obtained from a trellis section. A section is a bipartite graph connecting the current state to the possible next states:

A single trellis section representing the encoder on top

Each column has 2ᴸ vertices corresponding to the possible encoder states, and each vertex has 2ᵏ edges leaving it, corresponding to the possible inputs. These sections may be joined to form a trellis.
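A single trellis section is just the FSM's transition table laid out as an edge list between two columns of states. The sketch below enumerates the edges for the same assumed constraint-length-2, rate-½ encoder; each edge records (current state, input, next state, output).

```python
def trellis_section():
    """Edge list of one trellis section: (state, input, next_state, output)."""
    edges = []
    for d1 in (0, 1):
        for d2 in (0, 1):
            for b in (0, 1):
                nxt = (b, d1)                 # input shifts into the register
                out = (b ^ d1 ^ d2, b ^ d2)   # output bits (assumed taps)
                edges.append(((d1, d2), b, nxt, out))
    return edges
```

Note that every state in the right column has exactly 2ᵏ incoming edges as well; this regular structure is what the Viterbi algorithm will exploit.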

The trellis representation will later prove valuable for decoding with the Viterbi algorithm.

Recursive Encoders

There is another popular type of convolutional encoder, which utilizes a feedback loop. The systematic recursive encoder is the basic building block of turbo encoders. An example of such an encoder is shown below:

Recursive systematic encoder with rate R=½
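A recursive systematic encoder can be sketched in code as well. The feedback and feedforward taps below are assumptions for illustration (feedback from both register stages, feedforward 1 + x²) and may differ from the figure; the key features are the feedback sum entering the register and the input bit passed through unchanged.

```python
def encode_rsc(bits):
    """Rate-1/2 recursive systematic encoder sketch (assumed taps)."""
    d1 = d2 = 0
    out = []
    for b in bits:
        s = b ^ d1 ^ d2      # feedback sum entering the register
        out.append(b)        # systematic bit: the input is emitted as-is
        out.append(s ^ d2)   # parity bit from the feedforward taps
        d2, d1 = d1, s       # shift the feedback sum into the register
    return out
```

Because every other output bit is the input itself, the code is systematic; the feedback makes the parity stream an IIR function of the input.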

Note the feedback loop, which makes the encoder an IIR filter rather than an FIR one.

That's it for today. In the next post, I'll give details regarding the decoding process.

