Dithering & Data Representation

Isuru Pamuditha
13 min read · Nov 1, 2020


A non-deterministic continuous signal [MATLAB representation of an audio clip] — (Image was created by the author)

“Visualization gives you answers to questions you didn’t know you had” — Ben Shneiderman

[ Please note that the first part of the article is focused on providing the basic background knowledge required to understand “what dithering is”. Those who already have this background can skip Part — I. ]

[ Part — I ]

Data Representation

Before moving on to more theoretical concepts, let’s take a look at how we represent and visualize data. Assume you are watching a movie on your computer or a YouTube video on your phone. As you know, we can change the quality of the video being played to our preference. By choosing 720p/HD/UHD/4K you get a higher-quality visual feed along with higher data consumption, and by choosing 240p, 360p or 480p you get roughly the opposite. This happens because of the representation, or specifications, of each format: the more data points you have (better-quality visuals), the more memory or data you consume. This trade-off between quality and memory requirement is a fascinating area of research. Even though I took video streaming/storage as the example, which is easier to explain, the same scenario appears in audio, image quality and other compression systems. Dithering is a method that enables us to preserve a significant amount of the original data using less memory during the compression process.

Sampling & Quantization

“Can we store and represent analogue signals with perfect accuracy?” is a trickier question than it seems. Even the most detailed recording of an analogue signal was captured with some sampling time, which means at least a few data points are missing from the reconstructed signal. In general, though, we call a signal analogue if it is continuous and the data sampled with a very small sampling time is represented in an interpolated fashion. When we want to record a naturally occurring analogue signal (if the signal is random and non-deterministic), or convert an existing analogue signal to digital, we choose a sampling rate at which to record and represent the data. That is called the sampling process. The recorded values can then be approximated to a set of standard values (levels) depending on the specifications of the devices or the requirements of the application. This is called the quantization process, and the number of levels we choose to represent the data points is the number of quantization levels. Increasing the sampling time degrades the quality of the resultant signal, because the sampling frequency decreases and a larger number of data points is missed.

As an example, if we use only 1 bit to represent the data, there are only two states, ‘1’ and ‘0’, so every point must be approximated to either one or zero. Likewise, the sensitivity and resolution of the representation go up with an increasing number of quantization bits. The following images give a rough idea of how signals are represented with different numbers of quantization bits. For instance, if the number of quantization bits is N = 3, there are 2**3 = 8 levels, and if N = 10, there are 2**10 = 1024 levels. Therefore, the number of quantization levels is L = 2**N. The following are the standard equations used in the quantization process.

(Images were created by the author)
(N = 1) & (N = 2) — The original signal is barely preserved (Images were created by the author)
(N = 3) & (N = 5) (Images were created by the author)
(N = 6) — As you can see, almost all the data points and the original shape are preserved with a higher number of quantization bits (Images were created by the author)
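If you want to reproduce plots like the ones above, here is a minimal Python sketch of uniform quantization (assuming NumPy is available; the helper name uniform_quantize and the test sine wave are illustrative choices, not taken from the article). It quantizes the same signal with different numbers of bits and reports how the error shrinks as N grows.

```python
import numpy as np

def uniform_quantize(x, n_bits, x_max=1.0):
    """Quantize samples in [-x_max, x_max] onto 2**n_bits evenly spaced levels."""
    levels = 2 ** n_bits                     # L = 2**N
    step = 2 * x_max / levels                # resolution q of the quantizer
    q = np.round(x / step) * step            # snap each sample to the nearest level
    return np.clip(q, -x_max, x_max - step)  # keep the result inside the level range

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)           # a smooth test signal, like the plots above

for n in (1, 2, 3, 5, 6):
    err = uniform_quantize(signal, n) - signal
    print(f"N = {n}: L = {2**n:3d} levels, rms error = {np.sqrt(np.mean(err**2)):.4f}")
```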

You might wonder how the input values are approximated to values on the y-axis, i.e. the quantization levels. The two main types of quantizers are:

  1. Mid-Riser Quantizer (no zero approximation, i.e. no zero level on the quantized axis)
  2. Mid-Tread Quantizer (a zero-value level is included)
Resultant signal after Mid-Riser (left image) & Mid-Tread (right image) quantizing (Images were created by the author)
Equations for Mid-Riser (left image) & Mid-Tread (right image) quantizing (Images were created by the author)

(In the above equations, note that the brackets surrounding the ‘X + Xm’ terms are not square brackets but the floor function notation.)
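As a rough illustration of the difference between the two, here is a small Python sketch using the commonly quoted textbook forms of the two quantizers. These may differ slightly in offsets and notation from the equations in the figure above, so treat them as an assumption rather than an exact transcription.

```python
import numpy as np

def mid_riser(x, step):
    # Q(x) = q * (floor(x / q) + 1/2)  ->  no output level sits exactly at zero
    return step * (np.floor(x / step) + 0.5)

def mid_tread(x, step):
    # Q(x) = q * floor(x / q + 1/2)    ->  zero is a valid output level
    return step * np.floor(x / step + 0.5)

x = np.array([-0.6, -0.1, 0.0, 0.1, 0.6])
q = 0.25                                   # resolution (step size)
print("mid-riser:", mid_riser(x, q))       # e.g. 0.0 maps to +0.125
print("mid-tread:", mid_tread(x, q))       # e.g. 0.0 maps to  0.0
```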

As an example, an Arduino processes analogue inputs with a 10-bit configuration by default (returning values between 0 and 1023). Therefore, as mentioned earlier, the analogue signals we record and process are not truly analogue.

SQNR: Signal to Quantization Noise Ratio

When we quantize a signal, we approximate the actual values to the nearest available levels. This approximation introduces an error at each of those points, which degrades the quality of the signal; i.e. the resultant signal is noisy. If we treat the error added by the quantization process as a single source of noise, we can model the process as illustrated in the following image.

Xq = Quantized signal (Image was created by the author)

The impact of this step is modelled using a parameter called the “Signal to Quantization Noise Ratio (SQNR)”. The equation is given below, and it is useful for getting an idea of the quality of the resultant signal after quantizing. Note carefully that this quantization noise corresponds only to the errors added by quantizing; noise that is present due to various other factors is a different matter.

(Image was created by the author)
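To make the idea concrete, here is a minimal Python sketch (assuming NumPy) that measures the SQNR of a quantized full-scale sine wave and compares it with the well-known 6.02N + 1.76 dB rule of thumb for sinusoids. The exact expression in the figure above may be written differently, so this is only an illustration.

```python
import numpy as np

def sqnr_db(original, quantized):
    """Empirical SQNR in dB: signal power divided by quantization-error power."""
    noise = original - quantized
    return 10 * np.log10(np.mean(original ** 2) / np.mean(noise ** 2))

t = np.linspace(0, 1, 100_000)
x = np.sin(2 * np.pi * 440 * t)               # a full-scale sine in [-1, 1]

for n in (3, 8, 16):
    step = 2.0 / 2 ** n                       # resolution q for an N-bit quantizer
    xq = np.clip(step * np.floor(x / step + 0.5), -1, 1 - step)   # mid-tread quantizer
    print(f"N = {n:2d}: measured {sqnr_db(x, xq):6.2f} dB, "
          f"rule of thumb {6.02 * n + 1.76:6.2f} dB")
```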

[ Part — II ]

Dithering

First of all, let’s take a look at how the word “dither” originated. To dither means, in general, to be indecisive; it has also been described as a “nervous vibration”. There is a small story I found related to the origin of its engineering use. Engineers working in the aeronautical industry once came across a strange phenomenon which led to a remarkable discovery. They observed that mechanical aircraft computers performed more accurately in flight than when the aircraft was on the ground. After some experiments, they found that the vibration caused by the plane’s engines loosened the sticky moving parts in the machines, resulting in better performance. In this case, the noise added to the system helped the result be better than if there had been no noise at all. This is analogous to how dithering works.

Dithering was first applied by Lawrence G. Roberts in his 1961 master’s thesis at MIT, followed by an article in 1962. Dither is an intentionally applied form of noise used to randomize quantization error and reduce its impact. It is used in the processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD. Now, let’s take a deeper look into what it means and how it works.

Earlier we discussed the quantization noise added in the process and how it affects the resultant signal. In applications, this type of noise, along with noise from other sources, can be a real issue when storing data and processing it into something useful. On the other hand, even if we used a higher number of levels and reduced the quantization noise, that would result in a larger amount of data, which affects memory allocation and the efficiency of processing it. Therefore, we must either find a way to reduce the noise or find a way to store most of the data accurately without sacrificing memory space unnecessarily. Note that we can still use high-end quantizing techniques, which give better sensitivity and accuracy, for important applications that demand higher accuracy and have greater processing and storage capacity.

In the image below on the left, the horizontal arrow marks the two levels closest to the zeroth level, and the largest error that quantizing can introduce is ‘e’, which is equal to half the resolution (q/2) of the system.

The quantization error can be kept below ‘q/2’ if the signal fed to the quantization step varies between the two levels nearest to it. In that way we can reduce ‘e’ to a minimum. Now think: what if we could add some other signal to make the actual signal vary so that the above condition is met? For that, we can add a controlled stream of random noise and remove it at the end of the quantization process to obtain a better-valued signal in the end. The process is illustrated in the block diagram below (image on the right).

(Images were created by the author)
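The block diagram can be sketched in a few lines of Python (assuming NumPy). In this sketch the dither is uniform random noise of amplitude q/2 that is added before the quantizer and subtracted afterwards, and the smoothing step is just one simple way to expose the benefit; none of these particular choices come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, step):
    return step * np.floor(x / step + 0.5)               # plain mid-tread quantizer

def subtractive_dither(x, step):
    d = rng.uniform(-step / 2, step / 2, size=x.shape)   # controlled random noise
    return quantize(x + d, step) - d                     # add before, remove after quantizing

def smooth(y, k=101):
    return np.convolve(y, np.ones(k) / k, mode="same")   # simple moving average

t = np.linspace(0, 1, 50_000)
x = 0.3 * np.sin(2 * np.pi * 3 * t) + 0.05               # slowly varying test signal
step = 0.1                                               # coarse resolution q

plain = quantize(x, step)
dithered = subtractive_dither(x, step)

# With dither the error is decorrelated from the signal and averages towards zero,
# so a smoothed version of the dithered output tracks the original much more closely.
print("mean |error|, plain    :", np.mean(np.abs(smooth(plain) - x)))
print("mean |error|, dithered :", np.mean(np.abs(smooth(dithered) - x)))
```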

It can be shown that the performance of ADCs can be enhanced by adding noise, referred to as dither, to the input signal before quantization. Dithering can also be used to produce an effect similar to that of an anti-aliasing filter during the sampling process. However, this step adds more noise than the rest, which worsens the SNR (Signal to Noise Ratio).

If you are having a hard time understanding this, the following example might help with the theory. It is an interesting example of how dithering can help minimize error in audio production. Let’s say you have an original recording in 32-bit format and you need to approximate it into 16 bits. One way to do this is by “truncating”: we simply discard the extra data and recreate the signal. But as you can imagine, that results in larger errors. You can use “rounding” as well. That gives a better result than truncating, but it still suffers from relatively large errors when the values fall midway between two corresponding levels. This can be solved by introducing a random signal with a suitable maximum amplitude, which makes the signal less prone to quantization error.
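The following Python sketch (assuming NumPy; the triangular, or TPDF, dither shown here is only one of the options mentioned below, and the very quiet test tone is an illustrative choice) compares truncating, rounding and dithered rounding when reducing high-precision samples to 16 bits. The tone is quieter than half a 16-bit step, so plain rounding erases it completely and truncation distorts it, while the dithered version preserves its average level.

```python
import numpy as np

rng = np.random.default_rng(1)
FULL_SCALE = 2 ** 15                          # 16-bit audio spans -32768 .. 32767

def reduce_to_16bit(x, mode):
    """Reduce high-precision samples in [-1, 1) to 16-bit integer levels."""
    y = x * FULL_SCALE
    if mode == "truncate":                    # simply drop the extra precision
        y = np.floor(y)
    elif mode == "round":                     # round to the nearest level
        y = np.round(y)
    else:                                     # add TPDF ("triangular") dither, then round
        tpdf = rng.uniform(-0.5, 0.5, y.shape) + rng.uniform(-0.5, 0.5, y.shape)
        y = np.round(y + tpdf)
    return np.clip(y, -FULL_SCALE, FULL_SCALE - 1) / FULL_SCALE

# A 1 kHz tone quieter than half a 16-bit step: rounding alone would erase it.
t = np.arange(48_000) / 48_000
tone = np.sin(2 * np.pi * 1000 * t)
x = (0.4 / FULL_SCALE) * tone

for mode in ("truncate", "round", "dither"):
    out = reduce_to_16bit(x, mode)
    recovered = 2 * np.mean(out * tone) * FULL_SCALE    # amplitude at 1 kHz, in steps
    print(f"{mode:8s}: recovered amplitude = {recovered:+.3f} steps (true 0.400)")
```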

Generally, in audio mastering, five types of dither are used in applications: triangular, rectangular, pow-r-1, pow-r-2 and pow-r-3. Furthermore, it is recommended that an audio file not be dithered until the rendering is final. Just like in that example of audio files, the continuous signals found in nature can also be approximated efficiently using dithering.

Image Quantization & Dithering

An image is a 2D rectilinear array of pixels. It is also a sequence of data points representing the colour value of each pixel. The data for an image is usually stored as a 3-D vector with RGB (Red, Green, Blue) values. Depending on the number of pixels used to represent the image, we may or may not notice that we are looking at a digital version of an image. The image below was taken by me several months ago, and I edited it so that its pixel count is reduced and the individual pixels can be seen easily. Note that an image has three types of resolution, and each is associated with its own type of error.

  1. Intensity resolution — Depends on the pixel depth
  2. Spatial resolution — “Width” x “Height” in terms of pixels
  3. Temporal resolution — Monitor refresh rate in Hz

The corresponding errors are intensity quantization, spatial aliasing and temporal aliasing, which are associated with insufficient intensity, spatial and temporal resolution respectively.

(Original Image was Captured and Edited by the Author)

In image processing and representation, dithering is used as an algorithm that simulates the illusion of colours and shades which are not actually present in the image, or which are unavailable for display due to the limitations of the screen. This can also happen because of the limitations of the operating system running on the device. Here, dithering is accomplished by using a varying pattern of the colours that are already available. This illusion of colour depth is often sufficient to convince the human eye that it sees the missing shades and colours, when in reality it is the same set of colours placed in different patterns.

Even though there are only two colours, the pattern appears to contain more because of the way we “perceive” them. Something similar happens in dithering. (Source — https://en.wikipedia.org/wiki/Dither)

Halftoning & Dithering

Source — https://www.cs.princeton.edu/courses/archive/fall00/cs426/lectures/dither/dither.pdf

When we look at a B&W newspaper like the one above, we think we see different shades of gray. But in reality, there are only two colours in total: black and white. There are no gray dots at all; it only seems as if there is more than one colour because of the pattern in which the dots are placed on the paper. The more dither patterns a device or program supports, the more shades of gray it can represent. In printing, dithering is usually called halftoning, and shades of gray are called halftones. These patterns make it possible to deliver much more detailed images using a smaller number of quantization bits (in this case only 1 bit). Keep in mind that dithering is not gray scaling: in gray scaling, each individual dot can have a different shade of gray.

Let’s consider how we can represent an image using a smaller number of bits. As an example, consider the original image given below with an 8-bit representation (256 shades in total). If we want to represent the whole image with 1 bit, we can assign a suitable threshold and map each pixel value to either black or white. The resultant image is the “uniform quantization” one below. As we can see, it carries very little detail from the original and is certainly not suitable for application-level representation. So we have to make other arrangements, and this is where the dithering concept comes in handy. Here we can consider three main types of dithering.

  • Random dither — random values are used to change the initial pixel values. This results in an image a little better than uniform quantization.
  • Ordered dither — in this method, certain matrices hold certain patterns and the image is reconfigured to match the matrix properties. The result is better than random dithering. Neighbouring pixels do not affect each other, which makes this form of dithering suitable for animations.
  • Error diffusion dither — here, the quantization error of each pixel is dispersed over its neighbouring pixels (to the right and below). The Floyd-Steinberg dithering algorithm is an example of an error-diffusion technique and produces very fine-grained dithering (see the sketch after this list). The result is an almost perfect representation of the original image, considering that it is just 1 bit.

There are other error-diffusion dithering methods such as “Jarvis-Judice & Ninke”, “Stucki”, “Burkes”, “Sierra”, “Two-row Sierra”, “Sierra Lite”, “Atkinson” & “Gradient-based”.
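As a rough illustration of error diffusion, here is a minimal Python sketch of Floyd-Steinberg dithering to 1 bit (assuming NumPy; the toy gradient image and the helper name floyd_steinberg_1bit are illustrative choices, not part of the original article). Plain thresholding turns every dark-gray pixel black, while error diffusion reproduces the average brightness using only black and white.

```python
import numpy as np

def floyd_steinberg_1bit(gray):
    """Dither a grayscale image (values 0..255) down to pure black and white."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0        # quantize this pixel to 1 bit
            out[y, x] = new
            err = old - new                           # error made at this pixel
            # push the error onto the pixels that have not been visited yet
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# Toy example: a dark horizontal gradient that plain thresholding would wipe out
gradient = np.tile(np.linspace(0, 100, 64), (16, 1))
thresholded = np.where(gradient >= 128, 255.0, 0.0)   # "uniform quantization" to 1 bit
dithered = floyd_steinberg_1bit(gradient)

print("mean level, original   :", round(gradient.mean(), 1))
print("mean level, thresholded:", round(thresholded.mean(), 1))  # everything goes black
print("mean level, dithered   :", round(dithered.mean(), 1))     # brightness is preserved
```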

Depending on where it occurs, we can classify this kind of image dithering into two types.

  1. Application dither — Occurs when a certain application tries to simulate colours that aren’t in its current colour table. This can occur in GIF-type and PNG-8-type images.

  2. Browser dither — Occurs when a web browser running on a display with a smaller number of bits (for example, 8 bits, i.e. only 256 colours) attempts to simulate colours that cannot be displayed. This can occur with several image types such as GIF, PNG, or JPEG. Using dithering repeatedly can also result in higher contrast.

How Imperfections Make Things Perfect

Forget about all the scientific details for a minute and think about what happens inside the algorithm in general: we add some noise “additionally” and make the result even better than it would be without that additional noise. Now apply the same idea to our lives. We all have imperfections. We rise and fall, and we have wounds that can or cannot be seen by anyone other than ourselves. All these things are the noisy stuff that makes us human. We often think these imperfections are a burden and must be eliminated or hidden. But in reality, they are our identity; they are what makes us who we are, what makes us unique, what reduces the overall “noise” and makes the end result perfect. While I was searching for this online, I remembered a beautiful quote from the movie “Good Will Hunting” which carries the same meaning.

“…. little things like that…. those are the things I miss the most. The little idiosyncrasies that only I knew about … People call these things imperfections, but they’re not, aw that’s the good stuff. And then we get to choose who we let into our weird little worlds. You’re not perfect, sport. And let me save you the suspense ….” — Robin Williams (Good Will Hunting)

So, the lesson we can take from this small yet fascinating phenomenon is this: believe in yourself; the noise (imperfections) you encounter in life is only meant to shape you into your best possible version, not to drag you down. Look at your troubles with this perspective from now on and you will feel much stronger and more confident day by day. I will finish this article with another quote which carries a similar meaning. Thank you for reading!

“To find signals in data, we must learn to reduce the noise — not just the noise that resides in the data, but also the noise that resides in us. It is nearly impossible for noisy minds to perceive anything but noise in data.” — Stephen Few


