Aymen SEKHRI
May 18, 2021

Image and Video Processing: From Mars to Hollywood with a Stop at the Hospital | Week 1 Review

In this article, I will walk you through the most important basic concepts of digital image processing. These concepts are taken from the well-known Coursera course offered by Duke University, and I will support them with additional information from other resources.


Content:

  1. What is Digital Image Processing?
  2. Examples of fields that use Digital Image Processing.
  3. Image Acquisition.
  4. Image Sampling and Quantization.
  5. Representing digital images.
  6. Neighbors of a pixel.
  7. Simple image operations.
  8. Play with an Image: Programming Exercise.

1. What is Digital Image Processing?

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

When x, y and the intensity values of f are all finite, discrete quantities, we call the image a digital image.

Note that a digital image is composed of a finite number of elements, each of which has a particular location (x, y) and value. These elements are called picture elements, or pixels. Pixel is the term used most widely to denote the elements of a digital image.

Any digital image you see on any device is just a matrix: an array of discrete numbers.
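To see this for yourself, here is a minimal Python/OpenCV sketch that loads an image and inspects the underlying array ("example.png" is a placeholder path; substitute any image you have on disk):

```python
# A minimal sketch: load an image and inspect the array behind it.
import cv2

# "example.png" is a placeholder; use any image file on disk.
img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "image not found"

print(type(img))    # <class 'numpy.ndarray'>
print(img.shape)    # (rows, columns), i.e., (M, N)
print(img.dtype)    # uint8: each pixel is an integer in [0, 255]
print(img[:3, :3])  # the top-left 3 x 3 block of pixel values
```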

2. Examples of fields that use Digital Image Processing:

In today’s world, digital image processing touches almost every technical endeavor, so we will cover just a few of the fields in which it plays a major role.

a- Image sharpening and restoration:

Essentially, this involves manipulating an image to achieve a desired effect. It includes conversion, sharpening, blurring, edge detection, retrieval, and recognition of images.

b- Medical Field:

There are several applications in the medical field that depend on digital image processing:

  • Gamma-ray imaging
  • PET scan
  • X-Ray Imaging
  • Medical CT scan
  • UV imaging

There are many more fields as well, such as robot vision, pattern recognition, and video processing.

3. Image Acquisition:

In a typical acquisition setup, light from an energy source is reflected off the object and travels into the sensing device; think of your digital camera. The object lives in continuous space, where everything is continuous, but what we record on the image plane is a discrete representation of it, and the discretization happens in two different directions. The first is spatial discretization: there is a discrete number of sensors, and each one integrates the light arriving at it. The second is a discretization in amplitude.

4. Image Sampling and Quantization:

A continuous image f is shown in Figure 2.16(a) in order to illustrate how sampling and quantization work. Continuous images can be continuous in their x- and y-coordinates, as well as in their amplitude. To convert it to digital form, we have to sample the function in both coordinates and in amplitude.

Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

→ Analyzing Fig. 2.16:

  • The one-dimensional function in Fig. 2.16(b) — let’s call it g — is a plot of amplitude (intensity level) values of the continuous image along the line segment AB in Fig. 2.16(a).
  • The random variations are due to image noise.
  • To sample this function g, we take equally spaced samples along line AB, as shown in Fig. 2.16(c).
  • In order to form a digital function, the intensity values also must be converted (quantized) into discrete quantities. The right side of Fig. 2.16(c) shows the intensity scale divided into eight discrete intervals, ranging from black to white.
  • The vertical tick marks indicate the specific value assigned to each of the eight intensity intervals. The continuous intensity levels are quantized by assigning one of the eight values to each sample.
  • The digital samples resulting from both sampling and quantization are shown in Fig. 2.16(d).
  • Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.

When a sensing array is used for image acquisition, the number of sensors in the array establishes the limits of sampling in both directions. Quantization of the sensor outputs is as before. Figure 2.17 illustrates this concept.

Figure 2.17(a) shows a continuous image projected onto the plane of an array sensor.

Figure 2.17(b) shows the image after sampling and quantization. Clearly, the quality of a digital image is determined to a large degree by the number of samples and discrete intensity levels used in sampling and quantization.
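To make the idea concrete, here is a minimal NumPy sketch of the same process on a made-up 1-D intensity profile (the profile function, the number of samples, and the eight levels are my own illustrative choices, not values from the course):

```python
# A minimal sketch of sampling and quantization in the spirit of Fig. 2.16:
# a "continuous" 1-D intensity profile is sampled at equally spaced points,
# and each sample is assigned one of 8 discrete intensity levels.
import numpy as np

def g(t):
    # Stand-in continuous intensity profile along a line segment, in [0, 1].
    return 0.5 + 0.4 * np.sin(2 * np.pi * t)

t_samples = np.linspace(0.0, 1.0, 16)   # sampling: 16 equally spaced points
samples = g(t_samples)

levels = 8                               # quantization: 8 intensity intervals
quantized = np.round(samples * (levels - 1)) / (levels - 1)

print(np.column_stack((samples.round(3), quantized.round(3))))
```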

5. Representing digital images:

Let’s consider a continuous image function f(s, t) of two continuous variables, s and t. To convert this function into a digital image, we sample it and quantize it, as explained in the previous section. Suppose that we sample the continuous image into a 2-D array, f(x, y), containing M rows and N columns, where (x, y) are discrete coordinates. We use integer values for these discrete coordinates: x ∈ {0, 1, 2, …, M-1} and y ∈ {0, 1, 2, …, N-1}.

In general, the value of the image at any coordinates (x, y) is denoted f(x, y), where x and y are integers. The section of the real plane spanned by the coordinates of an image is called the spatial domain, with x and y being referred to as spatial variables or spatial coordinates.

As Fig. 2.18 shows, there are three basic ways to represent f(x, y).

Figure 2.18(a) is a plot of the function, with two axes determining the spatial location and the third axis being the values of f (intensities) as a function of the two spatial variables x and y. This representation is useful when working with gray-scale sets.

Fig. 2.18(b) is much more common. It shows f(x, y) as it would appear on a monitor or photograph. Here, the intensity of each point is proportional to the value of f at that point. In this figure, there are only three equally spaced intensity values. If the intensity is normalized to the interval [0, 1], then each point in the image has the value 0, 0.5, or 1.

Fig. 2.18(c) simply displays the numerical values of f(x, y) as an array (matrix).

We conclude from the previous paragraph that the representations in Figs. 2.18(b) and (c) are the most useful. Image displays allow us to view results at a glance. Numerical arrays are used for processing and algorithm development. In equation form, we write the representation of an M × N numerical array as:

$$f(x, y) = \begin{bmatrix} f(0, 0) & f(0, 1) & \cdots & f(0, N-1) \\ f(1, 0) & f(1, 1) & \cdots & f(1, N-1) \\ \vdots & \vdots & & \vdots \\ f(M-1, 0) & f(M-1, 1) & \cdots & f(M-1, N-1) \end{bmatrix}$$

Returning briefly to Fig. 2.18, note that the origin of a digital image is at the top left. This is a conventional representation based on the fact that many image displays, such as TV monitors, sweep an image starting at the top left and moving to the right, one row at a time.

6. Neighbors of a pixel:

A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by:

$$(x+1, y),\ (x-1, y),\ (x, y+1),\ (x, y-1)$$

This set of pixels, called the 4-neighbors of p, is denoted by N_4(p). Each of these pixels is a unit distance from (x, y), and some of the neighbor locations of p lie outside the digital image if (x, y) is on the border of the image. The four diagonal neighbors of p have coordinates:

$$(x+1, y+1),\ (x+1, y-1),\ (x-1, y+1),\ (x-1, y-1)$$

and are denoted by N_D(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N_8(p). As before, some of the neighbor locations in N_D(p) and N_8(p) fall outside the image if (x, y) is on the border of the image.
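As a quick illustration, here is a small Python sketch (my own, not from the course) that collects the in-bounds 4-neighbors, diagonal neighbors, and 8-neighbors of a pixel in an M × N image:

```python
# A minimal sketch of N4(p), ND(p), and N8(p) for a pixel p = (x, y),
# discarding neighbor locations that fall outside an M x N image.
def neighbors(x, y, M, N):
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

    def inside(p):
        return 0 <= p[0] < M and 0 <= p[1] < N

    n4 = [p for p in n4 if inside(p)]
    nd = [p for p in nd if inside(p)]
    return n4, nd, n4 + nd  # N8 is the union of N4 and ND

# A corner pixel of a 3 x 3 image keeps only its in-bounds neighbors:
print(neighbors(0, 0, 3, 3))
```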

7. Simple image operations:

Basically, we started from a continuous picture and ended up with a 2-D discrete array. Now we can start performing operations on these images.

→ Array versus Matrix operations:

An array operation involving one or more images is carried out on a pixel-by-pixel basis. For example, consider the following 2 × 2 images:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$$

The array (element-wise) product of these two images is:

$$\begin{bmatrix} a_{11}b_{11} & a_{12}b_{12} \\ a_{21}b_{21} & a_{22}b_{22} \end{bmatrix}$$

On the other hand, the matrix product is given by:

$$\begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$$

We assume array operations throughout the course. So when we talk about raising an array to a power, we mean that each individual pixel is raised to that power, and the same goes for division. Addition and subtraction pose no such ambiguity because they are identical for matrices and arrays.
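Here is a minimal NumPy sketch contrasting the two products; in NumPy, `*` is the array (element-wise) product and `@` is the matrix product:

```python
# A minimal sketch: array product vs. matrix product for two 2 x 2 "images".
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A * B)  # array product:  [[ 5 12] [21 32]]
print(A @ B)  # matrix product: [[19 22] [43 50]]
```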

→ Linear versus Nonlinear operations:

Consider a general operator, H, that produces an output image, g(x, y), for a given input image, f(x, y):

$$H[f(x, y)] = g(x, y)$$

H is linear if and only if:

$$H[a_i f_i(x, y) + a_j f_j(x, y)] = a_i H[f_i(x, y)] + a_j H[f_j(x, y)] = a_i g_i(x, y) + a_j g_j(x, y)$$

where a_i and a_j are arbitrary constants, and f_i(x, y) and f_j(x, y) are images of the same size.
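As a quick numerical check, here is a small NumPy sketch of this test using a classic counterexample: summing all pixels is a linear operator, while taking the maximum pixel value is not (the pixel values below are arbitrary).

```python
# A minimal sketch of the linearity test with a1 = 1, a2 = -1.
import numpy as np

f1 = np.array([[0, 2], [2, 3]])
f2 = np.array([[6, 5], [4, 7]])
a1, a2 = 1, -1

# Sum operator: both sides agree (-15 == -15), so sum is linear.
print(np.sum(a1 * f1 + a2 * f2), a1 * np.sum(f1) + a2 * np.sum(f2))

# Max operator: the sides differ (-2 vs. -4), so max is nonlinear.
print(np.max(a1 * f1 + a2 * f2), a1 * np.max(f1) + a2 * np.max(f2))
```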

→ Arithmetic operations:

Arithmetic operations between images are array operations which means that arithmetic operations are carried out between corresponding pixel pairs.
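One practical caveat when coding these pixel-wise operations on 8-bit images: NumPy's uint8 arithmetic wraps around modulo 256, while OpenCV's cv2.add saturates at 255. A minimal sketch:

```python
# Pixel-wise addition on uint8 images: wrap-around vs. saturation.
import cv2
import numpy as np

f = np.array([[250, 10]], dtype=np.uint8)
g = np.array([[10, 10]], dtype=np.uint8)

print(f + g)          # [[  4  20]] -> 250 + 10 wraps around to 4
print(cv2.add(f, g))  # [[255  20]] -> saturates at 255
```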

→ Set and Logical operations:

The most important logical operations on images are the familiar Boolean ones: AND (intersection), OR (union), NOT (complement), and XOR (exclusive OR).

Many more operations can be performed on images, so we will not cover them all here; I will discuss each operation in detail in dedicated articles, so follow me to find out more.

8. Play with an Image: Programming Exercise:

  • Write a computer program capable of reducing the number of intensity levels in an image from 256 to 2, in integer powers of 2. The desired number of intensity levels needs to be a variable input to your program.

→ The solution in OpenCV Python: click here
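In case the link is unavailable, here is a minimal Python/OpenCV sketch of one possible solution (my own, not necessarily the linked one); "example.png" is a placeholder path:

```python
# Reduce a 256-level grayscale image to `levels` intensity levels,
# where `levels` is an integer power of 2 between 2 and 256.
import cv2
import numpy as np

def reduce_intensity_levels(img, levels):
    assert levels in {2, 4, 8, 16, 32, 64, 128, 256}, "levels must be a power of 2"
    step = 256 // levels          # width of each intensity interval
    return (img // step) * step   # map each pixel to the floor of its interval

img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
assert img is not None, "image not found"
for k in (128, 16, 2):
    cv2.imwrite(f"reduced_{k}.png", reduce_intensity_levels(img, k))
```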

→ The solution in Matlab:


Finally, this concludes the first-week review of the Image and Video Processing: From Mars to Hollywood with a Stop at the Hospital course. If you are interested in reading more, I will leave the resources below.
