Converting an RGB Image to a Grayscale Image in Java

Nickson Joram · Published in Javarevisited · Nov 13, 2019

The topic: conversion. We spend a lot of time finding new things. But at the same time, we spend just as much time finding the connections between the things we've already found!

Don't we?

Our objective

Before going deeper into the topic, let us look at the basic definitions.

The RGB color model is an additive mixing model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors.

Grayscale images, a kind of black-and-white or gray monochrome, are composed exclusively of shades of gray. The contrast ranges from black at the weakest intensity to white at the strongest.

We have a nice colorful image (say, RGB), and we want to convert it to a grayscale image. But why?

An RGB image consists of 3 layers, as we saw above: Red, Green, and Blue. It's a 3-dimensional matrix, just like 3 consecutive pages in a book. A grayscale image, on the other hand, can be considered a 2D array in which each value represents one pixel. Depending on the number of bits used to represent a pixel, the range of values in the matrix will vary.

Generally, a grayscale image uses an 8-bit representation for each pixel. With 8 bits we can represent values from 0 to 255. So a grayscale image in 8-bit representation is a matrix whose values can be anything from 0 to 255: 0 indicates a black pixel, 255 indicates a white pixel, and everything in between is a shade of gray.
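To make that concrete, here is a small sketch of my own (not from the original post) showing how Java packs the three 8-bit channels into a single int, and how an 8-bit gray level spans 0 to 255:

import java.awt.Color;

public class PixelDemo {
    public static void main(String[] args) {
        // A packed ARGB pixel: each channel is 8 bits, so each ranges 0-255.
        int rgb = new Color(200, 150, 50).getRGB();

        int red   = (rgb >> 16) & 0xFF; // bits 16-23
        int green = (rgb >> 8)  & 0xFF; // bits 8-15
        int blue  = rgb         & 0xFF; // bits 0-7

        System.out.println(red + " " + green + " " + blue); // prints: 200 150 50

        // In a grayscale image all three channels hold the same value,
        // from 0 (black) to 255 (white).
        int gray = 128; // a mid-gray shade
        int grayRgb = new Color(gray, gray, gray).getRGB();
        System.out.println(Integer.toHexString(grayRgb)); // prints: ff808080
    }
}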

Some algorithms can only be applied to 2-D images rather than 3-D ones. That is why we are keen on converting an RGB image to a grayscale image. This is the simplest explanation for it.

Other reasons are,

  • Signal-to-noise ratio.

For many applications of Image Processing, color information doesn't help us identify important edges or other features. There are exceptions, of course: if there is an edge (a step change in pixel value) in hue that is hard to detect in a grayscale image, or if we need to identify objects of a known hue (orange fruit in front of green leaves), then color information is useful. If we don't need color, we can consider it noise. At first, it's a bit counterintuitive to think in grayscale, but you get used to it.

  • The complexity of the code.

If you want to find edges based on Luminance AND Chrominance, you’ve got more work ahead of you. That additional work (and additional debugging, additional pain in supporting the software, etc.) is hard to justify if the additional color information isn’t helpful for applications of interest.

  • For learning Image Processing.

It’s better to understand grayscale processing first and understand how it applies to multi-channel processing rather than starting with full-color imaging and missing all the important insights that can (and should) be learned from single-channel processing.

  • The difficulty of visualization.

In grayscale images, the Watershed algorithm is fairly easy to conceptualize because we can think of the two spatial dimensions and one brightness dimension as a 3D image with hills, valleys, catchment basins, ridges, etc. Peak brightness is just a mountain peak in our 3D visualization of the grayscale image. There are a number of algorithms for which an intuitive physical interpretation helps us think through a problem. In RGB, HSI, Lab, and other color spaces this sort of visualization is much harder since there are additional dimensions that the standard human brain can't visualize easily. Sure, we can think of peak redness, but what does that mountain peak look like in an (x, y, h, s, i) space? Ouch. One workaround is to think of each color variable as an intensity image, but that leads us right back to grayscale image processing.

  • Color is complex.

Humans perceive color and identify color with deceptive ease. If you get into the business of attempting to distinguish colors from one another, then you’ll either want to

(i) follow tradition and control the lighting, camera color calibration, and other factors to ensure the best results, or

(ii) settle in for a career-long journey into a topic that gets deeper the more you look at it, or

(iii) wish you could be back working with grayscale because at least then the problems seem solvable.

  • Speed.

With modern computers, and with parallel programming, it’s possible to perform simple pixel-by-pixel processing of a megapixel image in milliseconds. Facial recognition, OCR, content-aware resizing, mean shift segmentation, and other tasks can take much longer than that. Whatever processing time is required to manipulate the image or squeeze some useful data from it, most customers/users want it to go faster. If we make the hand-wavy assumption that processing a three-channel color image takes three times as long as processing a grayscale image — or maybe four times as long, since we may create a separate luminance channel — then that’s not a big deal if we’re processing video images on the fly and each frame can be processed in less than 1/30th or 1/25th of a second. But if we’re analyzing thousands of images from a database, it’s great if we can save ourselves processing time by resizing images, analyzing only portions of images, and/or eliminating color channels we don’t need. Cutting processing time by a factor of three to four can mean the difference between running an 8-hour overnight test that ends before you get back to work, and having your computer’s processors pegged for 24 hours straight.

Now you can see why this stuff is so important in Digital Image Processing. Without it, we can't do much, right?

Then how can we change RGB into grayscale? Yes, we can do this in MATLAB, OpenCV, and so on. But can we do it without these? In Java? Simply, using the basic idea?

Why not?

I've used a very famous image that belongs to the Image Processing world. Yeah, Lenna! (Try to find out how she became the heart of the Digital Image Processing world.)

Then I used the following blocks of code.

First, these packages, which are already defined in Java:

/**
 *
 * @author A Nickson Joram
 */
import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

And then, in my class, I defined these fields:

BufferedImage image;
int width;
int height;

Then, in the constructor of the same class (so that creating an object performs the conversion), read the image and process it pixel by pixel:


try {
    File input = new File("location of your RGB image");
    image = ImageIO.read(input);
    width = image.getWidth();
    height = image.getHeight();
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            Color c = new Color(image.getRGB(j, i));
            // Weighted (luminosity) method, using the ITU-R BT.601 luma weights
            int red = (int) (c.getRed() * 0.299);
            int green = (int) (c.getGreen() * 0.587);
            int blue = (int) (c.getBlue() * 0.114);
            int gray = red + green + blue;
            Color newColor = new Color(gray, gray, gray);
            image.setRGB(j, i, newColor.getRGB());
        }
    }
    File output = new File("where to be placed and its name");
    ImageIO.write(image, "jpg", output);
    System.out.println("Done");
} catch (IOException e) {
    e.printStackTrace(); // don't swallow errors silently
}

Write the main method and run the program, and you will get the grayscale result.
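In case it helps, here is a minimal sketch of how the pieces above might fit together. The class name GrayscaleConverter is my own placeholder, and I'm assuming the try/catch block sits in the constructor:

import java.awt.image.BufferedImage;

public class GrayscaleConverter {

    BufferedImage image;
    int width;
    int height;

    public GrayscaleConverter() {
        // the try/catch block shown above goes here
    }

    public static void main(String[] args) {
        new GrayscaleConverter(); // reads, converts, and writes the image
    }
}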

Interesting, right?
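By the way, if you'd rather not write the per-pixel loop yourself, Java's standard library can do the conversion too. Here is a minimal sketch using ColorConvertOp; the file paths are placeholders, and note that the output may differ slightly from the hand-weighted version above because the library uses its own grayscale color space:

import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;
import java.io.File;
import javax.imageio.ImageIO;

public class BuiltInGrayscale {
    public static void main(String[] args) throws Exception {
        BufferedImage color = ImageIO.read(new File("input.jpg"));

        // Let the library handle the RGB-to-gray conversion.
        ColorConvertOp op = new ColorConvertOp(
                ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
        BufferedImage gray = op.filter(color, null);

        ImageIO.write(gray, "jpg", new File("output.jpg"));
    }
}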

Start playing with Lenna, the image I mentioned, and enjoy Image Processing!
