How does a digital camera sensor work?

Connor Gillmor
Published in Tech Update · 3 min read · May 9, 2018
Without the sensor, a digital camera would not be able to capture light and produce the images that we see. (graphic/Digital Trends)

A modern digital camera’s sensor generally comes in one of two varieties: Complementary Metal-Oxide-Semiconductor (CMOS) or Charge-Coupled Device (CCD). The CCD type is mainly found in older models, though some modern cameras still use it. Each type has its own advantages and disadvantages, but that is a topic for another article.

The most basic way to understand how a sensor works is this: when the shutter opens, the sensor captures the photons that hit it, and those photons are converted into an electrical signal that the camera’s processor reads and interprets as colors. That information is then stitched together to form an image. That is insanely over-simplified, though.

The more complete answer is that a sensor is made up of millions of cavities called “photosites,” which collect light from the moment the shutter opens until the exposure is finished (the number of photosites equals the number of pixels your camera has). The photons that hit each photosite are converted into an electrical signal whose strength varies with how many photons were actually captured in the cavity. How precisely that signal is recorded depends on your camera’s bit depth.
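To make the bit-depth idea concrete, here is a toy Python sketch of a single readout. Every number in it (grid size, well capacity, light level) is invented purely for illustration; real sensors also deal with noise, gain and far larger pixel grids.

```python
import numpy as np

# Toy readout of a tiny sensor. All numbers here are invented
# for illustration only.

rng = np.random.default_rng(seed=0)

height, width = 4, 6        # a 4 x 6 grid of photosites
full_well = 50_000          # assumed max photons one cavity can hold
bit_depth = 12              # 12-bit readout -> 4096 distinct levels

# Photons collected during the exposure (light arrival is Poisson).
photons = rng.poisson(lam=20_000, size=(height, width))
photons = np.clip(photons, 0, full_well)

# Quantize each photosite's charge into one of 2**bit_depth levels.
levels = 2 ** bit_depth
digital = np.round(photons / full_well * (levels - 1)).astype(np.uint16)

print(digital)   # the "raw" numbers the processor works with
```

A higher bit depth simply means more distinct levels between “no light” and “full,” so finer tonal differences survive the conversion.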

If we looked at a picture built from just that electrical data, the image would actually be in gray-scale. We get color images by way of what’s known as a “Bayer filter array.” A Bayer filter is a colored filter placed over top of each photosite; the color of each pixel in the image is determined by comparing the signals measured through neighboring photosites’ filters. The filters come in the standard red, green and blue, in a ratio of one red, one blue and two green in every block of four photosites.

A graphic of light entering photosites with Bayer filters layered on. (graphic/Cambridge in Colour)
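As a rough illustration of that layout, the following snippet builds an RGGB mosaic from a full-color image: each photosite keeps only the single channel its filter passes. The image data is random, purely for demonstration.

```python
import numpy as np

# A toy RGGB Bayer mosaic: each photosite keeps only the one channel
# its filter passes. The "scene" is random data, purely for demo.

rng = np.random.default_rng(seed=0)
h, w = 4, 4
scene = rng.integers(0, 256, size=(h, w, 3))  # full RGB, channels R,G,B

mosaic = np.zeros((h, w), dtype=scene.dtype)
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]   # red filters
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]   # green filters
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]   # green filters
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]   # blue filters

print(mosaic)   # one number per photosite: two-thirds of the colour
                # information is gone and must be reconstructed later
```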

The red filter allows red light through, the blue filter blue light, and the green filter green light. Light that doesn’t match a photosite’s filter is reflected away. This means roughly two-thirds of the light that could be captured is lost, and each photosite records only one color. The camera is then forced to guess the amounts of the other two colors at each pixel.

The data recorded by the sensor through the Bayer filter array is essentially what a RAW image file contains.

The camera then goes through a process, known as demosaicing, to estimate how much of each color of light reached each photosite, and colors the image based on that guesswork.
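In-camera demosaicing algorithms vary and are usually proprietary, but a minimal sketch of the idea, using simple bilinear interpolation over the RGGB mosaic from the earlier snippet, might look like this:

```python
import numpy as np
from scipy.ndimage import convolve

# A minimal bilinear demosaic for an RGGB mosaic: missing colour
# values at each pixel are estimated by averaging the nearest
# photosites that actually measured that colour. Real cameras use
# far more sophisticated algorithms.

def demosaic_bilinear(mosaic):
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))

    # Which photosites measured which colour (RGGB layout).
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Red/blue samples sit on a sparse grid; green on a checkerboard.
    # These kernel weights average exactly the known neighbours.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.0,  0.25, 0.0 ],
                     [0.25, 1.0,  0.25],
                     [0.0,  0.25, 0.0 ]])

    for ch, (mask, kernel) in enumerate([(r_mask, k_rb),
                                         (g_mask, k_g),
                                         (b_mask, k_rb)]):
        rgb[..., ch] = convolve(mosaic * mask, kernel, mode="mirror")

    return rgb

# Using the mosaic from the previous snippet:
# rgb = demosaic_bilinear(mosaic.astype(float))
```

Bilinear interpolation is the simplest reasonable guess; it softens fine detail, which is one reason real processors use cleverer methods.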

There are multiple processes involved in this step, but I think those, along with a few other aspects like “microlenses,” fall outside the scope of this article.
