Smartphone Photography Explained
Co-Authors: Deepak Mishra, Shiv Pratap Rai
Introduction
The smartphone market is crowded with vendors launching new devices in every category almost every other day. Smartphone cameras come with all sorts of lenses, sensors, and specialized hardware to process image data, and the results different smartphones produce for the same scene show remarkable differences and similarities at the same time.
Smartphone manufacturers throw around words like aperture, autofocus, digital and optical zoom, bokeh, and so on. What do all these terms mean, and why do they matter to us when we shoot a photo in day-to-day life? Let us find out.
What is a camera?
The word camera comes from camera obscura, the Latin name of the original device for projecting a 2D image onto a flat surface. The modern photographic camera evolved from the camera obscura.
How does the camera lens work?
Focus point
A camera uses a combination of different types of lenses, such as convex and concave, but the convex lens is dominant because the goal is to converge light onto the sensor: the more precisely the light is brought to a point on the sensor, the crisper the image. The other lens elements support this convergence, and the point where the rays meet is known as the focal point.
Aberrations
The failure of rays to converge at one focus because of a defect in a lens is known as an aberration.
There are two types of aberration that occur:
- Spherical Aberration
- Chromatic Aberration
Spherical Aberration
When light passes through a convex lens, the rays passing through the centre (and close to it) converge at one point, while the rays passing through the edges of the lens converge earlier, at a different point. Because the rays no longer meet at a single point, the light spreads over a small region, and this spreading of light is called spherical aberration.
Chromatic Aberration
Chromatic aberration occurs because different wavelengths of light refract by different amounts: rays with longer wavelengths converge farther from the lens than those with shorter wavelengths, so the colours spread apart instead of meeting at one point.
What is inside a camera setup?
- Lens assembly: A combination of various lenses (arranged to minimize aberrations) in the camera.
- Motor: The purpose of the motor is to change the distance between the lens and the image sensor so that the focus of the setup can be changed.
- Image Sensor: Records the light falling on it and converts the signal into digital values. Modern cameras mostly use a CMOS (complementary metal–oxide–semiconductor) sensor.
We have discussed lenses in the previous sections and will discuss others later in the article.
Image Sensor: CMOS (Complementary Metal–Oxide–Semiconductor)
An image sensor is a device that allows the camera to convert photons into electrical signals that can be interpreted by the device. Most digital cameras have a standard configuration. A 2D Bayer arrangement of RGB colour filters sits atop a pixel array. Each pixel has a photodetector and absorbs filtered light for one of the primary colours.
After light strikes the photodiodes in the image sensor, the image is converted into electrical signals and passed through several internal components (a serial shift register, capacitor, and amplifier). Finally, analog-to-digital conversion takes place, transforming the voltage signals into binary values that can be processed and stored.
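As a rough sketch of that last step, a hypothetical 10-bit converter maps each pixel's voltage to an integer code; the reference voltage and bit depth below are assumptions, not values from any particular sensor:

```python
import numpy as np

def adc_quantize(voltages, v_ref=1.0, bits=10):
    """Map analog pixel voltages in [0, v_ref] to integer codes in [0, 2**bits - 1]."""
    levels = 2 ** bits - 1
    codes = np.clip(voltages / v_ref, 0.0, 1.0) * levels
    return codes.round().astype(np.uint16)

# Example: three pixel voltages read off the sensor
print(adc_quantize(np.array([0.12, 0.5, 0.93])))  # -> [123 512 951]
```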
Although there are many kinds of image sensors, CMOS sensors have captured the imagination of today’s vision system engineers. CMOS — Complementary Metal Oxide Semiconductor — allows the data of each pixel to be read individually. This provides the most sophisticated and granular level of control over the image. These image sensors can perform well at very small sizes, so they appear in camcorders, smartphones, and many other portable applications.
What is a Bayer Filter?
As mentioned in the previous section, a 2D Bayer array sits in the image sensor, so the questions that come to mind are: what is this Bayer filter, and why are we using it? We will try to answer these questions in this section.
The Bayer filter was invented in 1974 by Bryce Bayer. You may sometimes hear Bayer's arrangement of microfilters referred to as RGGB, because the arrangement uses two green filter elements (GG) for each red (R) and blue (B) filter element. The pattern repeats over 2x2 blocks of pixels, with each microfilter covering a single pixel. For example, the repeating block can be laid out as in the sketch below.
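This is a minimal illustration, assuming the common RGGB ordering with the top-left pixel under a red filter; the exact layout varies between sensors:

```python
import numpy as np

# One 4x4 corner of a Bayer mosaic: each pixel sits under exactly one colour filter.
# 'R' = red, 'G' = green, 'B' = blue; the 2x2 block [R G / G B] repeats across the sensor.
bayer_pattern = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])
print(bayer_pattern)
```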
Because the human retina is more sensitive to green light in daylight, the Bayer filter uses two green elements to mimic our visual perception.
Bayer Demosaicing
Light of all three primary colours falls on each pixel, but the filter lets only one colour through, so a single pixel cannot output complete colour information on its own. Thus, to move from the Bayer pattern image to a full-colour image, camera firmware and software use various algorithms to estimate the full-colour value of each pixel. This process is called demosaicing.
For example, we can take a 2x2 block of pixels from the raw input in Bayer mosaic format.
Now, for the first pixel, the value of (R, G, B) = (R, (G0 + G1)/2, B).
A similar policy can be followed for the next pixel, and so on.
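A minimal sketch of this averaging idea for a single RGGB block (real demosaicing algorithms, such as bilinear or edge-aware interpolation, also borrow samples from neighbouring blocks; the raw values below are made up):

```python
import numpy as np

# Raw values of one 2x2 RGGB block: only one colour sample exists per pixel.
raw = np.array([[200,  90],    # R   G0
                [ 85,  60]])   # G1  B

R, G0 = raw[0, 0], raw[0, 1]
G1, B = raw[1, 0], raw[1, 1]

# Simplest possible reconstruction: reuse R and B, average the two greens.
rgb_estimate = (R, (G0 + G1) / 2, B)
print(rgb_estimate)  # (200, 87.5, 60)
```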
Now we have a brief idea of what is inside a camera, but there is one more thing we need to know: the factors that affect the images a camera takes. Let's dive into it.
Controllable Components of a Camera
Now the features that we can control are the following:
- Focal length — It determines which object is in focus and which is not.
- Exposure — It is the amount of light that reaches the camera’s sensor, creating visual data over a period of time. It has 3 components:
- Aperture — Controls the amount of light that passes through the lens.
- Shutter Speed — It is the amount of time for which the light falls on the sensor.
- ISO — Sensor’s sensitivity towards light.
Focal Length
In the figure, we can see that light from the tree passing through the lens converges on the sensor. If the lens is not at the correct position, the image comes out blurred and out of focus, which is why the lens must be set at the correct position; in a camera the lens can be moved while the sensor stays fixed.
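One way to see why the lens must move is the standard thin-lens equation, 1/f = 1/d_o + 1/d_i (not discussed further in this article), which gives the lens-to-sensor distance d_i needed to focus an object at distance d_o. A small sketch, assuming an idealized single thin lens:

```python
def lens_to_sensor_distance(focal_length_mm, object_distance_mm):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i  ->  d_i = 1 / (1/f - 1/d_o)."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 4 mm lens focusing on an object 500 mm away needs to sit ~4.03 mm from the sensor,
# slightly farther than its focal length, so the motor nudges it forward.
print(lens_to_sensor_distance(4.0, 500.0))
```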
Exposure
A photograph’s exposure determines how light or dark an image will appear when it’s been captured by our camera. Achieving the correct exposure is a lot like collecting rain in a bucket. While the rate of rainfall is uncontrollable, three factors remain under our control: the bucket’s width, the duration we leave it in the rain, and the quantity of rain we want to collect. We just need to ensure we don’t collect too little (“underexposed”), but that we also don’t collect too much (“overexposed”). The key is that there are many different combinations of width, time, and quantity that will achieve this. For example, for the same quantity of water, we can get away with less time in the rain if we pick a really wide bucket. Alternatively, for the same duration left in the rain, a narrow bucket can be used as long as we plan on getting by with less water.
In photography, the exposure settings of aperture, shutter speed, and ISO speed are analogous to the width, time, and quantity discussed above. Furthermore, just as the rate of rainfall was beyond our control above, so too is natural light.
The Exposure Triangle: Aperture, Shutter Speed, ISO
Each setting controls exposure differently:
Aperture: A camera’s aperture setting controls the area over which light can pass through your camera lens. It is specified in terms of an f-stop value, which can at times be counterintuitive because the area of the opening increases as the f-stop decreases. In photographer slang, when someone says they are “stopping down” or “opening up” their lens, they are referring to increasing and decreasing the f-stop value, respectively.
Shutter speed: A camera’s shutter determines when the camera sensor will be open or closed to incoming light from the camera lens. The shutter speed specifically refers to how long this light is permitted to enter the camera. “Shutter speed” and “exposure time” refer to the same concept, where a faster shutter speed means a shorter exposure time.
ISO: The ISO speed determines the camera’s sensitivity to incoming light. Similar to shutter speed, it also correlates 1:1 with how much the exposure increases or decreases. However, unlike aperture and shutter speed, a lower ISO speed is almost always desirable, since a higher ISO increases noise. As a result, ISO speed is usually only increased from its minimum value if the desired aperture and shutter speed aren’t otherwise obtainable.
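A helpful way to see how these settings trade off against each other is the exposure value, EV = log2(N² / t) at a fixed ISO, where N is the f-number and t the shutter time; this formula is standard photography arithmetic rather than something specific to this article. The sketch below shows two settings that gather roughly the same light:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Exposure value at base ISO: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Two settings with almost identical EV: one stop smaller aperture
# is compensated by a one-stop longer shutter time.
print(exposure_value(2.0, 1 / 100))  # f/2,   1/100 s -> ~8.64
print(exposure_value(2.8, 1 / 50))   # f/2.8, 1/50 s  -> ~8.61
```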
In smartphones, the camera modules are pretty small, so the sensor is considerably smaller than in a DSLR. The aperture is also fixed in most smartphone cameras, so to control exposure on a smartphone we have only two components: shutter speed and ISO.
How does a camera control the controllable components?
Now the question arises: how does a camera work in auto mode? For example, we point our camera and it focuses on the object correctly most of the time and also adjusts the exposure by itself.
For that purpose, a camera uses three things:
- Autofocus
- Auto Exposure
- Auto White Balance
Now let’s dive into these terms and how these are implemented.
Autofocus
The GIF above shows what autofocus means: the camera was focusing on the guitar, but as soon as the Android logo toy enters the scene, the camera shifts focus to the toy. Making the camera focus on the desired object automatically is autofocus.
There are different algorithms used for implementing autofocus such as Laser autofocus, Phase detection autofocus(PDAF), and Contrast detection autofocus(CDAF). In most entry-level smartphones, CDAF is used.
Contrast Detection Autofocus(CDAF)
In CDAF, the camera system moves the lens and, at each lens position, calculates the contrast of the image in real time. It keeps adjusting until the contrast reaches its maximum; at that point, the object to be focused on is in focus.
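As a minimal sketch of that loop (not a vendor implementation): score each frame by its contrast and keep the lens position that maximises it. The `capture_at` callback below is hypothetical and stands in for the camera driver:

```python
import numpy as np

def contrast(gray_frame):
    """Simple contrast/sharpness score: variance of the pixel intensities."""
    return float(np.var(gray_frame))

def contrast_detect_autofocus(capture_at, lens_positions):
    """Sweep the lens, score each captured frame, and return the sharpest position."""
    best_pos, best_score = None, -1.0
    for pos in lens_positions:
        frame = capture_at(pos)      # hypothetical: grab a greyscale frame at this lens position
        score = contrast(frame)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```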
Auto Exposure
One of the methods the system uses to get the auto exposure right is to convert the image from RGB to greyscale, plot a histogram of the pixel intensities, and pick the ISO and shutter speed for which the histogram is well balanced, neither piled up at the dark end nor clipped at the bright end. All these calculations and operations are done in real time.
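A minimal sketch of that idea, assuming 8-bit frames and a mid-grey target of 118 (both assumptions, not values from any specific camera): build the greyscale histogram, find its mean level, and return how much brighter or darker the next frame should be made via shutter speed and ISO:

```python
import numpy as np

def exposure_adjustment(rgb_frame, target_mean=118):
    """Return a multiplicative exposure gain: >1 means brighten, <1 means darken."""
    gray = rgb_frame.mean(axis=2)                              # crude RGB -> greyscale
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))     # intensity histogram
    mean_level = (hist * np.arange(256)).sum() / hist.sum()    # where the histogram is centred
    return target_mean / max(mean_level, 1e-6)
```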
Auto white balance
Before going into auto white balance let’s first understand what is white balance. White balance (WB) is the process of removing unrealistic colour casts so that objects which appear white in person are rendered white in your photo. Proper camera white balance has to take into account the “colour temperature” of a light source, which refers to the relative warmth or coolness of white light. Auto white balance sets the correct tone of the image automatically.
One algorithm for auto white balance is to search the scene for a white or light-coloured object and then set the colour tone of the image according to the colour of that object.
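A minimal sketch of that "white patch" approach, with all thresholds illustrative: treat the brightest pixels as the white reference and scale the red and blue channels so that reference becomes neutral. Real pipelines also estimate colour temperature and handle scenes with no white object at all:

```python
import numpy as np

def white_patch_balance(rgb_frame):
    """Scale R and B so the brightest region becomes neutral (equal R, G, B)."""
    img = rgb_frame.astype(np.float64)
    # Take the 99th-percentile value of each channel as the 'white' reference.
    white = np.percentile(img.reshape(-1, 3), 99, axis=0)
    gains = white[1] / np.maximum(white, 1e-6)   # normalise against the green channel
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```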
White balancing techniques are implemented by smartphone vendors for their specific camera modules, which is why the overall image colour differs from one camera to another.
Conclusion
There are many more things to explore in a camera, but we have tried to summarize the main components and algorithms of a camera module here. Each component of a camera is a research field in itself, and smartphone cameras are even more sophisticated modules with smaller parts. If you want to read more about the topics mentioned above, kindly see the references, which list the sites from which we gathered our knowledge.