Smartphone long exposures: spot the difference!
Recently, I published a long exposure of a coastal scenery, shot with a smartphone, that got some attention. Now it’s time to reveal that the photo is a trick smartphones can do and show how you can do it… manually!
The two photos published here were taken at about the same spot, and some features in the background are there on purpose to show you that the place is the same. In fact, they were taken on different days but at about the same hour (a little after noon), with similar light conditions. So, one could say that they are long-exposure images taken at an hour that, usually, is not the best for trying the technique.
Both images were shot with a smartphone, which makes the technique even harder to achieve: there’s no control over aperture, so you have to shoot wide open, at something like f/1.8, which is a very wide aperture. The Xiaomi Redmi Note 10 Pro I am using now goes up to 1/4000 in terms of shutter speed, but even that is not enough to create a long exposure in the middle of the day, under the sun. Still, the first image is the result of a 7-second-long exposure, computed by the smartphone’s AI in the Long Exposure mode and using the “Oil painting” sub-mode.
The second image published here… I fully controlled it myself, and I believe it looks better, more natural than the photo resulting from computational photography. And while I understand that for the common mortal, the excitement of being able to do long exposures anywhere, with just a smartphone — and a suitable tripod or other solution to keep the device steady for 7 seconds or more — is there, I much prefer to do it the old way, because I can better control the results. Before I continue to explain how I made the second photo, using the classic techniques that photographers have used for decades, let me share with you some of the magic modern smartphones can do.
Computational photography does it for you
I recently wrote in one article that smartphones are where the real advancements in photography are happening, and this example shows just how true that is. Under the term “computational photography”, a series of techniques allows smartphone users to create HDR (High Dynamic Range) photos that would take hours to edit in a conventional camera-to-desktop-editor workflow, and to post them to their Instagram feed in less time than it took you to read this paragraph.
Smartphones do everything from blurring backgrounds as if you were using a long lens to automatically creating images with the right light and color that make normal people say “wow”. Yes, when you go pixel-peeping you might discover that the quality does not compare with that of files created with a DSLR or mirrorless model, but with one of those cameras you would never bother to go to the same lengths to create the photograph. So, the smartphone wins, hands down.
I’ve always believed that the best camera is the one you have with you. In another area which also interests me, Virtual Reality, I’ve long said that I don’t mind trading resolution for immersion. Once you try a VR headset for things like flight simulation, it becomes hard to go back to a flat screen, even if it has a higher resolution. With a VR headset you’re in the cockpit flying above a real world; with a flat screen you’re looking at an animated postcard…
The same logic applies to smartphone photography, although in a different way. Here I say that I am willing to trade resolution for emotion, meaning I would rather have a photo of a moment — and it may well be a snapshot — that may not have as many pixels and detail as one from a regular camera, but that freezes an instant in time that would otherwise be lost.
Long exposures created manually
So, if I have no other way to capture a long exposure, I may well use the computational photography my smartphone offers to create a photo that captures a moment. But given the chance, I will still resort to the old way of creating long exposures, because, despite how well some smartphones and apps manage to simulate the effect, nothing beats making your own long exposures.
Having looked at sets of images captured using the Long Exposure mode in my Xiaomi Redmi Note 10 Pro and at those taken the “old way”, I’ve no doubt that the computational photography images have a “plastic look” to them that makes me prefer the second set, which is now the base for all my long exposure photography.
So, while those who are just interested in results may opt for the quick way offered by AI and computational photography, if you want to use photography as a pastime, a way to explore the world from your point of view, and to understand how the camera and light work, then there is no other way to do it than… following the classic techniques for long exposure.
Just to make things clear: while the first image is the result of a 7-second-long exposure, allowing the AI to capture a series of images to create the flowing movement of the water, my manually defined exposure is just ½ second, which was more than enough to render the movement of the waves. I do prefer the results, but it’s not just that: in the same period of time, I can shoot a dozen images, capturing distinct phases of the water’s movement, which gives me more choice.
Also, when shooting under controlled conditions you can define when to start shooting, and keep shooting while watching the movement of the water around the rocks, as it draws lines on the sandy bottom as well. It’s a completely different experience, one from which you get more options in terms of final photographs and a sense of triumph, which is part of the fun. It’s like turning to your mom after a first ride on a bike and saying, “Look, Mom, I did it”.
The accessories you need
While some smartphones may limit what you can achieve in terms of manual control, most modern models do offer some form of it and, if exposure can be controlled to a value around 1/8 of a second, you’ll be able to create your own long exposure images without the intervention of Artificial Intelligence.
Now, you do need to get some accessories. Here is what you need to start: a smartphone, a tripod (a sturdy one), a Neutral Density filter with a clip-on adapter for your smartphone, and, something I believe is essential, a remote controller, so you don’t have to touch the smartphone once you’ve set all the parameters and adjusted the lens. The remote controller I am using can be bought from Amazon and is great. In fact, I bought a pair of remote controllers, so I have one in my photo kit bag and always have one in my vest pocket, so I can use it anywhere I need, when I am only carrying the smartphone.
The Neutral Density filters needed can also be acquired from Amazon. The best option, if you’re starting out, is to buy a Neewer ND Variable 2–400 filter, which gives you anything from 1 stop to 8 2/3 stops of light-reducing power. It’s enough to start experimenting. It comes with a clip-on adapter that you’ll need to use the filter with your smartphone. Be aware that while these adapters fit most Android models — and some iPhones — you should always check compatibility before buying. I’ve tried them with models from Xiaomi, Poco, Samsung and Huawei without problems.
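If you’re curious where that “1 stop to 8 2/3 stops” figure comes from, the math is simple: each stop halves the light, so an ND factor maps to stops via a base-2 logarithm, and the shutter time needed to compensate doubles per stop. Here’s a quick sketch in Python (the function names and the 1/250 s example are mine, just for illustration):

```python
import math

def nd_stops(factor: float) -> float:
    """Stops of light reduction for a given ND factor (e.g. 400 for ND400)."""
    return math.log2(factor)

def compensated_shutter(base_shutter: float, stops: float) -> float:
    """Shutter time (seconds) needed to keep the same exposure
    after losing `stops` stops of light to the filter."""
    return base_shutter * (2 ** stops)

# ND2 cuts exactly one stop; ND400 cuts about 8 2/3 stops.
print(round(nd_stops(2), 2))    # → 1.0
print(round(nd_stops(400), 2))  # → 8.64

# If the metered exposure without a filter were 1/250 s,
# the ND400 end of the filter stretches it to 400/250 s:
print(round(compensated_shutter(1/250, nd_stops(400)), 2))  # → 1.6
```

That 1.6 s is comfortably past the ½ second I used for the waves, which is why a variable filter like this covers the whole range you need in daylight.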
If you’re unsure about the technique and how to use the filter with your smartphone, I will have practical workshops available from March onwards, where a group of people can try these accessories — and others — during a tour around some of the places I like to explore with my new camera: a smartphone. Just check for the dates or, if you have a group of friends — minimum three for the “Initiation sessions” — get in touch.