How We Ensure That NoctaCam Takes the Best Photos

Kartick Vaddadi · Published in NoctaCam · Apr 23, 2018

We want NoctaCam to be the best low-light photography app. It should produce the best photos and videos at night, under a variety of situations. To that end, we have put in place a rigorous testing process.

To begin with, we test varied scenes, like a nightscape, an indoor scene, astrophotography, a really dark scene that pushes ISO to the max, a high contrast scene, macro, a person, moving objects like trains, and so on.

We test each feature against multiple of these scenes. That is, video should work great both indoors and for a nightscape, and likewise for long exposure.
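To make this concrete, a feature-by-scene test matrix can be generated mechanically so that no combination is skipped. This is an illustrative sketch, not our actual tooling, and the feature and scene names below are examples rather than our internal test plan:

```python
# Hypothetical feature x scene test matrix (illustrative names, not
# NoctaCam's internal test plan).
from itertools import product

features = ["video", "long exposure", "fill light"]
scenes = ["indoor", "nightscape", "astrophotography"]

# Pair every feature with every scene, so nothing is skipped.
test_cases = list(product(features, scenes))
print(len(test_cases))  # 3 features x 3 scenes = 9 shots to capture and review
```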

In addition to testing each feature of NoctaCam individually, we also test them in combination, like long exposure with fill light.

For examination, we use a 27-inch 5K iMac with brightness set to maximum and the curtains drawn, so that we can see better. We open two photos side by side, or in two tabs, and rapidly switch between them, which highlights differences. We generally take the photos on a tripod, so that other aspects like framing don't change. Only the thing we're comparing changes, which ensures we're not misled by unintended variations and that we can observe even minute differences in quality.

We sometimes do blind tests, so that our knowledge of how each algorithm was tuned doesn’t bias us.
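A blind comparison can be set up with a tiny script. The helper below is a hypothetical sketch, not our actual workflow: it hides which app produced each photo behind anonymous codes, and the key is revealed only after scoring.

```python
# Hypothetical blind-test helper (a sketch, not NoctaCam's real tooling):
# photos are relabeled with anonymous codes so evaluators can't tell which
# app produced which photo until scoring is done.
import random

def anonymize(labels, seed=None):
    """Return (anonymous codes, key mapping code -> real label)."""
    rng = random.Random(seed)
    shuffled = list(labels)
    rng.shuffle(shuffled)
    key = {f"photo_{i}": label for i, label in enumerate(shuffled, start=1)}
    return sorted(key), key

codes, key = anonymize(["NoctaCam", "NightCap", "Slow Shutter Cam"])
# Evaluators score "photo_1", "photo_2", ... without ever seeing the key.
```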

We take photos on different phones — iPhone X, 7 Plus, 6 and 5s — to ensure that NoctaCam performs well on all of them, not just the one we happened to test on that day.

We compare our photos with those of the iPhone camera app. We’ve also identified a few key apps for every feature. For example, we compare our long exposures with Avg Cam Pro, Slow Shutter Cam and NightCap in addition to the iPhone camera app. Our goal is to be better than, or at least as good as, the best of our competitors. Better than some and worse than some isn’t good enough.

If a photo is really dark, we sometimes brighten it, which increases noise and so makes for a more demanding test. Our photos should hold up to editing, not just look great straight out of the camera.
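To see why brightening is a harsher test, consider a toy example (illustrative pixel values, not our actual processing pipeline): scaling up a dark image scales its noise by the same factor, so any residual noise becomes far more visible.

```python
# Toy illustration (not NoctaCam's actual processing): brightening multiplies
# every pixel value, so sensor noise is amplified by the same gain.
def brighten(pixels, gain):
    """Scale 8-bit pixel values by `gain`, clipping to the 0-255 range."""
    return [min(255, round(p * gain)) for p in pixels]

clean = [10, 10, 10, 10]      # an ideally flat dark patch
noisy = [10, 15, 8, 13]       # the same patch with sensor noise
print(brighten(noisy, 4.0))   # -> [40, 60, 32, 52]: +/-5 deviations become +/-20
```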

Sometimes, we have different people capture photos, applying their own style and choice of subjects.

No matter who captured it, we have multiple people evaluate each photo, each bringing their own point of view, so that my (the founder's) biases aren't given undue weight. We sometimes write our conclusions down independently and only then compare notes, so that one person isn't biased by another's judgment.

When we compare photos, we look at technical parameters like focus, noise, contrast, brightness and clarity. We also look at aesthetic ones, like whether the photo preserves the feel of the scene. And we try to put on the photographer's hat rather than the developer's, and ask whether we, as keen photographers ourselves, would be happy with this photo.
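Simple numeric proxies can back up the eye on those technical parameters. The functions below use standard textbook measures (mean brightness, RMS contrast, and neighbouring-pixel differences on a flat patch as a noise proxy), sketched here for illustration rather than as NoctaCam's actual code:

```python
# Coarse, hypothetical image metrics for grayscale photos of the same scene.
# Real evaluation is visual; these serve only as numeric sanity checks.
import math

def mean_brightness(pixels):
    """Average pixel value of a grayscale image."""
    return sum(pixels) / len(pixels)

def rms_contrast(pixels):
    """Root-mean-square deviation of pixel values from the mean brightness."""
    mu = mean_brightness(pixels)
    return math.sqrt(sum((p - mu) ** 2 for p in pixels) / len(pixels))

def noise_estimate(flat_patch):
    """Mean absolute difference between neighbouring pixels of a patch that
    should be uniform: on a flat surface, any variation is noise."""
    diffs = [abs(a - b) for a, b in zip(flat_patch, flat_patch[1:])]
    return sum(diffs) / len(diffs)
```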

Experiments are sometimes inconclusive. When that happens, we try to identify what unknown variable is changing, eliminate that, and test again. For example, we all agreed that a long exposure in another app was better than NoctaCam. When we looked closer, it turned out that a light was turned on in a building, changing the dynamics of the scene, so we redid the experiment and found that NoctaCam wasn’t actually performing worse. In another case, some of NoctaCam’s photos came out misfocused, but when we repeated the experiment, other apps also misfocused once in a while, so we concluded that was a fluke event and not a flaw in NoctaCam.

Experiments are sometimes conclusive but unwelcome: they tell us that NoctaCam doesn't perform the best for a particular scene and type of photography, like a long exposure on my terrace. In a way, this is a good thing, because it's an opportunity to improve NoctaCam. We approach our experiments with the open mind of a scientist who wants to find the truth, not prove a favored hypothesis. We try to identify why NoctaCam didn't perform the best, improve our code, and redo the test. By doing this, we've always been able to make NoctaCam match or beat all our competitors.

We set our bar high, building for a user with obsessively high standards. Imagine exhibiting your photos, printed ten feet wide, at a prestigious gallery. We want our quality bar to be higher than our users would ever notice, not just the minimum we can get away with.

We try different photography techniques like long exposure and light trails, and try using different algorithms, different APIs and different parameters to see what gives the best result.

We also go on photography trips, so that we're power users of NoctaCam ourselves and understand, as users, which parts of NoctaCam work well and which don't, whether in photo quality or UX, so that we can improve them.

You can see the level of rigour that goes into our scientific testing to ensure that NoctaCam always takes the best low-light photos. It might slow down progress, but we'd rather launch a great feature a little later than a mediocre one sooner. Such exhaustive testing is also strenuous for our team, but we always work hard to do what's right for you, our users.

If this is the level of care you want behind the products you use, download NoctaCam.
