The Right Amount of Bright

Evaluating Survey Data for Insights on Preferred Photo Brightness

Adam Edgerton
11 min read · Sep 1, 2017

I was recently inspired by a conversation with a friend, Josh Riggs, to think more about how I handle brightness (AKA exposure/luminosity/tone curves) when editing photos. I’ve increasingly come across photos that look too dark (particularly on my phone), only to realize that my screen’s auto-brightness was turned down. At 100% brightness, the depth of color, texture, and subtle contrast were obvious, as the photographer intended; but at first glance, without adjusting my device, detail was lost and I essentially saw an inferior version of the image.

This realization inspired me to put together a survey to answer a number of questions about preferred brightness, while attempting to identify any significant differences based on type of device, photography experience level, and other related variables. The survey, while not rigorous enough to be fully scientific, ultimately got more than 1,000 responses and yielded insights into general preferences that will hopefully inform your photo editing too!

Questions & Hypothesis

When building the survey, I hoped to answer a number of questions, and I went in with some assumptions that I stated up front to limit my own bias.

The Big Question
Should I bias my photo editing darker or brighter based on the common ways in which I share my photography (web & mobile apps)?

Hypothesis
Given that auto-brightness is commonly on by default on today’s devices, I suspect I should bias towards slightly more brightness than I might otherwise prefer on my full-brightness, highly calibrated editing monitor.

Other Questions To Answer Along the Way

  • What’s the average screen brightness and ambient light situation for desktop/laptop and mobile device users?
  • Is there a difference in preferred photo brightness based on device (particularly mobile versus desktop/laptop)?
  • Is there a difference in preferred brightness based on level of photography experience?

Methodology

When it came to building the survey, I wanted to show people variations of the same photo at different exposures. I chose three photos based on several criteria:

  • Between the three photos, I wanted one predominantly red, one predominantly green, and one predominantly blue on the RGB color spectrum.
  • I looked for photos from my catalog that I consider good but not great to avoid biasing ratings too high.
  • I picked photos with a fairly balanced histogram, which allowed for noticeably increasing or decreasing the exposure without significant clipping of whites or blacks on either end.

From there, I created three variations of each photo. The “mid” photo had the most center-balanced histogram I could get. The “light” photo had as much of an exposure increase as I could comfortably justify without clipping significant whites. The “dark” photo was the opposite, with exposure decreased until black clipping started losing important detail. For all three photos, this ended up being roughly -0.75 EV for dark and +0.75 EV for light.
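The exposure shift itself is simple to reason about: one EV is a doubling (or halving) of linear light, so a ±0.75 EV variant amounts to scaling linear pixel values by 2^±0.75. Here's a minimal NumPy sketch of that idea; it assumes 8-bit sRGB input and uses a simplified 2.2 gamma rather than the exact sRGB transfer curve, so it's an illustration of the math, not of any editor's actual pipeline:

```python
import numpy as np

def shift_exposure(srgb, ev):
    """Apply an EV shift to an 8-bit sRGB image array of shape (H, W, 3)."""
    # Decode to approximately linear light (simplified 2.2 gamma).
    linear = (srgb / 255.0) ** 2.2
    # One EV is a doubling of linear light, so scale by 2**ev.
    linear *= 2.0 ** ev
    # Clipping here is exactly the white/black clipping to watch for.
    return np.clip(linear, 0.0, 1.0) ** (1 / 2.2) * 255.0

img = np.full((2, 2, 3), 128.0)      # mid-gray test patch
dark = shift_exposure(img, -0.75)    # the "dark" variant
light = shift_exposure(img, +0.75)   # the "light" variant
```

A mid-gray patch comes out visibly darker or lighter, and pixels already near the ends of the histogram are the ones that clip first, which is why balanced histograms were a selection criterion.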

To avoid biasing responses towards the middle brightness option, I opted to create three surveys, with each one showing a single version of each photo, and a mix of dark/mid/light brightness across the different photos. The surveys were mixed as follows:

  • Survey 1: Photo 1 — Mid, Photo 2 — Dark, Photo 3 — Light
  • Survey 2: Photo 1 — Dark, Photo 2 — Light, Photo 3 — Mid
  • Survey 3: Photo 1 — Light, Photo 2 — Mid, Photo 3 — Dark

From there, I used Google Optimizer to run a redirection A/B test, a bit of a hacked-together way to point approximately one third of the survey traffic at each version. I relied on getting a large number of responses from varied sources to build an overall rating for each version of each photo. I intentionally left the question as a vague “please rate the above image” to keep people from overanalyzing the brightness specifically, and I changed only the exposure between photo versions so that, given statistical significance, the only thing driving rating differences would be brightness.

I added a fourth photo question where I showed all three versions of an image in order to get a sense for overall preference of light/mid/dark when I presented all options side-by-side, and also to analyze any differences in the ratings of the individual photos based on the person’s preference when presented with all three. Not surprisingly, when bracketed by light and dark options, the middle exposure was the overwhelming favorite.

Finally, I asked additional questions to segment users by photography experience level, device type, screen brightness, and ambient light.

Curious to see what this all looked like in practice? Here’s a link to a PDF of one of the surveys.

Results

First, some raw results:

1,034 total responses!

The experience level results reflect a mix of several hundred of my connections on Facebook, Instagram, LinkedIn, and other social networks, plus a strong response from Reddit’s photography community.

I was glad to see fairly balanced responses across major device types to allow for analysis based on mobile versus desktop, particularly.

I was somewhat surprised to see how evenly distributed screen brightness was…

… however, when you split out desktop/laptop from mobile devices, a clear trend emerges: mobile devices are more commonly at lower brightness (likely because auto-brightness is on), while desktop/laptop monitors, whose brightness is often set manually, tend to be brighter.

Overall Photo Ratings

Rating responses for each photo were given on a 1–7 scale (7 being high).
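All of the comparisons that follow boil down to averaging those 1–7 ratings per photo and exposure (and later per segment). A sketch of that aggregation with pandas, using a hypothetical long-format table and made-up rows purely to show the shape of the computation:

```python
import pandas as pd

# Hypothetical long-format responses: one row per rating.
# Column names are illustrative, not the survey's actual export schema.
responses = pd.DataFrame({
    "photo":    ["sunset", "sunset", "sunset", "lake", "lake", "lake"],
    "exposure": ["light",  "mid",    "dark",   "light", "mid", "dark"],
    "rating":   [3, 5, 4, 3, 4, 5],   # 1-7 scale, 7 high
})

# Mean rating for each photo/exposure pair, as a photo x exposure grid.
means = responses.groupby(["photo", "exposure"])["rating"].mean().unstack()
```

Adding a segment column (experience level, device type, contrast setting) to the `groupby` produces every breakdown used in the rest of this section.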

Overall, it seems that increased exposure (pushing the histogram to the right) is definitely NOT the way to go. I’m slightly surprised, given the popularity of the washed-out filters that are so common on Instagram these days. However, with a good number of the survey respondents being experienced photographers to some degree, I’m glad to see overexposure rank lower.

When it comes to mid exposure versus dark exposure, the results are murkier. The mid sunset got the highest overall rating by a bit, while the dark exposure brings out the deepest sunset colors at the expense of the foreground. For the other two photos, I personally think the dark exposure gives them a bit of a “burned” look, but the dark exposures slightly outscored the mids. Let’s slice and dice the results a bit more to see if there’s more to be learned.

Photo Ratings by Experience Level

Below are ratings for each photo when broken out by level of experience.

There are a few interesting insights here. First, there’s a very clear trend that, for the most part, more experience meant a lower average rating. That’s good; I’d hope that experienced photographers have a keener eye, and as I mentioned above, I picked what I’d consider good but rather ordinary photos. However, the “inexperienced” group apparently has a more critical eye than the phone camera users, who gave the highest ratings overall.

When it comes to comparing light/mid/dark, dissecting the responses by experience level doesn’t necessarily give clear answers. The pros preferred the mid exposure for two of the three images, but liked the light exposure best for photo 2. Meanwhile, the serious amateur photographers seem to prefer their images darker, with only the middle-exposure sunset getting their highest rating. The hobbyists again preferred the mid exposure for two of the three photos, but liked the dark exposure of the lake, just like the serious amateurs. I guess those two groups like their blues very saturated?

Meanwhile, the inexperienced photographers don’t seem to know what they like, preferring one light, one mid, and one dark. And the phone camera group not only rated everything the highest, but gave such similar ratings across light/mid/dark that it’s difficult to draw any conclusions.

So no big insights from experience level. Let’s try something else.

Photo Ratings by Device Type

Below are ratings for each photo when broken out by the most common device types.

I’ll cut to the chase — there’s no big “aha” moment that I identified from looking at the device type-based data. If you see any, please leave a response and share!

A couple interesting tidbits regardless:

  • Average ratings were consistently higher on mobile devices than on larger form factors. This likely reflects either a) the smaller screen making it harder to scrutinize a photo, or b) mobile viewers simply having a less discerning eye.
  • Mac users consistently liked the dark exposures most and the light exposures least. Being a Mac user myself, I guess that makes sense; Retina screens do a good job of conveying depth of color and bringing out detail in the darks that may look near-black on other monitors, and with the darker exposure comes deeper color saturation. This pattern almost held up with Apple mobile devices too.

Photo Ratings by Screen Brightness + Ambient Light

Finally, below are ratings for each photo when broken out by a combination of screen brightness and ambient light.

Before we get into analysis, it’s important to note the X-axis scale on the above charts. I merged the screen brightness and ambient light options into a scale running from the worst setting for viewing photography on a device (bright sunlight and low screen brightness, or “Low Contrast Setting” in the above charts) to the best (little to no ambient light and full screen brightness, or “High Contrast Setting” above).
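Merging the two questions like this just means scoring each answer ordinally and summing, so darker surroundings and a brighter screen both push respondents toward the high-contrast end. A sketch with hypothetical category labels (the survey's actual answer options may have differed):

```python
# Hypothetical ordinal scores for each answer option; higher = better
# viewing contrast. These labels are illustrative, not the survey's exact wording.
AMBIENT = {"bright sunlight": 0, "daylight": 1, "indoor": 2, "dim": 3, "dark": 4}
BRIGHTNESS = {"0-25%": 0, "25-50%": 1, "50-75%": 2, "75-100%": 3}

def contrast_score(ambient: str, brightness: str) -> int:
    """Combine ambient light and screen brightness into one contrast score."""
    return AMBIENT[ambient] + BRIGHTNESS[brightness]
```

The lowest score (bright sunlight, dimmest screen) is the “Low Contrast Setting” end of the charts, and the highest (dark room, full brightness) is the “High Contrast Setting” end.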

Of the above photos, the Eglinton Sunset seems to paint the clearest picture. The ratings with a low contrast setting are all similar, but as the contrast of the viewing setting increases, the light/mid/dark exposure ratings become more differentiated, with the mid being the clear winner. That makes sense, since in an environment where you have lots of ambient light and the screen brightness is low, you’re not going to be able to distinguish between exposures, while at the opposite end of the spectrum, the difference in exposure is much more obvious.

The Cloudy Tablelands photo is fairly inconclusive, while the Lake Wanaka photo reinforces the overall results that people preferred the dark exposure most, regardless of their light situation. There’s one other really important thing to note when it comes to these results…

Even with 1,000+ responses, the majority of respondents fall somewhere in the mid-range on their ambient light + screen brightness situation, resulting in a nice bell curve. So the lowest contrast and highest contrast ratings don’t have nearly the number of responses necessary to be statistically significant. When you throw out the upper and lower ends of the scale, the ratings for each photo closely reflect the overall ratings, and don’t provide the sort of clarity I’d hoped for.
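One way to make that caveat concrete is to compare a well-populated mid-range bin against a sparse tail bin with a two-sample test: with only a handful of tail responses, even a real-looking rating gap usually can't be distinguished from noise. A sketch using Welch's t-test on simulated ratings (the sample sizes and means here are invented for illustration, not the survey's actual numbers):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Simulated 1-7 ratings: a well-populated mid-range contrast bin
# versus a sparse tail bin, with a small made-up difference in means.
mid_bin = rng.normal(4.5, 1.2, size=300).clip(1, 7)   # ~300 respondents
tail_bin = rng.normal(4.8, 1.2, size=12).clip(1, 7)   # only ~12 respondents

# Welch's t-test (no equal-variance assumption).
t_stat, p_value = ttest_ind(mid_bin, tail_bin, equal_var=False)
# With so few tail responses, the standard error of the tail mean is large,
# so an apparent difference in the extreme bins rarely reaches significance.
```

This is why the low- and high-contrast ends of the bell curve can't support strong conclusions even in a 1,000+ response dataset.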

Conclusions

There are numerous other ways I could analyze this data that I haven’t reviewed thus far (and I will if your response highlights something I may have overlooked). But here’s what I’ve learned (or haven’t learned).

Revisiting Hypothesis/Questions

Hypothesis: Given that auto-brightness is commonly on by default on today’s devices, I suspect I should bias towards slightly more brightness than I might otherwise prefer on my full-brightness, highly calibrated editing monitor.

This looks to be false! While I didn’t get clarity on the mid versus dark exposures, biasing towards slight overexposure resulted in the lowest ratings across all the photos.

What’s the average screen brightness and ambient light situation for desktop/laptop and mobile device users?

Per the charts above, screen brightness across all respondents was pretty evenly distributed from low to high. When broken out by device type, laptop and desktop screens most commonly sit in the 75–100% brightness range, while mobile screens most commonly sit in the 25–50% range.

Is there a difference in preferred photo brightness based on device (particularly mobile versus desktop/laptop)?

It doesn’t appear that there is a clear distinction based on device type. However, mobile users rated all the photos higher overall than desktop/laptop users.

Is there a difference in preferred brightness based on level of photography experience?

If there is, it’s not reflected by linear ratings based on experience. The most experienced photographers liked the mid to light images best, while moderately experienced photographers liked the mid to dark images. The least experienced photographers were all over the place on ratings.

Other Takeaways

I’m curious to get into the results more as they relate to color tonality. The photo with significant blues was particularly interesting since respondents rated the dark exposure highest. It looks like people like their blues looking very rich, if not a bit burned? The Tablelands photo has the most greens, but it also has a very white/gray sky, which I suspect is where much of the rating differentiation came from. There’s a lot more contrast in the sky at the mid-dark exposures, without noticeably losing much detail in the foreground.

The sunset photo had the most expected results, and I wonder if that’s partly because its histogram has a slightly wider distribution than the other two images’. I was still able to shift the exposure for that image ±0.75 EV without noticeable clipping, but visually, pushing it lighter loses a lot of color, and pushing it darker loses a lot of hillside detail. That’s a good takeaway for sunset photography in general: it’s tempting to err on the dark side to bring out rich colors, but unless you’re blending exposures or have a camera with great dynamic range, don’t do so at the expense of losing foreground detail to the darkness.

Ultimately, these results won’t have a significant impact on my post-processing, but it’s very useful to know what sorts of screen brightness I’m dealing with, and doubly useful to have a reminder to avoid the trendy washed out photo look!

Have other insights or questions? Please share — I’m still hoping for an AHA! moment that hasn’t really come from my analysis so far.
