The language of image adjustments

In this article, we would like to tell you more about our work on Avalanche, and more specifically how we handle the migration of adjustments from one app to another.

Machine Learning to the rescue

One of the promises of Avalanche is to preserve the visual aspect of migrated images by applying clever algorithms that derive the adjustments in Lightroom (for example) from the adjustments in Aperture.

Because every app has its own RAW engine, and its own way to name and dimension the settings, it is not possible to simply take the value of a specific setting in Aperture — say, a WB temperature — and apply it in Lightroom. The results would be totally off.

Let’s look at a simple example: to obtain the same visual aspect in Aperture and Lightroom, the values that must be applied to the WB temperature and exposure in each app are completely unrelated.
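As a hypothetical illustration (the numbers below are invented, not measured values from either app), a naive migration that simply copies a setting’s raw value from one RAW engine to the other does not land on the values that actually match visually:

```python
# Hypothetical adjustment values that produce the *same* visual result
# in two different RAW engines (invented numbers, for illustration only).
aperture = {"wb_temperature": 5200, "exposure": 0.35}
lightroom_matched = {"wb_temperature": 4830, "exposure": 0.62}

# A naive migration would just copy the values across...
naive = dict(aperture)

# ...but the copied values differ from the ones that visually match.
for key in naive:
    print(key, "naive:", naive[key], "matched:", lightroom_matched[key])
```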

Here is another example of how migration can be tricky. Let’s look at the adjustment panels in popular apps:

  • In Aperture you’ll find adjustments called: Exposure, Recovery, Black Point, Brightness, Definition, Highlights, Shadows, Mid Contrast.
  • In Lightroom, they are called: Exposure, Blacks, Whites, … There is no Brightness and no Midtones, but there is a Texture adjustment.
  • In Capture One, they are called: Exposure, Highlights, Shadows, … There is a Brightness adjustment but no Midtones.
  • In DxO, the list differs even more, due to the specific capabilities of the software.

Sometimes an adjustment doesn’t even exist in the target application. One example is the Skin Tone white balance setting in Aperture. How do you transfer the settings of an image that uses this specific white balance parameter to Lightroom, where the concept does not exist?

Enter Machine Learning, or ML for short.

The idea behind ML is to learn, from a set of images that have been adjusted in both Aperture and Lightroom, the “functions” that map the set of Aperture parameters to the value of each Lightroom parameter, one by one. In mathematical terms, call the parameters of the source application (x1, x2, …, xn) and the target parameters (y1, y2, …, ym). We try to find the functions fi (i = 1…m) that let us compute hi = fi(x1, x2, …, xn) while minimizing the error between hi and yi across all our images. In other words, we try to find functions that predict the value of each parameter yi in the target app from the input parameters of the source app.
There is no need to find an explicit formula for these functions. ML gives us the ability to build a model that captures the information about them. We train the model by exposing it to a large set of input and output images that have been carefully prepared.
At the end of the training, the model is able to predict the results for images it has never seen.
The model is trained on a subset of the images (80% of them) and tested on the remaining 20%. The accuracy of the predictions is assessed, and when anomalies are detected we try to understand the cause, rework the data, or add more images if we think the dataset does not sample all the possibilities well enough.
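A minimal sketch of this setup is shown below. The actual Avalanche models are certainly more sophisticated; here we assume a plain linear least-squares fit on synthetic data, with invented dimensions, purely to illustrate the train/test workflow described above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data: n source parameters per image (e.g. Aperture's
# exposure, recovery, black point, ...) and m target parameters
# (e.g. Lightroom's exposure, blacks, whites, ...). Invented for illustration.
n_images, n_src, n_dst = 500, 8, 10
X = rng.normal(size=(n_images, n_src))      # source-app parameters
true_map = rng.normal(size=(n_src, n_dst))  # hidden "functions" f_i
Y = X @ true_map + 0.01 * rng.normal(size=(n_images, n_dst))

# 80/20 split: train on most images, hold the rest out for testing.
split = int(0.8 * n_images)
X_train, X_test = X[:split], X[split:]
Y_train, Y_test = Y[:split], Y[split:]

# Fit one linear predictor per target parameter (all at once via lstsq).
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Assess prediction accuracy on images the model has never seen.
H = X_test @ W                              # predictions h_i
rmse = np.sqrt(np.mean((H - Y_test) ** 2))
print(f"held-out RMSE: {rmse:.4f}")
```

A real pipeline would replace the linear fit with a nonlinear model, but the shape of the problem is the same: many source parameters in, one predicted value per target parameter out.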

The key to success is, of course, having as many images as possible, covering as many situations as possible. We also shift the settings on some images to explore variations.

In the case of Lightroom, the list of parameters we aim to predict is the following: white balance temperature and tint, exposure, contrast, highlights, shadows, whites, blacks, saturation, vibrance, and color toning.

We use different techniques for curves and for more complex parameters (noise, texture, vignetting).

How you can help train our models

From the above, it is obvious that gathering as much quality data as possible is the key to getting very good adjustment migration results.

Therefore, we would love your contribution if you feel like sending us a few pictures that have been carefully adjusted in both the source and the target application. In the case of the current version of Avalanche, this translates into:

  1. creating an Aperture catalog with some images and adjusting them the way you normally would. Images requiring dramatic adjustments are especially useful for sampling the edges of our parameter space.
  2. creating a Lightroom catalog with the same images and adjusting them to get ‘exactly’ the same visual aspect (color, exposure, toning, …) in both programs.
  3. sending us the catalogs using WeTransfer.

We will only use the images for our ML algorithm.

Don’t hesitate to contact us if you need more details.

Originally published at https://cyme.io on September 18, 2019.
