EMU-ML || An Introduction: Matching Digital to Film with Neural Networks

Errick Jackson
5 min read · May 4, 2023


I want to chronicle a project of mine that’s currently in development.

Background

I’ve been frustrated with the mythology surrounding photographic film. Particularly among still photographers, film is hailed as a near-magical, inimitable format. While I concede that the photochemical process is one whose end result is difficult to pin down, the behavior of different film stocks is by no means magic. Film is a well-documented science, backed by a century of study by incredible color scientists. If chemical processes could not be understood, simulated, and repeated, our modern world would simply not function.

However, emulation is not a trivial matter. How does one map the comparatively static rendering of digital capture onto the “organic” rendering of photographic film? I am of the opinion that this is a question of data. I’m quite convinced that nearly any process can be accurately understood and modeled if enough data is provided. And indeed, I’ve seen many attempts at the process, and a handful of results I would consider successful. I’ll be the first to admit that I don’t yet have the knowledge to engage such a process by hand. However, we live in an age where that isn’t necessarily the only way to go about it; where we can get finely tuned, nuanced, and exact results if we approach the problem with the right tools.

Process

So, about two months ago, I set out to design a programmatic method to automate the modeling process. The goal: I provide the program with a mass of reference color data, shot on both digital and film, and let it find the correlative function to connect the two formats and match their colorimetry.

This turns the problem into a data science question. I am not a data scientist.

So the first month was spent learning about statistical regression models and relearning Python. The basic process I had in mind was for the program to ingest the images, read the color data, and compile it into lists of inputs and outputs. From there, I would use whichever regression model worked best to devise a correlative fit between the two data sets. The basic structure of the Python script was pretty simple; the regression model, however, was the part I spent weeks iterating on.
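
To make that concrete, here is a minimal sketch of the ingestion step. The file names, the 16-bit normalization, and the 4x6 patch grid are illustrative assumptions, not my actual charts or pipeline:

```python
# A sketch of the ingestion step: read paired chart scans and average
# each patch region down to one RGB triplet. Patch geometry is hypothetical.
import numpy as np
import imageio.v3 as iio

def patch_means(image, rows=4, cols=6):
    """Average each chart patch region down to a single RGB triplet."""
    h, w, _ = image.shape
    means = []
    for r in range(rows):
        for c in range(cols):
            tile = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            means.append(tile.reshape(-1, 3).mean(axis=0))
    return np.array(means)

# Assumes 16-bit TIFF scans; normalize to the 0-1 range.
digital = iio.imread("chart_digital.tiff").astype(np.float64) / 65535.0
film = iio.imread("chart_film.tiff").astype(np.float64) / 65535.0

X = patch_means(digital)  # inputs: digital patch colors, shape (24, 3)
y = patch_means(film)     # outputs: film patch colors, shape (24, 3)
```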

I started with the simplest: linear regression, which effectively fits a 3x3 matrix to describe the change. This was a decent start but missed a ton of the nuances of the photochemical rendering. So I went to the immediate non-linear option: polynomial regression. I tested varying degrees, but it would have taken a lot of testing to find the optimal degree for the function, and even then I would have been hitting lower accuracy than I was looking for. I tested other, more nuanced regression models, some with overfitting protection built in, but the process of deciding which worked best for my test dataset was not compatible with my impatience. I needed something that could figure out which model, if any of them, was the best option, with little input from me.
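
For illustration, here is roughly what those two baselines look like in scikit-learn, reusing the X and y patch arrays from the ingestion sketch above:

```python
# Two baseline fits from digital color to film color.
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Linear fit: effectively a 3x3 matrix (plus an offset) on RGB.
linear = LinearRegression().fit(X, y)
print("linear R^2:", linear.score(X, y))

# Polynomial fits: more expressive, but the right degree is trial and error.
for degree in (2, 3, 4):
    poly = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    poly.fit(X, y)
    print(f"degree {degree} R^2:", poly.score(X, y))
```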

As you can likely guess, I quickly approached the door of machine learning. I started reading and listening to lessons, starting from the basic concepts and working all the way up to the more intensive approaches. Ultimately, I landed at one of those intensive approaches: convolutional neural networks.

Data scientists will rightfully tell you neural networks are a silver bullet that shouldn’t be treated as a silver bullet. Just because a neural network can solve practically any data correlation problem doesn’t mean you should use it for everything, especially considering the potential computational cost. With all due respect to those data scientists, my impatience requires the silver bullet. Truthfully, the amount of data I’m working with is nothing in comparison to what is considered computationally expensive in the world of neural networks.

The benefit, however, was almost immediately apparent. After fiddling with the hyperparameters and activation functions, assessing the diminishing returns of multiple dense layers, and finding the appropriate color model for representing the data being fed into the network, I got an approach that yielded a virtually perfect result. And when I say perfect, I don’t mean close, almost there, “good if you just don’t touch these settings”; I mean 1:1, 99.9999% accuracy to the stock’s colorimetry as described by the dataset. To make sure this wasn’t overfitting to that specific test dataset, I created datasets of varying sizes and characteristics. All of them matched perfectly. This meant the modeling approach was indifferent to the file format, metadata, bit depth, EOTF, or color space. It would simply sniff out the correlative model through sheer force of compute. This breakthrough also means that the only limit to how broadly and precisely the network can describe the film stock’s gamut and gamma response is the amount of data I can provide it.
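
For the curious, below is a minimal sketch of this kind of network in Keras. Since the layers described above are dense, the sketch uses a small fully connected model; the layer widths, activation, and epoch count are placeholder assumptions, not the hyperparameters I actually landed on:

```python
# A small fully connected network mapping one color triplet to another.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),                   # one color triplet in
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3),                     # one color triplet out
])
model.compile(optimizer="adam", loss="mse")

# X: digital patch colors, y: the matching film patch colors.
model.fit(X, y, epochs=2000, batch_size=32, verbose=0)
print("final MSE:", model.evaluate(X, y, verbose=0))
```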

The Profiling Journey

Going forward, I will be profiling various film stocks (particularly C-41-process stocks, to start) to map the entirety of the gamut and gamma response and match it to digital capture. This means capturing not only multiple exposure levels but also many different illuminants. By my calculation, this will result in nearly 20,000 points of data per capture format, mapping from the darkest to the lightest tonalities and out to the very edges of the stock’s expressible gamut. This is over 10 times the amount of data the network was tested on, but I have no doubt it will result in the most exact expression of the stocks’ nuances possible. Because of how I’m engaging the shooting and development process, the end result will allow me to create transformation LUTs suitable for grading environments (Rec. 709, HDR-compliant, and color-managed approaches) as well as profiles for use in common photo editing software, such as Lightroom and Photoshop.
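
As a sketch of that last step, here is one way to bake a trained model into a 3D LUT in the common .cube text format. The 33-point lattice and the output file name are illustrative, and `model` is the trained network from the sketch above:

```python
# Sample an identity RGB lattice, push it through the model, and write
# a .cube file (red index varies fastest, per the usual convention).
import numpy as np

lut_size = 33
grid = np.linspace(0.0, 1.0, lut_size)
b, g, r = np.meshgrid(grid, grid, grid, indexing="ij")
lattice = np.stack([r, g, b], axis=-1).reshape(-1, 3)

mapped = model.predict(lattice, verbose=0)

with open("film_emulation.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {lut_size}\n")
    for rgb in np.clip(mapped, 0.0, 1.0):
        f.write("{:.6f} {:.6f} {:.6f}\n".format(*rgb))
```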

In my opinion, the biggest benefit of this project will be the archival aspect. With the amount of data being captured, it will be a very robust way of archiving the characteristics of each individual film stock. Discontinued stocks, such as the venerable Fujifilm Pro 400H, can be preserved with high accuracy, so that the incredible work these color scientists have done is not lost to time.

Lastly, it is key to note that while I am very confident in the accuracy I can achieve with this process, film development itself is not a process with a singularly defined output. Differences in development, scanning, display transformation, or even the film itself can all change how the film expresses itself. I will be aiming to control for as much of this as possible, tightening the tolerances of my data capture process and working closely with my lab of choice. But if there is a bit of magic with film, it would be in what I call the “celluloid lottery.” The goal of this project is to account for the repeatable aspects of these film stocks; the aspects that are common enough to be expected when you shoot them. And thanks to this process, the rest of that random variation should be massaged out in the fitting.

Note also that this is just a model of colorimetric behavior, not physical properties such as grain structure or halation. I may engage these at some point down the road, but the color aspect is the more widely applicable facet to hone in on, so that’s where I’m starting.

I’ve made some bold claims very confidently, but I’m equally confident in what will result from this project. I think it will provide a lot of value to those looking for a digital solution in an increasingly expensive Kodak world.
