Generative Photography

Joy Hughes
8 min read · Sep 19, 2021


Generative digital art has become a phenomenon of the current era, with NFTs (non-fungible tokens on the blockchain) valued in the hundreds of millions of dollars. A market in photography NFTs has developed as well, with a similar market cap. But what about generative photography?

“Spotty”, a generative photograph I created using light painting

Generative photography is broadly defined as photographic images produced by following a procedure of some kind. The procedure often involves a computer program, but it doesn’t have to. Randomness can play a role, whether introduced by the program or by the setting. Generative photographs can be of any genre or style; for instance, I have written code to generate still life, environmental art, and light painting.

I have set out on a journey to explore how generative photography might be used to create objects of artistic interest and value. I wrote a computer program named Jen that generates a list of coordinates which I use to construct scenes made from objects found in the home and in nature.

The term “generative photography” was coined by photographer Gottfried Jäger in 1968, building on the idea of generative aesthetics from the philosopher Max Bense. Jäger’s work used a multiple pinhole camera he invented to produce precise, regular images on photographic paper. He described generative photography as a process: “finding a new world inside the camera and trying to bring it out with a methodical, analytical system.” Since that time generative photography has remained a fruitful field for experimentation.

The artist gives up the specifics of the scene to an algorithm; in a way, the algorithm itself is the work of art. The artist still chooses the elements to include in the scene, with all their potential real-world variation. The natural world can also contribute its own variability, potentially providing elements, lighting, weather conditions, and a scenic backdrop. Changing light and the potential for compositional interplay with the surroundings afford opportunities to capture multiple photographs of the generated scene. The real world may also intervene by moving elements, sometimes subtly and sometimes catastrophically.

My program Jen is, at the moment, fairly compact: about 1,000 lines of C code. Within Jen, potential elements of the scene can be named and selected with differing degrees of rarity and supply. Elements can be generated individually or in clusters, with size, position, and orientation in a two- or three-dimensional space. The camera’s position, orientation, and settings can be precisely specified, along with the lighting and background. Jen outputs a scene description that anyone should be able to use to create a very similar (but not identical) photograph.
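
Jen’s actual source isn’t included in this article, so here is only a minimal sketch of what “rarity and supply” could mean in code: a weighted random pick over a pool of elements. The element type, field names, and the vase pool are my own illustrations, not Jen’s real data structures.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Illustrative element record; not Jen's actual data structure. */
typedef struct {
    const char *name;
    float rarity;   /* relative selection weight; smaller values are rarer */
    int   supply;   /* how many of this element are on hand */
} element;

/* Weighted random choice among the elements still in supply. */
const element *pick_element(element *pool, int n) {
    float total = 0.0f;
    for (int i = 0; i < n; i++)
        if (pool[i].supply > 0)
            total += pool[i].rarity;
    if (total <= 0.0f)
        return NULL;                      /* everything is used up */

    float r = (float)rand() / ((float)RAND_MAX + 1.0f) * total;
    for (int i = 0; i < n; i++) {
        if (pool[i].supply <= 0)
            continue;
        if (r < pool[i].rarity) {
            pool[i].supply--;             /* consume one instance */
            return &pool[i];
        }
        r -= pool[i].rarity;
    }
    return NULL;
}

int main(void) {
    srand((unsigned)time(NULL));
    element vases[] = {
        { "mortar and pestle", 1.0f, 1 },
        { "blue glass vase",   2.0f, 1 },
        { "clay pot",          0.5f, 2 },
    };
    const element *pick = pick_element(vases, 3);
    printf("container: %s\n", pick ? pick->name : "(none left)");
    return 0;
}
```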

Still Life

Two still life scenes generated by Jen using household objects

My first experiment with Jen was to generate still life images using a vase (or similar container) and other items set nearby. I constructed two of the scenes it generated. While I had specified eight possible “vases” for the scene, both generated scenes selected the same container: the mortar and pestle. It’s a challenge to design an algorithm that generates aesthetically pleasing still life images, but the genre shows a lot of promise.

Environmental Art

The second experiment I’ve named “GenGen”, for Generative Genesis. Within the code I simulated a flow field, with repeated elements generated along flow lines, sometimes with a property such as size or color changing with each new element. The scenes are constructed outdoors using naturally occurring objects such as leaves, stones, or seashells, creating a collage or mosaic.
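
The GenGen code itself isn’t reproduced in this article, so the following is only a minimal sketch of the idea: walk one flow line through a toy swirl field, emitting a position, orientation, and size for each element, with the size shrinking at every step. The field, step length, and shrink factor are all my own placeholder choices.

```c
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Toy flow field: a gentle counterclockwise swirl around the origin. */
static void flow(float x, float y, float *dx, float *dy) {
    *dx = -y;
    *dy =  x;
}

int main(void) {
    float x = 10.0f, y = 0.0f;   /* starting point of one flow line */
    float size = 5.0f;           /* element size, shrinking along the run */

    for (int i = 0; i < 20; i++) {
        float dx, dy;
        flow(x, y, &dx, &dy);
        float angle = atan2f(dy, dx) * 180.0f / (float)M_PI;

        /* One line per element: the numbers to be measured out on site. */
        printf("element %2d: x=%6.1f  y=%6.1f  angle=%6.1f  size=%4.1f\n",
               i, x, y, angle, size);

        /* Step along the flow line, then shrink the next element. */
        float len = sqrtf(dx * dx + dy * dy);
        x += dx / len * size * 1.5f;
        y += dy / len * size * 1.5f;
        size *= 0.92f;
    }
    return 0;
}
```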

In nature it’s hard to impose order on things. Objects such as leaves won’t lie flat. The ground is more of a fractal than a plane. Nonetheless I’ll head out to a site with a protractor and a carpenter’s square (this being the good ol’ USA, it’s marked down to a sixteenth of an inch). I’ll measure the best I can, though some distortion is inevitable.

The first GenGen scene included runs of elements spiraling outward from the center while decreasing in size. I realized the construction on a west-facing beach in Obstruction Pass State Park on Orcas Island, Washington state. For my materials I used pebbles of graywacke (a dark, hardened form of sandstone) and eroded pieces of clamshell, and placed them on top of a driftwood log. The work was very meditative as I entered a flow state I call Gen-Zen. The light faded and became golden, while tiny waves made the beach pebbles gently whisper.

By sunset I had completed the first half of the piece, a triple spiral, perhaps my favorite stage of the work. I paused to photograph it, including some of the beautiful setting as a background. From this I learned that it can be rewarding to bring the natural or human world into photographs of a generative construction, and to take several photographs of the construction from different angles.

First Generative Genesis (“GenGen”) piece, completed with a single stone offset

I returned early the next morning to find the work wet with condensation, intact apart from one small stone that had been displaced. I elected to leave the displacement intact as an example of a “mutation” that can occur, either deliberately induced by Jen or through unknown processes in the real world. Only for the last few photos did I return it to its “proper” place.

I haven’t yet written any code to render a scene on the computer — Jen gives me a list of coordinates and angles which I use to place the elements. While I have a general idea, I don’t see what the final scene will look like until it is constructed. In this case, the final piece looks a bit like the biohazard symbol. While disturbing, perhaps this is appropriate, given the events of the past two years. This is an example of emergent meaning or pareidolia, where the algorithm generates a scene that evokes previously imprinted concepts in the viewer.

My first attempt to visualize a flow field suffered a glitch! The second time around I used texture instead of color.

The second GenGen realization involved a 10×10 array of elements that visualize a flow field through orientation and color. I chose leaves to represent these elements and searched for some time for a group of trees that could provide the required range of colors. The site I located was in Rasar State Park near Sedro-Woolley in Washington State. While there was some wind, I was able to correct the occasional dislocation; however, when the layout was nearly complete, a large gust blew several leaves about.

During the construction I had a couple of invertebrate visitors: a flower spider hitched a ride on one of the leaves I brought, and a slug casually crawled across the display. It took about two hours to construct the layout, roughly a minute per leaf. I noticed that no vectors were pointing to the left, which led me to discover a bug in the code I use to output angles. I had created my first “glitch art” generative photograph! A neighboring tree did its own edit by dropping an extra leaf that I did not notice until later.
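
The article doesn’t say what the bug was, so the following is purely my speculation about a classic mistake that produces exactly this symptom: computing atan(dy/dx) instead of atan2(dy, dx). Dividing dy by dx throws away the sign of dx, so every output angle lands in the right half of the circle and no vector can point left.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    float dx = -1.0f, dy = 0.5f;   /* a vector pointing up and to the left */

    /* Buggy: dy/dx loses the sign of dx, so the result is confined
       to (-90, 90) degrees and the vector flips to the right. */
    float bad  = atanf(dy / dx)  * 180.0f / 3.14159265f;

    /* Fixed: atan2 keeps both signs and covers the full circle. */
    float good = atan2f(dy, dx) * 180.0f / 3.14159265f;

    printf("atan:  %6.1f degrees\n", bad);    /* -26.6, wrongly rightward */
    printf("atan2: %6.1f degrees\n", good);   /* 153.4, correctly leftward */
    return 0;
}
```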

The choice of elements used to construct the scene always remains with the artist. After correcting the code, I returned to the vector field using fir cones instead of maple leaves. Texture, rather than color, serves as the angle cue here, which lets the viewer better see the shape of the flow.

Generated using a vector field, the Shuksan Serpent demonstrates the use of natural background in generative photography

To further explore the use of natural background, I visited Picture Lake, with its reflection of Mount Shuksan, one of Washington’s most iconic places for photography. Using a vector field, I generated a curve that traced across four squarish stones, growing wigglier and more numerically unstable to the point that it went back and forth, leading me to pile stones two or three high. I stayed there for many hours, watching the light change and taking photographs. In these moments the mind can become as still as the water.

A jaggie curve made from autumn leaves — with a lovely view of Mount Baker

The next form I call a “jaggie”, with randomly generated rough edges and sharp turns serving as a counterpoint to the smoother shapes. During autumn, colored leaves are available that can be blended along the curve. The setting was perfect as well, with the clouds opening to reveal Mount Baker at sunset.
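
The jaggie generator isn’t described in detail, so this is only a guess at its flavor: a walk whose heading mostly wobbles a little, with an occasional sharp random turn thrown in. Every parameter here (step length, turn probability, angle ranges) is my own invention.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
    srand(7);   /* fixed seed, so the same layout can be rebuilt on site */
    float x = 0.0f, y = 0.0f, heading = 0.0f;   /* heading in degrees */

    for (int i = 0; i < 15; i++) {
        printf("leaf %2d: x=%6.1f  y=%6.1f  heading=%6.1f\n",
               i, x, y, heading);

        if (rand() % 5 == 0)   /* occasional sharp turn: the "rough edge" */
            heading += (rand() % 2 ? 1.0f : -1.0f) * (60.0f + rand() % 60);
        else                   /* otherwise a small wobble */
            heading += (float)(rand() % 21 - 10);

        float rad = heading * 3.14159265f / 180.0f;
        x += cosf(rad) * 10.0f;   /* fixed 10 cm step between leaves */
        y += sinf(rad) * 10.0f;
    }
    return 0;
}
```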

Light Painting

I call this effort Project Newton, after Isaac Newton, who had great insight into the nature of light. The Newton code can extract lighting information from a scene and change its position, orientation, and color. Multiple images can then be combined, allowing the final product to be generated on the fly with Jen (perhaps at the moment an NFT is minted).

A raw image of a caustic, and a generative photograph combining four caustics

When light is refracted through glass or water, it creates a pattern known as a caustic, which can be quite complex. In the genesis image of Project Newton, I extracted four different caustics produced by a pickle jar filled with water and combined them into a single image.

Two images generated from the same patterns of light and shadow, using different methods

As the sun appears to move, it creates changing patterns of light and shadow that can be extracted and recombined. In the images above, the same patterns of light and shadow are recombined in two different ways: on the left, each light band has been given a different color; on the right, the bands are rotated before being recombined.
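
Project Newton’s internals aren’t shown here either, so here is a minimal sketch of the recombination step, assuming each extracted light band is stored as a grayscale intensity layer: tint each layer a color, then add the layers together, since overlapping lights on a surface simply sum. The band data and tint colors below are made up.

```c
#include <stdio.h>

#define W 4   /* toy image width; a real photograph is of course far larger */

typedef struct { float r, g, b; } rgb;

/* Tint a grayscale light layer and add it into the accumulator. */
static void add_layer(rgb *out, const float *light, rgb tint, int n) {
    for (int i = 0; i < n; i++) {
        out[i].r += light[i] * tint.r;
        out[i].g += light[i] * tint.g;
        out[i].b += light[i] * tint.b;
    }
}

int main(void) {
    /* Two extracted light bands, as grayscale intensities in 0..1. */
    float band1[W] = { 0.9f, 0.2f, 0.0f, 0.0f };
    float band2[W] = { 0.0f, 0.0f, 0.3f, 0.8f };

    rgb out[W] = { { 0.0f, 0.0f, 0.0f } };
    add_layer(out, band1, (rgb){ 1.0f, 0.3f, 0.0f }, W);   /* orange tint */
    add_layer(out, band2, (rgb){ 0.1f, 0.4f, 1.0f }, W);   /* blue tint   */

    for (int i = 0; i < W; i++)
        printf("pixel %d: r=%.2f g=%.2f b=%.2f\n",
               i, out[i].r, out[i].g, out[i].b);
    return 0;
}
```

In practice the summed values would be clamped or tone-mapped before display, since additive blending can exceed the displayable range.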

A New Artistic Movement

“Rainbow Swirlie”, a generative light painting

I believe that the generative photography community could become a phenomenon in the NFT space, even a new artistic movement. The possibilities are vast, and the opportunities are endless!


Joy Hughes

Computer-generated art in nature. Generative Photographer, Programmer, Solar Energy Advocate.