Computational Aesthetics Then and Now: Vistapro and GANs

Erik Ulberg
12 min read · Oct 27, 2019

Abstract: While computers have been used to make art for decades, humans show a natural resistance to the output. Tracing historical lineages of computational aesthetics can help explain this phenomenon and suggest alternatives for the future. This paper describes the interaction and output of Vistapro, a popular landscape generator from the 1990s. It compares and contrasts the tool with GANs to shine a spotlight on the worldview of each approach and question their built-in assumptions. The paper concludes that highly structured outputs have the advantage of being grounded in the world, but can lead to disappointingly sterile representations.

Introduction

While watching a movie the other day, I saw a frame that reminded me of the output of a Generative Adversarial Network (GAN). It got me thinking about the value of realistic artificial images. I thought, “What if this is fake?” And I felt uncomfortable. Realistic images generated by computers are nothing new (especially in movies), but media produced with machine learning causes greater discomfort. The change is rooted somewhere in the new aesthetics and interactive paradigms of these tools.

Vistapro (1999) and “End Of The Day” by ariz (made in Vistapro)

To better understand the disconnect, I dug into the past. Using emulation tools combined with historical documents, I investigated the technical, aesthetic, and interactive aspects of Vistapro.

History of Vistapro (1990–2005)

Vistapro is a 3D landscape generator developed by John Hinkley. Development began in 1990; version 4.0 was published in 1999, and the final version came in 2005 [Williams 2017]. It enabled fly-through experiences of terrain. Vistapro works with real-world maps or self-generated fractal landscapes. Models are manipulated using built-in functions and rendered as realistic images, videos, or VR.

Vistapro 4.0. Created in EaaSI Platform.

Vistapro was developed for the Commodore Amiga, with later versions for Mac and PC, and was published by Virtual Reality Laboratories, Inc. (VRLI), whose flagship products were Vistapro and Distant Suns, a desktop planetarium [Williams 2017]. This pairing is not surprising given that the simulation of worlds has its origin in space travel simulation. As Bruegmann (1989) notes, the NASA Moon Landing Simulator inspired other world simulators such as Cityscape by Peter Kamnitzer in 1968 and The Aspen Movie Map by MIT’s Architecture Machine Group in 1978.

Images of Distant Suns and Vistapro. Source: http://countingvirtualsheep.com/stepping-into-other-worlds/

Vistapro entered this scene as a tool hyped as an easy way to visualize flights through National Parks or Mars. “Play God with the ultimate landscape generator!”, one review proclaimed [CU Amiga Magazine 1997]. In reality, it was used in a variety of ways by 3D artists, hobbyists, and geo-professionals. The USGS generated slides for presentations in the software, New Zealand’s Logging Industry Research Organisation evaluated it for forestry management, science teachers taught atomic principles with it, and backpackers planned their hikes with it [Jenkins 2003; Kilvert 1996; Markham 1998; Zohdy et al. 1993].

“CU Amiga Magazine” and “The Snows of Olympus”

Vistapro was popular with artists. The last version came out in 2005, but as recently as 2016, a gallery of work using the tool was published on Renderosity, an online community of digital art enthusiasts [gToon 2016].

Arthur C. Clarke also famously used Vistapro to imagine settings for his book The Snows of Olympus [Astrobiology Magazine 2004]. Clarke seems to have taken the “Play God” invocation seriously and experimented with terraforming. He imported recently available maps of the surface of Mars and used Vistapro to visualize how future colonists might make the planet more habitable [Astrobiology Magazine 2004]. He also appears to have succumbed to what others called the “addictive” nature of the program [Bruning 1992]. He went beyond specific purposes and played with it. Clarke states that he “couldn’t resist putting a lake in the caldera of Mount Olympus,” despite it being an unlikely place for the colonists to terraform [Astrobiology Magazine 2004]. Vistapro lends itself to this kind of freewheeling experimentation. Somewhat like the machine learning tools we see today, it rapidly generates diverse and visually interesting results with little effort.

Sociotechnical Aspects

An interesting aspect of this software was the personal relationship between its developers and users. Vistapro had a limited ability to generate or manipulate models or camera paths, so it depended on and supported an ecosystem of other tools. Instead of a walled garden, VRLI took a modular and open approach, repeatedly providing assistive scripts to customers. John Hinkley wrote a translation utility between file formats to assist a high school teacher’s use of the program in class, and the head of VRLI provided source code to the USGS to support a custom workflow [Markham 1998; Zohdy et al. 1993]. This highly involved customer service seems to have come without financial remuneration. Also worth noting is that Arthur C. Clarke’s use of Vistapro came after he received it as a gift from Hinkley [Astrobiology Magazine 2004]. These actions suggest that the makers of Vistapro were interested in encouraging its use in artistic, educational, and scientific applications.

Flying over Atoms. John R. Markham. Laconia High School. 1998.

Visual and Interactive Analysis

The general interface of Vistapro consists of a map, a rough preview, and menus of parameters. Elevation models can be imported from other programs, or users can generate fractal landscapes by choosing one of four billion random seeds.

There is little direct control in Vistapro; changes generally happen at the global scale. This yields serendipitous landscapes that Hinkley described as “often more interesting than those found in the real world” [Jaenisch et al. 1994]. Instead of working directly with the map, a user can generate custom landscapes by clicking “Randomize Seed.” After creation, the map can be altered en masse with “erode” and other options. Manipulations like drag-and-drop or selection do not exist. More like a menu at a restaurant than a sculpture, Vistapro is not a malleable model. It trades flexibility for structure and for strong assumptions about which elements are vital for representing a landscape and how they are constructed.

Creating a landscape from a random seed.

Users can place surface items (such as water features, simple cube buildings, and a small set of trees) through point-and-click interaction in the small graphics window. The inventory was limited to what you see in the images below, which led New Zealand’s Logging Industry Research Organisation to complain that it lacked an adequate representation of Pinus radiata [Kilvert 1996]. While Vistapro has a highly structured model of the world, that model is a frugal one. The developers focused on making nature appear real from a bird’s-eye view.

Trees and Buildings in Vistapro

Vistapro’s underlying generative algorithm assumes that nature is fractal in the same way at every scale. Close observation contradicts this: consider the different patterns in shifting particles of sand, the weathering of mountains, and the branching of trees [Lewis 1990]. However, fractals economically generate detail at every scale. The assumptions built into the model allow it to elegantly describe a skeleton of the world to be inflated with parameters.

Random Fractals as Nature in Vistapro

Vistapro generates its fractal landscapes using the random midpoint displacement method [Jaenisch et al. 1994]. The method repeatedly subdivides the terrain, placing each new midpoint at the average of its neighbors and displacing it up or down by a random amount that shrinks with every level of subdivision. A simple set of rules gives a realistic image.
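To make the rule concrete, here is a minimal one-dimensional sketch of midpoint displacement in Python. It illustrates the general technique rather than Vistapro’s actual implementation, which operates on a two-dimensional height grid:

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5):
    """Build a 1D terrain profile between two endpoint heights.

    Each new midpoint is the average of its neighbors plus a random
    offset, and the offset range halves at every level of recursion,
    which gives the profile its fractal, self-similar look.
    """
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-roughness, roughness)
    left_half = midpoint_displacement(left, mid, depth - 1, roughness / 2)
    right_half = midpoint_displacement(mid, right, depth - 1, roughness / 2)
    return left_half + right_half[1:]  # drop the duplicated shared midpoint

profile = midpoint_displacement(0.0, 0.0, depth=8)
print(len(profile), "height samples; first few:", [round(h, 3) for h in profile[:5]])
```

Run at a depth of eight, this produces 257 height samples whose jaggedness looks much the same at every zoom level, which is exactly the property Vistapro exploits.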

In their seminal work on shape grammars, Stiny and Gips defined their aesthetics in terms of “specificational simplicity and visual complexity” [Stiny and Gips 1971]. By that view, fractal landscapes are remarkably beautiful. With an extremely simple algorithm, they produce enough detail to convince us we are looking at the real world.

Vistapro’s rendering calculations were done using ray tracing [Jaenisch et al. 1994]. Simulated lighting and atmospheric conditions lent further apparent “realness.” This led to Vistapro’s hefty system requirements, which were a major obstacle for those without the proper hardware; it could take hours to render a single frame on lower-end Amigas [Williams 2017].

Waiting for the landscape to render in Vistapro

Comparing Vistapro and GANs

Vistapro exists as part of a lineage of computational tools to fabricate detailed worlds. I opened by inviting a comparison between Vistapro’s generated landscapes and the current wave of art made with GANs. How do they compare aesthetically?

Let’s apply Stiny and Gips’ measure of aesthetics. While the outputs of Vistapro are visually complex, the specification (the fractal algorithm) is simple. Landscapes defined by GANs also yield visual complexity, but the specification (the network’s internal weights) comprises millions of parameters.

MobileNet, a neural network for classification, was designed with a focus on minimal size to make it mobile-friendly [Howard et al. 2017]. It was praised for its efficiency; a reduced MobileNet with 1.32 million parameters compares favorably to AlexNet’s 60 million [Howard et al. 2017]. This seems like a lot, but the design space of a 32x32 image in 24-bit RGB (256 levels for each of its 32 × 32 × 3 = 3,072 values) contains 256^3072 possible images (for reference, there are roughly 10^80 atoms in the observable universe). As a result, we can say that neural networks do simplify to an extent. They compress representations of the world into a few million weights. But it is hard to describe this compression as simple, and it is worth noting that it occurs in a way that is inaccessible to human comprehension.
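A quick back-of-the-envelope script makes that gap concrete. The parameter counts are the ones cited above; the script itself is only illustrative:

```python
import math

# A 32x32 RGB image has 32 * 32 * 3 = 3,072 eight-bit values.
values = 32 * 32 * 3

# Each value takes one of 256 levels, so the design space holds
# 256**3072 images. Work in log10 to keep the number printable.
log10_images = values * math.log10(256)
print(f"Design space: about 10^{log10_images:.0f} possible 32x32 images")

# Parameter counts cited above, for comparison.
print(f"Reduced MobileNet parameters: {1_320_000:,}")
print(f"AlexNet parameters: {60_000_000:,}")
print("Atoms in the observable universe: about 10^80")
```

The script reports a design space of roughly 10^7398 images, dwarfing both parameter counts and the atom count alike.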

In both Vistapro and GANs, the output can be detailed to the point of appearing realistic to humans. Complete realism is often the express goal of these tools, even when they are used for artistic purposes. This tendency underscores a disconnect with other artistic approaches, many of which favor abstraction over high realism. It raises the question: what value do highly detailed images have?

This question was examined by Joan Fontcuberta (2004) in his work Orogenesis. He took landscapes from Dalí paintings and fed them into landscape-rendering software (such as Vistapro) to examine the “delirious and baroque fantasy” of our visual culture and to intensify the kitschiness of landscape images [Fontcuberta 2004]. That is to say, these images are dramatically kitsch, but ironically so. They emphasize certain details while eliminating other types. This leads to what Fontcuberta calls “a nature which now is not more than an artificial and illusory reconstruction” [Fontcuberta 2004].

Orogenesis. Joan Fontcuberta. 2004

Vistapro may have high detail, but I think the terms “baroque” and “kitsch” better apply to GAN art. At the very least, Vistapro has a clear and relatively simple structure for the world it represents. GANs build worlds from pixels, and any combination of pixels could potentially be produced. Vistapro renders its output in a specific way. It is a simple environment with trees, rivers, and mountains. It is a world we could live in, if an empty one. In contrast, the worlds created by neural networks feel like they would rip your body apart with their face-melting aesthetic.

It could be said that the lack of structure in GANs allows users to work freely across domains, while the Vistapro model requires developers to produce distinct applications. However, GANs also obviate the need to understand the building blocks of the world being modeled. Producing structured output, as Vistapro does, requires close examination: developers must define a skeleton to be filled with parameters. By contrast, the developers of applications like GauGAN (image below) did not have to observe any sunsets over water; their focus would have been technical rather than content-oriented. The structure of Vistapro binds it to the world.

GauGAN from Nvidia Research (2019)

Conclusion

Vistapro and GANs both create artificial worlds full of detail. However, GANs do away with the highly structured output of programs like Vistapro. This lack of grounding in reality gives GANs more flexibility, but also contributes to potentially disturbing glitches. On the other hand, Vistapro provides a safe and clean world, but one that could be described as lonely and sterile.

The value of detail in artificial images is still unclear. Humans have a tendency to dislike art made using computation [Chamberlain et al. 2018]. The credits of the movie mentioned at the beginning of this article mix landscapes with hand-drawn sketches, as seen below. The lines are obviously artificial, but I found them charming, probably because a human created them.

A frame from the credits of Paper Man (2010)

But what about computational images that I enjoy, such as those by AARON?

“Neural Network Balenciaga” by Robbie Barrat (2019) and a drawing by AARON/Harold Cohen.

One potential difference is that humans prefer computationally produced artworks when they can anthropomorphize the system used to create them [Chamberlain et al. 2018; Cohen 1973]. AARON drew live for exhibitions, which adds a temporal aspect at human scale. The robot’s apparent pondering can make it seem like a thinking being. Vistapro and GANs have (nearly) instantaneous output. Perhaps Vistapro was more satisfying when it took hours to render a single frame (its rendering was rapid on the modern CPU I used to test it).

Or maybe I enjoy the human aspect of AARON’s work. Cohen saw his creation of AARON as an art-making activity [McCorduck 1991]. Even though AARON could make paintings without being actively controlled, it required mountains of effort to set up. Tools like Vistapro or neural networks can be aesthetically useful, but they cannot do it alone. They require a human, Harold Cohen, Arthur C. Clarke, or whoever, to infuse them with meaning.

References

Bruning, Dave. 1992. “Visualizing Mars’ Surface: VistaPro.” Astronomy, June 1992.

Chamberlain, Rebecca, Caitlin Mullin, Bram Scheerlinck, and Johan Wagemans. 2018. “Putting the Art in Artificial: Aesthetic Responses to Computer-Generated Art.” Psychology of Aesthetics Creativity and the Arts 12 (2): 177–92. https://doi.org/10.1037/aca0000136.

Cohen, Harold. 1973. “Parallel to Perception: Some Notes on the Problem of Machine-Generated Art.” Computer Studies 4 (3/4). http://www.aaronshome.com/aaron/publications/paralleltoperception.pdf.

“CU Amiga Magazine.” 1997. September 1997. http://www.cu-amiga.co.uk/bissue/sep97.html.

Fontcuberta, Joan. 2004. Orogenesis. https://www.flickr.com/photos/adfphoto/albums/72157619748001159/with/3632517437/.

“GauGAN Beta.” 2019. http://nvidia-research-mingyuliu.com/gaugan.

gToon. 2016. “Gallery of the Week — VistaPro on Renderosity.Com.” Renderosity. February 7, 2016. https://www.renderosity.com/gallery-of-the-week---vistapro-cms-18117.

Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv:1704.04861 [Cs], April. http://arxiv.org/abs/1704.04861.

Jaenisch, Holger M., James W. Handley, Jim Scoggins, and Marvin P. Carroll. 1994. “Simulating Landscapes Using an FFT-Based Fractal Filter.” In Proceedings of SPIE, edited by Wendell R. Watkins and Dieter Clement, 434. Orlando, FL. https://doi.org/10.1117/12.177933.

Jenkins, Roger A. 2003. “Review of VistaPro.” TwoHikers.Org. 2003. http://twohikers.org/Gear/vistapro.htm.

Kilvert, Shaun K. 1996. “New Technologies for the Simulation and Assessment of Forest Landscape Change.” New Zealand Journal of Forestry Science, January, 6.

“Leaving Home.” 2004. Astrobiology Magazine. June 22, 2004. https://www.astrobio.net/mars/leaving-home/.

Lewis, J. Parry. 1990. “Is the Fractal Model Appropriate for Terrain?” Disney’s The Secret Lab.

Markham, John R. 1998. “Flying over Atoms CD-ROM: Abstract of Special Issue 19.” Journal of Chemical Education 75 (2): 247. https://doi.org/10.1021/ed075p247.

McCorduck, Pamela. 1991. Aaron’s Code: Meta-Art, Artificial Intelligence, and the Work of Harold Cohen. Macmillan.

Mead, Derek. 2012. “VistaPro Renderer Alternatives and Similar Software — AlternativeTo.Net.” AlternativeTo. February 8, 2012. https://alternativeto.net/software/vistapro-renderer/.

Mulroney, Kieran, and Michele Mulroney. 2010. Paper Man.

Roth, Curtis. n.d. “Software Epigenetics and Architectures of Life.” E-Flux Architecture. Accessed October 20, 2019. https://www.e-flux.com/architecture/becoming-digital/248079/software-epigenetics-and-architectures-of-life/.

Stiny, George, and James Gips. 1971. “Shape Grammars and the Generative Specification of Painting and Sculpture.” IFIP Congress, August.

“The Aspen Movie Map Beat Google Street View by 34 Years — VICE.” 2012. February 8, 2012. https://www.vice.com/en_us/article/vvvqv4/the-aspen-movie-map-beat-google-street-view-by-28-years.

Wang, Ting-Chun, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2017. “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs.” ArXiv:1711.11585 [Cs], November. http://arxiv.org/abs/1711.11585.

Williams, Will. 2017. “Stepping Into Other Worlds.” Counting Virtual Sheep. August 1, 2017. http://countingvirtualsheep.com/stepping-into-other-worlds/.

Zohdy, Adel A.R. 1993. “Program Kolor-Map & Section: Amiga Version 2.0.” U.S. Geological Survey. https://pubs.usgs.gov/of/1993/0585/report.pdf.

Zohdy, Adel A. R., Robert J. Bisdorf, and Peter Martin. 1993. “A Study of Seawater Intrusion Using Direct-Current Soundings in the Southeastern Part of the Oxnard Plain, California.” U.S. Geological Survey. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1003.5069&rep=rep1&type=pdf.
