What is Digital Cinema?

Attempting a theory for a medium in utero

Adam Protextor
Jan 16, 2020

Abstract: Digital cinema has yet to be treated to its own distinct theory. A relatively new medium, it has primarily been subject to analysis using theories originally constructed for film, much as early films were analyzed using the terminology and theories of the theatre. In “What is Digital Cinema?”, Adam Protextor provides an interdisciplinary groundwork for looking at digital film, drawing from classic and contemporary film analysis, technological history, and human biology in an attempt to find a new formalist theory for a new medium. Digital cinema grows more inherently distinct from film every day, and can no longer afford to be treated with ancillary theory. Originally published in 2010.

Author’s note: This piece was originally published over a decade ago, and as its first sentence notes, that means that plenty has changed and evolved since the time of writing. The central argument at play here, however (that the analysis and discussion of digital filmmaking is too dependent on language developed to discuss analog film), I think remains relevant. In the past 10 years, digital filmmaking has proceeded along a few paths outlined in this essay: 1) the boundary-free world creation of purely digital cinema (Alita: Battle Angel), 2) the rewriting of traditional filmic grammar via video games (God of War), and 3) spectator-as-author interactivity (Black Mirror: Bandersnatch) have all come to fruition in a fuller sense than in 2008. Still, our language in discussing these evolutions remains tethered to a vocabulary developed in response to the invention of film strips and projectors. This piece means to argue that, despite the many overlapping sectors between film and digital, truly freeing digital theory and art from its ontological ancestor relies on learning to see it not as an extension of celluloid, but as its own distinct medium. While the conversation has become immeasurably more complicated in those 10+ years, I still believe it’s one worth having.

Content © 2008–10 Adam Protextor. All rights reserved.

Introduction

The year I’m writing this in is 2008 — a factor that may seem superfluous, but one I consider intrinsic and essential to the understanding of the subject matter within. Digital cinema cannot yet be classified as a static object; it remains amorphous, temporally plastic, in utero. In short, it has not yet been born, and so any analysis I can offer now will without question become at least partly obsolete as time passes. Of course, this sentiment itself seems obvious — doesn’t any aesthetic theory suffer the rigors of further exploration and time? Yes, but my intention is differentiation. I would posit that filmic cinema (a term I will use interchangeably with analog cinema) has been born. The semantic elements of genre, composition, montage, image structure, etc. may be endlessly malleable, but the basic syntactical code of the film remains established — a strip of celluloid, peppered with silver halide crystals, is scratched, painted upon, or exposed to light to create an image. While the art itself is infinite, the process of creation does not change — intent is irrelevant. Not true for digital cinema, which continually adapts its mode of creation to fit its intended mode of exhibition, creating a system of parallel lines from origin to endpoint rather than sharing the film’s branch structure. Both media have multiple possible outcomes, but only digital cinema, as it stands now, undeveloped, has multiple possible beginnings. This fundamental notion points to a broader gap between the study of digital and filmic/analog cinemas, and one that I believe shows a great necessity for a different way of studying and analyzing the former. As is the general trend, digital cinema has largely been shaped by the overriding analog theories that have preceded it. Just as Munsterberg called the first narrative films “photo-plays,” using the stage as a referent for his understanding of the cinema, modern cinema theorists haven’t been able to distance themselves from referring to works of digital cinema as “films” — I find both reversions completely understandable and yet completely off-track. If digital cinema is indeed in essence and character a separate entity removed from its analog counterpart (as I will demonstrate it is), it would be foolish to use an old lens to interpret the new image. In the following pages I will outline in detail exactly how and why digital and analog must be considered disparate parts of cinema, first outlining my dialecticism with notes moving from the indexical to the practical, and finally reaching towards the ultimate goal — to set up the groundwork for some new theory. Obviously, it is firstly critical to free digital from its tethers to understood filmic theory; only then will it be possible for some digital formalist design to emerge. Nonetheless, as I admitted right off the bat, the year is 2008, and I cannot at all hope, nor do I have the audacity to believe, that these words will be responsible for creating a new schema of digital interpretation. I only wish to insist upon the necessity for one. If my distinction is to be successful, however, we first need to lay down some ground rules.

Terminology and definition

The technique behind many early silent shorts after the birth of cinema hinged on a simple recording process of what would otherwise be seen as a stage play. The camera was positioned so as to mimic the perspective of an audience in an auditorium watching a play on stage, with the actors positioned in medium-wide shots so as to make visible the entirety of their movements, forms, and setting. In brief, the first understanding of the filmic cinema was that of a recording process, branched directly from a set perspective on the experiential qualities of its closest narrative neighbor. The tendency we’ve seen since the advent of digital moviemaking techniques to compare them to their analog counterpart comes from an astoundingly similar mindset, although the practical applications are far from alike. From this point of view, exactly why digital cinema is being understood as an extension of filmic is no grand mystery. After all, it has more of a claim to dependency than even its spiritual predecessor did.

The advent of digital application came about in the 1980s, when the use of digital technology first began to creep into filmic narratives. True, the earliest applications of digitalization were not supplementary to filmic cinema — they were purely digital and independent representations; technique for the sake of technique created and screened in a computer (I will touch on this shortly). For now, however, our argument is better served by concentrating on the use of digital imagery purely for the augmentation of analog cinema. The 1982 “film” TRON (Steven Lisberger) demonstrated the use of computer-generated animation as a means to communicate visually the idea of human beings interacting within the world of a computer. The case of TRON is particularly noteworthy for the fact that its use of digital animation still stems from an overriding desire for realism (computer images used to create a computer world, or what Stephen Prince calls perceptual realism), despite the computer imagery’s inherent nonexistence. Indeed, many early practical uses of digital animation functioned on such a system. Perhaps most famously, the T-1000 robot in Terminator 2: Judgment Day (James Cameron, 1991) used computer-generated imagery (henceforth CGI) to construct a character of liquid metal, capable of passing through prison bars or changing its limbs into knives. The potential examples are numerous, but the point is made — the earliest commercial applications of digital technology were primarily used to add to, not exist independently from, analog cinema. After all, both TRON and Terminator 2 were shot on 35 mm film stock, and so a precedent was set that digital technology would not be used of its own accord but instead would simply be a new acquisition to the semantic tool chests of analog filmmakers. It is for this primary reason that I argue that the theoretical reliance of digital cinema on analog cinema is sympathetic, and also why I feel the need to create some pedagogical boundaries as to exactly how we view disparate and ever-mixing uses of the digital and analog. Without these formal distinctions we cannot escape the messiness of trying to separate these two worlds while simultaneously acknowledging their historic overlap.

Firstly, I must insist on keeping separate the terms “cinema” and “film”. It is generally accepted to use the former to refer to the latter, but here I feel the need to segregate them for the sake of clarity. When I refer to “film” I am speaking only of objects or phenomena expressly related to the capture of light or imagery on celluloid. Conversely, I’m allowing the term “cinema” to define the entirety of the plastic art wherein audiovisual events are organized in time — or as Gene Youngblood puts it, into an “event-stream” [Youngblood 27]. “Digital”, then, will refer to that which has been codified via use of a computer, whether it be individual imagery (CGI), complete narrative, or non-linear editing. Now that these basic terms have been outlined, it is necessary to delve deeper. As I have noted, digital cinema is a complicated arena due to its murky Venn-diagram origin with the filmic, and so it becomes crucial to further break down these overlaps in a way that can be quantifiably codified. I would posit that two basic forms of digital cinema exist — the independent and the dependent.

Independent digital cinema, as I take it, stands on its own. Any cinematic object entirely created, processed, edited, screened, etc. using digital modes is independent. For instance, if a movie is shot on a digital camera, then transferred to a computer bay for editing, then finally output to DVD for screening, its complete process of creation is in a sense pure — the movie has never once been an analog object, instead completely relegated to computerization. It thus exists independently as a digital creation. At this juncture, already, I have incurred a bit of paradox in my interpretation. By using the word “independent”, I am of course pedagogically reliant on the existence of film, the implication being that digital cinema’s “independence” is from the analog. Setting up a system of comparison with this earlier art form seems to make me a hypocrite. If my argument is to grant digital cinema some new theoretical domain, how can I justify using film as a barometer with which to judge its caliber? Simply put, it can’t be helped. If the roots of digital lie in analog augmentation, and film has been born but digital is in utero, then there is no other starting point possible if indeed we seek to cut the umbilical cord. That being said, how do we classify those movies that are still connected?

The dependent digital cinema can be broken down into two subset categories — the weakly dependent and the strongly dependent. The ordering of specific movies into these strata directly relates to where they fit in the aforementioned Venn diagram. A movie whose images were all obtained digitally of course has the potential to end up an independent piece of digital cinema. If its mode of exhibition, however, includes its transmutation to celluloid, it has become impure and thus weakly dependent on the existence of analog technology. One example of this would be Zodiac (David Fincher, 2007). Zodiac was shot entirely on a Thomson VIPER FilmStream, which, despite its name, is a digital camera. The movie was then however transferred to film in order to be shown on 35 mm projectors in theaters, creating a weak dependency on film to publicly exist. Zodiac was of course released on DVD, copied from a direct digital source of the event-stream. Yes, this does mean that the version of Zodiac screened in theaters nationwide was weakly dependent digital cinema, and the version(s) of Zodiac released on DVD are independent. Trivial as this distinction within a distinction may seem, the mode of exhibition must be factored in, for phenomenological reasons to be discussed in the next section. What is strong dependency then? A movie whose process of image-capture was analog, but that uses digital technology either to edit (as in nearly all modern movies), augment (as in the case of Terminator 2), or exhibit (again, as in every movie ever released to DVD or uploaded to the Internet), is strongly dependent digital cinema. These movies could not exist without the use of film, yet implicate a computer in a way that cannot be ignored, no matter how marginal it may be. Think of this as a reverse impurity — films imbued with digitalization rather than digital cinema imbued with filmic dependence. Is then Gone with the Wind (Victor Fleming, 1939) strongly dependent digital cinema when I watch it on my DVD player? Yes. Although the film was created using 35 mm film stock, edited linearly on a manual editing bay, and originally screened through a projector, it has become ones and zeros when I press play. This and other examples demonstrate that these categories are not absolute, and that movies can drift in and out of them as their mode of exhibition changes. Still, for the purposes of this study I must be somewhat segregationist, if only for transparency. All cinema relegated to a computer process becomes digital — its mode of production is what classifies it as either strong or weak, creating a gap that is largely phenomenological in nature. What is a purely independent filmic cinema, then? Everything else.
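
To make these distinctions concrete, the taxonomy reduces to a small decision procedure. Below is a minimal sketch in Python (the function and labels are my own illustration of the essay's terms, not established vocabulary) that classifies a movie by the mode of each stage of its life:

```python
def classify(capture: str, post: str, exhibition: str) -> str:
    """Toy classifier for the independent/dependent taxonomy.

    Each argument is "analog" or "digital"; `post` stands in for
    editing and augmentation (CGI, grading). The labels follow the
    essay's terms; the procedure itself is only an illustration.
    """
    stages = (capture, post, exhibition)
    if "digital" not in stages:
        return "purely filmic cinema"
    if capture == "analog":
        # Filmed images that implicate a computer anywhere downstream.
        return "strongly dependent digital cinema"
    if exhibition == "analog":
        # Digitally acquired, but transferred to celluloid for screening.
        return "weakly dependent digital cinema"
    return "independent digital cinema"

print(classify("digital", "digital", "analog"))   # Zodiac in a 35 mm theater
print(classify("digital", "digital", "digital"))  # Zodiac on DVD
print(classify("analog", "analog", "digital"))    # Gone with the Wind on DVD
```

Note how the same title changes classification as its mode of exhibition changes, which is exactly the drift described above.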

Index and the void

“Morning.”

The first question I have to ask when dealing with the phenomenological content of cinema, both digital and filmic, is, where does the digital image exist? When asking this question with film, all we need to do is pick up a strip of celluloid, hold it up to the light, and say “there”. Digital, however, poses a problem.

Digital media are adept at storing vast collections of numbers, and computers are adroit at manipulating them. But the arcane tokens of these thinking machines are incomplete by themselves. Since concepts are not appearances, the curious digital image is unfulfilled without a transformation of numbers into percepts. [Binkley, 111–2]

As Binkley points out, digital images cannot exist on their own. They require translation, because the human brain is incapable of inferring any image from a stream of binary encryption. A computer must always be employed when dealing with digital cinema, as a medium through which we can interpret the cipher. Not true with the filmic, which theoretically doesn’t even require harnessed electricity to function. Even in an imagined archaic world, films could be made and screened using, if anything, lamplight and a hand-cranked projector. Celluloid itself, after all, depends only on a chemical reaction. This process of creation and exhibition amounts to something altogether organic. On the other hand, turn off the power on a digital production and your image is gone. Without the use of modern technology and complex circuitry, there is no cinema. This fundamental lack of index is crucial in understanding exactly why the digital is distinct from film. The answer to the question “where does the digital image exist?” is “nowhere”, because it can only be interpreted, never touched. There is no index, only void.

A friend recently challenged this assertion of mine with a more pragmatic viewpoint. He doubted whether the ontological difference between film and digital was even relevant, given the potential audience’s inability to distinguish between the two perceptually. If we carry the digital replication of reality to its logical endpoint, virtual reality, there is a thought experiment that demonstrates why the difference does matter. Imagine that the world you are currently experiencing — sitting or standing wherever you are, reading these words — is a virtual reality simulation. Everything around you is as tangible and interactive as a simulation can possibly make it. Does it matter, then, that your experiential world is digital? Of course it does. Even if the phenomenological world is indistinguishable from the real world, the fact is that you are isolated in a false reality. This inherent lie, visible or not, is the fundamental theoretical concern that I feel debunks my friend’s pragmatic argument. The idea that we are “brains in vats” or that there could be dead pixels in the sky is and ought to be disconcerting. I therefore wish to immediately put aside any questions of audience experience/perception for the moment, if only in the ontological argument that follows — it is currently irrelevant.

It is no mistake, nor an exercise in audacious self-aggrandizement, that this essay is titled “What is digital cinema?”, for in the course of ascertaining the true nature of the digital image, especially in an ontological sense, the work of Bazin is invaluable (and his thoughts in “The Ontology of the Photographic Image” particularly). If we are to understand that, in the most basic terms, light is mummified onto celluloid, we at least have a solid platform to start from. When photons hit the silver halide on a strip of film, they imprint a direct physical copy of the object they have just bounced off of — that is to say that light, as we understand it, is tangible, and, at least in particle form, has actually touched whatever we see on the screen. In slightly more giddy terms, this is to say that if I could hold a print of La Passion de Jeanne D’Arc (Carl Theodor Dreyer, 1928) in my hand, I could be touching something that itself had ancillary contact with Maria Falconetti, albeit removed three or four degrees. I don’t wish to get too sentimental with this idea, but the fact remains that the filmic cinema as we understand it is a physical object, and when we watch it (as projected on 35 mm, at least), we are seeing that object’s shadow. This is the part where I have to get careful about making a value judgment — I don’t wish to condemn digital cinema for its lack of connection with the physical world, but merely to outline yet another reason for its inherent separation from filmic theories.

Speaking ontologically, I can’t ignore the strong ideological ties to realism that Bazin’s arguments are founded upon:

Originality in photography as distinct from originality in painting lies in the essentially objective character of photography. For the first time, between the originating object and its reproduction there intervenes only the instrumentality of a nonliving agent…all the arts are based on the presence of man, only photography derives an advantage from his absence. [Bazin 13]

For Bazin, art is inherent in the physical reality of our world, and the proper purpose of realist plastic art is to seek it out and redirect it for the spectator. In this redirection, then, lies the artist’s role — the possibility to imbue subtext into the natural and allow new interpretation that would not have been otherwise implied. What role does digital technology take, then, with regard to this realist-dependent value judgment? Well, for one, the innovations of digital technologies present a direct challenge to the basic assertion of the “advantage of [man’s] absence.” As stated earlier, digital movies have no inherent indexicality — they are constructs reliant on the presence of a computer to exist. This goes for their exhibition as well as their creation (at least in the weakly dependent and independent typologies). To praise filmic cinema for its liberation of representation from human methodology is to present a very fundamental concern when computation is introduced into the process. The encoding and decoding of digital event-streams, for one, requires both enhanced human participation (now as encoder as well as operator) and the added necessity of artificially intelligent participation. While it is superficially reasonable to argue that a digital camera acts just as much the nonliving agent as the filmic apparatus, the fact remains that reality has been filtered through a far more drastic course when image-capture is changed from an analog to a digital process.

If a photographic image is captured on celluloid in a camera, developed, and moved to a projector, the image’s identity is preserved — capture-image and exhibition-image are not only identical, they are one and the same. This applies even to repeated prints (copies) of the original film negative, for if the process of duplication is photonic, then chemically, ontologically, and physically, the sameness abides. In short, an independently filmic image may be classified as the image-object. A digital image-object, then, is impossible. When captured through a digital camera, reality is encoded, creating an image-simulacrum. The absence of influence Bazin so values is inverted, because the mere acquisition of digital imagery depends on presence — by definition the digital cannot be realist (at least not in the ontological sense of the term). This spectral move from the indexicality of the image-object (filmic) to the iconography of the image-simulacrum (digital) is correspondent with cinema history in terms of digital’s development. Rick Altman, writing in 1992, refers to the notion of film’s indexicality as “half a fallacy”, using the similar progression of sound technologies to highlight his point:

Little by little, the indexical nature of film sound became compromised by the ability of acoustic networks and electronic circuits to alter or simulate sound […] the electronic revolution has now made it possible to produce all the music and effects for a film sound track without recording a single cricket or musical instrument. […] Today, the customary electronic manipulation and construction of sound has begun to serve as a model for the image. [Altman 44]

These observations on the development of sound technology lend historical context to the indexical-to-iconographic course images are currently on. While my focus here remains image-based, the history of film sound’s gradual deontologization provides past evidence for the Venn-diagram process I’ve proposed.

What has not been addressed here yet is the fact that my ontological argument, in its adherence to Bazin’s principles, could be debunked by a mere rejection of said ideologies. Bazin, after all, died before he could view the full range of what cinema would grow to become, and his meditations on realism have often been individually discredited by those writing on film. While this is a solid critique, it can be rebutted by a simple distinction between Bazin’s intentions and my own. Bazin treated the advent of filmic technology as the ultimate art form capable of realist expression. His approach, however, while rooted in the ontological character of film, extended to assess the realist virtues of individual films, thus moving his study out of the realm of the ontological and into the realm of the semantic. By analyzing, praising, and appraising individual works and genres (in particular Italian neorealism), Bazin’s study became primarily a study of the proper applications of what he saw as an essentially realist art form. To call film a realist art form is to do something very different from suggesting that film itself carries a realist character. In this study, I aim to remove this focus on realism from the ontological and place it in the syntactical, thereby proving that there is a fundamental distinction between the image acquisition of the analog and the digital. Even if one does not necessarily agree that the proper way to manipulate film is to copy reality (as Bazin may have), one can still see that the methodology of film serves as a way to preserve it. Therefore, even a film by Georges Méliès which has no semantic claim whatsoever to realist representation (as Umberto D. [Vittorio De Sica, 1952] might) nonetheless remains syntactically realist in its photographic ontology, in a way that an aesthetically comparable digital image composite would not. To borrow Bazin’s ideas is not to mimic them — if I say digital cannot be realist I mean it cannot hope to do anything more than simulate the physical, not that it or film should have any ideological thrust towards realism as an art theory. Thus, the image-object and the image-simulacrum, the definitive outcome of this material distinction.

In his 1996 article “True Lies: Perceptual Realism, Digital Images, and Film Theory”, Stephen Prince references many of the previous claims but seeks to further liberate digital integration from the slew of realist critique:

For reasons that are alternately obvious and subtle, digital imaging in its dual modes of image processing and CGI challenges indexically based notions of photographic realism…its reality is a function of complex algorithms stored in computer memory rather than a necessary mechanical resemblance to a referent. [Prince 29]

Obviously, what Prince is saying here is something I’ve already covered. What’s more important is where he takes this initial jumping-off point:

Given the tensions in contemporary film theory, should we then conclude that digital-imaging technologies are necessarily illusionistic, that they construct a reality-effect which is merely discursive? They do, in fact, permit film artists to create synthetic realities that can look just like photographic realities…The tensions within film theory can be surmounted by avoiding an essentializing conception of the cinema stressing unique, fundamental properties and by employing, in place of indexically based notions of film realism, a correspondence-based model of cinematic representation. [Prince 31]

Prince wants to reconcile the emergence of digital technology with preconceived notions of film theory, which I have argued cannot be done. In order to take this a step further, he advances what he calls “perceptual realism”, which allows images that are “referentially fictional but perceptually realistic” [Prince 32]. It is important to note that Prince, writing in 1996, is referring to movies which feature digital imagery, rather than taking note of the possibility for movies to be independently digital (i.e. shot digitally), which as I have expressed furthers the necessity for new theory. Citing examples such as Forrest Gump (Robert Zemeckis, 1994) and Jurassic Park (Steven Spielberg, 1993), Prince means to justify that which is digital within a preexisting analog framework (strongly dependent digital cinema). Given this premise, it is more reasonable for him to want to dismiss those issues of indexicality I find all too important. Back to terminology — perceptual realism, then, is the realism that exists within the mind of the viewer — the phenomenological effect of the event-stream rather than the ontological qualities of the event-stream itself. Within this caveat, it is possible to render Bazin’s claims coherent while anticipating the assimilation of CGI, etc. into the filmic narrative. This, however, is another example of the sort of critique I debunked above. Prince is too purist in drawing from realist theory, and his suggestions on the nature of perceptual realism, while accurate to the semantic and phenomenological analysis of cinema, proceed from the belief that accurate representation of conceivable events finds accord with realist art theory. This may be true momentarily, but I find it to be a theoretical placeholder for a greater issue at stake. By creating this new term of “perceptual realism”, he has given us a temporary fix, but one that eventually succeeds only in explaining away CGI, without noting that in the contemporary light of digital camera production, ontological claims are proven necessary once again. If we instead co-opt Bazin’s emphasis on materiality for the sake of a syntactic look at the conventions of the cinema itself, we are still left with an inescapable necessity to locate some new methodology for digital analysis. While CGI may find temporary solace in the synthetic model, it remains an image-simulacrum (however integrated with image-object), and is thus reliant on something besides integration to be understood. Fundamentally, such pragmatism is idealistic, again linked to an archaic need to view new technologies via their past referent. I offer this, however, working from a broader analysis which includes digital capture as well. CGI-based filmic augmentation does pose an interesting quandary, existing as a muddy missing link between the analog and the digital — I will return to this later.

Image processing and human perception

The camera is an eye. This basic assertion, while seemingly simple, actually carries a slew of connotations and challenges in the light of emergent technology. To truly understand these new implications, it is necessary to reflect on the origin of this commonly understood parallel. The film camera as we understand it today was first categorized as a type of eye in formalist film theories of the 1920s, especially those of Dziga Vertov, but the classification also owes something to the realist theories of Bazin I have already noted. Bazin’s realism finds solace in the classification of the camera as an eye because for him, it acts as an artificial window, actuating and realizing the outside world for newly intellectualized human consumption. The camera operates as an eye phenomenologically, because it creates a spectator experience that can be closely aligned with the simple viewing of the outside world. Again, I must stress that while Bazin sees this as the building block for a realist art theory, I can use this basic apparatus structure only as grounds for my argument here. It is not my place nor intention to argue the proper uses of cinematic technologies, merely to contrast them theoretically to substantiate my ideas of necessary liberation. Vertov’s “Kino-eye” theory is strikingly similar in ideology, if not in successive art theory, to Bazin’s. Vertov believed that by encouraging a new vision of the world through the multi-capable cinematic eye, human beings would naturally evolve into the perfection of an “electric man”. Seemingly a far cry from the more humanist desires of Bazin, but curiously linked. Both authors see the evolution of the camera as a new eye through which man can interpret the world — they just have different ideas on what form that interpretation should take. What is the point of all this bantering between Russian formalism and realism? Basic filmic cinematic theory was and is founded on the notion that the camera replaces the human eye. This theory has obviously been carried out to a far more detailed degree than early formalist designs; one glance at a basic analysis of Rear Window (Alfred Hitchcock, 1954) reveals that, along with the notable introduction of psychoanalytic theory, the ideological incarnation of the camera as visual-machine has not and will not dissipate. Naturally, this is not a new idea. All visual art attempts to do this to some degree or another. What other purpose did the Renaissance painter’s breakthroughs in perspective serve than to further increase the painting’s accurate simulation of the human percept (and thus his eye)? Indeed, Bazin points out this evolution just as he describes the ontological benefits of the cinema, claiming that “[photography] has freed Western painting, once and for all, from its obsession with realism and allowed it to recover its aesthetic autonomy” [Bazin 16]. If photographic cinema has freed painting from its need for realism, then, what has digital cinema done for it? Has digital liberated film from its need for realism too, or can this liberation be self-contained by the new medium?

Besides establishing some basic theoretical groundwork, the above paragraph also serves to illustrate one key point. When the camera was first being viewed as an eye, whether that eye be revolutionary or re-interpretive, there was only one type of camera — the camera of analog photography. When the camera changes, can it be called an eye any longer? Can we simply dismiss all visual re-interpretation as an extension of Kino-eye mentality, or must we alter our approach when the technical act of visualization undergoes a mechanical shift? Since the apparatus and conception of “camera” has ceased to be one that can be pigeonholed purely as photographic capture, it is of course a necessity to move away from the substitution theory that depends on the similarities between the filmic camera and the human eyeball. If the camera has come to be a surrogate eye, how then does its corresponding technology gel with the mechanics of its original referent — its biological ancestor?

The human eye allows light to pass in through the cornea, which is the outermost sixth portion of the sclera. As light comes in through the front of the eyeball, it passes through the pupil, which is controlled by the iris surrounding it, a diaphragm with two built-in muscles which help regulate the amount of light that is let in (the sphincter and the dilator). Right away, from this cursory description of superficial photoreception, there is a clear linguistic link between the eye and the camera. Not only do we refer to the diaphragm on a camera as an iris, but indeed, it was designed after the human eye, to dilate or contract according to the right amount of light necessary for the exposure the photographer desires. I have discussed how the ideas of vision have held film theory’s collective hand, but the fact that the mere design of the camera had in mind the mimicry of our sight-process, however obvious an observation, is especially important when we turn to the digital camera. After light is allowed to pass through the pupil and into the eye, it is refracted by a lens and captured on the retina. Inside the retina there are rods and cones, which process low-light vision and color and detail, respectively. These cells, along with a chemical called rhodopsin (A.K.A. “visual purple”), translate the photonic particles received into electrical signals [Bianco]. Here is where our comparison to the filmic camera ends. So far, the process of light-acquisition is nearly identical to this traditional camera (something nearly a priori given said camera’s design), but with one key difference. Once light has entered an analog camera it is transformed by chemical process into a recognizable shadow of the objects it has bounced off of, whereas in the human eyeball these chemical reactions only serve to change the visual signal even further, relaying it to the visual cortex via the optic nerve. It can be argued, then, that the photographic camera stops halfway through the process — if it was designed to correctly duplicate the human visual schema, then it has cut the process short. The photographic camera, then, may be a model of our eye, but it is an inadequate model of our visual biology.

It interests me greatly that the respective fields of cinematic technology and neural/cognitive research have matured simultaneously in the years since photography was first invented as an aping of the ocular blueprint. The fact that the digital camera and full scientific comprehension of the visual process have evolved together shows a parallel that cannot be ignored. So what have we learned? In their book Phantoms in the Brain, Dr. V.S. Ramachandran and Sandra Blakeslee offer insight:

When I was a student, I was taught that messages from my eyeballs go through the optic nerve to the visual cortex at the back of my brain (to an area called the primary visual cortex) and that this is where seeing takes place. There is a point-to-point map of the retina in this part of the brain — each point in space seen by the eye has a corresponding point in this map. [Ramachandran and Blakeslee 70]

Right away we are given some clue to the fact that vision as a preconceived notion of eyesight is insufficient. Interpreting the uses and effects of the analog camera to be symbolic for human vision is limited only to that which is phenomenological. In this distinction between vision’s phenomenology and its actuality lies the distinction between the modes inherent in vision-replacement by analog and digital cameras, respectively. The experiential aspects of human vision — namely, that we look ahead with our eyes and are presented a straightforward image of our surroundings — fit quite nicely with the photographic process. Just as we are presented an image by exposing our irises to light, so does celluloid yield one when it undergoes the identical process. Of course, it would be easy and perhaps again pragmatic to argue that digital cameras do the same sort of point-and-represent capture, but such reductionism ignores the consequences of setting aside technical aspects for the sake of more lucid analysis. Even the digital aperture has been designed to work, in a user-oriented schema, as a copy of the iris f-stops on an analog apparatus. This is a reasonable evolution, as digital cameras were designed to mimic what film cameras could do. They do not, however, mimic how they do it.

If we concentrate only on the experiential and phenomenological links surrounding the basic precept of “acquiring an image”, we cut out the means to compare the ends (namely, that both types of cameras, as well as the human eye, produce imagery), a misstep that will ruin the potential for understanding dynamic structural barriers between all three media. “There is no little man inside watching what is displayed on the primary visual cortex” [Ramachandran & Blakeslee 72] as there is when we project a film, and so we are left with the digital and the neural to compare, two networks that depend on cognitive reorganization to decode the image file.

…this first map [in the visual cortex] serves as a sorting and editorial office where redundant or useless information is discarded wholesale and certain defining attributes of the visual image — such as edges — are strongly emphasized…This edited information is then relayed to an estimated thirty distinct visual areas in the human brain, each of which thus receives a complete or partial map of the visual world…Why do we need thirty areas? We really don’t know the answer, but they appear to be highly specialized for extracting different attributes from the visual scene — color, depth, motion, and the like. [Ramachandran & Blakeslee 72]

When photonic light is transformed into electrical signals, and subsequently passed via the optic nerve to the visual cortex, the brain breaks down these signals and processes them separately to create an overall construct, which our brain instantaneously interprets as the image we call perception. This is why when certain areas of the visual cortex are damaged, specialized components of the visual process are damaged as well. It is also why certain blind individuals who have damaged visual cortexes but undamaged eyes are able to ascertain objects’ locations without “visually” perceiving them (what Ramachandran calls “blind-sight”). This complex process of image breakdown, transfer, and reconstitution relies on signal transference and to-and-fro ciphering. As Mike Teevee put it in Willy Wonka and the Chocolate Factory (Mel Stuart, 1971), “You photograph something, and then the photograph is split up into millions of tiny pieces, and they go whizzing through the air down to your TV set where they’re all put together again in the right order.” While he was referring to Wonka’s television transmissions and not neural cognition, he may have been onto something — perhaps the eccentric confectioner had already created the digital camera.

When images are captured by a digital camera, the image files are manipulated in such a way as to preserve overall vision without compromising massive amounts of space (this compared to the filmic camera, which requires a set quantity of physical space for every second of visual capture — standard 24 frames). The Panasonic AJ-SDX900 camera, according to technical specs outlined in Brian McKernan’s book Digital Cinema, “captures images at 60 fields per second or 30 or 24 frames per second” [McKernan 80]. Right off the bat the link to analog is obvious, but worth noting — modern prosumer and professional digital cameras are designed to mimic the 24 frames per second shooting aesthetic of film, a natural consequence of medium evolution. Elaborating on Panasonic’s DVCPRO model:

When the camera is operating in a progressive capture mode, the progressive image is recorded into both the odd and even fields that are normally used to make an interlaced frame. Operators have two choices of how to record 24-frame progressive images on tape. The industry standard pull-down arrangement replicates the approach used to transfer 24 frame per second film to 30 frame (60 field) per second NTSC video. [McKernan 80]

Rather than replicating imagery in a purely representational sense, the Panasonic camera used in this example creates alternating fields of video image that trade off at regular intervals in order to create coherent imagery for the human eye. Similar to the human sight pattern of “filling in” — the phenomenon whereby the brain replaces the blind spots in the eyeball with matching imagery to simulate a complete and unbroken image — the digital camera is constantly moving back and forth between two broken fields to create a superficial phenomenology of totality. True, the filmic camera also depends on movement, but for a very different purpose. When analog film is projected at 24 frames per second, the end result is simulated animation of the objects within the frame’s borders. When the film is stopped, the individual segments of the moving image (the still photographs) remain accurate documents of the field of vision at the time of capture. The digital cinema’s reliance on movement is not to create plausible animation of the objects in the field, but to create the field itself. Without plasticity and movement, the digital frame does not even exist to begin with. The difference between movement in the digital and analog process is the difference between perception and existence, respectively.
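
The “pull-down arrangement” McKernan mentions can be made concrete with a short sketch. The following Python fragment (a simplified model of my own; real pulldown operates on scanlines, not frame labels) shows the classic 2:3 cadence by which four 24 fps film frames are spread across ten interlaced fields, yielding 60 fields per second:

```python
def pulldown_2_3(frames):
    """Spread 24 fps progressive frames across 60 Hz interlaced fields.

    Successive film frames contribute 2, 3, 2, 3... fields, so four
    frames become ten fields (five interlaced video frames). Field
    parity alternates between top (T) and bottom (B) scanlines.
    """
    fields = []
    parity = 0  # 0 = top field, 1 = bottom field
    for i, frame in enumerate(frames):
        for _ in range(2 if i % 2 == 0 else 3):
            fields.append((frame, "T" if parity == 0 else "B"))
            parity ^= 1
    return fields

# Four film frames yield ten fields in an A-A-B-B-B-C-C-D-D-D cadence:
print(pulldown_2_3(["A", "B", "C", "D"]))
```

Notice that some interlaced video frames end up woven from two different film frames, which underlines the point above: the digital field exists only in the weave.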

Returning to the visual cortex’s “filling in” program, we can see an inherent link between both media’s shared interest. Why else does the filmic cinema even exist other than because of this cognitive adherence process between disparate images? If the human brain were not capable of “filling in” our blind spots, then it would be able to spot the individual frames linking an otherwise animated film, and the effect would be lost. The film thus depends on human weakness. Digital, on the other hand, moves a bit quicker than film in its interlace process (60 fields per second as opposed to 24 frames). It may be faster, then, but philosophically it remains the same. Cinema, then, depends on this weakness. What is worth mentioning? The fact that only digital, of the two, not only depends on “filling in” for its images to exist, but actively mimics it in order to create said images in the first place. When these images are placed on a computer, a very similar process takes place: “Lossy compression systems [a type of compression system where compression and decompression result in minute differences between original and copy] do not provide perfect reconstruction of the original data, but they use a number of techniques to ensure that the gain from the compression is much greater than the loss in fidelity” [McKernan 55]. The process of compressing and decompressing image files mimics cortex computation yet again, in the sense that the digital image is an admittedly imperfect copy of the real world that human vision isn’t sophisticated enough to note. Why? Because our own brains do the same thing, and it would be epistemologically impossible for us to know how to view something from a perspective we cannot achieve.
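
A toy round-trip makes the lossy trade-off tangible. The scheme below (deliberately crude and of my own devising; it is not any real codec) discards every other sample and interpolates the gaps back in, so the reconstruction is close to, but never identical with, the original:

```python
def compress(samples):
    """Toy lossy "codec": keep every other sample (2:1 reduction)."""
    return samples[::2]

def decompress(kept, n):
    """Rebuild n samples, filling the gaps by linear interpolation."""
    out = []
    for i in range(n):
        lo, frac = divmod(i, 2)
        if frac == 0 or lo + 1 >= len(kept):
            out.append(kept[min(lo, len(kept) - 1)])
        else:
            out.append((kept[lo] + kept[lo + 1]) / 2)
    return out

signal = [10, 12, 18, 30, 52, 54, 50, 41]
restored = decompress(compress(signal), len(signal))
print(restored)  # [10, 14.0, 18, 35.0, 52, 51.0, 50, 50]
print(max(abs(a - b) for a, b in zip(signal, restored)))  # 9: close, not identical
```

The point is precisely McKernan's: the savings from throwing data away outweigh a loss in fidelity that the viewer is not equipped to notice.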

We have noted, then, that basic human physiology maps well onto the imperfections and image-creation methods used by modern digital cameras, and also noted that while cinema as a whole needs a certain degree of subjective perspective to exist, only digital cinema adapts to the human condition in terms of image acquisition. It presents a paradox in this way — by operating at 60 fields per second it remains faster than film at outsmarting the human brain for exhibition but not as savvy as film at capturing reality objectively (whole frames versus interlaced frames). Thus, we have dealt with the phenomena of acquisition to a degree, but not of storage. In that realm, I return once again to McKernan’s technical description, this time of color and light:

Sampling measures the amount of light that falls on one small area of the sensor (known as a pixel). This measurement produces an analog value — in other words, within the range of measurement, there are no restrictions on values. For these values to be useable in a digital system, they must be quantized, or approximated to a number of preset values that can be represented by a given number of bits. [McKernan 56]

McKernan makes the argument later in his book that “film is digital” because “grains are either exposed or not” [McKernan 67]. I feel he debunks his own argument with the above description of digitization — on film there are no restrictions on values, as the process is chemical. To turn them into a digital representation they must be altered:

When we sample, we may get any values such as 3.1, 48.79, 211.3, 72.489, etc. In this example, however, we are going to use an 8-bit digital system, and an 8-bit value can represent any of the integers from zero to 255, but nothing in between. To quantize the measured values, we have to replace each value by an integer, usually the nearest integer. So the examples above become 3, 49, 211, and 72, and every quantized value can be represented by an 8-bit word. [McKernan 56]
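
The quoted arithmetic is simple enough to run. Here is a minimal sketch (the function name and the clamp to the 8-bit range are my own additions):

```python
def quantize_8bit(value):
    """Approximate an analog measurement to the nearest 8-bit integer."""
    return max(0, min(255, round(value)))

samples = [3.1, 48.79, 211.3, 72.489]
print([quantize_8bit(v) for v in samples])  # [3, 49, 211, 72]
```

The rounding error introduced here (at most half a step per sample) is exactly the restriction on values that celluloid's chemistry never imposes.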

So, digital representation remains trapped in the parameters that it creates by its own definition. Just as the human brain has approximately 30 areas for different segments of images, and must categorize accordingly into these databanks, the digital process must slightly alter photonic signals to fit them into its limited mode of perception. I use the word “limited” here not in a pejorative sense but purposefully — the visual cortex is also limited to its own design, and, just like the digitization process of a computer chip, must fit retinal input into classifiable sections. This may seem a stretch, as the human brain has a much larger capacity for light and color, but every computational perceptor remains finite — dogs cannot see certain colors as they lack the same number of ocular cones as humans, and humans cannot see infrared or ultraviolet light, etc. Varying film stocks are also more sensitive to different lights and colors, and chemically can or cannot “see” certain shades. Nonetheless, within a film strip the light it is sensitive to remains an analog spectrum, whereas digital must find the nearest like-alternative — think of it as someone being able to see blue-blue-green, but someone else only seeing blue. It seems, then, that film, in its organic lack of prejudice, again closely resembles the chemical signals perceived by the human eye. This only follows up to the optic nerve, however, and then we must revert to our “30 areas” mentality: “Image sensors for color need to sample three different color ranges, and generate three signals, generally known as red, green, and blue, or ‘RGB’” [McKernan 56]. As multi-purpose as human perception is of the color spectrum, the human brain performs similar encoding when it groups colors and details into these segregated compartments in the visual cortex.

What does all this back-and-forth tell us, then? It seems that film has a closer hold on the human eye and the phenomenology of sight, but that digital technology closer approximates the human computational mode for dealing with image data during and after cortex acquisition. Film more closely aligns with reception and digital with computation, then? This seems accurate, and provides a solid theoretical foundation (other than the “media evolution” model) for why digital image acquisition has so closely copied the filmic camera’s example. If film takes the phenomenological qualities of the human eye and approximates them using chemical processes, and also stops before entering its acquired imagery into any computational data-manipulation, then it does not “think”, but only “sees”. The human brain, as does digital, encodes the image, and thus introduces a higher degree of subjectivity into the proceedings. If both the human brain and digital technology encode the image, do they both produce image-simulacra? Surely we cannot argue that they produce image-objects as the film does, since the human brain surely cannot (the impossibility of encoding experience being one of the most tenuous arguments in mind-body philosophy), and I have demonstrated that digital cannot. If film then produces an image-object directly correlated not to any individual experiential property but only to organic recreation, is it a purer re-presentation of reality than the human brain can match? Yes, because it presents reality undiluted by the perceptual goings-on of the brain. Nonetheless, if I am colorblind through my eyes I will be colorblind to a filmed image, so can we ever escape perception? No, we as individuals cannot, but this sort of egoist epistemology is useless subterfuge for a greater issue — if a filmic document retains an organic quality unmatched by the encoding process of individual computers, human or digital, then the specific spectator is irrelevant. Any one viewer of a filmic document would be trapped by their own perspective, but film as an object still retains its objective character — it is only light and chemistry, and has no intention or free will. While we cannot escape this epistemological loop, we can conclude that film is a more accurate document of the world, structurally if not experientially (for digital is also experienced similarly, despite heightened perceptive modes that I will return to later), than the human brain can produce. This means that film is closer to nature, but digital is closer to us.

The strange case of CGI / What can digital do that film can’t?

I would have thought not. I’m not a fan of digital. And I sound like I’m talking out of both sides of my mouth when it comes to Robert. When Robert [Rodriguez] does it, it’s great. That’s where Robert is coming from. He just wants to do everything himself and digital allows him to do that. Why would you hire a cinematographer? If you’re doing a digital movie it doesn’t make any sense whatsoever. All you need to do is look to the screen to see if you like it. Gaffer do this, do that… you could be your own cinematographer. No cinematographer should be promoting digital. It makes them as obsolete as a dodo bird. But in the case of Sin City, and probably 300, you know you could never have made those movies on film. [Quentin Tarantino, Sight and Sound]

So far this analysis has been primarily theoretical, juxtaposing what is perceived to be the basic ontological, experiential, and computational character of analog and digital cinemas in order to demonstrate that the interpretive modes of the former have no place in studies of the latter. Still, despite these analytic dialectics, the real-world applications of these opposed media have an undeniable link — and so, we return to the Venn diagram area, the area of transformation, in order to sift through and separate these practical and historical bonds to further heighten the sense of separation I feel is essential. As Tarantino suggests in the above quotation from a 2008 Sight and Sound interview, certain movies would not be possible to make on film that have found their place in digital. While I wish to avoid value judgments as to the visual character of the digital versus analog, these distinctions of practical capability are helpful. Even Tarantino, an ardent defender of the superiority of film, admits that “in those cases where they are creating a whole new cinematic landscape, I can’t be churlish about that. I’ve got to give it up. It adds another possibility in which to tell stories, and create pictures” [Quentin Tarantino, Sight and Sound]. Primarily, Tarantino is of course referencing independent digital cinema — a realm of experience and imagery entirely constructed, captured, and exhibited digitally — movies like Sin City (Robert Rodriguez & Frank Miller, 2005) which are able to create perceptual realities that exist independently and contain a different sort of versatility than what film can accomplish. What about the middle area, however, the dependently digital area, where CGI is used in accordance with film in order to supplement and create imagery that would be otherwise impossible? 300 (Zack Snyder, 2007), Tarantino’s other example, was in fact shot on film, and digitally graded in post-production. Herein lies a critical distinction between these two examples — both movies aim to accomplish similar goals, but one does so in the context of medium and the other in the context of augmentation. Both movies rely strongly on the presence of digital technology, but one fully embraces its new medium while the other mixes two. These movies’ similarities further highlight the chain of events that have led moviemakers to discover the potentialities for digital hidden within the mixed use of the technology in the past. Concentrating on this mix for now, we can ask: if the presence of CGI alone is enough, at least in the parameters that I have set, to change the classification of a movie from filmic to dependently digital, does it serve then as an augmentation or as a tainting factor within the otherwise analog system? I propose that the correct answer to this question is “both.”

When digitalization was first introduced into the world of mainstream cinema it was used as a tool for creating CGI, as I’ve noted earlier in discussing such seminal CGI works as TRON. Even TRON, however, is distinct from its successors in certain ways. First of all, there is an interesting and challenging philosophical caveat possible in exempting TRON from being viewed alongside the movies that would follow it. And although I have dismissed much of Prince’s justification for his “perceptual realism” model, I feel that in this case it stakes some claim:

Perceptually realistic images correspond to [the audiovisual experience of three-dimensional space] because film-makers build them to do so. Such images display a nested hierarchy of cues which organize the display of light, color, texture, movement, and sound in ways that correspond with the viewer’s own understanding of these phenomena in daily life. [Prince 32]

Arguing that CGI, when used in accordance with our preconceived notions of our perception of physical reality, can in fact be subsumed into the filmic narrative without much ado provides a good jumping-off point for the analysis of digitally-created imagery. What then for the use of CGI to create false realities, or realities that we as human beings could not possibly understand a priori? TRON, shot on 35 mm, aims to use its CGI for such a purpose. In the (dependently digital) film, the inner machinations of computers and videogames are represented by anthropomorphized “programs”, played by human actors in light-reflective suits. Jeff Bridges’ character, a videogame programmer searching for information on corporate wrongdoings, is sucked into the world of these programs by a laser which is able to digitize physical objects (this plot device in itself being oddly prescient of later CGI applications), and forced to survive inside the world of the computer. Thus, all the CGI used in the movie is used strictly in order to convey this fictional digital space, and therein lies the rub. Does TRON, despite its use of CGI, somehow exempt itself from the new digital theories I have specified a need for, if only because its digital augmentation is used to create a perceptual reality that is in itself digital? Prince would say “yes”, but my answer would be closer to a “maybe”. TRON as counterexample is illustrative of the dilemma at the heart of separating analog from digital application, a theoretical “missing link” that falls into both categories of analysis. Furthermore, all of the lighting effects done within this computer world are produced by actual light on actual film through a cel-animation process. Even TRON’s digital realities are rooted in photographic ontology. The same could not be said for Terminator 2, despite what appears to be an inherent similarity. In Cameron’s movie, CGI is used to create a liquid-metal synthetic robot, a form which allows it to pass through jail bars, etc. While his use of CGI also aims to create a false reality that is in itself digital, the difference lies in the fact that his false reality exists inside of a real one. When the T-1000 interacts with jail bars, for example, there is the jarring mix of an analog object being presented interacting with a digital one — the image-object disrupted by the presence of the image-simulacrum — and we are given the theoretical foothold we need to critique notions of this image’s intrinsic organics. Now, if “TRON’s digital realities are rooted in photographic ontology”, why is it any different from a movie like Terminator 2, where the same appears to be the case? The simple answer is that TRON inverts what would come to be the model for standard CGI-implementation. Rather than imbuing the analog image with the digital (by inserting digital imagery into a filmic frame), it seeks to create the digital from the analog. In supplementing its computer world with the consistent force of ontological light from the physical world (also depicted in the movie sans digitization), TRON is an unclassifiable outlier. If anything, it holds more theoretical kinship with purely digital videos and purely analog films than with any mixed breed I can produce here, because it is the perfect mix between the two.

How does CGI fare, then, in attempting to create or re-create images the human mind could potentially conceive or has actually perceived? In dealing with one of Prince’s examples, Jurassic Park (Steven Spielberg, 1993), we can attempt an answer. Prince notes that “the viewer’s eye is adept at perceiving inaccurate information” and lauds the Jurassic Park animators for “studying the movements of elephants, rhinos, komodo dragons, and ostriches and then making some intelligent extrapolations” [Prince 33] in their creation of the digital dinosaurs. Even though no human being has ever seen a dinosaur, he argues, these images become perceptually realistic because of their adherence to the predetermined rules of physics. I would add that the realism of Spielberg’s digital dinosaurs is heightened by the presence of animatronic dinosaurs, which creates an ontologically-sound version of the same object represented digitally, bridging the gap between image-object and image-simulacrum in the viewer’s perception. Is the CGI velociraptor then effectively subsumed into the narrative by virtue of this merging of technologies? Not quite — while the technical mix of animatronics and CGI is an impressive surface innovation, it does not exempt Jurassic Park as a movie from necessary analysis as a dependently digital film. This does not mean that the presence of CGI dinosaurs somehow taints the celluloid stock, but merely that the movie’s ontological character is compromised, resulting in a film that requires some new sort of theory to classify its syntax.

The issue of audience expectation is also of value — after all, Prince’s primary argument hinges on the spectator having a tacit and pre-established agreement with the cinema that the images they see will be indexical realities. Writing in 1996, Prince could reasonably assume as much: CGI was still a relatively new phenomenon, and viewers were perhaps more easily taken in by digital interference, more willing to accept a CGI T-Rex as a real T-Rex, and thus more susceptible to the influence of perceptual reality. Indeed, wouldn’t all perceptual reality depend on such a contract? Rear-projection, introduced in the 1930s, was implemented on the assumption that the image of a moving road behind actors would be interpreted as genuine, since it adhered to already-understood concepts of how cars work. Today, any filmmaker using rear-projection had better be doing so self-consciously, or their audience will surely call the bluff. The gradual acceptance and cultural critique of the technology behind rear-projection led to the downfall of its use as perceptual reality, and Prince’s parallel argument for CGI has likewise not stood the test of time. Retrospective analysis of digital augmentation grows ever harsher as technology inevitably develops and evolves. Furthermore, the use of CGI to create perceptual realities in dependently digital movies that are otherwise indexically valid will inevitably clash with an audience that expects reality. And a certain degree of reality must always be expected from a movie shot on film, if only via the medium’s organic link. Earlier I asked if “digital had liberated film from its need for realism”. Now I answer — film cannot be liberated, and so it birthed the digital, a surrogate medium unencumbered by realism’s constraints. CGI will always feel phenomenologically awkward in a film, but in a digital realm it is at home — which brings us to the present.

Consider the movies made in the 2000s: the aforementioned Sin City and 300, as well as the newer Speed Racer (The Wachowskis, 2008), which uses digital tinkering to eliminate depth-of-field in order to simulate a cartoon. I have already touched on the fact that 300, being a movie shot on film, falls into the same category as Jurassic Park, accomplishing a task that could not be done as well with purely analog technologies, but not existing independently as a digital work. Sin City and Speed Racer are a different story — movies shot, edited, and in many cases screened digitally, and therefore some of the first examples of what we can dub purely independent digital movies. Robert Zemeckis has also laid claim to this trend, creating an experience out of digital technology in his recent movies The Polar Express (2004) and Beowulf (2007). Both utilize digital motion-capture to paint over digitally captured actors and sets in ways impossible on film — creating a visual style uniquely digital. From Speed Racer to Beowulf, these movies aim (in very different ways) to create entire realities the likes of which cinema, by virtue of its historically analog character, has not yet seen. Whatever the specific reality they aim to create, the difference between these independently digital works and dependently digital works is that movies in the former designation sculpt entire worlds out of digitization, whereas the latter’s entries only seek to augment filmic narratives with CGI inclusion. In short, an independently digital movie has the ability not only to augment our perceived reality, but to encode its own fictional sense of physics, creating an image-simulacrum that is its own phenomenon.

High-definition digital televisions have broadened the landscape on this issue — technologies such as Blu-Ray discs and digital projectors have further digitized and “enhanced” the perceptual realities possible in digital movies. Indeed, these movies see better than the human eye, bringing into crystal-clear detail every aspect, grain, and drop of their picture. This texture- and color-heightening mode removes digital cinema further from the ontological claims on realism Bazin championed, as well as from the visual and cognitive functions I have pointed out, and requires its own model. Director Tony Scott has often used the term “hyperreality” to describe works from his most recent period, such as Man on Fire (2004) and Domino (2005). He coined the term largely to describe these movies’ excessive stylistic sense, but especially in the case of the latter (which, unlike Man on Fire, was shot on digital), it also refers to their tendency to incorporate digital graphic design constantly throughout the narrative, whether digital color grading, text on the screen, or the like. Philosopher Jean Baudrillard uses the same term, “hyperreal”, for simulacra with no real-world referent, such as the semantic elements of the science-fiction genre:

Obviously the short stories of Philip K. Dick “gravitate” in this space…One does not see an alternative cosmos, a cosmic folklore or exoticism, without origin, immanent, without a past, without a future, a diffusion of all coordinates (mental, temporal, spatial, signaletic) — it is not about a parallel universe, a double universe, or even a possible universe — neither possible, impossible, neither real nor unreal: hyperreal — it is a universe of simulation which is something else altogether. [Baudrillard 125]

Baudrillard uses the term “hyperreal” when referring to imaginable but characterized realities, especially within fantastic worlds with their own rules — this would mean that the rules set forth in the footnoted Star Wars world are hyperreal even as they seek to create codes referential to our own reality. Nonetheless, the objects presented in the original Star Wars trilogy (at least when projected on 35 mm) do have real-world referents in an ontological sense — even a Star Destroyer has a real-world referent in that what we see onscreen is the indexical sign of an actual, physical miniature starship photographed in 1977 (even lightsabers are physical, having been produced with real light beaming through film stock) — and so I would hold that these fantastic images, while not adherent to realism as a style (in the sense that they are narrative hyperrealities according to Baudrillard’s line of thought), remain realistic in the ontological sense. I would therefore like to borrow and adapt Scott and Baudrillard’s term (placing it in an ontological context rather than a stylistic or philosophical one) to refer to the virtual aptitudes of digital cinema: if we can claim that an independently digital movie uses its technology to accomplish and realize an entirely new reality wherein ontologism is not a factor (i.e. image-simulacra have no photographable referent), then that movie has created a purely digital reality, a reality unrecognizable by celluloid. If the analog cinema is ontologically grounded in realism, then the digital cinema, devoid of traditional ontology, is grounded in hyperrealism. Whether this hyperrealism is used for the psychedelic surrealism of Speed Racer or merely, as in the case of Collateral (Michael Mann, 2004), to enhance the texturing of night shooting, it is nonetheless independent of traditional visual methodology, which stems from analog cinema.

It is of course possible that this hyperreality could be a passing cycle — a technical genre in itself. When sound was introduced in 1927, Hollywood became a musical-making machine for over a decade; when Technicolor became affordable and in vogue in the 50s, the musicals reappeared to take advantage of the new development. Could movies like Speed Racer, then, be mere examples of a new technological genre cycle (the effects movie) that might peter out in the years to come? It does seem that digital cinema is an anomaly in that it can simultaneously exist as both genre and medium, but this is a misconception. The fad of technology-as-spectacle led to a specific genre (the movie musical) in 1930s Hollywood just as digital technology has led to the effects movie, but the difference lies in the fact that sound technologies were not capable, as digital is, of providing their own audiovisual medium. This makes digital technology the first art form born from the constraints of being another medium’s generic element. If we view this overlap macroscopically, then surely we can suppose that there will be some retreat from effects-heavy blockbusters as digital technology grows older (and establishes itself as its own medium). I still hold that the concept of hyperreality, not as a semantic genre qualifier but as an essential character, remains solid bedrock from which to start viewing the (non)ontological basis of digital cinema.

A final note is needed for this section, on the flexible character of this new hyperrealism. If left to randomize imagery, digital technology can only do so much, unlike a strip of film, which, if left to the elements, is capable of producing an infinitely plastic slew of visual possibilities. Now, in the context of human application, does this mean that the lack of organic endlessness in the digital process limits our ability to create image-simulacra? Surely — precisely because digital is not imperfect, it lacks the power to produce everything. A digitally played song will never shift tempo, accidentally miss a note, or bend pitch, though a live human player might. Nonetheless, this limited character is pragmatically meaningless given the nearly infinite potentialities of pixels, colors, and animations digital can produce. Film, then, has unlimited access to the real world, while digital has nearly unlimited access to the void. While theorists such as Bazin celebrated cinema for being the first medium with only one machine separating us from the real world, we must now account for the digital cinema, where one machine grants us increasingly complete access to the fantastic world. If the CGI T-1000 in Terminator 2 appears “fake” or misaligned with the photographic setting, it is the fault of imperfect filmic realism. If the CGI cars in Speed Racer appear “fake”, it doesn’t matter — they don’t exist in the real world anyway. Independently digital movies shape their own realities and thus function without any referent — giving them no necessary connection to traditional film theories or traditional human ideas of perception.

The future of digital cinema

Now that we have created some semblance of a beginning for a new way of looking at digital cinema, it seems prudent to muse on some possible applications for its future use. I have already explored the idea that digital cinema, as a new medium, holds a capability for fantasy unmatched by analog. In order to fully extrapolate uses for this new medium, however, a syntactical analysis must follow the semantic one established by the concept of hyperrealism. In finding the structure of digital cinema, it is again necessary to compare it to the structure of the analog. Film exists in a constant event-stream of a set 24 frames per second, and unfolds linearly and within a set boundary of time. If a filmstrip is 129,600 frames in length, then its set runtime as an exhibition, projected at the proper rate, is 90 minutes. Digital, while conventionally modeled after this time designation, need not hold such adherence to traditional temporal boundaries. It is by definition atemporal — by means of image-capture, editing, and possibly exhibition. I mentioned earlier the process of image and sound compression, and it is precisely this process of breaking apart the captured event-stream into compartmentalized cells and crunching it down to a set amount of hard or soft drive space that drives such an independence from time. When a digital image is captured onto a miniDV tape, a DVD-R, or a data drive, it no longer exists within a linear scope. It has been manipulated so that all images converge at the same point. When it is played back, the images are reconstructed, like a memory, to play out in sequence. Furthermore, the capture-source need not even be constant, unlike film (a brief sketch of this difference follows the quotation below):

…once a piece of film is exposed, there is no going back. But a digital medium is designed precisely to be used over and over again. Instead of being primed as a tabula rasa, it is preformatted with a rigid structure into which any stored information must be received. A digital medium is an imposing edifice where fresh digits are repeatedly entertained in assigned locations, much as theater seats receive different theatergoers for each performance. [Binkley 110]
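To make the contrast concrete, here is a minimal sketch in Python (the figures are taken from above; the function names are my own invention, and nothing here models actual projection or playback software). A strip’s runtime is a function of its physical length, while digitized frames are addressable data, reachable in any order:

```python
FPS = 24  # standard projection rate for sound film

def runtime_minutes(frame_count: int) -> float:
    """A film strip's runtime is fixed by its physical length."""
    return frame_count / FPS / 60

print(runtime_minutes(129_600))  # 90.0

# Digitized frames are addressable data rather than points on a
# strip: any frame can be reached directly, without spooling past
# the frames that physically precede it.
def timestamp_seconds(frame_index: int) -> float:
    """Seconds into the event-stream at a given frame index."""
    return frame_index / FPS

print(timestamp_seconds(64_800))  # 2700.0 -- a direct jump to minute 45
```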

The static nature of film is left behind, for with a digital medium it is unnecessary even to categorize space. Once again, traditional ideas of physics are abandoned — time and space become malleable, and images can be continually and multi-intentionally manipulated, either automatically by the computer that contains them or deliberately by human intervention. Either way, event-streams no longer need to play out in a physical space (such as a film strip) or within a set time boundary (said strip’s number of frames). The digital event-stream, by virtue of these liberations, is multi-accessible and multi-directional. Youngblood alludes to this possible syntactical change:

Film grammar is based on transitions between fully formed photographic objects called frames. It is done primarily through the collision of frames called the cut, but also through wipes and dissolves. In electronic cinema the frame is not an object but a time segment of a continuous signal. This makes possible a syntax based on transformation, not transition….One can begin to imagine a movie composed of thousands of scenes with no cuts, wipes or dissolves, each image metamorphosing into the next. [Youngblood 28]

Surely the model Youngblood describes here is a possible outcome of digital cinema, and it has in fact been done, showcased in morphing videos such as Pillow Girl (Ronnie Cramer, 2007), which depicts pulp-novel starlets morphing one into another in a surreal flow of drawn femme fatales. Still, I don’t think it’s the apex of digital cinema, merely one possible application. Moreover, what Youngblood misses is the fact that his transformation model, along with his claims that multiple event-streams could fit into one frame, are still subject to the laws of time. The 2000 digital movie Timecode (Mike Figgis) is a clear example, wherein this theory is realized as four separate event-streams play in the four quadrants of the screen, unbroken by each other and unchanging throughout the course of the movie — and yet all bound by the movie’s 97-minute runtime, providing less of a revolution than a gimmick. If digital transcends temporal linearity, then it may not even have to exist within an event-stream, but rather an event-pool: a reservoir of feasible event-streams, all equally accessible and viable. Does this mean that digital cinema isn’t cinema? Not necessarily, but certain extensions of its use certainly do call for an expansion of our narrow-minded views on what cinema can be. If this is true, we can separate the atemporal applications of digital cinema into three separate schools: the passive/static, the active/plastic, and the interactive.
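The event-pool idea can itself be sketched in a few lines (again speculative Python of my own, a conceptual toy rather than a description of any existing format). Where a strip fixes one ordering of segments, a pool treats every feasible ordering as equally available:

```python
from itertools import permutations

segments = ["A", "B", "C", "D"]

# A film strip is one fixed event-stream, traversed in one direction.
film_strip = tuple(segments)

# An event-pool treats every feasible ordering of the same
# segments as an equally accessible, equally viable stream.
event_pool = set(permutations(segments))

print(len(event_pool))                     # 24 feasible streams
print(("D", "C", "B", "A") in event_pool)  # True: reverse play is just another stream
```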

Movies such as Speed Racer are perfect examples of the first model of digital cinema’s atemporality. Still structured to a set runtime, these movies play out in sequence according to their creators’ wishes, and are therefore beholden to traditional modes of exhibition passed down from film. They are, however, changeable. When I put Speed Racer in my DVD player, or even Gone with the Wind, I can choose via chapter stops exactly where I want to start viewing the movie. I can, if I wish, even set A-B parameters and loop the event-stream. Nonetheless, these movies are originally intended to be taken in as unbroken event-streams, and are therefore passively atemporal. Movies such as Exquisite Corpse (David Fishel, 2004), however, expand on the idea of digital manipulation. Fishel’s movie is designed to randomize play whenever it is started, mixing its scenes to constantly create new modes and interpretations for viewing the narrative. It thus creates its own meaning from the information it has been programmed with, lending a certain autonomy to the digital cinema, and is thus active. Exquisite Corpse still runs for a set 60 minutes, but other movies within the active guidelines need not do this. The runtime could vary per viewing as well, or the movie could even generate its own imagery, akin to the Windows Media Player or iTunes “visualizer”. Does this mean that digital cinema runs the risk of eliminating the human agent from the process altogether? Perhaps, because this seems to be yet another area in which digital technology surpasses our own ability to interpret the world — it would be impossible for a human to truly “randomize” length, event-stream selection, or even imagery, and it is therefore the digital technology’s own responsibility to utilize such functions for itself.
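As a toy model of this active mode (illustrative Python only; I make no claim about how Fishel’s disc is actually authored, and every scene name and length here is invented), each start of the program assembles a new cut, and nothing forces the runtime to stay constant:

```python
import random

# Hypothetical scene library: name -> length in minutes.
scenes = {"prologue": 6, "kitchen": 9, "garden": 7, "seance": 11, "coda": 4}

def new_cut():
    """Shuffle a fresh scene order on every playback; sampling a
    subset shows how runtime itself could vary per viewing."""
    chosen = random.sample(list(scenes), k=random.randint(3, len(scenes)))
    return chosen, sum(scenes[name] for name in chosen)

order, minutes = new_cut()
print(order, f"({minutes} minutes tonight)")
```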

Reintroducing the human agent is the final form of digital cinema, and perhaps the ultimate possibility for its real-world application — the interactive. Movies have been playing with this possibility for a while — the DVD for Final Destination 3 (James Wong, 2006), for example, includes a “Choose Their Fate” option, which in actuality amounts to nothing more than a dressed-up scene selection, allowing viewers to make choices during the movie about what will happen to a character and when. Other innovations, like New Line Cinema’s Infinifilm DVDs, allow viewers to access making-of and behind-the-scenes data while the movie is playing. Blu-Ray disc, which is actually capable of managing multiple event-streams within the same frame as Binkley predicted, can play these same behind-the-scenes videos picture-in-picture with the ongoing movie. Finally, the popular children’s book series “Choose Your Own Adventure” has had at least one DVD incarnation — Choose Your Own Adventure: The Abominable Snowman (Bob Doucette, 2006), allowing viewers to use their remotes to interact with the story and shape its outcome. Still, the crucial holdback to all of these extensions of interactivity is its ultimate limit — there are only so many choices, so many outcomes. What medium, then, might extend the potential for interactivity contained in the digital mold further still?
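The limit is easy to see in miniature. In the hypothetical sketch below (Python, with invented scene names; not drawn from any actual DVD authoring specification), a branching story can be walked exhaustively, which means every “outcome” already exists before the viewer ever touches the remote:

```python
# Hypothetical branching narrative: each node names a clip and the
# choices (if any) that lead onward.
story = {
    "start":    {"clip": "opening", "choices": {"left": "tunnel", "right": "bridge"}},
    "tunnel":   {"clip": "tunnel",  "choices": {"run": "ending_a", "hide": "ending_b"}},
    "bridge":   {"clip": "bridge",  "choices": {"cross": "ending_b", "wait": "ending_c"}},
    "ending_a": {"clip": "a", "choices": {}},
    "ending_b": {"clip": "b", "choices": {}},
    "ending_c": {"clip": "c", "choices": {}},
}

def endings(node="start"):
    """Enumerate every reachable outcome: the spectator chooses a
    path through the tree, but never authors a new one."""
    branches = story[node]["choices"]
    if not branches:
        return {node}
    found = set()
    for nxt in branches.values():
        found |= endings(nxt)
    return found

print(sorted(endings()))  # ['ending_a', 'ending_b', 'ending_c']
```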

It is no mistake that videogames have followed the same trajectory as digital cinema. When “Pong” debuted on the market in 1972, it was a simple representation of reality, based on real-world physics and designed to replicate a table tennis match within a computer. Over time, videogames have grown more and more cinematic — in the late 1990s, aided by enhanced digital rendering technology, they were able to include cut-scenes of greater visual quality interspersed in the narrative; referred to as FMVs (full-motion video), these 3D cutscenes enhanced the story and carried scripted dialogue. Games such as “Metal Gear Solid” (1998) and “Final Fantasy VII” (1997) even went so far as to provide a director credit within their end-game credits sequences (Hideo Kojima and Yoshinori Kitase, respectively), and why shouldn’t they? It is true that the narratives of both these games could not exist in an active sense (i.e. could not play out) without the involvement of the spectator, but the directors still had to manage all the semantic and syntactic elements of the experience prior to spectator-consumption. Considering also that both games had the possibility of multiple endings, it could be argued that this sort of “digital cinema” encodes spectator as author — this is partly true, but mostly illusory, as game endings are still predetermined (resulting in highly advanced and more sophisticated Choose Your Own Adventure models, as discussed above). The result is an interactive digital movie, not a spectator-authored medium. It is only when the true infinite possibilities of digitization are carried out that the spectator truly has control — in which sense, there is no ending. Massively multiplayer online role-playing games (MMORPGs) such as “World of Warcraft” accomplish this, giving players the possibility of interacting with millions the world over, thus reintroducing and expanding the playing field of possibilities within a world that has no claim on realism. Binkley adds that “[digital subjects] are also rendered more real (interactive) since they can become aware of your presence and respond to you” [Binkley 113], ending on the note that “digital representation is forcing us to reconsider what is real and what is not” [115]. I responded to a similar critique very early on concerning virtual reality, and I hold to it now — interactive digital cinema/videogames are not real, they are fantastic. But it is this element of fantasy that grants them independence. To say that interactivity challenges our perceptions of reality is a misstep — to say it helps create its own reality (read: hyperreality) is more accurate. Digital does offer a new means of perception, and does challenge traditional concepts of phenomenology (HD televisions are clearer than human vision, time is superseded, etc.), but it does not challenge reality. Film, as a process of chemical and photonic representation, depends on the elements of reality to exist, and therefore holds sway over its domain artistically. Nonetheless, videogames, as an increasingly prevalent force in the world, may overpower the trend of digital cinema use. Either digital effects movies will die out in a genre cycle like the 1930s musical, or they will transcend and embrace their own hypercapabilities (Sin City, Speed Racer). The important question, and one I cannot answer, is whether those hypercapabilities include interactivity. Whether a potential virtual reality will exist, thrive, or collapse on itself is beyond prediction here.

The concepts of interactivity and atemporality will most likely not lead digital cinema to choose one distinct path (passive, active, interactive), but to develop all three and potentially intermingle them. This presents yet another Venn diagram waiting to be pried open in the years to come.

What is digital cinema?

Digital cinema, as a medium, has heretofore been defined by its relationship with film, much as early film theory was defined by its relationship with the theater. Indeed, there was a similar crossover effect between the two movements. Just as many early filmmakers took their cameras outside the theater to produce imagery inimitable by stage sets, early applications of digital technology were primarily used as CGI to supplement imagery otherwise unachievable with filmic technology. Similarly, just as the early semantic and syntactic qualities of cinema were wrapped around theatrical doctrines, so do digital moviemakers now attempt to mimic film history with digital equipment, shooting on digital movies that could just as easily be shot on film, without regard to their new medium’s potential. This adherence to filmic practice and theory will not endure the test of time as the potentialities of digital cinema become more apparent. Furthermore, many previous theoretical attempts to adjust for said potentialities have resulted in radically consequentialist claims debunked by actual practice (Youngblood’s morphing and multiple event-stream models as seen in Pillow Girl and Timecode, Binkley’s notion that interactivity challenges realism, etc.). It is therefore necessary to create a new and adaptable base theory for this new medium, one that aims not to squarely predict where it is practically headed, but to outline upon what theoretical grounds it is established. The best way to do this is to start with digital’s connection to the filmic and work progressively outward.

Digital is distinct from film in four major ways. First, it carries no ontological character — film physically transmits real physical data, whereas digital only reconstructs imagery. The result is an image-simulacrum, an event-stream that cannot exist without the processing capabilities of a digital computer to read and interpret it. CGI implementation within filmic narratives can create a sense of “perceptual realism”, but ultimately the digital image has no referent, and such perceptual realism is therefore an empty placeholder for a greater mismatch. Secondly, film captures imagery in a manner similar to the human eye, whereas digital more closely approximates the machinations of the visual cortex — this means that film is attuned to presentation and digital to re-presentation. This also means that digital cinema is actually a closer fit to the way human beings interpret the world, leaving the filmic cinema as a closer embodiment of actual reality but a less phenomenologically faithful one. Thirdly, since independently digital movies are capable, either by human will or of their own computation, of creating worlds that, like their imagery, have no referent, they hold sway over the realm of the fantastic, leaving filmic cinema to its previous hold on realism. This new breed of representation can be dubbed hyperreality, and is in effect digital’s capacity for simulating the imaginable, if not the actual. Finally, digital cinema has an essentially atemporal character, a quality which allows it to challenge traditional modes of cinema-as-event-stream, and even to incorporate spectator interactivity into its framework, resulting in the videogame.

These four factors add up to a medium that, in a true sense, does not exist. Through digitization, the medium itself carries no hold on realism, and is thus separate from every other plastic art. There is no image, only void; no space, only code; no time, only accessibility; no reality, only fantasy. In this sense, the most accurate way to categorize digital cinema, here and now in the year 2008, is as the manifestation of the unknowable — digital cinema presents the first time that human beings have had the capacity to create art unbound by organic or physical law, the first time we have been able to encode with any accuracy not the representable, but the imaginable. Digital cinema is less a medium for accomplishing creative goals than an artificial creative process in itself. We have created our own “brains in vats.”

Works Cited

Altman, Rick. “Introduction: Four and a Half Film Fallacies.” Sound Theory/Sound Practice. Ed. Rick Altman. London: Routledge, 1992. 35–45.

Baudrillard, Jean. “Simulacra and Science Fiction.” Simulacra and Simulation. University of Michigan P, 1994. 121–127.

Bazin, Andre. “The Ontology of the Photographic Image.” What is Cinema? University of California P, 2005. 9–17.

Berton, Jr., John A. “Film Theory for the Digital World: Connecting the Masters to the New Digital Cinema.” Leonardo. Supplemental Issue. 3 (1990): 5–11.

Bianco, M.D., Carl. “How Vision Works.” Howstuffworks. <http://health.howstuffworks.com/eye.htm>.

Binkley, Timothy. “The Vitality of Digital Creation.” The Journal of Aesthetics and Art Criticism 55 (1997): 107–116.

Figgis, Mike. Digital Filmmaking. New York: Faber and Faber, Inc., 2007.

McKernan, Brian. Digital Cinema: the Revolution in Cinematography, Postproduction, and Distribution. McGraw-Hill, 2005.

“Morning.” Comic Strip. Xkcd. <http://xkcd.com/395/>.

Prince, Stephen. “True Lies: Perceptual Realism, Digital Images, and Film Theory.” Film Quarterly 49 (1996): 27–37.

Punt, Michael. “Digital Media, Artificial Life, and Postclassical Cinema: Condition, Symptom, or a Rhetoric of Funding?” Leonardo 31 (1998): 349–356.

Ramachandran, V. S., and Sandra Blakeslee. Phantoms in the Brain: Probing the Mysteries of the Human Mind. Harper Perennial, 1999.

Tarantino, Quentin. Interview with Nick James. BFI. Feb. 2008. <http://www.bfi.org.uk/sightandsound/feature/49432>.

Willis, Holly. New Digital Cinema: Reinventing the Moving Image. London: Wallflower P, 2005.

Youngblood, Gene. “Cinema and the Code.” Leonardo. Supplemental Issue. 2 (1989): 27–30.

Films Cited

300. Dir. Zack Snyder. 2006.

Beowulf. Dir. Robert Zemeckis. DVD. 2007.

Choose Your Own Adventure: the Abominable Snowman. Dir. Bob Doucette. DVD. 2006.

Collateral. Dir. Michael Mann. 2004.

Domino. Dir. Tony Scott. DVD. 2005.

Exquisite Corpse. Dir. David Fishel. DVD. 2004.

“Fainaru Fantajî VII (Final Fantasy VII)”. Dir. Yoshinori Kitase. Squaresoft, 1997.

Final Destination 3. Dir. James Wong. DVD. 2006.

Forrest Gump. Dir. Robert Zemeckis. DVD. 1994.

Gone with the Wind. Dir. Victor Fleming. DVD. 1939.

Jurassic Park. Dir. Steven Spielberg. 1993.

La Passion de Jeanne d’Arc. Dir. Carl Theodor Dreyer. DVD. 1928.

Man on Fire. Dir. Tony Scott. DVD. 2004.

“Metal Gear Solid”. Dir. Hideo Kojima. 1998.

Monster House. Dir. Gil Kenan. DVD. 2006.

Pillow Girl. Dir. Ronnie Cramer. 2007.

The Polar Express. Dir. Robert Zemeckis. DVD. 2004.

“Pong”. Atari, 1972.

Rear Window. Dir. Alfred Hitchcock. DVD. 1954.

Sin City. Dir. Robert Rodriguez and Frank Miller. 2005.

Speed Racer. Dir. The Wachowskis. 2008.

Star Wars: the Clone Wars. Dir. Dave Filoni. 2008.

Star Wars Episode I — the Phantom Menace. Dir. George Lucas. 1999.

Star Wars Episode II — Attack of the Clones. Dir. George Lucas. 2002.

Star Wars Episode III — Revenge of the Sith. Dir. George Lucas. 2005.

Star Wars Episode IV — a New Hope. Dir. George Lucas. DVD. 1977.

Star Wars Episode V — the Empire Strikes Back. Dir. Irvin Kershner. DVD. 1980.

Star Wars Episode VI — Return of the Jedi. Dir. Richard Marquand. DVD. 1983.

Superman Returns. Dir. Bryan Singer. 2006.

Terminator 2: Judgment Day. Dir. James Cameron. DVD. 1991.

Timecode. Dir. Mike Figgis. DVD. 2000.

TRON. Dir. Steven Lisberger. DVD. Walt Disney Pictures, 1982.

Umberto D. Dir. Vittorio De Sica. DVD. 1952.

Who Framed Roger Rabbit. Dir. Robert Zemeckis. DVD. 1988.

Willy Wonka & the Chocolate Factory. Dir. Mel Stuart. DVD. 1971.

“World of Warcraft”. Blizzard, 2004.

Zodiac. Dir. David Fincher. 2007.
