VFX and the Perceptual Realism of Cinema

Or: how VFX changed the way we think and do cinema, except they did not

Have Digital Visual Effects taken the cinema’s essence and ripped it apart?

“What if cinema’s essence, indeed, is composed of visual effects” (Prince, 2011, p. 227)

In this essay I will examine whether Digital Visual Effects (DVFx) have changed the way we think about, and more importantly make, films. Just think about Gravity (2013), which was shot entirely in a studio using green and blue screens. Aside from the actors, the environment, the scenery, and everything else were created in post-production with DVFx, something that would have been impossible even twenty years ago.

Gravity (2013) breakdown. From top left to bottom right: Final Shot, Photographed Plate, CG Render, and Suit and Wire Simulation.

First of all, I will examine cinema’s past, concisely exploring its history, establishing how films were made before DVFx, and showing what has changed over the years. My analysis will focus mainly on Visual Effects, but I will consider every area of the filmmaking process because, as I will demonstrate, everything has been impacted by cinema’s transition to digital.

In the second part of this essay I will first briefly introduce the history of Visual Effects, then explore their evolution and application, and finally compare the pros and cons of Digital Visual Effects in relation to cinema.

In order to explore and discuss my arguments, I watched and closely analysed the films discussed in the following pages.

To make my range of perspectives as wide as possible, I studied Bordwell and Thompson, Ryan and Lenos, and Bazin, whose work strengthened my understanding of filming techniques.

Furthermore, to acquire the knowledge needed to discuss the ideas and theses examined in this essay, I read authors such as Stephen Prince, who, for example, considers Visual Effects to have altered the way spectators perceive reality in the cinema experience.

My arguments are further supported by McKernan’s research and by Charles and Mirella Affron’s studies, accompanied by Keil and Whissel’s broad exploration of the history of Visual Effects.

Finally, I use the contrasting theses of Geuens and Wood to make my final point and close the essay.

“A movie is a marriage of technique and meaning” (Ryan and Lenos, 2012, p. 1)

In order to understand how VFX have changed cinema, I first need to introduce how films work, briefly explaining the techniques that underpin the ‘Seventh Art’.

When we watch a film, we usually do not think about how it was made, why a particular shot was chosen to be as it is, or why the editing was done in one way rather than another. However, everything within a movie consists of multiple elements deliberately arranged in a particular way so as to resemble real life.

Different characters’ positions evoke different emotions. Empty spaces unconsciously draw the viewers’ attention to wherever the director wants it. Composition helps the viewers understand the events occurring on the screen. For instance, over the course of a story, two characters may grow emotionally closer, which can be portrayed through composition by physically moving them closer together throughout the film.

Symmetry is often used to imply order and sometimes rigidity. In A Clockwork Orange (1971), Kubrick uses “symmetrically framed scenes to imply the shift from a balanced environment to one of chaos” (Janes, 2008).

A Clockwork Orange (1971) frame enlargement. Symmetry here is used to deliver a sense of tranquillity and control of the situation; however, the woman is about to be raped by the protagonists, who are about to break into her house.
A Clockwork Orange (1971) frame enlargement. Symmetry is here used, once more, to deliver a sense of calm and balance, which is, again, brutally interrupted when Alex (Malcolm McDowell) invades this woman’s house and kills her.

Framing “actively defines the image for us” (Bordwell and Thompson, 1979, p. 252). It affects the dimensions and appearances of the image, the way the onscreen relates to the offscreen, how distance, height, and angle are imposed onto the image, and how “framing [itself] can move in relation to ‘mise-en-scene’” (Bordwell and Thompson, 1979, p. 253).

The perception of offscreen space is fundamental to how framing works. As the camera moves, we lose visibility of parts of the composition as they step outside the frame; nonetheless, the viewers know they are still there, simply offscreen. If we imagine the film as located inside a cube, each of its faces is a possible offscreen space, giving the filmmaker six more rooms to work with.

The diagram above shows my representation of the offscreen spaces. As seen, there are six more available rooms: up, down, right, left, front, and back. In the scene from Pulp Fiction (1994), the framing does not show who Samuel L. Jackson is pointing the gun towards; however, the viewer knows someone is in front of him.

As the viewer is unable to see outside the frame and therefore has no knowledge of what could possibly happen in that space, this concept could be used to create surprise, mystery, and much more.

Just like the offscreen space, the frame implies a position from which the image is seen. This position is determined by four factors: the height, the distance, the level, and the angle of the camera.

There is an infinite number of possible angles of framing. The most commonly used, the straight-on angle, places the subject directly in front of the camera, perpendicular to it. The high angle and the low angle can be very useful to show components of the film from another perspective, challenging the viewers’ reactions to what is happening on the screen.

The level’s degree at which the frame is shot “ultimately bears on the sense of gravity” (Bordwell and Thompson, 1979, p. 261) exercised onto it.

Sometimes, the camera is positioned at a certain height, which is partly related to the camera angle; however the camera could be positioned close to the ground, having a straight-on angle, in order to show objects or people lying on the floor.

The last fundamental factor is the distance. A close-up, for example, is often used to “convey a sense of intimacy” (Ryan and Lenos, 2012, p. 52), while long shots are frequently used to show the characters in the environment, making them small and providing a sense of powerlessness.

Although all these features are also present in other arts, like photography or painting, what is specific to cinema is the possibility for the frame to move, producing changes in position, with all its different factors, and in composition during the shot.

Composition is just one of the aspects of cinematography; lighting has the ability to radically change the atmosphere and the impact of a scene. In The Philadelphia Story (1940), for example, “three-point lighting creates the appearance of a ‘normal world’” (Ryan and Lenos, 2012, p. 104), whilst back lighting and low-key lighting are used to deliver a “sense of immoral world”, as seen in Klute (1971) (Ryan and Lenos, 2012, p. 105). In Chinatown (1974), hard lighting creates clarity in the characters, and in Road to Perdition (2002) the lighting is diffuse to produce a sense of threat.

The Philadelphia Story (1940) frame enlargement.
Klute (1971) frame enlargement.
Chinatown (1974) frame enlargement.
Road to Perdition (2002) frame enlargement.

Lighting can be very powerful in guiding the viewers’ attention to certain parts of the composition by having “lighter and darker areas within the frame” (Bordwell and Thompson, 1979, p. 191).

Film lighting has four features: quality, direction, source, and colour.

‘Quality’ refers to the intensity of the lighting, for example the difference between hard and soft light. ‘Direction’, ‘source’ and ‘colour’ are more straightforward. The direction is the path the light takes: it can be frontal, side, back, top or under lighting, or reflected. The source is where the light comes from, which can be natural or artificial; some lights are also used on set as props rather than purely for illumination. Finally, the colour refers to the hue and temperature of the lighting.

Colours are an important part of lighting manipulation as they can amplify the characters’ emotional states. Scenes with warm colours, as Hemphill (1996) suggested, elicit pleasing feelings such as “happiness, joy and hope”. Camgöz, Yener and Güvenç (2002) found that “colours are seen as more pleasant by an increase of both [brightness and saturation]”.

On the other hand, grey, soft blue and gloomy tones evoke negative emotional responses, “such as boredom and sadness” (Grandjean, 1973) and even “hatred and depression” (Dimitrova, Martino, Elenbaas & Agnihotri, 1999).

Light is so constantly present in our daily life that we fail to acknowledge the effects lighting has on our surroundings, and as a consequence, as viewers, we take lighting in movies for granted, “yet the look of a shot is centrally controlled by light quality, direction, source, and colour” (Bordwell and Thompson, 1979, p. 198).

Different composition and lighting techniques are also used to establish the style in which the film is presented. There are innumerable styles; the most common, however, is the realistic style, as if the viewers were watching records of real life through a window.

Expressionist narrative cinema might portray psychological states through the use of high-contrast lighting and strong metaphors. Experimental cinema orbits around regimes of cognition, questioning the habits of perception that keep viewers from a more critical look at the world around them. Mainstream movies, on the contrary, lean towards artificial beauty and pleasant design in order to entertain mass audiences and “reinforce their assumptions about the world” (Ryan and Lenos, 2012, p. 147).

“If the world can be represented in so many different ways, how do we know which images of it are true or accurate?” (Ryan and Lenos, 2012, p. 141)

Whilst painting is an artistic way to explore reality by altering it according to the state of our emotions, “cinema is the projection of reality” (Cavell, cited in Prince, 1996, p. 29), as it automatically replicates everything before the lens. What is shown on the screen is a representation of the real world. The narrative, made of plausible events presented as accurately as possible, fools the spectators, allowing them to forget they are watching a movie.

“We can consider narrative to be a chain of events in cause-effect relationship occurring in time and space” (Bordwell and Thompson, 1979, p. 69)

Narrative can lead the viewer to draw analogies among characters and situations, and to trace parallels between settings and times of day. Narrative depends on time, space, and causality. In any film, characters generate causes and register effects: things happen because of their actions, and they in turn react to the unfolding of events. The spectators are led to understand the characters’ actions and to seek connections between subsequent events, which creates an interaction between the character on screen and the viewer.

The plot, which is “everything visibly and audibly present in the film” (Bordwell and Thompson, 1979, p. 71), can also strive to arouse curiosity in the viewers by showing a series of events that have already begun. This technique is called opening ‘in medias res’, Latin for ‘in the middle of things’. The spectator is submerged even more deeply in the narrative of the film, as the human mind will unconsciously speculate upon the causes that led to the events presented. Usually, the actions that occurred before the beginning of the plot are later revealed or suggested in the course of the narrative.

The exploration of a character’s mentality and the manipulation of the depth of knowledge in the narrative can increase the viewer’s identification with that particular character and create an even deeper connection between the two, consequently creating in the spectator firm expectations about the future actions of the character.

“On the other hand, objectivity can be an effective way of withholding information” (Bordwell and Thompson, 1979, p. 86). The suspense will be much higher if the characters’ mystery remains veiled until the end, when they finally reveal their intentions.

There are infinite possible narratives; historically, however, one main form has dominated fictional cinema: the ‘Classical Hollywood Cinema’. It is a ‘classical’ mode “because of its lengthy, stable, and influential history, ‘hollywood’ because [it] assumed its most elaborate shape in american studio films” (Bordwell and Thompson, 1979, p. 89).

This type of narrative relies on the assumption that individual characters function as causal agents in the development of the action. Oftentimes, desire is the driving wheel of the narrative. The desire will most likely spur the character to embark on a journey, and the progression of the narrative will follow the process of accomplishing that goal. Of course there will always be a counterforce with opposite goals, creating conflict and balance throughout the story.

However, “the drama on the screen can exist without actors” (Bazin, cited in Bordwell and Thompson, 1979, p. 179), which is the main difference between theatre and cinema. Dramatic effect can be heightened by rain in the dark, waves crashing against the cliffs, leaves falling from trees, and so on. Because of this, people can be used in films merely as supplements: giving more importance to the real protagonist, nature, they are oftentimes just extras working as counterpoints to the main character.

It is the setting, then, that plays a relevant role in the movie. To reach the desired result, filmmakers can decide to shoot on location or to create an artificial setting in the studio. Oftentimes miniatures are used, and “part of the setting could also be rendered as paintings” (Bordwell and Thompson, 1979, p. 183).

This means that the setting is not required to be built in its entirety. In creating the setting, ‘props’ (properties) are used and “in the course of a narrative, a prop may become a motif” (Bordwell and Thompson, 1979, p. 183).

We stated before that symmetry is often used in composition to convey certain emotional states, such as calm and control, or rigidity and order; however, symmetry can also be used in the design of the setting. Art direction is crucial to construct “the visual and aural environment of a film” (Ryan and Lenos, 2012, p. 96). In Kubrick’s highly celebrated The Shining (1980), for example, lines in the set behind the protagonist Jack Torrance (Jack Nicholson) are used as visual dividers “to reinforce the idea of a conflict between civility and animality in human life” (Ryan and Lenos, 2012, p. 98), a conflict the film explores through his character.

The Shining (1980) frame enlargement. Lines in the set are used to visually divide the protagonist’s head, in order to reinforce the concept of his conflicted mind.
“Visual effects are sometimes viewed as having taken over Hollywood blockbusters and overwhelmed good storytelling” (Prince, 2011, p. 1)

In order to fully understand all the developments and changes Visual Effects have brought to cinema, I first need to briefly explore the history behind them.

The history of film begins with Thomas Edison’s Kinetograph camera of 1891. Soon after, the Lumière brothers developed the Cinématographe, a clever device that, in contrast to Edison’s single-viewer Kinetoscope, allowed an entire audience to view the moving images. In 1895, they shifted to celluloid film, as it was cheaper, easier to use and more versatile.

At first, the public was intrigued by this new medium and impressed simply to watch moving images rather than the still photographs they were used to, so viewers did not really care about storytelling. However, “it wasn’t long before audiences bored of the simple novelty of seeing pictures move” (McKernan, 2005, p. 8). The French brothers claimed that “cinema [was] an invention without any future” (1895) and refused to sell their camera to the French magician Georges Méliès. Strongly intent on acquiring a film projector, the illusionist travelled to London, where he bought Robert W. Paul’s Animatograph, a similar but less sophisticated machine. Méliès had the brilliant idea of using these ‘moving images’ to create better illusions and magic tricks. Using double exposure, in-camera edits, and slow-motion effects, he was able to show the audience something they had never experienced before, but more importantly he was telling stories.

Special Effects in the silent era can be divided into three periods. The earliest, known as the ‘cinema of attractions’, which started with the birth of cinema in 1895 and ended roughly in 1907, “was marked by experimentation and non-narrative spectacle” (Keil and Whissel, 2016, p. 37). In the ‘intermediate era’, which ran from 1908 to 1913, as film production started to take shape as an industry, effects gradually increased their role in supporting the story. The last, the ‘classical period’, between 1914 and 1927, saw the production infrastructure become standardised, and Special Effects were refined and rethought in order to be part of the narrative and reinforce it.

Whilst first used as mere spectacular displays, between the 1910s and 1920s these ‘trick effects’ “provided moments of magnified sensation while supporting a developing narrative” (Keil and Whissel, 2016, p. 37).

In the classical Hollywood era, effects were often used to avoid the need to shoot on location. This was achieved by creating composite images that combined the main studio shot with a painted, or photographed, background.

Even more than composites, miniatures, as well as glass shots and mattes, “offered an economical means for completing sets” (Keil and Whissel, 2016, p. 70). Miniatures were the most used of all the techniques mentioned above, as they had the advantage of being three-dimensional, thus allowing camera movement, numerous camera angles, and multiple lenses with different focal lengths.

In Sets in Motion, a study by Charles and Mirella Jona Affron, it is argued that sets, understood as locations, paintings and miniatures, become part of the narrative by exceeding their mere background role; “they are stylistically dense and expressive in ways that emphasize time, place, and mood but without insisting upon their own presence as stylistic artefacts” (1995, p. 25).

These techniques allowed filmmakers to push past the restrictions of live filming. This way of using Visual and Special Effects remained the standard until motion control camera technology was introduced, letting VFX artists control the movements of the camera digitally and thus allowing many more effects to be produced in much simpler ways.

As Visual and Special Effects grew almost exponentially, many artists stood up for more recognition, so much so that between the 1960s and the 1970s VFX became more intense and visible within the overall film. While originally style was only what the artist himself did, whether this distinction was made consciously or not, only later did the industry become aware that a “film couldn’t have been made without the unique skills of all the participants” (Geuens, 2005, p. 28).

With 2001: A Space Odyssey (1968) Kubrick revolutionised VFX, introducing a completely new way of using them, creating a new style, and becoming the model for future productions. It took nearly ten years for it to be successfully reproduced. George Lucas’s Star Wars: A New Hope (1977) made clear that it was possible to repeat Kubrick’s visual effects style, and, with Close Encounters of the Third Kind (1977), Steven Spielberg extended this vision of special effects to science-fiction movies set on Earth.

For this revolution of style to happen, Visual and Special Effects pioneer Douglas Trumbull, who supervised the effects of 2001 and Close Encounters, developed the concept of the visual effects “environment as an experience that one feels physically, sensuously, and even intellectually” (Keil and Whissel, 2016, p. 125). This disconnects the audience from the scenery by removing any realistic reference, thus forcing the spectators to contemplate the environment presented in order to fully understand their relationship with it.

The most impressive achievement of Star Wars was the complexity of its visual effects: not so much the individual techniques, which had been used many times before, but the fact that “they had not been deployed at such high concentration” (Keil and Whissel, 2016, p. 123) ever before.

Even though many different approaches to Visual Effects were tried during the 1970s and early 1980s, the style adopted by Star Wars and Close Encounters dominated the industry and proved so influential that, as the transition to digital affected cinema in the following decades, VFX artists adapted old procedures to new technologies and, furthermore, “looked to this era as a model for making environment from the ground to look photographic” (Keil and Whissel, 2016, p. 128), without concern for whether there was any photography in the image at all.

By the early 1990s Computer Generated Imagery (CGI) had been around for some time; however, it was still used mostly for commercials, as the rendering time and processing power required were enormous. Furthermore, filmmakers struggled to understand its real potential.

With the introduction of digital scanning technology, it became possible to digitise film at high quality, manipulate the footage on a computer, and then print it back onto film, ready to be projected.

In the following years, the required processing power and render times dropped by the day; the technology became cheaper and kept advancing, offering much more control over the footage. In addition, improvements to techniques such as colour grading and nonlinear editing meant that any film could be modified in post-production, radically changing, once again, the way VFX were conceived.

This progress was staggering, and increasingly noticeable in the movies themselves. In the 1990s the number of Visual Effects shots in a film was very small, but by the new millennium movies had thousands of them.

With the technology becoming more and more powerful, some films, like Sky Captain and the World of Tomorrow (2004), relied completely on all-digital environments, with actors shot in front of green screens and composited later into the CG scenery. The Lord of the Rings trilogy (2001–2003) diverged from this tendency, emphasising instead the natural settings and utilising practical effects to achieve the desired result. The three films were shot “back-to-back-to-back, utilising a complex mix of actual locations, practical sets, miniature models, and digital set extensions” (Keil and Whissel, 2016, pp. 174–175).

However, the most incredible achievement in Visual Effects “was in the creation of digital characters and fantastic creatures” (Keil and Whissel, 2016, p. 175), especially the massive armies. This historic result was made possible thanks to Weta’s Stephen Regelous, who created the software MASSIVE, with which it was possible to digitally generate masses, armies and background hordes numbering hundreds of thousands.

The Lord of the Rings: The Two Towers (2002) frame enlargement. This epic shot combines miniatures for Helm’s Deep, hordes created with MASSIVE, and real footage of New Zealand’s hills.

Another “audacious step forward” (Keil and Whissel, 2016, p. 176) was using motion capture to create Gollum, the fantastic creature that accompanies the main protagonists on their journey throughout the three movies. Although the technology was still considered ‘new’, even though it had by then been around for over two decades, director Peter Jackson decided to use it, as it allowed the character to bear a closer human resemblance and to carry all the facial expressions of actor Andy Serkis’s performance.

The Lord of the Rings, like 2001: A Space Odyssey before it, became a new model that future films would look back to. Gollum, especially, was a turning point in the use of motion capture. Movies like The Curious Case of Benjamin Button (2008) and Avatar (2009) studied and further elaborated this technique in order to remove all the boundaries between acting and visual effects, so that “actors [became] effects and effects derive[d] from actors” (Prince, 2011, p. 143).

In contrast to the films previously discussed, in Clint Eastwood’s Changeling (2008) “digital effects take a secondary role” (Keil and Whissel, 2016, p. 182). Here, they are not used to create fantastic creatures or magical settings, but rather to recreate locations of the past, in this case Los Angeles in 1928, that would otherwise have been impossible to put on screen. These ‘invisible effects’ are made to be part of the story and setting, without altering them or distracting the viewer. Various images were also cleaned up to remove continuity discrepancies such as modern street signs or electricity wires.

Changeling (2008) frame enlargement. DVFx and software such as MASSIVE were used to recreate 1928 Los Angeles.
“Unreal images have never before seemed so real” (Prince, 1996, p. 34)

Earlier, we stated that photography, whether moving or not, is by definition the projection of reality, as it automatically replicates everything before the lens and thus confers on an object, through its objective nature, “a quality of credibility” (Bazin, 2004, p. 198). However, CGI has made it possible to create fictional objects which, even with no referent in the real world, seem real to the spectators once placed within photographic reality with realistic lighting and surface texture detail, thus challenging the basic concept of photographic realism. The main issue is that, even though digital images are by nature not real, since they are generated by machines, they achieve photographic realism, making the viewers question the realism of all images, even the ones that actually are real. As technology advances, “the line between real and not-real will become more and more blurred” (Byrne, cited in Prince, 1996, p. 31), making us constantly doubt whether what we are seeing is computer-generated.

Nevertheless, despite the innumerable advances in modelling technology, the motivation behind it “appears to have maintained a desire for credibility” (Wood, 2007, p. 48), even in imagined fantastic locations or strange creatures. This credibility relies on the fact that the character gives meaning to the space surrounding it; thus, although the characters may be fantastic creatures, the space they occupy remains recognisable to the viewer.

The rapid advance of digital technologies is not just opening new horizons for Digital Visual Effects but is radically impacting and transforming the entire process of film production, so much so that, already in pre-production, everything is planned around how the DVFx will appear in the shot.

The role of the cinematographer has always been strongly connected to that of the Visual Effects artist, and with the arrival of new technology DVFx have not destroyed cinematography; rather, they have merged with and into it. Numerous aspects of the Director of Photography’s job, such as the colour of the lighting and the final look of the movie, have now moved to post-production. The ability to alter colours and lighting in post has become extremely easy with digital. Before, colour grading a film could require petroleum jelly or glycerine spread on glass placed in front of the lens during the shot, which demanded more time to get the right exposure and more work to find the right shade to match the essence of the scene and the overall movie. Another technique, used during post-production in the lab, involved Hazeltine colour-timed trial prints and a lot of maths to create the desired effect by altering the levels of red, green, and blue light as the negatives were printed. This method was less precise than what can be achieved now with digital post-production, as “it offered little opportunity to fine-tune individual colors” (Prince, 2011, p. 72).
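
To make that contrast concrete, here is a minimal, purely illustrative sketch (the function names and parameters are my own, not drawn from any grading software): photochemical timing amounts to one global gain per colour channel for the whole frame, whereas a digital secondary correction can target only the pixels near a chosen colour.

```python
import numpy as np

def printer_light_timing(frame, red_gain, green_gain, blue_gain):
    """Analogue-style colour timing: one global gain per channel for the whole frame.

    frame is a float array of shape (H, W, 3) in [0, 1]; the gains loosely stand in
    for the printer-light points adjusted when the negative was printed.
    """
    gains = np.array([red_gain, green_gain, blue_gain])
    return np.clip(np.asarray(frame, dtype=np.float64) * gains, 0.0, 1.0)

def digital_secondary(frame, target_rgb, tolerance, gain):
    """Digital-style secondary correction: adjust only pixels close to one chosen colour."""
    img = np.asarray(frame, dtype=np.float64)
    distance = np.linalg.norm(img - np.array(target_rgb), axis=-1, keepdims=True)
    mask = np.clip(1.0 - distance / tolerance, 0.0, 1.0)  # 1 near the target colour, 0 far away
    return np.clip(img * (1.0 + mask * (gain - 1.0)), 0.0, 1.0)
```

The first function can only push the whole image warmer or cooler; the second can, for instance, brighten just the sky without touching skin tones, which is the kind of fine-tuning Prince notes was missing from the photochemical workflow.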

Before the introduction of DVFx, the cinematographer’s role also included the making of Special Effects, whether in camera or with composites in the laboratory. John Dykstra, one of the greatest special effects artists and one of the main minds behind the Special Effects of 1977’s Star Wars: Episode IV, described an effect as “two or more elements of film combined into a single image” (Dykstra, cited in Salah, 2012, p. 115). His description applies as easily to the old Special Effects as to the new Digital Visual Effects.

In a shot from Shutter Island (2010), the real footage of the actors and a miniature model of a lighthouse are added to a digital matte painting of the sky, shore, and sea.

Shutter Island (2010) frame enlargement. The final shot, the upper image, is a composition of real footage of DiCaprio (as seen in the bottom image) with matte paintings of the sky and sea, and a miniature model of the lighthouse (image below).

The process is the same one that has always been used in cinema, just improved by digital media. The scenery presented on screen is rarely the real one, whether it is digitally created, a matte painting, or a miniature model. With the blossoming of the digital era, instead of using painted backgrounds it became possible to composite real footage in post with scenes of the actors shot in front of a green or blue screen. Even more similar is rear projection. This technique, first used in the 1930s, quickly became the most common procedure in the field of SFX, lasting until the late 1990s. It involved the actors standing in front of a screen onto which a reversed image was projected from behind.
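
As a rough illustration of the modern counterpart, here is a minimal green-screen composite sketch (the names, default key colour and threshold are my own illustrative choices, not any production pipeline): a matte is estimated from how close each pixel is to the screen colour, then used for a standard ‘over’ blend.

```python
import numpy as np

def chroma_key_composite(foreground, background, key_rgb=(0.0, 1.0, 0.0), threshold=0.4):
    """Composite a green-screen foreground plate over a background plate.

    foreground and background are float arrays of shape (H, W, 3) in [0, 1].
    key_rgb is the approximate screen colour to remove; threshold controls how
    close to that colour a pixel must be to count as 'screen'.
    """
    fg = np.asarray(foreground, dtype=np.float64)
    bg = np.asarray(background, dtype=np.float64)

    # Distance of every foreground pixel from the key colour.
    distance = np.linalg.norm(fg - np.array(key_rgb), axis=-1)

    # Soft matte: 0 where the pixel matches the screen, rising to 1 where it clearly does not.
    alpha = np.clip((distance - threshold) / threshold, 0.0, 1.0)[..., None]

    # Standard 'over' operation: foreground where alpha is 1, background where it is 0.
    return alpha * fg + (1.0 - alpha) * bg
```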

Matte paintings were also used as invisible effects, not just for backgrounds and locations that were impossible to photograph otherwise. Film director Norman Dawn, while working on a photography project in 1905, discovered the technique of placing a glass painting before the lens to alter “undesirable elements in a scene that was to be photographed” (Prince, 2011, p. 158). This demonstrates, once again, that glass painting, matte painting and digital painting have always been around: the basic idea has remained, while the process has changed and improved.

In 1911, Dawn brought this technique into the filmmaking process and augmented it “by pioneering the matte–counter-matte process used to marry a painting with a live-action film element” (Prince, 2011, p. 158) (original negative matte painting). This technique was much more convenient than painting on glass on set, as the matte painting could easily be done in the studio with the help of the projection of “a frame of film from a live-action scene” (Prince, 2011, p. 158).
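
In digital terms, the principle Dawn pioneered reduces to a matte and its complement selecting which source contributes to each part of the frame; a minimal sketch follows (the function and its inputs are illustrative only, not a description of his photochemical workflow).

```python
import numpy as np

def matte_counter_matte(live_action, painting, matte):
    """Marry a painting with a live-action element using a matte and its counter-matte.

    live_action and painting are float arrays of shape (H, W, 3) in [0, 1];
    matte is a float array of shape (H, W) with 1 where the live action should
    show through and 0 where the painting should. The counter-matte is simply
    the complement of the matte.
    """
    matte = np.asarray(matte, dtype=np.float64)[..., None]
    counter_matte = 1.0 - matte
    live = np.asarray(live_action, dtype=np.float64)
    paint = np.asarray(painting, dtype=np.float64)
    return matte * live + counter_matte * paint
```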

Matte paintings, whether done as glass shots on location or with Dawn’s technique, quickly became standard procedure on cinema sets. Almost always, the set built was just what was strictly necessary for the actors to act the scene; the rest was added later with mattes. There is, once again, a clear similarity with the process that happens nowadays with DVFx; the only difference is the medium used to achieve the result.

Nonetheless, as matte artist veteran Bob Scifo stated, the transition to digital has taken away the emotional bond between the artist and the art. Whilst painters still have this emotional tie to their work, matte painter Michelle Moen observed that working on a computer makes you antisocial: in the past she worked side by side with colleagues such as cameramen and other matte painters, whereas now she is “in a cubicle staring at a monitor” (Prince, 2011, p. 165).

On the contrary, Lev Manovich, author and professor of Computer Science at the City University of New York, maintains that digital painting has brought cinema back to its prehistory, when “images were hand-painted and hand-animated” (Prince, 2011, p. 154).

Whichever side one takes, the digital age has offered more powerful tools. One of the most important is the ability to match colours between the live action and the matte painting. While this was quite difficult with analogue techniques, it is much easier digitally, as the colour can simply be sampled from the plate and applied to the matte. Furthermore, “by merging and dissolving pixels, the blend line between live action and painting can be made invisible” (Prince, 2011, p. 165).
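
A minimal sketch of one way such a match can be made, assuming a simple global statistics transfer (my own illustrative function, not any particular compositing package): the matte painting’s per-channel mean and spread are shifted towards those sampled from the live-action plate.

```python
import numpy as np

def match_colour_to_plate(matte_painting, plate):
    """Shift the matte painting's per-channel statistics towards the plate's.

    Both inputs are float arrays of shape (H, W, 3) in [0, 1]. This is the
    simplest possible global match; it ignores local lighting differences.
    """
    matte = np.asarray(matte_painting, dtype=np.float64)
    plate = np.asarray(plate, dtype=np.float64)

    m_mean, m_std = matte.mean(axis=(0, 1)), matte.std(axis=(0, 1)) + 1e-8
    p_mean, p_std = plate.mean(axis=(0, 1)), plate.std(axis=(0, 1))

    # Normalise the matte's channels, then rescale them to the plate's statistics.
    matched = (matte - m_mean) / m_std * p_std + p_mean
    return np.clip(matched, 0.0, 1.0)
```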

“Digital imaging represents not only the new domain of cinema experiences, but a new threshold for theory as well” (Prince, 1996, p. 36)

Lighting, another aspect of cinematography, is used to enhance the effect of a scene, altering its essence in dramatic ways. DVFx can also be used to change the lighting of a scene in post-production in order to draw the viewers’ eyes to the right place in the composition. Many studies have shown that it is largely predictable where the eyes are attracted and “will go first in a given composition” (Williams, 2006, p. 84). For this reason, post-production artists often use power windows to alter the lighting in a region of the image, reducing or increasing its contrast and thereby steering the audience’s eyes.
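
A rough sketch of the idea behind a power window, under the assumption of a soft circular mask and a simple contrast gain (the function and its parameters are illustrative, not taken from any grading suite): contrast is raised only inside the window, leaving the rest of the frame untouched.

```python
import numpy as np

def power_window(image, centre, radius, gain=1.2):
    """Raise contrast inside a soft circular 'window' to pull the eye towards it.

    image is a float array of shape (H, W, 3) in [0, 1]; centre is (row, col) in
    pixels; radius is the window radius in pixels; gain is a simple contrast factor.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape[:2]
    rows, cols = np.ogrid[:h, :w]

    # Soft-edged mask: 1 at the centre of the window, falling to 0 at its radius.
    dist = np.sqrt((rows - centre[0]) ** 2 + (cols - centre[1]) ** 2)
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)[..., None]

    # Contrast-boosted version of the whole frame, pivoting around mid grey.
    graded = np.clip((img - 0.5) * gain + 0.5, 0.0, 1.0)

    # Blend: graded inside the window, untouched outside.
    return mask * graded + (1.0 - mask) * img
```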

Furthermore, digital lighting can be used to make the blending of an actor with the surrounding scene — created with miniature models or matte paintings, or in the digital era, with Computer Generated Images — more believable and convincing. An example of this technique is shown in King Kong (2005) where Naomi Watts is digitally lit in post as “the lighting on [her] failed to match what was subsequently created for the CG elements” (Prince, 2011, p. 64).

King Kong (2005) frame enlargement. This shot combines numerous Visual Effects: Naomi Watts is shot on a blue screen; the T-rex, King Kong, and the scenery are added to the picture; and finally digital lighting is applied to Watts, as her original lighting did not match that of the environment.

Composition has been impacted by the transition to digital as well. Some filmmakers decide to shoot in 4K in order to have more room to adjust the frame in the edit, with the resulting film delivered in 2K. This technique allows them not to worry about composition during shooting and to focus on directing.
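
The arithmetic behind this is simple; as a purely illustrative sketch (DCI-style resolutions assumed, function name my own), a 2K delivery window can be slid around inside a 4K capture without losing resolution.

```python
def reframe_window(x_offset, y_offset, src_w=4096, src_h=2160, out_w=2048, out_h=1080):
    """Return the crop rectangle for repositioning a 2K frame inside a 4K source.

    Offsets are measured in source pixels from the top-left corner; the result is
    clamped so the delivery window never leaves the captured frame.
    """
    x = max(0, min(x_offset, src_w - out_w))
    y = max(0, min(y_offset, src_h - out_h))
    return x, y, out_w, out_h

# Example: push the framing 300 px right and 100 px down in the edit.
print(reframe_window(300, 100))  # (300, 100, 2048, 1080)
```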

The most radical change to affect cinematography, however, is the ability to create CG cameras. Non-physical cameras made it feasible to have shots that would otherwise have been beyond the bounds of possibility, like extreme aerial shots, tracking shots where the camera passes through walls or objects, and more. In the first episode of the first season of Breaking Bad (2008–2013), for example, the protagonist Walter White (Bryan Cranston) holds up a gun and the camera moves inside it as a shot transition. This “non-dynamic” (Wood, 2002, p. 375) spectacle effect, a visual effect that enhances the narrative by evoking detailed images while remaining in the background, without obstructing the focus of the drama, allowed the director to deliver different emotional states to the spectators by placing them in an unusual position. As this would not be possible in real life, the viewers experience something totally unexpected and beyond their imagination.

Breaking Bad (2008–2013) frame enlargement. While Walter White (Bryan Cranston) holds the gun, the camera moves into it.
“Digital HD isn’t the future of filmmaking, it’s the now” (Rodriguez, cited in McKernan, 2005, p. 122)

The transition from film to digital did not just introduce Digital Visual Effects; it also gave cinematographers real-time feedback on exposure, framing and composition. With it, a shot can be reviewed instantaneously, without having to wait for the negatives to be developed. This saves time not only in processing but, more importantly, in the production itself. With the director and the actors able to see the shot immediately after it is taken, it can be revised and changed if needed: the actors can see themselves and improve their acting; the director of photography can change the lighting or adjust the composition right away.

Some say the transition to digital has taken away the essence of filmmaking, but on the other hand, it surely has made it much easier and quicker, whether this is perceived as beneficial or not. Keil and Whissel argue that digital technologies have increased the control the editor and director have “over performances after they have been recorded” (2016, p. 179).

Jean-Pierre Geuens, on the other hand, claims that the advance of new technologies has given cinema more problems than it has solved. One of the benefits is that, since there is no longer just one single way of doing things, each movie becomes a unique experience, both in the making and in the watching, and it is up to the film crew to capture its essence in the best way possible.

However, the author believes that the birth of new technologies has damaged the artist’s vision, offering many more futile possibilities that radically influence the working procedure. Furthermore, Geuens continues, technology is deceiving: it makes users think they are more independent, that in having more choices they have more freedom, while instead it encloses and reduces them into “virtual capitalists”. This is particularly evident in post-production, especially in editing; computers and new software have given users the possibility to overthink and overdo, something that was not possible when editing was done by hand, one shot at a time. Because users can see all the different scenes next to each other on the computer screen, they are tempted towards a “fragmentary editing”, thus harming the narrative of the film.

Voltaire insisted that “the focus [of a play] must remain in the lines that are spoken, not in the décor” (1989, pp. 14–15), by which he meant everything from the actors to the stage itself. He thought these were necessary but not part of the play’s beauty, which lies solely in the subject.

Aylish Wood, on the contrary, maintains that technology is not the problem, but rather how humans use it. She states that the improvements digital has brought have made it possible to put on screen ideas that would otherwise never have lived outside our minds’ fantasies.

Nonetheless, although Digital Visual Effects have brought many changes to the cinema world, nothing has fundamentally changed. Just like the transition from the silent era to sound, or the introduction of Technicolor, DVFx have given filmmakers more tools and allowed them to express their feelings in many more ways. George Lucas stated in an interview that the storytelling medium has not changed one bit, “the agenda is exactly the same” (McKernan, 2005, p. 31); it has just improved. As I have shown in this essay, Digital Visual Effects artists use the same methods created by Special Effects artists decades ago, only refined, adjusted and upgraded for the present day. Just like Dawn’s matte paintings, today’s effects artists can create objects on a computer and apply them onto, and into when they are 3D, the shot. Just like rear projection, green and blue screens allow shots of the actors to be digitally placed in sceneries where they could not otherwise have appeared. The only difference between today’s techniques and the original ones from the past is the magnified range of possibilities they allow.

Sources:

  • Prince, S. (2011). Digital Visual Effects in Cinema: The Seduction of Reality. New Brunswick, New Jersey: Rutgers University Press.
  • Ryan, M. and Lenos, M. (2012). An Introduction to Film Analysis: Technique and Meaning in Narrative Film. Bloomsbury Academic.
  • Janes, A. (2008). Fearful Symmetry: Symmetry and Architecture in Film. [online] Available at: http://www.tboake.com/madness/janes/index.html
  • Bordwell, D. and Thompson, K. (1979). Film Art: An Introduction. 7th ed. Boston: McGraw-Hill Companies.
  • Hemphill, M. (1996). A note on adults’ color-emotion associations. Journal of Genetic Psychology, 157(3).
  • Camgöz, N., Yener, C., & Güvenç, D. (2002). Effects of hue, saturation, and brightness on preference. Color Research & Application, 27(3).
  • Grandjean, E. (1973). Ergonomics of the home. London: Taylor and Francis.
  • Dimitrova, N., Martino, J., Elenbaas, H. and Agnihotri, L. (1999). Color SuperHistograms for Video Representation. Proc of the International Conference on Image Processing.
  • Prince, S. (1996). True Lies: Realism, Digital Images, and Film Theory. Film Quarterly, 49(3).
  • McKernan, B. (2005). Digital Cinema: The Revolution in Cinematography, Postproduction, and Distribution. New York: McGraw-Hill.
  • Keil, C. and Whissel, K. (2016). Editing and Special/Visual Effects: Behind the Silver Screen: A Modern History of Filmmaking. New Brunswick, New Jersey: Rutgers University Press.
  • Affron, C. and Affron, M. J. (1995). Sets in Motion: Art Direction and Film Narrative. New Brunswick, New Jersey: Rutgers University Press.
  • Geuens, J.-P. (2005). The Grand Style. Film Quarterly, 58(4).
  • Bazin, A. (2004). What is Cinema?. 2nd ed. Los Angeles: University of California Press.
  • Wood, A. (2007). Digital Encounters. London: Routledge.
  • Salah, N. M. (2012). Visual Effects Cinematography: The Cinematographers’ Filmic Technique from Traditional to Digital Era. The Turkish Online Journal of Design, Art, Communication, 2(2).
  • Williams, D. E. (2006). Symbolic Victory. American Cinematographer.
  • Wood, A. (2002). Timespaces in Spectacular Cinema: Crossing the Great Divide of Spectacle versus Narrative. Screen, 43(4).
  • Voltaire. (1989). L’Invention de la mise en scène. ed. Jean-Marie Piemme. Bruxelles: Labor.