Musings on Video’s Beheading

Since 1990, the Hubble Space Telescope has captured the infinite complexity of the universe as stars collide and the universe’s own story rolls into the unknown. The complexity of this great chronicle is what has given rise to the breadth of ideologies and understandings that we know today. Storytelling has always been a matter of repackaging an insight into these beliefs, and while the format of storytelling has dramatically evolved over millennia, the demand for it, hardwired into all of us, has remained and always will remain unchanged.

In modern-day storytelling, video’s moving images and sound provide the most realistic experience when we look to engage with somebody other than ourselves. The scope of video has led to cinema, video games and sophisticated UX systems. This brief list of course ignores the simple electronics that use pixels, in all kinds of aspect ratios, to show anything and everything. Fundamentally though, all types of video are bound by their purpose of storytelling, so that we can engage with experiences and lessons we can’t otherwise receive.

A recent, heavily loaded ontological piece of cinema was Christopher Nolan’s “Interstellar”, which opened us up to questions about deep-space mysteries and our search for answers. The 1968 inspiration for Nolan’s film, Stanley Kubrick’s “2001: A Space Odyssey”, is layered in ambiguous abstractions of existentialism and its partnered cosmological packaging. It comes as no surprise that the fabric of space-time and its vastness has always captured audiences, just as these two films have. The reason for such demand is simple: existentialism is the cradle for every experience we have on Earth. Nor is it a surprise that stories which engage audiences on this level tend to do quite well, both at the box office and in their cultural reception.

2001: A Space Odyssey

From its introduction in the late 1890s, film has risen exponentially to become the king of storytelling, with billions spent yearly on sustaining demand. Since 1995, films have generated from $5 billion to nearly $12 billion per year in ticket sales. Last year, AT&T signed a deal to purchase Time Warner (HBO, Warner Bros., CNN, DC Comics) for $108.7 billion. During 2017 Netflix will spend $8 billion in cash on content, only deepening its $3.6 billion debt.

While this is the franchise era of film, with the top-grossing films since 1998 being adapted works, it is also the beginning of a tectonic shift in storytelling. Today we look to video for stories, whether in a cinema or on a smartphone, but this will soon change. Because new stories will not be shown. They will be experienced.

To understand my definition of an experience that isn’t reliant on video, we need to look into the history of storytelling.

Before the present flag-bearer of storytelling, there was theatre. Before that came writing, then painting, and before them all, oral storytelling. But how were the first stories told? In your mind you might be envisioning a Medieval town, or perhaps an older setting, Ancient Greece or Rome. But even that guess falls embarrassingly short.

The “Bhimbetka Petroglyphs” date from approximately 290,000–700,000 BCE. Discovered in 1990, these simple holes hammered into a rock face in ancient India mark the oldest known example of storytelling.

Bhimbetka Petroglyphs

To put this time period into perspective, between 125,000 and 60,000 years ago vitamin D depletion among populations migrating away from the tropics prompted lighter skin pigmentation. Mitochondrial Eve, the most recent woman from whom all living humans descend along the maternal line, is estimated to have lived only 152,000–234,000 years ago. In fact, the hominins who created the carvings in ancient India were most likely Homo erectus, not Homo sapiens, the classification for modern humans.

If this seems vast, consider that if the universe’s history were laid out over a calendar of 365 days, with midnight on the final day marking the present, then anatomically modern humans would only appear in the last few minutes before the midnight fireworks, and all of recorded history in the final dozen or so seconds.
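
For anyone who wants to run the numbers behind that analogy, here is a minimal sketch in Python. It assumes the standard 13.8-billion-year estimate for the universe’s age; exactly how many minutes or seconds you land on depends entirely on where you choose to start the human clock.

```python
# A back-of-envelope sketch of the "cosmic calendar" analogy: map an event's
# age in years onto seconds-before-midnight on a 365-day calendar that spans
# the universe's entire history (assumed here to be 13.8 billion years).
UNIVERSE_AGE_YEARS = 13.8e9
CALENDAR_SECONDS = 365 * 24 * 60 * 60  # seconds in the full 365-day calendar

def seconds_before_midnight(years_ago: float) -> float:
    """How many calendar-seconds before the final midnight an event falls."""
    return years_ago / UNIVERSE_AGE_YEARS * CALENDAR_SECONDS

print(seconds_before_midnight(5_000))    # recorded history: ~11 seconds
print(seconds_before_midnight(300_000))  # Homo sapiens: ~690 seconds (~11 minutes)
```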

Storytelling is so deeply rooted in us that we cannot envision a world without it. When we do, it’s not just a dystopian world, it’s apocalyptic. Ray Bradbury’s “Fahrenheit 451” imagines such a world without books, set against a backdrop of looming nuclear warfare.

Stories aren’t found in just books or great films like “There Will Be Blood”. They’re social engagements, ideas and beliefs. Ironically, without these, there’d be no foundations for books or films to even exist.

The real stories are the ones we experience outside of the book, or on the way to and from the cinema. Real stories live outside the screen.

And this leads us to an incredible idea — the next frontier of storytelling. If we consider video, in all its expressions from cinema, video games, apps, web content and so on, then what is the next stage?

We arrive at a theory of mine called the Diamond Theory. It describes how origination and innovation exist in a pattern of expansion and rarefaction. To explain the theory we’ll explore the development of colour grading technology and its relationship with video experiences today.

Film’s journey from its simple, chemistry-driven beginnings to its dependence today on millions of transistors to colour grade images is nothing short of magic. Although the first discussions of ways to capture light are recorded from the 400s BCE, the first camera as we understand it today came from the late 1830s. These early cameras depended on delicate chemical processes; a silver-plated copper surface was treated with iodine vapour and then developed with mercury vapour and sodium chloride once light had hit the surface. Over the next few decades, this alchemy-driven process was tweaked and refined, giving birth to photographic film.

Analog photography still uses darkrooms to develop photographs, using refined techniques that have evolved from the primitive (by comparison) chemical processes discovered during the 19th century.

Filmmakers and photographers would “make” their images during the development/post-production process. If neurosurgery is a medical art, then the technicians who operated on the tiny light-sensitive silver halide crystals on the surface of film were chemical surgeons who quite literally created art.

It wasn’t until the Coen brothers’ “O Brother, Where Art Thou?” in 2000 that chemical processes began to be made redundant with the introduction of the digital intermediate. Films could now be edited quickly on a computer, and filmmakers no longer had to wrestle with reels of film. To put that challenge into perspective, Francis Ford Coppola’s “Apocalypse Now” shot 457,200 metres of film, took two years to edit and went through four editors. By comparison, most Hollywood films today are edited within six months and use only one editor.

“If I say it’s safe to surf this beach, Captain, then it’s safe to surf this beach! I mean, I’m not afraid to surf this place, I’ll surf this whole fucking place!”

The overwhelming practicality of colour grading on a computer, against the tedious chemistry needed to treat those reels (most films are shot at 24 frames per second, so a typical 120-minute film has 172,800 frames), gave way to the rise of digital cameras, and film stocks were no longer sex symbols.

Give Moore’s law, the observation that the number of transistors on an integrated circuit (microchip) doubles roughly every two years, enough time to run its course, and it becomes possible for this colour grading to take place on our smartphones. We take for granted the technological foundations of Instagram and Snapchat, the latest flag-bearers of video experiences, and yet they mark only an evolutionary step in how we experience video today.
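
As a rough illustration, and only an illustration, Moore’s law can be written as a simple exponential. Here is a minimal sketch in Python, assuming an idealised two-year doubling period anchored to the Intel 4004’s roughly 2,300 transistors in 1971; the anchor point and period are assumptions for the sketch, not figures from this piece.

```python
# A minimal sketch of Moore's law as an exponential, assuming an idealised
# two-year doubling period. Real transistor counts have drifted around this
# trend over the decades, so treat the output as illustrative only.
def transistors(year: float,
                base_year: int = 1971,
                base_count: int = 2_300,
                doubling_period: float = 2.0) -> float:
    """Predicted transistor count in `year`, anchored to the Intel 4004's
    roughly 2,300 transistors in 1971 (a commonly quoted starting point)."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

print(f"{transistors(2017):,.0f}")  # ~19 billion, roughly the scale of the largest 2017 chips
```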

What’s fascinating is that during the development of colour grading processes, the millions of adjustments and refinements made since the first pinhole systems have brought about millions of other products. New industries have been created as others become redundant, stale and ultimately useless.

While films seemed to be the only ones on the receiving end of these massive technological advancements, the video game industry was also able to take advantage of technologies that had mostly been associated with film.

With Moore’s law, the rise of the television in households, and improved power grids and electrical infrastructure to support increasing demand, more and more waves were being made across the landscape of not just video, but technological innovation as a whole.

I should note clearly here that ascribing these changes to video alone is somewhat incorrect. The demand for video was certainly responsible for the rise of Hollywood and the television; however, technological innovation, as always, works in a give-and-take relationship.

Let’s now return to the Diamond Theory, having just seen the rapid technological development of video experiences. The Diamond Theory describes how an idea expands so that other ideas can undergo a similar expansion of their own. The original idea, having given rise to newer ideas, in turn undergoes self-destruction, as every idea should. Great ideas should be suicidal in nature. Their passing means they are no longer relevant, precisely because of the newer knowledge they made possible.

As we’ve seen with film, from its early roots in the 1830s and its alchemical transaction between electromagnetic waves and silver molecules, video has grown to the point where complex computer processors let us edit our videos on nothing more than a high-functioning smartphone.

These technological achievements have seen the rise and fall of great companies. Kodak, which put the first mass-market camera into consumers’ hands in the late 1800s, ultimately filed for Chapter 11 bankruptcy in 2012 after it refused to adapt to the demand for digital cameras. What makes this example even more extraordinary is that Kodak invented the world’s first digital camera in 1975. Most “camera” companies today, the likes of Canon, Fujifilm and Nikon, generate most of their sales from industrial products such as large printing/scanning presses and medical imaging devices. Today, Canon’s annual profit comes mostly from those industrial products, with digital cameras accounting for less than a third.

You might think of DSLRs when you hear “Canon”. Today its shareholders think of printers.

This reflects the technological prowess of modern consumer-electronics companies. Apple’s iPhone not only beheaded these camera companies; it savaged the music industry and the general electronics makers that relied on devices like pocket calculators, and Apple is even rumoured today to be considering a purchase of The Walt Disney Company.

Apple’s rumoured acquisition of Disney underlines the principal concept here — that ideas expand, creating new ideas that ultimately force the contraction of the original idea.

Video began as a simple chemistry experiment, it created Hollywood and television, and it is now the backbone of the UX systems on all our smartphones. If we examine Apple’s history with video, it’s interesting to note how graphical interfaces (first commercialised on the Lisa and the Macintosh) were so key in opening the personal computer industry to the masses.

When Jobs took “inspiration” from Xerox’s PARC lab for a graphical interface on Apple’s new computers and later realised Gates was doing the same, he called Gates a thief. Gates responded: “Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbour named Xerox and I broke into his house to steal the TV set and found out that you had already stolen it”.

When we think of Apple today, we are mostly drawn to conclude that its distinction comes from its design, led by Jony Ive, the designer of the iMac, the iPod, the iPhone, the iPad and the Apple Watch. We can picture the beautiful metal-and-glass bodies of these devices, but of equal importance is the UX. Apple’s prioritisation of design has seen Ive rise through the ranks; he is now Apple’s Chief Design Officer, overseeing both hardware and software design choices.

An entire book could be written on the design operations of Apple and the bitter rivalries between past and present power players such as the fallen Scott Forstall or Jon Rubinstein, but we don’t need to delve too deeply into this. We just need to appreciate that design at Apple matters because it’s the hypnotic way users engage with Apple’s products that has been truly responsible for Apple’s success.

Apple’s financial success has, of course, largely been attributed to selling over 1 billion iPhones. Such profit has allowed it to consider something incredible: purchasing The Walt Disney Company. That creative nuclear engine has given audiences Mickey Mouse, “Snow White and the Seven Dwarfs”, “Alice in Wonderland”, “Peter Pan”, “The Jungle Book”, “Mary Poppins”, “The Lion King”, and more recently the Marvel movies, the revamped Star Wars series and Pixar films.

Standard Oil, the company that saw John Rockefeller’s wealth soar to the equivalent of over $300 billion today, and Microsoft, whose co-founder Bill Gates rose to the title of the world’s richest man, have both faced monopoly charges from antitrust regulators because of their scope of influence. By comparison, the indirect influence Disney can leverage is incalculable.

Such an acquisition makes sense to Apple in one respect: all of this video is today experienced mostly on smartphones. While we previously noted the rise in box office sales, we also need to look at the exponential rise of the smartphone. In 2007, smartphone shipments numbered 122.3 million. In 2016, they numbered 1.495 billion.
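
To put those two endpoints in perspective, here is a quick sketch of the compound annual growth rate they imply, taking the figures above at face value (market-research numbers naturally vary by source).

```python
# Compound annual growth rate implied by the two shipment figures quoted above
# (taken at face value; different research firms report slightly different totals).
shipments_2007 = 122.3e6   # smartphone shipments in 2007
shipments_2016 = 1.495e9   # smartphone shipments in 2016
years = 2016 - 2007

cagr = (shipments_2016 / shipments_2007) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 32% growth per year, sustained for nearly a decade
```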

It’s hard to find the motivation to expand further on this point because it’s just so overwhelmingly obvious. The smartphone is literally the cornerstone of modern living.

And so in turn — the smartphone is the portal into modern video storytelling.

As we have seen, video has expanded from its chemical roots, and with the added acceleration of Moore’s law (which, it should be noted, describes exponential rather than linear growth), it has grown into the broad and dynamic existence it holds today.

Moore’s law — the trendline showcasing the predicted trajectory of the number of transistors in a circuit as a function of time.

Apps are a form of video experience, and so are smartwatches. Engineers today are pushing technology to further boundaries through virtual and augmented realities: immersive, world-building processes that see film and video games meet at a crossroads.

You may be thinking that this crossroads marks the contraction point in the Diamond Theory. Perhaps the crossroads you imagined matches the converging edges of a diamond, and so you concluded that this is where video is headed.

Hopefully such a coincidental, almost arbitrary resemblance between geometric shapes has had little bearing on your judgment, because that conclusion misses an important point about the Diamond Theory: the contraction doesn’t come from existing forms of video converging, it comes from the original idea being superseded altogether.

What supersedes it will be the merger of content and projection technologies. We can see this in its rudimentary form in Apple’s consideration of purchasing Disney. However, it’s hard to predict exactly what this will be, as we have a proven track record of being terrible at forecasting tomorrow. But we shouldn’t be too harsh on ourselves.

We only have to look to nature to appreciate the level of abstraction that can take place. A flower may look pretty to you, but it’s merely a mating signal to bees. The brighter the colours, the greater the chance of bees not only spotting the flower but also sustaining its livelihood. What’s fascinating here is that this requirement not only saw bees with better vision survive; birds adapted too.

The hummingbird’s unusual hovering ability marks a significant departure from typical flight mechanics among birds. By rotating their wings, hummingbirds can generate lift on both the upstroke and the downstroke, and consequently hover at flowers while they extract nectar with their beaks, a trait uncommon among other birds.

It was partly by studying this kind of hovering flight that aeronautical engineers found inspiration for aircraft like the Bell Boeing V-22 Osprey and the Harrier Jump Jet.

Such a change is far from how we’d presume the relationship between these species and flowers would play out.

Hence, dramatic changes to video of the kind proposed in this piece are so difficult to foresee. In its first principles, video is a fixed system. It captures moments and screens them to us. For video to die, the next frontier can’t be a reactive system. In other words, we watch videos today: an inherently time-based relationship between the user and the image. Everything we watch, we react to. We can never be ahead of a “video”.

On 1 April 1995, the Hubble Telescope captured the “Pillars of Creation”: vertical columns of interstellar gas some 6,500–7,000 light years from Earth, providing another line in the universe’s history book.

“Pillars of Creation”. In an ironic twist, while these columns of interstellar gas are giving birth to new stars, nearby newborn stars are flooding the gas with UV radiation, stripping it of its electrons and in effect handing the columns a death sentence. And since these gases sit 6,500 to 7,000 light years away, what we see is thousands of years old; the pillars may well no longer exist.

Storytelling has always been a retrospective game, and one always bound to the technological abilities of its time. The petroglyphs from ancient India were hammered in with simple tools. Today, films are bound to the computing prowess of the graphics cards used to render them and to the number of pixels available on a smartphone screen.

The beheading of video will be swift, just as Apple’s iPhone was swift in scorching whole industries. While the iPhone led us to new video experiences, the end of video will lead us to a new world.

What’s incredible is to think of what this world will look like. Because the end of video means the end of storytelling as we know it.

Storytelling today means the world to us.

Storytelling tomorrow will be about making worlds.
