Categories of Emerging Media

Kamal Sinclair
Published in Vantage · Feb 8, 2018 · 46 min read

Editor’s note: This is part of the series of articles by transmedia producer Kamal Sinclair, from her research project on social equality in emerging media, Making a New Reality.

Alternate Reality Gaming/Escape Rooms/Immersive Theater/Cosplay

I want to advocate for the live experience. The net-native populations are actually hungry for live experiences. When digital is paired with the live, it’s the most powerful. It’s all about the communal experience and that can’t be replaced by any kind of media. — Jennifer Arceneaux, Brand and Media Strategist

Alternate reality games, escape rooms, and immersive theater all provide live experiences that overlay a storyworld onto the physical world and allow audiences to play with and perform alongside the storyworld. Usually some kind of game design is employed to motivate audiences through an experience. The Quinn Experiments, Pandemic, and Sleep No More are examples of well-known immersive experiences that have changed the way we interact with and participate in theater.

Immersive theater: Recently, The Willows, based in Los Angeles, has created an immersive combination of interactive and site-specific theater. “The rise of escape rooms and immersive theater, call it a trend if you want, but I think what VR is doing for film and TV, that’s what this space is doing for theater,” producer Justin Fix told the Hollywood Reporter. Amidst this period of technology-based storytelling, there is also an opportunity for creators to heighten the focus on the audience. Fix went on to say “Also, we have people put down their cellphones. It’s an opportunity as a society — we’re always wanting more and more, we want the extremes, we want it all — to have vulnerable, real experiences. People are craving them.”

Installations: The appetite for this kind of high-touch, context-rich story experience is affirmed by the massive popularity of immersive fan exhibitions, and the continuation of transmedia franchises, such as Skybound’s digital-to-live campaigns for The Walking Dead and SyFy’s 10-day immersive experience for The Magicians.

Alexandra Shapiro, executive vice president of marketing and digital for SyFy, said the experience was the network’s way of bringing the world of the show to the masses. As Shapiro told a reporter for BizBash, “With The Magicians, we are able to make the themes of fantasy and illusion come to life, providing a truly unique, interactive, and immersive storytelling experience. We wanted to provide both fans of the series and new viewers the opportunity to transport themselves into the world of the show, channeling their inner magician.”

Like the immersive experience created for The Magicians, creators are finding extraordinary, artful ways to connect audiences with story. Projection mapping combines technology and story to achieve transcendent and cathartic experiences in works such as Heartcorps: Riders of the Storyboard, which premiered at the Sundance Film Festival in January 2017. Projection can be layered with interaction and engagement to elevate the immersive nature of the art.

Cosplay: Similarly, cosplay has become a way for audiences to connect with story on an emotional level. While the act of dressing up allows a person to pretend to be a character, cosplay pushes embodiment further and allows fans to become the character and join the story in a way that feels real.

“Cosplay has switched from just being a convention-going fan expression to a full blown medium and industry. Cosplayers consistently come up with their own character “mashups” and remixes to create a unique expression that is literally embodied by the cosplayer as a performance. Conventions offer a safe space, but increasingly this is making its way into individual performance. Cosplayers have even had games (for instance) written around them,” says Joseph Unger, the founder of Pigeon Hole Productions.

The effect of cosplay is that anyone is welcome to join a storyworld. “Cosplay is something to take more seriously as a form of audience co-opting a world into public space and democratizing story. The knock-on effects can be seen in Marvel’s switch to more inclusive characters. Cosplayers are also disproportionately marginalized communities. Aspergers, LGBT, social anxiety, veterans,” says Unger. The impact of this industry and fandom has changed the way studios and creators think about the worlds they are creating. Storyworlds expand and live on through cosplayers. Once a bleeding-edge hobby for fans, cosplay has become the norm at major conventions such as Comic Con and AnimeCon, with competitions that showcase attention to detail, embodiment of the character, and innovative design.

Augmented Reality/Mixed Reality

Augmented Reality (AR) and Mixed Reality (MR) overlay digital content on the physical world by allowing users to look at the world through a smart tablet, phone, or head-mounted display (i.e., Magic Leap, Microsoft Hololens, Meta, or Google Glass).

“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe, rather than forcing our brains to adapt to a more limited technology,” Magic Leap President and CEO Rony Abovitz told the Harvard Gazette.

HMD AR/MR as a creative medium: Head-mounted display (HMD) AR, also known as mixed reality, is in its infancy. The majority of headsets are still tethered, with low resolution and a limited field of view. There is still a steep learning curve for audiences exploring HMD AR.

Even with these limitations, artists and designers are finding compelling reasons to explore this new addition to our human communication architecture. Meta’s Journey to the Center of the Natural Machine was released at the 2017 Sundance Film Festival and showcased the power of multi-player AR interaction. In the experience, two users are able to interact with a story together and see the same hologram of the human brain. Meta’s piece hints at the possibilities of interactive and collaborative AR within the fields of science, business, education, and engineering.

Another AR project of note at the 2017 Sundance Film Festival was director Melissa Painter’s Heroes: A Duet in Mixed Reality. The project harnesses the storytelling power of the Microsoft Hololens, the first and only untethered AR HMD. Painter’s AR experience offers a glimpse into how HMD AR can achieve beauty and deliver unique stories and content that goes beyond novelty.

The Developers Edition of the Microsoft Hololens shipped on March 30, 2016. The Hololens is untethered with a contained computer system that allows the user to move freely through space. The Hololens projects holograms into the space around users through holographic lenses and sensors. Spatial sound increases the user’s feeling of presence. Through a combination of gaze, voice, and gestures the user can interact with and manipulate the holograms. Of note is the Hololens demo Fragments, which allows the player to solve an interactive mystery in their own space with life-sized characters.

Marjorie Prime, a film created by 8i, a Los Angeles-based startup, premiered at Sundance Film Festival in January 2017. The film tells the story of a family that uses an AI-enabled hologram of a dead family member to deal with their loss. Although this seems very futuristic, the underlying technologies already exist in the form of volumetric capture tools, natural language recognition software, and machine learning, to name a few. In fact, New Dimensions in Testimony essentially provided the proof of concept in late 2016, and the Marjorie Prime team unveiled a functional Jon Hamm hologram at its Sundance premiere party.

Tablet/Phone Delivered AR: Augmented Reality delivered via a smartphone or a tablet is currently the only accessible AR for the mass market, as HMDs are still primarily targeted to developers. In fall 2017, Apple made a push to make AR mainstream by sharing augmented reality demos for the new ARKit platform, the A11 Bionic chip, and the new powerful camera on the iPhone 8 and iPhone X.

All of a sudden, Apple’s millions of users will have AR accessible in the palm of their hand. Google announced ARCore in 2017 as a competitor to Apple’s ARKit. Recently, Samsung and Google announced a partnership to bring the ARCore framework to Samsung Galaxy phones. Google had previously experimented with AR through Tango; however, it was limited to the two devices that had Tango capabilities. The Google and Samsung ARCore partnership ensures that, like Apple’s ARKit, AR tools are in the hands of millions of Android users.

AR has been on the brink of mainstream use since coming onto the market. Since 2009, AR apps have been used for marketing (Blippar campaigns), cultural tours of places and museums (Museum of London), viewing constellations (Google Sky), applying makeup (ModiFace), educating children (Happy Atom, Mardles), and understanding other languages (Google Translate).

In July 2016, Niantic released its hugely popular game, Pokémon Go. With 500 million downloads in its first month and continued engagement globally (4.2 million South Korean users played the game in a single day), the app has shown just how popular the medium can be. Niantic Labs will follow up Pokémon Go with Harry Potter: Wizards Unite, an AR title set to launch in 2018, co-developed by Warner Bros. Interactive and its new sub-brand Portkey Games.

The Harry Potter game utilizes Google Tango, a breakthrough for AR technology. It allows your phone to go beyond just overlaying digital content onto the real world at a specific geographic location. Tango-enabled phones can map every physical object in their view and match content to the actual dimensions of the environment. It’s like having a real-time projection mapping device that can go anywhere without needing a projector.

“AR Games on Tango aren’t like AR games on existing phones,” writes Nick Sutrich for Android Headlines. “These games actually map out virtual space in the real world and allow you to walk around in them, with fully realized spatial movement and object orientation. Tango devices can also measure correct distance and surfaces, allowing apps and games to accurately map scale and size in virtual space. All this comes together to mean that you’re actually interacting with virtual characters and places that exist in real space, unlike the Pokemon you’ll see floating ‘in front’ of your phone in games like Pokémon Go. If Pokémon Go were a Tango-enabled game, you wouldn’t just see the Pokémon placed in an image of what your phone’s camera sees, it would actually be hiding in the bushes right next to you, waiting for you to find it.”

This new platform for tablet-based AR will exponentially improve efforts such as Outthink Hidden, an AR app based on the film Hidden Figures. The app launched at CES 2017 and was critiqued for its reliance on the antiquated QR code AR model. The project was noteworthy, though, for its celebration of diversity. It allowed users to discover and learn about black female doctors, engineers, and scientists, as well as other marginalized individuals who have shaped history.

Mira’s $99 wireless headset combines the comfort of an HMD with the power of the iPhone, and allows multiplayer AR interactions through Bluetooth. Rachel Metz, a senior reporter for MIT Technology Review, draws attention to the major problem with phone-based AR: the lack of quality content.

An interesting innovation in the works is providing micro-vibrations, or haptic feedback, to a device displaying digital content in AR.

“Essentially, we’re looking at different frequencies of vibration, which represent a different way that an object can move. By identifying those shapes and frequencies, we can predict how an object will react in new situations,” Abe Davis, the lead researcher on a related MIT project, told Digital Trends.

Other examples of tablet- or phone-delivered AR include: Priya’s Shakti, Future Delta, Ingress, Maguss, VR Rave, and Hidden Cash ARG.

Other AR forms: Large-scale, screen-based AR has been used in marketing and entertainment, like Pepsi’s famous Bus Shelter stunt in 2014. Holograms that do not require a phone, tablet, screen, or HMD might be the real future of AR. Some technologists are working to create hands-free, non-HMD holograms, such as the HoloLamp. Others are designing “audio AR,” an interesting new term for products that use the geo-locative features of a device, in concert with a suite of other hyper-specific metrics and tools, to provide customized auditory content that overlays the real world. Artist Zach Lieberman was recently featured in Wired for using Apple’s ARKit and OpenFrameworks to create a real-time sound map and visualize sounds in AR. The process uses the iPhone’s sensors and camera to create a map of a room’s shape, and, from there, create bursts of sounds in front of the user based on the noises they make. While rudimentary, Lieberman’s experiment hints at the future of sound, music, and story in AR.

Bio-responsive/Bio-connected Story

Bio-responsive or bio-connected works use biometric technologies in story experiences to create connected or transcendent experiences for audiences. Some examples include The Ascent, UKI, and My Sky is Falling. In fact, one of the major tech trends in Japan is Superhuman Sports. Its practitioners are pushing the boundaries of technologies that enhance our senses and capabilities, including exoskeletons, spidervision (a 360-degree field of view presented within a 180-degree view), peripheral vision, haptic feedback, equilibrium control, muscle remote control, emotional expression sensors, overall augmented eyewear, and more.

With the proliferation of wearable devices and smartphones that feel like natural extensions of our hands, some argue that humans are becoming post-biological. There are actually cyborgs and bio-hackers/grinders who have committed to a physiological relationship with technology (i.e. Wafaa Bilal or Neil Harbisson). Other people are using synthetic biology, wetware and other integrations of technology and organisms or enhanced humans as media.

Finally, the cutting edge of this category manipulates biological matter to perform like software and hardware.

“I would urge foundations to think about emerging media, probably in a less traditional way than other media producers might generally think of it,” says artist and scientist Britt Wray. “Because I do work in the scientific realm and there are a lot of storytellers, artists, and designers who actually consider biology itself to be a medium called ‘wetware.’ There is a whole kind of art realm of doing wetware with biohacking or bioengineering and people even storing digital information that would encapsulate the binary code needed for a story, similar to a digital document like an ebook, but combining all of that information and translating it into DNA and storing it in the biology medium of the DNA molecule itself. So you can basically store thousands of terabytes in just a few drops of DNA, which scientists are starting to do now. And there have been some wild demonstrations of how much you can store, like a whole book, a whole archive of photos and several films in this one drop of DNA.”

Innovators in this category are creating “living world material” libraries full of substrate tools for making bioart. Materials such as DNA molecules can become a source of sustainable data storage for digital work by mapping the four nucleotide bases (A, C, T, G) to the 0s and 1s of binary code, which could help resolve the energy consumption issues of our current server-dependent infrastructure.
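As a rough illustration of that mapping, here is a minimal sketch assuming a naive two-bits-per-base scheme; real DNA storage systems add error correction and sequence constraints that this toy version ignores.

```python
# Simplified sketch: pack binary data into a DNA-style sequence by mapping
# each 2-bit pair to one of the four nucleotide bases. Real DNA storage adds
# error correction and avoids long runs of the same base; this shows the idea.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Translate each byte into four bases (2 bits per base)."""
    sequence = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            sequence.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(sequence)

def decode(sequence: str) -> bytes:
    """Reverse the mapping: every four bases become one byte again."""
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

story = "Once upon a time".encode("utf-8")
dna = encode(story)
print(dna)                   # a string of A, C, G, and T, four bases per byte
assert decode(dna) == story  # round-trips back to the original bytes
```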

There are a growing number of artists engaging in bioart. Gina Czarnecki is considered one of the pioneers of the form, with a rich body of experimental work. Other artists playing in this space include Helen Pritchard, whose Critter Compiler uses the recently discovered chatter within certain types of algae to inform an algorithm that pulls text from a database of queer literature to compile new novellas. Essentially, she is co-creating new literary works that are collaborations between organism and machine. Other examples of bioart and wetware include: Cybernetic Bacteria, The Living Commons, ReBioGeneSys — Origins of Life, Polluted Art: Gilberto Esparza’s Fuel Cell Symphony, CandyLab, and Hare’s Blood+.

Data Storytelling

When asked the question, “What excites you in the emerging media landscape?” artist Nancy Schwartzman replied:

More and more transparency around, or the possibility of transparency around, science and scientific phenomenon. How it affects health or the interior workings of your body…your heart or your organs or your neuroplasticity…and being able to measure it, connect it, look at it and see the colors of it. I find that stuff really interesting and I wish more of it were broken down for lay people. I think there’s an amazing potential to make the science of who we are and the algorithms all around us more transparent. There are ways now that we’re monitoring and tracking and breaking down the data in really digestible ways. It feels exciting and full of promise.…and I’m not just thinking about your Netflix algorithm, but political algorithms or the neuroplasticity data related to falling in love. All that stuff can be broken down into numbers and I think we have the tools and the data visualization practices to tell amazing stories with it. I think there’s a lot of promise there.

Data storytelling uses data collections to make stories about the human experience and our environment. Today’s technology and social media culture has set the stage for storytellers to evaluate humanity’s actions and emotions from a “bird’s eye view,” which was impossible with the limitations of 20th century communications, as well as to understand complex scientific concepts and dynamically share them with audiences.

Five years ago, data visualization storytelling was a niche skill, but now it is considered a fundamental skill for journalists. As the complexity of our smart communication architecture evolves and data becomes more robust, journalists can mine that data and provide insights into the dynamics of the human story.

Visual communication is starting to overtake text-based communication, so journalists have to adapt to the new language of their readers. Mark Zuckerberg affirmed this move toward visual stories at Facebook’s 2015 annual meeting, when he revealed social media analytics that showed the migration of users from text to still images to moving images as their primary mode of communication. Additionally, in a 2016 article about data visualizations used in human rights advocacy, a group of researchers published findings that there has been a noticeable increase in the use of visual data and storytelling tools among organizations such as Human Rights Watch and Amnesty International — although these groups are still learning how to use these tools well.

The art and skill of the medium exists in how the storyteller generates the data, designs related visualizations, and chooses a distribution process. The storyteller is tasked with contextualizing the data within a narrative and aesthetic construct. In her interview, digital media pioneer Ann Greenberg draws attention to the fact that constructing microdata and metadata capture and retrieval systems is very creative work. Data visualization is a useful tool, but she reminds us to use it thoughtfully. If done right, the results are enlightening. If done poorly, the results are meaningless.
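As a small illustration of that craft, here is a minimal sketch, with invented numbers, of how a storyteller might pair a tiny dataset with a narrative annotation using Python’s matplotlib; the framing choices (title, annotation, what is left out) are where the storytelling happens.

```python
# Minimal data-storytelling sketch: a small (invented) dataset, a chart,
# and a narrative annotation that points the reader at the key moment.
import matplotlib.pyplot as plt

years = [2012, 2013, 2014, 2015, 2016, 2017]
vr_headsets_shipped = [0.1, 0.2, 0.7, 1.1, 6.3, 8.9]  # hypothetical figures, millions

fig, ax = plt.subplots(figsize=(7, 4))
ax.plot(years, vr_headsets_shipped, marker="o")

# The "story" lives in the framing: title, annotation, and what is omitted.
ax.set_title("The year VR left the lab (illustrative data)")
ax.set_ylabel("Headsets shipped (millions)")
ax.annotate("Consumer headsets launch",
            xy=(2016, 6.3), xytext=(2013.5, 7),
            arrowprops=dict(arrowstyle="->"))

plt.tight_layout()
plt.savefig("vr_story.png")  # or plt.show() in an interactive session
```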

Landmark data visualization artwork includes We Feel Fine, I Want You to Want Me, Artificial Killing Machine, Derive, and Dear Data.

Immersive data: Data storytelling is about to enter an exciting new phase of experimentation with the entrance of companies such as Virtualitics, LLC, which gives media-makers the ability to bring data analytics into VR and AR. The tools combine data visualization with natural language processing and AI.

Another interesting area of exploration in emerging media’s data visualization sector is the “ambient user interface.” With the growth of the Internet of Things and an increasingly connected environment, data can be expressed to us in many ways other than on a screen. Objects all around us can be designed to express data in a more integrated, aesthetically pleasing, or body-engaged way. This suite of developments is well-aligned with attempts to make smart objects look analog, and with the marketing community’s shift in thinking from information-age approaches to connecting with consumers toward experience-age approaches.
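To make the idea concrete, here is a toy sketch, with invented sensor readings and a stand-in lamp object, of data being expressed ambiently as a color rather than as a chart on a screen.

```python
# Toy "ambient interface" sketch: express a data stream as a lamp color instead
# of a screen. The air-quality numbers and the lamp object are invented for
# illustration; a real device would be driven through its vendor's own SDK.

def aqi_to_rgb(aqi: float) -> tuple[int, int, int]:
    """Map an air-quality index (0 good .. 300 hazardous) to a green-to-red glow."""
    t = max(0.0, min(aqi, 300.0)) / 300.0
    return (int(255 * t), int(255 * (1 - t)), 0)

class FakeLamp:
    """Stand-in for a connected bulb; a real one would receive this over the network."""
    def set_color(self, rgb):
        print(f"lamp glows rgb{rgb}")

lamp = FakeLamp()
for reading in [18, 62, 154, 240]:  # invented sensor readings through the day
    lamp.set_color(aqi_to_rgb(reading))
```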

Docugaming

Docugaming is a medium that aims to give agency to players in a nonfiction story. The result is a sense of implicating audience members in the true story, rather than accommodating passive observers. It can have the effect of raising the stakes for the audiences and exposing certain vulnerabilities or grey areas in these real dramas. Examples include 1979 Revolution; That Dragon, Cancer; and Everything.

Other forms of docugaming include text-based nonfiction games, such as Zoë Quinn’s Depression Quest, and Walden, A Game, a first-person simulation of the life of Henry David Thoreau at Walden Pond, which won Game of the Year at Games for Change. Cosmic Top Secret blurs the boundary between documentary film and game. Directed by Trine Laier, it is an autobiographical adventure video game about T and her journey to uncover the truth about her parents’ work with Danish Intelligence during the Cold War. Cosmic Top Secret received the IDFA DocLab Digital Storytelling Award in November 2017.

Ephemeral Media

A few interviewees pointed out that we are just at the beginning of understanding how the adoption by Gen Y and Z of ephemeral media, like Snapchat, will impact the nature of digital story and art. Is the craving for impermanence a reaction to the over-tracked nature of our digital lives? Is this a way of trying to capture some aspect of the quality of “liveness” in a digital space? How will artists create ephemeral digital work?

“They are the Snapchat generation, right? And for them to actually have a perspective that their stories disappear and their messages disappear as a primary form of communication is something that we actually don’t understand,” says Loc Dao, chief digital officer at the National Film Board of Canada. “Any Gen Xer who tells you they understand is just making it up. I understand the technology, but I don’t understand the social value in the way they [understand it], and the way that’s become just natural for communication. My daughter’s Snapchat profile has 40,900 chats, right? We can’t even imagine what they’re going to come up with.”

“The dynamic in media that I think of as emerging has something to do with the nature of social media,” says Michelle Byrd, a strategist and producer as well as the managing director of the Producers Guild of America East. “It has to be consumed right in the moment. Otherwise, it’s too far down in your feed or disappears, like stuff on SnapChat. [It’s] like ephemeral digital media are similar to live events.”

Snapchat is pushing the boundaries of storytelling and art through its extremely popular platform. Recently it was announced that Snapchat would be releasing AR art installations around the world, starting with the artist Jeff Koons’s iconic balloon animal forms. We are also just beginning to see the results of Snapchat’s research and use of facial recognition software (Snapchat lenses), which employs AR and AI.

Other forms of ephemeral media include Instagram stories, which, like Snapchat, offer an intimate look into users’ lives. As of September 2017, Instagram had 500 million daily active users. Musical.ly, which began as a lip-syncing app, allows its “musers” to record and share short videos. The app has over 215 million active users and is extremely popular with teens. Live.ly is Musical.ly’s live streaming app.

Generative Art and Artificial Intelligence

Although AI became a part of many people’s daily lives in 2011 with the introduction of the iPhone’s Siri, the last year saw major advances in human-like artificial intelligence (i.e., IBM’s Watson making the first AI-edited movie trailer or an AI beating the world Go master). Many of these breakthroughs were due to AI algorithms that can better understand visual data (i.e., facial recognition and image detection) and process natural language.

Because these smart algorithms can now emulate human thinking, reasoning, and decision-making, and even recognize emotions, artists are experimenting with co-creating art with machines. Artists have always used technology as a tool for creating their vision, but the difference here is that the tool has equal, if not more, power to determine the outcome.

“[Tara Shi & Sam Kronick] hope their art can help explain the mysterious workings of artificial neural nets, which are increasingly being used by tech companies large and small to make their products faster and, for lack of a better term, smarter,” reports Ethan Chiel for Splinter. “They’re among a host of artists who are incorporating neural networks into their work, and, in the course of doing so, helping the public better understand a technology that will increasingly be a part of their lives, used to make decisions about them and the world around them. The artistic fruit of neural networks is increasingly seeping into the public consciousness.”

New Dimensions in Testimony (NDiT) was a breakthrough project in this category in 2016, because it gave us a glimpse at the promise of cognitive computing in storytelling. The project used advanced natural-language software that allowed audiences to verbally interact with the recorded 3D image of a Holocaust survivor. Powered by a complex algorithm, the hologram responded to audience questions in real time, which gave the impression of having a realistic conversation.
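Under the hood, this kind of interaction can be approximated as a retrieval problem: match the visitor’s question to the closest pre-recorded answer. The sketch below uses invented clips and a simple word-overlap score, nothing like the production system, purely to show the shape of the idea.

```python
# Toy sketch of conversational retrieval: pick the pre-recorded clip whose
# words best match the visitor's question. Clips and scoring are invented;
# the real NDiT pipeline uses far more sophisticated natural-language matching.
import math
import re
from collections import Counter

# Hypothetical clip index: clip id -> a transcript of the recorded answer.
CLIPS = {
    "clip_017": "I was born in a small town and lived with my parents and sisters",
    "clip_203": "We were taken to the camp by train in the winter of 1944",
    "clip_342": "After liberation I emigrated and started a new family",
}

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_clip(question: str) -> str:
    q = vectorize(question)
    return max(CLIPS, key=lambda cid: cosine(q, vectorize(CLIPS[cid])))

print(best_clip("Where were you born and who was in your family?"))  # -> clip_017
print(best_clip("What did you do after liberation?"))                # -> clip_342
```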

This form of AI may be used in many ways. Imagine the future of the family photo album or memoirs once these holograms are able to recall verbal answers to questions, tell relevant stories, pull relevant images or soundscapes, and transport you into related, immersive environments. This could become the new cultural equivalent of “sitting at the kitchen table with a photo album and telling stories,” with those who have passed away.

This project involves a smart algorithm, but not necessarily one that learns as it interacts with new inputs of information. However, a collaboration between Sam Kronick, Tara Shi, and a neural net computer program does exemplify this kind of experimentation in generative art. The human collaborators fed 3D scans of natural matter (like rocks) to the artificial intelligence program, which mapped the matter’s contours, learned to recognize the matter, and then generated “its own craggy depictions.”
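In miniature, that workflow (learn a compact representation of scanned shapes, then sample it to generate new ones) can be sketched with nothing more than PCA over contour points; this toy stand-in is not the artists’ actual neural-network pipeline.

```python
# Toy generative-art sketch: "learn" the shape of scanned contours with PCA,
# then sample the learned space to generate new, slightly craggy variations.
# A stand-in for the neural-network pipeline described above, not the real one.
import numpy as np

rng = np.random.default_rng(7)

def fake_rock_contour(n_points: int = 64) -> np.ndarray:
    """Stand-in for a 3D scan: a bumpy closed 2D contour, flattened to a vector."""
    angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    radius = 1.0 + 0.15 * rng.standard_normal(n_points)
    return np.column_stack([radius * np.cos(angles), radius * np.sin(angles)]).ravel()

# 1. "Scan" a small library of rocks.
scans = np.stack([fake_rock_contour() for _ in range(40)])

# 2. Learn a low-dimensional representation of their contours (PCA via SVD).
mean = scans.mean(axis=0)
_, singular_values, components = np.linalg.svd(scans - mean, full_matrices=False)
k = 8  # keep the 8 strongest shape "directions"

# 3. Generate: sample a new latent code and decode it into a fresh contour.
latent = rng.standard_normal(k) * (singular_values[:k] / np.sqrt(len(scans)))
new_contour = (mean + latent @ components[:k]).reshape(-1, 2)

print(new_contour.shape)  # (64, 2): a machine-generated shape no scan contains
```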

The advancement of machine learning is creating debates about how to define art, exposing an existential crisis among artists. Similar to the debates that took place during the rise of the YouTube generation, when anyone could suddenly be a maker, or the proliferation of storyworld forms that gave the audience agency and authorship, the AI generation is now asking the question: “What is the role of the artist?”

If a machine can make visual art, edit a movie, write a screenplay (Sunspring), or compose a song (Daddy’s Car), what is the value of the artist? Granted, these smart algorithms are parasitic in that they use source materials from a millennium of human creativity to find patterns and samples to remix and mash up into something contemporary. However, some argue that this process is similar to what artists already do: tell the same seven stories and compose with the same 12 notes per octave (at least in Western music) in different ways over time. Ahmed Elgammal and his team at the Art & AI Laboratory at Rutgers University have created ‘art’ using the WikiArt database, and the results are strikingly similar to art made by humans. Further, they have asked human viewers whether they can tell the difference.

Collaboration, not competition: Rather than machines overtaking humanity’s creative powers, what if they enhance them? “It’s possible that AI systems could collaborate with people to create a symbiotic superintelligence,” writes computer science professor and CEO of the Allen Institute for Artificial Intelligence, Oren Etzioni, in the MIT Technology Review.

Lauren McCarthy explores this idea in 24H HOST. Over a twenty-four-hour period, she hosts a cocktail party with guests attending for one hour in groups of eight. However, a custom AI with access to a wealth of data about each guest curates the groups and puppets McCarthy’s every word and action. Can the AI facilitate optimal social interactions? Where do our human limitations, like the need for sleep, underperform or hinder the outcomes? In McCarthy’s project Get-Lauren, which won the top prize in IDFA’s emerging media program, she performs the function of an AI in a wired home and examines the benefits of humans as interfaces. Will an AI ever be able to read the nuanced aspects of human emotional needs and service them as effectively as another human?

Ann Greenberg, a pioneer of interactive and generative media, has been wrestling with this question since her days as a child playing on MIT’s campus while her father designed cities of the future. Over the last two decades, she has experimented with and designed impressive new media technology and models, which led her to found Entertainment AI™. The company’s patented technology lets users perform inside media and provides an example of how intelligent machines and artists can complement each other in a creative process. She explains:

I describe my system as working with humans, robots, avatars, and 3D animations. In it, humans and robots are essentially equals. I set out to create an expression machine, a way for people to express themselves. A democratic cinema. And I ended up with an anti-expression machine, which was kind of worrisome. ‘How do I architect this so I don’t end up with an anti-expression machine?’ ‘Ah, I make the script intelligent, what we call a SmartScript™ so it’s both machine and human readable which allows me to virtualize production.’

In Sceneplay [now Entertainment AI™], every beat of the script is itemized into its tiniest portion: Edit decision list, the camera angles, the filters, the time of day, the character descriptions, etc. That data is used to run a series of prompters, recorders, and automatic editors all driven off the same piece of marked-up code. The user selects the kind of media that they want to be in (sitcom, music video, etc.), they select the role that they want to play (protagonist, lead singer), and they perform with the machine telling them exactly what to perform. If you do and say what the prompter is telling you, then the video you’ve created (orchestrated literally by that SmartScript and therefore by the machine) will match other people’s performances automatically and give the user an experience of being in an ensemble. If you don’t follow the machine, your stuff won’t stitch together with everyone else. Essentially, even though you are a puppet, you are part of a collaborative experience. I like to say, ‘the audience is the artist.’

Now, I’m comfortable with what I’m doing (existentially), because the SmartScript can still have a human writer. Technically it doesn’t have to be, we have systems that can generate stories, usually for the data intensive kind of subject matters like finance or sport, but increasingly, through natural language processing, all different kinds of machine-generated meaning can be created. But in my system I can keep the writer at the center. Even writers from 500 years ago can be at the center of the system.

Okay, yes, users are meat puppets in my world, but they’re also co-creators. And, I demystify how the construction of meaning happens for the users of this system. Anyone can write something and see it come to life with the crowd. And that’s how I feel comfortable with what I’m doing. I’m positioning Sceneplay [now Entertainment AI™] between human and machine creativity. Think of the possibilities of this! We can learn the whole scope of human storytelling, as far back as we have documentation, and then using that information we can automatically generate new stories from that deep history. In a world where everything can be captured and represented digitally, everything becomes a story. In this sense data has point-of-view, and literally tells a story.
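To picture the kind of machine- and human-readable script Greenberg describes, here is a purely hypothetical sketch; the field names are invented for illustration and are not Entertainment AI’s actual SmartScript schema.

```python
# Hypothetical sketch of a machine- and human-readable script: each beat is a
# small record of prompt plus production metadata, which a prompter can read
# aloud and an auto-editor can use to check whether takes will stitch together.
# Field names here are invented for illustration, not an actual SmartScript schema.

SMART_SCRIPT = [
    {"beat": 1, "role": "protagonist", "line": "Open the door and look up.",
     "camera": "wide", "duration_s": 4, "time_of_day": "night"},
    {"beat": 2, "role": "protagonist", "line": "Whisper: 'Is anyone there?'",
     "camera": "close-up", "duration_s": 3, "time_of_day": "night"},
]

def run_prompter(script, role):
    """Feed the performer only the beats for the role they chose to play."""
    for beat in script:
        if beat["role"] == role:
            print(f"[beat {beat['beat']}] ({beat['camera']}, {beat['duration_s']}s) "
                  f"{beat['line']}")

def stitches(take, beat):
    """A take only slots into the ensemble cut if it matches the beat's specs."""
    return take["beat"] == beat["beat"] and take["duration_s"] <= beat["duration_s"]

run_prompter(SMART_SCRIPT, role="protagonist")
my_take = {"beat": 1, "duration_s": 4}
print(stitches(my_take, SMART_SCRIPT[0]))  # True: this performance will cut together
```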

At IDFA DocLab 2017, artist Memo Akten launched Learning to See: Hello World! This interactive installation uses live cameras to demonstrate how machine learning works. Using a deep neural network, it rapidly compares patterns in the live feed with what it has learned and generates an image from them. Ars Electronica has also been at the forefront of supporting artists working with AI. Other examples of generative art include MESOCOSM, word.camera, Flock, Flippaper, Close Your, and RIOT (which uses facial recognition to take users through a violent protest).

Geolocative or Geo-Aware Experiences

Some projects use the GPS system in our mobile devices to connect story to place and coordinate live interactions. This creates the effect of immersion in real space that is connected intimately with story. Examples include The Silent History and The National Mall.

Gestural Interfaces

In 2006, media scientist John Underkoffler and his startup, Oblong Industries, premiered the Gspeak system with the TAMPER editing app, which enabled editors to create cinematic worlds through gestural interface. The idea arose from brainstorming exercises led by Alex McDowell, while building the storyworld for Minority Report in 2002.

It is wonderful to see this gestural interface go from imagination, to invention, to collaborative innovation by other storytellers in the field. Here are additional examples of artistically rich, gestural projects: CLOUDS, Treachery of Sanctuary, and Shadow Monsters.

Interactive Film, Games, and Books

Works such as The Wilderness Downtown, The Johnny Cash Project, and ROME, which leveraged the browser capabilities enabled by WebGL and created best practices in the UX/UI design of interactive media, demonstrated that the artist was not dead, as pundits had postulated with the rise of user-generated content.

Rather, they showed that although the artist may not completely control the story experience in interactive mediums, the artist is critical as the entity that crafts a story design with which users may play or co-create. Those pieces also showed how a story experience could become highly customized and reciprocal and successfully break out of the single-frame paradigm. Examples of seamless interactive cinema include: Possibilia, Late Shift, The Last Hijack, The World in Ten Blocks, and Room 202.

Live Cinema

Filmmakers are eager to converge the power of cinema with the unmatched power of live storytelling. Sam Green established the “live documentary” format in 2010 with Utopia in Four Movements, which was followed by Braden King’s HERE [The Story Sleeps], which claimed live cinema for fiction narratives as well.

Performances by Terence Nance and Travis Wilkerson at the 2017 Sundance Film Festival affirm that this format will continue to evolve beyond the experimental phase. Live documentary has involved live narrative alongside moving images, live scoring, projection mapping, real-time data analysis or Internet searches, and dance. Some detractors argue that live documentary is not really new; it is simply another form of multimedia theater, which is a well-established practice.

New But Now Media

Well-adopted media such as content delivered through linear video streaming services (i.e., Amazon, Netflix, Hulu), linear video sites that allow users to self-publish (i.e., Facebook, Vimeo, YouTube, and multichannel networks that support makers across different platforms), mobile apps (i.e., Google Play), and streaming social and live content (i.e., Twitch and Facebook Live) were identified by interviewees as still “emerging.” Although social media has been adopted by 76% of the global online adult population, we are still establishing and disrupting storytelling forms on the internet, while catalyzing and responding to user behavior. For example, one trend in the “now tech” sector is the increase of short-form content consumption. Interviewees reflect:

  • “Emerging media is less about platform and more about the democratization of storytelling. Tangerine was shot on an iPhone; that is why it is unique. Shot on an iPhone, really cheap, really gritty, really takes you into a world you may not know. That view into another world exists because the technology barrier is gone for the filmmaking.” — Brickson Diamond, Co-Founder of The Blackhouse Foundation.
  • “I am excited about the explosion of short-form content. I don’t know a lot about earlier literary movements, but when I read about the explosion of the short story, at the end of the 19th century, it seems analogous. I think that was, in part, from the proliferation of periodicals distributing stories. So it’s amazing what’s happening with short form content now. I think that’s important and needs to continue to be funded, at higher levels of course. Why is it exciting? Because it has a myriad of applications and a lot of room to grow aesthetically and journalistically.” — Michael Premo, multidisciplinary artist.

“Storytelling has been freed up to take on the format that makes the most sense for the content, and to accommodate the time people have to consume it — whether that’s a short clip that they watch on their phone while doing other things, or a whole series that they binge-watch over a weekend,” Charles Melcher, Executive Director, Future of Storytelling (FoST), told The Drum.

From a purely entertainment perspective, world building is becoming increasingly important in all media. Audiences want constant contact with the world and characters that they love. Harry Potter, The Walking Dead, Game of Thrones, Hunger Games, Twilight, the Marvel universe, and Star Wars are just the tip of the iceberg of storyworlds that fans have an insatiable appetite to consume. They will find those worlds via cable, streaming services, games, comics, cosplay events, immersive theater, VR, social media and other platforms. Some observers have forecast a time in the near future where we will expect our storyworlds to “have 24–7 lives, just like us.”

The following are other important areas of story innovation for “Now Media,” as outlined by Joseph Unger of Pigeon Hole Productions:

  • Twitch and Facebook Live: Twitch has a whole range of emerging genres specific to the streaming format. Facebook Live is giving rise to everything from dedicated production houses to fully-realized news sources. Examples include the “eating” genre growing out of YouTube streaming; the evolution of “Let’s Play” into sub-dramas, as players become the star of the show, and game becomes setting; and “Life” streaming.
  • Web Comics: Trisha Williams, co-founder of Pigeon Hole Productions, has been a leader in the web comics space for over five years with Gamer Girls. In that time the artform completely shifted from random translations of Saturday morning cartoons to a serious and innovative craft. It relates to the use of GIFs in sequential digital art and the rising application of cinemagraphs. There is a tremendous scene of creators that dwarfs traditional comics and is intensely innovating on mobile devices.
  • Esports: A surprisingly robust story space, Esports can be transitional narrative games and whole new experiences. LA-based Riot’s League of Legends started as a click game, but their hundreds of millions of fans demanded story. Now they’re in the process of world creation through their characters. It is a great example of stand-alone character as world, turned world as container. Overwatch by Blizzard has also made great innovations in story-driven gaming.

Olfactory Experiments

The rise of visual digital media has been very exciting, but some storytellers feel like they’ve lost something by not engaging other senses, such as smell. Most community rituals of story, before the broadcast media age, did engage our olfactory inputs on some level. The smell of food, perfumes, smoke, nature, or even raw sweat enhanced the experience. There were early experiments with Smell-O-Vision, but much of that innovation fell away.

Interestingly, there is a new crop of storytellers re-engaging in this kind of experimentation. For example, Le Musk is an immersive media project that brings the smells and related meaning of Indian fragrance and spices to audiences. Famous Deaths used audio and smell to recreate the last moments of famous people such as JFK.

Omnidirectional Digital Media

It’s almost like opening a new wall, the fourth wall, maybe. Sort of like the z-axis to access the audiences. — Ziad Touma, Multimedia Producer

This category includes any story form that can go in any direction (i.e., forward/backward, left/right, linear/nonlinear, macro/micro, deep linked or layered). This can allow for curiosity-led dives down wormholes to deeper content. There are various forms of branching narrative related to the direction of the interaction, for example: Pry, by artists Samantha Gorman and Danny Cannizzaro.
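A minimal way to picture this is a story graph whose nodes link forward, backward, and deeper, so the reader’s curiosity rather than a fixed sequence decides the path. The sketch below is a generic illustration, not the structure of Pry or any particular work.

```python
# Generic sketch of an omnidirectional story: nodes connect forward, backward,
# and "deeper," and the audience chooses which edge to follow at each step.
# The node names and text are invented for illustration.

STORY = {
    "kitchen": {"text": "A letter sits unopened on the table.",
                "links": {"forward": "hallway", "deeper": "letter"}},
    "letter":  {"text": "The handwriting is your mother's.",
                "links": {"back": "kitchen", "deeper": "memory"}},
    "memory":  {"text": "You are eight years old again.",
                "links": {"back": "letter"}},
    "hallway": {"text": "The front door is ajar.",
                "links": {"back": "kitchen"}},
}

def traverse(start, choices):
    """Follow a list of directions ('forward', 'back', 'deeper') through the graph."""
    node = start
    path = [node]
    for choice in choices:
        node = STORY[node]["links"].get(choice, node)  # ignore impossible moves
        path.append(node)
    return path

# One reader digs down the wormhole; another pushes straight ahead.
print(traverse("kitchen", ["deeper", "deeper", "back", "back"]))
print(traverse("kitchen", ["forward"]))
```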

Participatory Story, Co-Creation, Civic Media, and Crowdsourcing

High-quality, low-cost tools for media generation have made it much easier for community members to participate in the creative process. Co-creation and fan fiction have proliferated, as a result. An artist working in this category of emerging media must shift from broadcasting in a one-way medium to designing a storyworld (or interactive platform) that invites audiences to play in and contribute to that storyworld. Examples include HitRecord, Outside Stories, and Question Bridge.

“Another trend is the ability for people to create their own fan fiction, tribute videos, and other kinds of extensions and share those through social media. People are hungry to experience the stories they love with other people, and that no longer needs to happen while sitting in their living rooms with their family or friends. Now, fans can connect through social networks like Twitter, Facebook, and Tumblr to talk about their favorite shows in real time. People will tweet, make GIFs and short videos, and share them as a show is airing. This idea of being an active member of fan communities around our favorite shows is another way of giving them meaning and value to us.” — Charles Melcher of FoST in The Drum.

This new access and an ever-more connected media environment have been particularly powerful for the rise of civic media and impact storytelling. Examples include: Witness, The Counted, and Sandy Storyline.

“I think we’ve seen some promise in the evolution of guided crowdsourced storytelling,” says Jessica Clark, who is the Research Director for Media Impact Funders and is also editing this very Making a New Reality site. “So it’s not just this dream that if everybody has access, then all perspectives will be revealed equally, and it’s not the journalist being in charge of the narrative and having the final say on who gets quoted. It’s a continual complicated crab walk to try to figure out what crowdsourced reporting is good for and how you create a setting for it that can be leveraged into a story that makes a difference.”

One of the prominent learnings in this area of story innovation over the last decade is the need for constraints in the design. In the case of The Johnny Cash Project (discussed in the Interactive Film category), the designers limited the illustration tools so that the 2.5 million interpretations of the frames in the film could have cohesion. The Guardian’s project The Counted had “a very specific policy target: a gap in our national infrastructure for reporting violent incidents.” This helped the producers narrow the ways they sourced these stories to a number-tallying interface that focused the discourse and the user experience.

Physical Cinema and Internet of Things Experiences

These cinematic works push boundaries around form and spatiality by employing character, dialogue, and collaborative production in ways that expand the experience of physical cinema. Artists and audiences are starting to integrate their bodies into the telling of a story by using smart objects, wearables, sensor-tracking projection mapping, and connected or smart environments.

“In front of a small audience, [Bjork] performed Vulnicura’s Quicksand wearing a suitably outlandish mask, all while surrounded by projection lighting and cameras. The cameras captured what was happening in front of the audience, while computers added layers of digital effects,” writes Mat Smith for Engadget. “This wasn’t virtual reality, but more augmented reality — a seamless projection of a virtual world in the real one, without the need for a clammy VR headset…. Bjork’s work transforming this intimate breakup album into a virtual experience continues, and this performance was only the latest milestone on that front. By the time [her] Digital tour reaches the West, it’s likely there will be even more VR performances for fans to try.”

Smart Environments: These are spaces that storytellers use or create, combining two or more immersive and smart technologies with analog tools that augment a physical experience (i.e., sensors that trigger experience-related smells, infrared light that lets users touch VR/AR content, and 360-degree 3D film glasses, plus analog items like fans and omnisound). This includes works that push boundaries of form and space by employing character, dialogue, and collaborative production in physical cinema — and works that compel audiences to integrate their bodies into the telling of a story with smart objects, wearables, and connected environments.

Examples: Be Boy Be Girl, Birdly, Cyrano: Alex in Wonderland, Just a Reflektor, Lyka, Can’t Get Enough of Myself, Tableau, Peg Mirror, Lyka’s Adventure, OMW, Fru, The Quinn Experiment, Postcards to My Younger Self, and Magic Dance Mirror.

Projection Mapping Media

Projection mapping allows the maker to project images and words onto the physical world. This technique uses architecture, landscape, objects, and bodies as canvases for moving images, optical illusions, and mixed reality.

Examples of projection-mapping projects and artists include: Klip Collective, AntiVJ, and Heartcorps.

Tactile Digital Media

With the proliferation of touch-based hardware interfaces and the advancement of haptic technologies and sensors, touch interfaces have become a storytelling tool. Examples include Evolution of Fearlessness, HUE, Biophilia, and Real Virtuality.

“[Evolution of Fearlessness], which opened at the New Crowned Hope Festival in Vienna in 2006, features filmed portraits of 10 refugee women living in Australia, all of whom have survived acts of extreme brutality, including war, rape, and incarceration,” writes Tim Elliot for The Sydney Morning Herald. “[Artist Lynette] Wallworth spent months ringing trauma centers in Sydney, as well as contacting the Holocaust Museum in Melbourne and refugee centers in Adelaide. ‘I also wrote emails to everyone I knew in Australia who I thought might know a woman. I described the quality I was looking for: women who had been pushed to breaking point and who had not just survived but found something more in their core.’”

Audience members read a short account of each woman’s experience before entering a dark room, on the far side of which, set into a recess, is a doorway-sized glass panel. When the viewer touches the glass, a sensor triggers a video of one of the women, who appears to reach out and place her hand on that of the viewer. “The system is designed to randomise the videos so there is a happenstance to whichever woman comes to greet you,” Wallworth says.

In many ways, tactile digital media is providing a return to ritualistic art and storytelling. Think of how communities create rites of passage and the ways in which touch is used in the process. Additionally, touch has scientifically been proven to be a critical part of our mental and physical health, so being able to bring touch into the age of digital storytelling allows us to use the scope and scale of digital content without completely sacrificing the power of the tactile. On conscious and subconscious levels, touch triggers stronger emotional connections to the characters and world of a story.

Transmedia Storytelling and Connected Immersion

This form involves a single story or storyworld playing out over many different platforms in linear or nonlinear ways. It is not the replication of the same story on different platforms but the distribution of fragments of one story across platforms, allowing audiences to discover the whole story (like a puzzle). It often provides opportunities for audience participation, interaction, and even co-creation. Second-screen experiences are one well-adopted way that transmedia storytelling has seeped into mainstream entertainment culture. Examples include East Los High, Year Zero, and Half the Sky.

Yung Jake’s E.m-bed.de/d is another example of this form.

The term “transmedia” has been debated and analyzed so much that it has fallen out of favor in emerging media discourse. However, the practice of transmedia content-making is more robust than ever. Although advertising, marketing, entertainment, and nonprofit organizations would not describe their campaigns and story franchises as “transmedia” anymore, they are constantly producing works that engage audiences across different platforms. When the term was coined, it identified a huge expansion in our communication architecture with the rise of social media and democratized tools for production. The media makers of the time had to figure out how to make or catalyze work in this new environment, and the term helped to orient them.

Twelve years later, the media community and its audiences are becoming pretty adept at communicating story and content through this new multiplatform architecture. In fact, using multiple platforms and social media are critical for engaging Generations Y and Z.

“I think it’s really important for people who are interested in substantive and nuanced storytelling to embrace these formats,” says documentary filmmaker Dawn Porter. “I see how my 15- and 13-year-olds consume. They’re avid consumers of media. The young teens are very tuned into social messaging.”

Susan Bonds, CEO of 42 Entertainment, is known as a pioneer of transmedia. She has evolved the practice to “connected immersion.” As a creative director at Walt Disney Imagineering, she produced notable projects including the development of the Indiana Jones Adventure. Her prolific career spans gaming, theme parks, location-based experiences, transmedia, and connected immersion. For Bonds, the power of connected immersion is trusting your audience and allowing them to drive the story as it unfolds. In this FoST video, she urges storytellers, “Don’t be afraid to let the audience in.”

Virtual Reality

Virtual reality encompasses a suite of technologies that brings audiences or players into a completely visually and sonically immersive world. Some of the approaches to making VR include:

All of these formats can be interactive.

Many interviewees said they were excited by the promise of virtual reality.

“To actually place you inside stories is very different to the kind of interactive documentaries and other experimental forms that we’ve seen over the last few years. That, for me, is quite radical and has enormous potential. The other thing that has promise in VR is perspective, that by putting you inside the story, then giving you agency and potentially influence over the outcome of the piece, is also very strong. We’ve been able to do that with video games, but if you marry that with feeling like you’re in a place, that feels very strong.” — Francesca Panetta, Executive Editor, Virtual Reality at The Guardian

When probing interviewees for more detail on exactly why the medium is promising to them, answers included:

  • the ability to break out of a human-scale perception of reality — going to space, becoming a tree, seeing the world through the eyes of an animal, or shrinking to molecules or particles
  • being transported across time and space instantly
  • experiencing social spaces in VR
  • being in live spaces with digital performers and performers who are digitally enhanced
  • lowering the barriers to empathy.

Is VR delivering on these promises? Here are impressions from interviewees:

  • It’s exciting for me to actually be inside my animation and the stories and the characters we’re creating, and to actually walk around with them and interact with them, like they’re people, and creatures in front of me. You’re so used to doing it on a screen, and then watching it projected. I’m also used to doing it in a miniature scale. It’s tactile, and I can move it all around, and they’re projected. But, this thing, I’m moving it small in my head, but then when it’s transferred into VR, I can watch it, and walk around with it, and interact with it, and really scrutinize it from every angle in 6 degrees (of freedom). It’s pretty mind blowing that we’re able to do that. When you think, this is how we’re going to be absorbing our entertainment in the future, and to be on the cusp, on the ground of creating that, and trying to define what that is: Wow! — Lyndon Barrios, Co-Founder of Blackthorn Media
  • The most affecting work that I’ve seen has been Nonny de la Peña’s work. I really believe in the radical empathy machine. Being able to step in each other’s shoes is so critical right now, where there’s such divide and such polarization. Anything that allows you to take a different position point from your own is valuable. Whether that be victim, perpetrator, or bystander, which is why I love her work so much. I am excited for her work in this new Trump era. I’m excited to see what she will do and how she’ll intervene if everyone were able to see her border piece. — Sandi DuBowski, Documentary Filmmaker
  • I also find vibrancy is a social good. If all we talked about was sad, depressing things, then the world would just be a sad, depressing place. So VR and immersion in general has that ability to transplant you into something that feels magical, as if you’re in a dream or going into someone’s kind of brain.
    — Yelena Rachitsky, Executive Producer, Experiences at Oculus

“This is the first time I think technology really opens people up,” Harvard student Madeleine Woods told reporter Colleen Walsh. “You can experience the life of someone else across the world. You can see something you’ve never [seen in person]. I think you can understand people so much better when you can walk in their shoes, and this is something that literally puts that technology in hand.”

In the 2014–2015 wave of VR storytelling, makers experimented with notions of switching perspective with another person or an animal, first-person gaming, immersion in an animation world, as well as instant teleportation to distant or inaccessible places or intimate spaces.

In 2016, there were successful experiments with abstraction, space, meditation, astro and quantum physics, metaphysics, and other scope- and scale-shifting contexts that provide experiences of reality that could never be encountered in “human scale.” VR performance attained new real-time capabilities, giving audiences the potential to be in a room with a performer while they are interacting with virtual content, with a live performance of a performer in a virtual space, or with the capture of performance for non-live virtual media. Examples of how this is being tested include Ninja Theory’s latest Unreal Engine experience and To Be With Hamlet.

Other compelling experiments include: Notes on Blindness, Allumette, Project Syria, The Blu: Encounter, The Click Effect, Irrational Exuberance, Herders, Cardboard Crash, Rose and I, GIANT, Assent, Project Empathy, The Martian VR Experience, GONE VR, Ascend the Wall, VR Rave, and Defying the Nazis.

In 2017, after 40-plus years as an emerging medium, VR is on the threshold of its first wave of “killer content,” as evidenced by:

  • the first VR film to be nominated for an Oscar (Pearl);
  • the first VR experience to be acknowledged by the Academy of Motion Picture Arts and Sciences with a special award (Carne y Arena);
  • the first VR film to receive an Emmy Award (Henry) in 2016, with the category exploding this year with a number of high-quality contenders and wins for Collisions (Outstanding New Approaches: Documentary) and The People’s House (Outstanding Original Interactive Program);
  • Disney’s Pixar producing its very first publicly available VR experience (Coco VR);
  • IMAX opening its first immersive media experience center (or VR arcade);
  • The VOID’s success with theme park VR;
  • the proliferation of universities and colleges launching immersive media degrees and programs; and
  • the NFL and live performances using VR to offer “telepresence” to audiences, which allows audiences to purchase “full immersion” seats (i.e., Sacramento Kings VR seats, with cameras on players).

The 2017 Sundance Film Festival’s New Frontier showcase featured a large number of fully realized and polished experiences (finally escaping the VR demo phase). According to critical reviews of the exhibition, the content and format R&D completed in the prior four years finally started to pay off. Examples include: Dear Angelica, Zero Days VR, Life of Us, Out of Exile, Orbital Vanitas, Sky is a Gap, Chocolate, Mindshow, and Asteroids. Peer organizations such as Tribeca Immerse, Future of Storytelling, and IDFA also showed incredible works that exemplified this new standard in the medium. Examples include: Draw Me Close, Blackout, The Last Goodbye, Treehugger, Unrest, Munduruku: The Fight to Defend the Heart of the Amazon, Alice the Virtual Reality Play, Manifest 99, Homestay, A Thin Black Line, The Cave and Bloodless.

Creative community members said that they learned a lot of lessons over the past four decades and employed them well in the current crop of New Frontier projects.

  • We are learning to change people’s emotions by creating psychological cues. For example, when people like each other, or empathize with each other, they start mirroring each other’s movements. That’s the subtle level of expression we’re creating in our projects. A subtle facial expression from you can change the other character’s emotions, and they change yours, and back and forth like that. So, in Asteroids, part of the reason we have the dog follow you around is to build a connection between you and this pet. — Maureen Fan, CEO Baobab Studios
  • We went into VR thinking real faces — real people — were the way to make the best connection — and we still think that’s powerful. But once we thought about connection with voice and movement, we said, “Forget about all that. We can make this crazy world as a social, connected system.” — Aaron Koblin, Co-Founder, With.in, speaking about social VR hit, Life of Us
  • I’d done VR before — I’d done early builds of the Oculus. I visited Valve years before. And it really stuck with me. But [with the Blu], I really felt presence. I felt very vulnerable being with the big whale underwater. It felt very oppressive to me. But I loved doing Tilt Brush, and I could spend 20 minutes in there like it was nothing. I think the challenge is to make it where you can really feel like you’re living in that world. — Jon Favreau, director, Gnomes & Goblins VR, in Fast Company

Although projects such as Melting Ice and Chasing Coral, both powerful climate-change documentaries, did not stray from the live-action, 360-degree shorts format, they certainly represented the “best of” that type. These two projects will most likely enjoy broader distribution than other 2017 New Frontier works, because they are compatible with the most easily accessible VR distribution platform for mass audiences (Google Cardboard and a smartphone).

However, the adoption of more complex and higher-quality VR platforms is likely to increase, especially as the use of VR expands (see predictions from 500.co and Eye for Travel) across a broad range of practical applications, including courtrooms, industrial training, pain management, travel and tourism, live events, home design, real estate, education, paraplegic therapy, PTSD therapy, conferencing, city planning, VR search, e-commerce and advertising, and military and law enforcement.

Finally, motion-capture performance in game-engine-based mediums (including real-time live interaction) is positioned to take a new leap in authenticity with capabilities such as Unreal Engine’s live capture and real-time rendering technology.

VR and AR and the brain: “AR/VR/MR have potential to improve all sorts of workers’ lives in that our brains have evolved to be spatial, yet much of what’s required to do certain jobs now involves making a leap from abstract to spatial that some people just can’t handle,” Magic Leap founder Rony Abovitz told Network World. “This could impact jobs in architecture, education, medicine, and others,” he says. “We’re making technology bend to our neurology.”

Studies are proving that when people engage with information (e.g., a history lesson) in an immersive and physically interactive way (e.g., by walking through virtual environments and picking up virtual objects), they are more likely to remember the information. To Evelyn Miralles (a leading VR innovator from NASA’s Virtual Reality Laboratory), this is not news: she has been using VR to train astronauts for more than 20 years as a mission-critical step in their preparation.

Augmented reality is also proving to be an effective training tool for professionals such as surgeons and pilots. In fact, some researchers hypothesize that AR might trick the brain to even greater effect than VR, because it mixes the virtual and the real so seamlessly that the brain does not need to suspend disbelief; it simply believes. Researchers have also known that engaging with virtual worlds on traditional screens can affect lucidity in the dream state, and recent studies show that virtual reality can amplify this dynamic. This correlates with much of what artists and content creators have discovered in the last year about the content that is native to VR and cannot be replicated in other mediums (e.g., metaphysical experiences, micro/macro perspectives, meditation, death, and the surreal).

‘When you alter people’s waking realities, their memory changes.’ [Psychologist Jayne] Gackenbach’s research found that gamers report a greater sense of control in their dreams than non-gamers, as well as more awareness that they are dreaming — what researchers term ‘lucidity.’ This suggested that spending time in a fictional, controllable world might teach gamers to view dream worlds through the same lens. Her latest research extends the same questions to virtual reality… ‘By using a virtual-reality device, you are putting yourself into a brain state that is remarkably like the REM brain state.’ Many virtual environments have a surreal quality, enabling users to experience activities — deep-sea diving, flying — that would be unusual or even impossible in their real lives. Dreams operate in a similar way, leading some researchers to speculate that VR devices might train users to approach any ‘unreal’ situation with heightened awareness… An increasing number of psychologists believe that the dreaming brain serves as a built-in virtual-reality generator, testing out various models of the world so that dreamers are better equipped to handle novel situations in waking life… New technologies are almost always hyped as transformational. But Gackenbach’s research provides some of the first evidence that virtual reality fundamentally alters the nature of consciousness… While she thinks augmented reality has a ‘huge future,’ she also seems wary of what that might entail. ‘We’re playing with people’s reality,’ she says. ‘What’s that going to do?’ — “Virtual Reality May Help You Control Your Dreams,” The Atlantic

A major area of inquiry for VR artists and practitioners is using the medium to share the perspectives of others and break down implicit bias. This research about how VR impacts the dream state is interesting when paired with findings on empathy and embodiment:

While Stanford’s Virtual Human Interaction Lab has found that VR can help engender greater empathy for others, unless an experience is really well designed it can also do the opposite.

In Wired, Sarah Zhang reports that a 2009 study from the lab found that “placing people in dark-skinned avatars seemed to activate negative stereotypes about black people, rather than reduce them.” Jeremy Bailenson, the director of the lab, “suspects that the technology back then simply wasn’t realistic enough, and participants didn’t achieve the all-important ‘presence’ to overcome the priming effect. Indeed, a more recent study from [VR researcher Mel] Slater’s group that put more emphasis on getting people to identify with their avatars from the onset achieved the opposite outcome.”

Collaborative Design & Social Art Practice

The line between artist and layperson has blurred with the democratization of modern media (with roots as far back as the release of the Brownie camera in 1900). There has been a cultural shift from professional artists and arts institutions having control over the definition of art, to a broad creative sector enabled by democratized media that allows anyone to claim the title of “maker,” generate creative products, and distribute them to audiences or patrons. In many ways, this is a reconnection to humanity’s long history of participatory art making, in which every member honed and contributed their creative abilities. This does not mean the role of the artist and the arts institution is outmoded; it simply means an expansion in the diversity of ways society engages with creativity.

This democratized creativity is apparent on ubiquitous social platforms, such as YouTube, but it is also manifesting in new concepts of art, social practice, and community design. Artists and creative collectives are advancing creative practices in which the community itself is the medium and the dynamic social, political, and cultural structures are the art. Examples include: Theaster Gates, For Freedoms, Detroit Narrative Agency, and Project Row Houses.

“What’s exciting in the art world?” says Michael Premo, a multidisciplinary artist and the founder of Storyline, Inc. “I’ve been part of this world called social practice. I think it’s exciting when there are people in all these different silos, from film, fine arts, photography, who are increasingly trying to think about engagement and what that means at every stage of the process. That’s what I’m excited about.”

This idea of social practice in the arts has emerged in the convergence of film, TV, tech, and gaming. More than a century of film and 40-plus years of gaming have generated best practices for building storyworlds that are now being employed collectively to imagine and design our future world. This is especially exciting for traditionally marginalized groups, who are using sci-fi storytelling, VR, AR, and smart objects to shed victimization narratives and deficit-based identities, and to challenge implicit and explicit biases. These groups are cultivating storyworlds in which marginalized people are centered, empowered, and thriving. Projects and practitioners include Iyapo Repository, HyphenLabs, Frontera, and the World Building Institute.

There’s a reason I was drawn to World Building from games. It solved my own human need to contribute, but it also solved the problem that our audience is overwhelmed with content. Builds start with an inclusive collaboration. You can fill a room and imagine a future together with anyone. The thing that makes a World Build work is including the woman who’s an engineer at a cutting-edge startup next to the man who gardens in his backyard, the new immigrant to the area, the entrepreneur, and the suburban commuter. You agree to share ideas and agree that you’re in it together. This commitment makes a World Build wildly powerful both for creator and consumer. When we world build, we’re not just imagining, we’re creating together. There’s a moment when we run a build where what started as a collaboration turns into a network of people with a shared experience. The follow-on effect is a strong central node spidering out like a social network with many points. These shared stories, whether they take the form of art, media, invention, or business, are sung in a chorus and able to rise above the noise, get noticed, and instigate change. — Joseph Unger, Founder, Pigeon Hole Productions

The Making a New Reality research project is authored by Kamal Sinclair with support from Ford Foundation JustFilms and supplemental support from the Sundance Institute. Learn more about the goals and methods of this research, who produced it, and the interviewees whose insights inform the analysis.

This article was originally published on Immerse, an initiative of Tribeca Film Institute, MIT Open DocLab and The Fledgling Fund. For the complete publication of the report, please visit Immerse.


Kamal Sinclair is Executive Director of Guild of Future Architects, co-author of Making a New Reality, and artist/consultant at Sinclair Futures.