Merging Creation & Consumption
The place where everyone is both a fan and a creator
Today we are sharing another article our CEO Lars-Erik wrote while in Seoul, freshly inspired after attending the Solana Hackerhouse event.
In it, he touches on key problems that Web3 can help solve and makes a clear case for why our video technology fits naturally within a Web3-generation social media context.
In 2021, 85% of all internet traffic was video. This is something we’re all familiar with, since most of us consume the bulk of our information through online video.
There is an underlying common denominator that still rules video as a medium: for the viewer, it is static, broadcasting information to the audience as a passive, predetermined presentation and experience.
This is how video has always been. It is built on the producer unfolding their creativity and delivering their interpretation of reality. And while there are great tools for producing video easily, the audience is still stuck with a passive, purely consumption-based experience of information.
The audience is limited to interacting and participating through channels that are not natively part of the video presentation, such as chat, comments, and reactions.
Since video is our main medium for information, this is one of the largest muting factors in our society today. Even though it has never been easier to create and produce video, the vast majority of users purely consume it. Let’s face it: most people still do something other than creating video content for a living.
This is partly an effect of the ruling model: passive consumption of video. It also greatly limits people’s ability to challenge the views and information they are presented with through video. You can, of course, share a link to a video on a platform like Twitter and discuss it further, but this is a far cry from what a better reality could look like.
The creation of video today is mostly based on what you can film with your phone. And as platforms such as Snapchat, Instagram, and TikTok enable everyone to create and reach an audience with what comes through their camera lens, we’ve seen a massive increase in the sheer number of people creating videos.
Curation of video is also emerging on the same platforms, through formats like reaction videos and juxtapositions. Here, TikTok is the main enabler: one of its most-used features is the simple green screen, which lets creators overlay their own video reactions on other creators’ content. Still, for the end users viewing these creations, consumption remains passive.
The direction is quite obvious. We humans have evolved to be highly capable of processing, curating, and sharing information. And when we benefit from information, we’re intrinsically motivated to share it with others to further our development and learning.
To take the next leap in a world where information comes in video format, we need to upgrade consumption from a passive activity to a new model in which consumption and creation are merged.
The holy grail of human-machine interaction is the direct manipulation of human-readable data, preferably in real-time in its presentation format. It’s commonly referred to as WYSIWYG — What You See Is What You Get.
This is why spreadsheets are so powerful: you’re manipulating the data in its native presentation format.
Personal computing developed from QWERTY keyboards on desktops and laptops to the smaller, smarter T9 numpads of feature phones, before touchscreens arrived and started delivering on the promise of directly manipulating visual data in its presentation format. It was still pretty clunky and slow when Nokia connected the world a second time around with the first major global smartphone hit: its Symbian series.
In 2007, the same year Nokia was featured on the cover of Forbes and crowned the cell phone king for its global dominance of the mobile phone market, Apple launched the first iPhone. It rocked our world. “Do you wanna see that again?” Steve Jobs asked as he introduced the first iPhone, scrolling its magical, super-smooth UI (user interface) with his finger on the screen.
The winning combination was a multi-touch capacitive interface powered by a new SoC (System on a Chip) hardware setup, enabling real-time interaction with visual media on screen.
Suddenly everyone could touch their music, feel their documents, and interact with media closer to how we perceive the real world of physical objects.
It felt like magic, and it made my mother able to edit images just by touching and playing with them, directly manipulating the data in its native presentation format. No complicated button-based interface disconnected from the presentation of the data itself.
As a society, we rely on new insights and ideas, and on our ability to transform them into technology, new ways of thinking, and better ways to operate together.
Enabling everyone to consume, dissect, curate, create, and share information effortlessly is imperative for our continued development and advancement.
The touch-based interface and UX pioneered by Apple with the iPhone greatly accelerated these efforts, giving our species unimaginably more momentum to create, curate, and share information with each other by merging the creation and consumption of most data formats through its revolutionary interface.
Since the beginning of video, we have put most of our effort into the publisher’s side. This has given us fantastic tools to unleash our creative minds before we publish our moments.
Now we need to tackle the consuming side. We need to empower consumers, who today are passive receivers of static video.
Watching can no longer be passive; it needs to be creative play, where you have the tools and freedom to create your own experience at the point of consumption.
Not too long from now, our reality will be our physical world merged with and augmented by our virtual world.
Static consumption will not fly far in this merged reality. A shift is imminent: from passive consumption to creative consumption, where we can all be creative in real time, creating and curating our very own personal stories based on what’s presented to us in video players.
The era of one ruling director having the final say in how we all experience our digital world needs to end.
To solve this, we are building a new canvas where video and 2D and 3D assets can be created, staged, and animated in real time within the same component. We believe this will unlock previously unimaginable opportunities, where everyone, even my mom, can consume, curate, and create in real time at the point of consumption.
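To make the idea of such a canvas concrete, here is a minimal sketch of what its data model could look like: video, 2D, and 3D assets as layers in one scene that the viewer can rearrange mid-playback. Every name in it (`Canvas`, `Layer`, `move`) is illustrative only, not our actual API.

```typescript
// Hypothetical sketch: a unified canvas where video, 2D, and 3D
// assets are layers in a single scene, editable during playback.
type LayerKind = "video" | "2d" | "3d";

interface Layer {
  id: string;
  kind: LayerKind;
  // A simple transform the viewer can change in real time.
  x: number;
  y: number;
  rotation: number;
}

class Canvas {
  private layers = new Map<string, Layer>();

  add(layer: Layer): void {
    this.layers.set(layer.id, layer);
  }

  // Direct manipulation: update a layer while the scene keeps playing.
  move(id: string, x: number, y: number): void {
    const layer = this.layers.get(id);
    if (!layer) throw new Error(`unknown layer: ${id}`);
    layer.x = x;
    layer.y = y;
  }

  get(id: string): Layer | undefined {
    return this.layers.get(id);
  }
}

const scene = new Canvas();
scene.add({ id: "clip1", kind: "video", x: 0, y: 0, rotation: 0 });
scene.add({ id: "sticker", kind: "2d", x: 10, y: 10, rotation: 45 });
scene.move("sticker", 50, 80); // the viewer drags the sticker mid-playback
```

The point of the sketch is the design choice, not the code itself: when all asset types share one scene and one transform model, "watching" and "editing" become the same interaction.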
This is exactly what we are building in Sagaverse, and we’ve already seen proof that merging the creation and consumption of video and interactive visual assets is a new content format that people didn’t know they needed, but realize they love and want in their lives.