Envisioning the Future of Journalism

If we let go of everything we take as a given, what could the future of journalism look like? A scenario for perfect access to information, inspired by this year's SXSW conference.

Lina Timm
Media Lab Bayern
Mar 17, 2018


When you design for a radically new technology, you have to get rid of all the boundaries. At SXSW 2018, I heard a ton of amazing talks on the future of tech, media and society, which I covered in my daily SXSW Breakfast Taco (in German).

Some of the talks really struck me: M. Pell on envisioning holograms, Amy Webb on the emerging tech trends, Rohit Bhargava on non-obvious societal trends, and a couple of talks on digital assistants and artificial intelligence.

A lot of these tech trends will have a huge impact on the distribution of information, and thus on journalism, though the industry isn't quite sure what to actually do with them yet. Mike Pell presented "envisioning", a method for designing for technologies that didn't exist before. That inspired me to imagine what journalism could look like if we let go of all constraints for once: if we just forgot how we've always done things and ignored every question of feasibility.

A short intro on “Envisioning”

How do you invent for a technology that is brand new to the market, in Pell's case holograms? He calls his method "envisioning": you simply imagine things while wandering through the world. In terms of holograms: take an empty space in the real world and try to imagine what could be there. Be creative!

The hard part: as a designer, you have to completely shift your mindset. Holograms, for example, are not limited by a box on a screen; the whole world is your playground. Pell's advice: just start with the easiest prototyping method and describe what could be there and what it could look like. More advanced techniques: sketching, photos, videos, or even coding. As I'm not that into coding, I'll mostly stay with words.

What do you see in the yard? Just imagine something! Hint (and hologram designers' running gag): it's always a dinosaur. 🙃

🙋 User’s Future of Journalism

So what would perfect access to information look like for the user?

  • You get exactly the information you don't know yet.
    Nobody has to skim paragraphs explaining things they already understand. A kid asking the digital assistant about a topic gets a different explanation than her mother would.
  • You get the story or information in the medium that suits you best in your current situation.
    Text, photo, video, audio, infographics, holograms… whatever helps you access and understand the information right now. Audio while you're driving, video when you're home after work and too tired to read, holograms when you need to see objects against the real world to really understand their impact (e.g. a T-Rex in your yard).
  • You get the information exactly when you need it or have time for it.
    Because you don't always have time to deal with the evolution of the T-Rex right now, but you'd sure be up for a two-hour documentary on your next lazy Sunday afternoon.
  • Everyone sees the same object, but facts and information adjust to individual knowledge.
    Think holograms: both of us see the same hologram of a T-Rex. I'm interested in when and how they actually went extinct, while you're fascinated by its anatomy. So I'm shown facts and a moving scene of the meteorite impact, while you get measurements of its bones and facts about what it actually did with its ridiculously small arms.
  • Your digital assistant is your one-stop shop for all content.
    I have to admit, I'm struggling with the distribution part. It has to be highly personalized, so the digital assistant would be my choice for the moment. No gazillion apps anymore, rather a one-stop shop for aaallll the content that is out there. On the hardware side, the smartphone already covers this, but it lacks the software that makes the smart choice about what to show you and what not. Not to mention all the constraints a six-inch glass screen imposes on showing content.
    As a digital assistant is always by our side, it can monitor and store everything we see, hear and do. That way it knows exactly what information we already have and what is missing. (A rough sketch of what that selection logic could look like follows after this list.)
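
Purely to make that idea concrete, here is a minimal Python sketch of such selection logic. Everything in it is made up for illustration (the names UserContext, UserProfile, pick_format, personalize and the toy rules); a real assistant would model context and knowledge far more richly.

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    activity: str        # e.g. "driving", "couch"
    has_screen: bool
    free_minutes: int

@dataclass
class UserProfile:
    known_facts: set = field(default_factory=set)

def pick_format(ctx: UserContext) -> str:
    """Pick the medium that fits the user's current situation."""
    if ctx.activity == "driving" or not ctx.has_screen:
        return "audio"
    if ctx.free_minutes >= 120:
        return "video"   # enough time for the two-hour documentary
    return "text"

def personalize(story_facts, profile: UserProfile, ctx: UserContext):
    """Drop what the user already knows, then package the rest."""
    new_facts = [f for f in story_facts if f not in profile.known_facts]
    profile.known_facts.update(new_facts)  # the assistant remembers what it delivered
    return {"format": pick_format(ctx), "facts": new_facts}

# A commuter who already knows the T-Rex is extinct gets only the new bits, as audio
profile = UserProfile(known_facts={"t_rex_is_extinct"})
ctx = UserContext(activity="driving", has_screen=False, free_minutes=20)
story = ["t_rex_is_extinct", "meteorite_impact_66m_years_ago", "tiny_arms_purpose_debated"]
print(personalize(story, profile, ctx))
# {'format': 'audio', 'facts': ['meteorite_impact_66m_years_ago', 'tiny_arms_purpose_debated']}
```

The shape of the logic is the point: context picks the medium, and the assistant's memory of what you already know filters the facts.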

📝 Newsrooms’ Future of Journalism

And what would change in the newsrooms?

  • In news production, the human mind only comes in for opinion and in-depth analysis.
    Research will largely be done by machines and arrive pre-sorted. You check in a few times to give the machine a hint about which trail might be worth digging into. Editors can be sure a topic has been investigated sufficiently when they start combining pre-written factual paragraphs with their analysis and opinion.
  • Topics that interest your audience will be automatically crawled from the web.
    That actually already exists: a startup in my incubator built a crawler that extracts topics from forums and social networks, pretty much by accident. An amazing approach to user-centered topic selection! (See the sketch after this list.)
  • When publishing content, you push a button and the machine does the rest.
    The "rest" means: combining paragraphs individually, so everyone gets the information they need and don't yet know; converting the content into text, audio, video, infographics, holograms; chopping it into bits and repackaging it into a witty whatever-suits-the-user-in-that-very-moment. The repackaging could also happen in the digital assistant.
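
To give the crawling idea above a concrete shape, here is a toy Python sketch. The sample posts, the stopword list and the plain word-counting are stand-ins I made up; a real crawler would fetch live pages and use proper language processing.

```python
import re
from collections import Counter

# Sample forum posts, standing in for crawled pages
posts = [
    "Anyone else worried about rent prices in Munich going up again?",
    "Rent prices are insane, my landlord raised the rent twice this year.",
    "Munich transit strike tomorrow, how are you getting to work?",
]

STOPWORDS = {"the", "a", "an", "in", "to", "are", "is", "my", "this",
             "anyone", "else", "about", "again", "how", "you", "and"}

def extract_topics(texts, top_n=3):
    """Count non-stopword terms across posts as a crude topic signal."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS and len(w) > 3]
    return Counter(words).most_common(top_n)

print(extract_topics(posts))
# [('rent', 3), ('prices', 2), ('munich', 2)]
```

Even this crude counting surfaces what the audience is actually talking about, which is the whole pitch of user-centered topic selection.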

Oh, you mean this way we don't need any humans at all in this process? I think otherwise. Humans are great at, and irreplaceable for,

  • finding stories that are surprising and non-obvious,
  • having an opinion,
  • and making sense of the world.

How far are we and what do we need for all this to happen?

Short answer: Don’t know. Longer answer: A few trends and talks from SXSW helped me shape this scenario. (And a lot of wishful thinking.)

🤓 Progress in Mixed Reality

  • Pell compared current VR devices to the first ever mobile phone. I never thought about it that way; I'd always wondered why people were so fascinated by these huge, heavy, uncomfortable VR glasses. But if you see them as prototypes rather than high-end devices, even I am eager to see the next generations of hardware. Pell is sure that in the near future we'll get mixed reality through far less intrusive devices. The next round of MR glasses might be coming soon, and there are even first experiments with contact lenses!
  • Also, five years from now, the tech to build things in mixed reality will be much cheaper. Technology in this area is accelerating fast at the moment.
Yep, compared this way, VR devices totally make sense. As prototypes.

📱 2018 is the Beginning of the End of the Smartphone

We have reached "peak smartphone", says Amy Webb. Our devices won't get any smarter. After the smartphone spent years absorbing other devices, we're now starting to diversify again: we wear smartwatches, smart earbuds, smart glasses. Next in line: digital assistants and their need for a non-visual user interface. Alexa is only the beginning.

You can find her whole presentation here, the 2018 Emerging Tech Trends report here, and an amazing Twitter rant on why we need more funding to find business models for media here.

📣 Voice & non-visual user interface

The biggest challenge for digital assistants at the moment is context; all the big players in the field are working on it. Only if assistants can narrow down the options can they work efficiently. Today, they are lost at a simple request like "Play Frozen!". Do you mean the film? The soundtrack? Buying the doll from Amazon?
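
How could an assistant narrow that down? Here is a toy Python sketch of context-based disambiguation; the context signals and scoring rules are invented purely for illustration.

```python
# Hypothetical context signals an assistant might have at hand
context = {"device": "kitchen_speaker", "time": "evening", "kids_present": True}

def score(intent, ctx):
    """Naive hand-written rules; a real assistant would learn these from data."""
    s = 0
    if intent == "play_movie":
        s += 2 if ctx["device"] == "tv" else -2      # no screen, no movie
    if intent == "play_soundtrack":
        s += 2 if ctx["kids_present"] else 0
        s += 1 if "speaker" in ctx["device"] else 0
    if intent == "buy_doll":
        s -= 1                                       # purchases want explicit confirmation
    return s

intents = ["play_movie", "play_soundtrack", "buy_doll"]
print(max(intents, key=lambda i: score(i, context)))
# play_soundtrack: a kitchen speaker with kids around suggests the songs
```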

But: research shows that our voice can tell whether we are healthy, how we feel, how old we are, how big the room around us is and how many people are in it. What material the walls are made of, and even roughly where we are, simply from fluctuations in the power grid that are inaudible to human ears. These "voiceprints" work as well as fingerprints. They might therefore at least become good passwords, not to mention their power to tell apart who is making a request to a digital assistant.

🤖 AI empowers digital assistants

Security control at the airport, the self-braking car: all of this already uses AI. ANI, to be specific: Artificial Narrow Intelligence, intelligence that specialises in one particular field. The next frontier is "reinforcement learning", where an agent develops strategies by itself to maximise its reward. It works for dogs; it will work for machines. The result: these systems outperform human-written programs.
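
Since reinforcement learning is easy to show in miniature, here is a self-contained Python sketch: an agent in a five-position corridor learns, purely from trial, error and a single reward, to walk right. The environment and all parameters are toy choices of mine, not anything presented at SXSW.

```python
import random

N_STATES = 5                 # positions 0..4; the reward sits at position 4
ACTIONS = [-1, +1]           # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally, otherwise exploit the best known action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# The learned greedy policy typically walks right from every state: [1, 1, 1, 1]
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```

Nobody programmed "go right"; the strategy emerges from the reward alone, which is exactly the dog-training analogy.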

There is still a lot of research needed to get from ANI to real AI. But Siri co-creator Adam Cheyer said he has seen developments in AI over the last eight years that he didn't think would be possible within his lifetime, and he has been working on AI for 30 years now. The biggest step so far: synthesis, making connections. A machine no longer just sees a ball, three hairy things with four legs each, a lot of yellow pixels, some dark blue ones on the left and lighter blue ones above; it can tell that three dogs are playing ball at the beach.

Let’s discuss!

Could this be the future of journalism? Or is there another one? I am eager to hear all your thoughts on this! And if you know of any technology that already covers some of these ideas or could be used for this, please reach out to me!

Liked this post? I would be thrilled if you’d give it a 👏!
