How Co-Bots can help us to understand the pandemic

ILLO · Published in Algotv · 5 min read · Dec 15, 2020

Robots have been capturing humanity’s imagination for quite some time: from science fiction to becoming an integral part of our daily existence, there’s almost no area of life that’s been left untouched by our technological counterparts.

A revolution of this sort is bound to be met with enthusiasm, but also with a hearty dose of fear and skepticism. So it's little wonder that even those of us working in creative, tech-savvy industries worry that robots might eventually outperform us and steal our jobs.

However, this competitive mindset is giving way to a new, collaborative one, where we don't have to pick sides and define robots as either friends or foes, but as colleagues.

In fields like medicine, machine capability and human direction are now working together towards greater precision, each contributing what it does best.

From performing surgeries alongside doctors and contributing to more effective medical investigations, to caregiving and remote consultations, robots have truly made their mark in the medical field.

There are exciting developments in diagnostic accuracy, for example, with deep learning algorithms interpreting pathology images. In a collaboration between Google and the NHS, artificial intelligence outperformed doctors at breast cancer screening. But things get even more interesting when analysis is done jointly by pathologists and machine learning models, leading to even higher accuracy. So there truly is strength in numbers (and in collaborating).

This goes beyond the usual suspects, like science or the automotive industry, reaching the creative fields as well.

RADAR, the world’s first automated local news agency, pairs reporters with technology to get more stories published faster and at an unprecedented scale: 180,000 articles were filed by six reporters in a single year, with automated tools amplifying their reporting.

Recently, OpenAI released its third-generation language model, GPT-3, which uses deep learning to produce human-like text. Its applications range from an AI able to write an entire article from only a title and a short opening paragraph, to one able to read a legal contract and explain it to a second grader.
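As a rough illustration of how such a tool can be used, here is a minimal sketch that asks GPT-3 to continue an article from a title and an opening paragraph via OpenAI's completions API; the engine name, prompt format and parameters are assumptions for illustration, not any specific product's setup.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have GPT-3 API access

title = "How Co-Bots can help us to understand the pandemic"
intro = "Robots have been capturing humanity's imagination for quite some time."

# Ask the model to continue the article from the title and opening paragraph.
prompt = f"Title: {title}\n\n{intro}\n\nContinue the article:\n"

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine name at launch; newer names may differ
    prompt=prompt,
    max_tokens=300,
    temperature=0.7,
)

print(response.choices[0].text)
```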

Staying in the creative field, Remove.bg uses AI to remove the background from any photo (and, more recently, from videos too), helping journalists and designers cut tedious rotoscoping out of their daily routines.
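Remove.bg also exposes this as a simple HTTP API, so stripping a background takes only a few lines of code. The sketch below assumes you have an API key from remove.bg and a local photo; the file names are placeholders.

```python
import requests

# Send a local photo to remove.bg and save the cut-out result as a PNG.
with open("portrait.jpg", "rb") as image:
    response = requests.post(
        "https://api.remove.bg/v1.0/removebg",
        files={"image_file": image},
        data={"size": "auto"},
        headers={"X-Api-Key": "YOUR_API_KEY"},
    )

response.raise_for_status()
with open("portrait-no-bg.png", "wb") as out:
    out.write(response.content)
```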

“Cobots”, as these collaborative robots (and, essentially, coworkers) are called, are slowly becoming man’s new best friend, optimizing and redefining the way we work and changing the way we approach design. After all, why waste time on long, tedious procedures when they can be done automatically, leaving us free to focus on our real added value as human beings?

Algo, our data-visualization practice focusing on Video Automation, was born out of this idea of human-robot collaboration and software that amplifies our creativity.

As we find ourselves in the midst of the second wave of the coronavirus pandemic, and plenty of confusion, the role of reliable data providers such as Johns Hopkins University has become more and more important. Ever since the beginning of the pandemic, the University has been one of the most trusted sources of reliable pandemic data from all over the world.

With Algo’s automated help, all this information is summed up in a campaign of daily short videos, providing a constant update on the current US situation, as well as a final quick overview of the global one, to all of Johns Hopkins’ Twitter followers and YouTube subscribers.

The Algo team came up with an automated video solution that pulls data from the Johns Hopkins open-source data feed on a daily basis. The data is then integrated and displayed in a tailor-made, animated video template designed to communicate all the relevant information at a glance.
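Algo's own pipeline isn't public, but the underlying Johns Hopkins CSSE data is an open GitHub repository of daily report CSVs. A minimal sketch of the data-pull step could look like the following, assuming the standard daily-report column layout; the aggregation shown is illustrative, not the template's actual logic.

```python
from datetime import date, timedelta

import pandas as pd

# Johns Hopkins CSSE publishes one CSV per day in an open GitHub repository.
BASE = (
    "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
    "csse_covid_19_data/csse_covid_19_daily_reports/{:%m-%d-%Y}.csv"
)

report_day = date.today() - timedelta(days=1)  # yesterday's file is the latest complete one
daily = pd.read_csv(BASE.format(report_day))

# Aggregate the US rows into the headline figures a video template might display.
us = daily[daily["Country_Region"] == "US"]
print(us[["Confirmed", "Deaths", "Recovered"]].sum())
```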

Every morning at 7:00am (New York time), the Algo platform autonomously creates a new daily video, without any human intervention required.
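How the daily trigger is implemented isn't described, but scheduling an unattended job at a fixed New York time is straightforward; here is one hypothetical sketch using the APScheduler library, with build_daily_video standing in for the real render-and-publish step.

```python
from apscheduler.schedulers.blocking import BlockingScheduler

def build_daily_video():
    """Placeholder for the pipeline: fetch data, fill the template, render, publish."""
    print("Rendering today's video...")

scheduler = BlockingScheduler(timezone="America/New_York")
# Fire every day at 07:00 New York time, regardless of the server's local clock.
scheduler.add_job(build_daily_video, "cron", hour=7, minute=0)
scheduler.start()
```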

From a color palette that automatically shifts its intensity according to the pandemic trend, all the way to shapes and a selection of sounds that accompany each situation (be it neutral, negative or positive), every video is a unique piece of data-generated design. Big numbers, maps, charts and all the design elements work together to create a clear, trustworthy overview of the daily evolution of such a complex phenomenon.
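To make the idea of a trend-driven palette concrete, here is a hypothetical sketch of how a week-over-week change in new cases could be mapped to a mood and a color intensity; the thresholds, names and values are purely illustrative, not Algo's actual design rules.

```python
def palette_for_trend(new_cases_7d_avg_today, new_cases_7d_avg_last_week):
    """Pick a mood and a color intensity from the week-over-week case trend.

    Thresholds and labels are illustrative, not Algo's actual design rules.
    """
    change = (new_cases_7d_avg_today - new_cases_7d_avg_last_week) / max(
        new_cases_7d_avg_last_week, 1
    )
    if change > 0.05:
        mood = "negative"   # cases rising: warmer, more saturated palette
    elif change < -0.05:
        mood = "positive"   # cases falling: cooler, calmer palette
    else:
        mood = "neutral"
    intensity = min(abs(change), 1.0)  # 0 = flat trend, 1 = steep change
    return mood, intensity

print(palette_for_trend(150_000, 120_000))  # ('negative', 0.25)
```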

Once the videos are rendered in the cloud, Algo uploads them straight to the chosen outputs: the Johns Hopkins COVID-19 resources website and their Twitter and YouTube channels. This way, the JHU team can focus on data quality (and the more important tasks at hand), leaving all the hassle of video production and distribution to the robot.

The videos reach approximately 15,000 people every day on Twitter and YouTube, where the campaign is still ongoing as the second wave unfolds. The Johns Hopkins tweets generate a vibrant online discussion every day, and are retweeted to demand action from local and federal governments (see examples here, here, or here). The campaign has also been featured on Politico and in Time Magazine, which named the whole set of JHU COVID-19 resources one of its top inventions of 2020.

That’s what we believe the future holds for the news and creative industries: a working revolution enabling an exciting upgrade to our professional lives.
A future we’re currently building with the help of co-bots.

ILLO is a design studio with a focus on motion design, illustration & set design. We aim for a minimal and colourful aesthetic and clear storytelling. https://illo.tv