Designing for the Autonomous Economy — The Future B.I. Stack

Matthew Falla
15 min read · Sep 21, 2019


At Signal Noise we have been working at the intersection of tech, data and design for the last decade. We work with B2B clients operating in data-rich environments, and our output splits broadly across content marketing and digital product development. We’ve always been fascinated by the cutting edge and driven by what hasn’t yet been thought of. Recently, in my role as Head of Innovation at Signal Noise, I’ve been spending a lot of time thinking about the future of our discipline…

A few years ago, I started to become concerned that data visualisation — and the studios that produce it — were facing an uncertain future. The rise in the value of data showed no signs of slowing down, but it felt as though audiences were losing patience with the work of surfacing insight.

In a media landscape dominated by the infinite thumbing through of imagery, it takes much more effort to navigate and absorb complex visualisations. Just ask Tony Stark — all that data viz is stressful and hard! Wouldn’t it all be much easier if some AI just gave you the answer?

If demand for data visualisation was under threat from user apathy, the supply of it was also at risk of becoming dominated by the big platforms — Tableau, Power BI and so on. As a small studio offering bespoke visualisation products, it’s hard to compete for corporate dollars if you’re up against these guys.

It’s been very interesting to see that both Tableau and Looker have been acquired this year in multi-billion dollar deals. Whilst this is a great validation of the value of data viz, it equally points towards its potential commoditisation and the increase in ‘good-enough’ solutions — the dashboardification of everything!

But recently, I’ve been thinking differently. I think there are signals that point to a new frontier for data-design. One that not only opens up new opportunities for designers, developers, strategists and studios but that responds to what I see as an imminent social imperative.

Together, these signals point to what we see as the future business-intelligence stack: a new set of tools and visualisation formats that go far beyond the dashboard, equipping organisations for a world that is both data-driven and mediated by machines.

Let’s take a look first at digitalisation and how it is changing things…

You are standing on a busy street corner, waiting for a cab to arrive. You watch impatiently as a miniature Toyota Prius slowly crawls its way across a map. When it arrives, your destination is revealed to the driver. Within seconds a route has been recommended and you set off on the journey.

A display on the driver’s screen announces a newly optimised route. Following its suggestion, your driver swerves to take a sharp right turn. The car’s telemetry systems log the turn, the speed it was taken at and the lack of an indicator signal before it was made.

Next week these events will be brought up in a conversation about the driver’s increased insurance premium, but for now they go unnoticed.

You feel a buzz on your wrist. Looking down, you see a notification on your watch telling you that your heart rate is higher than normal and that you’ve been inactive for ten minutes. You shrug and dismiss it — hardly surprising given how late you’re running.

In the space of just a few minutes, almost imperceptibly, you have touched systems within the sectors of mobility, insurance and healthcare that sensed a situation, computed possible responses, selected one and recommended (or even took) appropriate action. This phenomenon is at the centre of what the economist and complexity thinker W. Brian Arthur calls the Autonomous Economy.

A large part of the modern world is now algorithmically controlled. Not only is it automated, it is not far off becoming cognitive, with systems capable of making their own decisions and being given permission to do so. For me, this raises a number of interesting questions…

1. It’s clear that The Machines Don’t Need Us. But do we need us?
2. Who gets to decide what these machines choose to do?
3. How are they making those decisions?

So, do we need us?

Based on no data whatsoever, I think it’s safe to assume there is a natural inverse correlation between our acceptance of autonomous systems and the significance of their impact on people.

The point being that there are still some things that we don’t want to completely hand over to the machines. But there’s no escaping that in this data-driven world, if we want to stay informed and effective, we need to be plugged in to all those streams of data.

Computers might be happy to chew on the ball of data and logic that describe the complex tangle of the world — but humans need something more digestible.

Whether at the scale of your own heart, an airplane or an entire city, we rely on abstractions of all this data to make sense of it — visual representations that allow us to see and use the information that’s hiding in plain sight in the real world.

Recently at Signal Noise, we’ve been working on a number of projects for industrial clients. One of the clear trends that is emerging in that space is the concept of Digital Twins — data-driven, virtual representations of physical things that are used to gain insights about them and predict their future states. And it’s digital twins that make up the first component of our future BI stack.

The concept originally comes from NASA, where it makes a lot more sense to experiment with a digital model than to break a billion-dollar spacecraft. Today, digital twins are used extensively in manufacturing and engineering, where a typical twin monitors the live state of a single machine or component.

But already, people are starting to think about them in much broader terms — digital twins that describe more intangible systems and processes. In our own work, we’ve explored digital twins of the crowd at a football match, the social graph of financial traders and fleets of commercial vehicles.

Elsewhere, I’ve recently met with a start-up called Tag Team, who are using Ultra Wide Band technology to track large numbers of people, whether to manage crowds at the Hajj pilgrimage or to monitor the movements of gang-affiliated prisoners in US jails.

A digital twin could be as complex as a Formula 1 pit wall — or it could be as simple as one of Uber’s miniature Priuses — or should that be Prii?

For me, there are five useful dimensions that together provide a framework for thinking about different types of digital twins. These can be considered as sliding scales, from the simplest implementations to the most complex.

Purpose
What is the strategic objective for the digital twin? What value is it intended to provide to its owner?

Focus
What type of entity or system does the digital twin describe? Is it a component, a vehicle, a person or a place, or perhaps a more intangible object such as an entire company?

Scale
Does the twin describe a single unique entity or does it aggregate data from many things?

Scope
Will this twin only ever know its own data or will it benefit from a more holistic view that includes data from suppliers, customers or even third party sources?

Mode
How does the digital twin operate? Is it a tool that is controlled by a human user? Does it work collaboratively with humans or is it a fully autonomous agent in its own right?
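
To make the framework a little more concrete, here is a minimal sketch of how the five dimensions might be captured as a data model. The enum values and the example trip twin are my own illustrative assumptions rather than anything from our client work; in reality each dimension is a sliding scale, not a fixed set of categories.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative values only; each dimension is really a sliding scale.
class Focus(Enum):
    COMPONENT = "component"
    VEHICLE = "vehicle"
    PERSON = "person"
    PLACE = "place"
    ORGANISATION = "organisation"

class Scale(Enum):
    SINGLE_ENTITY = "single entity"
    AGGREGATE = "aggregate of many entities"

class Scope(Enum):
    OWN_DATA = "own data only"
    ECOSYSTEM = "includes supplier, customer or third-party data"

class Mode(Enum):
    TOOL = "controlled by a human user"
    COLLABORATIVE = "works alongside humans"
    AUTONOMOUS = "acts as an independent agent"

@dataclass
class DigitalTwinProfile:
    purpose: str          # strategic objective / value to the owner
    focus: Focus          # what kind of entity the twin describes
    scale: Scale          # one unique thing, or a fleet of them
    scope: Scope          # whose data it can see
    mode: Mode            # how autonomously it operates

# Example: a ride-hailing trip twin, sitting at the simple end of each scale.
trip_twin = DigitalTwinProfile(
    purpose="show the rider where their cab is and when it will arrive",
    focus=Focus.VEHICLE,
    scale=Scale.SINGLE_ENTITY,
    scope=Scope.OWN_DATA,
    mode=Mode.TOOL,
)
```

Profiling a twin like this, even informally, is a quick way to see where a given project sits along each of the five scales.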

Let’s look at some examples

Festo dashboard
Basic digital twins that can monitor the current state of single components are used extensively in industry today.

Coinbase Pro
Trading software provides digital twins of conceptual objects, such as currency pairs or indices, to allow traders to make buy, sell or hold decisions.

Virtual Singapore
Virtual Singapore provides a collaborative, data-rich environment to help policy teams make long-term decisions in areas such as infrastructure, resource management and urban planning.

Waze
Navigation apps use vehicle passengers as a network of sensors to generate digital twins of cities. They prescribe routes automatically, but continually optimise in a collaborative way as drivers adjust routes based on their own knowledge or preferences.

Out of all of these dimensions, it is the mode that a digital twin operates in that I find the most interesting: specifically, what happens when digital twins start operating with full autonomy. Because, remember, The Machines Don’t Need Us. The part of a digital twin that we see, the visual presentation layer, is only part of the story.

Over the last 18 months, we’ve been working as part of a consortium looking at ways to improve the design and engineering of manufacturing lines for electric powertrains. During one conversation, I heard how reluctant machine makers are to provide design information about their products — their secret sauce — to manufacturers as part of a digital twin.

But equally, the same machine company would love to get hold of the manufacturer’s output data — something that would normally be seen as highly sensitive business information.

There is clearly an opportunity for an exchange of value in a situation like this. There is no reason that these two partners shouldn’t trade this data — without the need for human visibility and with a reduced potential for any abuses of trust.

By adding a layer of commercial logic and control alongside the operational one and by equipping a digital twin with a cryptocurrency wallet, it is easy to imagine digital counterparts to physical things that can function like mini businesses in their own right — trading their digital assets and marketing themselves just like any other company.
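
As a thought experiment, here is a toy sketch of what that commercial layer could look like. None of these class or method names come from any real agent framework; the wallet is just a number and the ‘market’ is two Python objects, but it shows the shape of the idea: each twin holds a balance, prices its data assets and settles trades against a simple policy, with no human in the loop.

```python
from dataclasses import dataclass, field

@dataclass
class Wallet:
    balance: float = 0.0

    def pay(self, other: "Wallet", amount: float) -> bool:
        if amount > self.balance:
            return False
        self.balance -= amount
        other.balance += amount
        return True

@dataclass
class TwinAgent:
    """A digital twin with a commercial layer alongside the operational one."""
    name: str
    wallet: Wallet = field(default_factory=Wallet)
    catalogue: dict = field(default_factory=dict)  # dataset name -> asking price

    def offer(self, dataset: str, price: float) -> None:
        self.catalogue[dataset] = price

    def buy(self, seller: "TwinAgent", dataset: str, max_price: float) -> bool:
        price = seller.catalogue.get(dataset)
        if price is None or price > max_price:
            return False  # no offer, or too expensive for our buying policy
        return self.wallet.pay(seller.wallet, price)

# The machine maker's twin sells anonymised design parameters;
# the manufacturer's twin sells line-output statistics. No human in the loop.
machine_maker = TwinAgent("machine-maker-twin", Wallet(100.0))
manufacturer = TwinAgent("manufacturer-twin", Wallet(100.0))

machine_maker.offer("design_parameters", 40.0)
manufacturer.offer("line_output_stats", 25.0)

manufacturer.buy(machine_maker, "design_parameters", max_price=50.0)
machine_maker.buy(manufacturer, "line_output_stats", max_price=30.0)

print(machine_maker.wallet.balance, manufacturer.wallet.balance)  # 115.0 85.0
```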

It turns out that I am not the only one to have had this thought. A new way of thinking about ecosystems of these autonomous digital entities is emerging. For example, a UK startup, Fetch.ai, is developing a decentralised digital world in which useful economic activity takes place:

“This activity is performed by Autonomous Agents…digital entities that can transact independently of human intervention and can represent themselves, devices, services or individuals. Agents can work alone or together to construct solutions to today’s complex problems.”

A German firm, Spherity, is exploring similar concepts, building, in their words, “systems that bridge humans, objects, machines and algorithms via their digital representations, allowing new forms of secure machine-focused commerce and decentralization.”

And recently, a new decentralised marketplace has been announced by the IOTA Foundation.

In the world of financial services, this idea has already played out, with algorithmic systems now estimated to initiate over 80% of daily trading in US equities. But these types of transactions are only the start.

Some of the world’s most valuable companies extract huge value from the data around each of us. They are able to stitch together data from a myriad of sources into knowledge graphs capable of predicting where we will be, who we’ll be with, what we will look at and what we’ll buy.

As billions of things start to come online, there is a similar opportunity to unlock value. Within the new knowledge graph made up of all of these entities and their relationships, you can imagine:

Automated analysts
Searching for signals about a company’s performance

Consultative bots
Scanning the network to learn about the effectiveness of components and making recommendations for future models

Programmatic trend watchers
Able to see the patterns of technology adoption in different sectors

Algorithmic deal-makers
Connecting the buyers and sellers of digital assets.
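
As a sketch of the kind of query these agents would run, here is a toy knowledge graph built with the networkx library. The companies, sectors and technology are invented; the ‘programmatic trend watcher’ is just a function that counts which sectors have adopted a given technology.

```python
import networkx as nx
from collections import Counter

# A toy knowledge graph of companies, sectors and technologies.
# All entities and relations here are invented for illustration.
G = nx.DiGraph()
G.add_node("Acme Motors", kind="company", sector="automotive")
G.add_node("Volt Grid", kind="company", sector="energy")
G.add_node("Castor Logistics", kind="company", sector="logistics")
G.add_node("solid-state battery", kind="technology")

G.add_edge("Acme Motors", "solid-state battery", relation="adopted", year=2018)
G.add_edge("Volt Grid", "solid-state battery", relation="adopted", year=2019)
G.add_edge("Castor Logistics", "solid-state battery", relation="evaluating", year=2019)

def adoption_by_sector(graph: nx.DiGraph, technology: str) -> Counter:
    """A 'programmatic trend watcher': count which sectors have adopted a technology."""
    sectors = Counter()
    for company, _, data in graph.in_edges(technology, data=True):
        if data.get("relation") == "adopted":
            sectors[graph.nodes[company]["sector"]] += 1
    return sectors

print(adoption_by_sector(G, "solid-state battery"))
# Counter({'automotive': 1, 'energy': 1})
```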

It is these ecosystems of autonomous agents, and specifically the interfaces that allow their creation, calibration and control, that I believe make up the second component of the future BI stack.

They present big challenges for designers, opening up questions such as:

  • How will humans make sense of these hyper-complex environments?
  • How will we navigate from a single component to an entire corporation?
  • How will lawyers monitor the web of smart contracts this will involve?
  • How will R&D teams identify the best opportunities for research?
  • How will P&L owners understand where to invest and what their returns are?

I’m afraid I don’t have the answers (yet), and I suspect that we will need a multitude of different lenses onto this universe. But it feels as though these agent ecosystems present a huge opportunity not only for the discipline of data-design, but for new business models and forms of growth that rely more on the extraction of knowledge than of precious natural resources.

It is safe to say the technology will come. There is no lack of incentives (financial, intellectual and reputational) that will drive people to succeed in this endeavour. Given that these systems will be highly automated, it’s likely that they will be making decisions with a laser-like focus on particular outcomes. We will need to start thinking carefully about what those outcomes are and what the second- and third-order effects will be.

There is an emerging field of practice called Explainable AI. The thinking is that as the decisions and predictions made by AI-enabled systems become more profound, the need for transparency, traceability and fairness becomes more critical.

IBM, in particular, seem to be leading the way here. They have released a set of toolkits to help stakeholders trust the accuracy, fairness and explainability of automated decision making.

For example, they provide simple interfaces that reveal any biases that might be present (such as those around the age of insurance applicants or the race of prisoners up for parole) and offer a way to examine the results of different scenarios that better balance fairness and accuracy.
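
To give a flavour of the kind of check such interfaces surface, here is a stripped-down sketch of a disparate impact calculation on a made-up insurance-pricing dataset. This is not IBM’s toolkit (AI Fairness 360 has its own, much richer API); it just shows the underlying arithmetic: compare favourable-outcome rates across groups and flag ratios that fall below the commonly used four-fifths threshold.

```python
# Made-up decisions: (applicant age group, premium was increased?)
decisions = [
    ("under_40", False), ("under_40", False), ("under_40", True), ("under_40", False),
    ("over_40", True),  ("over_40", True),  ("over_40", False), ("over_40", True),
]

def favourable_rate(group: str) -> float:
    """Share of a group that did NOT receive a premium increase."""
    outcomes = [not increased for g, increased in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Disparate impact: favourable-outcome rate for the unprivileged group
# divided by the rate for the privileged group. The "four-fifths rule"
# treats anything below 0.8 as a signal worth investigating.
di = favourable_rate("over_40") / favourable_rate("under_40")
print(f"over_40 favourable rate:  {favourable_rate('over_40'):.2f}")   # 0.25
print(f"under_40 favourable rate: {favourable_rate('under_40'):.2f}")  # 0.75
print(f"disparate impact:         {di:.2f}")                           # 0.33 -> flag for review
```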

I think these early explorations around explainable AI are a positive move. But for me, they sidestep an important consideration: what was the intent behind the AI’s calibration? When artificial intelligence is being used to optimise towards the goals of the few, a new approach of radical transparency is needed to hold those goals to account.

What is the AI being asked to optimise for? To paraphrase Shoshana Zuboff — who decides what the machine decides?

In the case of traditional companies, senior management and board members will clearly play a role. But what about completely autonomous organisations? One in which there is no management, only algorithms? Presumably control and calibration would shift to the ultimate owners of an entity: its shareholders.

For years, corporations have existed to benefit this group, but the tide is turning. Priorities are shifting.

Since 1997, the Business Roundtable, America’s most influential group of corporate leaders, has agreed on one principle: “The paramount duty of management and of boards of directors is to the corporation’s stockholders.”

As society faces mounting challenges like the climate crisis and rising economic inequality, businesses are rethinking this blind devotion to shareholders.

Last month, the same group released a statement on the purpose of a corporation that is radically different. The statement, signed by the CEOs of almost 200 of the US’s largest companies, including the likes of Apple, General Motors and Walmart, proposes that companies should have a broader responsibility to society:

To deliver value to their customers
To invest in their employees
To deal fairly and ethically with their suppliers
To support the communities in which they work
To protect the environment through sustainable practices

Again, this feels like a positive step, but it requires that all stakeholders are considered when a business is making decisions. It requires businesses to consider multiple scenarios and then act to balance the distribution of benefit more evenly. In the best case, companies will go about this endeavour with integrity. They will need tools to explain and justify their decisions. In the worst case, they will continue to focus on profit and shareholder value above all else. We will need tools to scrutinise their decision making process.
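
To illustrate what ‘considering multiple scenarios and balancing the distribution of benefit’ could even mean in practice, here is a deliberately crude sketch. The scenarios, stakeholder groups and numbers are all invented; the point is only that the trade-off can be made explicit and inspectable.

```python
# Invented payoffs (arbitrary units) for three candidate strategies,
# broken down by stakeholder group.
scenarios = {
    "maximise_margin":  {"shareholders": 9, "employees": 3, "suppliers": 2, "communities": 1},
    "invest_in_people": {"shareholders": 6, "employees": 8, "suppliers": 5, "communities": 4},
    "local_sourcing":   {"shareholders": 5, "employees": 6, "suppliers": 8, "communities": 7},
}

def total_benefit(payoffs: dict) -> int:
    return sum(payoffs.values())

def worst_off(payoffs: dict) -> int:
    return min(payoffs.values())

for name, payoffs in scenarios.items():
    print(f"{name:18s} total={total_benefit(payoffs):2d} worst-off group={worst_off(payoffs)}")

# A shareholder-only objective picks 'maximise_margin'; an objective that
# also weighs the worst-off stakeholder group points elsewhere. Publishing
# this kind of comparison is what scrutiny of the decision would look like.
best_for_shareholders = max(scenarios, key=lambda s: scenarios[s]["shareholders"])
most_balanced = max(scenarios, key=lambda s: worst_off(scenarios[s]))
print(best_for_shareholders, most_balanced)  # maximise_margin local_sourcing
```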

Last year, I met with the Head of Partnerships at Improbable, a unicorn software company based in London.

The company is behind SpatialOS, a computation platform that enables the creation of massive simulations and virtual worlds for use in online games. But I learnt that they also have an Enterprise division — focused on bringing powerful simulations into the corporate realm.

This division is shooting for a future that is “powered by a vast number of massive, real-time simulations, providing insights into the most important problems faced by businesses and governments, in order to enable humankind to be happier, safer and more prosperous.”

I think there is real power in this idea.

There is an opportunity to use simulations to help organisations examine alternative scenarios which consider the impact of decision making on different groups.

There’s a nice, albeit much lower-resolution, example that comes from two designers, Francis Tseng and Fei Liu, who teamed up to build an agent-based simulation that they called Humans of Simulated New York.

They wanted to explore the use of simulation to see what the world might look like if people behaved in different ways or governments instituted different policies.

To populate their city, they used a Bayesian network overlaid on NYC census data to create a synthetic dataset of plausible inhabitants: pyramids are unemployed, cubes are employed, and spheres are business owners. The simulant shapes are then colored according to Census race categories.
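
For a sense of what ‘a synthetic dataset of plausible inhabitants’ means in practice, here is a minimal sketch of sampling simulants from a toy two-node Bayesian network. The categories and probabilities are invented, not taken from the project’s actual census model.

```python
import random

random.seed(7)

# Invented conditional probabilities standing in for a census-derived
# Bayesian network: education level influences employment status.
P_EDUCATION = {"no_degree": 0.55, "degree": 0.45}
P_STATUS_GIVEN_EDU = {
    "no_degree": {"unemployed": 0.25, "employed": 0.70, "business_owner": 0.05},
    "degree":    {"unemployed": 0.10, "employed": 0.75, "business_owner": 0.15},
}

def sample(dist: dict) -> str:
    return random.choices(list(dist), weights=list(dist.values()))[0]

def make_simulant() -> dict:
    education = sample(P_EDUCATION)
    status = sample(P_STATUS_GIVEN_EDU[education])
    # In the visualisation, status maps to shape: pyramid / cube / sphere.
    shape = {"unemployed": "pyramid", "employed": "cube", "business_owner": "sphere"}[status]
    return {"education": education, "status": status, "shape": shape}

population = [make_simulant() for _ in range(10_000)]
print(sum(p["status"] == "business_owner" for p in population) / len(population))
```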

Each slice of the city’s buildings represents a different business, and each color corresponds to a different industry. So the shifting colors and heights of the city become an indicator of economic health and priority — as sickness spreads, hospitals spring up accordingly.

Here’s how Tseng describes the results:

“In many scenarios, the city collapses under inflation and gradually slouches into destitution. One by one its simulants blink out of existence. It’s really hard to strike the balance for a prosperous city. And the way the economy organized — market-based, with the sole guiding principle of expanding firm profits — is not necessarily conducive to that kind of success.”

Humans of Simulated New York is clearly quite simplistic (it was only a month-long art project), but I think it points towards an interesting future in which game mechanics and simulation technologies serve as accessible tools for assessing both public policy and corporate strategy.

Imagine what could be achieved with the resources of firms like Improbable and the skills of the data design community…

If the big corporations are serious about pursuing a purpose that works for all stakeholders — let’s see their thinking. I, for one, would love the annual report of the future to tell us the scenarios that were simulated, the possible outcomes that were considered and the justification for the decisions that were made.

One of my favourite visualisations is the Atlas of Economic Complexity. It is a tool that allows you to view the capabilities and know-how of different countries, explore global trade flows across markets and track these dynamics over time.

But it’s well, you know, complex. What if you could bring data like this into agent based simulations with the richness and accessibility of Fortnite? How might it better equip us to understand the implications of big decisions, such as — I don’t know — leaving the European Union?

Simulations, then, make up the third component of the future BI stack. Together there are three pillars:

  1. Digital twins that enable the more efficient use of assets and resources
  2. Ecosystems of autonomous agents that build marketplaces for knowledge
  3. Simulations that help us assess the impact of decisions on communities

These tools have the potential to dramatically impact our economies and societies. They could generate greater value using fewer resources; they could enable more distributed participation in markets; they could provide us the opportunity to rethink how firms work and escape the treadmill of short-term thinking. They could, of course, do just the opposite.
