pia mancini
9 min read · Dec 18, 2015

Future Scenarios for algorithmic accountability and governance

This is the long version of a post we wrote together with Farida Vis, as part of the World Economic Forum Global Agenda Council on Social Media. The post that made the cut is here.

You’re driving along in your low emissions car, listening to music via your favourite streaming service, and notice that your child has left their favourite doll in the car. No big deal. However, unbeknown to you, your music streaming service is harvesting data from your phone completely unrelated to your musical tastes, like your photographs and social network connections; the doll is ‘listening in’ and recording what goes on around it, whilst the car is actively engaging in mass-scale data fraud, tampering with emissions records.

In March Mattel announced the launch of Hello Barbie: the first interactive doll, with full Internet connectivity and the ability to record voices and conversations in order to process this data and ‘respond’ in later conversations. In August, the popular music streaming service Spotify changed its terms of service to gain access to users’ social media activity, photos, phone numbers and sensor data. In September, an NGO discovered that Volkswagen had installed software in some of its cars designed to tamper with emissions inspections: the software triggered full emissions control only when the car was being tested, switching to a less effective emissions control at all other times. According to the EPA, around half a million unaware drivers were emitting nitrogen oxides at up to 40 times the legal standard.

In all three examples, digital data collection and use is either unknown or unclear to the user, raising serious privacy concerns. Each involves actions and decision-making processes enacted by algorithms, which are essentially computational recipes for executing certain actions.

We live in a world of algorithmic predominance: social media platforms that harvest our data; recommendation algorithms that offer up new products to buy based on past purchases; search algorithms that show us tailored results based on our profiles and location; and predictive algorithms that influence our chances of getting a loan or shape how much we may pay for health insurance. Algorithms are increasingly becoming conduits through which we interact with others and with the Internet of Things; they shape how we define ourselves online and make countless known and unknown decisions for and about us.

What lurks within these ‘black boxes’? How can they be understood, governed and made accountable? These are challenging but significant political questions, which require urgent public debate. With the goal of fostering long-range thinking about this issue, we imagined a set of potential future scenarios for algorithmic accountability and governance.

The first six scenarios focus on individual empowerment and agency over algorithms and platforms. They show different arrangements along a continuum, ranging from end-users as disengaged bystanders to individuals empowered to negotiate their conditions of engagement with companies on an equal footing. Moving from scenario to scenario pushes the algorithmic accountability paradigm from opacity to transparency; from unawareness to agency; from no control over how data is used to full ownership and profit-sharing of positive externalities; from unawareness to public global agreement on the values algorithms should enshrine.

#1 — Overwhelming the unaware

Citizens are kept purposefully unaware of what happens with their data and of how, through that data, they are sorted and ranked. More than that, it is unclear how the information they interact with might be biased in different ways, overemphasising some content and censoring other content. These vast information asymmetries are achieved through impenetrable legal jargon in Terms of Service (ToS) agreements, which are opaque about how data is used. All risks and responsibilities reside with the individual user, who essentially has no agency.

#2 — Users beware

In a slightly better scenario than #1, there is more honest disclosure on the part of the company. The ToS are not an incomprehensible blurb of lawyer-speak, but a clear disclosure of third-party sharing, advertising strategies and access to private data. Publishing platforms make an effort to explain the reach of posts, that is to say the percentage of the user’s audience that is likely to see them, or how what users see is filtered based on preferences, location, and activity history. Although the focus of the ToS is to limit the information gap, the risks and responsibilities again lie with the individual, who still has no agency.

#3 — Dashboard interface

This scenario gives users agency over their own data and options: they can access their information, take control of it, and customise their type of engagement with the platform. Users can opt to make location data accessible in order to receive suggestions, but withhold access to their photos. They can agree to share their information with third-party applications in exchange for certain rewards. This opt-in/opt-out experience is made user-friendly, allowing users to tailor solutions that suit their specific individual needs through a user-facing permission control app, at a very granular level where relevant.
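As a rough illustration, the permission model behind such a dashboard might look like the following sketch. All category names and the interface are hypothetical, not any real platform’s API:

```python
from dataclasses import dataclass, field

# Illustrative data categories a platform might expose for opt-in/opt-out.
DATA_CATEGORIES = {"location", "photos", "contacts", "sensor_data", "listening_history"}

@dataclass
class PermissionDashboard:
    """Per-user, per-category consent settings; everything is off by default."""
    grants: dict = field(default_factory=dict)  # category -> set of recipients

    def opt_in(self, category: str, recipient: str = "platform") -> None:
        if category not in DATA_CATEGORIES:
            raise ValueError(f"unknown data category: {category}")
        self.grants.setdefault(category, set()).add(recipient)

    def opt_out(self, category: str, recipient: str = "platform") -> None:
        self.grants.get(category, set()).discard(recipient)

    def may_access(self, category: str, recipient: str = "platform") -> bool:
        return recipient in self.grants.get(category, set())

# Example: enable location-based suggestions, trade listening history for a
# reward from a third party, and leave photos inaccessible.
dashboard = PermissionDashboard()
dashboard.opt_in("location")
dashboard.opt_in("listening_history", "ad-network-x")
assert dashboard.may_access("location")
assert not dashboard.may_access("photos")
```

The design point is granularity: consent is a per-category, per-recipient decision the user can revoke at any time, rather than a single all-or-nothing ToS checkbox.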

#4 — Smart contract

Taking #3 a step further, users and platforms enter into specific agreements about each party’s rights and responsibilities. These contracts live on a blockchain and can execute themselves through automated trigger mechanisms built into the contract. All opt-in/opt-out decisions are specified and can include certain provisions in the event of a breach of contract. Here individual users have agency and can rely on built-in protection mechanisms. If a protection mechanism is triggered, for example because the user’s information is sent to an unknown third party, a public record is created detailing the company’s behaviour. This could then temporarily freeze this data output, to be re-activated only after the user has been compensated, which can take the form of agreed stipulated rewards (such as a six-month free subscription).
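Production smart contracts are written in blockchain-specific languages, but the trigger logic is easy to sketch. The following toy Python simulation (the names, the reward and the breach rule are all invented for illustration) models the breach-detect, public-record, freeze and compensate cycle described above:

```python
from datetime import datetime, timezone

class DataSharingContract:
    """Toy simulation of a user-platform data contract's trigger logic.
    A real smart contract would run on a blockchain; this only models the
    breach-detect, public-record, freeze and compensate cycle."""

    def __init__(self, user: str, platform: str, allowed_recipients: set):
        self.user = user
        self.platform = platform
        self.allowed_recipients = allowed_recipients
        self.frozen = False
        self.public_record = []  # append-only log standing in for the chain

    def share_data(self, recipient: str) -> bool:
        if self.frozen:
            return False  # output stays frozen until the user is compensated
        if recipient not in self.allowed_recipients:
            # Breach: write a public record and freeze further data output.
            stamp = datetime.now(timezone.utc).isoformat()
            self.public_record.append(
                f"{stamp} {self.platform} sent {self.user}'s data to {recipient}"
            )
            self.frozen = True
            return False
        return True

    def compensate(self, reward: str) -> None:
        # An agreed stipulated reward re-activates the data output.
        stamp = datetime.now(timezone.utc).isoformat()
        self.public_record.append(f"{stamp} compensated {self.user}: {reward}")
        self.frozen = False

contract = DataSharingContract("alice", "StreamCo", {"analytics-partner"})
contract.share_data("unknown-broker")               # breach: logged, output frozen
contract.compensate("six-month free subscription")  # unfreezes after compensation
print("\n".join(contract.public_record))
```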

#5 — Shared Profits

Personal data is a commodity. Social media users create potentially valuable information and platforms commercialise the aggregated knowledge they get for free when users interact with these networks (Mason, 2015). Economists call this a positive externality.

This scenario envisages a business model where users are included as financial beneficiaries of the profits their data generate. Users’ awareness of the value of the positive externalities their interactions generate, paired with the diminishing learning curves of services like smart contracts, increases demand for individual participation in the profits derived from this shared value. A bottom-up approach to the social media business model emerges, focused on a more equitable, user-centric and inclusive approach to monetisation. Think personal APIs. Companies continue to profit, but profits are shared with the users who generate them.
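As a toy illustration of what such profit-sharing could look like (every figure, the 30% user share and the contribution weights are invented), a platform might split its data-derived revenue pro rata across users:

```python
# Hypothetical pro-rata revenue share: each user receives a slice of the
# platform's data-derived revenue in proportion to the value their data
# contributed, e.g. as measured through their personal API.

def share_profits(data_revenue: float, user_share: float, contributions: dict) -> dict:
    """Split `user_share` (e.g. 0.30 = 30%) of revenue pro rata by contribution weight."""
    pool = data_revenue * user_share
    total = sum(contributions.values())
    return {user: round(pool * weight / total, 2) for user, weight in contributions.items()}

payouts = share_profits(
    data_revenue=1_000_000.0,
    user_share=0.30,
    contributions={"alice": 120.0, "bob": 45.0, "carol": 300.0},
)
print(payouts)  # {'alice': 77419.35, 'bob': 29032.26, 'carol': 193548.39}
```

The hard part, of course, is not the arithmetic but the attribution: agreeing on how a user’s contribution weight is measured is itself a governance question.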

#6 — Public agreement of values

In one major shift, transparency moves from the technical/functional description of the algorithms to their underlying values, principles, decision criteria and outcomes. #6 brings about a public discussion about what the acceptable behaviour of Internet platforms ought to be and how algorithms should behave, including how data is algorithmically sorted and presented to users. A public conversation takes place about the boundaries of what is ethical and fair, as well as what is legal.

Questions like these arise: Should online publishers be shielded from liability in the age of revenge porn and trollish harassment? Should publishing sites monitor the content they publish? Does this mean they have some degree of ownership? If these sites make money from the content billions of users upload every minute, should they also bear some of the costs associated with its generation? Do users simply accept that platforms filter out content they are likely to disagree with, or do users want a wider range of possibilities for engagement, including the possibility to express that they still wish to see content the platform assumes they are less likely to be interested in?

The previous six scenarios refer to different arrangements of individual agency and empowerment. These agreements must somehow be enforced; the relevant actors must be held accountable and the arrangements kept in check. There are different roads to doing this. The next six scenarios focus on a series of possible arrangements for governance and accountability that range along a continuum from private governance to a multi-stakeholder supranational approach. All of these scenarios have pros and cons that, in our opinion, are worth discussing and understanding. The scenario that ends up prevailing will have important consequences for the regulatory system and the future of many data-driven industries.

#7 — Private governance

Companies decide to move to scenarios #2–#6 as part of their strategy, without being compelled to do so. Platforms decide that they need to provide more understandable ToS (as Spotify did after the backlash its policy changes produced) or a dashboard that allows users to opt in or out of individual app access requirements (as in Android’s new OS release).

#8 — Accountable to a national government body

A consensus emerges that publishing platforms and algorithm companies have a sufficiently large impact on society that algorithms can’t be left as black boxes, nor their governance left solely in the hands of the companies that design them.

In #8 this is solved through a national government body that is put in charge of enforcing governance conditions. National bodies have strong enforcement capabilities, such as fines and sanctions for non-compliance, and can ultimately resort to the use of force. National legislation will likely have an impact on companies’ ability to comply with, find loopholes in, or ignore algorithmic governance agreements: all situations national governments are best suited to deal with. According to Sascha Meinrath, in the USA the FTC might already have this kind of jurisdiction and power but hasn’t used it yet.

However, technology companies do not respect national jurisdictions. They have servers all around the world and the ability to strike deals with different governments. National governments are restricted in their ability to act over these online, superstructured algorithmic platforms.

#9 — An international body of governments

Due to the likely problem of technology companies’ ability to circumvent national governance agreements, an international body composed of national government representatives is formed to enforce algorithmic governance agreements. An international body has the advantage of setting mechanisms common to all national governments, thereby preventing companies from striking preferential agreements with individual governments. To achieve this, an effort is made to compile and promote the harmonisation of national legislation that impacts the governance mechanisms. This harmonisation effort intends to organise the vastly different approaches that take place in #8: a patchwork of national regulations that don’t map onto our technological realities.

#10 — Supranational multi-stakeholder body

In the event of both national and international government bodies proving unprepared to audit and govern algorithms, an independent supranational, multi-stakeholder body is created. It is composed of individual members collectively tasked with auditing the ethics and social impact of algorithms. This includes representatives of a wide variety of parties, including the media, academia, Internet governance bodies and potentially a board of white/grey hat hackers who audit in scenarios where access is granted, or find ways to reverse engineer where this is not the case. The risks of this approach are perpetual stagnation and corporatism as a consequence of the constant pitting of interest groups against each other.

#11 — Algorithms as a public utility

When certain platforms have de facto become the Internet for many (because of the sheer volume of users who access the Internet through that platform alone) or the sole vehicle of free speech, they can no longer simply perform in the service of shareholders, nor be allowed to run on a basis where generating maximum profits is the key goal. Instead, they are treated as public utilities and are in part financed by taxes levied internationally, which results in a reduced or fully eliminated need to monetise user data. (An interesting precedent is the FCC ruling that classified broadband as a public utility in the USA.)

#12 — Journalists as accountability agents

This last scenario puts investigative journalists at the centre of monitoring algorithmic accountability. A variety of methods can be mobilised, including interviews with designers and engineers, alongside reverse engineering strategies. Public information needs are identified, and the services centrally involved in meeting them are continuously monitored and scrutinised in terms of what they provide and how this might differ across users, thereby exposing potentially problematic information politics.
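One common reverse engineering strategy is differential testing: issuing the same query from different controlled user profiles and comparing what comes back. A minimal sketch, in which the audited service is a stand-in function rather than any real platform:

```python
from itertools import combinations

# Stand-in for the black-box service being audited; in practice an auditor
# would issue scripted queries to the real platform from controlled accounts.
def search_results(profile: dict, query: str) -> list:
    results = [f"{query}-result-{i}" for i in range(5)]
    if profile.get("location") == "US":
        results.insert(0, f"{query}-us-only-result")  # simulated personalisation
    return results

profiles = [
    {"name": "US user", "location": "US"},
    {"name": "EU user", "location": "DE"},
]

# Compare result sets pairwise; any symmetric difference points to
# personalisation worth investigating and reporting on.
for a, b in combinations(profiles, 2):
    diff = set(search_results(a, "news")) ^ set(search_results(b, "news"))
    if diff:
        print(f"{a['name']} vs {b['name']} differ on: {sorted(diff)}")
```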

From the above exercise, some relevant questions arise that can guide future thinking, research and policy proposals for tackling the extremely important issue of whether and how the power of algorithms, and their impact on our everyday lives, will be managed. What efforts can be made to bridge the information asymmetry between users and platforms? Does transparency mean more disclosure, or meaningful control over individual data?

The scenarios are not a roadmap or a set of solutions; on the contrary, our intention is to kickstart a long-range thinking exercise by proposing different probable and improbable scenarios, and to help further the much-needed debate on algorithmic accountability and governance.

Please comment and join this conversation!
