What does #AppleVsFBI tell us about the technologies that will shape our future?

The Apple v FBI dispute is a sign of things to come. In the years and decades ahead, new technologies — such as 3D printing, robotics, and artificial intelligence — will transform human experience in unimaginable ways, changing society, economies and daily life.

In doing so, they will pose great challenges to our established legal and social systems. The way governments and lawmakers react to these technologies will determine the future of human rights in this era of radical technological advancement.

It’s not about one iPhone

The controversy around weakening the security of an iPhone is an example of the clash between traditional legal systems and human rights in the context of a new technological reality.

On the face of it, the FBI’s request for assistance to access the phone of one of the San Bernardino shooters seems reasonable. But the technology involved means Apple would have to create a backdoor that bypasses the phone’s security features; Apple says that once the technique is created, it could be used over and over again on other phones.

If Apple is forced to create software that bypasses its product’s own security, it would open a Pandora’s box. Governments from all over the world could demand similar weakening of security features to access devices and data, not only from Apple, but from every other technology company.

Once the precedent for corporations aiding law enforcement by weakening the security of their products has been established, governments could then request that companies ship future products with weaker security so that they are easier to break into. This is not a remote hypothetical scenario; we already know that security agencies like the NSA and GCHQ have spent years developing ways to spy on the communications of hundreds of millions of people. In the UK, the government is trying to introduce legislation that would legalise bulk equipment interference, i.e. mass hacking.

As we speak, privacy as a human right risks becoming obsolete because laws have failed to keep pace with digital technology. Our data is being sold and shared between companies and much of what we do online is being tracked. We are effectively blind to how our information is being used or who controls it.

With the growth of the Internet of Things, the number of connected devices across households, manufacturing and businesses is projected to reach 200 billion by 2020, up from 2 billion in 2006. Whereas now your phone, computer and perhaps your TV may be the only things in your home connected to the internet, in the next 5–10 years everything from your car and thermostat to your fridge and your kids’ toys will be connected. There will no longer be a separation between the offline and online worlds. The risks to each of us will only increase as every action of our lives is tracked.

Yet legal systems across the world are generally ill-equipped to deal with these issues. New technologies are often complex, and the vast majority of judges and legal professionals are unlikely to have sufficient technical knowledge. Legal precedents are thin, and where they exist they can quickly become outdated as technology evolves.

More crucially, laws are not up to date. Even in areas where there is minimal technological complexity, such as online harassment, revenge porn and online threats, legal systems are generally unable to keep up with contemporary forms of communication, social interaction and cyber-crime. And where new laws are developed, governments’ instinct is to “collect it all” rather than to improve the protection of our rights.

But the dilemmas over privacy and the security of communications are just the tip of the iceberg.

The coming technologies

There is disruption coming from every direction. In a near-future world defined by technologies such as 3D printing, robotics and artificial intelligence, how can we anticipate and mitigate the impacts on individuals and communities? Can we rely on companies to build their technologies in a way that respects our rights? Can we trust governments and existing laws to protect our human rights?

3D printing is touted as a revolution in manufacturing that will decentralize the physical manufacture of tools, machines and even electronics to individuals and communities. What will this mean for liability over malfunctioning or polluting products? Who can we hold accountable when something goes wrong? Who is responsible when the actual manufacturers are a collection of unrelated individual and community initiatives? If 3D-printed products — say guns or drones — are used to commit human rights violations, who will be responsible?

It’s predicted that robotics will have a huge effect on jobs in the coming decades. A recent study by Oxford University and Citi found that the share of jobs at risk of automation is 57% in OECD countries, 69% in India and 77% in China. Will this lead to an increase in poverty and inequality? Will workers still be able to maintain collective bargaining rights to improve their pay and conditions? Will societies face massive unemployment, or will new occupations replace old ones as they have in previous industrial revolutions? And if they do not, how will social security systems cope with the change?

Artificial intelligence (AI) is already being rolled out in daily life: assessing debt risk, scanning number plates and, crucially, powering predictive policing — the use of algorithms to forecast crime and prevent it from happening. Over the coming years, AI systems will become widespread in everything from self-driving cars to policing to the insurance and healthcare industries. And just like the humans that create them, systems that process large quantities of data have been shown to be biased and to reinforce inequality.

But it is in the coming decades that artificial intelligence will become vastly more powerful, through advances like machine learning — creating machines that can learn from their experiences and, in some cases, even rewrite parts of their own code. Artificial intelligence will surpass human capabilities in many fields. Again, questions of obligations and responsibilities arise. What happens when two self-driving cars collide? Who will be responsible for a policing decision based on artificial intelligence — the company that developed the technology or the police that used it? When an artificial intelligence, initially created and coded by humans, is acting autonomously, who will be responsible for its actions — its creator, its user or the machine itself? Should an artificial intelligence operating on behalf of the state have an independent responsibility to protect human rights? What would such an independent obligation mean if the AI were being used by a military, with so-called killer robots already under development? And how should human rights be protected in a world where machines make largely autonomous decisions about whether you get a loan or health insurance, or whether you are the subject of preventative law enforcement interventions?

So what should we do about it?

Technology is generally neutral — it can be used for good or ill. But the protection of human rights — be they economic, social, civil, political or cultural — must be integrated into technological development to ensure that advances are a boon for individuals and society, not a trade-off of our rights and freedoms for convenience and the temporary excitement of new opportunities.

The technology sector, civil society and governments must reflect on the human rights implications of new technologies early on. We need to examine the adequacy of existing legal frameworks and proactively anticipate their shortcomings. We need to rethink our models of responsibility for a world in which it is far more diffuse.

This needs to happen today, as emerging technologies develop and reshape the human experience.

With hindsight, we know that the way the tech communities developed the internet and our telecommunications infrastructure — such as mobile phones that continuously broadcast our location — made it difficult to safeguard our rights in the face of power-hungry governments. This was not intentional; the error only became manifest when governments started knocking on doors and demanding the keys to the “backdoor”.
 
Today, on the cusp of an unprecedented century of technological progress, potentially 1,000 times greater than the last, we cannot afford to make that mistake again.