The legal and social responsibilities of being a designer in the 21st Century
Designers are the public servants of the modern world. We make decisions that impact millions, sometimes billions of people worldwide, and it’s time we take our responsibility seriously.
If you Googled the largest empires in the history of the world, you’d see 70 million individuals under the control of the Roman Empire, more than 110 million under the rule of Genghis Khan, and 533 million under the British Empire [source]. If we look at religions, the Catholic Church claims nearly 1.3 billion people worldwide, Hinduism just under 1.1 billion, and just under 17 million people identify as Jewish. These numbers are astounding. They are so large we literally struggle to comprehend them. But compared to Big Tech, the empires of yesteryear aren’t even in the same category.
According to reports, there are more than 2.1 billion users on Facebook and more than 2 billion people using Google’s Android operating system — along with more than a billion each on Maps, YouTube, Chrome, Gmail, Search, and Play, individually — and Amazon hosts more than 310 million active accounts. Yet despite their size, it’s incredibly easy to forget just how pervasive these companies are, because they are invisible — embedded so deeply into every part of our lives that we fail to recognize they’re even there.
Modern technologies have also spread faster than any technologies before them. If you look at the time it took major technologies to reach 90 percent market saturation, it’s shocking.
These are companies that have reinvented industries, built fortunes beyond comprehension, and, ultimately, developed the infrastructure necessary for humanity to operate as a global society. The importance of what these companies have done should not be underestimated or taken for granted. The impact of their inventions will last beyond our lifetime and the lifetimes of our children. That being said, it should be no secret that regulation is on its way. It is, in fact, already being put into place in the EU with the GDPR, and it is only going to continue.
Consider the fact that at this point in history these facts are all true:
- Engineers can get in legal trouble for the code they write [source]
- Apple Watch data is being used as evidence in a murder trial [source]
- There is currently a battle going on between Microsoft and the United States government about how borders should be considered when thinking about privacy on the cloud [source]
- As of March 1st, the EU has given Big Tech three months to clean up the extremist content on their platforms before they’ll be sanctioned [source]
- Companies operating in Europe can get in trouble with the EU for processing and controlling citizens’ data in ways the government deems illegal, even if they don’t have a physical presence in the country [source]. In fact, according to a recent report by Gartner, more than 50% of companies operating internationally will not be in full compliance by the time the standards take effect on May 25th, 2018, which will lead to financial sanctions in the immediate future — some as large as 4 percent of the company’s annual revenue.
With algorithms now operating as governing bodies of our world, these are steps we need to take. These companies (and many others) need to answer to some sort of regulatory body, considering their power. Regulation has become a matter of public safety. And if we look at how regulation might be implemented, there are really only three ways this plays out:
1. We continue down the path we’re on and don’t regulate.
This option is already playing out. Sure, we have light regulation in place in the United States, but it is loosely defined and open to interpretation. This regulation was built with the intention that corporations would adopt a sense of social responsibility. And as we’ve seen, that’s not the case.
If regulation remains light touch, we’re leaving decisions about how these systems should be implemented to publicly traded corporations with access to more information than any entity in the history of the world — besides maybe the CIA, the FBI, or other government intelligence agencies, and who knows, they may have more. Our personal data has become a trade secret. By accumulating this level of intelligence, these companies have also accumulated vast power, and they are now capable of shaping reality to reflect their bureaucratic fantasies.
Without regulation in the near future, this will lead to an unmonitored arms race, similar to the nuclear arms race between the United States and the Soviet Union. That engagement led to a doctrine the military referred to as Mutually Assured Destruction (MAD) — the idea that if both sides were to continue onward, the result would be disaster for all parties involved.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon.” — Elon Musk
The mounting power of these modern data empires has invisibly wrapped around our world and will undoubtedly outweigh that of any nation-state, country, or empire we have known over the course of human history. This, like the nuclear arms race, will become unsafe for everyone involved — not only the general public but the people creating these systems as well. Following this path will result in destruction unlike any military expedition we’ve ventured on before. And it will be done in pursuit of corporate profit.
2. Regulation gets put in place by those who don’t fully understand the system or what’s going on.
As we’ve seen in countries like the United States, the United Kingdom, France, Germany, Brazil, Spain, and many others, there is a large portion of the population around the world that is uncomfortable with what is happening right now. And rightfully so. But why? Mostly because they don’t fully understand it and they feel it is out of control.
It’s not that these people are dumb, it’s that they live different lifestyles. They live lives in which they have no immediate need to understand artificial intelligence or the cutting edge of computing. Yet it’s disrupting their lives.
In an attempt to restore the order they knew, and potentially slow the incredible progress of the companies involved, these individuals are willing to do whatever it takes to put a stop to the madness and protect their freedoms. This will no doubt lead to regulation that stifles innovation, as we’ve seen in Spain, France, Germany, and other parts of the EU.
In these nations external regulation has become a burden that opposes innovative practices. It has pushed businesses to focus more on avoiding fines than creating something meaningful. It turns into a task that leads to a checkbox mentality. Instead of pushing companies to adopt a positive internal attitude about privacy and security it puts the company at a disadvantage in the global market.
The reduced ability to innovate then pushes companies to operate out of other nations that are willing to remain flexible or turn a blind eye entirely. The Internet works anywhere there’s a connection, so why stick around if there’s less friction somewhere else? For this reason, too much regulation can very easily put countries in harm’s way, from both an economic and a military perspective.
From an economic perspective, driving companies out of the country through overregulation is dangerous because the countries that do so will fall far behind in innovation. And while that is bad, there should be even greater concern from a military perspective: nations that fall behind in this race will be unable to secure themselves in times of threat. We’ve already seen this with the scale of disinformation occurring across the world.
There is plenty of research on the dangers of both sides of this argument, and it should be recognized that trying to shut these systems down out of fear is not a reasonable option. A large majority of the population relies on these companies, just as we rely on any of our basic needs. If something were to happen to them, the global economy would crash. This would only invite greater threats from those who wish to use the technology to own the world.
“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” — Vladimir Putin
Instead, we have to embrace these systems and work toward a governing strategy that ensures the safety of everyone involved. We need powerful nations to align, set standards for the globe, and defend each other against nations that would rather wield these technologies for power and destruction. And we should figure out how this can be done in a way that doesn’t merely punish bad behavior but instead reinforces good behavior.
Reactive punishments are archaic at this point in history. These organizations are moving too fast, and with all the money they have, financial punishments will only represent a mild speed bump on the long-term mission — in some cases fines may even be welcomed over the alternative. Instead, we should focus on creating fluid, adaptable regulatory systems that proactively promote public safety and reinforce corporate responsibility while still allowing these companies to flourish economically. Which brings us to the third option.
3. We accept that regulation is necessary, get involved in our communities, and discuss the options.
Despite how it may sound when regulation comes up in public conversation, this isn’t about giving away intellectual property rights or helping the general public overthrow anyone. This is about helping the population understand the situation well enough that they can not only feel safe about their future and the future of their children, but also approach regulation as informed citizens. It’s about increasing data and technology literacy within the population so we can have bigger conversations, not headline-only debates.
These people deserve to know more. At the very least, they deserve to know enough that when it comes time to regulate, they understand how to move forward and keep everyone safe while still allowing the corporations creating these technologies to profit and flourish. It’s a matter of finding a way to enable controlled chaos as opposed to profit-mediated anarchy and overwhelming destruction.
It’s easy to recognize how much nations are shaped by their literal wars, where blood is shed and lives are lost. We all remember Caesar and the Romans, Genghis Khan and the Mongols, and the impacts of British colonialism. But while the physical outcomes of these historical events are remembered, we often fail to recognize the ways in which large-scale industrial wars shape our world. Today, Big Tech companies are more dominant than any of their predecessors, and it’s time governing bodies across the globe step in to do the job they were put in place to do: protect the freedom and safety of their people.
Not all companies are as large as the Facebooks, Googles, or Amazons of the world. This is something I’m well aware of. But we all design systems that impact people’s lives. And when those systems are driven by algorithms that are invisible to the public, protected as IP, it’s important we take our role as designers and engineers more seriously. We have become the modern equivalent of government representatives, with our users as constituents. It’s time we do our job and fight for their rights, not just push pixels to drive the bottom line without considering our impact.
For this reason, my global network of experts and I have started a 501(c)(3) called Design Good. We will be working on a two-part mission: (1) help improve data and technical literacy within the general population, and (2) provide technologists with knowledge and tools that enable them to create products that not only respond to informed consumer demand but also create positive social impact while still increasing revenue. Think of it like the Better Business Bureau, for the Internet.
Through this organization we will supply technologists with cutting-edge research so you know what’s going on and how it applies to modern regulations, tutorials on the best ways to design these systems within the limits of modern regulation, and design assets to help speed up your process. These tools will be developed with the knowledge of world-class individuals working on some of the biggest issues of our lifetime. The implications of our actions are gigantic, but we’re very conscious of the potential impact and are beyond excited to help.
And you’ll be able to find much of it here, with InVision, as we work together to take steps forward and make a brighter future. We can’t wait to create a better future through technology, and we hope you’ll join us!
If you liked this, you should sign up for the Design Good newsletter and join over 1,400 people who have already done the same! You can also purchase our first book, Automating Humanity, at designgood.tech to learn more about all of this in much greater detail.
Also know that 25% of all profits go to youth technology literacy programs of your choosing; the rest goes toward supporting our mission. And as a 501(c)(3) non-profit, all donations are tax-deductible within the United States. This means your purchases not only support the mission of Design Good as a non-profit org, but also fund the next generation’s education, which will help future societies thrive!
If you don’t see the program you’d like to donate to, let me know and we’ll make sure your favorite program gets added to the list!