Somewhere Between Oblivious and Alarmed: A Starting Point for Addressing the Problems and Dangers Within AI

Anne Griffin
Published in TECH 2025
Aug 19, 2017 · 6 min read
Photo Credit: codigoespagueti.com

Somewhere between chatbots and the AI apocalypse, here we sit in 2017. Somewhere between Elon Musk shouting “AI will end the world” and Mark Zuckerberg insisting “there’s nothing to see here. No one should be worried. The thousands of propaganda chatbots and echo chambers on my platform had nothing to do with the results of the 2016 election that got us to where we are now.” Here. For the most part, we have some idea of where “here” is; the question is where we are going, and how fast we are getting there.

Another key question is, wherever we are going, how do we avoid getting lost and ending up (or continuing) on the path to an AI apocalypse?

People have been shouting about the end of the world for as long as humankind has existed. You will get attention, but not necessarily action. And with the state of the world today, people are exhausted and will shut down if you just keep shouting about the end of the world. There are so many paths to the AI apocalypse, and most people aren’t really sure what AI in 2017 is or looks like, even when they’re using it. You need to give people an actionable starting point. Game of Thrones spent several seasons telling people that “Winter is coming,” but, up until this season, almost no one had suggested a starting point for the rulers and common folk of the Seven Kingdoms for how to even prepare for the White Walkers.

Here is a starting point for preparing for our inevitable AI future:

  1. Educate yourself. Talk about it with your friends. You don’t need to want a job in AI; you just need to know enough to understand how it will impact you, your family, and your friends. For the purpose of this conversation, think about policy, guidelines, and laws. What issues impact you today that you know policies or laws are trying to address? Now imagine computers involved in those problems and think about where our current system would fall short. A good starting point is the 23 Asilomar AI Principles, guidelines created by people in industry and academia. You can also listen to the Tech 2025 Podcast episode about them; just a few months ago, Tech 2025 challenged members of its community to analyze and edit the principles in a workshop. Then talk to your elected officials about the specific concerns and questions you want them to work toward addressing, or at least to investigate further. An issue some Roomba owners may be taking up with officials soon is Roomba’s ability to sell data about your home: everything from square footage and number of rooms to whether you likely have kids or pets. While iRobot backed away from doing so for now, legally there isn’t much to stop them or anyone else from doing this in the near future. If that concerns you, imagine what Amazon knows about you based on your Echo and your orders. You don’t trust car companies to be the sole testers of their own safety, and you probably feel safer eating that sushi knowing the government has guidelines and laws for food safety. People who use AI but don’t necessarily make it need to help shape how we will make sure AI is safe for everyone.
  2. Educate the public. This should happen on a consumer, what-the-heck-are-they-doing-with-your-data level, and on an economic level to help upskill workers in industries that will still need people doing the same jobs, just with more technology involved. ABC News recently reported on the economic issues faced by people who work or worked in manufacturing but don’t have the skills to work in the higher-tech manufacturing plants opening right next door. Many factory workers, especially those who don’t want to retire early or can’t afford to, would love the chance to upskill but don’t always have affordable resources available to do so. If we want to avoid more widespread unemployment and a shortage of skilled labor, this needs to be addressed.
  3. Policy. I was listening to A16z’s episode on Blockchain in Congress, and my first reaction was: there is a Congressional Blockchain Caucus??!!! I’ll admit, beyond cryptocurrencies, I don’t know a lot about Blockchain yet. But I’m glad there are experts in the field advising our government on it. There is a lot of potential for Blockchain in both industry and government, and people who understand the technology should be the ones informing Congress about it. We should have more caucuses related to AI to inform our leadership on the benefits, but also the dangers, so they can make semi-informed decisions should they choose to listen.
  4. Independent industry watchdogs. This one is a bit tricky. Many people within the AI industry have expressed concerns that putting too many restrictions on AI too early will stunt its growth, especially within the US. Right now the US is struggling to support science at the federal level, while China is investing heavily in AI development. China’s AI will be built with a different set of ethical ideas (note that the 23 Asilomar Principles don’t outline what ethics should be followed), and if AI is going to be the technological and economic boon being predicted, some fear the US will fall behind. A parallel we can look back on is the first Industrial Revolution in Great Britain. Britain was clearly further ahead than the rest of the world, and at one point outlawed the export of its machinery to other countries to keep its competitive advantage. However, we can have independent groups watching the industry’s ethics without those groups being the reason we fall behind. We have already seen AI bias that impacts us in systemic, socio-economic ways. One of the best examples is the AI being used to help sentence people convicted of crimes. The court system already has a huge bias and is part of a depressing, heartbreaking for-profit prison system. AI learns from data that already exists, and there is not one point in US history where the judicial system was unbiased, so there isn’t any data we can use that won’t pass the system’s existing biases on to the algorithm (a rough sketch of how that happens follows this list). Something can be done when you can prove a specific individual in the system is or was biased (though proving it is easier said than done). However, who is responsible when the algorithm is biased? Do we fine the company? Are they allowed to continue making that software? It needs to be called out and brought to the attention of the industry, the public, and lawmakers. User experience is touted as an important aspect of tech; however, when people of color, women, and people with disabilities point out problems, it takes a wave of media attention and an actual loss of profits before any real action occurs. If we can’t make non-racist AI for sentencing, we shouldn’t be making AI for sentencing at all.
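To make the “biased data in, biased algorithm out” point concrete, here is a minimal, purely illustrative sketch in Python. The groups, rates, and labels are all made up for the example and are not based on any real sentencing tool; it simply simulates two groups with identical underlying behavior, applies harsher historical “high risk” labels to one of them, and shows that a model trained on those labels reproduces the disparity.

```python
# Illustrative sketch only: synthetic data and hypothetical numbers.
# It shows how a model trained on historically biased labels learns
# that bias, even when the two groups behave identically.
import random

random.seed(0)

def make_historical_record(group):
    """One synthetic past case. Both groups share the same true
    reoffense rate (30%), but historical 'high risk' labels were
    applied more aggressively to group B (the injected bias)."""
    reoffended = random.random() < 0.30           # identical behavior in both groups
    label_rate = 0.60 if group == "B" else 0.30   # biased human labeling
    labeled_high_risk = random.random() < label_rate
    return {"group": group, "reoffended": reoffended, "high_risk": labeled_high_risk}

history = [make_historical_record(g) for g in ("A", "B") for _ in range(5000)]

def learned_high_risk_rate(group):
    """'Training': a model with access to group-correlated features
    (zip code, arrest history, etc.) effectively learns the per-group
    rate at which humans applied the 'high risk' label."""
    cases = [r for r in history if r["group"] == group]
    return sum(r["high_risk"] for r in cases) / len(cases)

for g in ("A", "B"):
    cases = [r for r in history if r["group"] == g]
    true_rate = sum(r["reoffended"] for r in cases) / len(cases)
    print(f"Group {g}: true reoffense rate ~{true_rate:.0%}, "
          f"learned 'high risk' rate ~{learned_high_risk_rate(g):.0%}")

# Expected pattern: both groups reoffend at roughly 30%, yet the model
# flags group B as 'high risk' about twice as often -- the historical
# bias is now baked into the algorithm.
```

The model isn’t “deciding” to be unfair; it is faithfully learning the unfairness already recorded in the data, which is exactly the problem with training sentencing tools on historical court records.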

This is our starting point. It may not prevent the end of the world, but we have to start somewhere. If Elon Musk, one of the people making the AI, doesn’t even trust himself to stop the AI apocalypse, then we should start thinking about how we want to address the issues in AI we are facing (or ignoring) now. (Or is he snitching on himself or someone else in the industry? What has he seen?) The easiest way to avoid catastrophes is to do the small, intentional things that prevent problems from escalating or getting out of control in the first place. Be engaged, be informed, pay attention, and let your voice and your vote be heard.


Anne Griffin

Anne Griffin is a human & product manager who studied engineering at the University of Michigan. She is passionate about the human aspects of technology.