Why Everyone Needs To Learn About AI

Heather Law
14 min read · Nov 16, 2019


Artificial intelligence and the human brain. Photo: Gerd Altmann

Over the summer, I collaborated with Alka Roy, a product and tech leader at AT&T’s Innovation Center in California who specializes in AI & Responsible Innovation. Prompted by her, I researched the gaps in AI ethics, policies and education.

As I talked to Alka, she challenged me to see what I could learn on my own and how deeply I could delve by researching public and academic sources. Alka offered to partner with me and explore which resources I could obtain and how complicated it was for a layperson, such as myself, to understand this evolving area.

As summer came to an end and after several late-night discussions, she finally agreed to sit down one weekend and answer my questions. Oh, yeah, and she wanted me to tell you that these are her personal views and do not reflect the views of her company or non-profit affiliations.

Ok, now that we have the disclaimers out of the way, what got you interested in AI & Ethics?

Alka Roy

I prefer the term responsible innovation. Who knows, in a few months we’ll call it something else. But I like it because ethics doesn’t start or end with AI. The framework for responsible innovation leaves room for other technologies and dependencies while also including different flavors of AI and their unique challenges. But that wasn’t your question, was it? What got me interested… It was the idea of an imperfect world building intelligent and autonomous machines. And my daughter. While I’m trying to teach her things, I’m constantly looking for better stories, better examples — with more diversity, more nuance, expanded possibilities for all genders. We are collectively struggling to teach our kids, to get the right data and approach in front of them. Our history and structures are marred by inequity. And by amazing discoveries and creativity. So, what kind of AI are we going to build?

As I did my research, what struck me was that AI is integrated into almost every field, so how come information about AI is not more easily accessible to the public?

There is a lot of information about AI. But if you mean information that is clear and easy to digest, especially by the non-tech community, I agree we are not there. We aren’t even there for the tech community — because there are so many different specializations. There isn’t a ubiquitous AI system — it’s a way of thinking about building intelligence and autonomy into systems — it can be a simple predictive tool or a complex and layered system. Whenever you have a field that is seeing this level of growth with so many unknowns, there is bound to be confusion and misinformation. Though, as I said before, this kind of confusion exists with many new tech trends. Try getting to the bottom of bitcoin and blockchain, for example. The good news is that so many industries, disciplines, and institutions, including universities, governments, and think tanks, are trying to figure this out.

How do we get there?

We need a way to take the mystery out of it. That is why I prefer to approach it as a subset of responsible innovation. We need to think about the general approach of how we innovate with technology, how we talk about it and frame it, and who gets a seat at the table to decide what gets built and how we design and test things along with the actual building of things. There are so many influences and decisions in our decision trees before we get to algorithmic models, data sets, or complex nets. What is a technique if you don’t understand the domain, how to use or interpret the data, where to get it and from whom? How to apply technology and not just how to build it? And it’s exciting when AI tools provide opportunities to mirror how we have structured things for so long. They give us a chance to deliberate how we should structure things for our future — around privacy, fairness, equity, transparency, explainability, but also around how we use language and history and stories.

So, if this affects every field, where is the healthy exchange between policymakers and AI developers happening?

The challenge is that policymakers are not a homogeneous group, and neither are AI designers and developers. There is definitely a need for meaningful and useful exchange between the tech community and the non-tech community, including policymakers. Historically, tech has often been ahead of policy. Though there has been quite a flurry of activity on the policy side, you have to understand not only the tech but the underlying motivations, the process, and the history of decision-making. The tradeoffs, the opportunities, the risks. How you design intelligent routers, and the concerns around them, would be different than when you build an interactive game for kids or an automated resume scanner for hiring, or if we are talking about a defense system. Concepts like transparency or explainability apply differently in different cases.

Don’t the policymakers need to set the rules and accountability?

For that, we need an agreement on a common set of principles and frameworks. And as the EU starts arriving at this common set, what about the US, the rest of North America, Asia, Africa, South America, and the rest of the world? What about large businesses vs. local businesses vs. governance, and the trade-offs between competing interests? We also need some way to get an agreement on which principles apply, how, and in which cases. And who is representing the public?

Where is this happening?

I don’t think anyone has quite cracked it yet. Universities and researchers are still communicating in whitepapers, analysts are converting things to bullets and summaries, and we still have to parse through a lot of information — and it’s evolving and nuanced, at times domain-specific, and often unclear. And fairly media-driven and punitive. That hurts experimentation and disclosures — it impacts open exchanges based on common challenges. Civil societies are fragmented and overburdened already, but they are starting to synthesize.

So what do we do?

I like dialogues. Asking questions. Basic questions. Information exchange between disciplines, so people have to translate codified words to new domains. What do human rights mean to the tech ecosystem? How do you define accountability in a world of machines and humans? There is a fear of asking basic, stupid, or bizarre questions. But that’s what we need to do. This field is still at an early-to-mid stage, depending on the flavor of AI we are discussing. And as it grows, it will touch every field and industry. So, the exchange has to be accessible — both in breaking down the concepts and in creating an ecosystem for open and timely access to findings across industry and academia. In an environment and marketplace built on competitive advantage, this can get tricky. I am involved with a few of these initiatives — I would really like to see more examples of applying principles to real-life experiences, domain-specific use cases, data sets, algorithms, and research — examples that both raise concerns and offer mitigation strategies grounded in practice.

In that case, will policies keep developers accountable for following safety standards?

As you know, I’m not a policy expert. There are a lot of existing policies. And new ones are being discussed for various scenarios, like if and when a human is in the loop and when that person is really a decision-maker; basically, debates about different shades of autonomy, automation, and accountability. Though we are focusing on AI, again I’ll come back to responsible innovation, or even reasonableness. Because any tech we build needs to follow safety standards. Think about the safety standards we use for things and products, combined with the safety requirements we have for people and organizations — because that is what autonomy, and at times automation, with AI can do — blend the two. If you are following the news, the EU, many industry groups, and many other pockets around the world have taken leadership. We need to understand what you can standardize and what you can’t, and make a plan for both. I would also like to see a framework to bring it to practice — build accountability and culture around these standards. That is one of the areas I have been exploring.

How can artificial intelligence be evenly distributed rather than concentrated in one country or organization?

Hmm… It’s important to be mindful of how we talk about AI-based solutions and tech. No tech in the world is evenly distributed — so AI, as a technology, will suffer the same fate. Unless we do something about it. Look at electricity, cell phones, the internet. And we have underlying economic structures. Follow the money. Follow their goals. Follow the business models, who hires the talent, and what their underlying motivations are. Who has access to what data? Because access to good data is not easy or cheap. With our current structural inequity in power and resources, how do we leverage a responsible innovation framework, think about digital equity, and build inclusive and affordable options? We really need champions for this. And to create a pipeline and more room in the tech and business life cycle for people from under-represented communities — different socio-economic backgrounds, but also from the humanities and pure sciences — different perspectives, methodologies, a different way of imagining what change could look like.

Then, in regard to the countries that can afford these AI practices, how will policymakers ensure AI is being practiced ethically on a global scale?

Global digital divide. Photo: Mapiful/GeoNames

I really don’t know. There isn’t a global body of policymakers with global accountability. I don’t even think we have a common global understanding or framework for ethical design. We are trying. The human rights community has tried to operate a global court. There is the UN. There are other international standards and tech bodies and financial institutions. You should keep asking the questions though. Ask the policymakers and your professors. We need to be collectively thinking about it. In the meantime, what gives me hope is the different initiatives around responsible innovation frameworks and ethical design principles that have cross-border representation.

Ok, then, who should be responsible when AI technology goes wrong?

Phew… Finally, an easy question! Seriously though, I’m glad you are not a philosophy major and we are not debating the consciousness and existentialism of machines. So, who should be responsible? There is a lot of debate about accountability. Managing responsibility for software, product usage, and behavior across national borders is definitely something for policymakers and world leaders to figure out. It is important that accountability doesn’t become circular, impossible, invisible, and a public burden. While policymakers debate this, this question ought to be front and center for every design and for every person who is knowingly working on something that has the potential to harm.

What happens if autonomous AI systems rebel against us? Are there any preparations for this kind of event? Is it possible they take over just as it is portrayed in the media?

If by media you mean a dystopian view of a technology-filled world, you don’t really need AI for that, do you? Read our history and our literature; our biggest battles are often with ourselves. Who is behind building AI systems? We, the people. We cannot forget this. We may not always understand how it works, or be able to explain or justify the conclusions or analysis a complex AI system arrives at, but the concept of rebelling would only apply to a system if it’s built or allowed to rebel. It’s not for or against us. And we go back to my fundamental point earlier — humans aren’t a collective us either, are they?

So we don’t need to worry about AI systems going rogue?

Ok, I didn’t say that. What I am saying is that I worry more about people than machines. People who, out of ignorance, laziness, or malicious intent, can make something dangerous, with or without AI. Doomsday scenarios can always be imagined — you could build a system to execute the most effective plan to save the planet, no matter the cost, and it could end up with a course of action that may not be in the best interest of all humans. We learn from our mistakes, though, so we are likely going to get it wrong a few times. That is why moving with deliberation and thoughtfulness in high-impact and critical areas, and taking the time we need to figure out the nuances, should be a basic practicing principle. There is a lot of debate about automation vs. autonomy going on that should be useful here. But again, if you pull back, shouldn’t we make all impactful decisions with that kind of thoughtfulness and deliberation?

What about more common AI devices, like smart speakers (e.g., Amazon Echo with Alexa, Google Home)? Aren’t they always listening? Wouldn’t this be considered an invasion of privacy?

It is. I have these devices at home. My team was building conversational UI a few years ago, and I wanted to experience, day to day, how different solutions were performing. I knew they were recording me for training data — I knew I was compromising my privacy. When you install cameras, speakers, or monitoring devices in your home, in your car, or with your always-on phone, you are giving these devices and their software access to your privacy. Now, I am not saying that the companies building these devices or storing your data don’t have an obligation to disclose how they are using it. And we are going through a wave where people are understanding consent and opt-in vs. opt-out paradigms better. But ultimately, the idea of privacy, and what is private and what is not, keeps evolving and shifting and is different globally and for different demographics, especially when it competes with collective or government interests.

So, what do we do?

I call it the cost of convenience. We need to individually and collectively decide the cost we are willing to pay for convenience. What gets tricky is when you want access to something, like the internet, a gadget, or information, and you have to barter personal information or your privacy for it. I remember at a conference a few years ago, I was protecting my badge and not scanning it, to avoid random spam. But then I got to a booth with these amazing lattes and I scanned my badge. I bartered my privacy for instant gratification, for good coffee, and to avoid the inconvenience of leaving the conference center. If I had another option, I would have chosen it. Free things and convenience come with a cost. A human-centered framework for designing and building products puts users’ needs first, and we are looking for it to include an emphasis on privacy, or let’s say transparency and agency for users, in simple, easy-to-understand language. It takes discipline for businesses to give up short-term gains for longer-term user trust. It depends on their business models and how well they understand their users’ needs. Ultimately, we need to make sure we build technology and solutions with flexibility and choices to meet a wider array of needs.

So now that we have clarified some of the potential dangers of AI practices and of the technology itself, how essential is AI education? Why isn’t it being taught, or required as a prerequisite, at all educational institutions if it is supposedly our future?

You tell me! Seriously, there are organizations like AI4ALL that are trying to bring AI education not only to a wider audience but also to communities that are already affected by tech inequity. There are many universities and foundations that are working on this and building courses. But again, I would emphasize: it’s not just access to AI. Multidisciplinary and responsible AI should be a part of every discipline, including technology and especially AI itself. We need to be thinking about ethics, philosophy, behavioral sciences, history, and the humanities to better understand the backdrop we are building our technology against. And for those coming from the humanities, there ought to be a primer in tech and AI to take the mystery out of it. To understand what is possible and how to ask the relevant questions. We need to start early on in our education.

Why can’t people simply use AI and enjoy its benefits without having to learn about it?

I believe most of us are and have been. AI/ML is not a new field.

How do you suggest AI learning and AI classes be presented in a more appealing way, one that caters to those in school, for instance?

People are always interested in themselves. Writers know this. Good salespeople know this. Good teachers know this. Make AI relevant. With examples, use cases, applications. We talked about the dangers of AI, but there are so many exciting areas where AI can enhance our understanding. I prefer Augmented Intelligence to Artificial Intelligence for this reason. How can the fast compute and correlation of seemingly unrelated things help our understanding of our world and ourselves, help us solve problems, and enjoy life? And how do we do it mindfully, so that we can work around or avoid the traps? Direct it in a generally productive direction to enhance our well-being. Be inclusive and bring more beneficiaries to the table. But wait, shouldn’t you be answering this question?

So, what are some ways in which AI is benefitting education systems today? Do these benefits outweigh the cons such as high power costs and loss of quality?

My child attends a Montessori school. I went to an engineering college that required engineers to take philosophy and English classes. It offered a class in Science & Religion where I studied the persecution of Galileo and the great cost he paid for discovering an inconvenient truth. It spanned history, ethics, philosophy, psychology, and politics, along with science and religion. At my university, I could take a graduate course on neural networks and still serve as the editor of a literary magazine. Technology evolves and changes, and what we need to teach and learn is not only critical thinking but also a way of being that allows us to appreciate and enter the complexity of being human. What are our collective guiding principles that leave room for individual fulfillment and well-being? AI has so many threads and variances — it’s a field of study that may become pervasive, but it’s still a way technology is built. It is good to remember that. What will benefit education systems today is a change in the mindset of how we view education for both people and machines.

Thank you for taking the time to guide me and answer my questions.

Thank you for taking the time to explore this topic. And for the dialogue. For me, both asking and answering questions helps us stay curious and makes us discover the cracks, the things we may need to explore further. So, now, can I ask a question?

Sure

How did this exploration change your perspective?

Honestly, for me, it was most rewarding to learn what AI actually is, because now I feel less intimidated when people are talking about it, when I read articles about AI for classes, or even when applying for internships. When I first called you, I probably would have just said that AI is like Siri and robots.

And do you have any plans to do anything in this area, or are you doing something already?

It is clear that AI is everywhere, or soon will be! Whether it’s for data journalism, AI policymaking, or even automated journalism, I am thinking about how AI applies to the areas I’m studying, despite not being in STEM. I’ve noticed that many institutions lack courses that teach AI for non-STEM majors, and that needs to change, because the application of AI would be beneficial regardless of someone’s field. After all, it is inevitable.

Perhaps NYU should have an AI & Policy or an AI & Journalism class? This was fun. Good luck with this year; I look forward to hearing more.
