The first of the Great AI Debates convened (from left): Deborah Patton, Madeleine Clare Elish, Igor Jablokov, Andrea Bonime-Blanc, Steven Kuyan, Mona Sloane, Karen Bhatia, and JT Kostman, PhD (moderator)

The Great AI Debates: Are the robots coming for our jobs?

Published in Future Labs · 8 min read · Jul 2, 2019

At the recent Great AI Debate Series event hosted by NYU Tandon’s Future Labs and Applied Brilliance, leaders in diverse industries discussed the tech topic of the moment: the automation of work via machine learning and artificial intelligence.

The future of work is alarming. The future of work is bright. It all depends on whom you ask. No matter which side you take, the subject is buzzy — with good reason: Some experts predict that AI technology will eventually replace 40% of existing jobs. If these predictions come true, such a monumental shift in the workplace would dramatically affect billions of people.

To transform the buzz into a meaningful discussion that helps guide the people potentially impacted by AI, we invited experts from NYU Tandon, the Data & Society Research Institute, the New York City Economic Development Corporation, and more to bring their perspectives, insights, and no-holds-barred opinions into a holistic conversation on the topic.

Future Labs Director Steven Kuyan introduces the Great AI Debates

In his introduction, Steven Kuyan, Future Labs Managing Director, explained that the debate serves as a way to demystify the technology behind the automation of work. “It’s hard to understand all the impacts of this technology,” he said before taking his seat as a debate participant. “What we’re hoping to accomplish with this event is to provide more meaningful context around what’s happening with AI.”

Setting the scene: What is AI?

With the goal of clarifying the rhetoric around automation, JT Kostman, psychologist, data scientist, and moderator for the debate, set up a provocative case study for the participants: A hypothetical Southern transportation company, facing rising labor costs, is considering phasing in autonomous vehicles to keep the business alive. Panelists were asked to debate the far-reaching impact and implications of adopting a radical new business model, and the resulting downsizing and retraining of the workforce in this family-owned company.

JT directed his first question to Igor Jablokov, voice tech pioneer and CEO/founder of Future Labs portfolio company Pryon: What is AI? “It’s just using a machine to automate a task that humans already do,” Igor replied. “It’s that simple.” As the discussion clarified how we currently think about artificial intelligence, the panelists agreed that a more accurate description is augmented intelligence.

Moderator JT Kostman gives the debate participants their first prompt

JT escalated the debate, asking participants to weigh in on the efficacy and ethics of programming automation in the workplace. As panelists shared their conflicting thoughts about who would be impacted by the automation of workplace tasks, it became clear that while the definition of AI is simple, using a machine to responsibly automate tasks traditionally done by humans is anything but. Even seemingly mundane tasks can become mired in moral conundrums, to say nothing of AI becoming an agent in more complicated, life-and-death decisions. Introducing the much-discussed Trolley Problem to the debate, JT asked the panelists: Are we comfortable relinquishing moral decisions to robots? Can we make machines that understand the intricacies of ethics-based behaviors?

“The creation of moral machines is impossible,” argued Mona Sloane, a sociologist at the Institute for Public Knowledge and adjunct professor at NYU Tandon. “Human morals and values are emergent and contextual. They change on an ongoing basis. They’re just too deeply embedded in our social lives. Instead, we need to get back to a point where we consider, is it actually inevitable that we need AI technologies in [a given] context?”

NYC EDC Senior Vice President Karen Bhatia (right) speaks on her own experience with task automation

Automating tasks can be problematic too

The context for how AI might be truly helpful and responsibly used was a point of contention. Karen Bhatia, Senior Vice President at NYC EDC, pointed out the distinction between automation replacing jobs versus replacing tasks. “When we’re thinking about rote, manual tasks that people do with far less accuracy than machines, I think people welcome automation. When I was a fresh associate at a law firm, we consistently used automation for due diligence purposes.” There was consensus that professional fields like law and medicine could benefit greatly from automating simple, repetitive tasks that currently consume a great deal of time or manpower, freeing up practitioners to see patients or clients.

But Madeleine Clare Elish of Data & Society Research Institute posed a counterpoint: In order to create machine learning algorithms that serve everyone, data has to be provided on a truly diverse sample of human beings. Not everyone welcomes automation because not everyone is equally served by it: Many of the machine learning algorithms written today are predicated on biased data sets. “In medicine,” Madeleine explained, “the data is great for the white males on this stage. Data for and about other types of people literally hasn’t been collected yet because of the legacies of the American medical system and who they have considered to be human.”

To address these and related concerns, Karen Bhatia announced that her employer, the New York City Economic Development Corporation, will be building the NYC Center for Responsible Artificial Intelligence, an innovation center that will attempt to solve the issues posed by biased artificial intelligence. “We want to ensure that there is representative data,” Bhatia explained, “so that we’re building tools and products that benefit everybody.” The applause was particularly loud here.

GEC Risk Advisory CEO Andrea Bonime-Blanc explains why she does not think AI should lead humans

The boundaries of automated work

There was dissent over which tasks and professional roles could never be truly automated. “No team should be led by a machine,” said Andrea Bonime-Blanc, founder and CEO of GEC Risk Advisory. “There needs to be human managers. Emotional intelligence and relationship building, human contact, decision making — all need to be done by a person, not a machine.”

Igor brought up other boundaries, arguing that art would be a sacred place that machines could never automate. “It’s the last place where people will be untouchable. But everything else, sure. We’ll have AI that will be team managers. We’ll have AI that votes in the future.”

Preparing for the AI future

Several panelists brought up the importance of re-skilling workers made redundant in an AI-driven world. With organizations needing to lay out long-term strategies around AI’s impact on their workforces, the question was posed: Whose responsibility is it to retrain the workers who will be displaced?

“It’s a societal cost,” said Karen. “I think that all players — public and private — need to step up and actually think about the future of work and all the implications. So it’s going to be incumbent on the companies [to retrain their workforce]. It’s going to be incumbent on our structures or agencies, but also on our educational institutions to think about how work is evolving and how we are preparing our workforce.” Which brought up a key question: What is work in the AI era? How do we redefine the nature and future of work? And will Universal Basic Income become the norm?

The view of collective responsibility threaded through the debate. “When we talk about automation on that scale, we talk about the potential increase in productivity,” said Mona. “And then we need to think about taxation and education on the large societal scale. I think it’s really dangerous to buy into a narrative where we talk exclusively about technology, whether that’s in the context of machines or [shifting] the burden of retraining on to the workers [or] entirely on to the companies. I think there’s an obligation by leaders and also by policy makers to take that to a macro level. And I think it’s time to do that.”

The panel agreed on the “criticality of cognitive diversity and heterogeneity,” as JT phrased it. Andrea, applauding New York’s Center for Responsible AI, said, “We need to have a diversity of people involved cross-functionally, cross-policy — all the different categories of diversity that we have in our society need to be part of this from inception to the end, to troubleshoot and to solve.”

The kicker of the conversation came when the panelists were asked, “What is it to be human in an AI era?” Igor said he had “no interest in being human.” His personal interest in AI lies in investigating non-human awareness: using machine learning to model the way animals and plants communicate with each other and exchange information. Author Ursula K. Le Guin was cited as a touchstone for continuing the exploration into speculative non-human language systems.

The response

Turning the debate over to the audience, the panel fielded several heated questions about who benefits from automation. One attendee tweeted, “How can we ensure that the gains of AI benefit everyone, not just CEOs and executives? If I’m a CEO and I can automate jobs, what’s stopping me from lowering your salary because you’re not doing as much as before? How can we ensure the company is not reaping most of the benefits here?”

The consensus among both panelists and audience seemed to be that artificial intelligence needs to be carefully and ethically designed and applied in order to create inclusive, equitable workplaces. The benefits of AI would need to have a net-positive impact on society. And ultimately, the responsibility lies with us all to develop inclusive, empathetic, fair, and ethical guidelines and solutions that govern the advancement of AI. We are at a transition point today: the decisions we make now will have profound effects on the world as we move rapidly into an augmented intelligence reality.

Words and photos: Annie Brinich (anniebrinich@nyu.edu)

Future Labs

The Future Labs at NYU Tandon offer the businesses of tomorrow a network of innovation spaces and programs that support early stage startups in New York City.