AI: 4 Key Take-aways That Separate Fact from Fiction

By Dave Carpenter

“We are summoning the demon,” warned Elon Musk in a Vanity Fair piece last spring, describing what’s in store for humanity should artificial intelligence’s evolution go unchecked.

As the head of Tesla and SpaceX, Musk resides in rarefied air among Silicon Valley’s most successful visionaries. He also has a penchant for flamethrowers and for catapulting luxury cars into space. Suffice it to say, Mr. Musk has a flair for the dramatic.

Indeed, AI’s current state and foreseeable future are far more pedestrian, according to the panel that convened in Toronto in January to discuss the event’s theme, ‘Ethical AI: What Kind of society do we want to have?’

Hosted by Humans For AI and Girl Geeks Toronto, the event featured distinguished women in the artificial intelligence field weighing a pressing question: despite AI having already improved many aspects of our daily lives, have we ceded too much control to machines in exchanging our personal information for convenience?

Here are the takeaways from the discussion (see the full list of panelists at the bottom):

Our Robot-Like Overlords Reside in Hollywood (and Elon Musk’s head).

No one can foresee when machines might attain human-level cognition and independently solve complex problems. Despite dramatic press headlines, self-learning machines, let alone malevolent self-conscious ones, are the stuff of fiction. Artificial intelligence and its efficacy depend largely on precise human instruction.

As panelist Inmar Givoni, Autonomy Engineering Manager at Uber Advanced Technologies Group, put it:

“Almost all successful models out there are supervised models. This is actually a problem right now. Machines can’t think enough for themselves; actual humans have to label all the data points for them.”

In the case of driverless cars, examples of these flesh-and-blood-supplied data points include street signs, lanes, and stoplights.
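To make Givoni’s point concrete, here is a minimal, purely hypothetical sketch of supervised learning: every training example is a (features, label) pair, and the labels come from a person. The feature names, data values, and the toy nearest-neighbor classifier below are all invented for illustration; real perception systems are vastly more complex.

```python
# Hand-labeled training data supplied by humans:
# (size_in_pixels, fraction_of_red_pixels) -> what the object is.
labeled_examples = [
    ((30, 0.9), "stop_sign"),
    ((28, 0.8), "stop_sign"),
    ((10, 0.1), "lane_marking"),
    ((12, 0.2), "lane_marking"),
]

def predict(features):
    """Classify a new observation by its nearest hand-labeled example
    (a 1-nearest-neighbor rule, the simplest supervised classifier)."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_examples,
                  key=lambda ex: squared_distance(ex[0], features))
    return nearest[1]

# A large, mostly-red patch is matched to the human-labeled stop signs.
print(predict((29, 0.85)))  # -> stop_sign
```

Without those human-supplied labels, the model has nothing to learn from, which is exactly the bottleneck Givoni describes.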

Data Is The Real Terminator

Big enterprises have amassed vast amounts of our personal data, but data for data’s sake provides little value for a business’ bottom line. According to panelist Karen Bennet, VP of Engineering at Cerebri AI:

“Private enterprises see AI as the next wave in making a lot of money, but you have to know what to do with that data in order to make AI useful.”

Increasingly for banks and credit auditors, the answer lies in applying forms of supervised machine learning such as ‘classification’ to comb massive amounts of data (aka ‘Big Data’) and determine our financial viability. When we apply for a loan, “most of us are unaware that we are consenting to more than we ever dreamed,” says Bennet. Consumer credit reporting agencies such as Equifax compile considerably more of our personal data than most of us realize: the places we regularly shop, what we purchase, where we travel, what we post on social media. They then apply machine learning to glean insights from these data points, not only to determine whether we qualify for the loan but also to profit by sharing this information with government agencies, financial institutions and insurance companies, all perfectly within their legal rights.
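A toy sketch can show what a classification pipeline of this kind might look like in principle. Everything below is hypothetical: the field names, the features, the weights, and the approval threshold are invented for illustration, standing in for a model a lender would actually train on historical data.

```python
# Hypothetical raw records aggregated about one applicant
# (shopping habits, travel, social media), per the scenario above.
applicant = {
    "monthly_income": 4200,
    "shops_at": ["discount_grocer", "electronics"],
    "trips_last_year": 1,
    "social_posts_flagged": 0,
}

def to_features(a):
    """Flatten disparate personal records into one numeric feature vector."""
    return [
        a["monthly_income"] / 1000,   # income in thousands
        len(a["shops_at"]),           # breadth of spending
        a["trips_last_year"],         # travel activity
        a["social_posts_flagged"],    # social media signal
    ]

# Invented weights standing in for a trained classifier's parameters.
WEIGHTS = [1.0, -0.2, 0.5, -2.0]
THRESHOLD = 4.0

def classify(a):
    """Binary classification: score the features, compare to a threshold."""
    score = sum(w * x for w, x in zip(WEIGHTS, to_features(a)))
    return "approve" if score >= THRESHOLD else "deny"

print(classify(applicant))  # -> approve
```

The unsettling part, as Bennet notes, is not the arithmetic but the breadth of the inputs: data collected for entirely different purposes ends up as features in a decision about us.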

Overseas, the European Union has established the GDPR (General Data Protection Regulation) with the aim of protecting citizens’ data, but it’s likely a toothless effort, and no comparable regulation exists in the US or Canada.

How Do You Solve a Problem Like Inherent Bias?

All the panelists agreed that, because AI models depend on human-supplied data, our subjectivity can lead, and has led, to flaws that disadvantage segments of society.

The field of AI currently lacks the means to differentiate between causation and correlation, and that’s a problem when homogenized groups apply AI to further research or product development. The real-world consequences are already visible: biased algorithms used in US courts to assess arrested individuals have over-indexed for black people, and panel moderator and Humans For AI Chief Marketing Officer Hessie Jones cited the infamous case of the racist soap dispenser.

AI Will Serve Us (Ideally All of Us) for the Better

As the discussion closed, panelists generally agreed that the machines will not rise up, and that artificial intelligence’s advancement will result in job losses in some industries but new employment in others, where AI and humans work in concert. Think of self-driving 18-wheelers with a human on board, providing direction and assistance where needed.

And if and when artificially intelligent machines do challenge our full cognitive capabilities, it may spur our own evolution. As Karen Bennet pointed out, in the famous 2016 match in the board game Go, AlphaGo, a program that utilizes machine learning, beat 18-time world champion Lee Sedol three games in a row, but Sedol managed to rally and win the fourth game. Would Sedol have won that game had a machine not forced him to think differently?

List of moderator and panelists at ‘Ethical AI: What Kind of society do we want to have?’ in Toronto, hosted by Humans For AI and Girl Geeks Toronto: Hessie Jones, moderator and Chief Marketing Officer, Humans For AI; Karen Bennet, VP of Engineering, Cerebri AI; Anna Goldenberg, member of the Vector Institute, Assistant Professor in the University of Toronto Department of Computer Science, and Scientist at the Genetics and Genome Biology Lab at SickKids Research Institute; Inmar Givoni, Autonomy Engineering Manager at Uber Advanced Technologies Group.

Dave Carpenter is a member of Humans For AI and Principal at Carpendium Inc., a content strategy and creation consultancy based in Toronto.