Why AI Requires a Human Touch
Humans For AI may sound like a band of do-gooders in a sci-fi movie, crusading on behalf of enslaved robots. In fact, it's an organization that exists in the here and now, and it's not the robots they're lobbying for; it's us.
A notable 2013 study from the University of Oxford revealed that 47% of US jobs were at risk of being lost to advancements in computerized automation. That's a scary number, made more alarming by the study's finding that the bulk of these job losses will increasingly occur in white-collar roles "likely to be substituted by computer capital."
But job loss represents just the tip of the iceberg as AI increasingly influences our world.
Many fear that coming advancements in machine learning could render humanity simply redundant, or worse. However, Beena Ammanath, Global VP of Artificial Intelligence, Data and Innovation at HP and founder of Humans For AI, doesn't believe the triumph of machine over man is inevitable, as long as we play a central role in shaping AI's future.
Humans For AI comprises over 50 volunteers (disclaimer: I'm one of them) across a variety of industries, from tech to retail to data science and education. The group seeks to demystify AI and ensure that it's us humans who shape artificial intelligence's transformative effect on global society going forward. Rapid increases in computer processing power in recent years, combined with the ability to store massive amounts of data, have led to exponential advancements in machine learning that threaten to outpace society's ability to understand and regulate AI.
Humans For AI believes the way to avoid ceding too many decisions to artificial intelligence, both personally and as a society, is to provide widely accessible education and tools that bridge the gap between domain expertise and AI technology.
“AI is still in its infancy, but as AI advances we will need subject matter experts, the domain experts actively involved in building out the AI products for their domains,” writes Ammanath on Medium.
Ammanath cites the legal profession as one example of an industry where AI will contribute to its betterment because of, not in spite of, flesh and blood humans: “To truly build a robust AI product for the legal field, we will need a lawyer involved in the product design. Very soon, we are going to need lawyers who can understand AI concepts and capabilities and start thinking about what AI lawyer-specific products need to be built out. We will need lawyers who can bridge the gap between tech and law.”
Yet relying on domain experts to build better, safer AI products still threatens to leave swaths of society standing on the outside looking in, similar to what occurred with the advent of computers in the mid-20th century. Humans For AI wants to ensure that those the organization sees as most at risk from AI, including women and minorities, also play a central role in its future.
As Tess Posner, executive director of AI4All, an Oakland, California-based nonprofit that "works with high school students to increase diversity and inclusion in AI career pathways," argues, the unintended cultural biases of the homogeneous segments of society that have to date researched and created AI systems appear in current software, such as racially biased risk-assessment tools and facial-recognition technologies, the latter of which have trouble detecting non-white faces.
"Humans For AI's goal is to leverage AI's strength to improve diversity as AI becomes more prevalent. While this means employment, it also means ensuring diverse representation to remove inherent biases as AI unfolds," according to Humans For AI Chief Marketing Officer Hessie Jones.
Jones lives and works in Toronto, a rising center of AI research and technology where over two-thirds of the city's population hail from somewhere else, a diversity reflected in its burgeoning AI start-up community. While attending a recent University of Toronto event on the region's growing AI business opportunities, writer Jonathan Kay noted the range of last names on attendee ID cards, which included "Adejuwon, Ehrsam, Conde, Pal, Lepshokova, Dhamani, Kurian, Ing"… and only one MacGregor.
Broadly, industry indicators give grounds for optimism that women may play a more prominent role in shaping AI's future for the better.
Examples include Fei-Fei Li, Chief Scientist of Artificial Intelligence and Machine Learning at Google Cloud, who co-founded AI4All with support from Melinda Gates. At Microsoft, the FATE research group, which stands for Fairness, Accountability, Transparency and Ethics in AI, strives to expose biases in AI data that skew results. Another example is Latanya Sweeney, a Professor of Government and Technology at Harvard and Director of the university's Data Privacy Lab. Sweeney's well-documented research revealed that online searches for names typically associated with black people resulted in discrimination via the advertising served alongside the results.
It is encouraging to see individuals and organizations such as Latanya Sweeney, Fei-Fei Li and Humans For AI actively working to ensure that not only humans, but a diverse representation of humanity, play a role in AI's future. Yet for all of artificial intelligence's advancements over the last decade or so, the field is still at the starting line in terms of becoming a truly representative industry.
With women holding less than 20 percent of executive positions in artificial intelligence, and some racial groups not only marginalized from the AI field but harmfully targeted by it, we still have a long way to go to ensure both that humans define AI's future and that that future has a place for all of us.
About the Author: Dave Carpenter is a digital media content creator and strategist, and a volunteer with Humans For AI, a non-profit focused on leveraging AI technologies to build a more diverse workforce for the future. Learn more about us and join us as we embark on this journey to make a difference!