Artificial Intelligence Direction | Opinion

How do we ensure that the AI systems that influence our daily lives do so for the better, both for us as individuals and as a society?

Stacy Stanford | Published in DataDrivenInvestor | October 22, 2018 | 5 min read

AI and Governance | Artificial Intelligence

AI systems are rapidly influencing our lives, in both visible ways (AI systems we are aware of) and invisible ways (AI systems that work in the background of our lives without us noticing), and for both good and bad. Naturally, numerous investigations endeavor to understand these effects, often with an eye toward shaping these technologies in more positive directions. For instance, how many people’s jobs are expected to be displaced by increased robotics in the manufacturing sector? How will machine learning diagnostic systems improve healthcare, particularly in under-resourced regions? How can autonomous technology adapt to the professional or personal goals of its human user, and which interfaces should connect such AI systems with the humans who use them?

These issues, while obviously important, neglect a further question: how do we expect AI systems to modify our relationships with one another, and will it be for the better or for the worse? We as humans justifiably place enormous significance on interpersonal relationships with members of our family, workforce, community, and society in general. AI systems do not only affect us as isolated individuals; they also have the potential to either improve and expand, or constrain and damage, the relationships in our lives.

Genuine and hypothetical examples are easy to find. Think about the ways in which political and social standards of discussion have been harmed in recent years by the proliferation of automated and semi-automated bots on social media networks. In a more positive direction, consider the ways in which some robots take over mundane tasks so that teams can focus on genuine challenges, thereby enhancing their capacity to cooperate with one another effectively. AI systems do not just have the ability to affect each of us individually; their more significant impact may be on our relationships with one another.

Consider the genuine possibility that some doctors may soon be required to use AI systems for diagnosis and treatment decisions. If such AI systems perform demonstrably better than most doctors do, then there will be natural pressure from many parties to require doctors to rely on the more accurate AI system rather than their own clinical judgment. (In fact, calls for exactly such a requirement have increased after some notable recent demonstrations of AI superiority in diagnostics.) In this case, though, a doctor risks becoming merely an information broker between the patient and the AI system. If your doctor is simply a conduit for information transfer, then there seem to be few reasons for you to trust your doctor. That is, even though medical AI systems have the potential to improve individual (short-term) health outcomes, they also have the potential to significantly damage patient-doctor trust, the most central interpersonal relationship in healthcare.

Therefore, improved diagnostic accuracy (thanks to constant machine learning advances in the healthcare industry) might come at a steep cost.

For instance, consider a home healthcare robot that assists an elderly parent. The strain of caring for a parent can threaten or damage familial ties, precisely because of the role shifting that must occur. If a robot could perform many of these caretaking tasks, though, then the individual and his or her parent could potentially build (or rebuild) a deep, meaningful relationship. This kind of AI system does not threaten an interpersonal relationship, but rather can help people maintain an existing relationship or rebuild a damaged one.

We as humans are, on a very basic level, social creatures: our interests, as well as our ability to advance those interests, are bound up in our relationships with other people. Thus, many of our fundamental human rights depend deeply on our interactions, connections, and engagements with other people; arguably, many of those rights are partly constituted by those relationships. If AI systems threaten those relationships, then they threaten core human rights. This conclusion holds even if the AI system increases my own individual capabilities. The ethical and social value of AI technology depends on more than just the ways that I interact with the system. Its role in supporting and enhancing, or alternatively threatening and undercutting, our human-to-human relationships can be equally important.

AI systems promise great benefits, as well as great costs, to humanity. One imperative question that we must ask throughout this intelligence revolution is:

How can we obtain technology that advances our interests and goals?

While this question is increasingly being asked, the analyses too often center exclusively on positive and negative impacts for the individual. Instead, we must broaden the scope of our inquiry to also comprehend the ways in which these technologies can significantly alter, and perhaps even break, important relationships with members of our families, cities, nations, and global communities.

Distinguished Professor David Danks, Head of the Department of Philosophy and Psychology at Carnegie Mellon University, gives us an interesting perspective in his paper "Impacts on Trust of Healthcare AI": it is important for us, as a society and as individuals, to understand the significance of these new AI technologies being added to our everyday lives, along with the impact of applying them, whether we are aware of them (visible AI systems) or unaware of them (background AI systems).

References:

Success of AI in Healthcare Relies on User Trust in Data | Health IT Analytics | https://healthitanalytics.com/news/success-of-ai-in-healthcare-relies-on-user-trust-in-data-algorithms

Impacts on Trust of Healthcare AI | Emily LaRosa & David Danks | Carnegie Mellon University | http://www.andrew.cmu.edu/user/ddanks/papers/HealthcareAI-Conference.pdf

Unpacking the Social Media Bot: A Typology to Guide Research and Policy | Robert Gorwa & Douglas Guilbeault | Arxiv | https://arxiv.org/pdf/1801.06863.pdf

Applications of Artificial Intelligence in Elderly Care Robotics | Tech Emergence | https://www.techemergence.com/applications-of-ai-in-elderly-care-robotics/

The AI Doctor Will See You Now | Wall Street Journal | https://www.wsj.com/articles/the-ai-doctor-will-see-you-now-1526817600
