http://www.slate.com

A.I. Can Drive A Car But It Can’t Write This Article (Yet)

For many years, communicating with expert systems or artificial intelligence (A.I.) was the domain of science fiction and academia. But as A.I. takes on more of the responsibilities and positions formerly held by humans, and becomes entangled with intricate, emotionally driven decisions, figuring out how to team up with these platforms has become a practical challenge that requires a practical answer.

Rob Peters
6 min read · Jun 30, 2018


Expert systems and learning systems are now a major part of many products people interact with every day, but to fully realize the promise of artificial intelligence, humans need more compelling ways of interacting with the technologies they use. The role of designers is to figure out how to build trusted relationships between people and platforms, so that expert systems elevate human innovation and influence rather than simply serving as a substitute.

Why Relationship Capital and Empathy Matter

Apple’s Siri doesn’t make transformative decisions for you. That is fine when you don’t need to understand how it reaches its conclusions, but communicating with a platform that makes an important decision on your behalf by taking in data, doing something extraordinarily difficult, and then giving you only vague information requires much more than a simple dashboard. This kind of communication requires relationship capital: trust and empathy between people and systems. If the purpose of advanced technology is to make complicated decisions quickly so humans do not have to, it is useless if people cannot trust it to do so. Building that trust between humans and the systems we use therefore becomes as foundational as building faster processors, more memory, and greater storage.

Imagine that you are riding in an Uber self-driving car when, without warning, it suddenly hits the brakes and changes direction. Perhaps the vehicle detected something you could not have known about, such as a crash ahead, but the system does not report this to you, and you do not yet trust it to make a split-second judgment. An abrupt change of direction without any communication of its reasoning is deeply unsettling. Most of the time, cars do not confront ethically difficult decisions. Sooner or later, however, they will, such as deciding which way to swerve at a congested crash site. Before autonomous vehicles can truly grow in adoption, people will most likely have to trust their cars to make complicated, even moral, decisions on their behalf, much as they trust another person behind the wheel. Other domains, such as health care, carry even higher stakes, and advanced systems are becoming immersed there as well.

Creating a Relationship Capital Interaction Process

In a conversation, I may misunderstand what you ask of me, or I may need more information; the iterative back-and-forth of communication lets you quickly correct my mistakes and lets me ask questions about what I need to understand. A comparable human-to-technology interaction process allows a platform to gather the data it needs to comprehend a question, even when the data required cannot be anticipated in advance. This is also the benefit of one of the core capabilities of most expert systems: they know when they do not know something.

Once technology gains this kind of insight, a fundamentally different relationship becomes possible. One of the biggest challenges of human-and-machine design is determining what information is relevant in a given context, so that the rest can be filtered out or de-emphasized. What happens when the technology can make these decisions on its own?

Architecting for Errors

Complicated systems, like humans, make errors. Avoiding them entirely is not possible. The objective should be to minimize their significance and to encourage people to forgive the system and help it learn over time. As technologies become both more individualized and more capable of learning, the ability for people to easily instruct them on how to behave becomes more important and more powerful.

Apple’s decision to let iPhone alarms sound even in silent mode is an example of this challenge. Silent mode disables only the sounds you did not specifically request, but most people’s expectations are simpler: when the off switch is flipped, everything turns off. This mismatch in expectations has caused problems, such as an alarm going off in a library. Had Apple made the opposite choice, a potential failure with greater consequences would be missing a meeting because silent mode silenced the wake-up alarm you were counting on. Both problems could be mitigated by greater mutual understanding and greater empathy for the impact of the technology’s errors. Today, the impact of such errors may be minimal, but the stakes are rising quickly.

Building Trust and Collaboration

What makes getting on a plane or a bus driven by a complete stranger something people don’t even think twice about, while the thought of getting into an autonomous car causes worry? Part of it is that humans generally judge other people to be reasonably competent drivers, but there is more to it than that. We grasp on an instinctual level why people act the way they do, and we feel we can anticipate how they will act. We have no such empathy for today’s smart technologies.

To treat patients well, a doctor, whether human or virtual, needs to be more than just smart; they must also be empathetic, persuasive, and able to engender trust. Likewise, getting into a driverless car without a steering wheel is going to be stressful until we learn how to build the kind of trust we have with other humans.

In one of its current projects, the design firm Artefact is researching the future of cars as they evolve toward complete autonomy. Being transparent, by presenting the vehicle’s understanding of its surroundings while in partially autonomous mode, can help earn drivers’ trust in the car’s ability to respond correctly to situations such as another car unexpectedly changing lanes or a pedestrian stepping into the road. This helps people come to see the car as capable of taking over as they gradually cede more control while driving. The same idea of surfacing a system’s interpretation and understanding is also core to some interesting IBM Watson interfaces that let people ask, and get answers to, complex high-level questions.

Conclusion

Many people are warning that A.I.-driven automation could hurt the economy by reducing or eliminating most jobs. Take the turn of the 20th century as an example: the widespread introduction of the car had a hugely positive impact on the quality of most people’s lives, but it also put most horses out of work.

At the beginning of the 21st century, are we the drivers who will benefit from today’s technological advances, or are we the horses hauling construction materials to Thomas Edison’s new plant? Designing expert systems that work collaboratively with people, rather than simply replacing them, can help ensure that the benefits of smart technologies elevate more people, creating trusted partnerships that are smarter than either humans or machines alone. A.I. can drive a car, but it didn’t write this article. At least, not yet.

www.StandardofTrust.com


Rob Peters

Relationship Capital | Gamification | Co-Creator of Peer SaaS Platform | HR Tech and Workplace Culture Strategist | CEO | Author of Standard of Trust Leadership