Retiring the T-700: Toward a Mandalorian Conception of Artificial Intelligence

Hederis Team · Published in Hederis App · Dec 17, 2020 · 9 min read
Publicity still from The Mandalorian Media Kit, courtesy of Disney

Artificial intelligence has a bit of a negative connotation.

Considering how abstract the concept of AI is for most laypeople, myself included, we tend to jump straight to the humanoid face of The Terminator’s T-700, with its melted-off skin and flashing red eyes. That image crowds out the simplest definition of all: a computer that can think and act like a human.

The Terminator gets referenced often in articles on AI. Most recently, Gould Finch and Frankfurter Buchmesse couldn’t help slipping the reference into their white paper The Future Impact of Artificial Intelligence on the Publishing Industry. The quote “[AI] is not a magic wand, it’s not the terminator” isn’t really alarmist in nature, but once Arnold enters the chat, we get derailed by the threat of killer robots, as is to be expected.

I would like to throw a new reference into the conversations surrounding AI. The Mandalorian, in my opinion, has a better story to tell about AI than the outdated T-700 does.

The show tips the killer robot convention on its head.

The droid can be reprogrammed. Its purpose can be reconfigured to work with humans, rather than destroying them.

That is why, on the eve of the Season 2 finale, I wanted to look back on Season 1 and write about AI in this context, kicking off a series of pieces from the Hederis Team on AI in book publishing. We need a new way of understanding intelligent machines, especially if we want to enrich our understanding of the relationship we will have with AI in the future.

Grogu is coming for you, T-700! Publicity still from The Mandalorian Media Kit, courtesy of Disney

How Kuiil’s dialogue helped me better understand AI

There is a bit of dialogue from “Chapter 7: The Reckoning” that has always stood out to me.

A few lines spoken between Kuiil and Din (the Mandalorian; yes, he has a name, I forgot it too, and don’t get me started about Grogu).

To give you a little background: Din doesn’t trust the droid IG-11. Kuiil reprogrammed IG-11 and offered him to Din as a caretaker for Baby Yoda, but Din’s distrust lingers because IG-11 was originally programmed to kill that little green scene-stealer.

Their conversation goes as follows:

“Do you trust me?” Kuiil says to Din.

“From what I can tell, yes,” is Din’s reply.

“Then you will trust my work.”

In my memory the lines were a bit more poetic. When I watched this scene for a second time, I looked at my friend, who was pushed to the right sliver of my computer screen; she was taking notes. She is a rhet comp major. I should have known better.

We were watching the show together over Zoom. I was hoping she had something smart to say and that it would propel me into writing something equally smart.

Something beyond just “killer robot.”

I kept repeating the lines of Din and Kuiil’s conversation, mixing them up with my own recollection. “Trust me. Trust my work. Trust the droid.”

“That’s a type of syllogism,” my friend pointed out.

I had to look up what she meant by this.

A syllogism is a deductive argument that takes the form of a major premise, a minor premise, and a conclusion. The one you (and I) are probably most familiar with involves Socrates: All men are mortal. Socrates is a man. Therefore, Socrates is mortal. Two premises are presented (all men are mortal; Socrates is a man), and their combined truth yields the conclusion that Socrates is mortal.

A syllogism is built upon the idea that we learn by building on what we already know to be true. This is the basis of teaching someone how to reason. It is also how we teach computer programs to think.

How does this connect to computers?

Programmers translate human logic into computer models by using similar premise-based formulas.

For example, if you want to teach a computer how to recognize human faces, you set the conditions of what makes up a human face within your program. “Faces have noses. This has a nose. This is a face.” Okay so it’s a little more complicated than a major and minor premise, but still, if all the conditions programmed into the computer are true, the computer will recognize the human face.

Translating a face into a set of conditions involves the use of data points, and these data points are based on averages calculated from a data set of human faces (various premises). Often you will see these data points represented as a crude line-and-dot drawing overlaid on a person’s face in articles about facial recognition. Typical software will identify 80 nodal points on a person’s face. The program learns to recognize faces by reading those 80 nodal points and comparing them to the faces that make up the original data set. “Faces have [insert 80 nodal points]. This has [those same 80 nodal points]. This is a face.”
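To make that comparison concrete, here is a toy sketch of the premise-and-conclusion shape of the idea. Everything in it (the random “training” faces, the 80-point arrays, the looks_like_a_face function and its distance threshold) is invented for illustration; real facial recognition is far more sophisticated than a single average-and-threshold check.

```python
# A toy version of "compare 80 nodal points to a learned average."
# All data here is randomly generated for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend training set: 100 faces, each reduced to 80 (x, y) nodal points.
training_faces = rng.normal(loc=0.5, scale=0.1, size=(100, 80, 2))

# Major premise: "faces look like this" -- the average face geometry.
average_face = training_faces.mean(axis=0)

def looks_like_a_face(nodal_points: np.ndarray, threshold: float = 2.0) -> bool:
    """Minor premise + conclusion: if these 80 points sit close enough to the
    learned average, the program concludes "this is a face"."""
    distance = np.linalg.norm(nodal_points - average_face)
    return distance < threshold

# A face drawn from the same population as the training data passes...
print(looks_like_a_face(rng.normal(0.5, 0.1, size=(80, 2))))  # likely True
# ...while one far from that average may not, even though it is a face.
print(looks_like_a_face(rng.normal(0.9, 0.3, size=(80, 2))))  # likely False
```

Notice that the second face is rejected not because it isn’t a face, but because it sits far from the average the program happened to learn, which is exactly the failure mode described next.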

If a computer has difficulty recognizing a face, or telling the difference between one face and another, it is because those faces fall outside the averages of the data set it has been taught to recognize. This is how the logic built into a program bumps up against reality and fails to see it for what it is.

Syllogisms fail in this way too.

When algorithms are wrong

There is an air of objectivity to premise-based logic, but that objectivity isn’t always warranted. A syllogism can encode a false impression of reality.

For example: Sugar is sweet. Sugar is white and grainy. White grainy things are sweet. If you had never tasted salt you wouldn’t know any better, but if you ever swapped salt for sugar in your cookie recipe you would spot the lie.

So when your data set of human faces is made up primarily of white people and your user isn’t white, the program fails to recognize the variations in physical features among humans that we have come to call race. The program fails to understand reality because, at its core, it contains a biased impression of that reality, coded into it by the programmer. If the programmer feeds the computer two skewed premises (“This is a sample of human faces. These faces have white skin.”), the computer can’t be blamed for reaching the inevitable wrong conclusion: “Therefore, human faces have white skin.”

While facial recognition technology has often borne the brunt of these conversations, algorithms fail to recognize reality in other ways too, ways that have to do with variations in behavior rather than appearance, and the results are just as biased.

In late 2019, researchers found that software meant to help hospitals and insurance companies identify which patients would benefit from “high-risk care management” programs was recommending Black patients for that care far less often than equally sick white patients. This failure traces directly back to the program’s logic.

The algorithm was designed to flag potential “high-risk” patients based on the amount those patients had spent on health care.

The logic of this algorithm might be boiled down to this: High-risk patients require more care than healthy patients. More care means more money spent on medical expenses. Therefore, the patients who spend the most on health care must be the highest-risk.
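As a minimal sketch of that proxy logic (with invented numbers, field names, and a hypothetical risk_score function; the real algorithm was far more complex), it might look something like this:

```python
# A minimal sketch of "spending as a proxy for health need."
# Field names and numbers are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    annual_spending: float    # dollars spent on care last year
    chronic_conditions: int   # a crude stand-in for actual health need

def risk_score(patient: Patient) -> float:
    # The flawed premise: more spending must mean more need.
    return patient.annual_spending

def flag_for_high_risk_program(patients, top_fraction=0.5):
    ranked = sorted(patients, key=risk_score, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]

# Two patients with the same level of need, but different histories of access
# to (and trust in) the health care system, and therefore different spending.
patients = [
    Patient("A", annual_spending=12_000, chronic_conditions=3),
    Patient("B", annual_spending=4_000, chronic_conditions=3),
]
print([p.name for p in flag_for_high_risk_program(patients)])
# ['A'] -- equal need, unequal spending, unequal access to the care program.
```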

However, that logic doesn’t take into account how accessible doctors are to Black patients, or how income inequality and the affordability of health care shape a patient’s history of seeking treatment. Nor does the software consider the mistrust many Black patients feel toward the medical system, a mistrust grounded in a very real history of medical malpractice, including unethical experimentation and biased, neglectful care, with doctors less likely to recommend additional care for Black patients in need than for their white counterparts.

The logic behind the program failed because the programmers did not take into account the many conditions that factor into health care decision-making, or how those conditions lead to different patterns of behavior among patients; the result, of course, is that the software perpetuates racially biased care for a historically underserved group of patients.

These examples show how computers come to the wrong conclusions when they are given incomplete or skewed data. Both of these programs were also up and running, affecting how we work and live, with major flaws in their logic that went unchecked. When AI is sold as a tool to be trusted uncritically to “solve” the problem of big data analysis, rather than as a tool to be used, maintained, and questioned by its user, the technology runs unchecked, perpetuating false impressions of reality with real consequences.

We know about the failures of these two programs because of third-party testing. That makes a strong case for more independent testing of the algorithms that affect our lives, yet such testing rarely takes place.

So, AI is evil, again?

In my attempt to cast off the notion of evil robots, I have done the opposite. Is my quest to vanquish AI’s bad reputation a failure, then? I wouldn’t go so far as to say that. Yet.

Kuiil has something to say about the nature of droids that also applies to the nature of AI, so I would like to defer to him:

“Droids are not good or bad. They are neutral reflections of those who imprint them.”

In other words: Humans have biases. Humans program computers. Therefore, computers encode those biases. A program recognizes the reality constructed by the conditions its programmer created. The way a program understands the world is not objective; it is subjective, and that subjectivity is tied to the person or team of people who created the program. A program is only as “good” or “bad” as its creator and the data that trains it.

This point can help us better understand our relationship with machines, especially those advancing towards AI.

Now, for a final quiz, let’s consider the case of Amazon’s new health band, Halo.

Amazon’s Halo health band uses AI to track and analyze various aspects of the user’s physical existence, including tone of voice. Two reviewers tried the band simultaneously and found that it used more negative words to describe the tone of the reviewer who was a woman and a mother than the tone of the male reviewer, which speaks to some potential bias. But what people find most frustrating, beyond a computer telling you that you sound annoyed (perhaps the worst thing to tell someone who is, in fact, annoyed), is the invasive nature of the Halo’s data collection and the way it is presented to users. The band collects user data and the companion app spits it out in reports. There is little interaction in either the collection or the analysis. We entrust the intimate parts of our lives to this technology, with few opportunities to adjust the data points the program uses to decide whether or not we are annoyed (hint: we are).

So here is my question for you: do you trust the programmer? Have they taken into account the things about you and your life that are different from their own? Have they given you an opportunity to help them program their AI to reflect you and your needs?

Just as Kuiil saw that the original programming for IG-11 wasn’t serving the needs of the users (himself, Din, and Grogu), programmers should also be a part of the conversations around and critiques of the algorithms they create, and adapt accordingly. What if the creators of Skynet (the infamous predecessor to the Terminator) had listened to the feedback of the people they programmed Skynet to surveil? And what if those people had understood what Skynet was programmed to do, and had been able to give informed feedback to the developers? How might that story have turned out differently?

Trust is central to our philosophy about AI at Hederis, and one major component of our goal as an organization is to build AI into our platform so that users are given the chance not only to see how it is collecting and using their data, but also to tell us whether they trust it and to help us learn how to make it serve their needs better.

Make sure to tune in tomorrow when we’ll share some additional thoughts from the Hederis team on what’s on the horizon for AI in publishing in 2021 and beyond.
