You Forgot to Plan For Us Still Being Here, Didn’t You?

©2023 by Rebecca Lewis. All Rights Reserved.

Becky Lewis (she/her/hers)
6 min read · Oct 8, 2023

On LinkedIn, Michael Hobbs posted an exploration of the idea of “guardrails” for AI, arguing that its focus on assessing the quality and integrity of AI outputs is problematic. He is concerned that people talk about “guardrails” to comfort themselves, but that the guardrails do not actually exist.

It’s as though the public’s starting assumption is that the AI companies have been fair and reasonable up to this point (because who wouldn’t be fair and reasonable?) and so we just need to find a fair way to define “guardrails.”

https://www.linkedin.com/posts/michaelhobbs2_broken-guardrails-for-ai-systems-lead-to-activity-7116373771971612672-Qjat

I see things very differently from the public, and very differently from anyone who has never worked in that industry and so is bewildered by the products and marketing they see. (I do not mean to place Michael Hobbs in that grouping. I do not know him, nor do I know much about him, though I generally respect his posts.)

Hiding in plain sight within the AI outputs, marketing narratives, and spokespeople’s statements is natural evidence of the specific inputs and processes used to create them: the hallmark beliefs of the people who created them.

I recognize the clues because I observed the events as a worker: there was no planning or action for the well-being of people-different-from-them, unless it could be used to keep the permanent employees believing that they were serving humanity and pushing the boundaries of DEI.

Alt Text: Photo of two wide-eyed kittens who have crammed themselves into a bathroom sink, side-by-side. Meme text says, “No room for you in our spa.”

Image Credit: “no room for you” by chwalker01 is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/

The AI outputs and issues that we consider harmful could have been avoided or mitigated. Instead, tradeoff decisions were made, ruling that the things we consider harmful do not matter; the show must go on.

I am not theorizing; I am reporting my observations as a worker.

When people in a company set out to develop some aspect or functionality of an AI, some of them develop their technology and then look for a problem to solve with it. Problem Number One.

Even the ones whose technology was inspired to solve a particular problem then have to define for themselves HOW this technology actually solves that problem.

They have far too much idealistic confidence that the rest of the world will benefit from the same things that benefit them, and that they are a priority (they are elite futurists, serving humanity and advancing the boundaries of DEI — at least until they get laid off and locked out of their work accounts at 2:00 a.m.). Their companies actively teach and feed these beliefs. Problem Number Two.

Even if they break the problem down into components of systems, interactions, demonstrated degrees of agency, and whatever it is possible to know or suspect about components, they multiply their “straying from reality” risk with each increment. Problems Number Three through 2,000.

Do they value the insight of other professions, of non-techie humans? Is changing a problem’s technology really enough to solve the problem, without addressing human behavior and other non-tech factors? No; apparently not. Problems Number 2,001 through 5,000.

Do they do tests as “reality checks,” to test how far they have strayed from accuracy? After watching Coded Bias on Netflix, I am not so sure. Problem Number 5,001.

Count the inferences, predictions, rationalizations, and tradeoffs involved in all those processes. Problems Number 5,002 through 10,000.

Count the times that workers with consciences must tell themselves, “This doesn’t seem right, but the show must go on.” Problems Number 10,001 to 20,000.

Count the times that the company culture intentionally conditions their workers to be comfortable with considering and treating other humans as “less-than.” Problems Number 20,001 to 50,000.

Who is “less-than,” according to techie culture?

  • people in roles designated to always be filled by contractors rather than by permanent employees (not a temporary strategy);
  • people in the public whose faces, behavior, lives, jobs, and well-being serve as guinea pigs, databases, and sacrifices to ROI for tech products;
  • non-techie people, especially from prior generations and in socioeconomic circumstances that preclude buying tech products;
  • people harmed by faulty inferences, logical fallacies, and faulty products that the public trusts too much (because the intentional company marketing narratives tell them they should trust the AI more than they trust other humans);
  • people harmed by products that function exactly as designed, because it was decided that their harm was an acceptable cost and tradeoff (that they don’t really matter; the show must go on).

Do all those other people matter?

If you honestly believe that they matter, then there are logical consequences: you would have to numb or deceive yourself in order to be okay with watching your work harming them.

If you believe that “some people matter; some people don’t,” then there are logical consequences: your behavior and narratives are saturated with that belief, and it is not simple to hide.

In my LinkedIn posts and Medium articles, I’ve pointed out lots and lots of evidence of those “logical consequences” in outputs of AIs and in statements of AI company spokespeople: not as my opinion or belief about aspects or principles of AIs, but as a whistleblower describing specific events and pointing to the natural results, hidden in plain sight.

Isn’t it strange that “no one knew” there were biases in the LLMs, generative AIs, and facial recognition engines until a bachelor’s-level researcher pointed it out? Coded Bias, on Netflix, tells the story.

Did no one in the AI companies know? Or did no one in the AI companies care until they could see that the public cared?

Had anyone been curious, at all, and done bias measurements and testing? Had anyone in the company noticed that their own face was consistently not being accurately “read” as being a human face?

Reportedly, IBM fixed the biases in their model very quickly once they were pointed out. I am not confident that the fix was foundational enough, but if it was so quick to fix, why had no one already fixed it before releasing the model to the public?

Was it really such a surprise to them that the public cared? Had it not occurred to them, until they saw that the public cared, that they should at least pretend that they cared?

Oops! Their beliefs, narrative, and behavior were showing; they forgot to at least appear to plan for a world where people-not-like-them can also exist and thrive.

Alt Text: Photo of two wide-eyed kittens who have crammed themselves into a bathroom sink, side-by-side. Meme text says, “No room for you in our spa.”

Image Credit: “no room for you” by chwalker01 is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/

There were individuals in those companies who knew, and who were uncomfortable and confused. They were probably afraid that they would lose their success, salary, elite standing, and financial stability if they spoke up about it. Their tribe would banish them.

I invite those workers who are uncomfortable and confused with being pressured and conditioned to treat other people as “less-than” to speak up to their co-workers and management. See what they do with it. Their responses will tell you a lot.

If the company takes it to heart and changes their behavior and narratives as they create and market products, I’ll be the happiest person in the room.

If you are retaliated against, is that really worse than dying inside because you know the truth, feel powerless to do anything about it, and live comfortably because of the truth while you see others in need?

I have tried both strategies. Dying inside is worse.

Right on these platforms (Medium and LinkedIn) there are many disciplined professionals and organizations who will stand with you. If you despair of finding them, let me know.

--

Becky Lewis (she/her/hers)

I've been a technical writer for over 16 years. I am a human witness. I listen to human stories and retell them on LinkedIn and Medium.