Who “Makes” The Rules? Whose Labels to Use?

Living By the Spirit in the Age of Machine Learning

FaithTech
FaithTech Institute
9 min read · Oct 23, 2020


Winner of FaithTech Institute’s 2020 Writing Contest!

Visitors to Google Codelabs’ TensorFlow tutorial will encounter two diagrams contrasting traditional computer programming with Machine Learning (ML):
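(The diagrams, roughly: traditional programming takes Rules + Data in and produces Answers, while ML takes Answers + Data in and produces Rules.) The contrast can be sketched in a few lines of code. The toy example below assumes TensorFlow/Keras and uses a codelab-style task, learning y = 2x − 1 from examples:

import numpy as np
import tensorflow as tf

# Traditional programming: a human hard-codes the rule.
def hard_coded_rule(x):
    return 2 * x - 1  # rules in, answers out

# Machine Learning: the rule is 'manufactured' from answers + data.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]).reshape(-1, 1)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)  # data + answers in, rule out

print(hard_coded_rule(10.0))              # exactly 19
print(model.predict(np.array([[10.0]])))  # approximately 19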

In traditional programming, rules are essentially hard-coded “inputs” to systems that we will term “rule-following,” whereas in ML rules are products: rules are “made” in the sense of “manufactured.” The Machine Learning side of the diagram reminds me of the ’90s CCM song by Steven Curtis Chapman (and my colleague James Elliot!), “Who Makes the Rules.” It includes these lines:

“I guess the one thing that’s been bothering me the most is when I see us playing by the same rules that the world is using.”

This concern ties into current conversations about fairness in the development of automated decision-making systems, but first, let’s explore similarities between ML and Christianity regarding rules.

Photo by Christina Morillo

Like ML, Christianity as presented in the letters of Paul is distinct from following the rules in the Law of Moses. In ML terminology, the Law was intended to be a metric, not an objective: it was a standard of behavior, yet it was “powerless” to produce the goal of inner righteousness. “For no one can ever be made right with God by doing what the law commands. The law simply shows us how sinful we are.” In Christian traditions, treating this rule-following as the goal itself is called legalism, an ancient example of Goodhart’s law:

“When a measure becomes a target, it ceases to be a good measure.”
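In ML code the metric/objective distinction is explicit: the loss is the objective the optimizer actually pursues, while metrics are merely watched. A minimal Keras-style sketch (the tiny architecture here is arbitrary, chosen only for illustration):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The loss (objective) is what training optimizes; the metric is only
# reported. Goodhart's law warns against promoting the metric into
# the objective, i.e., optimizing the measure itself.
model.compile(optimizer="adam",
              loss="binary_crossentropy",  # objective: optimized
              metrics=["accuracy"])        # metric: merely measured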

Like the rule-following artificial intelligence (AI) systems built for business and consumer applications before the “rise” of ML, a legalistic system suffers from brittleness, poor knowledge acquisition, and difficulty handling complex situations. Paul encourages believers to abandon rule-following and “live by the Spirit,” thereby making inferences via the Law put in our minds and written on our hearts — or, in ML parlance, encoded in the weights of our neural networks. Another ’90s CCM song expounds,

“It’s just a Spirit thing,
It’s just a holy nudge,
It’s like a circuit judge in the brain…
It’s just a little hard to explain.”

– “Spirit Thing,” Newsboys (1994)

“Hard to explain”: like many ML systems, the advantages of being led by the Spirit come at the cost of explainability. Having rules written out offers a transparency that sophisticated inference systems may lack. The “right to an explanation” in the EU’s GDPR effectively precludes the use of complicated neural networks for decisions such as who receives bank loans and who is deemed “high risk” in criminal proceedings; such decisions are too important to be left to inscrutable models, many of which demonstrate unfairness in various forms, referred to collectively as “bias.” As the recent conflict between Yann LeCun and Timnit Gebru showcased, there is a strong notion that bias comes from training datasets, but bias can enter at many stages of a development process involving a global supply chain. Still, cases such as the lack of Black faces in computer vision datasets, or the use of historical data from mostly-male hiring practices, make datasets a good focus for our discussion.

https://twitter.com/Chicken3gg/

ML models trained on textual datasets learn to represent the biases of humans. Microsoft’s “Tay” chatbot fiasco showed how easy it is to create “racist AI” without careful curation of data, yet even one of the most widely-used computer vision datasets is rife with derogatory labels for people: “snob,” “slattern,” etc. These labels matter because ML systems are often structured as classifiers, “learning” such “ground truth” labels in order to apply them to new cases. Labels are the call-signs of classifications, which then feed into policies. And as the saying goes, labels stick: to justify a discriminatory or violent policy, tie it to a label. If someone labels you a Nazi and believes Nazis merit punching, then… One ML researcher was recently labeled a “bigot” for sharing reports of human rights abuses in China and had his employment threatened because “people will feel unsafe.” Leaving aside the irony of who is making whom feel unsafe, we see that labels can be powerful and contentious. The choice of “whose labels to use?” is then key. This is easily seen in medicine, where labels from experts should supersede those obtained by crowdsourcing, yet it applies to moral judgments as well. In the words of another ’90s CCM song,

“To label wrong or right by the people’s sight is like going to a loser to ask advice.”

– “Socially Acceptable,” DC Talk (1992)

One important labeling task, sentiment analysis, seeks to classify content as expressing “positive” or “negative” sentiment in texts such as tweets, movie reviews, or — as demands multiply for content moderation on social media — hate speech. Usually a model outputs a sentiment score, which is turned into a classification according to whether the score exceeds some threshold value (sketched in code below). A naively constructed dataset tends to produce bias, yielding lower scores for text that merely mentions “underrepresented” groups in terms of race, gender, or religion. Another important text-processing task involves forming word associations via “word [vector] embeddings,” which end up encoding the human biases present in the text and raising the question “Whose Truth is the ‘Ground Truth?’”
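A toy sketch of that score-then-threshold scheme, with the scores and threshold entirely made up for illustration (no real model is involved), shows how such a bias surfaces:

# Hypothetical cutoff: scores at or above it are labeled "positive."
THRESHOLD = 0.5

def classify(score):
    return "positive" if score >= THRESHOLD else "negative"

# Made-up scores from an imagined, naively trained sentiment model:
# the sentences differ only in an identity term, yet one score is
# dragged below the threshold, flipping the label.
scores = {
    "Let's go get Italian food": 0.60,
    "Let's go get Mexican food": 0.45,
}

for sentence, score in scores.items():
    print(f"{classify(score):8s} ({score:.2f})  {sentence}")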

To address problems of bias, there are efforts to increase diverse representation of people within datasets, within the teams who create ML models, and among the voices amplified at industry and academic conferences, in categories of race, gender, sexual orientation, and… Often conspicuously absent from the list of categories is “religion,” despite the underrepresentation of religious persons in the technology industry and academia. This is a powerful omission, as Christians are encouraged in Scripture to classify themselves not in ethnic terms but rather by the cross-racial unity of the family of God. Thus the classification scheme determines what forms representation and diversity take. As Kate Crawford pointed out to the Royal Society, “Systems of classification are themselves objects of power,” power increasingly concentrated in the hands of the few creating AI systems. Crawford continues, “AI is rearranging power, and it’s about configuring who can do what with what and how knowledge itself works.”

Sieger Köder, “The Meal”

The preference to transcend diversity of backgrounds is common ground between Christians and technology companies, yet diversity of beliefs brings with it inevitable conflicts. Whereas Christians are instructed by Jesus’ example to sit at the table with ‘problematic’ individuals and even to love those they find reprehensible, the non-Christian world is under no such compulsion, even proudly refusing to sit with opponents. As Alasdair MacIntyre argued in After Virtue, the inconsistent use of values in modern pluralistic societies results from keeping the conclusions of centuries of moral reasoning while denying their basis, yielding unending spectacles of moral positions asserted with passionate sincerity by avowed moral relativists.

Other efforts to “fix” bias include “de-biasing” ML models — which amounts to “re-biasing,” depending on what one means by “bias.” De-biasing enforces symmetry (or invariance) of outputs under changes in inputs (e.g., male→female, black→white), as sketched below. Yet “bias” can also mean “implicit assumptions,” and one should be aware that requiring symmetry is itself a bias. For example, many in progressive circles found the word-embedding analogy “Man is to Computer Programmer as Woman is to Homemaker” to be “horrifying.” De-biasing it corrects a historical error for “programmer” (the first programmer, Ada Lovelace, was a woman), yet it imposes an ideological bias against the gender role of “homemaker” that many — though certainly not all — Christians value. This choice is an assertion of will, in Hannah Fry’s words, “deciding what kind of world we want to live in.” But the question remains, “who is ‘we’?”
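The analogy and the de-biasing “fix” can both be sketched with nothing but vector arithmetic. The three-dimensional vectors below are fabricated stand-ins for real learned embeddings, contrived so that the analogy resolves as described:

import numpy as np

# Fabricated toy "embeddings" (real ones are learned from text and
# have hundreds of dimensions).
vec = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 0.8, 0.9, 0.3]),  # gendered by the corpus
    "homemaker":  np.array([-0.9, 0.8, 0.3]),
    "doctor":     np.array([ 0.6, 0.9, 0.4]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(query, exclude):
    return max((w for w in vec if w not in exclude),
               key=lambda w: cosine(vec[w], query))

# "man is to programmer as woman is to ?"
query = vec["programmer"] - vec["man"] + vec["woman"]
print(nearest(query, exclude={"man", "woman", "programmer"}))  # homemaker

# De-biasing as enforced symmetry: project out the gender direction,
# after which words no longer differ along it.
axis = vec["man"] - vec["woman"]
axis = axis / np.linalg.norm(axis)
debiased = {w: v - (v @ axis) * axis for w, v in vec.items()}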

Examples of algorithmic anti-Christian bias don’t make news the way those involving race or gender do, yet they include the automated (mis)labeling of traditional Christian views on sexuality as hate speech and of worship videos as “harmful or false information.” This omission makes sense given the indifference or outright hostility toward Christians in academia, the press, and Silicon Valley — the latter meriting an episode of the HBO TV comedy! A few recent instances: signers of the Southern Baptists’ statement on AI Ethics shying away from listing company affiliations (an instance of “closeted conservatives”), because even privately acting according to one’s faith outside of work can get you “outed” and ousted; a prestigious panel on “science, religion, AI & ethics” featuring atheists but no believers (or coders?); and a research team showcasing anti-Catholic profanities as their premier text corpus example.

Photo by Miguel Á. Padriñán

While Biblical views on sexuality top the list for many critics, “Christianophobia” also draws from the long-discredited “conflict” narrative around faith and science (even though the ‘harder’ the science, the more Christians one finds) and from stereotypes based on deplorable subgroups. Such stereotypes screen out the many innovative contributions Christians have brought to science and technology, often arising out of their distinctive worldviews. Organizations that label themselves “progressive” are at odds with their stated values when they promote harmful Christian stereotypes in ways they would not for others holding similar views on sexuality (e.g., Muslims). Yet Jesus predicted this: “You will be hated by everyone because of me.” Should Christians expect otherwise? Moderators likewise contradict their own claims of neutrality when they proudly delete political posts they disagree with. The point is not about being pro-Trump or anti-Trump, but to highlight the dynamics of authority at work in tech companies’ control over the global flow of information, its classification, the resulting policies, and their effects. As Zeynep Tufekci recently stated,

“The real question is not whether Zuck is doing what I like or not,…[it’s] why he’s getting to decide what hate speech is.”

For ML-based content moderation, this returns us to “whose labels?” and “who’s at the table?” The “expert labeler” for Christians is God, revealed in Scripture, in the person of Jesus, and experienced via the Holy Spirit. Any efforts to create CCM-like alternative media, social platforms, or even ML models — including the important question of who gets to apply the label “Christian” and to whom — must be done viewing God as the power, apart from worldly politics. He holds the power of re-labeling: declaring the unloved to be beloved and the guilty to be innocent, renaming and affirming new identities. The Holy Spirit can reset the “weights” in our minds’ neural networks, rewriting rules internalized during painful pasts.

Photo by Markus Spiske

The Christian answer to the question posed by a new ML paper, “Can the Rules in a Deep Network be Rewritten?”, is emphatically Yes. The world itself will be regenerated. Thus Christians can perform “bi-directional” inference, looking backward at God’s faithfulness and forward via his promises, as we focus our attention on Christ in the present, so that (to put 2 Corinthians 3:18 into ML jargon) we may maximize our similarity measure with him, to ever-increasing capability.

When working “at the table” with others, Christians should promote the many goals of progressive technical and academic groups that have Christian precursors: justice for the poor, welcome for the foreigner, relief for the needy, and stewardship of the environment. As Christians seek to work diligently for powerful non-Christian employers, we can emphasize this common ground and partner together to transform the world in redemptive ways.

Physicist Dr Scott H. Hawley’s interest in Machine Learning began in 2013 when he saw how it would affect the careers of his audio engineering students at Belmont University. His writings on AI and ethics have appeared in Perspectives on Science and Christian Faith and Superposition.com. He’s fascinated by classification. Find him on Twitter @drscotthawley.

Learn more about FaithTech at faithtech.com.

Want to join the movement? Start here.
