Humanism and AI

The modern technological landscape continues to change, and the world changes with it. The term "humanism" has been used to describe the orientation of giant technology companies in the development of artificial intelligence.

The Washington Post stated, "Tom Gruber of Apple describes Siri as 'humanistic AI — artificial intelligence designed to meet human needs by collaborating [with] and augmenting people.'"

Satya Nadella, Chief Executive of Microsoft, said, "Human-centered AI can help create a better world." In short, the rhetoric around artificial intelligence deploys the terms "humanism," "humanistic," and "human-centered" to legitimize the mission of AI development.

The Washington Post argues that such terms emerge in conversations about bringing humanity together. However, important questions remain about the rhetoric itself and its connection to reality.

"The word 'human' crops up in conversations across the technology industry, but it's not always clear what it means — assuming it means anything at all," the article opines. "Intuitively comprehensible, it sounds nonthreatening, especially in contrast to alienating jargon such as 'machine learning.'"

The larger companies present their orientation as ergonomic: developing technologies by and for human needs and wants. This, the article argues, becomes the basis for the use, and even the abuse, of the term "humanistic."

"But calling the results 'humanistic' is ultimately rhetorical sleight of hand that suggests much and means little. Unless these companies reconsider their underlying approach, their words will remain empty," the reportage continued. "Among the big tech companies, Google has voiced the clearest expression of the idea of humanistic AI. In March, Li, chief scientist for AI research at Google Cloud, penned a New York Times op-ed."

Google declined to renew its Department of Defense contract and set forth ethical guidelines stating that its technologies would not be developed for weapons. A future of AI weapons, the reasoning goes, would not be a positive one for humanity.

However, is this the case? Do the non-renewal of the contract, and the stated orientation of the technology, amount to a genuinely humanistic movement?

The Washington Post explained, “Consider computer vision, a type of AI that was key to Project Maven (and is central to self-driving cars). Photographic images from cameras mounted on drones are widely used to gather visual evidence and provide forensic truth value for military decision-makers.”

Making sense of the information collected requires an enormous amount of human labor, and there are many cases in which a drone has misidentified a target. The deeper question concerns the human-values framework guiding such decisions.

As a small interjection: people hold different values from one another. The conception of a single human-values framework therefore implies a universalization of human values.

What if these human values and humanistic values, purported to represent all humankind, simply reflect the orientations of billionaires and technology companies?