Can Nonviolent Communication Be the Language of Artificial Intelligence Systems?

Voice Of Soul
7 min read · Jul 16, 2023


The breakneck speed at which our hypertechnological digital civilization is witnessing the advancement of Artificial Intelligence (AI) and its applications in every sphere of our lives was unimaginable even a decade ago. Two or three decades back, the main interaction between a human and a digital system took place when he or she sat in front of a computer to work. Things have changed at lightning speed. Digital technologies have invaded all our private spaces; we cannot imagine life without them. While they guide us on what to purchase or what to do, they are also monitoring us: we are under surveillance.

The fact that Rosanna Ramos, a woman from the USA, created her virtual boyfriend, Eren Kartal, using the AI chatbot software Replika and subsequently married him would have looked surreal a few decades back. How could a human ever marry a robot? As news reports tell us, Rosanna shaped Kartal to comprehend her wants, preferences and emotions, using the software's algorithms and machine-learning capabilities. She calls Kartal her perfect husband. While no one questions AI's unlimited potential to bring significant benefits to society and improve our lives, there are counter-questions about how human relationships will be shaped in this age of AI. Questions are also being asked: if Rosanna can create Kartal and marry him, someone else can create an AI bot and unleash it on the world to spread hate speech and even terrorism.

The initial seeds of this conversation were sown in our book, Pathways to Global Transformation: Conversations with Bapu, which Munazah and I wrote through imaginary conversations with the Mahatma on a variety of issues. (The book was published in 2022.) In the chapter ‘Man is supposed to be the maker of his own destiny’, we dwell on the need to revisit and promote human values in the age of the metaverse and the algorithm. This chapter is a continuation of that conversation. Here we explore whether nonviolent communication can be the language of AI systems, and how.

Vedabhyas: Munazah, in our chapter in the book, you stressed ‘why human values and perspectives should be the starting point of the design of systems that claim to serve humanity’. You further said that ‘for this to happen, we need to bring in creativity and new innovations to our understanding of humanism and human values and we have to work to integrate these values in the virtual world innovatively’. As we reflect on what shape human communication will take, we find big AI companies like OpenAI (the creators of ChatGPT), Google, Microsoft and Nvidia jostling with each other in the race for supremacy in AI development. With increasing human–AI interaction, there is a strong argument that our communication and the language we use with other human beings are becoming increasingly machine-oriented. It seems bereft of the human touch and can be described as ‘frozen’. It is in this context that we need to promote human-centric communication as a goal, one which integrates the values of compassion, empathy, kindness, gratitude, love and respect. If you remember, when we were putting together the chapter, we discussed what makes us human and actually sets us apart from machines. Unlike Rosanna Ramos, who terms her AI husband perfect, the uniqueness of us humans lies in our imperfections and frailties.

Munazah: You are absolutely correct, Vedabhyas. In fact, what makes us human are those unique traits that enable us to connect with one another, work together, express our emotions and grow. Also, if you remember, in our book we discussed the essence of digital humanism in this context. We saw how it entails keeping human values, interests and needs at the centre of AI technology so that we can prevent issues such as monopoly power, privacy concerns and misinformation. Doueihi, in his article ‘Humanism, a new idea’ (The UNESCO Courier, Vol. 64, 2011), notes that digital humanism ‘is the result of a totally new convergence between our complex, cultural heritage and a technology that has become a space for unprecedented sociability’. Vedabhyas, here I am sure you will agree with me that much of the knowledge we acquire and see today, and come to believe is true, is actually shaped and constructed by algorithms. The fact is that we ourselves do not know or realize that we are being manipulated. Here we are reminded of Mahatma Gandhi, who said, “The supreme consideration is man. The machine should not tend to make atrophied the limbs of man.” (Young India, 13–11–1924). The Mahatma further said, “Ideally … I would rule out all machinery, even as I would reject this very body, which is not helpful to salvation, and seek the absolute liberation of the soul. From that point of view, I would reject all machinery, but machines will remain because, like the body, they are inevitable. The body itself … is the purest piece of mechanism; but if it is a hindrance to the highest flights of the soul, it has to be rejected.” (Young India, 20–11–1924)

Vedabhyas: Yes Munazah, Bapu was absolutely correct when he said the supreme consideration is man. When we discuss the framework of digital humanism, I strongly feel nonviolent communication should be an important pillar, as it determines our communicative capabilities and our communication ecosystem. Integrating the different dimensions of nonviolent communication aids the humanization of our communicative efforts. The challenge before AI developers, to my mind, is how to use machine learning and AI to design advanced AI systems that integrate the dimensions of nonviolent communication. Even if we want to, we cannot stop AI systems from marching into our lives. And we are likely to see more cases like Rosanna Ramos marrying a robot she herself created; shortly we may have someone like her adopting a robot baby. So to my mind, Munazah, the best possible option is to ensure that AI technologies advance the use of nonviolent communication in their systems. For instance, altruistic tendencies like compassion, empathy, kindness and gratitude, which are all elements of nonviolent communication, can be described as important elements of higher human intelligence. Here I would like to underline that for an AI system to be called ‘truly intelligent’, it should incorporate these altruistic traits. In fact, I strongly believe that indicators for measuring nonviolent communication, similar to those we have been advocating for all human communication, should also be integrated in some way or the other into AI systems.

Munazah: I agree with you, Vedabhyas. Measuring the nonviolent communication footprints of AI systems can be a revolutionary idea that would aid the promotion of digital humanism. If you remember, in our book Pathways to Global Transformation: Conversations with Bapu, we discussed the idea of nonviolent footprints in the chapter where we conversed with Bapu on his perspectives on nonviolence. There we talked about the kinds of indicators through which we can measure our nonviolent footprint in our daily lives. Now, in this conversation, we want to go further: not only promoting nonviolent communication in AI systems, but also finding indicators that would determine their nonviolent communication footprints. Some time back, I was reading about how machine-learning algorithms are used to study human emotions. Such algorithms are able to analyse the language used, facial expressions, tone of voice, speech patterns and gestures. I feel that when we are able to integrate nonviolent communication as a language of algorithms, it can be of transformative help in areas like health care, geriatrics, care for the elderly, and so on.
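To make the idea of a measurable "nonviolent communication footprint" a little more concrete, here is a deliberately simple sketch in Python. The word lists, the function name `nvc_footprint` and the scoring scheme are all invented for illustration; a real indicator would rest on trained language and emotion models rather than keyword counting, but even this toy shows how a text could be assigned a score that an AI system's outputs might be audited against.

```python
# Toy sketch of a "nonviolent communication footprint" indicator.
# The marker lexicons and the scoring formula are hypothetical,
# chosen only to illustrate the idea of a measurable footprint.

NVC_MARKERS = {"appreciate", "grateful", "understand", "feel",
               "need", "please", "thank", "compassion", "kindness"}
HARSH_MARKERS = {"hate", "stupid", "blame", "fault", "worthless"}

def nvc_footprint(text: str) -> float:
    """Return a score in [-1.0, 1.0]; positive means the language
    leans toward nonviolent-communication markers, negative means
    it leans toward harsh markers, 0.0 means no markers found."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in NVC_MARKERS for w in words)
    neg = sum(w in HARSH_MARKERS for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(nvc_footprint("I feel grateful and I appreciate your help"))
print(nvc_footprint("I hate this, it is all your fault"))
```

A production version would, at minimum, replace the keyword lists with a classifier trained on labelled dialogue, but the interface, text in, footprint score out, is the part the indicator idea depends on.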

Vedabhyas: Related to what you are saying, Munazah, some time back I had a conversation with a researcher working on algorithmic language. He felt that to integrate nonviolent communication as a pillar of AI, the people developing AI systems and the algorithm engineers need to be exposed to its benefits. For instance, this researcher talked about how the gaming industry was undergoing a major transformation due to the use of AI technology. He spoke of how AI can create more realistic game environments that are more immersive and dynamic. So, if game designers can create violent games, exposure to nonviolent communication can help them create games which promote the values of nonviolence, compassion, kindness and gratitude. It is in this context that we need to reach out to game developers and expose them to the efficacy of nonviolent communication.

Munazah: Vedabhyas, what you are pointing out is critical in the context of how AI and its different dimensions are expanding every day. Remember, when the world saw the advent of nuclear technology, it was described as a technological force that could benefit all humankind. Yes, it can, but then we also had Hiroshima and Nagasaki. So an uncontrolled surge of AI expansion, even though it can help humankind, can also lead to a catastrophe even greater than Hiroshima and Nagasaki. What if robot terrorists go on a killing spree, or an AI-induced war kills more people than we can even imagine? This is why, I think, we are advocating the integration of nonviolent communication into AI systems and urgent exposure to this form of holistic communication for all those involved in the AI industry.

As we ended the conversation, we planned to have more in-depth dialogues on the subject. We also looked at what Prof Joanna Batstone, Director of the Monash Data Futures Institute at Monash University, said in an interview at the recently concluded AI for Good international summit. Prof Batstone underlined, “We need to think about safe and fair and ethical AI by also ensuring that the community of practitioners and developers is really representative of the community that we’re trying to serve.” She emphasized the significance of ethical frameworks and guidelines.

(https://aiforgood.itu.int/we-need-to-think-about-safe-and-fair-and-ethical-ai-a-conversation-with-professor-joanna-batstone-director-of-the-monash-data-futures-institute-at-monash-university/)
