Computational Linguistics: Where Humans and Sci-Fi Meet

Computational linguistics asks the question: What are humans computing when they hear spoken language? What meaning do they find, and how do they find it?

Isabella Mandis
Wikitongues
4 min read · May 30, 2020

Experts in the field study all of the ways in which an idea might be expressed in language while still remaining comprehensible to human ears. The field embraces and analyzes every aspect of communication that a language contains, such as grammar, meaning, usage, and inflection. Understanding, parsing, and qualifying these separate aspects of language are necessary both to comprehend how language has evolved over time and across cultures and to teach a computer how to speak.

Although the study of computational linguistics predates the first experiments in artificial intelligence (AI), the two fields are closely interrelated. Teaching a computer to process and generate human language is a fundamental part of enabling it to process thought, solve problems, and communicate in a way that humans can understand.

ELIZA: An Early Experiment in Mimicking Human Communication

The number of researchers striving for human-level AI grew dramatically in the 1960s. These researchers designed programs with the goal of having computers understand and respond to human language. One example was Joseph Weizenbaum’s ELIZA program, which aimed to mimic a psychotherapist’s responses to a patient. Weizenbaum supplied ELIZA with a databank of word patterns paired with response templates, which the program stored and recognized. For each user input, ELIZA would search this databank for the pattern that best fit the user’s own word sequence. ELIZA then reformatted the corresponding template to include words selected from the user input, thus mimicking the feeling of a genuine response to the user’s specific question. As ELIZA could not generate new outputs independent of its databank, its design more closely resembled a modern chatbot than a true example of AI.
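
To make this pattern-matching loop concrete, here is a minimal sketch in Python of how an ELIZA-style exchange can work. The patterns, templates, and pronoun swaps are invented for illustration and are far simpler than Weizenbaum’s actual script.

```python
import random
import re

# A tiny, invented databank: each regex pattern is paired with
# response templates; "{0}" is filled with text captured from the user.
PATTERNS = [
    (re.compile(r"\bi feel (.*)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
]
FALLBACK = ["Please tell me more.", "Go on."]

# Swap pronouns so the echoed fragment reads as a reply, not a parrot.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(user_input: str) -> str:
    """Find the first matching pattern and reformat its template
    with words drawn from the user's own input."""
    for pattern, templates in PATTERNS:
        match = pattern.search(user_input)
        if match:
            fragment = reflect(match.group(1).rstrip(".!?"))
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACK)

print(respond("I feel anxious about my exams"))
# e.g. "Why do you feel anxious about your exams?"
```

Because every reply is stitched together from a fixed script plus the user’s own words, a program like this never needs to understand anything it says.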

Some people believe that chatbots like ELIZA are impostors, as they are not linguistically competent; however, others argue that humans speak in an almost “preprogrammed way,” and that chatbots provide significant insight into human speech through their ability to recognize the patterns of human communication. One of the most common examples of this argument involves a human’s response to a greeting: in American English, for example, when someone says, “How are you?”, the most common response is “good,” regardless of the speaker’s actual mental or emotional state.

In the decades following ELIZA’s creation, studies in computational linguistics and artificial intelligence advanced from attempting to mimic human communication to developing strategies for teaching computers to recognize and interpret the meaning of human speech. This evolution carried the field from ELIZA’s very limited functionality to the revolutionary advances in speech-recognition technology that are evident in assistants such as Siri and Alexa. Today, developments in computational linguistics continue to push the ability of computers to understand and manipulate human language — working towards a day when computers will be able to speak for themselves in a way that humans will be able to easily understand!

Computational Linguistics vs. Natural Language Processing (NLP)

Computational linguistics is not just about programming artificial intelligence — it also involves the study of patterns in language and of linguistic evolution over time. Researchers may, for instance, build electronic models of different human languages, facilitating a global analysis of linguistic evolution.

Other areas are more specifically dedicated to innovations in computing: Natural Language Processing (NLP), for instance, works towards enabling a computer program to understand human language both by hearing it and by reading it. It applies engineering techniques to analyze and process natural-language text.
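
At its most basic level, “processing natural-language text” begins with steps like tokenization: breaking running text into words a program can count and compare. The toy sketch below, using only the Python standard library, shows that first step; real NLP systems build tagging, parsing, and meaning on top of it.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

tokens = tokenize("Language is processed, and language is analyzed.")
print(tokens)
# ['language', 'is', 'processed', 'and', 'language', 'is', 'analyzed']
print(Counter(tokens).most_common(2))
# [('language', 2), ('is', 2)]
```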

The goals of computational linguists today include translating text between languages, analyzing and summarizing text, building dialogue agents that can carry out complex tasks, and creating convincing chatbots.

One of the most famous products of the marriage of artificial intelligence and computational linguistics is Siri, Apple’s voice-driven, conversational user interface. Behind this commonly used program lies the work of teaching a computer to understand human speech, process its meaning, and respond. Although Siri represents a breakthrough in the linguistic capabilities of computing, it still faces many challenges, particularly in its failure to adequately represent or parse emotion and tone: accent and language present roadblocks to Siri’s understanding, and emotional meaning lies far outside its current capabilities.
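
Apple has not published Siri’s internals, but the pipeline described above (turn speech into text, text into meaning, meaning into a reply) can be sketched conceptually. Everything below is a hypothetical, toy stand-in for illustration, not Siri’s actual design or API.

```python
import datetime

# Hypothetical stand-ins for the three stages the article describes.

def transcribe(audio: bytes) -> str:
    """Speech-to-text stage: pretend a recognizer produced this transcript."""
    return "what time is it"

def parse_intent(transcript: str) -> str:
    """Meaning stage: map the transcript to a structured intent."""
    return "ask_time" if "time" in transcript else "unknown"

def respond(intent: str) -> str:
    """Response stage: render a reply for the recognized intent."""
    if intent == "ask_time":
        return f"It is {datetime.datetime.now():%H:%M}."
    return "Sorry, I didn't catch that."

print(respond(parse_intent(transcribe(b"...audio..."))))
# e.g. "It is 14:05."
```

Notice that emotion and tone never enter this pipeline: each stage passes along only words and structured intents, which is one way to see why parsing emotional meaning remains out of reach.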

In its search for deeper and more comprehensive strategies to compute the many nuanced ways in which humans use language to communicate, computational linguistics has the potential to realize a digital world that embraces linguistic differences. Through the field’s constant and evolving innovation, we may hope to see a future where global communication and access to technology do not require individuals to homogenize their accent or speech. And, in terms of artificial intelligence, computational linguistics may offer the best means of teaching computers to understand human emotion. It is a field grounded in the most human of interactions and the greatest imagination of science fiction.

If you would like to donate to support the work of Wikitongues, or to learn more about what we do, please visit wikitongues.org. To watch our oral histories, subscribe to our YouTube channel; to submit a video, visit wikitongues.org.
