Will AI Make Sign Language Interpreters Obsolete?

Joshua McKenzie
5 min read · Sep 14, 2017


Thamsanqa Jantjie, the fake sign language interpreter. (Photo: AP)

AI, or artificial intelligence, has been making headlines for the last couple of years. Elon Musk, billionaire founder of SpaceX and Tesla, prophesies humanity’s doom unless society confronts AI. And he may be on to something.

AI now powers sophisticated software for scene detection, image recognition, facial recognition, and augmented reality, among other applications. It is a large leap forward in technological development, one that has created a vibrant and flourishing new market that the internet giants are racing to acquire. And it’s only just the beginning.

Forrester Research predicted a more than 300% increase in AI investment in 2017 over the previous year, and IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion by 2020.

Source: Forrester Research, Inc.

The clouds are starting to form, and it appears that sign language interpreters may be among the first casualties of the AI age.

Facial expressions are the cornerstone of sign language interpreting. They belong to a set of behaviors called “non-manual markers,” which also includes head tilting, nodding, head shaking, shoulder raising, and mouth morphemes, among others. And facial expression recognition technology is already being deployed.

We can already trigger event-driven animations with our eyebrows and mouths on Snapchat. By fitting a wireframe of landmark points to the face, Snapchat’s algorithm fires an event whenever the user moves part of the face, such as raising the eyebrows, responding to the user’s facial expressions. Although some may regard this technology as a novelty, it is in fact a critical juncture between human expression and machines. Further investment in AI will push the boundaries even further, opening up the entire spectrum of human expression to interpretation by machines. A rough sketch of how such a trigger might work appears below.

(Tech Insider)
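To make the idea concrete, here is a minimal sketch of an expression-triggered event in Python, assuming dlib’s freely available 68-point face landmark model. The threshold, the choice of landmarks, and the trigger_animation callback are all illustrative assumptions, not how Snapchat actually does it.

```python
import cv2
import dlib

# Sketch of a Snapchat-style expression trigger using dlib's 68-point
# face landmarks. Point indices follow the standard iBUG 300-W layout:
# 22-26 trace one eyebrow, 43-44 sit on the upper lid of the eye below
# it, 27 is the top of the nose bridge, and 8 is the chin.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

BROW_RAISE_FACTOR = 1.25  # illustrative threshold, not a tuned value


def brow_eye_gap(shape):
    """Vertical brow-to-eye gap, normalized by rough face height."""
    brow_y = sum(shape.part(i).y for i in range(22, 27)) / 5.0
    eye_y = (shape.part(43).y + shape.part(44).y) / 2.0
    face_h = shape.part(8).y - shape.part(27).y
    return (eye_y - brow_y) / face_h


def watch(trigger_animation):
    """Fire trigger_animation('eyebrow_raise') when the brows jump."""
    cap = cv2.VideoCapture(0)
    baseline = None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            gap = brow_eye_gap(predictor(gray, face))
            if baseline is None:
                baseline = gap  # assume the first frame is neutral
            elif gap > baseline * BROW_RAISE_FACTOR:
                trigger_animation("eyebrow_raise")
    cap.release()
```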

This month, the Guardian reported on a study by a Stanford University professor claiming that AI can detect whether a person is gay or straight from facial images alone, and predicting that it will be able to do much more, perhaps even detecting political affiliation, far beyond any human ability. The report rocked not only the tech world but other parts of society, incurring backlash from the LGBT community and rights groups.

Either way, should the professor’s prediction prove correct, the problem of machines understanding the facial expressions of the Deaf and Hard of Hearing will effectively be solved in a matter of time, if it is not already.

But can a computer program effectively replace the sign language interpreter? The answer has always been “No.” But thanks to deep learning, that answer may have changed to a definite “maybe.”

Deep learning is a subfield of AI that enables many of its most practical applications. It is deep learning that makes machine assistance such as driverless cars and healthcare diagnostics possible.

Driverless cars are perhaps the most public-facing product of deep learning. Drive.ai, a startup that designs and develops driverless cars, depends on deep learning to handle and respond to many external events and data inputs at practical speeds.

“It’s only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area”

(Fortune)

The applications of deep learning appear limitless, and sign language interpreting is no exception. Paired with facial recognition AI, deep learning may be the final key step in prototyping a fully functional sign language interpreting program, along the lines of the sketch below. Depending on the efficiency and accuracy it achieves, the technology may in fact disrupt the entire profession of sign language interpreting.
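As a thought experiment only, here is one plausible shape such a program could take: an upstream model extracts hand and face keypoints from each video frame, and a sequence model maps those keypoints to sign “glosses.” Everything here (the feature size, the model, the gloss vocabulary) is an assumption for illustration, not a working interpreter.

```python
import torch
import torch.nn as nn

# Illustrative sketch only, not a working interpreter. An upstream model
# (assumed, not shown) extracts per-frame keypoints, e.g. 21 landmarks
# x 3 coordinates x 2 hands = 126 features; this network maps the frame
# sequence to scores over a sign "gloss" vocabulary.

class SignRecognizer(nn.Module):
    def __init__(self, feature_dim=126, hidden_dim=256, num_glosses=1000):
        super().__init__()
        # A bidirectional LSTM reads the whole signing sequence.
        self.encoder = nn.LSTM(feature_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_glosses)

    def forward(self, keypoints):  # (batch, frames, feature_dim)
        encoded, _ = self.encoder(keypoints)
        # Per-frame gloss scores; a real system would decode these into
        # a gloss sequence (e.g. with CTC) and then into English text.
        return self.classifier(encoded)


model = SignRecognizer()
clip = torch.randn(1, 30, 126)   # one clip: 30 frames of fake features
frame_scores = model(clip)       # shape: (1, 30, 1000)
```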

Allowing our imagination to place us just a few years into the future, it may very well be possible for sign language interpreting AI to adapt to each ASL user’s individual style and unique expression. Instead of a “one size fits all” approach, the AI interpreting program would learn and adapt to the ASL user, a more practical solution that reduces the error rate by a significant margin. One hypothetical way to do this is sketched below.
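Continuing the thought experiment, per-user adaptation could be as simple as fine-tuning part of a shared model on clips recorded by one signer. The sketch below reuses the hypothetical SignRecognizer from above and freezes its encoder; the dataset, labels, and hyperparameters are all assumptions.

```python
import torch
from torch.utils.data import DataLoader

# Per-user adaptation sketch, reusing the hypothetical SignRecognizer
# above: freeze the shared encoder and fine-tune only the classifier
# head on a small dataset of (keypoints, per-frame gloss label) pairs
# recorded by one signer. All hyperparameters are illustrative.

def personalize(model, user_clips, epochs=5, lr=1e-3):
    for param in model.encoder.parameters():
        param.requires_grad = False  # keep the shared encoder fixed
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    loader = DataLoader(user_clips, batch_size=8, shuffle=True)
    for _ in range(epochs):
        for keypoints, gloss_labels in loader:
            scores = model(keypoints)  # (batch, frames, num_glosses)
            loss = loss_fn(scores.flatten(0, 1), gloss_labels.flatten())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```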

This approach would require the AI to be owned by the ASL user rather than by any facility or client. Reception to such a program would likely vary, though I expect it to be mostly positive.

An AI interpreter that adapts to your specific signing style over time, achieves an error rate under 1–2%, and is available on demand at any time or place would remove current barriers to communication: sub-par interpreters, limited interpreter availability, waiting for interpreters to arrive, conflicts in signing style, overall cost, and institutions’ failures to provision interpreters at all.

Concluding our thought experiment: would, and more importantly, should the sign language interpreting industry fight against such a formidable technological marvel? If AI can truly surpass human facial recognition, even to the extreme degree of detecting sexual orientation, then it behooves us to admit that the technology’s sophistication and ability exceed those of any human. Because at its core, the purpose of such technology is to provide freedom to any and all Deaf and Hard of Hearing people.
