Machine learning and the workforce
My fear of Machine Learning replacing me as a Language Specialist — and why I need to overcome it.
In early 2017, Booking.com became one of the leading companies to experiment with Neural Machine Translation (NMT). Don’t worry, I’ll explain what this is later.
Innovation has always been one of the key driving forces behind Booking.com’s success, so this was hardly a surprise. But for people like me, who translate and localise for a living, the news also opened a Pandora’s box of hypothetical situations where we’re replaced by bots powered by data.
As Language Specialists, we don’t depend on our knowledge alone to translate and localise content. We use data too: translation memories, for example, which are databases of the previous translations we’ve worked on. Using these memories, we can translate content in a context-dependent way that suits the local market and the customer (which is the key difference between translation and localisation).
The use of data in the form of passive translation memories sounds pretty benign compared to NMT. I recently attended a conference organised for the Data Science community at Booking.com, with attendees from offices around the world. The talk that intrigued me most was on Neural Machine Translation at Booking.com. Presented by Nishikant Dhanuka, a Senior Data Scientist leading the NMT team, it explored how and where data-driven Machine Learning is being used in this new area. This presentation stood out to me because I hoped it would finally shed some light on the conundrum of Language Specialist vs. Machine Translation.
Neural Machine Translation at Booking.com
At Booking.com, property descriptions, room descriptions and hotel names are translated by teams of freelance translators across 43 languages. These freelancers use Google’s Translation Toolkit, which runs on Statistical Machine Translation (SMT). For Turkish, SMT is no better than a toaster at translating creative texts such as idioms, whereas NMT is more like Ava from Ex Machina, scarily close to a human being. Let me explain why:
SMT works by breaking the source sentence into parts and then translating those parts on a phrase-by-phrase basis. For language pairs such as English-Turkish, the result is often a wrong (and usually hilarious) translation. Imagine reading about someone literally frying bigger fish in the middle of a sentence about someone’s career growth. That’s how Google translates the English idiom “to have bigger fish to fry” into Turkish. This is where SMT falls far behind the human ability to understand text as part of its context. As a result, SMT-produced texts need heavy editing by our freelancers before they become readable.
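To make the contrast concrete, here’s a minimal, purely illustrative Python sketch of the phrase-by-phrase idea. The phrase table, the segmentation logic and the Turkish glosses are all invented for this example (real SMT systems learn far richer phrase tables and reordering models from data), but it shows why an idiom like “bigger fish to fry” comes out literally.

```python
# Toy illustration of phrase-based (SMT-style) translation.
# The phrase table and Turkish glosses below are crude, invented examples.

PHRASE_TABLE = {
    "i have": "benim var",              # literal, word-for-word gloss
    "bigger fish": "daha büyük balık",
    "to fry": "kızartmak için",
}

def phrase_based_translate(sentence: str) -> str:
    """Translate by greedily matching the longest known phrase, left to right."""
    words = sentence.lower().split()
    output = []
    i = 0
    while i < len(words):
        for end in range(len(words), i, -1):      # longest match first
            phrase = " ".join(words[i:end])
            if phrase in PHRASE_TABLE:
                output.append(PHRASE_TABLE[phrase])
                i = end
                break
        else:
            output.append(words[i])               # unknown word: pass through
            i += 1
    return " ".join(output)

print(phrase_based_translate("I have bigger fish to fry"))
# -> "benim var daha büyük balık kızartmak için"
# A literal sentence about frying fish, with English word order leaking through.
# The idiomatic meaning ("I have more important things to do") is lost,
# because each phrase is translated without any sense of the whole context.
```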
Turkish recently became one of the languages to be tested with NMT. The process requires a combination of automated and human evaluation of the NMT-translated text to feed the model’s learning. For the human part, my team’s help was needed. I was confident that the NMT texts would suffer the same fate as the SMT ones, but the results proved otherwise. With only a few exceptions, the texts looked like the work of a human linguist.
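For a rough idea of how automated and human evaluation can be combined, here is a hedged sketch, not Booking.com’s actual pipeline: a crude automated score triages translations, and anything below a threshold goes to human reviewers. The scoring function, threshold and sample sentences are all assumptions made for illustration; production systems typically rely on metrics such as BLEU alongside human judgement.

```python
# Toy sketch of a combined automated + human evaluation loop (illustrative only).

def token_overlap(candidate: str, reference: str) -> float:
    """Crude automated score: fraction of reference tokens found in the candidate."""
    cand_tokens = set(candidate.lower().split())
    ref_tokens = reference.lower().split()
    return sum(tok in cand_tokens for tok in ref_tokens) / len(ref_tokens)

def triage(nmt_outputs, references, threshold=0.6):
    """Accept high-scoring translations automatically; flag the rest for humans."""
    for cand, ref in zip(nmt_outputs, references):
        score = token_overlap(cand, ref)
        verdict = "auto-accept" if score >= threshold else "needs human review"
        yield round(score, 2), verdict, cand

outputs = ["Oda deniz manzaralıdır", "Kahvaltı fiyata dahil değildir"]
refs    = ["Oda deniz manzaralıdır", "Kahvaltı ücrete dahildir"]

for score, verdict, cand in triage(outputs, refs):
    print(score, verdict, "-", cand)
# 1.0  auto-accept        - Oda deniz manzaralıdır
# 0.33 needs human review - Kahvaltı fiyata dahil değildir  (wrongly negated)
```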
Doomsday Scenarios
One of the first questions asked after the NMT presentation was “When will these algorithms make our jobs obsolete?” I’d already been playing out similar horror scenarios in my head, and hearing the same concern raised by someone in Tech only validated them. But Dhanuka was quick to state that NMT would only be used for less creative texts, and added that humans will always be part of the process to help these algorithms work.
I only realised this later: I was sceptical because I was afraid of being replaced by technology, and that fear made it hard to see that these algorithms could take the routine parts of my work off my hands and free me up for more creative and rewarding work. I was focusing too much on what could go wrong to see the benefits.
The trouble I had accepting this new technology reminded me of an interview with Kevin Kelly, the founding editor of WIRED and a philosopher of technology. Kelly explains how the technology of the last decade has moved from being a useful tool to being very close to our own being. This raises the question “Who are we?”, and the same question applies to NMT. If these technologies are inching closer to doing the work of a human linguist, then what does my job consist of?
This question has an existential quality to it, and that may well be the source of my fear, but I can’t find an answer by shying away from technology. Technology evolves with us, whether we want it to or not. My scepticism doesn’t make NMT go away; it only makes it harder for me to see where I fit in this picture. I can’t stop this technology from evolving, but I can figure out how to use it in my best interests.
As Language Specialists working with people who deliver some of the best localised products, we need to redefine our role alongside these evolving technologies if we want to stay the best. The only way to do that is by diving deep and getting to know them better.
We’re always on the hunt for new localisation talent. Wanna join us? Apply here.