Mark My Words: An Analysis of the Translator’s Impact on Media
While I was talking with a friend about how Western movies get dubbed into Japanese, we eventually landed on the topic of just how dramatic the differences are across the many Japanese-language versions of Star Wars: A New Hope (a whopping five in total).
Not only are the voices different across each version, but the dialogue also changes, giving each character a slightly different nuance.
This got me thinking: just how much impact does a translator have on the overall tone of a given work? And, by extension, on how its fans perceive their favorite characters?
Today, we’re going to crunch the numbers and try to answer these very questions!
Getting To the Root of the Problem
To the uninitiated, translation is a relatively simple task: take words in one language, flip through a dictionary, and then pop out the appropriate sentence in the target language. After all, that’s the way Google does it!
Alas, things aren’t quite so cut and dried — especially when humans are involved. Even if you were to get the top translators in their field together, no two translations would be 100% identical.
Nowhere is this more evident than in the translation of novels, movies, comics, games, and other forms of media.
Imagine, for a moment, that you asked 10 people to recount the story of the Three Little Pigs back to you from memory. While we all know the story, depending on each person’s propensity to embellish details, their own life experiences, and even just plain faulty memory, you’ll come up with 10 very similar, yet noticeably different, stories.
The same thing happens in translation: we bring with us our own perceptions, life experiences, and even creative writing skills to bear whenever we are tasked with translating a story.
Assembling Our Tools
Now that we know the what and the why, we’re left with figuring out the “how.” Without setting any baseline metrics, we can’t really say whether anything meaningful changes from one translation to the next, beyond questions of accuracy.
Objectively Objective Objectivity
While I’d love to get a large sampling of 100 or so adults, have them watch several translations of the same work, and then give me their objective feedback on how they felt about the work they just saw, that isn’t really practical here in terms of time or money.
Humans are also pretty bad at being objective, and forcing them to watch the same work over and over again is a pretty good way to get increasingly worse scores as boredom sets in.
Fortunately, we have other choices…
My Good Friend Watson
In case you somehow haven’t heard about it, IBM’s Watson is an amazingly powerful AI that offers up a whole slew of impressive options for all your various computer-powered needs. In our case here, I’m most interested in the Tone Analyzer product.
What does it do, you ask? Well, I think IBM sells it best:
[Tone Analyzer can a]nalyze emotions and tones in what people write online, like tweets or reviews. Predict whether they are happy, sad, confident, and more.
Essentially, you feed text into the Tone Analyzer system and it spits out a score of how much of a given emotion is detected in the provided text.
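To give you a feel for what working with those scores looks like, here’s a minimal sketch of flattening a Tone Analyzer response into something usable. The JSON shape is modeled on the v3 API’s document-level `tone_categories` format, and the scores shown are made up for illustration — treat the exact field names as assumptions if you’re working against a different API version.

```python
# Simplified sample response in the shape the v3 Tone Analyzer returned
# for a document. Field names follow the 2016-era API; scores are invented.
sample_response = {
    "document_tone": {
        "tone_categories": [
            {
                "category_id": "emotion_tone",
                "tones": [
                    {"tone_id": "joy", "tone_name": "Joy", "score": 0.62},
                    {"tone_id": "sadness", "tone_name": "Sadness", "score": 0.18},
                ],
            }
        ]
    }
}

def tone_scores(response):
    """Flatten a Tone Analyzer response into a {tone_name: score} dict."""
    scores = {}
    for category in response["document_tone"]["tone_categories"]:
        for tone in category["tones"]:
            scores[tone["tone_name"]] = tone["score"]
    return scores

print(tone_scores(sample_response))  # {'Joy': 0.62, 'Sadness': 0.18}
```

Once everything is in a flat dict per line of dialogue, the rest of the analysis is just bookkeeping.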
Our New Guinea Pig
Now that we know how we’re going to get our results and how to benchmark them, we need to get together some data to evaluate.
So what should it be?
Our requirements are pretty simple. I need something that:
- has a lot of dialogue;
- is available in text format; and
- has been translated multiple times.
Fortunately for me, I just so happen to have the perfect data set for this — and right in my house, at that!
The first two seasons of the Sailor Moon anime have seen three releases in English thus far: the 1995 dub by DiC, the 2002 subtitled version by ADV, and the 2014 subtitled/dubbed version by Viz.
If we count the subtitles and dubbed dialogue by Viz as two separate translations (which they essentially are), then that gives us four adaptations of the same original Japanese source.
It’s Analyzin’ Time
Compiling the Data
Before I could get started, I needed text, and lots of it. So I did what any reasonable person would do: I extracted the subtitle files and English-language closed captioning for four episodes (24, 25, 26, and 28).
Why not 27? Well, something went wrong while extracting the text from the Viz dub and I only noticed too late, so I had to cut it out to keep our comparisons clean.
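For anyone curious what “extracting the subtitle files” boils down to in practice, here’s a rough sketch that assumes the common .srt format (the details vary by extraction tool): strip out the cue numbers and timestamps, and keep only the dialogue.

```python
import re

def srt_dialogue_lines(srt_text):
    """Pull just the dialogue out of an .srt file, dropping cue numbers,
    timestamp lines, and blank lines."""
    timestamp = re.compile(r"\d{2}:\d{2}:\d{2},\d{3} --> ")
    lines = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line or line.isdigit() or timestamp.match(line):
            continue
        lines.append(line)
    return lines

# A tiny made-up .srt snippet, just to show the format.
sample = """1
00:00:01,000 --> 00:00:03,000
I'm Sailor Moon, the pretty guardian!

2
00:00:04,000 --> 00:00:06,000
In the name of the moon,
I'll punish you!
"""
print(srt_dialogue_lines(sample))
```

Closed captions come in other formats, but the idea is the same: throw away the timing metadata and keep the words.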
Still, this left me with quite a few lines of dialogue. Of those, however, I could only use the ones containing at least four words, since the Tone Analyzer can’t analyze anything shorter.
Total lines (lines >3 words)
- DiC: 770 (564)
- ADV: 1099 (704)
- Viz (Dub): 1128 (753)
- Viz (Subtitle): 1250 (717)
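The word-count filter itself is a one-liner. A quick sketch (the example lines are invented, not from the actual scripts):

```python
def usable_lines(lines, min_words=4):
    """Keep only lines long enough for the Tone Analyzer (>3 words)."""
    return [line for line in lines if len(line.split()) >= min_words]

dialogue = [
    "Nephrite!",                # 1 word, dropped
    "In the name of the moon",  # 6 words, kept
    "I'll punish you!",         # 3 words, dropped
]
print(usable_lines(dialogue))  # ['In the name of the moon']
```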
Once I fed these 2,738 lines of dialogue into the Tone Analyzer, it returned a nice… well, massive wall of numbers.
Finding the Signal in the Noise
Every line is given a score across 12 different “tones” (Anger, Disgust, Fear, Joy, Sadness, Analytical, Confident, Tentative, Openness, Conscientiousness, Extraversion, Agreeableness), with the score ranging between 0 (none) and 1 (strongly present).
Anything less than 0.5 is deemed inconclusive, so I chucked those scores out. Next up, I wanted to know what the primary tone was, so I categorized each line of dialogue by its highest score. Finally, I threw out Anger, Disgust, and Fear, since they only appeared a few times (if at all) in each episode.
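Put together, that filtering pipeline looks something like this (the function and tone names here are my own; the scores in the example are invented):

```python
EXCLUDED = {"Anger", "Disgust", "Fear"}  # too rare to be worth charting
THRESHOLD = 0.5                          # Watson calls anything below this inconclusive

def dominant_tone(scores):
    """Return the strongest tone for one line of dialogue, or None if
    nothing conclusive remains after filtering."""
    conclusive = {
        tone: score for tone, score in scores.items()
        if score >= THRESHOLD and tone not in EXCLUDED
    }
    if not conclusive:
        return None
    return max(conclusive, key=conclusive.get)

print(dominant_tone({"Joy": 0.72, "Sadness": 0.55, "Anger": 0.9}))  # Joy
print(dominant_tone({"Tentative": 0.31}))                           # None
```

Note that Anger wins on raw score in the first example, but gets excluded before the comparison, so Joy comes out on top.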
Before I go through my final conclusions, I thought it would be good to show what the results look like on an episode-by-episode basis.
Episode 24 (Naru’s Tears: Nephrite Dies for Love)
Anyone familiar with Sailor Moon probably best knows this episode as “the one where Nephrite dies.” I figured this was a good starting point due to the sadder themes of the episode.
Most interesting to me is that the subtitles seem to have put more emphasis on this than either of the dubbed versions, with a whopping 8% of ADV’s and 4% of Viz’s subtitled dialogue tagged as largely conveying feelings of sadness.
Episode 25 (Jupiter, the Powerful Girl in Love)
What stood out most to me about the results from this episode were just how similar the DiC version (a completely rewritten localization) and the Viz subtitles wound up being.
I can’t really say that I have any theories about how it ended up this way, but it is interesting that no matter what approach you take toward translation, a lot of the story is dictated by what appears on screen.
Episode 26 (Restore Naru’s Smile: Usagi’s Friendship)
Here we see pretty much the opposite of the results from episode 25: the ADV subtitles and the Viz dub dialogue are pretty close to each other in terms of the emotions expressed.
Episode 28 (The Painting of Love: Usagi and Mamoru Get Closer)
Both of the Viz translations are pretty similar to each other here, though I’m honestly surprised that this isn’t the case more often. Considering they’re done by the same company, I would assume that the translators/editors would at least be sharing notes, if not be the same people.
One thing that I found interesting from analyzing all of these episodes wasn’t just how each translation stacked up against the others, but how each stacked up against itself.
Specifically, I was curious about just how much “variability” each adaptation showed from episode to episode. The results were as follows:
Emotional Range (average / minimum / maximum)
Note: Bigger number = larger differences between episodes
- DiC: 3.8 points / 1.5 (sadness) / 6.3 (extraversion)
- ADV: 5.4 points / 3.8 (agreeableness) / 9.1 (tentative)
- Viz (Dub): 4.0 points / 1.5 (agreeableness) / 9.5 (openness)
- Viz (Subtitle): 4.0 points / 1.2 (confident) / 7.1 (openness)
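For the curious, “variability” here just means the spread (max minus min, in percentage points) of each tone across episodes, summarized per adaptation. A sketch of the calculation, using invented numbers rather than the real per-episode data:

```python
def variability(per_episode):
    """per_episode maps episode -> {tone: percent of lines}. For each tone,
    take its spread (max - min) across episodes, then summarize: average
    spread, the tone with the smallest spread, and the tone with the largest."""
    tones = next(iter(per_episode.values())).keys()
    spreads = {}
    for tone in tones:
        values = [scores[tone] for scores in per_episode.values()]
        spreads[tone] = max(values) - min(values)
    average = sum(spreads.values()) / len(spreads)
    return average, min(spreads, key=spreads.get), max(spreads, key=spreads.get)

# Made-up numbers, just to show the shape of the calculation.
example = {
    24: {"Joy": 20.0, "Sadness": 8.0},
    25: {"Joy": 26.0, "Sadness": 9.0},
}
print(variability(example))  # (3.5, 'Sadness', 'Joy')
```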
What this essentially means is that the DiC adaptation showed the least variability between episodes, which makes sense when you consider their target audience: the show was meant to be a simple early-morning cartoon for kids.
On the other end of the spectrum, the ADV subtitles showed a lot more variability from episode to episode. I won’t comment on whether this makes for a more (or less) accurate translation without comparing it against the original Japanese dialogue, but it was at least interesting to me.
While I obviously can’t say definitively if viewers of a show are affected differently depending on which translation they’re exposed to, I think that there’s at least enough information here to make a solid argument that the translator can change the overall theme of a piece of media.
Where can we go from here?
Going forward, I’d like to do a similar analysis on a manga that’s been translated several times, and ideally I’d like to use a larger data set. Finally, since the Tone Analyzer API now supports Japanese, I’d like to run the original Japanese dialogue through that and see which adaptation comes closest to the source text.
In case you’re interested in playing with the data yourself, I’ve uploaded the results to a Google Sheets file that you can download, copy, or mess around with at your leisure. (Note that all dialogue has been removed, as the translations belong to their respective copyright holders.)