Mark My Words: An Analysis of the Translator’s Impact on Media

Jason Muell
Apr 15, 2019 · 8 min read
Writing out that keikaku

While I was talking with a friend about how Western movies get dubbed over into Japanese, we eventually ended up on the topic of just how dramatic the differences are across the many Japanese-language versions of Star Wars: A New Hope (a whopping five in total).

Not only are the voices different across each version, but the dialogue also changes, giving each character a slightly different nuance.

This got me thinking: just how much impact does a translator have on the overall tone of a given work? And, by extension, on how its fans perceive their favorite characters?

Today, we’re going to crunch the numbers and try to answer these very questions!

Words are complicated… and even more so when combined together

Getting To the Root of the Problem

Alas, things aren’t quite so cut and dried — especially when humans are involved. Even if you were to get the top translators in the field together, no two of their translations would be 100% identical.

Nowhere is this more evident than in the translation of novels, movies, comics, games, and other forms of media.

Imagine, for a moment, that you asked 10 people to recount the story of the Three Little Pigs back to you from memory. While we all know the story, depending on our propensity to embellish details, our own life experiences, and even just plain faulty memory, you’ll come up with 10 very similar, but also starkly different stories.

The same thing happens in translation: we bring with us our own perceptions, life experiences, and even creative writing skills to bear whenever we are tasked with translating a story.

Okay, so we only need like… one of these things

Assembling Our Tools

Objectively Objective Objectivity

Humans are also pretty bad at being objective, and forcing them to watch the same work over and over again is a pretty good way to get increasingly worse scores as boredom sets in.

Fortunately, we have other choices…

Humans? Not so good for data. Robots? Amazing. Until they overthrow you.

My Good Friend Watson

What does it do, you ask? Well, I think IBM sells it best:

[Tone Analyzer can a]nalyze emotions and tones in what people write online, like tweets or reviews. Predict whether they are happy, sad, confident, and more.

Essentially, you feed text into the Tone Analyzer system and it spits out a score of how much of a given emotion is detected in the provided text.
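
As a rough sketch, here’s how you might pull the document-level scores out of the JSON the service returns. The response shape follows IBM’s documented format for the Tone Analyzer; the sample payload itself is invented for illustration:

```python
# Sketch: extract document-level tone scores from a Tone Analyzer-style
# response. The JSON shape mirrors IBM's documented format; the sample
# payload below is made up for illustration.

def tone_scores(response: dict) -> dict:
    """Map tone_id -> score for the document-level tones."""
    return {
        tone["tone_id"]: tone["score"]
        for tone in response.get("document_tone", {}).get("tones", [])
    }

sample = {
    "document_tone": {
        "tones": [
            {"score": 0.62, "tone_id": "sadness", "tone_name": "Sadness"},
            {"score": 0.83, "tone_id": "joy", "tone_name": "Joy"},
        ]
    }
}

print(tone_scores(sample))  # {'sadness': 0.62, 'joy': 0.83}
```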

Our New Guinea Pig

So what should it be?

Our requirements are pretty simple. We need something that:

  • has a lot of dialogue;
  • is available in text format; and
  • has been translated multiple times

Fortunately for me, I just so happen to have the perfect data set for this — and right in my house, at that!

Was anyone REALLY surprised?

The first two seasons of the Sailor Moon anime have seen three releases in English thus far: the 1995 dub by DiC, the 2002 subtitled version by ADV, and the 2014 subtitled/dubbed version by Viz.

If we count the subtitles and dubbed dialogue by Viz as two separate translations (which they essentially are), then that gives us four adaptations of the same original Japanese source.


You’d be hard-pressed to love Excel as much as I do

It’s Analyzin’ Time

Compiling the Data

Why not 27? Well, something went wrong while extracting the text from the Viz dub and I only noticed too late, so I had to cut it out to keep our comparisons clean.

Still, this left me with quite a few lines of dialogue. Of those, however, we could only use lines containing at least four words, since the Tone Analyzer can’t analyze anything shorter.

Total lines (lines >3 words)

  • DiC: 770 (564)
  • ADV: 1099 (704)
  • Viz (Dub): 1128 (753)
  • Viz (Subtitle): 1250 (717)
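
The cutoff above amounts to a simple word-count filter, which might look something like this (the dialogue lines here are invented, not the actual scripts):

```python
# Sketch of the >3-word filter applied to each script. The sample
# "script" is illustrative, not actual dialogue from the data set.

def usable_lines(lines):
    """Keep only lines long enough for the Tone Analyzer (4+ words)."""
    return [line for line in lines if len(line.split()) >= 4]

script = [
    "Moon Prism Power, Make Up!",       # 5 words -> kept
    "Usagi, wait!",                      # 2 words -> dropped
    "In the name of the moon, I'll punish you!",  # kept
]

print(len(usable_lines(script)))  # 2
```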

Once I fed these 2,738 lines of dialogue into the Tone Analyzer, it returned a nice… well, massive wall of numbers.

Finding the Signal in the Noise

Anything less than 0.5 is deemed inconclusive, so I chucked those scores out. Next, I wanted to know what the primary tone of each line was, so I categorized each line of dialogue by its highest score. Finally, I threw out Anger, Disgust, and Fear, since they only appeared a few times (if at all) in each episode.
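
Put together, those three cleanup steps could be sketched along these lines (the function name and sample scores are my own, for illustration):

```python
# Sketch of the cleanup steps: drop scores below 0.5, ignore the
# rarely-seen tones, and keep only each line's strongest remaining tone.

RARE_TONES = {"anger", "disgust", "fear"}

def primary_tone(scores: dict):
    """Return the dominant tone for one line, or None if inconclusive."""
    confident = {
        tone: score
        for tone, score in scores.items()
        if score >= 0.5 and tone not in RARE_TONES
    }
    if not confident:
        return None
    return max(confident, key=confident.get)

print(primary_tone({"sadness": 0.71, "joy": 0.55}))  # sadness
print(primary_tone({"anger": 0.9}))                  # None (rare tone)
print(primary_tone({"joy": 0.3}))                    # None (inconclusive)
```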


Episode 24 (Naru’s Tears: Nephrite Dies for Love)

Tone Analyses for Pretty Guardian Sailor Moon Episode 24

Anyone familiar with Sailor Moon probably best knows this episode as “the one where Nephrite dies.” I figured this was a good starting point due to the sadder themes of the episode.

Most interesting to me is that the subtitles seem to have put more emphasis on this than either of the dubbed versions, with a whopping 8% of ADV’s and 4% of Viz’s subtitled dialogue tagged as largely conveying feelings of sadness.

Episode 25 (Jupiter, the Powerful Girl in Love)

Tone Analyses for Pretty Guardian Sailor Moon Episode 25

What stood out most to me about the results from this episode were just how similar the DiC version (a completely rewritten localization) and the Viz subtitles wound up being.

I can’t really say that I have any theories about how it ended up this way, but it is interesting that no matter what approach you take toward translation, a lot of the story is dictated by what appears on screen.

Episode 26 (Restore Naru’s Smile: Usagi’s Friendship)

Tone Analyses for Pretty Guardian Sailor Moon Episode 26

Here we see pretty much the opposite of the results from episode 25: the ADV subtitles and the Viz dub dialogue are pretty close to each other in terms of the emotions expressed.

Episode 28 (The Painting of Love: Usagi and Mamoru Get Closer)

Tone Analyses for Pretty Guardian Sailor Moon Episode 28

Both of the Viz translations are pretty similar to each other here, though I’m honestly surprised that this isn’t the case more often. Considering both are done by the same company, I would assume the same translators/editors would at least be sharing notes, if not be the same people.

General Observations

How so?

Well, I was curious at just how much “variability” each adaptation showed from episode to episode. The results were as follows:

Emotional Range (average / minimum / maximum)
Note: Bigger number = larger differences between episodes

  • DiC: 3.8 points / 1.5 (sadness) / 6.3 (extraversion)
  • ADV: 5.4 points / 3.8 (agreeableness) / 9.1 (tentative)
  • Viz (Dub): 4.0 points / 1.5 (agreeableness) / 9.5 (openness)
  • Viz (Subtitle): 4.0 points / 1.2 (confidence) / 7.1 (openness)

What this essentially means is that the DiC adaptation showed the least amount of variability between episodes, which would make sense when you consider their target audience and that the show was meant to just be a simple early morning cartoon for kids.
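
For the curious, the range figures above boil down to taking, for each tone, the spread (maximum minus minimum) of its share across episodes, then reporting the average, smallest, and largest spread. A toy sketch with made-up numbers:

```python
# Sketch of the "emotional range" calculation: per-tone spread across
# episodes, plus the average, smallest, and largest spread. The episode
# percentages below are made up for illustration.

def emotional_range(episodes):
    """episodes: list of {tone: percentage} dicts, one per episode."""
    tones = episodes[0].keys()
    spreads = {
        tone: max(ep[tone] for ep in episodes) - min(ep[tone] for ep in episodes)
        for tone in tones
    }
    average = sum(spreads.values()) / len(spreads)
    lo = min(spreads, key=spreads.get)
    hi = max(spreads, key=spreads.get)
    return average, (lo, spreads[lo]), (hi, spreads[hi])

episodes = [
    {"joy": 20.0, "sadness": 5.0},
    {"joy": 26.0, "sadness": 6.5},
]
print(emotional_range(episodes))  # (3.75, ('sadness', 1.5), ('joy', 6.0))
```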

On the other end of the spectrum, the ADV subtitles showed a lot more variability between each episode. I won’t comment on if this makes for a more (or less) accurate translation without comparing it against the original Japanese dialogue, but it at least was interesting to me.

A true English Mastar

Closing Comments

Where can we go from here?

Going forward, I’d like to do a similar analysis on a manga that’s been translated several times, and ideally I’d like to use a larger data set. Finally, since the Tone Analyzer API now supports Japanese, I’d like to run the original Japanese dialogue through that and see which adaptation comes closest to the source text.

In case you’re interested in playing with the data yourself, I’ve uploaded the results to a Google Sheets file that you can download, copy, or mess around with at your leisure. (Note that all dialogue has been removed, as the translations belong to their respective copyright holders.)

Want to see more content like this? Why not check out my other blog, Tuxedo Unmasked, or follow me on social media at @t_unmasked on Twitter and TuxedoUnmasked on Facebook!

Future Vision

A publication centered around high quality storytelling

Jason Muell

Written by

Jason is a translator, blogger, and author. When he’s not serving as a human jungle gym for his young daughter, he can be found researching Japanese culture.
