Neural Machine Translation with Code

Umer Farooq
Mar 21, 2018

Table of Contents

  • Machine Translation
  • Statistical Machine Translation
  • Neural Machine Translation
  • Encoder
  • Decoder
  • Attention Mechanism
  • Training
  • Complete code for translating German phrases to English in Keras

Machine Translation?

Machine translation is the task of automatically converting source text in one language to text in another language.

In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language.

Given a sequence of text in a source language, there is no single best translation of that text to another language. This is because of the natural ambiguity and flexibility of human language. This makes automatic machine translation a difficult challenge, perhaps one of the most difficult in artificial intelligence:

The fact is that accurate translation requires background knowledge in order to resolve ambiguity and establish the content of the sentence.

Classical machine translation methods often involve rules for converting text in the source language to the target language. The rules are often developed by linguists and may operate at the lexical, syntactic, or semantic level. This focus on rules gives the name to this area of study: Rule-based Machine Translation, or RBMT.

RBMT is characterized by the explicit use and manual creation of linguistically informed rules and representations.

The key limitations of the classical machine translation approaches are both the expertise required to develop the rules, and the vast number of rules and exceptions required.

Statistical Machine Translation?

Statistical machine translation, or SMT for short, is the use of statistical models that learn to translate text from a source language to a target language given a large corpus of examples.

This task of using a statistical model can be stated formally as follows:

Given a sentence T in the target language, we seek the sentence S from which the translator produced T. We know that our chance of error is minimized by choosing that sentence S that is most probable given T. Thus, we wish to choose S so as to maximize Pr(S|T).
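
Applying Bayes' rule gives the classic noisy-channel decomposition (a standard derivation included here as a sketch, using the Pr notation of the quote above):

\hat{S} = \arg\max_S \Pr(S \mid T) = \arg\max_S \frac{\Pr(T \mid S)\,\Pr(S)}{\Pr(T)} = \arg\max_S \Pr(T \mid S)\,\Pr(S)

Since Pr(T) is fixed for the observed sentence T, it can be dropped from the maximization. Pr(S) acts as a language model that rewards fluent sentences, while Pr(T|S) acts as a translation model that rewards sentences S that explain T well.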

Neural Machine Translation?

Neural machine translation, or NMT for short, is the use of neural network models to learn a statistical model for machine translation.

The key benefit of the approach is that a single system can be trained directly on source and target text, no longer requiring the pipeline of specialized systems used in statistical machine translation.

Unlike the traditional phrase-based translation system which consists of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation.

Neural Machine Translation by Jointly Learning to Align and Translate, 2014.

As such, neural machine translation systems are said to be end-to-end systems as only one model is required for the translation.

The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text.

Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016.

Encoder

The task of the encoder is to provide a representation of the input sentence. The input sentence is a sequence of words, for which we first consult the embedding matrix. Then, as in a basic language model, we process these words with a recurrent neural network. This results in hidden states that encode each word with its left context, i.e., all the preceding words. To also capture the right context, we build a second recurrent neural network that runs right-to-left, or more precisely, from the end of the sentence to the beginning. Having two recurrent neural networks running in two directions is called a bidirectional recurrent neural network.
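
As a minimal Keras sketch of such a bidirectional encoder (the layer sizes here are illustrative assumptions; note that the complete model at the end of this article uses a simpler unidirectional LSTM encoder):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional

# illustrative sizes (assumptions, not values from this article)
src_vocab, n_units, src_timesteps = 10000, 256, 20

encoder = Sequential()
# embedding matrix: maps each word index to a dense vector
encoder.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
# one LSTM reads left-to-right, the other right-to-left; Keras
# concatenates their hidden states, so each word is represented
# together with both its left and its right context
encoder.add(Bidirectional(LSTM(n_units, return_sequences=True)))
# output shape: (batch, src_timesteps, 2 * n_units)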

Decoder

The decoder is a recurrent neural network. It takes some representation of the input context (more on that in the next section on the attention mechanism), the previous hidden state, and the previous output word prediction, and generates a new hidden decoder state and a new output word prediction.

If we use LSTMs for the encoder, then we also use LSTMs for the decoder. From the hidden state, we now predict the output word. This prediction takes the form of a probability distribution over the entire output vocabulary. If we have a vocabulary of, say, 50,000 words, then the prediction is a 50,000-dimensional vector, each element corresponding to the probability predicted for one word in the vocabulary.
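
To make the shapes concrete, here is a small numpy sketch of that prediction step, with a randomly initialized (untrained) output projection standing in for the learned one; n_units is an assumption, and 50,000 matches the example above:

import numpy as np

def softmax(scores):
    # subtract the max for numerical stability
    e = np.exp(scores - scores.max())
    return e / e.sum()

n_units, tar_vocab = 256, 50000            # 50,000 words as in the example above
s = np.random.randn(n_units)               # decoder hidden state
W = np.random.randn(tar_vocab, n_units) * 0.01  # output projection (untrained)
p = softmax(W.dot(s))
print(p.shape)   # (50000,): one probability per vocabulary word
print(p.sum())   # ~1.0: a valid probability distribution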

Attention Mechanism

We currently have two loose ends. The encoder gave us a sequence of word representations hj = (←hj, →hj), and the decoder expects a context ci at each step i. We now describe the attention mechanism that ties these ends together. The attention mechanism is hard to visualize using our typical neural network graphs, but its input and output relations are simple to state. The attention mechanism is informed by all input word representations (←hj, →hj) and the previous hidden state of the decoder si−1, and it produces a context state ci. The motivation is that we want to compute an association between the decoder state (which contains information about where we are in producing the output sentence) and each input word. Based on how strong this association is, or in other words how relevant each particular input word is for producing the next output word, we want to weight the impact of its word representation.
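
As a minimal numpy sketch of this computation, in the additive style of the Bahdanau et al. paper cited earlier (all weights are randomly initialized purely to show the data flow; in a real model they are learned):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n, src_len = 4, 6                    # illustrative sizes (assumptions)
h = np.random.randn(src_len, 2 * n)  # hj: bidirectional states, one row per input word
s_prev = np.random.randn(n)          # si-1: previous decoder hidden state

Wa = np.random.randn(n, n)           # weights for the decoder state
Ua = np.random.randn(n, 2 * n)       # weights for the input word representations
va = np.random.randn(n)              # projection down to a scalar score

# association between the decoder state and each input word
scores = np.array([va.dot(np.tanh(Wa.dot(s_prev) + Ua.dot(hj))) for hj in h])
alpha = softmax(scores)              # relevance weights, sum to 1
c = alpha.dot(h)                     # context ci: weighted sum of the hj
print(alpha.round(2), c.shape)       # e.g. [0.05 0.61 ...] (8,)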

Training

With the complete model in hand, we can now take a closer look at training. One challenge is that the number of steps in the decoder and the number of steps in the encoder vary with each training example. Sentence pairs consist of sentences of different lengths, so we cannot have the same computation graph for each training example but instead have to dynamically create the computation graph for each of them. This technique is called unrolling the recurrent neural network.

Practical training of neural machine translation models requires GPUs, which are well suited to the high degree of parallelism inherent in these deep learning models (just think of the many matrix multiplications). To increase parallelism even more, we process several sentence pairs (say, 100) at once. This implies that we increase the dimensionality of all the state tensors. To give an example, we represent each input word in a specific sentence pair with a vector hj. Since we have a sequence of input words, these vectors are lined up in a matrix. When we process a batch of sentence pairs, we again line up these matrices into a 3-dimensional tensor. Similarly, to give another example, the decoder hidden state si is a vector for each output word. Since we process a batch of sentences, we line up their hidden states into a matrix. Note that in this case it is not helpful to line up the states for all the output words, since the states are computed sequentially.
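
To make those tensor shapes concrete (the batch size and dimensions below are illustrative assumptions):

import numpy as np

batch, src_len, dim = 100, 15, 512

# one input word  -> a vector hj of size dim
# one sentence    -> a (src_len, dim) matrix of word vectors
# a batch         -> a (batch, src_len, dim) 3-dimensional tensor
H = np.zeros((batch, src_len, dim))
print(H.shape)    # (100, 15, 512)

# decoder states, by contrast, are computed one output step at a time:
# at each step i we only have a (batch, dim) matrix of hidden states si
s_i = np.zeros((batch, dim))
print(s_i.shape)  # (100, 512)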

To summarize, training consists of the following steps (a schematic sketch follows the list):

  • Shuffle the training corpus (to avoid undue biases due to temporal or topical order)
  • Break up the corpus into maxi-batches
  • Break up each maxi-batch into mini-batches
  • Process each mini-batch, gathering gradients
  • Apply all the gradients for a maxi-batch to update the parameters

Typically, training neural machine translation models takes about 5–15 epochs (passes through the entire training corpus). A common stopping criterion is to check the progress of the model on a validation set (that is not part of the training data) and halt when the error on the validation set does not improve. Training longer would not lead to any further improvements and may even degrade performance due to overfitting.
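
A schematic version of that loop in Python; gradients_for, apply_gradients, and validation_error are hypothetical stand-ins for whatever your framework provides, and the batch sizes are illustrative:

import random

def train(corpus, model, max_epochs=15, maxi_size=1000, mini_size=100):
    best_val = float('inf')
    for epoch in range(max_epochs):
        random.shuffle(corpus)                      # avoid temporal/topical bias
        for i in range(0, len(corpus), maxi_size):  # break corpus into maxi-batches
            maxi_batch = corpus[i:i + maxi_size]
            gradients = []
            for j in range(0, len(maxi_batch), mini_size):  # ...and into mini-batches
                mini_batch = maxi_batch[j:j + mini_size]
                gradients.append(model.gradients_for(mini_batch))  # hypothetical API
            model.apply_gradients(gradients)        # one update per maxi-batch (hypothetical API)
        val = model.validation_error()              # hypothetical API
        if val >= best_val:                         # stop when validation stops improving
            break
        best_val = val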

Code

In this tutorial, we will use a dataset of German-to-English terms used as the basis for flashcards for language learning.

The dataset is available from the ManyThings.org website, with examples drawn from the Tatoeba Project. The dataset consists of German phrases and their English counterparts and is intended to be used with the Anki flashcard software.

The dataset we will use in this tutorial is available for download here:

Download the dataset to your current working directory and decompress it.

from pickle import load  # needed by load_clean_sentences below
from numpy import array
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.utils.vis_utils import plot_model
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
from keras.callbacks import ModelCheckpoint

# load a clean dataset
def load_clean_sentences(filename):
    return load(open(filename, 'rb'))

# fit a tokenizer
def create_tokenizer(lines):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(lines)
    return tokenizer

# max sentence length
def max_length(lines):
    return max(len(line.split()) for line in lines)

# encode and pad sequences
def encode_sequences(tokenizer, length, lines):
    # integer encode sequences
    X = tokenizer.texts_to_sequences(lines)
    # pad sequences with 0 values
    X = pad_sequences(X, maxlen=length, padding='post')
    return X

# one hot encode target sequence
def encode_output(sequences, vocab_size):
    ylist = list()
    for sequence in sequences:
        encoded = to_categorical(sequence, num_classes=vocab_size)
        ylist.append(encoded)
    y = array(ylist)
    y = y.reshape(sequences.shape[0], sequences.shape[1], vocab_size)
    return y

# define NMT model: encoder-decoder with a fixed-size intermediate representation
def define_model(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
    model = Sequential()
    model.add(Embedding(src_vocab, n_units, input_length=src_timesteps, mask_zero=True))
    model.add(LSTM(n_units))                         # encoder
    model.add(RepeatVector(tar_timesteps))           # bridge to the decoder
    model.add(LSTM(n_units, return_sequences=True))  # decoder
    model.add(TimeDistributed(Dense(tar_vocab, activation='softmax')))
    return model

# load datasets
dataset = load_clean_sentences('english-german-both.pkl')
train = load_clean_sentences('english-german-train.pkl')
test = load_clean_sentences('english-german-test.pkl')

# prepare english tokenizer
eng_tokenizer = create_tokenizer(dataset[:, 0])
eng_vocab_size = len(eng_tokenizer.word_index) + 1
eng_length = max_length(dataset[:, 0])
print('English Vocabulary Size: %d' % eng_vocab_size)
print('English Max Length: %d' % eng_length)

# prepare german tokenizer
ger_tokenizer = create_tokenizer(dataset[:, 1])
ger_vocab_size = len(ger_tokenizer.word_index) + 1
ger_length = max_length(dataset[:, 1])
print('German Vocabulary Size: %d' % ger_vocab_size)
print('German Max Length: %d' % ger_length)

# prepare training data
trainX = encode_sequences(ger_tokenizer, ger_length, train[:, 1])
trainY = encode_sequences(eng_tokenizer, eng_length, train[:, 0])
trainY = encode_output(trainY, eng_vocab_size)

# prepare validation data
testX = encode_sequences(ger_tokenizer, ger_length, test[:, 1])
testY = encode_sequences(eng_tokenizer, eng_length, test[:, 0])
testY = encode_output(testY, eng_vocab_size)

# define model
model = define_model(ger_vocab_size, eng_vocab_size, ger_length, eng_length, 256)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# summarize defined model
print(model.summary())
plot_model(model, to_file='model.png', show_shapes=True)

# fit model, keeping the checkpoint with the lowest validation loss
filename = 'model.h5'
checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
model.fit(trainX, trainY, epochs=30, batch_size=64, validation_data=(testX, testY), callbacks=[checkpoint], verbose=2)

# see the complete code here:
# https://github.com/umer7/nmt

Papers

  • Neural Machine Translation by Jointly Learning to Align and Translate, 2014
  • Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016
