Implementation of Neural Machine Translation Using Attention

Pushprajmaraje · Published in Analytics Vidhya · Aug 23, 2021

What Is NMT?

NMT stands for Neural Machine Translation, a branch of machine translation. Here, neural networks are used to translate text from one language into another.

NMT translates by using the grammar, parts of speech, and vocabulary of the source language to find the correct equivalents in the target language.

Example:

How are you today?

Translating to German:

Wie geht es Ihnen heute?

The following is the historic line from JFK’s speech whose translation created confusion during the Cold War:

“Ich bin ein Berliner”, which can also be read as “I am a jelly donut”.

Link for optional reading:

https://www.theatlantic.com/magazine/archive/2013/08/the-real-meaning-of-ich-bin-ein-berliner/309500/

The example above shows that we need not just the correct replacement words, but also the relationships between words, so that we don’t end up with a wrong translation.

Architecture Of Neural Machine Translation

An NMT model consists of an encoder and a decoder. The encoder takes a sequence in the source language as input, while the decoder decodes that representation and produces the appropriate words in the language the model is translating into.

The encoder and decoder are essentially two LSTM-based RNNs that serve different functions.

The encoder performs sequence-to-vector encoding, whereas the decoder performs vector-to-sequence decoding.
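
To make this concrete, here is a minimal sketch of the two-LSTM encoder-decoder idea in TensorFlow/Keras. The vocabulary sizes, embedding size, and number of units are illustrative assumptions, not values from this article.

```python
import tensorflow as tf

# Illustrative sizes (assumptions for the sketch).
vocab_in, vocab_out, embed_dim, units = 8000, 8000, 256, 512

# Encoder: sequence-to-vector. The final LSTM states summarize the source sentence.
enc_inputs = tf.keras.Input(shape=(None,), name="source_tokens")
enc_emb = tf.keras.layers.Embedding(vocab_in, embed_dim)(enc_inputs)
_, enc_h, enc_c = tf.keras.layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: vector-to-sequence. It starts from the encoder states and emits target tokens.
dec_inputs = tf.keras.Input(shape=(None,), name="target_tokens")
dec_emb = tf.keras.layers.Embedding(vocab_out, embed_dim)(dec_inputs)
dec_seq, _, _ = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[enc_h, enc_c])
logits = tf.keras.layers.Dense(vocab_out)(dec_seq)

model = tf.keras.Model([enc_inputs, dec_inputs], logits)
```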

Data Preparation For NMT

The dataset contains language translation pairs in a tab-separated format.

Something like this: “May I borrow this book?  ¿Puedo tomar prestado este libro?”

Here’s a link to get pre-formatted datasets: http://www.manythings.org/anki/

The link above hosts many zip packages that can be used for NMT models.

After downloading the dataset from the link above, a few steps are needed to prepare the data (a minimal preprocessing sketch follows the list):

  1. Add a start and end token to each sentence, e.g. <SOS> and <EOS>.
  2. Clean the sentences by removing special and other unnecessary characters.
  3. Create a word index and an inverted word index (dictionaries mapping word → id and id → word).
  4. Pad each sentence to the length of the longest sentence.
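
Here is a rough sketch of those four steps, assuming the tab-separated pairs from manythings.org (e.g. a file named spa.txt); the helper names and the cleaning regex are my own.

```python
import re
import tensorflow as tf

def preprocess(sentence):
    # Step 2: clean the sentence, keeping only letters and basic punctuation.
    sentence = sentence.lower().strip()
    sentence = re.sub(r"[^a-záéíóúüñ?.!,¿]+", " ", sentence)
    # Step 1: add start and end tokens.
    return "<SOS> " + sentence.strip() + " <EOS>"

def tokenize(sentences):
    # Step 3: word index (word -> id); tokenizer.index_word gives the inverted index.
    tokenizer = tf.keras.preprocessing.text.Tokenizer(filters="", oov_token="<UNK>")
    tokenizer.fit_on_texts(sentences)
    ids = tokenizer.texts_to_sequences(sentences)
    # Step 4: pad every sentence to the length of the longest one.
    ids = tf.keras.preprocessing.sequence.pad_sequences(ids, padding="post")
    return ids, tokenizer

# Each line holds a tab-separated pair, e.g. English \t Spanish.
pairs = [line.split("\t")[:2]
         for line in open("spa.txt", encoding="utf-8").read().strip().split("\n")]
source = [preprocess(p[0]) for p in pairs]
target = [preprocess(p[1]) for p in pairs]
input_ids, input_tokenizer = tokenize(source)
target_ids, target_tokenizer = tokenize(target)
```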

Implementation Of Neural Machine Translation Using Attention

NMT models trained before attention used the plain seq2seq structure. This structure had an issue known as the fixed encoder representation, which caused a bottleneck at the encoder’s output vector.

With a single fixed vector, the most recent words get higher priority than the earlier ones, which makes it hard for the NMT model to maintain the relationships between all the words. This is where attention comes in.

What Is Attention?

Attention, as the name says, allows the model to “focus on the important parts” of the sentence.

The term “attention” was initially introduced in the paper Neural Machine Translation by Jointly Learning to Align and Translate, whose main purpose was to address the fixed-representation problem.

The attention mechanism is a part of the neural network that makes it focus only on the important data. At each decoder step, it decides which parts of the source sentence are more important, so the encoder does not have to squeeze all the tokens of the sentence into a single vector.

As seen earlier, the encoder and decoder parts stay the same; the output of the decoder is sent through a softmax activation function to get the final results of the NMT model.

Several computations take part in calculating each attention value, weighing how important each part of the source sentence is for the current decoder step. A sketch of this computation follows.
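
As a sketch, the additive (Bahdanau-style) attention used in the referenced TensorFlow tutorial computes a score for every encoder position, turns the scores into softmax weights, and takes a weighted sum of the encoder outputs. The layer below follows that recipe; the layer and variable names are illustrative.

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    """Additive attention: score(query, key) = v^T tanh(W1 query + W2 key)."""
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, query, values):
        # query: decoder hidden state, shape (batch, units)
        # values: encoder outputs, shape (batch, src_len, units)
        query = tf.expand_dims(query, 1)                                # (batch, 1, units)
        score = self.V(tf.nn.tanh(self.W1(query) + self.W2(values)))    # (batch, src_len, 1)
        weights = tf.nn.softmax(score, axis=1)      # how important each source token is
        context = tf.reduce_sum(weights * values, axis=1)  # weighted sum of encoder outputs
        return context, weights
```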

Training The NMT Model

Here we’ll use the encoder-decoder structure to create the NMT model, with attention added on top.

In this example, we are translating English tokens to German tokens. In the diagram, the input is labeled 0 and the target is labeled 1. One copy of the input tokens is fed into the input encoder to be transformed into the key and value vectors. Another copy of the target tokens goes into the pre-attention decoder. An important note here: the pre-attention decoder is not the decoder we saw before, which produces the decoded outputs.

The pre-attention decoder transforms the prediction targets into a different vector space, producing the query vectors. To be more specific, the pre-attention decoder takes the target tokens and shifts them one place to the right. This is where teacher forcing takes place.

That way, when predicting each position, we can just feed in the correct previous target word (i.e., teacher forcing).
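
A tiny illustration of that shift (the token ids below are made up for the example):

```python
import tensorflow as tf

# A target sentence as ids, where 1 = <SOS> and 2 = <EOS> (illustrative ids).
target = tf.constant([[1, 45, 78, 12, 2]])

# Decoder input: the target shifted one place to the right (drop the last token),
# so at each step the decoder sees the correct previous word (teacher forcing).
decoder_input = target[:, :-1]     # [[1, 45, 78, 12]]

# Decoder output to predict: the target shifted one place to the left (drop <SOS>).
decoder_target = target[:, 1:]     # [[45, 78, 12, 2]]
```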

The input encoder gives you the keys and values. Once you have the queries, keys, and values, you can compute the attention. After getting the output of your attention layer, the residual block adds the queries generated in the pre-attention decoder to the results of the attention layer.

The activations then go to the second phase, along with the mask that was created earlier; we are now in the top-right corner of the image. A select step is used to drop the mask: it takes the activations from the attention layer (the 0) and the second copy of the target tokens (the 2). These are the true targets that the decoder needs to compare against the predictions.

Finally, you run everything through a decoder LSTM and a dense layer (a simple linear layer with your target vocabulary size), which gives the output the right shape. Log softmax is used to compute the probabilities. The true target tokens are still carried along and are passed together with the log probabilities so they can be matched against the predictions.
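
The referenced TensorFlow tutorial implements this last step as a decoder LSTM feeding a dense layer of the target vocabulary size, with a loss that masks out padding. The sketch below follows that pattern; it reuses the BahdanauAttention layer sketched earlier and skips the pre-attention-decoder bookkeeping described above, so treat it as an approximation rather than the exact pipeline.

```python
import tensorflow as tf

class Decoder(tf.keras.layers.Layer):
    def __init__(self, vocab_size, embed_dim, units):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
        self.lstm = tf.keras.layers.LSTM(units, return_sequences=True, return_state=True)
        self.fc = tf.keras.layers.Dense(vocab_size)   # linear layer with the target vocab size
        self.attention = BahdanauAttention(units)     # from the earlier sketch

    def call(self, token, hidden, cell, enc_outputs):
        # One decoder step: attend over the encoder outputs, then run the LSTM.
        context, _ = self.attention(hidden, enc_outputs)
        x = self.embedding(token)                                  # (batch, 1, embed_dim)
        x = tf.concat([tf.expand_dims(context, 1), x], axis=-1)    # prepend the context vector
        output, hidden, cell = self.lstm(x, initial_state=[hidden, cell])
        logits = self.fc(output[:, 0, :])                          # (batch, vocab_size)
        return logits, hidden, cell

def masked_loss(true_ids, logits):
    # Compare log probabilities against the true targets, ignoring padding (id 0).
    loss = tf.keras.losses.sparse_categorical_crossentropy(true_ids, logits, from_logits=True)
    mask = tf.cast(tf.not_equal(true_ids, 0), loss.dtype)
    return tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)
```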

A link to the full NMT code is given in the References section.

Conclusion:

Working on NMT models is somewhat tricky, as the computation happening behind the scenes needs to be understood first in order to improve the model’s performance.

References:

  1. NMT paper: Neural Machine Translation by Jointly Learning to Align and Translate.
  2. Datasets: http://www.manythings.org/anki/
  3. Code: https://www.tensorflow.org/tutorials/text/nmt_with_attention#training
  4. NMT models in detail: https://lena-voita.github.io/nlp_course/seq2seq_and_attention.html#attention_intro
