Developing the Language Model

Finally, I can start working towards Milestone 2, which is completing the development of the Language Model for Malayalam. Time to switch completely to Ubuntu from here on. Why?

Well, the forums related to CMU Sphinx keep saying that they won’t look at reports from Windows anyway, and since all the commands and code in the documentation lean towards Linux, let’s just stick with it as well. After all, when it comes to open source, why should I be developing on Microsoft Windows? (** Giggle **)

What is a Statistical Language Model?

Statistical language models can describe much more complex language than a fixed grammar; in our case, that language is Malayalam. They contain the probabilities of words and word combinations, estimated from sample data (the sentence file), which gives them some built-in flexibility.

This means every combination of words from the vocabulary is possible, though the probability of each combination will vary.

Say you create a statistical language model from a plain list of words, which is what I did for my Major Project work; it will still let the decoder recognise word combinations (phrases or sentences, for that matter), even though that might not be our intent.

Overall, statistical language models are recommended for free-form input, where the user could say anything in natural language. They also require far less engineering effort than grammars: you just list the possible sentences using the words from the vocabulary.

Let me explain this with a traditional Malayalam example:

Suppose we have these two sentences: “ഞാനും അവനും ഭക്ഷണം കഴിച്ചു” and “ചേട്ടൻ ഭക്ഷണം കഴിച്ചില്ലേ”.

If we build a statistical language model from this set of sentences, it becomes possible to derive more sentences from the words (the vocabulary).

ഞാനും (1), അവനും (1), ഭക്ഷണം (2), കഴിച്ചു (1), ചേട്ടൻ (1), കഴിച്ചില്ലേ (1)

That is, we can have sentences like “ഞാനും കഴിച്ചു ഭക്ഷണം”, or maybe “ഭക്ഷണം കഴിച്ചില്ലേ”, or “അവനും കഴിച്ചില്ലേ”, and so on. It’s a bit like the transitive property of equality, only in a more probabilistic form: what matters here is the probability of a given word occurring after another word, and that probability is calculated from the sample data we provide as the database.

Now, you might be wondering what the numbers inside the parentheses mean. They are simply the number of occurrences of each word across the complete set of sentences. These counts are calculated by the set of C tools provided by a toolkit that I will introduce shortly.
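Just to make the counting concrete before bringing in the toolkit, here is a minimal shell sketch that produces the same kind of counts with standard Unix tools. The file name sample.txt is hypothetical; assume it holds the two example sentences, one per line:

# split on whitespace, then count the words and sort by frequency
tr -s ' ' '\n' < sample.txt | sort | uniq -c | sort -rn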


Update — July 18

Okay!

Let’s start building. If you remember my previous blog posts, you might recall me writing about extracting words and then transcribing them into their phonetic representations. Those words are nothing but the vocabulary I just showed.

To build a language model over such a large vocabulary, you need specialised tools. One such set of tools is provided as C libraries under the name “CMU-Cambridge Statistical Language Modeling Toolkit”, or CMU-CLMTK for short. You can head over to its official page to know more about it. I have already installed it, so we are ready to go.

So according to the documentation,

The first step is to find out the number of occurrences of each word (text2wfreq):

cat ml.txt | text2wfreq > ml.wfreq
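For the toy example from earlier, the resulting .wfreq file would simply contain word and count pairs, one per line, roughly like this (the actual order may differ, since text2wfreq does not sort its output):

ഭക്ഷണം 2
ഞാനും 1
അവനും 1
കഴിച്ചു 1
ചേട്ടൻ 1
കഴിച്ചില്ലേ 1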

Next, we need to convert the .wfreq file to a .vocab file, without the counts and other clutter. Just the words (wfreq2vocab):

cat ml.wfreq | wfreq2vocab -top 20000 > ml.vocab
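The resulting .vocab file is essentially the vocabulary written one word per line (the -top 20000 switch keeps only the 20,000 most frequent words), preceded by a few header lines that the tool writes as comments. For the toy example it would look roughly like this:

## Vocab generated by wfreq2vocab (header lines like this are comments)
അവനും
കഴിച്ചില്ലേ
കഴിച്ചു
ചേട്ടൻ
ഞാനും
ഭക്ഷണം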

Oops, there are some issues with the generated vocab file: repetitions, and a few extra words here and there that are not required. This probably happened when I was filtering the sentences file but forgot to update, or skipped updating, the transcription file. That means some delay in the further process. It’s already late at night! I need to sleep!


Update — July 19

‘Meld’. Thank you, StackExchange!

With this guy, it’s easy to compare the files side by side and make changes simultaneously. It should be done by today!
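(For the record, the same kind of mismatch can also be spotted from the terminal. A minimal sketch, assuming hypothetical file names: ml.vocab with one word per line, and ml.dic as the phonetic transcription file with the word in the first whitespace-separated column. It prints the entries that appear in only one of the two files:)

comm -3 <(grep -v '^##' ml.vocab | sort) <(awk '{print $1}' ml.dic | sort -u)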

.

.

Done!

Okay, now that the issue has been handled, we are getting somewhere. It should be pretty much straightforward from here.

Next, we need to find the list of every id n-gram that occurred in the text, along with its number of occurrences. In other words, we generate a binary id 3-gram file from the training text (ml.txt), based on this vocabulary (ml.vocab).

By default, the id n-gram file is written out as a binary file, unless the -write_ascii switch is used in the command.

The -temp ./ switch can be used if you want to run the command without root permission and use the current working directory as the temp folder. Or you can just run it as root, in which case it will use /usr/tmp as the temp folder by default.

cat ml.txt | text2idngram -vocab ml.vocab -temp ./ > ml.idngram
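If you want to peek at the id n-grams, the same command with the -write_ascii switch mentioned above writes a human-readable file instead (the output file name here is just my choice):

cat ml.txt | text2idngram -vocab ml.vocab -temp ./ -write_ascii > ml.idngram.ascii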

Finally, we can generate the Language Model. This can be either an ARPA model or a binary one.

idngram2lm -idngram ml.idngram -vocab ml.vocab -binary ml.bin

or

idngram2lm -idngram ml.idngram -vocab ml.vocab -arpa ml.arpa

Even though the ARPA format is available, the binary format of the language model is recommended for faster operation.
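As a quick sanity check, the toolkit also ships an evallm tool that can report the perplexity of the model on some held-out text. A minimal sketch, assuming a hypothetical test file test.txt; evallm opens an interactive prompt, and the perplexity command below is typed at that prompt:

evallm -binary ml.bin
perplexity -text test.txt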

Here is the basic work-flow, as provided by the Toolkit Documentation.
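In text form, the pipeline we just ran boils down to this:

ml.txt -> text2wfreq -> ml.wfreq -> wfreq2vocab -> ml.vocab
ml.txt + ml.vocab -> text2idngram -> ml.idngram -> idngram2lm -> ml.arpa / ml.bin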

That’s it. The Language Model is complete. I can now move on to the next step, which is building and training the Acoustic Model.