LSTM by Example using Tensorflow
Rowel Atienza

Just started learning LSTM and TensorFlow, so this article is a huge help. I'm currently running your model on an f1-micro, so it's taking forever (no surprise), but I'm getting these outputs:

Iter= 1000, Average Loss= 3.174847, Average Accuracy= 13.30%
['y', ' ', ','] - [ ] vs [o]
Iter= 2000, Average Loss= 2.867009, Average Accuracy= 18.30%
['l', ' ', 't'] - [o] vs [o]
Iter= 3000, Average Loss= 2.794725, Average Accuracy= 20.40%
['s', ' ', 'i'] - [n] vs [a]
Iter= 4000, Average Loss= 2.748497, Average Accuracy= 20.10%
[' ', 'o', 'f'] - [ ] vs [e]
Iter= 5000, Average Loss= 2.722638, Average Accuracy= 20.60%
['o', 'c', 'u'] - [r] vs [ ]
Iter= 6000, Average Loss= 2.658239, Average Accuracy= 24.20%
['e', ' ', 'w'] - [a] vs [r]
Iter= 7000, Average Loss= 2.598048, Average Accuracy= 22.60%
['a', 'u', 's'] - [e] vs [ ]
Iter= 8000, Average Loss= 2.684880, Average Accuracy= 25.10%
['e', ' ', 'l'] - [o] vs [a]
Iter= 9000, Average Loss= 2.549299, Average Accuracy= 25.80%
['i', 'e', 's'] - [ ] vs [ ]
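To look at the trend rather than eyeballing individual lines, the printed output above can be parsed into numbers. This is just a sketch assuming the log lines match the format shown exactly; the regex and helper name are my own, not part of the original model code.

```python
import re

# Matches lines like:
#   Iter= 1000, Average Loss= 3.174847, Average Accuracy= 13.30%
LOG_LINE = re.compile(
    r"Iter= (\d+), Average Loss= ([\d.]+), Average Accuracy= ([\d.]+)%"
)

def parse_log(lines):
    """Yield (iteration, loss, accuracy) for each matching log line."""
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            yield int(m.group(1)), float(m.group(2)), float(m.group(3))

sample = ["Iter= 1000, Average Loss= 3.174847, Average Accuracy= 13.30%"]
print(list(parse_log(sample)))  # → [(1000, 3.174847, 13.3)]
```

Once parsed, the loss and accuracy columns can be plotted or smoothed to see whether training is actually progressing.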

and I'm wondering if that's typical? I played around with ML libraries before getting into TensorFlow and saw similar sporadic behavior, with accuracy going up, then down, then up again. I suppose I'll have to wait until this has completed its 50,000 iterations, but from other models that have been graphed, there's usually a sharp decline in loss at first and then sporadic behavior, with accuracy in the 80–90% range…
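One way to separate the overall trend from that run-to-run noise is a simple moving average over the reported accuracies. A minimal sketch, using the accuracy values copied from the log above (the `moving_average` helper is my own illustration, not from the article):

```python
# Accuracy (%) at each 1000-iteration checkpoint, copied from the log.
accuracies = [13.30, 18.30, 20.40, 20.10, 20.60, 24.20, 22.60, 25.10, 25.80]

def moving_average(values, window=3):
    """Return the simple moving average over a sliding window."""
    return [
        sum(values[i : i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# Smoothed values rise steadily even though the raw ones bounce around,
# which suggests the dips (e.g. 20.40% -> 20.10%) are just noise.
print(moving_average(accuracies))
```

On these numbers the smoothed series increases at every step, so the sporadic dips don't necessarily mean training has stalled.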
