Text Summarization with Amazon Reviews
David Currie

Dave Currie Initially I tried random initialization of the embeddings, since from what I have read, using pretrained embeddings doesn't make much difference in general. Since the results were not as good, I ran your code directly to see what the results should be; they were not good either. Yesterday I trained on the whole dataset using your code, changing only your start and stop variables. E.g.:

Original Text: This is the worst cheese that I have ever bought! I will never buy it again and I hope you won't either!

Word Ids: [18184, 19555, 57297, 11063, 28446, 47807, 11446, 39992]
Input Words: worst cheese ever bought never buy hope either

Word Ids: [38537, 51885, 17285, 26151, 39086, 26151]
Response Words: a product hearty more potter more
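For reference, the "Word Ids" and "Input Words"/"Response Words" lines above are related by a simple lookup: each integer id indexes into the model's vocabulary. A minimal sketch of that lookup is below; the `int_to_vocab` dictionary here is hypothetical (in the actual project it is built from the review corpus), and only contains the ids from the example above.

```python
# Hypothetical id-to-word mapping; in the real project this dictionary
# is built while preprocessing the Amazon review corpus.
int_to_vocab = {
    18184: "worst", 19555: "cheese", 57297: "ever", 11063: "bought",
    28446: "never", 47807: "buy", 11446: "hope", 39992: "either",
}

def ids_to_words(word_ids, int_to_vocab):
    """Map a list of integer word ids back to their word strings."""
    return [int_to_vocab[i] for i in word_ids]

input_ids = [18184, 19555, 57297, 11063, 28446, 47807, 11446, 39992]
print(" ".join(ids_to_words(input_ids, int_to_vocab)))
# → worst cheese ever bought never buy hope either
```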

Seems pretty bad to me; not sure what's going on. I used the code you posted on GitHub.
