Deep Learning the Stock Market
Tal Perry

This doesn’t seem to make sense to me, because in the word2vec case, the meanings of words are mined from their contexts.

Here in this article, you don’t seem to explain what “context” means for “a market vector”, or for the 1000 stocks you picked in your table.

If the prices of those 1000 stocks can be viewed as a snapshot of the market, then I can imagine a series of consecutive snapshots being viewed as a context.
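
To make concrete what I mean by a snapshot and its context, here is a minimal sketch. The 4 values per stock (e.g. open/high/low/close), the window width, and the fake data are my own assumptions for illustration, not details from the article:

```python
import numpy as np

# Hypothetical setup: 1000 stocks, 4 values each (e.g. open/high/low/close),
# so one market "snapshot" is a 4*1000 = 4000-dimensional vector.
n_stocks, n_features = 1000, 4
n_days = 250  # roughly one trading year of snapshots

# Fake price data just for illustration: days x (stocks * features)
market = np.random.rand(n_days, n_stocks * n_features)

def context_window(snapshots, t, width=2):
    """A 'context' analogous to word2vec: the snapshots surrounding day t."""
    before = snapshots[max(0, t - width):t]
    after = snapshots[t + 1:t + 1 + width]
    return np.concatenate([before, after], axis=0)

ctx = context_window(market, t=100)
print(ctx.shape)  # (4, 4000) with width=2
```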

But still, applying an embedding to this market vector doesn’t make sense to me, because it’s hard to imagine that two market vectors that are far apart can appear in similar contexts.

The stock market changes gradually. If Apple’s price is, say, 117 today, it usually won’t crash to 20 the next day. So in the original space of dimension 4×1000, market vectors that appear in the same context should already be very close.
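
A quick simulation illustrates the point. Assuming prices follow a gradual random walk (my assumption, standing in for real market data), snapshots of neighboring days are already much closer in the raw 4000-dimensional space than snapshots of distant days:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate gradually drifting prices: a geometric random walk with small
# daily steps, mimicking a market that won't jump from 117 to 20 overnight.
n_days, dim = 250, 4000
steps = rng.normal(scale=0.01, size=(n_days, dim))
snapshots = 100 * np.exp(np.cumsum(steps, axis=0))

def dist(a, b):
    return np.linalg.norm(a - b)

# Neighboring days (i.e. the same context) are far closer than distant days.
print(dist(snapshots[100], snapshots[101]))  # small
print(dist(snapshots[100], snapshots[200]))  # roughly an order of magnitude larger
```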

In the word2vec case, by contrast, words can look very different yet have similar meanings, for example “frog” and “toad”.

Moreover, correlations between different stocks seem to be small, unlike the one-hot word vectors in the word2vec case, which are very sparse. I suspect that by shrinking the original 4×1000-dimensional vector down to 300 dimensions, you will get worse performance, not better.
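
For reference, the kind of projection I am skeptical of would look roughly like this. In practice the matrix would be learned; here a random matrix stands in, and the linear map itself is my assumption about how such an embedding would be implemented:

```python
import numpy as np

rng = np.random.default_rng(0)

# A learned embedding would use a trained projection matrix; a random one
# stands in here, assuming a simple linear map from 4000 down to 300 dims.
W = rng.normal(scale=1 / np.sqrt(4000), size=(4000, 300))

snapshot = rng.random(4000)   # one raw market vector
embedded = snapshot @ W       # the 300-dimensional "market embedding"
print(embedded.shape)         # (300,)
```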
