Lenny #2: Autoencoders and Word Embeddings
Lenny Khazan
Actually, the activations in any higher-level hidden layer will give you the same effect: they, too, map your input to a more meaningful representation that lies in some vector space…
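To make this concrete, here is a minimal sketch (with hypothetical, untrained weights and made-up dimensions) of treating a hidden layer's activations as a dense representation of the input:

```python
import numpy as np

# Minimal sketch: the hidden-layer activations of an encoder act as a
# learned representation ("embedding") of the input. Weights here are
# random stand-ins; in practice they come from training an autoencoder.
rng = np.random.default_rng(0)

input_dim, hidden_dim = 10, 3                     # e.g. vocab size -> embedding size
W_enc = rng.normal(size=(input_dim, hidden_dim))  # encoder weights (untrained)
b_enc = np.zeros(hidden_dim)

def hidden_activations(x):
    """Map an input vector to its hidden-layer activations."""
    return np.tanh(x @ W_enc + b_enc)

x = np.zeros(input_dim)
x[4] = 1.0                          # a one-hot input, like a word index
embedding = hidden_activations(x)
print(embedding.shape)              # a dense vector living in the hidden space
```

The key point is that nothing here is specific to the bottleneck layer: any hidden layer's activation vector is a point in some vector space, and training is what makes that space meaningful.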