While watching Siraj’s Seq2Seq livestream I realized how important it is to really understand embeddings. Their potential goes beyond words: you can use them on almost anything to shrink the scale of the problem you are tackling.

What is it?

Mapping a large set of unique items to a smaller space of dense vector representations.
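A minimal NumPy sketch of that idea (the toy vocabulary and dimensions below are my own, just for illustration): an embedding is simply a table that maps each unique item’s index to a dense vector.

```python
import numpy as np

# Toy vocabulary of 5 unique items, each mapped to a 3-dimensional vector.
vocab = ["cat", "dog", "car", "bus", "tree"]
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 3))  # shape (5, 3)

# "Embedding" an item is just looking up its row by index.
dog_vector = embedding_matrix[vocab.index("dog")]
print(dog_vector.shape)  # (3,)
```

The vectors start out random; training is what moves similar items close together.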

Why do we need it?

We need it to decrease the size of the neural network we are going to build. For example, take the most basic feed-forward net with one hidden layer of size Y and an input layer of size N (one unit per unique item). We would need N×Y weights (ignoring biases for now) to represent that first layer’s connectivity. But the same connectivity can be captured with far fewer parameters, because the uniqueness of individual words is usually not important. What matters are clusters of words with the same meaning for the problem we are trying to solve.
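To put rough numbers on that claim (the sizes below are toy choices of mine, not from the video): with one-hot inputs the first layer needs N×Y weights, while an embedding of dimension d needs only N×d entries for the lookup table plus d×Y weights for the layer.

```python
# Toy sizes: vocab N, hidden layer Y, embedding dimension d.
N, Y, d = 10_000, 512, 64

one_hot_params = N * Y             # direct one-hot -> hidden connectivity
embedding_params = N * d + d * Y   # lookup table + embedded -> hidden layer

print(one_hot_params)    # 5120000
print(embedding_params)  # 672768
```

As long as d is much smaller than both N and Y, the embedded version wins by a wide margin.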

Syntax — TensorFlow

Taken from Siraj’s lesson

embeddings = tf.Variable(
    tf.random_uniform([vocab_size, input_embedding_size],
                      -1.0, 1.0), dtype=tf.float32)
encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, encoder_inputs)

The tf.nn.embedding_lookup call adds the layer that turns input ids into rows of the embedding matrix, which we then feed to our neural net. During training those rows get adjusted, so the net learns which clusters of inputs behave alike. BTW it can be any nn… In the video it’s fed to an RNN (the encoder of the seq2seq model).
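Under the hood the lookup is basically row indexing into the matrix. Here is a NumPy sketch of the same operation (shapes are made up for the example):

```python
import numpy as np

vocab_size, input_embedding_size = 6, 4
rng = np.random.default_rng(1)
embeddings = rng.uniform(-1.0, 1.0, size=(vocab_size, input_embedding_size))

# A batch of 2 sequences, each 3 token ids long.
encoder_inputs = np.array([[2, 0, 5],
                           [1, 1, 3]])

# Same effect as tf.nn.embedding_lookup(embeddings, encoder_inputs):
# each id is replaced by its row of the embedding matrix.
encoder_inputs_embedded = embeddings[encoder_inputs]
print(encoder_inputs_embedded.shape)  # (2, 3, 4)
```

The id tensor keeps its shape and just gains a trailing embedding dimension.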

Pretrained solutions

I also found this video with a dedicated focus on word embeddings — How to Make Word Vectors from Game of Thrones (LIVE). It references nltk.org for tokenizing sentences. There are pre-trained models available, but those are for word embeddings only — what if you want to reduce the dimensionality of something other than words? I don’t want this story to be about that.
