On each run the batch data is fed to the placeholders, which are “starting nodes” of the computational graph
How to build a Recurrent Neural Network in TensorFlow (1/7)
Erik Hallström


First of all, thanks for this wonderful tutorial and all the effort you put into it!

I’m quite new to neural nets and have a question about the use of batches in your code. As far as I understand the concept of batch training, the goal is to train a neural net on a set of input vectors at once, instead of applying the training step to each input vector separately. So basically you gather a specific number of input vectors (= batch size), calculate the output and the squared error for each input vector (supervised learning), and use the average of these errors as the basis for gradient descent.
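To make sure I’m describing the same thing you mean, here is a minimal NumPy sketch of that understanding of batch training, using a hypothetical one-weight linear model (the names batch_x, batch_y, w and lr are my own, not from your code):

```python
import numpy as np

# Hypothetical tiny model y = w * x, trained on a single batch.
batch_x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # batch_size = 5 input vectors
batch_y = 2.0 * batch_x                        # supervised targets

w = 0.0    # single trainable weight
lr = 0.05  # learning rate

for step in range(200):
    pred = w * batch_x                 # outputs for the whole batch at once
    err = pred - batch_y
    loss = np.mean(err ** 2)           # average squared error over the batch
    grad = np.mean(2 * err * batch_x)  # gradient of that averaged loss
    w -= lr * grad                     # one gradient-descent step per batch

print(round(w, 3))  # converges toward the true weight 2.0
```

The point being: one parameter update per batch, driven by the averaged error, rather than one update per input vector.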

If I’m understanding your code correctly, you do something different. It appears to me that you build up something like a parallel structure: five neural nets with the same purpose, bound together in one single net. Let me explain why I came to this conclusion. The network’s purpose is to create an echo of the input, i.e. a shifted copy. Normally I’d expect a net with a single input and a single output. But if I read you right, you have 5 inputs and 5 outputs (or 3, as your figures suggest), and each output is the echo of one input. Don’t they all do the same thing? So in the end you just split up your training data to train these 5 sub-networks?
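Concretely, the data handling I think I’m seeing could be sketched like this (the variable names total_length and batch_size are my assumptions, not taken from your code):

```python
import numpy as np

# One long input series cut into batch_size parallel sub-series, which are
# then fed side by side -- what I described as 5 "sub-networks" above.
total_length = 20
batch_size = 5

series = np.arange(total_length)             # one long sequence: 0..19
parallel = series.reshape((batch_size, -1))  # 5 rows, each its own sub-series

print(parallel.shape)  # (5, 4)
print(parallel[0])     # [0 1 2 3] -- the slice the first "sub-network" sees
```

So each row would be trained on its own chunk of the original series, and that is the part that confuses me.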

If I got you right, I don’t see why you’d do this… so I think I completely misunderstood something here xD Could you help me out?

Like what you read? Give Daniel a round of applause.