Claude Coulombe
Aug 27, 2017

Greetings, I’m happy to know that someone appreciated my first story on Medium. Thank you for your comments and careful reading. ;-)

My answer:

  1. You’re right! The neural network is working very hard to get results on this ridiculously small dataset of 4 points. At first, I set up training for 10 000 epochs and the results were often off target, which is why I increased the number of epochs to 100 000. Furthermore, as you suggested, I tried another initialization scheme for the weights; in fact just one: `tf.random_uniform(…)` in place of `tf.truncated_normal(…)` (see the first sketch after this list). That said, I did not push my experiments further in that direction. Do you have any suggestions to share?
  2. So I put the emphasis on testing different loss functions, which are commented out in the code. You’re right: with the commented loss function `sigmoid_cross_entropy_with_logits(…)`, applying a sigmoid first, as in `y_estimated = tf.sigmoid(tf.add(tf.matmul(h,w),b))`, is not required, because the function already applies a sigmoid internally, as its name suggests. Nevertheless it still works, since two sigmoids in that context do little harm. Above all, that explicit sigmoid helps a lot with the other loss functions commented in the code, without requiring many changes (see the second sketch below).
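Here is a minimal sketch (TensorFlow 1.x) of the two weight initializations I compared. The `[2, 2]` shape assumes the usual 2-2-1 XOR architecture; the names and shapes here are illustrative, not the story’s exact code:

```python
import tensorflow as tf

# Truncated normal: values beyond two standard deviations are re-drawn,
# so no weight starts deep in the sigmoid's flat regions.
w_h = tf.Variable(tf.truncated_normal([2, 2], stddev=0.1))

# The alternative I tried: same shape, drawn uniformly from [-1, 1].
w_h_uniform = tf.Variable(tf.random_uniform([2, 2], minval=-1.0, maxval=1.0))
```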

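And a minimal sketch of the logits question. The names `h`, `w`, `b` and `y` mirror the original code, but the placeholder shapes are illustrative assumptions:

```python
import tensorflow as tf

h = tf.placeholder(tf.float32, shape=[None, 2])  # hidden-layer activations
y = tf.placeholder(tf.float32, shape=[None, 1])  # XOR targets
w = tf.Variable(tf.truncated_normal([2, 1], stddev=0.1))
b = tf.Variable(tf.zeros([1]))

logits = tf.add(tf.matmul(h, w), b)

# sigmoid_cross_entropy_with_logits applies the sigmoid internally,
# so it should receive the raw logits, not sigmoid(logits):
loss_xent = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))

# The explicit sigmoid remains useful as the network's output, and for
# the other commented losses that expect probabilities, e.g. squared error:
y_estimated = tf.sigmoid(logits)
loss_mse = tf.reduce_mean(tf.squared_difference(y_estimated, y))
```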
Thank you again for your careful reading and your vigilance.
