Shenrui Pan
Aug 24, 2017

Thank you for this detailed blog, Manish. If I understand correctly, you trained your stacked convolutional autoencoder as a whole: you compute the total reconstruction loss and backpropagate it through every layer, so the Adam optimizer adjusts all the weights at once. Would it be possible to train a similar network layer by layer instead, which is how I originally understood stacked autoencoders to work? That is, train conv1+pool1 first (then fix those weights), then conv2+pool2, and then conv3+pool3. Thanks!
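To make concrete what I mean, here is a rough sketch of the layer-by-layer scheme, written in PyTorch purely for illustration (the framework, layer sizes, and channel counts are my own assumptions, not taken from your post): each conv+pool stage gets its own small throwaway decoder, is trained to reconstruct its own input, and its weights are then frozen before the next stage is trained on its outputs.

```python
import torch
import torch.nn as nn

def make_stage(in_ch, out_ch):
    # Encoder half of one stage: conv + pool, mirroring the
    # conv1+pool1 / conv2+pool2 / conv3+pool3 pattern above.
    enc = nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )
    # Decoder used only while training this stage, then discarded.
    dec = nn.Sequential(
        nn.Upsample(scale_factor=2),
        nn.Conv2d(out_ch, in_ch, kernel_size=3, padding=1),
    )
    return enc, dec

def train_stage(enc, dec, inputs, epochs=5):
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dec(enc(inputs)), inputs)  # reconstruct this stage's own input
        loss.backward()
        opt.step()
    for p in enc.parameters():  # fix the weights before training the next stage
        p.requires_grad_(False)

x = torch.randn(16, 1, 32, 32)   # toy batch; the sizes here are arbitrary
channels = [1, 16, 32, 64]       # three stages, i.e. conv1..conv3
feats, encoders = x, []
for in_ch, out_ch in zip(channels, channels[1:]):
    enc, dec = make_stage(in_ch, out_ch)
    train_stage(enc, dec, feats)
    with torch.no_grad():
        feats = enc(feats)       # frozen outputs become the next stage's input
    encoders.append(enc)
```

After the loop, nn.Sequential(*encoders) would give the pretrained encoder stack, which could then be fine-tuned end-to-end (after re-enabling requires_grad) the way you trained yours.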