An augmentation based deep neural network approach to learn human driving behavior
Vivek Yadav

I am wondering: when you apply the horizontal and vertical shifts, do you fill the newly exposed edge pixels with something? I thought about going with that approach, but I was afraid the model would simply learn to use the amount of black padding around the edges as a cue for the steering adjustment, and of course that padding won't be present at runtime.
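For context, here is a minimal sketch of the kind of shift augmentation being discussed. This is not the article's code; the function name and the zero (black) border fill are assumptions, chosen to illustrate exactly the padding the comment is worried about:

```python
import numpy as np

def shift_image(img, dx, dy, fill=0):
    """Translate an image by (dx, dy) pixels.

    Positive dx shifts right, positive dy shifts down. The vacated
    border pixels are set to `fill` (black by default) -- the padding
    the comment suggests a model could learn to exploit as a shortcut.
    """
    h, w = img.shape[:2]
    out = np.full_like(img, fill)
    # Overlapping region between the shifted image and the frame.
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    out[dst_y, dst_x] = img[src_y, src_x]
    return out
```

One common way to avoid a solid-color border cue is to fill the exposed strip by replicating the nearest edge pixels instead of using a constant value, so the augmented images lack an artificial black band.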
