An Oversimplified History of Machine Learning: Part 2

VGG Architecture
Two stacked 3x3 convolutions cover the same receptive field as one 5x5 convolution. However, the 5x5 convolution uses 25 weights per input/output channel pair, while the two 3x3 convolutions use only 18.
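A quick back-of-the-envelope check of this claim, counting weights per input/output channel pair and ignoring biases:

```python
# Weights per (input, output) channel pair, biases ignored.
p_5x5 = 5 * 5             # one 5x5 convolution: 25 weights
p_3x3_pair = 2 * (3 * 3)  # two stacked 3x3 convolutions: 18 weights

# Receptive field of two stacked stride-1 3x3 convolutions: 3 + (3 - 1) = 5
receptive_field = 3 + (3 - 1)

print(p_5x5, p_3x3_pair, receptive_field)  # 25 18 5
```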
Schema of ResNet-34 in comparison to VGG; the ResNet residual block computes y = F(x) + x
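A minimal NumPy sketch of the identity shortcut, with a toy ReLU-activated linear map standing in for the block's convolutions (the function and shapes here are illustrative, not the paper's exact block):

```python
import numpy as np

def residual_block(x, weights):
    """Toy residual block: output = F(x) + x."""
    fx = np.maximum(0.0, x @ weights)  # F(x), standing in for the conv layers
    return fx + x                      # identity shortcut: add the input back

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
y = residual_block(x, rng.standard_normal((4, 4)))

# If F collapses to zero, the block reduces to the identity mapping,
# which is what makes very deep networks easy to optimize:
assert np.allclose(residual_block(x, np.zeros((4, 4))), x)
```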
Parameters = Kernel Height x Kernel Width x Input Feature Maps x Filters (plus one bias per filter)
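The formula as a small helper; the example layer (VGG's first convolution: 3x3 kernels over 3 input channels, 64 filters) is used here for illustration:

```python
def conv_params(kernel_h, kernel_w, in_maps, filters, bias=True):
    """Weights = kernel_h * kernel_w * in_maps * filters, plus one bias per filter."""
    return kernel_h * kernel_w * in_maps * filters + (filters if bias else 0)

# VGG's first layer: 3x3 kernels, 3 input channels (RGB), 64 filters
print(conv_params(3, 3, 3, 64))  # 1792
```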
DenseNet schema, with skip connections linking every layer to all subsequent layers
Concatenation: the outputs of Layer 1, Layer 2, and Layer 3 are joined as [1, 2, 3]
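In NumPy terms, using toy feature maps of shape (channels, height, width):

```python
import numpy as np

# Toy feature maps from three layers, all with the same spatial size
layer1 = np.ones((2, 4, 4)) * 1
layer2 = np.ones((3, 4, 4)) * 2
layer3 = np.ones((4, 4, 4)) * 3

# DenseNet feeds each layer the concatenation of all preceding outputs,
# joined along the channel axis (channel counts add up):
dense_input = np.concatenate([layer1, layer2, layer3], axis=0)
print(dense_input.shape)  # (9, 4, 4)
```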
Inception module: adding width (parallel convolutions of different sizes) rather than depth
InceptionNet
Inception-v3 module: factorized convolutions with equivalent receptive fields
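One such factorization replaces an n x n convolution with a 1 x n followed by an n x 1 convolution: the receptive field stays the same while the weight count drops sharply. A sketch of the arithmetic (per channel pair, biases ignored):

```python
n = 7
full = n * n      # one 7x7 convolution: 49 weights
factored = n + n  # a 1x7 followed by a 7x1 convolution: 14 weights

print(full, factored)  # 49 14
```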
Inception-ResNet Convolutional Module
Inception-ResNet Reduction Module
Comparison of a ResNet block and a ResNeXt block with cardinality 32; the number of parameters used is similar
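The parameter parity can be checked with the example blocks from the ResNeXt paper (weights only, biases ignored):

```python
# ResNet bottleneck: 256 -> 1x1 -> 64 -> 3x3 -> 64 -> 1x1 -> 256
resnet = 256 * 64 + 3 * 3 * 64 * 64 + 64 * 256

# ResNeXt block, cardinality 32, width 4: 256 -> 128 -> 128 -> 256,
# with the 3x3 convolution split into 32 groups of 4 channels each
resnext = 256 * 128 + 32 * (3 * 3 * 4 * 4) + 128 * 256

print(resnet, resnext)  # 69632 70144 -- within about 1% of each other
```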
General schema for NASNet before the searched cells are constructed
End product of NASNet training
  1. Dekhtiar, Jonathan. “Why Convolutions Always Use Odd-Numbers as filter_size.” Data Science Stack Exchange, datascience.stackexchange.com/questions/23183/why-convolutions-always-use-odd-numbers-as-filter-size.
  2. Dertat, Arden. “Applied Deep Learning — Part 4: Convolutional Neural Networks.” Medium, Towards Data Science, 13 Nov. 2017, towardsdatascience.com/applied-deep-learning-part-4-convolutional-neural-networks-584bc134c1e2#9a7a.
  3. Despois, Julien. “Memorizing Is Not Learning! — 6 Tricks to Prevent Overfitting in Machine Learning.” Hackernoon, 20 Mar. 2018, hackernoon.com/memorizing-is-not-learning-6-tricks-to-prevent-overfitting-in-machine-learning-820b091dc42.
  4. He, Kaiming, et al. “Deep Residual Learning for Image Recognition.” arXiv:1512.03385, Cornell University. 10 Dec. 2015, https://arxiv.org/abs/1512.03385
  5. Huang, Gao, et al. “Densely Connected Convolutional Networks.” arXiv:1608.06993, Cornell University. 25 Aug. 2016, https://arxiv.org/abs/1608.06993?source=post_page
  6. Krizhevsky, Alex, et al. “ImageNet Classification with Deep Convolutional Neural Networks.” 2012 NIPS Proceedings Beta, https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks
  7. Simonyan, Karen, et al. “Very Deep Convolutional Networks for Large-Scale Image Recognition.” arXiv:1409.1556, Cornell University. 4 Sep. 2014, https://arxiv.org/abs/1409.1556
  8. Szegedy, Christian, et al. “Going Deeper with Convolutions.” arXiv:1409.4842, Cornell University. 17 Sep. 2014, https://arxiv.org/abs/1409.4842
  9. Szegedy, Christian, et al. “Rethinking the Inception Architecture for Computer Vision.” arXiv:1512.00567, Cornell University. 2 Dec. 2015, https://arxiv.org/abs/1512.00567
  10. Szegedy, Christian, et al. “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning.” arXiv:1602.07261, Cornell University. https://arxiv.org/abs/1602.07261
  11. Xie, Saining, et al. “Aggregated Residual Transformations for Deep Neural Networks.” arXiv:1611.05431, Cornell University. https://arxiv.org/abs/1611.05431
  12. Zoph, Barret, et al. “Learning Transferable Architectures for Scalable Image Recognition.” arXiv:1707.07012, Cornell University. https://arxiv.org/abs/1707.07012
