What if I train a deep neural network with random data?

Fra Gadaleta
HackerNoon.com
2 min read · Nov 12, 2019


Listen to the episode

This post has been published as a podcast episode on Data Science at Home. You can listen to the full episode here.

Would you train a neural network with random data? And are massive neural networks just lookup tables, or do they truly learn something?

Today’s episode is about memorisation and generalisation in deep learning, with Stanislaw Jastrzębski. Stan works as a postdoc at New York University. His research interests include:

  • Understanding and improving how deep networks generalise
  • Representation Learning
  • Natural Language Processing
  • Computer Aided Drug Design

What makes deep learning unique?

I asked Stan a few questions I had been seeking answers to for a long time. For instance, what does deep learning bring to the table that other methods don’t, or simply aren’t capable of?
Above all, Stan believes that the one thing that makes deep learning special is representation learning. It turns out that the competing methods, be they kernel machines or random forests, do not have this capability.

Moreover, optimisation (Stochastic Gradient Descent) lies at the heart of representation learning, in the sense that it is what allows the network to find good representations.
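
At its core, that optimiser is just a repeated, noisy update of the parameters. Here is a minimal sketch in plain Python with NumPy (the function name and learning rate are illustrative, not from the episode):

    import numpy as np

    def sgd_step(params, grads, lr=0.01):
        # One SGD update: theta <- theta - lr * gradient of the mini-batch loss
        return [p - lr * g for p, g in zip(params, grads)]

Repeating this step over random mini-batches is the whole mechanism that gradually shapes the network’s internal representations.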

What really improves the training quality of a neural network?

We discussed how the accuracy of a neural network depends largely on how good Stochastic Gradient Descent is at finding minima of the loss function.
What influences such minima?
Stan’s answer revealed that training-set accuracy, or the loss value itself, is actually not that interesting. For instance, it is relatively easy to overfit the data (i.e. achieve the lowest possible loss), provided a large enough network and a large enough computational budget.
However, the shape of the minima and the performance on validation sets are influenced by optimisation in quite fascinating ways. In other words, optimisation at the beginning of the trajectory steers that trajectory towards minima with properties that go far beyond training accuracy.

So what happens if you train a neural network with random data?
Stan’s answer, while not shocking, carries a very important message to keep in mind.
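
For anyone who wants to try the experiment themselves, here is a minimal sketch of the classic random-label setup (PyTorch; the architecture, sizes and hyperparameters are arbitrary choices for illustration, not taken from the episode):

    import torch
    import torch.nn as nn

    # Random inputs and random labels: there is no structure to generalise from
    X = torch.randn(1024, 32)
    y = torch.randint(0, 10, (1024,))

    # An over-parameterised MLP, deliberately large relative to the dataset
    model = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(2000):      # full-batch updates, for simplicity
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    print(loss.item())  # typically close to zero: the labels have been memorised

Given enough capacity and compute, the training loss usually approaches zero even though the labels are pure noise: the network memorises rather than learns, and accuracy on fresh random data stays at chance level.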

The future of AI

Finally, we speak about the future of AI and the role deep learning will play.
Stan’s opinion is very measured compared to what many journalists and experts dare to claim about Artificial General Intelligence, killer robots and agents with superhuman capabilities.

We hope you enjoy the show!

Don’t forget to join the conversation on our new Discord channel. See you there!

Originally published at https://datascienceathome.com on November 12, 2019.

