Emily

37 Following
7 Followers

Highlighted by Emily


From Rohan & Lenny #1: Neural Networks & The Backpropagation Algorithm, Explained by Rohan Kapur

… get to the bottom. Also, notice that the cost function is parameterized by our network’s weights — we control our loss function by changing the weights.
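As a sketch of the idea in this highlight, one common cost function (mean squared error; the highlighted article may use a different one) written explicitly as a function of the weights $w$, so that changing $w$ changes the loss over the $n$ training examples $x$ with targets $y(x)$ and network outputs $\hat{y}(x; w)$:

$$C(w) = \frac{1}{2n}\sum_{x}\bigl\|\,y(x) - \hat{y}(x;\, w)\,\bigr\|^{2}$$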

From Rohan & Lenny #1: Neural Networks & The Backpropagation Algorithm, Explained by Rohan Kapur

…networks. Fundamentally, neural networks are nothing more than really good function approximators — you give a trained network an input vector, it performs a series of operations, and it produces an output vector. To train our network to estimate an unknown function, we give it a collection of data points — whic…
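A minimal sketch of the "function approximator" view from this highlight: an input vector goes through a series of operations and comes out as an output vector. The single hidden layer, tanh nonlinearity, layer sizes, and random weights below are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: 3 inputs -> 4 hidden units -> 2 outputs.
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

def network(x):
    """Map an input vector to an output vector via a series of operations."""
    hidden = np.tanh(W1 @ x + b1)   # affine transform + nonlinearity
    return W2 @ hidden + b2         # affine readout

# Feed in a 3-element input vector; get back a 2-element output vector.
print(network(np.array([0.5, -1.0, 2.0])))
```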

From Rohan & Lenny #3: Recurrent Neural Networks & LSTMs by Rohan Kapur

…o Stanford University as an undergrad student. A few months ago, I achieved this goal. At Stanford, I’ll probably be studying Symbolic Systems, which is a program that explores both the humanities and STEM to inform an understanding of artificial intelligence and the nature of minds. Needless to say, A Year of AI will continue to document the new things I learn 😀.

Claps from Emily


Experts Say Your Fingers Can Type 24/7, Forever, Until Your Dying Breath

steve rousseau

Anti-Abortion Lawmakers Have No Idea How Women’s Bodies Work

Jessica Valenti

What Do 90-Somethings Regret Most?

Lydia Sohn