Keeping it simple: My kudos to Andrew Ng

Reflections & observations from Andrew Ng’s revolutionary Deep Learning Specialization

“Everything should be made as simple as possible, but not simpler”
-Albert Einstein

Keeping it simple really helps. I am really impressed by the way Andrew Ng has conducted the two specializations on Machine Learning and Deep Learning. Hardly anyone would debate his teaching prowess. This is my way of saying thank you for all that Andrew continues to do for the field of AI and Deep Learning, and for making learning a joyous ride.

1. The ‘Cat’ classifier:

Most who have taken the Deep Learning specialization will have repeatedly come across examples of cats cropping up amidst the various deep learning methodologies Andrew talks about. With the seemingly simple example of identifying cats in pictures, Andrew covers the concept of image recognition, and expands its range to help us understand the power of neural networks, the problem of overfitting, and the intuition behind Bayes error.
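That first cat-vs-not-cat exercise boils down to binary classification. As a rough illustration only (a minimal NumPy sketch under my own assumptions, not the course's actual assignment code), a logistic-regression classifier of that kind looks something like this:

```python
import numpy as np

def sigmoid(z):
    """Squash scores into (0, 1) probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, iters=1000):
    """Gradient descent on the cross-entropy loss.
    X: (n_features, m) examples as columns; y: (1, m) labels in {0, 1}.
    Shapes follow the column-per-example convention used in the course."""
    n, m = X.shape
    w = np.zeros((n, 1))
    b = 0.0
    for _ in range(iters):
        a = sigmoid(w.T @ X + b)          # predicted probabilities, shape (1, m)
        dz = a - y                        # gradient of the loss w.r.t. the scores
        w -= lr * (X @ dz.T) / m          # average gradient over the m examples
        b -= lr * dz.sum() / m
    return w, b

def predict(w, b, X):
    """Threshold the probability at 0.5: 1 = 'cat', 0 = 'not cat'."""
    return (sigmoid(w.T @ X + b) > 0.5).astype(int)
```

In practice each image would be flattened into one column of `X`; the point of the course is that this tiny model already motivates everything that follows, from deeper networks to the bias/variance discussion.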

2. Adhering to the ‘Peak-End’ rule:

What was really interesting was that in most videos he left us with a summary of what he had covered in the 10–12 minute snippet. I think this really improves the rate at which we are able to synthesize information. In psychology, this is called the Peak-End rule: people judge and remember an experience largely by its most intense point and its end. One thing missing compared to the ML specialization was the occasional in-video pop quiz. I guess that was more than made up for by the programming assignments, which were again a standout.

3. Attributing references:

It might seem an intuitive thing to do, but it is most often overlooked. With a treasure trove of research articles, blogs and documentation to guide us through every step of learning, we often fail to attribute where the basis of our ideas came from. Two things inspired me during the journey.

A. The groundbreaking research in Artificial Intelligence and neural networks being carried out in areas ranging from debiasing AI to CNNs and RNNs.

B. Andrew’s guidance on how to approach these references, and his thorough acknowledgement of such path-breaking work. I think this was a huge takeaway for me, as I increasingly see posts directly ripped off from others’ work without an iota of acknowledgement.

He also makes it a point to frequently credit his team members (course instructors and teaching assistants) and the people who have inspired him on his journey.

Take a bow, Andrew Ng. You are a real hero.

Kishan
