#1 NeuroAI Records: Origin of Deep Learning Revolution

Deulkardr
Dec 25, 2023

The story begins in 1956, the year a summer conference at Dartmouth brought together the computer scientists who founded the field of Artificial Intelligence. Their goal was to write a program that could perform as intelligently as a human.
Solving that problem turned out to be nowhere near as easy as it seemed.

We are still solving that very problem! 🔬

Let us start with the problem of vision. It is still a mystery how a 2-year-old can recognize a cat after seeing one just once. In computer vision, by contrast, we rely on a very large dataset called ImageNet, with more than 20,000 categories and some 14 million images, and creating a new category required a couple of person-years of work. We humans do it without any effort.

The only existence proof that the problem of vision can be solved is the fact that nature has already solved it.

So another way of approaching the problem is to learn from that existing solution itself: the BRAIN.

➯1950s & 60s: The Dawn of Neural Networks

The fields of neuroscience and engineering were long thought of as separate, until scientists started to cross between them. In 1958, the American psychologist Frank Rosenblatt introduced the concept of the perceptron: the simplest embodiment of what a neuron could be.

The perceptron takes a set of input values, multiplies each by a weight, sums them up, and then passes this sum through an activation function to produce an output. In essence, it’s used for binary classification tasks, determining whether an input belongs to one class or another based on the calculated output. The perceptron adjusts its weights based on the errors in its predictions, learning from training data to improve its accuracy over time.
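To make this concrete, here is a minimal sketch of a perceptron in plain NumPy. The AND dataset, learning rate, and epoch count are illustrative choices of mine, not part of Rosenblatt's original formulation.

```python
import numpy as np

def step(z):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron learning rule: nudge the weights by the prediction error."""
    w = np.zeros(X.shape[1])   # one weight per input
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = step(np.dot(w, xi) + b)
            error = target - pred          # -1, 0, or +1
            w += lr * error * xi           # move the weights toward the target
            b += lr * error
    return w, b

# Illustrative, linearly separable task: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b = train_perceptron(X, y)
print([step(np.dot(w, xi) + b) for xi in X])   # -> [0, 0, 0, 1]
```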

🗘Limitations of the Perceptron:
In their 1969 book Perceptrons, Minsky and Papert proved that the perceptron has notable limitations.

Primarily, it struggles with problems that are not linearly separable: it cannot classify data points correctly when no straight line (or hyperplane) can separate the classes. The classic demonstration is the XOR problem, where the data is not linearly separable. Additionally, a single perceptron is limited to binary classification tasks.
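You can see this directly by running the same learning rule on the XOR truth table. This is again an illustrative sketch with arbitrary training settings: because no single line separates the two classes, at least one of the four points always ends up misclassified.

```python
import numpy as np

step = lambda z: 1 if z >= 0 else 0

# XOR truth table: the positive points (0,1) and (1,0) cannot be split
# from (0,0) and (1,1) by any single straight line.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(100):                       # far more epochs than AND needed
    for xi, target in zip(X, y):
        error = target - step(np.dot(w, xi) + b)
        w += lr * error * xi
        b += lr * error

preds = [step(np.dot(w, xi) + b) for xi in X]
print(preds)   # never matches [0, 1, 1, 0]; at least one point stays wrong
```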

➯1980s: A Paradigm Shift with the Boltzmann Machine

The learning algorithm of the Boltzmann Machine (1983), developed by Geoffrey Hinton and Terry Sejnowski, marked a significant advance toward what would become deep learning. It overcame the limitations of perceptron learning, and it did so elegantly and in a biologically plausible way. The only problem was that it was slow.

The introduction of the Boltzmann Machine marked a pivotal shift, using probabilistic activations to address non-linear separability.
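As a rough illustration of the idea, not of Hinton and Sejnowski's full learning algorithm, the sketch below shows the two ingredients involved: an energy function over binary units and a probabilistic (sigmoid) activation used for stochastic, Gibbs-style updates. The network size and weight values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(s, W, b):
    """Energy of a binary state vector s under symmetric weights W and biases b."""
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(s, W, b, temperature=1.0):
    """Update each unit stochastically: p(unit on) is a sigmoid of its net input."""
    s = s.copy()
    for i in range(len(s)):
        net = W[i] @ s + b[i]               # input from all other units (W[i, i] = 0)
        p_on = 1.0 / (1.0 + np.exp(-net / temperature))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

# Tiny illustrative network: 4 units, symmetric weights, no self-connections
n = 4
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(0, 0.1, n)

s = rng.integers(0, 2, n).astype(float)
for _ in range(10):
    s = gibbs_step(s, W, b)
print("state:", s, "energy:", round(float(energy(s, W, b)), 3))
```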

➯The 2000s: The Rise of Deep Learning

The 2000s marked a significant turning point in the field of artificial intelligence, particularly with the rise of deep learning.

Deep learning is primarily built on neural networks. These networks comprise layers of interconnected nodes, or “neurons,” which process and transmit information. The “deep” in deep learning refers to the presence of multiple such layers, enabling the extraction and learning of complex patterns in data.

The resurgence of neural networks and deep learning can be attributed to several key factors:

1. Increased Computational Power: The development of powerful GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) provided the necessary computational capacity to process large datasets and complex algorithms efficiently.

2. Big Data: The explosion of digital data provided the vast amounts of information required to train deep learning models.

3. Algorithmic Advances: Improvements in algorithms, such as backpropagation and various optimization techniques, enhanced the learning capabilities of neural networks (a minimal sketch follows this list).
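Here is that sketch: a tiny two-layer network trained with backpropagation on XOR, the very problem a single perceptron could not solve. The layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR: not linearly separable, but learnable with one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden (2 -> 4) and hidden -> output (4 -> 1)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Backward pass: propagate the error back through each layer
    d_out = (out - y) * out * (1 - out)    # gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden layer

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically converges toward [0, 1, 1, 0]
```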

One of the most notable applications of deep learning is in the development of voice assistants such as Siri and Alexa. These assistants utilize deep learning for natural language processing (NLP) and speech recognition.

Siri, for instance, uses deep neural networks to convert speech into text, understand the context, and respond in a human-like manner.

Key contributors to these advancements include Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, often referred to as the “Godfathers of Deep Learning.”

➯2012: The ImageNet Revolution

The ImageNet Revolution in 2012 marked a significant milestone in the field of deep learning and computer vision, fundamentally altering the trajectory of AI research and development.

On December 4th, alongside Alex Krizhevsky and Ilya Sutskever, Hinton presented their work on training a large, deep convolutional neural network to classify images from the ImageNet dataset, achieving significantly better results than previous methods.

ImageNet Classification with Deep Convolutional Neural Networks (Krizhevsky, Sutskever, and Hinton, 2012)
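The paper itself is the place to go for the architectural details. Purely as an illustration of the basic building block a convolutional network stacks many times over, here is a single 2-D convolution step in NumPy, with an arbitrary edge-detecting kernel of my choosing.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image, no padding."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative input: a 6x6 image whose right half is bright
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Arbitrary example kernel: responds to vertical edges
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

print(conv2d(image, kernel))   # strong response only along the dark-to-bright edge
```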

Additionally, on December 6th, Hinton, alongside George Dahl, gave an invited talk titled “Dropout: A simple and effective way to improve neural networks” during Oral Session 10. Hinton’s contributions, particularly in pioneering techniques like backpropagation and his work in deep learning, have been instrumental in advancing the field.
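The idea behind dropout is simple enough to sketch: during training, randomly silence a fraction of the hidden units so the network cannot rely too heavily on any one of them. The snippet below uses the common “inverted” scaling variant; the drop rate and array shapes are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Randomly zero out a fraction p_drop of units during training.

    Uses 'inverted' dropout: surviving units are scaled by 1 / (1 - p_drop)
    so the expected activation is unchanged and no rescaling is needed at test time.
    """
    if not training or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((2, 8))                 # pretend hidden-layer activations
print(dropout(h, p_drop=0.5))       # roughly half the units zeroed, the rest scaled to 2.0
```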

➯Deep Learning in the Real World

The real-world applications of deep learning have been vast and transformative across various industries. Here are some key areas where deep learning has made significant impacts:

1. Healthcare and Medicine: Deep learning algorithms are increasingly used in diagnostics, such as analyzing medical images for early detection of diseases like cancer. They’re also instrumental in drug discovery and personalized medicine, offering tailored treatment plans based on individual genetic makeup.

2. Automotive Industry: The development of autonomous vehicles heavily relies on deep learning for object detection, scene understanding, and decision-making. This technology is not just limited to cars but extends to drones and other autonomous systems.

3. Finance: In the financial sector, deep learning aids in fraud detection, risk management, and algorithmic trading, where it analyzes large volumes of data to predict market trends and identify anomalies.

4. Retail and E-Commerce: Personalized shopping experiences, recommendation systems, and inventory management in the retail sector have been revolutionized by deep learning algorithms, enhancing customer satisfaction and operational efficiency.

5. Natural Language Processing (NLP): Deep learning has significantly advanced the capabilities of machines in understanding and generating human language. This advancement is evident in translation services, chatbots, and sentiment analysis tools used across various platforms.

6. Entertainment and Media: In the realm of entertainment, deep learning algorithms are used for content recommendation on streaming platforms, creating realistic visual effects in movies, and even generating music and art.

The continuous evolution of deep learning is not without challenges. Issues like the need for large datasets, the risk of bias in training data, interpretability of deep learning models, and the computational cost remain areas of active research and development. Nonetheless, the potential of deep learning to transform industries and improve human life remains undeniable, making it one of the most exciting fields in the realm of artificial intelligence.

Till then, enjoy investigating 🔬 and exploring!
