UNDERSTANDING NATURE-INSPIRED AI (NIA)

Dhruv Tyagi
9 min read · Jan 24, 2024

--

Reference Paper — (Read Paper Here!)

This blog explains the Research Paper — Nature Inspired AI (NIA) by Gagandeep Reehal, Co-Founder, CEO & CTO at Minus Zero Robotics Pvt Ltd, India.

INTRODUCTION

The current methods used for self-driving cars are almost impossible to train on every possible real-world situation. As a result, they often struggle with new, unexpected scenarios, which can lead to erratic actions and even accidents.

This paper suggests a new approach, inspired by nature and the functioning of the human brain, to solve this issue.

CURRENT CHALLENGES

Let’s look at the various challenges with current AI methodologies in the domain of autonomous vehicles (AVs):

1. The Broken Loop

Traditional self-driving car models depend upon a closed-loop system in which the sensors constantly feed information back to the processor. This system is exceptionally sensitive to changes in the surroundings, such as the weather or the road conditions. As a result, these models often get confused and make mistakes.

Figure : The closed loop for (a) autonomous vehicles and (b) a human driver

In contrast, human drivers learn from their experiences and adapt to new situations. This is because humans are not limited to the information that is provided by their senses. They can also use their understanding of the world to make inferences and decisions.

Thus, nature-inspired AI models could be used to create self-driving cars that take actions similar to humans and learn from their experiences to adapt better to new situations.
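The sense → process → act loop described above can be sketched in a few lines. Everything here (the sensor field, the braking threshold, the action names) is a hypothetical stand-in for illustration, not the paper’s actual pipeline:

```python
# A minimal sketch of the sense -> process -> act closed loop. The
# sensor reading, decision rule, and actions are illustrative stand-ins.

def sense(world_state):
    # A sensor sees only a partial view of the world.
    return world_state["obstacle_distance"]

def process(reading, safe_distance=10.0):
    # Decide an action purely from the current reading.
    return "brake" if reading < safe_distance else "cruise"

def closed_loop(world_states):
    # Each cycle feeds the latest sensor reading straight back into the
    # controller; there is no memory of earlier cycles, which is why the
    # loop is so sensitive to changes in the surroundings.
    return [process(sense(s)) for s in world_states]

actions = closed_loop([
    {"obstacle_distance": 25.0},
    {"obstacle_distance": 8.0},
])
print(actions)  # ['cruise', 'brake']
```

Note that nothing in this loop carries over between cycles; that statelessness is exactly the fragility the paper contrasts with human drivers.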

2. How much data is unbiased data?

Most of the data we have today is biased towards normal driving conditions. Some unexpected driving situations are rare and have very little representation in the dataset, so our model might not learn them well, leading to biased decisions. Even having a lot of data does not guarantee fairness and might still lead to inaccurate actions while driving.
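One way to see this imbalance concretely is to count scenario labels in a made-up dataset and derive inverse-frequency weights, a common mitigation. The class names and counts below are purely illustrative:

```python
# Counting scenario labels in a hypothetical dataset and weighting
# rare classes more heavily so they are not drowned out in training.
from collections import Counter

labels = (["normal"] * 950) + (["heavy_rain"] * 40) + (["animal_on_road"] * 10)

counts = Counter(labels)
total = len(labels)
# Inverse-frequency weighting: rare classes get proportionally larger weights.
weights = {cls: total / n for cls, n in counts.items()}

print(counts["animal_on_road"])                       # 10
print(weights["animal_on_road"] > weights["normal"])  # True
```

Reweighting only softens the problem; as the section notes, sheer volume of data still does not guarantee fair coverage of the rare cases.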

3. The Barriers of Proportionality — Size, Compute and Data

To improve self-driving vehicles, people frequently assume that getting more data and building bigger models will do the work. However, both assumptions have challenges. Gathering and labeling lots of data is time-consuming and expensive.

Further, large AI models need a lot of computing resources, making them impractical for small devices or real-time applications. On the other hand, smaller models may struggle to learn from big datasets, which leads to issues like underfitting or poor performance.

So, finding the right balance is crucial for making economical AI models that have good performance.

4. Taxonomical Limitations of Representation

Can current models extract sufficient context from the raw data for proper navigation?

The above question is vital because simply reducing complex sensor data into labels that humans understand might not work well for AI models.

For example, an AI might mistake an ice-cream truck for an ice cream cone. Current models might struggle to confidently fit new variations into predefined categories. To have intelligence similar to humans, neural models need a flexible way of understanding the different appearances and movements of objects. Understanding surrounding context, judgment, and reasoning is essential for AI, just as it is for us, to make better decisions.

5. Complicated vs. Complex Systems: Why AI Struggles with Traffic

  • Imagine playing chess. It has many probable outcomes but always follows a specific set of rules. Complicated systems are somewhat similar. They have many variables but follow patterns that make predictions possible. This works well when every agent follows specific rules.
  • Juggling is more like a complex system. It also has a lot of parts (balls, your hands, gravity), but they interact in unpredictable ways. In complex systems, each agent makes independent decisions, introducing uncertainties and making predictions difficult.

Traffic is a complex system because it is full of independent agents (drivers, pedestrians, cyclists) whose relationships with each other keep changing. Even if you could track all of the independent agents’ movements, there is still no guarantee that you could predict what would happen next. Thus, current AI models cannot effectively deal with such complex systems.

6. The Faulty Frame of Reference

Decision-making by models is often simplified by using offline data sources like high-fidelity maps. But relying solely on a global frame of reference can introduce complexities, especially when errors occur in state estimation due to calibration issues or noisy data.

In contrast, the human mind makes decisions in an ego-centric frame of reference, planning relative to the surroundings. This adaptability across different environments, even with little prior context, gives humans an upper hand over traditional agents.
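The difference between the two frames can be made concrete with a small coordinate transform: the same obstacle has fixed world coordinates, but in the vehicle’s own frame it is simply "so far ahead, so far to the side". The function below is a generic 2-D rigid transform, not something taken from the paper:

```python
import math

# Convert a world-frame (global) point into the vehicle's ego-centric
# frame, given the vehicle's world position and heading.
def to_ego_frame(obstacle_xy, ego_xy, ego_heading_rad):
    """Translate then rotate a world-frame point into the ego frame."""
    dx = obstacle_xy[0] - ego_xy[0]
    dy = obstacle_xy[1] - ego_xy[1]
    cos_h, sin_h = math.cos(ego_heading_rad), math.sin(ego_heading_rad)
    # Standard 2-D rotation by -heading: x' points ahead, y' to the left.
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

# Vehicle at (10, 5) facing +x; an obstacle at (13, 5) is just 3 m ahead.
x, y = to_ego_frame((13.0, 5.0), (10.0, 5.0), 0.0)
print(round(x, 6), round(y, 6))  # 3.0 0.0
```

A map error shifts every global coordinate at once, but the ego-frame description ("3 m ahead") stays directly tied to what the sensors see.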

The paper suggests that understanding this difference will open opportunities to understand and improve how AI perceives & adapts to its environment.

THE NEW PARADIGM — NATURE-INSPIRED AI

The Beauty of Human Mind

The paper talks about a new paradigm for developing generalized autonomous agents called Nature-inspired AI (NIA). It is inspired by the workings of the human mind. Current AI algorithms are not capable of this type of decision-making.

The new approach is based on the idea of “social contracts”: algorithmic relationships informed by physics and context. These are designed to add the key features, i.e., attention, perception, episodic memory, and predictive decision-making, that allow humans to make good decisions in complex situations.

What is a generalized autonomous agent?

The characteristics of a generalized autonomous agent include -

  • Resource-aware (minimizing resource usage)
  • Data-aware (representing associations beyond limited categories)
  • Context-aware (reasoning within its domain)
  • Interaction-aware (understanding outcomes of interactions)
  • Spatio-temporal aware (grasping environmental behavior in relative space and time)
  • Outcome-aware (optimizing understanding based on deviations from expected outcomes).

Representation and Association

1. Physics-aware, Context-aware Algorithms

Current deep neural networks (DNNs) can approximate functions well but struggle when it comes to understanding the physical characteristics of a problem.

For autonomous vehicles, the proposed solution is a physics-aware design that incorporates knowledge of the physical aspects of the problem during training. This leads to a more accurate representation of the world and a clearer explanation of dynamics like motion and semantic context.
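As a loose illustration of what incorporating physics during training could mean, one can add a penalty term that grows when a predicted speed profile violates a physical limit such as maximum braking deceleration. The limit, weight, and loss shape below are assumptions for the sketch, not the paper’s design:

```python
# A toy "physics-aware" loss term: penalize predicted speed profiles
# that decelerate faster than a hypothetical physical limit allows.

def physics_penalty(speeds, dt=0.1, max_decel=9.0):
    """Quadratic penalty on deceleration beyond max_decel (m/s^2)."""
    penalty = 0.0
    for v0, v1 in zip(speeds, speeds[1:]):
        decel = (v0 - v1) / dt
        if decel > max_decel:
            penalty += (decel - max_decel) ** 2
    return penalty

def total_loss(data_loss, speeds, weight=0.01):
    # Combined objective: fit the data AND respect the physics.
    return data_loss + weight * physics_penalty(speeds)

feasible = [20.0, 19.5, 19.0]    # ~5 m/s^2 of braking: allowed
impossible = [20.0, 10.0, 0.0]   # 100 m/s^2: physically impossible
print(physics_penalty(feasible))        # 0.0
print(physics_penalty(impossible) > 0)  # True
```

A model trained with such a term is discouraged from ever representing physically impossible motion, which is one simple reading of a "physics-aware design".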

2. Loosely Supervised Networks with Sparse Representations

Figure: An image drawn (a) by recalling from memory and (b) by referring to the object physically

The above is an example of drawing a dollar note from memory versus having a physical note for reference. It suggests that our ability to recognize things is much better than our ability to recall them from memory. The proposed solution is loosely supervised networks, guided by the environment.

These networks have the ability to understand and represent threats without explicit identification, allowing them to adapt to new unseen variations without specific training. The model should be sparse and multi-modal, enabling learning from diverse variations and making quick associations for identification.

3. Network-of-Networks with Contrasting Bias

The paper suggests a new approach called a network-of-networks. In traditional models, the entire neural network is trained together, which can lead to biases affecting the whole system. The proposed method breaks the network into smaller sub-networks that can be trained separately. This enables more specific training for different tasks within the model, leading to a more balanced and controlled overall bias.
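Structurally, the idea can be sketched as independent sub-models behind one interface, each of which could be trained and debiased in isolation. The sub-task names here are hypothetical examples, not the paper’s architecture:

```python
# A structural sketch of the network-of-networks idea: separate
# sub-networks per task, composed behind a single predict() call.

class SubNetwork:
    def __init__(self, task):
        self.task = task

    def predict(self, observation):
        # Placeholder for a separately trained model's inference.
        return f"{self.task}:{observation}"

class NetworkOfNetworks:
    def __init__(self, tasks):
        # Each sub-network can be trained, audited, and debiased on its
        # own, so one task's bias does not leak into the whole system.
        self.subnets = {t: SubNetwork(t) for t in tasks}

    def predict(self, observation):
        return {t: net.predict(observation) for t, net in self.subnets.items()}

model = NetworkOfNetworks(["detection", "motion", "context"])
out = model.predict("frame_42")
print(sorted(out))  # ['context', 'detection', 'motion']
```

The design choice is the isolation boundary: replacing or retraining one sub-network leaves the others, and their learned biases, untouched.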

Predictive Awareness

While driving a car we need to decide which route to take. This decision involves considering many factors like traffic, road conditions, and more. It’s like solving a puzzle where we want to get the best possible outcome.

1. Space-Time Awareness

This relates to not just seeing objects but also understanding how they move in space and time, which helps in predicting what might happen next. Unfortunately, current navigation systems, built on traditional map-based navigation, have a limited view that focuses only on spatial details. The paper thus suggests a smarter approach: creating a broad representation of the world that considers both space and time. This helps connect past experiences with potential future situations, guiding the model’s actions based on a more comprehensive understanding of the present.
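A tiny example of why the temporal dimension matters: from a single spatial snapshot, a parked car and a moving car can look identical, but adding velocity lets a model extrapolate where each will be next. The constant-velocity model below is the simplest possible illustration, not the paper’s representation:

```python
# With only a spatial snapshot, two cars at the same position are
# indistinguishable; adding the temporal part (velocity) separates them.

def predict_position(position, velocity, dt):
    # Constant-velocity extrapolation in 2-D.
    return (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)

# Same place now, very different places one second later.
parked_car = predict_position((50.0, 0.0), (0.0, 0.0), dt=1.0)
moving_car = predict_position((50.0, 0.0), (15.0, 0.0), dt=1.0)
print(parked_car)  # (50.0, 0.0)
print(moving_car)  # (65.0, 0.0)
```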

2. Multi-Agent Environment Interactions

Imagine the road as a busy place with many other drivers, or agents, making their own movements. Each driver operates independently, having their own view, and no one can control them all. It is similar to a dance where everyone has their own steps and the outcome depends on how they interact.

For an autonomous agent to generalize across the dynamics of such a complex system, it requires a careful design of the correlations of behavioral properties like determinism, entropy, size, etc.

3. Autonomous Agent’s Challenge

If we want a smart car to handle diverse and complex scenarios, it must understand the behaviors of all these independent drivers. Traditional approaches simplify things by averaging out everyone’s behavior, but that’s like assuming everyone dances the same way. In reality, each driver is unique, and their actions depend on various factors.

In a nutshell, to improve self-driving systems, models need to understand not just the space around them but also how things move over time. They should grasp the independent actions of others on the road, considering factors like how certain those actors are about their actions, how random their behavior is, and how big or small they are. This way, AVs can adapt better to the dynamics of the road and make better decisions.

Adaptation

The goal of Nature-inspired AI is to have a model that doesn’t just follow pre-programmed routes but constantly learns and adapts along the way. Here is how this works:

1. Infinite Correction Loop: The model does not decide just once; it runs in a constant loop of deciding and correcting.

2. Local Frame of Reference: Instead of relying on detailed maps, NIA uses an ego-centric view, just as we look through the car windshield to get our bearings. This helps it react quickly to the world around it, even in unfamiliar places.

3. Self-Adaptive Control Systems: The car is a quick learner but needs the right moves in real-world driving. Imagine the car as a dancer who adjusts instantly to unexpected steps. The paper proposes a vision-only control system, letting the car adapt to any vehicle it dances with.

Current models often learn to drive well in simulations but flop in the real world. NIA removes the driver from the equation by focusing only on what the model sees. This lets it adjust to different car models and various unexpected situations like a flat tire or sudden rain.

4. Continual Learning — Predicting Rewards: The new method learns by comparing what it predicted with what actually happened. If there is a difference, it automatically adjusts its internal model states; in effect, it learns from past mistakes. If the car trips on a crack, the model will remember it the next time.
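The predict → compare → correct loop can be reduced to a toy update rule: keep an internal estimate, measure the prediction error against what actually happened, and nudge the estimate by a fraction of that error. The learning rate and numbers are illustrative only:

```python
# A toy predict -> compare -> correct loop: the internal estimate is
# repeatedly adjusted by a fraction of the prediction error.

def correction_loop(observations, estimate=0.0, lr=0.5):
    for obs in observations:
        error = obs - estimate   # deviation from the expected outcome
        estimate += lr * error   # correct the internal state from the mistake
    return estimate

# Repeated exposure to the same surprise (say, the crack in the road)
# pulls the internal estimate toward it: 0 -> 5 -> 7.5 -> 8.75 -> 9.375.
final = correction_loop([10.0, 10.0, 10.0, 10.0])
print(final)  # 9.375
```

Each pass shrinks the remaining error by half here; the same shape of update underlies many reward-prediction-error learning rules.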

THE ENABLERS

1. The Pseudo-data Approach

The problem of biased data can be addressed by generating artificial data covering diverse conditions. Generative models can be prompted properly to incorporate physics awareness, contextual understanding, and sparse generalized representations for better data generation.
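As a minimal stand-in for such generative augmentation, one can synthesize extra variants of the rare cases by perturbing existing samples until the class reaches a target size. A real generative model would be far richer; the jitter below is only a sketch with made-up feature vectors:

```python
# Pseudo-data in miniature: pad out a rare class with perturbed copies
# of its existing samples so the training set is less skewed.
import random

def augment_rare(samples, target_count, noise=0.05, seed=0):
    rng = random.Random(seed)
    out = list(samples)
    while len(out) < target_count:
        base = rng.choice(samples)
        # Perturb a hypothetical feature vector to create a new variant.
        out.append([v + rng.uniform(-noise, noise) for v in base])
    return out

rare = [[1.0, 2.0], [1.5, 2.5]]
balanced = augment_rare(rare, target_count=10)
print(len(balanced))  # 10
```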

2. The Self Supervised Way Ahead

Traditional methods using hand-crafted features struggle to model the complexity of driving. Task-focused networks often brute-force correlations to specific outcomes, limiting adaptability to different environments. The paper advocates for self-supervised or loosely supervised training methods, emphasizing the importance of context-aware associations and correlations.

3. Teaching for Edge

Training the model for various edge conditions requires physics-informed and contextually reasoning models. Student-teacher methods used with:

  • loosely supervised networks, for extracting learnable representations.
  • self supervised physics-aware or context-aware networks, for learning distilled knowledge or crucial insights.

provide faster convergence and diverse correlations.
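One common concrete form of the student-teacher idea is training the student to match the teacher’s soft output distribution, e.g. by minimizing a KL divergence between them. The probabilities below are made-up numbers for illustration, not results from the paper:

```python
# Knowledge distillation in one line of math: the student should
# minimize its divergence from the teacher's soft output distribution.
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

teacher_probs = [0.7, 0.2, 0.1]   # soft targets carry inter-class structure
student_a = [0.68, 0.22, 0.10]    # close to the teacher
student_b = [0.10, 0.10, 0.80]    # far from the teacher

# A distillation loss prefers the student that mimics the teacher.
print(kl_divergence(teacher_probs, student_a) <
      kl_divergence(teacher_probs, student_b))  # True
```

The point of the soft targets is that they encode the teacher’s learned correlations, not just hard labels, which is what makes the distilled knowledge transferable.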

CONCLUSION

In simpler terms, the research proposes a novel idea of nature-inspired AI. It draws comparisons between the limitations of current AI and the human mind, suggesting a change in thinking to create self-driving systems that can handle diverse situations better.

Simply adding more sensors, more data, and more powerful models to autonomous vehicles will only make things more complicated and less safe. This paper could be the key to making autonomous driving safer and more reliable.
