Common Sense Reasoning Based Safe and Reliable Artificial “General” Intelligence System

image credit: https://vitalflux.com/ethical-ai-principles-ibm-google-intel/

When using deep learning models, we generally have only point estimates of parameters and predictions at hand, which is why deep learning models are often viewed as deterministic functions.
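
One common way to go beyond point estimates is Monte Carlo dropout (Gal & Ghahramani, 2016): keep dropout active at test time and average several stochastic forward passes. Below is a minimal PyTorch sketch of the general idea, assuming a generic classifier `model` with dropout layers; this is an illustration of the technique, not AiOTA Labs' proprietary method.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=30):
    """Approximate the predictive distribution by averaging several
    stochastic forward passes with dropout kept active at test time."""
    model.train()  # keeps dropout on; in practice, switch only the dropout layers
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])  # (n_samples, batch, classes)
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and spread
```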

These models can be used for applications as diverse as disease detection, autonomous driving, and image classification. For example, given several pictures of flower species as training data, the classification engine should return a prediction with high confidence when a user uploads a photo of a flower. But what happens if a user shows it a photo of a cat and asks the classifier to decide on a flower species?

In more serious cases, such as diabetic retinopathy or cancer screening with structures the classifier has never observed before, or traffic scenes an autonomous driving system has never been trained on, how will the deep learning model behave?

The above cases are examples of out-of-distribution test data. The model has been trained on a certain distribution of data and has learnt to distinguish between its classes well. But the flower-species classifier has never seen a cat before, so a photo of a cat lies outside the data distribution the model was trained on.

The desired behavior of a deep learning model in such cases is to return a prediction together with the added information that the input lies outside the training data distribution, i.e., we want our model to produce some quantity conveying a high level of uncertainty on such inputs (alternatively, conveying low confidence).
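
One simple quantity that conveys this is the entropy of the averaged predictive distribution: near zero for confident predictions, high for ambiguous or out-of-distribution inputs. Here is a sketch building on the `mc_dropout_predict` helper above; `model`, `x`, and the threshold are placeholders, and the threshold would have to be calibrated on held-out data.

```python
import torch

def predictive_entropy(mean_probs, eps=1e-12):
    """Entropy of the averaged softmax output: low for confident
    predictions, high for ambiguous or out-of-distribution inputs."""
    return -(mean_probs * (mean_probs + eps).log()).sum(dim=-1)

# Illustrative usage: flag inputs whose predictive entropy is too high.
mean_probs, _ = mc_dropout_predict(model, x)
flag_as_uncertain = predictive_entropy(mean_probs) > 1.0  # threshold is arbitrary here
```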

Apart from out-of-distribution inputs, uncertainty can also come from noisy data: labels might be noisy, or a camera might produce sudden contrast/brightness changes, over/under-exposure, or occluded images. Other situations arise as well, such as which model and structure to choose among the many models that all explain a given dataset equally well, and how the model should extrapolate or interpolate.
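
These two sources are commonly called aleatoric (data noise) and epistemic (model) uncertainty, and given samples from stochastic forward passes they can be separated. Below is a sketch of one standard decomposition from the literature, in which the total predictive entropy splits into an expected-entropy term and a mutual-information term; `probs` is assumed to be the (n_samples, batch, n_classes) stack of softmax outputs from passes like those in `mc_dropout_predict` above.

```python
import torch

def decompose_uncertainty(probs, eps=1e-12):
    """probs: (n_samples, batch, n_classes) stack of softmax outputs
    from stochastic forward passes (e.g. MC dropout).
    Returns aleatoric (data noise) and epistemic (model) parts whose
    sum is the total predictive entropy."""
    mean = probs.mean(dim=0)
    total = -(mean * (mean + eps).log()).sum(dim=-1)                      # H[E[p]]
    aleatoric = (-(probs * (probs + eps).log()).sum(dim=-1)).mean(dim=0)  # E[H[p]]
    epistemic = total - aleatoric  # mutual information (BALD score)
    return aleatoric, epistemic
```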

Model uncertainty information is essential in systems that make decisions affecting human life, such as in the aviation, automotive, and medical industries, or in financial institutions where money is directly at stake; this concern falls under the umbrella of AI safety.

When a model's prediction carries high uncertainty, whether due to noisy data or to model uncertainty, it is better to hand control back to a human, or to any system that is less uncertain in that particular scenario, than to rely entirely on an uncertain model.
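
In code, such a fallback policy can be as simple as a threshold gate; a hypothetical sketch, where the names and the threshold value are purely illustrative:

```python
def act_or_defer(prediction, uncertainty, threshold=0.5):
    """Follow the model only when its uncertainty is low; otherwise
    hand control to a human operator or a less uncertain backup system.
    The threshold of 0.5 is illustrative and application specific."""
    if uncertainty > threshold:
        return ("defer_to_human", uncertainty)
    return ("act", prediction)
```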

Measuring uncertainty is therefore an essential component of a safe, reliable, and interpretable deep-learning-based AI control system.

Below are some examples of deep-learning-based classification/image segmentation in which the system predicted the correct label/segmentation with high prediction scores.

A deep learning classifier correctly predicting the class label with a high prediction score

But the question we need to ask is: what is the confidence bound on these prediction scores? Is the model actually confident while producing them?

In autonomous driving, a desired property of the deep-learning-based control system is to take precautionary steps in advance in traffic scenarios where it is not confident in its decisions.

A complex traffic scenario in which a deep-learning-based control system has to be very cautious in making decisions

In the scenario above, is the deep-learning-based controller confident in detecting both traffic signal poles (the first pole is hard even for a human to pick out)? If the model is uncertain about detecting a traffic signal pole, it is better to slow down until the pole is detected more reliably. If the system is still unsure about the detection, it is better to pull over, or to hand control back to the occupant or to any other expert system available at that time.

Another desired property concerns training time: how can we make a deep-learning-based AI system safer while it is being trained? Can the uncertainty be reduced? Can uncertainty information be used to improve the training set? How do we identify the source of uncertainty? Is it noisy data, over/under-exposed images, occluded images, or an insufficiently rich training dataset?

If we get the uncertainty information right at training time, the uncertainty of the deep-learning-based AI system can be reduced considerably, leading to a verified AI system, one of the foremost requirements of safety standards.

AiOTA Labs' research offers a solution to make deep-learning-based AI systems:

  1. Safe
  2. Reliable
  3. Interpretable
  4. Verified

We offer a ready solution: users can train their system on their own dataset using our deep learning framework, or adopt our technology within their own deep learning framework.

Here are some examples of how our classifier can output a model prediction along with its uncertainty and the reason for that uncertainty.

In the above example, the top two images were classified with high prediction scores, but our algorithm suggests that we should not rely on these classifications: the associated uncertainty is very high, and the decision should be handed over to an expert.

At training time, the source of uncertainty can be segregated into buckets of image-noise-related uncertainty and data-richness-related uncertainty. In the traffic sign example above, the uncertainty was due to image noise, and after augmentation (with gamma correction) it went down quite significantly.
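
Gamma correction itself is just a power-law transform on pixel intensities; here is a minimal NumPy sketch of this kind of augmentation, where the gamma range is illustrative rather than the one used in our experiments:

```python
import numpy as np

def gamma_augment(image, gamma_range=(0.5, 2.0), rng=None):
    """Apply a random power-law (gamma) transform to an 8-bit image,
    simulating the contrast/brightness variation behind the uncertainty."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = rng.uniform(*gamma_range)
    out = (image.astype(np.float32) / 255.0) ** gamma  # power-law on [0, 1]
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```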

Our algorithm also works seamlessly in any deep learning framework. Below is an example of pixel-wise image segmentation.

In the above example, the yellowish tone marks regions of high uncertainty; as we can see, the segmentation system is very uncertain in detecting one of the traffic signal poles.

One problem with implementing uncertainty-based algorithms in deep-learning-based AI systems is that they slow down decision making considerably, which is unacceptable in real-time decision-making systems such as autonomous driving.

AiOTA Labs has a solution that implements uncertainty in any deep learning system without adversely impacting its decision time. The video below performs pixel-wise segmentation while also calculating uncertainty for every frame, with no loss in accuracy, maintaining the same state-of-the-art accuracy. At present our system takes 100 ms of inference time (measured on a GTX 1080 GPU), roughly 20x faster than state-of-the-art video segmentation systems.

Video segmentation with uncertainty. Left: ground truth; middle: segmentation result; right: uncertainty in segmentation. The video is intentionally slowed down for easier viewing; there is a slight synchronization issue among the three videos.

AiOTA Labs researchers are focused on bringing the present inference time down by a further 2x to 3x. Our ultimate goal is 30 ms of inference time for pixel-wise segmentation with uncertainty in a deep learning framework.

In summary, uncertainty is an essential aspect of today's deep-learning-based AI systems; without it, AI cannot be safe and reliable.

Here are some AI-driven use cases where uncertainty makes the application safe, reliable, and interpretable:

  1. Autonomous driving sensor fusion: if any sensor measurement is uncertain, pass the less uncertain information to the controller along with its uncertainty value, and propagate these uncertainty values down to the final decision-making controller (a minimal fusion sketch follows this list).
  2. Autonomous driving controller, a.k.a. the arguing machine: run several redundant controllers (so-called arguing machines) and, based on their uncertainty values, choose the output of the controller with the least uncertainty for that particular decision.
  3. Medical automated disease detection: if a detection is highly uncertain, even with a high prediction score, pass the uncertainty information to an expert rather than taking the decision autonomously.
  4. Fintech industry: before investing money in an AI-selected stock, look at the uncertainty of that stock prediction. Even if the AI assigns a high score to a particular stock, don't invest if the uncertainty is high.
  5. Deep learning training phase: segregate the data into buckets by source of uncertainty, such as sensor uncertainties and model uncertainties. Model uncertainties can be explained away by increasing the richness of the dataset; sensor uncertainties can be explained away by a proper choice of camera ISP algorithm and better labeling.
  6. Deep learning model and structure selection: even if many models predict high scores on the same dataset, choose the one with lower uncertainty. Also fine-tune the structure (number of filters, channels, layers, ...) to achieve lower uncertainty values.
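
For the sensor-fusion case in point 1, a classical way to combine redundant measurements with their uncertainties is inverse-variance weighting; here is a minimal sketch with hypothetical numbers:

```python
import numpy as np

def fuse_measurements(values, variances):
    """Inverse-variance weighted fusion: less certain sensors get less
    weight, and the fused variance is returned so it can be propagated
    down to the final decision-making controller."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused_value = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    fused_variance = 1.0 / np.sum(w)
    return fused_value, fused_variance

# Hypothetical example: camera and radar both estimate obstacle distance (m).
dist, var = fuse_measurements([12.1, 11.6], [0.25, 1.0])  # camera is more certain
```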

AiOTA Labs would be very happy to collaborate with you to make the world of AI-based systems safe and reliable.

Contact us at info@aiotalabs.com if you are interested in learning more.

Redefining Deep Neural Networks