More complex deep learning models require more complex data

Anyverse™ · Feb 28, 2022 · 2 min read

More complex deep learning models require more complex data. It’s that simple.

A recent paper from the Massachusetts Institute of Technology (MIT) presents a more than interesting approach to how machines can learn to understand and interpret the relationships between objects in a scene.

As this and other recent studies show, the gap is real: deep learning models have become very good at identifying objects in all kinds of scenes, yet they still cannot understand how those objects relate to each other and to the surrounding environment.

Even simple relationships that are obvious to a human, such as “this is inside of that” or “that is on top of this,” are very hard for widely used object detection and segmentation models. And a growing number of use cases will require this kind of understanding.

The machine learning model MIT researchers have developed brings machines one step closer to understanding and interacting with their environment the way humans do.
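To give a flavor of what this kind of relational reasoning involves, here is a minimal sketch of the composition idea behind the paper: each relation (such as “inside of” or “on top of”) is scored by its own energy function, and a scene satisfies a multi-relation description when the sum of the individual energies is low. The class names, feature dimensions, and API below are illustrative assumptions, not the paper’s actual code.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: each relation ("inside of", "on top of", ...) is
# scored by an energy network; a scene matches a multi-relation description
# when the *sum* of the relation energies is low. Names and shapes are
# illustrative, not the paper's actual implementation.

class RelationEnergy(nn.Module):
    """Scores how well an image matches one relation, e.g. 'cube on top of sphere'."""
    def __init__(self, image_dim=512, relation_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + relation_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # scalar energy: lower = better match
        )

    def forward(self, image_feat, relation_emb):
        return self.net(torch.cat([image_feat, relation_emb], dim=-1))

def scene_energy(image_feat, relation_embs, model):
    # Composition by summation: the scene should satisfy *all* relations,
    # so the individual energies are added together.
    return sum(model(image_feat, r) for r in relation_embs)

# Toy usage, with random tensors standing in for a real image encoder
# and real relation embeddings.
model = RelationEnergy()
image_feat = torch.randn(1, 512)
relations = [torch.randn(1, 64) for _ in range(2)]  # e.g. two relations
print(scene_energy(image_feat, relations, model).item())
```

In the paper’s framework, this kind of energy summation is what allows relations learned separately to be recombined into multi-relation scene descriptions.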

This evolution in models will require new training data. One of the paper’s conclusions states:

“Our system, as with all systems based on neural networks, is susceptible to dataset biases. The learned relations will exhibit biases if the training datasets are biased. We must take balanced and fair datasets if we develop our model to solve real-world problems, as otherwise, it could inadvertently worsen existing societal prejudices and biases.”

Having data that reflects real-world complexity, without bias and with accuracy, can definitely help.
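As a concrete, hypothetical illustration of what checking a dataset for relation balance might look like, the short sketch below counts how often each relation appears in a set of scene annotations; the (subject, relation, object) annotation format is an assumption made for illustration.

```python
from collections import Counter

# Hypothetical annotation format: one (subject, relation, object) triple per scene.
annotations = [
    ("cube", "on top of", "sphere"),
    ("ball", "inside of", "box"),
    ("cube", "on top of", "cylinder"),
]

# Count how often each relation label occurs; heavily skewed ratios are a
# simple warning sign of the dataset bias the paper's authors caution about.
relation_counts = Counter(rel for _, rel, _ in annotations)
total = sum(relation_counts.values())
for rel, count in relation_counts.most_common():
    print(f"{rel}: {count} ({count / total:.0%})")
```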

Source: Learning to Compose Visual Relations on GitHub
