How do computers see? Multi-layered prediction and object detection in Dx Vision.

While computer vision is evolving, we still have some obstacles to overcome. With an object detection algorithm trained on the data you feed it, you can teach a machine to detect objects in a sandbox environment, where the input the computer reads is always the same. Outside that controlled environment, the computer may fail to be precise or to work as intended.

While computer vision is modeled after the human ability to see and differentiate objects, it has its flaws. Our vision and perception are still far greater than what the current generation of machines can understand and analyze. Human vision involves multiple layers of prediction and detection, and a much larger body of knowledge that can expand in specific domains (such as medical imaging).

That is why here at Dentem, with Dx Vision, we are building a multi-layered prediction and detection computer vision system for dental images, one that will allow the technology to evolve, grow, and work as intended outside of a sandbox environment.

To put it simply: if we as researchers build multi-layered prediction and detection models, we can make a computer understand much more deeply and precisely what it is looking at, and produce a better analytical outcome.

In a simple test, we trained a model to predict the dental image category, simplified into three classes:

1) panoramic

2) bitewing

3) other

Category prediction

The results were as follows:

1st image: 99.9% confidence that it is a panoramic (larger dataset)

2nd image: 98.8% confidence that it is a bitewing

3rd image: 90.8% confidence that it is not a dental radiograph.
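Confidence scores like these typically come from a softmax layer over the classifier's raw outputs. A minimal sketch of that step (the logit values below are made up for illustration; Dx Vision's actual architecture is not public):

```python
import math

def softmax(logits):
    """Convert raw classifier outputs (logits) into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one image over the three categories.
categories = ["panoramic", "bitewing", "other"]
probs = softmax([9.2, 1.5, 0.8])

best = max(range(len(probs)), key=probs.__getitem__)
print(f"{categories[best]}: {probs[best]:.1%} confidence")  # → panoramic: 99.9% confidence
```

Note that a high softmax score is the model's confidence on one image, not its accuracy over a dataset, which is why the two shouldn't be conflated.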

Once the category is determined, the computer runs the specific trained models to detect objects within that category.
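In code, that two-stage flow amounts to a classifier followed by a dispatch table of category-specific detectors. A minimal sketch with stub models (the function names and findings here are hypothetical, not Dx Vision's actual API):

```python
def classify_category(image):
    # Stub for the first-layer classifier; a real system would run a trained model here.
    return "panoramic"

def detect_panoramic(image):
    return ["impacted molar"]  # stub detection result

def detect_bitewing(image):
    return ["interproximal caries"]  # stub detection result

# Dispatch table: each category maps to its own trained detection model.
DETECTORS = {
    "panoramic": detect_panoramic,
    "bitewing": detect_bitewing,
}

def analyse(image):
    category = classify_category(image)   # layer 1: what kind of image is this?
    detector = DETECTORS.get(category)
    if detector is None:                  # "other": not a dental radiograph
        return category, []
    return category, detector(image)      # layer 2: category-specific detection

category, findings = analyse(b"raw-image-bytes")
print(category, findings)  # → panoramic ['impacted molar']
```

Keeping the detectors behind a dispatch table means a new category only requires training one new model and adding one entry, without touching the rest of the pipeline.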

Dx Vision 2.0 (General Detection Model)

Now imagine, for a second, a multi-layered prediction and detection system inside a hospital (or in the cloud). It gives a machine a knowledge graph for handling a diverse array of medical images, from chest X-rays to CT scans, and runs different models on those images to help the doctor analyse them in far less time, with just one press of a button.

The system determines the kind of image it has to analyse -> specifies a category -> runs the specific model for that category -> returns the result.

This will become a reality very soon, when future imaging machines will have a built-in knowledge graph, or will be connected to a cloud service that receives the data and sends back the results, making the process far less time-consuming.

This is why at Dentem we packaged Dx Vision into an API that enables third parties to send images for prediction, detection and analysis, and have the results returned seamlessly into their own systems: an infrastructure of multi-layered models, each trained on a specific set of rules.
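From a client's perspective, such an API would return structured results that the third-party system then consumes. A sketch of handling one response (the field names and schema below are purely illustrative; the real Dx Vision API's schema has not been published):

```python
import json

# Hypothetical shape of a Dx Vision API response (illustrative only).
sample_response = json.loads("""
{
  "category": "bitewing",
  "confidence": 0.988,
  "detections": [
    {"label": "caries", "box": [120, 45, 180, 90], "score": 0.91}
  ]
}
""")

def summarise(resp):
    """Turn an API response into a one-line summary for a client system."""
    labels = [d["label"] for d in resp["detections"]]
    return f"{resp['category']} ({resp['confidence']:.1%}): {', '.join(labels) or 'no findings'}"

print(summarise(sample_response))  # → bitewing (98.8%): caries
```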

Testing of this API has begun, and a couple of companies from different industries are already trying it out, or are willing to, with plans to later include it in their infrastructure to improve their processes.

Dx Vision running on Android, with 2 layers.

Bonus: the Dx Vision models can now run inside a web browser via TensorFlow.js (tfjs), or inside a mobile application that is not connected to the internet. Those models are simplified and might produce less accurate results, but looking at the speed of change, we believe the future will revolutionize the way we approach healthcare and dental healthcare.

For more updates on Dx Vision and dental imaging with machine learning, stay with us.

More updates coming soon.

Team Dentem.