The End of Dumb AI

The future of AI is realtime learning, interactive and dynamic

Star Wars — A New Hope (1977)

The world turns and changes. So does data. Change needs to be captured to reflect the new reality, just as modern software systems do. With conventional AI, feature vectors are a means to producing a static trained model. With Thingy, a fully automated realtime AI, the feature vectors are central characters on the stage(*): new ones appear, while current ones remain unchanged, are updated, or are even terminated.

In this post we:

  • Add images to the Thingy system,
  • Show how a query can contain both known and unknown items,
  • Briefly discuss integrating systems-of-record and systems-of-intelligence,
  • Lay out a vision of a universal vector space for realtime AI to flourish.

Litmus test

Adding new data at any time is a litmus test of a realtime AI. The steps to demonstrate adding unknown images to Thingy are:

  1. Query with an unknown image. If the image is known to Thingy then its duplicate will show as the first result. In this case, it doesn’t.
  2. Add a copy of the unknown image to the Thingy system.
  3. Query with the unknown image again. If the image is known to Thingy then its duplicate will show as the first result. This time, it does.
  4. Repeat steps 1, 2 and 3 but add three unknown images at the same time.

The slidedeck shows these steps in action.
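
The steps above can be sketched with a toy in-memory vector index. Thingy's real feature vectors come from deep learning; the hand-made vectors, the `ToyIndex` class and cosine ranking here are illustrative assumptions, not Thingy's actual API.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class ToyIndex:
    """Stand-in for a realtime vector index: items can be added at any time."""
    def __init__(self):
        self.items = {}                    # id -> feature vector

    def add(self, item_id, vec):
        self.items[item_id] = vec

    def query(self, vec, k=3):
        ranked = sorted(self.items,
                        key=lambda i: cosine(vec, self.items[i]),
                        reverse=True)
        return ranked[:k]

index = ToyIndex()
index.add("golden_gate", [0.9, 0.1, 0.0])
index.add("bay_bridge",  [0.7, 0.6, 0.1])

unknown = [0.1, 0.2, 0.9]                  # feature vector of an unseen image

# Step 1: query with the unknown image -- no duplicate at rank 1.
before = index.query(unknown)

# Step 2: add a copy of the unknown image to the system.
index.add("new_image", unknown)

# Step 3: query again -- the duplicate now shows as the first result.
after = index.query(unknown)
```

No re-training happens between the two queries; the index simply gains an item, which is the point of the litmus test.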

Knowns and unknowns

In conventional AI, “seen” data is used to build trained models that produce predictions for “unseen” data. With Thingy, a data item is represented by a feature vector and is either known or unknown to the system, but both kinds can be used to get a prediction.

For example, a photo of the Golden Gate Bridge is added to Thingy with a unique id and becomes a known item. A person can then (i) query with an image of the Bay Bridge, which is unknown to Thingy, or (ii) query with both the known Golden Gate Bridge image and the unknown Bay Bridge image at the same time. A query can contain both known and unknown items.

The next slidedeck walks through an example.
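
One way to picture a mixed query is below: a known item is referenced by its id, an unknown item arrives as a raw feature vector, and both contribute to the search. How Thingy actually combines them is not documented here; averaging the vectors into a centroid is an illustrative assumption.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

store = {
    "golden_gate": [0.9, 0.1, 0.0],        # known: added with a unique id
    "alcatraz":    [0.2, 0.9, 0.1],        # another known item
}

bay_bridge_vec = [0.8, 0.3, 0.1]           # unknown: never added to the system

def mixed_query(known_ids, unknown_vecs, k=2):
    # Known items are resolved to their stored vectors; unknown items
    # are used as-is. The centroid of all query vectors drives the search.
    vecs = [store[i] for i in known_ids] + list(unknown_vecs)
    centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
    ranked = sorted(store, key=lambda i: cosine(centroid, store[i]),
                    reverse=True)
    return ranked[:k]

results = mixed_query(["golden_gate"], [bay_bridge_vec])
```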

Integrating systems-of-intelligence with systems-of-record

It is straightforward to extract information from a systems-of-record layer to employ in an AI service running independently. When data changes in the systems-of-record layer, it notifies the AI service to collect the new data and begin re-training. A clear separation exists between the layers. Joined-at-the-hip integration between the systems-of-record and systems-of-intelligence layers needs to carry a big “caveat emptor” label, as the former is a dynamic system and the latter a static model.
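
The conventional separation can be sketched as a simple notification hook: the system-of-record publishes changes, and an independent AI service reacts by kicking off re-training. The class names and callback mechanism are illustrative, not any particular product's API.

```python
class SystemOfRecord:
    """Dynamic layer: holds the data and notifies subscribers on change."""
    def __init__(self):
        self.rows = {}
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def write(self, key, value):
        self.rows[key] = value
        for cb in self.listeners:          # notify downstream services
            cb(key, value)

class AIService:
    """Static-model layer: every change forces a costly re-training pass."""
    def __init__(self):
        self.training_runs = 0

    def on_change(self, key, value):
        # In a real deployment this would enqueue a batch re-training job --
        # the step a static model imposes that a realtime AI avoids.
        self.training_runs += 1

record = SystemOfRecord()
service = AIService()
record.subscribe(service.on_change)

record.write("img_001", "new photo")
record.write("img_002", "another photo")
```

The sketch makes the caveat concrete: two writes to the record layer trigger two re-training runs, because the intelligence layer cannot absorb change incrementally.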

“Travelling through hyperspace ain’t like dusting crops, farm boy”, Han Solo to Luke Skywalker

Picture a universal vector space formed from tens of millions of compatible sub-vector spaces, where things, people and machines are defined by vectors automatically generated with deep learning. Each digital object has two representations: its native format and a standardized vector, which can be private or public. These vectors form a dynamic feature space with items continually added, changed and removed.

This bottom-up universe of dynamic vectors of things is married to top-down AI agents that navigate and interact with it to discover and detect things in realtime.

Science fiction writers and movie makers are typically decades ahead of the rest of us. In Blade Runner, Minority Report or Star Wars, the AI systems don’t need regular cranking-up to stay fit for purpose. They won’t in our world either.

Last reel

Nobel Prize winner Steven Chu said, “If you use an old tool to tackle a problem you’ve got to be really smarter than the rest of the folks because everybody has this tool. If you are the first to look with something new it’s like starting a new world. You just look around and everything you see is going to be new.”

Thingy makes everything look new.

Bottom line: The future of AI is realtime learning, interactive and dynamic.

(*) The output of deep learning is a trained model for object classification or recognition. Feature vectors of unknown objects can be extracted from the pre-trained model in a process called transfer learning. These dense vectors are employed in machine learning pipelines and in distance metric operations to find similar objects in an n-dimensional geometric space.
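The footnote's pipeline can be sketched end to end: run objects through a frozen "pre-trained" network, take the resulting dense vectors, and find the nearest neighbor with a distance metric. The stand-in model here is a fixed linear map, not a real deep network, and all the data is made up for illustration.

```python
import math

WEIGHTS = [                                # stand-in for frozen pre-trained weights
    [0.2, -0.1, 0.5],
    [0.7,  0.3, -0.2],
]

def extract_features(pixels):
    """Map a raw input to a dense feature vector (the transfer-learning step)."""
    return [sum(w * x for w, x in zip(row, pixels)) for row in WEIGHTS]

def euclidean(a, b):
    # One common distance metric for comparing feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Gallery of known objects, each reduced to a feature vector once.
gallery = {name: extract_features(px) for name, px in {
    "cat": [1.0, 0.0, 0.2],
    "dog": [0.9, 0.1, 0.3],
    "car": [0.0, 1.0, 0.8],
}.items()}

# An unseen object: embed it with the same frozen model, then find the
# most similar known object in the geometric space.
query = extract_features([1.0, 0.0, 0.22])
nearest = min(gallery, key=lambda n: euclidean(query, gallery[n]))
```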

Articles in series

Realtime AI is No Longer a Myth
Interactive AI with Tom Cruise
The End of Dumb AI
