Ontology vs. Epistemology for AI

Moji Solgi
Published in BuzzRobot
Dec 29, 2017

I listened to this podcast while stuck on an airplane with no internet (horrible thing, I know!).

This is a topic I’ve put some thought into over the years of doing AI research and studying the subject, so I decided to jot down my opinions. Warning: I tend to get too philosophical when jet-lagged and in transit for 30+ hours. Below is what I would’ve said if I’d been on the show.

The debate goes back a long time. The Greek philosophers (I wouldn’t be able to tell you which ones, but I bet Aristotle was one of them, since he seems to have had too much time on his hands and covered everything) talked about two branches of inquiry. Ontology: the study of what there is in the world that we should know about. Epistemology: the study of how we come to know the things in the world. Obviously, this was long before AI was a thing, and they were merely concerned with the structure of knowledge and its acquisition by humans.

Fast forward to the 20th century: people started thinking about making computers knowledgeable like humans. Given how digital computers work, it was much easier (and more intuitive) to represent knowledge about the world in terms of entities (symbols) and their relationships. For example, x is the symbol for dog, y is the symbol for animal, and we can easily encode the knowledge that a dog is a type of animal in a computer program. Huge, decade-long projects funded with American tax money were set up to create AI systems this way, with very low ROI.
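To make the symbolic approach concrete, here’s a minimal sketch (a toy of my own, not from any of those projects) of encoding “a dog is an animal” as facts a program can store and query:

```python
# A toy symbolic knowledge base: "is-a" facts stored as pairs of symbols.
IS_A = {
    ("dog", "animal"),
    ("cat", "animal"),
    ("animal", "living_thing"),
}

def is_a(x: str, y: str) -> bool:
    """True if x is a y, following is-a links transitively."""
    if (x, y) in IS_A:
        return True
    # Follow intermediate categories, e.g. dog -> animal -> living_thing.
    return any(is_a(parent, y) for (child, parent) in IS_A if child == x)

print(is_a("dog", "animal"))        # True: stated directly
print(is_a("dog", "living_thing"))  # True: inferred transitively
print(is_a("animal", "dog"))        # False
```

Those decade-long projects (Cyc being the famous example) were, roughly speaking, this idea scaled up by orders of magnitude, with millions of hand-coded facts and rules.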

Other people avoided the easy and more intuitive path of symbolic AI and instead tried to emulate how the human brain works, spending decades tweaking artificial neural networks with very little success; in many cases they did even worse than the symbolic guys. And thus the Game of Thrones-style war between the symbolic AI and connectionist AI schools of research began. There are legends about the bloody rivalry between Marvin Minsky (symbolic guy) and David Rumelhart (connectionist guy), and later between others in the opposing camps. The connectionist camp would grab the limelight a few times over the decades (first with the perceptron and then with backpropagation algorithms), but they were mostly underfunded and in ‘winter’ mode. Get the Game of Thrones analogy?! I was personally affected by the last neural network winter while doing a PhD on neural nets, before they were called deep learning and became cool.

Then came the deep learning revolution: almost everyone became a connectionist, and now it’s hard to get funding for other types of AI research (which is a problem in itself). The rest is history.

All of this just to get to the point that, IMO, when it comes to creating AI we should take an epistemological approach: study how the only known truly intelligent system (the human brain) models the world, innately or by learning. This is in contrast to the ontological approach: organizing what we know into data ontologies and then trying to instill those in computers.

The study of ontologies is very important for projects like Wikipedia, search engines, etc., but those are systems with a different kind of “intelligence” which, IMO, shouldn’t be confused with the study of AI as defined by Alan Turing and his test.

As a practitioner, I think most real-world problems can get by with the common software-engineering practice of data modeling and interface design, done by people who do not necessarily know much about the science of ontologies. For example, in the podcast they mention that computers need knowledge of ontologies to talk to each other on the internet. But software engineers have successfully designed internet protocols, RPC frameworks (my favorite being Apache Thrift, originally from Facebook), and the modern microservice frameworks without considering ontologies in any academic sense. I think the same argument applies to the actual work needed for the reCAPTCHA and HumanDX projects that were mentioned on the show.
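As an illustration, here’s the kind of everyday data modeling I mean: a hypothetical pet-adoption schema (the names and fields are made up for this post, not taken from Thrift or any project mentioned):

```python
# A hypothetical service schema: ordinary engineering data modeling,
# no formal ontology required. The field names and types are just
# conventions agreed upon between the services that exchange this data.
from dataclasses import dataclass

@dataclass
class Pet:
    name: str
    species: str   # a free-form string; no taxonomy is enforced
    age_years: int

@dataclass
class AdoptionRequest:
    pet: Pet
    adopter_email: str

# Two services that agree on this shape can interoperate just fine,
# even though "species" carries no ontological commitments.
request = AdoptionRequest(
    pet=Pet(name="Rex", species="dog", age_years=3),
    adopter_email="alice@example.com",
)
print(request)
```

A Thrift IDL file or a protobuf schema plays essentially this role on the wire; the semantics live in engineering convention, not in a formal ontology.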

Let’s get back to work on building the AGI thing now. Enough philosophy.
