Continuing the Nyaya tradition of exploratory analyses

Parikshit Sanyal
Published in Significant others
Sep 30, 2022

One of the major schools of Indian philosophy, Nyaya, defines ‘perception’ as:

A perceptual cognition arises by means of the connection between sense faculty and object, is not dependent on words, is non-deviating, and is determinate.

The second condition, i.e. ‘not dependent on words’, has several interpretations. In his essay ‘Perception and Language’, Prof. B. K. Matilal argues:

Thus, the Nyayasutra wants, perhaps, to point out that sense perception can take place even when the words denoting the object have not been learned

This viewpoint was later reiterated by the Buddhist school of Dinnaga, which defines “perception as a cognitive state which is totally untouched by imaginative construction (vikalpa, kalpana) or conceptualization”, i.e. a state not dependent on memory or prior experience.

Does this sound eerily similar to unsupervised learning? Is there some kind of internal clustering common to all agents who ‘learn’, with or without supervision (somewhat akin to Chomsky’s universal grammar)? Do some errors persist despite training? And are these errors reproducible across two kinds of agents, humans and machines?

We wanted to test this hypothesis in humans and machine learning models. The difficulty was acquiring a dataset novel enough for machines (not a problem) as well as for humans. Histologic images fall in this very sweet spot: fewer than 0.01% of all humans will ever see a histologic image (i.e. a microscopic section of tissue), and most humans are completely histology-naive. We conducted the study with histologic images, with brief training imparted to both humans (medical students) and convolutional neural networks (VGG16 and Inception V3).
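
A brief training of a pretrained convolutional network on a small labeled image set is typically done by freezing the convolutional base and fitting a new classifier head. The sketch below shows this pattern with Keras for VGG16; the class count, image size, and head architecture are assumptions for illustration, not the study's exact configuration (in practice one would pass `weights="imagenet"`; `weights=None` is used here only to avoid a download).

```python
# Hedged sketch: fine-tuning VGG16 for tissue classification.
# NUM_CLASSES and the classifier head are assumptions, not the paper's setup.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 8  # assumed number of tissue types

base = tf.keras.applications.VGG16(
    include_top=False,        # drop the ImageNet classifier head
    weights=None,             # use "imagenet" in practice; None avoids a download here
    input_shape=(224, 224, 3),
)
base.trainable = False        # freeze convolutional features for the brief training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # the "brief training" step
```

The same recipe applies to Inception V3 by swapping the base model (with its expected 299×299 input size).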

To the uninitiated, histology is the perfect Rorschach test.

One way to compare the performance of two students is to look at their errors on the same test. If everybody in a class has made the same set of mistakes, there are only two possibilities:

  1. everybody has copied their answers from one person
  2. or, everyone in the class has similar thought patterns, which is highly unlikely
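
The error comparison described above can be made concrete by measuring the overlap of the two agents' error sets on the same test, e.g. as a Jaccard similarity. A minimal sketch, with invented answers purely for illustration:

```python
# Hedged sketch: quantifying how similar two agents' mistakes are on the
# same test. All answers below are invented for illustration.

def error_set(predictions, truth):
    """Indices of the items this agent got wrong."""
    return {i for i, (p, t) in enumerate(zip(predictions, truth)) if p != t}

def error_overlap(pred_a, pred_b, truth):
    """Jaccard similarity of two agents' error sets (1.0 = identical mistakes)."""
    a, b = error_set(pred_a, truth), error_set(pred_b, truth)
    if not (a | b):
        return 1.0  # neither agent made any mistakes
    return len(a & b) / len(a | b)

truth   = ["liver", "kidney", "lung", "spleen", "liver", "lung"]
student = ["liver", "lung",   "lung", "liver",  "liver", "lung"]  # errs on items 1, 3
cnn     = ["liver", "lung",   "lung", "spleen", "liver", "skin"]  # errs on items 1, 5

print(error_overlap(student, cnn, truth))  # shared error on item 1 → 1/3
```

A high overlap between a student and a model, on items neither has seen labeled before, is the kind of signal the comparison above is looking for.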

So when we compared the errors of students to those of the ML models, we were in for a surprise. The ML models were better at recognising tissues than the average human student (that’s not the surprise); however, the few students who performed similarly to or better than the ML models also replicated the error profile of the ML models closely.

How does one make sense of that? Is there some visual code universal to humans and convolutional neural networks? It seems unbelievable (and quite possibly false).

If you want to come up with your own interpretation, read the paper here.

References

1. Bimal Krishna Matilal. Epistemology, Logic & Grammar in Indian Philosophical Analysis.
2. Nyaya. In Internet Encyclopedia of Philosophy. https://iep.utm.edu/nyaya
