model.fit() part IV: Deep learning with a toddler

Parikshit Sanyal
Published in Significant others
3 min read · Dec 2, 2021

My three year old, aka Turing, has finally grasped the concept of adjectives. It took a lifetime of effort (three years, to be exact) to get here. Earlier, when shown a red pencil, Turing would alternately categorise the thing as ‘red’ or ‘pencil’, sometimes both. Over time, Turing has realised that ‘red’ might apply to a whole lot of different things, such as a shirt, or a mattress, or even an apple. Nowadays Turing revels in such ‘adjectives’, deploying them quite emphatically, and will absolutely refuse anything other than a ‘green pillow’ to sleep on. Although, when faced with combinations like ‘dark red’, Turing gets visibly confused; the duality of ‘red’, that it can play both an adjective and a noun depending on context, seems to upset Turing. One adjective at a time, please!
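In model.fit() terms, Turing has graduated from multi-class classification, where every object gets exactly one label, to multi-label classification, where ‘red’ and ‘pencil’ can fire together. A minimal Keras sketch of the difference (the feature size, label vocabulary and architecture here are all hypothetical, not anything Turing was trained on):

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical setup: 64-dimensional object features and a label
# vocabulary of ten words ('red', 'green', 'pencil', 'pillow', ...)
num_features, num_labels = 64, 10

# Multi-class (the old Turing): softmax forces exactly ONE label,
# so a red pencil is either 'red' or 'pencil', never both
multi_class = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(num_features,)),
    layers.Dense(num_labels, activation="softmax"),
])
multi_class.compile(optimizer="adam", loss="categorical_crossentropy")

# Multi-label (the new Turing): independent sigmoids let 'red'
# and 'pencil' both light up for the same object
multi_label = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(num_features,)),
    layers.Dense(num_labels, activation="sigmoid"),
])
multi_label.compile(optimizer="adam", loss="binary_crossentropy")

The ‘dark red’ confusion maps neatly too: compositional labels need something richer than a flat label vocabulary.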

Turing can now recognise handwritten digits as well as any deep learning model, and outperforms most on the MNIST dataset. If I write

1

… Turing will immediately output ‘one’, verbally — of course. However, if I write

1 1

… Turing hesitates over the second one. That numbers can repeat themselves, unlike, say, a ‘Hello Kitty’ dancing robot figurine (which is one and only), is still beyond Turing’s comprehension. Turing would simply refuse to acknowledge the presence of the second ‘1’. However, after a while, once the whole incident has sublimated from Turing’s memory, digit recognition gets back to normal.
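For the record, here is roughly what Turing’s silicon competition looks like: the standard beginner’s MNIST classifier in Keras (the usual tutorial architecture, nothing specific to this article). A feedforward classifier is stateless, so the second ‘1’ looks exactly like the first; Turing, evidently, carries state.

from tensorflow import keras
from tensorflow.keras import layers

# Load the MNIST handwritten digit dataset (60,000 training images)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small dense network: flatten the 28x28 image, one hidden layer,
# ten softmax outputs (one per digit)
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Unlike Turing, the model classifies each '1' independently,
# however many times it repeats
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)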

Turing is fine with the concept of ‘addition’, as long as you sing it aloud. Each line has to come one pitch higher than the last

C “One plus one is two”
C# “two plus one is three”
D “three plus one is four”
D# “four plus one is five”

At this point, Turing has already gone soprano, and realises ‘five’ is one note too sharp. So every once in a while, Turing will simply refuse to utter ‘five’ and instead sing

D# “four plus one is three”

… which might be true in some otherworldly p-adic mathematics space, but not here & now. Sadly, such moments occur to Turing only in brief epiphanies.

Turing is very particular about a set of self-imposed ‘rules’, almost to the point of obsession. Using a hand sanitiser, for one, is a must once you get home. (It is no less than comical to see Turing pressing an empty sanitiser can, pouring the emptiness onto the palms, and rubbing the palms with the said emptiness; the process is always more important than the outcome.) And Turing would never allow me to go for my afternoon stroll in anything but my black tracksuits (the ‘black’ part is non-negotiable). And Turing would never leave home without the stuffed baby-spiderman. And Turing will religiously bolt the door of the cupboard after taking a pair of shoes out. In a lifetime of three years, Turing has already laid a meshwork of rules, SOPs, processes and outcomes.

And I mean a meshwork

All deep learning models fail sometimes, and Turing is no exception. It is especially tragic when Turing mistakes a peapod for a green chilli. (Well, you have to admit that both are green and look quite similar, enough to confound an inadequately trained eye.) The letters of the Bangla alphabet are another serious challenge. Turing is quite adept at recognising letters which look distinct

ক — ‘ka’

খ — ‘kha’

Turing is also fine with letters that look somewhat similar but sound wildly different

চ — ‘cha’ (pronounced as in chowmein)

জ — ‘vargiya ja’

However, Turing is totally lost between letters which look and sound similar

চ — ‘cha’

ছ — ‘chha’
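In machine learning terms, চ and ছ are a classic confusable pair: the errors would cluster just off the diagonal of Turing’s confusion matrix. A toy sketch with an entirely made-up session transcript (none of these counts are real data):

from collections import Counter

# Hypothetical labelling session: (true letter, Turing's answer)
session = [
    ("ka", "ka"), ("kha", "kha"),      # distinct letters: easy
    ("cha", "cha"), ("ja", "ja"),      # similar-looking, distinct-sounding: fine
    ("cha", "chha"), ("chha", "cha"),  # look AND sound similar: confused
    ("chha", "chha"), ("cha", "chha"),
]

# Tally the confusion matrix: counts[(true, predicted)]
counts = Counter(session)
labels = ["ka", "kha", "cha", "ja", "chha"]
print("true/pred", *labels, sep="\t")
for t in labels:
    print(t, *[counts[(t, p)] for p in labels], sep="\t")

Off-diagonal mass concentrated in the (cha, chha) cells is exactly the ‘looks and sounds similar’ failure mode.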

Most of Turing’s three years have been spent in isolation, no thanks to a pandemic. It would be interesting to see how Turing reacts to other neural networks and, most importantly, to novel datasets.
