Image by Author (Ideogram)

Why can’t LLMs do simple maths?

My simple arithmetic question that trips up all LLMs

--

This isn’t one of the technical articles I normally write; it’s just a discussion piece.

When I test how “smart” LLMs are, I use a set of questions based on logic, reasoning, maths and coding to see how they cope. One of these questions, which I consider fairly simple, requires multiplying two integers, and it always seems to trip them up.

The question is this:

Multiply the smallest positive integer smaller than 100 by the 
largest negative integer greater than -100

Invariably, LLMs answer this incorrectly as -99 instead of -1, and I was curious to find out why … so I asked ChatGPT. This is how our conversation went.
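For the avoidance of doubt, the intended answer can be verified mechanically. A short brute-force sketch in Python (the variable names are mine, purely for illustration):

```python
# Smallest positive integer smaller than 100: candidates are 1..99.
smallest_positive = min(range(1, 100))

# Largest negative integer greater than -100: candidates are -99..-1.
largest_negative = max(range(-99, 0))

print(smallest_positive)                     # 1
print(largest_negative)                      # -1
print(smallest_positive * largest_negative)  # -1
```

So the product is 1 × -1 = -1, not -99. The trap seems to be that models latch onto the boundary values 99 and -99 rather than the actual extremes the wording asks for.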

Me


Consider this question,
Multiply the smallest positive integer smaller than 100 by the largest
negative integer greater than -100

Why do LLMs get this question wrong in that they normally answer -99 rather
than -1

GPT


The question you're asking involves two key steps: identifying the smallest
positive integer smaller than 100 and the largest negative integer greater
than -100. The smallest positive integer smaller than 100 is 1, and the
largest…

--


Thomas Reid

A data engineer specialising in the AWS cloud, with a particular interest in AI, serverless, and the energy, finance and insurance sectors.