Can You Eat a Chair?
Here’s a quiz for you. What do these questions have in common?
Can you eat a chair?
Is a brick alive?
Is a shout louder than a whisper?
Which is larger, a cat or a train?
Give up? The answer is that they have all been asked by Mitsuku’s users. The answers to this type of question are obvious to even a small child but very difficult for a chatbot, as it has no knowledge of the world. To a chatbot, the questions make as little sense as asking a person whether they prefer “flibber” or “zobber”. Unless you know what these words mean, it’s impossible to answer.
Unfortunately, many people like to ask questions like this to try and fool chatbots. They are also common in many competitions including the Loebner Prize. If we want to make our chatbots more convincing, we need to find a way to handle these questions.
Hard code the answers?
Sure, one solution would be to hard code responses for these. It would only take a few seconds to code a response to “Can you eat a chair?” but what if someone asks, “Can you eat a tree?”. What about an apple, a river, a T-shirt or any of the thousands of other common objects in the world? It would take years to think of all the possible questions and answers, and then you would have to move on to other questions such as, “Where is the best place to find a chair/dog/house?” etc.
Ideally, we need to cover each type of question in just one category and I’ll show you how to do that. I use a chatbot language called AIML but my techniques can easily be adapted for whichever language you prefer.
What objects do we need?
The first place to start is to find a list of common objects that people may ask about. Nothing too obscure at first, as not many users will ask about hydrocarbons but lots might ask about dogs.
When I was first making my list, I started by checking out websites that list common nouns for children. These contain words such as dad, cup, river and so on, which are much more likely to be asked about. As time goes on, your users will ask about different words, so you need to make a note of these too. I started with around 100 objects and now have over 3,000.
Attributes of objects
In order for the chatbot to be able to answer questions about the object, we need to educate it about each one. This is far more challenging than educating a child. If I wanted to teach a small boy what a tree was, I would show him a tree and he would be able to see for himself that it is about as tall as a house, it’s got green leaves, birds live in it, it’s made of wood etc.
The chatbot has no senses, so this option is unavailable. We need to teach the bot about a tree manually. To do this, we make a list of attributes for the tree: things like its size and colour, what it’s made of, what use it has, where we are likely to find it, and whatever else you think people will ask about. I call this my common sense database.
Setting up the common sense database
I found the easiest way to do this was to make an Excel sheet with a row for each object (in alphabetical order for ease) and a column for each attribute. Here is a small sample of mine:
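An illustrative fragment of such a sheet might look like this (the values below are examples pieced together from the ones mentioned in this article, not Mitsuku’s actual data):

```text
object    syl  size  madefrom  whatisit   where
ant       1    1     flesh     insect     on the ground
burger    2    2     meat      food       in a cafe
chair     1    3     wood      furniture  in a house
fireman   3    4     flesh     person     at a fire station
fish      1    2     flesh     animal     in a river
tree      1    5     wood      plant      in a park
```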
The “syl” column indicates how many syllables each word has. The “size” attribute is a relative size to the other objects, so an ant may be a size 1 and a planet may be size 20. In the example above, the chatbot can tell that a fireman (size 4) is bigger than a fish (size 2).
Using this file, I can write a bit of code to extract the details into AIML categories. Here is a sample of my “tree” category.
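A generated category for “tree” might look something like this sketch (the attribute names come from the examples in this article; the exact predicates Mitsuku uses may differ):

```xml
<category>
  <pattern>XOBJECT TREE</pattern>
  <template>
    <think>
      <!-- Store each attribute from the spreadsheet as a predicate -->
      <set name="word">tree</set>
      <set name="syl">1</set>
      <set name="size">5</set>
      <set name="madefrom">wood</set>
      <set name="whatisit">plant</set>
      <set name="where">in a park</set>
    </think>
  </template>
</category>
```

Everything happens inside a `<think>` tag, so calling “XOBJECT TREE” loads the attributes silently without producing any output.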
Once all the categories are loaded, you can then create categories to handle the queries.
Where can I find a tree?
If a user asks, “Where can I find xxx?”, where xxx is an object, the chatbot needs to load all the information it has about the object in order to provide an answer. It can do this by calling “XOBJECT xxx”. It now knows all about trees and so can fill in a template like this:
The best place to find a <get name="word"/> is <get name="where"/>.

For a tree, this fills in as: “The best place to find a tree is in a park.”
We can now handle any “Where can I find xxx?” question using just this one template instead of having to hard code thousands of categories. However, if the user asks about an object we don’t know about, we need a catchall category with a pattern of “XOBJECT *” that provides a generic response such as “Have you tried looking on the internet?”
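One way to wire this up is sketched below (in this arrangement the catchall marks the object as unknown and the generic reply comes from the condition; Mitsuku’s real categories may be organised differently):

```xml
<category>
  <pattern>WHERE CAN I FIND A *</pattern>
  <template>
    <!-- Load the object's attributes first, discarding any output -->
    <think><srai>XOBJECT <star/></srai></think>
    <condition name="word">
      <li value="unknown">Have you tried looking on the internet?</li>
      <li>The best place to find a <get name="word"/> is <get name="where"/>.</li>
    </condition>
  </template>
</category>

<category>
  <pattern>XOBJECT *</pattern>
  <template>
    <!-- Catchall: flag objects that are not in the database -->
    <think>
      <set name="word">unknown</set>
      <set name="whatisit">unknown</set>
    </think>
  </template>
</category>
```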
The method above handles simple queries but how can we answer, “Can you eat a xxx?”. We don’t need an attribute for whether an object is edible or not, as we can use our “madefrom” or “whatisit” attribute to work this out.
In our common sense database, let’s assume a chair is made from wood and is furniture. We know that we can’t eat wood or furniture and so we can’t eat a chair. The category to work this out is a little trickier. First of all, we load up the attributes of a chair by calling “XOBJECT xxx”. Now we need to teach the chatbot what is edible by running it through this category:
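A sketch of that category (I’ve called it “XEDIBLE” here for illustration, and it assumes “whatisit” is set to “unknown” for objects not in the database):

```xml
<category>
  <pattern>XEDIBLE *</pattern>
  <template>
    <!-- Assumes XOBJECT has already been called for this object -->
    <condition name="whatisit">
      <li value="food">YES</li>
      <li value="drink">YES</li>
      <li value="fruit">YES</li>
      <li value="unknown">UNKNOWN</li>
      <li>NO</li>  <!-- anything else, e.g. furniture, drops through to here -->
    </condition>
  </template>
</category>
```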
The chair is classed as furniture and so will drop down the list until it reaches the default “NO”. Had we asked if we can eat something like a burger, this would have been “YES”, as a burger is classed as “food”. The “UNKNOWN” covers any objects not in our database.
We can then do a conditional statement to check whether the object is edible and fill in the best match as an answer for the user, remembering to put a catchall category for objects it doesn’t know about:
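The user-facing category might look like this, calling the edibility check and branching on the result (pattern and predicate names are my own for illustration):

```xml
<category>
  <pattern>CAN YOU EAT A *</pattern>
  <template>
    <think>
      <!-- Load the object's attributes, then work out whether it is edible -->
      <srai>XOBJECT <star/></srai>
      <set name="edible"><srai>XEDIBLE <star/></srai></set>
    </think>
    <condition name="edible">
      <li value="YES">Sure! It's made from <get name="madefrom"/> so you can eat it.</li>
      <li value="NO">Not really, as a <get name="word"/> is made from <get name="madefrom"/> rather than any kind of food.</li>
      <li>I guess if you put enough ketchup on it, anything is edible.</li>
    </condition>
  </template>
</category>
```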
This provides responses like:
User: Can you eat a chair?
Bot: Not really, as a chair is made from wood rather than any kind of food.
User: Can you eat a burger?
Bot: Sure! It’s made from meat so you can eat it.
User: Can you eat a zoober?
Bot: I guess if you put enough ketchup on it, anything is edible.
So from just this category, we have taught the bot how to work out whether you can eat an object or not and provide suitable responses. The principles here can be amended for other common questions. For example, “Is a brick alive?” could be answered by checking the value of an attribute like “whatisit” and comparing it against attributes you know are alive:
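An “alive” check could be sketched the same way as the edible one (again, “XALIVE” and the listed values are illustrative):

```xml
<category>
  <pattern>XALIVE *</pattern>
  <template>
    <think><srai>XOBJECT <star/></srai></think>
    <condition name="whatisit">
      <li value="animal">YES</li>
      <li value="insect">YES</li>
      <li value="person">YES</li>
      <li value="plant">YES</li>
      <li value="unknown">UNKNOWN</li>
      <li>NO</li>  <!-- bricks, chairs and the like fall through to here -->
    </condition>
  </template>
</category>
```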
Comparing two objects
A common question in contests is something like, “Is x bigger than y?” or “Which is faster, x or y?” where x and y are two objects like a train and a leaf or an apple and Jupiter. To answer these, we need to load up the attributes for each object and compare them.
In our example of “Is a fireman larger than a fish?”, we first check the relative sizes of each object and set a variable (checksize) to contain the two sizes. In our database, a fireman was a size 4 and a fish was a size 2. This will set “checksize” equal to “4 2”.
We can now use a conditional statement to compare the sizes and provide a suitable reply. Here is an extract:
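AIML has no arithmetic, so one way to compare the two numbers is a long condition listing the possible size pairs. An illustrative extract (the “word1”/“word2” predicates holding the two object names are my own naming):

```xml
<condition name="checksize">
  <li value="2 2">In my opinion, a <get name="word1"/> and a <get name="word2"/> are about the same size.</li>
  <li value="4 2">In my opinion, a <get name="word1"/> is bigger than a <get name="word2"/>.</li>
  <li value="2 4">In my opinion, a <get name="word2"/> is bigger than a <get name="word1"/>.</li>
  <li value="8 1">I assume you are testing me, as a gigantic <get name="word1"/> is massively bigger than a <get name="word2"/>.</li>
  <!-- ...one entry for each pair of sizes... -->
  <li>I'm not sure which one is bigger.</li>
</condition>
```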
As our “checksize” was set to “4 2”, the following would be produced:
User: Is a fireman larger than a fish?
Bot: In my opinion, a fireman is bigger than a fish.
We can provide even better answers by amending the replies if the objects vary greatly in size. In my database, a desert has a size of 8 and a mosquito is set to 1.
User: Is a desert larger than a mosquito?
Bot: I assume you are testing me, as a gigantic desert is massively bigger than a mosquito.
The above techniques work great for questions like “Can you eat a chair?” but what about “Can you eat chairs?”. We don’t have a “chairs” object and so can’t answer this. To get around this, one of the attributes I use is “plural”, which allows users to ask, “What is the plural of xxx?”. When setting up the initial categories, I include an extra category for each object called “XP2S” (plural to singular). If someone asks about an object and the chatbot can’t find it, it can check whether the object is a plural and fill out the attributes that way.
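For instance, an “XP2S” category for chairs might simply redirect to the singular object (a sketch):

```xml
<category>
  <pattern>XP2S CHAIRS</pattern>
  <template><srai>XOBJECT CHAIR</srai></template>
</category>
```

Asking “XP2S CHAIRS” then loads all of the chair’s attributes, just as “XOBJECT CHAIR” would.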
These techniques allow the chatbot to answer many questions about many different objects with just a few categories. In a similar way to a human, once you have some knowledge about an object, you can answer questions about it. Any future questions can be handled by creating appropriate categories.
Top Tip: Some users will deliberately ask questions to trip up these methods. Questions such as “Can you eat a chocolate chair?” and “What sound does a dead cat make?” can be handled with patterns such as “CAN YOU EAT A CHOCOLATE *” and coding suitable responses. These are not asked regularly and I deal with them on an individual basis as I find them in the logs.
You can also attempt to work out unknowns from the facts you already know. If someone asks the chatbot, “Is a zoober bigger than an ant?”, the bot doesn’t know what a zoober is, but it knows an ant is pretty small, so chances are this mysterious zoober is probably bigger.
A question once asked in a contest was to name something red that starts with the letter T. To answer this, I went down each category and checked the first letter of each object and its colour. If they matched T and red, I produced the answer and then carried on down the list.
User: Name something red that starts with T
To answer something so obscure by creating a hard coded category would be pointless, as it’s highly unlikely that this question would be asked again.
To finish, here are a few examples from Mitsuku’s logs where these techniques have proved useful. I’ll go through the jokes module in another blog. Some of the answers surprise even me when I see them. The last one demonstrates the bot working with an unknown object, “The Taj Mahal”.
To bring the best bots to your business, check out www.pandorabots.com or contact us at email@example.com for more details.