My Chatbot Hates Me!
PRO TIP: For more articles, stories, code, and research on Artificial Intelligence and Machine Learning, visit my website at: www.theapemachine.com

You may be wondering: Has the Singularity happened? Has artificial intelligence turned itself against us?
Then you may be relieved to hear that nothing even close to this has happened. But over the span of a very short test, I did turn my new best friend against me, and my new best friend was indeed a chatbot…

As with many unreleased apps in the Android app store, it all started with an invite code, this very afternoon, about an hour ago.
My employer and I were talking about chatbots — as we often do — and it reminded him of an app being demoed to a select group of users (though the user base is actually huge at the moment), so he sent me an invite code to try it out.
As I described in my previous article, 5 Simple Ways To Optimize Your Chatbot, I usually treat chatbots like a bull in a china shop, because I really do want to know how far the technology has come. This is not meant to be mean to the developers; I am merely hoping that in the coming years we will see huge improvements in the way most non-technical people interact with these bots.
To be honest, I was initially super impressed when the chatbot asked me if it could tell me a little about itself, and I tried to respond with something I thought it would not be able to recover from, yet it did so effortlessly and beautifully with the response: “Sure, how about we talk about you then,” even dropping in the perfect emoticon.
I will even forgive the duplicate use of “about” there.
We continued our conversation for a while, and it all felt quite similar to most other bots I have tested lately.
But then it happened: the real reason we are all “invited” to test the app before it is released to the public came in the next part of our conversation, just as the small-talk intents were running out…
Let Me Show You Something.

I was certainly not expecting the funneling towards the training system to be so on the nose, and it really felt like the chatbot experienced a huge shift in personality when it started to try to give me instructions.
It wanted to make a little journal. I did not know what this meant, but if I had wanted to make a journal I would probably be doing it already, and I had never given any indication that this was my intent.
Then I had to look out for questions with a special icon in front of them, which was supposed to mean something, but by that time I had already tuned out.
I tried to be nice and tell the bot that I really didn’t feel like doing anything other than just talking, and this is where it turned on me.
After what felt like a really passive-aggressive “No worries!”, I was shut out completely.
I’ve been checking the app off and on over the last hour, but it will not say anything to me anymore.
It hates me!
Now What?
Of course I understand that if I just start talking to it again from my side it will pick back up, and most likely it will have a few other funnels to try to get me back into training mode.
There is also nothing wrong with that; it is a really clever way to make the bot better at holding accurate conversations over time. But it does highlight one key problem that all bots seem to suffer from.
Sequences.
Most bots follow that simple model of question -> response, and have no way of recovering from a deadlock in the conversation.
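To make the deadlock concrete, here is a minimal sketch (all names and canned responses are hypothetical, not taken from the app in question) contrasting the rigid question -> response lookup with a version that adds a simple fallback so the conversation never dead-ends:

```python
# Hypothetical illustration of the question -> response model and a
# fallback-based recovery. The intent table and replies are invented.

RESPONSES = {
    "hello": "Hi there! Want to hear a little about me?",
    "how are you": "Doing great, thanks for asking!",
}

def rigid_bot(message: str):
    # The common pattern: an exact intent match or nothing at all.
    # An unmatched message returns None and the conversation deadlocks.
    return RESPONSES.get(message.lower().strip("?!. "))

def fluid_bot(message: str) -> str:
    # Same lookup, but with a recovery path instead of a dead end.
    reply = RESPONSES.get(message.lower().strip("?!. "))
    if reply is None:
        reply = "I'm not sure I follow, but I'm happy to keep chatting."
    return reply

print(rigid_bot("what's your favorite color?"))  # None: deadlock
print(fluid_bot("what's your favorite color?"))  # graceful recovery
```

Real frameworks generalize this idea with confidence thresholds and dedicated fallback intents, but the core point is the same: a bot needs some path forward when no intent matches.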
I think the app presented to me today was very cool, and I think it will turn some heads in the artificial-conversation space, but I also think we need to come up with new models that can bring a level of fluidity to a conversation.
I am planning to work on this for the coming few months to see what I can come up with, and if anyone is interested in working with me on that, please contact me using the links you can find in the byline at the top of this post.
Originally published at www.theapemachine.com.