A hackathon for a chatbot
How a team of 4 implemented a conversational bot on the Facebook Messenger platform in 3 days
Three days to implement a working prototype of a chatbot in Facebook Messenger. The goal was clear: have our bot help the user refine her search for a new car through a “show me, don’t tell me” metaphor.
This means we show the user cars for sale with their main characteristics, in the casual context of a chat window, giving her the chance to refine that search until she finds what she is looking for, at which point she can continue the process in the AutoScout24 portal.
Please, meet Scouty.
Our solution was based on C# on .NET Core, hosted on Azure, acting as a webhook for the Facebook Messenger platform.
We had a preliminary version that wired everything together: Facebook Messenger platform communicating with our chatbot app in Azure, in turn connected to the AutoScout24 RESTful API.
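To give an idea of that wiring: before Messenger delivers any events, it verifies the webhook with a handshake GET carrying `hub.mode`, `hub.verify_token` and `hub.challenge`, and the webhook must echo the challenge only if the token matches. Here is a minimal sketch of that check (in TypeScript rather than our C#; `VERIFY_TOKEN` is a placeholder value):

```typescript
// Sketch of the Messenger webhook verification handshake.
// VERIFY_TOKEN is a placeholder; it must match the token configured
// in the Facebook app's webhook settings.
const VERIFY_TOKEN = "my-secret-token";

interface VerifyQuery {
  "hub.mode"?: string;
  "hub.verify_token"?: string;
  "hub.challenge"?: string;
}

// Returns the body to answer with HTTP 200, or null to answer 403.
function handleVerification(query: VerifyQuery): string | null {
  if (
    query["hub.mode"] === "subscribe" &&
    query["hub.verify_token"] === VERIFY_TOKEN
  ) {
    return query["hub.challenge"] ?? "";
  }
  return null;
}
```

Once the handshake succeeds, Messenger POSTs the actual message events to the same endpoint, which is where the bot logic below comes in.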
The task was basically to implement the chatbot intelligence and define the proper user experience in the Messenger platform.
What we were looking for was a so-called “conversational AI”. We knew we had little time, and none of us had a background in AI. After a quick search I found this article, which states:
Chatbot developers usually use two technologies to make the bot understand the meaning of user messages: machine learning and hardcoded rules.
Machine learning is hard. It requires a framework you know how to use, a good initial sample dataset, and training time. We had the IBM Watson technology available to us, as IBM was one of the sponsors of the event. It looked really powerful, but we politely declined to use it, given the little time we had, our lack of knowledge of the platform and AI in general, and the limited scope of our solution.
We decided to focus on hardcoded rules, but even then, providing a good set of rules is no trivial task, so we narrowed down the problem by relying heavily on Quick Replies.
Quick Replies are buttons that serve as possible continuation messages for the user. If the user taps one of these buttons, the corresponding text is sent as if she had typed it directly. In fact, there is more to it: we can define a configurable payload inside the button, which is passed back to us. This simplifies the work enormously, because we no longer need to identify intent from free text; we are handed the context for the next decision directly.
For instance, after being shown a car as a proposal, the user is presented with 3 buttons showing possible replies, like “too expensive”, “too many km” or “something else”. This is very convenient for the user, who can follow up in the conversation without typing, and very convenient for us, because it simplifies the problem dramatically.
In fact, we didn’t even need to keep internal state for the conversation: the configurable payload was derived from the previous step in the conversation, so when we received the reply, we could pick up from where we had left off. This also simplified our solution significantly.
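As a sketch of that idea (TypeScript rather than our C#, and the `REFINE|…` payload encoding is illustrative, not our exact scheme), a Send API message with Quick Replies packs the previous step's context into each button's payload, so no server-side conversation state is needed:

```typescript
// Sketch of a Messenger Send API message body with Quick Replies.
// The payload string is an opaque value the platform echoes back to
// us when the user taps the button; here it encodes the car shown
// and which attribute the user wants to refine (illustrative scheme).
interface QuickReply {
  content_type: "text";
  title: string;   // button label shown to the user (max 20 chars)
  payload: string; // echoed back to the webhook (max 1000 chars)
}

function carProposalMessage(recipientId: string, carId: string) {
  const replies: QuickReply[] = [
    { content_type: "text", title: "Too expensive",
      payload: `REFINE|${carId}|price` },
    { content_type: "text", title: "Too many km",
      payload: `REFINE|${carId}|mileage` },
    { content_type: "text", title: "Something else",
      payload: `REFINE|${carId}|other` },
  ];
  return {
    recipient: { id: recipientId },
    message: {
      text: "What do you think of this one?",
      quick_replies: replies,
    },
  };
}
```

When the reply event arrives, splitting the payload on `|` recovers the full context of the previous step, which is what let us stay stateless.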
Finally, we even added some polishing touches:
- typing indicators, to let the user know that we were searching for the next answer, since the request to the AutoScout24 backend took a couple of seconds, and
- some very simple hardcoded rules to react properly to a simple greeting message.
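The typing indicator is just a `sender_action` request against the Send API, separate from any message. A minimal sketch of the request body (TypeScript; the actual POST goes to the Send API endpoint with the page access token):

```typescript
// Sketch of the Send API "sender_action" body used to toggle the
// typing indicator while a slow backend request is in flight.
function typingIndicator(recipientId: string, on: boolean) {
  return {
    recipient: { id: recipientId },
    sender_action: on ? "typing_on" : "typing_off",
  };
}
```

We would send `typing_on` before calling the AutoScout24 backend; sending the actual reply message turns the indicator off again.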
The team was:
- David, who focused on building the appropriate queries for the AutoScout24 backend and mapping the results into our model objects.
- Yann, who had a clear vision of what the UX should be like, contributed some code in the logic of the bot, created the assets for presenting our solution to the jury, and focused on preparing the first pitch.
- Adrian, who focused mostly on the interaction with the Messenger app and the logic of the Quick Reply buttons.
- Terence, who had set up the starting solution and focused on coordinating and doing everything needed at the framework level, such as proper logging of messages between our web app and the Messenger platform.
Yann is a PM, although with a programming background, and I am an iOS dev with no experience in the .NET software stack, so we were both worried that we might not be of much actual help. Nevertheless, we managed to write some decent code during the 3 days and contribute productively to the team.
In particular, having plenty of experience in OO, including Java, I felt I could pick up the C# code quite fast. The Visual Studio IDE was helpful, and features like autocomplete made it easier to choose the right variable or method. Deploying to Azure was very easy directly from Visual Studio.
The morale was high and the camaraderie excellent. We faced several problems (more on that later), but we all helped each other and were always ready for the next challenge coming our way, be it a nasty bug in our logic, a deployment problem, preparing the pitches, etc.
A particular highlight for me was our break in the temple of food in Bern, tibits, for a refreshment before a longer evening session on the second day.
There were initially 21 teams selected, of which 19 remained to deliver the 1-minute pitch at the beginning of the third day. That pitch determined the 12 teams who made it to the final session: a 5-minute pitch including a live demo.
Yann delivered an inspiring and super-focused 1-minute pitch, showing slides on an iPad Pro in his hands, which in my opinion allowed for very close contact with the jury, as opposed to using a video or slides on the big screen in the background, like other teams did.
We didn’t really get time for a proper rehearsal of the final and decisive 5-minute pitch, because we ran into last-minute problems preparing the slides and setting up a backup video in case the live demo failed. We went on stage with no previous rehearsal, which resulted in us being cut short by the relentless timer. However, the live demo worked flawlessly and got quite a reaction from the audience, which was of course rewarding after 3 days of intense work.
This was the first hackathon experience for all of us. We normally work in different teams, although we do interact regularly. We had no previous experience with AI or bots. This was a primer in many ways for all of us.
I would like to highlight the team spirit that we developed organically. It occurred to us that this is what being in a young startup must feel like: all crunched in a small table, frantically pounding at our laptops, with constant interaction and ceaseless decision making, but without much time to ponder the available options. It was exhilarating, stressful and fun.
There is some romanticism to that feeling of a small team spending all day on the same problem and arriving at a working solution. I am super proud of the team and what we achieved together in just 3 days. We didn’t make it into the first 3 places, which were awarded prizes, but it nevertheless felt like a mission accomplished.
We faced several problems but I would say they all just boiled down to the same cause: coordination. That’s right, I don’t think we had any significant problems due to the technology or the platforms or the technical decisions. All of our difficulties were due to reaching a coordinated solution between the four of us. We needed to challenge our assumptions openly time and time again, to make sure that we were all heading in the same direction, with the next steps, with the logic, with the model, with the buttons, with the reaction to the user actions, etc.
I just think it is inherently hard to write code in a tight team configuration, because there is no room for unspoken assumptions.
In particular, we faced lots of merge conflicts. We were using git to sync our code. Although Terence had done a great job defining different classes with clear scopes, we all tended to touch the same places, especially on the first day, as most of the code seemed to gravitate towards the BotIntelligenceSystem module.
The fact that I started using rebase while they were using merge didn’t quite help either. I eventually settled on merge, and we extracted classes into different source files as soon as they became clear, which mitigated the problem. I am a big fan of moving stuff around, extracting methods and renaming aggressively, as it helps me get a better understanding of the abstractions, and I had to restrain myself continuously in case this led to further merge conflicts.
On the evening of the first day I forgot to push my last changes. By the time I wanted to sync them, the merge conflicts were such a mess that I ended up throwing away some hardcoded rules I had implemented the evening before. Dealing with conflicts in git is hard!
We did our bit of loose pair programming in tricky moments of confusion, but for the most part we were working simultaneously on different parts of the prototype. David wrote some unit tests for sanity, because it was not very convenient to validate every bit of logic on the deployed server, or even in the locally running app.
One thing I learned from this experiment is that, no matter how much preparation work you do, you could always do with some more.
Before the hackathon, I had already had a quick look at the Facebook Messenger bot integration by remixing this sample project on Glitch. I had it running in no time, the web app being hosted seamlessly by Glitch with no configuration whatsoever. Neat! This allowed us to get a better understanding of the interface for the bot and what the possibilities of the platform were.
Other areas where we could have benefited from previous work were AI in general and conversational bots in particular, and, even more importantly, a setup that would have allowed us to easily validate our code locally without deploying live, and a way to better manage merge conflicts in the code.
All in all, it was a tiring but exciting experience and I am proud of what we achieved as a team. I would like to publicly thank my colleagues Terence, Yann and David for 3 intense days together.
Here is to the next one! 🥂