Put on your robot costume and be the Minimum Viable Bot yourself!

Alex Weidauer
Aug 5, 2016 · 6 min read

If people don’t like your bot when it’s you behind the scenes, they definitely won’t like your AI-powered one. Don’t write any code until you’ve tried it yourself.



C-3PO and R2-D2 are great examples of manual bots. They were played by Anthony Daniels and Kenny Baker. So learn from the best and put on your robot costume!

Developers and startups are still crazy about bots despite considerable pushback in the months following F8 (see here, here and here). It’s still not clear what the next big thing will be, and it seems like every startup that doesn’t have product-market fit is now pivoting to building a bot as a last resort. But of course bots aren’t just scaled-down versions of apps or web services, so pivoting from an app to a bot cannot work for every use case (we wrote about use cases we think do work for bots here and here). The bots we’re seeing now are the equivalent of mid-90s websites.

So how should you find out if your use case is good for a bot? Get your hands dirty, don’t write any code and be the bot yourself for a few days! You’re not alone — even the big players (see Facebook M) and well-funded startups (see x.ai, or Operator) are doing it.

Over the past months, while building a number of bots (e.g. Databot and Baxter), we’ve worn the bot costume ourselves many times (and have seen friends do the same). Here are a few tips on how to be a good manual bot and get the most out of it.

Being 10x leaner: Why you should start with a manual bot.

Testing assumptions “manually” (without writing code) is not new. People have been doing this for a long time in many creative ways. For example, Product Hunt started as an email list before they actually built the website. This strategy lets you validate assumptions before investing in tech, but usually has the drawback that your data is biased. You might see totally different engagement for people reading a daily email versus going to a website. It can be hard to separate the signal from the noise.

That’s where bots are different. A manual bot is perceived exactly the same way by the user as the tech product would be — the interface and user experience are the same. Your users might not even be able to tell if they are talking to you or an AI. So you can be quite sure: If people don’t like your human-powered bot, they won’t like the AI-powered one.

Obviously, building a manual bot is not the only way to validate your assumptions — there are many others like mockups, user interviews or market research. But we think that it’s a unique chance to answer the most important questions as quickly as possible:

  1. Do people actually want what you’re offering?

So let’s get started!

How to run a manual bot test

Planning your manual bot test is important to get the most out of it.

Let’s assume you’ve thought of a potential use case for a bot and you validated your assumptions as much as you can without having something to show people. In the app world, it would be time to start coding and building a product (MVP). In the bot world, this is your plan of action:

Phase 1: Plan

A little bit of planning not only helps you run a more efficient test but also gives some structure to how you interpret your results. Here are some things to think about:

  • Goal: What do you want to achieve with this bot? Do you just want to show that people engage with it, or do you want to improve a specific number like conversion rate? Write down the assumptions you want to validate.
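If your goal is to improve a specific number like conversion rate, it helps to decide up front how you will tally it per conversation. A minimal sketch in Python (the field names and the way "converted" is labeled are our own assumptions for illustration, not something the post prescribes):

```python
# Tally manual-bot test outcomes and compute a conversion rate.
# "converted" is a hypothetical label you assign to each conversation
# yourself, e.g. the user completed the core action you offer.

def conversion_rate(conversations):
    """conversations: list of dicts like {"user": ..., "converted": bool}."""
    if not conversations:
        return 0.0
    converted = sum(1 for c in conversations if c["converted"])
    return converted / len(conversations)

# Example tallies from a few manual-bot conversations (made-up data).
test_log = [
    {"user": "anna", "converted": True},
    {"user": "ben", "converted": False},
    {"user": "cara", "converted": True},
    {"user": "dev", "converted": False},
]

print(f"conversion rate: {conversion_rate(test_log):.0%}")  # 50%
```

Even a tally this simple turns "do people like my bot?" into a number you can compare across iterations of the test.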

Phase 2: Test

Don’t overthink this part and just let it happen. 🙂 Have a good description of the service / use case you cover and see how people initiate the conversation. The beauty of a manual bot is that you don’t need to think about features in advance and can just see where the conversation takes you. People will literally tell you what they want. That’s much easier than trying to figure it out from button clicks.

Every conversation is pure gold, so be sure to keep it for future reference! Try to learn as much about your users as possible and ask follow-up questions right away. The product and feedback channel are the same, and that’s actually pretty amazing.
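One lightweight way to keep every conversation for future reference is to append each message to a timestamped log file as you relay it. A sketch under the assumption that you copy messages over by hand; the JSONL format and field names here are our choice, not something the post prescribes:

```python
import json
import time

LOG_FILE = "manual_bot_log.jsonl"

def log_message(user, sender, text, path=LOG_FILE):
    """Append one message as a JSON line; sender is 'user' or 'bot'."""
    entry = {
        "ts": time.time(),   # Unix timestamp, so messages stay ordered
        "user": user,        # who the conversation is with
        "sender": sender,    # 'bot' means you, in the robot costume
        "text": text,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Start with a fresh log for this demo run.
open(LOG_FILE, "w", encoding="utf-8").close()

log_message("anna", "user", "Can you book me a table for two?")
log_message("anna", "bot", "Sure! Which day and time?")

with open(LOG_FILE, encoding="utf-8") as f:
    print(sum(1 for _ in f), "messages logged")
```

A plain append-only text file is enough at this stage; you can always load it into a spreadsheet or script later when you analyze what people actually asked for.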

We also found that it’s best to tell people that they’re talking to a human if they ask about it. Otherwise, no reason to bring it up.

Phase 3: Iterate

Based on what you learn in the test phase (see some examples in this excellent post), you can iterate on the plan. You might need to switch to a different platform, change the format of your messages, or explain your offering differently.

Repeat that cycle until…

  1. … you hit your goals and people like your bot → Get in touch with us — we can help you build it! 🙂

Share your story!

We’re really excited about this super lean way to test product ideas and hope to see many more manual bots in the future. It’s an incredibly easy way to understand your use case.

What’s your experience testing bot products? Are there any tools you love? We’d love it if you shared your story here in the comments.

Thanks to Alan, Moritz and Schuyler for your feedback on this post!


About LastMile

We’re a technology company that builds AI-powered conversational software, allowing users to naturally talk to a bot in messaging apps like Facebook Messenger, Slack, or WhatsApp. Dialogues are entirely based on machine learning: they don’t require developers to hard-code conversational flows and automatically get better with every conversation. We also help selected clients with everything bot-related: strategy, concept and production using our technology. Get in touch for a free consultation!


Interested in joining us to build more human-like bots? We’re hiring!

Rasa Blog

Open source conversational AI for enterprise

Alex Weidauer

Written by

Co-Founder & CEO @Rasa_HQ | We’re hiring!
