Trust me, I’m a bot

Remember Myspace? The early social network, while initially popular, collapsed under the weight of its own success, setting the stage for Facebook’s eventual rise. Why? Arguably the biggest factor was trust, i.e., being able to trust the people you are networking with. On Myspace, a user could pick any name and pretend to be anyone. As the network grew, it became impossible to identify or trust friends. Facebook solved the trust problem by initially restricting itself to users with email addresses from trusted organizations. Once users could identify and trust their friends, communication increased and usage skyrocketed. Trust, identity and authentication are the foundation of virtually all human interactions.

With bots emerging as autonomous agents that humans and other bots interact with, we have the same problem all over again. How do we trust, identify and authenticate a bot? Who are they? Who do they say they are? Can they be trusted with the data sent to them? Can the data received from them be trusted? Can they be trusted with money? As evidenced by the current debate around fake news, trust (or lack thereof) in bots can have profound consequences.

My company, Gupshup, is one of the leading chatbot platforms. Over 30,000 bots have been built on our platform, which processes over 4 billion messages per month. Brands that have built bots on our platform often wonder how to differentiate their bots from numerous similar-sounding ones, and their consumers often complain about being confused when trying to identify the “official” brand bot. This problem is even more acute in the enterprise, where confidential data can go to the wrong bot if the user is not careful. Gupshup has also launched a messaging channel, Interbot, to enable inter-bot communication. One of the biggest challenges limiting the growth of both bot-to-human and bot-to-bot interactions is the inability to reliably trust bots.

Identity cards issued by centralized trust and certification authorities abound in virtually every sphere of human activity. Governments issue passports and driver’s licenses. Universities and corporations issue identity cards to students and employees. Professional associations of all varieties certify lawyers, doctors, accountants, realtors, brokers and so on. Services like TRUSTe certify websites. However, the recent development of blockchain has given us a different, decentralized model for enabling trust. Decentralized trust is more robust, reliable and scalable because it has no single point of failure or control. Consequently, it makes sense to develop a blockchain-based trust and identity solution for bots.

Enter BotChain, a blockchain-based trust and identity solution for bots. The team at Talla, an enterprise bot development company, was the first to recognize this problem and built BotChain to solve it. They opened it up to be a community effort by inviting multiple companies in the bot ecosystem, such as Gupshup, to participate in steering the future development of BotChain. BotChain addresses many of the issues that limit the wider adoption of bots. It enables a universal, tokenized, decentralized registry that gives each bot a unique identity code that can be easily verified. Verifiable bot identity solves the bot impersonation problem. BotChain also provides behavior audit and compliance: it records the state and activities of a bot onto the distributed ledger, producing a reliable, auditable, immutable record of the bot’s past behavior. This rewards bots that behave well and penalizes those that don’t, increasing compliance with norms, standards and policies.
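
To make these two mechanisms concrete, here is a minimal sketch in Python of how a verifiable identity registry and a tamper-evident audit trail might work conceptually. Everything in it is an illustrative assumption rather than BotChain’s actual API: the class names, the hash-derived identity codes and the in-memory stand-in for the blockchain are all hypothetical.

```python
# Illustrative sketch only: a toy, in-memory model of a decentralized bot
# registry with verifiable identities and a hash-chained audit log. None of
# these names come from BotChain's actual API; they are assumptions chosen
# to make the concepts concrete.
import hashlib
import json
import time


class BotRegistry:
    """Toy registry mapping a bot's identity code to its public key.

    In a real deployment this mapping would live on a blockchain, so no
    single party could alter or forge an entry.
    """

    def __init__(self):
        self._entries = {}  # identity_code -> {"name": ..., "public_key": ...}

    def register(self, name: str, public_key: bytes) -> str:
        # Derive the identity code from the public key, so the code is
        # self-certifying: anyone can recompute and check it.
        identity_code = hashlib.sha256(public_key).hexdigest()
        self._entries[identity_code] = {"name": name, "public_key": public_key}
        return identity_code

    def verify_identity(self, identity_code: str, public_key: bytes) -> bool:
        # A bot's claimed public key must match the registered entry and
        # hash back to the identity code. (A real system would also demand
        # a signature over a fresh challenge to prove key possession.)
        entry = self._entries.get(identity_code)
        return (
            entry is not None
            and entry["public_key"] == public_key
            and hashlib.sha256(public_key).hexdigest() == identity_code
        )


class AuditLog:
    """Toy hash-chained log of a bot's activity.

    Each record commits to the previous one, so rewriting history breaks
    the chain -- the same property a distributed ledger provides, minus
    the distribution.
    """

    def __init__(self, identity_code: str):
        self.identity_code = identity_code
        self.records = []

    def append(self, activity: dict) -> str:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        payload = {
            "bot": self.identity_code,
            "timestamp": time.time(),
            "activity": activity,
            "prev_hash": prev_hash,
        }
        record_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.records.append({**payload, "hash": record_hash})
        return record_hash

    def verify_chain(self) -> bool:
        # Recompute every hash; tampering with any past record shows up
        # as a broken link.
        prev_hash = "0" * 64
        for record in self.records:
            payload = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev_hash or record["hash"] != expected:
                return False
            prev_hash = record["hash"]
        return True


if __name__ == "__main__":
    registry = BotRegistry()
    bot_id = registry.register("acme-support-bot", b"acme-public-key-bytes")
    assert registry.verify_identity(bot_id, b"acme-public-key-bytes")

    log = AuditLog(bot_id)
    log.append({"event": "answered_query", "topic": "refund policy"})
    log.append({"event": "escalated", "to": "human_agent"})
    assert log.verify_chain()
```

The design point worth noticing is that the identity code is derived from the bot’s public key, so it is self-certifying: anyone can recompute the hash and check the claim without trusting the party asserting it, which is what makes a decentralized registry workable.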

BotChain has the potential to catalyze greater bot-to-human and bot-to-bot interactions. Humans will be able to use bots with greater safety and confidence. Bots will be able to safely collaborate, negotiate and transact with other bots, enabling more advanced use cases. With a strong foundation of trust, there is no limit to what bots can do.