Why will declarative programming rule chatbots, AI and cognitive computing?
If you are creating a smart app or a chatbot that relies on cognitive services to do the heavy lifting for you, and you end up writing imperative code, you must be doing something wrong.
Declarative programming is a paradigm where you write code by describing WHAT you want to do, while in imperative programming you are required to describe the HOW.
Consider an example of asking for a table at a restaurant, described in both the imperative and the declarative approach. The example is adapted from Tyler McGinnis’ article on the difference between imperative and declarative programming.
Imperative (How): I see an empty table with two seats located next to the window. If there is no reserved sign on it, then my wife and I are going to walk over there and sit down. Otherwise, we will check the table next to it.
Declarative (What): Table for two, please.
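The same contrast shows up in everyday code. Here is a hedged sketch in TypeScript (the scenario and function names are my own, not from the article): finding a free window table for two, first imperatively, then declaratively.

```typescript
interface Table {
  seats: number;
  reserved: boolean;
  byWindow: boolean;
}

// Imperative (How): spell out every step — loop, inspect, collect.
function findTableImperative(tables: Table[]): Table | undefined {
  for (let i = 0; i < tables.length; i++) {
    const t = tables[i];
    if (t.seats >= 2 && !t.reserved && t.byWindow) {
      return t;
    }
  }
  return undefined;
}

// Declarative (What): describe the table you want; iterating is the runtime's job.
const findTableDeclarative = (tables: Table[]): Table | undefined =>
  tables.find((t) => t.seats >= 2 && !t.reserved && t.byWindow);
```

Both functions return the same table; the declarative version simply leaves the "how" to the standard library.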
The majority of chatbots today are being built using an imperative approach, and it’s no wonder they sound like automated machines following a fixed decision tree.
To the left is a flow-chart diagram trying to simulate an intelligent conversation for a fairly simple chatbot that can organize meetings for employees at Progress.
It’s an immense task to create an intelligent chatbot that lets the user write in natural language, change their mind at any point in the conversation, and rely on the chatbot’s short-term memory, all with imperative code and fixed decision trees.
As an alternative, at Darvin.ai you use a declarative approach where you describe what information you want to extract from a conversation, and let a set of cognitive algorithms handle the conversation for you. But more on that later.
The status quo — imperative programming
To understand how this approach became the norm, we need to go back and look at how the chatbot market hype and opportunity have evolved over the past four years.
The analyst firm Gartner Inc. has developed a model, called the Gartner Hype Cycle, to describe the adoption of new and emerging technologies. It has five phases, which I will use to illustrate the evolution of chatbots and cognitive computing.
Phase 1: The Technology Trigger — Natural Language Processing
A potential technology breakthrough kicks things off. Early proof-of-concept stories and media interest trigger significant publicity. Often no usable products exist and commercial viability is unproven.
The proliferation and success of startups focused on processing natural language in the form of voice and text was marked by key acquisitions. Wit.ai was acquired by Facebook in 2015, and Api.ai was acquired by Google in 2016. Microsoft also introduced its LUIS NLP service in combination with its Bot Framework.
The promise was that you could use any NLP provider as a silver bullet for understanding natural language, and simply create your conversation flow using imperative programming. The Bot Framework comes with .NET and Node.js SDKs to help you do that.
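To make the imperative style concrete, here is a self-contained TypeScript sketch loosely modeled on the waterfall-style dialogs those SDKs encourage. This is illustrative only, not actual Bot Framework code; all type and function names are made up.

```typescript
// A session carries the data collected so far, as the SDK dialogs do.
type Session = { data: Record<string, string> };
// Each step consumes the user's input and returns the bot's next prompt.
type Step = (session: Session, input: string) => string;

// The fixed decision tree: one hard-coded question after another.
const meetingWaterfall: Step[] = [
  (s, input) => { s.data.attendee = input; return "What day works for you?"; },
  (s, input) => { s.data.day = input; return "What time?"; },
  (s, input) => {
    s.data.time = input;
    return `Booked a meeting with ${s.data.attendee} on ${s.data.day} at ${s.data.time}.`;
  },
];

// The bot marches through the steps in a fixed order; there is no room for the
// user to change their mind, skip ahead, or answer out of sequence.
function runWaterfall(steps: Step[], inputs: string[]): string {
  const session: Session = { data: {} };
  let reply = "";
  for (let i = 0; i < steps.length && i < inputs.length; i++) {
    reply = steps[i](session, inputs[i]);
  }
  return reply;
}
```

Every branch, correction, or digression the user might attempt has to be coded by hand on top of a structure like this, which is exactly why such bots feel like decision trees.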
Phase 2: Peak of Inflated Expectations — Facebook opens Messenger
Early publicity produces a number of success stories — often accompanied by scores of failures. Some companies take action; many do not.
Early publicity peaked when Mark Zuckerberg got on stage in April 2016 and announced that Facebook was opening the Messenger platform to chatbots. The next big thing was here. The number of vanity metrics surrounding chatbots skyrocketed, with numbers quickly demonstrating faster adoption than mobile apps.
Phase 3: Trough of Disillusionment — Dumb and Dumber
Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investments continue only if the surviving providers improve their products to the satisfaction of early adopters.
Interest quickly started to drop when early experiments and implementations failed to deliver in 2016. Today, the promised intelligence of chatbots rarely extends beyond asking you to start the conversation over when you stray from the happy path. The worst kind of chatbot design is a welcome message along the lines of “Ask me anything,” after which users quickly give up because of how shallow the chatbot’s intelligence turns out to be.
“We never called them chatbots. We called them bots. People took it too literally in the first three months that the future is going to be conversational.”
- David Marcus, Vice-President Messenger, April 2017
Facebook was quick to realize that the adoption lifecycle of chatbots would depend massively on the success of its developers, and today those developers are struggling to create text-based conversations. Messenger introduced support for menus and buttons, hinting that developers might be more successful relying on menus and predefined options rather than text-based conversations.
Wit.ai, owned by Facebook, also announced that it is deprecating its conversation story builder, called Bot Engine, so it can focus on pure NLP.
The ecosystem has shifted towards a mix of NLP and GUI elements to produce a user experience on par with native and web apps. For instance, Messenger has introduced quick replies, menus and even web view. As a result, Bot Engine and its emphasis on text-only bots has become somewhat obsolete.
People have realized that Natural Language Processing is not a silver bullet for creating intelligent chatbots. It’s just a piece of the puzzle, and the technology lifecycle needs to evolve further before it can achieve mainstream adoption by enterprises.
Phase 4: Slope of Enlightenment — Declarative Programming
Instances of how the technology can benefit the enterprise start to crystallize and become more widely understood. Second- and third-generation products appear from technology providers. More enterprises fund pilots; conservative companies remain cautious.
Instead of pushing developers to reinvent conversational intelligence, we designed a declarative approach where you describe, in XML definitions, what data you want to extract from a conversation, not how to extract it.
For our hospital chatbot, we simply define the different types of conversations the chatbot can have, along with the data that needs to be extracted for each, such as the doctor, the date and time, and the patient’s phone number, and let our conversational intelligence engine do the rest.
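A minimal sketch of what such a definition could look like is below. The element and attribute names here are illustrative, chosen to match the entities named above; this is not the actual Darvin.ai schema.

```xml
<!-- Illustrative sketch only, not the actual Darvin.ai schema -->
<conversation name="book-appointment" display-name="Book an appointment">
  <entity name="doctor" type="doctor" />
  <entity name="dateTime" type="datetime" />
  <entity name="phone" type="phone-number" />
</conversation>
```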
There are three significant developments that we are introducing with Darvin.ai as the next-generation technology for building enterprise chatbots.
Declarative Programming using XML
We introduced our own XML scheme that helps you define conversations and steps in a linear model representing the best-case, happy-path scenario of a conversation. The user, however, can change their mind, switch conversations, and provide the data in any way they want using natural language, and our conversational intelligence services will craft an adequate response. In the example above, we define several options for getting the specific doctor:
- Message — this is the message that will prompt the user to provide the doctor name.
- Quick Replies (optional) — we are dynamically showing suggestions of doctors working at the hospital.
- Acknowledgment (optional) — if the chatbot understands the name of the doctor without explicitly asking for it, it’s a good practice to let the user know what the bot has understood.
- Validation — a custom validation logic against an external service to ensure that the selected doctor is available.
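Put together, a step for the doctor entity might be sketched along these lines. The element names are hypothetical, chosen to mirror the four options above, not the real schema:

```xml
<!-- Hypothetical step definition mirroring the four options above -->
<step entity="doctor">
  <message>Which doctor would you like to see?</message>
  <quick-replies source="hospitalService.getDoctors()" />
  <acknowledgment>Got it, an appointment with {{doctor}}.</acknowledgment>
  <validation service="hospitalService.isAvailable(doctor)">
    Unfortunately, {{doctor}} is not available. Please choose another doctor.
  </validation>
</step>
```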
Angular Templating and Expression Engine
In Angular 4, the data binding, template syntax, structural directives and pipes that you can use in your templates are some of the best examples of declarative syntax. That is why we are building our own custom renderer on top of the Angular 4 base implementation, harnessing the power of the Angular templating engine directly in your conversation declarations.
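As a hedged illustration, a message in a conversation declaration could then use Angular interpolation and pipes. The entity names here are made up, but `date` with the `'fullDate'` and `'shortTime'` formats is Angular’s built-in date pipe:

```xml
<message>
  Your appointment with Dr. {{doctor.name}} is confirmed for
  {{appointment.start | date:'fullDate'}} at {{appointment.start | date:'shortTime'}}.
</message>
```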
Dynamic training on top of Wit.ai
Similar to the conversation flow definition, we also allow you to use a declarative approach to train your chatbot to recognize entities and different conversations using NLP. We have based our product on top of Wit.ai, since it supports more than 55 languages and has an outstanding API. We also allow you to do dynamic training using a web service, rather than manually entering your entities.
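A minimal TypeScript sketch of what dynamic training might look like: the platform pulls entity values from a web service you expose, instead of you typing them into an NLP console by hand. The endpoint, types, and function names are hypothetical; the real web-service call is stubbed with static data.

```typescript
// Hypothetical shape of the entity-values feed a chatbot platform could poll.
interface EntityFeed {
  entity: string;   // e.g. "doctor"
  values: string[]; // keywords the NLP service should learn to recognize
}

// Stand-in for the hospital's web service; in practice this would be an HTTP call.
function fetchDoctorFeed(): EntityFeed {
  return { entity: "doctor", values: ["Dr. Smith", "Dr. Jones", "Dr. Petrova"] };
}

// Convert the feed into the keyword samples an NLP trainer consumes.
function toTrainingSamples(feed: EntityFeed): { entity: string; value: string }[] {
  return feed.values.map((value) => ({ entity: feed.entity, value }));
}
```

When the hospital hires a new doctor, the feed changes and the chatbot is retrained automatically; no one has to edit the NLP configuration by hand.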
Here is an example of a chatbot that requires fewer than 200 lines of code.
Phase 5: Plateau of Productivity — What’s coming?
Mainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology’s broad market applicability and relevance are clearly paying off.
The value propositions of chatbots, which include faster development, easier deployment than mobile apps, and a natural way for users to interact, are too strong to ignore.
Cognitive computing and AI have the potential to change the way we develop software in the long term. But this can only happen if we develop enough abstractions and services to build upon, instead of expecting every developer to start from scratch. For us, the right level of abstraction for chatbots, AI and cognitive computing is declarative programming, where you simply focus on the what. Everything else is just noise.
If you are excited about what we are building at Progress with Darvin.ai, drop me an email at hristo dot borisov at progress dot com to sign up for our early access program. And don’t forget to like our post :)