Integrate Watson Assistant With Just About Anything

Mitchell Mason
IBM watsonx Assistant
4 min read · Apr 9, 2018

Additional credit to Laksh Krishnamurthy for his contributions to the content and diagram

Watson services on IBM Cloud are a set of REST APIs. This makes them quite simple to use as one piece of a larger solution within an application. It also means they need to be integrated with the other parts of that solution so your users can interact with your instance of Watson. With the launch of Watson Assistant, integrating with other channels (Facebook, Slack, Intercom) has never been easier, and building a skill for Alexa is possible with Watson too. We have published a number of assets to help with this effort, like the Watson SDKs, API reference, and sample applications, and other users have contributed GitHub repos as well. Even so, many of our users still ask how to integrate with other channels or specific external systems. If you are not familiar with the Watson APIs, you can sign up for IBM Cloud here.
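To make that concrete, here is a minimal sketch of a single REST call to Watson Assistant. The service URL, version date, workspace ID, and credentials below are placeholders; substitute the values from your own IBM Cloud service instance.

```python
import requests

# A minimal sketch of one Watson Assistant (v1) REST call.
# The URL, version date, workspace ID, and credentials are
# placeholders -- use the values from your own service instance.
ASSISTANT_URL = "https://gateway.watsonplatform.net/assistant/api"
WORKSPACE_ID = "YOUR_WORKSPACE_ID"

response = requests.post(
    f"{ASSISTANT_URL}/v1/workspaces/{WORKSPACE_ID}/message",
    params={"version": "2018-02-16"},
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),  # service credentials
    json={"input": {"text": "What are your hours?"}},
)
print(response.json())  # intents, entities, output text, and context
```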

While it would be impossible for us to provide instructions for every possible integration point on the planet, there is a general outline that nearly all integrations follow, and I hope it sets our developers up for success when they need to integrate with something new.

There are essentially 3 major components to a solution.

[Figure: Sample Architecture Diagram for Integrations]

The left-hand side of the solution is typically the front end, or channel. This could be a web page or an application window where the user types questions, responses are shown, and images are displayed. It may be a messaging channel, an embedded chat widget, a mobile app, or even SMS messaging.

The brains behind the interaction is a Watson Assistant service. Watson Assistant takes the inputs, understands them, and drives what happens next, whether that is simply displaying a response, disambiguating what's being asked, using a multi-modal interaction like showing a map or playing a video, or something more complex like reading from or writing to a database, or even calling an enterprise service.
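As an illustration, the JSON that comes back from the message call carries the detected intents and the dialog's output, which your application uses to decide what happens next. The response shape is the Assistant v1 format; the intent names below are made up for the sketch.

```python
reply = response.json()  # response from the message call above
top_intent = reply["intents"][0]["intent"] if reply["intents"] else None

# The detected intent drives the next step.  The intent names here
# ("show_map", "order_status") are illustrative, not built in.
if top_intent == "show_map":
    print("render a map for:", reply["context"].get("location"))
elif top_intent == "order_status":
    print("look up order:", reply["context"].get("order_id"))
else:
    print("\n".join(reply["output"]["text"]))  # plain text response
```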

These first two pieces are typically fairly standard. The left side can be customized if you are using your own website, but existing messaging channels like Slack or Facebook can't be customized; you can only connect to them. The right-hand side can be trained on your content, but the interaction follows some structure. Depending on the content source or type, you may have to use some data transformation or connectivity patterns.

The middle layer is the application layer, and it is typically the piece that varies the most. If you look at any of our sample applications, you will see there is one job the middle layer must accomplish: passing information from the left side to the right side, including system context, and passing it back from right to left to carry the conversation. It is simply a translation layer, taking data from one side to the other and back. Where this gets most complex is when you have additional integrations you want to work with. Let's say you want to add Tone Analyzer so you have an empathetic chatbot. We typically call this pre-processing, because it happens before calling Watson Assistant: your application takes the user input, runs it through the pre-processor (in this case to get the tone of the user's statement), attaches the result as context for Assistant, and then passes it all on to Watson.
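A pre-processor along those lines might look like the sketch below. It assumes the Tone Analyzer v3 endpoint with placeholder credentials, and a made-up user_tone context variable for your dialog nodes to branch on.

```python
import requests

TONE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api"

def preprocess(user_text, context):
    """Pre-processing: attach the user's strongest tone to the context.

    The endpoint, version date, and the 'user_tone' context key are
    illustrative placeholders rather than a fixed contract.
    """
    tone = requests.post(
        f"{TONE_URL}/v3/tone",
        params={"version": "2017-09-21"},
        auth=("YOUR_USERNAME", "YOUR_PASSWORD"),
        json={"text": user_text},
    ).json()
    tones = tone.get("document_tone", {}).get("tones", [])
    if tones:
        # Keep the highest-scoring tone so dialog nodes can branch on
        # it, e.g. respond empathetically to a frustrated user.
        context["user_tone"] = max(tones, key=lambda t: t["score"])["tone_id"]
    return context
```

Your application would then call the message endpoint with this enriched context, and your dialog nodes can check $user_tone when choosing a response.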

The third layer is a post-processing step, where the logic needed to respond to the user query resides; it happens after calling Watson Assistant but before returning a response to the front end. In the Assistant With Discovery example, we show Watson Discovery Service used for post-processing. Another use case might be writing information to a database. Let's say a user orders a 'large pepperoni pizza'. Your application would potentially need to make two callouts: the first places the order in your POS system to actually get them the pizza, and the second writes their order to a database, so that the next time the user logs in they can simply say 'order my usual' or something similar. Watson Assistant would typically return an 'action' tag, as documented here, along with some text. Your application can take the action, carry out the activities as defined, and then show a message like "I'll remember that's your favorite, and it's on the way. Thank you for your order."
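A post-processor for that pizza example might look like the following sketch. It assumes your dialog nodes set an action field in the output; the tag value and both handlers are illustrative stand-ins, not part of the Assistant API.

```python
def place_pos_order(order):
    print("POS: placing order", order)  # stand-in for a real POS callout

def save_favorite(user_id, order):
    print("DB: saving favorite for", user_id)  # stand-in for a DB write

def postprocess(reply):
    """Post-processing: act on an 'action' tag, then hand back the text.

    Assumes the dialog sets reply["output"]["action"]; the tag value
    and both handlers above are illustrative.
    """
    if reply.get("output", {}).get("action") == "place_order":
        order = reply["context"].get("order", {})
        place_pos_order(order)                                  # callout 1
        save_favorite(reply["context"].get("user_id"), order)   # callout 2
    return reply["output"]["text"]
```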

Get started with your own customized bot: IBM has created samples and developer patterns so you can build quickly. We publish samples for things like connecting to channels using Botkit, or using our new Serverless Architecture built on IBM Cloud Functions. Or start by creating an Alexa skill using Watson Assistant via the Apache OpenWhisk serverless framework.

We have demos for connecting to Tone, NLU, and Discovery, but these are just samples; you will probably find more unique and powerful things to integrate with your virtual agent. Use the pattern established above to swap in a new front end or messaging channel that we may not support out of the box. Using actions, you can now call out to other services to enrich your conversation or let users actually complete activities through post-processing. You can also add more pieces alongside Assistant to make your Watson more powerful.
