Sinch Conversation API adds native Machine Learning analysis
Today I thought I would share some news about the Sinch Conversation API: we are about to make Machine Learning based analysis natively available for the customer conversations you run over our API service.
Background
First, some quick context. In early 2020 Sinch acquired the AI-based chatbot company Chatlayer. This was part of our strategic investments in AI, Machine Learning (ML), and specifically Natural Language Processing (NLP).
The rationale was, of course, the rise of conversational messaging and the opportunity for Sinch to offer services that help our customers build cost-efficient automation of the conversations happening over the channels Sinch provides.
Since then, we have integrated the Chatlayer service into our product portfolio. It has for a while used the Sinch Conversation API and Voice API to communicate with customers, and it is integrated with our Contact Pro contact center service to offload and support human customer care agents.
What’s new?
We have now put that powerful ML/NLP engine to yet another use: it powers the upcoming ML/NLP capability of the Sinch Conversation API. This means developers get native access to NLP and ML based analysis of customer conversations, across all supported messaging channels, just by adding a webhook to their Sinch Conversation API application.
So what can such ML/NLP analysis of a conversation provide? In the first release we will support the following analyses of messages from your customers (see the sketch after this list for what the results might look like):
- Intents — you will get a list of likely customer intents with a probability for each. Our ML engine has already been trained to recognize many of the typical intents that come up in conversations over digital channels, in more than 100 human languages, batteries included!
- Sentiments — you will get probability scores for positive/neutral/negative sentiment of the message, again for more than 100 languages.
- Entity extraction — a list of probable entities the customer refers to in the conversation. Our ML engine comes with a number of pre-defined system entities and can learn to identify your contextual entities.
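To make this more concrete, here is a rough sketch of what the analysis for a single customer message might look like on your side. The structure and field names (intents, sentiment, entities, and so on) are assumptions made for illustration, not the final callback schema.

```python
# Illustrative only: the structure and field names below are assumptions for
# this sketch, not the documented Conversation API callback schema.
example_analysis = {
    "message_id": "01ABCDEF",  # hypothetical identifier of the analysed message
    "intents": [
        # possible customer intents, each with a probability
        {"intent": "check_order_status", "probability": 0.87},
        {"intent": "cancel_order", "probability": 0.09},
    ],
    "sentiment": {
        # probability scores per sentiment class
        "positive": 0.05,
        "neutral": 0.20,
        "negative": 0.75,
    },
    "entities": [
        # extracted entities with their type and value
        {"type": "order_number", "value": "4711"},
    ],
}
```

In other words, each message yields a ranked list of intents, a sentiment distribution, and any entities the engine could pick out.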
And rest assured, our roadmap has quite a few more exciting ML based analysis capabilities that will let developers automate conversations without involving complicated third-party solutions. More about those in a future update.
How does it work?
For developers, getting started with the ML features is really easy: on your Sinch Conversation API application, just add a webhook for ML analysis callbacks and you are good to go. You will then receive a callback for each message your customer sends. The callback carries a familiar JSON payload from which your application can quickly extract insights and pass them downstream to wherever conversation automation happens in your stack.
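As a minimal sketch of the receiving side, the handler below accepts the callback and pulls out the pieces a downstream automation might care about. It assumes Flask as the web framework and the same illustrative field names as above; the route, port, and payload shape are placeholders, not the documented format.

```python
# Minimal sketch of a webhook receiver for the ML analysis callbacks.
# Assumptions: Flask as the web framework and illustrative payload field names
# ("intents", "sentiment", "entities") -- check the official documentation for
# the actual callback schema once you have access.
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/ml-analysis", methods=["POST"])
def ml_analysis_callback():
    payload = request.get_json(force=True)

    # Pull out the analysis results your automation cares about.
    intents = payload.get("intents", [])
    sentiment = payload.get("sentiment", {})
    entities = payload.get("entities", [])

    # Pick the most likely intent, if any were detected.
    top_intent = max(intents, key=lambda i: i["probability"], default=None)

    # Hand the insights to whatever drives conversation automation in your stack.
    handle_insights(top_intent, sentiment, entities)

    return jsonify({"status": "ok"}), 200


def handle_insights(top_intent, sentiment, entities):
    # Placeholder for your own routing / automation logic.
    print(top_intent, sentiment, entities)


if __name__ == "__main__":
    app.run(port=8080)
```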
If nothing else, building something that provides real-time analytics and alerts on customer sentiment is now dead easy. This alone can give you valuable insight into where human agents should jump in, or let you plot trends in how your conversation automation is developing.
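A sentiment alert, for instance, could be little more than a threshold check on the negative score. The threshold and the alerting mechanism below are assumptions for the sketch; plug in whatever fits your stack.

```python
# Sketch of a simple real-time sentiment alert.
# The 0.8 threshold and the sentiment field names are illustrative assumptions.
NEGATIVE_ALERT_THRESHOLD = 0.8


def maybe_alert_agent(conversation_id: str, sentiment: dict) -> None:
    """Flag the conversation for a human agent when a message is likely negative."""
    if sentiment.get("negative", 0.0) >= NEGATIVE_ALERT_THRESHOLD:
        # Replace with your own alerting: a Slack message, a ticket, a dashboard event...
        print(f"Escalate conversation {conversation_id}: customer sentiment looks negative")
```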
Also, developers do not really need any ML skills; we take care of that. Our ML engine comes loaded with the training we have put it through over years of customer conversations, so your ML journey does not start from scratch.
When can I try it?
This feature is now in Closed Beta with selected customers. As always, keep an eye on your Sinch account (https://dashboard.sinch.com); it will pop up as Open Beta in the not-too-distant future for everyone to try out. And of course, if you have specific use cases you are interested in, don't hesitate to reach out and let me/us know: we have a gang of ML engineers who love challenges :-)
Looking forward to seeing what will be built on this unique new feature!