QBox Enterprise — model life cycle monitoring

Conversational platforms that handle thousands of chats daily create a challenge for organisations, which must sift through a growing volume of data to validate, and where necessary refine, the performance of the NLP data model.

QBox Enterprise uses intelligent algorithms to automate the tagging and ranking of the system’s responses, identifying those that have likely performed poorly.

This dramatically reduces the complexity and time required for data scientists and NLP data model analysts to identify training priorities and pinpoint areas needing immediate or urgent attention.

QBox Enterprise is the latest enhancement to the QBox chatbot testing solution, helping to optimise performance and efficiency and to ensure the best experience for the customer.

Model life cycle monitoring: QBox’s missing piece

Currently, it can be arduous to monitor an NLP data model and verify whether it classifies customer queries correctly. Chatbot and conversational platform developers have few or no options for managing the life cycle of their model, from training and performance tuning through to holistic monitoring. QBox’s first release included a core training and performance tuning tool. This new release, QBox Enterprise, adds intelligent monitoring to Training & Testing-as-a-Service for NLP data models.

Chatbot system administrators receiving high volumes of customer queries per day will soon have to address the challenge of monitoring and assessing the model’s performance in order to maintain quality of service and continually improve.

New release

QBox Enterprise includes a new user interaction monitoring module. This release includes:

  • Monitoring live models: NLP data model analysts or system administrators can set QBox to monitor live customer interactions from their NLP service provider
  • Intelligent self-learning scoring of each interaction: Once customer interactions have been collated, QBox’s intelligent self-learning algorithm scores and automatically classifies them. Interactions are marked ‘correct’ (no review needed), ‘likely correct’, ‘likely incorrect’ or ‘incorrect’ (to be reviewed as a priority).
  • Automatic sampling for Data Model Managers using QBox’s proprietary algorithm: The sample module uses ‘scoring’ and classification to select an optimal list of unbiased customer interactions that are flagged for the Data Model Manager to review.
  • Automatic processing to measure, compare and help fix wrong predictions: Once the Data Model Manager has reviewed the proposed list, QBox isolates the interactions marked ‘incorrect’ and runs a benchmark QBox test. QBox alerts the Data Model Manager as to where they need to fix the model. QBox then compares the corrected model against the interactions marked ‘incorrect’, whilst also checking for any instances of global regression.
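The scoring and triage flow above can be sketched in outline. This is a purely illustrative Python example, not QBox’s actual algorithm or API: the `classify` thresholds and the shape of the input data are hypothetical assumptions standing in for the proprietary self-learning scorer.

```python
# Illustrative sketch of the four-way triage described above.
# The thresholds and data shapes are assumptions, not QBox's
# proprietary scoring algorithm.

def classify(score: float) -> str:
    """Map a confidence score in [0, 1] to a review category."""
    if score >= 0.9:
        return "correct"           # no review needed
    if score >= 0.6:
        return "likely correct"
    if score >= 0.3:
        return "likely incorrect"
    return "incorrect"             # review as a priority

def triage(interactions):
    """Group scored interactions into the four review buckets."""
    buckets = {"correct": [], "likely correct": [],
               "likely incorrect": [], "incorrect": []}
    for text, score in interactions:
        buckets[classify(score)].append(text)
    return buckets

# Hypothetical sample of (utterance, confidence score) pairs.
sample = [("reset my password", 0.95),
          ("cancel my order", 0.72),
          ("odd edge-case query", 0.41),
          ("garbled input", 0.10)]
print(triage(sample))
```

In this sketch, only the ‘incorrect’ bucket would be fed into the benchmark test and regression check; the ‘correct’ bucket is dropped from review entirely, which is where the time saving comes from.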

To try this new feature, please contact us at https://qbox.ai/

Note: This new feature release is only available for IBM Watson™ Assistant. Other NLP service providers will be announced soon.
