Automating support

Tech@ProSiebenSat.1
ProSiebenSat.1 Tech Blog
5 min read · Jul 8, 2021


by Daniele Frasca

In one of our previous articles, we explained how the Syndication Team works. This time, we would like to show how we automate simple tasks to support colleagues from other teams.

Some Background

The Syndication Team is part of a complex chain in which Material Coordination sets up the content to be distributed to third parties. The inserted data follows rules and travels through the chain, passing through multiple teams until it finally arrives at the Syndication Team, which aggregates and transforms the data based on partner requirements and delivers the resulting package to the partner's servers.

As with every application, there is a level of validation that checks whether the data is correct, and of course issues can occur, for example:

  • Image X is missing.
  • The title is too long.
  • Video had a problem during transcoding.

When we detect an error, we notify Material Coordination with an error message in their system. A Material Coordinator then usually contacts the team leader of the Syndication Team (in this case, me) to find out the why, how, and what.

For years we had a web application that the Material Coordination group used to visualise the data they had initially inserted on their side, further up the chain. We also added a few features not available in their system, like re-processing an asset in error status.

The Syndication Team is a back-end team, and historically we never had a front-end developer because we did not need one. Of course, we could build a UI using React or Angular, but that never came easily or pleasantly. We faced a series of challenges because once you have developed a fancy front-end application, the users always want nicer icons, data formatted in particular ways, and more features. And even when we added them, Material Coordination still came to us asking why, how, and what.

Chatbot

In early 2020, we were already thinking about adding a chatbot to our arsenal to remove from our daily tasks the time spent supporting and explaining (over and over) why things fail, so that we could focus more on the team's own needs. Many people were sceptical about this because having an internal chatbot sounded too crazy and exaggerated.

With time pressure increasing, a smaller team, many meetings to attend, and more tasks, there was no practical way to duplicate ourselves and support all the teams involved. That is why, one weekend, I decided to build a chatbot with Amazon Lex to replace our physical presence as much as possible. The chatbot helps us respond to colleagues' queries more quickly and easily because:

  • A chatbot is available 24x7, and it can answer multiple people at the same time.
  • A chatbot automates tasks, reducing the manual burden.
  • A chatbot carries out actions based on predefined workflows and can redirect users to the team if it cannot answer.
  • A chatbot is easily integrated with chat tools like Slack.

The Basics

Amazon Lex has three main concepts:

  • Intents: the action that you want to fulfil, e.g., “Find X”.
  • Utterances: one or more human-readable sentences, e.g., “I want to find X when Y”.
  • Slots: the input parameters, X and Y in the examples above.
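
To make the mapping concrete, here is a minimal sketch of how such an intent could be defined with the Lex V1 model-building API via boto3. The “FindAsset” intent, its utterances, and the AssetId slot are invented for illustration and are not our actual bot definition.

```python
import boto3

# Sketch only: a hypothetical "FindAsset" intent with sample utterances
# and one required slot, created via the Lex V1 model-building API.
lex_models = boto3.client("lex-models")

lex_models.put_intent(
    name="FindAsset",                          # the intent
    sampleUtterances=[                         # the utterances
        "Find asset {AssetId}",
        "I want to find asset {AssetId}",
    ],
    slots=[                                    # the slots
        {
            "name": "AssetId",
            "slotConstraint": "Required",
            "slotType": "AMAZON.NUMBER",
            "valueElicitationPrompt": {
                "messages": [{"contentType": "PlainText",
                              "content": "Which asset ID are you looking for?"}],
                "maxAttempts": 2,
            },
        }
    ],
    fulfillmentActivity={"type": "ReturnIntent"},  # or hook a Lambda here
)
```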

But there is something more:

  • Lambda initialisation and validation: A Lambda function hooked to an intent to do some pre-validation or run some business logic before fulfilling the request.
  • Confirmation prompt: Simple questions like “Are you sure?” that the user can simply answer with “yes” or “no.”
  • Fulfilment: A Lambda function hooked to the end of the workflow that fulfils the request once all slots are filled, or used just to return a simple message like “Hi, I am a BOT”.
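
To give a rough idea of what those hooks look like, below is a minimal sketch of a single Lambda function acting as both the validation hook and the fulfilment hook, assuming the Lex V1 event and response format. The FindAsset intent, the AssetId slot, and the find_asset helper are hypothetical stand-ins for our real business logic.

```python
def find_asset(asset_id):
    # Placeholder for the real back-end lookup.
    return f"Asset {asset_id} was delivered to the partner without errors."


def lambda_handler(event, context):
    intent = event["currentIntent"]
    slots = intent["slots"]
    session = event.get("sessionAttributes") or {}

    if event["invocationSource"] == "DialogCodeHook":
        # Initialisation and validation: re-prompt if the value looks wrong.
        asset_id = slots.get("AssetId")
        if asset_id and not str(asset_id).isdigit():
            return {
                "sessionAttributes": session,
                "dialogAction": {
                    "type": "ElicitSlot",
                    "intentName": intent["name"],
                    "slots": slots,
                    "slotToElicit": "AssetId",
                    "message": {"contentType": "PlainText",
                                "content": "That does not look like an asset ID. Which asset do you mean?"},
                },
            }
        # Everything looks fine: let Lex carry on filling the remaining slots.
        return {"sessionAttributes": session,
                "dialogAction": {"type": "Delegate", "slots": slots}}

    # Fulfilment: all slots are filled, run the actual task and close.
    return {
        "sessionAttributes": session,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": find_asset(slots["AssetId"])},
        },
    }
```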

Example

A user starts chatting with Lex on Slack, and Lex tries to answer the request based on what the user enters (it determines the intent).

If Lex identifies an intent, it looks for slots that have not been filled yet. If there are any, it prompts the user for the missing values.

Once all slots are filled, Lex executes the action through the Lambda function hooked to the intent and fulfils the request.

If the initialisation and validation step succeeds, Lex fulfils the request, running the configured Lambda function (if any) and returning a response.
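
The Slack channel integration drives exactly this loop for us, but you can reproduce it for testing against the Lex V1 runtime API, as in the sketch below (the bot name, alias, and message are made up):

```python
import boto3

lex_runtime = boto3.client("lex-runtime")

# Each user message is posted to the bot; Lex answers either with a
# prompt for a missing slot or, once everything is filled, with the
# message produced by the fulfilment Lambda.
response = lex_runtime.post_text(
    botName="SyndicationSupportBot",   # hypothetical bot name
    botAlias="prod",
    userId="test-user-1",
    inputText="I want to find asset 12345",
)

# dialogState tells you where the conversation stands:
#   "ElicitSlot" -> Lex is asking for a missing slot (see slotToElicit)
#   "Fulfilled"  -> the fulfilment Lambda ran and "message" holds the answer
print(response["dialogState"], response.get("slotToElicit"), response.get("message"))
```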

Finally, Lex has the concept of session attributes, where you can store information that you need across intents. In our case, we ask the user for an Email and an OTP (one-time password) to perform specific tasks. Instead of asking for them on every action and authenticating the user multiple times, we ask once and store them in the session attributes. From that point on, whenever Lex uses an intent where the Email and OTP slots are required, it takes them from the session attributes automatically, without asking again.
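
A minimal sketch of that pattern, assuming the Lex V1 format and the Email/OTP slots described above (the real bot also verifies the OTP, which is omitted here):

```python
def ensure_authenticated(event):
    """Return an ElicitSlot response if credentials are missing, else None."""
    intent = event["currentIntent"]
    slots = intent["slots"]
    session = event.get("sessionAttributes") or {}

    # Reuse credentials already stored earlier in the conversation.
    for key in ("Email", "OTP"):
        if not slots.get(key) and session.get(key):
            slots[key] = session[key]

    # Both present: remember them so later intents never have to ask again.
    if slots.get("Email") and slots.get("OTP"):
        session["Email"], session["OTP"] = slots["Email"], slots["OTP"]
        event["sessionAttributes"] = session
        return None

    # Something is still missing: ask Lex to elicit it from the user.
    missing = "Email" if not slots.get("Email") else "OTP"
    return {
        "sessionAttributes": session,
        "dialogAction": {
            "type": "ElicitSlot",
            "intentName": intent["name"],
            "slots": slots,
            "slotToElicit": missing,
            "message": {"contentType": "PlainText",
                        "content": f"Please provide your {missing}."},
        },
    }
```

Each protected intent's code hook can call this helper first and only continue with its own logic when it returns None.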

Conclusions

The adoption of the chatbot can be complex and cumbersome at the beginning for users who are used to working with a UI because, instead of clicking around, they need to chat and get familiar with the bot and the utterances that have been set up.

Amazon Lex is a natural-language processing (NLP) service that uses machine learning to uncover information in unstructured data, and it sometimes cannot understand what people ask. However, thanks to the monitoring section, where you can see the missed utterances, you can add further sample sentences to your intents, thus improving the overall user experience.

In our case, the time spent supporting other teams has been drastically reduced. All the tasks that we previously handled ourselves are now managed through the chatbot. Multiple people can be served automatically, without needing a team member, which reduces issues like bottlenecks or being stuck when somebody is on holiday, sick, or has left the office early.

Furthermore, looking at the first stats, the Lex + Slack integration serves almost 3K requests per week, which means we have reduced the time needed to answer questions and extract the data people need.

Finally, Lex became part of our back-end development skill set. We managed to add more features and automate more than ever, overcoming some of the limitations that front-end development may have. In this way, we redirected the time spent answering questions towards a more focused engagement with our customers.
