The Internship

WIP: A Media Innovation Studio, Internship

It’s Monday 13th February 2017 and it’s time for our annual ERASMUS internship programme. This time we welcome Aäron Declerck to our team. He joins us from Howest University in Belgium and is the second intern we’ve had since taking this role over from Onno Baudouin, a Senior Creative Technologist in Innovation here at the University.

The internship is in place to give students an insight into the working processes and experiences of the types of industries they might like to explore once they complete their degree. At the same time it gives the employer (us, in this case) the ability to explore new possibilities through novel and exciting projects that may lead to research or further funding.

Last year’s intern, Glenn Matthys, worked on a range of projects, from server management to the development of a companion app to complement the littleBits platform. The companion app, named ALE (Augmented Learning Environment), allowed users to gain a deeper insight into the littleBits blocks. If you aren’t aware of these little ‘bits’, they are essentially pieces of technology that connect together via magnets to make up a circuit, without any prior knowledge of circuit making or messing with solder. The bits snap together, and the simplest circuits require power, an input and an output: for instance, a button and an LED, where pressing the button lights the LED. More complex circuits build in cloud-based and coding capabilities.

During his time, Glenn worked alongside me to paper prototype and design the platform. In a short space of time, he was able to develop the discussions, brainstorming and paper-based designs into the final interactive prototype. In the end the platform consisted of 31 fully functional bits, a simple bot bit (hat tip to Microsoft’s paperclip “Clippy”), a real-time shopping cart which talks to the littleBits store to total the cost of the bits on the stage, and the ability to save and share projects with others. We alpha tested the prototype with team members in the Studio, iterated the flow and design, and then took the prototype out for additional feedback through beta testers. We tested the application with children who were interested in technology or curious about the littleBits platform, at science festivals, geek-ups and through self-hosted littleBits workshops. The project was first publicly unveiled at IDC2016, through a workshop, and published in the conference’s Works in Progress track.

Screenshot of the littleBits ALE

To generate interest we devised our own working processes using the physical bits, and developed our own projects to demonstrate the potential of the littleBits platform. Some of these we still use in the Studio: a coffee counter that counts the number of coffees we consume each day; the naughty or nice treat dispenser, which, as the name suggests, we use at Christmas time, linked to our team’s Slack channel, so that if someone uses the Christmas tree emoji in a message the Slack bot decides whether that person has been naughty or nice and tells the littleBits treat dispenser whether to dispense a treat; and the “Go Ball” rollercoaster/crane.

Although the Studio still uses littleBits in daily operations and workshops, it was decided that Aäron (our latest intern) would work on a different project, or projects to be precise. As with any new team member, we found out what interests and skills Aäron possessed. As with many of the projects we are currently exploring, the Internet of Things (IoT) is of interest to many in the Studio; we have a few projects on the go that involve some IoT element, and one of the team is doing their PhD around the topic, so it seemed appropriate to explore some new areas of this field.


Slack Radio: a briefing radio curating content from Slack

In the early days of the Media Innovation Studio, we adopted a weekly catch-up of everything people were working on and everything that went on in the Studio. Distributed on a Friday after 5pm, the email would contain elements of what had happened that week, appropriately named That Was The Week That Was, or TWTWTW for short. This worked at a time when the team used email for all their communication needs. Later, Slack was adopted and, more importantly, an improved way of working collaboratively. Alongside its many other advantages, Slack enabled members to “listen in” on other projects (projects are broken down into channels, some of which are public, allowing other members to see what’s going on in a channel without the need to participate).

With this, the weekly TWTWTW became unnecessary, as members had a good handle on what was going on in the Studio. Although Slack didn’t replace TWTWTW perfectly, the weekly debriefs stopped. We attempted to rejuvenate the debrief in many ways, which worked temporarily, but nothing succeeded as a permanent fixture.

But now that we are using Slack more, the amount of noise we create means some posts and messages go unseen. Even though Slack is designed for this, some members aren’t aware of certain aspects of the Studio’s weekly goings-on. This could be because they misuse Slack or don’t comply with its etiquette, but it creates a problem. One way in which a TWTWTW briefing was trialled was to manually capture certain posts from Slack and feed them into a TWTWTW channel. This required members of the team to use the Slash Command feature within Slack (a common feature used to interact with the platform); however, it relied heavily on user input and, aside from a few members, it was not adopted.

Fast forward six months, and we still have noise in channels (good noise), but we are getting better at archiving, muting and naming channels. However, we are very keen to explore more ambient ways in which we can filter the noise and redistribute it. From using the Amazon Echo both personally and at work, we came up with the concept of a Slack Briefing Radio. The Radio would broadcast a series of weekly stories curated from Slack. But, as before, these posts retrieved from Slack would require some form of collection and filtering process. Recently, Aäron and I set aside time to design and prototype the first version of the Radio, during which many challenges were faced when designing for human input.

Concept / Design

It was devised that the Radio should:

  • Read out Slack based messages.
  • Broadcast the briefings in a particular location.
  • Create a playlist of messages that can be archived for later use.
  • Filter and process messages based on rules.
  • Understand which messages were of importance to be re-distributed.
  • Be fairly simple to rebuild for other message-based platforms (e.g. Twitter).
  • Work on a Raspberry Pi (or equivalent) and have scope to be re-purposed for the Amazon Echo.

The Platform / Infrastructure

The platform is based on the RadioDan and Neue Radio projects. Initial findings demonstrated the potential of how the Radio could work in terms of its connection, set-up and function. Neue Radio provided some basic functionality that certainly helped us develop the text to speech functionality on the Pi. Neue Radio uses a headless instance of Chromium in combination with a Node.js backend, and the backend communicates with the Pi through a Web Socket. The main problem we found with Neue Radio is the documentation: it’s a great project but still in its infancy, which we found problematic when wanting to develop on top of it. More important to this project, though, is that Neue Radio at the moment only supports playing a live HLS radio stream.

Photograph taken of the debug console and the initial Raspberry Pi setup

In the end, we adopted the same technique as Neue Radio to accomplish a text to speech application on the Raspberry Pi: a headless instance of Chromium with Web Sockets for communication. To actually output the text as speech, the Web Speech API came in handy; it is included in most modern browsers. The main problem on the Pi, though, is that the Chromium browser doesn’t include any voices by default. In case there aren’t any voices, the Responsive Voice library takes over, using an external API to create the audio required for the text to speech.
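
As a rough illustration of that fallback (a minimal sketch; the function name and voice choice are ours, not Neue Radio’s):

    // Speak one line of the briefing, falling back to ResponsiveVoice
    // when Chromium on the Pi reports no built-in voices.
    function speak(text) {
      const voices = window.speechSynthesis.getVoices();
      if (voices.length > 0) {
        // Web Speech API path: works in most desktop browsers.
        const utterance = new SpeechSynthesisUtterance(text);
        utterance.voice = voices[0];
        window.speechSynthesis.speak(utterance);
      } else {
        // Chromium on the Pi ships without voices, so hand over to the
        // ResponsiveVoice library, which fetches the audio from an external API.
        responsiveVoice.speak(text, 'UK English Male');
      }
    }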

The next task was to get content to read out. The basic idea was to retrieve messages from Slack and use them to compile a briefing. This can be achieved via a few different methods:

The first thing to do, before you can start using the Slack API, is to set up a Custom Integration or a Slack App. First we went ahead with a “Development Token”, which essentially grants you a subset of the features available through a Slack App without the hassle of setting one up.

At the moment it seems that the Custom Integration approach is outdated, so creating a Slack App is recommended. I’ll add more information on this topic later on.

  • Real Time Messaging API

The “easiest” way to retrieve messages from Slack is through this API. Essentially it creates a connection to Slack, and whenever a certain event occurs, Slack sends you more information about that particular event. But as this is real time, and the app had to retrieve information from the past, it became evident we should switch to the Web API.

  • Web API

Instead of waiting for Slack to send information to us, with the Web API you simply make an HTTP request with the appropriate headers and body to receive what you want, whenever you want.

  • Events API

Comparable to the Real Time Messaging API, but the main difference is how your app and Slack communicate. With the RTM client a Web Socket is used: you open a Web Socket and when an event occurs Slack informs you through that same connection.

The Events API uses a callback mechanism. When an event occurs, they will send more information about that event to a certain URL that you can configure. Basically, they’ll contact you instead of the other way around.

So instead of the Real Time Messaging API we opted for the Web API. The biggest upside for us was that the communication wasn’t real time: we could now pull the history for a team from Slack whenever we wanted.
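
As a minimal sketch of that approach (the token and channel ID are placeholders, and the exact history method depends on which version of the Web API you target; conversations.history is the current one):

    // Pull the last week of messages from one channel via the Slack Web API.
    const fetch = require('node-fetch');

    async function fetchWeekHistory(channelId) {
      const oldest = Date.now() / 1000 - 7 * 24 * 60 * 60; // a week ago, in Slack's epoch seconds
      const response = await fetch(
        `https://slack.com/api/conversations.history?channel=${channelId}&oldest=${oldest}`,
        { headers: { Authorization: `Bearer ${process.env.SLACK_TOKEN}` } }
      );
      const body = await response.json();
      if (!body.ok) throw new Error(body.error);
      return body.messages; // each message carries its text, user, timestamp, reactions, files…
    }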

Every time we added a new feature to the project, we decided which API would best suit our needs. For example, the next feature to be added was responding to a certain event: in our case, a user posting a message on Slack. We then try to find out whether the message is the starting point for a new conversation or a response/reply to another message in the same channel. To do so, it had to be real time, so the choice was between the Real Time Messaging API and the Events API. Depending on your use case, your choice may differ from ours. In the end I chose the Events API. Why? The Events API is more scalable, because Slack requests a URL on your end instead of you opening one Web Socket to handle the actions of a whole team, and in our case the Events API has all the events we want to intercept. Also, if something goes wrong with communication over the Web Socket, you lose all adjacent events, whereas if something goes wrong on your side with the Events API, the chances are that only one event will be handled incorrectly, depending on your implementation. But now we had to think about rate limiting (which is always a good thing to think about).
Screenshot of the debug console, showing the sentiment analysis of the messages

So at this point, we had a Raspberry Pi with a headless Chromium browser and a debug console that allowed us to test the text to speech element of the project (seeing as this was the main user interface, it was important that posts were read out correctly). We were starting to pull in messages from Slack and store them in a database, so that playlists could be archived and retrieved at a later date (catch up?).

The Rules

We started to map out what rules should be in place when filtering and sorting messages. The ones we decided to use initially were (a rough sketch of how they might be combined follows the list):

  • The reactions (emoji) to the text.
  • The number of ‘@ mentions’ within the text.
  • The length of the text.
  • The number of nouns used within the text.
  • Whether any URLs were included in the text.
  • Whether an attachment was uploaded along with the text.
  • A decision on whether the text is positive or negative (+1, 0, -1), based on the SlackMood library.
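
Here is a rough sketch of how those rules might be combined into a single score per message; the weights and field names are illustrative rather than the exact ones we settled on:

    // Score a Slack message against the rules above. sentiment() stands in
    // for the SlackMood-style analysis returning +1, 0 or -1.
    function scoreMessage(message, sentiment) {
      let score = 0;
      score += (message.reactions || []).length * 2;              // emoji reactions
      score += (message.text.match(/<@\w+>/g) || []).length;      // '@ mentions'
      score += message.text.length > 140 ? 1 : 0;                 // longer posts
      score += (message.text.match(/https?:\/\//g) || []).length; // URLs shared
      score += message.files ? 2 : 0;                             // attachments
      score += sentiment(message.text);                           // positive / neutral / negative
      return score;
    }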

Other techniques also became useful, starting with k-Nearest Neighbour. This required retrieving messages from Slack and assigning our own Label and Features to each one. A Label could be Interesting, Do Not Mention or something similar; a Feature would be Emoji Count, Number of Replies and so on. We then let the k-Nearest Neighbour classifier decide for us. This progressed on to k-Means clustering, which doesn’t need labelled Data: it tries to cluster a given Data Set for you. The k-Means implementation is, however, a bit tricky, as it clusters Data based only on the given Feature Set. With the Iris Data, a Data Set with information about a certain flower, one class is categorised correctly while the two others get mixed up because they’re too similar. But with more data, or a more differentiated Data Set, the error rate decreased. We then set aside time to manually classify the data set in order to improve the Neural Network.
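
For the k-Nearest Neighbour step the idea is roughly the following (a hand-rolled sketch; in practice an off-the-shelf implementation would do just as well):

    // Classify a message by the labels of its k nearest labelled neighbours.
    // Each labelled item has a feature vector, e.g. [emojiCount, replyCount],
    // and a label such as 'Interesting' or 'Do Not Mention'.
    function classify(unknown, labelled, k = 3) {
      const distance = (a, b) =>
        Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));

      const votes = labelled
        .map(item => ({ label: item.label, d: distance(unknown, item.features) }))
        .sort((a, b) => a.d - b.d)
        .slice(0, k)
        .reduce((tally, item) => {
          tally[item.label] = (tally[item.label] || 0) + 1;
          return tally;
        }, {});

      return Object.keys(votes).sort((a, b) => votes[b] - votes[a])[0];
    }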

Manually labeled Data is now stored in the Database. It can be used to Train the Bot automatically, because new Messages will use the same Database Structure.

Message

  • threadTimeStamp: used by Slack to indicate that a Message is linked to a Thread;
  • parentId: used by us to indicate that a Message is a Response to another Message.

Conversation

  • chronologicalConversation
  • Uses the “parentId” when available; otherwise the Conversation is sorted based on “postedOn” value.
  • The “parentMessage” is linked to a Message when available.

Message

  • “previousMessage” in case it is the starting point of a Conversation;
  • “parentMessage” in case the Message is a Response to another Message within the Conversation.

Normally either the “previousMessage” or the “parentMessage” will be initialised. Sometimes that isn’t possible because there is no Message to link to, for example when it is the first Message within a Channel.

Why do we need “previousMessage” and “parentMessage”? When training the Neural Network we need to assign Features based on certain Metrics. There needs to be a distinction between a real Response and just the Message ahead of another Message. But we need one of them to assign a Feature Set to each Message.

At the moment the Neural Network step splices the Message Collection and filters out Messages without a “linkedMessage”, for the reason discussed previously.
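
In code terms that step is no more than (using the field names above):

    // Only messages with a linked message can be given a full feature set,
    // so drop the rest before training.
    const trainable = messages.filter(message => message.linkedMessage);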

Curating the content

As mentioned, curating the content is one of the issues projects like this fail on. This time, rather than relying on user input, an element of machine learning and rules was developed. At this point we realised that even though Slack had recently introduced a new feature, ‘Threading’ (where messages and their replies are collated into one thread), our collection of data showed that the majority of team members weren’t adopting it, which had a huge impact on the way we designed the Radio. Messages were therefore collated and then sorted based on the rules.

As the project is not about improving the efficiency of machine learning or building new ways to curate content, but more about the playful aspects of message retrieval and distribution, research was conducted into the area and existing frameworks and libraries were adopted to meet our needs.

The first library we encountered was Synaptic, a Node.js library for training Neural Networks without actually needing to know the maths behind the scenes, although it is interesting.
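
A minimal Synaptic example of the kind of network involved (the layer sizes and training data are placeholders):

    // Train a small perceptron on labelled message features with Synaptic.
    const synaptic = require('synaptic');

    // two input features, a small hidden layer, one output (reply or not)
    const network = new synaptic.Architect.Perceptron(2, 3, 1);
    const trainer = new synaptic.Trainer(network);

    trainer.train([
      { input: [1, 0.2], output: [1] }, // looked like a reply
      { input: [0, 0.9], output: [0] }, // looked like a new conversation
      // …the manually labelled messages from the database go here
    ], { rate: 0.1, iterations: 20000, error: 0.005 });

    // later: network.activate([startsWithMention, timeGapBelowMean]) returns a value near 0 or 1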

The next challenge was to determine if a message was an original or a reply. This became quite time consuming and very challenging to solve. Again, we put some rules in place to help the classifier understand whether a given message was a reply or not. Based on the data set and how users use Slack, we determined that the following rules could be applied (a sketch of how they combine follows the list):

  • Does the message start with an ‘@ mention’ ?
  • The mean time between a Message and the next Message. When the time between a Message and the next Message is above the mean, a 0 is assigned; otherwise a 1 is assigned. By utilising only that information, the Neural Network has an error rate of about 15%. By adding more Features we were certain this error rate could be improved.
  • Exponential Distribution of the Date on which a Message is posted. If the Message is a week old, it probably won’t be a reply.
    http://stackoverflow.com/questions/11638995/algorithm-heuristic-for-grouping-chat-message-histories-by-conversation-implic
  • Using Tf-Idf to detect the important words in a Message.
    We could use the kNearestNeighbour to assign a Feature Set and then give it a Data Set with Messages that are a Thread and their Score for each Feature. Or we could use another option like a Decision Tree.
  • Turn taking — if a message and its previous messages fall within the above criteria then another check could be performed to see if there is a common interaction between certain people.
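
Putting a few of those checks together, the feature extraction looks roughly like this (the field names are illustrative):

    // Build a feature vector answering "is this message a reply?".
    // meanGap is the mean time between consecutive messages in the channel.
    function replyFeatures(message, previousMessage, meanGap) {
      const startsWithMention = /^<@\w+>/.test(message.text) ? 1 : 0;

      const gap = message.postedOn - previousMessage.postedOn; // seconds
      const belowMeanGap = gap < meanGap ? 1 : 0;              // quick follow-ups look like replies

      const ageInDays = (Date.now() / 1000 - message.postedOn) / 86400;
      const recency = Math.exp(-ageInDays);                    // week-old messages decay towards 0

      return [startsWithMention, belowMeanGap, recency];
    }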

Formatting the content

As the briefing was designed to take place in the Studio every Friday, it should break the week down by day. It was important to vary the phrases, as we did not want the radio to sound robotic but more human; a sketch of how the templates come together follows the examples below.

“On Monday, …… [PLAY MESSAGES]
“Tuesday saw, ……[PLAY MESSAGES]
“What happened on Wednesday?…. Well…. [PLAY MESSAGES]
“Thoughts about Thursday…. [LIST THE NAMES OF CONTRIBUTORS] thought [PLAY MESSAGES]
“Get that Friday feeling from….. [LIST THE NAMES OF CONTRIBUTORS] thought [PLAY MESSAGES]
“Saturday is the first day off for some…… [PLAY MESSAGES]
“Sunday is usually a day of rest, but not for…… [LIST THE NAMES OF CONTRIBUTORS] [PLAY MESSAGES]

Standard messages
X said [MESSAGE]
[MESSAGE] from X

Positive reaction
The team decided that [MESSAGE] was some good news

Negative reaction
The team decided that [MESSAGE] was some bad news
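
A sketch of how those templates might be picked per message (the sentiment and author fields are illustrative):

    // Turn one message into a spoken sentence, varying the phrasing
    // by the overall reaction the team gave it.
    function phrase(message) {
      if (message.sentiment > 0) {
        return `The team decided that ${message.text} was some good news`;
      }
      if (message.sentiment < 0) {
        return `The team decided that ${message.text} was some bad news`;
      }
      // alternate between the two neutral templates so it doesn't sound repetitive
      return Math.random() < 0.5
        ? `${message.author} said ${message.text}`
        : `${message.text} from ${message.author}`;
    }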

As we are all aware, people now use emoji within text, which is especially common on playful platforms like Slack, but the problem is that the text to speech engine struggles to read them out. So converting any emoticons, links and files into text became a requirement. For the project this means that if a PDF is shared, instead of reading out the filename the platform interprets it as a PDF file and reads out “[Person X shared a PDF]”, or “[It’s a lovely day happy face]”.
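
A rough sketch of that substitution step (the emoji-to-phrase map is obviously far from complete, and the field names are illustrative):

    // Replace things the speech engine can't read sensibly with spoken phrases.
    const emojiPhrases = {
      ':smile:': 'happy face',
      ':christmas_tree:': 'Christmas tree',
      // …extend as new emoji turn up in the team's messages
    };

    function speakable(message) {
      let text = message.text;

      // Slack stores emoji as :shortcodes: in the raw message text
      text = text.replace(/:[a-z0-9_+-]+:/g, code => emojiPhrases[code] || '');

      // bare URLs are read out as "a link" rather than letter by letter
      text = text.replace(/<https?:\/\/[^>]+>/g, 'a link');

      // file shares become a short description instead of the raw filename
      if (message.file) {
        text += ` ${message.author} shared a ${message.file.filetype.toUpperCase()} file`;
      }
      return text;
    }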

Improving the curation

One way in which our rules and filtering could be improved is by human input. Even though one of the design choices was to avoid this, we decided to integrate a method by which a ‘human’ could assist; by human we mean both bot and human. Anyone who has used Slack knows that another feature integral to the platform is the bot. Bots act as a way of communicating with and assisting users; for example, when you first join a team the Slack Bot helps you get accustomed to the platform and is there when you need help. We decided to use this feature to help us train the classifier.

It was also important that the bot did not become an annoyance (like the Microsoft ‘Clippy’ paper clip from 1997), but rather offered a suggestion from time to time based on some rules and actions. The bot should also not require too much input from the user. For example, if a message is written and the rules above cannot determine whether it is a reply, or the classifier needs more information to understand the message, a bot appears (only to that user) asking whether their message is a reply and, if so, which of several candidate messages it is replying to. All that is required from the user is to click the button which best represents their input. Over time the bot learns who has contributed and also appears less frequently.
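
As an illustration of the sort of prompt we have in mind, using Slack’s interactive message buttons (the payload below is paraphrased from Slack’s interactive message documentation rather than taken from our final code):

    // Ask the author, and only the author, whether their last message was a reply.
    // Sent as an ephemeral message so the rest of the channel never sees it.
    const prompt = {
      channel: message.channel,
      user: message.user,
      text: 'Quick question: was that last message a reply?',
      attachments: JSON.stringify([{
        callback_id: 'reply_training',
        actions: [
          { name: 'reply', text: 'Yes, a reply', type: 'button', value: message.ts },
          { name: 'reply', text: 'No, a new topic', type: 'button', value: 'none' },
        ],
      }]),
    };

    // POST this to https://slack.com/api/chat.postEphemeral with the bot token;
    // whichever button the user clicks comes back to our interactive-message endpoint.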

Full integration with Slack

A Slack App is required when you want to respond to an action of a User.

  • The Events API was adopted because it scales. The Real Time Messaging API is much harder to scale, and the other differences between the two don’t really matter in the case of this project.

Creating a Slack App can be done through visiting https://api.slack.com/apps and clicking on “Create new App”.

  • Subscribe to the Events API.
  • On the same page as previously mentioned you can go to the App you’ve created. Then select the “Add Features and Functionality” option.
  • Go to the “Event Subscriptions” page.

Now you’ll see that you need a Request URL. This is the URL Slack redirects information to: when a User posts a Message, depending on your OAuth Scope, Slack will send that information to this URL on your end.

To do so, we have used Node.js Express in combination with the Slack Events API library, which abstracts away the verification of your Application.

Don’t forget to set a process.env.SLACK_VERIFICATION_TOKEN Variable with the Verification Token you can find under the “Basic Information” of your Slack App.

Once you’ve followed the instructions on the Node Slack Events API page, you can then try to verify your Application. Go back to the Event Subscriptions page and enter the URL of the endpoint Slack should use.
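
For reference, our set-up looks roughly like the example on that page (package and function names as they were at the time of writing; Slack’s Node libraries have been renamed since, so check the current docs):

    // Receive Slack events over HTTP with Express and the Slack Events API adapter.
    const express = require('express');
    const slackEventsApi = require('@slack/events-api');

    const slackEvents = slackEventsApi.createSlackEventAdapter(
      process.env.SLACK_VERIFICATION_TOKEN
    );
    const app = express();

    // Slack posts event callbacks to this endpoint; the adapter also handles
    // the URL verification handshake for us.
    app.use('/slack/events', slackEvents.expressMiddleware());

    slackEvents.on('message', event => {
      // only plain messages and file shares make it into the weekly briefing
      if (!event.subtype || event.subtype === 'file_share') {
        storeMessage(event); // hypothetical helper that writes to our database
      }
    });

    app.listen(3000);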

Do NOT forget that each Request should be sent through HTTPS. When you’re using Node.js in combination with Express as mentioned before, Greenlock Express is a good starting point for automatic generation of a free HTTPS Certificate through Let’s Encrypt.

  • On the Event Subscriptions page don’t forget to enable your Application Endpoint. Just because the URL you will be using is verified does not mean Slack automatically starts redirecting information to your Application. I forgot this myself.
  • Subscribe to Team Events and the OAuth Permission Scope. Use both together to define what information you want access to, or what you would like to do, e.g. writing a Message.
  • The Events API triggers a URL on our side, which I’ve defined under the Slack App configuration page. At the moment we only listen for the “message” and “reaction” events.
  • When the “message” event is triggered we only listen for Normal Messages and File Shares because they are the most important to incorporate into the weekly briefing; and
  • Each Subtype has its own behaviour. For example, Comments on a File that is being shared cannot be deleted, so we would need to mimic the whole Slack backend ourselves, which is unnecessary.

The issues we faced at this point were:

  • The Time Stamp given when a File is shared doesn’t match the Time Stamp given when a Comment is added to that File. → However, the File Object provided when a File is shared or when a Comment is added contains a File Id and a specific, valid Time Stamp that could be used. At the moment that isn’t the case.

BeepBoop Slapp

  • Developer Platform for Slack Integrations with the possibility to skip the Platform’s Features and run it locally;
  • If you run it locally you’ll need to implement some “requirements” yourself. For example, metadata for each Request.
  • You’ll need to look at the Source Code of the Slapp Library to find more information about implementing the “requirements” for a local instance;

Node Slack Events

  • Easy to use;
  • Documentation is a bit unhelpful. They suggest using ngrok, but if you’re on the free plan you’ll need to change the Endpoint of your Application each time your ngrok Endpoint changes, e.g. when you restart ngrok.
  • Limited in Features; it isn’t as advanced as BeepBoop’s Slapp Library, which is both positive and negative.

As the platform is web based, and we wanted to encourage users to listen not only within the Studio, we wanted to expand its reach across multiple platforms, meaning the ability to listen on mobile or desktop. However, this required some level of authorisation, as otherwise anyone would be able to hear the operations of the Media Innovation Studio. A Slack app was used to authorise the user: in order to listen online you must first authorise yourself so we can check you are part of the team. This also acted as a debug console to test the briefings and data collection.

Pulling it all together

At present the platform is awaiting its final touches, one being the speakers to hear the briefings. We purchased a Pi HAT to improve the audio quality (IQaudIO), but at present this is not working with the Chromium browser and we have contacted the company for support. Once this is resolved we will have a fully working Raspberry Pi set-up with the ability to read out weekly briefings. The next task is then to study how people use this and see if it improves communication in small and medium teams. I’m sure it will be a talking point.