Purple Brain

Simon Burns
Oct 20, 2017 · 5 min read

Do you remember the IBM Watson advert with Bob Dylan from a while back? I thought it would be an interesting project to actually try to build something like that.

It’s also a fun way to show how to build a conversational analysis solution: you create a corpus of unstructured data and then query it interactively, in conversation.

To make this even more interesting (for me at least) I decided to analyze the lyrics of my favorite artist, Prince. While I built a demo using one particular artist, it should be possible to take that as a template and generate the same for other artists.

Here’s a video showing the result:

“Using a special code he accessed his mind”

When I started I wasn’t totally sure I could actually get the insights I wanted. So the first thing I needed to do was to get Watson to read some of Prince’s lyrics, and see what insights I could get. I got an instance of Watson Discovery on the IBM Cloud and created a collection in Discovery called “Prince”. Using the Discovery tooling, I loaded in the lyrics with each album as a separate document.

There are some insights you can see straight away on the collection overview, which include sentiment, content hierarchy and top entities. There is also a query builder where you can create your own custom queries. Using this I worked out how to build the various queries I wanted:

  • Top themes (content hierarchy)
  • Emotion (aggregation to find the highest score)
  • Sentiment (aggregations to find the highest and lowest scores)
  • Common references (keywords)

I could query over everything or restrict to individual documents (albums).
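The queries above can be expressed as Discovery query parameters. Here is a rough sketch of how I think about building them; the field paths and aggregation syntax are illustrative of Discovery's default enrichments, not copied verbatim from my collection, so treat them as assumptions:

```javascript
// Build Watson Discovery query parameters for each insight type.
// Field paths (e.g. enriched_text.sentiment) follow Discovery's default
// enrichment schema, but verify them against your own collection.
function buildQuery(insight, album) {
  const params = {};
  switch (insight) {
    case 'emotion':
      // Aggregate to find the highest-scoring emotion
      params.aggregation = 'max(enriched_text.emotion.document.emotion.joy)';
      break;
    case 'sentiment':
      // Aggregate to find the highest (or lowest, with min) sentiment score
      params.aggregation = 'max(enriched_text.sentiment.document.score)';
      break;
    case 'keywords':
      // Most common references across the lyrics
      params.aggregation = 'term(enriched_text.keywords.text,count:10)';
      break;
  }
  if (album) {
    // Restrict the query to a single document (one album per document)
    params.filter = `title::"${album}"`;
  }
  return params;
}
```

Swapping `max` for `min` in the sentiment case gives the most negative album instead of the most positive.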

“Conversation is better than being lonely”

Once I had worked out the queries and confirmed I could in fact extract the insights I wanted, it was time to make it conversational. For this I used the Watson Assistant service. If you are not familiar with this service, you can read some of my other articles such as “Getting Chatty with IBM Watson” to find out more.

Watson Assistant uses intents and entities to understand what is being said. An intent is the meaning of what was said, and an entity is a parameter on that input. For example, given the input “What is Prince’s most positive album?”, the intent would be #which-album and the entity would be @sentiment:positive.

So I defined intents for each of the queries I wanted to enable, and entities for albums, emotions, sentiment, etc. Using these intents and entities I created a dialog to define how to respond. My responses not only included what Watson should say but also details about the query to run. I returned these as parameters in the “output” object (using the built in JSON editor).
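Concretely, a dialog node’s response can carry both the spoken text and the query details in its output object. The shape below is my own convention (the `discoveryQuery` field name is one I made up; Assistant simply passes custom output fields through to the client):

```javascript
// Example of a dialog node's output object once the query details are
// added via the built-in JSON editor. "discoveryQuery" is a custom field;
// Assistant passes it through untouched for the backend to act on.
const assistantOutput = {
  text: ['The most positive album is %result%.'],
  discoveryQuery: {
    aggregation: 'max(enriched_text.sentiment.document.score)'
  }
};

// The backend checks for the custom field before answering.
function needsDiscovery(output) {
  return Boolean(output && output.discoveryQuery);
}
```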

To round out the conversation, I also added some intents and responses for saying hello, thanks and asking what Watson knows.

“Come together as one”

Now I needed to build an application to bring this all together. I wanted the front end to be a simple black screen with the Watson logo to make it reminiscent of the Watson advert. I also needed to add in Speech To Text (STT) and Text To Speech (TTS) for the same reason.

The flow of the app is as follows:

  • Speech To Text listens for speech
  • Once it gets something, it converts this to text
  • The front end sends this text to the backend (which I wrote in NodeJS)
  • The NodeJS app passes the text to the Assistant service
  • If the response from Assistant contains a query, the app calls Discovery and inserts the results into Assistant’s output text
  • The final response is sent back to the front end
  • Text To Speech is used to read out the response
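The middle steps of that flow look roughly like this in the NodeJS backend. This is a sketch, not the real app: the service clients are injected so the logic is easy to test, and the `discoveryQuery`/`%result%` conventions are my own, not built-in Watson fields:

```javascript
// Orchestrate one turn: Assistant first, then Discovery if the dialog
// response asked for a query. "assistant" and "discovery" stand in for
// Watson SDK client instances in the real app.
async function handleUtterance(text, assistant, discovery) {
  const conv = await assistant.message({ input: { text } });
  let reply = conv.output.text.join(' ');

  // My dialog nodes attach a custom "discoveryQuery" field to the output
  // when the answer depends on the lyrics collection.
  if (conv.output.discoveryQuery) {
    const results = await discovery.query(conv.output.discoveryQuery);
    // Splice the Discovery result into the placeholder in the reply text.
    reply = reply.replace('%result%', results.summary);
  }
  return reply;
}
```

Because the clients are parameters, the whole flow can be exercised with simple mocks before wiring up the real services.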

“Was I what you wanted me to be?”

So how well did Watson analyze Prince’s lyrics? That’s a tricky question, as we humans tend to focus on and remember the things that connect with us, while Watson looks at everything the same. An example of this is the main themes that Watson picked out: “music”, “sex” and “dance”. These are not the themes I would have chosen if asked. I would have included “spirituality”, “love” and “peace/unity”, as these are what stand out for me. However, I don’t think Watson is wrong: Prince references music, sex and dance throughout his songs, even those of a “deeper” nature. It has been said that Prince was a master at blending the spiritual with the sexual.

So what about other aspects, such as sentiment? Well, I was hoping for a Lovesexy versus Black Album result for the most positive and negative, simply because that would fit perfectly into Prince “mythology”. However, that wasn’t to be :-( The Batman album being identified as the most positive is interesting, because I don’t think human listeners would reach the same conclusion. But we must remember Watson is only analyzing the lyrics; he can’t hear the songs. I think Batman sounds dark but is actually lyrically positive.

Song lyrics are quite different to normal writing, and I don’t believe Watson Discovery is optimized for analyzing lyrics. For example, lyrics tend to have a lot of repetition and poetic phrasing (i.e. saying something without saying it). Having said that, Watson has still managed to pull out interesting insights that do make sense.

“I’ve seen the future and it will be”

What does the future hold for this project?

  • First, I would like to make it more conversational, for example, getting Watson to prompt and lead the conversation at times. Watson could ask for your favourite album and then give an analysis of it.
  • At the moment Watson can find common references in the songs. I want to add another query to find example references in the lyrics, e.g. “Watson find me an example of a reference to ‘world’.”
  • I would also like to allow scoping of the queries to time periods, for example, “What is Prince’s most positive album of the nineties?”
  • When the application needs to run a query against Discovery, it leaves too much of a gap between the question and the response. I would like to change the app to return immediately after calling Assistant with a placeholder phrase, such as “ok” or “let me see”. While Watson is saying that, the call to Discovery is underway. This provides the “thinking” time and is equivalent to a human saying “umm”.
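That last idea can be sketched with two concurrent promises: kick off the slow Discovery call first, speak the filler while it runs, then speak the real answer once it resolves. Here `speak` and `queryDiscovery` are hypothetical stand-ins for the TTS call and the Discovery request:

```javascript
// Hide Discovery latency behind a filler phrase. "speak" and
// "queryDiscovery" are placeholders for the real TTS and Discovery calls.
async function answerWithFiller(queryParams, speak, queryDiscovery) {
  // Start the slow query first so it runs while the filler is spoken.
  const pending = queryDiscovery(queryParams);
  await speak('Let me see...');
  const result = await pending;
  await speak(result);
  return result;
}
```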

“What else do I have to say?”

Let me know what you think about this project either in the comments or reach out on Twitter.

If you want to learn more about building with Watson, check out some of my other articles in the Conversational Directory.
