Find out your personality with Watson AI!

The Assembly
Aug 23, 2017 · 10 min read

Learn how to create a custom personality summary on the Bluemix Platform with IBM Watson using Tone Analyzer, Personality Insights, Text to Speech & Speech to Text services with step-by-step instructions.

How and why do people think, act, and feel the way they do?

To answer this, we’re harnessing the power of AI to apply linguistic analytics and personality theory, inferring attributes from a person’s unstructured text. The input can be a whole speech, a Twitter feed, an email archive, a blog post, or a forum post: anything that offers insight into the needs, values, and inclinations of an individual.

With the massive amount of data generated through social media, emails, purchases, reviews, and sensors across the world, companies and individuals alike have an opportunity to turn that data into value: sharper competitiveness and real insight into how people actually feel about a company, product, or service.

The Big Picture — What we’re creating

We’re creating a handy application that takes user speech input from an audio file of a person speaking. It then uses Speech to Text to transcribe the audio into text. Next, it uses the Tone Analyzer and Personality Insights services to generate a summary. Finally, it uses Text to Speech to read out the summary we’ve just created.

I’m going to share with you a detailed tutorial on how you can build your own application from start to finish. Let’s get started!

Step 1: Create an account on Bluemix

If you don’t have one already, make sure you sign up for a Bluemix account here. It’s a really powerful platform that lets you build, run, deploy, and manage applications on the cloud. You get a free 30-day trial when you sign up!

P.S. If you already have a Bluemix account and your trial is about to expire, hit us up on Facebook at The Assembly and we’ll hook you up with a promo code to extend your trial. 😉


Step 2: Download or clone the required assets from GitHub

You’ll need to download a couple of files containing custom code for the Node-RED assets used in the later stages. Click here to get them.

Once you’ve done that, extract the files somewhere you’ll remember, and install a good text editor to view them. Sublime Text is our personal favorite, but you can choose whatever you’re comfortable with. Other recommendations include Notepad++, Atom, or Brackets.

To open a folder directory with all the files in one place, go to File > Open Folder and select your directory.

Keeping the files ready on Sublime Text

Step 3: Create your Node-RED App

Head back over to your Bluemix dashboard and click on Create App.

Once on the catalog page, search for the Node-RED Starter boilerplate and select it to deploy the service. This is the first step to creating and editing your app on IBM Bluemix.

“But wait — What in the world is Node-RED?”

Simply put, Node-RED is a programming tool for wiring together hardware devices, APIs, and online services. It provides a browser-based editor that makes it easy to wire together flows using the wide range of nodes in the palette, which can be deployed to its runtime in a single click. Check out their website to see exactly what it’s capable of.

Creating a new application on Bluemix

After you’ve selected the application, enter a unique name for your application.

Enter a unique name for your application

Since we’re using a free account, we’ll stick to the default options that are suggested. Go ahead and press Create to start building your Node-RED application. Once it’s done, click on ‘Visit App URL’. This should open a new window where you can start building in your Node-RED environment.

Secure Node-RED with login credentials

Enter an ID and password — you’ll need this frequently to access your node-RED editor and also to keep it secure.

Keep hitting the next button until you reach the editor. Go ahead and click on ‘Go to your Node-RED flow editor’. Once you’re there, enter your login details and get ready to start building!

Step 4: Start making the Flow

On the left-hand side, you should see a palette of nodes that you can drag, drop, and rearrange to shape how your flow progresses. This is an extremely powerful feature that gives you an intuitive visual of the entire flow of the application.

We’ll start by dragging in 3 blocks — the input HTTP Request node (found in the input section), the template node (found in the function section) and the HTTP Response node (found in the output block).

Here’s what each of them do:

  1. The input HTTP Request node lets you set the root or landing-page path for your application. So if you double-click the input node and set the path to /test, users can reach this landing page at ‘https://my-amazing-node.mybluemix.net/test’.
  2. The template node lets you write the HTML that defines how the web page should look. You can even add <style> or <script> tags to incorporate CSS and JavaScript. For the purposes of this demo, we’ve kept it really simple.
  3. The HTTP Response node sends responses back to requests received from the HTTP input node.

You can go ahead and start wiring them up together.
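Under the hood, a Node-RED flow is just JSON that you can export and import from the editor menu. Here’s a hand-trimmed, illustrative sketch of the three wired nodes (the ids, template body, and property set are made up for illustration; it is not the actual export from the repository):

```json
[
  { "id": "in1",  "type": "http in",       "url": "/test", "method": "get",
    "wires": [["tpl1"]] },
  { "id": "tpl1", "type": "template",
    "template": "<html><body><h1>Personality Demo</h1></body></html>",
    "wires": [["out1"]] },
  { "id": "out1", "type": "http response", "wires": [] }
]
```

Each node’s `wires` array points at the id of the next node, which is exactly what you’re doing visually when you drag a wire between them.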

Adding the first 3 nodes to your flow

Select the HTTP Request node and change the URL directory as you prefer. Mine is called /test.

Setting up the URL directory for your application

Next, head on over to the files you downloaded from GitHub and open the ‘index.html’ file. Copy the contents and paste it in the template node.

This simply includes a title and some div elements that link to the audio file.

Getting the file contents from the GitHub repository
Custom Front-facing HTML in the template node

Next, connect a function node. This function node essentially provides the raw audio file to be analyzed. You can change the directory to read any file you like.

Double-click the function node and paste in the contents of the ‘inject audio.js’ file. This piece of code will retrieve the URL of a one-minute monologue and pass it on for transcription.

Adding a function to inject audio and keep it ready for transcribing
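If you’re curious what a function node like this contains, here’s a minimal sketch. In Node-RED the msg object is supplied by the runtime; we simulate it below, and the URL is a placeholder, not the actual monologue file from the repository:

```javascript
// Sketch of an "inject audio" function node body.
// The URL is a placeholder, not the repository's actual file.
function injectAudio(msg) {
  // The downstream Speech to Text node accepts a string URL on msg.payload.
  msg.payload = "https://example.com/audio/one-minute-monologue.wav";
  return msg;
}

// In Node-RED, `msg` arrives from the previous node; we simulate an empty one.
const out = injectAudio({});
console.log(out.payload); // the audio URL now rides on msg.payload
```

In the real editor you only paste the body (the assignment and `return msg;`); the wrapper function and simulated call are just here so the sketch runs on its own.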

Step 5: Start adding the required services

Next, go back to your Bluemix app and add and bind the following services to the application:

  • Speech to Text
  • Text to Speech
  • Personality Insights
  • Tone Analyzer
Selecting the ‘Watson’ tab to find the services you’ll need

While you’re adding the services, ensure that in the connections tab, you select the name of your application, in our case: ‘my-amazing-node’.

In each service’s tab, head over to Credentials and make sure you’ve added new credentials if they don’t already exist.

Note: After you connect a service, Bluemix will ask whether you want to restage your app. Click Cancel, and only choose Restage when you add your final (fourth) service. This will save you time, since restaging takes a while.

Step 6: Keep adding more nodes

Return to your flow editor, refresh the page (if it’s a separate tab) and add these three new nodes.

1. Speech to Text — The Speech to Text service converts the human voice into the written word. It uses machine intelligence to combine information about grammar and language structure with knowledge of the composition of the audio signal to generate a more accurate transcription.

The audio file to be analysed should be passed in on msg.payload.

Supported msg.payload types:

  • String URL to audio
  • Buffer Raw Audio Bytes

For more information about the Speech To Text service, read the documentation.

2. Personality Insights — The Personality Insights service uses linguistic analytics to infer personality characteristics from text. The service can infer consumption preferences based on the results of its analysis.

The text to analyse (a minimum of one hundred words) should be passed in on msg.payload.

For more information about the Personality Insights service, read the documentation.

3. Tone Analyzer — The Tone Analyzer service uses linguistic analysis to detect emotional tones, social propensities, and writing styles in written communication.

The text to analyze should be passed in on msg.payload.
The service response will be returned on msg.response.

For more information about the Tone Analyzer service, read the documentation.

Double-click the Speech to Text node, select US English as your language, and check the box ‘Place output on msg.payload’. We do this because the two downstream services, Personality Insights and Tone Analyzer, take their input from msg.payload.

Next, add a new function node as well as a text to speech node.

Double-click the function node and paste in the contents of the ‘personality summary.js’ file. This code calls a custom class that summarizes the personality profile produced by the two previous nodes.

Double-click the Personality Insights and Text to Speech nodes and ensure that the selected language is English.
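To get a feel for what the summary step does, here’s a simplified, self-contained stand-in. The real ‘personality summary.js’ uses the personality-text-summary module (wired up in Step 8); this sketch just picks the strongest trait and tone from hypothetical service output shaped like the real responses:

```javascript
// Simplified stand-in for the summary function node. The trait/tone data
// below is invented for illustration; the real flow reads it from
// msg.insights (Personality Insights) and msg.response (Tone Analyzer).
function buildSummary(insights, tones) {
  // Pick the highest-percentile personality trait.
  var topTrait = insights.personality
    .slice()
    .sort(function (a, b) { return b.percentile - a.percentile; })[0];
  // Pick the highest-scoring tone.
  var topTone = tones
    .slice()
    .sort(function (a, b) { return b.score - a.score; })[0];
  return 'You come across as high in ' + topTrait.name.toLowerCase() +
         ', and your dominant tone is ' + topTone.tone_name.toLowerCase() + '.';
}

var summary = buildSummary(
  { personality: [{ name: 'Openness', percentile: 0.91 },
                  { name: 'Agreeableness', percentile: 0.64 }] },
  [{ tone_name: 'Joy', score: 0.72 },
   { tone_name: 'Analytical', score: 0.48 }]
);
console.log(summary);
// → "You come across as high in openness, and your dominant tone is joy."
```

The module used in the real flow produces a much richer paragraph, but the shape of the work is the same: rank the traits and tones, then turn the winners into readable English.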

Step 7: Install an external node for playing the audio

For the final step in the editor, we need to insert another node that lets us play audio in the browser.

To do this, click on the hamburger menu on the right and select ‘Manage Palette’ from the drop down.

Select Manage Palette to install an external node

Look for the node-red-contrib-play-audio node and install it with the button on the right.

Select the first option and click on install
Here’s what your final flow should look like

We add the ‘Play Audio’ node at the end of our flow so that we can actually hear the output in our browser.

Note that you need to keep your Node-RED editor as well as the Interface page open if you want to listen to the processed output.

For a better understanding of each stage, we add debug nodes, found in the left-hand palette. By default they output msg.payload to the debug tab, but you’ll need to change the ones for Personality Insights and Tone Analyzer to msg.insights and msg.response respectively.

Drag and drop them after each of the outputs you’re interested in seeing results for.

Step 8: Making the custom summary function work

We’re almost done, but we still need to add a few lines to our dependencies and modify an existing function.

Go back to the Bluemix page of your application and click on the View Toolchain button under Continuous Delivery at the bottom.

Select View Toolchain under Continuous Delivery

Since we want to modify the dependencies and add a function, we’re interested in the code tab. You have two options once you’re here:

  • Edit the code using Git on a local machine and then use the Bluemix console to push the app back to the cloud.
  • Use a web IDE to edit the files online without needing to push it back using the Bluemix console.

You can choose whichever method you prefer but I’ll show you how you can do it with the Orion Web IDE.

Select your preferred method of editing the code

Once it loads up, navigate to the package.json file and add the following line to the dependencies section:

"personality-text-summary": "2.1.x",

You can find this line in the downloaded GitHub files as well, in the file titled ‘requirements.txt’.
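For context, the new line slots into the dependencies block of package.json alongside the entries the boilerplate already ships with. The surrounding package names and versions below are illustrative; your generated file will differ:

```json
{
  "dependencies": {
    "node-red": "0.x",
    "node-red-node-cf-cloudant": "0.x",
    "personality-text-summary": "2.1.x"
  }
}
```

Remember the comma after each entry except the last one, or the deploy will fail with a JSON parse error.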

Next, head over to the ‘bluemix-settings.js’ file and replace the old function with this modified version.

functionGlobalContext: {
    PersonalitySummary: require('personality-text-summary')
},
Ready for deployment

That’s all! You should be good to go now. Press the Deploy button on the top and wait for a while for it to work. This might take a few tries to get it up and running so don’t worry too much if it gives you an error at the first try.

Step 9: We’re done!

Head back to your Node-RED editor, click on Deploy and open the user interface on another tab by going to:

https://your-app-name.mybluemix.net/test

If everything is set up right, you should hear a personality summary being read out by your friendly robot, Michael.

Feel free to comment below if you’re stuck on a step or if you’re facing a bug.


PS: Special thanks to Naiyarah and the IBM Developer Experience team for organizing this hands-on at The Assembly!

If you get value out of these posts press the 👏 button and follow The Assembly for more valuable posts!

If you live in Dubai, make sure to follow The Assembly on Facebook and keep up-to-date with our events! 😊

The Assembly is a community collaborative that provides hands-on workshops every Saturday on topics varying from artificial intelligence, cloud platforms, IoT devices and DIY electronics projects.

If you have the passion to learn and build innovative projects then be sure to join us every Saturday to learn more about such smart things. If you want to keep track of the events we organize, get in touch with us on Facebook or visit our website.


