Build your own Voice Assistant with React Native, Node.js and Watson (pt.3) — Mobile App

Anderson Anjos
3 min read · Jun 16, 2019


And last but (definitely) not least… the app.

Finally, let’s finish the job: a mobile app that serves as a channel through which users can interact with Watson and get their beer advice.

Obviously, before we begin, we need to create the mobile app, and the React Native CLI helps us accomplish this task. First, install the React Native CLI globally by running:

npm install -g react-native-cli

Then, create the app boilerplate:

react-native init beer-advisor-mobile

Well, at this point we can run:

react-native run-ios (iOS devices)
OR
react-native run-android (Android devices)

And something like this will be displayed:

If everything is fine so far, let’s code the desired behavior.

Here I would like to share something based on my personal experience. Although IBM provides Speech to Text and Text to Speech APIs, I decided not to use them because of performance and accuracy issues, especially when working with the Portuguese language.

If you want more information about these APIs, visit:
https://www.ibm.com/watson/services/speech-to-text/
https://www.ibm.com/watson/services/text-to-speech/

Instead, we’ll do the conversion directly on the device and send only the resulting text to Watson.
To perform this task we depend on two modules. Let’s install them:

npm install react-native-voice
npm install react-native-tts

Import the React components we need, along with axios (for HTTP requests), react-native-voice and react-native-tts.
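A minimal sketch of those imports, assuming everything lives in a single App.js (just a convenient choice for this example):

// App.js (sketch): the modules used throughout this example
import React, { Component } from 'react';
import { View, Text, Button, StyleSheet } from 'react-native';
import axios from 'axios';              // HTTP requests to the Node.js orchestrator
import Voice from 'react-native-voice'; // Speech to Text on the device
import Tts from 'react-native-tts';     // Text to Speech on the device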

Create a class-based component and, within its constructor, bind the voice handler functions and set the language our assistant will speak.
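A sketch of the component skeleton; the class name BeerAdvisor and the pt-BR locale are my choices here, so adapt them to your project:

class BeerAdvisor extends Component {
  constructor(props) {
    super(props);
    this.state = { recognizedText: '', watsonText: '', sessionId: null };

    // Bind the voice handlers so "this" keeps pointing to the component
    Voice.onSpeechStart = this.onSpeechStart.bind(this);
    Voice.onSpeechEnd = this.onSpeechEnd.bind(this);
    Voice.onSpeechResults = this.onSpeechResults.bind(this);

    // The language our assistant will speak
    Tts.setDefaultLanguage('pt-BR');
  }

  // ...the methods from the next sections go here
}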

Now, define the functions that get a Watson session and initialize the conversation by sending an empty message. This triggers Watson’s greeting message and the dialog starts.
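A sketch of that step, assuming the Node.js orchestrator from the previous part exposes /session and /message routes and returns { session_id } and { text }; these names are illustrative, so use whatever your backend actually provides:

// At module level: address of the Node.js orchestrator (adjust to your environment)
const API_URL = 'http://localhost:3000';

// Inside the class:
async componentDidMount() {
  await this.getSession();
  // An empty message triggers Watson's greeting and starts the dialog
  this.sendMessage('');
}

async getSession() {
  const response = await axios.get(`${API_URL}/session`);
  this.setState({ sessionId: response.data.session_id });
}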

The next step is to define the handlers that take care of the voice events.
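Something along these lines; onSpeechResults receives the possible transcriptions in event.value, with the best match first:

// Called from the button in render(); starts device speech recognition
async startListening() {
  await Voice.start('pt-BR'); // same locale used for TTS
}

onSpeechStart() {
  // Recognition started: clear the previous transcription
  this.setState({ recognizedText: '' });
}

onSpeechResults(event) {
  // Take the best transcription and forward it to Watson
  const recognizedText = event.value[0];
  this.setState({ recognizedText });
  this.sendMessage(recognizedText);
}

onSpeechEnd() {
  // The engine stopped listening; the final text arrives via onSpeechResults
}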

Once the voice engine is initialized, we need to define the method responsible for sending the other messages.
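A sketch of that method, still assuming the hypothetical /message route and response shape mentioned above:

async sendMessage(text) {
  const response = await axios.post(`${API_URL}/message`, {
    session_id: this.state.sessionId,
    text,
  });

  // Show Watson's answer on screen and speak it out loud
  const watsonText = response.data.text;
  this.setState({ watsonText });
  Tts.speak(watsonText);
}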

And finally, as the React specification requires, we need to provide the render method. For explanatory purposes, we show two fields and add some basic styling too.
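A possible render method with the two fields (what the user said and what Watson answered), a button to start listening, and a basic style; again, just one way of doing it:

  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.label}>You said: {this.state.recognizedText}</Text>
        <Text style={styles.label}>Watson answered: {this.state.watsonText}</Text>
        <Button title="Talk to Watson" onPress={() => this.startListening()} />
      </View>
    );
  }
}

// Basic style and the default export, outside the class
const styles = StyleSheet.create({
  container: { flex: 1, justifyContent: 'center', padding: 20 },
  label: { fontSize: 18, marginBottom: 16 },
});

export default BeerAdvisor;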

It’s done!
Run the project again and speak to test it.

Let’s check out the result:

Conclusion

This is a very simple example and doesn’t even begin to show the full power of the technologies involved.

The Node orchestration layer could make requests to other systems, opening up possibilities for useful integrations. Likewise, Watson, once properly trained, could answer more specific subjects accurately.

The intention here is to provide a spark that sharpens your creativity to develop disruptive solutions, using voice, images or whatever else you can imagine.

I hope I’ve succeeded in that purpose ;)

