Voice of InterConnect

An Offline First mobile web app featuring Hoodie, IBM Cloudant, and IBM Watson Services

Voice of InterConnect visualization of aggregate sentiment over time.

I’m here in Las Vegas this week for IBM InterConnect, our annual conference that features the “what’s next” of tech innovation for cloud services, Internet of Things, mobile app development, and IBM Watson. Several of us from the IBM Watson Data Platform team are here at InterConnect giving a number of demos and talks. Over the past couple of months our team has been working with Steve Trevathan of Make&Model and Gregor Martynus of Neighbourhoodie on a demo app called Voice of InterConnect. You can try out Voice of InterConnect here in the DevZone at InterConnect or on your own device at voiceofinterconnect.com.

Voice of InterConnect architecture diagram.

Voice of InterConnect is an Offline First mobile web app that features Hoodie, IBM Cloudant, and IBM Watson Services. We decided to build this as a Progressive Web App, rather than as a native mobile app, because we wanted to demonstrate some of what’s possible today on the web platform. The user experience of the Voice of InterConnect app is very similar to that of a native mobile app.

The app first prompts the user with a question, then uses the HTML Media Capture API to access the device’s microphone and record a short voice response from the user. This voice response is stored locally in a PouchDB database on the device. This is the Offline First aspect of the app—the user can continue to record voice responses whether or not the app is able to connect to the cloud.
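To make the local-storage step concrete, here is a minimal sketch of how a recorded response might be shaped as a PouchDB document with an inline audio attachment. The database name, document fields, and attachment name are illustrative assumptions, not taken from the app’s actual code.

```javascript
// Hypothetical sketch: build a PouchDB-style document for one recorded voice
// response. The audio is attached inline as base64, which is the shape
// PouchDB expects for `_attachments` when a document is written with
// `db.put()`. All field names here are illustrative.
function buildRecordingDoc(question, audioBase64) {
  return {
    _id: 'recording_' + Date.now(),
    question: question,
    recordedAt: new Date().toISOString(),
    _attachments: {
      'response.webm': {
        content_type: 'audio/webm',
        data: audioBase64
      }
    }
  };
}

// With a real PouchDB instance this document could then be saved locally
// (and later synced by Hoodie when a connection is available), e.g.:
//   const db = new PouchDB('recordings');
//   await db.put(buildRecordingDoc('How is the conference?', base64Audio));
```

Because the document lives in the on-device PouchDB database first, the write succeeds whether or not the network is available, which is what makes the recording step Offline First.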

The Voice of InterConnect booth in the DevZone at IBM InterConnect.

The next time the front-end app has a network connection (which could be immediately) it uploads the audio file to a Hoodie app running in the cloud. Hoodie is a Node.js framework that provides a complete backend for Offline First apps. The Hoodie app sends the voice response to IBM Watson Speech to Text for transcription, and then sends the transcribed text to IBM Watson Natural Language Understanding for sentiment analysis. The audio recording, the transcribed text, and the sentiment analysis are all stored in IBM Cloudant. Finally, all of the sentiment analysis data is displayed in a visualization that aggregates sentiment over time. Watch this interview with Steve and Gregor if you want to learn more about how the app is built. You can find all of the code for Voice of InterConnect on GitHub.
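The last step of the pipeline, aggregating sentiment over time, can be sketched as a small pure function. Watson Natural Language Understanding reports a document sentiment score between -1 and 1; the record shape below (a `timestamp` plus that `score`) is an assumption for illustration, not the app’s actual data model.

```javascript
// Illustrative sketch: group sentiment records into hourly buckets and
// average each bucket's score, similar in spirit to the app's visualization
// of aggregate sentiment over time. Record shape is assumed, not the app's.
function aggregateSentimentByHour(records) {
  const buckets = {};
  for (const r of records) {
    // Truncate the ISO 8601 timestamp to the hour, e.g. "2017-03-21T14".
    const hour = r.timestamp.slice(0, 13);
    if (!buckets[hour]) {
      buckets[hour] = { hour: hour, total: 0, count: 0 };
    }
    buckets[hour].total += r.score;
    buckets[hour].count += 1;
  }
  // Emit one { hour, averageScore } data point per bucket, sorted by time.
  return Object.values(buckets)
    .sort((a, b) => a.hour.localeCompare(b.hour))
    .map(b => ({ hour: b.hour, averageScore: b.total / b.count }));
}
```

In the real app this kind of rollup could just as easily be done with a Cloudant view; the in-memory version here is only meant to show the shape of the aggregation.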

If you’re at InterConnect then please stop by the DevZone and try out Voice of InterConnect! There you’ll find the Voice of InterConnect team: Steve Trevathan, Gregor Martynus, Mark Watson, Irina J. Uno, Teri Chadbourne, CMP, and me (Bradley Holt). We’d love to talk with you about the app and answer your questions about how it’s built and the technologies that it uses.