What’s Next for WebRTC?

A couple weeks ago I chaired a panel at the IIT Real-Time Communications Conference in Chicago on the topic of What’s Next for WebRTC, with a focus on future use cases and applications. I had a great set of panelists: Brian Pulito of IBM, Douglas Wadkins of Skedans, Ivelin Ivanov of Telestax, Dr. Luis Lopez of Kurento, and Vladimir Beloborodov of Mera Software. The show was great as always and has had me pondering a bunch of interesting concepts for weeks.

I had three main take-aways from the discussion on where WebRTC is going:

  • Supermedia: forget today’s simple one-to-one calls and think broadcasting to millions with scalable multi-stream architectures and unique augmented reality experiences leveraging the latest in AI processing
  • Remote Machines: there are plenty of devices beyond the latest smartwatch that could house a camera or microphone, and these devices will not only produce WebRTC streams but also drive the need for more human-to-human communication
  • Middleware: a different kind of glue is needed to stitch together the growing array of high-bandwidth, real-time media streams WebRTC will produce

See below for details.

Setup: Where is WebRTC Now?

WebRTC has come a long way in just a few years:

  • Penetration in 3 of the top 4 browser vendors: all but Apple, now that Microsoft’s Edge (notably not Internet Explorer) has recently joined Google’s Chrome and Mozilla’s Firefox
  • 3 of the top 5 social networks: Facebook, WhatsApp, and Google+, with the Chinese networks yet to have a public WebRTC launch (that I could find)
  • Soon to be near ubiquity in the traditional telecom and enterprise communications vendors
  • Even a primetime TV reference on HBO’s Silicon Valley:
WebRTC mentioned on HBO’s Silicon Valley

I really did not want to host another panel about how the web is disrupting telcos or how RTC embedded into enterprise messaging is changing Unified Communications. While these statements are arguably true, they are not news to those who have been working in these areas (or they shouldn’t be!). Instead I asked the panelists to highlight some of the forward-looking use cases for WebRTC that are just emerging.

Evolving into Supermedia

WebRTC makes one-to-one video calls no big deal. Anyone with novice-level JavaScript skills (like me) can quickly hack together a peer-to-peer video call, as the sketch below shows. The future of WebRTC might not be so simple.
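To make that concrete, here is a minimal sketch of the caller side in TypeScript. The browser APIs (getUserMedia, RTCPeerConnection) are the real ones; the sendToPeer and onPeerMessage signaling helpers are hypothetical placeholders, since WebRTC deliberately leaves the signaling channel up to the application:

```typescript
// Minimal caller-side sketch of a WebRTC peer-to-peer video call.
// sendToPeer / onPeerMessage are hypothetical stand-ins for your own
// signaling transport (typically a WebSocket); WebRTC does not define one.
declare function sendToPeer(msg: object): void;
declare function onPeerMessage(handler: (msg: any) => void): void;

async function startCall(localVideo: HTMLVideoElement, remoteVideo: HTMLVideoElement) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture the local camera and microphone and send every track to the peer.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  localVideo.srcObject = stream;

  // Render whatever the remote side sends back.
  pc.ontrack = (event) => { remoteVideo.srcObject = event.streams[0]; };

  // Trickle ICE candidates to the peer as they are discovered.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeer({ candidate: event.candidate });
  };

  // Handle the peer's answer and candidates as they arrive.
  onPeerMessage(async (msg) => {
    if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
    if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  });

  // Create and send the SDP offer that starts the call.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });
}
```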

Brian and Luis highlighted that effective multi-party video calling is much more difficult. Luis pointed to an even more difficult use case: large-scale real-time broadcasting, like the popular (or once popular?) Periscope and Meerkat apps or the game-watching service Twitch that Amazon recently purchased for $974M. The low-latency nature of WebRTC means one could provide a real-time broadcast to thousands or millions over IP without the 5–30 second delay (or more) you get with existing technologies that require the use of a CDN.
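To see why scale is the hard part, consider the naive approach sketched below: the broadcaster’s browser opens one RTCPeerConnection per viewer. This works for a handful of viewers, but the broadcaster must encode and upload a copy of the stream for each one, which is exactly why large-scale designs relay a single uplink stream through server-side distribution instead (the names here are illustrative):

```typescript
// Naive fan-out: one RTCPeerConnection per viewer, all fed from the same
// captured stream. Upload bandwidth and CPU grow linearly with viewer count,
// so real broadcast systems forward one stream through media servers instead.
async function naiveBroadcast(viewerIds: string[]): Promise<Map<string, RTCPeerConnection>> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const connections = new Map<string, RTCPeerConnection>();

  for (const viewerId of viewerIds) {
    const pc = new RTCPeerConnection();
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    // Offer/answer and ICE exchange per viewer omitted; see the earlier sketch.
    connections.set(viewerId, pc);
  }
  return connections;
}
```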

Duplicating, mixing, and routing streams is one thing, but trying to do real-time processing of them is another entirely. Luis talked about how they are doing real-time analysis of WebRTC streams with computer vision (a favorite topic of some of my own hacks) for applications such as augmented reality and crowd detection. Ivelin mentioned this for home automation, security applications, and manufacturing process intervention. A lightweight processor with a camera can be connected to heavier-weight vision software in the cloud to do some very exciting things.
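For a feel of what feeding vision software looks like, here is a generic browser-side sketch that samples frames out of a live video element so the pixels can be handed to an analysis routine. Kurento actually does this kind of processing server-side in its media pipelines; analyzeFrame below is a hypothetical stand-in for whatever vision algorithm gets plugged in:

```typescript
// Sample frames from a live (e.g., WebRTC-fed) video element into a canvas
// so the raw pixel data can be passed to computer-vision code.
// analyzeFrame is a hypothetical placeholder for the actual vision algorithm.
declare function analyzeFrame(pixels: ImageData): void;

function processFrames(video: HTMLVideoElement, fps: number = 10): void {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;

  setInterval(() => {
    if (video.videoWidth === 0) return; // the stream is not flowing yet
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    analyzeFrame(ctx.getImageData(0, 0, canvas.width, canvas.height));
  }, 1000 / fps);
}
```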

Luis demonstrating Kurento and Augmented Reality

Brian talked about using the content of the stream to provide useful context to applications during the call with sentiment analysis and speech-to-text. In the previous session, Dean Bubley talked about how sentiment analysis could be used to help with call center customer service: if software detects your voice patterns growing increasingly upset, perhaps you get transferred to an agent you are more likely to agree with.

Virtual Reality may also prove to be interesting, but will require greater flexibility for dealing with stereoscopic 3D and 360-degree cameras. Imagine strapping a 360 camera to your head and broadcasting those streams in real time to remote VR viewers who can not only see what you see, but also see what you don’t behind you, as was recently demonstrated. Vladimir even speculated on what WebRTC could mean for 3D scanners for 3D printing.

The future of WebRTC is supermedia: moving beyond one-to-one calls that simply get shown on a remote screen into a much more complex world. A world where the streams themselves are becoming much more advanced, where they are being processed and augmented in real time, and where they will be sent to other parties, potentially millions of them, in very dynamic ways.

Talking about Machines

Internet of Things (IoT) was one of the major themes of the conference. So does it make sense to put WebRTC video calls on your watch, thermostat, and refrigerator? Possibly, but WebRTC’s real-time video and audio feeds do make a lot more sense for many other devices.

Douglas of Skedans talked about commercial drone use cases. Ivelin talked about connected cars and how real-time feeds could help with roadside assistance and emergencies. Vladimir mentioned medical devices and public transportation.

One of the more interesting aspects of the discussion, brought up by Douglas, Brian, and Vladimir, was how IoT is driving more human-to-human communications than human-to-machine or machine-to-machine. Douglas talked about commercial drone inspection use cases where there is a pilot who is flying and an inspector who is looking for issues. If the inspector sees something unusual, it often requires invoking other experts right then and there. Rapid response is critical: commercial drone pilots aren’t free, and the drone has limited time in the air. There could also be safety issues at hand that must be addressed immediately, depending on the application. A single stream from the drone quickly escalates into a multi-party conference discussing what that stream is showing and determining the correct response.

IoT is driving more human-to-human communications than human-to-machine or machine-to-machine

Brian talked about how this can happen around simpler IoT devices that might not have a camera or microphone, but would need to automatically initiate a real-time communications session, with context, between people as part of an alerting procedure. A sketch of that flow follows below.
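Here is a hypothetical sketch of that alert-driven escalation, with every endpoint, payload shape, and name invented for illustration: a sensor reading crosses a threshold, middleware creates a conference room, and the on-call experts get the alert context plus a one-click join link:

```typescript
// Hypothetical alert-driven escalation: a device with no camera or microphone
// still triggers a real-time session between people. All endpoints and payload
// shapes below are invented for illustration.
interface SensorAlert {
  deviceId: string;
  reading: number;
  threshold: number;
  timestamp: string;
}

async function escalateAlert(alert: SensorAlert): Promise<void> {
  if (alert.reading <= alert.threshold) return; // nothing to escalate

  // Ask the (hypothetical) middleware to create a WebRTC conference room,
  // attaching the alert as context for everyone who joins.
  const res = await fetch("https://middleware.example.com/rooms", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ context: alert }),
  });
  const { joinUrl } = await res.json();

  // Notify the on-call experts with the context and a one-click join link.
  await fetch("https://middleware.example.com/notify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      team: "on-call-inspectors",
      message: `Device ${alert.deviceId} read ${alert.reading} (threshold ${alert.threshold}); join: ${joinUrl}`,
    }),
  });
}
```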

The inspection use case for embedding WebRTC as part of IoT alone could be huge. Most man-made things in the physical world need to be checked to ensure they were made correctly when they come off the manufacturing line. Anything that is built degrades over time, and that requires inspection and maintenance. More often than not it’s hard to put a person in the right physical position to see what’s going on, such as on the side of a skyscraper, under a bridge, over agricultural systems that span miles, down an oil pipeline, or inside a wall. Lowering the cost of inspections, improving the perspective of what is seen, and responding more effectively could save many billions, and spawn a new billion-dollar industry as a result.

Machines will inspire just as much human-to-human conversation as human-to-machine. Image source: Metropolis (1927)

Not your father’s Middleware

Another major topic of discussion was what to do with all this new streaming data. Do you archive it? Where does that go, and who gets access to it? Where is a machine-based stream sent when an alert goes off, and who is part of the conference to discuss it? How do you manage the enormous amounts of data produced by sensors and vision processing? How does all this new data get integrated into existing systems? A better, more powerful glue is required to keep up with all these streams and stitch it all together: a modern, more sophisticated middleware.

This middleware is actually a major part of what IBM WebSphere, Skedans, and Telestax provide. Mera and Kurento may well say the same. Middleware may not sound as sexy as augmented reality or drones, but it is critical to making these applications work, particularly in complex commercial and industrial environments where breaking a process can quickly cost millions.

WebRTC = Big Data — Brian Pulito, IBM

Today’s middleware needs to change to handle:

  • Orders of magnitude more devices and interconnections from more sensors and IoT
  • Real-time, high-bandwidth data, like a camera feed, video conference, or high-fidelity sensor data
  • Intelligent human interaction: alerting the right people, at the right time (and not all the time), and giving them the tools to respond and take action effectively, tools that now include real-time communications

Middleware is often forgotten behind the scenes of slick devices and fancy UIs, but it is becoming the hidden core of many of tomorrow’s advanced WebRTC applications.

More information

Check out the slides from the panel session here:

What’s Next for WebRTC IIT-RTC 2015 panel slide deck

The conference had a ton of great keynotes and sessions. You can access most of them here: http://www.rtc-conference.com/2015-iit-rtc-conference-keynote-videos/

I will also be at the TAD Summit in Lisbon next month discussing WebRTC and the exciting new communications apps ecosystem.
