Data streaming is quickly taking over the internet, with streaming video already accounting for more than 70% of all internet traffic. According to a study completed by Cisco, the number of devices receiving and emitting real-time data will more than double in the next four years, from 5.5 billion to 12.5 billion. Streaming data is at the core of social media as well: as sites like Twitter, Snapchat, Instagram, and Facebook continue to grow, so will the overall consumption of real-time streaming data. The question becomes, “What tools can we use to make sense of this constant influx of data?”
How it works
Streamlined3 provides developers with fully customizable visualization templates that can be generated when creating a server instance. We currently offer templates for six categories of graphs: Line, Bar, Scatter, Pie, Map, and Bubble. Once a developer has access to a data stream, visualizing it takes two simple steps. First, connect to our library and invoke the method for the desired visualization, passing in the data stream and configuration specs. Second, reference our library in the HTML page and create an HTML component with the designated visualization ID.
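The two-step flow described above might look something like the following. This is a hypothetical sketch only: the names `Streamlined3`, `lineStream`, the config fields, and the element id are all assumptions, not the library's documented API, and a small stub stands in for the real module.

```javascript
// Hypothetical stand-in for the real Streamlined3 module.
const Streamlined3 = {
  // Step 1: invoke the method for the desired visualization type,
  // passing the data stream and configuration specs.
  lineStream(stream, config) {
    // The real library would bind `stream` to a Line template and render
    // into the HTML element whose id matches `config.target`.
    return { type: 'line', target: config.target, stream };
  },
};

// A data-stream stand-in: any event-emitter-like source of data chunks.
const cpuStream = { on: () => {} };

const chart = Streamlined3.lineStream(cpuStream, {
  // Step 2: the page would contain a component like <div id="cpu-chart">.
  target: 'cpu-chart',
  xLabel: 'time',
  yLabel: 'load',
});
```

The real call signature may differ; the point is simply that one method call binds a stream to a template, and one HTML element with the designated id receives the rendered output.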
Behind the scenes, our library assigns these data streams to the optimal combination of worker processes, using Node clusters to balance the load. Sticky sessions are then established through a Redis database so that each socket connection stays pinned to a single worker before the data is sent through our diffing algorithm. Once the diffing algorithm has optimized rendering performance by isolating the updated DOM elements, the visualization of this bound data is appended to the HTML page.
Having multiple streams emitting data every few milliseconds through a single-threaded server like Node can slow down the transfer of data. To ensure scalability and optimal data transfer speeds, we used Node clusters to balance the load, spawning one worker process per CPU on the host machine. However, we found that socket connections routed through these workers had trouble maintaining their handshake: a connection would bounce between workers, never fully solidifying. Our solution was to employ a Redis database under the hood to create sticky sessions, binding each socket to a single worker.
D3.js offers methods and algorithms that assist developers with creating stunning visualizations derived from static sets of data. This library is equipped with dynamic selection methods that play a vital role in how DOM elements are reconfigured and redrawn. One downside to using this tool with a live data stream, however, is that the constant re-rendering can take a huge toll on the app’s performance, especially when working with multiple data streams. If the data chunk consists of only one changed value, all data-bound components will still need to be re-rendered.
We created a diffing algorithm to better apply D3.js’ great rendering logic to a constant data stream, effectively decreasing the number of DOM components targeted for a re-render each time a new data chunk is received. This algorithm works on two levels. It first compares the new data chunk against previously rendered components, then invokes a new render on the isolated components with updated data.
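The first level of that comparison can be sketched as a plain chunk diff: compare the new data chunk against the previously rendered one and report only the positions whose values changed, so that only the components bound to those positions are handed back to D3 for a re-render. The function name and data shape here are assumptions for illustration, not the library's actual implementation.

```javascript
// Level 1: compare the incoming chunk to the last rendered chunk and
// collect the indices whose values actually changed.
function diffChunk(prevChunk, nextChunk) {
  const changed = [];
  for (let i = 0; i < nextChunk.length; i++) {
    if (prevChunk[i] !== nextChunk[i]) {
      changed.push(i);
    }
  }
  return changed;
}

// Level 2 (conceptually): only the components bound to these indices are
// re-rendered; everything else in the DOM is left untouched.
diffChunk([10, 20, 30], [10, 25, 30]); // → [1]
```

If a chunk of thousands of values arrives with a single changed entry, this reduces the re-render from every data-bound component to exactly one.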
Check out our demo at Streamlined3.io!