From IoT to IoE: Why Data Virtualization is Important in This Journey

denodo · Jan 4, 2017 · 3 min read

We have been talking about the Internet of Things (IoT) for many years now. But just how many “things” are we talking about? Last year, Gartner estimated that 20 billion things would be connected by 2018.

These “things” are connected devices that come in many different form factors, locations, and usage patterns. Every day, we create many more devices that connect to the Internet. Gartner also predicted that, by the end of this year, 5.5 million new devices would be connecting to the Internet every single day.

Recently, more and more companies and analysts have been talking about the “Internet of Everything” (IoE). When we move from “things” to “everything,” we include people, processes, and data. That’s a much bigger, all-encompassing ecosystem than just the IoT. That’s great! We are all very excited to be in this expanded, connected ecosystem. For consumers, the excitement is about being able to interact with smart cars, smart refrigerators, smart watches, smart beds, smart toothbrushes, and so on. But for enterprises, the excitement is about more than connectivity and being able to perform new tasks over the Internet. What is this excitement, and why does it matter?

What Lies Beneath

For every enterprise across the globe that embraces the IoE, it is an opportunity to create economic value from this connected ecosystem. But how does that happen? The answer lies in the data that these connected devices generate. The IoE also includes data about people and processes, captured in applications such as CRM and ERP systems and stored in the cloud, in big data sources, and in many other enterprise systems that are, in a sense, nothing but data repositories. Enterprises need data about things, people, and processes for valuable feedback: whether end devices are working and functioning optimally, whether the workforce is being utilized effectively, and whether each process is laid out and functioning as intended. Leveraging this feedback, enterprises can make better business decisions that generate additional revenue and save cost and time.

With the advent of mobile, cloud, SaaS, social media, and various other web and media formats, the data economy is becoming increasingly complicated. Not all forms of data and data repositories communicate with each other in the way we would like. To make sense of any enterprise data trove, we need clearly defined ways to capture, process, access, and present heterogeneous sets of data which, when combined, can provide organizations with invaluable insights for innovation and progress. Some progress has been made through data integration technologies like ETL, which help build enterprise data warehouses (EDWs) as single sources of truth. Various BI tools use EDWs to try to understand customer behavior, the usefulness of a product, the right segmentation of a given market, and so on. But ETL processes do not deliver data in real time, and they are quite resource intensive.
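To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not from the original article): instead of an ETL job that copies everything into a warehouse on a schedule, a virtualized view joins two live, heterogeneous sources at query time. All source, table, and field names below are illustrative assumptions, not any real Denodo API.

```python
import sqlite3

# Hypothetical "CRM" source: an in-memory SQL database (names illustrative).
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])

# Hypothetical "device telemetry" source: plain Python records, standing in
# for a big data store or cloud API that does not speak SQL.
telemetry = [
    {"customer_id": 1, "device": "sensor-a", "status": "ok"},
    {"customer_id": 2, "device": "sensor-b", "status": "fault"},
]

def federated_view():
    """Join both sources on demand, without copying data into a warehouse.

    This is the core idea behind data virtualization: the combined result
    is computed at query time from the live sources, so it is always
    current, unlike a batch-loaded EDW table.
    """
    names = dict(crm.execute("SELECT id, name FROM customers"))
    return [
        {"customer": names[row["customer_id"]],
         "device": row["device"],
         "status": row["status"]}
        for row in telemetry
    ]

for row in federated_view():
    print(row)
```

The trade-off sketched here is the one the paragraph above describes: the federated query avoids the latency and storage cost of a periodic ETL copy, at the price of hitting the live sources each time it runs.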

In many instances, EDW solutions are just too expensive to be viable. Recently, with the advent of big data analytics on inexpensive commodity servers or hyper-converged clusters, many organizations are gathering insights from their non-transactional or dark data. But often, big data analytics lack context, and without the right context, big data analytics mean very little. This is where data virtualization becomes instrumental.

