Innovation, Business and IT (Are they separate any longer?)

I just had the opportunity to attend the Red Hat Summit as well as a recent sales event with HPE. Coming out of them, I was struck by the convergence of business and IT at multiple levels. HPE talked about autonomous, self-healing infrastructure that applies predictive analytics to improve service levels. The Red Hat sessions spoke to the power of collaboration across organizations. An enterprise, like its IT infrastructure, can be autonomous and self-healing yet connected with all its peers. That is a lot of mirroring to ponder.

First, let’s look at business strategies. The model for new revenue growth has moved away from top-down big bets that take years toward quickly trying new ideas, looking for the one that truly disrupts the market. That’s not to say that major funding won’t follow, just that funding decisions become more granular, based on intermediate goals and near-term payback.

That change stems from many things, but a critical piece is that incremental development is far simpler now than in the past. For instance, the Agile development model feeds directly into cloud-native technology architectures to create “just enough” of an application quickly. Through DevOps release and test processes and container-based packaging, these apps are ready to run immediately. Elastic cloud infrastructure such as AWS, Azure, Google Cloud or OpenShift means there is always space available to put them to work.
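To make that concrete, here is a hedged sketch of what “just enough” of an application can mean in practice. It assumes the Flask package and placeholder endpoint logic, neither of which comes from the original; this is the kind of single-file service a container image wraps and an elastic cloud scales out:

```python
# A minimal "just enough" service: one endpoint, ready to be wrapped
# in a container image and deployed to any elastic cloud platform.
# Assumes Flask is installed (pip install flask); the logic is a placeholder.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/recommendations/<user_id>")
def recommendations(user_id):
    # Placeholder: a real service would call a model or database here.
    return jsonify({"user": user_id, "items": ["item-1", "item-2"]})

if __name__ == "__main__":
    # Inside a container, a production server (e.g. gunicorn) would run this.
    app.run(host="0.0.0.0", port=8080)
```

Because the whole service fits in one file, a DevOps pipeline can rebuild, test, and redeploy it in minutes.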

Drivers

The complementary nature of the key elements is no accident. Everyone wants change to occur more quickly and at lower cost and risk.

Open Source

A key driver for faster cycle time is open source technology and the community model that fosters it. There are three elements to the open source revolution: cost, cycle times, and collaboration.

Open source software is available for free or supported via subscription. That makes innovations that might scale very large much less risky. At their heart, most applications have databases. Oracle costs upwards of $40,000 per core to license. Imagine your application growing from 10,000 users to 10 million in a few months; you would go broke paying for the back end. That cost ramp goes away with an open source database.
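To see why the ramp matters, here is a back-of-the-envelope sketch. The $40,000-per-core figure comes from above, while the users-served-per-core number is purely a hypothetical assumption:

```python
# Back-of-the-envelope license cost as an application scales.
# $40,000/core is the figure cited above; 2,500 users per core is
# a hypothetical assumption for illustration only.
LICENSE_PER_CORE = 40_000
USERS_PER_CORE = 2_500

for users in (10_000, 10_000_000):
    cores = -(-users // USERS_PER_CORE)  # ceiling division
    cost = cores * LICENSE_PER_CORE
    print(f"{users:>10,} users -> {cores:>5,} cores -> ${cost:>15,}")
```

Under those assumptions, the license bill grows from $160,000 to $160 million as the user base scales; with an open source database, that line item simply is not there.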

With open source, anyone can extend or modify a distribution to meet their needs. There is no need to wait for a vendor to decide whether your need is a priority and then charge you early adopter fees to get it done. On the other hand, if your extension deserves to become part of the base distribution over the long term, you must work with the community to get it built in. That brings us to collaboration.

Collaboration brings out the best in the software community — achieving common ground so that the greatest number can benefit. The community for any particular distribution must agree on additions and changes before they get rolled into the base distribution. That also forces discussion of how this distribution might integrate with complementary technologies. Those decisions drive toward simplicity and efficiency to benefit everyone. Taken together, these three elements deliver a set of overlapping collaborations across the IT community, while being open to brand new concepts.

DevOps

Silos of thinking and narrowly defined goals stifle innovation. Just as the open source model encourages collaboration in infrastructure services development, DevOps helps business and IT organizations update their apps more quickly. And, of course, open source teams routinely apply DevOps in their own work. One of the key things DevOps offers is the integration of the business point of view directly into the development-to-production process. Again, that combines cross-functional collaboration with fast cycle times to improve applications incrementally, informed by user feedback. Under the covers, DevOps drives standards, compliance and automation to minimize handwork and human error.
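As a hedged illustration of that last point, a release gate reduces to a script that refuses to promote a build unless automated checks pass. The specific tools below (pytest for tests, flake8 for coding standards) are illustrative choices, not anything the original names:

```python
# Toy DevOps release gate: promote a build only if every check passes.
# The specific tools (pytest, flake8) are illustrative choices.
import subprocess
import sys

CHECKS = [
    ["python", "-m", "pytest", "--quiet"],  # automated tests
    ["python", "-m", "flake8", "."],        # coding standards
]

def gate() -> bool:
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print("gate failed on:", " ".join(cmd))
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if gate() else 1)
```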

Flash Memory

Analytics built on AI and machine learning have given business decision-makers enormous insight into their businesses and how to serve customers. That analysis depends on cheap access to solid state memory. Bringing memory closer to the CPU has made an incredible difference in what can be analyzed and how. The bad old days of complex disk striping schemes and convoluted data warehouse schemas, tied together by modest I/O bandwidth, are in the past. It is hard to overstate how flash memory has opened the door to broad usage of AI and machine learning, which drive analytics of all kinds. The notion of an open “data lake” where a wide variety of evaluations can take place simultaneously opens new doors to creativity.
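To put rough numbers on that I/O shift, consider a simple scan-time sketch; the throughput figures are ballpark single-device assumptions, not benchmarks:

```python
# Rough full-scan time for a 1 TB dataset on different media.
# Throughputs are ballpark single-device assumptions, not benchmarks.
DATASET_MB = 1_000_000  # ~1 TB

for media, mb_per_s in (("spinning disk", 150), ("NVMe flash", 3_000)):
    minutes = DATASET_MB / mb_per_s / 60
    print(f"{media:>13}: full scan in ~{minutes:.0f} minutes")
```

Under those assumptions, a nearly two-hour scan becomes a six-minute one, which is the difference between a nightly batch job and interactive exploration of a data lake.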

One great thing is the “fractal” nature of data sets relative to analytics. Fractal connotes that structures are similar even though they exist at different scales. The same approaches that an organization uses to understand customers or the key parts of their business can also be applied to their IT infrastructure. Log files, telemetry and sensors apply equally well to Internet of Things (IoT) environments and the datacenter. The business and the supporting IT are once again on parallel tracks.
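A minimal sketch of that fractal point: the same z-score anomaly check runs unchanged on daily sales and on server latency. Both data series below are invented for illustration:

```python
# One anomaly detector, two scales: business metrics and IT telemetry.
# Both data series are invented for illustration.
from statistics import mean, stdev

def anomalies(series, threshold=2.0):
    """Return the indexes whose z-score exceeds the threshold."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

daily_sales = [102, 98, 105, 97, 310, 101, 99]  # business data
response_ms = [12, 14, 11, 13, 12, 95, 13]      # datacenter telemetry

print("sales anomalies:  ", anomalies(daily_sales))   # -> [4]
print("latency anomalies:", anomalies(response_ms))   # -> [5]
```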

Finally, the very notion of what constitutes data has changed. Classically, information was thought of in terms of its purpose, whether that was a transaction value, a document, or an image. Current data management has moved on, viewing objects as made up of blocks; information is simply a unique combination of those blocks rather than something with a unique, predefined, external form. This is similar to 3D printing or DNA, where an infinite number of objects can be defined from a limited number of simpler elements.
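A toy sketch of that blocks idea, assuming fixed-size blocks and invented sample data: each unique block is stored once under its hash, and an object is just a recipe of block hashes:

```python
# Objects as combinations of blocks: a toy content-addressed store.
# Fixed-size blocks and the sample data are illustrative choices.
import hashlib

BLOCK_SIZE = 4
store = {}  # hash -> block bytes; each unique block is stored once

def put(data: bytes) -> list[str]:
    """Split data into blocks; return the object's recipe of hashes."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return recipe

def get(recipe: list[str]) -> bytes:
    """Reassemble an object from its block hashes."""
    return b"".join(store[h] for h in recipe)

doc_a = put(b"ABCDABCDWXYZ")   # the repeated "ABCD" block is stored once
doc_b = put(b"ABCD1234")       # reuses doc_a's "ABCD" block
print("unique blocks stored:", len(store))  # -> 3
print(get(doc_a), get(doc_b))  # both originals reassemble exactly
```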

Impact of Analytics on Decision-Making

Analytics provide fresh perspectives on the business and the ecosystem it fits into. Those perspectives are loosely described as historical (evaluating patterns), prescriptive (tactical response) and predictive (identifying future behavior). Businesses need all three to grow and thrive by making good decisions. Decision-making cycle time has shrunk radically thanks to analytics. The output of current analytics engines supports three types of decision-making.

· Automated corrections and recommendations are the first level. Think of suggestions from your favorite website, or changes to machinery settings on the manufacturing floor, or rerouting transport on a supply chain. These are based on real-time analysis coupled to well-defined policies or algorithms (see the sketch after this list).

· Proactive line-of-business decisions are the next level. As business staff evaluate and understand trends and relationships, they can change business plans, affecting people and processes.

· Adjusting services is the third level. This is where the loop closes between business and technology, as insights from AI and machine learning, along with predictive analytics, make their way back to the user experience. For instance, insights from the analytics feed directly into changes in customer-facing applications. Between the fast capture of usage-related data and the rapid cycle time to push changes into applications and improve customer satisfaction, this gets you pretty close to autonomous and self-healing.
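As a hedged sketch of the first level above, a real-time reading coupled to a well-defined policy can be as simple as a rule table; the metrics, thresholds, and actions here are all hypothetical:

```python
# Level-one automation: real-time readings + explicit policies -> actions.
# Metric names, thresholds, and actions are hypothetical illustrations.
POLICIES = [
    # (metric, threshold, action)
    ("queue_depth", 500, "reroute shipments to the secondary hub"),
    ("spindle_temp_c", 80, "reduce machine speed by 10%"),
]

def apply_policies(readings: dict) -> list[str]:
    """Return the actions triggered by the current readings."""
    return [action for metric, limit, action in POLICIES
            if readings.get(metric, 0) > limit]

print(apply_policies({"queue_depth": 650, "spindle_temp_c": 75}))
# -> ['reroute shipments to the secondary hub']
```

The higher levels trade that static rule table for human judgment and, ultimately, for insights fed back into the applications themselves.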

Synthesis and Disruption

Business and IT innovation are more tightly intertwined than ever. Advances in one drive improvements in the other in a continual cycle that accelerates the pace of change. But iterative improvements are not the same as breakthroughs; they just enable breakthroughs to arrive more quickly. Improvements in Amazon Prime generated by machine learning did not lead directly to the purchase of Whole Foods. In the middle, there is synthesis of ideas, the leap to a fresh concept. It is the fresh look that drives disruption. So let’s focus on innovation and incremental improvement, but not forget to step back occasionally. Make room for the unanticipated, unvetted idea that turns the past sideways.