Can innovation be managed?

Yuriy Yuzifovich
Feb 8, 2018 · 8 min read

Edison made 1,000 unsuccessful attempts at inventing the light bulb

Anecdotal evidence and public perception suggest that innovation is a spontaneous, unpredictable process, a sort of Holy Grail of corporate wishful thinking. Some would argue that innovation is simply a skill that some have and others lack. Yet, in an increasingly competitive technological landscape, the most innovative companies will dominate.

I believe innovation is not only something that can happen repeatedly, but a process that can be managed. To spur innovation, leaders must provide a lab environment that embraces diversity, allows failure and runs lean.

Innovation grows from diversity

The magnetic compass was invented in the 11th century by Shen Kuo, who drew on his knowledge of astronomy, mathematics, physics, geography, cartography, magnetics, optics and many other sciences. Not surprisingly, it was he who combined this diverse knowledge to produce a compass, discovering along the way that magnetic and geographical south are different.

It would be great to hire people who are experts in multiple areas of science, but such people are increasingly rare these days. Mastering a knowledge area requires either a major time commitment (Malcolm Gladwell's famous 10,000 hours) or a constantly updated stream of information "embedded" in the flow of new discoveries (arXiv, a major open-access medium for scientific publications, now sees several hundred new papers per day in computer science alone). Consider the sheer volume of information being created every day: according to MicroFocus, there were 5.2 billion new Google searches every day, and 90 percent of internet content was generated in 2016.

So, are we doomed when it comes to innovation? Not necessarily.

I strongly believe that innovation happens when sufficiently different tools, methods or approaches from sufficiently different areas are applied to a particular problem. Why sufficiently different? People working in one area for long periods of time can develop a "narrow" view. As reported in Scientific American, ethnic and gender similarity rarely breeds innovation. Harvard economics professor Richard Freeman and Harvard economics Ph.D. Wei Huang examined 1.5 million scientific papers written between 1985 and 2008 and found:

“Papers written by diverse groups receive more citations and have higher impact factors than papers written by people from within the same ethnic group.”

They also concluded that the stronger papers had more authors, more geographic diversity and a larger number of references.

Because we analyze massive amounts of data, we have to rely on machine learning and algorithms. At Nominum (now part of Akamai), a good example of innovation is an algorithm called Domain2vec, inspired by Word2vec. Word2vec detects when different words are used in similar contexts; applied to DNS data, Domain2vec picks up subtle relations between domains used by the same botnet. DNS, or the Domain Name System, is the lookup service that associates a web address with an IP address. The vast majority of cybercriminals rely on DNS, so examining this data provides incredible insight into cybercriminal patterns and techniques.
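
To make the idea concrete, here is a minimal sketch of the general approach using the off-the-shelf gensim Word2Vec implementation on sequences of queried domains. It illustrates the concept only, not the patented Domain2vec algorithm, and the query data shown is hypothetical.

```python
# Minimal sketch: treat the sequence of domains queried by each client
# as a "sentence" and let an off-the-shelf Word2Vec model learn which
# domains tend to appear in the same context (e.g., the same botnet).
# This illustrates the general idea only, not the patented Domain2vec.
from gensim.models import Word2Vec

# Hypothetical input: one list of queried domains per client, in time order.
query_log = [
    ["cdn.example.com", "update.evil-c2.net", "tracker.evil-c2.net"],
    ["news.example.org", "update.evil-c2.net", "beacon.other-c2.biz"],
    # ... millions more query sequences in a real deployment
]

model = Word2Vec(
    sentences=query_log,
    vector_size=64,   # dimensionality of the domain embeddings
    window=5,         # how many neighboring queries count as "context"
    min_count=1,      # keep rare domains in this toy example
    sg=1,             # use the skip-gram variant
)

# Domains queried in similar contexts end up with similar vectors, so the
# neighbors of a known-bad domain are candidates for the same botnet.
print(model.wv.most_similar("update.evil-c2.net", topn=5))
```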

Word2vec itself has origins in image processing, later applied to the natural language processing (NLP) area.

Pixels under heavy magnification

Tomáš Mikolov, the researcher who invented the Word2vec algorithm, spent most of his time working on natural language processing and neural networks. Yet in his early days his interests included image processing, as evidenced by his 2007 paper "Color Reduction Using K-Means Clustering," written in Prague before he came to the U.S.
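
For readers unfamiliar with that technique, here is a brief sketch, assuming scikit-learn, of what k-means color reduction does. It is a generic illustration, not Mikolov's original code.

```python
# Sketch of k-means color reduction: cluster every pixel's RGB value into
# k groups and repaint each pixel with its cluster center.
import numpy as np
from sklearn.cluster import KMeans

def reduce_colors(image, k=16):
    """image: H x W x 3 uint8 array; returns the same image using only k colors."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    palette = kmeans.cluster_centers_.astype(np.uint8)   # k representative colors
    return palette[kmeans.labels_].reshape(image.shape)

# Demo with a random "image"; in practice you would load a real photo.
demo = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
reduced = reduce_colors(demo, k=8)
```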

Hongliang Liu, the lead inventor of Domain2vec, was born in China, trained in computer science and physics, and worked alongside colleagues from Vietnam, Iran, Russia, Israel, Sweden and the U.S. in a multinational team at Nominum that I had the honor to lead. A patent application for Domain2vec was filed in 2015, and the algorithm was deployed to production the same year. It is one of many algorithms used to block malicious domains on a daily basis. Could these algorithms have been developed in silos? It's unlikely. We were able to innovate because of the diversity of backgrounds and experiences.

Here are three other things I believe are essential to foster an innovation culture (of course, there are many more, but those are for other blog posts): promoting a "fail fast" culture, embracing resource limitation, and building the right team.

1. Promote a “fail fast” culture

It may sound cliché, but constant success almost always leads to complacency and a gradual decrease in innovative capacity. Constant success can simply mean being in the right place at the right time, and eventually that luck wears off. To keep innovation in an organization's DNA, failure must be okay. Failures provide necessary "resets" that feed energy and focus back to innovative teams, as long as they are recognized and their lessons are learned. A successful innovation stream at the project level can look like this:

Fail-success-fail-fail-fail-success-fail-fail-fail-success

Corporate structures disproportionately value streaks of successes, reducing "volatility." However, a large enterprise can and should create an insulation layer between innovative teams and the products delivered to market. This fosters an environment where wild experimentation can occur and breeds a culture in which brainstorming, observation, rapid prototyping and experimentation are everyday tasks, along with the analysis of valuable failures and the lessons they teach.

Innovation engine

Starbucks, the most recognized coffee brand on the planet, embraces this failure culture. It tried to sell beer and wine and acquired Teavana and La Boulange, moves that were not at all successful. But did it persist? No, it quickly changed course and moved on.

As a side note, the analyze-prototype-deploy-observe model requires solid DevOps processes integrated with data science. In security, cybercriminals are continually innovating. To respond, the team must be able to assemble algorithms and develop new tools that discover and address malware authors' innovations. The fleeting nature of web threats often rules out the classical approach of prototyping offline, for example in Hadoop. Often we need to quickly prototype, build and deploy a real-time algorithm that uses a live stream of data, combined with access to other real-time data sources, and observe the results before deciding whether to enable its output in production.
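
As an illustration of that "observe before enabling" step, here is a rough sketch of a shadow-mode deployment loop. The stream source and scoring function are hypothetical placeholders, not our production code.

```python
# Sketch of the "deploy, then observe before enforcing" pattern: a new
# real-time algorithm runs against the live DNS stream in shadow mode,
# logging what it *would* block so its output can be validated first.
import logging

logging.basicConfig(level=logging.INFO)

ENFORCE = False  # flipped to True only after the observed results check out

def live_dns_stream():
    """Placeholder for a real-time feed of (client, domain) DNS queries."""
    yield from [("10.0.0.5", "update.evil-c2.net"), ("10.0.0.7", "example.com")]

def experimental_score(domain):
    """Placeholder for the freshly prototyped detection algorithm."""
    return 0.97 if "c2" in domain else 0.02

for client, domain in live_dns_stream():
    score = experimental_score(domain)
    if score > 0.9:
        if ENFORCE:
            pass  # here the verdict would be pushed to the blocking policy
        else:
            logging.info("shadow mode: would block %s for %s (score=%.2f)",
                         domain, client, score)
```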

2. Embrace resource limitation as a source of innovation

The brain is wired in such a way that it attempts to solve a problem not by first asking what additional resources could be acquired, but by trying to make do with what is already at hand.

My personal journey in constraint-inspired innovation started with a Soviet programmable calculator, the MK-54, whose program memory could hold only 98 one-byte commands. This created a race among enthusiasts who competed to cram more and more functionality into that space; some went as far as simulating orbital maneuvers.

A more recent example from my own experience is the development of a new core domains engine. In our experiments with third-party tools and databases, we found a solution that could process several million record updates per second, but it would have been expensive in server hardware and maintenance in our data center (because of our customers' privacy concerns, we could not move this processing to the cloud).

After several disappointing tests that pointed to a need for a cluster of dozens of expensive machines, one of our engineers suggested writing a simple, specialized in-memory algorithm in C++ that relied on no external tools, only the existing infrastructure in our data center. I approved the decision, even though some purists cautioned against solving the problem with a newly designed tool and pushed for a standard toolchain. And so a new core domain detection method was born.

It’s became a backbone of our malware data processing pipeline.

Real-time data processing pipeline

Many countries face limited resources, and this scarcity often contributes directly to the innovative capacity of a team. Israel (and the surrounding region) has historically suffered from a shortage of water, which put pressure on its people to innovate. The best minds took on the problem, and the resulting desalination technology now provides about 50 percent of Israel's water. Israel also exports water and desalination equipment, to the tune of $2.2 billion annually.

Ocean of data

Some may say that the cloud revolution has created an appearance of abundant resources. However, costs add up quickly as storage grows. Successful use of the cloud therefore requires different development and deployment practices, exemplified today by containers and serverless computing. The most efficient algorithms must be used: when billions and trillions of data points are being processed, the difference can mean real-time processing versus month-long delays.
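
A tiny, illustrative benchmark shows why algorithm choice dominates cost at scale: checking queries against a blocklist held in a plain list is a linear scan per query, while a hash set makes each lookup roughly constant time.

```python
# Illustrative only: the same membership check done two ways.
import time

blocklist = [f"bad-{i}.example" for i in range(10_000)]
queries = [f"bad-{(i * 7) % 20_000}.example" for i in range(5_000)]

start = time.perf_counter()
hits_list = sum(1 for q in queries if q in blocklist)   # list: linear scan per query
list_time = time.perf_counter() - start

blockset = set(blocklist)
start = time.perf_counter()
hits_set = sum(1 for q in queries if q in blockset)     # set: hash lookup per query
set_time = time.perf_counter() - start

assert hits_list == hits_set
print(f"list scan: {list_time:.3f}s   hash set: {set_time:.5f}s")
```

Scale those numbers up to the volume of DNS traffic a large resolver sees and the gap between the two approaches becomes the difference between real-time results and results that arrive too late to matter.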

3. Building the right team

Hiring can be done through multiple channels, but I have had the most success tapping the personal networks of team members. I personally rarely face a lack of talent, even in the Valley. People want to join self-organizing teams that rapidly execute on a vision and where they can learn from colleagues with diverse knowledge. They want to be part of a collaborative, innovative culture where dissent and failure are okay. Convincing people to join such a team, provided the compensation is right, is not difficult.

Focus on complementary skills, which greatly improve the team's throughput. For example, my team originally had no experienced C++ engineer. Using external engineering teams works in most cases, but when we added one, the team's dynamics changed: engineering thinking became embedded in the data science process.

When Locky ransomware started to wreak havoc in 2016, the team was able to invent a Locky detection engine written in CUDA to run on graphics processing units (GPUs), a framework this engineer learned in a day. The whole team participated in the creative and building process: prototyping, finding a computational flaw in the malware's algorithm, engineering optimization, deployment and output validation were performed by different team members within days.

It’s important to note that the source of the motivation and self-inflicted pressure was originating from the desire to stop these attacks worldwide and not from the direct pressure of management or customers.

Conclusion

Fighting cybercrime has never been so difficult. Some (not all) cybercriminals are among the most agile, innovative and intelligent technologists out there. New threats spring up daily. Hundreds of billions of dollars are stolen each year, and more than 3 million data records are compromised every day. The only way to fight such advanced and prolific crime is through innovation.

Getting back to the question: can innovation be managed?

In cybersecurity, there is no other way. We continuously innovate not because we are told to, but because it is the only way to stay relevant and maintain the upper hand over cybercriminals. While we use machines to do the actual fighting for us, humans still have to innovate, at least for now.

Yuriy Yuzifovich

Cybersecurity expert, inventor, dad and AI fan. Head of Security Innovation Labs at Alibaba Cloud. Opinions expressed are my own.