Week after week, we’ve gotten used to news media reports about ever-more jaw-dropping data breaches. The breach at the credit reporting firm Equifax is just the latest, and so far the highest-profile, reminder that more than 5 million personal records are lost or stolen every day. The average breach costs companies $3.6 million. CEOs have lost their jobs and reputations, and CSOs wake up each morning dreading the news that personal customer data is in the hands of hackers.
It wasn’t always like this. Twenty years ago, cyber-related threats barely cracked the top 10 security threats facing U.S. companies, let alone data-specific threats. And historically, a company’s primary worry about its data related to governance and compliance, not security.
When I recently asked the VP of IT security for a Fortune 1000 company what his approach to data security was, his response was simply “I wish I knew; it’s not my job. It’s critically important for us to be engaged, but I only get informed after the fact.”
Such responses are depressingly common in an industry that is only just starting to grasp the full impact data security has on its business.
This is the first in a series of posts in which I explore the “data friction” that results when security constraints inhibit the ability to satisfy the data needs of the business.
In today’s software economy, data has grown to become one of the most important assets a business can own. Consumers expect personalized experiences that can only come from gathering, analyzing and managing data at scale. That data is then used to drive new insights, decisions, and strategies throughout the business.
This imperative to collect and store more information about customers creates a feedback loop that’s not always virtuous. Data is stored in more places than ever before, and it contains more personal information than ever before. Both sides of the business equation derive a benefit: companies offer a better experience, and sell more products or services; customers find those products and services more valuable and buy more.
And while overall, this creates a greater value for businesses and customers alike, it also creates a more target-rich environment for an attacker. Protecting that data is more complex than it has ever been.
The old standard practice of “securing the edge” by using corporate firewalls and authentication systems is no longer adequate in a world where enterprises are forced to contend with mobile devices in the hands of employees and customers, an ever-growing list of connected IoT devices, plus public, private and hybrid cloud infrastructure. And while we still need to do the basic blocking and tackling of verifying identities, securing the transport layer, and encrypting transactions, they’re just starting points.
Companies have increasingly focused on mitigating risks and boosting their capacity to recover once the edge has been breached. Security information and event management (SIEM) systems — such as ArcSight and Splunk — are getting much more sophisticated, using machine learning and artificial intelligence to better identify threats. But even so, while the damage can be done in minutes or hours, the average time it takes to detect and respond to a data breach, according to a global study of security breaches by The Ponemon Institute, is more than six months.
What complicates the discussion around securing the data is the data itself. When you combine the inexorable growth in the amount of data that companies gather with the newly intricate and sometimes convoluted ways in which it’s used, you end up with a quagmire. Companies struggle to understand, let alone quantify, their risk. And while techniques like data masking help strip personal information from data troves, the technique is useless if you can’t then deliver the data to the people in your business whose job it is to work with it day after day.
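In its simplest form, data masking replaces identifying values with irreversible tokens before data leaves a protected environment, so downstream consumers can still join and analyze records without ever seeing the raw values. Here is a minimal sketch; the field names and salt are illustrative, and a real deployment would drive its masking policy from a data catalog rather than a hard-coded list:

```python
import hashlib


def mask_record(record, pii_fields=("name", "email", "ssn")):
    """Return a copy of `record` with PII fields irreversibly masked.

    The field list and salt below are hypothetical; in practice the
    policy of which columns to mask comes from governance tooling.
    """
    masked = dict(record)
    for field in pii_fields:
        if field in masked and masked[field] is not None:
            # Replace the value with a salted, truncated hash: joins on
            # the column still work (same input -> same token), but the
            # original value cannot be recovered from the token.
            digest = hashlib.sha256(
                ("demo-salt:" + str(masked[field])).encode("utf-8")
            ).hexdigest()
            masked[field] = digest[:12]
    return masked


customer = {"id": 42, "name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
safe = mask_record(customer)
# Non-sensitive fields like "id" and "plan" pass through unchanged,
# while "name" and "email" are now opaque tokens.
```

Because the tokens are deterministic, analysts can still count distinct customers or join masked tables; the trade-off is that deterministic tokens remain vulnerable to frequency analysis, which is why production systems layer on format-preserving encryption or tokenization vaults.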
Even if you are able to identify, secure, and deliver data, it’s extremely difficult to fully understand how it’s being used at scale, and even harder to take action against new threats. As user workflows become fragmented across disparate systems, the work of retaining semantic information and inserting points of control must be redone for each and every system.
These are all forms of data friction, which occurs when data’s inherent constraints keep it from satisfying the demands of the business. Tackling it requires a new approach that brings together data operators — those who manage data and its related systems — with data consumers: developers, data scientists, and anyone else who needs data to do their jobs. DataOps is the emerging movement that seeks to eliminate data friction through people, process, and technology.
I’ll have more to say on this in the coming days, including how data friction inhibits a successful data security strategy, and how DataOps techniques can help open new possibilities for the business. So stay tuned.
Next Article: You Can’t Protect What You Can’t See