Feds ordering, not asking, banks to share cyber incident info
Just about any cybersecurity expert will tell you that cybercriminals are more willing to cooperate — share information, techniques, and strategy — than their targets in the private and public sectors.
That’s not surprising. Private companies are understandably loath to share anything from intellectual property to an admission that they’ve been the victim of a cyberattack, since it could bring them even more misery than trying to recover from the attack itself — bad publicity and legal liability.
But that will be changing, at least in the financial services industry (FSI). And perhaps that will be a very good thing. But it won’t be voluntary. Five months from now banking organizations and their service providers will be required to notify federal regulators within 36 hours of any computer-security incidents of a certain severity.
Yes, required. This is not a request. It’s a rule, issued jointly last month by the Federal Deposit Insurance Corporation (FDIC), the Federal Reserve, and the Office of the Comptroller of the Currency. While it will take effect April 1, full compliance isn’t required until May 1.
The different agencies oversee different components of the financial industry, but the rule is essentially the same for all of them. For example, banking organizations supervised by the FDIC “will be required to notify the FDIC as soon as possible and no later than 36 hours after the banking organization determines that a computer-security incident that rises to the level of a notification incident has occurred.”
And what rises to the level of a notification incident?
The definitions are loaded with standard bureaucratese, but the stripped-down version is any incident that has “materially disrupted or degraded, or is reasonably likely to materially disrupt or degrade” a banking organization’s
- Ability to operate or deliver its products and services to “a material portion” of its customers;
- Business line(s), including associated operations, services, functions, and support, the failure of which would cause a material loss of revenue, profit, or franchise value;
- Operations, to the point that it would threaten U.S. financial stability.
Examples of notification incidents include “a major computer-system failure; a cyber-related interruption, such as a distributed denial of service or ransomware attack; or another type of significant operational interruption.”
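The triage logic those definitions describe can be sketched in a few lines. This is only an illustration of the rule's three criteria; the field and function names below are my own, not terms from the regulation.

```python
from dataclasses import dataclass

# Hypothetical triage record; field names are illustrative, not from the rule.
@dataclass
class Incident:
    disrupts_material_customer_services: bool  # products/services to a "material portion" of customers
    disrupts_material_business_line: bool      # failure would cause material loss of revenue, profit, or franchise value
    threatens_us_financial_stability: bool     # operations disrupted enough to threaten U.S. financial stability

def is_notification_incident(incident: Incident) -> bool:
    """An incident that materially disrupts or degrades (or is reasonably
    likely to) any ONE of the three categories rises to the level of a
    notification incident."""
    return (incident.disrupts_material_customer_services
            or incident.disrupts_material_business_line
            or incident.threatens_us_financial_stability)

# A ransomware attack that takes online banking offline for most customers:
print(is_notification_incident(Incident(True, False, False)))  # True
```

Of course, in practice the hard part is deciding whether "materially" applies at all, which is exactly the ambiguity discussed later in this article.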
The rule adds that bank service providers will be required to report any incidents to client banks that could disrupt those services for more than four hours.
To lessen the reporting “burden,” organizations will simply have to report the incident, not the details or analysis of how or why it happened.
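The two deadlines in the rule can be sketched as simple time arithmetic. This is a rough illustration under my own naming, not language from the rule; note that the 36-hour clock runs from the bank's *determination* that a notification incident occurred, not from detection.

```python
from datetime import datetime, timedelta, timezone

# Thresholds taken from the rule as described above.
BANK_NOTIFICATION_WINDOW = timedelta(hours=36)
PROVIDER_DISRUPTION_THRESHOLD = timedelta(hours=4)

def regulator_deadline(determined_at: datetime) -> datetime:
    """Latest time to notify the federal regulator, measured from when the
    banking organization determines a notification incident has occurred."""
    return determined_at + BANK_NOTIFICATION_WINDOW

def provider_must_notify(expected_outage: timedelta) -> bool:
    """Service providers must report incidents that could disrupt client
    banks' services for more than four hours."""
    return expected_outage > PROVIDER_DISRUPTION_THRESHOLD

determined = datetime(2022, 4, 1, 9, 0, tzinfo=timezone.utc)
print(regulator_deadline(determined))            # 2022-04-02 21:00:00+00:00
print(provider_must_notify(timedelta(hours=5)))  # True
```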
All of which sounds both reasonable and overdue. As the introduction to the rule puts it, “This requirement will help promote early awareness of emerging threats to banking organizations and the broader financial system. This early awareness will help the agencies react to these threats before they become systemic.”
Sammy Migues, principal scientist with the Synopsys Software Integrity Group, agrees with the concept. “It’s hard to argue that this isn’t a good idea,” he said. “If all banking service providers tell their client banking organizations when they have a computer security incident of some magnitude, and banking organizations tell regulators when a service provider’s incident or their own incident amounts to a notification incident, then in theory the financial system regulatory bodies can share this information in some kind of fusion center and detect broad or systemic attack patterns much faster than any individual banking organization can.”
Trust and clarity
So why didn’t this happen 10 or more years ago?
Well, it’s been a matter of trust and clarity — a perceived lack of both. Private sector organizations and privacy advocates have complained for years that the government version of information sharing is a “one-way street” in which they are expected to share while government doesn’t. They have also complained that sharing private information with government could expose them to sanctions or other legal liability.
Don Davidson, director of cyber-SCRM [supply chain risk management] programs at Synopsys, said private sector organizations and trade associations have resisted government-mandated cyber incident reporting in part because of a lack of clarity and agreed “commercial standards” in cybersecurity.
“There is little agreed on when the clock starts for breach notification,” he said. “Is it when an anomaly is detected, when a breach is suspected, when a breach is confirmed, and with what degree of certainty?”
Indeed, a company may not know for days or even weeks whether an incident rises to the level of a notification incident. If its leaders think they are required to notify a federal agency before they are certain of the severity of the incident, “that may bring unwarranted exposure to a company,” Davidson said.
It appears, from the language of the rule, that such a scenario shouldn’t be a problem, since it says the 36-hour clock doesn’t start ticking until “after the banking organization determines that a notification incident has occurred.”
Still, Ariel Parnes, chief operating officer of cloud incident response firm Mitiga, said compliance could be difficult simply because of the complexity of cyber incidents. “Unless these organizations are already collecting the forensic data needed to investigate an incident and have teams on hand to analyze that data, it will be nearly impossible to determine whether an incident is significant and requires notification,” he said, adding that while many organizations rely on incident-response vendors, “it often takes many days for these organizations to gain access to the systems needed to collect forensic data and begin investigating the incident.”
Nor is there any assurance in the rule about a “safe harbor” from government sanctions if a breach or incident happened due to a lack of compliance with government security rules or regulations.
Still, resistance to government notification mandates appears to be subsiding, at least in some quarters. The headline for a recent story in American Banker declared, “Report data breaches within 36 hours? Banks are OK with that.”
Perhaps that is in part because FSI organizations have been doing more of that among themselves. “Banking organizations and service providers have their own ISACs [information sharing and analysis centers] and private groups to discuss such things,” Migues said.
“I have no doubt that the security groups at many of these service providers and banking organizations are in regular contact with each other and probably come together when there is a common need.”
And Denyette DePierro, vice president of cybersecurity and digital risk for the American Bankers Association, told American Banker, “I describe it as a standardization of what has been a well-worn practice within the banking industry to give early voluntary notice around a lot of different types of incidents or events.”
The forecasts of how the rule will work are all preliminary, of course, since it won’t take effect for months. And when it comes to government, things are rarely simple. It will likely take time to sort out the meaning of terms and phrases in the rule such as “materially disrupted or degraded,” “reasonably likely,” and “material loss.” Who defines or decides what those mean?
Brian Sullivan, a spokesman with the FDIC Office of Communications, said at least at the start, the interpretation of those phrases will be up to the banks. “The agencies expect financial institutions to make these determinations and to report those incidents accordingly,” he said.
Precedent suggests that sorting this all out will be a slow process. “Much like GDPR [the General Data Protection Regulation in the EU] et al, this is initially an exercise for lawyers to parse every word of the final rule and determine how to minimize the impact on their organizations while meeting the spirit of the rule,” Migues said.
Secure your software
But there could be some positive side effects to the rule, such as a heightened incentive for organizations to make reportable security failures less likely. After all, you don’t have to worry about complying with the rule if there is nothing to report.
Of course, that would require organizations to improve the security of their software, which powers the major operations of just about every business — both internal and external. But that ought to be happening even without prodding from the feds. Software is so deeply embedded into any organization that software risk is a business risk.
And the good news is that there is a wealth of manual and automated testing tools available to help organizations do that. They start with architecture risk analysis and threat modeling to identify the ways malicious hackers might attack, and then include static, dynamic, and interactive application security testing of the code, along with software composition analysis, which helps developers find and fix known vulnerabilities and potential licensing conflicts in open source software components.
Among the mantras at security conferences is that while nothing can make software bulletproof, it is possible to make it a much more difficult target. And most cybercriminals are looking for easy targets.
Meanwhile, perhaps another encouraging element of the rule is that it doesn’t specify any sanctions for failure to comply with it, or any process to determine if an organization is guilty of failing to report a reportable incident. Sullivan said those are yet to be determined.
“The FDIC, and I’m sure other regulators, would evaluate compliance as part of normal safety and soundness examinations,” he said. “Should we discover occasions when an institution failed to make a timely notification of a covered computer security incident, we would first consult with the institution about the appropriate remedy or corrective action. But these scenarios are entirely fact-based and case-specific, so it’s impossible to discuss potential sanctions.”
Whatever the outcome of this rule, it’s not the only federal initiative on the issue. Davidson notes that the Defense Federal Acquisition Regulation Supplement already requires federal defense contractors to report breaches within 72 hours. And there are legislative initiatives in the works as well. A bill now pending in Congress — the Cyber Incident Reporting Act of 2021 — would set a 72-hour deadline for a broad range of companies to report any “major” cyber incident to the Cybersecurity and Infrastructure Security Agency. Another, the Cyber Incident Notification Act of 2021, would set a 24-hour deadline for the operators of critical infrastructure to report a cyber incident like a ransomware attack.
In other words, the banking rule is likely to be just one of many mandated reporting requirements. Emile Monette, director of value chain security at Synopsys, said there is now “a plethora of notification requirements flowing from various federal and state entities, which means it adds to an organization’s compliance burden.”
Migues said the most important question is what problem regulators are trying to solve. “Do they believe that various financial organizations are experiencing security incidents at a significant frequency and impact but keeping it a secret whenever they can? It’s hard to imagine that isn’t happening at every company everywhere. Are they trying to prevent broad damage to the critical financial sector through analysis or just trying to shine a flashlight on how bad the problem really is? Maybe they’re just trying to get FSI firms to be more open for now and thus the lack of any penalties for non-compliance.”
“But every slippery slope starts with a single step,” Migues said.