Software security during war
What software developers should be wary of during wartime, and a few of the most common attacks
Software built by most developers is not designed to withstand the techniques and attacks that become useful during times of war. But insecure software, or even otherwise secure software used insecurely, can be turned into a weapon.
Following are some pieces of advice that aim to minimize how much your software can be used against the people it is supposed to serve.
Expect the DDoS
One of the simplest attacks against a networked application is a Denial-of-Service attack: an attack that generates a flood of requests in order to saturate the capacity of your servers. Your servers, just like any computer, have a limited set of resources, so, if those resources are spent analysing, processing and responding to bogus requests, fewer remain for responding to real ones.
Its bigger brother, the Distributed Denial-of-Service, is a more advanced, large-scale attack that involves many bogus clients distributed across the globe. It can quickly bring smaller applications offline, as they simply cannot deal with the increased amount of traffic and requests.
More advanced and significantly more effective forms exist, especially if the attacker knows a bit about how your application works, or can at least anticipate it. Here is an example:
Let’s assume that your app serves an API URL that looks like this:
https://mysite/api/apartments-for-sale/list?limit=30
What does that link tell you?
If I run it, and see that it does, indeed, return 30 results, the next thing that I’m requesting is:
https://mysite/api/apartments-for-sale/list?limit=60
…and then:
https://mysite/api/apartments-for-sale/list?limit=200
If I get back 60 and then 200 results, I will keep probing the limit parameter until I either hit a hard cap or find what I need. Should no hard cap be enforced, a DDoS with this URL:
https://mysite/api/apartments-for-sale/list?limit=2000000000
…will be absolutely devastating to your app, because it will saturate not only your application server with requests, but also your database server and all the bandwidth in between, by fetching probably every object in the database with each request.
Would it help to protect your application at the business level, so that a knowledgeable attack like the one described above never reaches the next layer? Absolutely. The shorter the execution path until you figure out that a request is bogus and should be dropped, the better. If, say, you keep 30 as the default limit in the example above, and your UI imposes a limit of 200, it is safe to assume that normal users will not mind a hard cap on the limit parameter. Better yet, if no accepted usage scenario requires more than 200 items, then validating the parameter and returning HTTP 400 Bad Request right away, instead of clamping to the hard cap and fetching anyway, will help greatly.
What about a regular DDoS then? Let's say you have your bases covered. There is still a lot to do, albeit proactively: you could split your traffic across multiple DNS/routing paths, so that, should a DDoS attack come from, to pick a completely random country, Russia, all you need to do is block or limit the path serving Russia until the attack goes away.
Of course, real-life wartime scenarios are going to be a lot larger in scale than that. If your application is hit from all sides by government-level attacks, you stand virtually no chance of keeping it running. It may be better to just shut it down, in order to avoid racking up costs.
And always make sure that your core target audience knows what’s going on: if you are under attack, say so — users are more likely to be understanding with an application that tells them that it is under attack, than with one that works like crap. Of course, do so in a way that doesn’t scare them that their data has been compromised. Even if you have to close — do so knowing that you have tried your best to serve customers, but the attack was simply too much for you to handle.
And an even better piece of advice, especially for applications that host documents, laws, manuals and other such files: get CDNs and use permalinks. Once you've placed a file on a CDN, it does not vanish when your application is attacked. Even if the application succumbs to the giant wave of bogus requests, the CDN copies might still live on, and your data will still be served to your users by way of other applications, such as search engines. Your users will certainly appreciate the effort.
Let’s review a few pieces of advice:
- Protect your database with hard caps on data limits
- Make execution paths as short as possible
- Separate your traffic by region, or by any other meaningful way, even if that doesn’t imply more servers
- Get some proper DDoS protection, such as API gateways with rate limiting, and flexible cloud solutions for serving your app
- Put as much of your content as you can on CDNs — that’s why they exist
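One cheap building block behind the gateways mentioned above is per-client rate limiting. Here is a hedged sketch of a classic token-bucket limiter in Python; real DDoS protection belongs at the gateway or CDN layer, and the class and its parameters are purely illustrative:

```python
import time

class TokenBucket:
    """Per-client token bucket: requests spend tokens, tokens refill over
    time, and anything beyond the allowed burst is dropped early and
    cheaply. Illustrative only; production limiters live at the edge."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to the time elapsed, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A server would keep one bucket per client key (IP, API token, etc.) and reject requests when `allow()` returns False, keeping the execution path for floods as short as possible.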
Updates are your friends
One of the most problematic types of attack on a system is one that does not necessarily involve anyone doing something wrong: malware spreading simply because of a flaw in the software.
Back in 2003, when true broadband access was still young and many users were still rosy-cheeked enthusiasts convinced that the Internet was the place to be and nothing could go wrong in the world, a worm named Blaster quickly reminded us all that security should be on our minds. I personally remember how many of my friends' Windows PCs quickly went down after getting infected, and how they all, in the true spirit of computer newbies, took to reinstalling Windows, only to be amazed that the worm reappeared shortly afterwards, as if something outside their computers was spreading it. Hmm…
Only one other person and I were unaffected. The other person was using a Linux machine (which was rare at the time, in my part of the world, for consumers), so that left me: the only user of Windows Server 2003, with NAT and explicit port forwarding enabled. And I had all the updates installed.
As it turns out, Blaster used a vulnerability in Windows RPC that allowed it to infect other machines that were simply being connected to the Internet or an affected network, if they had the RPC service port open and listening. Of course, back in those times, most consumers did not have a good-enough understanding of ports and their use, and the idea of a firewall was new. I, on the other hand, used NAT, and a strategy to have all ports closed unless they needed to be open.
Nowadays, such an exploit could no longer take place, but the principle behind it remains: if there’s a flaw in your software, sooner or later it might end up compromising your security.
But the most important takeaway from the Blaster story is actually how it got created in the first place. Microsoft had issued a patch that would fix the exploit on May 28, and a public patch to deal with what Blaster used on July 16. Only afterwards, on August 11, was the Blaster worm released on the Internet. Notice that the worm appeared almost a month AFTER Microsoft had made the patch available to everyone and had included it in regular updates. The speed with which it spread shows exactly how lax an attitude most people had about patching their systems and keeping them up-to-date.
It was only a year later, in 2004, that the worm itself waned from the public eye, finally defeated after massive patching efforts, massive uptake of new "firewall" software, and general uproar.
And it all goes to prove one thing: keeping your system as up-to-date as possible, while it may not be pleasant (especially on mission-critical systems), is a good way to ensure that your systems at least don't expose that many usable exploits.
Let’s review a few pieces of advice:
- Keep your systems up-to-date, at least with respect to security updates
- Employ systems which have proper update channels, and keep up-to-date with those channels
- If you have mission-critical systems that need to be feature-locked, long-term-support (LTS) versions exist on most major operating systems and most pieces of important software
Keep your hackers close, and your libs… even closer
Libraries are a mainstay of software development. No software today, with the possible exception of simple "Hello world!" applications, can claim to be truly stand-alone (and there's a debate as to what those apps depend on anyway, as a Java or .NET "Hello world!" still uses the underlying virtual machine/runtime). All software builds on top of years of market experience and relies on many libraries, inheriting their features, and also their bugs. And while the results are sometimes hilarious, oftentimes they are disastrous.
In 2016, an NPM module named left-pad was removed from NPM, along with many others, because of a decision by its author. Many major pieces of software then saw their builds fail because either they, or, more importantly, other modules they depended on, had left-pad as a dependency.
The consequences of this occurrence were far-reaching, with many package/module/library repositories deciding that deletion of what has already been published should no longer be allowed. I personally only found out about the whole left-pad debacle after NuGet decided against package removal, and must admit that, before that moment, I had taken everything for granted and not once considered safeguarding my dependencies.
Obviously, the key takeaway here is to make sure that whatever you depend on remains accessible. Imagine, if you will, that you lose the ability to retrieve some dependencies because a malicious party has denied access to them. What would you do? Nothing too tragic if it was just a build dependency, but seriously tragic if your application needed to load a dependency from an external source at runtime, the way many websites load jQuery or Bootstrap.
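For the runtime-CDN case there is a related, widely supported mitigation: Subresource Integrity, where the browser refuses to execute a CDN-served file whose hash does not match the integrity attribute on your script tag, so a compromised or swapped CDN copy fails loudly instead of running. A small Python sketch for computing that value (the function name is my own; sha384 is the commonly used digest):

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Compute a Subresource Integrity value for a file you intend to
    load from a CDN. The result goes into the script tag's integrity
    attribute so browsers reject tampered copies."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")
```

You would then reference the file with something like `<script src="https://cdn.example/lib.js" integrity="sha384-..." crossorigin="anonymous">` (the URL is a placeholder). This does not replace keeping your own copy, but it turns a poisoned CDN from a silent compromise into a visible outage.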
So let’s review a few pieces of advice:
- Always keep a copy of your dependencies. There are many ways to create a federated repository that keeps safe copies of your dependencies, such as JFrog's Artifactory
- Make sure that you have the source code for your open-source dependencies, in case you discover a vector for attack against your software in them, and the original dependency/developer is unavailable
Track friendly
Now, while the above sections deal with advice that is good in any situation, this section deals specifically with danger that is mostly present during wartimes, or terrorist attacks.
Never underestimate the power your software has to track things, and never underestimate the power that attackers can gain if they get access to that tracking.
And if your software is being used in a war zone, or in countries that are known for violating basic human rights on ethnic, religious or political grounds, please turn off your tracking. Whatever you gain out of that tracking can only be considered blood money.
Let’s consider the following scenario: your software keeps track of your users’ locations and of when they are online, and is helpful for coordinating groups of people. During preparation for a terrorist attack, your software gets compromised, and the attackers gain access to the live locations of your users. The result should be obvious.
It doesn’t even matter, in that scenario, whether or not the attackers get access to continuous real-time location data. It matters that they have access to historical location data, from which they can extrapolate. If your users tend to congregate around a specific landmark every weekday for years, it is highly likely that they will continue to do so. An attacker can extrapolate from this data and seriously refine the list of locations to target.
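One practical way to blunt that kind of extrapolation is to coarsen location data before it is ever stored. A hypothetical Python sketch (the two-decimal precision, roughly a kilometre, is an assumption; pick whatever your product can actually justify):

```python
def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Round coordinates before storing them. Two decimal places is on
    the order of a kilometre: plausibly enough for city-level analytics,
    but not enough to pinpoint someone's daily routine. Illustrative
    only; the right precision depends entirely on your use case."""
    return (round(lat, decimals), round(lon, decimals))
```

If only coarsened coordinates ever reach your database, a breach of historical data reveals neighbourhoods, not doorsteps.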
Let’s take a different example. Assume that a political cleansing is being planned. Your software is known for tracking user preferences and for grouping users together based on shared interests. The offending party wants to identify a certain subset of users for that cleansing campaign. By examining the algorithms that your software uses (and, believe me, this is not as out of reach for an attacking team backed by an entire country’s government as you may think), the offending party can create fake accounts with preferences identical to those of the people they target, accounts which your own algorithms will then happily group with real ones. By introducing many fake accounts at many ingress points within your social application, pretty soon the circles of those fake accounts will inevitably start to intersect, and their intersection will point to the most likely targets: people influential enough to matter on a local or regional scale, or of great importance to the political doctrine being targeted, yet not high-profile enough that their sudden disappearance would immediately erupt into a scandal or a revolution. In other words, don’t target the figureheads; target those who actually make the cogs turn.
But let’s say that your software isn’t a social network or something that can be used as a people tracker. Still, think about it for a second: does every piece of information that you track actually matter? Is the phone number of your users, for instance, that important to you? After all, maybe your app can’t track your users, but maybe somebody else can: in this case, the mobile network operator, which can, given a phone number, either willingly or not, track it to an approximate location, and can even intercept communication. Attackers may not need your software for much, but, in targeting specific people, every piece of information counts.
Let’s say that you don’t associate a phone number with a person, but you do associate it with an e-mail address, a screen handle, or some other piece of information. Attackers may not learn a person’s identity from you, but they may find the correlation and attack a different application which also correlates data, and which is able to make the connection to the person they’re targeting. Lo and behold: no single application is responsible for the fact that a tyranny was able to successfully target an individual, but all of them, collectively, were.
Can you get by without tracking user information in war zones or places with a record of human rights violations? Then you should, because in the vastness of human action, you never know who needs that one piece of information that you hold but consider trivial.
Key takeaways:
- Only track as much as you absolutely need to for users coming from war zones; you never know what information might put them in danger
- Correlate as little as you absolutely need to from your users in war zones – any association may end up creating a connection to others, which will, in turn, create another, up until enough connections can be made that a single entity can be identified
- Stop tracking and delete tracking data if a subset of users is suddenly put in danger – you won’t be able to always anticipate when that happens, so take some time to set up a cleansing service for your data, just in case
- Unless critical for your business, or mandated by law, don’t collect that sort of information that can later be used to track or spy on people via other more accessible means, such as phone numbers
Everybody counts
This is advice that, sadly, many people are completely oblivious to, even though I personally struggle to understand how it is not completely obvious.
In 2010, software security researchers in Belarus managed to identify what would later become known as Stuxnet, a worm that targeted a very specific SCADA system made by Siemens. The worm would spread through various means, mostly indiscriminately; however, it would not do much to an infected machine (apart from rootkitting it) unless it detected a very specific piece of Siemens SCADA software mostly used for heavy industrial equipment. If it did, it would infect that software and cause it to upload specific code into the PLCs of the devices it controlled, interfering with their operation in potentially destructive ways.
The first indication that this was a product of cyberwarfare was that it was seriously heavy on exploits. It used no fewer than four zero-day exploits in order to spread and infect, which proved the developer’s determination to get that code running on as many target machines as possible, at any cost.
The second indication was how narrow its attack surface was. While it would indiscriminately infect any computer it could, its core action (infecting the SCADA system) would only be performed if particular software was found, and only if that software was connected to PLCs with specific characteristics, with specific hardware modules from specific vendors installed, operating under certain very specific parameters. Coincidentally, the specific parameters of gas centrifuges used in the process of uranium enrichment.
The third clue, the one that tied all of this together, was that it mostly infected computers in Iran, just as Iran was ramping up its uranium enrichment programme.
But the biggest deal with Stuxnet is actually that it is a worm that managed to jump the air gap, meaning that it could infect computers that are not connected to the Internet. And the story of how that happened is actually the moral of this section. The following paragraph is speculation on how the actual events may have unfolded, but the main attack path is real.
It turns out that Iran, being under embargo, could not implement an industrial uranium-enrichment process on its own, so it resorted to secretly purchasing industrial equipment and hiring specialists from Russia as contractors. Because the Iranian systems were air-gapped and could not be infected directly, the attackers focused on the more accessible Russian contractors, probably deducing that, since their cultural and operational patterns differed, and since they were the ones who needed to set things up, they were more likely to bring their own personal equipment into the air-gapped networks in order to be able to work. The plan was successful: infected Russian machines generated infected USB drives, which then infected the air-gapped Iranian systems when they were plugged in.
Infection via USB is a known and proven method and, while the exact attack path is probably unknown even to the attackers themselves, the above speculation is considered the most likely way in which the Iranian air-gapped systems were infected. But there exists an even stranger story, basically just an urban legend, as to how it may have happened.
As the legend goes, like pretty much all industrial facilities, the ones in Iran have parking lots. In one of those parking lots, a mole working for a foreign intelligence service “accidentally dropped” a few USB sticks early in the morning, here and there, in key locations where they would be certain to be noticed. Unsuspecting employees picked said USB sticks up and, trying to figure out who they belonged to, plugged them into their computers. This being the morning, when employees were coming in, they, of course, used their work machines, thus allowing the worm easy access to the air-gapped network.
Now, of course, the above paragraph is exactly as stated: a legend. Things could not have happened like that on this particular occasion: it can safely be assumed that such facilities are heavily guarded, someone would notice a person repeatedly “accidentally” dropping USB sticks, and a thorough analysis (the kind the Russian contractors could have asked any of their serious software-security firms to perform) would likely have exposed the entire scheme before it had a chance to do any damage, not to mention outed the mole. But this particular legend did inspire an entire cultural meme, and, at Black Hat 2016, Elie Bursztein presented a study demonstrating that this type of attack (called a “USB drop”) is actually very effective against a civilian population.
Why would this happen? And, more importantly, what can we do to not have this happen?
There is a tendency to put the blame squarely on the employees who do apparently dumb things that compromise the security of our systems. But, as Jayson E. Street pointed out in a DEFCON presentation (the link points to the exact segment, but the entire talk is worth watching, even though it is only tangential to the point this section is trying to make): it’s time we phased out “stupid users” from our security vocabulary, and started treating everybody as part of the security ensemble of a system. Everyone, meaning workers, both blue- and white-collar, security, management, administration, external contractors, the cleaning crew… everyone should be not only prepared to handle possible threats, but also empowered to do so.
Imagine how differently this would have played out if, say, the centrifuge contractors had routinely scanned their computers and devices. How unnoticed do you think the network communication required to properly disseminate the worm would go with a serious setup, assuming it could get out at all? How unnoticed do you think attempting to install a new device driver (as the worm does) would be if the users who inadvertently did so actually took some time to look at the screen and see what they were installing? How differently would the legend play out if people recognized that a “misplaced” USB stick might be misplaced for a reason, and simply said, “I’ll let IT security handle this; after all, that’s exactly where anyone would think to go if they lost a USB stick”?
There’s one major point that comes out of this segment:
- Train and empower anyone that you come into contact with to be security-aware, and to not hesitate to report anything they feel is out of place
Wrapping up
War is a messy business, there is no doubt of this in anyone’s mind by now. But if you’ve managed to get this far reading this article, you will have noticed that the points I’m trying to make didn’t really talk that much about war, and they didn’t really talk that much about software, either. Why, then, did I present it to you?
Because what goes unnoticed or barely scratches the peripheral eyesight of the public during times of peace, may end up mattering a lot during times of war.
I began my article with DDoS for a good reason. At the time of writing this article, Russia’s 2022 invasion of Ukraine is ongoing. On February 15th, I had posted a prediction about it on Facebook.
As it turns out, I was correct. The invasion was preceded by serious attacks on the IT infrastructure of Ukraine, with blackouts on some sites and overall disturbance, culminating on February 23rd 2022, when data-wiping malware was deployed and the intensity of cyberattacks managed to knock some Ukrainian government websites down. Russia invaded in the early morning of February 24th. What followed, you all know.
I then repeatedly tried to make the point that anything can be used against you, even if you don’t realize it. In this particular conflict, the Ukrainians learned this fast and managed to adapt their strategies, and so did all the countries supporting them. The Russians, on the other hand, managed both to cause incredible mayhem using long-running, well-played social engineering, and to mount brute and elaborate attacks on the cyber-infrastructure of Europe. Thankfully, this also worked seriously against them, as their own cybersecurity misadventures led to some pretty embarrassing situations.
And last, but not least, I was trying to make the point that one may not know what kind of threats to expect, from whom, or what will come next, so it is always wise to instil a proper education and a serious sense of empowerment in your employees (read “citizens” if you are a government representative), in order to get them, as much as humanly possible, not to fall into the well-laid traps of enemies, whoever they might be.
In the end, I’m not sure how many lives could have been saved if all of the above advice had been followed throughout the short and tumultuous history of humans and computers. But I am absolutely certain that the number would have been worth the effort.