The everlasting threat of reflection attacks and the need for increased network liability

The recent outbreak of memcached reflection DDoS attacks forces the Internet community to take stock of the situation and consider adopting basic preventive measures against this type of attack.

Alberto Radice
Akamai Krakow Blog
6 min read · Oct 1, 2018


*This article was first published on Akamai Krakow Technical Community in March 2018, hence all references date back to that period.

The month of March is no stranger to madness, and this year we have clearly seen the trend play out in the DDoS world, as we have officially entered what some have called the “Terabit Attack Era”.

On the last day of February, Akamai mitigated a 1.35 Tbps attack targeting a customer that operates a developer platform. A few days later, on March 5th, Arbor Networks confirmed a 1.7 Tbps attack against a customer of a U.S.-based service provider.

Both attacks were fueled by the same service: memcached, an open-source memory-caching system used to speed up websites, which has been around for some 15 years but was never meant to run on servers exposed to the Internet.

The attackers leveraged publicly accessible memcached servers with the UDP protocol enabled, sending them requests whose source IP addresses were spoofed to those of the intended victims. The result was a flood of responses that could easily have crushed the targeted services if no mitigation had been applied. This attack vector is the latest to join the happy family of reflection/amplification attacks.

The concept of reflection is so simple that it can be compared to many real-life, offline examples. It’s like sending a registration letter to a free clothing catalogue distributor with somebody else’s address as the sender, or calling a pizza delivery service and ordering a pizza to somebody else’s address. In both situations, if no basic validation or authentication mechanism is in place, the victim receives unsolicited material.

Now, imagine that with a single letter you can sign your victim up for 10 catalogues and with a single call you can order 15 family-sized pizzas, and you get the concept of amplification.

In the Internet world, this type of attack exploits the fact that the same transport protocol is used in both directions. The services chosen by attackers are generally those that rely on UDP, which inherently lacks any verification of the communication partner. Reflection attacks based on NTP, DNS, CharGen, and SNMP, just to name a few, are launched on a daily basis.
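
To make amplification concrete, here is a minimal sketch that multiplies an attacker's spoofed request rate by the bandwidth amplification factor (response bytes divided by request bytes) of a few well-known vectors. The factors are approximate, publicly reported orders of magnitude from around the time of these attacks, not measurements of my own.

```python
# Approximate, publicly reported bandwidth amplification factors (BAF =
# response bytes / request bytes) for common reflection vectors. Treat
# these as rough orders of magnitude, not precise measurements.
AMPLIFICATION_FACTORS = {
    "DNS (open resolver)": 54,
    "NTP (monlist)": 556,
    "CharGen": 358,
    "SNMPv2": 6,
    "memcached": 51_000,  # theoretical upper bound reported in early 2018
}


def reflected_traffic(attacker_bps: float, baf: float) -> float:
    """Traffic volume arriving at the victim for a given spoofed send rate."""
    return attacker_bps * baf


if __name__ == "__main__":
    attacker_bps = 100e6  # 100 Mbit/s of spoofed requests sent to reflectors
    for vector, baf in AMPLIFICATION_FACTORS.items():
        gbps = reflected_traffic(attacker_bps, baf) / 1e9
        print(f"{vector:20s} x{baf:>6} -> ~{gbps:10,.1f} Gbit/s at the victim")
```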

Contrary to what one might think, TCP-based services can also be abused to launch reflection attacks, as explained by this interesting paper, which summarizes research conducted at a German university in 2014. Although the number of reflectors is smaller than for UDP-based services, the researchers were able to find millions of vulnerable hosts and to show that the very mechanism meant to establish the connection and mutually validate the communicating parties, the three-way handshake, often produces amplification: a host that never receives the final ACK from the spoofed victim keeps retransmitting its SYN-ACK, turning a single forged SYN into several reflected packets.

What is to be done?

The fact that reflection/amplification attacks are based on easy concepts and rather simple mechanisms seems to suggest that a solution isn’t too hard to find. So how is it possible that this type of attack has been around for years and is not likely to disappear anytime soon?

Let’s focus on the recently emerged memcached reflection and try to identify the underlying causes:

  • A misconfiguration in the memcached service
  • The fact that IP spoofing is easy to achieve

Let’s delve into the former and ask ourselves why a service with such disruptive potential can be so easily misconfigured. Among the reasons we can find careless system administrators, a lack of understanding of the service and its implications, and a lack of resources to conduct an exhaustive assessment of the network’s security posture. It’s not hard to imagine that small businesses, ideal users of free and open-source software and services, easily fall within this scenario.
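
To give an idea of how simple the relevant check is, here is a minimal sketch, assuming a memcached instance you administer on the default TCP port 11211: it issues the ASCII `stats settings` command and reports whether the UDP port is enabled and which interface the daemon listens on (the `udpport` and `inter` keys used below are the ones memcached exposes in that output; exact keys may vary across versions). Hardening then boils down to restarting the daemon with UDP disabled and bound to localhost, or firewalling port 11211 from the outside world.

```python
import socket


def memcached_settings(host: str = "127.0.0.1", port: int = 11211) -> dict:
    """Fetch 'stats settings' from a memcached instance you administer."""
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(b"stats settings\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    settings = {}
    for line in data.decode().splitlines():
        if line.startswith("STAT "):
            _, key, value = line.split(" ", 2)
            settings[key] = value
    return settings


if __name__ == "__main__":
    s = memcached_settings()
    print("UDP port :", s.get("udpport", "unknown"), "(0 means UDP is disabled)")
    print("Listen on:", s.get("inter", "unknown"), "(anything but localhost deserves a second look)")
    # Hardening is done on the daemon itself, e.g. starting it with
    # UDP disabled and bound to the loopback interface only:
    #   memcached -U 0 -l 127.0.0.1
```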

The emerging question is: should these service providers be held accountable for indirectly posing a threat to the Internet community?

It is not easy to answer this question, but it is undoubtedly high time to approach the activation of any Internet-facing service, whether it is maintained by a business or by a non-profit entity, with proper care.

For other types of services, the community has found ways to encourage the adoption of solid security measures as a requirement for the service to function properly. This is the case with e-mail servers, whose administrators are highly aware of the risk of becoming unwitting agents of spam distribution: that would cause the server’s IP to be added to DNS blacklists (DNSBLs), with the result that other systems stop accepting its messages.
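
As a rough illustration of how that feedback loop works, the sketch below checks an IPv4 address against a DNS blacklist by resolving the reversed octets under the blacklist zone; zen.spamhaus.org is used purely as a well-known example, and note that some blacklist operators refuse queries forwarded through large public resolvers, so results can be misleading in that setup.

```python
import socket


def is_listed(ip: str, dnsbl_zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS blacklist (DNSBL).

    By convention, the octets are reversed and queried as a hostname under
    the blacklist zone: 192.0.2.1 -> 1.2.0.192.zen.spamhaus.org.
    A successful A-record lookup means the address is listed.
    """
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl_zone
    try:
        socket.gethostbyname(query)
        return True            # the zone answered: the IP is listed
    except socket.gaierror:
        return False           # NXDOMAIN (or lookup failure): not listed


if __name__ == "__main__":
    # 127.0.0.2 is the standard "always listed" test entry for DNSBLs.
    for ip in ("127.0.0.2", "192.0.2.1"):
        print(ip, "->", "listed" if is_listed(ip) else "not listed")
```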

A “self-defense mechanism” has already emerged for memcached reflection as well. A tool named Memfixed makes it possible to wipe the cached memory of vulnerable servers, thus “fixing” them. It even supports a more drastic option: shutting the servers down. This tool, in the words of its own creator,

“is unethical in every way looked at”

He also adds a few words to highlight the responsibility of the vendors who do not disable UDP. On February 27th, the authors of memcached issued a bugfix release that disables the UDP protocol by default, explaining why this hadn’t been done before:

“12 years ago, the UDP version of the protocol had more widespread use: TCP overhead could be very high. In the last few years, I’ve not heard of anyone using UDP anymore. Proxies and special clients allow connection reuse, which lowers the overhead. Also, RAM values are so large that TCP buffers just don’t add up as much as they used to.”

Figure 1 — Memfixed, a killswitch for the memcached fueled attacks

Let’s now focus on the latter of the two causes we identified: IP spoofing. How is it possible that the Internet community hasn’t yet found a way to stop this plague?

As a matter of fact, a solution was found long ago. It’s called ingress filtering, a technique aimed at verifying that incoming packets really originate from the network they claim to come from. The declared objective of this technique is

“to prohibit DoS attacks which use forged IP addresses to be propagated from ‘behind’ an Internet Service Provider’s (ISP) aggregation point”

with the additional benefit of having the originator

“easily traced to its true source, since the attacker would have to use a valid, and legitimately reachable, source address.”

This technique was first formalized 20 years ago in RFC 2267, later superseded by RFC 2827 (BCP 38) and updated by BCP 84, which analyzes the effects of ingress filtering on multihomed networks.
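
On a Linux box acting as a router, one host-level analogue of this idea is the kernel's reverse-path filter (the strict reverse path forwarding described in BCP 84). The sketch below only reports, per interface, whether strict filtering is enabled; it is an illustration of the check, not a full BCP 38 deployment, which on ISP edges is normally done with ACLs or unicast RPF on the routing equipment itself.

```python
from pathlib import Path

# Meaning of /proc/sys/net/ipv4/conf/<iface>/rp_filter on Linux:
#   0 = no source validation, 1 = strict reverse path (RFC 3704), 2 = loose
RP_FILTER_MODES = {"0": "disabled", "1": "strict", "2": "loose"}


def rp_filter_report() -> dict:
    """Return the reverse-path filtering mode for every network interface."""
    report = {}
    for conf in Path("/proc/sys/net/ipv4/conf").iterdir():
        value = (conf / "rp_filter").read_text().strip()
        report[conf.name] = RP_FILTER_MODES.get(value, value)
    return report


if __name__ == "__main__":
    for iface, mode in sorted(rp_filter_report().items()):
        note = "" if mode == "strict" else "  <- consider enabling strict mode"
        print(f"{iface:12s} rp_filter={mode}{note}")
```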

Figure 2 — An example of BCP38 implementation in a large DHCP-addressed network (source: www.bcp38.info )

So, how are we doing 20 years after this concept was made public? Surprisingly, not too bad. According to tests run by the Center for Applied Internet Data Analysis (CAIDA), at the time of this writing nearly 90% of the address space, both IPv4 and IPv6, is non-spoofable. Detailed results can be found on the organization’s website, which also provides an open-source tool, called Spoofer, to periodically test a network’s ability to both send and receive packets with forged source IP addresses.

Is this ~90% enough?

Clearly not. The remaining 10% is still being used to launch attacks, every single day. The real problem is that ingress filtering remains merely a best current practice, a non-binding recommendation rather than a mandatory requirement.

ISPs are ultimately businesses and, as such, they often prefer to avoid the costs, effort, and risk of maintaining effective filtering configurations on their networks.

In short, everything is left to the “good will” of some ISPs, supported by projects like the Routing Resilience Manifesto.

In conclusion, what we are still lacking is a real concept of liability.

With the ever-increasing dependency of businesses, services, institutions, information, and, ultimately, our own lives on the efficiency of the Internet, we cannot afford to confine security practices to the role of “good neighbor policies”.

What’s really needed is a radical change of the mindset of Internet services operators, who run some of the most vital highways of our times and should be providing us with basic security measures for a safe drive.

Cybersecurity Manager and enthusiast. [Twitter: @securitypills; LinkedIn: linkedin.com/in/radicalb/]