A summary of why IOTA’s refutation of a vulnerability by DCI labs is absurd

Noah Ruderman
14 min read · Mar 4, 2018

--

tl;dr: DCI labs showed that IOTA's transaction signature scheme fails EU-CMA security and that the attack would allow theft of funds. This is literally the textbook definition of a vulnerability. If you dispute this, you are disputing the textbooks.

Introduction

The vulnerability report authored by MIT's Digital Currency Initiative (DCI) lab was the start of a very strange saga. IOTA developers denied the very definition of a vulnerability as it is understood in cryptography and computer security, accused DCI labs of academic fraud, harassed security researcher Ethan Heilman, and threatened legal action over an unfavorable vulnerability report.

Although there is no question about the correctness of DCI's claims among every cryptographer, security researcher, and major cryptocurrency developer who has weighed in, there is unfortunately a sizable amount of misinformation that makes it difficult for the average person to understand the nature of the dispute. IOTA vigorously disputed that curl-p was vulnerable, and affiliates still post articles to that effect. This post is meant to explain (a) the nature of the vulnerability to someone with little to no knowledge of cryptography; (b) why IOTA's rebuttals have not been convincing to cryptographers and security researchers; and (c) how the security of hash functions is reasoned about.

Recap of the DCI lab claims

DCI labs is saying that the digital signature scheme used to secure transactions fails the EU-CMA notion of security. The signature scheme fails EU-CMA security because the hash function it uses, curl-p, does not have a property called collision resistance.

What is curl-p?

Curl-p is a hash function. Hash functions transform arbitrary amounts of data into fixed-length outputs. You can think of these outputs as digital fingerprints. Hash functions are intended to satisfy certain properties, the major ones being:

  • Determinism. The hash of a given input is always the same.
  • Uniformity. The expected inputs should map as uniformly to the output range as possible.
  • Non-invertibility. Given a hash value, it should be very difficult to find an input that produces it. The definition of "very difficult" is loosely specified, since it depends on external factors, like technology, that change over time, but the threshold is not disputed among experts: breaking the function should not be possible on commodity hardware, and for a secure hash function it should be too difficult for anyone. If a government could do it, the hash function would not be considered secure.

The intent is that hash functions exhibit highly random behavior, which allows us to treat hash values as unique and tamper-proof fingerprints.
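
To make the first two properties concrete, here is a tiny demonstration using SHA-256 from Python's standard library. This is only an illustration: SHA-256 stands in as a generic hash function, since curl-p has no implementation in the standard library.

    import hashlib

    # Determinism: hashing the same input twice gives the same fixed-length digest.
    print(hashlib.sha256(b"send 100 tokens").hexdigest())
    print(hashlib.sha256(b"send 100 tokens").hexdigest())

    # A tiny change to the input produces a completely different, unrelated digest.
    print(hashlib.sha256(b"send 900 tokens").hexdigest())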

What is the role of curl-p?

Curl-p is a custom hash function written by Sergey Ivancheglo, known as Come-from-Beyond. It was part of the digital signature scheme used to ensure the authentication and integrity of a transaction. Constructing the digital signature on a message involves hashing the data and encrypting the hash value with the private key. This hash-then-sign construction is standard practice in cryptocurrencies. To be clear about what I mean by these words:

  • Authentication: Prove, in a compact way, that the message was created by the owner of the public key. If the message were not hashed, the signature would be longer and take more time to verify.
  • Integrity: Prevent the message from being tampered with after creation while still validating under the digital signature.

Digital signatures are verified by decrypting the signature with the public key and hashing the transaction data. If the decrypted signature and the hash value are the same, the digital signature is considered valid.
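
To make the hash-then-sign flow concrete, here is a minimal sketch using the third-party cryptography package, with SHA-256 and RSA standing in for curl-p and IOTA's actual signature scheme. It is an illustration of the general construction, not IOTA's code.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The signer's key pair; only the public key is shared with verifiers.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"send 100 tokens to address X"

    # Sign: the library hashes the message and signs the digest.
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verify: re-hash the message and check the signature against the digest.
    # Raises InvalidSignature if either the message or the signature was altered.
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")

The key point is that only the digest is signed, so any two messages with the same digest share the same valid signature.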

However, for digital signatures to work as intended, the hash function must exhibit a high degree of random behavior. If curl-p exhibited sufficiently non-random behavior, an attacker could construct a message that was NOT signed by the private key but that carried a valid signature, because its hash was identical to that of a message that was signed. This means that funds could be stolen with counterfeit transactions if the digital signature scheme were broken. Since theft of funds is not an intended feature of the IOTA network, the digital signature of a transaction, and by extension curl-p, has a critical security role in production software.

What security properties must curl-p satisfy to prevent attacks?

It comes down to collisions: cases where two different messages hash to the same value under curl-p. If two messages hash to the same value, their signatures will be the same, and if the attacker's message is a valid transaction, someone may be able to spend your Iota by reusing your signature from a previous transaction. Any practical way to find collisions in curl-p opens up attacks on the digital signature scheme, so curl-p should fulfill the strongest security properties.

Hash functions that fulfill the most stringent security properties have a name: cryptographic hash functions. Their security properties, illustrated with a toy example after this list, include:

  • Collision resistance. An attacker should not be able to find two different messages m1 and m2 such that hash(m1) = hash(m2).
  • Pre-image resistance. Given a hash value h, an attacker should not be able to find a message m such that hash(m) = h.
  • Second pre-image resistance. Given a message m1, an attacker should not be able to find a message m2 such that hash(m1) = hash(m2) and m1 != m2.
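
Here is the promised toy example: a collision search against a deliberately weakened hash (SHA-256 truncated to 24 bits). This is not curl-p, just an illustration of what "finding a collision" means and why it becomes cheap once a hash function is weak.

    import hashlib
    from itertools import count

    def weak_hash(data: bytes) -> bytes:
        # Deliberately weakened: keep only the first 3 bytes (24 bits) of SHA-256.
        return hashlib.sha256(data).digest()[:3]

    def find_collision():
        # Birthday-style search: remember every digest seen until one repeats.
        seen = {}
        for i in count():
            m = str(i).encode()
            h = weak_hash(m)
            if h in seen and seen[h] != m:
                return seen[h], m
            seen[h] = m

    m1, m2 = find_collision()
    print(m1, m2, "both hash to", weak_hash(m1).hex())

Against this 24-bit toy a collision falls out after a few thousand attempts; against a full-length cryptographic hash the same search would take on the order of 2^128 attempts, which is the bar a hash function in curl-p's role is expected to meet.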

Is curl-p a cryptographic hash function, or not?

The IOTA developers have equivocated but more frequently claim that curl-p was not intended to be a cryptographic hash function. They dispute DCI labs' claims of a vulnerability because they believe collision resistance was not necessary for curl-p. In other words, they say they cannot be faulted for curl-p being an insecure hash function because it was never intended to be secure.

However, curl-p played a critical role in securing a production system holding more than a billion dollars at the time it was used. Sufficiently non-random behavior from curl-p would allow theft of funds. Unless theft of funds is an intended feature, curl-p should have been designed to satisfy the most stringent security properties outlined above, which would make it a cryptographic hash function. If the IOTA developers are to be believed, then the decision not to make curl-p a cryptographic hash function was itself a significant design flaw.

So IOTA's developers either made extremely poor design choices, or they thought they had built a secure hash function because they couldn't find a way to break it themselves (or both). Either way they exhibited extremely poor judgment: the first is oblivious, the second overconfident about security.

Was there a vulnerability?

Yes. DCI labs found that curl-p was not collision resistant, and they showed real examples of two messages which would each be considered a valid transaction by the network but which hashed to the same value. They did this by exploiting non-random behavior in curl-p to change a few bits in a message and generate a new one whose hash was the same. Those bits were in the transaction amount. So if you sent someone some Iota, it is possible that the transaction could be modified to send a different amount.

DCI labs also showed how a theoretical attack would exploit the broken collision resistance. You would construct two transactions spending Alice's Iota that hash to the same value, get Alice to sign the first transaction, and then use that signature to broadcast the second.

Since collisions were found in curl-p using commodity hardware; since the broken collision resistance was demonstrated with well-formed transactions from the same address; since using curl-p to hash transaction data means user funds can potentially be stolen; and since none of this would be possible with a secure cryptographic hash function… yes, there was a vulnerability, as the term is commonly understood by cryptographers, security researchers, etc.

But what if I think the attack isn't that damaging?

The word "secure" as it applies to hash functions and the word "vulnerability" as it applies to software have clear definitions which are independent of anyone's personal feelings about how damaging the attacks are. Cryptography textbooks have rigorous definitions for the security of digital signature schemes which are also independent of personal feelings. So it is perfectly fine to say that curl-p was insecure and there was a vulnerability but the attack wasn't that bad. It is incorrect to say that because the attack wasn't that bad, there wasn't a vulnerability.

Why consider the EU-CMA standard and not real-world attacks?

One of the most common points of confusion, coming from people with no training in software engineering or cryptography, is that the EU-CMA attack is an abstract game which doesn't translate well into real-world results. After all, IOTA's coordinator probably did some validation of its own, which could affect the feasibility of the outlined attack but was not part of the EU-CMA simulation. There were certainly external factors that mattered for the attack (address reuse, for example, is not advised). Let's put aside the obvious detail that the coordinator is a temporary measure, and that it is closed-source code, meaning nobody knows what it actually does.

If the standard for reasoning about the security of systems were "can a real-world attack be demonstrated on the production system?", there would be obvious problems with actually carrying out such demonstrations. Not the least of which is that hacking computer systems is illegal. Then there is the subtle nature of security weaknesses: do you really think it's a good idea to wait until SHA-1 is broken on commodity hardware to consider it insecure when used for critical internet infrastructure? There's also the fact that closed-source code is not publicly accessible: do you think we should overlook the security of closed-source systems just out of politeness? The list goes on.

You can think of the standard for reasoning about security as best practices. Best practices are common sense for experts, because the same mistakes tend to be repeated (like making your own custom hash function). The EU-CMA game encompasses the behavior we expect from systems constructed with security best practices. That is, if your digital signature scheme fulfills the EU-CMA notion of security, you know that certain security guarantees hold without having to create some giant flow chart of the pieces of your system. To put another spin on this, if your system's security relies on external validation to maintain the integrity of broken protocols, you've got a sloppy, overly complicated system ripe for security weaknesses.

Sergey saying EU-CMA doesn’t matter for digital signatures because his system does additional validation is like saying:

  • You shouldn't be worried about an exchange keeping all of your coins in a hot wallet because nobody knows where the computer is and there is a password.
  • You shouldn't be worried about using a VPN to hide your IP from a government because the provider's privacy policy says they don't keep logs.
  • You shouldn't be worried about re-using passwords because the passwords are strong.

Are these examples silly? Sure. But they share the same spirit: a security weakness that is technically mitigated by an external factor. That mitigating factor is very tenuous, especially for something security-critical. What if that exchange with the large hot wallet were securing $1B? It totally breaks your security model. The IOTA devs risked $1B in user funds because, in their own words, they knew no better way to test their custom, non-peer-reviewed cryptographic primitive than to deploy it in a production system and see if someone breaks it. That is absurd. (see: letter #4 of the email leaks)

Why are security researchers concerned if the real-world attacks do not seem that practical yet?

Because if the history of secure hash functions teaches us anything, it's that the first vulnerability is only the beginning, and as time goes on more vulnerabilities are found. The DCI team was so close to finding a pre-image attack that they pre-emptively declared as much in their private correspondence with IOTA. It was not included in the vulnerability report, but in the leaked emails, DCI labs said they felt it was possible but couldn't yet quantify it. It's not implausible to think that curl-p also has broken pre-image resistance.

If a pre-image attack were found for curl-p, the real-world attack would be critical and unrecoverable. A pre-image attack means an attacker would not need you to sign their message at all. Picture an attacker setting up a huge number of Iota full nodes. Now picture yourself using a light wallet, connecting to those nodes, and broadcasting a transaction. Your transaction would not be relayed to the network. Instead, the attacker would forge a fraudulent one and broadcast it with your signature.

Consider SHA-1 as a case study for the security of hash functions

A hash function showing signs of non-random behavior is only the very beginning. SHA-1 was formally specified in 1995. In 2005, a full decade after its specification, vulnerabilities started to be published showing attacks more efficient than brute force. Unlike with curl-p, no real-world collisions were known in 2005. SHA-1 was simply known to provide less security than promised, by a margin too large to ignore and possibly within the budget of a government to exploit, and that was sufficient for the cryptographic community to consider it insecure.

Over the next few years, the barrier to breaking SHA-1 steadily fell until it was well within the reach of governments. Again, despite the fact that no collision had been found, many organizations recommended its replacement by SHA-2 or SHA-3. In 2017 a collision was finally demonstrated. The researchers showed the attack could be used to do things like construct a low-rent contract and swap its digital signature onto a high-rent contract that had the same hash.

To summarize SHA-1’s history:

  • The first vulnerability was found a full decade after its specification.
  • It was considered insecure by cryptographers once attacks were shown to be more efficient than brute force, even though those attacks were impractical at the time. No cryptographer felt SHA-1 was secure just because the attacks were still too expensive to carry out.
  • Even though no one in the research community could afford the computing power to find a collision in SHA-1 before 2017, doing so was well within the reach of governments.
  • Since 2005, the attacks on SHA-1 became increasingly efficient every year. Put another way, once SHA-1 was shown to exhibit non-random behavior, attempts to exploit that became better and better.

Compare this to curl-p, where:

  • The first vulnerability was found within one month of IOTA's exchange listing, the point at which the project came into the public eye with a >$1B market cap. SHA-1 took 10 years of researchers' attention.
  • Despite curl-p having a similar role to SHA-1 in protecting the integrity of signatures, Sergey Ivancheglo did not think collision resistance was important. The entire cryptographic community felt collision resistance for SHA-1 was important. That collision resistance was close to being broken was the basis of calling SHA-1 insecure.
  • According to the leaked emails, IOTA devs used curl-p for a critical security application but did not think it necessary to submit it for peer review by professional cryptographers. The only way they felt they could ensure the security of their home-made cryptography, in their own words, was to use it in a production system and wait to see if it was attacked.
  • The attacks on curl-p took 20 hours of work, according to DCI labs.

But are you sure there was a vulnerability?

Yes, by the textbook definition.

The assessment that curl-p was insecure rests on the fact that it fulfilled a critical security role in a production system (meaning it should have been a cryptographic hash function) and that it had broken collision resistance (meaning it was not a secure cryptographic hash function).

The assessment that there was a vulnerability in the transaction signature scheme rests on the fact that an insecure hash function means transactions can be forged using the signatures of previous transactions. That makes theft of funds, an unintended and undesirable feature, possible, and none of it would be possible with a secure hash function.

But you said this was a textbook definition…

More rigorously, DCI labs showed that IOTA's transaction signature scheme using curl-p fails Existential Unforgeability under a Chosen Message Attack (EU-CMA). In this game, the attacker is allowed to have any messages of their choosing signed, as often as they need, and wins if they can produce a valid signature on a message they never had signed. If two different messages produce the same signature, getting the first one signed immediately yields a valid signature on the second, so the attacker wins.

For the attack, the messages are unsigned IOTA transactions. Because the signature of a transaction is actually the signature of the curl-p hash of the transaction data, breaking the collision resistance of curl-p is enough to win the game. DCI lab researchers broke the collision resistance of curl-p by producing colliding messages. That those colliding messages were well-formed transactions was a bonus.
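
To tie the pieces together, here is a hedged end-to-end sketch of that forgery on a toy hash-then-sign scheme. Everything is illustrative: weak_hash stands in for curl-p, an HMAC under a secret key stands in for the victim's signature, and the collision search is the same toy birthday search from earlier. None of this is IOTA's actual signing code.

    import hashlib
    import hmac
    import os
    from itertools import count

    SECRET_KEY = os.urandom(32)  # the victim's signing key; the attacker never sees it

    def weak_hash(msg: bytes) -> bytes:
        return hashlib.sha256(msg).digest()[:3]  # deliberately collision-prone stand-in for curl-p

    def sign(msg: bytes) -> bytes:
        # Hash-then-sign: only the digest is "signed" (an HMAC stands in for a real signature).
        return hmac.new(SECRET_KEY, weak_hash(msg), hashlib.sha256).digest()

    def verify(msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(sig, sign(msg))

    def find_collision():
        # Birthday-style search for two distinct messages with the same weak hash.
        seen = {}
        for i in count():
            m = str(i).encode()
            h = weak_hash(m)
            if h in seen and seen[h] != m:
                return seen[h], m
            seen[h] = m

    m1, m2 = find_collision()  # the attacker crafts two colliding "transactions"
    sig = sign(m1)             # chosen-message query: the victim signs the harmless one
    assert m1 != m2 and verify(m2, sig)
    print("forged a valid signature on a message the victim never signed")

The attacker never touches the secret key; one chosen-message query plus a collision yields a valid signature on a message the victim never signed, which is exactly an EU-CMA forgery.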

But IOTA refutes that EU-CMA security is broken, too…

We’re all used to it by now, trust me.

In the leaked emails, Sergey disputes this on the basis that the definition of EU-CMA security is too abstract. (That's what his reference to a "spherical signature scheme in a vacuum" points to.) His arguments are confusing, in part because he repeatedly cites informal sources like Wikipedia and security.stackexchange.com to defend them, taking the information as authoritative. Sergey repeatedly claims the security definitions do not take cryptocurrency protocols into account. Other times, he disputes the definition of EU-CMA security by saying a pre-image attack is required. On Twitter he frequently challenges anyone who says IOTA is vulnerable to find a pre-image attack, which Heilman repeatedly pointed out is not necessary for the EU-CMA attack they outlined.

Sergey lays out his understanding more carefully in a post, and it is pretty clearly underdeveloped. He struggles to view EU-CMA security outside of a very literal frame. For example, the EU-CMA game allows the attacker to get signatures from the target on any messages they generate. So DCI labs simulated the victim by signing those messages with private keys they controlled. Sergey feels this is a violation because they are using a figurative victim whose keys they created, not a literal one whose keys they couldn't access.

Sergey also misunderstands the word "negligible". He repeatedly claims that DCI labs has no credibility unless they produce the code for finding collisions in curl-p, as if this changed the fact that collisions were found on commodity hardware, meaning the probability of a collision was anything but negligible to cryptographers. He even disputes that curl-p collisions were found on commodity hardware because he could not reproduce this himself on his own commodity hardware. These demands are incredibly strange and non-standard.

Sergey is not asking a few questions; he is questioning *everything he possibly can* in cryptography. If he does not understand these concepts, he should take a cryptography course and ask his questions there, not harass the security researchers writing a vulnerability report. Failing that, he should hire cryptographers to answer his questions, because he demands so much detail that satisfying him is an unreasonable time commitment as a voluntary gesture.

Conclusion

DCI labs' assessment that IOTA had a vulnerability was consistent with the cryptography textbooks. Every single cryptographer, security researcher, and developer of a major cryptocurrency who has publicly commented has agreed with DCI labs.

IOTA is disputing this in part because

  • they lack a functional understanding of cryptography, to the extent that they take informal answers on security.stackexchange.com as rigorous definitions;
  • they lack any natural intuition about how to think about security, to the extent that they are rolling their own crypto and testing it on production systems without peer review;
  • and they lack the social skills to accurately interpret the more qualitative aspects of these textbook definitions such as what the word “negligible” means.
