Guiding users through the dangerous waters of smart contract security

Adam Kolář
Published in Solidified
Sep 3, 2018 · 7 min read


After a year of working as an Ethereum smart contract auditor, I've developed an intuitive sense that guides my decisions about which findings to include in reports, how to categorize and present them, and other matters that relate less to the technical part of the job than to the wider context in which my work takes place. The goal of this article is to make this intuition explicit, turn it into a reasoned methodology, and clear up my own thinking on the topic. This is not a text about the newest tools, specific vulnerability categories, or a case study of a recent attack. If you are interested in questions like what the purpose of auditing is, who the intended audience of security reports is, and how to tell a bug from a vulnerability, read on.

Why do we audit?

Having an incorrect model of a tool or a system we're using can be very costly. To illustrate, consider two chains lying around on a construction site. One chain is made entirely of plastic; the other is made of steel with the exception of one plastic link. When asked which one is more dangerous, most people will probably intuitively answer the mostly steel one. While nobody is likely to hang a heavy load on a plastic chain, they might make that mistake with one that seems to be made of steel. That's why hidden flaws are so dangerous: they trick us into making wrong decisions. And that is exactly the auditor's main task: to point out the hidden weak link in the chain. By making weaknesses known, the auditor turns bugs into issues, or in other words, turns something dangerous and unpredictable into something that can be fixed or worked around.

It is not the auditor's task to fix issues, or even to come up with solutions; they might provide suggestions, but their main responsibility is to correct and complete users' mental models of the system. It is also incorrect to assume that the only purpose of reports is to inform the development process: while some findings might not result in an improvement of the audited system, they might still help users engage with the system in a safer way. A good example of this is the discovery of the approve / transferFrom multiple withdrawal vulnerability in the ERC20 standard (https://github.com/ethereum/EIPs/issues/738). I won't go into the details of the attack, but long before a standard update addressing this issue emerged, and certainly long before it was widely adopted, users found a way to interact with ERC20 tokens that worked around the security issue.
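To make the attack and the user-side workaround concrete, here is a minimal Solidity sketch. The interface is the standard ERC20 one; `SafeApprover` and `safeApprove` are hypothetical names I'm using for the widely adopted "reset the allowance to zero first" pattern, not code taken from the linked issue.

```solidity
pragma solidity ^0.8.0;

interface IERC20 {
    function approve(address spender, uint256 value) external returns (bool);
    function allowance(address owner, address spender) external view returns (uint256);
    function transferFrom(address from, address to, uint256 value) external returns (bool);
}

// The race, step by step:
// 1. Alice calls token.approve(bob, 100).
// 2. Alice later decides 50 is enough and broadcasts token.approve(bob, 50).
// 3. Bob sees the pending transaction and front-runs it with
//    token.transferFrom(alice, bob, 100), spending the old allowance.
// 4. Alice's approve(bob, 50) is then mined, and Bob spends 50 more:
//    150 tokens in total, although Alice never intended more than 100.
//
// The workaround users adopted: only grant a new allowance after the old
// one has verifiably been reset to zero.
contract SafeApprover {
    function safeApprove(IERC20 token, address spender, uint256 value) external {
        // Refuse to change a non-zero allowance directly; require an
        // explicit approve(spender, 0) first, so any spend of the old
        // allowance is visible before the new one is granted.
        require(token.allowance(address(this), spender) == 0,
                "reset allowance to 0 first");
        require(token.approve(spender, value), "approve failed");
    }
}
```

Note that this protection lived entirely on the user's side: token contracts did not have to change for holders to interact with them safely, which is exactly the point about reports serving users, not just developers.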

Who do we audit for?

While thinking about the purpose of auditing, we have naturally arrived at the question of the audit's audience. Blockchain applications, in contrast with their centralised counterparts, aspire to be trustless: ideally there is no entity responsible, legally or otherwise, for the correct functioning of the system. Rather, the system represents a neutral set of rules guiding the interactions of independent actors. Everybody is responsible for their own actions, and if the system fails, there is nobody to blame but the user who decided to engage with it with insufficient understanding.

This difference has important implications for determining the audience of smart contract security reports. The operator of a centralised system, for example a platform like Facebook, bears legal responsibility for security failures and for any damage their users suffer. In this situation it is natural that security reports are written mainly for the operator's eyes and benefit; there is no expectation that Facebook users will make assessments pertaining to the inner workings of the platform, they simply trust Facebook. A user of a smart contract system, on the other hand, engages at their own risk and should therefore be interested in expert assessments of the platform, to make sure they do so safely and consciously. That's why I think confidential audits of blockchain platforms go against the spirit of the blockchain movement. If we want to use impartial decentralised systems, we need impartial decentralised knowledge of those systems. Ideally, building this knowledge should be a collaborative community effort, but at the very least, experts should keep in mind that they are doing research on behalf of current or future users, not just platform authors, even if the authors are currently paying for their work.

How to determine what is a security issue

The most sensitive category of issues we come across is security issues, or vulnerabilities. In general, a bug that allows griefing, i.e. intentional damage, is called a vulnerability. This might seem straightforward enough, but it is not always easy to decide what is a vulnerability and what is merely an expression of trust of one user group in another. We'll try to make this decision a bit clearer in the following paragraphs.

If a vulnerability is known to a user and they nonetheless freely decide to engage with the system, it ceases to be a vulnerability and becomes a point of trust. What we mean by that is: if a user knows about the ability of some other user to cause them damage, but decides to take part in the system anyway, this can be interpreted as an expression of trust, and the aspect of the system that puts the user at risk can be called a point of trust.
While hypothetically we could identify multiple points of trust, for a given pair of users there is always one in particular we are interested in: the most severe one, i.e. the one that provides the easiest way for one user to damage another (the one with the highest griefing factor). To illustrate: let's say we have given our landlord a key to our house for emergencies. He might also have a key to our backyard, but that is not very relevant from a security standpoint, at least until he returns the front-door key.
For some aspect of a system to be called a vulnerability, then, it has to satisfy two conditions: first, it has to be unknown to the afflicted party at the moment they voluntarily adopt the system, and second, it has to enable easier or more severe damage than the current point of trust.
This also means that what can be considered a vulnerability changes with context: aspects of the code that were harmless in the past can become a liability when previous points of trust are removed as the system is updated to be more trustless, or when new trustless mechanisms are added. That is why with every substantial change or extension, the whole system should be reassessed.

Context matters

I will illustrate this further with one particular finding from our recent audit of an ICO smart contract.

The crowdsale in question included a KYC whitelisting functionality, but it allowed people who had not yet gone through the KYC process to pre-commit their ETH, with the actual buy being executed only after they had been added to the whitelist. The asset being sold had no trade value and no utility at the time of the sale. Our auditor correctly noted in the report that the ICO owner could withdraw ETH from the contract even for users who had not been added to the whitelist, but incorrectly identified this as a vulnerability. Since the whitelisting was controlled by the same address as withdrawals, a malicious owner would have an equally easy avenue to withdraw a buyer's ETH simply by whitelisting them. Because withdrawal without whitelisting did not represent a risk escalation over the implied accepted risk, the issue was in the end not categorised as a vulnerability.
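To make the reasoning concrete, here is a simplified sketch of the pattern the auditor flagged. All names are hypothetical and the audited contract's details are omitted; only the ownership structure matters.

```solidity
pragma solidity ^0.8.0;

// Hypothetical crowdsale sketch: the same owner address controls both
// the KYC whitelist and withdrawals.
contract Crowdsale {
    address public owner;
    mapping(address => bool) public whitelisted;   // KYC-approved buyers
    mapping(address => uint256) public committed;  // pre-committed ETH

    constructor() { owner = msg.sender; }

    // Anyone may pre-commit ETH before passing KYC.
    function commit() external payable {
        committed[msg.sender] += msg.value;
    }

    // The owner controls the whitelist...
    function whitelist(address buyer) external {
        require(msg.sender == owner, "owner only");
        whitelisted[buyer] = true;
    }

    // ...and the owner can withdraw the full balance, whitelisted or not.
    // The flagged behaviour: this drains ETH committed by buyers who were
    // never whitelisted. But it is not an escalation, because an owner
    // after a non-whitelisted buyer's ETH could simply whitelist them
    // first. It is a point of trust, not a vulnerability.
    function withdraw() external {
        require(msg.sender == owner, "owner only");
        payable(owner).transfer(address(this).balance);
    }
}
```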

A final note we should add is that the fact that a point of trust is documented in the developer's specification of the system does not mean we should automatically assume users will know about it. This is especially true for functionality that conflicts with common expectations. For example, in the case of an ICO with a minimum funding goal, users have learned to expect trustless refunds if the goal is not met. Even if such refunds are not part of the code's specification, a conscientious auditor should point out their absence, because they have good reason to believe it will not match users' expectations.
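For clarity, by "trustless refunds" I mean something like the following sketch, where a buyer can reclaim their ETH after a failed sale without any cooperation from the owner. The contract and its goal/deadline mechanics are illustrative assumptions, not a reference implementation.

```solidity
pragma solidity ^0.8.0;

// Hypothetical refundable sale: if the funding goal is not met by the
// deadline, each buyer can pull their ETH back themselves, with no
// action required from the owner.
contract RefundableSale {
    uint256 public immutable goal;
    uint256 public immutable deadline;
    uint256 public raised;
    mapping(address => uint256) public contributed;

    constructor(uint256 _goal, uint256 _duration) {
        goal = _goal;
        deadline = block.timestamp + _duration;
    }

    function buy() external payable {
        require(block.timestamp < deadline, "sale over");
        contributed[msg.sender] += msg.value;
        raised += msg.value;
    }

    // Trustless: callable by any buyer once the sale has failed.
    function refund() external {
        require(block.timestamp >= deadline && raised < goal, "no refund");
        uint256 amount = contributed[msg.sender];
        contributed[msg.sender] = 0; // zero before sending, so a re-entrant call gets nothing
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "refund failed");
    }
}
```

If a crowdsale advertises a minimum goal but its code contains nothing like `refund()` above, that absence is worth a line in the report even if the specification never promised refunds.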

Practical implications

I would like to finish this article by presenting a list of good practices derived from the previous text.

  1. The auditor's primary responsibility is to build a correct and complete understanding of the system; audits are not just lists of bugs to be fixed
  2. Auditors should publish their findings even in the absence of solutions
  3. The primary audience of smart contract audits is users
  4. Smart contract audits should be public
  5. Auditors should always consider the perspective and interests of users
  6. Auditors should publish amended reports that document issue fixes
  7. Auditors should document their communication with the developer
  8. To assess what constitutes a vulnerability, we first need to determine the points of trust; this determination has to be based on a realistic model of users' understanding of the system
