Ethereum Smart Contract Security

Stefan Beyer
Jan 28, 2018 · 8 min read


Smart contracts are self-executing contracts, in which the terms are specified in code.

Whilst the concept has been around for a while, at least since Nick Szabo wrote it up in 1996, it was not until the advent of the Turing-complete Ethereum blockchain that smart contract use became common.

Contracts in Ethereum live at contract addresses and can be invoked by transaction calls. Executing contracts written in code and stored on an immutable public blockchain creates certain risks and security issues. We will discuss these issues and possible mitigation actions in this article.

Code is Law?

A literal interpretation of the smart contract idea leads to the “Code is Law” paradigm, in which smart contracts are binding and are interpreted as if they were legal documents. Any software engineer aware of the impossibility of writing completely error-free code will get sweaty hands at the thought of a computer program being legally binding. There are a number of obvious problems:

  1. Code contains bugs. It’s extremely difficult to write bug-free code and even if all possible precautions are taken, there will always be unexpected execution paths or possible vulnerabilities in reasonably complex software.
  2. Legal contracts are subject to interpretation and arbitration. It is very difficult to create air-tight contracts. In any large contract, typos may slip in and some clauses need to be interpreted and arbitrated. That’s what courts do in case of dispute. If a legal contract specifies the sale price as $100 on 39 out of 40 pages and an additional zero sneaks in on the remaining page, a court would rule in “the spirit of the contract”; a computer just executes the clause as written. The immutability of the blockchain adds to this problem, as contracts cannot easily be amended.
  3. Software engineers are not legal experts and vice versa. A different skill set is required to draft a good contract, one not necessarily compatible with writing a good computer program.

Two Examples of High-profile Smart Contract Exploits

The DAO Hack

A lot has been said already about the DAO hack, which we will not repeat here. A good overview of the attack and the aftermath can be found here.

In summary, in June 2016, an attacker managed to divert a large amount of crowdfunded ether (3.5M ETH, approximately 15% of all ETH at the time) into a child contract of his own, where the funds were locked for 28 days, leading to a race against time to find a solution.

The important point to note in this case is that the contract was attacked by making it behave in an unexpected way. In this particular case, reentrancy vulnerabilities were exploited. We will look at reentrancy later in this article.

The Parity Wallet Freeze

This was in fact the second hack of the multi-signature wallet contract provided by Parity. The multisig wallet contract, used by many startups, had most of its logic implemented in a library contract. Each wallet consisted of a lightweight client contract connecting to this single point of failure.

Parity Multisig Architecture

There was a crucial bug in the library contract: its initialisation function, which is only meant to be called once, was still callable on the deployed library contract itself.

In November 2017, someone did initialise the library contract and, by doing so, made himself its owner. This allowed him to invoke owner-only functions, a privilege he used to call the following function:

```
// kills the contract sending everything to `_to`.
function kill(address _to) onlymanyowners(sha3(msg.data)) external {
    suicide(_to);
}
```

This is the equivalent of a self-destruct button, which renders the contract useless. Calling this function caused all the funds of the client contracts to be frozen, probably forever.

At the time of writing, it is still unclear whether the hack constituted a deliberate attack or was accidental, with the perpetrator claiming accidental actions.

Both attacks show that even relatively simple contracts, written by some of the biggest players in the Ethereum ecosystem, are prone to basic bugs with serious consequences.

Known Vulnerabilities and Pitfalls

Unsafe Private Keys

Using unsafe private keys is really a case of user error rather than a vulnerability. However, we mention it nevertheless, as it happens surprisingly often and certain players have specialised in stealing funds from unsafe addresses.

What usually happens is that development addresses (such as those used by Ganache/TestRPC) are used in production. These are addresses generated from publicly known private keys. Some users have even unknowingly imported these keys into wallet software, by using the Ganache seed words to generate the same private keys.

Attackers monitor these addresses, and any amount transferred to a TestRPC address on the main Ethereum chain tends to disappear immediately (within 2 blocks). This highly lucrative “sweeping” activity has been investigated in this interesting study, which found that one sweeper account had managed to accumulate funds worth $23 million.

Reentrancy

Reentrancy vulnerabilities arise when a function can be called again before its previous execution has completed, leading to unexpected behaviour.

Let’s look at the following function, which can be used to withdraw the total balance of the caller from a contract:

```
mapping (address => uint) private balances;

function payOut() public {
    // external call first: an attacker's fallback function can re-enter payOut()
    require(msg.sender.call.value(balances[msg.sender])());
    // the balance is only cleared after the call, which is too late
    balances[msg.sender] = 0;
}
```

The call.value() invocation causes external contract code to be executed. If the caller is another contract, this means that the contract’s fallback function is executed. This may call payOut() again before the balance is set to 0, thereby obtaining more funds than are available.

The solution to this is to use the alternative functions send() or transfer(). These forward just enough gas for basic housekeeping (a stipend of 2,300 gas), so any attempt at calling payOut() again would run out of gas.

A similar race condition may occur without calling a function repeatedly, if a contract has two functions that access shared state. Therefore, it is always best practice to make state changes before the transfer; i.e. in the above code, the balance should be set to 0 before the funds are transferred.

The DAO attack used a variation of this vulnerability.
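The flawed ordering described above can be simulated in plain Python. This is a toy sketch, not EVM code; the class and method names (VulnerableBank, pay_out, receive) are invented for illustration. The attacker’s receive hook re-enters pay_out() before the balance is zeroed:

```python
# Toy model of reentrancy: the "bank" pays out before clearing the balance,
# so a malicious receiver can call back into pay_out() and get paid twice.

class VulnerableBank:
    def __init__(self):
        self.balances = {}

    def pay_out(self, caller):
        amount = self.balances.get(caller.address, 0)
        if amount > 0:
            caller.receive(self, amount)       # external call happens first...
            self.balances[caller.address] = 0  # ...state is cleared too late

class Attacker:
    address = "0xbadc0de"

    def __init__(self):
        self.stolen = 0
        self.reentered = False

    def receive(self, bank, amount):
        self.stolen += amount
        if not self.reentered:   # re-enter exactly once for the demo
            self.reentered = True
            bank.pay_out(self)

bank = VulnerableBank()
attacker = Attacker()
bank.balances[attacker.address] = 100

bank.pay_out(attacker)
print(attacker.stolen)  # 200: twice the deposited balance
```

Swapping the two lines in pay_out(), i.e. zeroing the balance before the external call, makes the re-entrant call see a balance of 0, and the attack yields nothing.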

Integer Underflow and Overflow

Balances are usually represented by unsigned integers, typically 256-bit numbers in Solidity. When unsigned integers overflow or underflow, their value changes dramatically. Let’s look at the following example of an underflow, the more common case (numbers shortened for readability):

```
  0x0003
- 0x0004
--------
  0xFFFF
```

It’s easy to see the issue here. Subtracting 1 more than available balance causes an underflow. The resulting balance is now a large number.

Also note that, in integer arithmetic, division is troublesome due to rounding errors.

The solution is to always check for under- or overflows in the code. There are safe math libraries to assist with this, such as SafeMath by OpenZeppelin.
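The wrap-around behaviour is easy to reproduce, since EVM arithmetic is simply arithmetic modulo 2^256. The sketch below contrasts unchecked EVM-style subtraction with a SafeMath-style checked version (the function names are ours, not OpenZeppelin’s):

```python
# EVM unsigned integers wrap modulo 2**256; SafeMath-style code checks first.

UINT256_MAX = 2**256 - 1

def evm_sub(a, b):
    # Unchecked subtraction as the EVM performs it: wraps on underflow.
    return (a - b) % 2**256

def safe_sub(a, b):
    # Checked subtraction in the spirit of SafeMath: reject instead of wrap.
    assert b <= a, "subtraction underflow"
    return a - b

balance = 3
print(evm_sub(balance, 4) == UINT256_MAX)  # True: a huge balance from thin air
print(safe_sub(10, 4))                     # 6
```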

Transaction Ordering Dependence

Transactions enter a pool of unconfirmed transactions and may be included in blocks by miners in any order, depending on the miner’s transaction selection criteria, which is probably some algorithm aimed at maximising earnings from transaction fees, but could be anything. Hence, the order in which transactions are included can be completely different from the order in which they were generated. Therefore, contract code cannot make any assumptions about transaction order.

Apart from unexpected results in contract execution, there is a possible attack vector here, as transactions are visible in the mempool and their execution can be predicted. This may be an issue in trading, where delaying a transaction may be used for personal advantage by a rogue miner. In fact, simply being aware of certain transactions before they are executed can be used to advantage by anyone, not just miners.

Timestamp Dependence

Timestamps are generated by the miners. Therefore, no contract should rely on the block timestamp for critical operations, such as using it as a seed for random number generation. ConsenSys give a 15-second rule in their guidelines, which states that it is safe to use block.timestamp if your time-dependent code can deal with a 15-second variation.
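To see why a miner-controlled timestamp makes a poor randomness seed, consider this toy Python lottery. The win condition is invented for illustration; the point is that a miner can search the tolerated timestamp window for a winning value:

```python
# A miner choosing block.timestamp can try every plausible value in the
# roughly 15-second window it controls and submit one that wins the "lottery".

def lottery_wins(timestamp):
    # Contrived "random" win condition seeded by the timestamp.
    return timestamp % 10 == 7

base = 1_700_000_000  # some current Unix time
winning = [t for t in range(base, base + 15) if lottery_wins(t)]
print(winning)  # the miner simply publishes a block with one of these
```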

The Short Address Attack

The Golem team has uncovered an interesting attack, which is described in detail here. The exploit affects ERC20 token transfers and similar contracts, and relies on the fact that transaction call data can be of any length: missing trailing bytes are filled with 0s by the Ethereum Virtual Machine (EVM).

The attack consists of finding an address whose hex representation ends in one or more 0s, and leaving out these trailing 0s in a withdrawal request. When the contract constructs a transfer request, the shortened address is inserted and the rest of the transaction byte code is shifted.

For example, leaving out two trailing 0s causes a 1-byte shift in the bytes following the address in the transaction data. The address is followed by the argument in the transaction data, which usually is an unsigned 256-bit integer with leading zeros. The leading 0s shift into the address field, making the address valid and ensuring the transaction destination is correct.

A 1-byte shift in the argument field also conveniently causes the amount to be withdrawn to be multiplied by 256. As the EVM returns 0s for missing trailing bytes, the transaction will succeed, transferring 256 times the requested amount.

Thus, leaving out two hex 0s of the address can be exploited to withdraw 256000 tokens from an account that holds a balance of 1000 tokens, or similar. Leaving out four trailing 0s multiplies the amount by 2^16.

To avoid this attack, your contracts should validate addresses.
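The byte-shifting mechanics can be reproduced offline. The sketch below mimics a naive ABI decoder for transfer(address,uint256), zero-padding short call data the way the EVM does; the decoder is our simplified model, not real EVM code:

```python
# Simulate the short address attack against transfer(address,uint256).

SELECTOR = bytes.fromhex("a9059cbb")  # the real transfer(address,uint256) selector

def decode_transfer(data):
    # Read two 32-byte words after the selector, padding missing bytes with 0s.
    data = data.ljust(4 + 64, b"\x00")
    to = data[4:36][-20:]                        # address: low 20 bytes of word 1
    amount = int.from_bytes(data[36:68], "big")  # uint256: word 2
    return to, amount

victim = bytes.fromhex("11" * 19 + "00")  # address ending in a zero byte
amount = 1000

# Honest call data: selector + left-padded address + amount.
honest = SELECTOR + victim.rjust(32, b"\x00") + amount.to_bytes(32, "big")

# Malicious call data: the trailing zero byte of the address is left out,
# shifting every following byte one position to the left.
short = SELECTOR + victim[:-1].rjust(31, b"\x00") + amount.to_bytes(32, "big")

to_honest, amt_honest = decode_transfer(honest)
to_short, amt_short = decode_transfer(short)
print(to_short == victim)         # True: the address still decodes correctly
print(amt_short == amount * 256)  # True: the amount is multiplied by 256
```

Checking that the call data has the expected length before decoding it defeats this particular trick, which is why address and input-length validation matters.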

Exceeding the Block Gas Limit

Contract transactions can sometimes be forced to always fail by making them exceed the maximum amount of gas that can be included in a block. The classic example of this is an auction contract, explained here. Forcing the contract to refund many small rejected bids in a single transaction bumps up the gas used and, if this exceeds the block gas limit, the whole transaction will fail.

The solution to this problem is to avoid situations in which many transfer calls can be triggered by a single function invocation, especially if the number of calls can be influenced externally.

The recommended pattern for making payouts is to let clients request (pull) their transfers, instead of pushing them out, as explained in the official Solidity documentation.
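The difference between the two patterns can be sketched with a toy gas model. The gas cost and block limit below are invented for illustration; only the shape of the argument matters:

```python
# Push vs pull payouts under a block gas limit (toy numbers, not real EVM costs).

GAS_PER_TRANSFER = 10_000
BLOCK_GAS_LIMIT = 50_000

def push_refunds(bids):
    # Refund everyone in one transaction: gas grows with the number of bidders.
    if len(bids) * GAS_PER_TRANSFER > BLOCK_GAS_LIMIT:
        raise RuntimeError("out of gas: the whole transaction reverts")
    return dict(bids)  # everyone refunded

def pull_refund(bids, bidder):
    # Each bidder withdraws individually: constant gas per transaction.
    return bids.pop(bidder)

bids = {f"bidder{i}": 100 for i in range(6)}  # 6 refunds need 60,000 gas

try:
    push_refunds(bids)  # always fails, so nobody gets refunded
except RuntimeError as e:
    print(e)

print(pull_refund(bids, "bidder0"))  # 100: pull payments still work
```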

Mitigation Measures and Conclusion

In this article we have looked at possible vulnerabilities and some examples of how these have been exploited in the past, in order to highlight the dangers of the “Code is Law” paradigm.

Recent history has shown that executing Turing-complete smart contracts on public blockchains is dangerous and nowhere near safe enough to substitute for more traditional legal systems, with their precise language and room for interpretation and arbitration.

This does not mean we should abandon smart contracts. They are extremely useful tools and open up interesting applications. However, we should not consider them substitutes for legally binding contracts, but complementary tools for automation.

Furthermore, we should take the following precautions to avoid vulnerabilities:

  • Use open source and community-accepted de facto standards for library contracts, such as OpenZeppelin’s contracts.
  • Use recommended patterns and best-practice guidelines, such as those provided by ConsenSys.
  • Consider contracting an audit of your smart contracts by a reputable provider.


Blockchain Technology Insights
