The Encyclopedia of Smart Contract Attacks and Vulnerabilities

A deep dive into smart contract security

Kaden
19 min read · Dec 3, 2019

Applications on Ethereum manage financial value, making security absolutely crucial. As a nascent, experimental technology, smart contracts have certainly had their fair share of attacks.

To help prevent further attacks, I’ve constructed a list of nearly all known attacks and vulnerabilities. Though this list may cover known attacks, new exploits are still being discovered regularly, and, as such, this should only be the beginning of your research into smart contract security as an engineer.

This list can also be found on GitHub.

Attacks

In this section, we’ll look at known attacks that can be used to exploit smart contract vulnerabilities.

Front-running aka transaction-ordering dependence

Researchers at Concordia University consider front-running to be “a course of action where an entity benefits from prior access to privileged market information about upcoming transactions and trades.” This knowledge of future events in a market can lead to exploitation.

For example, knowing a very large purchase of a specific token is going to occur, a bad actor can purchase that token in advance and sell the token for a profit when the oversized buy order increases the price.

Front-running attacks have long been an issue in financial markets, and due to blockchain’s transparent nature, the problem is coming up again in cryptocurrency markets.

Since the solution to this problem varies on a per-contract basis, it can be hard to protect against. Possible solutions include batching transactions and using a commit-reveal scheme (i.e., users first submit a hash of their transaction details and only reveal the details once the ordering is fixed).

DoS with block gas limit

In the Ethereum blockchain, the blocks all have a gas limit. One of the benefits of a block gas limit is it prevents attackers from creating an infinite transaction loop, but if the gas usage of a transaction exceeds this limit, the transaction will fail. This can lead to a DoS attack in a couple different ways.

Unbounded operations

A situation in which the block gas limit can be an issue is in sending funds to an array of addresses. Even without any malicious intent, this can easily go wrong: if the array of users to pay grows large enough, the transaction will exceed the gas limit and can never succeed.

This situation can also lead to an attack. Say a bad actor decides to create a significant amount of addresses, with each address being paid a small amount of funds from the smart contract. If done effectively, the transaction can be blocked indefinitely, possibly even preventing further transactions from going through.

An effective solution to this problem would be to use a pull-payment system over the current push-payment system. To do this, separate each payment into its own transaction and have the recipient call the function.

If, for some reason, you really need to loop through an array of unspecified length, at least expect it to potentially take multiple blocks, and allow it to be performed in multiple transactions — as seen in this example:

Example from Consensys
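A minimal sketch of a payout loop that can resume across multiple transactions (names, amounts, and the gas threshold are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

contract ResumablePayout {
    address payable[] public payees;
    uint256 public nextPayeeIndex; // where the previous transaction left off

    function payOut() public {
        uint256 i = nextPayeeIndex;
        // Stop before running out of gas so the transaction still succeeds
        while (i < payees.length && gasleft() > 200000) {
            payees[i].transfer(1 wei); // illustrative amount
            i++;
        }
        nextPayeeIndex = i; // the next transaction resumes from here
    }
}
```

The stored index is what makes the loop safe: no single transaction ever needs to cover the whole array.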

Block stuffing

In some situations, your contract can be attacked with a block gas limit even if you don’t loop through an array of unspecified length. By using a sufficiently high gas price, an attacker can fill several blocks before your transaction can be processed.

This attack is done by issuing several transactions at a very high gas price. If the gas price is high enough and the transactions consume enough gas, they can fill entire blocks and prevent other transactions from being processed.

Ethereum transactions require the sender to pay gas to disincentivize spam attacks, but in some situations, there can be enough incentive to go through with such an attack. For example, a block stuffing attack was used on a gambling Dapp, Fomo3D. The app had a countdown timer, and users could win a jackpot by being the last to purchase a key — except every time a user bought a key, the timer would be extended. An attacker bought a key then stuffed the next 13 blocks in a row so they could win the jackpot.

To prevent such attacks from occurring, it’s important to carefully consider whether it’s safe to incorporate time-based actions in your application.

DoS with (unexpected) revert

DoS (denial-of-service) attacks can occur in functions when you try to send funds to a user and the functionality relies on that fund transfer being successful.

This can be problematic in the case that the funds are sent to a smart contract created by a bad actor, since they can simply create a fallback function that reverts all payments.

For example:

Example from Consensys
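A minimal Solidity sketch of an auction with this flaw (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

// Vulnerable: refunding the previous bidder inline means a malicious
// contract whose fallback reverts can block all future bids.
contract Auction {
    address payable public currentLeader;
    uint256 public highestBid;

    function bid() public payable {
        require(msg.value > highestBid, "Bid too low");
        // If currentLeader reverts on receiving Ether, this line always
        // fails and nobody can ever outbid them.
        currentLeader.transfer(highestBid);
        currentLeader = msg.sender;
        highestBid = msg.value;
    }
}
```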

As you can see in this example, if an attacker bids from a smart contract with a fallback function reverting all payments, they can never be refunded, and, thus, no one can ever make a higher bid.

This can also be problematic without an attacker present. For example, you may want to pay an array of users by iterating through the array, and, of course, you’d want to make sure each user is properly paid. The problem here is if one payment fails, the function is reverted and no one is paid.

Example from Consensys

An effective solution to this problem would be to use a pull-payment system over the current push-payment system. To do this, separate each payment into its own transaction, and have the recipient call the function.

Example from Consensys
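A minimal sketch of the pull-payment pattern (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

// Pull over push: record what each user is owed and let them withdraw it
// themselves, so one failing transfer can't block anyone else.
contract PullPayment {
    mapping(address => uint256) public credits;

    function allowForPull(address receiver, uint256 amount) internal {
        credits[receiver] += amount;
    }

    function withdrawCredits() public {
        uint256 amount = credits[msg.sender];
        require(amount > 0, "No credits");
        require(address(this).balance >= amount, "Insufficient balance");
        credits[msg.sender] = 0; // effects before the interaction
        msg.sender.transfer(amount);
    }
}
```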

Forcibly sending Ether to a contract

Occasionally, it’s unwanted for users to be able to send Ether to a smart contract. Unfortunately for these circumstances, it’s possible to bypass a contract fallback function and forcibly send Ether.

Example from Consensys
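A minimal sketch of the two contracts involved (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

// This contract tries to refuse all incoming Ether...
contract Vulnerable {
    function() external payable {
        revert("I don't accept Ether");
    }
}

// ...but selfdestruct bypasses the fallback function entirely.
contract ForceSend {
    function attack(address payable target) public payable {
        selfdestruct(target); // sends this contract's balance to target
    }
}
```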

Though it seems like any transaction to the Vulnerable contract should be reverted, there are actually a couple ways to forcibly send Ether.

The first method is to call the selfdestruct method on a contract with the Vulnerable contract address set as the beneficiary. This works because selfdestruct will not trigger the fallback function.

Another method is to precompute a contract’s address and send Ether to the address before the contract is even deployed. Surprisingly enough, this is possible.

Insufficient gas griefing

Griefing is a type of attack often performed in video games, where a malicious user plays a game in an unintended way to bother other players, aka trolling. This type of attack is also used to prevent transactions from being performed as intended.

This attack can be done on contracts which accept data and use it in a subcall on another contract. This method is often used in multisignature wallets as well as transaction relayers. If the subcall fails, either the whole transaction is reverted or execution is continued.

Let’s consider a simple relayer contract as an example. As shown below, the relayer contract allows someone to make and sign a transaction without having to execute the transaction. Often this is used when a user can’t pay for the gas associated with the transaction.

Example from Consensys

The user who executes the transaction, the forwarder, can effectively censor transactions by using just enough gas that the transaction executes but not enough gas for the subcall to succeed.

There are two ways this could be prevented. The first solution would be to only allow trusted users to relay transactions. The other solution is to require that the forwarder provide enough gas, as seen below.

Example from Consensys
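A minimal sketch of a relayer with the gas check in place (names are illustrative, not the original Consensys code; the target contract is assumed to exist):

```solidity
pragma solidity ^0.5.0;

contract Relayer {
    address public target; // contract that actually executes the calls
    mapping(bytes => bool) public executed;

    event Relayed(bytes data, bool success);

    constructor(address _target) public { target = _target; }

    function relay(bytes memory _data, uint256 _gasLimit) public {
        require(!executed[_data], "Duplicate call");
        executed[_data] = true;
        // Refuse to run unless the forwarder supplied enough gas for the
        // subcall — this is what defeats insufficient gas griefing.
        require(gasleft() >= _gasLimit, "Insufficient gas");
        (bool success, ) = target.call.gas(_gasLimit)(_data);
        emit Relayed(_data, success);
    }
}
```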

Reentrancy

Reentrancy is an attack that can occur when a bug in a contract function can allow a function interaction to proceed multiple times when it should otherwise be prohibited. This can be used to drain funds from a smart contract if used maliciously. In fact, reentrancy was the attack vector used in the DAO hack.

Single-function reentrancy

A single-function reentrancy attack occurs when a vulnerable function is the same function an attacker is trying to recursively call.

Example from Consensys
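A minimal sketch of the vulnerable pattern (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

contract VulnerableVault {
    mapping(address => uint256) public balances;

    function withdraw() public {
        uint256 amount = balances[msg.sender];
        // Interaction before effect: the recipient's fallback can re-enter
        // withdraw() while balances[msg.sender] is still nonzero.
        (bool success, ) = msg.sender.call.value(amount)("");
        require(success, "Transfer failed");
        balances[msg.sender] = 0; // too late
    }
}
```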

Here, we can see the balance is only modified after the funds have been transferred. This can allow a hacker to call the function many times before the balance is set to 0, effectively draining the smart contract.

Cross-function reentrancy

A cross-function reentrancy attack is a more complex version of the same process. Cross-function reentrancy occurs when a vulnerable function shares state with a function an attacker can exploit.

Example from Consensys
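A minimal sketch of two functions sharing the vulnerable state (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

contract CrossFunctionVulnerable {
    mapping(address => uint256) private balances;

    function transfer(address to, uint256 amount) public {
        require(balances[msg.sender] >= amount, "Insufficient funds");
        balances[to] += amount;
        balances[msg.sender] -= amount;
    }

    function withdraw() public {
        uint256 amount = balances[msg.sender];
        // During this call, the attacker's fallback can invoke transfer()
        // to move a balance that is about to be zeroed out below.
        (bool success, ) = msg.sender.call.value(amount)("");
        require(success, "Transfer failed");
        balances[msg.sender] = 0;
    }
}
```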

In this example, a hacker can exploit this contract by having a fallback function call transfer() to transfer spent funds before the balance is set to 0 in the withdraw() function.

Reentrancy prevention

When transferring funds in a smart contract, use send or transfer instead of call. The problem with call is that, unlike the other two, it forwards all remaining gas rather than a fixed 2,300-gas stipend. That leftover gas is enough for the recipient’s fallback function to make external calls of its own — including a call back into your contract, which is exactly what a reentrancy attack needs.

Another solid prevention method is to mark untrusted functions.

Example from Consensys
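A minimal sketch of the naming convention (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

contract Rewards {
    mapping(address => uint256) public owed;

    // The "untrusted" prefix signals that this function makes an external
    // call, so everything after it must assume reentrancy is possible.
    function untrustedSendReward(address payable winner, uint256 amount) internal {
        winner.transfer(amount);
    }

    function claim() public {
        uint256 amount = owed[msg.sender];
        require(amount > 0, "Nothing owed");
        owed[msg.sender] = 0; // state updated before the untrusted call
        untrustedSendReward(msg.sender, amount);
    }
}
```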

In addition, for optimum security use the checks-effects-interactions pattern. This is a simple rule of thumb for ordering smart contract functions.

The function should begin with checks — e.g., require and assert statements.

Next, the effects of the contract should be performed — e.g., state modifications.

Finally, we can perform interactions with other smart contracts — e.g., external function calls.

This structure is effective against reentrancy because the modified state of the contract will prevent bad actors from performing malicious interactions.

Example from Consensys
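A minimal sketch of a withdraw function following checks-effects-interactions (names are illustrative, not the original Consensys code):

```solidity
pragma solidity ^0.5.0;

contract SafeVault {
    mapping(address => uint256) public balances;

    function withdraw() public {
        // Checks
        uint256 amount = balances[msg.sender];
        require(amount > 0, "Nothing to withdraw");
        // Effects: zero the balance before any external call
        balances[msg.sender] = 0;
        // Interactions: a re-entrant call now finds a zero balance
        msg.sender.transfer(amount);
    }
}
```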

Since the balance is set to 0 before any interactions are performed, if the contract is called recursively, there is nothing to send after the first transaction.

Vulnerabilities

In this section, we’ll look at known smart contract vulnerabilities and how they can be avoided. Nearly all vulnerabilities listed here can be found in the Smart Contract Weakness Classification.

Integer overflow and underflow

In Solidity, integer types have maximum values. For example:

uint8 => 255

uint16 => 65535

uint24 => 16777215

uint256 => (2^256) - 1

Overflow and underflow bugs can occur when you exceed the maximum value (overflow) or go below the minimum value (underflow). When you exceed the maximum value, the result wraps around to zero; when you go below the minimum value, it wraps around to the maximum.

Since smaller integer types — like uint8, uint16, etc. — have smaller maximum values, it can be easier to cause an overflow; thus, they should be used with greater caution.

Likely, the best available solution to overflow and underflow bugs is to use the OpenZeppelin SafeMath library when performing mathematical operations.
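A minimal sketch of SafeMath in use (the import path shown is the one used by the OpenZeppelin 2.x npm package; adjust it to your setup):

```solidity
pragma solidity ^0.5.0;

// Path assumes the OpenZeppelin 2.x npm package layout
import "openzeppelin-solidity/contracts/math/SafeMath.sol";

contract Counter {
    using SafeMath for uint256;

    uint256 public count;

    function add(uint256 value) public {
        // Reverts on overflow instead of silently wrapping to zero
        count = count.add(value);
    }
}
```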

Timestamp dependence

The timestamp of a block, accessed by now or block.timestamp, can be manipulated by a miner. There are three considerations you should take into account when using a timestamp to execute a contract function.

Timestamp manipulation

If a timestamp is used in an attempt to generate randomness, a miner can set the timestamp to any value within about 15 seconds of the actual time, giving them the ability to choose a value that increases their odds of benefiting from the function.

For example, a lottery application may use the block timestamp to pick a random bidder in a group. A miner may enter the lottery then modify the timestamp to a value that gives them better odds at winning the lottery.

Timestamps should thus not be used to create randomness.

The 15-second rule

Ethereum’s reference specification, the “Yellow Paper,” doesn’t specify a limit on how much a block’s timestamp can drift — it just has to be greater than the timestamp of its parent. That said, popular protocol implementations reject blocks with timestamps more than 15 seconds in the future, so as long as your time-dependent event can safely vary by 15 seconds, it’s safe to use a block timestamp.

Don’t use block.number as a timestamp

You can estimate the time difference between events using block.number and the average block time. But block times may change and break the functionality, so it's best to avoid this use.

Authorization through tx.origin

tx.origin is a global variable in Solidity that returns the address which originally sent the transaction. Never use tx.origin for authorization: another contract can trick your authorized user into calling it, then call your contract in turn — and because tx.origin still holds the original sender’s address, the authorization check passes. Consider this example:

Example from Solidity docs

Here we can see the TxUserWallet contract authorizes the transferTo() function with tx.origin.

Example from Solidity docs
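A sketch of the two contracts, adapted from the pattern in the Solidity documentation:

```solidity
pragma solidity ^0.5.0;

// Vulnerable wallet: authorizes with tx.origin instead of msg.sender
contract TxUserWallet {
    address owner;

    constructor() public { owner = msg.sender; }

    function transferTo(address payable dest, uint256 amount) public {
        require(tx.origin == owner, "Not authorized");
        dest.transfer(amount);
    }
}

// If the owner sends Ether here, the fallback calls back into the wallet;
// tx.origin is still the owner, so the check passes and funds are stolen.
contract TxAttackWallet {
    address payable attacker;

    constructor() public { attacker = msg.sender; }

    function() external payable {
        TxUserWallet(msg.sender).transferTo(attacker, msg.sender.balance);
    }
}
```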

Now, if someone were to trick you into sending Ether to the TxAttackWallet contract address, they could steal your funds by checking tx.origin to find the address that sent the transaction.

To prevent this kind of attack, use msg.sender for authorization.

Floating pragma

It’s considered best practice to pick one compiler version and stick with it. With a floating pragma, contracts may accidentally be deployed using an outdated or problematic compiler version — which can cause bugs, putting your smart contract’s security in jeopardy. For open-source projects, the pragma also tells developers which version to use should they deploy your contract. The chosen compiler version should be thoroughly tested and considered for known bugs.

The exception in which it’s acceptable to use a floating pragma is in the case of libraries and packages. Otherwise, developers would need to manually update the pragma to compile locally.

Function default visibility

Function visibility can be specified as either public, private, internal, or external. It’s important to consider which visibility is best for your smart contract function.

Many smart contract attacks are caused by a developer forgetting or forgoing a visibility modifier. Before Solidity 0.5.0, such a function defaulted to public, which can lead to unintended state changes (since 0.5.0, the compiler requires an explicit visibility).

Outdated compiler version

Developers often find bugs and vulnerabilities in existing software and make patches. For this reason, it’s important to use the most recent compiler version possible. See bugs from past compiler versions here.

Unchecked call-return value

If the return value of a low-level call is not checked, the execution may resume even if the function call throws an error. This can lead to unexpected behaviour and break the program logic. A failed call can even be caused by an attacker, who may be able to further exploit the application.

In Solidity, you can either use low-level calls such as address.call(), address.callcode(), address.delegatecall(), and address.send(), or you can use contract calls such as ExternalContract.doSomething(). Low-level calls will never throw an exception — instead they will return false if they encounter an exception, whereas contract calls will automatically throw.

In the case that you use low-level calls, be sure to check the return value to handle possible failed calls.
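A minimal sketch contrasting the two approaches (names are illustrative):

```solidity
pragma solidity ^0.5.0;

contract Lottery {
    function payWinner(address payable winner, uint256 prize) public {
        // Bad: send() returns false on failure instead of reverting,
        // so a failed payout would pass silently:
        //     winner.send(prize);

        // Good: check the return value explicitly.
        bool sent = winner.send(prize);
        require(sent, "Payout failed");
    }
}
```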

Unprotected Ether withdrawal

Without adequate access controls, bad actors may be able to withdraw some or all of the Ether from a contract. This can be caused by misnaming a function intended to be a constructor, giving anyone access to reinitialize the contract. To avoid this vulnerability, only allow withdrawals to be triggered by those authorized or as intended, and name your constructor appropriately.

Unprotected selfdestruct instruction

In contracts that have a selfdestruct method, if there are missing or insufficient access controls, malicious actors can self-destruct the contract. It's important to consider whether self-destruct functionality is absolutely necessary. If it’s necessary, consider using a multisig authorization to prevent an attack.

This attack was used in the Parity attack. An anonymous user located and exploited a vulnerability in the “library” smart contract, making themselves the contract owner. The attacker then proceeded to self-destruct the contract. This led to funds being blocked in 587 unique wallets, holding a total of 513,774.16 Ether.

State variable default visibility

It’s common for developers to explicitly declare function visibility but not so common to declare variable visibility. State variables can have one of three visibility identifiers: public, internal, or private. Luckily, the default visibility for variables is internal and not public, but even if you intend on declaring a variable as internal, it's important to be explicit so there are no incorrect assumptions as to who can access the variable.

Uninitialized storage pointer

Data is stored in the EVM as either storage, memory, or calldata. It’s important these locations are well understood and correctly initialized. Incorrectly initializing data-storage pointers, or simply leaving them uninitialized, can lead to contract vulnerabilities.

As of Solidity 0.5.0, uninitialized storage pointers are no longer an issue since contracts with uninitialized storage pointers will no longer compile. This being said, it's still important to understand what storage pointers you should be using in certain situations.

Assert violation

In Solidity 0.4.10, the following functions were created: assert(), require(), and revert(). Here, we'll discuss the assert function and how to use it.

Formally said, the assert() function is meant to assert invariants; informally said, assert() is an overly assertive bodyguard that protects your contract but steals your gas in the process. Properly functioning contracts should never reach a failing assert statement. If you've reached a failing assert statement, you've either improperly used assert() or there is a bug in your contract that puts it in an invalid state.

If the condition checked in the assert() is not actually an invariant, it's suggested that you replace it with a require() statement.
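A minimal sketch showing require() for input validation and assert() for an invariant (names are illustrative):

```solidity
pragma solidity ^0.5.0;

contract Token {
    mapping(address => uint256) public balances;
    uint256 public totalSupply;

    function burn(uint256 amount) public {
        // require: validates a caller-dependent condition
        require(balances[msg.sender] >= amount, "Insufficient balance");
        balances[msg.sender] -= amount;
        totalSupply -= amount;
        // assert: checks an invariant that should never fail in a
        // correctly functioning contract
        assert(balances[msg.sender] <= totalSupply);
    }
}
```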

Use of deprecated functions

As time goes by, functions in Solidity are deprecated and often replaced with better functions. It’s important to not use deprecated functions, as it can lead to unexpected effects and compilation errors.

Here’s a list of deprecated functions and alternatives. Many alternatives are simple aliases and won’t break current behaviour if used as replacements for their deprecated counterparts.

Deprecated functions and alternatives

Delegatecall to untrusted callee

Delegatecall is a special variant of a message call. It’s almost identical to a regular message call except that the code at the target address is executed in the context of the calling contract, and msg.sender and msg.value remain unchanged. Essentially, delegatecall lets other contracts modify the calling contract’s storage.

Since delegatecall gives so much control over a contract, it's very important to only use this with trusted contracts, such as your own. If the target address comes from user input, be sure to verify that it’s a trusted contract.

Signature malleability

Often, people assume the use of a cryptographic signature system in smart contracts verifies that signatures are unique; however, this isn’t the case. Signatures in Ethereum can be altered without the private key and remain valid. For example, an elliptic-curve signature consists of three values — v, r, and s — and if these values are modified in just the right way, you can obtain a different valid signature for the same message without knowing the private key.

To avoid the problem of signature malleability, never include the signature itself in the message hash you use to check whether messages have already been processed by the contract — a malicious user can take an existing signature, derive a second valid one, and replay the message.

Incorrect constructor name

Before Solidity 0.4.22, the only way to define a constructor was by creating a function with the contract name. In some cases, this was problematic. For example, if a smart contract is reused with a different name but the constructor function isn't also changed, it simply becomes a regular, callable function.

Now with modern versions of Solidity, you can define the constructor with the constructor keyword, effectively deprecating this vulnerability. Thus, the solution to this problem is simply to use modern Solidity compiler versions.

Shadowing state variables

It’s possible to declare the same variable name twice in Solidity, but doing so can lead to unintended side effects. This is especially tricky when working with multiple contracts. Take the following example:

Example of shadowing state variables
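A minimal sketch of the shadowing pattern (illustrative names; this compiles with a warning on 0.5.x compilers, while newer compilers reject it outright):

```solidity
pragma solidity ^0.5.0;

contract SuperContract {
    uint256 a = 1;

    function superA() public view returns (uint256) { return a; }
}

contract SubContract is SuperContract {
    uint256 a = 2; // shadows SuperContract.a in a separate storage slot

    function subA() public view returns (uint256) { return a; }
    // superA() and subA() now read different variables behind one name.
}
```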

Here, we can see SubContract inherits from SuperContract, and the variable a is declared twice with different values. The two declarations occupy separate storage slots: functions inherited from SuperContract keep reading its own a, while code in SubContract sees the other — so any logic that assumes a single shared a will silently break.

To avoid this vulnerability, it’s important we check the entire smart contract system for ambiguities. It’s also important to check for compiler warnings, as they can flag these ambiguities as long as they’re in the smart contract.

Weak sources of randomness from chain attributes

In Ethereum, there are certain applications that rely on random-number generation for fairness. However, random-number generation is very difficult in Ethereum, and there are several pitfalls worth considering.

Using chain attributes such as block.timestamp, blockhash, and block.difficulty can seem like a good idea, as they often produce pseudorandom values. The problem, however, lies in the ability of a miner to modify these values. For example, in a gambling app with a multimillion-dollar jackpot, there’s sufficient incentive for a miner to generate many alternative blocks, only choosing the block that’ll result in a jackpot for the miner. Of course, it comes at a substantial cost to control the blockchain like that, but if the stakes are high enough, this can certainly be done.

To avoid miner manipulation in random-number generation, there are a few solutions:

  • A commitment scheme such as RANDAO, a DAO where the random number is generated by all participants in the DAO
  • External sources via oracles — e.g., Oraclize
  • Using Bitcoin block hashes, as the network is more decentralized and blocks are more expensive to mine

Missing protection against signature-replay attacks

Sometimes in smart contracts, it’s necessary to perform signature verification to improve usability and gas cost. However, consideration needs to be taken when implementing signature verification.

To protect against signature-replay attacks, the contract should only allow new message hashes to be processed. This prevents malicious users from replaying another user’s signature multiple times.

To be extra safe with signature verification, follow these recommendations:

  • Store every message hash processed by the contract — then check new message hashes against the existing ones before executing the function
  • Include the address of the contract in the hash to ensure the message is only used in a single contract
  • Never generate the message hash including the signature. See “Signature malleability.”
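The recommendations above can be sketched as follows (names are illustrative; including the contract address in the signed hash is the signer’s off-chain responsibility):

```solidity
pragma solidity ^0.5.0;

contract SignatureGuard {
    mapping(bytes32 => bool) public processed;

    function execute(bytes32 msgHash, uint8 v, bytes32 r, bytes32 s) public {
        // msgHash is expected to commit to this contract's address off-chain,
        // so the signature can't be replayed against another contract.
        require(!processed[msgHash], "Already processed");
        address signer = ecrecover(msgHash, v, r, s);
        require(signer != address(0), "Invalid signature");
        processed[msgHash] = true; // blocks replays of the same message
        // ... perform the authorized action for `signer` ...
    }
}
```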

Requirement violation

The require() method is meant to validate conditions, such as inputs or contract state variables, or to validate return values from external contract calls. Such inputs can be provided by callers or returned by callees. If a requirement is violated by a callee’s return value, likely one of two things has gone wrong:

  • There’s a bug in the contract that provided the input.
  • The requirement condition is too strong.

To solve this issue, first consider whether the requirement condition is too strong. If necessary, weaken it to allow any valid external input. If the problem isn’t the requirement condition, there must be a bug in the contract providing external input. Ensure this contract is not providing invalid inputs.

Write to an arbitrary storage location

Only authorized addresses should be able to write to sensitive storage locations. If there aren’t proper authorization checks throughout the contract, a malicious user may be able to overwrite sensitive data directly. Even with those checks in place, an attacker may still be able to overwrite sensitive data through writes to non-sensitive data — for example, a write past the end of a dynamic array can land on another variable’s storage slot. This could give an attacker the ability to overwrite important variables such as the contract owner.

To prevent this from occurring, we not only want to protect sensitive data stores with authorization requirements, but we also want to ensure that writes to one data structure cannot inadvertently overwrite entries of another data structure.

Incorrect inheritance order

In Solidity, it’s possible to inherit from multiple sources, which, if not properly understood, can introduce ambiguity. This ambiguity is known as the diamond problem: If two base contracts have the same function, which one should be prioritized? Luckily, Solidity handles this problem gracefully — that is as long as the developer understands the solution.

Solidity solves the diamond problem using C3 linearization: the inheritance list is linearized from right to left, so the order of inheritance matters. It’s suggested to start with more general contracts and end with more specific contracts to avoid problems.

Arbitrary jump with a function-type variable

Function types are supported in Solidity. This means a variable of type function can be assigned to a function with a matching signature. The function can then be called from the variable just like any other function. Users shouldn’t be able to change the function variable, but in some cases, this is possible.

If the smart contract uses certain assembly instructions, mstore for example, an attacker may be able to point the function variable to any other function. This may give the attacker the ability to break the functionality of the contract — and, perhaps, even drain the contract funds.

Since inline assembly is a way to access the EVM at a low level, it bypasses many important safety features. So it’s important to only use assembly if it’s necessary and properly understood.

Presence of unused variables

Although it’s allowed, it’s best practice to avoid unused variables. Unused variables can lead to a few different problems:

  • Increase in computations (unnecessary gas consumption)
  • Indication of bugs or malformed data structures
  • Decreased code readability

It’s highly recommended to remove all unused variables from a code base.

Unexpected Ether balance

Since it’s always possible to send Ether to a contract — see “Forcibly sending Ether to a smart contract” — if a contract assumes a specific balance, it’s vulnerable to attack.

Say we have a contract that prevents all functions from executing if there’s any Ether stored in the contract. If a malicious user decides to exploit this by forcibly sending Ether, they’ll cause a DoS, rendering the contract unusable. For this reason, it’s important to never use strict equality checks for the balance of Ether in a contract.
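A minimal sketch of the fix (names are illustrative): track deposits in a state variable and avoid strict equality on the raw balance.

```solidity
pragma solidity ^0.5.0;

contract BalanceCheck {
    uint256 public deposits; // our own accounting, immune to forced sends

    function deposit() public payable {
        deposits += msg.value;
    }

    function isIntact() public view returns (bool) {
        // Bad: forcibly-sent Ether would break this check forever:
        //     return address(this).balance == deposits;

        // Better: no strict equality on the contract balance.
        return address(this).balance >= deposits;
    }
}
```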

Unencrypted secrets

Ethereum smart contract code can always be read. Treat it as such. Even if your code is not verified on Etherscan, attackers can still decompile or even just check transactions to and from it to analyze it.

One example of a problem here would be a guessing game, where the user has to guess a stored private variable to win the Ether in the contract. This is, of course, extremely trivial to exploit — private variables can be read straight from contract storage. (Don’t try it on live contracts, though: such games are almost certainly honeypots with a trickier catch.)

Another common problem here is using unencrypted off-chain secrets, such as API keys, with Oracle calls. If your API key can be determined, malicious actors can either simply use it for themselves or take advantage of other vectors such as exhausting your allowed API calls and forcing the Oracle to return an error page which may or may not lead to problems, depending on the structure of the contract.

Faulty contract detection

Some contracts don’t want other contracts to interact with them. A common way to prevent this is to check whether the calling account has any code stored in it. However, a contract that initiates calls during its own construction doesn’t yet have code stored at its address, effectively bypassing the contract detection.
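A minimal sketch of the bypassable check (names are illustrative):

```solidity
pragma solidity ^0.5.0;

contract OnlyHumans {
    function isContract(address account) internal view returns (bool) {
        uint256 size;
        // extcodesize returns the size of the code at an address
        assembly { size := extcodesize(account) }
        return size > 0;
    }

    function play() public {
        // Bypassable: a contract calling from its constructor has no code
        // deployed at its address yet, so extcodesize returns 0.
        require(!isContract(msg.sender), "No contracts allowed");
        // ...
    }
}
```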

Unclogged blockchain reliance

Many contracts rely on calls happening within a certain period of time, but Ethereum can be spammed with transactions at a very high gas price for a decent amount of time, relatively cheaply.

For example, Fomo3D (a countdown game where the last investor wins the jackpot, but each investment adds time to the countdown) was won by a user who completely clogged the blockchain for a small period of time, disallowing others from investing until the timer ran out and he won (see “DoS with block gas limit”).

There are many croupier gambling contracts nowadays that rely on past blockhashes to provide RNG. This isn’t a terrible source of RNG for the most part, and they even account for the pruning of hashes that happens after 256 blocks — but at that point, many of them simply void the bet. This opens an attack: someone could place bets with the same target outcome on many of these similarly functioning contracts, check the croupier’s submission while it’s still pending, and, if the result is unfavorable, simply clog the blockchain until pruning occurs and their bets are returned.

Inadherence to standards

In terms of smart contract development, it’s important to follow standards. Standards are set to prevent vulnerabilities, and ignoring them can lead to unexpected effects.

Take, for example, Binance’s original BNB token. It was marketed as an ERC-20 token, but it was later pointed out that it wasn’t actually ERC-20 compliant for a few reasons:

  • It prevented sending to 0x0
  • It blocked transfers of 0 value
  • It didn’t return true or false for success or failure

The main cause for concern with this improper implementation is that if it’s used with a smart contract that expects an ERC-20 token, it’ll behave in unexpected ways. It could even get locked in the contract forever.

Although standards aren’t always perfect and may someday become antiquated, they foster the most secure smart contracts.

Conclusion

As you can see, there are many ways in which your smart contracts can be exploited. It’s vital you fully understand each attack vector and vulnerability before building.

Special thanks to RobertMCForster for many excellent contributions.
