Continuous integration for Smart Contracts

Nicolás Bello Camilletti · Published in SOUTHWORKS · 7 min read · May 19, 2021

[Image: Solidity, Hardhat, Slither and Azure Pipelines logos]

As part of our development process, we like to make sure that we are meeting our quality standards by using git flows and code reviews to double-check the solution before merging it to the main branch. However, before doing these reviews, it is important to have a continuous integration (CI) pipeline in place to run the verifications that can be automated, like code formatting, checking for compilation errors, and running the tests to verify that the behavior is the expected one. This helps us save time and lets the reviewers focus on the important things. Working with smart contracts is no exception, and in this article we will show how to configure a CI pipeline that can help you in your development process.

First, let us give you some context. We recently worked on several projects using Solidity and Ethereum where we chose Hardhat as our development environment. Additionally, we take advantage of its integration with npm, using npm scripts to automate our tasks, as well as TypeScript to ensure type safety whenever possible. We use Azure DevOps Pipelines to run the pipeline.

The pipeline structure

As mentioned before, we are using Azure DevOps Pipelines with the ubuntu-latest image. Additionally, we configure the pipeline to be triggered on PRs targeting the main branch as well as whenever a commit is pushed to that branch.

Now, for the first steps, and considering that we are using npm scripts and Hardhat, we need to install Node.js as well as the dependencies described in the package.json.

trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '14.x'
    displayName: 'Install Node.js'

  - script: |
      npm ci --also=dev
    displayName: 'Install dependencies'

Linting

As the first formal step of our CI pipeline, we use solhint as the linter for our contracts. For this, we first need to install it as a dev dependency using npm i -D solhint. Then, we need to create the configuration file named .solhint.json, which can be generated with npx solhint --init. We can also create a .solhintignore file, which uses the .gitignore format, to exclude files that do not require validation.
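For reference, a minimal .solhint.json could look like the following sketch (the specific rules are illustrative; pick the ones that match your project):

```json
{
  "extends": "solhint:recommended",
  "rules": {
    "compiler-version": ["error", "^0.6.11"],
    "max-line-length": ["warn", 120]
  }
}
```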

Finally, we need to add an npm script to the package.json file to simplify things, as follows:

"scripts": {
  "lint": "solhint --max-warnings 0 \"contracts/**/*.sol\""
},

By doing this, we can run npm run lint locally as well as in the pipeline to confirm that the code follows the rules.

In our case, we also wanted to comment on the PR if the linter finds errors. For this, we save the log to a file and set a pipeline variable (i.e., linterLog) with its content. Note that we use sed to replace line endings, to support multiline content in the variable while setting it (based on this question on Stack Overflow). As the linter fails on errors, the step would also fail before storing the log in the variable; that's why we need to mark the step to "continue on error". Then, we use the GitHubComment task to write the log only if the build was triggered as part of a pull request and the earlier step failed.

- script: |
    npm run lint > linter-log.txt 2>&1
    if [ $? -ne 0 ]; then
      echo "##vso[task.setvariable variable=linterLog]$(cat linter-log.txt | sed '1,/> eslint ./d;/npm ERR!/,$d' | sed '/./,$!d' | sed ':a;N;$!ba;s/\n/%0D%0A/g')"
    fi
  continueOnError: true
  displayName: 'Run Linter'

- task: GitHubComment@0
  condition: and(eq(variables['Build.Reason'], 'PullRequest'), failed())
  inputs:
    gitHubConnection: 'myGitHubConnection'
    repositoryName: '$(Build.Repository.Name)'
    comment: |
      **Linter Error Details:**
      ```
      $(linterLog)
      ```
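To see the newline-encoding trick in isolation, here is a minimal shell sketch you can run locally (the sample input is made up for illustration):

```shell
# Azure DevOps setvariable logging commands only accept single-line
# values, so every newline is replaced with the encoded CR/LF sequence
# %0D%0A, which the PR comment later renders back as line breaks.
# The sed script reads the whole input into the pattern space
# (N appends the next line, $!ba loops until the last one) and then
# performs a single global substitution.
printf 'contracts/Token.sol\nerror: code contains empty blocks\n' \
  | sed ':a;N;$!ba;s/\n/%0D%0A/g'
```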

Building the project

Now that we know the code passes the basic security and style checks thanks to our linter step, we will build the solution. For this, we will run hardhat compile, and to make things easier we will add a "compile" script to the package.json.

"scripts": {
  "compile": "hardhat compile"
},

Then the pipeline step will be the following.

- script: |
    npm run compile
  displayName: 'Run compile'

We could also call npx hardhat compile directly, but we are following the same pattern as in the other tasks.

Running a static security audit

Security is especially important for smart contracts: once a contract is deployed it is impossible to replace it, you can only deploy a new one. To address this, we can use the slither analyzer.

First, we need to install the dependencies. Slither runs on Python, so we can take advantage of the UsePythonVersion task, selecting version 3.x. In a separate task, we will use pip3 to install slither-analyzer and solc-select. Finally, we need to install and select the correct Solidity compiler (solc) version using solc-select, which in our case is 0.6.11.

# Security Audit tasks
########################
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true
    architecture: 'x64'
  displayName: 'Install Python'

- script: |
    python -m pip install --upgrade pip
    pip3 install slither-analyzer solc-select
    solc-select install 0.6.11
    solc-select use 0.6.11
  displayName: 'Install dependencies'

Now that we have all the dependencies in place, we need to run slither and, as we did with solhint, save its log in a variable so we can comment on the PR if it fails.

This time, the script is a bit more complex than the previous one. First, we run slither without compiling (--ignore-compile) to save some time, as we already compiled the contracts in an earlier step. We use the human-summary printer to get a consumable view of the report. Then, we parse the report to obtain the number of issues of each severity level.

Most of the time the human summary is not enough, so we run slither again, this time with its default output. As we want to append this output to the original file, we use the >> append redirection instead of >.

Now, we set a new slitherAudit variable with the content of the output file, using sed as before to support multiline content. Finally, we mark the step as failed if we found an issue of any severity.

- script: |
    slither . --ignore-compile --filter-paths "node_modules" --disable-color --print human-summary > slither-audit.txt 2>&1

    echo "##vso[task.setvariable variable=slitherAuditRun]yes"
    export LError=$(grep 'low issues' slither-audit.txt | sed 's/[a-zA-Z:0 ]//g')
    export MError=$(grep 'medium issues' slither-audit.txt | sed 's/[a-zA-Z:0 ]//g')
    export HError=$(grep 'high issues' slither-audit.txt | sed 's/[a-zA-Z:0 ]//g')

    echo -en '-------------------------------\n\n\n# Details:\n##############################\n\n' >> slither-audit.txt
    slither . --ignore-compile --filter-paths "node_modules" --disable-color >> slither-audit.txt 2>&1
    echo "##vso[task.setvariable variable=slitherAudit]$(cat slither-audit.txt | sed ':a;N;$!ba;s/\n/%0D%0A/g')"

    if [ ! -z "${LError}" ] || [ ! -z "${MError}" ] || [ ! -z "${HError}" ]; then
      echo "##vso[task.complete result=Failed;]Slither found an issue."
    fi
  displayName: 'Run slither audit'
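The count extraction can be tried in isolation. The sed character class strips letters, colons, spaces, and the digit 0, so a summary line reporting zero issues collapses to an empty string, while a non-zero count survives (the sample lines below are made up; slither's actual human-summary wording may differ):

```shell
# Zero issues: everything (including the '0') is stripped -> empty string
printf 'Number of low issues: 0\n' | sed 's/[a-zA-Z:0 ]//g'

# Three issues: the digit survives -> non-empty, so the step is failed
printf 'Number of high issues: 3\n' | sed 's/[a-zA-Z:0 ]//g'
```

Note that because 0 is in the character class, a count like 10 would come out as 1; since the script only checks whether the result is non-empty, the pass/fail decision is still correct.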

After that step, as we did before, we use the GitHubComment task to write a comment on the PR only if there were errors.

- task: GitHubComment@0
  condition: and(eq(variables['Build.Reason'], 'PullRequest'), failed(), eq(variables.slitherAuditRun, 'yes'))
  inputs:
    gitHubConnection: 'myGitHubConnection'
    repositoryName: '$(Build.Repository.Name)'
    comment: |
      **Slither audit result:**

      ```
      $(slitherAudit)
      ```

As configuring slither locally is a bit harder, you might want to store the log as a build artifact so it can be consumed later. For that, we copy the file to the ArtifactStagingDirectory and then use the PublishBuildArtifacts task.

- task: CopyFiles@2
  condition: and(succeededOrFailed(), eq(variables.slitherAuditRun, 'yes'))
  inputs:
    contents: 'slither-audit.txt'
    targetFolder: $(Build.ArtifactStagingDirectory)
    flattenFolders: true

- task: PublishBuildArtifacts@1
  condition: and(succeededOrFailed(), eq(variables.slitherAuditRun, 'yes'))
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'slither-audit'

Unit tests and coverage

An important part of any CI pipeline is running the unit tests and analyzing their coverage. For the latter, we use the solidity-coverage plugin for Hardhat. You need to install it with npm i -D solidity-coverage and update the hardhat.config.ts file with the corresponding import: import "solidity-coverage";.
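After that change, the relevant part of hardhat.config.ts could look like the following minimal sketch (real configs usually declare networks and other plugins too):

```typescript
// hardhat.config.ts — importing the plugin registers its "coverage" task
import { HardhatUserConfig } from "hardhat/config";
import "solidity-coverage";

const config: HardhatUserConfig = {
  // Matches the solc version selected with solc-select in the audit step
  solidity: "0.6.11",
};

export default config;
```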

Additionally, we create a .solcover.js file to change the reporter to cobertura, which is compatible with Azure DevOps coverage reports.

module.exports = {
  istanbulReporter: ['cobertura'],
};

Finally, we create two extra npm scripts: one for running the tests and another for running the coverage.

"scripts": {
  "test": "hardhat test",
  "coverage": "hardhat coverage"
},

Now, going back to the pipeline, we need to add two steps: the first one runs the coverage script and the second one publishes the results.

Note that it’s important that this step is executed after slither, as the coverage run compiles the solution again, adding the instrumentation necessary for the coverage report, which would make the slither analysis fail.

# Testing and coverage tasks
##############################
- script: |
    npm run coverage
  displayName: 'Run tests and coverage'

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: coverage/cobertura-coverage.xml

Wrapping up

We configured our pipeline to build the solution, run a linter and a static security analyzer, and run the tests along with their coverage report. We also comment on the PR whenever there are linter or security issues.

Even though we could also create a CD pipeline to deploy the contracts to a test network, we are not doing so here, as it depends on the nature of your contract (i.e., whether the contract is upgradeable and the type of network you are using for the deployment). Keep in mind that you cannot change a deployed contract because of the nature of the blockchain, but you can use strategies like OpenZeppelin upgrades.

You can find a sample with the full azure-pipelines.yml file at nbellocam/hardhat-pipeline.
