Improvements in Pipeline Process, Static Analysis, and Bug Tracking for Front-End in Azure DevOps

Ygor Barbosa
Shape Digital
Nov 9, 2021 · 7 min read


As projects grow, new needs emerge in the pipeline process: some are about validating code quality and patterns in new features, and others are about bug tracking that gives the team real data about the application.

To deal with these challenges we made changes such as triggering a pipeline on our Pull Requests, integrating Sentry, and splitting our deployment into two steps.

We will cover each one of them in more detail in this article.

Adding a new pipeline for PRs:

As the number of developers grows, we need to ensure that new changes can’t break our main branch. To do that, we can apply a new pipeline so that every time a new PR is created, the code is automatically tested before the merge.

First of all, create the pipeline in Azure DevOps for the PR. Then select the target branch, in our case “master”, and open its branch policies.

In the “Build Validation” section, click the “Add” button and select the pipeline created in the previous step.

Select the branch you wish to validate

Click “+” to open the configuration and add the new build policy

For every new PR created against the target branch, this policy will be triggered, and only after the automated tests pass can the merge be completed.

What we execute during the PR pipeline:

In the PR pipeline, we execute the unit and integration tests with Jest and publish the coverage in LCOV format.

Then we execute the E2E tests with Cypress, and in the last step the static analysis and code coverage are published to SonarQube.

trigger:
- none
jobs:
- job: Job_1
  displayName: Agent job 1
  pool:
    vmImage: ubuntu-latest
  steps:
  - checkout: self
  - task: NodeTool@0
    displayName: Use Node version
    inputs:
      versionSpec: 12.x
  - task: npmAuthenticate@0
    inputs:
      workingFile: .npmrc
  - task: Cache@2
    inputs:
      key: 'npm | "$(Agent.OS)" | $(Build.SourcesDirectory)/package-lock.json'
      cacheHitVar: NPM_CACHE_RESTORED
      path: "/home/vsts/.npm"
    displayName: "Cache ~/.npm directory"
  - task: Cache@2
    displayName: Cache node_modules
    inputs:
      key: 'npm | "$(Agent.OS)" | $(Build.SourcesDirectory)/package-lock.json'
      path: $(Build.SourcesDirectory)/node_modules
      cacheHitVar: CACHE_RESTORED
  - script: |
      npm ci
    displayName: Install dependencies
    condition: ne(variables.CACHE_RESTORED, 'true')
  - script: |
      npmVersionString=$(node -p "require('./package.json').version")
      echo "##vso[build.updatebuildnumber]$npmVersionString"
    displayName: "Set build number"
  - task: SonarCloudPrepare@1
    displayName: Sonar Cloud Prepare
    inputs:
      SonarCloud: "sonar-cloud-service-connection"
      organization: "YourOrganization"
      scannerMode: "CLI"
      configMode: "file"
      extraProperties: |
        sonar.projectVersion=$(Build.BuildNumber)
  - task: Npm@1
    displayName: npm run test
    inputs:
      command: custom
      customCommand: run ci:unit
  - script: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
    displayName: Increase number of file watchers
  - task: Npm@1
    displayName: npm run cypress
    inputs:
      command: custom
      customCommand: run ci:cypress
  - task: PublishTestResults@2
    condition: succeededOrFailed()
    inputs:
      testRunner: JUnit
      testResultsFiles: "**/junit.xml"
  - task: SonarCloudAnalyze@1
    displayName: Sonar Cloud Analysis
  - task: SonarCloudPublish@1
    displayName: Sonar Cloud Publish
    inputs:
      pollingTimeoutSec: "300"

You can customize your SonarQube configuration in the sonar-project.properties file in your root folder. Only the “projectKey” and “projectName” fields are required; all the other fields are optional.

In our configuration file, we used the “exclusions” field to reduce the number of lines analyzed in the final project; in our case, it doesn’t make sense to analyze the mock files.

sonar.projectKey=YourOrganization
sonar.projectName=your-project
sonar.exclusions=src/server/**/*, src/**/*.scss, src/**/*.sass, src/**/*.svg
sonar.sources=src
sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.test.inclusions=**/*.test.ts
sonar.sourceEncoding=UTF-8

To follow the progress of code quality and test coverage, we pass the release version to SonarQube. We take the version from package.json, so we can track the progress of the application over time.

The excerpt below uses the “Build.BuildNumber” pipeline variable, which the “Set build number” script step updated with the package.json version, and passes it to SonarQube as the project version.

- task: SonarCloudPrepare@1
  displayName: Sonar Cloud Prepare
  inputs:
    SonarCloud: "sonar-cloud-service-connection"
    organization: "YourOrganization"
    scannerMode: "CLI"
    configMode: "file"
    extraProperties: |
      sonar.projectVersion=$(Build.BuildNumber)

To generate code coverage in your project, you need to install the jest-junit package and configure the report in LCOV format; during the test run, a /coverage folder will be generated with the test results.
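As an illustration, a Jest configuration along these lines would produce both the JUnit results and the LCOV report consumed later in the pipeline. This is a minimal sketch, not the project’s actual file; the option values are assumptions that should match the paths used by PublishTestResults@2 and sonar-project.properties.

// jest.config.js — minimal sketch, assuming jest-junit is installed as a dev dependency
module.exports = {
  // JUnit results picked up by PublishTestResults@2 via "**/junit.xml"
  reporters: ["default", ["jest-junit", { outputName: "junit.xml" }]],
  // LCOV report written to coverage/lcov.info, matching sonar.javascript.lcov.reportPaths
  collectCoverage: true,
  coverageDirectory: "coverage",
  coverageReporters: ["lcov", "text"],
};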

One of the steps that takes a long time during our build is the Node packages installation, so to reduce the time spent on this step we can use the Cache@2 task available in the Azure DevOps pipeline.

This task caches files to be reused in subsequent pipeline runs, so as long as nothing changes in our package-lock.json, Azure DevOps will reuse the npm packages installed previously. With that in mind, we must always keep package-lock.json tracked in our git repository.

- task: Cache@2
  displayName: Cache node_modules
  inputs:
    key: 'npm | "$(Agent.OS)" | $(Build.SourcesDirectory)/package-lock.json'
    path: $(Build.SourcesDirectory)/node_modules
    cacheHitVar: CACHE_RESTORED

Another improvement to reduce pipeline time is to replace the Npm@0 task with a script task that runs the npm ci command, which is designed for CI/CD environments and skips some output that is only relevant to a human user.

This makes the process faster. To see more details about the command, follow the link.

- script: |
    npm ci
  displayName: Install dependencies
  condition: ne(variables.CACHE_RESTORED, 'true')

Sentry integration:

The integration with Sentry is really helpful for analyzing real data from the application about performance and bug tracking.

Sentry dashboard

We will use a Bash task in Azure DevOps to run the CLI provided by Sentry.

This task is responsible for configuring the Sentry environment. As a prerequisite, we have to generate a new access token in Sentry and use it in our pipeline variables.

To do this, select the pipeline in which you wish to create the variable, click “Edit”, then “Variables”, and add the value of your token.

Set the name and value of your variable, and it will be available in your pipeline

We link our application version to a new release in Sentry, so we can tell whether a specific version of the application is responsible for an increase in the number of bugs or a decrease in performance.

We also link the commits to the same release and publish the source maps of that version, which helps to better understand the causes of a bug.

- task: Bash@3
  displayName: Associate commit to Sentry Release
  inputs:
    targetType: "inline"
    script: |
      curl -sL https://sentry.io/get-cli/ | bash
      export SENTRY_AUTH_TOKEN=$(sentry-token)
      export SENTRY_ORG=your_organization
      export SENTRY_PROJECT=your_project
      sentry-cli releases new -p $SENTRY_PROJECT "your-project-$(SentryEnvironment)@$(Build.BuildNumber)"
      sentry-cli releases -p $SENTRY_PROJECT files "your-project-$(SentryEnvironment)@$(Build.BuildNumber)" upload-sourcemaps --url-prefix "~/static/js" --validate build/static/js
      sentry-cli releases set-commits --auto "your-project-$(SentryEnvironment)@$(Build.BuildNumber)" --ignore-missing

Install the packages in your application

# Using yarn
yarn add @sentry/browser @sentry/tracing
# Using npm
npm install --save @sentry/browser @sentry/tracing

The code below must be inserted in the application to record the information to Sentry. Another integration that can help with bug tracking is the Rrweb plugin.

It will record all the actions of the user; after a problem happens, the developer can see exactly how the user was interacting with the application. See more details about the Rrweb integration in the link.

import * as Sentry from "@sentry/browser";
import { Integrations } from "@sentry/tracing";
import SentryRRWeb from "@sentry/rrweb"; // Rrweb integration, installed separately

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  integrations: [new Integrations.BrowserTracing(), new SentryRRWeb({})],
  tracesSampleRate: 1.0,
  environment: process.env.REACT_APP_ENVIRONMENT,
  release: `your-project-${process.env.REACT_APP_ENVIRONMENT}@${process.env.REACT_APP_SENTRY_VERSION}`,
});
Sentry.setTag("rrweb.active", "yes");

Two-step pipeline:

We can define two stages for our pipeline: we will split the build and the deploy into separate stages.

To do so, we need to publish the build files as an Azure DevOps artifact so that the deploy stage has access to them.

We can do this with the CopyFiles@2 and PublishBuildArtifacts@1 tasks, where we specify the folders we wish to copy and make available as artifacts.
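As an illustration, the publish step at the end of the build stage could look like the sketch below. The build output folder and the artifact name “build” are assumptions; they should line up with what the deploy stage downloads, so that the files end up under $(Pipeline.Workspace)/build/build.

# Minimal sketch of the publish tasks inside the build stage (folder and artifact names are assumptions)
- task: CopyFiles@2
  displayName: Copy build output to staging
  inputs:
    SourceFolder: $(Build.SourcesDirectory)/build
    Contents: "**"
    TargetFolder: $(Build.ArtifactStagingDirectory)/build
- task: PublishBuildArtifacts@1
  displayName: Publish build artifact
  inputs:
    PathtoPublish: $(Build.ArtifactStagingDirectory)
    ArtifactName: build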

We can also create an environment dedicated to our deploy stage, so we can analyze the data from all our deploys in a single place.

In the deploy stage, we just retrieve the files from the artifact and publish them, and that's all for our pipeline process.

stages:
- stage: build
  displayName: Build
  jobs:
  - job: Job_1
    displayName: Agent job 1
    pool:
      vmImage: ubuntu-latest
    steps:
    - checkout: self
      persistCredentials: true
    ### build steps ###

- stage: deploy
  displayName: Deploy to Azure
  pool:
    vmImage: ubuntu-latest
  jobs:
  - deployment: DEPLOYMENT
    environment: "your-environment"
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: build
          - task: AzureRmWebAppDeployment@4
            displayName: "Azure App Service Deploy"
            inputs:
              ConnectedServiceName: $(ConnectedServiceName)
              WebAppKind: webAppLinux
              WebAppName: $(WebAppName)
              Package: $(Pipeline.Workspace)/build/build
              InlineScript: >2

With just a few changes to our pipeline, we can ensure higher quality in our final code and better understand what is happening before each deployment, all through a faster and easier pipeline process.
