How We Test SurveyJS Libraries
The very first commit was pushed into the SurveyJS Library repo on GitHub almost seven years ago, and it already included unit tests. Since that time, the SurveyJS project has undergone a string of evolutionary and revolutionary changes, and so has the overall testing process.
In this article, we’ll overview our current testing environment and take a look back to see how it changed through the years.
Unit Testing
If you are a veteran of TDD and a big fan of the famous Kent Beck book, you have probably forgotten what it's like to write code without creating a test case first.
We feel the same way.
Here are the tools that we use for unit testing:
- Karma + QUnit for the SurveyJS Library (view tests)
- Jest for the Survey Creator, Survey PDF, and Survey Analytics libraries (view tests)
Each option has its pros and cons. Karma runs in a browser, while Jest lets you run tests right inside your IDE. On the other hand, Karma runs ~2000 tests in about 7 seconds, while Jest takes a full two minutes to run even fewer tests.
We still haven’t overcome this performance issue with Jest. If you have any ideas, please let us know in the comment section below.
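For context, most of these unit tests exercise the survey model directly, without rendering anything. Here is a minimal, illustrative Jest-style sketch (not taken from our test suite), assuming the survey-core package:

// A minimal, illustrative Jest test against the survey model (no DOM involved).
// The question JSON and assertions are simplified examples.
const { Model } = require("survey-core");

test("a text question stores its value in the survey data", () => {
  const survey = new Model({
    questions: [{ type: "text", name: "email" }]
  });

  const question = survey.getQuestionByName("email");
  question.value = "john@example.com";

  // The model collects answers into survey.data, keyed by question name.
  expect(survey.data).toEqual({ email: "john@example.com" });
});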
For the first couple of years, we relied fully on unit testing. When SurveyJS was still a fledgling, we didn't receive complaints about the quality of the product. However, as our codebase grew bigger, things started to break more often.
Introducing major changes was especially painful. About five years ago, we were migrating our code away from the KnockoutJS MVVM library. We wanted to separate the model from the rendering code, which would make it possible to support native rendering in React, Vue, and Angular.
We improved the code and got rid of duplicates, but the refactoring also introduced multiple severe bugs that our unit tests failed to catch. The quality of our product degraded over several releases, and we found out about it only from our customers!
Our testing process definitely required a level-up.
End-to-End (e2e) Tests
We knew that e2e tests would help us because the problem lay not in the application codebase itself, but in the way different application modules interact with each other.
For e2e testing, we chose TestCafe. It doesn't require a time- and effort-consuming WebDriver setup; it simply runs on Node.js and controls a real browser. We added regression tests for basic functionality and bug fixes.
An example from survey-library/testCafe/questions/image.js:
import { Selector, fixture, test } from "testcafe";
import { frameworks, url, initSurvey, getSurveyResult, getQuestionValue, getQuestionJson, checkSurveyWithEmptyQuestion } from "../helper";

const title = "image";
const json = {
  questions: [
    {
      type: "image",
      name: "image",
      imageLink: "https://surveyjs.io/Content/Images/examples/image-picker/lion.jpg"
    },
    {
      type: "image",
      name: "video",
      imageLink: "https://surveyjs.io/Content/Images/examples/image-picker/lion.avi"
    },
    {
      type: "image",
      name: "youtube",
      imageLink: "https://www.youtube.com/embed/tgbNymZ7vqY"
    }
  ]
};

frameworks.forEach(framework => {
  fixture`${framework} ${title}`.page`${url}${framework}.html`.beforeEach(
    async t => {
      await initSurvey(framework, json);
    }
  );

  function SelectorByNumber(questionNumber) {
    return Selector(".sv_body .sv_row")
      .nth(questionNumber)
      .find(".sv_qstn")
      .find(".sv_q_image");
  }

  test("Check image question layout", async t => {
    await t
      .expect(SelectorByNumber(0).child("img").exists).ok()
      .expect(SelectorByNumber(1).child("video").exists).ok()
      .expect(SelectorByNumber(2).child("iframe").exists).ok();
  });
});
TestCafe tests are available in the testCafe folder of the SurveyJS Library repo.
So, the quality of our products went up, and the testing matter was settled once and for all, right?
Wrong.
From time to time, we received customer reports that SurveyJS components appeared visually broken. For example, an input field would display a weird thick border, or radio group buttons would appear in the wrong order.
At the same time, we were working on an updated version of our Survey Creator. This component contains hundreds of UI elements, which may be affected by even the slightest style change. To keep our clients satisfied and maintain a consistent UI in the long run, we decided to integrate screenshot testing.
Screenshot Tests for CSS
Some SurveyJS team members were initially against screenshot tests. Old folks from the '90s, they had seen how these tests worked (or rather didn't) for desktop libraries: you add a new font to the system, and tests fail; you update the OS, and tests fail; you introduce a small functionality change that doesn't even affect the UI, and tests fail again… Back in the day, screenshot tests were more hassle than they were worth.
Fortunately, our younger team members had a better experience with screenshot tests, and they convinced the skeptics to give them a try.
For screenshot testing, we implemented a custom in-house solution. Among other features, it allows us to take screenshots and replace baseline images using console commands. We set up the environment to ensure that our tests do not depend on fonts, the OS, or other varying factors.
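To illustrate the idea (this is a sketch, not our actual in-house code), such a test boils down to fixing the viewport, capturing an element screenshot with TestCafe's takeElementScreenshot, and diffing it against a stored baseline image. The compareWithBaseline helper, page URL, and selector below are hypothetical:

import { Selector, fixture, test } from "testcafe";
// compareWithBaseline is a hypothetical helper that stands in for our in-house
// comparison tooling (it diffs the new screenshot against the baseline image).
import { compareWithBaseline } from "../helper";

fixture`Text input appearance`.page`http://localhost:8080/examples/text.html`;

test("Text input renders with the expected styles", async t => {
  // Fix the viewport so the screenshot does not depend on the window size.
  await t.resizeWindow(1280, 720);

  // Capture the element and compare it with the stored baseline image;
  // the test fails if the images differ beyond the allowed tolerance.
  await t.takeElementScreenshot(Selector(".sv_q_text_root"), "text-input.png");
  await compareWithBaseline(t, "text-input.png");
});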
As a result, the number of issues caused by incorrect CSS rules has dropped significantly.
Screenshot tests helped.
Markup Tests for Different Frameworks
SurveyJS supports multiple JavaScript frameworks: Angular, React, jQuery, Knockout, and Vue. To keep rendering consistent, a developer who introduces a UI change must ensure that it produces the same markup in each supported framework.
To test the markup, we use simple string comparison. We obtain the markup generated by the framework, remove framework-specific attributes from it, convert it to a string, and compare the result to the baseline string value. As a positive side effect, we also get accessibility attributes tested.
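In simplified form (an illustrative sketch, not our actual helpers or markup), such a comparison could look like this:

// Simplified, Jest-style sketch of a markup test. The markup strings are
// shortened examples; the real tests render actual questions in each framework.
function normalizeMarkup(html) {
  // Strip framework-specific attributes (e.g., Vue's data-v-*, Angular's
  // _ngcontent-*) and collapse whitespace before comparison.
  return html
    .replace(/\s(?:data-v-[\w-]+|_ngcontent-[\w-]+)(?:="[^"]*")?/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

test("frameworks produce the same markup for a text question", () => {
  // Markup as rendered by a framework (with scoped attributes)...
  const rendered = '<div class="sv_qstn" data-v-1a2b3c><input type="text" data-v-1a2b3c /></div>';
  // ...and the framework-agnostic baseline we compare against.
  const baseline = '<div class="sv_qstn"><input type="text" /></div>';

  expect(normalizeMarkup(rendered)).toEqual(normalizeMarkup(baseline));
});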
You can find examples of our markup tests in the SurveyJS Library repo.
Building a DevOps Pipeline
Our test cases are executed in a CI environment. Initially, we used Travis CI. It was free for open-source projects and suited the simple workflow we had at the early stages.
The development workflow was as follows: team members pushed their commits directly to the master branch, and tests ran for every commit. If the tests failed, the team fixed the build.
The workflow worked perfectly until the team got bigger. While the team was fixing the build, new commits were on hold. The more team members we had, the more commits were postponed.
We had to change the workflow, so we disallowed direct commits to the master branch. Developers now push changes through a new local branch, which ensures that each change (a group of commits on a branch) goes through a review process (a pull request).
Pre-push hooks build the libraries and run unit tests. If a test fails, the branch remains unpublished. If the tests pass, the branch is published, and the developer creates a pull request to merge the new branch into the master branch. The remaining tests run within this pull request.
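For illustration, a pre-push check like this can be wired up as a small Node script invoked from a git pre-push hook (for example, via husky); the script below is a sketch, not our exact setup:

// scripts/pre-push-check.js — illustrative sketch of a pre-push check.
// A git pre-push hook can run this script to build the library and execute
// unit tests before the branch is published.
const { execSync } = require("child_process");

try {
  execSync("npm run build", { stdio: "inherit" });
  execSync("npm test", { stdio: "inherit" });
} catch (err) {
  // A non-zero exit code makes git abort the push.
  process.exit(1);
}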
The updated workflow rocked! But it required a significant amount of CPU power, so we migrated to Azure Pipelines.
Setting up Azure Pipelines forced us to optimize the tests. Now, our tests run in parallel and reuse artifacts from other tasks. As a result, the testing time dropped from the initial 40 minutes to 10-15 minutes, which is acceptable, and our main branch is (almost) never red.
Investing time and effort in DevOps paid off.
What’s Next
There's still room for improvement. The performance of our Azure pipelines can be optimized further. Despite the pre-push testing, we still break the main branch sometimes. This mainly happens because the Survey Creator main pipeline depends on the SurveyJS Library, but we don't run Creator pipelines along with Library pull requests. Besides that, we are looking for a suitable accessibility-testing tool to complement our markup comparison method. Any suggestions are highly appreciated.
We would love to hear your stories. How do you build your DevOps? What tools and libraries do you use for testing?
Leave us a comment below.
About SurveyJS Project
The SurveyJS project includes four open-source JavaScript libraries:
- Form Library — A free and open-source JavaScript library that lets you design dynamic, data-driven, multi-language survey forms and run them in your web application using a variety of front-end technologies. Available under the MIT license.
- Survey Creator — A GUI-based, no-code survey builder that allows easy drag-and-drop form creation, even for non-tech-savvy users. Requires a commercial developer license.
- Dashboard — Simplifies survey data analysis with interactive and customizable charts and tables. Visualize your insights with the survey data dashboard and analyze survey results in one view. Requires a commercial developer license.
- PDF Generator — Allows you to save an unlimited number of custom-built survey forms to PDF (both new and filled-in), and generate fillable PDF forms to automate your form workflow and go paperless. Requires a commercial developer license.
To learn more about SurveyJS Project, visit our website: surveyjs.io.