Upgrading to NUnit3 — not such a walk in the park
At Redgate, a part of our technical strategy is to move all our products to .NET Standard as we believe this is the future for working cross-platform in the .NET stack. We are tackling this on a product-by-product basis, though with a special focus on migrating shared utilities first. To aid us, we have integrated the .NET Portability Analyzer into the build scripts of some of our products and identified some areas we can start fixing straight away.
This blog summarises our approach and some of the challenges we faced upgrading the test framework used by our Oracle DevOps tools from NUnit2 to NUnit3.
Our existing NUnit test framework was version 2.6.4, released back in December 2014. Not only is this not .NET Standard, it is also now a legacy package, having been superseded by NUnit3. When deciding which version of NUnit3 to upgrade to, we reviewed the release notes and decided we needed to go to at least 3.10.0, since that release introduced .NET Standard 2.0 compatibility. However, we also thought: why not just go to the latest version, 3.12.0? There didn't seem to be any reason not to, so we went for it.
We started by simply upgrading all the packages in our solution that used NUnit via the NuGet package manager in the JetBrains Rider IDE. Interestingly, although all of our projects still use the old-style .csproj format, some of them did not have a packages.config file and were missed by the package manager, so we had to identify these manually and update the NUnit reference in the .csproj files directly.
As anticipated, this upgrade broke many of our tests. Fortunately, a full list of breaking changes can be found on the NUnit GitHub page — this made our life much easier. The breaking changes could be broken down into two types:
- Compile errors — for instance, the replacement of NUnit2-era string constraints such as `Is.StringContaining()` with `Does.Contain()`. These were very easy to identify and fix.
- Runtime errors — for example, `TestCaseSource` references must now be static. These were only identifiable when running the tests, either locally or through our build server.
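To make the two categories concrete, here is a hedged sketch in NUnit3 syntax (the fixture, test names, and data are invented for illustration, not from our code base):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class MigrationExamples
{
    // Compile error: the NUnit2-style string constraints are gone;
    // NUnit3 uses Does.Contain instead.
    [Test]
    public void ErrorMessage_MentionsFileName()
    {
        Assert.That("Could not open data.csv", Does.Contain("data.csv"));
    }

    // Runtime error: in NUnit3 the member referenced by
    // [TestCaseSource] must be static, or the test fails at run time.
    private static IEnumerable<string> Inputs => new[] { "a", "b" };

    [TestCaseSource(nameof(Inputs))]
    public void Input_IsNotEmpty(string input)
    {
        Assert.That(input, Is.Not.Empty);
    }
}
```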
The vast majority of these we were able to fix quickly by making minor changes to the test code. However, some changes were more complex due to the nature of the breaking change or the way our test code was written.
One such breaking change was the removal of the `ExpectedException` parameter from `TestCase` attributes. In order to continue testing for exceptions, we now have to use `Assert.Throws` or similar, which meant separating any `TestCase` parameters that should throw errors from those that don't. This increased the number of tests and duplicated some code.
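As an illustration of that split, here is a hedged before/after sketch (the parser example is invented, not from our code base):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class ParserTests
{
    // NUnit2 allowed mixing throwing and non-throwing cases on one test:
    //   [TestCase("42", Result = 42)]
    //   [TestCase("oops", ExpectedException = typeof(FormatException))]
    //
    // In NUnit3 the throwing cases must become a separate test that
    // uses Assert.Throws explicitly.
    [TestCase("42", ExpectedResult = 42)]
    public int Parse_ValidInput_ReturnsValue(string input)
    {
        return int.Parse(input);
    }

    [TestCase("oops")]
    public void Parse_InvalidInput_Throws(string input)
    {
        Assert.Throws<FormatException>(() => int.Parse(input));
    }
}
```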
Although most tests requiring static `TestCaseSource` references could easily be fixed by just adding the `static` keyword, some of our tests were structured in a more complex manner, using test base classes and calculating the `TestCaseSource` references at run-time. These could not simply be made static, so instead we had to stop using `TestCaseSource` entirely and rewrite the tests to loop over all the necessary test cases. Fundamentally, everything is still being tested, but the number of tests has decreased significantly: what were previously multiple different test parameters have now been consolidated into single tests with multiple assertions.
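A hedged sketch of the shape this refactoring took (the base class and members here are invented): what were separate `TestCaseSource`-driven tests become one test looping over the cases, with `Assert.Multiple` so that every failing case is still reported rather than only the first:

```csharp
using NUnit.Framework;

public abstract class ProviderTestBase
{
    // Previously this instance-level data fed a [TestCaseSource];
    // it cannot be made static because it depends on fixture state.
    protected abstract string[] GetSupportedVersions();

    protected abstract bool IsSupported(string version);

    [Test]
    public void AllSupportedVersions_AreHandled()
    {
        // One test now loops over what were separate test cases.
        Assert.Multiple(() =>
        {
            foreach (var version in GetSupportedVersions())
            {
                Assert.That(IsSupported(version), Is.True,
                    $"Version {version} should be supported");
            }
        });
    }
}
```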
While fixing these test failures, we identified a few bugs and other peculiarities that were not documented.
Parameterized tests with parameters containing ‘:’ do not report properly
We got confused when reviewing our build server reports, as the number of tests being reported was considerably different after our NUnit upgrade. On investigation, it turned out that we had lots of tests that used parameterized test cases, and tests with parameters containing a ‘:’ were not being reported properly.
These tests were running correctly, both locally and on the build server, but when the results were output by the console runner, any ‘:’ symbols were being replaced with an empty space. As confusing as this was, it was made worse by the fact that these tests often ran multiple times, and there were legitimate test cases that contained empty spaces. The reporting therefore assumed that, because two tests had the same signature, they were the same, and reported them only once. This appears to be a bug in the NUnit TeamCity extension, so we have filed a bug report with a simple example (https://github.com/nunit/teamcity-event-listener/issues/68) and left the tests as they are.
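A hypothetical reproduction of the collision (the endpoint values are invented): once the ‘:’ is replaced with a space in the report, the two distinct cases below end up with identical names and are collapsed into one reported test:

```csharp
using NUnit.Framework;

public class ReportingRepro
{
    // After ':' becomes ' ' in the TeamCity report, these two
    // distinct cases appear under the same test name.
    [TestCase("host:1521")]
    [TestCase("host 1521")]
    public void Endpoint_IsParsed(string endpoint)
    {
        Assert.That(endpoint, Is.Not.Empty);
    }
}
```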
The width of the console runner appears to be unpredictable
We found that some of our tests comparing two strings were reliant on outputs to the console. Specifically, the formatting of the two strings on the console mattered when doing the assertion even though the test actually only cared about the contents.
Interestingly, upgrading to NUnit3 caused some tests to fail due to incorrectly formatting the strings on the console (i.e. failing to wrap at 80 characters). Even more peculiarly, hard-coding the console width to 80 columns only fixed the issue on some of our build servers.
To resolve this, we rewrote the relevant tests so that they assert on the strings themselves, without caring about the output formatting. Unfortunately, we have not got to the bottom of exactly what changed here.
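The direction of that rewrite, as a hedged sketch (the names and helpers are invented): assert on the values directly rather than on their console rendering:

```csharp
using NUnit.Framework;

public class ScriptComparisonTests
{
    // Previously the test wrote both strings to the console, and the
    // assertion depended on how they wrapped at the console width.
    // Asserting on the values directly removes that dependency.
    [Test]
    public void GeneratedScript_MatchesExpectedScript()
    {
        var expected = LoadExpectedScript();   // invented helper
        var actual = GenerateScript();         // invented helper

        Assert.That(actual, Is.EqualTo(expected));
    }

    private static string LoadExpectedScript() => "CREATE TABLE widgets (id NUMBER);";
    private static string GenerateScript() => "CREATE TABLE widgets (id NUMBER);";
}
```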
The default working directory for NUnit3 has moved
NUnit3 has changed the default directory that tests run from: previously the test assembly's folder, it is now a temporary folder. This broke a number of our tests, both locally and on our build server, as they could no longer locate required files.
We fixed this locally by updating the settings in our IDE. However, for the build server, we instead had to introduce a new configuration when setting up test fixtures:
public void OneTimeSetUp()
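A minimal sketch of how that fixture set-up can restore the old behaviour, assuming the fix resets the working directory to the test assembly's folder (the body here is our reconstruction, not verbatim from the original):

```csharp
using System.IO;
using NUnit.Framework;

public class FileDependentTests
{
    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        // NUnit3 no longer runs tests from the test assembly's folder,
        // so point the working directory back at it explicitly.
        Directory.SetCurrentDirectory(TestContext.CurrentContext.TestDirectory);
    }
}
```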
We have a number of utilities that are shared across products; one such tool is called CodeHygiene, which provides base test classes for checking various things, including NuGet package versions in our solution. Unfortunately, for reasons, this utility uses NUnit 3.11.0, not 3.12.0. Our test packages using CodeHygiene therefore failed to run, as the utility was expecting a different assembly version of NUnit. When we downgraded the affected test packages to NUnit 3.11.0, we instead got test failures due to competing versions of NUnit in our solution.
Our solution for now was therefore to downgrade NUnit from 3.12.0 to 3.11.0 for all packages in our solution. This fixes the immediate issue, but it does not address the wider problem of cross-package dependencies arising from inheriting test classes from external packages, which is something we should think about further.
Much of the process of upgrading NUnit was straightforward — breaking changes are well documented, and simply running the tests revealed the issues to fix. Most of these fixes were simple, but the nature of upgrading legacy code meant that some more complex refactorings were necessary.
The main pain-points came from undocumented issues, bugs or Redgate-specific architectural challenges. We found that using an incremental approach to tackling undocumented issues was crucial. Fix one failing test, then fix the next one.
To give us confidence that we had not lost tests, we also maintained a spreadsheet during the process documenting the number of tests failing, passing and ignored for each build process. Reasons for changes in test numbers were identified and noted, enabling us to move on, confident that test coverage was not adversely impacted.