Part 1 is here – where I introduce Testing Strategies.
Unit testing is the single most important test suite within ANY application. It is the first line of defense against defects and is paramount to giving developers confidence that their changes do not break existing logic. For this reason, unit tests are (or should be) the most numerous type of test authored for a system. High-performing teams run them often as a verification step and keep the runs as fast as possible to save time. By doing so and building confidence, they are able to achieve ever higher levels of efficiency and quality.
What do we Unit Test?
This is perhaps the single most important and common question you will hear from teams, or discuss within your own. Making the right decision here is critical to the long-term success of the project and to preventing quality and performance issues from dragging your teams down.
As a fundamental rule, we do not unit test external dependencies: database calls, network calls, or any logic that touches something outside the process. Our unit test runs need to be repeatable (idempotent) so that we can run them as often as we like without worrying about disk space, data pollution, or other external factors.
Second, the focus must be on a unit of code. Our tests do not test multi-step processes; they test a single path through a unit of code. The need for a unit test to be complex is often an indicator of a code smell: either the logic is overly complicated and needs refactoring, or the test itself is wrong and should be broken down or moved to a different form of testing, such as integration tests.
Finally, we should test known conditions for external dependencies through the use of mocking. By using a mocking library we can ensure that code remains resilient and that our known error cases are handled. Further, a mocking library often pushes us toward design by contract, which can improve the readability of our code.
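As a sketch of what this looks like in practice, the following uses the Moq library (one of several mocking options in .NET); IEmailSender and NotificationService are hypothetical types invented for illustration:

```csharp
// Hypothetical dependency and unit under test
public interface IEmailSender
{
    void Send(string address, string body);
}

public class NotificationService
{
    private readonly IEmailSender _sender;

    public NotificationService(IEmailSender sender) => _sender = sender;

    // Swallows the known failure case and reports it via the return value
    public bool TryNotify(string address, string body)
    {
        try
        {
            _sender.Send(address, body);
            return true;
        }
        catch (InvalidOperationException)
        {
            return false;
        }
    }
}

public class NotificationServiceTests
{
    [Fact]
    public void TryNotify_ReturnsFalse_WhenSenderThrows()
    {
        // Arrange - force the known error case via the mock
        var sender = new Mock<IEmailSender>();
        sender.Setup(s => s.Send(It.IsAny<string>(), It.IsAny<string>()))
              .Throws<InvalidOperationException>();
        var service = new NotificationService(sender.Object);

        // Act
        var result = service.TryNotify("user@example.com", "hello");

        // Assert
        Assert.False(result);
    }
}
```

Note that the test exercises our error-handling logic without ever touching a real email system.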
Making the wrong choice – a story from the past
I worked with a team in a past life that made the wrong choice when it came to their testing. As part of an effort to improve quality, the client (astutely) asked the team to ensure testing was being done against database and networking calls. Leaders on the team, whether through poor knowledge of testing or poor decision making, opted to work these tests into the unit test library. Over the course of the project, this caused the test run time to grow to more than 40 minutes.
One of the critical elements of high-functioning teams is fast feedback. We want developers to get immediate feedback when something breaks. Unit tests are a core part of achieving this, and their speed is paramount to the team's effectiveness. What happens when you allow test times to balloon as described? Disaster.
When the turnaround time is that long, developers will seek ways to avoid incurring the cost (the pressure to get work done remains). Generally this means not writing tests (to avoid increasing the run time further), running them minimally (get the work done and test at the end), or turning them off entirely. None of these options improve efficiency and, in fact, they make an already bad problem that much worse.
In this case, the team adopted a branching model that called for entire features to be developed in a “feature” branch before merging. In any development environment we always want to minimize “drift”, that is, differences between master and any branch. The less drift, the fewer merge conflicts and the quicker problems are discovered.
By not understanding this principle, the team unknowingly compounded the problem. In some cases these “features” would be in flight for 10+ days, creating enormous amounts of drift. And because the team was trying to avoid running the tests too often, the changes were not being checked regularly. As you can imagine, issues were persistently found near the end of sprints, as code was merged, and the size of the incoming changes made debugging a massive task.
This created more problems for the beleaguered team, as they were routinely forced to spend time after hours debugging and trying to finish features before the end of the sprint. Burnout was rampant, and team members became jaded with one another and with the company itself – they endured this for 10+ months. While the project ultimately did complete, the client relationship was ruined and several good developers left the company.
To be clear, the bad choices around testing were not the sole cause of this failure; there were numerous other problems. However, I have found that even a difficult client can be assuaged if code quality is maintained and the team delivers. I can recall a team that I led where we had unit testing and continuous delivery processes in place such that, even though we had delays and bugs, we could respond quickly – the client remained delighted and kept working with us.
The lesson here is that, no matter what, we MUST ensure the development team has the tools needed to support automation processes. These processes form the core of the ability to deliver and lend themselves to building healthy, sustainable client relationships.
How do I write a Unit Test?
So, now that you have an understanding of what can be unit tested, let’s talk about how to write the tests. First, I want to introduce the AAA pattern: Arrange, Act, Assert. Checking yourself against this pattern as you write your tests will surface the warning signs of bad unit tests.
- Arrange: In this step we “arrange” the unit, that is, we do everything needed to prepare for executing it. Be wary at this level if the steps to arrange feel too cumbersome; it likely indicates that your design needs refactoring
- Act: In this step we “invoke” the unit. This executes the code we are specifically testing. Be wary at this level if more than two executions are necessary; that means you are NOT testing a unit and your design needs to be re-evaluated. Remember, we do not test multi-part flows with unit tests.
- Assert: In this step we check the outcome of our unit. The important thing is to assert only on the minimum amount of information needed to verify the unit. I have seen teams assert on 20+ properties of an object; this is excessive. Think carefully about what indicates a failure. My rule of thumb is never more than three asserts; if you need more, create another test.
Here is an example of a simple math problem under unit test:
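A minimal xUnit sketch of this; the static Calculator class is a hypothetical stand-in for the unit under test:

```csharp
// Hypothetical unit under test
public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsTheSumOfTwoNumbers()
    {
        // Arrange
        var numberOne = 2;
        var numberTwo = 3;

        // Act
        var result = Calculator.Add(numberOne, numberTwo);

        // Assert
        Assert.Equal(5, result);
    }
}
```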
As you can see, in this example we define our two variables (numberOne and numberTwo) in the arrange section, then invoke our add operation in the act section, and finally assert that the value meets our expectations.
The [Fact] attribute is part of the xUnit testing library. xUnit is a popular open source testing framework commonly used with .NET Core. There are other libraries available; using one for unit testing makes great sense and will greatly aid your productivity. Below are a few of the common ones in the .NET ecosystem:
- nUnit (https://nunit.org/) – the grand-daddy of them all. Based on JUnit from Java and one of the first unit testing frameworks devised for .NET
- MSTest – Microsoft’s testing framework. It offers the same functionality as nUnit and is built into .NET Framework
- xUnit – as mentioned above, similar to nUnit in functionality and aimed at supporting testing in an OS-agnostic programming world. This is my default
The next common problem is organization. For an application with thousands, if not tens of thousands (or more) of tests, it quickly becomes apparent that a clear and consistent strategy must be adopted. Over the course of my career I have seen many different approaches but the one I favor is the given/when/then naming convention, mainly because it plays very well with most test reporters. Here is an example.
Imagine we have defined the following Web API Controller:
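Something like the following, where TodoController is a hypothetical (and deliberately trivial) ASP.NET Core example:

```csharp
// Hypothetical Web API controller for illustration
[ApiController]
[Route("api/[controller]")]
public class TodoController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // Oversimplified: returns a hard-coded list for demonstration only
        return Ok(new[] { "Walk the dog", "Buy milk" });
    }
}
```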
In this case we might define our test fixture (that is the class that contains our test) as such:
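For a hypothetical TodoController, the fixture might look like:

```csharp
// The class name forms the "Given" half of the sentence the reporter prints
public class GivenAnInstanceOfTodoController
{
    // Tests for TodoController go here
}
```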
Notice the name of the class here: while it violates traditional C# naming conventions, the test runner will prefix your method names with it. Therefore, if we expand this to include a test like so:
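A sketch, assuming a hypothetical TodoController whose Get() action returns Ok(...) wrapping two items:

```csharp
public class GivenAnInstanceOfTodoController
{
    [Fact]
    public void WhenGetIsCalled_ThenTwoItemsAreReturned()
    {
        // Arrange
        var controller = new TodoController();

        // Act
        var result = controller.Get() as OkObjectResult;

        // Assert - note we are asserting on the returned values here
        var items = Assert.IsType<string[]>(result.Value);
        Assert.Equal(2, items.Length);
    }
}
```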
The above example is deliberately oversimplified and ONLY for demonstration purposes. When unit testing controllers, the emphasis needs to be on the result types returned, NOT values. Testing the outcome of operations should be done with unit tests against services. The above represents code that violates the separation of concerns principle.
With this in place, if we run a test runner and view the results in the reporter we will see the following:
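For a hypothetical fixture named GivenAnInstanceOfTodoController containing a test method WhenGetIsCalled_ThenTwoItemsAreReturned, a reporter such as Visual Studio's Test Explorer would render something like:

```
✓ GivenAnInstanceOfTodoController.WhenGetIsCalled_ThenTwoItemsAreReturned
```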
As you can see, the advantage of this strategy is that it lines up nicely and produces a readable English sentence detailing what the test is doing. There are other strategies but, as I said, this is my go-to in most cases due to its readability and how well it scales.
Further, it bakes in a necessary check to ensure unit tests are not checking too much. As a rule, the assert portion should never contain the word “and”, as that implies more than one thing is being checked, which violates the unit principle.
How do I test external dependencies?
The short answer is: you don’t. You generally write integration tests (the next part in this series) to cover those interactions. However, given the speed and the criticality of the logic checked by unit tests, we want to maximize their reach as best we can.
A classic example of this case is Entity Framework. If you have worked with Entity Framework you will be familiar with the DbContext base class, which handles querying our underlying database. As you might expect, our unit tests should NEVER invoke this context directly, not even the InMemory version, but we do need to ensure our logic built on the context works properly. How can we achieve this?
The short answer is: we can define an interface which exposes the necessary methods and properties of our context and have our classes take a dependency on this interface rather than on the concrete context class itself. In doing so, we can use mocking libraries to mock the context, allowing testing against these lower-level classes.
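A minimal sketch of the idea; ITodoContext, TodoContext, Todo, and TodoService are hypothetical names:

```csharp
public class Todo
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Expose only what callers actually need from the context
public interface ITodoContext
{
    DbSet<Todo> Todos { get; }
    int SaveChanges();
}

// The concrete Entity Framework context implements the interface
public class TodoContext : DbContext, ITodoContext
{
    public DbSet<Todo> Todos { get; set; }
}

// Consumers depend on the interface, so tests can supply a mock instead
public class TodoService
{
    private readonly ITodoContext _context;

    public TodoService(ITodoContext context) => _context = context;
}
```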
The long answer is, honestly, an entire blog post (Learning Tree has a good write up that uses NSubstitute here) that I will try to add on later.
This strategy of using interfaces also allows us to break dependencies on static components. In older versions of ASP .NET it was common for applications to use the HttpContext.Current property to reference the incoming request context. But because this property was static, it could not be unit tested directly (it would always be null unless running in the web context).
Using the interface approach, we commonly saw things like this:
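A sketch of the pattern; the IContextAccessor name comes from this discussion, while the wrapper and controller are hypothetical:

```csharp
// The static HttpContext.Current is hidden behind an interface
public interface IContextAccessor
{
    string GetUserName();
}

// Only this thin wrapper ever touches the static property
public class CurrentHttpContextAccessor : IContextAccessor
{
    public string GetUserName() => HttpContext.Current.User.Identity.Name;
}

// The controller depends on the interface, not the static property
public class ProfileController : Controller
{
    private readonly IContextAccessor _contextAccessor;

    public ProfileController(IContextAccessor contextAccessor)
        => _contextAccessor = contextAccessor;

    public string CurrentUserName() => _contextAccessor.GetUserName();
}
```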
Using this approach, the controller, which will have unit tests, depends on the injected IContextAccessor interface instead of HttpContext. This is crucial, as it allows us to write code like the following:
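For example, assuming a hypothetical ProfileController whose CurrentUserName() method simply delegates to the injected IContextAccessor, a test using Moq might look like:

```csharp
public class GivenAnInstanceOfProfileController
{
    [Fact]
    public void WhenCurrentUserNameIsCalled_ThenTheAccessorValueIsReturned()
    {
        // Arrange - no real HttpContext is ever touched
        var accessor = new Mock<IContextAccessor>();
        accessor.Setup(a => a.GetUserName()).Returns("jdoe");
        var controller = new ProfileController(accessor.Object);

        // Act
        var userName = controller.CurrentUserName();

        // Assert
        Assert.Equal("jdoe", userName);
    }
}
```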
This code validates that our logic is correct, but it does NOT validate that HttpContext gets built properly at runtime. That is not our responsibility; it belongs to the author of the framework (Microsoft in this case).
This brings up a very clear and important point when writing tests: some tests are NOT yours to write. It is not on your team to validate that, for example, Entity Framework works properly, or that a request through HttpClient works – these components are already (hopefully) being tested by their authors. Going down this road will not lead you anywhere where the tests drive value.
A final point
The final point I would like to make about testing, and this is especially true with .NET, is that tests should ALWAYS be synchronous and deterministic. Parallel code needs to be broken down into its discrete pieces and those pieces tested individually. Trying to unit test parallel code risks introducing “flakiness” into your suite: tests that pass sometimes and fail other times.
.NET developers commonly use the async/await syntax in their code. It’s very useful and helpful; however, when running unit tests, it needs to be forced down a synchronous path.
We do not test external dependencies, so the use of async/await should not be needed for ANY test. Our dependencies should be mocked and will thus return instantaneously.
Doing this is quite easy: we can call the GetAwaiter and GetResult methods, which force the resolution of the returned Task. Here is an example:
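A sketch, again using Moq; ITodoRepository and TodoQueryService are hypothetical names for an async dependency and the service built on it:

```csharp
// Hypothetical async dependency and unit under test
public interface ITodoRepository
{
    Task<int> CountAsync();
}

public class TodoQueryService
{
    private readonly ITodoRepository _repository;

    public TodoQueryService(ITodoRepository repository) => _repository = repository;

    public Task<int> GetCountAsync() => _repository.CountAsync();
}

public class GivenAnInstanceOfTodoQueryService
{
    [Fact]
    public void WhenGetCountAsyncIsCalled_ThenTheMockedCountIsReturned()
    {
        // Arrange - the mock completes instantly, no real I/O occurs
        var repository = new Mock<ITodoRepository>();
        repository.Setup(r => r.CountAsync()).ReturnsAsync(3);
        var service = new TodoQueryService(repository.Object);

        // Act - GetAwaiter().GetResult() forces the Task to resolve synchronously
        var count = service.GetCountAsync().GetAwaiter().GetResult();

        // Assert
        Assert.Equal(3, count);
    }
}
```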
By calling GetAwaiter() and GetResult() we force the call to complete synchronously. This is important since, in some cases, the asserts might otherwise run BEFORE the actual call completes, resulting in test flakiness.
The most important thing is not just to test but also to be fast
Hopefully this post has shown you some of the ways you can cover logic around databases, async calls, and other complex scenarios with unit tests. This is important: given their speed, it makes sense to use unit tests to validate wherever possible.
One of the uses that I did not cover in depth here is “call spying”, where the mocking framework “tracks” how many times a method is called, which can serve as another way to assert.
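For completeness, a minimal sketch of call spying using Moq's Verify; IEmailSender and NotificationService are hypothetical types where the service forwards each notification to the sender:

```csharp
[Fact]
public void WhenTryNotifyIsCalled_ThenTheSenderIsInvokedExactlyOnce()
{
    // Arrange
    var sender = new Mock<IEmailSender>();
    var service = new NotificationService(sender.Object);

    // Act
    service.TryNotify("user@example.com", "hello");

    // Assert - "spy" on the mock's call count rather than on a return value
    sender.Verify(s => s.Send("user@example.com", "hello"), Times.Once);
}
```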
But the most important thing I hope to impress upon you is the need not only to build unit tests alongside the application, but also to continually watch that they remain fast enough for your developers to perform validation on a consistent, ongoing basis.
The next topic which I intend to cover will focus on Integration Tests, primarily via API testing through Postman.