Category: Unit Testing


I am by no means a regular blogger, but the last time I put pen to paper (or fonts to screen) I suggested that the .NET unit testing cycle was just too long. This became painfully apparent to me recently while working on a project with thousands of code files, where compiling a test project could take several minutes. Obviously, underpowered workstations took their share of the blame, but I also wondered whether the fact that a whole load of unnecessary files had to be compiled to create the testing assembly was part of the problem.

When going through the red-green-refactor loop we typically run a few select tests over and over again, not the whole test suite. This made me wonder: do we really need to build the whole testing assembly (and, more importantly, all referenced projects) just to run a single test?

The fairly simplistic idea I had was to parse a single test file and determine only the dependencies required to make it build. I would achieve this by looking at the solution file and working out where the code files for those dependencies live on the file system.
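To make the idea concrete, here is a minimal sketch of what that dependency discovery might look like. This is purely illustrative and not the code from the spike; in particular, the assumption that a root namespace maps directly onto a project of the same name is naive:

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

// Hypothetical sketch: work out which projects in the solution a single
// test file pulls in, by matching its using directives against the
// project names declared in the .sln file.
public static class DependencyFinder
{
    // Matches .sln lines like:
    // Project("{GUID}") = "MyProject", "MyProject\MyProject.csproj", "{GUID}"
    static readonly Regex ProjectLine = new Regex(
        @"Project\(""\{[^}]*\}""\)\s*=\s*""(?<name>[^""]+)"",\s*""(?<path>[^""]+)""");

    static readonly Regex UsingLine = new Regex(
        @"^\s*using\s+(?<ns>[\w\.]+)\s*;", RegexOptions.Multiline);

    public static IEnumerable<string> FindDependentProjects(string testFile, string solutionFile)
    {
        var namespaces = UsingLine.Matches(File.ReadAllText(testFile))
                                  .Cast<Match>()
                                  .Select(m => m.Groups["ns"].Value);

        var projects = ProjectLine.Matches(File.ReadAllText(solutionFile))
                                  .Cast<Match>()
                                  .ToDictionary(m => m.Groups["name"].Value,
                                                m => m.Groups["path"].Value);

        // Naive assumption: a using directive's root namespace maps
        // directly onto a project of the same name in the solution.
        return namespaces.Select(ns => ns.Split('.')[0])
                         .Where(projects.ContainsKey)
                         .Select(root => projects[root])
                         .Distinct();
    }
}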

Well, while preparing for an upcoming trip, I have had a little time off work, and I thought I would use it to knock up a really quick and dirty spike to see whether the idea was actually sensible to implement. The result is some really nasty code with very sparse test coverage and some extremely poor design decisions. That said, I also managed to prove that (given the required time) I could probably make the .NET testing cycle a little less painful for myself.

My spike can be found at http://github.com/chrisjowen/WellItCouldWork, and the result is a small console app that takes the path of a test file and its corresponding solution file. Whenever the test file is saved, the app parses it to find any dependencies referenced in the file, builds a temporary assembly, and runs the NUnit test runner against this little assembly:
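The loop itself is simple enough to sketch in a few lines. Again, this is a hedged illustration rather than the repository's actual code; it compiles only the test file plus an NUnit reference, where the real tool would also compile in the dependency sources resolved above:

using System;
using System.CodeDom.Compiler;
using System.Diagnostics;
using System.IO;
using Microsoft.CSharp;

// Illustrative sketch of the watch/compile/run loop: whenever the test
// file is saved, rebuild a minimal assembly and hand it to nunit-console.
class Watcher
{
    static void Main(string[] args)
    {
        string testFile = Path.GetFullPath(args[0]);

        var watcher = new FileSystemWatcher(Path.GetDirectoryName(testFile),
                                            Path.GetFileName(testFile));
        watcher.Changed += (s, e) =>
        {
            // The real tool would also pass the dependency source files here.
            string assembly = BuildTempAssembly(testFile);
            Process.Start("nunit-console.exe", assembly);
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching {0}; press Enter to quit.", testFile);
        Console.ReadLine();
    }

    static string BuildTempAssembly(params string[] sources)
    {
        var options = new CompilerParameters
        {
            OutputAssembly = Path.Combine(Path.GetTempPath(), "quicktests.dll"),
            ReferencedAssemblies = { "nunit.framework.dll" }
        };
        var results = new CSharpCodeProvider().CompileAssemblyFromFile(options, sources);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Temp assembly failed to compile.");
        return options.OutputAssembly;
    }
}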

[Screenshot: console output from the spike's test run]

As I said, this is just a spike, so the code isn't great, but when I get back from my travels I will look at doing a serious, production-ready rewrite, as I honestly think this would be a useful tool in the .NET world. Any thoughts?


We are now at a point where testing has gone through the stage of being the new kid on the block and come out the other end a proven engineering practice. Many developers are now seeing improved code quality and a greater feeling of confidence in what they release.

But with all this goodness we still have a few teething issues with the frameworks that support these paradigms. Not to mention that, without the shiny gloss surrounding testing, we're feeling a severe lack of the "ooooh" factor. Enter a few new and old faces in the OOP community to spice up testing frameworks (and hopefully resolve a few problems with existing libraries).

#TestEx (Sharp Tests Extensions)

One of the great things about unit tests is that they double up as a great source of documentation for any developer looking at the code. As with any documentation, though, it is only any good if you can easily understand it. Traditional unit testing frameworks such as NUnit or MSTest use a standard set of static method calls against an assertion class, such as:

Assert.AreEqual("so", something.SubString(2));
Assert.AreEqual("ing", something.SubString(something.Length-3, something.Length));
Assert.That(something.Contains("meth"));

While this code isn’t exactly cryptic, it does take a little bit of concentration to figure out what’s happening. #TestEx is brought to us by Fabio Maulo (among others) and adds a series of extension methods that work with various unit test frameworks. What this gives us is tests that read much more fluently and clearly, for example:

something.Should()
   .StartWith("so")
   .And
   .EndWith("ing")
   .And
   .Contain("meth");

Behaviour Driven Tests

One of the other complaints about existing testing frameworks is that it’s difficult to tie a series of tests back to the desired system behaviour. Several articles have been published on the topic of BDD, and now some pretty interesting testing frameworks are coming out of the woodwork, such as NBehave and SpecFlow.

These frameworks take the approach that you first write your user stories (and acceptance criteria), then write clear tests that specifically meet those criteria. This may not sound much different from previous approaches, but the key difference is that these frameworks totally cater for this scenario and all but force you down this path.

For example, SpecFlow dictates that we write our stories in a business-readable, domain-specific language (Gherkin) that both our clients and the framework understand, as sketched below.

[Screenshot: a SpecFlow feature file]
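To give a flavour, here is a minimal sketch using SpecFlow's standard TechTalk.SpecFlow binding attributes; the withdrawal scenario and class names are invented for illustration. The plain-text story lives in a .feature file, and each of its steps binds to a C# method:

using NUnit.Framework;
using TechTalk.SpecFlow;

// The business-readable story lives in a .feature file:
//
//   Feature: Account withdrawal
//     Scenario: Customer withdraws cash
//       Given my account balance is 100
//       When I withdraw 40
//       Then my account balance should be 60

[Binding]
public class WithdrawalSteps
{
    private int balance;

    [Given(@"my account balance is (\d+)")]
    public void GivenMyAccountBalanceIs(int amount)
    {
        balance = amount;
    }

    [When(@"I withdraw (\d+)")]
    public void WhenIWithdraw(int amount)
    {
        balance -= amount;
    }

    [Then(@"my account balance should be (\d+)")]
    public void ThenMyAccountBalanceShouldBe(int expected)
    {
        Assert.AreEqual(expected, balance);
    }
}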

What this essentially means is that we can write tests at a feature level that our clients can verify for correctness. How about that for the shiny “ooooh” factor?