How to set up a .NET Development Tree Part 5

In the last part we started using NAnt to automate a build for our project. In this part we’ll add some more build functionality.

When we added the compile target we used the <solution> task to compile our solution. However, we also specified which ‘Build Configuration’ to use. Build Configurations are a Visual Studio feature that allows you to build your project in different ways. The most common difference is between ‘Debug’ and ‘Release’ (two configurations that Visual Studio always creates for you.) With a Debug build, the Visual Studio compiler is configured to create the .pdb files we use for debugging (they give us line numbers in exception stack traces, that kind of thing.) The ‘Release’ configuration doesn’t generate these files, but the assemblies it produces are geared more towards production than development.
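As a rough sketch (the exact flags depend on your project settings), the two configurations correspond to C# compiler invocations along these lines:

```bat
rem Debug: symbols on, optimisations off
csc /debug+ /optimize- /out:bin\Debug\SycamoreConsole.exe *.cs

rem Release: no .pdb files, optimisations on
csc /debug- /optimize+ /out:bin\Release\SycamoreConsole.exe *.cs
```

You don’t normally invoke csc directly like this – Visual Studio (and the NAnt <solution> task) does it for you – but it shows what the configuration actually changes.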

However, there are a whole bunch of other things you can configure for different build configurations. Right-click on a project in Visual Studio, select Properties, then look at everything that appears under ‘Configuration Properties’ – all of those items can change for different Build Configurations. We’re interested in the ‘Output Path’ property, and I’ll explain why.

When we tell NAnt to compile the Debug Build Configuration of our solution, it tries to invoke the C# compiler to produce all the files that appear under the bin\Debug folder for each VS Project. There’s a problem with this though – if we already have the Solution open in Visual Studio, VS can hold locks on those files. That means that our NAnt compile will fail since it can’t overwrite the assemblies. In any case, it would be cleaner if we could separate our ‘automated’ build from our ‘interactive’ build.

Thankfully, Build Configurations let us do this and still use the <solution> task. We do this by creating a new Build Configuration which we will just use for automated builds, and change where it outputs its files to.

To do this for Sycamore, I open up Visual Studio’s ‘Configuration Manager’ (right-click on the Solution, choose ‘Configuration Manager’), and create a new configuration (open the drop-down menu, select ‘<New…>’). I’m going to call the new configuration AutomatedDebug and copy settings from the ‘Debug’ configuration (leave the ‘create new project configuration(s)’ box checked.) Close the dialog, and then bring up the properties for ‘SycamoreConsole’. Select the ‘Build’ section under ‘Configuration Properties’, and make sure ‘AutomatedDebug’ is selected in the Configuration drop-down. Select the ‘Output Path’ box and change its value to ‘..\..\build\Debug\SycamoreConsole’. Finally, switch Visual Studio back to the normal ‘Debug’ configuration, which we use for interactive builds.
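For reference, here’s roughly what this adds to the SycamoreConsole project file (this is the VS.NET 2003-era format, and I’ve trimmed most of the attributes – don’t edit this by hand, let the IDE do it):

```xml
<Config
    Name = "AutomatedDebug"
    DebugSymbols = "true"
    OutputPath = "..\..\build\Debug\SycamoreConsole\"
/>
```

The point is simply that the new configuration lives alongside ‘Debug’ and ‘Release’ in the project file, with its own output path.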

Finally, edit the build script, and change the ‘configuration’ argument of the <solution> task to be AutomatedDebug. It should now look like this:

<target name="compile">
    <solution solutionfile="src\Sycamore.sln" configuration="AutomatedDebug" />
</target>

So what have we actually done here? If you run NAnt, you should see the following lines in your build output:

compile:

[solution] Starting solution build.
[solution] Building 'SycamoreConsole' [AutomatedDebug] ...

This tells us that NAnt is using the new Build Configuration. Now, look in the build\Debug\SycamoreConsole folder – you should see our compiled .exe file (and a .pdb file since we are compiling with debug options.)

That tells us what is happening, but why have we put these files in this directory? We use the build folder as another of our ‘top level’ project folders. It will contain all the build artifacts (assemblies, test reports, etc.) that we produce in the automated build. It will not contain any files that aren’t generated by the build, so we don’t need to check it into Source Control, and we can safely delete it whenever we want. Under build we will have a number of sub-folders; so far we’ve created one called Debug that will contain all of our Debug compilation artifacts. We put the artifacts for each VS Project in its own folder, with the same name as the VS Project it belongs to.
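So after a compile, the build folder looks something like this (the exact contents depend on your project and its references):

```
build\
    Debug\
        SycamoreConsole\
            SycamoreConsole.exe
            SycamoreConsole.pdb
```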

I said we could safely delete this folder, so let’s add another NAnt target that will do this:

<target name="clean">
    <delete dir="build" if="${directory::exists('build')}"/>
</target>
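As a convenience, you can chain clean and compile into a ‘from scratch’ target. The target name here is my own choice, not something NAnt mandates:

```xml
<target name="rebuild" depends="clean, compile" />
```

Running this target guarantees that everything under build was produced by the current compile, with no stale artifacts left over.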

I also said we didn’t need to check the build folder into Source Control, so we can also add it to our list of excluded files. With Subversion, I do this by editing the svn:ignore property of the project root folder.
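From the command line, that looks something like this (you can also use `svn propedit svn:ignore .` to edit the property interactively):

```
svn propset svn:ignore "build" .
svn commit . -m "Ignore the build output folder"
```

After this, build and its contents no longer show up as unversioned files in `svn status`.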

Finally for this part, we’re going to create a batch file that developers can use to kick off the build. It’s very simple, containing just the following line:

@tools\nant\NAnt.exe -buildfile:Sycamore.build %*

I like calling this file ‘go.bat’ since the targets in the build script tend to be ‘action’ type words. Since it’s closely associated with the build script, put it in the project root. Note that we specify which build script to use – change this for your project. To use this file, pass the target to run as an argument; so to delete the build folder, just enter go clean.

Note that this batch file really is just meant as a bootstrap for convenience. I’ve seen plenty of projects use a combination of batch files and NAnt / Ant scripts to configure a build system. This is a bad idea for several reasons:

  • Batch files are significantly less manageable and less powerful than NAnt, and tend to get very ‘hacky’ very quickly.
  • Your build behaviour really is one distinct concept and NAnt can handle all of it – splitting it across technologies isn’t necessary.
  • Don’t go down the road of having multiple batch files to launch builds for different environments. I’ve yet to see a project that managed to pull this off in a clean, manageable way. Apart from anything else it is redundant, and introduces more manual work and possibilities for error. Instead, use logic in your NAnt script to use different property values for different environments (hopefully I’ll get on to build configuration and refactoring concepts in the future.)
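To illustrate that last point, here’s a sketch of how the build script itself (rather than extra batch files) can select a configuration. The property name and the ‘AutomatedRelease’ configuration are my own inventions for this example:

```xml
<!-- Default configuration; overridable from the command line -->
<property name="project.config" value="AutomatedDebug" overwrite="false" />

<target name="compile">
    <solution solutionfile="src\Sycamore.sln" configuration="${project.config}" />
</target>
```

A developer would then run go -D:project.config=AutomatedRelease compile to build a different configuration, with no extra batch files involved.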

If you run your default target, it should still be successful. If you have all your ignored files and directories set up correctly, you should have four files to commit – the build script, the build script launcher (go.bat), the solution, and the VS Project for SycamoreConsole. I’m going to check in these changes and call it a day for this part.

The current state of Sycamore is available here.

To summarise this part:

  • Use a top-level, transient folder called build as the target folder of your automated build.
  • Create a new Visual Studio Build Configuration for your automated NAnt Builds. This Build Configuration should output to build.
  • Set up a clean target to delete your transient files.
  • Create a simple build bootstrap batch file.
  • Don’t put any kind of build logic in this build bootstrap – leave that all in the NAnt build script.

In the next part we’ll start to add some unit tests.

Classifying Tests

I’m not really a ‘testing guy’, but I am an ‘agile guy’ so I do write and use automated tests during development. I find that there’s a lot of confusion about what a ‘unit test’ is, what an ‘acceptance test’ is, and that kind of thing. Most of the time people have their own feelings about what these different classifications of test are and when talking to others assume that they have the same idea. This is frequently not the case.

So just in case you ever talk to me about tests, and we forget to come to a common shared language about them, here’s what I might mean by the various types of tests. Note that these are mostly ‘development test’ definitions, not ‘QA test’ definitions. I’m not saying they’re right, they’re just what I think they mean. (Bret and Brian – please forgive me. 🙂 )

A unit test tests the smallest component in a system. In an object oriented system, this is a class. It might also be an individual stored procedure, JavaScript function, whatever. A unit test should not assert (or rely on) any behaviour in the system other than that of the component it is testing.

At completely the other end of the scale, an acceptance test checks a behaviour that ‘the customer’ has said the application should exhibit in its environment. This may be a relationship between different parts of the application’s domain, an expected output for a set of inputs in the UI, performance criteria, or interactions using a shared technological resource (such as a shared database or messaging infrastructure.) The important thing about an acceptance test is that it should not assert anything about the implementation of the application.

Ideally, acceptance tests should be automated, but sometimes the cost of doing this is too high to justify the value.

I like any application I work on to have both of these types of test. Acceptance tests are about showing the application does what it should do, to the extent of what I (as a developer) have been told it should do. Unit tests are about helping develop new parts of the system.

Somewhere between these two I define functional and integration tests.

To me, a functional test is used to assert the behaviour of an individual component within the application. Typically this would be a pretty small component, consisting of smaller components that have each been unit tested. The point of functional tests is to group together some overall behaviour for a (developer defined) part of the system.

An integration test is like a functional test, but exercises a complete sub-system of the application in its target environment, whereas a functional test would typically run in a test-specific environment (e.g. out of container.) If the application consists of several services (for example), an integration test in my vocabulary would test each of these services in their deployed environment with a set of example scenarios.

To me, functional and integration tests as defined here are not strictly necessary for the successful development of an application, but can help development by showing problems in a more easily diagnosable form. That said, if you have a failing functional or integration test and no failing acceptance test you are either missing an acceptance test or are implementing functionality that is not required.

And I have no idea what a system test is. Maybe someone can tell me. 🙂