
Tree Surgeon 1.0

This week I released ‘version 1’ of Tree Surgeon, my application that will generate a .NET development tree for you.

The main updates since 0.1 are a GUI, an installer, and an update of the included version of NAnt.

Over time, I plan on adding some more patterns, including versioning, continuous integration support and distributable generation.

On Subversion

I’ve recently started using the Subversion Source Control system a lot more. It has really become pretty stable now, and I feel happy telling people that they should consider migrating to Subversion from CVS (or in fact pretty much any other Source Control tool) as soon as they can. Atomic checkins, offline features, scalability, useful and easy tags and branching, and lots of other good stuff all make it a compelling tool.

One thing that has helped Subversion recently is the improved documentation available. An example of this is Mike Mason‘s new book Pragmatic Version Control Using Subversion. I should say right now that I’m an old friend and colleague of Mike’s and also reviewed the book, so I am completely biased, but I think it’s a brilliant tutorial for getting started with Subversion, and a handy reference to have around.

Mike’s also recently helped me out with a couple of things that weren’t in the book, so I wanted to share them here.

Firstly, if you’ve been following my blog recently you’ll know I’ve been using Cygwin a lot more. One of the things that Mike does very well is show how powerful and easy the svn command line client is. I was using the standard Windows download of Subversion with Cygwin, but this is apparently a really bad idea: apart from not using the right kind of paths, you can also end up with file corruption.

Anyway, this is easy to change – open up the Cygwin installer and choose to install its version of Subversion – it’s in the Devel category. Cygwin will automatically update your path so that the Cygwin version of svn is used rather than the one you previously installed. You’ll also need to repeat any changes you made to your Subversion config file – the Cygwin version lives in the ~/.subversion/ directory. If you’ve already set up PuTTY, Plink and Pageant to manage your SSH identity this will still work – my config file has one change to support this:

[tunnels]
ssh = $SVN_SSH /c/tools/putty/PLINK.EXE

The second thing is to do with deletes. During the course of a development episode I might well delete some old files. When I run svn status, those files show up with a ! status, meaning they have been removed by something other than a Subversion request. In fact I’ll get output a bit like this:

$ svn st
A      src/Core/Generators/NewGenerator.cs
!      src/Core/Generators/StandardDotNetUpperCaseGuidGenerator.cs
!      src/Core/Generators/IGuidGenerator.cs

Typically I then manually run an svn rm command for each of these, but this gets a little tedious after you’ve done it more than once. Since I’m using Cygwin, I can use a little scripting to get the shell to do it for me:

$ svn st | awk '$1 == "!"' | cut -c 2- | xargs svn rm
D         src/Core/Generators/StandardDotNetUpperCaseGuidGenerator.cs
D         src/Core/Generators/IGuidGenerator.cs

Now I don’t want to have to remember to do this every time, so I add this to the .profile file in my Cygwin home directory:

alias svnrmdeleted='svn st | awk '\''$1 == "!"'\'' | cut -c 2- | xargs svn rm'

Now, to remove all deleted files from Subversion, I just type svnrmdeleted.

Update – Matt Ryall came back with the update to the above, using awk rather than grep (otherwise you might end up deleting files with actual “!”s in their names). He also grappled with the escaping required to get the alias to work. (Thanks Matt!)

Update – Dan Bodart has since told me how to do this in plain Windows shell (Thanks Dan!):

  • for /f "usebackq tokens=2" %i in (`"svn st | findstr !"`) do svn rm %i

So, in summary – go use Subversion! Even if you are mostly a Windows user, try using its command line, and ask some friendly UNIX- (or Windows Script-) savvy people how to automate some things you do often.

Automated SSH authentication on Windows

I use a few remote UNIX servers. Some host web content, some are Source Control repositories. All of them I access using SSH either for an interactive shell, or as a tunnel for applications like Subversion, CVS or rsync.

A few months ago when I started committing to projects on Codehaus I had to set up an SSH key-pair, since they don’t allow plain password authentication for their SSH server. This was actually good since I’d been meaning to switch to key-based authentication for a while but hadn’t quite got around to it. The main reason for using key-pairs is the extra security – you have to ‘bring something’ (your private key) as well as just ‘know something’ (either a password, or the passphrase of your private key). However, there is an added benefit to key-pairs: once you have set them up in any one ‘session’ you don’t have to keep re-entering a password. (A session here is usually a Windows, or X-Windows, login session.)

It took me a while to get all of that set up though. I wanted to use PuTTY since this was before I’d started using the command line in anger. PuTTY actually makes this kind of thing pretty easy through its Plink (RSH implementation), PuTTYgen (key generator) and Pageant (key authentication agent) programs, but the problem was around key formats. PuTTY by default saves a public key that won’t work on an OpenSSH remote server. After some head banging and half an hour with a friendly CodeHaus Despot or two on an IRC channel, we managed to get it working. The key (haha!) was to do the following in PuTTYgen:

  • Use SSH2 DSA keys
  • Don’t use the public key file that is saved, but instead use the contents of the box at the top of the window. Yes, the one that says ‘Public key for pasting into OpenSSH authorized_keys file’ that I should have used straight away 🙂 (There’s a sketch of what ends up on the server below.)
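For reference, here’s a rough sketch of what that paste becomes on the server – the key material and comment below are truncated, made-up values, and the whole key is a single line appended to ~/.ssh/authorized_keys:

ssh-dss AAAAB3NzaC1kc3MAAACBA...rest-of-pasted-key... mroberts@home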

Then it was just a matter of setting up a saved PuTTY session (including my user name) and adding my key to Pageant. You do have to remember to use PuTTY for an interactive login the first time you connect to a server, so that it can save a copy of the server’s key locally.

Using Plink works fine for command line Subversion (see my earlier post for my [tunnels] setup), but today I hit a problem using it with rsync. Cygwin’s rsync seems to want to use Cygwin’s ssh, and Plink just doesn’t seem to play ball. ‘No problem’, I thought, ‘UNIX must have an equivalent of Pageant’. Indeed it does – it’s called ssh-agent. Using this helpful page I found the required incantation, but hit a problem in that it wouldn’t accept the passphrase on my private key. After a couple of minutes I realised that it was another formatting problem which PuTTYgen could solve for me. All up then, being able to use my PuTTY-generated private key with Cygwin required the following steps:

  • Load private key in PuTTYgen
  • From the Conversions menu, select Export OpenSSH key
  • Save it as a file called id_dsa in the ~/.ssh directory. On my machine that is equivalent to c:\Documents and Settings\mroberts\.ssh\id_dsa
  • Add the following to my ~/.profile file : alias startssh="eval \`ssh-agent\` ; ssh-add"
  • Add the following to my ~/.profile file : alias stopssh="ssh-agent -k"

Now I just run startssh and stopssh around any time I want to do some rsync work. It’s not perfect, since right now I need to start up ssh-agent for every Cygwin prompt, and I also need to stop it before I exit the prompt, otherwise the window will hang. There’s probably some hackery that can be done using a Windows Service, but I’ll save that investigation for another day.
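As a rough sketch of how a session looks with those aliases (the remote host, paths and output here are made up for illustration):

$ startssh
Agent pid 2384
Enter passphrase for /home/mroberts/.ssh/id_dsa:
Identity added: /home/mroberts/.ssh/id_dsa
$ rsync -avz -e ssh site/ mroberts@some.remote.host:public_html/
$ stopssh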

Goodbye Web Forms

Today I committed the final changes to a large chunk of CruiseControl.NET work that I’m rather proud of, namely removing all Web Forms code from the Dashboard Web App.

‘What??’ you may cry. ‘You’ve stopped using ASP.NET??’ No, I’ve just stopped using Web Forms. Web Forms are those .aspx files you write and their code-behinds. They’re also the things that use server controls, view state and other such components that make up most .NET web apps. I’m still using the ASP.NET runtime, in the form of an IHttpHandler. This is a much more lightweight way of developing web apps, similar to Java’s Servlets.
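To give a flavour of what that looks like, here’s a minimal sketch of an IHttpHandler (the class name and output are illustrative – this isn’t CruiseControl.NET’s actual code):

using System.Web;

public class HelloHandler : IHttpHandler
{
    // Called once per request: read what we need from the request, then write
    // the response (which is essentially just a string) directly.
    public void ProcessRequest(HttpContext context)
    {
        string name = context.Request.QueryString["name"];
        context.Response.ContentType = "text/html";
        context.Response.Write("<html><body>Hello " + name + "</body></html>");
    }

    // Tells ASP.NET that one instance of this handler can serve multiple requests
    public bool IsReusable
    {
        get { return true; }
    }
}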

So why have I done this and thrown away the incredibly rich System.Web.UI namespace? Well, for a number of reasons, but chiefly because of testability and simplicity.

Web Forms are hard things to unit test. Basically, you can’t. This is because of how closely tied all Page implementations are to the ASP.NET framework. To introduce testability you have to keep your code-behinds very thin, but once you’ve got a few controls on your page this is tricky. Also, any logic that you put in the .aspx file itself is even harder to test, and this includes any templates, grid setup or whatever.

ASP.NET Web Forms also seem incredibly complex to me. The Page class alone is a pretty big beast, with all those events going on under the hood. And don’t even start me on Data Grids and Data Binding. Easy to set something up in a prototype, yes, but simple to maintain? I’m not convinced. Fundamentally, web apps should be simple. You have a request with a bunch of parameters and you produce a response which is (for most intents and purposes) a string. Now I know that Web Forms are supposed to introduce a new model above all this stuff, but I don’t think the abstraction works particularly well once you get beyond a prototype.

So anyway, I decided to try to get rid of Web Forms. I’ve evolved a new web framework, based a little on WebWork. It has an ultra-simple front controller and is based on decorated actions. Views are just HTML strings, but I’m using NVelocity to generate them. I’m using an IHttpHandler to process requests, and at the moment I’m overriding the .aspx extension to be handled by my custom handler, not the Web Forms handler.
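Overriding the .aspx extension is just configuration. A sketch of the relevant web.config section might look like this (the handler type and assembly names are illustrative):

<configuration>
    <system.web>
        <httpHandlers>
            <!-- send all .aspx requests to the custom front controller instead of the Web Forms handler -->
            <add verb="*" path="*.aspx" type="MyWebFramework.FrontControllerHandler, MyWebFramework" />
        </httpHandlers>
    </system.web>
</configuration>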

Will this be any use outside of CruiseControl.NET? I’m not sure – I might just be going off on one. But that said, a good number of my Java-developing colleagues in ThoughtWorks have migrated from Struts to WebWork, for reasons similar to why I’ve moved away from Web Forms. Is any of my code re-usable? I think so – indeed I hope to spin off the web framework as a separate open source project – but the point is that it is possible to write perfectly decent web applications in .NET without using Web Forms.

Finally, I’d like to give significant kudos to Joe Walnes. He wrote some of WebWork, and badgered me to think about using it as the basis of a new web framework for .NET. He also introduced me to the ideas of using IHttpHandlers as the entry point for such a custom framework and of overriding .aspx handling to avoid reconfiguring IIS.

find and grep

So remember my new toy – the command line? Well, I didn’t just drop it after a few days.

I don’t know about anyone else, but the Search Tool in Windows XP really bugs me. I’m not just talking about that stupid dog wagging its tail all the time (OK, I’m a cat person, so maybe I’m biased) – I’m talking about how the results always seem a bit odd, and never really tell me what I want.

Sometimes I want to know all files in a folder tree with ‘Widget’ as part of their name. Sometimes I want to know all files in a folder tree with ‘Widget’ as part of their content.

Ladies and gentlemen, may I present you with find and grep. Simply put your Cygwin prompt into the directory you want to start at, and:

  • find . -name '*Widget*' – searches file names
  • grep -r Widget * – searches file content

Simple, clean, fast, effective. And no dumb tail wagging noises coming out of my speaker.

Introducing Tree Surgeon

If you’ve been following my ‘How to setup a .NET Development Tree’ series, you might have been encouraged to start work on a new development tree. If you are, then you might want to check out Tree Surgeon. It’s a new Open Source project I’ve started which will create a complete development tree for you, parameterized for your needs. At the moment the only parameter is the project name, but that should be enough to get you going.

Please use the mailing lists, or mail me, if you have any opinions about Tree Surgeon. Happy chopping!

How to setup a .NET Development Tree Part 7

Last time we left our code with a dependency on a 3rd party library, multiple internal modules (VS Projects), and a passing test. Great! But how do we know the test passes? At the moment it requires us to have our ‘interactive hat’ on. It would be much better if we knew just by running our automated build. So let’s do that.

Before we start, here is the current state of our build script:

<project name="nant" default="compile" xmlns="http://nant.sf.net/schemas/nant.xsd">

<target name="clean">

<delete dir="build" if="${directory::exists('build')}"/>

</target>

<target name="compile">

<solution solutionfile="srcSycamore.sln" configuration="AutomatedDebug" />

</target>

</project>

We’re going to add a test target. Here’s our first cut:

<target name="test">

<exec program="nunit-console.exe" basedir="toolsnunit" workingdir="buildDebugUnitTests">

<arg value="SherwoodForest.Sycamore.UnitTests.dll" />

</exec>

</target>

Here we are using an <exec> task to run the NUnit console application that’s already in our development tree (that was handy, wasn’t it? That’s why we left all the NUnit binaries in our tree). Some projects will use the <nunit> or <nunit2> tasks to run their tests from a build script, but this requires your versions of NAnt and NUnit to be in sync. Personally, I think the <exec> call looks pretty clean, so I’m happy to use that rather than the tighter NUnit integration. It also means that if we later update one of these 2 tools we don’t have to worry about breaking this part of our build script.

The slightly tricky thing here is getting our directory specifications right. <exec>‘s basedir attribute is the location of the actual .exe we want to run, and workingdir is the directory we want to run the application in. What might catch you out is that workingdir is relative to your NAnt base directory, not to the basedir attribute in the task specification.

Try running this target by entering go test from a command prompt in the project root. Did it work? What if you try go clean test? The problem is that we need to compile our code before we test it. NAnt supports this kind of problem through the depends target attribute and the <call> task. Now we are entering the realm of much disagreement between build script developers. 🙂 Which is the best option? And how should it be used? If you’re new to NAnt, you’ll probably want to skip the next few paragraphs.

depends specifies that for a target to run, all the targets in the depends list must have run already. If they haven’t, they will be run first, and then the requested target will run. <call> is much more like a traditional procedure call. So surely <call> is the best option, since we all know about procedure calls, right? Well, maybe, but the problem is that depends is really quite a clean way of writing things, especially when our script has multiple entry points. Also, traditionally, the behaviour of ‘properties’ has been a little strange when using <call>. depends, though, can get messy if every target has 7 different dependencies.

So, for better or worse, here’s my current advice on this subject:

  1. Use depends as the primary way of defining flow in your build script.
  2. If a target has a depends value, don’t give it a body. In other words a target should have task definitions, or dependencies, but not both. This is to try and get away from the ‘dependency explosion’ that Ant / NAnt scripts tend towards.
  3. Use <call> only for the equivalent of an extract method refactoring. <call>ed targets should never have dependencies. Think very carefully about properties when using <call>. (There’s a small sketch of these rules just below.)
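As a rough illustration of those three rules (the target names here are invented, not part of the Sycamore script):

<!-- a public entry point: declares flow via depends only, and has no body -->
<target name="dist" depends="compile, run-unit-tests, package" description="Full build" />

<!-- a target that does real work: has tasks, but no depends of its own -->
<target name="package">
    <call target="copy-binaries" />
</target>

<!-- an 'extract method' style target: only ever <call>ed, and has no dependencies -->
<target name="copy-binaries">
    <copy todir="build\package">
        <fileset basedir="build\Debug">
            <include name="**/*.dll" />
        </fileset>
    </copy>
</target>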

We’ll put this hot potato back on the fire now.

(Paragraph skippers, join back in here.) So back to our test target. What we want to say is that running the unit tests depends on compiling the code. So we’ll add the attribute depends="compile" to the test target tag.

<target name="test" depends="compile" />

<exec program="nunit-console.exe" basedir="toolsnunit" workingdir="buildDebugUnitTests">

<arg value="SherwoodForest.Sycamore.UnitTests.dll" />

</exec>

</target>

Now we’re mixing up our dependencies and tasks though, breaking rule 2 above. We’ll use an extract dependency target refactoring to split the target into 2 (note the second dependency on the test target):

<target name="test" depends="compile, run-unit-tests"

description="Compile and Run Tests" />

<target name="run-unit-tests">

<exec program="nunit-console.exe" basedir="toolsnunit" workingdir="buildDebugUnitTests">

<arg value="SherwoodForest.Sycamore.UnitTests.dll" />

</exec>

</target>

There’s something else we’ve done here – we’ve added a description to the test target. This is important – you should use the convention that targets with a description value are runnable by the user. If a user tries running a target without a description then that’s down to them – they should be aware that the script may fail since dependencies have not been run. Users can easily see all the ‘public’ targets in a build script by running go -projecthelp (the ‘main’ targets, as NAnt calls them, are our public targets).

OK, we can run our tests, but where are the results? What we’d actually like is to use NUnit’s XML output so that results can be picked up by another process, such as CruiseControl.NET. Let’s put this XML output somewhere in the build folder, since it’s another one of our build artifacts. We’ll update the run-unit-tests target as follows:

<target name="run-unit-tests">

<mkdir dir="buildtest-reports" />

<exec program="nunit-console.exe" basedir="toolsnunit" workingdir="buildDebugUnitTests">

<arg value="SherwoodForest.Sycamore.UnitTests.dll" />

<arg value="/xml:....test-reportsUnitTests.xml" />

</exec>

</target>

We used the /xml: parameter for NUnit, and made sure the report output directory already existed.

One more thing, and then we’ll be done. We already introduced the idea of a build script refactoring above when we split up the test target. If you look at the current state of the build script though, you’ll see there’s plenty of scope for another refactoring – ‘introduce variable’, or introduce script property as we’ll call it in the build script world. Look at all those places where we use the folder name build. Let’s put that in a script property called build.dir. Now our script looks like this:

<project name="nant" default="test" xmlns="http://nant.sf.net/schemas/nant.xsd">

<property name="build.dir" value="build" />

<!-- User targets -->

<target name="test" depends="compile, run-unit-tests"

description="Compile and Run Tests" />

<target name="clean" description="Delete Automated Build artifacts">

<delete dir="${build.dir}" if="${directory::exists(property::get-value('build.dir'))}"/>

</target>

<target name="compile" description="Compiles using the AutomatedDebug Configuration">

<solution solutionfile="srcSycamore.sln" configuration="AutomatedDebug" />

</target>

<!-- Internal targets -->

<target name="run-unit-tests">

<mkdir dir="${build.dir}test-reports" />

<exec program="nunit-console.exe" basedir="toolsnunit" workingdir="${build.dir}DebugUnitTests">

<arg value="SherwoodForest.Sycamore.UnitTests.dll" />

<arg value="/xml:....test-reportsUnitTests.xml" />

</exec>

</target>

</project>

A lot of people will introduce a script-level property whenever they introduce a new directory, file, etc. I advise you not to do this in your build script development since (I think) it hinders maintainability. Treat your build script like well maintained code – do the simplest thing that works, but refactor mercilessly. In terms of introduce script property, you should really only do it once the same piece of information is used by multiple targets. For example, a lot of people would introduce a src.dir property on principle, and in our case it would have the value src. But what would that gain us? In our build script we only ever use that directory name once, so it’s simpler just to leave it as a literal in the call to <solution>.

Notice in the last example we also added descriptions to all the targets we want to be public, and split the file up into (effectively) public and private targets. XML is not the cleanest language to develop in, but by thinking about simplicity and readability you can make your build scripts more maintainable.

To summarise this part:

  • Use the <exec> task to call NUnit within your build script.
  • Use targets that just specify dependencies to create flow within your build script.
  • Don’t use dependencies with targets that specify tasks
  • Split your targets into ‘public’ and ‘private’ targets by giving public targets a description.
  • Use build script refactorings to simplify the structure of your NAnt file.
  • Don’t introduce unnecessary script properties

How to setup a .NET Development Tree Part 6

By now we have some source code checked in to our Source Control server. It’s got a structured folder hierarchy, and we’re being careful about how we check specific files in (and ignore others). We’re combining Visual Studio and NAnt to have a simple yet powerful automated build that works closely with the changes we make during interactive development.

So far though we only have 1 source file and shockingly no tests. We need to change this.

To do this we’re going to create 2 new assemblies – one application DLL, and one DLL for unit tests. .NET won’t allow you to use .exe assemblies as references for other projects, so a unit test DLL can only reference another DLL. It’s slightly off-topic, but for this reason I try to keep my .exe projects as small as possible (because any classes in them can’t be unit tested) and have nearly all code in a DLL.
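As a quick sketch of what keeping the .exe small means in practice (the class names below are made up for illustration, not part of the Sycamore code), the console entry point just delegates straight to a class in the application DLL, where the logic can be unit tested:

using SherwoodForest.Sycamore.Core;

namespace SherwoodForest.Sycamore.SycamoreConsole
{
    public class EntryPoint
    {
        public static void Main(string[] args)
        {
            // all the real work happens in a (hypothetical) Core class,
            // so it can be covered by tests in the UnitTests DLL
            SycamoreApplication application = new SycamoreApplication();
            application.Run(args);
        }
    }
}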

So let’s create our new Application DLL. I’m going to call it Core. Following the conventions we set down in part 2, the VS Project Folder is stored in src and we change the default namespace to SherwoodForest.Sycamore.Core. Before closing the Project Properties window though, there are 2 more things to change.

Firstly, for DLLs I like to use the naming convention that the Assembly has the same name as the default namespace. Also, following what we did in the previous part, create an ‘AutomatedDebug’ configuration, based on the ‘Debug’ Configuration, except with the output path of ..\..\build\Debug\Core. Make sure your Solution build configurations are all mapped correctly. We won’t need the ‘Class1’ which VS automatically creates, so delete it.

We follow exactly the same procedure for our Unit Test DLL, giving the VS Project the (not particularly original, nevertheless informative) name of UnitTests. Save everything and make sure you can compile in Visual Studio and using your build script.

Before we write a test, we need to set up our project with NUnit. There are a few hoops to go through here, but we only have to do it once for our project. Firstly, download NUnit – I’m going to be using NUnit 2.2.2 for this example. Download the binary zip file, not the MSI. While it’s downloading, open up your Global Assembly Cache (or GAC) – it will be in C:\Windows\Assembly, or somewhere similar. Look to see if you have any NUnit assemblies in it. If you do, try to get rid of them by uninstalling any previous versions of NUnit from your computer.

Why are we worrying about not using the GAC and MSIs? Well, for pretty much the same reasons we gave for NAnt, we want to use NUnit from our development tree. The problem is that if we have any NUnit assemblies in the GAC, they will take priority over the NUnit in our development tree. We could go through being explicit about the versions of NUnit each assembly requires, but that’s a lot of hassle. It’s easier just not to make NUnit a system-wide tool, and this means getting it out of the GAC. (Mike Two, one of the NUnit authors, is probably going to shoot me for suggesting all of this. If you want to make NUnit a system tool then that will work too, you just have a few more hoops to jump through.)
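If you have the .NET Framework SDK installed, a quick way to check is to run gacutil from a Visual Studio command prompt – if it lists anything, NUnit is still in the GAC:

gacutil /l nunit.framework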

By now your NUnit download should be complete. Extract it, take the bin folder and put it next to the nant folder in your project’s tools folder. Rename it to nunit.

To create test fixtures in our UnitTests VS Project, we need to reference the nunit.framework assembly. This introduces a new concept – that of third party code dependencies. To implement these, I like to have a new top-level folder in my project root called lib. Do this in your project and copy the nunit.framework.dll file from the NUnit distribution to the new folder. Once you’ve done that, add lib\nunit.framework.dll as a Reference to your UnitTests project.

Because of the previous step we now have the same file (nunit.framework.dll) copied twice in our development tree. It’s worth doing this because it gives us a clear separation between code dependencies (in the lib folder) and build-time tools (in the tools folder). We could delete the entire tools folder and the solution would still compile in Visual Studio. This is an example of making things clean and simple. It uses more disk space, but remember what we said back in Part 1 about that?
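To recap where everything lives at this point, the development tree looks roughly like this (contents abbreviated):

<project root>\
    go.bat
    Sycamore.build
    lib\            code dependencies (nunit.framework.dll)
    src\            Sycamore.sln plus the Core, UnitTests and SycamoreConsole VS Project folders
    tools\
        nant\       build-time tools
        nunit\
    build\          transient build output – not checked in to Source Control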

So finally we can actually write a test! For Sycamore, I’m going to add the following as a file called TreeRecogniserTest.cs to my UnitTests project:

using NUnit.Framework;
using SherwoodForest.Sycamore.Core;

namespace SherwoodForest.Sycamore.UnitTests
{
    [TestFixture]
    public class TreeRecogniserTest
    {
        [Test]
        public void ShouldRecogniseLarchAs1()
        {
            TreeRecogniser recogniser = new TreeRecogniser();
            Assert.AreEqual(1, recogniser.Recognise("Larch"));
        }
    }
}

To implement this, I add Core as a Project Reference to UnitTests and create a new class in Core called TreeRecogniser:

namespace SherwoodForest.Sycamore.Core
{
    public class TreeRecogniser
    {
        public int Recognise(string treeName)
        {
            if (treeName == "Larch")
            {
                return 1;
            }
            else
            {
                return 0;
            }
        }
    }
}

I can then run this test by using TestDriven.NET within the IDE, or by using the NUnit GUI and pointing it at src\UnitTests\bin\Debug\SherwoodForest.Sycamore.UnitTests.dll. The tests should pass in either case.

If we run our automated NAnt build, everything should compile OK, and you should be able to see each of the VS Projects compiling in their AutomatedDebug Build Configuration. The tests aren’t run yet, but that’s what we’ll be looking at next time. Even so, we are still at a check-in point. We have 2 new project folders to add, but remember the exclusion rules (*.user, bin and obj). Being a Subversion command-line user, I like to use the -N (non-recursive) flag of svn add so that I can set the svn:ignore property before all the temporary files get added.
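As a rough sketch of that sequence (the exact file names will vary – these are just the obvious ones from this part):

$ svn add -N src/Core src/UnitTests
$ svn propset svn:ignore "*.user
bin
obj" src/Core src/UnitTests
$ svn add src/Core/Core.csproj src/Core/TreeRecogniser.cs
$ svn add src/UnitTests/UnitTests.csproj src/UnitTests/TreeRecogniserTest.cs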

Also, don’t forget to check in tools\nunit and the new lib folder.

The current state of Sycamore is available here.

So let’s wrap up this part then. We covered some new generic principles about projects and dependencies. We also looked at the specifics of using NUnit. Some concrete points to take away are:

  • Set DLL Names to be the same as the default namespace
  • Put your Unit Tests in a separate VS project called UnitTests
  • Save NUnit in your development tree in its own folder under tools
  • Put all DLLs your code depends on in a top level folder called lib. The only exceptions are system DLLs such as .NET Framework Libraries.

How to setup a .NET Development Tree Part 5

In the last part we started using NAnt to automate a build for our project. In this part we’ll add some more build functionality.

When we added the compile target we used the <solution> task to compile our solution. However, we also specified which ‘Build Configuration’ to use. Build Configurations are a Visual Studio feature that allows you to build your project in different ways. The most common difference is between ‘Debug’ and ‘Release’ (two configurations that Visual Studio always creates for you). With a Debug build, the compiler is configured to create the .pdb files we use for debugging (they give us line numbers in exception stack traces, that kind of thing). The ‘Release’ configuration doesn’t generate these files, but it does produce assemblies more geared towards production than development.

However, there are a whole bunch of other things you can configure for different build configurations. Right-click on a project in Visual Studio, select Properties, then look at everything that appears under ‘Configuration Properties’ – all of those items can change for different Build Configurations. We’re interested in the ‘Output Path’ property, and I’ll explain why.

When we tell NAnt to compile the Debug Build Configuration of our solution, it tries to invoke the C# compiler to produce all the files that appear under the bin\Debug folder for each VS Project. There’s a problem with this though – if we already have the Solution open in Visual Studio, VS will have locks on those files once they reach a certain size. That means that our NAnt compile will fail since it can’t overwrite the assemblies. Anyway, it would be cleaner if we could separate out our ‘automated’ build from our ‘interactive’ build.

Thankfully, Build Configurations let us do this and still use the <solution> task. We do this by creating a new Build Configuration which we will use just for automated builds, and changing where it outputs its files.

To do this for Sycamore, I open up Visual Studio’s ‘Configuration Manager’ (right click on the Solution, choose ‘Configuration Manager’), and create a new configuration (open the drop-down menu, select ‘<New…>’). I’m going to call the new configuration AutomatedDebug and copy settings from the ‘Debug’ configuration (leave the ‘create new project configuration(s)’ box checked.) Close the dialog, and then bring up the properties for ‘SycamoreConsole’. Select the ‘Build’ ‘Configuration Properties’ section, and make sure ‘AutomatedDebug’ is selected in the Configuration drop-down. Select the ‘Output Path’ box and change its value to ‘..\..\build\Debug\SycamoreConsole’. Switch Visual Studio back to the normal ‘Debug’ configuration which we use for interactive builds.

Finally, edit the build script, and change the ‘configuration’ argument of the <solution> task to be AutomatedDebug. It should now look like this:

<target name="compile">

<solution solutionfile="srcSycamore.sln" configuration="AutomatedDebug" />

</target>

So what have we actually done here? If you run NAnt, you should see the following lines in your build output:

compile:
[solution] Starting solution build.
[solution] Building 'SycamoreConsole' [AutomatedDebug] ...

This tells us that NAnt is using the new Build Configuration. Now, look in the build\Debug\SycamoreConsole folder – you should see our compiled .exe file (and a .pdb file since we are compiling with debug options.)

That tells us what is happening, but why have we put these files in this directory? We use the build folder as another of our ‘top level’ project folders. It will contain all the build artifacts (assemblies, test reports, etc.) that we produce in the automated build. It will not contain any files that aren’t generated by the build, so we don’t need to check it into Source Control, and we can safely delete it whenever we want. Under build we will have a number of sub-folders; so far we have created one called Debug that will contain all of our Debug compilation artifacts. We put the artifacts for each VS Project in its own folder, with the same name as the VS Project it belongs to.

I said we could safely delete this folder, so let’s add another NAnt target that will do this:

<target name="clean">

<delete dir="build" if="${directory::exists('build')}"/>

</target>

I also said we didn’t need to check the build folder into Source Control, so we can also add it to our list of excluded files. With Subversion, I do this by editing the svn:ignore property of the project root folder.
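From the command line that’s just the following (run from the project root, then add a line saying build to the property in the editor that opens):

$ svn propedit svn:ignore .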

Finally for this part, we’re going to create a batch file that developers can use to kick off the build. It’s very simple, having just the following line:

@tools\nant\NAnt.exe -buildfile:Sycamore.build %*

I like calling this file ‘go.bat’ since the targets in the build script tend to be ‘action’ type words. Since it’s closely associated with the build script, put it in the project root. Note that we specify which build script to use – change this for your project. To use this file, just pass the target to run as an option, so to delete the build folder, just enter go clean.

Note that this batch file really is just meant as a bootstrap for convenience. I’ve seen plenty of projects use a combination of batch files and NAnt / Ant scripts to configure a build system. This is a bad idea for several reasons:

  • Batch files are significantly less manageable or powerful than NAnt, and tend to get very ‘hacky’ very quickly.
  • Your build behaviour really is one distinct concept and NAnt can handle all of it – splitting it across technologies isn’t necessary.
  • Don’t go down the road of having multiple batch files to launch builds for different environments. I’m yet to see a project that managed to pull this off in a clean, manageable way. Apart from anything else it is redundant, and introduces more manual work and possibilities for error. Instead, use logic in your NAnt script to use different property values for different environments, as sketched below (hopefully I’ll get on to build configuration and refactoring concepts in the future).
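A rough sketch of what that might look like (the property names, target name and server paths are invented for illustration):

<property name="deploy.environment" value="dev" overwrite="false" />

<target name="configure-environment">
    <!-- choose per-environment values inside the build script, not in batch files -->
    <property name="web.root" value="\\devserver\wwwroot" if="${deploy.environment == 'dev'}" />
    <property name="web.root" value="\\liveserver\wwwroot" if="${deploy.environment == 'live'}" />
</target>

A developer can then override the default through the bootstrap, for example go deploy -D:deploy.environment=live.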

If you run your default target, it should still be successful. If you have all your ignored files and directories set up correctly you should have 4 files to commit – the build script, the build script launcher (go.bat), the solution, and the VS Project for SycamoreConsole. I’m going to check in these changes and call it a day for this part.

The current state of Sycamore is available here.

To summarise this part:

  • Use a top-level, transient, folder called build as the target folder of your automated build.
  • Create a new Visual Studio Build Configuration for your automated NAnt Builds. This Build Configuration should output to build.
  • Set up a clean target to delete your transient files.
  • Create a simple build bootstrap batch file.
  • Don’t put any kind of build logic in this build bootstrap – leave that all in the NAnt build script.

In the next part we’ll start to add some unit tests.