Blog

Classifying Tests

I’m not really a ‘testing guy’, but I am an ‘agile guy’, so I do write and use automated tests during development. I find there’s a lot of confusion about what a ‘unit test’ is, what an ‘acceptance test’ is, and that kind of thing. Most of the time people have their own feelings about what these different classifications of test are, and when talking to others they assume everyone shares the same idea. This is frequently not the case.

So just in case you ever talk to me about tests, and we forget to come to a common shared language about them, here’s what I might mean by the various types of tests. Note that these are mostly ‘development test’ definitions, not ‘QA test’ definitions. I’m not saying they’re right, they’re just what I think they mean. (Bret and Brian – please forgive me. 🙂 )

A unit test tests the smallest component in a system. In an object oriented system, this is a class. It might also be an individual stored procedure, JavaScript function, whatever. A unit test should not assert (or rely on) any behaviour in the system other than that of the component it is testing.

At completely the other end of the scale, an acceptance test is a behaviour that ‘the customer’ has defined the application should have with the environment around it. This may be a relationship between different parts of the application’s domain, an expected output for a set of input in the UI, performance criteria, or interactions using a shared technological resource (such as a shared database or messaging infrastructure.) The important thing about an acceptance test is it should not assert anything about the implementation of the application.

Ideally, acceptance tests should be automated, but sometimes the cost of doing this is too high to justify the value.

I like any application I work on to have both of these types of test. Acceptance tests are about showing the application does what it should do, to the extent of what I (as a developer) have been told it should do. Unit tests are about helping develop new parts of the system.

Somewhere between these two I define functional and integration tests.

To me, a functional test is used to assert the behaviour of an individual component within the application. Typically this would be a pretty small component, consisting of smaller components that have each been unit tested. The point of functional tests is to group together some overall behaviour for a (developer defined) part of the system.

An integration test is like a functional test, but tests a complete sub-system of the application in its target environment as opposed to functional tests which would typically be tested in a test-specific environment (e.g. out of container.) If the application consists of several services (for example), an integration test in my speak would test each of these services in their deployed environment with a set of example scenarios.

To me, functional and integration tests as defined here are not strictly necessary for the successful development of an application, but can help development by showing problems in a more easily diagnosable form. That said, if you have a failing functional or integration test and no failing acceptance test you are either missing an acceptance test or are implementing functionality that is not required.

And I have no idea what a system test is. Maybe someone can tell me. 🙂

How to setup a .NET Development Tree Part 4

At this point we have a basic Visual Studio solution checked into Source Control. Now it’s time to automate how we build this solution.

Most of the time .NET developers will work solely within the Visual Studio environment, compiling their solution with the in-built compiler, and running tests using TestDriven.NET (more on testing to come…). But relying solely on Visual Studio as a way to produce build artifacts and run your tests isn’t enough. For instance:

  • How do you run scheduled or triggered builds for your project? Using the command line version of Visual Studio (devenv.com) provides you with only basic command line features.
  • Visual Studio’s ‘pre-‘ and ‘post-‘ build events provide some build scripting beyond just compiling code, but such scripting is limited in scope and expressiveness.

The current ‘de-facto’ automated build tool for .NET projects is NAnt. NAnt is based on the Java build tool Ant and has similar strengths (integration with lots of useful tools, few dependencies) and also its weaknesses (being defined in XML means large build scripts quickly get hard to maintain). .NET 2 and Visual Studio 2005 will come with their own build scripting tool, MSBuild, which is very similar to NAnt. Investing in NAnt now should give you a build script you can easily convert to MSBuild later, should you want to.

NAnt is a tool that can be installed on every developer’s machine. However, I like to check NAnt into the project tree for some simple reasons:

  • It saves the manual steps of everyone copying it to their machine, and installing it. (Remember – manual steps take time and are a possible point of error.)
  • NAnt changes between versions, and such changes can affect the behaviour of a build. Making sure that everyone has the same version of NAnt when everyone is manually installing it can be tricky, and is time consuming when you want to upgrade the version of NAnt everyone uses.
  • Many projects use their own ‘custom’ NAnt tasks. Storing these in source control along with the project’s own version of NAnt makes distribution to team members painless.
  • It is not a large tool, so the overhead of storing it in source control should not be a problem.

To add NAnt to your project tree, first download and unpack its binary zip file (I’m going to use NAnt 0.85 RC1, available here.) Then, copy the bin folder to your project directory. I like to put all build-time tools in a sub-folder of my project root called tools, and then put the contents of NAnt’s bin folder in tools\nant. Before going any further, commit NAnt to your project’s source control, making sure to include in the commit message the version of NAnt you are using. Later on, this will help you decide whether you want to upgrade to a new version.
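The resulting layout (folder names from this example – your project root will differ) looks something like this:

```
sycamore\            <- project root
  src\               <- solution and source code
  tools\
    nant\            <- contents of NAnt's bin folder (NAnt.exe, supporting dlls, ...)
```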

You tell NAnt what to do using a build script. The standard for naming NAnt build scripts is ProjectName.build. The build script is a gateway into our project, so I like to save it in the root folder. You can edit your build script with Visual Studio – create it as a ‘solution item’ (Right click on the solution icon in Solution Explorer and choose Add new item… or Add existing item…). If you follow the instructions here and here you’ll even get IntelliSense! (Thanks to Serge van de Oever and Craig Boland for writing it up.)

Our first NAnt build script will just compile our project. There are several ways to do this, and I’m going to use the <solution> task:

<?xml version="1.0" ?>
<project name="Sycamore" default="compile" xmlns="http://nant.sf.net/schemas/nant.xsd">
  <target name="compile">
    <solution solutionfile="src\Sycamore.sln" configuration="debug" />
  </target>
</project>

I like to use the <solution> task for a couple of reasons:

  • For developers to work in Visual Studio we need to define how to compile our project in Visual Studio using its ‘references’ system. <solution> lets us re-use all this work in 1 line of script. If we were to use the <csc> task instead we would need to maintain a separate set of compile definitions (which would be time-consuming and might not match the Solution / VS Project setup).
  • Using <solution> rather than the <exec> task calling out to devenv.com is less resource intensive, gives more appropriate feedback, and also allows us to run builds on machines without Visual Studio installed (it just needs the .NET SDK.) If you have a problem using <solution>, you can always quickly replace it with an <exec> to devenv.com.
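As a sketch, that <exec> fallback might look like the following. The path to devenv.com is an assumption – it depends on your Visual Studio installation:

```
<!-- Fallback target: shell out to Visual Studio instead of using <solution>.
     The devenv.com path is machine-specific; adjust it to your installation. -->
<target name="compile-devenv">
  <exec program="C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\IDE\devenv.com"
        commandline="src\Sycamore.sln /build debug" />
</target>
```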

To run your build, save the build script, open a command prompt and change to your project’s root folder. Then just enter tools\nant\NAnt. You should see output like:

NAnt 0.85 (Build 0.85.1793.0; rc1; 28/11/2004)
Copyright (C) 2001-2004 Gerry Shaw
http://nant.sourceforge.net

Buildfile: file:///c:/devel/sycamore/Sycamore.build
Target(s) specified: compile

compile:

   [solution] Starting solution build.
   [solution] Building 'SycamoreConsole' [debug] ...

BUILD SUCCEEDED

Total time: 0.2 seconds.

Woohoo – a successful build! We have something new that works, so submit the build script (and your changes to the Solution file that include the build script) to source control.

The current state of Sycamore is available here.

To summarise this part:

  • Add an automated build system to your project.
  • Use NAnt to automate your .NET 1.1 and earlier projects.
  • Check the NAnt distribution into your development tree.
  • Create a build script and save it in your development tree.
  • Use the <solution> task to compile your project.

In the next part we’ll be adding some more features to our automated build.

How to setup a .NET Development Tree Part 3

A quick recap. So far we have made sure we have a good source control environment and have created a Visual Studio Solution with a well structured folder setup. But we haven’t checked those files into our Source Control server yet – we’d better fix that.

Your Source Control administrator will probably tell you where to make your initial check-in of your new project, but I suggest you think about simplicity for a moment:

  • If you’re using Perforce, consider using 1 depot as the ‘server side’ equivalent of your meta root. You don’t lose any security options, and you gain in that developers may already have this depot mapped in their client so won’t need to change any source control configuration.
  • If you’re using Subversion, just use one repository for all the projects in your department (see here for a good explanation why.) Use a new directory for your new project (and probably check it in to a ‘trunk’ sub-directory, but you can always move it later.)
  • If you’re using CVS, it’s fairly standard to create a new CVSROOT for each project, and I would recommend it. Note that you’ll have to setup any extra permissions and triggers that you use as standard. I’ve seen organisations make good use of GForge to manage their CVS server.
  • For other Source Control systems, follow similar guidelines.

Once you’ve figured out the source control location of your new project, don’t be too hasty about checking in. It’s worth taking a moment to decide what you actually want to check in. Files you don’t want to include are:

  • Build output folders – don’t check in the bin or obj sub-folders of your VS project folders
  • Any Solution .suo or VS Project .user files – these are user and environment specific and should not be checked in
  • Any Resharper, or other third-party tool output. (Resharper generates a SolutionName.resharperoptions file and a _ReSharper.SolutionName folder, neither of which you need to save)

Not checking these files in is good, but making sure no-one else ever checks them in later by mistake is even better. CVS and Subversion both offer such functionality through .cvsignore files and svn:ignore properties respectively. With Perforce, you can use Triggers, but this is not as elegant a solution.

Moving back to our Sycamore example, I’m going to use a Subversion server to check in our work. First of all I delete all the temporary files we discussed above. Then I’m going to use the svn command line tool, but you could use TortoiseSVN or AnkhSVN instead. My command line looks like:

c:\devel\sycamore>svn import -m "Initial Sycamore Import" . file:///c:/svn-repos/sycamore/trunk

Once the initial checkin is complete I’m going to delete my ‘sycamore’ folder and then checkout from Subversion the folder we just imported to get a local versioned folder. After that I reload the solution in Visual Studio and compile. This recreates the temporary files.

I then set the svn:ignore value for src to be *.suo, _ReSharper.Sycamore and *.resharperoptions. The svn:ignore for VS Project dirs should be set to *.user, bin and obj. You should be able to test you’ve captured everything by doing an svn status in the root folder and only seeing output for merging the properties of the src and VS Project directories. Make sure to commit these property updates.
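Spelled out, the ignore properties end up looking like this (one pattern per line; SycamoreConsole stands in for any VS Project folder):

```
svn:ignore on src:
    *.suo
    _ReSharper.Sycamore
    *.resharperoptions

svn:ignore on src\SycamoreConsole (and every other VS Project folder):
    *.user
    bin
    obj
```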

To see exactly the state of Sycamore as it currently stands, download a zip file from here.

To summarise this part:

  • Pick a Source Control location that is simple for everyone to use.
  • When checking in your project directory, make sure not to include build artifacts or temporary environment files.
  • If possible, configure your Source Control to make sure no-one can check in such files in the future.

In the next part we’ll be adding an automated build for our project using NAnt.

How to setup a .NET Development Tree Part 2

In Part 1 we looked at making sure we had our Source Control story straight. With that sorted out, we can start creating some files to put in it.

First, some terminology: I’m going to use the word ‘Project’ to refer to the thing that all the files in our development tree go to make up. It is more than just a Visual Studio Project. I’m going to use an example project called Sycamore.

Next, I’m going to assume you are using Visual Studio 2003. Pretty much everything we are going to look at will work without Visual Studio, but I’ll assume you have it anyway.

First, we want to make a new folder. We’ll put it in our meta root – a place where you check projects out from Source Control. On my machine, this folder is C:\devel but on your machine (and anyone else’s) it might be different. You should never assume the concrete location of meta roots.

Call the new folder pretty much the same thing as your project. I say ‘pretty much’ since I like to remove capitals and spaces, but it’s really up to you. Our folder will be called sycamore. It is the root of our development tree. All source code for this project will exist somewhere under this root. Any tool or library dependencies that exist outside the scope of this root will have to be managed carefully. There will be no source under this root that belongs to any other project.

To start with we are just going to create a Visual Studio-compilable ‘solution’. A solution contains source code, so we’re going to create a sub-folder of sycamore called src. We will have other sub-directories later, but they will contain other things, so it’s good to separate the source out into its own location.

In src we create our new Visual Studio solution. To make things easy, I’m going to call it Sycamore. Unfortunately, Visual Studio doesn’t make it easy – it wants to put it in another sub-folder called Sycamore, so once I’ve created an empty solution I’ll close it down and move the Sycamore.sln file into the src folder. We can then delete the extra Sycamore folder that Visual Studio created for us.

Next, we create some VS Projects. In this part, I’m going to keep it simple – just a single command line application in one project. We’ll be creating some more projects later. I like projects to have their own folder under src. We never merge VS projects into one folder and never put their ‘project roots’ anywhere other than src.

In Visual Studio I create a C# Console Application called SycamoreConsole. Its location on my machine is C:\devel\sycamore\src, but the location on yours will depend on your meta root. VS creates the project for us and creates a class called Class1.

We’re also going to change some of the project properties. First the Assembly Name. In later parts we’ll talk about dll names, but for console applications, pick something short and obvious. We’re going to call ours sycamore. For the Default Namespace, I like to use the convention OrganisationName.ProjectName.VSProjectName, so in our case I’m going to use the Namespace SherwoodForest.Sycamore.SycamoreConsole. Save these properties and go back to the Class1 window.

First, set the namespace to the one you just configured in the Project properties. Now rename the class to something sensible (we’ll use HelloWorld for now), and don’t forget to rename the file to match. (I recommend you use ReSharper, which will do the file rename for you.) I also like to delete all the unnecessary comments. Sticking with tradition, we’ll add the statement Console.WriteLine("Hello World"); to the Main method. Compile, run and make sure everything works as expected.

We’re done for now. We may be making baby steps, but we are already seeing some definitions and patterns emerge:

  • The root is the upper most point of our development tree.
  • All files belonging to a project exist under the root.
  • No files belonging to any other project exist under the root.
  • The root itself is resident in a meta root which can change from machine to machine.
  • All source code resides under a src sub-folder.
  • The project .sln file is saved in the src folder.
  • All Visual Studio Projects exist in their own sub-folders under the src folder.
  • Visual Studio Project folders are atomic, and should be named identically to the project they contain.
  • The default namespace for a Visual Studio project should be OrganisationName.ProjectName.VSProjectName.
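The folder conventions above can be sketched in a couple of commands (POSIX-style for brevity – on Windows you’d usually let Visual Studio create these; the ‘sycamore’ names are from our example, and your meta root will differ):

```shell
# Sketch of the tree conventions above, using the example names.
mkdir -p sycamore/src/SycamoreConsole   # root -> src -> one folder per VS Project
touch sycamore/src/Sycamore.sln         # the .sln file lives directly in src
```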

In the next part we’ll look at what we have to do to get this project into Source Control.

How to setup a .NET Development Tree Part 1

So, let’s start building our development tree. Feel free to join in. 🙂

The first thing you need is a Source Control environment. This may sound simple, but even at this stage I have seen some strange things happen on projects.

Here are some ‘must haves’:

  • Your source control server must be fast. Your developers are often going to be waiting for your source control to do things, so don’t scrimp on hardware. Specifically:
    • Do use decent, modern, hardware.
    • Don’t use network shares to store files – in my experience it will slow your source control by about 10x. Instead invest in some locally redundant disks (RAID 5 is OK, RAID 0+1 is better), and a backup strategy.
    • Don’t put your Source Control server on the other side of the world from your team. Keep it local, and make sure your network isn’t getting bogged down. Obviously with distributed teams this may not be possible, but if your team isn’t distributed, don’t distribute your hardware.
  • Don’t be tight on hard disk space. Get about as much as you think you might need in 3-5 years. Disk space really is cheap and having lots of it means that people can worry about producing software, and not about whether they are going over quota.
  • Give developers write access to the code they need to work on. If you trust them to write code, you should trust them to be able to edit their own work without having to go through slow processes. Other team’s code may be a different matter.
  • Put each development tree in its own folder under source control – don’t try and ‘save’ work or space by merging them. It really will save you headaches and time. See the ‘hard disk space’ point.
  • Make sure new source control clients can be set up fast and correctly. Document what needs to be done for each project on a Wiki. If your Source Control Client setup takes over 10 minutes, or is more than a page of manual work, change it. If necessary throw away your current Source Control software and start again.
  • Make sure basic source control operations are quick, simple and well understood. All developers should be easily able to do all of the following operations – if they don’t know how or if these processes are cumbersome or slow to execute, then change them (again, if necessary consider changing your Source Control software)
    • Check out from nothing
    • Get updates
    • Find differences between server and local versions
    • Revert local versions
    • Commit changes
  • Your Source Control system must be consistently trustworthy – if developers are losing changes or files are becoming corrupted, fix it.
  • Your Source Control server should support the following more advanced operations, which developers should be able to perform if necessary:
    • Labelling (tagging)
    • Branching (parallel, independent, development of integratable code lines)
    • Automation (be driven by a process, not just a person)


The above points I believe are all necessary for an effective development project. For an ‘excellent’ project I recommend the following:

  • A good source control server can happily accommodate 100 developers. I recommend the following kind of system:
    • UNIX/Linux based – most good Source Control software is written primarily for a UNIX/Linux environment, so running it anywhere else makes you an unsupported edge case.
    • At least dual-CPU (I like the idea of one CPU being able to do work, and one doing I/O, but I’m sure that’s rather a simplistic model these days)
    • At least 1GB RAM – if your often-accessed source is already cached you should get a speed up.
    • Don’t run anything else on the machine apart from a Source Control server. If you do (e.g. source control reporting), invest in extra processors and monitor what impact those extra applications are having.
    • Use 1 disk set for applications and checkpoints/journals, and a separate disk set for your actual data.
  • If cash is fairly easily available in your organisation, use Perforce. I’ve been using it on and off for 4 years now and it never ceases to amaze me how fast and stable it is. It also requires almost zero maintenance.
  • Otherwise use Subversion. It is free, and better than any other SCM system I’ve tried apart from Perforce.
  • If you are using Visual SourceSafe, I strongly urge you to migrate away from it. It is renowned for not being scalable and is also prone to file corruption. If you are not experienced with UNIX, or any other SCM tool apart from VSS, I have heard good things about SourceGear Vault.
  • Use clean and simple setups for your ‘meta’ trees. In Perforce, putting all projects in one ‘depot’ is perfectly reasonable, and use similar ideas for other tools.


If after reading all of this you are thinking ‘Nice ideas, but we don’t have the time or money to do any of this’, then think how much it would really cost you to (say) invest in a new Linux server and Subversion, and how much money you are losing through lack of productivity. It’s also a lot simpler than you think. Why not try out Subversion for half a day with a good book?

In the next part we’ll start looking at some code – stay tuned!

How to setup a .NET Development Tree – Introduction

In the last few weeks I’ve set up 2 brand new development trees for .NET projects. What do I mean by development tree?

  • It is a directory structure
  • containing:
    • source files
    • tools and dependencies
    • references to external tools and dependencies
  • checked into source control
  • that is atomically integratable
  • to produce a set of artifacts

A good development tree should:

  • be easily integratable on new environments
  • require little maintenance
  • but be easily maintainable when it does require maintenance
  • support, but not hamper, developer productivity
  • have consistent behaviour

This is all a bit woolly, but will do for an initial stab. I might come back and refine these points later.

Anyway, I’ve set up quite a few development trees in my time, in Java and .NET. In this series of blog entries I hope to develop a good ‘boilerplate’ development tree structure for .NET projects that other people can use.

If you find it interesting, please email me with your comments.

Using NVelocity

I’ve recently started using NVelocity. It’s a brilliantly simple and powerful templating engine that’s been ported from the Java world.

I was going to explain how it works, and how to use it in your app, but it’s easier just to show you a test and an implementation.

// Requires references to nunit.framework.dll and NVelocity.dll
using System.Collections;
using System.IO;
using NUnit.Framework;
using NVelocity;
using NVelocity.App;
using NVelocity.Runtime;

[TestFixture]
public class MyVelocityTransformerTest
{
    [Test]
    public void ShouldUseVelocityToMergeContextContentsWithTemplate()
    {
        Hashtable contextContents = new Hashtable();
        contextContents["foo"] = "bar";

        MyVelocityTransformer transformer = new MyVelocityTransformer();

        Assert.AreEqual("foo is bar", transformer.Transform("testTransform.vm", contextContents));
    }
}

public class MyVelocityTransformer
{
    public string Transform(string transformerFileName, Hashtable transformable)
    {
        // Configure the engine to load templates from the 'templates' folder
        // and to swallow its log output
        VelocityEngine engine = new VelocityEngine();
        engine.SetProperty(RuntimeConstants_Fields.RUNTIME_LOG_LOGSYSTEM_CLASS, "NVelocity.Runtime.Log.NullLogSystem");
        engine.SetProperty(RuntimeConstants_Fields.FILE_RESOURCE_LOADER_PATH, "templates");
        engine.SetProperty(RuntimeConstants_Fields.RESOURCE_MANAGER_CLASS, "NVelocity.Runtime.Resource.ResourceManagerImpl");
        engine.Init();

        // Merge the context contents with the named template
        string output = "";
        using (TextWriter writer = new StringWriter())
        {
            engine.MergeTemplate(transformerFileName, new VelocityContext(transformable), writer);
            output = writer.ToString();
        }
        return output;
    }
}

This assumes that a file called ‘templates\testTransform.vm’ exists with the following contents:

foo is $foo

The benefit of NVelocity is that it makes generating string content much cleaner than any other method I’ve seen. Times when you would want to use it include code generation, HTML generation, etc.

The Velocity template language is rich enough to make it more powerful than using (say) string.Format() – the #if and #foreach directives are especially useful.
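For example, here’s a hypothetical template using both directives – $customer and $lines are assumed context keys, not anything from the code above:

```
## Hypothetical template: $customer and $lines are assumed context keys
#if( $customer )
Invoice for $customer
#end
#foreach( $line in $lines )
  - $line
#end
```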

As with all tools and libraries, NVelocity should be used where appropriate. I’ve seen Java projects with hideously complex, completely untested templates which are incredibly fragile and hard to debug. My advice is to keep your templates relatively small, and only use directives to go just below the surface of the objects in your context. If necessary, introduce presentation objects and create them in normal, unit testable, code.

End of a Domain

It’s time to say goodbye to my domain names.

5 years ago I bought the domain name ‘transmorphic.co.uk’. At the time I was into reconfigurable computing, the idea of re-defining the hardware in a computer. ‘Transmorphic’ was a made up word, meaning (in my mind) ‘changing’ or ‘beyond’ (trans) ‘shape’ (morphic).

About a year later I became more vain, and wanted my own .com . ‘transmorphic.com’ had gone, but ‘tmorph.com’ was available, so I grabbed it. Apart from anything else it was a good place to host my CV. 🙂

For over a year now though I haven’t been advertising my domain names. Instead I’ve been using mikeroberts.thoughtworks.net for my web identity and, since Gmail launched, my Gmail address for all personal email. Because of this, I’ve allowed my domains to expire. I’ll probably get another one in the future, but for now please use these internet identities to find me.

CruiseControl.NET 0.8 Released

I haven’t been blogging much recently, but here’s an excuse to start off again – CruiseControl.NET 0.8 is released!

There are a couple of breaking changes – make sure to read the release notes. The biggest thing for me about the release is we are now recommending the ‘Web Dashboard’ over the old ‘single project’ web app. I think this is great since people now only need 1 web app instance to monitor and report all of their CCNet projects across all their build servers!

There’s a whole bunch more things coming to CCNet too, but I’ll mention them once they’re released. 🙂 For now, go and get the 0.8 release and try it out!