Goodbye Web Forms

Today I committed the final changes to a large chunk of CruiseControl.NET work that I’m rather proud of, namely removing all Web Forms code from the Dashboard Web App.

‘What??’ you may cry. ‘You’ve stopped using ASP.NET??’ No, I’ve just stopped using Web Forms. Web Forms are those .aspx files you write and their code-behinds. They’re also the things that use server controls, view state and the other such components that make up most .NET web apps. I’m still using the ASP.NET runtime, in the form of an IHttpHandler. This is a much more lightweight way of developing web apps, similar to Java’s Servlets.
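To make this concrete, here’s a minimal sketch of a handler (the class name and content are mine, purely for illustration – this isn’t CruiseControl.NET code). The whole request/response cycle is one method:

    using System.Web;

    public class HelloHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // A request with a bunch of parameters in, a string out
            string name = context.Request.QueryString["name"];
            context.Response.ContentType = "text/html";
            context.Response.Write("<html><body>Hello " + name + "</body></html>");
        }

        // Returning true lets ASP.NET reuse one instance across requests,
        // so the handler mustn't hold per-request state in fields
        public bool IsReusable
        {
            get { return true; }
        }
    }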

So why have I done this and thrown away the incredibly rich System.Web.UI namespace? Well, for a number of reasons, but chiefly because of testability and simplicity.

Web Forms are hard things to unit test. Basically, you can’t. This is because of how closely all Page implementations are tied into the ASP.NET framework. To introduce testability you have to keep your code-behinds very thin, but once you’ve got a few controls on your page this is tricky. Also, any logic that you put in the .aspx file itself is even harder to test, and this includes any templates, grid setup or whatever.

ASP.NET Web Forms also seem incredibly complex to me. The Page class alone is a pretty big beast, with all those events going on under the hood. And don’t even start me on Data Grids and Data Binding. Easy to set something up in a prototype, yes, but simple to maintain? I’m not convinced. Fundamentally, web apps should be simple. You have a request with a bunch of parameters, and you produce a response which is (to all intents and purposes) a string. Now I know that Web Forms are supposed to introduce a new model above all this stuff, but I don’t think the abstraction works particularly well once you get beyond a prototype.

So anyway, I decided to try to get rid of Web Forms. I’ve evolved a new web framework, based a little on WebWork. It has an ultra-simple front controller and is based on decorated actions. Views are just HTML strings, and I’m using NVelocity to generate them. I’m using an IHttpHandler to process requests, and at the moment I’m overriding the .aspx extension to be handled by my custom handler, not the Web Forms handler.
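The .aspx override itself is just configuration. Something like this in Web.config (the type and assembly names here are hypothetical) sends every .aspx request to the custom handler instead of the Web Forms machinery:

    <configuration>
        <system.web>
            <httpHandlers>
                <!-- Route *.aspx to our front controller, not Web Forms -->
                <add verb="*" path="*.aspx"
                     type="MyFramework.FrontControllerHandler, MyFramework" />
            </httpHandlers>
        </system.web>
    </configuration>

Because IIS already maps .aspx to the ASP.NET runtime, this needs no IIS changes at all.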

Will this be any use outside of CruiseControl.NET? I’m not sure – I might just be going off on one. But that said, a good number of my Java-developing colleagues at ThoughtWorks have migrated from Struts to WebWork, for reasons similar to why I’ve moved away from Web Forms. Is any of my code re-usable? I think so; indeed, I hope to spin off the web framework as a separate open source project. But the point is that it is possible to write perfectly decent web applications in .NET without using Web Forms.

Finally, I’d like to give significant kudos to Joe Walnes. He wrote some of WebWork, and badgered me to think about using it as the basis of a new web framework for .NET. He also introduced me to the ideas of using IHttpHandlers as the entry point for such a custom framework and of overriding .aspx handling to avoid reconfiguring IIS.

find and grep

So remember my new toy – the command line? Well, I didn’t just drop it after a few days.

I don’t know about anyone else, but the Search Tool in Windows XP really bugs me. I’m not just talking about that stupid dog wagging its tail all the time (OK, I’m a cat person, so maybe I’m biased) – I’m talking about how the results always seem a bit odd, and never really tell me what I want.

Sometimes I want to know all files in a folder tree with ‘Widget’ as part of their name. Sometimes I want to know all files in a folder tree with ‘Widget’ as part of their content.

Ladies and gentlemen, may I present you with find and grep. Simply put your Cygwin prompt into the directory you want to start at, and:

  • find . -name '*Widget*' – searches file names
  • grep -r Widget * – searches file content

Simple, clean, fast, effective. And no dumb tail wagging noises coming out of my speaker.
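The two also compose nicely. For example:

  • find . -name '*.cs' | xargs grep Widget – searches the content of just the .cs files
  • grep -rl Widget * – lists just the names of the files that match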

Introducing Tree Surgeon

If you’ve been following my ‘How to setup a .NET Development Tree’ series, you might have been encouraged to start work on a new development tree. If so, you might want to check out Tree Surgeon. It’s a new Open Source project I’ve started which will create a complete development tree for you, parameterized for your needs. At the moment those parameters are just the project name, but that should be enough to get you going.

Please use the mailing lists, or mail me, if you have any opinions about Tree Surgeon. Happy chopping!

How to setup a .NET Development Tree Part 7

Last time we left our code with a dependency on a third-party library, multiple internal modules (VS Projects), and a passing test. Great! But how do we know the test passes? At the moment, finding out requires us to have our ‘interactive hat’ on. It would be much better if we knew just by running our automated build. So let’s do that.

Before we start, here is the current state of our build script:

<project name="nant" default="compile" xmlns="http://nant.sf.net/schemas/nant.xsd">

    <target name="clean">
        <delete dir="build" if="${directory::exists('build')}"/>
    </target>

    <target name="compile">
        <solution solutionfile="src\Sycamore.sln" configuration="AutomatedDebug" />
    </target>

</project>

We’re going to add a test target. Here’s our first cut:

<target name="test">
    <exec program="nunit-console.exe" basedir="tools\nunit" workingdir="build\Debug\UnitTests">
        <arg value="SherwoodForest.Sycamore.UnitTests.dll" />
    </exec>
</target>

Here we are using an <exec> task to run the NUnit console application that’s already in our development tree (that was handy, wasn’t it? That’s why we left all the NUnit binaries in our tree). Some projects use the <nunit> or <nunit2> tasks to run their tests from a build script, but this requires your version of NAnt and your version of NUnit to be in sync. Personally, I think the <exec> call looks pretty clean, so I’m happy to use that rather than the tighter NUnit integration. It also means that if we later update one of these two tools, we don’t have to worry about breaking this part of our build script.
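For comparison, here’s roughly what the tighter integration looks like (a sketch using the <nunit2> task – check the documentation for your NAnt version before relying on the details):

    <target name="test">
        <nunit2>
            <formatter type="Plain" />
            <test assemblyname="build\Debug\UnitTests\SherwoodForest.Sycamore.UnitTests.dll" />
        </nunit2>
    </target>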

The slightly tricky thing with <exec> is getting our directory specifications right. Its basedir attribute is the location of the actual .exe we want to run, and workingdir is the directory we want to run the application in. What might catch you out is that workingdir is relative to your NAnt base directory, not to the basedir attribute in the task specification.

Try running this target by entering go test at a command prompt in the project root. Did it work? What if you try go clean test? The problem is that we need to compile our code before we test it. NAnt supports this kind of problem through the depends target attribute and the <call> task. Now we are entering the realm of much disagreement between build script developers. 🙂 Which is the best option? And how should it be used? If you’re new to NAnt, you’ll probably want to skip the next few paragraphs.

depends specifies that for a target to run, all the targets in its depends list must have run already. If they haven’t, they will be run first, and then the requested target will run. <call> is much more like a traditional procedure call. So surely <call> is the best option, since we all know about procedure calls, right? Well, maybe, but depends is really quite a clean way of writing things, especially when our script has multiple entry points. Also, traditionally, the behaviour of ‘properties’ has been a little strange when using <call>. depends, though, can get messy if every target has seven different dependencies.

So, for better or worse, here’s my current advice on this subject:

  1. Use depends as the primary way of defining flow in your build script.
  2. If a target has a depends value, don’t give it a body. In other words, a target should have task definitions or dependencies, but not both. This is to try to get away from the ‘dependency explosion’ that Ant / NAnt scripts tend towards.
  3. Use <call> only for the equivalent of an extract method refactoring. <call>ed targets should never have dependencies. Think very carefully about properties when using <call> (there’s a sketch of this below).
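Here’s a small sketch of what I mean by rule 3 (the packaging targets are hypothetical). The shared steps are extracted into a <call>ed target, and a property acts as its ‘parameter’ – which is exactly why you need to think carefully about properties, since they are global:

    <target name="package-server">
        <property name="package.name" value="server" />
        <call target="zip-package" />
    </target>

    <target name="package-client">
        <property name="package.name" value="client" />
        <call target="zip-package" />
    </target>

    <!-- Never give a <call>ed target dependencies -->
    <target name="zip-package">
        <zip zipfile="build\${package.name}.zip">
            <fileset basedir="build\Debug\${package.name}">
                <include name="**/*" />
            </fileset>
        </zip>
    </target>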

We’ll put this hot potato back on the fire now.

(Paragraph skippers, join back in here.) So back to our test target. What we want to say is that running the unit tests depends on compiling the code. So we’ll add the attribute depends="compile" to the test target tag.

<target name="test" depends="compile">
    <exec program="nunit-console.exe" basedir="tools\nunit" workingdir="build\Debug\UnitTests">
        <arg value="SherwoodForest.Sycamore.UnitTests.dll" />
    </exec>
</target>

Now we’re mixing up our dependencies and tasks, though, breaking rule 2 above. We’ll use an extract dependency target refactoring to split the target into two (note the second dependency on the test target):

<target name="test" depends="compile, run-unit-tests"
    description="Compile and Run Tests" />

<target name="run-unit-tests">
    <exec program="nunit-console.exe" basedir="tools\nunit" workingdir="build\Debug\UnitTests">
        <arg value="SherwoodForest.Sycamore.UnitTests.dll" />
    </exec>
</target>

There’s something else we’ve done here – we’ve added a description to the test target. This is important – you should use the convention that targets with a description value are runnable by the user. If a user tries running a target without a description then that’s down to them – they should be aware that the script may fail since dependencies have not been run. Users can easily see all the ‘public’ targets in a build script by running go -projecthelp (the ‘main’ targets, as NAnt calls them, are our public targets).

OK, we can run our tests, but where are the results? What we’d actually like is to use NUnit’s XML output so that the results can be picked up by another process, such as CruiseControl.NET. Let’s put this XML output somewhere in the build folder, since it’s another one of our build artifacts. We’ll update the run-unit-tests target as follows:

<target name="run-unit-tests">
    <mkdir dir="build\test-reports" />
    <exec program="nunit-console.exe" basedir="tools\nunit" workingdir="build\Debug\UnitTests">
        <arg value="SherwoodForest.Sycamore.UnitTests.dll" />
        <arg value="/xml:..\..\test-reports\UnitTests.xml" />
    </exec>
</target>

We used the /xml: parameter for NUnit, and made sure the report output directory already existed. (Note that the path passed to /xml: is relative to the working directory, build\Debug\UnitTests, which is why it starts with ..\.. to climb back up to build\test-reports.)

One more thing, and then we’ll be done. We already introduced the idea of a build script refactoring above when we split up the test target. If you look at the current state of the build script, though, you’ll see there’s plenty of scope for another refactoring – ‘introduce variable’, or introduce script property as we’ll call it in the build script world. Look at all those places where we use the folder name build. Let’s put that in a script property called build.dir. Now our script looks like:

<project name="nant" default="test" xmlns="http://nant.sf.net/schemas/nant.xsd">

    <property name="build.dir" value="build" />

    <!-- User targets -->

    <target name="test" depends="compile, run-unit-tests"
        description="Compile and Run Tests" />

    <target name="clean" description="Delete Automated Build artifacts">
        <delete dir="${build.dir}" if="${directory::exists(property::get-value('build.dir'))}"/>
    </target>

    <target name="compile" description="Compiles using the AutomatedDebug Configuration">
        <solution solutionfile="src\Sycamore.sln" configuration="AutomatedDebug" />
    </target>

    <!-- Internal targets -->

    <target name="run-unit-tests">
        <mkdir dir="${build.dir}\test-reports" />
        <exec program="nunit-console.exe" basedir="tools\nunit" workingdir="${build.dir}\Debug\UnitTests">
            <arg value="SherwoodForest.Sycamore.UnitTests.dll" />
            <arg value="/xml:..\..\test-reports\UnitTests.xml" />
        </exec>
    </target>

</project>

A lot of people will introduce a script-level property whenever they introduce a new directory, file, etc. I advise you not to do this in your build script development since (I think) it hinders maintainability. Treat your build script like well-maintained code – do the simplest thing that works, but refactor mercilessly. In terms of introduce script property, you should really only be doing it once the same piece of information is used by multiple targets. For example, a lot of people would introduce a src.dir property out of principle, and in our case it would have the value src. But what would that gain us? In our build script we only ever use that directory name once, so it’s simpler just to leave it as a literal in the call to <solution>.

Notice that in the last example we also added descriptions to all the targets we want to be public, and split the file up into (effectively) public and private targets. XML is not the cleanest language to develop in, but by thinking about simplicity and readability, you can make your build scripts more maintainable.

To summarise this part:

  • Use the <exec> task to call NUnit within your build script.
  • Use targets that just specify dependencies to create flow within your build script.
  • Don’t use dependencies with targets that specify tasks.
  • Split your targets into ‘public’ and ‘private’ targets by giving public targets a description.
  • Use build script refactorings to simplify the structure of your NAnt file.
  • Don’t introduce unnecessary script properties.

How to setup a .NET Development Tree Part 6

By now we have some source code checked in to our Source Control server. It’s got a structured folder hierarchy, and we’re being careful about how we check specific files in (and ignore others). We’re combining Visual Studio and NAnt to have a simple yet powerful automated build that works closely with the changes we make during interactive development.

So far, though, we only have one source file and, shockingly, no tests. We need to change this.

To do this we’re going to create two new assemblies – one application DLL, and one DLL for unit tests. .NET won’t allow you to use .exe assemblies as references for other projects, so a unit test DLL can only reference another DLL. It’s slightly off-topic, but for this reason I try to keep my .exe projects as small as possible (because any classes in them can’t be unit tested) and have nearly all code in a DLL.

So let’s create our new application DLL. I’m going to call it Core. Following the conventions we set down in part 2, the VS Project folder is stored in src and we change the default namespace to SherwoodForest.Sycamore.Core. Before closing the Project Properties window, though, there are two more things to change.

Firstly, for DLLs I like to use the naming convention that the assembly has the same name as the default namespace. Also, following what we did in the previous part, create an ‘AutomatedDebug’ configuration, based on the ‘Debug’ configuration, except with an output path of ..\..\build\Debug\Core. Make sure your Solution build configurations are all mapped correctly. We won’t need the ‘Class1’ which VS automatically creates, so delete it.

We follow exactly the same procedure for our unit test DLL, giving the VS Project the (not particularly original, but nevertheless informative) name of UnitTests. Save everything and make sure you can compile both in Visual Studio and using your build script.

Before we write a test, we need to set up our project with NUnit. There are a few hoops to go through here, but we only have to do it once for our project. Firstly, download NUnit – I’m going to be using NUnit 2.2.2 for this example. Download the binary zip file, not the MSI. While it’s downloading, open up your Global Assembly Cache (or GAC) – it will be in C:\Windows\Assembly, or somewhere similar. Look to see if you have any NUnit assemblies in it. If you do, try to get rid of them by uninstalling any previous versions of NUnit from your computer.

Why are we worrying about not using the GAC and MSIs? Well, for pretty much the same reasons as we gave for NAnt, we want to use NUnit from our development tree. The problem is that if we have any NUnit assemblies in the GAC, they will take priority over the NUnit in our development tree. We could go through being explicit about the versions of NUnit each assembly requires, but that’s a lot of hassle. It’s easier just not to make NUnit a system-wide tool, and this means getting it out of the GAC. (Mike Two, one of the NUnit authors, is probably going to shoot me for suggesting all of this. If you want to make NUnit a system tool then that will work too, you just have a few more hoops to jump through.)
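If you’d rather check from the command line than poke around in Explorer, gacutil from the .NET SDK will list any matching assemblies:

    gacutil /l nunit.framework

If that prints any entries, NUnit is in your GAC and needs to come out.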

By now your NUnit download should be complete. Extract it, take the bin folder and put it next to the nant folder in your project’s tools folder. Rename it to nunit.

To create test fixtures in our UnitTests VS Project, we need to reference the nunit.framework assembly. This introduces a new concept – that of third-party code dependencies. To manage these, I like to have a new top-level folder in my project root called lib. Do this in your project and copy the nunit.framework.dll file from the NUnit distribution to the new folder. Once you’ve done that, add lib\nunit.framework.dll as a Reference to your UnitTests project.

Because of the previous step we now have the same file (nunit.framework.dll) copied twice in our development tree. It’s worth doing this because it gives us a clear separation between code dependencies (in the lib folder) and build-time tools (in the tools folder). We could delete the entire tools folder and the solution would still compile in Visual Studio. This is an example of making things clean and simple. It uses more disk space, but remember what we said back in Part 1 about that?

So finally we can actually write a test! For Sycamore, I’m going to add the following as a file called TreeRecogniserTest.cs to my UnitTests project:

using NUnit.Framework;
using SherwoodForest.Sycamore.Core;

namespace SherwoodForest.Sycamore.UnitTests
{
    [TestFixture]
    public class TreeRecogniserTest
    {
        [Test]
        public void ShouldRecogniseLarchAs1()
        {
            TreeRecogniser recogniser = new TreeRecogniser();
            Assert.AreEqual(1, recogniser.Recognise("Larch"));
        }
    }
}

To implement this, I add Core as a Project Reference to UnitTests and create a new class in Core called TreeRecogniser:

namespace SherwoodForest.Sycamore.Core
{
    public class TreeRecogniser
    {
        public int Recognise(string treeName)
        {
            if (treeName == "Larch")
            {
                return 1;
            }
            else
            {
                return 0;
            }
        }
    }
}

I can then run this test using TestDriven.NET within the IDE, or using the NUnit GUI pointed at src\UnitTests\bin\Debug\SherwoodForest.Sycamore.UnitTests.dll. The tests should pass in either case.
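You can also run the tests from the command line, using the console runner that’s already sitting in our tree – a preview of exactly what we’ll make the build script do next time:

    tools\nunit\nunit-console.exe src\UnitTests\bin\Debug\SherwoodForest.Sycamore.UnitTests.dll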

If we run our automated NAnt build, everything should compile OK, and you should be able to see each of the VS Projects compiling in their AutomatedDebug Build Configuration. The tests aren’t run yet, but that’s what we’ll be looking at next time. Even so, we are still at a check-in point. We have two new project folders to add, but remember the exclusion rules (*.user, bin and obj). Being a Subversion command-line user, I like to use the -N (non-recursive) flag of svn add to make sure I can set the svn:ignore property before all the temporary files get added.
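For example, something like this (svn propedit opens your editor, where you add *.user, bin and obj to the ignore list):

    svn add -N src\Core src\UnitTests
    svn propedit svn:ignore src\Core
    svn propedit svn:ignore src\UnitTests

With the ignores in place, you can then add the rest of each folder’s contents and commit.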

Also, don’t forget to check in tools\nunit and the new lib folder.

The current state of Sycamore is available here.

So let’s wrap up this part then. We covered some new generic principles about projects and dependencies. We also looked at the specifics of using NUnit. Some concrete points to take away are:

  • Set DLL names to be the same as the default namespace.
  • Put your unit tests in a separate VS Project called UnitTests.
  • Save NUnit in your development tree, in its own folder under tools.
  • Put all DLLs your code depends on in a top-level folder called lib. The only exceptions are system DLLs, such as .NET Framework libraries.

How to setup a .NET Development Tree Part 5

In the last part we started using NAnt to automate a build for our project. In this part we’ll add some more build functionality.

When we added the compile target, we used the <solution> task to compile our solution. However, we also specified which ‘Build Configuration’ to use. Build Configurations are a Visual Studio feature that allows you to build your project in different ways. The most common difference is between ‘Debug’ and ‘Release’ (two configurations that Visual Studio always creates for you). With a Debug build, the Visual Studio compiler is configured to create the .pdb files we use for debugging (they give us line numbers in exception stack traces, that kind of thing). The ‘Release’ configuration doesn’t have these files generated, but it does produce assemblies more geared towards production than development.

However, there are a whole bunch of other things you can configure for different build configurations. Right-click on a project in Visual Studio, select Properties, then look at everything that appears under ‘Configuration Properties’ – all of those items can change for different Build Configurations. We’re interested in the ‘Output Path’ property, and I’ll explain why.

When we tell NAnt to compile the Debug Build Configuration of our solution, it tries to invoke the C# compiler to produce all the files that appear under the bin\Debug folder for each VS Project. There’s a problem with this though – if we already have the Solution open in Visual Studio, VS will have locks on those files once they reach a certain size. That means that our NAnt compile will fail since it can’t overwrite the assemblies. Anyway, it would be cleaner if we could separate out our ‘automated’ build from our ‘interactive’ build.

Thankfully, Build Configurations let us do this and still use the <solution> task. We do this by creating a new Build Configuration which we will just use for automated builds, and change where it outputs its files to.

To do this for Sycamore, I open Visual Studio’s ‘Configuration Manager’ (right-click on the Solution, choose ‘Configuration Manager’), and create a new configuration (open the drop-down menu, select ‘<New…>’). I’m going to call the new configuration AutomatedDebug and copy settings from the ‘Debug’ configuration (leave the ‘create new project configuration(s)’ box checked). Close the dialog, and then bring up the properties for ‘SycamoreConsole’. Select ‘Build’ under the ‘Configuration Properties’ section, and make sure ‘AutomatedDebug’ is selected in the Configuration drop-down. Select the ‘Output Path’ box and change its value to ‘..\..\build\Debug\SycamoreConsole’. Then switch Visual Studio back to the normal ‘Debug’ configuration, which we use for interactive builds.

Next, edit the build script, and change the ‘configuration’ attribute of the <solution> task to be AutomatedDebug. It should now look like this:

<target name="compile">
    <solution solutionfile="src\Sycamore.sln" configuration="AutomatedDebug" />
</target>

So what have we actually done here? If you run NAnt, you should see the following lines in your build output:

compile:

    [solution] Starting solution build.
    [solution] Building 'SycamoreConsole' [AutomatedDebug] ...

This tells us that NAnt is using the new Build Configuration. Now, look in the build\Debug\SycamoreConsole folder – you should see our compiled .exe file (and a .pdb file since we are compiling with debug options.)

That tells us what is happening, but why have we put these files in this directory? We use the build folder as another of our ‘top-level’ project folders. It will contain all the build artifacts (assemblies, test reports, etc.) that we produce in the automated build. It will not contain any files that aren’t generated by the build, so we don’t need to check it into Source Control, and we can safely delete it whenever we want. Under build we will have a number of sub-folders; so far we’ve created one called Debug that will contain all of our Debug compilation artifacts. We put the artifacts for each VS Project in its own folder, with the same name as the VS Project it belongs to.
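So at this point the transient part of the tree looks something like this (assuming the default .exe name for the SycamoreConsole project):

    build\
        Debug\
            SycamoreConsole\
                SycamoreConsole.exe
                SycamoreConsole.pdb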

I said we could safely delete this folder, so let’s add another NAnt target that will do this:

<target name="clean">
    <delete dir="build" if="${directory::exists('build')}"/>
</target>

I also said we didn’t need to check the build folder into Source Control, so we can also add it to our list of excluded files. With Subversion, I do this by editing the svn:ignore property of the project root folder.
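For example:

    svn propedit svn:ignore .

and add build (on its own line) to whatever is already in the ignore list.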

Finally for this part, we’re going to create a batch file that developers can use to kick off the build. It’s very simple, consisting of just the following line:

@tools\nant\NAnt.exe -buildfile:Sycamore.build %*

I like calling this file ‘go.bat’, since the targets in the build script tend to be ‘action’-type words. Since it’s closely associated with the build script, put it in the project root. Note that we specify which build script to use – change this for your project. To use this file, just pass the target to run as an option; so to delete the build folder, just enter go clean.

Note that this batch file really is just meant as a convenient bootstrap. I’ve seen plenty of projects use a combination of batch files and NAnt / Ant scripts to configure a build system. This is a bad idea for several reasons:

  • Batch files are significantly less manageable and less powerful than NAnt, and tend to get very ‘hacky’ very quickly.
  • Your build behaviour really is one distinct concept, and NAnt can handle all of it – splitting it across technologies isn’t necessary.
  • Don’t go down the road of having multiple batch files to launch builds for different environments. I’ve yet to see a project that managed to pull this off in a clean, manageable way. Apart from anything else it is redundancy, and introduces more manual work and possibilities for error. Instead, use logic in your NAnt script to use different property values for different environments – there’s a sketch after this list (hopefully I’ll get on to build configuration and refactoring concepts in the future).
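Here’s the kind of thing I mean (the environment names and paths are hypothetical). Properties get defaults in the script, and a particular environment just overrides them from the command line, e.g. go -D:target.environment=prod test:

    <!-- Defaults for developers; -D: on the command line wins because of overwrite="false" -->
    <property name="target.environment" value="dev" overwrite="false" />
    <property name="deploy.dir" value="\\devserver\apps" if="${target.environment == 'dev'}" />
    <property name="deploy.dir" value="\\prodserver\apps" if="${target.environment == 'prod'}" />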

If you run your default target, it should still be successful. If you have all your ignored files and directories set up correctly, you should have four files to commit – the build script, the build script launcher (go.bat), the solution, and the VS Project for SycamoreConsole. I’m going to check in these changes and call it a day for this part.

The current state of Sycamore is available here.

To summarise this part:

  • Use a top-level, transient, folder called build as the target folder of your automated build.
  • Create a new Visual Studio Build Configuration for your automated NAnt Builds. This Build Configuration should output to build.
  • Set up a clean target to delete your transient files.
  • Create a simple build bootstrap batch file.
  • Don’t put any kind of build logic in this build bootstrap – leave that all in the NAnt build script.

In the next part we’ll start to add some unit tests.

Classifying Tests

I’m not really a ‘testing guy’, but I am an ‘agile guy’, so I do write and use automated tests during development. I find that there’s a lot of confusion about what a ‘unit test’ is, what an ‘acceptance test’ is, and that kind of thing. Most of the time people have their own feelings about what these different classifications of test are and, when talking to others, assume that everyone has the same idea. This is frequently not the case.

So just in case you ever talk to me about tests, and we forget to come to a common shared language about them, here’s what I might mean by the various types of test. Note that these are mostly ‘development test’ definitions, not ‘QA test’ definitions. I’m not saying they’re right, they’re just what I think they mean. (Bret and Brian – please forgive me. 🙂 )

A unit test tests the smallest component in a system. In an object-oriented system, this is a class. It might also be an individual stored procedure, JavaScript function, or whatever. A unit test should not assert (or rely on) any behaviour in the system other than that of the component it is testing.

At completely the other end of the scale, an acceptance test is a behaviour that ‘the customer’ has defined the application should have within the environment around it. This may be a relationship between different parts of the application’s domain, an expected output for a set of inputs in the UI, performance criteria, or interactions using a shared technological resource (such as a shared database or messaging infrastructure). The important thing about an acceptance test is that it should not assert anything about the implementation of the application.

Ideally, acceptance tests should be automated, but sometimes the cost of doing this is too high to justify the value.

I like any application I work on to have both of these types of test. Acceptance tests are about showing the application does what it should do, to the extent of what I (as a developer) have been told it should do. Unit tests are about helping develop new parts of the system.

Somewhere between these two I define functional and integration tests.

To me, a functional test is used to assert the behaviour of an individual component within the application. Typically this would be a pretty small component, consisting of smaller components that have each been unit tested. The point of functional tests is to group together some overall behaviour for a (developer defined) part of the system.

An integration test is like a functional test, but tests a complete sub-system of the application in its target environment, as opposed to functional tests, which would typically run in a test-specific environment (e.g. out of container). If the application consists of several services (for example), an integration test in my speak would test each of these services in their deployed environment with a set of example scenarios.

To me, functional and integration tests as defined here are not strictly necessary for the successful development of an application, but they can help development by showing problems in a more easily diagnosable form. That said, if you have a failing functional or integration test and no failing acceptance test, you are either missing an acceptance test or implementing functionality that is not required.

And I have no idea what a system test is. Maybe someone can tell me. 🙂