My XP 2004

A couple of weeks ago I was at XP 2004. Due to my goldfish-like memory I’ve already started forgetting what happened, but here are some things I do remember:

Common Absolute Paths Anti-Pattern

Some projects I’ve worked on in the past assume that certain files can always be found at an absolute path, e.g. ‘c:\program files\some cool library\library.dll’. Such files may be dependencies, or deployment target locations. This is a situation to be avoided!

The main reason this is so bad is that you may not actually have control over such paths when your application gets deployed. For example, many IT operations departments use ‘C:’ just for the Windows O/S, and put all other files on a secondary drive (e.g. for backup purposes). If your project assumes absolute paths, such problems may not arise until you finally reach production.

However there are other problems, like what happens when you want to work on 2 versions of the app at the same time (e.g. trunk, and branch)?

But there is good news – you don’t need to hard code absolute paths into your system! Alternatives are:

Use relative paths. As an example, set up your source control directory structure such that you always know where your dependencies can be found relative to your project’s source code (there’s a sketch of this just after this list). This may mean you save third-party binaries in source control, but that is not a sin!

Use deploy-time configuration. Get your build environment to generate environment-specific distribution files where parameterised paths are expanded to absolute paths at the last moment.

Use environment variables, like PATH. This is a ‘last-chance’ option since it is system-wide, rather than application-wide, in scope, but at least it stops you having to hard code paths in your application.
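As a rough sketch of the relative paths idea (the directory and file names here are purely illustrative, not from a real project), a NAnt compile step might pick up a checked-in dependency relative to the project’s source, rather than from an absolute location:

    <csc target="library" output="build\MyProject.dll">
        <sources>
            <includes name="src\**\*.cs" />
        </sources>
        <references>
            <!-- dependency lives in the source tree, found relative to the project -->
            <includes name="..\lib\some-cool-library\library.dll" />
        </references>
    </csc>

Whoever checks out the tree then gets the dependency in the right place for free, wherever they happen to put their working copy.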

To check that a project I’m working on isn’t using absolute paths, I tend to do one of the following things:

– Have 2 copies of the development tree checked out on my machine. Switch between them occasionally and check nothing breaks.

– Have my continuous integration and ‘development test’ environments specifically use different paths from those used by the development team.

Confluence – A Wiki on steroids

Those Atlassian boys have done it again. First I became a fan of Jira, and now it’s looking good for Confluence too.

Confluence is, at its most basic, another wiki clone, but it’s a nice-looking, easily edited one at that. After a few minutes using it, though, you realise why it’s going to blow all other wikis out of the water, with features like:

– arbitrary page hierarchies

– exports (of whatever selection of pages you want) to pdf and html

– separated ‘spaces’ for topic consolidation and security

– rss feeds for various events

But then after a few minutes more you suddenly get the ‘wow factor’ when you start using its dynamic macros – inlining things like

– search output, child page lists, etc.

– arbitrary RSS feeds

– Jira issue reports

… etc. They’ve even done their own implementation of Fit, called FatCow, which you can use just like any other macro. You can write your own macros too, which adds insane power to the application.

It’s not perfect yet, but for a 1.0 release it has a very ambitious feature set, so I’ll let them off. Things that could be better:

– The RSS and blogging features need some work – they’re powerful, but could be better (e.g. blogging in a context smaller than a space, and better (or at least more obvious to use) RSS outputs)

– Usability is good, but it still takes a while to get going as a new user. Jira’s got better on this front, so I expect later versions of Confluence will be easier for newbies too.

We’re using it already for CruiseControl.NET – check out our space here.

A bird turns into a fox (that's what I call evolution)

I’ve been using Mozilla Firebird as my browser for a while now. It’s had (another) rename and is now called Firefox.

One of the best things I’ve found with Firefox recently is Type Ahead Finding – just start typing and Mozilla will highlight a link. To skip to the next result, hit Ctrl-G; to open the page hit Enter; to open the page in a new tab hit Ctrl-Enter. I’m not normally a keyboard shortcut fan, but this really speeds up navigation.

Oh, and Flash works now. 🙂

Archive Quality MP3 Ripping

When I started using MP3 music files in 1998, the standard was 112 kbps encoding, which produced a noticeable loss of quality on any half-decent speakers. One of the great things about the iPod, and the increasing size of hard disks generally, is that these days there’s no point being so stingy about the quality of your MP3s.

So now I use high-quality VBR encoding, which tends to give a bit rate around 200 kbps. The files are therefore almost twice the size compared with 112 kbps encoding, but the result is music that sounds great even on a decent hifi. Such MP3 encoding has become known as ‘archive quality’ since it produces the kind of files you’d want to keep in case you lost the original source.

To produce such MP3s I use 2 pieces of free software. For ripping I use Exact Audio Copy, which isn’t quick, but guarantees a ‘perfect rip’. This hooks in with the separately downloadable LAME, available here. These 2 combined give me an average 4x ripping speed on a one-year-old PC.

One hint if you try these yourself – LAME has a special setting that sets all the various options to a mix that a bunch of people with better hearing than me reckon gives the best audible MP3 quality you can get for the smallest file size. To use this, go to the EAC menu, select ‘Compression Options’, go to the ‘External Compression’ tab, and set the ‘Additional Command Line Options’ to ‘--r3mix %s %d’ (having already selected the external lame.exe as your MP3 encoder).

Handling Continuous Integration Failures

My blogging activity has been subdued recently. To try and remedy this I’ve decided to write some entries on build engineering, a subject I’ve looked at in some depth over the last few years.

To start off with, here’s an entry about Automated Continuous Integration. Automated CI is normally widely accepted when introduced to a team. However there’s often confusion about what to do when the automated build breaks but developers are able to build without problem on their own machine.

The 2 causes I’ve seen most often are:

1 – Problems retrieving updates from Source Control

2 – The part of the build that’s failing is not used by developers

The first of these is normally easy to diagnose since it happens early in the build process, before anything else has a chance to run. It can occur when a file checked into source control is overwritten by the build process. Often the simplest solution is to delete the build server’s copy of the source tree entirely, and check it out cleanly from Source Control.

The second can be caught much more easily using the following pattern:

Be able to run the complete integration build from any development machine

Typically a full integration build should include the following tasks:

– Full, clean compile

– Deployment to local app server

– Running all the tests

– Publishing of distributable files to file server

– Labelling of source control

People sometimes set up specific scripts, only available on the build server, that perform some of these functions. This causes pain in exactly the situation where the build starts failing. If you can run the entire build on a developer machine using sensible default values (e.g. using a ‘test’ name for labelling and distributable locations), you can debug the problem without having to go onto the build server.

Moreover, if you can easily perform a full, clean compile, deployment, and test run in one step from a developer machine, then developers will likely use this and catch any problems before checking in broken code.

Here’s an example of this using NAnt and CruiseControl.NET.

In NAnt, we can have the following in a build script to define some properties and specify how to publish a distributable file:

    <property name="publish.basedir" value="publish" />
    <property name="label-to-apply" value="temp" />

    <target name="dist.publish" depends="dist">
        <property name="publish.dir" value="${publish.basedir}\${label-to-apply}" />
        <mkdir dir="${publish.dir}" />
        <copy todir="${publish.dir}">
            <fileset basedir="dist">
                <includes name="*" />
            </fileset>
        </copy>
    </target>

The ‘dist.publish’ target relies on the 2 properties publish.basedir and label-to-apply. By setting them in the script we give these values sensible defaults for running on a developer machine. However, we want them to be more specific when running on the real build machine. We can do this by specifying values for the properties when NAnt is started, which override these defaults.
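For example (the values here are purely illustrative), a developer just runs the target and gets the defaults, and anyone can simulate the build server by overriding the properties on the NAnt command line:

On a developer machine, using the in-script defaults:

    nant dist.publish

Simulating the build server by overriding the defaults:

    nant dist.publish -D:publish.basedir=d:\MyProject\PublishedBuilds -D:label-to-apply=42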

If we’re using CruiseControl.NET, the ‘label-to-apply’ property is always set to the build number for us. We can override the other properties using the following CCNet configuration option:

    <cruisecontrol>
        <project name="MyProject">
            <build type="nant">
                <buildArgs>-D:publish.basedir=d:\MyProject\PublishedBuilds</buildArgs>
                ...
            </build>
            ...
        </project>
    </cruisecontrol>

So, summing up: automated CI build failures are bound to occur, but if we set up a build process that is repeatable on machines other than the build server, such problems are much easier to solve.

GadgetWatch – Palm Tungsten T3

I bought a Sharp Zaurus PDA about a year ago and it hasn’t really worked out for me. The chief problems have been:

– size: it doesn’t quite fit in a trouser pocket, which makes it too big

– poor integration with other apps on my actual PC

– like most Linux systems, doing anything new is non-trivial and therefore too much effort for someone as lazy as me.

So, taking the same route as I did with my iPod (i.e. find out what I *actually* want, and then buy it, even if it’s more expensive), I’ve just bought the new Palm Tungsten T3. What attracted me was the following:

– size: it’s small enough to fit in my pocket

– UI: Pocket PC still doesn’t cut it for usability, and Palm’s longevity in the market shows through.

– integrated bluetooth: Primarily so that when I’m arguing about a movie with friends in the pub I can connect to the internet using a bluetooth mobile phone and consult IMDB. 🙂

– good screen definition – the T3 has a 320 x 480 screen, viewable in landscape as well as portrait, which makes it one of the best displays on the PDA market.

So far I’m impressed, and it’s already helping me organise myself a little better. If in 6 months time I’m as happy with it as I continue to be with my iPod, it will be money well spent.

Agile build engineering worthy of more research

I’ve spent the last 5 months on a build engineering engagement. It’s been enjoyable, for the following reasons:

– The projects I was working with were implemented in .NET so it was an opportunity to explore build techniques that I know from the Java world and see how they apply in .NET

– I was working with several separate, but related, teams and therefore was exploring how to best integrate the projects.

I’m hoping to think about these areas more, but there’s a couple of brief points that I thought were interesting.

Importance of IDE build engineering in .NET

In .NET it’s worth considering how the IDE fits in with the build environment. Unlike Java-based projects, we are pretty much guaranteed that the IDE is Visual Studio. Moreover, a lot of .NET developers have come up through the Visual Studio tradition, and so are used to working solely within the IDE. Tools like the NUnit Visual Studio plugin support this command-line-free way of working for the interactive development process.

All this being said, we must remember that agile development places a premium on repeatable, maintainable, automated processes for tasks outside that of the interactive development flow, such as continuous integration and deployment. Therefore there exists a question of how we marry these 2 worlds together.

The basic premise is that using Visual Studio project definitions within your automated process keeps your interactive and automated processes closely tied: updating your project structure through the IDE automatically updates the automated process too (so aiding maintainability).

You can implement this in a NAnt automation setup by using NAnt’s new <solution> task, or by calling Visual Studio from the command line (using the ‘devenv’ application); there’s a sketch of both options below. We do still need something like NAnt since VS only covers a subset of what we want to automate, i.e. it doesn’t cover packaging, testing, etc.
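As a rough sketch of what those two options look like (the solution and configuration names here are just placeholders), the compile step of a NAnt script could be something like:

    <target name="compile">
        <!-- let NAnt build the Visual Studio solution directly -->
        <solution solutionfile="MyProject.sln" configuration="Release" />
    </target>

or, shelling out to Visual Studio itself:

    <target name="compile">
        <!-- call devenv so the build uses exactly what the IDE uses -->
        <exec program="devenv" commandline="MyProject.sln /build Release" />
    </target>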

What remains to be seen is whether <solution> or calling devenv directly is the best way to go, and how much we do still need a command line environment in the development process (e.g. for deploying an application locally for acceptance tests.)

Managing project dependencies

The other point to come up was how projects relate to each other, and specifically how to manage dependencies between projects. Things to think about are:

– How to specify dependencies in a convenient way

– How to manage build- vs run-time dependencies

– How to manage chained dependencies

– How versioning comes into play, and whether we can use ‘ranged’ versions with fixed ‘published’ interfaces

– How to allow a project to depend on either a fixed version, or the ‘latest’ version, of another project (there’s a rough sketch of this after the list)

– Where ‘fixed’ version requirements exist, how to perform pre-emptive ‘Gump-style‘ continuous integration against the head revision of all projects.

– What needs to be tested for each project’s build
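On the fixed-versus-latest version question, here’s a very rough sketch of the kind of thing I mean (the project and property names are entirely made up): locate each dependency via properties, so a build can be pointed at either a fixed published version or the most recent integration build.

    <!-- default to a fixed, published version of SomeLib -->
    <property name="somelib.version" value="1.2.0" />
    <property name="somelib.dir" value="..\dependencies\somelib\${somelib.version}" />

    <!-- run with -D:somelib.version=latest to build against SomeLib's most recent integration build -->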

I heard someone talking about Continuous Integration the other day, and I’d forgotten the value of the simplicity of having ‘one build’, even for a larger team (since these problems become less important). But where you have long build times and/or different deliverable timeframes, it is worth considering breaking up the project. I think this will become a valid option once the agile community has tools and practices to address the questions above.

Another reason why Google rocks

So Google now has a calculator. Try typing some of the following into Google, or via your Google bar (or just click the link):

how many millilitres in 13.5 ounces

what is 2 times the radius of the earth

square root 3 * 4

square root (3 * 4)

square root (3 * 4) to the 7

square root (3 * 4) to the e

square root (3 * 4) to the e times i

square root (3 * 4) to the e times i over 12

e to the (i * pi)

5 factorial

5 factorial cosinr 12 (deliberate spelling mistake 😉 )

5 factorial cos (12 mod 3)

10 km over 50 seconds

This is sweet. 🙂 From a technical point of view, there’s a tonne of natural language processing going on, and considering everything it’s blisteringly quick.

UPDATE

OK, finding new stuff in this thing has me hooked. 🙂

200 miles over 4 hours in kph

65 centigrade in farenheit (I don’t know how to spell!)