
Reasons to have a Mac 473 – Clutter

There are so many cool things about Macs these days, and Clutter’s one of them. I was going to buy my first Mac about six months ago, but then I switched from developing in Java to .NET, so I decided I couldn’t justify the expense (or carrying two laptops around everywhere).


CruiseControl.NET eats its own dog food

The CruiseControl.NET (CCNet) project now has a publicly viewable server that allows us to run CCNet against itself. We plan to use this as the Continuous Integration server for several other open source .NET projects as well (NMock is already active).

Check it out at http://ccnetlive.thoughtworks.com.

Good Practice rather than Best Practice

I’m a bit of a stickler when it comes to use of language. I’m not very good at it, but I appreciate the importance of trying to use it well. Communication often provokes an emotional response, and as a communicator you want the people you’re addressing to have the response you were aiming for. As a responsible communicator, you have a responsibility not to try to invoke a response that is invalid.

As an example of all this, take the phrase Best Practice which seems to be in fashion in the software industry. It doesn’t sit quite right with me, and I think I’d rather use the alternative phrase Good Practice. Why?

Best Practice is often used to describe a technique outside of any particular context. In most cases there are many alternative techniques, and you can only pick the best one once you have a context to choose it within. Martin’s book on Enterprise Patterns stresses this point: Transaction Script, Domain Model, and Table Module are all Good Practices, but the best one depends on the context of the application you are developing. As a responsible communicator I should not say that one of these is always the best.
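To make the context-dependence concrete, here’s a minimal C# sketch of the same discount calculation written both ways (the class names and the 10% loyalty rule are entirely my own invention, purely for illustration). Neither version is ‘best’ until you know how rich the domain logic is going to get:

```csharp
using System.Collections.Generic;

// Transaction Script: one procedure drives the whole use case.
// Often the best choice while the domain logic stays simple.
public class OrderScripts
{
    public decimal CalculateTotal(IList<decimal> linePrices, bool isLoyalCustomer)
    {
        decimal total = 0m;
        foreach (decimal price in linePrices)
            total += price;
        if (isLoyalCustomer)
            total *= 0.9m; // discount rule lives inline in the script
        return total;
    }
}

// Domain Model: behaviour lives on the objects themselves.
// More up-front work, but the best choice as the rules get richer.
public class Customer
{
    private readonly bool isLoyal;

    public Customer(bool isLoyal) { this.isLoyal = isLoyal; }

    public decimal ApplyDiscount(decimal amount)
    {
        return isLoyal ? amount * 0.9m : amount;
    }
}

public class Order
{
    private readonly List<decimal> linePrices = new List<decimal>();
    private readonly Customer customer;

    public Order(Customer customer) { this.customer = customer; }

    public void AddLine(decimal price) { linePrices.Add(price); }

    public decimal Total
    {
        get
        {
            decimal sum = 0m;
            foreach (decimal price in linePrices)
                sum += price;
            return customer.ApplyDiscount(sum); // rule delegated to the domain object
        }
    }
}
```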

Best Practice can also invoke a negative response in the listener. By saying ‘You should use pair programming because it is a Best Practice’, I am implying that my opinion is better than yours, no matter what your reason for not using pair programming. Before we even start discussing why I think pair programming is useful, you may already have a negative feeling towards the conversation and so be less likely to accept what I’m going to say.

I’ve used the phrase Best Practice frequently in the past. I’m going to try just to use Good Practice in future.

Popping the Why Stack

A while ago, at a previous company of mine, a new guy started on the team. He was discussing part of the application’s architecture with a team member. I can’t remember exactly how the conversation went, but it was something like this:

Newbie: So why are you using Stateful Session Beans? (It may not have been stateful session beans, but they’re always a good thing to question 🙂)

Old Hand: Because they’re better to use than Entity or Stateless beans in this context.

Newbie: Why?

Old Hand: Because they’re the standard practice in this case.

Newbie: Why?

Old Hand: Because we need to track session state.

Newbie: Why?

Old Hand (starting to get annoyed now): Because otherwise the customer would lose their session across requests.

Newbie: Why is that a problem?

Old Hand (volume rising): Because the next page we serve depends on what the customer did on previous pages.

Newbie: Why?

Old Hand (downright stressed now): Because we are writing a shopping application and as a customer I want to put something in my shopping basket so that it’s still there when I go to another page!!!!

Newbie: Oh, right, thanks 🙂 !

So let’s dissect this a little. We have a new developer (who’s pretty clever, as it happens) asking a bunch of what look like inane questions of someone who’s been on the team for ages. At the start of the conversation the newbie is getting a whole load of technical detail, but what they really want to know is which customer story is driving this design decision. It annoys the heck out of the ‘old hand’ since they know the application inside out, but this popping of the Why Stack is valid because:

– It’s a valid way for newcomers to see how design and customer stories are related

– If you can’t eventually pop out to a story, it probably means you have some unused architecture in your application

– If it’s describing why a new piece of architecture needs to be added to the system, it can be used to show which customer story should justify that new technical addition, e.g.:

Tech Lead to Project Manager: We should spend 2 weeks adding encryption to this module

Proj Manager: Why?

Tech Lead: To make our application more secure.

Proj Manager: Why do we need that?

Tech Lead: Because as a customer I want to be able to add my Credit Card details to my account so that they can’t be seen by any hacker who may be trying to get in the system.

Proj Manager: Oh, ok!

Formal Methods versus TDD

Back when I was taking my Computer Science degree at University I learned all about ‘Formal Methods’ (when I wasn’t playing pool, badly, that is). The main reason to use Formal Methods is to prove the correctness of programs. By stating a logical pre-condition for a block of code, you can apply a sequence of logical steps, each corresponding to the behaviour of one statement in the block. If, when you reach the end of the block, your transformed pre-condition matches the logical post-condition, then your block is proven correct. Do this for your entire program and the whole thing is proven correct.
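As a worked illustration (a toy example of my own, not one from the course material), here’s that style of reasoning applied to a three-line swap, with the logical assertion after each statement written as a comment:

```csharp
class SwapProof
{
    // Each comment is the assertion that holds after the statement on its
    // line, derived mechanically from the assignment rule.
    static void Swap(ref int x, ref int y)
    {
        // Pre-condition: { x == A, y == B } for arbitrary fixed values A and B
        int t = x;   // { t == A, x == A, y == B }
        x = y;       // { t == A, x == B, y == B }
        y = t;       // { x == B, y == A }
        // Post-condition: { x == B, y == A } - proven for ALL values of
        // A and B, not just the ones we happened to try.
    }

    static void Main()
    {
        int a = 1, b = 2;
        Swap(ref a, ref b);
        System.Console.WriteLine("{0} {1}", a, b); // prints "2 1"
    }
}
```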

So why doesn’t anyone really do this outside of academia? Basically because:

– Coming up with pre- and post-conditions that represent exactly what you want your program to do is hard

– The actual proof process is equally hard

– … meaning that the whole thing is slow

OK, now flip to what I do now. I’m a big fan of ‘Extreme Programming’ techniques, including Test Driven Development (TDD), which is an evolution of unit testing. Why do we do unit testing? Partly to try and ‘prove’ that each part of our program works. It’s not really absolute proof, since we never know whether our unit tests uncover all possible bugs in the code, but it’s a good approximation. In fact, as we add more unit tests it becomes an even better approximation of proof.
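Here’s the kind of thing I mean, as an NUnit sketch (the BoundedCounter class is made up for the occasion). The test pins down one observable behaviour and so rules out one class of bug, but says nothing about inputs it never exercises:

```csharp
using NUnit.Framework;

// Hypothetical class under test
public class BoundedCounter
{
    private readonly int max;
    private int count;

    public BoundedCounter(int max) { this.max = max; }

    public void Increment() { if (count < max) count++; }

    public int Count { get { return count; } }
}

[TestFixture]
public class BoundedCounterTests
{
    [Test]
    public void IncrementStopsAtMax()
    {
        BoundedCounter counter = new BoundedCounter(2);
        counter.Increment();
        counter.Increment();
        counter.Increment(); // the third increment should be ignored

        // Passing rules out one class of bug - an approximation of proof
        // that improves with every additional test, but never completes.
        Assert.AreEqual(2, counter.Count);
    }
}
```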

Anyway, why am I rambling about this? Well, the academic in me (which isn’t allowed out very often) always had a bit of a bad feeling about unit testing due to its lack of proof. But recently I was talking to a couple of ThoughtWorkers about analytical and numerical methods in Maths. With analysis (or calculus) you prove the answer to a problem using mathematical logic. Some problems, though, are either (1) too hard or (2) too slow to solve this way, so you can instead use numerical methods. These never give you the exact solution, but through a succession of repeated calculations you home in on the solution until you have an answer that is ‘good enough’.
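A concrete example of what I mean (my own illustration, not one from that conversation): Newton’s method homing in on √2. Each iteration gets closer to the true root, and we stop when the answer is ‘good enough’ rather than exact:

```csharp
using System;

class NewtonSqrt
{
    static void Main()
    {
        // Solve x^2 - 2 = 0 numerically via x' = (x + 2/x) / 2
        double x = 1.0;           // initial guess
        double tolerance = 1e-10; // our definition of 'good enough'

        while (Math.Abs(x * x - 2) > tolerance)
        {
            x = (x + 2 / x) / 2;  // each step homes in on the root
            Console.WriteLine(x); // 1.5, 1.41666..., 1.41421568..., ...
        }
        // x is now approximately 1.4142135623..., never exactly sqrt(2)
    }
}
```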

To me, this relationship between analysis and numerical methods seems strikingly similar to that between formal methods and unit testing. Since numerical solutions are academically valid, this makes the academic in me happier about unit testing.

TDD isn’t just about testing though – it’s also a method of actually deciding how to write a program. You start with a failing test, write some code that fixes this test in the simplest way possible, refactor your program to remove duplication, and then repeat. In my Formal Methods studies at University I also saw how you could actually write a program based on a logical specification and a set of rules related to the ‘correctness proving’ rules. (See here.) So both TDD and Formal Methods offer a way of ‘generating’ code from a testable specification (programmatic assertions for TDD, logical specifications for Formal Methods).
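To sketch one turn of that cycle (with a deliberately trivial, made-up example): the test is written first and acts as the programmatic assertion that the code is then written to satisfy:

```csharp
using NUnit.Framework;

[TestFixture]
public class GreeterTests
{
    // Step 1: the failing test - this is the specification
    [Test]
    public void GreetsPersonByName()
    {
        Assert.AreEqual("Hello, Ada", new Greeter().Greet("Ada"));
    }
}

public class Greeter
{
    // Step 2: the simplest code that makes the test pass.
    // Step 3 (refactor to remove duplication) kicks in as more tests arrive.
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}
```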

In conclusion, this argument itself doesn’t prove anything (!) but in my own mind at least showing a relationship between Formal Methods and TDD strengthens the case for TDD being a valid, and valuable, technique for writing software.

Why I don’t like the GAC

A real example of why I don’t like the GAC in .NET:

I’m in the middle of restructuring the CruiseControl.NET build file. I create a new build, test it, and it’s fine. I set up a new CruiseControl.NET instance for a project I’m working on, and that’s fine too. I then copy this instance onto the build server and it fails with a TypeLoadException.

Stripping this down: I have an application that runs on one machine, but if I copy it file-for-file onto another machine, it doesn’t run. This is the kind of thing that makes my inner control freak’s blood boil.

My next thought is: ‘so what’s different between the two machines?’ The only thing that affects a .NET application’s assembly loading, outside of the local directory, app configuration (which we don’t use in CCNet), and programmatic assembly loading (which we don’t do either), is the GAC. So I look in machine number 1’s GAC and I see a copy of NUnit in there. ‘Hmmm,’ I think, ‘I wonder if that’s it’ (in theory, if my build had compiled against the version of NUnit in the GAC rather than the directory-local one I thought I’d specified, there could be a problem). So I (eventually) manage to get NUnit out of my GAC, recompile CCNet, try again, and everything’s hunky-dory.
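A quick way to confirm a suspicion like this (a diagnostic sketch I’m adding with hindsight, not what I actually ran at the time) is to ask the runtime where an assembly really came from; ‘nunit.framework’ here is assumed to be the assembly’s simple name:

```csharp
using System;
using System.Reflection;

class WhereDidThatAssemblyComeFrom
{
    static void Main()
    {
        // Resolve the assembly the same way the application would
        Assembly asm = Assembly.Load("nunit.framework");

        Console.WriteLine(asm.Location);            // GAC path vs. local bin directory
        Console.WriteLine(asm.GlobalAssemblyCache); // True if it came from the GAC
    }
}
```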

So what is the point of all this? In my mind the GAC breaks the completeness/isolation concept for building and running an application in an environment. I should be able to zip up an application (as much as possible), drop it somewhere, and have it run whatever the machine, given a basic set of environmental requirements. In the J2EE world, when you deploy an application in an app server you put your own JARs in the application’s own classpath – you wouldn’t put them in the app server’s global classpath. So (in my mind) it should be in .NET, and libraries shouldn’t be in the GAC unless they are part of, or an extension of, the .NET runtime itself.

I especially don’t like that I can build something referencing a non-Runtime library in the GAC and not be given a warning.

The oft-quoted reason for using the GAC is that ‘it saves you having multiple copies of DLLs all over the place’. My response is that premature optimization is the root of all evil and disk space is cheap: the pain that shared libraries can give you does not merit the storage space / download bandwidth gain.

So in conclusion: the GAC is better than the DLL hell of COM (which thankfully I avoided by living in the Java world for a few years), but it still isn’t as good as not sharing library files across applications at all. My advice is don’t use the GAC unless you absolutely have to, or unless all the applications you use explicitly specify which versions of libraries they require.