Blog

Falling (back) in love with the command line

It’s all Mike Mason’s fault. He sent me an email a few months ago asking me to review his new Subversion book. Little did I know it was actually a ploy to turn his former bash-savvy, now cheese-eating, Windows-GUI-dependent sparring partner back to the realm of the $ and # …

So, on a slightly more serious note… Mike’s book, like the rest of the Pragmatic Starter Kit, uses the command-line versions of tools to explain concepts and usage. I was a bit worried about this at first. I’d become used to my TortoiseSVN, my P4Win and my Eclipse CVS plugin; why did we need to go ‘back to basics’? Anyway, Mike’s a mate, so I carried on through chapters 2, 3 and onward.

I don’t know what happened in those hours of reading, but when I next went to access a Subversion server the following week, I found TortoiseSVN clunky to use. Now, don’t get me wrong, TortoiseSVN is a great SVN UI client, but therein lies the problem – it’s a UI client. I’d got used to the speed of doing an ‘svn stat’ (tell me what’s changed), or an ‘svn stat -u’ (tell me what’s changed, including any updates on the server). In Tortoise I have to navigate Explorer to the right folder, right-click it and then select… umm, what is it again?
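For anyone who hasn’t tried the command-line client, the two commands in question look like this (‘stat’ is just an abbreviation of ‘status’):

```shell
# Tell me what's changed in my working copy
svn status

# Tell me what's changed, including files that have newer
# versions on the server (-u contacts the repository)
svn status -u
```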

I always used to use command-prompt tools, but in recent years I’ve got lazy, and it’s not a good type of lazy. It’s the lazy of living with a broken window, of learning just enough to solve the current problem and no more. The command line takes that little bit more effort to get right initially, but what I’d forgotten is that a decent command-line application repays your effort later. Combine that with a little scripting knowledge and you can start plugging tools together and become truly lazy – the good lazy of sitting back and drinking your coffee while the computer does all the repetitive work that you would otherwise do yourself.
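A trivial illustration of that good kind of lazy (my own example, not anything from the book): a few small tools chained together do in one line what would be tedious by hand – here, counting the most frequent words in a stream of text.

```shell
#!/bin/sh
# Count word frequencies by plugging small standard tools together
printf 'the cat sat on the mat the end\n' |
  tr ' ' '\n' |   # one word per line
  sort |          # group identical words together
  uniq -c |       # count each group
  sort -rn |      # most frequent first
  head -5         # keep the top five
```

Once a pipeline like this works, it works forever – rerunning it costs nothing, which is exactly the repaid effort I mean.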

So am I forced to become another member of the Mac-owning masses to re-embark on my journey to shell nirvana? Not at all. With Cygwin I have a fully-featured Bash shell on my Windows machine (and the Windows command prompt (cmd.exe) isn’t all that bad for getting started). VBScript and JScript users can also use the Windows ‘cscript’ scripting host. For build scripts, NAnt allows you to interpret C# or VB.NET at run time through the <script> task. Finally, .NET applications are fully able to launch processes and access the standard streams.

And have I actually done anything beyond using command-line svn? Yep, you’re reading it. I used to upload my blog content using WinSCP, but now I use rsync, running on Cygwin, to upload the differences in my locally-generated blog pages to a UNIX server. It requires about 10% of the hand movement on my part and completes in about 10% of the time. My next plan is to put a command-line wrapper on my blog application, wrap the whole process up in a script and run it from a CruiseControl.NET project, running under Mono, monitoring the Subversion server where my blog source content is hosted. Then updating my blog will be as easy as adding a new file to a Subversion repository (and with Mozilla Composer + mod_svn, that’s a trivial task.)
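For the curious, the upload boils down to a single rsync invocation. A sketch, with made-up paths and hostname rather than my real setup:

```shell
# Copy only the changed blog pages to the server, over ssh:
# -a preserves times/permissions, -v lists what was sent, -z compresses
rsync -avz -e ssh ./blog-output/ me@example.com:public_html/blog/
```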

Oh, and maybe I’ll buy a Mac Mini anyway. 🙂

CruiseControl.NET 0.7 Released

I’m very happy to say that CruiseControl.NET 0.7 has been released. This was a fascinating release for me to work on for three reasons:

– I updated and added to most of the application, so I really feel I know it pretty well now

– I started playing with some new concepts (like starting my own MVC implementation)

– … and most importantly, I finally understood how wonderful TDD, Interface-first, Mocking, Constructor Dependency Injection and the like all go together to make coding a whole new experience, with much better results.

The following is taken from the release notes:

CruiseControl.NET 0.7 is one of our largest single releases so far. If you are upgrading from 0.6.1 or earlier, there are some big changes. Some updates are:

* Web Dashboard now has reporting options, allowing one web application instance to report multiple CruiseControl.NET projects across multiple servers. This feature is still in development, but see here for more details

* State Managers now automatically work for multi-project servers (so you should be able to take them out of your config file and forget about them!)

* The Xml log publisher now has default settings that work for multi-project servers (so you should just need an empty tag.) File merging has been removed from the Xml Logger, so you must use a File Merge Task for this behaviour.

* Some Source Control plugins can now automatically update your source tree for you (so no need for bootstrap builds). This feature is currently implemented by the CVS, Perforce and Visual Source Safe plugins.

* New section replaces old (See here)

* Introduction of ‘Working Directory’ and ‘Artifact’ as Project-level concepts to make relative directories easier to use. More support for these concepts will be coming in later releases.

* Considerable documentation updates

GMail Extra – Gmail Drive

Yes, more Google dribblings. This time it’s Gmail Drive, which lets you have a Windows drive that is backed by your GMail account! (Go back and read that again and think about it. It’s your own personal file server, accessible from anywhere.)

The idea was based on GmailFS for UNIX, but it’s still inspired.

Google Desktop – I guess it was just a matter of time

I know I’m at risk of starting to sound like a bit of a stuck record when it comes to Google, but I needed to blog Google Desktop. It’s a really simple concept – Google searching your own machine – which I think will become as ubiquitous as their internet search engine.

It’s early days at the moment – it only works on Windows, it only searches specific file extensions (e.g. I’d like to search other plain-text files, not just ‘.txt’), it only searches Internet Explorer (I’d like it to search Firefox), etc., but you can see where they’re going.

What will be interesting to see is how this fits in with the new Search functionality coming out in Longhorn.

Surely it’s only (another) matter of time before they release a server application that people can install on their corporate LANs to do the same kind of thing (and which Google Desktop could talk to, maybe – perhaps called ‘Google Enterprise’?). They already have the Google Appliance, but I think companies would be happier to install software on their own kit than to use a totally separate piece of hardware. The software could sit next to any corporate version of GMail!

Anyway, go get Google Desktop, try it out, and send them feedback.

You've got GMail

I’ve always considered myself a fairly demanding email user. For a long time I’ve expected to be able to use folder hierarchies, server-side rules, secure connections and an easy-to-use interface. For personal email I’ve used IMAPS (or IMAP over an ssh tunnel) and procmail since 1999. I expect proprietary mail systems to fit the bill too (and am often disappointed by them.)

But now along comes GMail, and it’s changed 5 years of habits. I have switched pretty much all of my personal and mailing-list email to it. It doesn’t have offline support, but I spend most of my time at a computer on a broadband connection now, so that’s less important.

There are several reasons for switching, but they are all usability-related: filtering is much easier than in procmail; I find I don’t need hierarchies any more; I have a manageable inbox for the first time in years; it’s blisteringly quick; and many more reasons besides.

I can’t recommend it enough – switch to GMail today if you can. If you know me and want me to refer you for a GMail account just ask.

What does Skype mean anyway

Skype is a free instant-messaging app that also does really high-quality free Voice over IP. It’s easy to use, free, and the sound quality and lag are great, even over wireless LAN and long distances. And did I say it was free?

It also does voice calls to real phones. That’s not quite free, but it is cheap (probably cheaper than using a phone card if you’re making international calls.)

Go get it from www.skype.com. And if I know you, email me, we can exchange contact details, and start Skype’ing!

Lessons from a Successful Agile Project Part 4

Anyone who knows me would have expected me to spout on about tools and build issues well before this point, and I think it says something about how important the previous points were to this project that build only arrives in the fourth installment. But here it is, finally.

Technical Automation

Despite having 10 highly productive developers and a fairly complex build process, we did not need a full-time ‘Buildmaster’. I think there were a few reasons for this:

– A few of us on the team had had enough build experience over the years that we had a good feel for ‘what worked’ on an agile project

– The build environment was under the same common ownership as the source base

– Major build improvements would be driven by specific stories

– Minor build improvements would be treated as refactorings and done at any time

This was beneficial since no one person was ‘stuck’ with the tag of being in charge of the build, and everyone else on the team felt empowered to change anything that was causing them pain.

I should add at this point that if your build / deployment setup isn’t sufficiently automated and easy to troubleshoot, then you probably will need a full-time Buildmaster. One of their tasks should be to make themselves redundant by adding exactly that automation.

Lesson 6 : Aim to not have a specific Buildmaster. Instead, treat the build system as part of the commonly owned source tree and make sure anyone can add value to its on-going development.

There were many things that worked really well in our build system, but I’ll just talk about two of them. The first concerns build configuration.

I’ve seen many ‘difficult’ build systems where there is a plethora of build config files, build launch scripts, environment-variable dependencies, complicated build invocations, etc. All of this is (to my mind) totally unnecessary. On this particular project, only one build configuration property changed between developer workstations: the database name to use. Apart from that, everything was the same. So the entire configuration for our development workstation environment of 15 (or so) machines fitted in one page of a properties file, which was itself checked into source control.

Of course, we had some clever stuff going on to figure out which specific machine you were using, how to override settings for Continuous Integration environments, and so on, but most of the time developers didn’t have to worry about it.
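As a purely hypothetical sketch (these property names are invented for illustration, not taken from the real project), such a file might look like this:

```
# build.properties - the whole team's workstation configuration,
# checked into source control. The only per-machine variation is
# the database name.
db.name.alices-box=projdb_alice
db.name.bobs-box=projdb_bob
db.name.ci-server=projdb_ci

# Everything else is identical on all 15 (or so) machines
db.server=devdbserver
build.output.dir=build\output
```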

This setup saved us so many headaches and so much time that I would now always consider it a ‘must-have’ for any future project I work on. I hope to write an article on the exact details at a later date, so keep your eyes peeled.

Lesson 7 : Minimise the complexity of your build configuration system. Keep the entire development environment configuration in one file and store it in source control.

The final thing I want to mention here is our database automation. We migrated the full database on every build. Go back and read that last sentence and think about it for a moment… OK, I’ll tell you why I think it’s great:

– We never hit a data migration problem on release: we migrated all the time, against real data, so any problems were caught early

– We didn’t have an ostracised data team: data migration work formed part of any story that required it, and all the developers did it.

– Since we wanted to ‘do it all the time’, the data migration process was highly automated. That meant the DB release process we gave to the operational DBAs was really simple, reducing the likelihood of errors and getting us into the DBAs’ good books. 🙂

– We could release at any time – we always had enough data migration code written to support the state of the code base.

Technically, we did this with a bunch of NAnt scripts and the command-line ‘osql’ tool. Again, it would be too big to document in a blog item, but I hope to write it up later.
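To give a flavour of the general shape (this is a hypothetical reconstruction, not the project’s actual script): keep numbered SQL scripts in a folder and have an NAnt target push each one through osql.

```
<!-- Hypothetical NAnt fragment: run every migration script
     through the command-line osql tool -->
<target name="migrate-db">
  <foreach item="File" property="migration.script" in="migrations">
    <exec program="osql">
      <arg value="-S" /><arg value="${db.server}" />
      <arg value="-d" /><arg value="${db.name}" />
      <arg value="-E" />  <!-- trusted (Windows) connection -->
      <arg value="-b" />  <!-- exit with an error code on SQL errors -->
      <arg value="-i" /><arg value="${migration.script}" />
    </exec>
  </foreach>
</target>
```

In practice you would also want to track which scripts have already been applied – that’s part of what makes the full write-up too big for a blog item.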

Lesson 8 : Make data migration part of your regular build. Make developers write data migration code as part of implementing a story that requires it.

The people were right and the leaders were wrong

It was a spring day early last year when I joined a million other people in London alone to show my disagreement with going to war in Iraq. I’m not an actively political kind of guy, but it just seemed fundamentally wrong to go to war over (at best) tenuous evidence. Robin Cook agreed and resigned, despite being a senior member of the UK’s government and therefore having access to a whole raft of information that the general public could not see.

The justification for war given to the British population was that Saddam Hussein was hoarding Weapons of Mass Destruction which he could readily deploy against our green and pleasant land with a mere flick of the wrist.

As a new report today shows, we were lied to.

The American people were told that they were going to war to stop Al-Qaeda launching any more attacks on their country. But now Donald Rumsfeld says he knew of no “strong, hard evidence” linking Iraq with the terrorist organisation.

But does all this really matter? The war is over, and there is nothing we can do? I would argue that more than 10,000 civilian deaths and 1,000 military deaths (to say nothing of the economic cost) demand justice.

I voted Labour, and so for Tony Blair, in the previous 2 British general elections. I will not again vote for someone who treats war so casually, and so will not vote Labour until Tony Blair resigns. I challenge anyone who believes in justice and truth to think similarly at the upcoming US and UK elections.

Lessons from a Successful Agile Project Part 3

Apologies for the delay since the last installment of this series – I’ve been pretty busy with work and moving country. 🙂

Daily Practices

Our daily practices worked out well, and were pretty simple, so I wanted to describe them.

The day would start at 9:30 with a team stand-up. Present would be the entire project team (the 10 devs, 1 PM, 2 BAs and the tester), plus anyone else we were working with at the time (e.g. we would occasionally have iterations where a reporting specialist worked with us.) The department coach would also drop in every now and then. The stand-up would last about 10 minutes: we would go around the team one by one (standing in a circle), with each person reporting what they did yesterday and any particular problems they hit. Deep technical problems would be mentioned but not described in depth. Since we paired, often the first person in the circle would describe the pair’s activity, the second person would just say ‘What Bob said’, and that was considered perfectly OK.

Lesson 4 : Keep your stand-up meetings short, and allow the whole team to talk about what they’ve been up to

After the main stand-up, the developers would remain for a brief ‘tech stand-up’. About one time in three we would spend a few minutes discussing any tech issues that arose from the main stand-up. With that over, we would pair up and distribute stories. This basically worked as follows:

– If a pair from the previous day were near to completion and/or had only just started pairing, they would often stay paired and stay on the same card.

– The remaining people would pair up, normally with a different partner and often with someone they hadn’t paired with for a while.

– For any started, but incomplete, stories, at least one member of the pair would always have worked on the story the previous day.

– If people remained after all the started-but-incomplete cards had been distributed, new stories would be distributed according to the priorities set in the IPM.

– We would normally have one pair assigned to bugs.

If stories were completed in the middle of the day, we might iterate the above process with a few people on the team. For example, pair rotation could happen during the day, and new stories could be started during the day.

The important point not mentioned here is anything about ‘story ownership’. I’ve often seen projects where each story is ‘owned’ by a particular developer for a particular iteration. I really don’t understand the value in this and believe it to be the sign of either a micro-managing PM or a development team that lacks internal trust. The developers should be jointly responsible for all the stories in an iteration. If the iteration is running slow, it is the team’s, not an individual’s, responsibility to pick up the slack where required.

I actually think ‘individual ownership’ is damaging. Consider the following:

– If there are people on the team with specific knowledge of an area that no-one else has, it is the team’s responsibility to spread that knowledge around. Individual sign-up does not encourage the team to start picking up specialised tasks, so knowledge sharing is limited.

– What does individual responsibility mean? I’ve heard of projects where it means that if a story is running behind in an iteration, it is that individual’s responsibility to pick up the slack, if necessary by working weekends. This is terrible for team morale.

– Individual sign-up hinders pair rotation. Consider a story that takes 3 days to implement. If one specific person has to work on that card every day, a maximum of 4 people will work on that card. If the ideas above are used instead, 5 people could work on that card. This means more rotation and more common knowledge in the team.

– It doesn’t fit with prioritisation. The 2 most important cards in an iteration should be the 2 that are worked on first. If both of these cards are ‘owned’ by the same individual, one is likely to be pushed back in the iteration and is therefore less likely to be completed.

Lesson 5 : Don’t do individual story sign-up. Instead, allocate stories on at least a daily basis, keep priority stories at the top of the stack, and make pair rotation a continual activity.