GMail Extra – Gmail Drive

Yes, more Google dribblings. This time it’s Gmail Drive, which lets you have a Windows drive that is backed by your GMail account! (Go back and read that again and think about it. It’s your own personal file server, accessible from anywhere.)

The idea was based on GmailFS for UNIX, but it’s still inspired.

Google Desktop – I guess it was just a matter of time

I know I’m at risk of starting to sound like a bit of a stuck record when it comes to Google, but I needed to blog Google Desktop. It’s a really simple concept – Google searching your own machine – which I think will become as ubiquitous as their internet search engine.

They’re in the early days at the moment – it only works on Windows, it only searches specific file extensions (e.g. I’d like to search other plain text files, not just ‘.txt’), it only searches Internet Explorer (I’d like it to search Firefox), etc., but you can see where they’re going.

What will be interesting to see is how this fits in with the new Search functionality coming out in Longhorn.

Surely it’s only (another) matter of time before they release a server application people can install on their Corporate LANs to do the same kind of thing (which Google Desktop could talk to, maybe – perhaps called ‘Google Enterprise’?). They already have the Google Appliance, but I think companies would be happier to install software on their own kit than use a totally separate piece of hardware. The software could sit next to any corporate version of GMail!!

Anyway, go get Google Desktop, try it out, and send them feedback.

What does Skype mean anyway

Skype is a free Instant Messaging App that also does really high quality free Voice over IP. It’s easy to use, free, and the sound quality and lag are great even over wireless LAN and long distance. And did I say it was free?

It also does voice calling to real phones. That’s not quite free, but is cheap (probably cheaper than using a phone card if you’re making international calls.)

Go get it from www.skype.com. And if I know you, email me, we can exchange contact details, and start Skype’ing!

You've got GMail

I’ve always considered myself a fairly demanding email user. For a long time, I’ve expected to be able to use folder hierarchies, server-side rules, secure connections and an easy to use interface. For personal email I’ve used IMAPS or IMAP/ssh tunnel and procmail since 1999. I expect proprietary mail systems to fit the bill too (and I’m often disappointed when they don’t.)

But now along comes GMail and it’s changed 5 years of habits. I have switched pretty much all of my personal and mailing list email to it. It doesn’t have offline support, but I spend most of my time at a computer on a broadband connection now so that’s less important.

There are several reasons for switching, but they are all usability related: filtering is much easier to do than in procmail; I find I don’t need hierarchies any more; I have a manageable inbox for the first time in years; it’s blisteringly quick; and many more besides.

I can’t recommend it enough – switch to GMail today if you can. If you know me and want me to refer you for a GMail account just ask.

Lessons from a Successful Agile Project Part 4

Anyone who knows me would have expected me to spout on about tools and build issues before now, and I think it says something about how important the previous points were to this project that build only makes it into the fourth chapter, but here it is at last.

Technical Automation

Despite having 10 highly productive developers, and a fairly complex build process, we did not need a full-time ‘Buildmaster’. I think there were a few reasons for this:

– A few of us on the team had had enough build experience over the years that we now had a good feeling of ‘what worked’ on an agile project

– The build environment was under the same common ownership as the source base

– Major build improvements would be driven by specific stories

– Minor build improvements would be treated as refactorings and done at any time

This was beneficial since no one person was ‘stuck’ with the tag of being in charge of the build, and everyone else on the team felt empowered to change anything that was causing them pain.

I should add at this point that if your build / deployment setup isn’t sufficiently automated and easy to troubleshoot, then you probably will need a full-time Buildmaster. One of their tasks should be to make themselves redundant by automating the build to that point.

Lesson 6 : Aim to not have a specific Buildmaster. Instead, treat the build system as part of the commonly owned source tree and make sure anyone can add value to its on-going development.

There were many things that worked really well in our build system but I’ll just talk about 2 of them. The first covered the area of build configuration.

I’ve seen many ‘difficult’ build systems where there are a plethora of build config files, build launch scripts, environment variable dependencies, complicated build invocations, etc. All of this is (in my mind) totally unnecessary. On this particular project, there was one property of build configuration that changed between developer workstations and that was the database name to use. Apart from that, everything was the same. So the entire configuration for our development workstation environment of 15 (or so) machines fitted in one page of a properties file which itself was checked into source control.

Now of course we had some clever stuff going on to figure out which specific machine you were using, how to override for Continuous Integration environments, etc., but most of the time developers didn’t have to worry about it.
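To give a flavour of the idea, here’s a minimal sketch of a hostname-keyed properties file where a machine-specific entry overrides the shared default. This is not the project’s actual implementation (that was .NET-based), and all the key names and hostnames here are invented for illustration:

```python
# Sketch of hostname-keyed build configuration from one checked-in
# properties file. All key names and hostnames are hypothetical.
import socket

def parse_properties(text):
    """Parse simple 'key=value' lines, ignoring blanks and comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def lookup(props, key, hostname=None):
    """Prefer a machine-specific override, falling back to the default."""
    hostname = hostname or socket.gethostname()
    return props.get(f"{key}.{hostname}", props.get(key))

config = parse_properties("""
# One file covers every workstation; only the DB name varies.
db.name=dev_default
db.name.buildserver=ci_integration
""")

print(lookup(config, "db.name", hostname="dev42"))        # → dev_default
print(lookup(config, "db.name", hostname="buildserver"))  # → ci_integration
```

Because the file is in source control, adding a new workstation is a one-line, reviewable change rather than a per-machine setup ritual.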

This setup saved us so many headaches and so much time that I would now always consider it a ‘must-have’ for any future project I work on. I hope to write an article on the exact details at a later date, so keep your eyes peeled.

Lesson 7 : Minimise the complexity of your build configuration system. Keep the entire development environment configuration in one file and store it in Source Control.

The final thing I want to mention here is our database automation. We migrated the full database on every build. Go back and read that last sentence and think about it for a moment… OK, I’ll tell you why I think it’s great:

– We never hit a data migration problem on release: we migrated all the time, against real data and so any problems were caught early

– We didn’t have an ostracised data team: data migration work formed part of any story that required it, and all the developers did it.

– Since we wanted to ‘do it all the time’, the data migration process was highly automated. That meant our DB release process that we gave to the operational DBAs was really simple, reducing the likelihood of errors and getting us in the DBAs’ good books. 🙂

– We could release at any time – we always had enough data migration code written to support the state of the code base.

Technically, we did this with a bunch of NAnt scripts and the command-line ‘osql’ tool. Again, it would be too big to document in a blog item, but I hope to write it up later.
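The general pattern – sequentially ordered migration scripts plus a table recording which have already been applied – can be sketched as follows. To be clear, this is not the project’s NAnt/osql setup; it’s a generic illustration using Python with an in-memory SQLite database, and the script names and table are invented:

```python
# Sketch of run-on-every-build data migration: apply ordered SQL
# scripts, recording which have run in a version table so repeated
# runs are no-ops. Script names and schema here are hypothetical.
import sqlite3

MIGRATIONS = [
    ("001_create_orders", "CREATE TABLE orders (id INTEGER, total REAL)"),
    ("002_add_customer",  "ALTER TABLE orders ADD COLUMN customer TEXT"),
]

def migrate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, sql in MIGRATIONS:
        if name not in applied:  # only new scripts run on this build
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # second run is a no-op, as on the next build
print([r[0] for r in conn.execute("SELECT name FROM schema_version ORDER BY name")])
```

Because the migration set is append-only and versioned alongside the code, the database can always be brought up to match whatever revision of the code base is being released.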

Lesson 8 : Make data migration part of your regular build. Make developers write data migration code as part of implementing a story that requires it.

Lessons from a Successful Agile Project Part 3

Apologies for the delay since the last installment of this series – I’ve been pretty busy with work and moving country. 🙂

Daily Practices

Our daily practices worked out well, and were pretty simple, so I wanted to describe them.

The day would start at 9:30 with a team stand-up. People present would be the entire project team (the 10 devs, 1 PM, 2 BAs and 1 tester), plus anyone we were also working with at the time (e.g. we would occasionally have iterations where a reporting specialist would work with us.) Also the department coach would drop in every now and then. The initial stand-up would last about 10 minutes, going around the team one-by-one (we would stand in a circle) with people reporting what they did yesterday and any particular problems they hit. Deeply technical problems would be mentioned but not discussed in depth. Since we paired, often the first person in the circle would describe the activity, and the second person would just say ‘What Bob said’, and that would be considered perfectly OK.

Lesson 4 : Keep your stand-up meetings short, and allow the whole team to talk about what they’ve been up to

After the main stand-up, the developers would remain for a brief ‘tech stand-up’. About 1 time in 3 we would quickly discuss any tech issues that arose from the main stand-up. With that over, we would pair up and distribute stories. This basically worked as follows:

– If a pair from the previous day were near to completion and/or had only just started pairing they would often stay paired and stay on the same card.

– The remaining people would pair up as normal, each with a different partner and often with someone they hadn’t paired with for a while.

– For any started but incomplete stories, at least one member of the new pair would always have worked on the story the previous day.

– If people remained after all started but incomplete cards had been distributed, new stories would be distributed according to priority set in the IPM.

– We would normally have one pair assigned to bugs.

If stories were completed in the middle of the day, we might iterate the above process with a few people on the team. For example, pair rotation could happen during the day, and new stories could be started during the day.

The important point that is not mentioned here is anything about ‘story ownership’. I’ve often seen projects where each story is ‘owned’ by a particular developer for a particular iteration. I really don’t understand the value in this and believe it to be the sign of either a micro-managing PM or a development team that doesn’t have internal trust. The developers should be jointly responsible for all stories in an iteration. If the iteration is running slow, it is the team’s, not an individual’s, responsibility to pick up the slack where required.

I actually think ‘individual ownership’ is damaging. Consider the following:

– If there are people on the team that have specific knowledge about an area that no-one else has, it is the team’s responsibility to spread that knowledge around – individual sign-up is not going to encourage the team to start picking up specialized tasks and so knowledge sharing is limited.

– What does individual responsibility mean? I’ve heard of some projects where it means that if a story is running behind in an iteration it is that individual’s responsibility to pick up the slack, if necessary by working weekends. This is terrible for team morale.

– Individual sign-up hinders pair rotation. Consider a story that takes 3 days to implement. If one specific person has to work on that card every day, a maximum of 4 people will work on that card. If the ideas above are used instead, 5 people could work on that card. This means more rotation and more common knowledge in the team.

– It doesn’t fit with prioritisation. The 2 most important cards in an iteration should be the 2 that are worked on first. If these 2 cards are both ‘owned’ by the same individual, one is likely to be pushed back in the iteration and therefore is less likely to be completed.

Lesson 5 : Don’t do individual story sign-up. Instead allocate stories on at least a daily basis, keeping priority stories at the top of the stack and pair rotation a continual activity.

Lessons from a Successful Agile Project Part 2

Lightweight Planning

It was not appropriate to deploy to the ‘real’ customer every iteration for a few reasons, so we bundled new functionality into several releases, each being 2 – 3 months long.

We would have a release planning meeting at the beginning of every release, and normally another halfway through to keep track of how we were doing. Such meetings were typically between 1 and 2 hours long. The process would be:

– One of the BAs would read out the ‘release level’ stories one by one

– For each story each developer would write down an estimate in rough ‘ideal days’

– Once all the stories were read out each developer would sum their estimates

– We would then go around the room and all the totals would be written on a whiteboard.

– From this, we would see a basic idea of how much work the team thought there was

This process was not at all in depth, but it was only meant to give a ‘finger in the air’ estimation. Keeping the meetings short also kept the developers from falling asleep. 🙂

Before each iteration planning meeting (IPM), the project manager, the 2 BAs and 2 of the developers would have a quick look through the upcoming stories to pick candidates to discuss at the IPM. This decision was not final – it was just an optimisation so that we’d have a first cut of stories to discuss in the IPM proper.

The IPM itself would normally be about an hour long. The sequence of activities would normally be:

– The Iteration Manager (one of the developers) would describe the results of the last iteration in terms of completed stories, total ‘ideal days’ completed, and stories remaining (‘hangover’).

– The team would estimate the remaining ideal days for each of the hangover stories.

– The Iteration Manager would give a rough amount of how many ideal days were left for new stories, based on velocity and how many pairs we had for the next iteration.

– One of the BAs would then describe a story. At this point a combination of 3 things would happen:

+ A discussion would spark up

+ The team would split the story (e.g. because of size)

+ The developers would estimate the story

– Once the story was estimated, another story would be played and the process repeated

– This story description / estimation process would continue until we were pretty happy we had the right amount of work for the iteration based on how many ideal days we thought we needed from the beginning of the meeting.
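The capacity arithmetic behind the Iteration Manager’s ‘ideal days left for new stories’ figure can be sketched like this (the function name and all the numbers are invented for illustration):

```python
# Sketch of the iteration capacity arithmetic; all numbers are invented.
def new_story_capacity(velocity_per_pair, pairs, hangover_estimate):
    """Ideal days available for new stories this iteration."""
    return velocity_per_pair * pairs - hangover_estimate

# e.g. 5 pairs, a historical velocity of 4 ideal days per pair per
# iteration, and 6 ideal days of hangover stories still to finish:
print(new_story_capacity(4, 5, 6))  # → 14 ideal days for new stories
```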

This process is very similar to what Martin Fowler describes here. It’s not particularly precise, but we found it was good enough for our project to work, and at the same time meant that we weren’t bogged down by process.

Related to our planning processes were our tracking processes. These were also kept simple, and were based solely on completed stories (i.e. completed and remaining stories per iteration).

Lesson number 3 is then:

Keep your planning processes lightweight, and adapt them to suit the team and the project

Lessons from a Successful Agile Project Part 1

For the last 9 months I’ve been working as part of the best project I’ve experienced in my career so far. The project has successfully delivered into production 5 times in the space of a year, with only 2 production bugs (each of which was fixed and deployed in less than a day). The real customers are happy due to the successful delivery (and relatively low total cost), and so is the project team, due to being able to deliver successfully while keeping a sustainable pace and good team spirit throughout.

So, what’s been the secret of the team’s success? As I roll off the project, I want to write down some of the reasons before I forget.

Great People

The core team is 14 people – 1 Project Manager (PM), 2 Business Analysts, 1 tester / admin and 10 developers (including 1 developer taking an iteration manager role). Everyone on the team has been top quality. The most important attributes throughout the team have been:

– Desire for team success over personal achievement

– Knowledge and acceptance of own abilities and limits

– Willingness to speak up

– Willingness to accept the team’s decisions over personal choice

Individually, the PM is the best I’ve ever worked with. The way he has worked with the business (in areas such as expectation management) has been crucial to both the happiness of the business themselves, and the rest of the team.

It’s interesting to note that even though there have been 10 developers throughout, there has been considerable rotation of who those 10 people are. This has been possible due to the qualities above, and also because of practices like pair programming.

So lesson number 1 for a successful agile project:

Create a great team, and focus on getting an excellent project manager

Use Business Analysts as Customer Proxies

Having an on-team customer is a great idea since it means that developers can quickly get feedback for questions about how a story should be implemented, and also the customer can give early feedback for how a new feature is being developed. (Note I say ‘on-team’ rather than ‘on-site’ purposefully to show that the customer is part of the core team – a ‘pig’ rather than a ‘chicken’ in Scrum-speak.)

The problem with using a ‘real’ customer, though, is that ‘real’ customers also need to do ‘real’ work, and so there is a conflict of interest between getting their ‘real’ work done and helping the team deliver the project. (This reminds me of the ‘context switching’ problems Tom DeMarco talks about in his book ‘Slack’.) There are other problems with using ‘real’ customers too, but this to me is the biggest.

So on our project we haven’t had an on-team ‘real’ customer. Instead, we have 2 Business Analysts who act as ‘customer proxies’. The BAs need to do (at least) the following:

– Talk with the customer groups to come up with a set of features

– Figure out how these features can be coalesced into a set of stories

– Communicate effectively with the developers to explain the problem domain, and the stories to be implemented

– Work with the customers to get early feedback (e.g. from UAT)

Having the BAs as part of our core team has allowed us to keep open and active channels of communication, enabling early and relevant feedback on issues.

Lesson number 2 then is:

Use a pair of on-team Business Analysts instead of on-site customers

These 2 areas have been about people, and team structure. I think they’re the most important lessons I’ve learned from the project. In later entries I hope to talk about areas such as how planning worked, day-to-day team practices, and development practices.

Visual Studio Web Projects considered harmful

When you want to work on a web application in Visual Studio, the default behaviour is to use a project type that relies on FrontPage Server Extensions. (Web applications include ASP.NET projects and Web Services.) There are big problems with this:

– Every development environment must have IIS set up, even if it’s only being used to work on parts of the app separate from the web project.

– Every development environment must have IIS set up, even if an alternative ASP.NET runtime is being used (e.g. Cassini)

– Every environment must have a virtual directory set up with a common name (this is similar to the Common Absolute Paths Anti Pattern)

– Source Control Integration is often problematic

The solution to this problem is to just use plain DLL projects. You’ll need to explicitly add references to the System.Web assembly, but that’s basically it. You’ll probably want to change the Debug and Release ‘output’ paths to just ‘bin’ as well, since that’s the default for web apps.

One restriction is that, out of the box, Visual Studio won’t allow you to create a new aspx/asax page and associated code-behind using ‘Add new’ on a DLL project. There’s a work-around, but it needs to be done for every developer environment. Here’s how:

– Go into C:\Program Files\Microsoft Visual Studio .NET 2003\VC#\CSharpProjectItems\WebProjectItems\UI and open the ui.vsdir file

– Copy the lines that end in WebForm.aspx, WebUserControl.ascx, HTMLPage.htm, Frameset.htm, StyleSheet.css, and the ‘mobile’ types if you want them too.

– Paste the lines into C:\Program Files\Microsoft Visual Studio .NET 2003\VC#\CSharpProjectItems\LocalProjectItems\UI\ui.vsdir

– When you choose ‘Add new item’, the web templates will now appear in the ‘UI’ subfolder (you may need to restart Visual Studio first)

2 notes – if you installed Visual Studio somewhere else you’ll need to adjust the paths, and you should also back up the ui.vsdir file you are changing. (Thanks to Owen Rogers for pointing out this workaround)

Some other behaviour you do get for free when you use web projects:

– Hitting F5 launches a browser window at your default page

– You can debug the server side of your application

If you switch to using DLL projects you can still debug your ASP.NET hosted application by using one of the following alternatives:

– The easiest way is to manually ‘attach’ to the ASP.NET worker process (in Visual Studio, go to ‘Tools -> Debug Processes’, and select ‘aspnet_wp.exe’). This process starts automatically the first time you access an ASP.NET resource. Doing this means you can debug the server side of your application as normal.

– Alternatively, you can set the debug properties on the DLL (right click on the project, select ‘Properties’, then select ‘Debugging’ in ‘Configuration Properties’). Set ‘Enable ASP.NET Debugging’ to true, set ‘Debug Mode’ to URL, and set the ‘Start URL’ as required (there seems to be a bug in VS where you need to reopen the properties box to be able to set this last property.)

Benefits to this scheme beyond solving the original problems are:

– Since all source is developed under one tree it is much easier to work on 2 copies of the same project on one machine

– Similarly to the above point, it is much easier to work concurrently on more than one branch of the same project on one machine

– It’s faster – Visual Studio just needs to use file access

– It’s simpler – developers are already used to working with DLL projects so they don’t need to learn anything new.

– You can compile, and run tests that don’t depend on retrieving web content, without having IIS running.