Rebugging (verb): The practice of refactoring without automated tests. Typically followed by a session of debugging.
🙂
I don’t normally blog about other people’s blog entries, but this is fascinating.
I’m a big fan of using a lot of delegation, with small methods and small classes, and I can understand that when you’re sitting on a bunch of libraries that think the same way, you can end up with this kind of stack trace.
Part of me wonders whether this is actually a problem. Modern IDEs (e.g. IntelliJ, Resharper, Eclipse) make navigating abstracted delegation chains pretty easy (I’m forever pressing Ctrl(+Alt)+B or Ctrl+Alt+F7 in Resharper), and I would hope that the managed VMs we use do clever stuff too. That said, I wouldn’t expect to have to understand library code more than a couple of calls outside of my own code, so libraries would have to make sure that their abstractions don’t leak too much.
Thanks to Simon Stewart for the link.
I’m putting together an ‘introduction to Agile’ presentation this week. I’ve done them before, but I like going back and writing these things from scratch since it allows me to think about what I’ve learned since the last time.
One thing I’ve added this time around is the first principle from the Agile Manifesto, which states:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
I’ve read this statement before, and even blogged about the business value slant it has, but I’d never before considered how much it underpins everything I believe as an IT professional. Don’t tell anyone, but I don’t really care about test-driven development, stories, refactoring or even continuous integration. They’re just tools I use in order to fulfill the actual goal of delivering value early and often. At the moment they’re the best tools I know of for doing so, which is why I practice them, talk about them and even spend my own time writing open-source software to make some of them easier.
But they’re not the ‘axiom’ of why I do what I do. Getting useful, valuable software in front of a real user; seeing the smile on their face when they see their professional lives are going to be easier (if that’s the value they gain); and having a conversation about what we can deliver next, and soon, is actually why I do what I do.
When I talked about this statement before, I focussed on the business value phrase, i.e. figuring out what the monetary benefit of a feature is. If you just take the word valuable as meaning ‘something which somebody values’, then this principle can be applied to any prospective customer in the business IT world.
As I mentioned in part 1 on my ‘life’ blog, today I’m moving from London to New York City.
For a number of reasons, I’ve decided to combine this move with a change of employer, and from Monday I’ll be working for Finetix.
Before I talk about the ‘new guys’, just a few words about ThoughtWorks. I worked for TW for nearly 4 years, and it was a brilliant experience and an outstanding learning opportunity. I was able to work with some amazing people, amazing not just because they are some of the best technologists working in the same part of the IT industry as me, but also because they are great individuals. Over the 4 years I made a number of friends in ThoughtWorks, many of whom I will be making a real effort to keep in touch with, not simply as a ‘networking opportunity’ but because my life is better having them as part of it.
ThoughtWorks continues to grow – I was the 11th hire in the UK office, which is now more than 150 strong, with more than 700 people globally – and as it does so it experiences the challenges of a growing organisation. I truly hope it can reconcile these challenges with the vision and energy that its people embody, for the benefit not just of those people, but also of the clients it works with.
Right, enough gushing about TW – on to the future!
Finetix are like ThoughtWorks in that they are primarily a software development consultancy, but Finetix are a much more concentrated firm than ThoughtWorks. Their Manhattan office is their primary one, and has fewer than 100 people. This is the first thing that interests me – as I said in part 1 of this blog entry, I’m a bit of a ‘people person’, and I like to know, at least to some extent, most of the people who work in the same organisation as me.
Secondly, Finetix concentrate their client space – they work pretty much solely for ‘Capital Markets’ firms – investment banks, hedge funds, etc. ThoughtWorks gave me the opportunity for a broad experience, but part of what Finetix offers me is the ability to hone my skills into a particular ‘vertical’ and so become (at least for a while in my career) a ‘master of one trade’ rather than a jack of all.
This concentration of work not only allows Finetix consultants to build up good background ‘domain knowledge’ of the finance industry, but also allows them to specialise their technology skills in a particular problem space. The problems that Finetix consultants face are tricky ones too – the success or otherwise of an investment bank partly depends on its technology, and their systems often use the very latest hardware and software techniques to beat their competitors, sometimes by margins measured in milliseconds. I’m very much looking forward to improving my knowledge and experience in this area.
Finetix put ‘agile’ development forward as their preferred methodology. I’ve been practicing eXtreme Programming (XP) oriented agile development for all my time at ThoughtWorks, and I look forward to any opportunities that arise within Finetix and their clients to use some of this experience.
Finally, 2 of the 3 guys that run Finetix are ex-pats, so I’m hoping that when England win the football (OK, soccer for you Americans! 😉 ) World Cup this summer I’ll be watching them do so with a few colleagues. 🙂
Joining a new company is always something of a gamble, especially when you leave a company with happy memories. I’m excited to take the opportunity of being part of Finetix, with the hope of gaining new knowledge, experience and friends.
I’ve released version 1.1.1 of Tree Surgeon. It’s just a small update from version 1.1 to include the latest version of NUnit.
In terms of future work on Tree Surgeon, the biggest update would be to support .NET 2 / Visual Studio 2005. I still haven’t done much work in .NET 2, but I’m hoping one of my colleagues at ThoughtWorks will help out here. I’m also still thinking about exactly how this would work, in terms of whether we should generate trees that will build in both .NET 1.1 and .NET 2. For the first cut, I’m thinking not, and the user would just pick .NET 1.1 / Visual Studio 2003 or .NET 2 / Visual Studio 2005.
I’d also like to support projects that will build in Mono (I’d like if possible to still use .csproj files as the definition of how to compile though.)
Finally on the list is being able to generate different types of projects, making it easier for people to customise the template, that kind of thing. As usual though, time is lacking…
7 years ago someone showed me VMWare for the first time. I remember my jaw dropping. Here was somebody running Linux on their computer, but also running a full-blown instance of Windows in a ‘Virtual Machine’ (VM). The VM even showed a BIOS screen when you ‘powered’ it on. This was huge – being able to run applications on different Operating Systems in isolated machines opened up opportunities that just didn’t exist before.
In the years since then people have used VMWare, or Microsoft’s equivalents Virtual PC and Virtual Server, for a lot of different tasks, from being able to run Windows applications when your primary OS is Linux, to running VMs as production servers for ease of configuration. In the software development world VMs are often used to allow testing across a whole suite of OSes, user configurations, etc. For instance, when you are writing a web application you want to test with multiple OSes, multiple browsers, multiple screen resolutions, that kind of thing. You may have many different permutations of these, and to have a physical machine for each would just be too much overhead. With VM technology though you can set up multiple VMs very easily, and even automate your testing so you run against every configuration regularly (e.g. overnight.) Connextra, the eXtreme Programming pioneering company in the UK, were doing exactly this several years ago.
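To make that concrete with a rough sketch (the VM names and helper scripts below are entirely hypothetical placeholders, since the actual commands depend on which virtualisation product and test harness you use), an overnight job could loop over the configurations like this:

@echo off
rem Hypothetical sketch of an overnight test run against several VM configurations.
rem start-vm.cmd, run-tests-on-vm.cmd and stop-vm.cmd stand in for whatever your
rem VM product and test harness actually provide - they are not real commands.
for %%c in (win2k-ie5 winxp-ie6 winxp-firefox) do (
  call start-vm.cmd %%c
  call run-tests-on-vm.cmd %%c > results-%%c.log
  call stop-vm.cmd %%c
)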
VMWare have undoubtedly driven this in the x86 world, with Microsoft playing catch-up. This week though, VMWare dropped the biggest bombshell since they launched their original product – they made it free. No, we’re not talking about the limited functionality ‘VMWare Player’ which they announced a few months back, we’re talking about VMWare Server, the next version of their baseline GSX Server product, just renamed, with extra functionality and a price tag of zero.
Why is this such a big deal? ‘Local decisions’ often drive a ‘global strategy’. By that I mean that when we want to implement something new, it’s often hard to justify it in terms of long term benefit, and much easier to justify it short term. Using VM technology though is very much a long term benefit – the short term alternative is just to use a spare PC that’s hanging around. As such, it’s hard in a short term context to justify the cost of buying VMWare, or Virtual Server, licenses. By removing the licensing cost, VMWare have removed that short term barrier, and so made it easier for organisations to take the leap. Of course, VMWare are hoping that such organisations are going to want to purchase the higher-value products they offer once they start seeing the benefits of virtualisation.
I’m very excited about this – I think that VM technology is fantastic for software delivery, right from development through QA to production deployment. I’ll be posting more on this in the future.
The principles behind the Agile Manifesto state: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” I’ve often heard this shortened into a frequently used battle-cry of Agile: “It’s all about business value!”
This is an incredibly liberating and empowering statement. It tells us that we have a concrete, measurable fulcrum on which to weigh all our decisions. Should we implement feature A or feature B? Well, think about how much each is worth! Should we write this support documentation? Well, is the company going to save money by having it? And so on.
But there’s a problem. How many people in an organisation actually care about business value? Sure, the CEO probably does. He needs to justify the bottom line of the company accounts to the board. And so, by the chain of responsibility, should everyone else in the organisation. But what if someone doesn’t? How does an Agile project justify its existence and practices then?
An immediate counterpoint may be ‘well, if there’s an Agile project happening in a company, then obviously someone believes in business value, and they take the responsibility for the justification’. Great, but a project has more than one stakeholder to deal with, and what happens when some of those people don’t care about business value? What if they only care about process? What if, at the end of the day, they only care about doing work they can’t get fired for? How does Agile prove itself here? And no, you can’t just ignore those people. If you do, your project is doomed to fail.
In my experience Agile can be promoted as a worthwhile change in process without using the argument of business value, but it’s hard work. It gets harder the bigger the project, the more visibility the project has within the organisation, and the more bureaucratic the organisation is. Arguments about quality, visibility, repeatability, etc. can be used, but effectively what’s needed is knowledge of what motivates every individual within the stakeholder community. If that community consists of 50+ people, that’s a lot of work, requires a lot of patience, and needs (at least) an experienced Agile project manager or Agile coach. Even then there’s going to be a certain amount of extra work required within the team to satisfy the environment in which the project lives.
Of course, there’s also a danger that not even the project sponsor really cares about business value. Maybe they had ulterior motives when they picked the methodology or supplier, or maybe they move on from the project. In this environment, Agile can still win, but until Agile becomes ‘mainstream’ any such project is in a dangerous place and such teams need to tread carefully.
Looking forwards optimistically though, what if the Agile Software movement changed not just the world of software development, but also helped to bring about a more ‘agile oriented’ business world in general; one where business value, respect, individuals and interactions, and the ability to change were valued more highly than process, sticking to the rules, or making decisions based on how they would impact ‘me’ rather than ‘us’? In this business world, my life would indeed be easier.
Wikis are a great tool for keeping loosely structured documentation that a whole team can update. They offer a very low barrier to entry, both in terms of learning how to use them and in terms of infrastructure (no software to install, no shared document strategy to invent, that kind of thing.)
One problem with Wikis though is that they typically sit outside a team’s source control environment. This is bad because teams like the versioning functionality source control offers, and also because it means you have another repository of knowledge that needs to be backed up. Typically I’ve just lived with these problems, but on my current project we are using a Wiki to host our automated acceptance test scripts. We’re using Fitnesse, but that doesn’t really matter for the sake of this article. What does matter is that we definitely want our automated acceptance tests under source control, since they are closely tied to the state of our source code, so we need to figure out how to get our Wiki working with source control.
Our approach to solving this is as follows:
OK, so far so good: we define our Acceptance Tests in a wiki, and it fits into source control and our automated build in the normal way. As a nice side effect people can read and edit the Wiki offline. But we’ve lost something. We said at the top that there’s a low infrastructural barrier to entry in using a Wiki, and by introducing a source control environment that’s no longer the case. Is there a way we can have the ease of use of a ‘shared’ wiki but still keep it in source control? Yes, and here’s how we do it:
There are a few gotchas though to be aware of:
Right, less talk, more code! 🙂 How did we actually implement all of this? In terms of environment, we are using Subversion for source control, Fitnesse as our Wiki and CruiseControl.NET as our Wiki Update tool. For Fitnesse we updated the launch script as follows:
"%JAVA_HOME%\bin\java" -cp fitnesse.jar fitnesse.FitNesse -o -e 0 -p 8888
The -o and -e 0 options suppress unnecessary file updates and Fitnesse’s in-built source control. We also deleted all the .zip files which already existed (these exist only to support that in-built source control).
For the CruiseControl.NET project we have the following configuration (this is for CCNet 1.0):
<project name="Wiki Sync">
  <workingDirectory>c:\sourcecontrol\wiki-trunk\wiki</workingDirectory>
  <triggers>
    <intervalTrigger seconds="15" />
  </triggers>
  <sourcecontrol type="multi">
    <sourceControls>
      <svn>
        <trunkUrl>svn://oursvnserver/ourproject/trunk/wiki</trunkUrl>
        <workingDirectory>c:\sourcecontrol\wiki-trunk</workingDirectory>
        <autoGetSource>true</autoGetSource>
      </svn>
      <filtered>
        <sourceControlProvider type="filesystem">
          <repositoryRoot>c:\sourcecontrol\wiki-trunk\wiki</repositoryRoot>
        </sourceControlProvider>
        <exclusionFilters>
          <pathFilter><pattern>**.svn*</pattern></pathFilter>
        </exclusionFilters>
      </filtered>
    </sourceControls>
  </sourcecontrol>
  <tasks>
    <exec executable="sync.cmd" />
  </tasks>
</project>
This is a little complicated, but basically it means that ‘sync.cmd’ is called if changes occur in our Subversion copy of the wiki, or in the local version, ignoring any of Subversion’s own local files. Fitnesse is actually run as a service from c:\sourcecontrol\wiki-trunk\wiki.
sync.cmd is as follows:
@echo off
rem mark as removed any files that have been deleted
for /f "usebackq tokens=2" %%i in (`"svn st | findstr !"`) do svn rm %%i
rem add new files
svn add --force *.*
rem commit - this won't do anything if nothing is to be committed
svn commit -m "This is an automated Wiki commit"
The ‘svn update’ is done before all of this by the source control provider in CCNet. We may well update this to automatically handle conflicts (at the moment we have to do it manually by logging into the wiki server.)
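If we do automate it, one option (just a sketch on my part, and it assumes a Subversion client new enough to support the --accept option, which ours may not be) would be for the sync script to resolve conflicts in favour of the repository copy before committing, accepting that a conflicting local wiki edit would then lose out:

rem Sketch only: take the repository version whenever the update hits a conflict.
rem Assumes a Subversion client that supports --accept (1.5 or later).
svn update --accept theirs-full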
OK, let’s wrap this up. Wikis are great, writing acceptance tests in a wiki is also great, but acceptance tests should be in source control, so we’ve put our wiki into source control. Through a bit of CI hackery we still have a ‘shared’ wiki that anyone can edit without having to use source control. It really is a hack though – it would be far cleaner if the shared wiki actually persisted to, and read directly from, source control.
Kudos to Jeffrey Palermo – he’s already done most of this, blogged it and so provided the start to the work we did.
So, first things first, my blog has moved. It’s now at www.mikebroberts.com/blog/.
All things being well, your browser or RSS aggregator should already be looking at the new location right now. That’s because I’ve set up redirects from my old server (which runs Apache). It was all pretty easy really – I just created a single .htaccess file in the root of my old webspace with the following line in it:
Redirect /blog http://www.mikebroberts.com/blog
This redirects any request starting with /blog to the new space, and works for sub-directories too. Not only was this easy to setup, I didn’t need any administrative privileges on the server to do it.
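One refinement worth considering (this is a suggestion on my part rather than something I’ve actually set up) is telling Apache that the move is permanent, so that browsers and aggregators receive a 301 rather than the default temporary 302 redirect:

# mod_alias: flag the move as permanent (HTTP 301) instead of the default temporary redirect
Redirect permanent /blog http://www.mikebroberts.com/blog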
My new domain and webspace are hosted at TextDrive. They’re not the cheapest hosting company around but they provide a huge amount of functionality (including shell access) and just seem to be extremely competent and professional. At the moment my site is just static HTML which I uploaded using scp, but I’m looking forward to trying out some of the Subversion / DAV features and maybe even some Ruby hosting if I get around to learning it.
This week I’m really proud to blog that we’ve released version 1.0 of CruiseControl.NET. It’s taken 3 years, but I believe it’s been worth the wait. One of the key reasons I work on the project is that I need a tool in my day-to-day work that fulfils the features we try to implement in CruiseControl.NET, and I’m always happy when I install it, set up a project and see it running.
CruiseControl.NET is an enterprise-class automated integration server. I believe we can say this quite confidently based on the fact that we support 13 different source control tools, we have significant support for building and reporting on multiple projects across multiple servers, we offer inter-project co-ordination, and lots more besides.
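To give a flavour of the inter-project co-ordination side (this is an illustrative sketch rather than a snippet from a real configuration, and the exact trigger elements available depend on the CCNet version you’re running, so treat the names here as assumptions), one project can be set up to build whenever another project succeeds, even if that project lives on a different build server:

<project name="Acceptance Tests">
  <triggers>
    <!-- Illustrative only: poll the "Core Build" project (possibly on another build
         server) and trigger this project when it completes successfully. -->
    <projectTrigger serverUri="tcp://buildserver:21234/CruiseManager.rem" project="Core Build">
      <triggerStatus>Success</triggerStatus>
      <innerTrigger type="intervalTrigger" seconds="30" />
    </projectTrigger>
  </triggers>
</project>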
If you’re already a CruiseControl.NET user, I thoroughly recommend you upgrade to version 1.0. We’ve made significant updates to CCTray, added first-class support for MSBuild to help as you migrate to .NET 2, and made a whole bunch of other improvements.
So are we going to rest on our laurels now we’ve got the big one-oh out of the door? Far from it. We, along with the rest of the CruiseControl.NET community, are already working hard on 1.1. Which reminds me: we wouldn’t be where we are if it weren’t for the massive support of a whole raft of CCNet users supplying patches and answering questions on the mailing lists. I can honestly say the project would not be close to where it is today were it not for the CCNet user community, so a big thank-you to all of you out there.
You can download CruiseControl.NET 1.0 here. I hope you find it useful.