Blog

Book Review – William Gibson’s ‘Virtual Light’

Virtual Light (Bridge, #1) by William Gibson
My rating: 4 of 5 stars

The last time I read this book was 10+ years ago. After recently traveling to San Francisco, where a lot of this book is set, I wanted to re-read it. I remembered it as being OK, but not his best, and certainly not as good as the last book in this ‘Bridge Trilogy’ – ‘All Tomorrow’s Parties’.

I was also a little trepidatious about doing so since a relatively recent re-read of Neuromancer did not hold up to my memories.

It turned out, though, that Virtual Light is much, much better than I remembered. I remembered the concept of the San Francisco-Oakland Bay Bridge becoming, effectively, a shanty town as being brilliantly described, and it held up just as well this time. Furthermore, the plot seemed to whizz by and the character development was good. The ‘historical’ sub-plot of the ‘HIV martyr’ JD Shapely was well conceived and fantastically revealed over the course of the book.

I only give it 4 stars since I think the character development could have been a little better, and I also thought one of the key plot points (why the glasses were so valuable) was a little weak.

Another thing that works about this book now is that it was set in what was then the ‘near future’ – the early 2000s (it was first published in 1994). We’re obviously now past that point, so instead of science fiction this book effectively becomes speculative fiction. I enjoyed it as such, since it’s good to know things didn’t turn out as bad as they could have; on the other hand, some of the themes still offer a warning about how our society could become if left to be screwed up by corporations and the upper classes.


Online music stores finally fit my needs

I’ve been something of a Luddite up until now with my music purchases – almost all of them have been CDs – since I’ve not been a fan of buying downloaded music from iTunes, eMusic, etc. My main gripes with downloads were:

  1. Typically downloads have been lowish quality lossy formats. I rip all my music lossless for home listening
  2. Buying music as downloads has often worked out more expensive when buying full albums
  3. DRM was a show stopper – I don’t want to buy something from Apple and then not be able to play it on some other device in the future
  4. Inability to download more than once. CDs are a great physical backup
  5. Not having something tangible, e.g. CD booklets
  6. The selection on sites that have never had DRM, like eMusic, hasn’t been a great fit for my music tastes

In the last month or so this has started changing though – I’ve recently bought 5 albums from Apple’s iTunes Store and 4 physical CDs in the same time span. What’s changed? Going through the list from above:

  1. All iTunes downloads are now 256kbps AAC. This isn’t lossless, but is close enough that with my current kit I’m not going to be able to tell the difference
  2. It’s now a real toss-up as to which is cheaper – CDs or downloads. Albums on iTunes normally work out at $10; CDs on Amazon range from $6 to $14
  3. Apple doesn’t use DRM anymore
  4. Apple now offers multiple downloads – buy once and download again whenever you want
  5. The tangibility thing isn’t solved but really I don’t look at my CDs any more once I’ve ripped them and taken a quick browse through the booklet. They’re just gathering dust on my shelves
  6. The selection on the iTunes Store, at least for my tastes, is extensive

I don’t think I’m a total convert, and still expect to keep buying some CDs for a while (I expect I’ll buy one of the Pink Floyd box sets coming out later this year), but my days of being CD-exclusive are over.

Syncing music to iPod / iPhone from lossless iTunes library

For listening to music at home I use an Apple TV plugged into my fancyish sound system, and so I use music stored in lossless format. Since I use an Apple TV this music is stored on a computer using iTunes. I also have an iPhone, and my music library is on there too, but I can’t fit my entire lossless library on there (it’s more than 100GB) so up until now I’ve also kept a totally separate iTunes library, on a different computer, with the same music in 128kbps AAC format that can fit on my iPhone.

For a while iTunes has had an option that, when syncing to an iPod shuffle, automatically converts songs to a lower bitrate so that more fit on the device. I realized a couple of weeks ago that this option now exists for iPods and iPhones too – it appears on the main iPhone screen when you look at the device in iTunes.

Thanks to macyourself.com

I tried this out last week. It definitely works, but it takes a long time – about 15 hours syncing from my ~4-year-old iMac. I can live with that slowness, though, now that I don’t have to look after 2 separate libraries and manually convert all my music to smaller formats myself.


Dual KVM Pairing

Previously when I’ve pair programmed (2 people programming at the same computer at the same time) I’ve always used one keyboard, screen and mouse (KVM – V means Video). In the last couple of weeks I’ve been trying out ‘dual KVM’ pairing though – in this scenario each programmer has their own keyboard, mouse and monitor, with the screens set up to mirror each other (each person sees exactly the same thing).

This style of pairing isn’t new, and certainly is common on other teams at DRW, I just hadn’t used it before. In fact I had concerns, the principal ones being:

  1. Wouldn’t something be lost in communication with not having a shared physical screen? (I point at things on the screen fairly often when pairing)
  2. Wouldn’t programmers be constantly aware that they might be fighting each other for control of the mouse pointer / cursor if they had their own keyboard and mouse?

It turns out that I really like this style of pairing. My concerns about communicating with reference to the screen are largely alleviated by turning on line numbers in the code editor, and the keyboard and mouse fighting isn’t nearly the problem I feared. The benefits are chiefly ergonomic, but they are significant. Being able to look straight ahead, and not having to lean in towards the keyboard and mouse, makes work a lot more comfortable. The only thing I slightly miss is being able to use 2 screens as a stretched desktop, but that’s a price worth paying – I can always switch the screens back to that mode when I’m not pairing.

Experience using Scala as a functional testing language

6 months ago my team decided to migrate our functional tests to being coded in Scala rather than Java, the native language our application is written in. However, we have now reverted to writing them in plain Java. What follows is an experience report of this exercise and our reasons for bringing it to an end.

Background

The application under test is a message-driven server application. We define the functional tests of this application as those that run against the largest subset of our application we can define without requiring any out of process communication. The functional tests themselves run in process with the application under test.

Each functional test is written in a style that treats the whole system (mostly) as a black box. We stub out all external collaborators – those stubs simulate collaborators sending messages, and also collect any messages they receive, allowing the tests to make assertions about the application’s interactions with its environment.
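To make the collaborator-stub idea concrete, here is a minimal, hypothetical sketch in Java. The names (`PricingGateway` and so on) are purely illustrative – they are not the actual classes from our codebase – but the shape is the same: the stub implements the collaborator’s interface and records every outbound message so a test can assert on them afterwards.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical collaborator interface the application sends messages through.
interface PricingGateway {
    void send(String message);
}

// Stub used by functional tests: instead of talking to a real external
// system, it records every message the application sends.
class StubPricingGateway implements PricingGateway {
    private final List<String> received = new ArrayList<>();

    @Override
    public void send(String message) {
        received.add(message);
    }

    // Tests call this to assert on the application's outbound interactions.
    public List<String> receivedMessages() {
        return received;
    }
}
```

A test then wires the stub into the system in place of the real collaborator, drives the application by simulating inbound messages, and finishes by asserting on `receivedMessages()`.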

Our application is not trivial; writing functional tests that are concise, understandable and maintainable is a tricky task. We’ve created a fair number of support classes that start the system and act as the collaborator stubs described above to help keep the tests themselves clean.

We use functional tests extensively, and typically write at least one functional test per work item on our backlog. Just in terms of numbers about 10% of all the automated tests we have are functional in style, the rest are per-class-level unit tests.

For our development environment we use IntelliJ as our IDE and Rake as our command-line build environment.

Switching to Scala

We were interested in trying Scala as our functional test language for 2 main reasons:

  1. To improve clarity and maintainability of the tests
  2. To assess Scala as a possible production-code language

We already had a good number of functional tests going into this exercise and so our first task was to rewrite these in Scala. We also rewrote most of our test-support classes in Scala.

Since this was our first time writing Scala the translation wasn’t a blisteringly fast process but the Intellij Scala plugin’s ‘copy Java, paste as Scala’ feature did help us get going. If nothing else it was a useful guide when translating generic code.

Another initial task was to set up our development environment to support Scala. IntelliJ’s Scala plugin, while having a number of deficiencies, does the basics well, and we were very quickly compiling and testing Scala alongside Java in the same project. Even though IntelliJ will support Java and Scala code in the same source tree, we kept all Scala code in a separate tree to avoid complications with the command-line build. With that setup, updating our rake script to compile Scala and run the Scala tests was relatively easy.

What was good

The main thing that attracted us to Scala was the ability to write code in a semi-functional style much more concisely than can be done in plain Java. We’ve also been coding a good amount of C# recently and we sorely miss the basic functional support in C# 3 when switching to Java. We were not disappointed by Scala’s abilities in this regard: there were many occasions where we could write 1 line of concise, readable Scala where previously we’d had 8 lines of a Java method.
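As an illustration of the kind of conciseness gap I mean – this is a made-up example, not code from our tests – filtering a list was a one-liner in Scala (something like `val active = orders.filter(_.endsWith("ACTIVE"))`), where the pre-Java-8 equivalent we were writing looked like this:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterExample {
    // The verbose Java version of what is a one-line filter in Scala:
    // build a result list, loop, test each element, accumulate.
    static List<String> activeOrders(List<String> orders) {
        List<String> active = new ArrayList<>();
        for (String order : orders) {
            if (order.endsWith("ACTIVE")) {
                active.add(order);
            }
        }
        return active;
    }
}
```

Multiply that kind of saving across a test suite and the appeal is obvious.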

Why drop it?

There were several reasons we decided to roll back to Java, and to be fair to Scala most of them were not its fault.

The biggest reason was that despite 6 months of experience we still found we were slower to code and debug Scala than Java. We probably spend around 5 to 10% of our coding time working with the functional tests and that just isn’t enough to really ‘get’ the language. I think this would be similar for most languages – you’ve got to use them significantly to become fluent in them – but I think this is particularly true with Scala since it is a large language with an equivalently large library.

I don’t think it’s just the time aspect either. Tests are a very specific style of coding, and mostly procedural. Where we got most of the benefit of Scala was in our test support classes, but even those aren’t hugely complex. We never got into any meaty problems in our Scala realm and so never really pushed our knowledge of it.

Scala is absolutely a more powerful language than Java, and as I mentioned above we could write code more concisely in Scala than we could in Java. However, IntelliJ is a great tool and it makes up for a surprising number of Java’s deficiencies. You end up with more code on the screen with Java, but I’m not convinced that it takes more time to write it. Furthermore, once the code is written the rest of the IDE experience is far better in Java than Scala – compiling is faster, code browsing works much better, and debugging Scala in IntelliJ is no fun at all. (Yes, we use a debugger – I know that probably makes us awful programmers in the eyes of some readers!)

Again, this isn’t necessarily Scala’s fault – if I didn’t have an IDE at all, Java would be more painful than Scala – but I do have an IDE, and even if I don’t write the most elegant solution, that’s not my goal. My goal is to create functioning software as quickly as I can (for the next and following releases).

Some reasons we ditched Scala though can’t be blamed on tools or the particular problem we were trying to solve with it. Scala is large, larger than I’m comfortable with. I want a language that’s more opinionated, at least when I’m getting started.

Furthermore, the libraries, especially the collection libraries, are hard to get a handle on. As a particular example, Scala has both mutable and immutable Set classes… but they are both called ‘Set’. Relying on a namespace definition to specify something as fundamental as a collection type is just plain frustrating. The interop between Java and Scala, specifically with collection types, can also be painful (even with some of the updates in Scala 2.8).

In Conclusion

To me, using Scala isn’t a great fit in a polyglot environment: it’s a large language to learn, the interop still needs work, and the tool chain adds friction in comparison with plain Java. That said, I think with time Scala could become a very interesting monoglot language for developing an entire app on the JVM – in that scenario developers would naturally come to learn it more thoroughly and wouldn’t be brushing up against interop problems.

Using Scala was a good experience overall and has been yet another push for us to write more functional-style Java. We’ll be looking at other languages to enhance our productivity but will likely leave our functional tests in Java.

The death of agile

First go and read this great post by Bill Caputo. Bill’s site doesn’t seem to allow comments right now so I’ll put my response here instead.

I think part of the problem is that the agile ‘movement’ is so top-heavy with consultants. For many of these consultancies it’s very hard to sell a story like agile – which isn’t tied to a specific technology – when it’s packaged as a technical solution. They have to make it a business process problem to get through the door at a high enough dollar rate, or (for bigger consulting firms) to get enough bums on seats to make it worth the effort.

Think of other cross-company technology efforts, like standards bodies. Are all of these packed to the gills with consultants? No, they have a bunch of CTOs, senior developers, or whoever else is having to live with this stuff every day in a consistent environment. They do have consultants too, but not drowning out everyone else.

I know that the agile movement started with a bunch of really smart people, most of whom were consultants, and some of its current leaders (tip of the hat to some of my previous colleagues still at ThoughtWorks) are brilliant and continue to add valuable insight to our industry.

However, for agile to get anywhere beyond what it’s become (mostly a big mix of fluffy ideas that are easily billable but which don’t really solve anything without a discipline most companies are incapable of), it needs much greater diversity in the backgrounds of its leaders. Unfortunately I don’t see that happening – it’s just too big. Take the Agile 20xx conferences – they’re now basically 3 things:

  • 101-level training for newbies
  • an expo for largely pseudo-agile consulting firms and mediocre tools
  • a small amount of people who’ve known each other for ages catching up and complaining about the state of agile.

So I think you’re right, Bill, agile is dead. It served a good purpose, and did a pretty good job of giving our industry the kick up the behind it needed, but it is now pining for the fjords.

To end optimistically, though: there’s still a lot of great stuff going on in our industry; it’s just that these days I’m much more interested in technically based conferences and communities, and in having conversations about process on the side of these. It’s from these technical communities that I’ve learned about things like Kanban, for example. And it’s a blessed relief not to have to justify whether the team I’m on ‘is agile or not’.

Retlang & Jetlang

Retlang and Jetlang are open-source libraries for the .NET CLR and JVM that provide concurrency through in-process messaging. Mike Rettig, one of my colleagues at DRW, is the lead of both of these projects.
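To give a flavor of the in-process messaging idea these libraries are built around – and to be clear, this is NOT the actual Jetlang API, just a stripped-down sketch of the pattern using only the JDK – the core concept is a channel that publishers put messages on and that a single consumer thread (what Jetlang calls a fiber) drains in order:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of an in-process message channel: publishers enqueue
// messages, a consumer dequeues them in FIFO order. Because each consumer
// drains its own queue on its own thread, subscribers need no locking of
// their own state - concurrency comes from message passing, not shared
// mutable memory.
class Channel<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

    void publish(T message) throws InterruptedException {
        queue.put(message);
    }

    T take() throws InterruptedException {
        return queue.take(); // blocks until a message is available
    }
}
```

In the real libraries a fiber owns the consuming thread and dispatches each message to subscriber callbacks, so the sketch above is only the skeleton of the idea.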

Today at Speakerconf I presented on these libraries, and the slides can be found here (in Keynote ’09) and here (in PDF).

I encourage you to use the project mailing lists (for Retlang and Jetlang) if you are interested in learning more.

2008 Gadgets Review – #4 – Twitter

Strictly speaking I signed up for Twitter in 2007, but I never used it very much. I didn’t find a way to read it that I liked, and there wasn’t much I found interesting to read.

This changed this year though. On the application front, I started using Twitterrific on the iPhone. It’s a great thing to check a few times a day when I have a spare couple of minutes away from my computer, not talking to anyone else, waiting for something to happen. I’ll leave the exact details of when such scenarios occur as an exercise for the reader…

Secondly, I started following a critical mass of people who wrote enough that I always had something to read, but not so much as to be spamming 40 tweets a day. OK, not usually (*ahem* Josh Graham 😉 )

One interesting thing about Twitter is that it’s very much a uni-directional broadcast. People can subscribe or unsubscribe to my feed as they want and really it doesn’t make any difference to me, and I don’t really know about it. Compare this with Facebook, for instance, which is far more of a joint relationship – if someone removes me as a friend from their contacts, they are also removed from my contacts. If they want to add me back, there has to be a confirmation on my part, so I would see them attaching and detaching to my status feed, as it were.

Because Twitter has a looser coupling, I feel more able to put more status updates out when I want, tweet when I’m drunk (although that’s seldom a good idea), etc.

Facebook was my social networking app of 2007, Twitter of 2008. It’s likely by the end of 2009 I’ll have something else going on.

You can find my twitter feed at http://twitter.com/mikebroberts