Back in California

So after 3 1/2 years, I’ve returned to California for a holiday, and it’s great to be back. 🙂 I’d forgotten just how stunning the geography and climate are here.

Today was spent waking up in San Jose, then visiting Monterey and the adjacent, spectacular 17 Mile Drive. During this trip the temperature varied between 60 and 90 degrees (15 and 32 degrees for the rest of us!), and that was all during the day, without climbing to any kind of altitude.

California is so different to the east coast, where I’ve been living for the last 3 months, that it might as well be a different country. The buildings are different, the attitude is different, even the road signs are different (well actually, there aren’t any. Not any of any use that is. Not that I missed the on-ramp to the 101 or anything, nonono…)

Photos to come (eventually!)

Another reason why Google rocks

So Google now has a calculator. Try typing some of the following into Google or your Google bar (or just click the link):

how many millilitres in 13.5 ounces

what is 2 times the radius of the earth

square root 3 * 4

square root (3 * 4)

square root (3 * 4) to the 7

square root (3 * 4) to the e

square root (3 * 4) to the e times i

square root (3 * 4) to the e times i over 12

e to the (i * pi)

5 factorial

5 factorial cosinr 12 (deliberate spelling mistake 😉 )

5 factorial cos (12 mod 3)

10 km over 50 seconds

This is sweet. 🙂 From a technical point of view, there’s a tonne of natural language processing going on, and considering everything it’s blisteringly quick.

UPDATE

OK, finding new stuff in this thing has me hooked. 🙂

200 miles over 4 hours in kph

65 centigrade in farenheit (I don’t know how to spell!)
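
Just to sanity-check those last two myself, here is the plain arithmetic in a throwaway C# snippet (nothing to do with how Google actually computes it):

    using System;

    class Conversions
    {
        static void Main()
        {
            // 200 miles over 4 hours, in km/h (1 mile = 1.609344 km)
            Console.WriteLine(200 * 1.609344 / 4);   // 80.4672

            // 65 degrees centigrade in Fahrenheit: F = C * 9/5 + 32
            Console.WriteLine(65 * 9.0 / 5 + 32);    // 149
        }
    }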

Where to put Unit Tests in .NET

When you write a .NET application that has (for the sake of argument) several DLLs, you have various options for where to compile your NUnit Test Fixtures (there’s a minimal fixture sketched after the list). They are:

1. Create one ‘testing and production’ DLL that contains all the classes in your application (production and test fixture), and run NUnit against just that.

2. Put the Test Fixtures in the same (production) DLL as the classes they are testing, then run NUnit across all the production DLLs.

3. Create one ‘testing’ DLL for your application and put all your Fixtures and test classes in it. Run NUnit against this, which itself calls the production DLLs.

4. Create one ‘testing’ DLL for each production DLL. Run NUnit against these, which themselves call the production DLLs.
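
For anyone who hasn’t seen NUnit, a test fixture is just an ordinary class marked up with NUnit’s attributes. Here’s a minimal sketch (the Sheep class and its Legs property are invented for illustration):

    using NUnit.Framework;

    // The production class under test (invented for illustration)
    public class Sheep
    {
        public int Legs { get { return 4; } }
    }

    // The NUnit fixture that exercises it
    [TestFixture]
    public class SheepTest
    {
        [Test]
        public void NewSheepHasFourLegs()
        {
            Sheep sheep = new Sheep();
            Assert.AreEqual(4, sheep.Legs);
        }
    }

The question in the four options above is simply which DLL a class like SheepTest gets compiled into.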

I’ve seen most of these used. At my current client we are using option (1), as it offers the easiest and quickest way of compiling and running all the unit tests in your application. The problems with it, though, are:

– it breaks the encapsulation between your projects

– if you leave something out when compiling this special DLL, you end up not actually testing what goes into production.

Option (2) is what we are using in CruiseControl.NET. There are some benefits to this choice:

– You only need to compile production DLLs

– Your tests are available in production for debugging if necessary

This second benefit is something I was discussing with a colleague the other day, as I think it may actually be a drawback of this method. I have a bad feeling, from both a security and an efficiency point of view, about putting test classes into production. That said, if you are developing a bespoke server application deployed locally (which is the situation that would benefit most from such debugging opportunities), and you have tight security, maybe it is worthwhile.

A drawback of options (1) and (2) is that, for development’s sake, you’d probably create sub-namespaces for testing. For example, if I’m testing the Sheep class in the Farmyard.Animals namespace, I’d probably create a SheepTest class in the Farmyard.Animals.Test namespace. This means that in my SheepTest class I need to add a using statement for the actual namespace I’m testing.
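
In code, that layout looks something like this (just a sketch with the bodies elided; the names are the ones from the example above):

    // Compiled into the production DLL
    namespace Farmyard.Animals
    {
        public class Sheep { /* ... */ }
    }

    // The fixture, in a testing sub-namespace
    namespace Farmyard.Animals.Test
    {
        using Farmyard.Animals;   // the using statement for the namespace under test
        using NUnit.Framework;

        [TestFixture]
        public class SheepTest { /* tests for Sheep */ }
    }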

Options (3) and (4) are similar to each other in that both allow you to test the real production DLLs while only deploying production classes to production. They also both let you write your test classes in a separate project and DLL yet use the same namespace as the target class (the real DLL doesn’t depend on the test DLL, so IntelliSense still works nicely). If you choose not to deploy these DLLs to production, you can always save the binaries, or recompile later, should you want to run the tests in a production environment.
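
The equivalent layout under options (3) and (4), where the fixture is compiled into a separate test DLL but shares the namespace of the class it tests, looks roughly like this (again a sketch; the project and assembly names are made up):

    // Farmyard.Animals.csproj, compiled to Farmyard.Animals.dll
    namespace Farmyard.Animals
    {
        public class Sheep { /* ... */ }
    }

    // Farmyard.Animals.Tests.csproj, compiled to Farmyard.Animals.Tests.dll,
    // which references Farmyard.Animals.dll (and nunit.framework.dll)
    namespace Farmyard.Animals   // same namespace as the class under test
    {
        using NUnit.Framework;

        [TestFixture]
        public class SheepTest { /* tests for Sheep, no extra using needed */ }
    }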

The drawback of these options is that you end up compiling more DLLs, and having more Visual Studio projects, than you have production DLLs.

Weighing up options (3) and (4), we see the following comparisons:

– Option 3 allows you to run all the tests for your application by running NUnit against just one DLL, which itself is useful for development speed.

– Option 4 more closely models the componentization of your application. This means you have a natural componentization for your test DLLs, you can easily run just the test fixtures for one component, and if you move components between applications you can more easily move the unit tests. That said, if you change the structure of your components, you also need to change the structure of your test DLLs.

So, which of these to use? Well, more and more I’ve been using the <solution> task in NAnt to compile my .NET projects. It naturally fits, and works well, with the Visual Studio .NET model of an application. VS has its drawbacks, but I increasingly see the benefit of a build tool that works closely with it, and it also models intra-application / inter-project dependencies quite nicely. When using a combination of <solution> and Visual Studio, having extra projects really is little overhead. Therefore the ‘extra baggage’ drawback of options (3) and (4) is negated, to the extent that I think they feel like the ‘right’ thing to do. Right now I prefer option (4) because it more closely fits the structure of the application, but it really is a close call whether it is better than option (3).
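
For what it’s worth, the NAnt build I have in mind is roughly this shape; the solution, target, and assembly names here are made up, and the exact paths will depend on your own layout:

    <target name="compile">
        <!-- builds every project in the solution, test projects included -->
        <solution solutionfile="Farmyard.sln" configuration="Debug" />
    </target>

    <target name="test" depends="compile">
        <!-- run the NUnit console runner against the test assembly -->
        <exec program="nunit-console.exe"
              commandline="Farmyard.Animals.Tests\bin\Debug\Farmyard.Animals.Tests.dll" />
    </target>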

Spam Virus Gone Crazy

As reported here and in plenty of other places, there’s a new strain of the ‘sobig’ spam virus circulating today. And I’m being hit hard. My own laptop is not infected, but through no fault of my own, in the last 3 hours I’ve been receiving the spam virus at a rate of roughly one per minute, each message carrying the 100 KB payload.

That means my account will receive more than 100 megabytes of spam today (one 100 KB message a minute works out at roughly 140 MB over 24 hours)!!!

This is the latest in a succession of variants of the virus, and it has me worried. It looks to me like virus-generated spam is increasing at a huge rate. If no measures are found to stop this kind of thing soon enough, and such viruses grow at a higher rate than the available bandwidth of the internet, what happens?

OK, I’ll stop being apocalyptic now and get back in my box.

MP3s & Album Art

I was walking to work this morning, scrolling through my iPod choosing what to listen to, and I thought “wouldn’t it be great if I could view by album cover, just like MusicMatch can”. I think it would be good since there’s some weird emotional association between the album art and the music itself.

Then I was checking my news feeds this morning and saw Clutter, which is kind of along the same lines.

All of which makes me think that if a lot of MP3 players (software and hardware) had the ability to show album covers, then MP3s would take off even more. I think this would also make packaging MP3s as a sellable format more appealing. Certainly I’d start buying the MP3 format rather than the CD format.

If you bought your album in MP3 form with the album art, why stick to static pictures though? Why not have a little repeated film loop or something? Imagine the Sergeant Pepper cover where everyone on the front was moving around. Maybe the album art could change over time, or change based on the time of day (so the White Album would actually be the Black Album at night). That would be cool.

Good Practice rather than Best Practice

I’m a bit of a stickler when it comes to the use of language. I’m not very good at it, but I appreciate the importance of trying to use it well. Communication often produces an emotional response, and as a communicator you want the people you are talking to to have the response you were aiming for. As a responsible communicator you have a responsibility not to try to provoke a response that isn’t justified.

As an example of all this, take the phrase Best Practice which seems to be in fashion in the software industry. It doesn’t sit quite right with me, and I think I’d rather use the alternative phrase Good Practice. Why?

Best Practice is often used to describe a technique outside of any particular context. In most cases there are many alternative techniques, and you can only pick the best one once you have a context to choose it within. Martin’s book on Enterprise Patterns stresses this point: Transaction Script, Domain Model, and Table Module are all Good Practices, but the best one depends on the context of the application you are developing. As a responsible communicator I should not say that one of these is always the best.

Best Practice can also provoke a negative response in the listener. By saying ‘You should use pair programming because it is a Best Practice’, I am implying that my opinion is better than yours, no matter what your reason for not using pair programming. Before we even start discussing why I think pair programming is useful, you may already feel negative towards the conversation and are therefore less likely to accept what I’m going to say.

I’ve used the phrase Best Practice frequently in the past. I’m going to try just to use Good Practice in future.