Connecting to a remote PostgreSQL Database on Heroku from Clojure

Sometimes it’s useful to be able to debug a local application running against a remote production (or staging) database. The app I’m currently working on is a Clojure app, running on Heroku, using PostgreSQL as a database. It wasn’t entirely obvious (to me) how to do this, but here’s how I did it in the end.

  1. First, this assumes you’re using at least [org.clojure/java.jdbc “0.2.3”]. I thought at first it required a later version, but 0.2.3 seems to work.
  2. Get your regular Heroku DB URL. This will be of the form ‘postgres://[user]:[password]@[host]:[port]/[database]’
  3. Form a new DB URL as follows (substituting in the tokens from above): ‘jdbc:postgresql://[host]:[port]/[database]?user=[user]&password=[password]&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory’.
  4. Then for your db connection, use the map ‘{:connection-uri “[new-url]”}’ – see the example just below.
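For example (with placeholder host, database and credentials – substitute your own values), using that map with clojure.java.jdbc 0.2.3 looks something like this; ‘remote-db’ is a name of my own choosing:

(require '[clojure.java.jdbc :as sql])

(def remote-db
  {:connection-uri (str "jdbc:postgresql://ec2-00-00-00-00.compute-1.amazonaws.com:5432/mydb"
                        "?user=myuser&password=mypassword"
                        "&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory")})

;; A quick sanity-check query against the remote database
(sql/with-connection remote-db
  (sql/with-query-results rows ["select version()"]
    (doall rows)))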

If I were going to do this frequently it would be easy to create a function to map the original Heroku URL to this remote debugging URL. Assuming you’ve parsed out the server, port, etc., the following gist will work as a basis for this.


(defn remote-heroku-db-spec [host port database username password]
  {:connection-uri (str "jdbc:postgresql://" host ":" port "/" database
                        "?user=" username "&password=" password
                        "&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory")})

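And if you’d rather not parse the tokens out by hand either, a minimal sketch along these lines (heroku-url->db-spec is a name of my own invention, and this assumes the ‘postgres://[user]:[password]@[host]:[port]/[database]’ form from step 2) goes straight from the Heroku URL to the connection map, using java.net.URI from the JDK:

(defn heroku-url->db-spec [heroku-url]
  (let [uri       (java.net.URI. heroku-url)
        ;; .getUserInfo returns "user:password"; limit the split to 2 pieces
        ;; in case the password itself contains a colon
        [user pw] (.split (.getUserInfo uri) ":" 2)
        database  (subs (.getPath uri) 1)] ; .getPath returns "/database"
    (remote-heroku-db-spec (.getHost uri) (.getPort uri) database user pw)))

;; e.g. (heroku-url->db-spec "postgres://myuser:mypassword@ec2-00-00-00-00.compute-1.amazonaws.com:5432/mydb")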

My Evernote Conference 2013

The Evernote Conference (EC 2013), which happened last week (Sept 26, 27) in San Francisco, was not my usual conference. Typically I go to events that are mostly or solely geared around software development, that I’ve heard good things about directly from friends or colleagues, and where I know I’ll come across a few people I know. EC 2013 had none of these. So why did I go? And how did it turn out? I’ll give you the skinny – I didn’t get quite all that I hoped for, but I got more than I expected. For more, read on.

I’m in the early days of starting my own business. I’m a big fan of Evernote both as a basic app and as an integration platform. It fulfills a number of needs for me – organization, planning, content archiving, ‘read later’ items, list sharing (with my wife), etc. It’s also the backing platform for my journaling app – Orinoco. In Evernote’s first 5 years of existence it’s been very successful in its own right but the third-party application and integration ecosystem has in my mind been surprisingly sparse. I see this as an opportunity.

I went to EC 2013 with 3 main goals:

  • Get a good idea of concrete numbers of active Evernote users and Evernote Business customers
  • Get a better understanding of Evernote as a company – are they a group of people that I believe can continue to produce a platform successful enough that I want to build on it?
  • Network with Evernote employees, other 3rd party developers, and potential customers for any business focussed work I may pursue

EC 2013 was Evernote’s 3rd annual public conference. The first 2 were primarily developer focussed but this year they opened up the theme to be much more user oriented. There were plenty of developer sessions though, and Evernote’s app competition – Dev Cup – was a big part of the event.

The morning keynotes each day were mostly geared around product launches. The first day’s was consumer focussed (including somewhat strange launches for Evernote-partnered bag manufacturing as part of the new Evernote Market), the second’s business focussed (centering on the new Salesforce.com integration.)

The evening keynotes were both fascinating on one hand (Chris Anderson talking about his drone company 3D Robotics) and disappointing on the other (David Allen giving an overview of the thinking behind Getting Things Done, without adding anything significant that couldn’t be understood from his 10+ year old book.)

There were some decent breakout sessions. Evernote’s own Seth Hitchings gave a good ‘State of the Platform’ talk, giving some useful data on where things stand with the Evernote Platform (the central storage / search / synchronization infrastructure that all Evernote and 3rd party apps integrate with), plus some useful announcements of things that are coming (support for reminders from the API; allowing users to give 3rd party apps access to only part of their Evernote account, etc.) Julien Boëdec (partner integrations manager at Evernote) gave a great, concise workshop session on Evernote Business integration which included descriptions of some actual 3rd party integrations with Evernote Business.

My favorite part though was, as is common with me and conferences, the time between the sessions chatting to all different types of people. I met a good number of Evernote’s own employees (I’m pretty certain that most, if not all, of the company were there) including a couple of product managers, developers, their head of business sales, developer relations folk, etc. My takeaway from all of those conversations was that Evernote is built on a bunch of enthusiastic, smart, decent people. As an example I spent an enjoyable and enlightening hour or so with one of their developers chatting about various API concerns.

So what about my original goals?

  • Evernote have 75 million registered users. Unsurprisingly, but disappointingly, I couldn’t get a concrete number for active users, but I did hear from someone that it was in the 15 million range. I didn’t get any detail on whether that was monthly, annually, or what. I’d really like to know how many people access Evernote at least once per month. 7900 companies have bought Evernote Business, but they weren’t going into much more detail than that (I’d like to know how many have at least 20 users, at least 100 users, etc.)
  • As I said above, all the people I met from Evernote came across as smart and enthusiastic. They are also capable – the new iOS 7 client was a complete re-write, going from conception to delivery, on a pretty new iOS platform, in 3 months. I dread to think of the hours they were pulling to make that happen (their iOS dev team is not big) but that’s still damn impressive.
  • I’m not as gregarious as I could be but I still met plenty of folks there across the 3 categories I was concerned with.

That adds up to a decent result. Not perfect, but good.

What I also got though, and what I didn’t expect, was a really good feeling that I’m on the right track. Of course everyone at the conference was an Evernote enthusiast, but this is a product, and platform, that has massive appeal across a broad swath of companies, individuals and levels of technical savviness. I showed off Orinoco to a bunch of people and the feedback was universally positive. Either everyone is super nice when they’re on the west coast or this is something that shows promise.

I still don’t know the precise focus of where I want to end up (that’s what iterating is for, right?) but what the Evernote Conference showed me was that building off their platform ain’t a bad place to start.

(cross posted at http://mikeroberts.postach.io/my-evernote-conference-2013)

20 years of Desktop Computers

This week I sold my iMac. I now no longer own a desktop computer, for the first time in 20 years. I’ll get to why at the end, but I thought it might be fun to take a look back over these 2 decades of my personal PC history.

My first computer was an IBM PS/1. It was 1993 and I was 14 years old. There was no computer in my house growing up before this time so I’d been spending a lot of time in the school computer lab.

This PS/1 was a great machine. Its initial spec was a 486 DX 33 processor, 4MB of RAM and a 170MB hard disk, with a 14″ monitor which I typically ran at 800×600 resolution. For the time this was a pretty decent set of hardware. It ran Windows 3.1 and IBM’s ‘PC-DOS‘ 5 (simply MS-DOS 5 rebranded.) It never developed a fault while I was using it.

It was my PC through to the end of my time at high school. It had a few upgrades over these years, including a sound card (natively it only had the fairly useless PC speaker), a RAM upgrade to 8MB, a hard disk upgrade to 420MB, a CD-ROM drive and various OS updates, the last being Windows 95.

By the summer of ’96 it was time for me to go to University, and I bought a new PC for the move. This was the first PC I built myself and from here on for the best part of a decade I was forever tinkering with my computers. As such my memories of specs get a little hazy. I do remember that the original incarnation of my first University PC had a Cyrix 6×86 processor – this was a mistake. The Cyrix chip was slow and crashed frequently (apparently they had heat problems.) I suffered through for the best part of a year before giving up and getting a new CPU and motherboard.

In the first year at college I networked my PC for the first time – using a very long serial-port null modem cable to the computer of my friend (of then and now) Mike Mason, who lived in a room about 50 feet away. We played Quake against each other and also amazed ourselves at being able to eject the CD-ROM drive of the other person’s machine. We clearly needed to get out more. Around this time I started using Linux, starting with the Slackware distribution.

In our second year at college Mike and I shared a house with a few friends, so the networking in my machine got upgraded to a BNC ethernet card. It was around this time that storing music in MP3 format started – it used to take 2 – 3 hours to rip a CD on my computer. Winamp was the music player of choice. I had an Iomega Zip drive which allowed me for the first time to move my data around physically on something with more capacity than a 1.44MB floppy disk. The Zip drive, like so many of the time, was thrown out when it developed the infamous ‘click death’. USB memory sticks are far superior.

In my third and final year at college I moved back into college accommodation which came with a LAN-speed internet connection. This was a huge benefit. I was pretty concerned about having my PC hooked up straight to the internet though so I bought a second desktop PC to act mostly as a firewall and secondarily as my Linux machine. This required me to get a bit more serious about my Linux usage – I wouldn’t like to guess how much time I spent recompiling kernels that year.

A bunch of us across the university had network-connected PCs and set up what we would now call a VPN using a combination of CIPE and NFS. With this we could transfer files between each other without the university network being able to tell what we were doing. We were very proud of ourselves for doing this and still needed to get out more.

I continued tinkering with my tower-case enclosed PCs over the first 3 years or so after college. I also bought my first laptop around this time. In 2002-ish I bought my first Shuttle ‘small form factor’ PC. This was a speedy machine, and also (for the time) tiny. I added a 2nd Shuttle to allow further tinkering.

From 2003 through spring 2006 I did a lot of moving between countries so my desktop computing adventures slowed down here. In ’05 I bought my 2nd laptop, my 2nd Sony Vaio. At the end of 2005 I did buy my first Mac – a G4 powered Mac Mini – but this was mostly a ‘media PC’ rather than a serious desktop computer.

In late ’06, now living in New York, I bought my last desktop computer – a Core 2 Duo powered iMac. The biggest (literally) thing about this machine to me was its screen, all 24 inches of it, which at the time seemed stupidly huge. This was also the first time I seriously used a Mac. Despite being frustrated with Mac OS at first I soon got the hang of it and wondered why it had taken me so long to start using a Mac.

The iMac was great and I was still using it until last summer – not bad for a 6 year old machine. In this period I only upgraded one piece of hardware – giving it a slight RAM upgrade to 3GB. My years of constant hardware tinkering were over.

Last summer I bought my 3rd laptop – a fully specced MacBook Air. This machine is screamingly fast and, hooked up to a 30″ display, easily does everything I need. The iMac was consigned to the floor of the spare room, where it sat until I sold it this week.

I still find it amazing that a machine of the diminutive proportions of my MacBook Air can perform like it does. Compared with my first machine it has a CPU roughly 500 times more powerful (by BogoMips), 2000 times more memory and 3000 times more disk space (and that’s with an SSD). Truly we are living in the future.

Asking better questions

I didn’t know Aaron Swartz. I met him very briefly in December but that was all. Nevertheless, over the last week and a half it has been a revelation hearing from those who did know him what an amazing human he was, and how much of a loss it is for the world that he passed away too soon.

I watched online some of the memorial for Aaron that took place in New York last Saturday. I was most impressed and moved by the last speech, from his partner Taren Stinebrickner-Kauffman. There was much in what she said about the legal pressures surrounding Aaron’s last year, but what resonated most with me was this section:

Aaron didn’t believe he was smarter than anyone else, which is hard for — it was very hard for me to accept that he really believed that. He really, really believed that he was not smarter than anybody else. He just thought he asked better questions.

He believed that every single person in this room is capable of doing as much as he did, if you just ask the right questions.

Whatever you’re doing, are you confused? Is there anything that doesn’t quite make sense about what you’re doing? What is it? Never assume that someone else has noticed that niggling sense of doubt and already resolved the issue for themselves. They haven’t. The world does not make sense, and if you think it does it’s because you’re not asking hard enough questions.

If you’re in the tech sector, why are you there? What do you really believe in? If you believe that technology is making the world a better place, why do you believe that? Do you really understand what makes the world a bad place to begin with?

I’m serious. If you’re in this room and you work in the technology sector, I’m asking you that question. Do you understand what makes the world a bad place to begin with? Have you ever spent time with and listened to the people your technology is supposed to be helping? Or the people it might be hurting?

While I do believe that much needs to be done with regard to the unfairness of Aaron’s trial, there is little I can personally do about that. But the calling above is something that we all in the software development world can consider. If some of us act on this then Aaron’s passing will be a little less in vain.

The video of Aaron’s NY memorial is here. Taren’s speech is at about the 1:47 mark. Thanks to Chris Burkhardt for transcribing Taren’s speech – the full text is available here. My sympathies go out to all of Aaron’s family, friends and colleagues at this time.

Agile people over agile process

In June 2012 I gave a talk at QCon New York titled ‘Agile people over Agile process’. The full talk is here, and below are some of my thoughts on this topic.

Summary

What’s below is pretty long so if you don’t want to read it all, here’s the essence of my opinion.

In the ‘agile world’ these days I see a decent amount of pre-defined, unarguable, process and dogma – the very things that the agile movement initially tried to do away with. I think it’s worth stepping away from this and focussing first on individuals, how they communicate, and how as a team they best choose their techniques and tools.

There are no such things as ‘best practices’, at least when it comes to being part of a software team or software project. There are practices that teams find useful within their context, but each context is different. Teams would do well to continually re-judge what process and tooling works best for them.

Agile teams can use values and principles to help drive their choice of process and tools.

So let’s begin…

There’s too much focus on process

When I got started in the agile world 10+ years ago we used to talk a lot about extreme programming (XP), Scrum, and the like. Obviously part of that was figuring out test driven development, pair programming, continuous integration, iterations, etc. A lot of it was also about how we needed to change as individuals though. Gone were the times when we could just sit in our cubicle and complete our individual tasks on the almighty Gantt chart. No longer could we assume that we didn’t need to test code because that was someone else’s responsibility. We needed to embrace how we worked as a collaborative team, and not just argue over Emacs vs Vi. This was a revolution in how we identified as humans on a software project.

People back then accused XP of being a developer-focused methodology, and they were right, but this was with good reason. For developers to be most effective they needed to stop just being pawns in a bigger process and start talking to people, working with feedback, and taking responsibility for delivery. XP helped them do this.

People in the agile world still talk a lot about Scrum, lean, kanban, etc., just like we used to 10 years ago. However I feel the tone of a lot of conversations has changed – now a lot of times it’s just about the process. Agile seems to no longer be about people changing their attitude to projects, to delivery or most importantly to people. Now often it’s just about introducing a new team management methodology in the hope that Lean or XP or whatever will be a process magic bullet that will solve all their problems.

But with process, as with many other things, there is no magic bullet.

Process is very important. It’s where the rubber of any methodology hits the road, but there are problems with an overly-zealous focus on process:

  1. Processes can become kings. Processes are at the end of the day just tools – they have no intrinsic value by themselves. They can only be judged valuable in the context of their application. When processes become kings then our discussions descend to hypothetical judgments of a supposed intrinsic value that the processes don’t have. Such discussions are a waste of time and don’t help anyone.
  2. If processes are considered axiomatic then we can no longer adapt how we work. If we believe the best way to do something is X, yet we do not understand the motivation for X, how can we decide if Y would be better?
  3. It misses the point of what Agile was supposed to be about…

What I think is important about Agile

The Agile Manifesto starts as follows:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools…

In other words the unique characteristics, personalities and abilities of members of a software development team, and the conversations that they have with each other and with their stakeholders (management, users, etc.) are worth considering more than the process and tools that they use.

This is not to say at all that processes and tools are unimportant; I am merely arguing that they are not so important that they should drive discussion about the basic way we choose to deliver software.

Focussing on individuals sounds like a management technique. While that is part of it, I think it is more a call to individuals themselves to consider how they can most effectively be part of a software delivery effort.

There are many ways individuals can answer this call, but one that may be useful is to look at the values and principles from ‘Extreme Programming Explained‘ – Kent Beck’s original book about XP. These values and principles are not specific descriptions of how to do something but guides to help people decide how they actually want to work. I’m not going to go into detail here since literature is already available for each point (and I also discuss them further in my talk at around the 13 minute mark), but a list of ideas is as follows:

Values

  • Simplicity
  • Communication
  • Feedback
  • Courage

Principles

  • Assume the values are present
  • Incremental change
  • Deliver at the earliest responsible time (an addition / variant of mine that I think is worth considering separately)
  • Quality (the Jim Highsmith article I refer to in the talk is here)
  • Accepted responsibility – everyone on the team should assume they have responsibility
  • Local adaptation – change everything according to context

How this applies to practice

These values and principles are all somewhat theoretical – the application of them is the actual choice of what set of practices and processes a team chooses to use. Not one pre-defined overall process, but an active, continuing choice of what techniques to use.

This is necessary since in the case of software development teams and projects, there are no such things as best practices. There are practices that teams find useful within their own context, but this is not an absolute categorization.

How I’ve recently embraced this

In the video I describe how I applied these ideas on my most recent project. It starts at about the 28 minute mark. It made sense to include this in the talk but I’m going to leave detail out of this post.

It is worth mentioning here though that there are certain ‘typical practices’ of agile that we did use, but others that we chose not to. For example we didn’t use ‘iterations’ to structure our week-to-week work. However we did often deploy new software, we frequently re-prioritized our next work, etc. Since we already did these things, formal iterations in our world would have been unnecessary baggage. For many other teams formal iterations would be very valuable.

Is this really for everyone?

In discussing this subject some people have challenged me that this way of thinking is only useful for ‘experts’, such as people who already have previous agile experience. I disagree. While I think that picking an off-the-shelf methodology might make for a ‘decent first set’ of practices, a team needs to know from the beginning some amount of why that may be so. I think for someone with experience to provide a pre-canned process set as ‘the way things should be’ is disingenuous.

I wouldn’t expect everyone on a team new to agile to be able to immediately make their own choices about the entire implementation of principle to practice, but I would expect them to know that the introspection of their process (based on values and principles), and their subsequent refinement of the same, is a more important aspect of agile development than any of the individual techniques they may be using.

Concluding thoughts

None of this is new at all, and a lot of the good agile literature from the last decade describes these ideas. As a more recent example Ross Pettit does a good job talking about them here.

I think it’s worthwhile repeating it though for 2 reasons:

  • I see some amount of the agile community as a whole moving to a ‘process first’ mindset and I disagree with it.
  • I’ve seen myself at times throughout my career treating process, practice or technique as dogma. Invariably when I do this it’s because I’ve missed something important. Stepping back and thinking ‘why’ always leads to improvement. I think this is a valuable reminder to myself and hopefully others.

Leaving DRW, and my take on Customer Affinity

Last month I finished working at DRW Trading after nearly 4 years there. DRW has a fantastic technical organization on both the Software Engineering (SE) and IT Operations side, from the leaders (CTO Lyle Hayhurst, COO Seth Thomson and Director of SE Derek Groothuis) down. In many ways I expect this will be one of the best jobs I ever have – my technical colleagues were fantastic (especially Jay Fields, my right hand man for the last 18 months), my team had complete management and implementation control of our projects, I didn’t have to deal with much in the way of politics at all, and yes, the pay was good!

So the obvious question is why leave? As usual in such cases it’s not a simple answer. I’m going to go on something of a tangent before I give a couple of reasons.

With software development jobs where the goal is producing a product, that product is not an end in itself for the customer [1]. The product is a tool that will be used by the customer to do something. Compare this with doctors or actors – in the first making people healthier is the sole end goal, in the second entertaining is the sole end goal [2].

(Good) software developers have some understanding of what the thing they’re making is going to be used for. Compare this with structural engineers responsible for building a bridge – they know they’re building a bridge but they’ve no idea of the final destination of the people traveling over it.

I think most software development roles in some ways resemble being a lawyer (hear me out!) As a lawyer the principal aim, in the context of litigation at least, is to win the case, but you’re always doing so for a particular type of case. You might be a criminal lawyer, patent attorney, etc.

Most lawyers I know are interested not just in the practice of law itself, but also to some extent in their more specific field. A non-profit housing lawyer (to use an example very close to me, thank you Sara) might not just be interested in winning cases, but in helping less privileged tenants in having a fair voice against landlords with far more means.

And so we come back to software. There’s no doubt that as ‘software people’ we are interested in making stuff. We all have parts of ‘making’ we’re more interested in, whether it be the technical design, the project process, the user interface, etc. But I would describe these all as second-order goals.

The first-order goal is the actual thing that will be useful to the customer. Again, I’ll say what I did above – good developers have an understanding of the first-order goal (in other words they have what Martin Fowler calls Customer Affinity.) Excellent developers for the long term have not just an understanding of, but an active interest in, the first-order goal (otherwise known in our field as the domain.)

For most of the time I was at DRW the second-order problems I was solving were fascinating to me. With a very small team we built and maintained a large, solid, well-appreciated application. Furthermore I was making good inroads into understanding the domain (commodities trading). Once we’d solved a lot of the second-order problems though what remained was understanding, and appreciating, the domain.

The problem was that I don’t find trading very interesting. To me it’s ‘just’ math and moving money around. Trading in some ways has a lot in common with gambling: assessing financial risk and taking positions (‘bets’) based on what you currently see in the slice of the world around you. I’ve never been particularly interested in gambling and I think the 2 are linked. I know many excellent developers who are truly interested in trading (Jay being one of them) – I don’t have a problem with them, I just don’t share their enthusiasm for this particular domain.

Further though, I think to be an excellent developer in the long term not only must you appreciate / have passion for what the users of the software want to do with the software, you also need to have empathy for the users themselves. Martin says this at the end of the first paragraph in the link above : “[Customer Affinity] is the interest and closeness that the developers have in the business problem that the software is addressing, and in the people who live in that business world.” (emphasis mine).

This leads to a second reason I left DRW – I didn’t empathize with many of the traders I worked with. Note that I’m absolutely not saying I thought they were wrong – they’re certainly far more financially successful than I am, at least – what I’m saying is that my approach to work and theirs didn’t meld in the general sense. I don’t think it’s useful to get too specific here, and probably this is something you wouldn’t know either way until you’ve been in trading for some time, but I think there’s a general lesson worth taking from this beyond just trading.

One thing I’m happy about is that I realized these things before they started to make an impact on my work. Being able to leave knowing you’ve done your best, but that you wouldn’t be doing your best if you stayed, is a very satisfying position to be in.

The reasons I give for my leaving sound negative, but I’m picking specifically the reasons I left, not the many, many reasons I stayed for nearly 4 years. Of the financial services companies I’ve worked with I enjoyed being at DRW by far the most, I don’t regret my time there at all, and I absolutely appreciate the opportunity I had (and am grateful to the leadership of DRW for giving it to me.) For developers wanting to join, or continue in, the trading industry I would recommend joining DRW wholeheartedly.

Of course, there’s an obvious postscript here. What am I doing now? I’m taking time off! I’ve never taken extended leave in my life, not ever since school, and I’m fortunate to be able to do so now. I have a few ideas of what I might do and I look forward to updating this blog as some of them (and others!) come to fruition.

[1] There are other types of software development jobs, e.g. programming language research, training, and ‘pure’ consultancy (as opposed to ‘body shopping’). The difference with consulting is that you’re not building a product – the end goal is to help other people build a product. I would even consider going back into finance as a ‘pure’ consultant.
[2] Maybe I’m oversimplifying here, but I don’t think I’m too far off the mark.

Syncing music to iPod / iPhone from lossless iTunes library

For listening to music at home I use an Apple TV plugged into my fancyish sound system, and so I use music stored in lossless format. Since I use an Apple TV this music is stored on a computer using iTunes. I also have an iPhone, and my music library is on there too, but I can’t fit my entire lossless library on there (it’s more than 100GB) so up until now I’ve also kept a totally separate iTunes library, on a different computer, with the same music in 128kbps AAC format that can fit on my iPhone.

For a while iTunes has had an option to automatically convert songs to a lower bitrate when syncing to an iPod shuffle, to fit more songs on. I realized a couple of weeks ago that this option now exists for iPods and iPhones too – it appears on the main iPhone screen when you look at the device in iTunes.


I tried this out last week. It definitely works, but takes a long time – about 15 hours syncing from my ~4 year old iMac. I can live with that slowness, though, now that I don’t have to look after 2 separate libraries and manually convert all my music to smaller formats myself.


Dual KVM Pairing

Previously when I’ve pair programmed (2 people programming at the same computer at the same time) I’ve always used one keyboard, screen and mouse (KVM – the V means Video). In the last couple of weeks I’ve been trying out ‘dual KVM’ pairing though – in this scenario each programmer has their own keyboard, mouse and monitor, where the screens are set up to mirror each other (each person sees exactly the same thing).

This style of pairing isn’t new, and certainly is common on other teams at DRW – I just hadn’t used it before. In fact I had concerns, the principal ones being:

  1. Wouldn’t something be lost in communication with not having a shared physical screen? (I point at things on the screen fairly often when pairing)
  2. Wouldn’t programmers be constantly aware that they might be fighting each other for control of the mouse pointer / cursor if they had their own keyboard and mouse?

It turns out that I really like this style of pairing. My concerns about communicating about the screen are largely alleviated by turning on line numbers in the code editor, and the keyboard and mouse fighting isn’t nearly the problem I feared. The benefits are chiefly ergonomic, but they are significant. Being able to look straight ahead, and not having to lean in towards the keyboard and mouse, makes work a lot more comfortable. The only thing I slightly miss is being able to use 2 screens as a stretched desktop, but that’s a price worth paying – I can always switch the screens back to that mode when I’m not pairing.

Experience using Scala as a functional testing language

6 months ago my team decided to migrate our functional tests to being coded in Scala rather than Java, the native language our application is written in. However we have now reverted to writing them in plain Java. What follows is an experience report of this exercise and our reasons for bringing it to an end.

Background

The application under test is a message-driven server application. We define the functional tests of this application as those that run against the largest subset of our application we can define without requiring any out of process communication. The functional tests themselves run in process with the application under test.

Each functional test is written in a style that treats the whole system (mostly) as a black box. We stub out all external collaborators – those stubs simulate collaborators sending messages, and also collect any messages that they receive allowing the tests to make assertions about the application’s interactions with its environment.

Our application is not trivial; writing functional tests that are concise, understandable and maintainable is a tricky task. We’ve created a fair number of support classes that start the system and act as the collaborator stubs described above to help keep the tests themselves clean.

We use functional tests extensively, and typically write at least one functional test per work item on our backlog. Just in terms of numbers about 10% of all the automated tests we have are functional in style, the rest are per-class-level unit tests.

For our development environment we use IntelliJ as our IDE and Rake as our command-line build environment.

Switching to Scala

We were interested in trying Scala as our functional test language for 2 main reasons:

  1. To improve clarity and maintainability of the tests
  2. To assess Scala as a possible production-code language

We already had a good number of functional tests going into this exercise and so our first task was to rewrite these in Scala. We also rewrote most of our test-support classes in Scala.

Since this was our first time writing Scala the translation wasn’t a blisteringly fast process, but the IntelliJ Scala plugin’s ‘copy Java, paste as Scala’ feature did help us get going. If nothing else it was a useful guide when translating generic code.

Another initial task was to set up our development environment to support Scala. IntelliJ’s Scala plugin, while having a number of deficiencies, does the basics well and we were very quickly compiling and testing Scala alongside Java in the same project. Even though IntelliJ will support Java and Scala code in the same source tree, we kept all Scala code in a separate tree to avoid complications with the command line build. With that setup, updating our rake script to compile Scala and run the Scala tests was relatively easy.

What was good

The main thing that attracted us to Scala was the ability to write code in a semi-functional style much more concisely than can be done in plain Java. We’ve also been coding a good amount of C# recently and we sorely miss the basic functional support in C# 3 when switching to Java. We were not disappointed by Scala’s abilities in this regard: there were many occasions where we could write 1 line of concise, readable Scala where previously we’d had 8 lines of a Java method.

Why drop it?

There were several reasons we decided to roll back to Java, and to be fair to Scala most of them were not its fault.

The biggest reason was that despite 6 months of experience we still found we were slower to code and debug Scala than Java. We probably spend around 5 to 10% of our coding time working with the functional tests and that just isn’t enough to really ‘get’ the language. I think this would be similar for most languages – you’ve got to use them significantly to become fluent in them – but I think this is particularly true with Scala since it is a large language with an equivalently large library.

I don’t think it’s just the time aspect either. Tests are a very specific style of coding and mostly procedural. Where we got most of the benefit of Scala was in our test support classes, but even they aren’t hugely complex. We never got into any meaty problems in our Scala realm and so never really pushed our knowledge of it.

Scala is absolutely a more powerful language than Java and as I mentioned above we could write code more concisely in Scala than we could in Java. However IntelliJ is a great tool and it makes up for a surprising number of Java’s deficiencies. You end up with more code on the screen with Java but I’m not convinced that it takes more time to write it. Furthermore once the code is written the rest of the IDE experience is far better in Java than Scala – compiling is faster, code browsing works much better and debugging Scala in IntelliJ is no fun at all. (Yes, we use a debugger – I know that probably makes us awful programmers in the eyes of some readers!)

Again this isn’t necessarily Scala’s fault – if I didn’t have an IDE at all Java would be more painful than Scala – but I do have an IDE and even if I don’t write the most elegant solution that’s not what my goal is – my goal is to create functioning software as quickly as I can (for the next and following releases.)

Some reasons we ditched Scala though can’t be blamed on tools or the particular problem we were trying to solve with it. Scala is large, larger than I’m comfortable with. I want a language that’s more opinionated, at least when I’m getting started.

Furthermore the libraries, especially the collection libraries, are hard to get a handle on. As a particular example Scala has both mutable and immutable Set classes… but they are both called ‘Set’ (scala.collection.mutable.Set and scala.collection.immutable.Set). Relying on a namespace definition to specify something as fundamental as a collection type is just plain frustrating. The interop between Java and Scala, specifically with collection types, can also be painful (even with some of the updates in Scala 2.8.)

In Conclusion

To me using Scala isn’t a great fit in a polyglot environment: it’s a large language to learn, the interop still needs work and the tool chain adds friction in comparison with plain Java. That said, I think with time Scala could become a very interesting monoglot language for developing an entire app on the JVM – in that scenario developers would naturally come to learn it more thoroughly and wouldn’t be brushing up against interop problems.

Using Scala was a good experience overall and has been yet another push for us to write more functional-style Java. We’ll be looking at other languages to enhance our productivity but will likely leave our functional tests in Java.

The death of agile

First go and read this great post by Bill Caputo. Bill’s site doesn’t seem to allow comments right now so I’ll put my response here instead.

I think part of the problem is that the agile ‘movement’ is so top-heavy with consultants. For many of these consultancies it’s very hard to sell a story like agile, which isn’t tied to a specific technology, when it’s packaged as a technical solution. They have to make it a business process problem to get through the door at a high enough dollar rate, or (for bigger consulting firms) to get enough bums on seats to make it worth the effort.

Think of other cross-company technology efforts like standards bodies. Are all of these packed to the gills with consultants? No, they have a bunch of CTOs, senior developers, or whatever, who are having to live with this stuff every day in a consistent environment. They do have consultants too, but not drowning out everyone else.

I know that the agile movement started with a bunch of really smart people, most of whom were consultants, and some of its current leads (tip of the hat to some of my previous colleagues still at ThoughtWorks) are brilliant and continue to add valuable insight to our industry.

However for agile to get anywhere beyond what it’s become (mostly a big mix of fluffy ideas that are easily billable but which don’t really solve anything without the necessary discipline, which most companies are incapable of) it needs a much better diversity of background among its leaders. Unfortunately I don’t see that happening – it’s just too big. Take the Agile 20xx conferences – they’re now basically 3 things:

  • 101-level training for newbies
  • an expo for largely pseudo-agile consulting firms and mediocre tools
  • a small number of people who’ve known each other for ages catching up and complaining about the state of agile.

So I think you’re right, Bill: agile is dead. It served a good purpose, and did a pretty good job of giving our industry the kick up the behind it needed, but it is now pining for the fjords.

To end optimistically though, there’s still a lot of great stuff going on in our industry; it’s just that these days I’m much more interested in technically based conferences and communities, and having conversations on the side of these around process. It’s from these technical communities that I’ve learned about things like Kanban, for example. And it’s a blessed relief not to have to justify whether the team I’m on ‘is agile or not’.