Five reasons to learn Clojure and Emacs together

December 27, 2012 § 13 Comments

There’s often debate about whether newcomers should try to learn both Clojure and Emacs at the same time. My take is that yes, they should, and here are five reasons why.

1) Liberation

Smart IDEs are like powerful drugs. They treat the sick and the dying wonderfully well but tend to concentrate on the symptoms rather than the underlying problem: bloated code. They are great for refactoring around masses of static types, but you do so inside a walled garden; the features you want are built into the IDE itself, so you have to rely mostly on the IDE developers to make things better. The modern powerhouse IDEs are complected.

Emacs at heart is a basic text editor, one that has been around for decades with extensibility at its core. Because of this, and the fact that it’s open source, the plugins for Emacs are essentially limitless in what they can do. There are vast numbers of major and minor modes and colour themes. Org mode awaits you.

It’s this ability to tailor Emacs so much that makes it feel liberating; you can do what you want with it. And if you really don’t enjoy a spot of configuration-masochism, there are many ‘starter kits’ on the web that will quickly get you going; see the ‘Emacs Starter Kit’ for starting out with Clojure. You could postpone engaging with Paredit mode if you wish, and make use of CUA mode to bring sanity back to copy and paste. The feeling of liberation can always be throttled.

2) Change

Moving from a statically typed language to Clojure is not easy. With this in mind, are we really making things that much easier by encouraging devs to stick things out in a familiar IDE? Without a pertinent and dramatic sense of change, are we not just making things harder? Why try to disguise the fact that Clojure is a different development world from Java? You can’t refactor or navigate around in the same way, and the debugging just isn’t as good. Developing in a sophisticated IDE with 80% of its usefulness gone would likely be incredibly frustrating and a turn-off. You’d be dressed up for the wrong gig.

Change also begets change. Moving to Clojure is a chance to review a lot of things: the VCS used, testing tools, the approach to CI, artefact management; anything could be up for grabs. Better to make a clean break from the past.

3) Support

Following on from 2, there is the issue of support. You could be ambivalent about devs on your team using Clojure with their favourite Java IDE, but then you must consider what the support cost will be. They may stumble across some half-hearted wiki page about getting a certain Clojure plugin working, but would it work with Lein 2 and not just Lein 1? Is it using the deprecated Swank-Clojure or nREPL? The Clojure landscape is still shifting; newbies should probably be using the best lifeboats.

4) Simple and Easy

As with Clojure, learning Emacs from scratch isn’t exactly easy, but in the long run it’s simpler. Why would we suggest that newbies stick with their current IDEs when there’s a danger that they might not move on afterwards? Clojure and Emacs are a package of simplicity; they go well together.

And we know that simplicity is not free. There has to be some learning-pain before you can reap the rewards. If a developer is truly unwilling to switch to a lighter-weight, more extensible dev environment, then they may be equally unwilling to make the paradigm shift to functional programming.

Of course the flip side to this is that it’s possible a developer may go to lengths to make coding even more of a pleasurable experience in the smart IDE landscape. If so, then hats off to them.

5) Mass Adoption

Trying to spread Clojure whilst recommending an IDE change sounds very counterintuitive, but then is Clojure itself ripe for mass adoption across the industry? Is this the end goal? These questions are beyond the scope of this post, but suffice to say that I’m not sure it would be a good thing if all people at the large institutions I’ve worked at started coding everything in Clojure. Emacs certainly adds to the learning divide, and makes it more explicit.

Vim warriors, please take note that these arguments could just as easily pertain to Vim.

Clojure at a Bank – Freeing the Rules

November 29, 2012 § 1 Comment

I’ve written previous posts about a team at an investment bank making the switch from Java to Clojure. In this post I’d like to focus on the business rules being moved in the process.


I’ve found that business rules at large institutions tend to resemble viral memes and genes. That is to say, they are regularly duplicated amongst systems, manual or automated, and most will persist long after the people maintaining them have moved on. The hardy ones manage to jump from dying systems into new ones.

In our case the rules are migrating from a large monolithic Java stack into a new Clojure code base. We wanted to give them a reformed existence that would be more streamlined, free of elaborately crafted OO structures, and where they would not be hemmed in by overly enthusiastic and rigidly defined tests.

Tags rather than Types

One of the first things we did differently was to use tags rather than types for modelling the business domain. In the old world a financial product would immediately be given its own Java class, with its similarities to other products expressed through inheritance and interfaces, perhaps with some mixin-styled object composition thrown in. This approach has failed our system. Sure, with the right people, time and foresight, most systems in most languages can be beautiful, but in my opinion the generalist approach of modelling the problem with static types opens the door to epic waste and codebases with limited options.

For our Clojure codebase we instead simply treated products and trades as maps of immutable data and we built up a corresponding set of keywords – ‘tags’ – to describe what we know about them.

Imagine that we’re processing orders for cups of coffee. Without breaking the problem domain down to the nth degree, we can start by introspecting and tagging whatever data comes our way with business-significant terms such as :hot, :milky, and :has-froth. Predicates to drive business rules become trivial to write, e.g. (when (superset? tags #{:sprinkles :caffeine :large}) (do-something)). The code matching data up to tags then becomes an all-important kernel of the system.
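To make this concrete, here’s a minimal sketch of such a tagging kernel. The order attributes (:temperature-c, :milk-ml, :frothed?) are invented for illustration, and superset? comes from clojure.set:

```clojure
(ns order.tags
  (:require [clojure.set :refer [superset?]]))

;; Derive business-significant tags by introspecting the raw order data.
;; The attribute names here are illustrative, not from the real system.
(defn tags-for [order]
  (cond-> #{}
    (> (:temperature-c order 0) 60) (conj :hot)
    (pos? (:milk-ml order 0))       (conj :milky)
    (:frothed? order)               (conj :has-froth)))

;; A rule applies when all of its required tags are present.
(defn applies? [required tags]
  (superset? tags required))

(applies? #{:hot :milky}
          (tags-for {:temperature-c 70 :milk-ml 150 :frothed? true}))
;=> true
```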

CSS-styled JSON building

We used tags for a range of things, one of the prime cases being to build up JSON for sending to a downstream system. We used tags to determine a) JSON schemas and b) JSON values.

Taking values first, imagine that we’ve built up a JSON structure with the values omitted, e.g.:

{:beverage {:type _ :temperature _ }}

A simple DSL for defining values may then look like:

(values/add :type #{:cappuccino} "Coffee")


(values/add :temperature #{:cappuccino} "Hot")

Here the rules look a lot like CSS. We use path selectors such as :type and :temperature, and instead of matching on HTML class attributes we simply match on tags: #{:cappuccino}.

The rules for building up the schema could work in a similar way. A :frappuccino may need a different outgoing data structure from a :cappuccino, perhaps owing to some inherent complexity of iced drinks.

What’s covered here is fairly trivial stuff, but it’s a starting point for more. You could extend this DSL to allow multiple value-rules to be defined at once, to add predicates, and to make the values Clojure functions that work off the input data. You do of course need some core boilerplate code for mashing the JSON schema and value rules together, but this should be simple to write.
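As a rough illustration, here’s one way the values side of such a DSL might be implemented. The atom-based storage and the first-match strategy are my guesses, not the project’s actual code:

```clojure
(require '[clojure.set :refer [subset?]])

;; Each rule pairs a path selector and a tag set with a value.
(def rules (atom []))

(defn add [selector tags value]
  (swap! rules conj {:selector selector :tags tags :value value}))

;; The first rule whose selector matches and whose tags all appear wins.
(defn value-for [selector input-tags]
  (some (fn [{s :selector t :tags v :value}]
          (when (and (= s selector) (subset? t input-tags))
            v))
        @rules))

;; Fill in the omitted values for a flat list of selectors.
(defn build [selectors input-tags]
  (into {} (for [s selectors] [s (value-for s input-tags)])))

(add :type #{:cappuccino} "Coffee")
(add :temperature #{:cappuccino} "Hot")

(build [:type :temperature] #{:cappuccino :large})
;=> {:type "Coffee", :temperature "Hot"}
```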

Like CSS, there are some pros and cons. The pros are that adding new rules becomes trivial and that the rules themselves are inherently reusable; the con is having to manage a large set of flattened rules.

Rules as Data

On our project we’ve got a fight on our hands: the number of business rules is large, with a wide amount of variance across them. Instead of spreading the rules out, we now have a few thousand rules (and growing) living in a concentrated handful of Clojure namespaces. At first glance this looks like poor design; this is fugly and bad, right? What about storing the rules in a DB, or reapplying a fair bit of OO-styled modelling?

First, because all Clojure code is data, we absolutely do have a rule DB, not just a pile of namespace code editable in Emacs. By thinking of the rules entirely as data we’ve opened up options for ourselves.

For example, we’ve built a UI that allows non-technical users to browse the rules directly and to walk the hierarchies between them. Users can play around with input data and tags to see what rules are used in a given context. They can view information about why a certain rule is selected over another: the CSS-like specificity behaviour. We also show the source code, namespace and line number behind each rule. A team member recently added the ability for devs to click on an HTML link that opens up Emacs at exactly the point where the rule-code is.

The rules form an audit trail of how payloads were crafted. Since we also use them for data reconciliation purposes users can now write comments on data mismatches, directly in the context of a rule.

REPL, Builds and Tests

The tags and rules-as-data approach has allowed us to build up a set of tools for experimenting in the REPL. For example, it’s common to query an ElasticSearch instance from the REPL to bring back a certain population of test data and then throw it at the rule engine to see what happens. We’ve got build-server agents doing a similar thing to alert us when changes in the rules bring about unexpected consequences.

As a more immediate line of defence we’ve got unit tests utilising core.logic to make sure that the rules are sane and that repetition and redundancy are kept to a minimum, if not prohibited. We’ve added to the UI the ability to highlight sets of rules that are too close to each other, where a revision of tags might be applied to clean things up.
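core.logic itself is beyond the scope of this post, but the ‘too close’ check can be sketched in plain Clojure: flag any pair of rules whose tag sets differ by at most one tag. The threshold and the rule shape here are illustrative, not the real implementation:

```clojure
(require '[clojure.set :as set])

;; Symmetric difference size between two tag sets.
(defn distance [a b]
  (count (set/union (set/difference a b) (set/difference b a))))

;; All pairs of rules whose tag sets differ by at most one tag.
(defn too-close [rules]
  (for [[r1 & more] (take-while seq (iterate rest rules))
        r2 more
        :when (<= (distance (:tags r1) (:tags r2)) 1)]
    [(:name r1) (:name r2)]))

(too-close [{:name :a :tags #{:hot :milky}}
            {:name :b :tags #{:hot :milky :large}}
            {:name :c :tags #{:iced}}])
;=> ([:a :b])
```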

And if one day we decided that our rules needed to be stored in a completely different way, then we could always write some more Clojure code to read them back in and to spit them out again as something different. The door to moving them into a graph DB, RDF, or into a fact based DB like Datomic is not closed.

Wrap up

We’ve learnt that when you’re working with rules as data, many more possibilities open up as to what you can do with them. It’s quite the opposite of what you typically see at large institutions, where the rules are tucked away like diamonds in the earth inside type-heavy, OO-modelled systems. For us, Clojure has been a great emancipator of business rules.

Clojure at a Bank – Clojure Code Immaturity

November 4, 2012 § 13 Comments

I’ve posted recently about a team at an investment bank wanting to make the transition from Java to Clojure. Here I want to write about some of the issues around our Clojure code being ‘immature’. Before I do, it’s only fair to state up front that not all of our early code was terrible; Clojure is indeed a pragmatic language in which you can write decent and understandable code relatively easily. Still…

Comments and LOC

Most of the devs on our team have a TDD/DDD/BDD background with half having once plied a trade as XP consultants. Our approach to writing beautiful Java code was to make it flow and to tell a story. Expressive names for classes, methods and variables, each chosen to convey clarity and meaning to the fortunate reader.

Therefore, when we jumped feet first into Clojure, we unconsciously brought with us the belief that comments just weren’t needed. Add into the mix that we gave our args the shortest possible names (most of the time just single characters) and one could argue that we purposefully went about trying to obfuscate what we were writing.

Then, to make our code more fugly, we committed the common newbie sin of not really knowing what’s in clojure.core but churning out bucket-loads of FP code anyway. For example, we had a brave early attempt to get around assoc-in supposedly not working with nested vectors as well as maps, when it actually could: (assoc-in m [:key1 0 :bar] v). This led to some funky code living in functions with interesting-sounding names like ‘weave-in-vectors’, the choice of naming being a sure-fire smell that there must be a more idiomatic way of doing things. Then there’s the little stuff: ‘if’ and ‘let’ vs ‘if-let’, ‘let’ as its own form rather than embedded into a ‘for’ as ‘:let’. Then there’s zipmap, juxt, mapcat, group-by… a considerable list of helper functions that saves us from writing our own cruft.
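For the record, here are the idioms mentioned above in action (my examples, not our original code):

```clojure
;; assoc-in happily mixes map keys and vector indices:
(assoc-in {:key1 [{:bar 1} {:bar 2}]} [:key1 0 :bar] 42)
;=> {:key1 [{:bar 42} {:bar 2}]}

;; if-let instead of a nested if + let:
(defn describe [m]
  (if-let [n (:name m)]
    (str "Hello " n)
    "anonymous"))

;; :let embedded in a for rather than a separate let form:
(for [x (range 3) :let [sq (* x x)]] sq)
;=> (0 1 4)

;; group-by instead of a hand-rolled reduce:
(group-by odd? [1 2 3 4])
;=> {true [1 3], false [2 4]}
```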

I also have to own up to a personal fetish for a low LOC count, wanting it to compare ever so favourably with the old Java stack that preceded it. The cost was that some people wondered wtf some of my code did, but at least there were few lines of it. There’s got to be some prize for that, right?

We matured past these issues by communicating amongst ourselves as we found better ways of doing things, and thankfully we had a team where criticism was generally well received. Clojure itself is an opinionated language, and when you’re coding in a more idiomatic way the pieces of the language tend to fit together more easily. Idiomatic = more graceful/simple. Don’t say to a colleague: “Your code is shit”; do say: “There’s a more idiomatic way of doing this”. Stack Overflow and blog posts are full of examples of how to write more idiomatic code for particular use-cases, and the Clojure Google Groups are good too.


Namespaces

In Java/.NET development we now have extremely powerful tools that help you navigate your way around a large code-base, i.e. Eclipse/IntelliJ for Java. As the number of class files inexorably grows it never really seems to matter; you just get used to it. (Here’s a controversial Recursivity blog post entitled “IDEs Are a Language Smell”.)

In FP a single namespace will nearly always contain much more logic than the average OO class. Since you’ll be spending more focused time in fewer files, this creates a need for namespaces to be presentable. Comments at the top can be helpful, namespaces should have fewer public functions, and they should be split up if they get too large; we’ve occasionally split the cruft out of a namespace into an accompanying ns-utils.clj to keep the main one clean. We’ve also reapplied various bits of DDD and OO to model namespaces around business domain concepts and to keep them well encapsulated.

Then there’s (:require) vs (:use). (:require) is much better, as each dependency usage is marked with a prefix so you can see exactly where dependencies are used. This is kind of obvious, but in the early days we used (:use) in most places, without :only, and now we’re having to go back over and correct it. Note that we did play around with lein-slamhound for optimising our namespace declarations, but we found that the kind of namespaces you typically want to use it on need restructuring anyway.
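A sketch of the preferred form (the namespace name is made up):

```clojure
;; With (:use clojure.string) every var arrives unqualified and its origin
;; is invisible at the call site. An alias keeps each usage marked:
(ns payments.core
  (:require [clojure.string :as str]))

(str/join ", " ["a" "b"]) ;=> "a, b"
```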

Macros, Protocols, Defrecords

Having fun with macros is a rite of passage. Some people passionately detest them whilst others enjoy using complicated solutions to solve complicated problems. I’ve learned that if I’m going to build a macro then it helps to keep it minimal and to delegate the logic out into a separate function relatively quickly. We once had some special code eval’ing deftest forms to generate tests based on some data that we had saved up. The idea was that the auto-generated tests would then play nicely with lein test and consequently our build server. The trouble is that arguments to macros have to be serialisable and you’re limited by this, not to mention that the code can become that much harder to follow. By looking under the hood at what the deftest macro actually does (basically registering test functions) we replaced this little mini-framework of macros and evals without much fuss, and got better performance in return: we stopped loading the same data twice, once to register the tests and once to run them. Macros are helpful and powerful, but they come with a cost.
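The replacement boiled down to the idea that tests are just functions in a registry. A sketch of that idea (the names are mine, not the real code):

```clojure
;; A registry of named, zero-argument test functions.
(def test-registry (atom {}))

(defn register-test! [test-name f]
  (swap! test-registry assoc test-name f))

(defn run-tests []
  (doseq [[n f] @test-registry]
    (println "running" n "->" (if (f) "pass" "FAIL"))))

;; Registering plain functions needs no eval'd deftest forms, and the
;; test data only has to be loaded once.
(register-test! :sum-is-three #(= 3 (+ 1 2)))
```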

Protocols are like having an extremely awkward member of the team around: someone who can get stuff done with an air of awesomeness, yet at the same time you’re wondering if there just isn’t a simpler way. They have some quirks in setting them up and they add complexity, but on the whole I have to say that protocols have been good for us. We had some hairy areas of the codebase where we were doing concurrent operations and passing around a lot of functions, partially built up or otherwise. Introducing protocols allowed us to pass a family of related functions around together as one thing, with some immutable state. In a different area of the code-base they also laid down some hard interface definitions. We didn’t strictly need them for this purpose, but enforcing a little compiler-checked OO felt like a good thing.
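A toy example of the ‘family of functions plus immutable state’ usage; the protocol and record names are invented for illustration:

```clojure
;; A protocol lets related functions travel together with some immutable
;; state, instead of passing several bare fns around.
(defprotocol TradeHandler
  (enrich [this trade])
  (publish [this trade]))

(defrecord JsonHandler [endpoint]
  TradeHandler
  (enrich [_ trade] (assoc trade :format :json))
  (publish [_ trade] (println "POST" endpoint trade)))

(def handler (->JsonHandler "http://example/trades"))

(:format (enrich handler {:id 1})) ;=> :json
```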

Defrecords were primarily introduced for performance reasons: where we were creating lots of little map instances, we switched to simple defrecords instead. I would also argue that they forced us to think a bit harder about our modelling of data and our use of simple data structures, leading to cleaner code in some areas.
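For illustration (this Order record is hypothetical, not from our codebase):

```clojure
;; Keyword access on a record compiles to a direct field lookup rather
;; than a hash lookup, and the shape of the data becomes explicit.
(defrecord Order [id qty price])

(def o (->Order 1 100 9.5))

(:qty o)           ;=> 100
(assoc o :qty 150) ; still behaves like a map, returns a new Order
```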

Wrap Up

In my opinion, figuring out how to write Clojure code that is more idiomatic and simple is what makes Clojure fun, along with the fact that Clojure is blisteringly pragmatic. The learning never stops.

Clojure at a Bank – Support

October 24, 2012 § 2 Comments

My last post covered the rationale behind our team at an investment bank wanting to make a switch to Clojure from Java. I want to write in this post about how we were supported in our efforts by those at the bank.

Aircover from Within

First, not all the team jumped in at once to use Clojure. We created what we called a ‘pod’ within our team. Invitation to the pod was extended to all, but to join you had to get the Emacs/Swank/Lein environment working first by reading our project’s GitHub README page. People joined the pod at varying times and with their own personal approach: some devs wanted space to try out their own thing in the Clojure codebase whilst others wanted to pair up; some wanted to install the very latest version of Emacs themselves and understand all of our Emacs config whilst others took a more direct route through to the code. At one point we had an Emacs versioning arms race, of which I dropped out.

Some of the team didn’t join the pod initially and stayed back to provide air-cover to those within it. For the system we had to maintain and develop, they stepped up to satisfy whatever business requirements came our way, as well as handling any emergent technical problems. Gratitude is owed and deserved.

Technical Leadership vs Bottom-Up, Organic Adoption

I’ve heard of a case at a different investment bank of someone fairly senior decreeing that development going forward needed to be done in Clojure (I disclose now that I don’t have all the facts). I’m not sure how I feel about this. On the one hand it feels good that someone high up has a kick-ass mentality to drag up the quality of the tools that people in the trenches are using, to eliminate waste and speed up development. On the other hand I can see some people getting frustrated: not everyone wants to get bloody on the cutting edge and call themselves Lisp hackers, reverting from the cosy wizardry of professional refactoring powerhouses like Eclipse/IntelliJ to Emacs or Vim, tools of old.

Changing from a statically typed OO language to a dynamically typed FP one is a fundamental change requiring lots of organic support, and I’d be interested to hear whether it can be forced or not. Maybe it can. Maybe this is good. There’s also still immaturity around the Clojure tools ecosystem to grapple with: nREPL is replacing Swank-Clojure, and Lein 2, a major upgrade, is about to be released. Although you can get copious amounts done using Clojure, one has to say that the ground still has the occasional tremor, and this may affect bringing the masses onboard. I would expect there is lots of devil in the detail around Clojure mass adoption.

At our bank we were given some room and trust to take technical decisions so long as our cost/benefit case was solid. It felt right for our team at the time to start using Clojure – we were not instructed to do so.

Support from the Business

The business was typically not as patiently supportive as our technical stakeholders. However pleasant our business customers are, there’s always an element of ‘why have we got X amount of the team diverted away from serving the direct requirements of the business to do tech stuff?’

It may sound obvious, but the only effective way I’ve found to deal with this is to get a synergy between the ‘tech stuff’ and the needs of the business as soon as is reasonable. Our pod of Clojure devs switched from straightforward 100% re-engineering to a mixed blend that includes new requirements for the business too. Now the business is happier and we get some relief. Since development is more productive around our new infrastructure, we can turn around long-standing requirements faster, and it’s fun to do. They will reap the rewards from here on in.

This does however mean that we’ve increased the amount of work we’ve signed up for and the overall pressure on ourselves, which has affected our morale. We are currently sailing quite close to the wind, with the anti-patterns ‘scope creep’ and even, at times, ‘death march’ hovering. There’s no doubt that we’ve had our share of stresses and that there are more to come. I can, though, take some comfort that with hindsight the thought of sticking with the old status quo of a monolithic Java app does not become any more appealing. Eventually the waste will crush, no matter what the resources of an institution.

Clojure at a Bank – Moving from Java

October 23, 2012 § 17 Comments

An increasing number of large institutions are now wanting to leverage Clojure. Large institutions are inherently complex and can often be dysfunctional in various areas. They do however attract lots of very good people and good people often bring with them the best tools.

Investment banks in particular have hard problems to solve. The financial instruments themselves have varying degrees of complexity, although whatever complexity they have often pales in comparison with the ecosystem of systems that maintains them. I once said to a senior manager that the number of systems I was seeing in various strata reminded me of Darwin’s survival of the fittest, beasts coming in all shapes and sizes. He said I was wrong, because in the wild animals actually die rather than live on to relentlessly cause problems.

Over a year and a half ago I was a team lead on a medium-sized development team running a strategically important middle-office system. We were 100% a Java team and we had some pretty good devs onboard with decent backgrounds. We had plans to split up our Spring/Hibernate 1-million-LOC behemoth and were getting in some decent wins to make this happen. Then I persuaded a Swedish colleague I’d worked with before to come and work with us. I had the feeling the team needed someone to come in, give us a kick and challenge our general direction, and he certainly lived up to this. Large-scale refactorings I was fairly proud of he denounced as ‘just moving shit around’. If we were serious about changing our system then the only way to do it was to change our course radically. Although he was challenged fairly robustly, to the extent that at times I know he got quite pissed off, there were a few things we were already conscious of.

Firstly, Java encourages lots of modelling. “Let’s sit down with all the power of OO and create lots of classes that attempt to model the problem at hand. We won’t use UML, but instead we’ll evolve it in an agile way.” The model in the investment banking world is actually quite complicated, so this OO graph grows quite large. Abstraction is brought in to deal with variance; abstract classes and interfaces proliferate. The type system encourages this. I had used some dynamic languages before and it was quite obvious that we were essentially forcing lots of schema and type definition onto a problem domain that just didn’t want or need it. Sure, you can minimise the pain with patterns such as composition over inheritance, better abstractions and so on, but in some quarters we had already admitted to ourselves that we were fighting the wrong battle.

And soon it becomes like wading through cement. Our system was predominantly conceived around 2007, the heyday of Test Driven Development (TDD) and of heavy reliance on dependency-injection frameworks like Spring. I should state now that I don’t actually dislike Spring with any religious fervour; it just looks redundant in retrospect now that I’m working 95% with FP code. Our app back then also had lots of mock-framework usage; we had both JMock and Mockito flavours. Before our big move to Clojure I’d already decided I just didn’t get on with mock frameworks. They slow down development as you constantly find yourself fighting against lots of incomprehensible tests, written dogmatically, that rarely test anything useful. I can’t really think of many situations right now where using mocking would be a good idea, even in a Java/OO setting.

I had also picked up a consensus view amongst industry peers that the only way to truly break up a big monolithic system like ours was to enforce hard-line decoupling around service boundaries. The best way to achieve this is naturally to use different languages. If we’d stuck with Java we’d never have ‘gone for broke’, dropped a crowbar into our system and wrestled it apart. Now that we’re using Clojure, this is essentially what we’re doing, and not in a bad way.

When contemplating a fairly seismic shift of tool usage, you also need to ask yourself: what about the business? What about the stakeholders? Who is going to pay for this kind of re-engineering, and why?

In the end there are a lot of answers to these questions. My answer of choice tends to be waste. Waste on the scale lots of projects have should not be tolerated: time spent wading through cement-styled Java; time spent hunting through layers upon layers of indirection, interfaces, unit/integration/acceptance tests and Spring XML files, just to find the one if-statement that actually has some business significance. Eureka! We had, and have, waste on a fairly large scale, and starting to develop our system in a fundamentally different way almost seems the only sensible option, especially if you want people to stick around.

Here are some specifics: our new code is going to be an order of magnitude smaller in volume than the old, and that’s being conservative. With the new code you can use your REPL to navigate in and run any little bit of it you like. You can change any code and see the changes immediately in your browser (our app lives inside web-server instances) or REPL. Instead of modelling the world, every man and his dog, we’re going to write code that operates against simple data structures. If we then need to determine what is actually going on (wtf?) we’ll add lots of introspective abilities to our code. In short, to replace copious amounts of Java code along with built-up DSLs that only devs use, we’re going to build a simple rule engine. And we’re going to provide some visualisation tools so that you can actually see the rules themselves. A rule editor may follow.

Sure, we had lots of debates about why we couldn’t do what we needed to do in Java, and in the end it became clear that you couldn’t really have the debate on those terms. The decision to switch to Clojure fell into three natural brackets: 1) What’s good for the long-term nature of the project? 2) What’s good for the present team? 3) What’s good for the individuals involved? For 1), FP seemed a better fit for our system: it’s basically functional blocks of code hanging off financial market events. Our system also suits a dynamic language, making all the rigid schema definition less of a problem. On a different tack, the long-term project is surely best served by using the best tools and being able to attract the best people. For 2), the team will be motivated by learning something new and we’ll likely get better retention, which is ultimately good for the costs of the business. For 3), the majority of the team wanted to try Clojure. As Hakan put it: we don’t get paid to enjoy ourselves at work, but it doesn’t hurt. It’s an obvious point that a happier team is more productive.

In the end, at some point a team just has to take the jump. Certainly there have been problems and long-standing issues in our move to Clojure, and there are some significant questions still to be resolved. We’re a global team, and this brings with it a wealth of complexities. A year ago we were also a bunch of Clojure newbies finding our way, and some of our code and decisions reflect this. I’ll write about these in a subsequent post.

Year of Clojure at a Bank

October 22, 2012 § 2 Comments

We’re roughly coming up to the one-year anniversary of when myself and a ragtag bunch of pan-European devs started using Clojure for our day jobs at an investment bank, working on a strategically important middle-office platform. Hakan Raberg was the change agent who, on joining the team, argued the case for moving to Clojure, and to whom I am indebted. Hakan and I gave a talk at EuroClojure about the project (video here).

We’re also going to talk this week at FP Day in Cambridge, where we’ll address ‘Clojure after the honeymoon’. We want to give an honest account of the problems we faced introducing Clojure inside a somewhat conservative environment, and of what the current state of Clojure is. We also want to talk about some of the technical challenges we had, particularly around making the jump from Java to Clojure. Hakan has left the project to rewrite Emacs in Clojure and to widen his neural pathways even further; I’m still on this very project, dealing with the fallout.

Time permitting, I hope to write up some of our experiences, covering some of the cultural shifts and also some of the technical stuff we did, e.g. our approach to regression testing, our Clojure rule engine, and the continued battle inside and out of a large monolithic Java beast. I’ll do this in a short series of posts.

Clojure deep-update-in

July 7, 2012 § Leave a Comment

I needed this the other day and wondered if it would help others or if there’s a better way.

There are two functions. The first goes looking for a key anywhere in a map and produces a sequence of key-paths that can be used with get-in, update-in and assoc-in. Note that this code only recurses into substructures that are vectors or maps.

(defn find-key [ks k m]
  (cond (map? m)
        ;; collect the path for any direct match, then recurse into each value
        (reduce into (map (partial conj ks) (filter #{k} (keys m)))
                (map #(find-key (conj ks (key %)) k (val %)) m))
        (vector? m)
        ;; recurse into each element, using its index as the path segment
        (reduce into '() (map #(find-key (conj ks %1) k %2)
                              (iterate inc 0) m))))

The deep-update-in uses the above:

(defn deep-update-in [m k f]
  (reduce #(update-in %1 %2 f) m (find-key [] k m)))

m is the map, k the leaf key you're looking for (a single key, not a path of keys), and f the function that performs the update.
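A quick usage example (mine, not part of the original snippet):

```clojure
(def m {:a {:bar 1 :c [{:bar 2} {:d 3}]}})

(find-key [] :bar m)
;=> ([:a :bar] [:a :c 0 :bar])  ; ordering of the paths is not guaranteed

(deep-update-in m :bar inc)
;=> {:a {:bar 2, :c [{:bar 3} {:d 3}]}}
```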


The DEFCON Door

July 28, 2009 § Leave a Comment

This is a post about the practice of risk management through use of a ‘DEFCON Door’.

A DEFCON door? Well, if you’ve seen that early-’80s film ‘WarGames’ starring Matthew Broderick, or if you simply know your US history, then you’ll know that DEFCONs (Defense Readiness Conditions) are a “measure of the activation and readiness level of the US armed forces” (as described on Wikipedia). During the Cuban Missile Crisis DEFCON 2 was reached, with the US Air Force readied to go and bomb the USA’s Cold War rival, the USSR.

And how might this be applicable to your typical day-to-day IT project? For starters, on one project what we did was find a spare door and configure it as such:

Defcon Door

It’s very simple really. Along the top, the column headers are the various DEFCONs. DEFCON 1 would mean that it is the consensus of the team that the project is going to fail because of a particular issue; that a very serious risk has indeed materialized. DEFCON 2 risks are still very serious, and DEFCON 3 ones are lesser risks that the team feel are important enough to warrant exposure at a given time.

What I found most pleasing about the DEFCON door was that it seemed to be a fairly natural extension of Agile software development practice and was therefore well-adopted by individuals on the project. For instance, in a similar way to how we do story estimation, a person would read out a ‘risk card’ and then the team would count to three and display with a finger count the DEFCON level that they thought the risk had reached. No fingers at all meant that the risk could simply drop off the door altogether – which ought to be a good thing.

Normally every couple of days or so after the standup we would do a quick review of the door. Sometimes, we also did it after a retrospective, because a few things that had been brought up were indeed risks that the team thought were ‘DEFCON appropriate’.

Risks would sometimes traverse up and down through the DEFCONs as the team grappled with them, and often risks would be broken up into multiple risks, or merged together to form all-encompassing ones. No single team member should own the door; it works best as a collaborative tool where everyone can have input.

Thoughts on effectiveness:

I think individuals within a team will often have very real concerns about some particulars of a project. Personally speaking, when I have such concerns I want them to be listened to and registered by the team, and I think the DEFCON door goes some way towards giving you that.

It also brings the team closer to the risk tracking that would ordinarily be done by project managers. Although I don’t think it should be a replacement – it works best as a lightweight tool – there’s real value in having team members identify and tackle risks on a day-in, day-out basis.

The DEFCON door also has uses outside the immediate vicinity of the team, such as the ability to show project stakeholders the door and say “these are the major risks that the team feels are present right now”. Often stakeholders not operating within the team will want to gauge collective opinion about how the project is going and the challenges being faced. Here the door can play a useful role.

Potential Challenges:

Firstly, it can sometimes be difficult to locate a usable door, especially a nice wooden one. If need be you could use a wall, but then you’d be harming the linguistically pleasing flow of the “DEFCON Door”. It is, however, really up to you.

Understanding the scope of the door can at times be slightly difficult. For example, it can be tempting to throw enterprise-wide risks up on the door that are not directly project-related (e.g. is this architecture appropriate for adoption by the wider organisation?). A strategy a colleague of mine came up with is to have a DEFCON corridor, but I fear this may be over-egging the solution. I would endeavour to keep the door simple, and if the door identifies risks and questions to be managed elsewhere then that can only be a good thing.

We should also note that this model of risk management is very lightweight and simple, and that there is much more to risks than can easily be tracked and measured on the door. It therefore seems appropriate to use it to augment a more sophisticated risk management process.

In conclusion, the DEFCON door is a simple tool that aims to bring risks explicitly into the highly visible and participatory realm of an empowered, self-organising Agile team.

Flow, Pair Programming and Emotional Intelligence

July 22, 2008 § 6 Comments

I was involved in an interesting chat with some colleagues about the apparent incompatibilities between pair programming and being in the ‘zone’; a Zen-like state where everything just clicks and one is able to perform seamlessly, to the best of one’s ability in the present moment. From a developer’s viewpoint, we are almost on autopilot as our hands reach out and tap away at the keyboard, the mind harmoniously serving up a variety of designs and strategies that may help us solve the problem at hand. We effortlessly approach excellence.

Psychologists refer to the zone as ‘flow’. Daniel Goleman – author of Emotional Intelligence – describes flow as being “emotional intelligence at its best… the ultimate in harnessing the emotions in the service of performance and learning”. Emotions generate a plethora of thoughts for us, including distracting worries and concerns. Having our emotions working for us therefore enables us to be more focussed and to think more clearly and objectively about a specific problem.

Pair programming is not a stress-free activity. Though it has been shown to raise productivity, it requires more focus on the part of the developer – asking much more of them than if they were coding solo. Being in a “pair”, you are constantly required to explain your intricate thought processes to another person. You are challenged, and you have a responsibility to ensure that the person you are pairing with gets up to speed with the material at hand and that you are providing them with a progressive learning experience.

We need to appreciate that developers are humans, and sadly are not perfect code-punching machines. Pair programming introduces an emotional burden on the developer. Thoughts about how the other person in the pair is coping and performing are among many, along with speculation as to what the other person is thinking… Are they frustrated that I can’t keep up? Are they worried I’m going too fast? Or worse still, do they feel guilty for slowing me down? – a common concern as developers strive to be productive. In order to deal with these worries a whole new – totally justifiable – plateau of thinking opens up. The brain switches over from analysing the current software problem to instead work on strategies for making the pair more effective: ‘they need to see this area of the code-base as it will help them to contextualise the problem better… I want to get some “quick wins” to build our momentum as a pair, therefore it makes sense to pick off this little bit of low-hanging fruit…’

I believe that whilst pair programming has the capacity to inhibit flow, the emotional intelligence within us is the tool to help minimise the impact distractions have, allowing us to get closer to a level of excellence at an individual level. EQ is essentially about being aware of our emotions and the impact they have on us – realising when we’re worried, or when our frustrations are mounting. A higher EQ will not only help us to better meet our own needs, it will give us more empathy and understanding with which to meet the needs of the person we’re working with – to help us create a more effective “pair”. And the best thing about EQ is that, unlike IQ, it can be learned.

Lastly, I want to refer to the premise that “stress makes you stupid”. We underperform when our emotions run unchecked. I’ve seen first-hand developers resigning from jobs they fought hard for because pair programming has been too difficult an experience. If we can improve our awareness of the distractions and difficulties pairing introduces – and in doing so our EQ – then we’ll likely make it easier for ourselves and the person we’re pairing with.

Release Planning Wall

August 24, 2007 § 2 Comments

On a previous project we used a white-board wall for release planning – a ‘Release Planning Wall’. As it became quite an important tool for us, I thought it would be worth posting our ‘wall configuration’ here for future use.

The release planning wall is quite different from a typical ‘card wall’. A standard card wall will track stories through their life cycle (i.e. ready-to-play, dev complete, QA complete etc) and would cover one to two iterations. This wall tracks stories in the overall context of a particular release.

The rather squashed up image below can be seen properly here. I should point out that the diagram uses dummy data.

release planning wall

Below is a real application of the release planning wall, as modelled above.

card wall

As you can hopefully see, the wall is essentially segmented into the individual iterations that form a release. Above each iteration segment is displayed the iteration number, the target velocity and the actual velocity (if that iteration has been played). In each iteration segment lies information about each individual story, including its title or primary reference, its estimate, and its status (quite simply, whether or not it’s been signed off). In the images above the status is simply a tick next to each story, and the estimate a number.

The images above have the names of developers next to each story. This can be useful for communication purposes, but it’s not a crucial feature of the wall. It may not be applicable in circumstances where multiple development pairs are working on the same story in parallel.

The release planning wall serves a few objectives. Firstly, it gives anyone caring to look at it an overall status report of the current release. It can be immediately seen which iteration is currently in play, the work that has gone into the release thus far, the work that is outstanding, and the capacity the release has left for additional work to be squeezed in.

Secondly it gives release planners a useful tool when assessing which stories they can fit into particular iterations. I’ve seen many examples of customers and business analysts standing in front of the wall, discussing the timeline of when various stories can be played. By having a story’s estimate and the target velocity for each iteration at hand, they can easily determine if a story can fit into a particular iteration, and which stories they can perhaps juggle around so that this can be achieved.

Significantly, the wall gives context to the development team members. It enables everyone to visualise the breadth of the iteration, and whereabouts development is in the overall timeline of the current release, and therefore to some degree the whole project. BAs tend to ‘own’ this wall, while developers will typically make changes to the more granular story card wall.

Go Release Planning Wall!


Here’s another release planning wall using this format. This one was taken some time later and is looking much busier.

release planning wall