Three Historical Definitions of the Open/Closed Principle and a Claim that it’s Pointless

Bertrand Meyer first published the OCP in his influential 1980s book Object-Oriented Software Construction:

  • “A module is said to be open if it is still available for extension. For example, adding new fields, or performing new functions.
  • A module will be said to be closed if it is available for use by other modules.”

It's a neat double definition, not least because the definition of Closed is both useful and not necessarily in contradiction with the definition of Open. But Meyer's proposed technique—use subclassing to achieve Openness of a Closed module—is widely ignored. Many of us have discovered the pain of working with inheritance hierarchies, so we savour the Gang of Four's sage dictum: “prefer composition over inheritance.”
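
For concreteness, here is a minimal TypeScript sketch of Meyer's subclassing idea; the class names are hypothetical, chosen purely for illustration:

    // The base class is closed: its existing clients keep working unchanged.
    class Report {
      render(): string {
        return this.header() + "\nbody";
      }
      protected header(): string {
        return "REPORT";
      }
    }

    // It is also open: a subclass extends behaviour without touching the original.
    class TimestampedReport extends Report {
      protected header(): string {
        return `${super.header()} @ ${new Date().toISOString()}`;
      }
    }

    const legacy: Report = new Report();              // old clients are unaffected
    const extended: Report = new TimestampedReport(); // new clients get the extension
    console.log(legacy.render());
    console.log(extended.render());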

Dynamic languages like JavaScript can do the open/closed trick quite easily. The danger is that, in doing so, you develop inscrutable code and are left with a system that, when it works, you don't know how it works; and so when it breaks you don't know how to fix it.
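
As a rough TypeScript sketch of what that runtime patching can look like (the object and method names here are invented):

    // Nothing in logger's source changes, yet it gains a new capability at runtime...
    const logger = {
      log(msg: string) { console.log(`[log] ${msg}`); },
    };

    (logger as any).logError = (msg: string) => console.error(`[error] ${msg}`);

    // ...and every future reader must somehow discover that logError exists at all.
    (logger as any).logError("disk full");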

Uncle Bob all-but-redefined the Open/Closed principle by using Interfaces as his technique. The interface is fixed and Closed; modules that depend on it can rely on it not changing. But the Implementation is Open: it can be changed without breaking the interface.
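
A minimal sketch of that reading of the principle, in TypeScript and with invented names:

    // The interface is the Closed part: callers depend on it and it does not change.
    interface PaymentGateway {
      charge(amountInPence: number): boolean;
    }

    function checkout(gateway: PaymentGateway, amountInPence: number): void {
      if (!gateway.charge(amountInPence)) {
        throw new Error("payment declined");
      }
    }

    // The implementation is the Open part: new behaviour arrives as new classes,
    // not as edits to checkout() or to the interface it depends on.
    class TestGateway implements PaymentGateway {
      charge(_amountInPence: number): boolean { return true; }
    }

    checkout(new TestGateway(), 4999);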

It is worth mentioning a word of wisdom from the .Net team's Framework Design Guidelines: the weakness of interfaces in Java and .Net is precisely that they are 100% closed. There can be no version 2. Or rather, if there is an InterfaceV2 then it can usually have no useful relationship to InterfaceV1. You might as well call it ICompletelyUnrelatedInterface. (Or perhaps one could put the versioning at the namespace level).
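
A small sketch, again with invented names, of why a published interface is so thoroughly closed:

    // Adding a member to IDonorStoreV1 would break every existing implementation,
    // such as this one, so the published interface cannot grow.
    interface IDonorStoreV1 {
      findByEmail(email: string): string | undefined;
    }

    class SpreadsheetDonorStore implements IDonorStoreV1 {
      findByEmail(email: string): string | undefined {
        return email === "a@example.org" ? "Alice" : undefined;
      }
    }

    // So the new capability ends up in a second interface that existing clients
    // and existing implementations know nothing about.
    interface IDonorStoreV2 {
      findByEmail(email: string): string | undefined;
      exportAll(): string[];
    }
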
This versioning problem is widely felt in service oriented systems with public interfaces. It is often addressed by creating a new endpoint for a new version of the service. Offering two versions of a service becomes on the whole precisely as expensive as offering two services, which is to say twice as expensive. This is unfortunate.

Contrast this with Meyer's vision of OCP: on his subclassing approach, version 1 clients and version 2 clients would call the same service and get the same responses. Version 2 clients would recognise, and so be able to use, the enhanced v2 capabilities, whereas version 1 clients would only recognise the version 1 capabilities. But here I see a second problem with Meyer's vision: I've almost never seen systems (or even parts of systems) that achieve it in practice. It's a beautiful dream, but an unachievable one; a pipe dream.

More recently (Dec 2016), Michael Feathers has offered an updated version, towards the bottom of the page at Towards a galvanizing definition of technical debt:
“our code is better to the degree that we don’t have to change it much when we add features. We should be able to make modifications primarily by adding new classes and functions rather than changing existing ones”
This is a much 'softer' formulation than Meyer's or Bob Martin's, and you could take it as just a rule of thumb; something to weigh in the balance against other factors. Feathers' implementation in this case (and I'm left with the impression that in a different codebase he'd be happy with a different implementation) is doing event-driven code as most people think it should be done: use an AddEventListener() interface, which makes the code Open to all kinds of extension.
This AddEventListener() is exactly the approach used in the HTML spec and other GUI frameworks of the past 20 years. The downside is that the 'closed' bit of the interface is so small and weakly typed that it's almost non-existent. The interface tells you nothing about the semantics. (What kind of events can I listen to? What information do I get about each event? What can I do with them? I can only find out by reading the HTML spec, which turns out to be quite hard going, or by turning to MDN, or, the first port of call for many, to StackOverflow. In a bespoke codebase, replace this with “ask for documentation; find it is incomplete; then hunt through the code for examples of how to use it”.)
Strongly typed interfaces are at least somewhat self-documenting—they offer a definitive list of all syntactically valid calls to the service—even if that documentation depends heavily on how well the developers chose their method and parameter names.
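
To make the contrast concrete, here is a small TypeScript sketch, with invented names, of the two ends of that spectrum:

    // The listener-style interface is maximally open but says almost nothing:
    // what events fire, and what is in detail? The types cannot tell you.
    type Listener = (event: { type: string; detail?: unknown }) => void;

    class OpenWidget {
      private listeners: Listener[] = [];
      addEventListener(listener: Listener): void { this.listeners.push(listener); }
    }

    // A strongly typed interface is far more closed, but it documents itself:
    // every syntactically valid call is listed, named by the developer.
    interface DonationEvents {
      onDonationReceived(donorName: string, amountInPence: number): void;
      onDonationRefunded(donorName: string, amountInPence: number): void;
    }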

These three examples leave me with mixed feelings. OCP seems like trying to square the circle, and Meyer's name for it is a well-chosen contradiction. Yet the goals—Openness for extension, Closedness for reliability—are unavoidable.

Dan North, amongst others, has suggested that OCP, and indeed all the SOLID principles, are of limited value and that we should drop them in favour of something else. I sympathise—I think that SOLID is a mishmash of mixed value—but I'm willing to wrestle for a couple more years with OCP before I admit defeat.

I'd rather have the above three techniques, and others, in my toolkit, because my software design still has to address the two contradictory requirements that Meyer identified in the 80s:
– Because my software is still evolving, it has to be open for evolution: it has to change.
– Because my software is already in use, and hence being depended on by some other software or person, it has to be reliable and therefore can't change.

Conway’s Law & Distributed Working. Some Comments & Experience

The eye-opener in my personal experience of Conway's law was this:

A company with an IT department on the 1st floor, and a marketing department on the 2nd floor, where the web servers were managed by the marketing department (really), and the back end by the IT department.

I was a developer in the marketing department. I could discuss and change web tier code in minutes. To get a change made to the back end would take me days of negotiation, explanation and release co-ordination.

Guess where I put most of my code?

Inevitably the architecture of the system became web tier vs back end. And inevitably, I put code on the web server which, had we been organised differently, I would have put in a different place.

This is Conway's law: the communication structure – the low cost of working within my department vs the much higher cost of working across a department boundary – constrained my arrangement of code, and hence the structure of the system. The team "just downstairs" was just too far. What was that gap made of? Even that small physical gap raised the cost of communication; but so did the gaps & differences in priorities, release schedules, code ownership, and—perhaps most of all—personal acquaintance; I just didn't know the people, or know who to ask.

Conway's Law vs Distributed Working

Mark Seemann has recently argued that successful, globally distributed, OSS projects demonstrate that co-location isn't all it's claimed to be. Which set me thinking about communication in OSS projects.

In my example above, I had no ownership of (for instance, no commit rights to) the back end code, and I didn't know, and hence didn't communicate with, the people who did. The tools of OSS—a shared visible repository, the ability to 'see' who is working on what, public visibility of discussion threads, being able to get in touch, to raise pull requests—all serve to reduce the cost of communication.

In other words, the technology helps to re-create, at a distance, the benefits enjoyed by co-located workers.

When thinking of communication & co-location, I naturally think of talking. But @ploeh's comments have prodded me into thinking that code ownership is just as big a deal as talking. It's just something that we take for granted in a co-located team. I mean, if your co-located team didn't have access to each other's code, what would be the point of co-locating?

Another big deal with co-location is "tacit" knowledge, facilitated by, as Alistair Cockburn put it, osmotic communication. When two of my colleagues discuss something, I can overhear it and be aware of what's going on without having to be explicitly invited. What's more, I can quickly filter out what isn't relevant to me, or I can spontaneously join conversations & decisions that do concern me. Without even trying, everyone is involved when they need to be in a way that someone working in a separate room–even one that's right next door–can't achieve.

But a distributed project can achieve this too. By forcing most communication through shared public channels—mailing lists, chatrooms, pull request conversations—a distributed team can achieve better osmotic communication than a team which has two adjacent rooms in a building.

The cost, I guess, is that typing & reading is more expensive (in time) than talking & listening. Then again, the time-cost of talking can be quite high too (though not nearly as high as the cost of failing to communicate).

I still suspect that twenty people in a room can work faster than twenty people across the globe. But the communication pathways of a distributed team can be less constrained than those same people in one building but separated even by a flimsy partition wall.

The Yes, The No and the Painful: using, and failing to use, estimates for a no-go decision

@AgileKateOneal recently asked for examples of effective estimate use in medium/long-term planning.

Example 1: Back-of-the-envelope Go/No-Go Decisions

Making a no-go decision sprang instantly to mind. Many such decisions are casual and quickly forgotten: a back-of-the-envelope calculation shows that an idea is well beyond what we can afford, and the conversation moves on. But that estimate may have saved you from months of wasted effort.

A NoEstimator might object that one could profitably try something out rather than do nothing. Which is sometimes true, but creative thinkers in commerce & IT can always generate a hundred more ideas than a team can try out. You can't try out everything.

Example 2: Small UK charity looking at CRM options

In November last year I worked with a small UK charity, www.redinternational.org, who were badly in need of some kind of CRM software to keep in touch with supporters and project partners. They were running largely on spreadsheets built from downloaded reports from virginmoneygiving.com, mydonate.bt.com, etc. They also had an Access database with a fair amount of donor & similar data in it.

Question: Is it better to pay for a CRM solution – typical charity starting price £10,000, going up easily to £100k – or get someone to do enough work on the Access database to make it a usable solution?
My Answer: I first spent some time discovering and documenting their main use-cases (to clarify: their 'business' use cases, that is, the things the charity had to do, whether manually or with IT). I gave that picture to the CRM providers so that they could give us a sensible proposal, and I worked out an estimate for extending/developing the Access database. Based on that, we could see that a CRM consultancy/solution looked like £10–£20k (5-year cost) and the DIY option about 200–400 developer days.
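
To make the comparison concrete, here is the kind of back-of-the-envelope arithmetic involved; the day rate below is an assumed figure for illustration, not the one used at the time:

    // Back-of-the-envelope comparison. The day rate is an assumption for illustration.
    const crmFiveYearCost = { low: 10_000, high: 20_000 }; // £, from the CRM proposals
    const diyDeveloperDays = { low: 200, high: 400 };      // my estimate of the DIY effort
    const assumedDayRate = 350;                            // £ per developer day (assumed)

    const diyCost = {
      low: diyDeveloperDays.low * assumedDayRate,   // £70,000
      high: diyDeveloperDays.high * assumedDayRate, // £140,000
    };

    // Even at the cheap end, DIY costs several times the CRM option.
    console.log(diyCost, crmFiveYearCost);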

Even at this level of accuracy, it was good enough to see that DIY should be a no-go. I did not expect this. I thought that the charity's actual requirements were sufficiently small that we could do something useful for a few thousand pounds. But two hours spent going through their use-cases on my estimating spreadsheet showed me that I was wrong. So I recommended the best-value CRM option.

This, I think, is planning 101: a couple of hours working through the detail on paper is a lot cheaper than running the experiment; but can be enough to make a probably-good decision.

Example 3: Provide a system to automate a small team's manual processes for a capped price

This was for a financial services company in 2013. The team were working on PPI claims for an insolvency practitioner (obliged to pursue potential claims that might bring in some money for their clients' accounts) and had about ten thousand potential claims with hundreds per month being added. They had been working manually on spreadsheets for over a year.

I spent 4 days on analysis and listed a set of use-cases that covered the processes end-to-end; and I estimated that a suitable system could be done for about 40 days development work. The estimate cost about 3 or 4 hours on top of the analysis.

The contract to provide the system was capped-price. The customer was not open to a no-estimates approach. And I accepted being bargained down to below my estimate (Doh! I hear you say. Quite so). The actual cost came out close to (but above) my original estimate, and the system could have used another week's work to make it more user-friendly.

The better course would have been to use the estimate/budget mismatch to declare a no-go rather than accept a reduced budget. This might have resulted in the client agreeing to go ahead anyway (which might in turn have led to a no-estimates approach to the work). Or it might have led to no contract. Either way would have been less painful and more controlled than over-running the budget.

The Known Unknowns Matrix

I.T. is not the only industry to have happily latched onto the former US Defense Secretary's famous phrase, "the unknown unknowns". It's a good phrase if you must plan or estimate anything, because planning & estimating always involve risk.

But we should really consider the full matrix. There are pitfalls in at least two of the quadrants:

  • Known knowns: things we know, and we know that we know them.
  • Unknown knowns: things we know but don't realise we know. Tacit knowledge that we take for granted. It becomes a problem if we are responsible for communicating it to people who don't know, and fail to. It is also a problem when we start work in a new context and don't realise that what we 'know' is no longer valid there, so it becomes an unknown unknown.
  • Known unknowns: things we know that we don't know. We can record the risk, and estimate a cost for investigation & discovery.
  • Unknown unknowns: things we don't know that we don't know. This is the quadrant most likely to shipwreck plans.

Concerning the unknown unknowns, my experience with doing novel software is that when budgeting for development you should estimate for development, plus learning time, plus developing the things you learned about, plus solving problems you didn't know you'd have. A rule of thumb for novel systems might be, multiply your estimate by ten to cope with the unknowns. And/Or, have clear “abandon the project” criteria, even months into the project. Don't be a dupe for the sunk-cost fallacy.

Less dramatically, my takeaway from this is to use this matrix when listing risks and assumptions. Just having a space for the possibility of unknown knowns & unknown unknowns can be an impetus to discuss, “risk-storm” & consult, to help your team discover the as-yet-unknowns.

P.S.

I've just read the brief and brilliant mcfunley.com/choose-boring-technology, which points out that you can't afford too much novelty. Its author suggests that for any new project you should grant yourself a limit of 3 novelty chips. When you've spent them, you get no more.

Well, not unless you really can overshoot your budget and timescales by over 1,000%.

Kudos

A slideshare by Danni Mannes on Agile Architecture pointed out to me that all quadrants are worth some of our time.