Tag Archives: success and failure

The Yes, The No and the Painful: using, and failing to use, estimates for a no-go decision

@AgileKateOneal recently asked for examples of effective estimate use in medium/long-term planning, and making a no-go decision sprang instantly to mind.

Many such decisions are casual and quickly forgotten: a back-of-an-envelope calculation says that an idea is well beyond what we can afford, and the conversation moves on. A NoEstimator might object that one could profitably try out something rather than nothing. That is sometimes true, but creative thinkers in commerce & IT can always generate a hundred more ideas than a team can try out; you can’t try out everything.

But a couple of more specific examples from recent work:

Example 1: Small UK charity looking at CRM options

In November last year I worked with a small UK charity, www.redinternational.org, who were badly in need of some kind of CRM software to keep in touch with supporters and project partners. They were running largely on spreadsheets built from reports downloaded from virginmoneygiving.com, mydonate.bt.com and the like. They also had an Access database with a fair amount of donor and similar data in it.

Question: Is it better to pay for a CRM solution (typical charity starting price £10,000, rising easily to £100k) or get someone to do enough work on the Access database to make it a usable solution?
My Answer: I first spent some time discovering and documenting their main use-cases (to clarify: their ‘business’ use cases, that is, the things the charity had to do whether manually or with IT). I gave that picture to the CRM providers so that they could give us a sensible proposal, and I worked out an estimate for extending and developing the Access database. Based on that, we could see that a CRM consultancy/solution looked like £10–£20k (5-year cost) and the DIY option about 200–400 developer days.

Even at this level of accuracy, the estimate was good enough to show that DIY should be a no-go. I did not expect this: I had thought that the charity’s actual requirements were sufficiently small that we could do something useful for a few thousand pounds. But two hours spent going through their use-cases on my estimating spreadsheet showed me that I was wrong. So I recommended the best-value CRM option.
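The no-go arithmetic can be sketched in a few lines. This is a minimal illustration, not the actual spreadsheet; in particular the £300/day contractor rate is my assumption for the sketch, not a figure from the engagement.

```python
# Hypothetical day rate for a contract developer; an assumption for illustration.
DAY_RATE_GBP = 300

diy_days = (200, 400)  # estimated developer-day range for the Access (DIY) route
diy_cost = tuple(days * DAY_RATE_GBP for days in diy_days)

crm_cost = (10_000, 20_000)  # 5-year cost range for a CRM consultancy/solution

print(f"DIY: £{diy_cost[0]:,}–£{diy_cost[1]:,}")  # £60,000–£120,000
print(f"CRM: £{crm_cost[0]:,}–£{crm_cost[1]:,}")  # £10,000–£20,000

# Even the optimistic DIY bound exceeds the pessimistic CRM bound:
# the ranges don't overlap, so DIY is a no-go.
assert diy_cost[0] > crm_cost[1]
```

Note that the decision doesn’t depend on the estimate being precise: any plausible day rate leaves the two ranges well apart, which is exactly what makes a rough estimate good enough for a no-go call.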

This, I think, is planning 101: a couple of hours working through the detail on paper is a lot cheaper than running the experiment, but can be enough to make a probably-good decision.

Example 2: Provide a system to automate a small team’s manual processes for a capped price

This was for a financial services company in 2013. The team were working on PPI claims for an insolvency practitioner (obliged to pursue potential claims that might bring in some money for their clients’ accounts), and had about ten thousand potential claims with hundreds more being added each month. They had been working manually on spreadsheets for over a year.

I spent four days on analysis and listed a set of use-cases that covered the processes end-to-end; I estimated that a suitable system could be built in about 40 days of development work. The estimate itself cost about three or four hours on top of the analysis.

The contract was capped-price; the customer was not open to a no-estimates approach. And I accepted being bargained down to below my estimate (Doh! I hear you say. Quite so). The actual cost came out close to, but above, my original estimate, and the system could have used another week’s work to make it more user-friendly.

The better course would have been to use the estimate/budget mismatch to declare a no-go rather than accept a reduced budget. This might have resulted in the client agreeing to go ahead anyway (which might in turn have led to a no-estimates approach to the work). Or it might have led to no contract. Either way would have been less painful and more controlled than over-running the budget.

The Known Unknowns Matrix

I’m sure that I.T. is not the only industry to have gratefully latched onto the former US Defense Secretary’s famous phrase, “the unknown unknowns”. It’s a useful phrase to ponder if you’re responsible for planning or estimating anything. A recent SlideShare presentation by Danni Mannes on Agile Architecture pointed out to me that one should really consider the full matrix:

  • Known Knowns: things we know, and we know that we know them.
  • Unknown Knowns: things we know but don’t realise we know; tacit knowledge that we take for granted. These become a problem if we are responsible for communicating them to people who don’t know, and fail to. They are also a problem when we start work in a new context and don’t realise that what we ‘know’ is no longer valid, at which point they become unknown unknowns.
  • Known Unknowns: things we know that we don’t know. We can record the risk and estimate a cost for investigation & discovery.
  • Unknown Unknowns: things we don’t know that we don’t know. This is the quadrant most likely to shipwreck plans.

My personal takeaway from this is that I will try using these quadrants when listing risks. Just having a space for the possibility of unknown knowns and unknown unknowns can be an impetus to do a little risk-storming and consultation, to help you discover the as-yet-unknown.
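A risk list organised by quadrant might look like the sketch below. The quadrant labels follow the matrix above; the example risks and the idea of leaving empty slots as prompts are my own illustration.

```python
from collections import defaultdict

# A risk register keyed by quadrant; accessing a key creates an empty slot,
# so even an unfilled quadrant shows up as a prompt for discussion.
risks = defaultdict(list)

risks["known known"].append("Supplier holiday shutdown in December")
risks["known unknown"].append(
    "Data quality of the legacy database: budget 2 days for investigation"
)
risks["unknown known"]    # tacit knowledge we should try to write down
risks["unknown unknown"]  # can't be listed directly, only shrunk by risk-storming

for quadrant in ("known known", "known unknown",
                 "unknown known", "unknown unknown"):
    entries = risks[quadrant] or ["(hold a risk-storm)"]
    print(f"{quadrant}: {entries}")
```

The point of the empty slots is exactly the takeaway above: a heading with nothing under it is itself an impetus to go and ask.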

P.S.

I’ve just read the brief and brilliant mcfunley.com/choose-boring-technology.

Estimates and NoEstimates

We had a debate & discussion at XP-Man on NoEstimates, for which I did some notes. Reading the NoEstimates material, I was most attracted to the sense of “let’s not be satisfied with second rate” and the thirst for continuous improvement.
I was left with the sense (possibly because I already believed it) that there are contexts in which NoEstimates works and contexts in which it doesn’t. But I was very glad to be provoked to ask, in each case, “What value, if any, is our estimate/planning effort adding?” and “Isn’t there a better way to deliver that value?”

What is an Estimate?

An estimate for a project is (1) a list of things to work on; (2) a cost-range for those things; and (3) a list of risks, that is, (3a) dependencies that 1 & 2 rely on, and (3b) things that might cause significant change.

An estimate or plan for a sprint is (1) a list of things to work on; (2) a “cost” (e.g. story points) for those things; (3) a list of things we are uncertain about; and (4) a list of things we need to get help with.
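The project-estimate structure above could be sketched as a simple data type. This is a minimal sketch for illustration; the field names and example values are my own, not from any actual engagement.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectEstimate:
    work_items: list[str]                  # (1) things to work on
    cost_range_days: tuple[int, int]       # (2) cost range for those things
    # (3a) dependencies that 1 & 2 rely on
    dependencies: list[str] = field(default_factory=list)
    # (3b) things that might cause significant change
    change_risks: list[str] = field(default_factory=list)

# Hypothetical example values, loosely echoing the charity CRM case above.
estimate = ProjectEstimate(
    work_items=["Import donor data", "Supporter mailing list", "Gift reporting"],
    cost_range_days=(200, 400),
    dependencies=["Access database schema is stable"],
    change_risks=["A new payment provider may be adopted mid-project"],
)
```

Writing the three parts down as one record makes the point that a bare number is not an estimate: the cost range only means something alongside the list of work it covers and the risks it is conditional on.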

The value of a project estimate is to feed into (1) a go/no-go decision, and (2) seeing things we want to see sooner rather than later (e.g. should we hire more people? do we need help from specific third parties? is releasing in time for Christmas possible?).

The value of a sprint estimate is to see things we need to ask for in advance (i.e. external help or resources); to give everyone a sense of confidence about what we’re doing; and to fail faster, that is, to see sooner what we can’t achieve.

Max de Vries Interview – What every project manager should know

I was recently fortunate enough to catch Max de Vries on a trip to the UK. Max has overseen complex IT projects in Europe, Asia and the US, in finance, public sector and now games for over 20 years, most recently being in charge of Popcap’s Plants vs Zombies franchise, overseeing the launch of PvZ 2 and PvZ-Garden Warfare. I managed to fire off two quick questions before he had to go:

Q: What are the top 3 things every project manager should know?
Who to trust; how to communicate; and what would success look like.
Q: What are the common mistakes that inexperienced project managers make?
  • Not realizing they are inexperienced
  • Making estimation a political process
  • Miscalculating the cost of saying no vs the cost of failure
  • Not thinking about the critical path hard enough
  • Thinking that people know what they want
  • Underestimating the value of a high functioning team
  • With agile processes: Modifying a process without understanding the reasons why the process works and doesn’t work

So now you know.

IT Success and Failure — the Standish Group CHAOS Report Success Factors

The Standish Group have been famously (notoriously) publishing their CHAOS Report, with its IT project success/failure/challenged rates, since 1994. “XXX% of all IT projects fail!” headlines are doubtless responsible for much of their fame, but they have also attempted to analyse ‘success factors’ over the years:

1994
  1. User Involvement
  2. Executive Management Support
  3. Clear Statement Of Requirements
  4. Proper Planning
  5. Realistic Expectations
  6. Smaller Project Milestones
  7. Competent Staff
  8. Ownership
  9. Clear Vision And Objectives
  10. Hard-Working, Focused Staff

1999
  1. User Involvement
  2. Executive Management Support
  3. Smaller Project Milestones
  4. Competent Staff
  5. Ownership

2001
  1. Executive Management Support
  2. User Involvement
  3. Competent Staff
  4. Smaller Project Milestones
  5. Clear Vision And Objectives

2004
  1. User Involvement
  2. Executive Management Support
  3. Smaller Project Milestones
  4. Hard-Working, Focused Staff
  5. Clear Vision And Objectives

2010 & 2012
  1. Executive Support
  2. User Involvement
  3. Clear Business Objectives
  4. Emotional Maturity
  5. Optimizing Scope
  6. Agile Process
  7. Project Management Expertise
  8. Skilled Resources
  9. Execution
  10. Tools & Infrastructure

Little changes at the top. Executive support and user involvement were noted as the two main success/fail factors as far back as the 1970s. ‘Agile Process’ is an evolution of ‘Smaller Project Milestones’ (the bit of agile that’s actually about process is “deliver working software frequently”, which is “smaller project milestones” in olde 1990s language). ‘Clear Vision and Objectives’ has been re-branded as ‘Clear Business Objectives’.

But note the varying evaluation of the importance of the people. At one level, technical work gets done if and only if you have competent people actually doing it; it’s make or break. But so are all the factors ranked above ‘Skilled Resources’ or ‘Competent Staff’, albeit less obviously so.

Emotional Maturity is new. The cynic may note that the Standish Group will sell you an Emotional Maturity Test Kit (the less cynical may say Standish are attempting to address a problem). Their analysis of emotional maturity is actually largely about character and behaviour, which may be a sign that the English word ‘emotion’ has grown in what it means compared to 20 years ago. They include arrogance and fraudulence: I can see arrogance as a symptom of emotional immaturity, but fraudulence may mean “I’ve considered what counts as getting ahead in our society, and it seems to me that taking the company for everything I can get before they take me for everything they can get is the way to go.” Fraudulence is wrong, but it isn’t always emotional. I think “personal maturity”, or just maturity, is the idea they’re grasping at. It’s about having good people.

Scarcity of essential success factors

But the top factors are not really prioritisable: I think they are all essential. Rating one above another says more about relative scarcity than relative importance. When it’s really hard to get competent, skilled, hard-working staff, then that will be your top success factor. As it is, good executive support is harder to come by than competent staff. Apparently.

I’d summarise as follows:

  • Good people
    • who know what they are trying to achieve
    • with good involvement & communication with those they’re achieving it for
    • when well-supported

will succeed, if success is possible.


More serious researchers have pointed out much that is wrong with the CHAOS report, most notably:

  • Unlike published academic research, the data needed to evaluate the claims is kept private, so we can’t verify Standish’s data or methods.
  • Their definition of success is very narrow and doesn’t actually mean success. It means “cost, time and content were accurately estimated up front”, which isn’t at all the same as success, except in areas so well understood that there is nothing at all to learn as you go. That’s not so common in technology projects.

For an alternative and probably more balanced view of the ‘state of IT projects’, I’d look at Scott Ambler’s annual surveys: http://www.ambysoft.com/surveys/