Seductive statistics and design
In engineering design, 80% of costs are determined by design decisions.
I’ve seen the above claim in my social media feeds over the last week. I thought I’d dig around it a little — especially as it’s so neat, memorable and widely used. It set off my spidey-senses.
After doing only a little homework, it’s clear that these statistics shouldn’t be used in a generalised way because:
(1) important variables can be independent of the producer. For example, costs and benefits over a product’s lifecycle depend heavily on lifecycle duration — something more under the consumer’s control than the manufacturer’s. Equally, product materials are a key variable — a product that contains a lot of a metal like gold or platinum will have costs over time driven more by commodity markets than by decisions made by the designer.
(2) many of these statistics originated in quite narrow research contexts with small samples. Despite that, the 80% stat has somehow entered design folklore and is now accepted as fact. Do some light Googling and you’ll see that most commentators using the stat don’t cite a source. Where there is a source, it’s often one of the authoritative institutions in design (like the Design Council, the US National Research Council, CABE, or one of the European research bodies), or an academic paper that cites another academic paper. It’s all very circular. There’s little original research in this space, and as a result quite precise statistical conclusions from a handful of niche studies get generalised across industries and design contexts:
- The environmental impact claims appear to have been extrapolated from earlier research into the impact of design on product manufacturing costs. There’s significant and ongoing debate about whether that extrapolation is appropriate.
- Some authors of the early research referenced an internal study from Ford. When a manager at Ford was asked about it, they said “Oh, that was just an internal informal survey, I wouldn’t base too much on that”.
- Another reference was a Rolls Royce study — one that analysed 2000 part drawings and discovered that 80% of the cost reduction opportunities required changes to part design.
- Some recent commentators pointed instead to a NAO review of projects which concluded that poor design was responsible for nearly half the project cost inflation. But that review specifically covered construction of nuclear-related (and regulated) infrastructure, and examined only a handful of (albeit *huge*) projects.
Are these really sources and comparators that would stand up to scrutiny by your stakeholders?
So why do we use these statistics?
Well, some designers aren’t confident enough with statistics to dive into the research studies and think critically about them. (If this is you, I highly recommend Tim Harford’s book, “How to Make the World Add Up”.) The 80% stat aligns roughly with the Pareto principle (the 80/20 rule) and feels intuitively right (confirmation bias FTW), so they don’t think twice.
More generally, though, I think the answer is politics.
Unspoken in most of these big claims is the assumption that the 80% is within the power of the designer to control. Making this suggestion — even (especially!) if just implicitly — creates leverage.
Those in my social media feeds citing these papers were trying to:
- increase the budget of design teams (“80% of costs could be avoided pre-build, but the design phase only gets 5% of the build budget…”)
- increase the length of planning and design phases, and introduce more governance and stage gates (“we’ve been moving too quickly into the manufacturing phase, before the design is properly mature”)
- get a “seat at the table” or to get their work prioritised (“the cost of bad design is so high, we can’t afford not to”).
Now, I think this is important work. Design decisions have significant consequences for firms, for consumers, for society, for the environment. And the cost of bad design is so much more than just financial.
So I don’t begrudge them their narratives. They’re doing what they think is necessary to advocate for design, and in doing so, advocate for their customers and users. I do that too. It’s part of fighting the good fight.
I just worry that, by using statistics like these uncritically, designers end up losing the credibility and trust they work so hard to build.
We’re in a world that fetishizes data. We operate in organisations obsessed with it. We work within power structures, and alongside colleagues, that spend a lot of time trying to make “data-driven” decisions, demonstrate a robust ROI, and assemble a convincing enough CBA (that’s cost-benefit analysis) to win project approval. Leaders are constantly having to make decisions and place bets — and they want to feel confident doing so. They appreciate you arming them with some killer stats. It’s reassuring. You’re giving them what they need, and you’re speaking their language.
Designers in this position can be massively influential — a trusted advisor even in the C-suite.
But if you’re serving up something that can be easily unpicked by anyone who bothers to scratch the surface of your assertions, it can all come apart far too easily. Even though most business cases are works of fiction, senior people still don’t like to lose face, and it is hard to re-earn their lost trust.
We need to recognise what we have to lose when we don’t think critically about the stats we’re using.
I’ve followed up with a blog post on why I don’t think the 80% figure is all that anyway: I believe it’s really unhelpful framing for getting us where we should want to be, and I explain why.
And hot on its heels is another post deconstructing the increasingly cited statistic that 67% of all accessibility issues originate in design.
In all these articles I’m arguing the same thing: Don’t succumb to spurious accuracy just because it feels robust. It is better to be approximately right than precisely wrong.