Services — how to even start?
Start with the basics:
- What services do you have?
- Who are they for and what are the needs of those users? What are they trying to achieve and how?
- What are their intended policy outcomes? What aspects of their performance are measured?
- What are their costs and cost drivers?
- What are their databases and data flows?
Then get to know the material, the more tangible “stuff” your services are made of:
- Service pages, physical forms, marketing collateral, call centre scripts, messages and notifications — the “stuff” of touchpoints and customer journeys. Print these off and stick them on the wall. Screenshot them and put them in Miro, Mural or FigJam — whatever is in your digital tooling set that colleagues will comfortably use. You need to see them as a whole.
- Everything needed to enable this experience — in-house and outsourced capabilities, software, hardware, platforms, databases. In other words, a service blueprint. This is best done in a workshop, and with finance/commercial colleagues in the room: they can see the invisible things your service is consuming, which suppliers deliver which parts, and on what terms.
- Look deeper at the data. Where does it live, and can it be easily accessed? Is it structured or unstructured? How is it governed? Who owns it? (A lightweight inventory sketch follows this list.)
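
If it helps to make that data audit concrete, here is a minimal sketch of a lightweight data inventory in Python. The field names and example entries are invented for illustration, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """One row in a lightweight data inventory (illustrative fields only)."""
    name: str              # what the dataset is
    lives_in: str          # where it lives, e.g. "case management DB"
    owner: str             # accountable data owner
    structured: bool       # structured records vs free text / documents
    access: str            # "API", "CSV export", "manual request", ...
    governance: str        # retention policy, sharing agreement, etc.

# Example entries -- names and details invented for illustration
inventory = [
    DatasetRecord("claims", "case management DB", "Operations", True, "API", "retained 7 years"),
    DatasetRecord("call transcripts", "contact centre platform", "Customer Services", False, "manual request", "no sharing agreement"),
]

# Surface the hardest-to-use data first: unstructured or not easily accessed
for d in inventory:
    if not d.structured or d.access == "manual request":
        print(f"Hard to use: {d.name} (owner: {d.owner}, access: {d.access})")
```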
Now you know what the service is — figure out if it’s any good:
- Does your user research, customer insight and service analytics suggest the service is usable, useful and used by everyone who needs it?
- Does the service meet the service standard and pass service assessments?
- Does it measure up against the 15 principles of good services?
- Is the service delivering the policy outcomes predicted by your theory of change?
- Are you securing value for money (VfM), against an appropriate benchmark, for cost of operation, cost per transaction and cost per user? As a service, is this worth the financial cost and the opportunity cost? (A rough worked example of the unit-cost arithmetic follows this list.)
- Does the service meet technical performance standards — including data and cybersecurity standards?
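
To make the VfM questions above concrete, here is a rough sketch of the unit-cost arithmetic, with invented figures standing in for your own cost and volume data:

```python
# Illustrative figures only -- substitute your own cost and volume data
annual_operating_cost = 2_400_000   # total cost to run the service (£)
transactions_per_year = 300_000     # completed transactions
distinct_users = 180_000            # users served in the year

cost_per_transaction = annual_operating_cost / transactions_per_year   # £8.00
cost_per_user = annual_operating_cost / distinct_users                 # £13.33

# Compare against whatever benchmark is appropriate (a comparable service,
# a previous year, a channel-shift target) -- benchmark value invented here
benchmark_cost_per_transaction = 6.50
gap = cost_per_transaction - benchmark_cost_per_transaction

print(f"Cost per transaction: £{cost_per_transaction:.2f}")
print(f"Cost per user: £{cost_per_user:.2f}")
print(f"Gap to benchmark: £{gap:.2f} per transaction")
```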
If you can’t answer the previous questions, you have a problem. Fill that knowledge gap first — you need a state of high observability to improve and experiment systematically. Plan research that will fill your riskiest data and insight gaps. Then ask yourself and others questions like the following, to generate areas of focus for (re)design:
- Where are the pain points, and where is failure demand? Have you only really designed for the happy paths? To answer this for beneficiaries, look at user research, contact centre data, complaints, court cases… even social media.
- Where is the pipe leaking? Service analytics should show conversion and drop-off across channels if you’ve designed it right. Your data needs to be able to confirm that target users are sticking with the journey, and that untargeted users are the ones dropping out. (A minimal funnel sketch follows this list.)
- Who are you targeting, and therefore actively designing for? Who is excluded, underserved, disproportionately impacted, actively harmed? Think protected characteristics, digital access and capability, marginalised communities, language skills, access needs.
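
As a sketch of the funnel view your service analytics should give you, the snippet below computes step-by-step and overall conversion from invented journey counts; the step names are illustrative only:

```python
# Invented counts of users reaching each step of a journey, in order
funnel = [
    ("start page", 10_000),
    ("eligibility check", 7_200),
    ("form completed", 4_100),
    ("submitted", 3_900),
    ("outcome received", 3_700),
]

previous = funnel[0][1]
for step, users in funnel:
    step_rate = users / previous       # conversion from the previous step
    overall = users / funnel[0][1]     # conversion from the start of the journey
    print(f"{step:<20} {users:>6}  step {step_rate:5.0%}  overall {overall:5.0%}")
    previous = users

# The biggest step-on-step drop ("eligibility check" -> "form completed" above)
# is where to point user research and contact data analysis first.
```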
Take your list of pain points and problems to address. You now need the right clustering (by whole service, by sets of related services, by capabilities, by user group) and an agreed basis for prioritisation (by impact, by cost, by risk — I’d recommend a spread of different types of bet). A simple scoring sketch follows.
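If it helps to make the prioritisation basis explicit and comparable, a simple weighted score is one way to start that conversation. The criteria, weights and 1–5 scores below are invented for illustration, and no scoring model replaces judgement:

```python
# Purely illustrative criteria, weights and 1-5 scores -- not a recommended model.
# Scores are framed so that higher is better on every axis
# (e.g. a high "cost_to_fix" score means it is cheap to fix).
weights = {"impact": 0.5, "cost_to_fix": 0.3, "risk_of_inaction": 0.2}

candidates = {
    "redesign eligibility checker": {"impact": 5, "cost_to_fix": 2, "risk_of_inaction": 4},
    "fix notification wording":     {"impact": 3, "cost_to_fix": 5, "risk_of_inaction": 2},
    "replace legacy case system":   {"impact": 5, "cost_to_fix": 1, "risk_of_inaction": 5},
}

def score(scores: dict) -> float:
    return sum(weights[criterion] * value for criterion, value in scores.items())

for name, scores in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(scores):.1f}  {name}")
```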
Then get going on (re)design. Choose the right approach based on whether it’s greenfield or legacy; the cost of change, how much change is needed and how much uncertainty exists; and contract expiry dates and break clauses. Agile, iterative, user-centred design is a strategic capability you need to build — you need to plot the right route to it.
- Incremental improvement. Hypothesis-driven design, and iteration with the lowest possible cost of change. Move towards a low-risk, experiment-based model supported by prototyping, A/B and multivariate testing, and CI/CD (a minimal A/B test sketch follows this list). Eventually incremental improvement becomes continuous improvement.
- Step-change improvement. Usually reliant on large contract expiries on the horizon to unlock new capabilities (actively plan your contract cycle and manage those exits). If/when you in-house, make sure you have a clear model for service ownership, a capable service owner in place, and enough OpEx funding for the service to move into continuous improvement rather than atrophy.
- Greenfield. New product development. Hypothesis-driven design too, of course. If you can, make sure the owner of the policy intent owns the service; if you can’t, then make sure they’re at least in the tent and working collaboratively with the team.
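
As a sketch of what the experiment-based model in the incremental-improvement route looks like in practice, here is a minimal two-proportion z-test for an A/B test on completion rates. The figures are invented, and in a real service you would agree the hypothesis, minimum detectable effect and sample size up front:

```python
import math

# Invented results: users completing the journey in each variant
control_completions, control_users = 1_180, 4_000
variant_completions, variant_users = 1_265, 4_000

p_control = control_completions / control_users
p_variant = variant_completions / variant_users

# Pooled two-proportion z-test
p_pool = (control_completions + variant_completions) / (control_users + variant_users)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_users + 1 / variant_users))
z = (p_variant - p_control) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"Control: {p_control:.1%}  Variant: {p_variant:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
# Only ship the change if the difference is practically as well as
# statistically meaningful, and the result holds across user groups.
```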