How well is your team performing? Do you know what outcomes you’re delivering? More often than not, teams try to answer these questions by tracking the velocity or number of items completed per Sprint. Although this tells you how busy you are, it doesn’t tell you how useful that work actually is. Even worse, organizations often tell teams what to measure and then compare them with other teams. In this experiment, we outline the steps for helping teams to select their own metrics.
Required skill
This experiment isn’t hard when you start small, even with a single metric, and build from there.
Impact on survival
Creating transparency around outcomes is a significant driver for change, as both teams and stakeholders see what is really happening.
Steps
To try this experiment, do the following:
- Before starting this experiment, clarify the difference between output- and outcome-oriented metrics. Understanding the difference matters.
- First individually (1 min), then in pairs (2 min), then in groups of four (4 min), ask people to consider how they would know that their team is doing better. Ask: “How do we know that we’re responsive to our stakeholders? What metrics would go up when we do a good job and down if we don’t?”. Together, collect relevant metrics with the team (5 min).
- Repeat for quality. Ask: “How do we know that our work is of high quality? What metrics would go up when we do a good job and down if we don’t?”.
- Repeat for value. Ask: “How do we know that we’re delivering value through our work? What metrics would go up when we do a good job and down if we don’t?”.
- Repeat for improvement. Ask: “How do we know that we’re finding time to improve and learn? What metrics would go up when we do a good job and down if we don’t?”.
- Together, look at the selected metrics and remove obvious duplicates. First individually (1 min), then in small groups (4 min), ask people to remove metrics that the team can do without, while still being able to measure their progress on responsiveness, quality, value, and improvement. Together, keep the most minimal set that covers these areas (5 min).
- For each of the remaining metrics, explore how to quantify it well and where to get the data from (see the sketch after these steps for one way a team might quantify cycle time). If additional research or setup is needed, you can add this work to the Product Backlog or Sprint Backlog.
- Set up a dashboard, preferably just a whiteboard or flip chart, that the team updates (at least) once every Sprint. Create graphs for the various metrics to track trends. Resist the temptation to set up overwhelming dashboards in digital tools. First, build the discipline to track a handful of metrics and inspect them regularly. Low-tech dashboards, like whiteboards, promote experimentation because they’re easier to change in terms of presentation, content, and format.
- Inspect the dashboard together during Sprint Reviews or Sprint Retrospectives. What trends are obvious? When you run an experiment, what would you expect to see change? A Liberating Structure like ‘What, So What, Now What?’ is well suited for this.
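As an illustration of the quantification step above, here is a minimal sketch of how a team might compute one common metric, cycle time, per Sprint. It assumes a hypothetical list of finished work items with start and finish dates; in practice the data would come from your issue tracker or a simple spreadsheet, and the names used here are just placeholders.

```python
from datetime import datetime
from statistics import median

# Hypothetical export of finished work items; in practice this would come
# from your issue tracker or a spreadsheet the team maintains itself.
items = [
    {"id": "A-101", "started": "2024-03-04", "finished": "2024-03-07", "sprint": "Sprint 12"},
    {"id": "A-102", "started": "2024-03-05", "finished": "2024-03-12", "sprint": "Sprint 12"},
    {"id": "A-107", "started": "2024-03-18", "finished": "2024-03-20", "sprint": "Sprint 13"},
    {"id": "A-110", "started": "2024-03-19", "finished": "2024-03-26", "sprint": "Sprint 13"},
]

def cycle_time_days(item):
    """Days between starting and finishing a single work item."""
    started = datetime.fromisoformat(item["started"])
    finished = datetime.fromisoformat(item["finished"])
    return (finished - started).days

# Group cycle times per Sprint and report the median, which is less
# sensitive to the occasional outlier than the average.
per_sprint = {}
for item in items:
    per_sprint.setdefault(item["sprint"], []).append(cycle_time_days(item))

for sprint, times in per_sprint.items():
    print(f"{sprint}: median cycle time {median(times)} days ({len(times)} items)")
```

The resulting medians can simply be plotted by hand on the whiteboard dashboard; the point is to spot the trend and talk about it, not to build tooling.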
Our findings
- When it comes to metrics, it is easy to try to measure too much. Be purposefully minimalistic by starting with the essentials — for example, stakeholder happiness and cycle time. Add more metrics when it helps your learning and when teams develop a rhythm in maintaining and inspecting them.
- Don’t turn metrics into Key Performance Indicators (KPIs) and work hard to prevent others from doing so. When metrics are used to appraise the performance of teams, it incentivizes them to “game” the numbers. Instead, use metrics purely for learning what works and what doesn’t.
- Don’t hide your dashboard from stakeholders. Instead, engage them in making sense of the data and finding opportunities for improvement. They benefit from it just as much as your team does.
Looking for more experiments?
Aside from a deep exploration of what causes Zombie Scrum, our book contains over 40 other experiments (like this one) to try with your Scrum Team. Each of them is geared towards a particular area where Zombie Scrum often pops up. If you’re looking for more experiments, or if these posts are helpful to you, please consider buying a copy.