In a previous life, I inherited a product that had a bad quality problem.
This was pre-Agile Development days when we were just getting our heads around Extreme Programming. For us, at the time, there was no choice but to clear out the bugs with an extended test and fix marathon.
Being a geek, I tracked all sorts of stats, including the number of bugs fixed by individual developers. I announced that we would, for a short time, trial a self-organizing team approach.
But for fun, I named my stats spreadsheet “The Bonus Calculator” and I left it open on my desktop.
Unsurprisingly, bug fix rates went through the roof.
But so did unassign rates – where a developer decides that they can’t fix a bug and unassigns themselves from the issue.
We inadvertently turned into a team of semi-rabid-short-termist-low-hanging-fruit-pickers!
And it was a really bad place to be.
We were fixing the easy, low-risk issues first, while the high-risk, more challenging issues were being pushed to the back of the schedule and then ignored because of time constraints.
With hindsight, it was a schoolboy error, and despite my explanation, some of the team refused to believe that it had been a joke. They still believed that I was tracking bug fix rates for bonuses.
That’s where I learned about the power of KPIs.
I put a lot of effort into linking metrics to the goals and outcomes I want my team to achieve. For example, most source control systems and static-analysis tools will let you measure stats such as:
- Number of lines in functions
- Code complexity
- Code churn or turnover
These stats are interesting, but used in isolation they are not clearly linked to the outcome we actually want: high-quality code.
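To make stats like these concrete, here's a minimal sketch of how the first two might be computed for Python code using the standard `ast` module. The branch-counting heuristic is an assumption on my part: a rough stand-in for true cyclomatic complexity, not any particular tool's algorithm.

```python
import ast
import textwrap

# Branch-introducing node types; counting them gives a rough
# complexity estimate (1 + number of decision points).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def function_stats(source: str) -> dict:
    """Return {function_name: (line_count, rough_complexity)}."""
    tree = ast.parse(textwrap.dedent(source))
    stats = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            lines = node.end_lineno - node.lineno + 1
            complexity = 1 + sum(
                isinstance(n, BRANCH_NODES) for n in ast.walk(node)
            )
            stats[node.name] = (lines, complexity)
    return stats

sample = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(function_stats(sample))  # clamp: 6 lines, rough complexity 3
```

The point of the KPI discussion still stands, of course: numbers like these only mean something once they're tied to the outcome you care about.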
When we’re not clear about how a KPI is linked to a goal, it’s not uncommon to find people honoring the letter of the KPI but not its spirit. Here’s how we might reconfigure our KPI:
- Goal: deliver high-quality, focused code that works first time
- Guidelines:
  - Write short methods (c. 20 lines)
  - Keep complexity low (c. 25)
  - Use unit tests to prove each function’s behaviour
  - Have all code peer reviewed
Now the KPI is clearly linked to the goal. Goals, by their nature, are ‘loosey goosey’ and directional, so it’s good practice to add guidelines that help people use their judgment in support of the goal.
I’ve also found that these lessons are equally valid when adding performance measures to supplier contracts. For example, in an SAP environment, we want to be able to push change through the cycle of dev, test, and pre-prod quickly with a high degree of certainty.
Anecdotally, if the cost to fix a defect in Dev is 1, in QA it’ll be 10, and in Production it’s somewhere near 100.
In fact, the cost in an SAP Production environment can be even higher, especially if you can’t unbundle a transport or simply hit a back-out button.
Studying the following metrics can be particularly useful when assessing supplier performance:
- What is your transport mean cycle time?
- What do your rework stats look like?
- What is the volume of transport errors?
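As a sketch of what measuring the above might look like, here's a minimal example that computes mean cycle time, rework rate, and import-error rate from a list of transport records. The field names and transport IDs are illustrative assumptions; real data would come from SAP's transport logs, not hard-coded dicts.

```python
from datetime import date
from statistics import mean

# Illustrative transport records: the field names and IDs here are
# assumptions for the sketch, not real SAP identifiers.
transports = [
    {"id": "DEVK900101", "released": date(2024, 1, 2),
     "imported_prod": date(2024, 1, 9), "reworked": False, "import_error": False},
    {"id": "DEVK900102", "released": date(2024, 1, 3),
     "imported_prod": date(2024, 1, 17), "reworked": True, "import_error": True},
    {"id": "DEVK900103", "released": date(2024, 1, 5),
     "imported_prod": date(2024, 1, 10), "reworked": False, "import_error": False},
]

# Days from release in Dev to import into Production, per transport.
cycle_times = [(t["imported_prod"] - t["released"]).days for t in transports]

mean_cycle_time = mean(cycle_times)  # transport mean cycle time, in days
rework_rate = sum(t["reworked"] for t in transports) / len(transports)
error_rate = sum(t["import_error"] for t in transports) / len(transports)

print(f"mean cycle time: {mean_cycle_time:.1f} days")
print(f"rework rate: {rework_rate:.0%}, import error rate: {error_rate:.0%}")
```

Even a toy calculation like this makes the supplier conversation more concrete: you can agree targets per metric rather than arguing about anecdotes.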
Yes, these things can be hard to measure, even more so in an SAP environment where there are hundreds – or thousands – of developers making changes simultaneously.
We’re currently building a dashboard that measures SAP rework rates, defect volumes, mean cycle time, development velocity, and more, enabling CIOs to set and track meaningful SAP development KPIs.
My inspiration for this blog came from sitting in the nosebleed section at an English Premiership game. Sir Alex Ferguson famously sold Jaap Stam at 29, partly because his distance-covered stats during games were dropping off.
Sir Alex took this as an early sign that Stam was past his best, but he later came to realize that the stats had dropped because Stam had become a better defender: his improved positioning meant he simply needed to cover less ground.
The moral of the story? Be careful what you measure, because what gets measured gets done… literally.