Recently the Utah Development Center at Microsoft (where I work) had a “morale event” (I still think it’s funny that they explicitly call them that) where we visited a local laser tag place. We played three times and each time I ended up in the middle of the pack (yeah, I’m not that good). But I did see some interesting lessons that could be taken from the experience. Here is one of those lessons.
After the first round, an interesting thing happened. We each got a printed “score card” to tell us how we had done. It listed how many kills we had, how many times we were killed, a breakdown by opposing player (friendly fire was off), the number of shots fired, accuracy percentage, and some overall team stats. The one additional number on the sheet was “rank”. Being good little programmers/engineers, most of us started to try to reverse engineer the ranking algorithm. It appeared to be surprisingly simple: we were ranked by number of kills. (I don’t think this was ever confirmed, but we suspected that the number of times you were killed was used as a tie breaker.)
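If you want to picture the rule we suspected, here is a minimal sketch in Python. It assumes exactly what we guessed at the time and nothing more: rank by kills, most first, with the number of times you were killed breaking ties (the tie breaker was never confirmed, so treat it as a guess).

# A sketch of the suspected ranking rule: sort by kills (descending),
# breaking ties by times killed (ascending). The tie breaker is an
# assumption -- it was never confirmed on the score cards.
from typing import NamedTuple

class Player(NamedTuple):
    name: str
    kills: int
    deaths: int  # times this player was killed

def rank(players: list[Player]) -> list[Player]:
    # Most kills first; among equal kill counts, fewer deaths ranks higher.
    return sorted(players, key=lambda p: (-p.kills, p.deaths))

# Example: B and C tie on kills, so C's lower death count wins the tie.
scores = [Player("A", 12, 9), Player("B", 15, 11), Player("C", 15, 6)]
for position, p in enumerate(rank(scores), start=1):
    print(position, p.name, p.kills, p.deaths)

Notice what the rule ignores: accuracy and deaths (except as a tie breaker) don’t move your rank at all.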
It was like the game totally changed. Because it did.
Suddenly the feedback loop from the metric gave us this: don’t worry about accuracy percentage, don’t worry about the number of times you get killed. Kill the most people and you win. The only thing that matters to your rank is the number of kills. The next game, the overall kill count went up significantly (about a 20% increase).
Metrics have a tendency to focus us like this, which is super powerful. But remember, with that great power comes great responsibility. While many still played for “team pride”, for some people the individual ranking became the most important personal metric of success. If you no longer care about the success of your team, the number of times you are killed no longer matters. It is not a metric that you care about. All that matters is getting the most individual kills.
Choose the metrics that you use with care.
A recent real-life example that I heard went something like this: the manager of a development group doing SINO (Scrum in name only) set a goal for the group that “this year, we’re going to get everything that we commit to for a sprint done in that sprint.” This is a well-intentioned goal. Basically: do what you say you’re going to do. BUT, if that is the metric used, what happens to the amount of work committed to in a given sprint? In order to do well against the metric, the amount of work committed to drops (or estimates are WAY high). Suddenly, halfway through a sprint (or less), the developers have completed all the work that had been committed to. All that matters is getting the committed work done.
Some managers’ reaction would be to think, “Well, that didn’t work, let’s add some more metrics to get what I want. Let’s continue to try to control the system.”
That may or may not be the best approach. Eventually you may end up with a system that is so bogged down with itself that no work actually gets done.
I’m not saying that metrics are bad. They can be very, very good and are very, very powerful. Just be sure that you’re using the right tool for the right job. Don’t use a nail gun where the gentle tap of a hammer is the right thing.