I had lunch with Thomas from our production engineering team today. We talked about the future of the company’s organizational structure and plans to split into feature-focused teams after a big launch we have planned for 2013. Thomas noted that in his previous role at Amazon, teams were judged directly against metrics for the features they worked on, and this had both good and bad elements.
“Choose wisely, for while the true Grail will bring you life, the false Grail will take it from you.”
The good is obvious: a feature team that not only knows what they’re building and why, but can also see their progress and transparently share it with teammates, managers, and the rest of the company is far more likely to succeed. They can focus on the problem, test, analyze, improve, and enjoy all the positive emotions that come with measurable goals, to boot.
The bad is less obvious.
If, for example, we’re talking about a relatively complex software product with multiple features and sections, and the goal is the number of subscribers who used each feature, there’s a real estate battle in the making. Feature teams will fight with one another, and with the owners of gateways like the product homepage, website homepage, email calls-to-action, and links from other sections, to get traffic. This same principle applies to web properties monetized by advertising, where sections are judged on their pageviews, or to mobile apps where teams divide up the tabs/sections of the app.
Thomas and I decided the key to defusing this potential for politics, divisiveness, and infighting is not to forgo metrics, but rather to choose the right ones.
Imagine if, in the software application example above, we made the metric user happiness, as self-reported in surveys. Or we chose the retention rate of users who engage with the feature. Now the feature teams aren’t focused on competing with each other for visitors; they’re dedicated to delighting the users of their feature.
We could do the same thing with the ad-based web property, making the key metrics browse rate (the average number of pages viewed per session), conversion to an RSS/email subscription, or average number of return visits per visitor. Now the section teams are building toward goals that don’t require them to compete with one another in order to succeed.
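To make those metrics a little more concrete, here’s a rough sketch (not anything we actually run) of how a few of them could be computed from simple session logs. The data shape and field names below are made up purely for illustration:

```python
# Hypothetical session records; real analytics data would come from logs or a warehouse.
from collections import defaultdict

sessions = [
    {"visitor_id": "a", "pages_viewed": 5, "subscribed": False},
    {"visitor_id": "a", "pages_viewed": 3, "subscribed": True},
    {"visitor_id": "b", "pages_viewed": 2, "subscribed": False},
    {"visitor_id": "c", "pages_viewed": 7, "subscribed": False},
]

# Browse rate: average pages viewed per session.
browse_rate = sum(s["pages_viewed"] for s in sessions) / len(sessions)

# Subscription conversion: share of unique visitors who subscribed to RSS/email.
visitors = {s["visitor_id"] for s in sessions}
subscribers = {s["visitor_id"] for s in sessions if s["subscribed"]}
conversion_rate = len(subscribers) / len(visitors)

# Return visits per visitor: average number of sessions beyond each visitor's first.
session_counts = defaultdict(int)
for s in sessions:
    session_counts[s["visitor_id"]] += 1
return_visits = sum(count - 1 for count in session_counts.values()) / len(visitors)

print(f"Browse rate: {browse_rate:.1f} pages/session")
print(f"Subscription conversion: {conversion_rate:.0%}")
print(f"Return visits per visitor: {return_visits:.2f}")
```

None of these numbers care which section a visitor came from, which is exactly the point: every team improves them by making their own corner of the product better, not by diverting traffic from someone else’s.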
I suspect that as we grow, individual and team accountability to metrics will be a great way to get results and create a transparent path for Mozzers. But we’ll need to be constantly vigilant against metrics that create internal competition. The right metrics will bring more TAGFEE; the wrong ones will take it from us. 😉