We’re doing performance reviews at SEOmoz right now. It’s tough, because on the one hand, I agree with a lot of the general criticism of traditional reviews and I’ve seen plenty of data to suggest that classic reviews can have negatives that outweigh their value. Some good reading on that here: 1, 2, 3, and 4.
On the other side, we have a critical need for reviews (or something like them) in order to:
- Create formal touchpoints for managers and employees that help give direction, focus, and tangible recognition
- Provide a reliable, understandable, familiar path to increasing financial compensation
- Build time-bound periods for setting, achieving, and evaluating longer-term goals that may not be practical or possible to judge in weekly 1:1s
- Give substantial, well-documented data for performance evaluation (this is critical for legal/regulatory reasons)
- Offer known evaluation points on the roadmap for promotions in title and/or responsibility
These needs are real, and so far, I haven’t found a creative solution that meets all of them (and doesn’t just feel like “performance reviews” with a different name).
What I have found are some general practices that help to improve reviews. Specifically, seven of them:
- Tie reviews to core values: The company’s core values (TAGFEE for us) should be directly connected to performance reviews, baked right into the descriptions and the rating criteria. I’m shocked how few organizations rate their team on the things they profess to put above financial/goal performance. Don’t just include the language; center the components of the review around core values.
- No review should ever be a surprise: Every week (or close to it), in 1:1s, managers at Moz should be letting their reports know how they’re doing on a qualitative scale that reflects the ratings at review time. In 1:1s, I like making this part of the discussion around how happy you are, what could make you happier, what you feel your performance is, and what I can do to help you improve it. It fits naturally at the end of those topics and it sets expectations that can then be fulfilled come reviews.
- Rating scales should be simple and obvious: Don’t use a lot of jargon or raw numbers that don’t have definitions. We’ve had reasonable success with our scale – unsatisfactory → needs improvement → meets expectations → exceeds expectations → outstanding. If you’re going above and beyond the call of duty, you’re “exceeding expectations.” If you’re doing your job well, you’re “meeting expectations.” If you’ve achieved far more than what’s been asked of you, helped others to achieve their goals, and had a massive positive impact on the company’s overall performance, you’re “outstanding.” Doing this with a 1-5 scale creates far more subjectivity because the numerical system is inherently non-explicit and lacks the descriptive element.
- Make your “meets expectations” rating a positive thing: Whatever your middle rating is, make sure your team and your managers know that people who are doing a very good job at the tasks and responsibilities expected of them have earned that rating and should be proud of it. I hate when people get this idea that “meets expectations” isn’t a good rating. I’ve never given myself higher than that on a performance review and I don’t believe I’ve ever earned it. Like grade inflation in schools, the “meeting expectations is a low score” mentality can be poisonous.
- Collect team feedback (360s), but don’t let these directly influence ratings: We implemented 360s (very simple ones) a couple years ago, and so far, I really like them. We give the opportunity for both anonymous and transparent feedback, and we make it clear that anonymous feedback will be less valued and less influential. We don’t have team members give qualitative/quantitative “ratings,” though, as we’ve seen/heard that this can create political incentives. Instead, this practice is more about hearing what your team members love about you, and where they think you can improve. Managers can see this feedback, and when someone’s having a tough time seeing a weakness (or recognizing a strength), the social proof really helps.
- Create disincentives for politics: This can’t just be baked into reviews; it has to be embedded in the culture and in the incentives for advancement/influence that playing politics, even if you do it well, will earn scorn and probably dismissal. At Moz, I don’t care if you’re the smartest person on the team, the one doing the best work, the only one with critical institutional knowledge, or that you carry a serum that can cure my terrible back pain. If I see you’re trying to make others look bad to make yourself look good or similar shit, we are going to have a very frank talk, possibly followed by a severance offer. Nearly everything negative said about performance reviews relates back to a culture of politics.
- Make the CEO’s review available: One of the things we’ve tried that I’ve liked so far is to have me share my review company-wide. It helps set the tone and it lets team members, particularly new folks, have a data point for comparison. This review period, I feel awful because I think I actually forgot to do this (though I’ve traditionally sent it to everyone in the past). I’m going to remedy that very soon.
- Do self-reviews first: We all do our own performance reviews first, including giving ourselves overall ratings. This helps tremendously both with setting the right expectations and with empathizing with the review process and our reviewers. It’s fairly uncommon for reviewers to disagree with the scores their reviewees give themselves (probably only 25% of the time), and in half or more of those cases, the reviewers think their reviewees deserve higher ratings.