Data Fluency Series: How to Get Reliable Ratings Data

Last week we talked about the three most important words when it comes to data: Reliability, Variation, and Validity. This week we’re discussing the first of those words – How do we know that the people data we’re collecting is reliable? How do we know that it’s measuring what we say it’s measuring? And can you really measure something like performance in another person?

There are three ways to measure something. You can count it (i.e., how many inches tall are you?). You can rate it (how tall do I think you are?). And you can rank it (list the people on your team from tallest to shortest).

Ranking is the least helpful measurement tool: a ranking may tell you something within one group, but it's meaningless in any other. HR data is all about comparing people across groups, and ranking can't do that, because one group's rankings say nothing about another group's.

The best, most reliable way to measure anything is to count it, because counting has inter-rater reliability. That means that no matter who is counting, the number will always be the same. Countable people data like payroll, time and attendance, and length of service will always be good, reliable data.

Unfortunately, when it comes to people data, many of the things we want to measure aren’t countable. We can’t count your performance or your leadership skills, your strategic thinking or your engagement. So to measure these things, we have to rate them.

Most of the people data we have is ratings data – and most of the ratings data is, unfortunately, bad. It’s not reliable. It doesn’t measure what we say it measures. And most of that is due to rater unreliability.

Maybe you’ve heard of the Idiosyncratic Rater Effect, but as a quick refresher: humans are unreliable raters of anything besides their own experiences and intentions. If you are asked to rate another person on something, your rating will reflect much more about you as a rater than about the other person. In fact, 61% of a rating is a result of Unconscious Rater Bias. That means that your performance rating reflects your manager, not you. And that’s a problem! We pay you, train you, promote you, and fire you as though that rating reflects you… and it doesn’t!

And maybe you’re thinking, “Well, if I just get more people to rate Marcus, then our idiosyncrasies will be averaged out!” Hate to break it to you, but bad data piled on top of bad data doesn’t suddenly make it good data. It means you have an even bigger pile of bad data. Bad Data + Bad Data does not equal Good Data.

The other reason ratings are rarely reliable is called Data Insufficiency. When you’re rating a person, you rarely have enough data to rate them reliably. If your manager’s manager is rating you and they see you once a week, how is that rating going to be good data?

So, how do we solve for that? Well, a person can only reliably rate their own experiences and intentions. Whenever a ratings system asks you to rate someone else on some quality or competency, it produces bad data. When a survey tool asks you to rate your own experience or your own intentions, it produces good, reliable data. It’s important to know the difference, so you can tell when a rating is reliable (it captures the rater’s own experience) and when it isn’t (it claims to capture someone else’s qualities).

Check back next week for more information on how to become data fluent, and the impact of bad data on our businesses.

4 Comments

  1. Jason Durkee November 7, 2017 at 3:22 PM

    This series is hands down the most useful I’ve watched this year.
    Marcus: While probably embarrassingly fundamental and old news to you, most of us people folks haven’t studied measurement legitimately. Your clear, frustrated tirade in ten minutes or less with the key concepts and clear examples teaches more than an entire library of HRD-related measurement books. Please keep it up. And try to finish before you get too bored or busy.

  2. Francois Simpson February 22, 2018 at 7:31 AM

    Great explanations Marcus!
    As a purveyor of strategic and data-driven action plans, I’m constantly trying to influence executive decisions. This includes increasing their awareness / understanding of the “pitfalls” of data reliability you’ve described above… I will definitely add this short video to my coaching toolkit. Thank you!

  3. why is a name required February 19, 2020 at 10:30 PM

    The law of large numbers *guarantees* that our idiosyncrasies will be averaged out. The challenge is to have enough people to rate you.

    • Meredith Bohling February 24, 2020 at 8:58 AM

      Hi! You should check out Chapter 6 of Nine Lies about Work, “Lie #6: People Can Reliably Rate Other People.” It explains why this common belief (the Wisdom of Crowds) doesn’t work when it comes to performance ratings. Bad data + bad data simply equals more bad data.
