Splitting hairs among the ranks


This morning I logged onto my e-mail at 6:45 AM to learn that SAS was ranked as the No. 3 Best Company to Work For.

No. 3 is not as high as No. 1.  But it's very, very close.  Perhaps even barely distinguishable, in the larger scheme of things.

I couldn't head right into the office today to celebrate the achievement because I had a prior commitment.  I was once again volunteering as a science fair judge, where I had an opportunity to do a bit of ranking of my own.

I'm always impressed by the high quality of student projects at the science fair.  The kids obviously work hard, and they learn a lot.  As a judge, I wish I could award each one a special prize.  But I can't.  My main deliverable is a ranking: 1st, 2nd, 3rd and Honorable Mention -- drawn from a pool of dozens of projects.

The task of picking the best is easier than you might think.  The "cream" rises to the top.  Yes, all of the kids work hard and have good projects.  But just a few of them really stand out.  They pick the most thoughtful experiments.  They have amazing display boards, with clear data and graphs.  When interviewed, the stand-out kids can answer every question you pose and demonstrate above-average insight.

Yes, it's easy to pick the best set.  The hard part is ranking those best four into 1-2-3-4.  Today we had a situation where the judges could not decide between two projects for a 2nd place award.  But we could not have two 2nd places (not enough ribbons!); someone had to be 3rd.  I served as the tie-breaking vote, evaluating each project closely.  I couldn't really find a flaw with either one, so we had to ask the question "which project did a better job of reaching its potential?"  With some deliberation, we sorted it out to everyone's satisfaction.

I imagine that the Great Place to Work Institute goes through a similar process each year as they rank USA workplaces.  It may be easy enough to come up with a Top 10, but I'll bet it's tricky to rank the companies within that elite set.

But that's not a problem for me.  This year the official ranking for SAS is No. 3, but I still regard this place as No. 1.  Anyone who says otherwise is just splitting hairs.


About Author

Chris Hemedinger

Director, SAS User Engagement

Chris Hemedinger is the Director of SAS User Engagement, which includes our SAS Communities and SAS User Groups. Since 1993, Chris has worked for SAS as an author, a software developer, an R&D manager and a consultant. Inexplicably, Chris is still coasting on the limited fame he earned as an author of SAS For Dummies.

4 Comments

  1. That's true on admissions committees and grant review panels as well. The candidates or proposals that are clearly in over their heads are easy to identify. Then there is the group where you would select every one of them if you could, but somehow you have to pick 40 out of the 100 honor students with good test scores, good recommendations and impressive extracurricular records. Of the 30 projects that deserve funding, you need to select 25 and turn down the other five. You're right, it is splitting hairs by the final lap.

  2. Great post, and thanks for serving as a science fair judge.

    The question of how to rank things has some interesting statistical issues when the ranks are based on multiple scores (for example, scores from different judges). In most cases, I advocate including confidence intervals for these kinds of rankings. See http://blogs.sas.com/content/iml/2011/03/30/ranking-with-confidence-part-2/
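
    To sketch that first point in code (a toy example of my own, not the code from the linked post -- every name and number below is invented): score a handful of projects with several judges, then bootstrap over the judges to get a 95% confidence interval for each project's rank.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical data: 8 judges (rows) score 6 projects (columns).
    # The true qualities differ only slightly, so the ranks are uncertain.
    true_quality = np.array([8.0, 7.8, 7.7, 7.6, 7.0, 6.5])
    scores = true_quality + rng.normal(scale=0.8, size=(8, 6))

    def ranks_from_scores(score_matrix):
        """Rank projects by mean score across judges (1 = best)."""
        means = score_matrix.mean(axis=0)
        order = np.argsort(-means)          # best-to-worst indices
        ranks = np.empty_like(order)
        ranks[order] = np.arange(1, len(order) + 1)
        return ranks

    observed = ranks_from_scores(scores)

    # Bootstrap: resample judges with replacement and re-rank each time.
    n_boot = 5000
    boot = np.empty((n_boot, scores.shape[1]), dtype=int)
    for b in range(n_boot):
        idx = rng.integers(0, scores.shape[0], scores.shape[0])
        boot[b] = ranks_from_scores(scores[idx])

    lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
    for p in range(scores.shape[1]):
        print(f"Project {p}: rank {observed[p]}, 95% CI [{int(lo[p])}, {int(hi[p])}]")
    ```

    With scores this close together, the middle projects typically come out with intervals like [2, 4] -- exactly the "little Susie" situation described below.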

    There are also statistical ways to find "the cream" and "the dregs" without ranking. Funnel plots are one idea that has received a lot of attention lately. See
    http://blogs.sas.com/content/iml/2011/04/15/funnel-plots-an-alternative-to-ranking/
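
    And a minimal sketch of the funnel plot idea (again my own toy example, not the IML code from that post): plot each group's observed rate against its sample size, with control limits that narrow as the sample grows. Points outside the funnel are the "cream" and the "dregs"; everyone inside is statistically indistinguishable from the common rate.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)

    # Hypothetical: 40 schools, each with a different number of projects,
    # all sharing the same true "award rate" p0. A school that lands
    # outside the funnel stands out without any explicit ranking.
    p0 = 0.7
    n = rng.integers(5, 120, size=40)        # projects per school
    rate = rng.binomial(n, p0) / n           # observed award rates

    # 95% and 99.8% control limits around p0 (normal approximation).
    ns = np.arange(5, 125)
    se = np.sqrt(p0 * (1 - p0) / ns)
    plt.scatter(n, rate, zorder=3)
    for z, style in [(1.96, "--"), (3.09, ":")]:
        plt.plot(ns, p0 + z * se, "k" + style)
        plt.plot(ns, p0 - z * se, "k" + style)
    plt.axhline(p0, color="gray")
    plt.xlabel("Number of projects (n)")
    plt.ylabel("Award rate")
    plt.title("Funnel plot: no ranking, just in or out of the funnel")
    plt.show()
    ```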

    Of course, neither of these solves the "blue ribbon problem": people WANT a ranking, even when there are no statistically significant differences between some of the participants. No one wants to hear that little Susie's project is ranked between 2 and 4, with 95% confidence. Judges use their experience, opinions, and gut feelings to break ties, and little Susie and her friends learn that life is not completely objective.

    • Chris Hemedinger

      Rick, that's absolutely correct. In the judging process, we have about 15 minutes to spend with each student. Each judge interviews about 8 students, and then we get together and rank the 50 projects in the particular category/age group. Who comes out on top is as much a function of the "judge advocate" as it is of the student performance, since we don't all get to hear the "student pitch" firsthand.

      In our case, the stress of ranking is relieved a bit when we realize that: 1) our rankings do not influence the project grades, which have already been assigned by the teacher, and 2) our awards don't directly determine who moves on to a regional competition, as those projects must meet certain requirements that we don't evaluate during the fair. Our main objective is to recognize the outstanding work, and to provide a positive experience for all participants.

  3. Pingback: Good news travels fast – Why #ilovesas - SAS Voices
