Driving in The Fog

John Whitaker | Recruiting, Selection

I love my guts. Seriously.

I’m the first to admit that for much of my career, a large part of my success as a staffing professional and/or generalist relied on instinct, “feel,” or some other intangible determinant; and, frankly, I was good at it. I didn’t invent the term “gray area” as it relates to HR, but it is certainly applicable.

Let’s face it, many times we found ourselves working with limited information, so we became proficient at driving in the fog—that’s how many of us could describe our day-to-day responsibilities, navigating the gap between fact and fiction, precedent and future, right and wrong.

Or at least there appears to be a lack of information; perhaps the data is there, but we aren’t looking for it.

Nowhere is that more apparent than in our recruiting and selection processes.

In the anthology, What’s Next In Human Resources (Greyden Press, 2015), I discuss a new mindset for the Human Resources professional—especially for those in a traditionally subjective role, like Talent Acquisition or HR Business Partnership. The “art” of selection needs to allow for “science.” A colleague of mine, Whitney Martin, speaks to this very thing in her chapter on “Improving the Science of Selection.” We have integrity tests, behavioral interviews, skills assessments, cognitive tests, personality tests, panel interviews, presentations—but there doesn’t seem to be a real strategy behind how and why we are using the various tools. And while it appears that staffing is clearly experimenting with new data points to shrink the information vacuum, a study by Aberdeen (Lombardi, 2014) notes that “only 14% of all organizations actually have data to show the business impact of their assessment strategy.”

That could be indicative of several dynamics at work:

  • There’s no plan: We find a new data point and bolt it on to our existing processes, without considering if/how this new data point is connected to a productive hire.
  • We are chasing false assumptions: You’re recruiting for sales people and you find that 7 of your top 10 sales performers are “Blue” personalities based on your assessment tool. You then begin looking for more “Blues,” knowing you now have the secret sauce for a successful hire. What you didn’t notice is that 8 of your bottom 10 performers were also “Blue.” Guess what the false assumption is here?
  • We aren’t scrutinizing our own results: How often do we review the effectiveness of our selection methods? I’d much rather have an internal (to Staffing) focus on the linkage between our hiring practices and productivity data than be challenged by my clients. What’s the validity of our methods? If you rely solely on a few selection processes, you might be shocked to find how little they offer in the way of predictive ability.
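The “Blue” trap above takes about five lines to expose once you compare both groups. Here’s a minimal sketch, using the hypothetical counts from the example (7 of the top 10 and 8 of the bottom 10 performers are “Blue”):

```python
# Sanity-check the "Blue predicts success" hypothesis by comparing the
# trait's frequency among TOP performers against BOTTOM performers.
# Counts are the hypothetical ones from the example above.
top_blue, top_total = 7, 10
bottom_blue, bottom_total = 8, 10

# If "Blue" actually predicted success, its rate among top performers
# should clearly exceed its rate among bottom performers.
top_rate = top_blue / top_total          # 0.70
bottom_rate = bottom_blue / bottom_total  # 0.80

print(f"Blue rate, top performers:    {top_rate:.0%}")
print(f"Blue rate, bottom performers: {bottom_rate:.0%}")

# "Blue" is at least as common among the worst performers as the best,
# so it carries no predictive signal here; it's probably just the base
# rate of "Blue" in the hiring pool.
```

The point isn’t the arithmetic; it’s the habit of looking at the comparison group before declaring you’ve found the secret sauce.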

The reality is this: Professional recruiters must become more attuned to the collection, interpretation, and validity of data. Technology has increased the size of our net to the point where even the most slippery fish cannot escape our reach. The challenge now is to find ways to refine our haul into landing a meaningful catch.

To do so requires skills that are less ethereal, and more evidence-based. We fancy the “art” of our profession, but the amount of data available to us now leaves little excuse not to also embrace the science.

Even if that means ignoring our beloved guts.

FOT Note: This post is sponsored by the good folks at CareerBuilder.com, who care so much about the world of recruiting and human resources that they’ve become an annual sponsor at FOT.  Here’s where it gets good: As part of the CareerBuilder sponsorship, FOT contributors get to write anything we want on a monthly basis, and CareerBuilder doesn’t get to review it.  We’re also doing a monthly podcast called the “Post and Pray Podcast,” which is also sponsored by CareerBuilder.  Good times.