I’ve been on a huge spring cleaning binge as of late. Reorganizing, removing…just really getting everything in its place. Not an easy task and I’ve been attending to it in bits. Last week, it was my desk, the week before it was a closet, the week before that…something else.
This spring cleaning and organizing binge has overflowed into my sourcing as well. I am a self-confessed digital hoarder. Lovely if you need me to track something down, a bit of a headache for IT. So…baby steps…I cleaned up Outlook, weeded out duplicate documents, and kept working through the question of "what do I really need?" I'm closing in on my third anniversary with our firm, and needless to say, there's a lot of digital information.
During all of this, there was a truckload of work to be done. Which is awesome, of course, but it made us look as a team at how I was saving things, and whether it made sense. I've always saved search strings that work and quickly moved on from the ones that failed. But that raises a question: if you don't record the strings that fail as failures, how do you know you won't repeat them, again and again?
Reality for a sourcer is that we look at thousands of results, pull in hundreds of passive candidates, and funnel down to those that make it to the in-person round of an interview process. Remembering which strings failed? That's not realistic.
So now I'm recording what works (and by what works, I mean which candidates get selected into a first round) and what fails (strings that either generate no viable results or generate results that produce no first-round candidates). Anytime I have to change my process, I get somewhat agitated. You know how it is, you've got your flow. But this concept of tracking what works and what doesn't is really working for me. I'm saving the info in Google Docs and sharing it with our team. Yes…it's crafted in Boolean and probably gives them an insane headache to see something like:
sql|oracle|crm data|database analyst -administrator
But I can explain it and also reference back to it to see why it failed. And the team is all on the same page. We can revisit criteria and quickly troubleshoot a search. Maybe the string needs to be tweaked…maybe it’s not a realistic search for LinkedIn but does wonders on .Me or G+ profiles, maybe it reveals we should change methodology and hunt names instead of profiles.
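The works/fails log described above could be structured very simply. Here's a minimal sketch in Python; the field names and the sample entry are illustrative assumptions, not the actual Google Doc format:

```python
from dataclasses import dataclass

# A minimal sketch of a works/fails search-string log.
# Field names and the sample entry are hypothetical, not the team's real format.
@dataclass
class SearchString:
    query: str       # the Boolean string itself
    platform: str    # where it was run, e.g. "LinkedIn"
    outcome: str     # "worked" = produced first-round candidates; "failed" = did not
    note: str = ""   # why it failed, or what to try next time

log = [
    SearchString("sql|oracle|crm data|database analyst -administrator",
                 "LinkedIn", "failed", "too broad; try hunting names instead"),
]

# Revisiting only the failures makes troubleshooting a search quick.
failures = [s for s in log if s.outcome == "failed"]
for s in failures:
    print(f"{s.platform}: {s.query} -> {s.note}")
```

The point isn't the tooling; a shared spreadsheet column for "outcome" and "why" captures the same idea.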
But what we've found is that maintaining the data that fails is turning out to be just as important as the data that succeeds.
Kelly is the Recruitment Manager for Westat, a leading social science research organization headquartered in Rockville, Maryland.