Stop Checking the Box

Dan Carusi, HR, Learning and Development, T+D

I recently attended an industry conference and was enjoying some spirits with my peers when the topic of measuring training was put on the table. I swear it was only minutes before everyone was sharing impressive completion-rate numbers, giving each other high fives, and even talking about winning awards. Everyone seemed pleased by how many employees had checked the box confirming they completed or participated in a program.

Then I sucked all the life out of the room with one question: But was the training effective?

There was no response; just a large chorus of crickets. Seriously, no one was curious enough to take a deeper look and see whether the training intervention delivered the desired outcome, addressing why the training was needed in the first place. Even more frightening, many present these "percentage of completion" results to their organizational leadership and CEOs to demonstrate success, or perhaps to justify their existence. And then we wonder why Talent and Learning leaders struggle to get the proverbial seat at the table. We have even gone as far as replacing ROI with ROE (return on expectations). As an industry, it sounds like we are making excuses for how we measure success.

I’m pretty confident my CEO couldn’t care less about my expectations for the training; what he cares about is a favorable return on the dollars he gave me to spend. Honestly, I’m too much of a chicken to pitch ROE and find out. I don’t develop or roll out any learning program unless it solves a business problem. So the question I am asked is very straightforward: Did it work?

So back to the conference, the drinks, and the stimulating conversation, which by now had evolved into a debate over the best metrics model for measuring the true effectiveness of training interventions. Most people would be discussing the British Open or the latest scandals in Washington, D.C., but not us. We were discussing all levels of Kirkpatrick, as well as some really fancy software programs that generate charts, graphs, spreadsheets, and more. Very cool, right? That depends on what is being measured, and on what should be measured.

New jobs have been created in the learning field simply to measure the effectiveness of training interventions. It’s almost as if we implemented our own stimulus program for Learning & Development. I’m not saying that measurement is unimportant. I’m saying let’s cut to the chase and measure the one thing that matters to everyone at the leadership level of the organization: business impact (did it work?). Understand the business problem, the role the learning solution plays in solving it, and what success looks like at the end.

As an example, if consultants lack the skills or competencies to be competitive, resulting in lost new business and lower billable rates, we know both the problem and the desired outcome to work toward: win more business and increase the average billable rate. If you can demonstrate this, then it worked. Very simple, yes? Then why do we make it so damn hard? Probably because it’s much easier to report on completion percentages and course evaluations than on real business impact.

There are countless articles on why Talent and Learning leaders need to be more strategic, partner with the CEO, and help execute the corporate strategy. Good gracious, I think I have read the majority of them. For this to happen, however, we need to cut through the knee-deep crap and start measuring business impact. Stop looking at how many people checked the box by completing or participating in a program. Start looking at how the program will help solve a business problem and support the larger corporate strategy. Put yourself in the position to answer yes when your CEO asks: Did it work?