Check out this alternative model for evaluating training
For years, we’ve followed evaluation methodologies pioneered by Donald Kirkpatrick, who established a universal standard for assessing the quality and impact of the training delivered by organizations of all sizes. There is no doubt that Kirkpatrick was a brilliant man who made an immeasurable contribution to the world of enterprise learning and development. There were many spirited debates about some of the details associated with each of Kirkpatrick’s levels, but he forced conversations about evaluation that might not have happened without his commanding presence in the learning and development industry.
Unfortunately, much has changed since Kirkpatrick began his work on training evaluation in the 1950s and published his first book on the subject in 1994. We now live in a vastly different world; what used to be a centrally controlled, top-down, enterprise-focused model for developing the skills of the workforce has become almost the opposite. Each of us now has an almost endless array of options for increasing our capabilities including, but by no means limited to, those offered by our employers. And while Kirkpatrick’s methods and tools remain useful for many enterprise-led learning interventions, they simply don’t work for the learner-centric, learner-controlled skills development model many of us have adopted.
Among many other innovations, the Internet has spawned an entire industry devoted to the community-based evaluation of purchased products and services. Sites like TripAdvisor, OpenTable and Angie’s List, to name just a few, provide vast amounts of information about hotels, restaurants, contractors and health care providers. Many of us routinely check these sites before booking a reservation or signing a contract, and then, on the back end, we contribute to them to document our experiences. The question is whether a similar approach would work for evaluating training. I contend the answer is a resounding yes.
So what might a community-based training evaluation model look like? There are four key questions that most people consider when judging the overall quality of training:
- Was the format of the training appropriate for the content being presented?
- Did the training meet my expectations?
- Did the training have utility; can I apply what I learned in some meaningful way?
- Overall, did I receive good value in return for my time and monetary investment?
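To make this concrete, imagine each community review stored as a simple record: four numeric ratings, one per question, plus a free-text comment. The Python sketch below is purely illustrative; the field names and the 1-to-5 scale are my assumptions, not the design of any existing system.

```python
from dataclasses import dataclass

@dataclass
class TrainingReview:
    """One learner's community review of a course (hypothetical schema)."""
    course_id: str
    format_rating: int        # 1-5: was the format appropriate for the content?
    expectations_rating: int  # 1-5: did the training meet my expectations?
    utility_rating: int       # 1-5: can I apply what I learned meaningfully?
    value_rating: int         # 1-5: good value for my time and money?
    comment: str = ""         # free-text color behind the numbers
```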
When you're searching for training, you could look at the data for each of the available alternatives (format, value, expectations and utility) and decide which, if any, are worth pursuing. You can also decide which of the criteria are most important to you. For example, you may be less concerned about utility if you're exploring a new discipline or just learning for learning’s sake.
Classroom training may not be an option for you, so you may choose to weight format less heavily than the other criteria. And you can use the comments to add color to the numeric ratings. In the end, you would be able to make better, more informed decisions than you can today. And, perhaps more importantly, this approach is quick, easy, and requires very little administrative control or oversight.
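Here is one way that selection step could work in practice: apply your personal weights to each course's average community ratings and rank the alternatives. Everything in this sketch, including the course names, the sample averages and the weights, is invented for illustration; it simply assumes the ratings have already been averaged on a 1-to-5 scale.

```python
# Rank training alternatives by a personal weighted score of their
# average community ratings (all names and numbers are illustrative).

# Average community ratings per course, on a 1-5 scale.
courses = {
    "Intro to Data Analysis (classroom)":  {"format": 4.5, "expectations": 4.1, "utility": 4.3, "value": 3.8},
    "Intro to Data Analysis (e-learning)": {"format": 3.9, "expectations": 4.4, "utility": 4.0, "value": 4.6},
}

# A learner exploring a new discipline might down-weight utility and,
# with no classroom option available, de-emphasize format as well.
weights = {"format": 0.1, "expectations": 0.35, "utility": 0.15, "value": 0.4}

def weighted_score(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the four criteria; weights should sum to 1."""
    return sum(ratings[criterion] * weight for criterion, weight in weights.items())

# Print the alternatives from best fit to worst for this learner.
for name, ratings in sorted(courses.items(), key=lambda kv: -weighted_score(kv[1], weights)):
    print(f"{name}: {weighted_score(ratings, weights):.2f}")
```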
Here's another important question to consider: Would anyone use a model like this if it were available? At a recent webinar on learning measurement, participants responded to questions about this more community-based evaluation process, and the results were overwhelmingly positive. Eighty percent of respondents said they would both evaluate a course using this model and use the results when making a training selection. They also indicated that while the model would be useful for classroom programs and e-learning, it would be less so for informal learning and web resources.
Let's be clear, the Kirkpatrick model still has a role to play in the evaluation of learning, but it's no longer sufficient. The alternative presented here is intended to reinforce the idea that a community-based evaluation model could work for learning in just the same way that it works for other products and services.
Actively engaging the large community of learners in a process that creates useful information in an easy-to-digest format, accessible at the moment of need, can help ensure that investments in learning, whether made by our employers or by ourselves, yield meaningful results. At the same time, we can help ensure that vendors who offer programs that don’t deliver value are identified and held accountable, just as each of us is held accountable for what we do.
