Interview with Learning Expert: Will Thalheimer

Measuring Learning Results

T/D: You've written a research-to-practice report called Measuring Learning Results, which looks at learning through the lens of measurement.  Can you tell us more about that?

Thalheimer: Over the years I've been immersed in learning and looking at the research on learning.  One of the things I noticed in our field is that there are some leverage points - things that affect the whole field, things that drive us to do what we do.  One of those is measurement.  How we measure affects how we practice instructional design.  So I'll share with you a couple of things that I found.  One thing is that smile sheets aren't good enough.

T/D: Right, don't we know that at a gut level?

Thalheimer: We may know it at a gut level, but we keep doing it.  I did some research with the e-learning Guild, and we found that smile sheets are the most popular evaluation method for e-learning programs, after completion rate.  Researchers have also done a meta-analysis on this, and they found that smile sheet ratings don't even correlate with learning results.

T/D: Interesting.

Thalheimer: They correlated like 0.09 - which is…

T/D: Negligible.

Thalheimer: Negligible.  Smile sheets don't correlate with on the job performance either.

T/D: What does your opinion have to do with your performance?  Nothing.

Thalheimer: Right, so that's intriguing.  One of the things we're trying to do when we measure is to get better.  There are really three reasons you might want to measure learning.  One is to support learning - to give learners feedback.  The second is to improve the learning interventions we create - to improve our learning designs.  The third is to prove our results to our organization: certification, grading, that kind of thing.  So three main reasons.  If we're going to improve our learning designs, one of the things we need to do is make sure our learning metrics are predictive of people's actual performance.  If our smile sheets aren't predictive, then they're no good at doing that - they're no good at giving us useful information.

I've been playing with trying to improve my own smile sheet, so I'm going to recommend a couple of things.  One - let's not ask people just overall questions.  Let's put in some of the actual learning points we wanted to get across, because people aren't very good at remembering everything after a session, right?  If you ask them in general whether they liked it, you're going to get general responses.  Instead: "This is one of the learning points we talked about - how valuable was that to you?"

Then I ask them - what's the likelihood that you're going to use this within the next two weeks?  I use a scale from zero to 100% in 10% increments.

T/D: That's brilliant.

Thalheimer: I also ask them - what's the probability that in the next two weeks you'll share this with one of your co-workers?  And I really focus on the open-ended comments.  I find those are some of the most valuable feedback you can get for improving your course.  I've gotten a lot of bad feedback too, but I've gotten some really good feedback on smile sheets that has told me: you really need to improve that exercise.  Or: have more fun activities in the afternoon, because people were tired - stuff like that.  If you hear that over and over, you'd better change it.

Thalheimer: Another thing I do with smile sheets is follow up two weeks later.  In that follow-up I do a couple of interesting things.  One is I ask the same question, on the same six-point scale, about how valuable they thought the training was.  By then they've gone back to the workplace, and they really know how valuable it is.  So it gives a better anchoring.

I've learned a couple of other things from looking at measurement from a learning-fundamentals standpoint.  One is: if we measure learning right at the end of learning, what are we measuring?  We're measuring the learning intervention's ability to create understanding.  Which is fine, that's good, but we're not measuring the learning intervention's ability to minimize forgetting.  You have a full-day session, or an e-learning program, and you measure people right away - everything is top of mind.

T/D: You should be able to recall it.

Thalheimer: You can easily recall it, and not only that, you feel confident that you can recall it.  So that's a really biased way to get information, and it's one of the reasons end-of-course measurement is a bad proxy for your ability to remember things on the job.  The final mistake we make is to measure something in the same context people learned it in.  That context is going to remind them of what they learned, and because of that they're going to get better results than if we measured them in a different context.  So we need to do one of two things: either know that our results are biased because we're measuring in the same context they learned in, or change the context to make it more like the real-world context, so that it's more realistic and more predictive of that real-world environment.

T/D: Great points! Thank you Will.

Will Thalheimer founded Work-Learning Research, a consulting practice that helps clients build more effective learning interventions.  He's been in the learning and performance field since 1985.  His professional focus is bridging the gap between the research side and the practice side of the field.